Writing a multiple time step 4D NetCDF file by sequentially writing cubes with singleton time dimensions


rsignell

Jun 17, 2013, 8:12:25 AM6/17/13
to scitoo...@googlegroups.com
I'm trying to write a multiple time step 4D NetCDF file by sequentially writing 4D cubes with a singleton time dimension.
Each cube has shape=(1, 40, 24, 28) and I'm trying to aggregate these into a dataset that will have 600 time steps along the unlimited time dimension.

I'm getting a:
RuntimeError: NetCDF: Invalid argument
when I try to write the 1st time step:

http://nbviewer.ipython.org/5777643

Does anyone see the problem?

Thanks,
Rich

Andrew Dawson

Jun 17, 2013, 8:55:47 AM6/17/13
to scitoo...@googlegroups.com
Hi Rich

What version of iris are you using? I can't reproduce this issue using 1.4.0rc1, the file saves without a problem. I'm not sure exactly what bug you are seeing, or what change fixed it. I know that GRIB missing value handling has been improved in the latest 1.4.0 release, but I'm not sure if this is the problem you are seeing. Anyway, would you be able to upgrade to the recent 1.4.0 release and see if this problem persists?

Andrew

Andrew Dawson

Jun 18, 2013, 4:04:33 AM6/18/13
to scitoo...@googlegroups.com
This problem no longer occurs in iris v1.4.0 or above:

On 17 June 2013 19:00, rsignell wrote:
... 
I updated Iris to 1.5dev, and yes, the problem is solved.
... 

I thought this should go in the group archive in case someone else sees the same thing.

rsignell

Jun 18, 2013, 6:11:35 PM6/18/13
to scitoo...@googlegroups.com
Okay, past my bug and on to a technical question:

Can I use Iris to append additional singleton-time-dimension 4D slices (1, 40, 24, 28) [time, z, lat, lon] to my existing 1 time step NetCDF file written by Iris?

I can do it using NetCDF4:

nc = netCDF4.Dataset('cfsr.nc', 'r+')
for i in range(10):
    url = 'http://nomads.ncdc.noaa.gov/thredds/dodsC/modeldata/\
cmd_ocnh/2009/200905/200905%2.2d/ocnh01.gdas.200905%2.2d00.grb2' % (i+1, i+1)
    print url
    cubes = iris.load(url)
    t = cubes[4]
    slice = t.extract(iris.Constraint(
        longitude=lambda cell: -77. + 360. < cell < -63.0 + 360.,
        latitude=lambda cell: 34. < cell < 46.0))
    nc.variables['Potential_temperature'][i, :, :, :] = slice.data

and this works (although I still would need to write the time value for each step), but I'm wondering whether I could avoid using NetCDF4 directly and just use Iris.
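The record-at-a-time assignment above (filling slot i of the leading time dimension with a singleton-time slab) can be sketched with plain NumPy, using made-up small dimensions in place of the real (1, 40, 24, 28) slabs; the singleton leading axis broadcasts away on assignment, just as it does for the netCDF4 variable:

```python
import numpy as np

# Hypothetical small stand-ins: 10 time steps of (z=4, lat=3, lon=2).
n_time, nz, nlat, nlon = 10, 4, 3, 2
full = np.empty((n_time, nz, nlat, nlon))

for i in range(n_time):
    # Each "cube" has a singleton time dimension, shape (1, nz, nlat, nlon).
    slab = np.full((1, nz, nlat, nlon), float(i))
    # Same pattern as nc.variables['Potential_temperature'][i, :, :, :] = slice.data;
    # the size-1 time axis of slab broadcasts into the (nz, nlat, nlon) target.
    full[i, :, :, :] = slab

print(full.shape)        # (10, 4, 3, 2)
print(full[7, 0, 0, 0])  # 7.0
```

With a netCDF4 unlimited time dimension the same indexed assignment grows the file one record at a time, which is why only the time coordinate values still need writing separately.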

Full example here:

http://nbviewer.ipython.org/5777643

Thanks,
Rich

Andrew Dawson

Jun 19, 2013, 4:37:30 AM6/19/13
to scitoo...@googlegroups.com
Hi Rich

Unfortunately it doesn't look like iris currently supports appending to existing netcdf files. The netcdf saver is hard-wired to open netcdf datasets in write ('w') mode.

You are able to do this in memory however, and write the resulting cube to file in one go:

import iris
from iris.experimental.concatenate import concatenate

cubelist = iris.cube.CubeList()  # note: must instantiate the CubeList
for i in range(10):
    url = 'http://nomads.ncdc.noaa.gov/thredds/dodsC/modeldata/\
cmd_ocnh/2009/200905/200905%2.2d/ocnh01.gdas.200905%2.2d00.grb2' % (i+1, i+1)
    print url
    cubes = iris.load(url)
    t = cubes[4]
    slice = t.extract(iris.Constraint(
        longitude=lambda cell: -77. + 360. < cell < -63.0 + 360.,
        latitude=lambda cell: 34. < cell < 46.0))
    cubelist.append(slice)
cubes = concatenate(cubelist)
iris.save(cubes, 'cfsr_multi_time.nc')

The problem with this is that the concatenation will load the entire data payload from all the cubes, so it is not a good idea if you have many large cubes and not much memory in your computer! I know this is not what you are looking for, so perhaps it would be best to create an issue on github for appending to netcdf files.
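At the data level, the concatenate step amounts to joining the singleton-time arrays along axis 0, which is why every cube's payload must be realised in memory at once. A minimal NumPy sketch (with hypothetical small dimensions standing in for the real (1, 40, 24, 28) cubes):

```python
import numpy as np

# Ten stand-in "cubes", each with a singleton time axis.
slabs = [np.full((1, 4, 3, 2), float(i)) for i in range(10)]

# Joining along the time axis produces one array holding all the data,
# so the peak memory cost is the sum of all the slabs.
merged = np.concatenate(slabs, axis=0)

print(merged.shape)  # (10, 4, 3, 2)
```

For 600 real time steps of shape (1, 40, 26880) values each, that sum is what has to fit in memory before iris.save is called.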