I know the question is slightly vague, but I think this issue will come up in many different ways so I want to be as general as possible.
I have a cube list loaded by:
import iris

cubes = iris.load('netCDF file for data at year*.nc')
There is a file for each year from 1950-2013, giving a total size of ~100GB. All dimensions are the same except for time, which is an unlimited dimension. I am pretty impressed with the ability to extract data and collapse dimensions for each cube in the list without loading all the cubes; however, I am wondering how I can do some other operations, iterating through each cube in the list, without loading them all into memory.
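For what it's worth, the way I've been convincing myself that the collapse operations stay lazy is something like the snippet below; I'm assuming cube.has_lazy_data() is available in my version and reports whether the data is still deferred (biggus-backed) rather than loaded into memory.

# Check which cubes are still deferred, i.e. nothing has been pulled into memory yet.
for cube in cubes:
    print(cube.name(), cube.has_lazy_data())

# Collapsing over time seems to keep the result deferred as well.
time_means = [cube.collapsed('time', iris.analysis.MEAN) for cube in cubes]
for cube in time_means:
    print(cube.name(), cube.has_lazy_data())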
For example, if I wanted to interpolate a set of lat/lon points in each cube and put the results into another cubelist, how can I do this without loading all the cubes? I've tried:
interpolated_cubes = iris.cube.CubeList(
    cube.interpolate([('latitude', latpoints), ('longitude', lonpoints)],
                     iris.analysis.Linear())
    for cube in cubes)
but I think this loads all of the cubes into memory (I stopped it before my computer crashed). What would be the proper way to do this to keep memory usage down? I can handle each cube individually (~1.5GB), but not all of them at once.
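The only workaround I've come up with so far is to deal with one file at a time and write each interpolated result straight back out to disk, so only one full-size cube is ever in memory. A rough sketch of what I mean is below (latpoints/lonpoints are just placeholder values, and I'm assuming one cube per file and that tagging the output filenames with '_interp' is acceptable); is something along these lines the intended approach, or is there a way to keep the whole pipeline lazy?

import glob

import iris
import numpy as np

# Placeholder sample points, purely for illustration.
latpoints = np.linspace(-80, 80, 50)
lonpoints = np.linspace(0, 355, 72)
sample_points = [('latitude', latpoints), ('longitude', lonpoints)]

# Process one file at a time so only ~1.5GB is live at any moment.
for path in sorted(glob.glob('netCDF file for data at year*.nc')):
    cube = iris.load_cube(path)
    interpolated = cube.interpolate(sample_points, iris.analysis.Linear())
    # Write the much smaller interpolated cube out immediately, then let the
    # full-size cube be garbage collected before the next iteration.
    iris.save(interpolated, path.replace('.nc', '_interp.nc'))

# The small interpolated cubes can then be loaded back as a single cubelist.
interpolated_cubes = iris.load('netCDF file for data at year*_interp.nc')

This keeps memory flat, but it round-trips everything through disk, which feels clunky compared to the lazy behaviour the cubelist already gives me for extraction and collapsing.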
As a side note, I think it would be really nice to have a section in the User Guide dealing specifically with how to take advantage of biggus in iris for working with large datasets. I'm sure a lot of people can relate and have similar questions about this area of iris. Also, I think a lot of people are interested in this functionality, and it would be good to highlight it as it becomes more mature in iris.