Should I just accept this?
I vaguely remember reading that Iris will only merge along a scalar coordinate.
Thanks for the reply Niall. It is useful to know that there is a concatenate method that can do what I am requesting, but for the tasks I am performing it is not feasible for me to load all the data into memory.
Merging the metadata into a single cube is so useful because it means I can cut the data however I want before loading it into memory.
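To make it concrete, the workflow I am after looks something like this (the latitude range is made up; the point is that only the final .data access should actually read anything from disk, since Iris defers loading):

import iris

# Load all the time slices and join them into one cube
# (using concatenate, as discussed below); `files` is the
# list of filenames from my original post.
cubes = iris.load(files)
cube = cubes.concatenate()[0]

# Cut the cube down while the data is still deferred; the
# latitude range here is just an example.
subset = cube.extract(iris.Constraint(
    coord_values={'grid_latitude': lambda cell: 10.0 <= cell.point <= 20.0}))

data = subset.data  # only now is the (much smaller) data read into memory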
I have a large amount of satellite data, stored as individual cubes for time slices spaced approximately every 10 days between 1999 and 2012.
import iris
import glob
wildcard = '/data/local/hadhy/Projects/LAI_India/g2_BIOPAR_LAI_20121***0000_ASIA_VGT_V1.3_India.nc' # data on eld121
files = glob.glob(wildcard)
cubelist = iris.load(files)
print cubelist
0: unknown / (unknown) (grid_latitude: 1500; grid_longitude: 1500; time: 1)
1: unknown / (unknown) (grid_latitude: 1500; grid_longitude: 1500; time: 1)
...
8: unknown / (unknown) (grid_latitude: 1500; grid_longitude: 1500; time: 1)
new_cube = cubelist.merge_cube()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/project/avd/iris/live/pre-release/iris/cube.py", line 343, in merge_cube
proto_cube.register(cube, error_on_mismatch=True)
File "/project/avd/iris/live/pre-release/iris/_merge.py", line 1237, in register
error_on_mismatch)
File "/project/avd/iris/live/pre-release/iris/_merge.py", line 259, in match_signature
raise iris.exceptions.MergeError(msgs)
iris.exceptions.MergeError: failed to merge into a single cube.
Coordinates in cube.dim_coords differ: time.
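If I understand the docs right, merge only combines cubes that differ in a scalar coordinate, whereas my time coordinate is already a (length-1) dimension on every cube. A toy sketch of the case that does merge (dummy cubes, not my real data):

import numpy as np
from iris.coords import AuxCoord
from iris.cube import Cube, CubeList

def make_slice(t):
    # 2x2 dummy cube with a *scalar* time coordinate.
    cube = Cube(np.zeros((2, 2)), long_name='toy')
    cube.add_aux_coord(AuxCoord(t, standard_name='time',
                                units='hours since 1970-01-01 00:00:00'))
    return cube

toys = CubeList([make_slice(t) for t in (0.0, 240.0)])
print toys.merge_cube()  # scalar time is promoted to a new dimension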
new_cube = cubelist.concatenate()[0]
print new_cube
unknown / (unknown) (grid_latitude: 1500; grid_longitude: 1500; time: 9)
Dimension coordinates:
grid_latitude x - -
grid_longitude - x -
time - - x
Attributes:
Conventions: CF-1.5
print new_cube[0].coord('time')
DimCoord([2012-10-03 00:00:00, 2012-10-13 00:00:00, 2012-10-24 00:00:00,
2012-11-03 00:00:00, 2012-11-13 00:00:00, 2012-11-23 00:00:00,
2012-12-03 00:00:00, 2012-12-13 00:00:00, 2012-12-24 00:00:00], standard_name=u'time', calendar=u'standard', var_name='time')
>>> print cubelist[0].coords('time')
[DimCoord(array([ 376248.]), standard_name=u'time', units=Unit('hours since 1970-01-01 00:00:00', calendar='standard'), var_name='time')]
>>> print cubelist[1].coords('time')
[DimCoord(array([ 376752.]), standard_name=u'time', units=Unit('hours since 1970-01-01 00:00:00', calendar='standard'), var_name='time')]
>>> print cubelist[8].coords('time')
[DimCoord(array([ 375528.]), standard_name=u'time', units=Unit('hours since 1970-01-01 00:00:00', calendar='standard'), var_name='time')]
>>> print cubelist[0].coord('time').has_bounds()
False
>>> print cubelist[1].coord('time').has_bounds()
False
>>> print cubelist[8].coord('time').has_bounds()
False
import datetime

from iris.coords import DimCoord
from iris.cube import Cube
import iris.unit as unit

# The date comes from the metadata embedded in the dataset (ds is the GDAL dataset)
mydate = ds.GetMetadata_Dict()['TEMPORAL_NOMINAL']
mydate = [int(i) for i in mydate.split('-')]
u = unit.Unit('hours since 1970-01-01 00:00:00', calendar=unit.CALENDAR_STANDARD)
this_date = u.date2num(datetime.datetime(mydate[0], mydate[1], mydate[2], 0))
time = DimCoord(this_date, standard_name='time', units=u)
# latitude and longitude are DimCoords built earlier; time is dimension 2
cube = Cube(data, dim_coords_and_dims=[(latitude, 0), (longitude, 1), (time, 2)])
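If that length-1 time dimension is the culprit, presumably I could build the cube 2-d and attach time as a scalar coordinate instead, so that merge has something to promote. An untested sketch (data2d being the 2-d data array):

# Build the cube 2-d and attach time as a scalar coordinate;
# merge should then promote it to a new dimension across the files.
cube = Cube(data2d, dim_coords_and_dims=[(latitude, 0), (longitude, 1)])
cube.add_aux_coord(time)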
...
ncdump -h g2_BIOPAR_LAI_201212240000_ASIA_VGT_V1.3_India.nc
netcdf g2_BIOPAR_LAI_201212240000_ASIA_VGT_V1.3_India {
dimensions:
	grid_latitude = UNLIMITED ; // (1500 currently)
	grid_longitude = 1500 ;
	time = 1 ;
variables:
	float unknown(grid_latitude, grid_longitude, time) ;
		unknown:_FillValue = 1.e+20f ;
		unknown:grid_mapping = "rotated_latitude_longitude" ;
	int rotated_latitude_longitude ;
		rotated_latitude_longitude:grid_mapping_name = "rotated_latitude_longitude" ;
		rotated_latitude_longitude:longitude_of_prime_meridian = 0. ;
		rotated_latitude_longitude:earth_radius = 6371229. ;
		rotated_latitude_longitude:grid_north_pole_latitude = 90. ;
		rotated_latitude_longitude:grid_north_pole_longitude = 180. ;
		rotated_latitude_longitude:north_pole_grid_longitude = 0. ;
	float grid_latitude(grid_latitude) ;
		grid_latitude:axis = "Y" ;
		grid_latitude:units = "degrees" ;
		grid_latitude:standard_name = "grid_latitude" ;
	float grid_longitude(grid_longitude) ;
		grid_longitude:axis = "X" ;
		grid_longitude:units = "degrees" ;
		grid_longitude:standard_name = "grid_longitude" ;
	double time(time) ;
		time:axis = "T" ;
		time:units = "hours since 1970-01-01 00:00:00" ;
		time:standard_name = "time" ;
		time:calendar = "standard" ;

// global attributes:
		:Conventions = "CF-1.5" ;
}
cubes = iris.load(files)
# Now remove the time dimension using slicing; time is the third
# dimension (number 2):
cubes = iris.cube.CubeList([cube[:, :, 0] for cube in cubes])
# Now merge them:
cube = cubes.merge_cube()
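One thing to watch (if I remember right): merge puts the new time dimension first, so the merged cube comes out as (time: 9; grid_latitude: 1500; grid_longitude: 1500) rather than matching the concatenated one. If the original dimension order matters you can transpose it back:

# Reorder the merged cube's dimensions back to
# (grid_latitude, grid_longitude, time), matching the files.
cube.transpose([1, 2, 0])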
We really could combine merge and concatenate, couldn't we? It confuses everyone, and the logic for choosing which one to use must be a pretty simple if statement, especially if we got the user to specify which coordinate to combine over, which would also save us getting unexpected merge shapes.
realization x forecast_reference_time x forecast_period x lat x lon
sliced_cube.merge(['realization', 'forecast_reference_time', 'forecast_period', 'lat', 'lon'])
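Something like this, maybe (a rough sketch of that if statement for a single coordinate; `combine` is a made-up name, not an existing Iris function):

from iris.cube import CubeList

def combine(cubes, coord_name):
    # Made-up helper: pick merge or concatenate based on whether the
    # named coordinate is already a dimension on the input cubes.
    cubes = CubeList(cubes)
    if cubes[0].coords(coord_name, dim_coords=True):
        # Already a dimension coordinate: join cubes end-to-end along it.
        return cubes.concatenate()[0]
    # Scalar/auxiliary coordinate: merge promotes it to a new dimension.
    return cubes.merge_cube()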