problem with cube names and units when saving anomaly cubes


Ivan P.

Aug 14, 2014, 7:40:42 AM
to scitoo...@googlegroups.com
I'm calculating anomalies as a percent of the average. The resultant cube's name is 'unknown' and the units aren't what I want. I rename the cube, change the units and save it:

anomaly = (year_euro4/clim_cube)*100
print anomaly.summary(True)

unknown / (1)                       (grid_latitude: 1000; grid_longitude: 1100)

anomaly.standard_name = year_euro4.standard_name
# I've also tried anomaly.rename(year_euro4.standard_name) but that didn't work either
anomaly.units = '%'
print anomaly.summary(True)

surface_downwelling_shortwave_flux_in_air / (%) (grid_latitude: 1000; grid_longitude: 1100)

iris.save(anomaly, os.path.join(save_dir, filename))


However, when I load the cube I've saved:

cube = iris.load_cube('path/to/file')
print cube.summary(True)

unknown / (unknown)                 (grid_latitude: 1000; grid_longitude: 1100)

Which doesn't make any sense, seeing as all of the above worked fine: I didn't get a "ValueError: [UT_UNKNOWN] Failed to parse unit" exception when setting the units, and the cube name is just whatever my source data's name was, which complies with the Iris naming conventions.
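
(For what it's worth, I know the unit parsing itself is active, because a deliberately bad unit string does raise that exception. A quick illustration, with a made-up unparseable string:)

import iris

c = iris.cube.Cube([0])
c.units = '%'                       # parses fine, no exception
try:
    c.units = 'percent of average'  # made-up string that udunits can't parse
except ValueError as e:
    print e                         # something like "[UT_UNKNOWN] Failed to parse unit ..."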

If I do:

anomaly.standard_name = year_euro4.standard_name
anomaly.units = year_euro4.units

This saves fine, but those aren't my units... Just renaming the phenomenon and leaving the units as whatever Iris sets them to when calculating the anomaly, or setting them to None, also doesn't work...

Anyone else faced this?

Andrew Dawson

Aug 14, 2014, 3:07:21 PM
to scitoo...@googlegroups.com
I haven't been able to reproduce this with dummy data using Iris 1.7.1. I tried the following:

import iris

c = iris.cube.Cube([0])
c.standard_name = 'surface_downwelling_shortwave_flux_in_air'
c.units = '%'
print c.summary(True)
iris.save(c, 'test1.nc')
r = iris.load_cube('test1.nc')
print r.summary(True)

This works as expected for me; does it work for you? If not, which version of Iris are you using? If it does work, can you give us a few more details about the cube(s) you are working with?
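
(You can check which version you have with:)

import iris
print iris.__version__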

Ivan P.

Aug 15, 2014, 6:30:27 AM
to scitoo...@googlegroups.com
I'm using Iris 1.7.0 on one of the Met Office Linux boxes.

My initial data is files of hourly values for each day from 1979 to 2013. I'm using monthly averages of those as my year_euro4 cubes, e.g. Jan 1979, etc. The clim_cube cubes are averages of that monthly data across some range of years, e.g. an average of the Jan averages for 1979-1988, an average of all months for the same period, etc. The year_month_euro4 cubes are just the monthly averages.

Here's the code I'm using:

import os
import iris
import iris.analysis
import iris.time

basepath_clim = '/data/local/ipaspald/euro4_retrieved/clim_averages'
dirnames_clim = [os.path.join(basepath_clim, dirname) for dirname in os.listdir(basepath_clim)
                 if os.path.isdir(os.path.join(basepath_clim, dirname))]

basepath_euro4 = os.path.dirname(basepath_clim)
full_euro4 = iris.load_cube(os.path.join(basepath_euro4, '*.pp'))  # load all my monthly-averaged cube data, I extract what I need from here
full_euro4.coord('time').bounds = None  # otherwise I get an error, "Cannot determine if point lies in a region of bounded datetime objects" (something like that...)
for clim_average in dirnames_clim:  # do each clim period I have
    print 'Period ', os.path.basename(clim_average)
    save_dir = os.path.join(basepath_euro4, 'anomalies', os.path.basename(clim_average))
    for year in range(1979, 2013 + 1):
        print '\tYear ' + str(year)
        filename = 'anomaly_Year{0}_vs_{1}{2}'.format(year, os.path.basename(clim_average), '.pp')
        clim_cube = iris.load_cube(os.path.join(clim_average, 'full', '*.pp'))
        constraint = iris.Constraint(time=iris.time.PartialDateTime(year=year))  # get a single year
        year_euro4 = full_euro4.extract(constraint).collapsed('time', iris.analysis.MEAN)  # collapse my monthly averages for a whole year
        anomaly = (year_euro4 / clim_cube) * 100
        anomaly.standard_name = full_euro4.standard_name
        print anomaly.standard_name
        anomaly.units = 'percent'
        print anomaly.units
        iris.save(anomaly, os.path.join(save_dir, filename))
        # bc.cube_saver(anomaly, save_dir, filename)  # a func I've defined that creates save_dir if it doesn't already exist and then passes it to iris.save (sketched below)
        for month in range(1, 12 + 1):
            print '\t\tMonth' + str(month)
            filename = 'anomaly_Year{0}_Month{1}_vs_{2}{3}'.format(year, month, os.path.basename(clim_average), '.pp')
            clim_cube = iris.load_cube(os.path.join(clim_average, '*month' + str(month) + '_*'))
            constraint = iris.Constraint(time=iris.time.PartialDateTime(year=year, month=month))  # get just a month
            year_month_euro4 = full_euro4.extract(constraint)
            anomaly = (year_euro4 / clim_cube) * 100
            anomaly.standard_name = full_euro4.standard_name
            anomaly.units = 'percent'
            iris.save(anomaly, os.path.join(save_dir, filename))
            # bc.cube_saver(anomaly, save_dir, filename)
print 'DONE'
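
(For reference, bc.cube_saver is roughly the following; it's just a convenience wrapper, so it shouldn't be relevant to the metadata problem:)

import os
import iris

def cube_saver(cube, save_dir, filename):
    # create the target directory if it doesn't already exist, then hand off to iris.save
    if not os.path.isdir(save_dir):
        os.makedirs(save_dir)
    iris.save(cube, os.path.join(save_dir, filename))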



Here's an example of a year_euro4 cube:
surface_downwelling_shortwave_flux_in_air / (W m-2) (grid_latitude: 1000; grid_longitude: 1100)
     Dimension coordinates:
          grid_latitude                                           x                     -
          grid_longitude                                          -                     x
     Scalar coordinates:
          forecast_period: -346.0 hours, bound=(-718.0, 26.0) hours
          forecast_reference_time: 1979-07-16 21:31:40, bound=(1979-01-30 21:31:40, 1979-12-30 21:31:40)
          time: 1979-07-02 17:31:40, bound=(1979-01-16 23:31:40, 1979-12-16 11:31:40)
     Attributes:
          STASH: m01s01i235
          source: Data from Met Office Unified Model
          um_version: 8.2
     Cell methods:
          mean: time
          mean: time

A clim_cube:

surface_downwelling_shortwave_flux_in_air / (W m-2) (grid_latitude: 1000; grid_longitude: 1100)
     Dimension coordinates:
          grid_latitude                                           x                     -
          grid_longitude                                          -                     x
     Scalar coordinates:
          forecast_period: -131452.0 hours, bound=(-262930.0, 26.0) hours
          forecast_reference_time: 2013-12-30 15:31:40
          time: 1999-01-01 11:31:40, bound=(1984-01-02 05:31:40, 2013-12-31 17:31:40)
     Attributes:
          STASH: m01s01i235
          source: Data from Met Office Unified Model
          um_version: 8.2
     Cell methods:
          mean: time

Do you need anything else? I'm totally lost as to where this problem could come from...

Andrew Dawson

Aug 15, 2014, 7:11:40 AM
to scitoo...@googlegroups.com
OK, this problem can be reproduced; it looks like a problem with the PP loader/saver (you didn't mention you were using PP format). I've opened ticket #1275.
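
For reference, the failing round trip boils down to something like this (the path here is just a placeholder for any PP file):

import iris

cube = iris.load_cube('some_field.pp')   # any cube loaded from PP data
cube.standard_name = 'surface_downwelling_shortwave_flux_in_air'
cube.units = '%'
iris.save(cube, 'renamed.pp')            # save back to PP format
reloaded = iris.load_cube('renamed.pp')
print reloaded.summary(True)             # name and units come back as 'unknown'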

Ivan P.

Aug 15, 2014, 7:40:56 AM
to scitoo...@googlegroups.com
Cheers, I should have been more specific! Do you have an ETA for when this will be fixed, as I'd need to run my analyses soon-ish?

Andrew Dawson

Aug 15, 2014, 9:33:55 AM
to scitoo...@googlegroups.com
Hard to say right now. If this is determined to be a bug, the fix will probably go into the next bug-fix release (rather than waiting for the next full release), so it could be quite fast. Once a fix is merged you can always use the development version of Iris, so you don't even have to wait for a bug-fix release. I suggest you keep an eye on the ticket for further developments; if you have a GitHub account you can subscribe to the ticket to get email notifications of updates.