Penelope Maher
Feb 18, 2021, 5:54:51 AM
to SciTools (iris, cartopy, cf_units, etc.) - https://github.com/scitools
Hi all,
I am new to iris and am trying to load quite a large amount of UM data in .pp file format. My goal is to load 28 years of monthly data, but only for a subset of STASH codes of interest (approx. 15 variables), and output as .nc (ideally into one file, but it's no problem if I need to join them later).
Each input file contains one month of data, with on the order of 200 variables per file. My code takes about 2 minutes to process 4 months of data, so the projected run time is about 3 hours. Is there a faster way to do this? I could loop over years or similar, but that won't bring down the run time (it would just mitigate problems if the code crashes). The code I am using to load the cubes is at the end of the email.
Thank you,
Penelope
import iris

# Build full STASH codes (model 01) from the section/item identifiers.
stash = ['03i217', '03i234']
stash_list = []
for var in stash:
    stash_list.append('m01s{0}'.format(var))

stash_num = iris.AttributeConstraint(STASH=lambda x: x in stash_list)
cubes = iris.load(file_list, stash_num)  # file_list is a list of strings for the file names
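For reference, the "loop over years" idea would look roughly like this (a sketch only; the 1990-2018 range and the output file name pattern are placeholders, and it assumes the year appears in each file name):

# Sketch only: process one year at a time and save each year's cubes to its
# own NetCDF file, so a crash part-way through does not lose earlier work.
# The year range and output name pattern below are placeholders.
for year in range(1990, 2018):
    files_for_year = [f for f in file_list if str(year) in f]  # assumes the year appears in each file name
    cubes = iris.load(files_for_year, stash_num)
    iris.save(cubes, 'monthly_subset_{0}.nc'.format(year))

The per-year .nc files could then be joined afterwards, but as noted above this would not reduce the total run time.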