Hi,
>> <HDF5 dataset "level1": shape (10000, 11000, 2), type "<i2">
>> my_array = f['root/level1'].value
>> in __getitem__
>> arr = numpy.ndarray(mshape, new_dtype, order='C')
>> MemoryError
This means you don't have enough free memory to read in the whole dataset
at once. The array is 10,000 x 11,000 x 2 elements x (2 bytes) =
440,000,000 bytes (about 420 MiB), which evidently is more than NumPy
can allocate in one contiguous block.
Your best bet is to load only the parts you want, using the standard
NumPy slicing syntax. There's a section in the docs here:
http://www.h5py.org/docs/high/dataset.html#slicing-access
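For example, an h5py dataset accepts NumPy-style slices and reads only the
selected region from disk, so you never materialize the full 420 MiB array.
A minimal sketch (the file and dataset names below are illustrative, with a
small dummy file standing in for your real data):

```python
import numpy as np
import h5py

# Build a small stand-in file with the same layout as the real one
# (shape and dtype scaled down; names are made up for illustration).
with h5py.File("example.h5", "w") as f:
    f.create_dataset("root/level1",
                     data=np.zeros((100, 110, 2), dtype="<i2"))

with h5py.File("example.h5", "r") as f:
    dset = f["root/level1"]
    # Slicing the dataset object reads only this region from disk:
    # rows 0-9, all columns, first plane.
    part = dset[0:10, :, 0]
    print(part.shape, part.dtype)  # a plain NumPy array
```

You can then loop over the dataset in row blocks of whatever size fits in
memory, processing each block before loading the next.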
HTH,
Andrew