Hi,
I have a 30 GB HDF5 file whose datasets are compressed with the Blosc filter using the snappy codec. I recently upgraded Anaconda, h5py and PyTables, and now I can no longer read this file. When I try to read a dataset with the following code:
import tables
import h5py
f = h5py.File('/tmp/options.hdf5', 'r')
f['prices/E1AF8/C:2830/timestamp'][()]
I get the stack trace below. Strangely enough, if I copy one small group out of the file with h5copy, I can read the resulting file, but if I copy that group's parent group instead, reading data from the new file fails with the same error.
I am running on Mac OS 10.15.15 with Python 3.7.6, HDF5 1.10.6, h5py 2.10.0 and PyTables 3.6.1. I tried downgrading all of these packages to their original versions, but I still get the same error.
Best,
Sal
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-3-d7e57ae31f9d> in <module>
1 f = h5py.File('/tmp/options.hdf5', 'r')
----> 2 f['prices/E1AF8/C:2830/timestamp'][()]
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
~/anaconda3/envs/py37/lib/python3.7/site-packages/h5py/_hl/dataset.py in __getitem__(self, args)
571 mspace = h5s.create_simple(mshape)
572 fspace = selection.id
--> 573 self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)
574
575 # Patch up the output for NumPy
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/_objects.pyx in h5py._objects.with_phil.wrapper()
h5py/h5d.pyx in h5py.h5d.DatasetID.read()
h5py/_proxy.pyx in h5py._proxy.dset_rw()
h5py/_proxy.pyx in h5py._proxy.H5PY_H5Dread()
OSError: Can't read data (this Blosc library does not have support for the 'snappy' compressor, but only for: blosclz,lz4,lz4hc,zlib,zstd)