While writing a utility to reorganize HDF5 data into separate files, I've run into an issue with compound data types.
Here is what I am told by the developers of the two applications I deal with (one upstream, which creates my data; the other downstream, which consumes it): they expect compound data types to be packed. When I use PyTables to copy, some datasets come out unpacked, and the resulting alignment padding introduces unexpected "random data" that plays havoc when the downstream application reads the file.
I'm told this can be addressed with H5Tpack(). Is that HDF5 function exposed through the PyTables API? Or is there another way to accomplish this task?
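In case it helps to show what I'm considering: since PyTables derives its HDF5 compound types from NumPy dtypes, my working assumption is that the padding comes from an aligned NumPy dtype, and that rebuilding the dtype with `align=False` before writing might sidestep the problem. A rough sketch (the `packed_dtype` helper is my own, not a PyTables API):

```python
import numpy as np

def packed_dtype(dt):
    """Return an equivalent compound dtype with no alignment padding."""
    # Rebuild the dtype field-by-field; align=False lays fields out
    # contiguously, so itemsize == sum of the field sizes.
    return np.dtype([(name, dt.fields[name][0]) for name in dt.names],
                    align=False)

# An aligned dtype: int8 at offset 0, float64 padded to offset 8 -> 16 bytes.
aligned = np.dtype([('a', 'i1'), ('b', 'f8')], align=True)

# The packed equivalent: 1 + 8 = 9 bytes, no padding.
packed = packed_dtype(aligned)
```

The idea would be to repack each record array with something like `arr.astype(packed_dtype(arr.dtype))` before handing it to PyTables, on the assumption that a padding-free dtype yields a packed HDF5 compound type. I don't know whether that assumption holds for all of my datasets, hence the question about H5Tpack().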