I am writing compressed data via the C API. I have a fixed block size (256 KiB) and a buffer size of 256 KiB plus the minimum header size. When I send in 384 KiB of random data, I take the first 256 KiB, compress it, and write it out with its header, then do the same with the remaining 128 KiB. That all seems to round-trip nicely.
I then write that stream of data out to a file and would like to read it back using the Python API. It seems that the Python API requires the bytes passed in to contain exactly one chunk of data, and will not do any sort of iterative "read the header, decompress, read the next header, repeat" behavior. Is that correct?
To read the data back, will I have to implement the "read header, check cbytes, slice the bytes to length, pass to python-blosc" loop myself? Or is there some way to do this that I'm not seeing?
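For what it's worth, the loop I have in mind is sketched below. The 16-byte header layout and the fact that cbytes (bytes 12-15) includes the header itself come from the c-blosc header description; the function names `parse_header` and `iter_chunks` are my own hypothetical helpers, not part of either API.

```python
import struct

# Blosc (v1) chunk header layout, 16 bytes total (per the c-blosc docs):
#   byte  0: version      byte  1: versionlz
#   byte  2: flags        byte  3: typesize
#   bytes  4-7:  nbytes    (uncompressed size, little-endian uint32)
#   bytes  8-11: blocksize (little-endian uint32)
#   bytes 12-15: cbytes    (compressed size INCLUDING this 16-byte header)
HEADER_SIZE = 16


def parse_header(header):
    """Return (nbytes, cbytes) from a 16-byte Blosc chunk header."""
    nbytes, _blocksize, cbytes = struct.unpack_from("<III", header, 4)
    return nbytes, cbytes


def iter_chunks(stream):
    """Yield decompressed chunks from a file-like object holding
    back-to-back Blosc chunks (the hypothetical loop described above)."""
    import blosc  # deferred so the header helper works without python-blosc
    while True:
        header = stream.read(HEADER_SIZE)
        if not header:
            return  # clean end of stream
        if len(header) < HEADER_SIZE:
            raise IOError("truncated Blosc header")
        _nbytes, cbytes = parse_header(header)
        payload = stream.read(cbytes - HEADER_SIZE)
        if len(payload) < cbytes - HEADER_SIZE:
            raise IOError("truncated Blosc chunk")
        yield blosc.decompress(header + payload)
```

With that, something like `b"".join(iter_chunks(open("data.blosc", "rb")))` would reconstruct the original 384 KiB, assuming the file really is nothing but concatenated chunks.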