Hi Rayn,
Thanks for that answer. I had a similar problem, and it solved mine as well. Now I have a follow-up question, as I haven't used the HDF5 format yet, and I thought you might know:
My training database consists of about 30 million samples (dimension: 6x36x36), so the HDF5 file will grow very large (at float32 that is roughly 900 GB).
=> Should I use chunking? (Do I get higher read performance if I use chunking?)
=> If yes, what is a reasonable chunk size?
When I train my neural network I use a batch size of 128, so I thought a chunk shape of 128x6x36x36 would make sense; that would give me about 230'000 chunks. Is that reasonable? (See the sketch below for what I have in mind.)
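To make the question concrete, here is a minimal sketch of the layout I have in mind, using h5py (the library choice, file name, and dataset name are just placeholders on my part, and I am assuming float32 data):

```python
import numpy as np
import h5py

# Placeholder dimensions matching my setup: ~30M samples of shape 6x36x36.
n_samples = 30_000_000
sample_shape = (6, 36, 36)
batch_size = 128  # matches my training batch size

with h5py.File("train.h5", "w") as f:
    dset = f.create_dataset(
        "samples",
        shape=(n_samples,) + sample_shape,
        dtype="float32",
        # One chunk per training batch; at float32 each chunk is
        # 128 * 6 * 36 * 36 * 4 bytes, i.e. about 3.8 MiB.
        chunks=(batch_size,) + sample_shape,
        compression="gzip",  # optional; trades CPU time for smaller chunks on disk
    )
    # Write one batch-sized slice (dummy data here), filling exactly one chunk.
    dset[0:batch_size] = np.random.rand(batch_size, *sample_shape).astype("float32")

# Reading one training batch then touches exactly one chunk:
with h5py.File("train.h5", "r") as f:
    batch = f["samples"][0:batch_size]  # shape (128, 6, 36, 36)
```

The idea behind aligning the chunk shape with the batch size is that each batch read would map to a single contiguous chunk on disk; I would be happy to hear if that reasoning is sound.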