Thanks, Patrick.
I should have been more explicit about what I was asking. I was already aware of the method you outlined.
What I'm asking about is the case where you have such a large number of files, or a single file so large, that it can't be (or is impractical to be) read into memory.
The idea I tried, which works so far, is to use a numpy memmap object.
So imagine a file "VeryLargeImageFile.img" consisting of binary data of type unsigned 16-bit integer (uint16). Let's say it was captured with a camera whose frame shape is (512, 640), and there are 10,000 frames stored in the file.
import numpy as np

fp = np.memmap('VeryLargeImageFile.img', dtype='uint16', mode='r',
               offset=0, shape=(10000, 512, 640))
That variable can then be passed to the ImageView item via imv.setImage(fp).
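In case it helps, here is a minimal sketch of how I'm wiring the memmap into ImageView (assuming pyqtgraph's standard ImageView widget; the file name and shape are just the example above):

import numpy as np
import pyqtgraph as pg

app = pg.mkQApp()

# Memory-map the raw frames instead of loading them all into RAM
fp = np.memmap('VeryLargeImageFile.img', dtype='uint16', mode='r',
               offset=0, shape=(10000, 512, 640))

imv = pg.ImageView()
imv.setImage(fp)   # ImageView treats axis 0 as the time/frame axis
imv.show()

app.exec_()        # may be app.exec() depending on the Qt binding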
However, on a 32-bit operating system, fp would still be limited to about 2 GB.
So my real question is whether there is a way to pass ImageView a generator that reads one frame at a time, or, if I had a large list of binary files, whether I could create a generator that cycles through the files and yields a numpy array of the data read from each one.
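To be concrete, something along these lines is what I have in mind for the multi-file case (a rough sketch only; the file pattern, dtype, and frame shape are placeholders, and I realize setImage() may not accept a generator directly, so each yielded frame might have to be pushed in one at a time):

import glob
import numpy as np

def frame_generator(pattern='Frames/*.img', dtype='uint16', shape=(512, 640)):
    """Yield one frame per binary file matching the pattern."""
    for fname in sorted(glob.glob(pattern)):
        # Read each file as raw data and reshape it into a single frame
        frame = np.fromfile(fname, dtype=dtype).reshape(shape)
        yield frame

# One possible way to consume it, one frame at a time,
# rather than handing the generator to setImage() directly:
# for frame in frame_generator():
#     imv.setImage(frame, autoRange=False, autoLevels=False)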
Is what I'm asking making more sense now? If not, let me know and I'll try to clarify by uploading my current code.
Thanks again.
-Dennis