Hi all,
I've recently been working with some very large (~750 million voxel) grids. I work with grids of different value types, but a Vec3I grid of this size can end up using roughly 10 GB of memory per grid, and for some operations I need to combine two grids at a time to evaluate complex expressions that reference many grids. My concern is that some users of the app may be on laptops with only 16 GB of RAM.
After I run a ValueOnCIter over a delay-loaded grid, the grid ends up fully loaded into memory, including all leaf buffers. Is there any mechanism to flush those buffers back to an out-of-core state, or something along those lines?
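For reference, here is a minimal sketch of what I'm doing (file and grid names are placeholders): open the file with delayed loading, then iterate the active voxels, which appears to page in every leaf buffer as the iterator touches it.

```cpp
#include <openvdb/openvdb.h>

int main()
{
    openvdb::initialize();

    // Open with delayed loading, so leaf buffers initially stay on disk.
    openvdb::io::File file("grids.vdb");      // placeholder filename
    file.open(/*delayLoad=*/true);
    openvdb::GridBase::Ptr baseGrid = file.readGrid("density"); // placeholder name
    file.close(); // the memory-mapped file is retained for lazy reads

    auto grid = openvdb::gridPtrCast<openvdb::FloatGrid>(baseGrid);

    // Each leaf buffer is loaded the first time the iterator reaches it,
    // and (as far as I can tell) stays resident afterwards.
    double sum = 0.0;
    for (auto iter = grid->cbeginValueOn(); iter; ++iter) {
        sum += *iter;
    }
    return 0;
}
```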
If I use a value accessor to request all voxels, I will also end up fully loading the grid into memory, right?
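That is, the access pattern below (sketched here with a small in-memory grid for illustration) would, on a delay-loaded grid, page in the leaf buffer of every voxel it touches, as I understand it:

```cpp
#include <openvdb/openvdb.h>

int main()
{
    openvdb::initialize();
    openvdb::FloatGrid::Ptr grid = openvdb::FloatGrid::create(/*background=*/0.0f);
    grid->tree().setValue(openvdb::Coord(10, 20, 30), 1.0f);

    // A const accessor caches the path to recently visited nodes.
    // On a delay-loaded grid, reading a voxel loads that voxel's leaf buffer.
    openvdb::FloatGrid::ConstAccessor acc = grid->getConstAccessor();
    float v = acc.getValue(openvdb::Coord(10, 20, 30));
    (void)v;
    return 0;
}
```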
Is there any existing mechanism for iterating over a grid without holding the entire grid in memory?
I guess I could use the bbox constraint when loading the grid to partition my operations. If I do that, though, I will end up with a set of VDB files on disk that must then be merged. Will the merge still require the entire final result grid to fit in memory, or does the node-stealing aspect of merging allow data to remain out-of-core through the merge?
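What I have in mind is roughly the following (names and bounding boxes are placeholders): do clipped reads of one region at a time, process each piece, and accumulate the results with Tree::merge, which steals nodes rather than copying them.

```cpp
#include <openvdb/openvdb.h>

int main()
{
    openvdb::initialize();

    openvdb::FloatGrid::Ptr result = openvdb::FloatGrid::create();

    openvdb::io::File file("input.vdb");      // placeholder filename
    file.open();

    // World-space partitions of the domain (placeholder values).
    const openvdb::BBoxd tiles[] = {
        openvdb::BBoxd(openvdb::Vec3d(-100, -100, -100), openvdb::Vec3d(0, 100, 100)),
        openvdb::BBoxd(openvdb::Vec3d(0, -100, -100), openvdb::Vec3d(100, 100, 100)),
    };

    for (const auto& bbox : tiles) {
        // Clipped read: only the portion of the grid inside bbox.
        auto partial = openvdb::gridPtrCast<openvdb::FloatGrid>(
            file.readGrid("density", bbox));  // placeholder grid name

        // ... process `partial` here ...

        // merge() steals nodes from `partial`'s tree instead of copying,
        // so `partial` is emptied as its nodes move into `result`.
        result->tree().merge(partial->tree());
    }

    file.close();
    return 0;
}
```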
Another idea is to open each VDB multiple times: perhaps iterate one top-level node from each file, then close and re-open the file, and so on. I suspect this would not be ideal, since the initial load of the grid has some cost as well.
Any advice appreciated,
Thanks,
Mike