I have a use case where I need to iterate through ALL the records in a RocksDB database (16 million records).
To improve performance I mounted the RocksDB data directory on a path backed by tmpfs, but that did not improve things by much: the full scan takes around 28 seconds on my Apple-silicon machine and around 60 seconds on a VM. I guess the gap might be due to virtualisation overhead?
I went through the tuning guide to see whether there is any tuning that would help here, but couldn't find much.
PlainTable format is, as I understand it, geared towards prefix lookups, so it probably doesn't apply here.
Is there anything else I can do to improve full-scan read performance on tmpfs? Would reducing the block cache size help? What about parallel reads?
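For context, the scan loop I'm benchmarking looks roughly like this. It's a sketch, not my exact code: the database path is made up, and `async_io` is only available in recent RocksDB versions.

```cpp
// Full-scan sketch with ReadOptions tuned for sequential iteration.
// The path "/mnt/tmpfs/mydb" is a placeholder for the tmpfs-backed DB.
#include <cassert>
#include <iostream>
#include <memory>
#include <rocksdb/db.h>

int main() {
  rocksdb::DB* db = nullptr;
  rocksdb::Options options;
  rocksdb::Status s =
      rocksdb::DB::OpenForReadOnly(options, "/mnt/tmpfs/mydb", &db);
  assert(s.ok());

  rocksdb::ReadOptions ro;
  ro.fill_cache = false;        // don't pollute the block cache during a full scan
  ro.verify_checksums = false;  // skip checksum verification if the data is trusted
  ro.readahead_size = 2 << 20;  // 2 MiB readahead; presumably matters less on tmpfs
  ro.async_io = true;           // overlap reads with iteration (recent versions only)

  uint64_t count = 0;
  std::unique_ptr<rocksdb::Iterator> it(db->NewIterator(ro));
  for (it->SeekToFirst(); it->Valid(); it->Next()) {
    // process it->key() / it->value() here
    ++count;
  }
  assert(it->status().ok());  // check the scan didn't terminate on an error
  std::cout << "scanned " << count << " records\n";

  delete db;
  return 0;
}
```

For the parallel-reads idea, my understanding is I'd have to partition the key space myself and run one iterator per thread over each sub-range, since a single iterator is not thread-safe.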