Hello,
I'm using RocksDB for large lookup tables that are written once and then only opened in read-only mode. I'm running into high memory usage for some of these databases, and I would like to understand where the memory goes when a database is opened and what options exist for reducing it.
A typical lookup table of this kind holds ~400 GB of data in SST files of ~1 GB each, so ~400 SST files. I took care of forcing a full compaction when the database was originally written, and the (then empty) WAL file is removed before opening it read-only.
It is my understanding that when opening a database, the RocksDB library considers each SST file and creates some data structures in memory for each one. Could anyone point me at places where I can learn more about tuning this memory usage? Ideally, I would like to find option settings that reduce the memory footprint when opening the database. If necessary, I could re-create the SST files with different settings (is there anything in an SST file itself that can make the library use more or less memory when opening it?).
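For concreteness, here is a minimal sketch of how I imagine opening a database with the knobs I have found so far (cache_index_and_filter_blocks, a bounded block cache, max_open_files); the 64 MB cache size, the max_open_files value, and the path are placeholders of mine, so please correct me if this is the wrong direction:

    #include <memory>
    #include <string>

    #include "rocksdb/cache.h"
    #include "rocksdb/db.h"
    #include "rocksdb/options.h"
    #include "rocksdb/table.h"

    int main() {
      rocksdb::Options options;

      // Load index and filter blocks into the (bounded) block cache on
      // demand instead of pinning them on the heap for every open SST file.
      rocksdb::BlockBasedTableOptions table_options;
      table_options.cache_index_and_filter_blocks = true;
      table_options.block_cache = rocksdb::NewLRUCache(64 << 20);  // 64 MB, arbitrary
      options.table_factory.reset(
          rocksdb::NewBlockBasedTableFactory(table_options));

      // Bound how many SST files (and their in-memory table readers)
      // are kept open at any one time.
      options.max_open_files = 100;

      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::OpenForReadOnly(options, "/path/to/lookup_table", &db);
      if (!s.ok()) return 1;
      // ... random lookups ...
      delete db;
      return 0;
    }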
My "read only" workload consists more or less of random access to a small fraction of the values so it would not benefit from any caching.
Cheers,
Manuel