From the LOG you provided earlier in the thread I see:
capacity : 8388608
num_shard_bits : 4
So that is an 8MB block cache with 2^4 = 16 shards. While 16 shards is probably OK, an 8MB block cache is small and you will probably do better with a larger one. This setup also means each shard gets only 2^19 bytes (512KB, from 8MB / 16), and problems can happen when shards are that small --
http://smalldatum.blogspot.com/2023/07/one-source-write-stalls-with-rocksdb.html

But elsewhere in LOG I see a ~14G block cache with 2^6 = 64 shards, which is better:
capacity : 15032385536
num_shard_bits : 6
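As a sanity check (my arithmetic, not something printed in LOG), the per-shard capacity for both configurations is capacity / 2^num_shard_bits:

```python
# Per-shard capacity of a sharded block cache is roughly
# capacity / 2^num_shard_bits (ignoring per-shard metadata overhead).
def bytes_per_shard(capacity: int, num_shard_bits: int) -> int:
    return capacity // (1 << num_shard_bits)

small = bytes_per_shard(8388608, 4)       # 8MB cache, 16 shards
large = bytes_per_shard(15032385536, 6)   # ~14G cache, 64 shards
print(small // 1024)       # 512 -> 512KB per shard
print(large // (1 << 20))  # 224 -> 224MB per shard
```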
Elsewhere in LOG I see
2023/10/15-00:03:32.729977 140258010806016 Options.compression: Snappy
2023/10/15-00:03:32.729979 140258010806016 Options.bottommost_compression: Disabled
And I also see that the database is ~15G on disk. If that is 15G with Snappy compression then it will probably be larger than 15G when uncompressed in the block cache.
Is the database compressed?
All of the block cache eviction counters have a 0 count (grep LOG for "evict COUNT : 0").
It is hard for me to guess what data.hit means here. It could be that these counters just reflect the warming of an empty cache. The value of block_size is 4096, so data blocks should be <= 4096 bytes, and data.miss * 4096 ~= 20GB, so the data might be ~20G uncompressed (in the block cache) and ~15G on disk.
rocksdb.block.cache.data.miss COUNT : 5250132
rocksdb.block.cache.data.hit COUNT : 5229701
rocksdb.block.cache.data.add COUNT : 5250132
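The ~20G figure is just my arithmetic from those counters, assuming each data.miss loaded a distinct block of at most block_size bytes:

```python
# Rough upper-bound estimate of uncompressed data size from the block
# cache counters, assuming every data.miss loaded a distinct data block
# of at most block_size bytes.
data_miss = 5250132   # rocksdb.block.cache.data.miss COUNT
block_size = 4096     # Options.block_size
est_bytes = data_miss * block_size
print(est_bytes, round(est_bytes / 2**30, 1))  # ~20 GiB uncompressed
```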
From this, the storage device(s) might be overloaded. The rows from (380000, 570000] onward show that ~2.6% of the reads take 380ms or longer, and there are many outliers exceeding 1 second.
** File Read Latency Histogram By Level [f::s::n::l] **
** Level 3 read latency histogram (micros):
( 22000, 33000 ] 55733 1.092% 71.443%
( 33000, 50000 ] 233936 4.583% 76.026% #
( 50000, 75000 ] 449437 8.804% 84.830% ##
( 75000, 110000 ] 398808 7.812% 92.643% ##
( 110000, 170000 ] 198621 3.891% 96.534% #
( 170000, 250000 ] 54475 1.067% 97.601%
( 250000, 380000 ] 27719 0.543% 98.144%
( 380000, 570000 ] 18829 0.369% 98.513%
( 570000, 860000 ] 17670 0.346% 98.859%
( 860000, 1200000 ] 46012 0.901% 99.760%
( 1200000, 1900000 ] 50130 0.982% 100.742%
( 1900000, 2900000 ] 133 0.003% 100.745%
( 2900000, 4300000 ] 25 0.000% 100.745%
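The tail fraction can be recomputed from the bucket counts above. LOG truncates the earlier buckets (the cumulative column starts at ~71%), so a grand total has to be inferred from one bucket's count and its printed percentage; the per-bucket percentages are rounded, which is why the cumulative column overshoots 100%:

```python
# Fraction of level-3 reads taking 380ms or longer, from the histogram.
# Bucket keys are the lower bound in microseconds; counts come from LOG.
slow_buckets = {
    380000: 18829, 570000: 17670, 860000: 46012,
    1200000: 50130, 1900000: 133, 2900000: 25,
}
# Infer the total read count: the (22000, 33000] bucket holds 55733 reads
# and LOG says that is 1.092% of all reads.
total = 55733 / 0.01092
slow = sum(slow_buckets.values())
print(f"{100 * slow / total:.1f}% of level-3 reads took >= 380ms")  # 2.6%
```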