I am running RocksDB (db_bench) on an SSD
drive. In the results, the value of 'micros/op' is not equal to
1,000,000 divided by the value of 'ops/sec', and I am trying to
understand why. Could you please explain the exact meanings of
'micros/op' and 'ops/sec'?
Here is a snippet of the results:
Initializing RocksDB Options from the specified file
Initializing RocksDB Options from command-line flags
Keys: 20 bytes each (+ 0 bytes user-defined timestamp)
Values: 800 bytes each (400 bytes after compression)
Entries: 3300000000
Prefix: 0 bytes
Keys per prefix: 0
RawSize: 2580642.7 MB (estimated)
FileSize: 1321792.6 MB (estimated)
Write rate: 20971520 bytes/second
Read rate: 0 ops/second
Compression: Snappy
Compression sampling rate: 0
Memtablerep: SkipListFactory
Perf Level: 1
------------------------------------------------
DB path: [rocksdbtest/dbbench]
readwhilewriting : 1483.688 micros/op 21561 ops/sec; 14.6 MB/s (1050010 of 1212999 found)
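For reference, here is the arithmetic behind the mismatch, using the numbers from the last line of the snippet. The interpretation in the final comment is only my guess based on the ratio of the two figures:

```python
# Values reported by db_bench in the 'readwhilewriting' line above.
micros_per_op = 1483.688
ops_per_sec = 21561

# If 'micros/op' were simply the inverse of 'ops/sec', this would
# match the reported 1483.688, but it is roughly 46.4 instead.
inverse = 1_000_000 / ops_per_sec
print(f"1,000,000 / ops/sec = {inverse:.3f} micros/op")

# The ratio of the two figures is close to 32, which would be
# consistent with a 32-thread run: micros/op averaged over the
# threads' combined elapsed time, ops/sec measured against
# wall-clock time. This is an assumption, not confirmed by the log.
ratio = micros_per_op * ops_per_sec / 1_000_000
print(f"ratio = {ratio:.2f}")
```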