By default, with direct I/O the write size is 1MB, and without direct I/O the OS should buffer the data anyway. So the write size is unlikely to be the reason.
Can you clarify what the 1000 MB/s you got refers to? Did you insert into RocksDB at a rate of 1000 MB/s, or did you see from iostat that the disk does 1000 MB/s?
--
You received this message because you are subscribed to the Google Groups "rocksdb" group.
To unsubscribe from this group and stop receiving emails from it, send an email to
rocksdb+u...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/rocksdb/9dba6835-03cc-414d-88be-c5e5981e8ef4n%40googlegroups.com.
Well, 60 seconds is usually too short for measuring sustainable write throughput, and I do expect the number to drop after 60 seconds. The RocksDB layer usually introduces some write amplification, typically between 3 and 20. So if the drive can do 2 GB/s, we are usually happy with a logical ingestion rate of around 200 MB/s if you never fine-tune anything.
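The arithmetic behind that estimate can be sketched as follows (the helper name is mine, and the write-amplification value of 10 is just a mid-range pick from the 3-20 interval mentioned above):

```python
# Estimate the sustainable logical ingest rate when the device is the
# bottleneck: every logical byte written costs `write_amp` physical bytes
# (flushes plus compaction rewrites), so logical rate = device rate / WA.

def logical_ingest_rate(device_mb_per_s: float, write_amp: float) -> float:
    """Logical MB/s RocksDB can accept given device bandwidth and WA."""
    return device_mb_per_s / write_amp

# A 2 GB/s drive with a write amplification of 10 (hypothetical mid-range
# value) sustains roughly 200 MB/s of logical ingestion.
print(logical_ingest_rate(2000, 10))  # 200.0
```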
If you only insert for 60 seconds, the bottleneck is likely the speed at which RocksDB can insert into a single DB, e.g. inserting into the write buffer, writing to the WAL, tracking write ordering, etc. This limitation can be mitigated by opening multiple RocksDB instances and sharding writes across them.
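A minimal sketch of the sharding idea, with in-memory dictionaries standing in for the DB handles (the shard count, `shard_for`, and `ShardedWriter` are all hypothetical names; with real RocksDB each shard would be a separately opened instance with its own directory and WAL):

```python
import hashlib

NUM_SHARDS = 4  # hypothetical; tune to your core count / write load

def shard_for(key: bytes) -> int:
    # Stable hash so the same key always routes to the same instance.
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % NUM_SHARDS

class ShardedWriter:
    """Routes each put() to one of several independent DB instances,
    so memtable insertion, WAL writes, and write ordering are no longer
    serialized through a single DB's write path."""

    def __init__(self, dbs):
        assert len(dbs) == NUM_SHARDS
        self.dbs = dbs  # one handle per DB instance

    def put(self, key: bytes, value: bytes) -> None:
        self.dbs[shard_for(key)].put(key, value)

# Stand-in for a DB handle, just to make the sketch runnable:
class DictDB:
    def __init__(self):
        self.data = {}
    def put(self, k, v):
        self.data[k] = v

dbs = [DictDB() for _ in range(NUM_SHARDS)]
writer = ShardedWriter(dbs)
writer.put(b"user:1", b"v1")
```

Note that sharding trades away cross-key ordering and atomic multi-key batches across shards, so it only fits workloads where keys are independent.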