This topic is interesting. Almost no disk storage engine is linear in write speed; writes actually get slower as the number of records grows.
If you write 1,000 records into an empty database, it is very fast.
If you write 10 million records, that may be fast too.
But writing 1,000 records on top of an existing 10 million is slower, because the engine has to do a lot of work to merge those 1,000 records with the existing records, which all live on disk.
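A toy model makes the cost clear (illustrative code, not nessDB's actual implementation): merging a small sorted batch into one big sorted on-disk run is a standard two-way merge, so the work is proportional to the size of the existing data, not the size of the batch.

```python
# Toy model: merging a new sorted batch into an existing sorted run.
# The names here are illustrative, not taken from nessDB.

def merge_runs(existing, batch):
    """Two-way merge: cost is O(len(existing) + len(batch)), so even
    a tiny batch forces us to touch every existing record."""
    out, i, j = [], 0, 0
    while i < len(existing) and j < len(batch):
        if existing[i] <= batch[j]:
            out.append(existing[i]); i += 1
        else:
            out.append(batch[j]); j += 1
    out.extend(existing[i:])
    out.extend(batch[j:])
    return out

# Writing 1,000 records on top of 10 million means reading and
# rewriting roughly 10,001,000 records, not 1,000.
```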
So how does the write amplification of nessDB behave?
I ran some tests this afternoon.

First, a random write of 50 million records:
max merge time: 83 sec; slowest merge count: 6,000,000; merge speed: 72,289/sec

Then another random write of 50 million records:
max merge time: 172 sec; slowest merge count: 6,000,000; merge speed: 34,883/sec

Then a random write of 18 million records:
max merge time: 269 sec; slowest merge count: 6,000,000; merge speed: 22,304/sec

Then another random write of 18 million records:
max merge time: 296 sec; slowest merge count: 6,000,000; merge speed: 20,270/sec
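As a sanity check on the figures, the reported merge speed is simply the merge count divided by the max merge time:

```python
# Merge speed = merge-count / max-merge-time; integer division
# reproduces the figures reported above.
def merge_speed(count, seconds):
    return count // seconds

for seconds in (83, 172, 269, 296):
    print(merge_speed(6_000_000, seconds))
# prints 72289, 34883, 22304, 20270
```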
From these results we can see that the merge speed keeps dropping, but the rate of decline is acceptable.
The cause of the slowdown is that more and more *.sst (index) files are generated: each merge has to read many *.sst files from disk into memory, merge them, and write the result back to disk.
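That multi-file merge can be sketched as a k-way merge (a hedged sketch with made-up names, not nessDB's code): the total work grows with the combined size of all participating runs, which is why merges slow down as runs accumulate.

```python
import heapq

# Hedged sketch: a compaction that k-way-merges several sorted
# "*.sst" runs into one. heapq.merge streams the runs in order,
# but every record of every run still has to be read and rewritten.
def merge_ssts(runs):
    return list(heapq.merge(*runs))
```

For example, `merge_ssts([[1, 4, 7], [2, 5], [3, 6]])` yields a single sorted run `[1, 2, 3, 4, 5, 6, 7]`.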
The growing number of *.sst files does not hurt random reads, however: no matter how many records there are, one random read touches the disk at most three times.
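One plausible way an LSM-style engine keeps that bound (a hedged sketch with invented data structures, not nessDB's actual ones): if each file's key range and a membership filter are held in memory, a lookup rules out most files without any I/O, and only a small constant number of disk blocks is read from the one candidate file.

```python
# Hedged sketch: a Python set stands in for a bloom filter here,
# so it has no false positives; a real filter would occasionally
# cost an extra wasted disk touch.
def lookup(key, files):
    disk_touches = 0
    for f in files:                      # metadata checks: in memory
        if f["min"] <= key <= f["max"] and key in f["filter"]:
            disk_touches += 1            # one block read from disk
            return f["data"].get(key), disk_touches
    return None, disk_touches
```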