Thanks for the reply. I'm using the default configuration. I understand what's happening in my scenario, but I still don't know how to avoid it.
I'm converting data from another type of database. In that database a key can have multiple values, so I need RocksDB to store multiple values per key.
For example, I have a key `key1` with three values `v1`, `v2`, and `v3`. Here's what I did (a rough sketch of this loop follows the list):
1. I iterate over the first pair `key1`/`v1` and store it in RocksDB.
2. I iterate over the second pair `key1`/`v2`, read the previous value `v1`, join them, and store `v1v2`.
3. I iterate over the third pair `key1`/`v3`, read the previous value `v1v2`, and store `v1v2v3`.
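For reference, this is roughly what my conversion loop does for each pair (a minimal C++ sketch; the `AppendValue` helper name and the error handling are just illustrative, not my actual code):

```cpp
#include <cassert>
#include <string>

#include "rocksdb/db.h"

// Read-modify-write append: read the current value for `key` (if any),
// concatenate the new value onto it, and write the joined result back.
void AppendValue(rocksdb::DB* db, const std::string& key,
                 const std::string& value) {
  std::string existing;
  rocksdb::Status s = db->Get(rocksdb::ReadOptions(), key, &existing);
  if (s.IsNotFound()) {
    existing.clear();  // first value seen for this key
  } else {
    assert(s.ok());
  }
  existing.append(value);  // join the new value onto the previous ones
  s = db->Put(rocksdb::WriteOptions(), key, existing);
  assert(s.ok());
}
```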
Unfortunately, there are many duplicate keys in my scenario, and after a few minutes of running, the length of a joined value can exceed 5 KB.
I also find that the memory allocation comes from `WriteBatch.rep_` and `SkipListRep::Allocate`. In my example, `WriteBatch.rep_` ends up holding something like `v1v1v2v1v2v3`, i.e. every historical version of the value, since each Put appends a new record rather than overwriting the old one in place, which makes memory grow quickly.
Sadly, I still don't know how to tackle this.