Mital,
Like Arun said, you should set bdb.cache.evictln=true. Setting it to false is an optimization for slow spinning disks, at the cost of increased JVM heap usage (and consequently longer GC pauses). Since you're on SSDs, you'll actually get better performance with it set to true.
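For reference, that's a one-line change in your server.properties:

```properties
# Evict leaf nodes (the data records) from the BDB cache after use,
# leaving the cache to the index nodes -- the right trade-off on SSDs.
bdb.cache.evictln=true
```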
Can you also give us more details on your read and write patterns?
- put() per second
- get() per second
- getAll() per second
- delete() per second
- For put(), ratio of updates versus creates (overwrite versus new key)
- For getAll(), average and max number of keys per call
- Average and max key size
- Average and max value size
- How many stores on the cluster
You're best off making the JVM heap as small as possible, while still leaving enough BDB cache to hold the hot set of the stores' indexes.
A quick out-of-the-box performance win is to remove the read-only storage configuration from the storage.configs parameter, so that only the BdbStorageEngine is running in your app.
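Concretely, that means trimming storage.configs in server.properties down to the BDB engine. I'm guessing at the read-only class name below (it may differ in your version), so adjust to match what your config currently lists:

```properties
# Before (read-only engine loaded alongside BDB -- class name is a guess):
# storage.configs=voldemort.store.bdb.BdbStorageConfiguration, voldemort.store.readonly.ReadOnlyStorageConfiguration
# After (BDB only):
storage.configs=voldemort.store.bdb.BdbStorageConfiguration
```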
Here is one of our cluster configurations; it hosts 50 stores and peaks at about 70,000 writes per second against billions of keys:
admin.max.threads=40
bdb.cache.evictln=true
bdb.cache.size=20GB
bdb.cleaner.interval.bytes=15728640
bdb.cleaner.lazy.migration=false
bdb.cleaner.min.file.utilization=0
bdb.cleaner.threads=1
bdb.enable=true
bdb.evict.by.level=true
bdb.expose.space.utilization=true
bdb.lock.nLockTables=47
bdb.minimize.scan.impact=true
bdb.one.env.per.store=true
bdb.raw.property.string=je.cleaner.adjustUtilization=false
data.directory=${voldemort.data.dir}
enable.server.routing=false
enable.verbose.logging=false
http.enable=false
max.proxy.put.threads=50
nio.connector.selectors=50
num.scan.permits=2
restore.data.timeout.sec=1314000
retention.cleanup.first.start.hour=3
scheduler.threads=24
storage.configs=voldemort.store.bdb.BdbStorageConfiguration
stream.read.byte.per.sec=209715200
stream.write.byte.per.sec=78643200
voldemort.home=${voldemort.home.dir}
Some of the settings, like bdb.cleaner.threads, bdb.checkpoint.interval.bytes, and bdb.cleaner.interval.bytes, depend heavily on how frequently you create new keys versus overwrite existing ones, and on your average and peak write sizes.
We host that config in a JVM with a fixed 31 GB heap (Xms and Xmx both set to 31g) and UseCompressedOops enabled; staying under 32 GB keeps compressed oops effective.
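For what it's worth, the corresponding JVM flags look roughly like this (any further GC tuning depends on your workload, so treat this as a starting point rather than our exact command line):

```properties
# 31 GB fixed-size heap; below 32 GB the JVM can use 32-bit compressed
# object pointers, which saves heap and improves cache locality.
-Xms31g -Xmx31g -XX:+UseCompressedOops
```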