storage-dev@,
I've been tracking down a memory regression introduced by a feature of ours. The feature itself isn't too important here; what I found is that it increased the usage of certain leveldb databases, and it was those databases that drove up Chrome's memory usage.
There are three databases impacted by our feature: two owned by ntp_snippets, and one owned by the download component. All three typically store relatively few items, say a dozen. As items are added and deleted over time, memory usage grows. Since we're targeting Android devices with limited memory, this matters.
I think the leveldb options for these databases need to be updated. I went ahead and created a simple program that writes some data to a leveldb database and then deletes it. I ran that program 1000 times, reopening the database on each run and writing and then deleting 20KB worth of data (so the database holds no live data after each run).
I varied some of the leveldb options to see how memory and CPU time were affected.
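For reference, the per-run loop was along these lines. This is a sketch rather than the exact program: the path, key names, and value sizes are illustrative, and it assumes you're building against leveldb (upstream or Chrome's copy, which is where the reuse_logs option lives).

```cpp
// Sketch of the churn benchmark (assumes the leveldb library is available;
// the path and key/value layout are illustrative, not the exact program).
#include <cassert>
#include <string>
#include "leveldb/db.h"
#include "leveldb/options.h"

int main() {
  for (int run = 0; run < 1000; ++run) {
    leveldb::Options options;
    options.create_if_missing = true;
    // The options under test, shown here at the defaults described above.
    options.write_buffer_size = 4 * 1024 * 1024;
    options.max_file_size = 2 * 1024 * 1024;
    options.reuse_logs = true;

    leveldb::DB* db = nullptr;
    leveldb::Status s = leveldb::DB::Open(options, "/tmp/bench_db", &db);
    assert(s.ok());

    // Write ~20KB, then delete it all, so no live data remains after the run.
    const std::string value(1024, 'x');  // 1KB per value, 20 values
    for (int i = 0; i < 20; ++i) {
      db->Put(leveldb::WriteOptions(), "key-" + std::to_string(i), value);
    }
    for (int i = 0; i < 20; ++i) {
      db->Delete(leveldb::WriteOptions(), "key-" + std::to_string(i));
    }
    delete db;  // Close; the next iteration reopens the same database.
  }
  return 0;
}
```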
Options:
reuse_logs (default: true)
max_file_size (default: 2MB)
write_buffer_size (default: 4MB)
Data is here:
Some observations:
reuse_logs=true costs more in both memory and time for low-usage databases.
Since there are many leveldb instances in Chrome, the 4MB write buffer and reuse_logs defaults add up to significant memory usage over time, even for "small" databases.
So I think we should change these three specific DBs to use much smaller write buffers and reuse_logs=false. Does that make sense to you? Are there any risks?
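Concretely, the change I have in mind looks something like this. The 64KB write buffer below is a strawman value to make the shape of the change clear, not a measured optimum:

```cpp
// Proposed options for the three small databases (sketch; the 64KB
// write buffer is a strawman, not a tuned value).
leveldb::Options options;
options.create_if_missing = true;
options.write_buffer_size = 64 * 1024;  // down from the 4MB default
options.reuse_logs = false;             // skip log reuse for low-churn DBs
```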
Should we do something to make this poor configuration less likely in the future?
Is this the right group to field these questions? :-)
Thanks!