In current versions of Couchbase Server, we require enough memory to hold the metadata for all items. This is what keeps operations like a get miss or an add consistently fast.
In your case, you'd want to increase the bucket's memory quota to store more items.
If you could share more about your use case, it'd be helpful to us. What were you trying to test for?
Matt
I am doing the test on a desktop VM: 4 cores, 6 GB RAM, 10k RPM HDD, 100 GB disk, Ubuntu 12.04. Just Couchbase running, nothing else. Couchbase has 1 bucket with a ~5.5 GB RAM quota, mem_low_wat at 4 GB, mem_high_wat at 5 GB. No replication, flush enabled, auto compaction on.
A Java program outside the VM connects to the Couchbase server. All traffic stays on the local network since both run on the same desktop.
Two threads, each continuously inserting 500k unique 5-character strings as keys with 1 as the value, then sleeping 10 seconds before repeating. I added the sleep because I want Couchbase to have time to finish eviction.
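Roughly what each worker thread does (a minimal sketch assuming the 1.x CouchbaseClient Java SDK; the node address, bucket name, and key-generation scheme are placeholders, not my exact code):

    import com.couchbase.client.CouchbaseClient;

    import java.net.URI;
    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.atomic.AtomicLong;

    public class EvictionLoadTest {
        // Global counter so the two threads never produce the same key.
        private static final AtomicLong counter = new AtomicLong();

        // Encode the counter as a 5-character base-36 string, e.g. "000a7".
        private static String nextKey() {
            String s = Long.toString(counter.getAndIncrement(), 36);
            return ("00000" + s).substring(s.length()); // left-pad to 5 chars
        }

        public static void main(String[] args) throws Exception {
            // Placeholder node address and bucket name -- adjust to your setup.
            List<URI> nodes = Arrays.asList(URI.create("http://127.0.0.1:8091/pools"));
            final CouchbaseClient client = new CouchbaseClient(nodes, "default", "");

            Runnable worker = new Runnable() {
                public void run() {
                    while (true) {
                        // Insert 500k unique 5-char keys with the value 1.
                        for (int i = 0; i < 500000; i++) {
                            client.set(nextKey(), 0, 1);
                        }
                        // Pause so the server has a chance to evict items.
                        try {
                            Thread.sleep(10000);
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                }
            };

            new Thread(worker).start();
            new Thread(worker).start();
        }
    }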
But apparently this approach doesn't work: the low water mark is hit, and then the high water mark.
Is this approach unrealistic? It doesn't seem like a very high load for a website, IMO. I understand metadata has to be in memory for fast ops, but items are not being evicted quickly enough.