Metadata overhead warning

twb

Mar 29, 2014, 7:24:07 AM
to couc...@googlegroups.com
I am using 2.2.0 community. I am testing with a simple Java program,

4 threads, each with its couchbase client writing to a bucket (512 ram, 15gb hdd space) continuously.

public void run() {
    // Each thread creates its own client against the local node.
    List<URI> hosts = Arrays.asList(URI.create("http://localhost:8091/pools"));
    String bucket = "mybucket";
    String password = "";
    CouchbaseClient client = null;
    try {
        client = new CouchbaseClient(hosts, bucket, password);
    } catch (IOException e1) {
        e1.printStackTrace();
    }

    OperationFuture<Boolean> addOp;
    BufferedReader br;
    int tmp = 0;
    while (true) {
        // count is an AtomicInteger shared by all 4 threads, so each of
        // the files d:\file1 .. d:\file1833 is processed exactly once.
        tmp = count.incrementAndGet();
        if (tmp > 1833) break;
        try {
            br = new BufferedReader(new FileReader("d:\\file" + tmp));
            String line;
            while ((line = br.readLine()) != null) {
                // Each line is used as the key; the stored value is just 1.
                addOp = client.add(line, 1);
                if (!addOp.get()) {
                    System.out.println(tmp + "---" + line);
                }
            }
            br.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    client.shutdown();
}



At about 6 million inserts, I am encountering the errors below, and subsequent inserts fail. What should I do?


[19:22:05] - Hard Out Of Memory Error. Bucket "mybucket" on node 127.0.0.1 is full. All memory allocated to this bucket is used for metadata.
[19:22:05] - Metadata overhead warning. Over 68% of RAM allocated to bucket "mybucket" on node "127.0.0.1" is taken up by keys and metadata.


twb

Mar 29, 2014, 7:46:53 AM
to couc...@googlegroups.com
By the way, the bucket type is couchbase.
Replica not enabled.
Disk Read-Write Concurrency 3.
Flush enabled.

Matt Ingenthron

Mar 29, 2014, 2:41:05 PM
to couc...@googlegroups.com

In current versions of Couchbase Server we require enough memory for metadata.  This is to keep operations like a get miss or an add always fast.

In your case, you'd want to increase the memory size to store more items.
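
As a rough back-of-the-envelope sketch of why a 512 MB quota fills up: the ~56 bytes of per-item metadata used below is an assumed figure from the Couchbase 2.x sizing guidelines, so treat the numbers as approximate rather than exact for your build.

// Hedged sizing estimate: assumes ~56 bytes of metadata per item
// (from the 2.x sizing guidance) plus the key bytes themselves.
public class MetadataEstimate {
    public static void main(String[] args) {
        long items = 6000000L;     // roughly where the inserts started failing
        int keyBytes = 5;          // 5-character keys
        int metaPerItem = 56;      // assumed metadata bytes per item

        long metadataBytes = items * (metaPerItem + keyBytes);
        System.out.printf("~%.0f MB of RAM for keys and metadata alone%n",
                metadataBytes / (1024.0 * 1024.0));
        // Prints roughly 349 MB -- most of a 512 MB bucket quota
        // before any values are stored at all.
    }
}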

If you could share more about your use case, it'd be helpful to us.  What were you trying to test for?

Matt


twb

Mar 30, 2014, 8:23:46 AM
to couc...@googlegroups.com
I am trying to do a very simple stress test, to get a rough feel of how much memory is needed and how many ops/sec I can get. So I am bulk loading random unique strings of 5 chars as keys, with 1 as the value.

I read that this happens when inserts come in much faster than eviction can free memory. I tried playing around with mem_low_wat and mem_high_wat, but with no success. To be fair, I have only given Couchbase 769 MB of RAM.

Now I am running the same test with 2 threads continuously inserting and Couchbase given 6 GB of RAM. I shall see if the memory errors happen again.

But one more question: why am I only achieving an average of 7k ops/sec?
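
One client-side factor I should probably rule out first: my loop calls addOp.get() after every single add, so each thread only ever has one operation in flight. A rough sketch of batching the futures instead (same 1.x CouchbaseClient and imports as my first post; BATCH is arbitrary and 'keys' stands in for the lines read from file):

// Sketch: keep a batch of adds in flight and check the futures afterwards,
// instead of waiting on each one before sending the next.
void bulkAdd(CouchbaseClient client, Iterable<String> keys) throws Exception {
    final int BATCH = 1000;
    List<OperationFuture<Boolean>> pending =
            new ArrayList<OperationFuture<Boolean>>();
    for (String key : keys) {
        pending.add(client.add(key, 1));        // value 1, as before
        if (pending.size() >= BATCH) {
            for (OperationFuture<Boolean> f : pending) {
                if (!f.get()) {
                    System.out.println("add failed: " + f.getKey());
                }
            }
            pending.clear();
        }
    }
}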

twb

Mar 30, 2014, 9:28:14 AM
to couc...@googlegroups.com
When compaction kicks in, the bucket seems non-operational for a moment. I have gotten more than a handful of the errors below.

java.lang.RuntimeException: Timed out waiting for operation
        at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:141)
        at com.amazonaws.tvm.PingServlet$1.run(PingServlet.java:89)
        at java.lang.Thread.run(Thread.java:722)
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: 192.168.75.142/192.168.75.142:11210
        at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:166)
        at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:139)
        ... 2 more


twb

Mar 30, 2014, 10:03:16 PM
to couc...@googlegroups.com
Matt, I have a feeling something is very wrong. Tell me I am doing it incorrectly. It's pretty scary that I have developed my mobile apps and backend with Couchbase only to discover this now.

I am doing the test on a desktop VM: 4 cores, 6 GB RAM, 10k rpm HDD, 100 GB of space, Ubuntu 12.04. Just Couchbase running, nothing else. Couchbase with 1 bucket of ~5.5 GB RAM, mem_low_wat at 4 GB, mem_high_wat at 5 GB. No replication, flush enabled, auto compaction.

A java program outside the VM connecting to the couchbase server. All local network since on same desktop.

2 threads, each continuously inserting 500k unique 5-char strings as keys with 1 as the value, then sleeping 10 seconds before repeating. I added the sleep because I want Couchbase to finish eviction.

But apparently this approach doesn't work. The low water mark is hit, and then the high water mark.

Is this approach unrealistic? It doesn't seem like a very high-load website to me. I understand metadata has to be in memory for fast ops, but items are not being evicted fast enough.
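
If the point of the sleep is just to let ejection catch up, maybe a better approach is to poll the bucket stats and back off while memory is above the high water mark, rather than sleeping a fixed 10 seconds. A rough sketch (getStats() comes from the underlying spymemcached client; the stat names mem_used and ep_mem_high_wat are what I expect from the ep-engine stats and would need to be verified on this build, and java.util.Map is assumed imported):

// Sketch: pause inserting while any node reports memory above the
// high water mark, giving ejection a chance to free memory.
static void waitForEjection(CouchbaseClient client) throws InterruptedException {
    boolean above = true;
    while (above) {
        above = false;
        for (Map<String, String> nodeStats : client.getStats().values()) {
            long memUsed = Long.parseLong(nodeStats.get("mem_used"));
            long highWat = Long.parseLong(nodeStats.get("ep_mem_high_wat"));
            if (memUsed > highWat) {
                above = true;
            }
        }
        if (above) {
            Thread.sleep(1000);   // wait before checking the stats again
        }
    }
}

Though as Matt said, keys and metadata always stay resident, so if metadata alone fills the quota this only delays the inevitable; only more RAM (or fewer/shorter keys) would really help.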
