Operation queue filling up


shlomo

Jun 4, 2008, 8:15:32 AM6/4/08
to spymemcached
I have two machines, one Athlon X2 and one Opteron, with identical OS
and runtime environments. Each machine runs a memcached server,
configured exactly the same way (-m 256). On each machine I'm running
my application, which uses the default connection type and connects to
the memcached server on localhost:11211. On startup, my application
pre-populates memcached with about 200MB of data.

Here's the strange part: on the Opteron machine, I successfully pre-populate the cache from the database with no problem. On the Athlon X2 machine, after storing about 190 MB I get an exception:
java.lang.IllegalStateException: Queue full
    at java.util.AbstractQueue.add(AbstractQueue.java:64)
    at net.spy.memcached.protocol.TCPMemcachedNodeImpl.addOp(TCPMemcachedNodeImpl.java:215)
    at net.spy.memcached.MemcachedConnection.addOperation(MemcachedConnection.java:431)
    at net.spy.memcached.MemcachedConnection.addOperation(MemcachedConnection.java:426)
    at net.spy.memcached.MemcachedClient.addOp(MemcachedClient.java:191)
    at net.spy.memcached.MemcachedClient.asyncStore(MemcachedClient.java:221)
    at net.spy.memcached.MemcachedClient.set(MemcachedClient.java:426)

Using memstat I can see that the memcached server is indeed being
filled.

Any ideas what could be going on, and how to solve it?

Thanks.

Dustin

Jun 4, 2008, 12:52:38 PM6/4/08
to spymemcached

This happens when you add async ops faster than they can be
processed. The easiest thing to do is occasionally call get() on the
Future returned by a set. Something like this:

int i = 0;
for (Something s : giganticIterable) {
    // 0 = no expiration; set() returns immediately with a Future
    Future<Boolean> f = mc.set(s.getKey(), 0, s);
    // Every 1,000 sets, block on the latest Future so the
    // client's op queue has a chance to drain.
    if (++i % 1000 == 0) {
        f.get();
    }
}

That'll slow down the load, of course, but draining the op queue
every 1,000 ops ensures you don't overflow it.
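The same backpressure pattern can be sketched without a running memcached, using a plain ExecutorService to stand in for the client's async sets (the class name and counts here are illustrative, not part of spymemcached):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class ThrottledLoad {
    public static void main(String[] args) throws Exception {
        // Stand-in for the client's async op processing: one worker
        // thread draining a queue of submitted tasks.
        ExecutorService worker = Executors.newSingleThreadExecutor();

        Future<Boolean> last = null;
        for (int i = 1; i <= 10_000; i++) {
            // Analogous to mc.set(...): returns immediately with a Future.
            last = worker.submit(() -> true);
            // Backpressure: every 1,000 submissions, block on the most
            // recent Future so the queue drains before continuing.
            if (i % 1000 == 0) {
                last.get();
            }
        }
        worker.shutdown();
        worker.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("loaded 10000 items");
    }
}
```

The key point is that get() on the most recent Future implies everything queued before it has been processed too, so one blocking call per batch is enough to bound the queue depth.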

shlomo

Jun 6, 2008, 7:15:30 AM6/6/08
to spymemcached
Dustin - thanks! Adding such a "flushing" get() every 1,000 write
operations cleared up the issue.