swap-store failure

Avani

Oct 22, 2010, 5:43:32 PM
to project-voldemort
I am loading a 31 GB HDFS file into a Hadoop read-only store. The build-store step succeeded, but the swap is failing. I have a 3-node cluster. Any pointers on these out-of-memory errors?

Also, the swap command does not exit with an error; it just hangs!

[2010-10-23 04:38:10,783 voldemort.server.http.gui.ReadOnlyStoreManagementServlet] INFO Fetch complete.
[2010-10-23 04:38:28,294 voldemort.store.readonly.ReadOnlyStorageEngine] INFO Swapping files for store 'STORE' from /hadoop/voldemort/hdfs-fetcher/STORE_1287783114110/node-0
[2010-10-23 04:38:28,294 voldemort.store.readonly.ReadOnlyStorageEngine] INFO Acquiring write lock on 'STORE':
[2010-10-23 04:38:28,294 voldemort.store.readonly.ReadOnlyStorageEngine] INFO Renaming data and index files for 'STORE':
[2010-10-23 04:38:28,307 voldemort.store.readonly.ReadOnlyStorageEngine] INFO Setting primary files for store 'STORE' to /hadoop/voldemort/hdfs-fetcher/STORE_1287783114110/node-0
[2010-10-23 04:38:28,695 voldemort.store.readonly.ReadOnlyStorageEngine] INFO Rolling back store 'STORE' to version 1.
[2010-10-23 04:38:30,965 voldemort.store.readonly.ReadOnlyStorageEngine] INFO Rollback operation completed on 'STORE', releasing lock.
[2010-10-23 04:38:30,965 voldemort.store.readonly.ReadOnlyStorageEngine] ERROR Swap operation failed.
[2010-10-23 04:38:30,965 voldemort.server.http.gui.ReadOnlyStoreManagementServlet] ERROR Error while performing operation.
voldemort.VoldemortException: java.io.IOException: Map failed
        at voldemort.store.readonly.ChunkedFileSet.mapFile(ChunkedFileSet.java:126)
        at voldemort.store.readonly.ChunkedFileSet.<init>(ChunkedFileSet.java:75)
        at voldemort.store.readonly.ReadOnlyStorageEngine.open(ReadOnlyStorageEngine.java:120)
        at voldemort.store.readonly.ReadOnlyStorageEngine.rollback(ReadOnlyStorageEngine.java:231)
        at voldemort.store.readonly.ReadOnlyStorageEngine.swapFiles(ReadOnlyStorageEngine.java:183)
        at voldemort.server.http.gui.ReadOnlyStoreManagementServlet.doSwap(ReadOnlyStoreManagementServlet.java:156)
        at voldemort.server.http.gui.ReadOnlyStoreManagementServlet.doPost(ReadOnlyStoreManagementServlet.java:131)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
        at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:389)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
        at org.mortbay.jetty.Server.handle(Server.java:326)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
        at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:747)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
        at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
        at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)
Caused by: java.io.IOException: Map failed
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:758)
        at voldemort.store.readonly.ChunkedFileSet.mapFile(ChunkedFileSet.java:122)
        ... 20 more
Caused by: java.lang.OutOfMemoryError: Map failed
        at sun.nio.ch.FileChannelImpl.map0(Native Method)
        at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:755)
        ... 21 more
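
The "Map failed" error at the bottom is thrown by FileChannel.map() when the JVM cannot reserve the requested region of virtual address space. Here is a minimal sketch of the kind of read-only mapping that is failing (this is not the actual ChunkedFileSet code, and the file path in main() is made up), just to show where the error comes from:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapIndexSketch {

    // Map an index file read-only into memory. Every map() call reserves
    // virtual address space in the server process; when the JVM cannot
    // reserve the requested region, map() fails with the
    // "java.io.IOException: Map failed" / "OutOfMemoryError: Map failed"
    // pair shown in the stack trace above.
    static MappedByteBuffer mapReadOnly(File indexFile) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(indexFile, "r");
        try {
            FileChannel channel = raf.getChannel();
            // The mapping remains valid after the channel is closed.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        } finally {
            raf.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical index file; a real store keeps one index per chunk.
        MappedByteBuffer index = mapReadOnly(new File("/tmp/0.index"));
        System.out.println("Mapped " + index.capacity() + " bytes");
    }
}

Roughly speaking, a call like this happens once per chunk file when the store is (re)opened, so the address space needed grows with the number and size of chunk files.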

Avani

Oct 25, 2010, 6:36:42 PM
to project-voldemort
http://groups.google.com/group/project-voldemort/browse_thread/thread/ea0681e7d1faaf25/f75aefe99befcc02?lnk=gst&q=Building+a+read-only+store#f75aefe99befcc02

I used the above link for details on this issue, but reducing readonly.file.handles to 1 did not help in my situation. I have 31 GB of data with 20 x 37 MB indices = 740 MB of address space to map. It seems that, with the Voldemort server's default 2 GB JVM heap, a 32-bit JVM does not leave enough virtual address space for these mappings, so 64-bit Java is needed for swapping to work with this amount of data. Restarting the server with the -d64 option helped.
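
In case it is useful to anyone else, the restart was along these lines (exact script and flags depend on your setup; this assumes the stock bin/voldemort-server.sh picks up extra JVM flags from the VOLD_OPTS environment variable, and the 2 GB heap is just the default carried over):

VOLD_OPTS="-d64 -Xmx2G" bin/voldemort-server.sh <voldemort config dir>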

What still surprises me is that I have a cluster of 3 nodes, each with the same configuration. The swap succeeded on node-1 and node-2 (with 32-bit Java) but failed on node-0. The only difference is that node-0 also serves as the namenode for HDFS; maybe that is taking a lot of memory?