Loading and java.lang.OutOfMemoryError


s.ar...@geophy.com

Sep 20, 2016, 8:04:15 AM
to Stardog
Hi all, I am trying to load geonames. I have a 19 GB file in N-Triples format.

I have configured Stardog with 8 GB for both stack and heap memory.

I get this error:

INFO  2016-09-17 03:24:06,067 [Stardog.Executor-19] com.complexible.stardog.StardogKernel:printInternal(314): Creating index: 100% complete in 00:06:06 (443.3K triples/sec)
INFO  2016-09-17 03:24:06,068 [Stardog.Executor-1030] com.complexible.stardog.StardogKernel:printInternal(314): Creating index: 100% complete in 00:06:06 (443.3K triples/sec)
INFO  2016-09-17 03:24:06,068 [Stardog.Executor-1030] com.complexible.stardog.StardogKernel:stop(326): 
INFO  2016-09-17 03:24:06,068 [Stardog.Executor-1030] com.complexible.stardog.StardogKernel:stop(329): Creating index finished in 00:06:06.536
WARN  2016-09-17 10:43:20,883 [RDFStreamProcessor-2] com.complexible.common.rdf.rio.RDFStreamProcessor:setException(557): Error during loading /home/samur/tools/stardog-4.1.3/../../data/geonames/xaj.nt: java.lang.OutOfMemoryError: GC overhead limit exceeded
java.lang.RuntimeException: java.lang.OutOfMemoryError: GC overhead limit exceeded
at com.complexible.common.rdf.rio.RDFStreamProcessor$ProducerThread.work(RDFStreamProcessor.java:781) [stardog-utils-rdf-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamProcessor$Worker.call(RDFStreamProcessor.java:737) [stardog-utils-rdf-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamProcessor$Worker.call(RDFStreamProcessor.java:726) [stardog-utils-rdf-4.1.3.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.nio.CharBuffer.wrap(CharBuffer.java:373) ~[?:1.8.0_91]
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:311) ~[?:1.8.0_91]
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) ~[?:1.8.0_91]
at sun.nio.cs.StreamDecoder.read0(StreamDecoder.java:127) ~[?:1.8.0_91]
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:112) ~[?:1.8.0_91]
at java.io.InputStreamReader.read(InputStreamReader.java:168) ~[?:1.8.0_91]
at org.openrdf.rio.ntriples.NTriplesParser.readCodePoint(NTriplesParser.java:607) ~[sesame-rio-ntriples-4.0.0.jar:?]
at org.openrdf.rio.ntriples.NTriplesParser.parseUriRef(NTriplesParser.java:462) ~[sesame-rio-ntriples-4.0.0.jar:?]
at org.openrdf.rio.ntriples.NTriplesParser.parseSubject(NTriplesParser.java:357) ~[sesame-rio-ntriples-4.0.0.jar:?]
at org.openrdf.rio.ntriples.NTriplesParser.parseTriple(NTriplesParser.java:301) ~[sesame-rio-ntriples-4.0.0.jar:?]
at org.openrdf.rio.ntriples.NTriplesParser.parse(NTriplesParser.java:192) ~[sesame-rio-ntriples-4.0.0.jar:?]
at com.complexible.common.rdf.rio.RDFStreamBuilder$RDFAbstractStream.parse(RDFStreamBuilder.java:230) ~[stardog-utils-rdf-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamBuilder$RDFAbstractStream.parse(RDFStreamBuilder.java:197) ~[stardog-utils-rdf-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamProcessor$ProducerThread.work(RDFStreamProcessor.java:773) ~[stardog-utils-rdf-4.1.3.jar:?]
... 6 more


Any tips on how I can fix this?

Zachary Whitley

Sep 20, 2016, 9:06:04 AM
to Stardog
Are you sure you are actually running Stardog with an 8G heap? I've been able to successfully load a slightly smaller geonames dataset (18G) with a 4G heap. What is the result of running

echo $STARDOG_JAVA_ARGS

If it is set, you can simply try increasing the heap space, but again, I'm surprised that 8G wasn't enough; I suspect it might actually be running with the default 2G heap space.
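For example, assuming a bash-style shell (the sizes below are illustrative, not a recommendation):

```shell
# Tell Stardog's launch scripts which JVM options to use.
# -Xms/-Xmx set the initial/maximum heap size; adjust to fit your machine.
export STARDOG_JAVA_ARGS="-Xms8g -Xmx8g"

# Confirm the value is visible in the shell that will start the server.
echo "$STARDOG_JAVA_ARGS"
```

Then restart the Stardog server so the new settings take effect.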

--
-- --
You received this message because you are subscribed to the C&P "Stardog" group.
To post to this group, send email to sta...@clarkparsia.com
To unsubscribe from this group, send email to
stardog+unsubscribe@clarkparsia.com
For more options, visit this group at
http://groups.google.com/a/clarkparsia.com/group/stardog?hl=en
---
You received this message because you are subscribed to the Google Groups "Stardog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to stardog+unsubscribe@clarkparsia.com.

Evren Sirin

Sep 20, 2016, 9:24:09 AM
to Stardog
You can use the `stardog-admin server status` command to see the current
memory usage (heap and off-heap) along with the maximum memory settings.

Best,
Evren

Samur Araujo

Sep 22, 2016, 11:08:22 AM
to sta...@clarkparsia.com
Hi Zachary, I managed to load geonames by increasing the memory. But there is another dataset that I cannot load.

The dataset is 14 GB compressed (ttl.gz).

Here is the configuration:

stardog-4.1.3$ ./bin/stardog-admin server status
Backup Storage Directory : .backup
CPU Load                 : 31.6200 %
Connection Timeout       : 1h
Export Storage Directory : .exports
Leading Wildcard Search  : false
Memory Heap              : 1.6G (Allocated: 7.7G Max: 7.7G)
Memory Non-Heap          :  97M (Allocated: 100M Max: 8.0G)
Named Graph Security     : false
Platform Arch            : amd64
Platform OS              : Linux 4.4.0-38-generic, Java 1.8.0_91
Query All Graphs         : false
Query Timeout            : 5m
Search Default Limit     : 100
Stardog Home             : /home/samur/tools/stardog-4.1.3/bin/..
Stardog Version          : 4.1.3
Strict Parsing           : true
Uptime                   : 1 day 7 hours 5 minutes 39 seconds
Watchdog Enabled         : true
Watchdog Port            : 5833
Watchdog Remote Access   : true

------------------------------------------------------------------------------------------------------------------------------------------

INFO  2016-09-22 16:47:54,490 [Stardog.Executor-357] com.complexible.stardog.StardogKernel:printInternal(314): Parsing triples: 8% complete in 00:39:25 (283.0M triples - 119.6K triples/sec)
WARN  2016-09-22 16:48:06,120 [Stardog.Executor-356] com.complexible.common.rdf.rio.RDFStreamProcessor:setException(557): Error during loading /home/samur/tools/d2rq-0.8.1/data.ttl.gz: java.lang.OutOfMemoryError: Direct buffer memory
java.lang.RuntimeException: java.lang.OutOfMemoryError: Direct buffer memory
at com.complexible.common.rdf.rio.RDFStreamProcessor$Consumer.work(RDFStreamProcessor.java:991) [stardog-utils-rdf-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamProcessor$Worker.call(RDFStreamProcessor.java:737) [stardog-utils-rdf-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamProcessor$Worker.call(RDFStreamProcessor.java:726) [stardog-utils-rdf-4.1.3.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_91]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_91]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_91]
Caused by: java.lang.OutOfMemoryError: Direct buffer memory
at java.nio.Bits.reserveMemory(Bits.java:693) ~[?:1.8.0_91]
at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:123) ~[?:1.8.0_91]
at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) ~[?:1.8.0_91]
at sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:174) ~[?:1.8.0_91]
at sun.nio.ch.IOUtil.write(IOUtil.java:58) ~[?:1.8.0_91]
at sun.nio.ch.FileChannelImpl.writeInternal(FileChannelImpl.java:778) ~[?:1.8.0_91]
at sun.nio.ch.FileChannelImpl.write(FileChannelImpl.java:764) ~[?:1.8.0_91]
at com.complexible.stardog.dht.impl.PagedDiskHashTable$1.write(PagedDiskHashTable.java:168) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.OffHeapCompactPage.write(OffHeapCompactPage.java:257) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.writePage(PagedDiskHashTable.java:969) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.inactivatePage(PagedDiskHashTable.java:979) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.HashPageCache.evictBatch(HashPageCache.java:197) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.createPage(PagedDiskHashTable.java:1118) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.addOverflowPage(PagedDiskHashTable.java:1196) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.SequentialHashPage.putMaybeFits(SequentialHashPage.java:84) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.SequentialHashPage.put(SequentialHashPage.java:75) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.SequentialHashPage.put(SequentialHashPage.java:56) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.putIntoPage(PagedDiskHashTable.java:940) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable.access$1400(PagedDiskHashTable.java:79) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable$Updater.put(PagedDiskHashTable.java:1325) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.impl.PagedDiskHashTable$Updater.put(PagedDiskHashTable.java:1314) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.dictionary.HashDictionary.getOrAddUncached(HashDictionary.java:561) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.dictionary.HashDictionary.getOrAddID(HashDictionary.java:526) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.dht.dictionary.HashDictionary.add(HashDictionary.java:497) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.index.dictionary.DelegatingMappingDictionary.add(DelegatingMappingDictionary.java:44) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.index.dictionary.InliningMappingDictionary.add(InliningMappingDictionary.java:73) ~[stardog-4.1.3.jar:?]
at com.complexible.stardog.index.IndexUpdaterHandler.handleStatements(IndexUpdaterHandler.java:70) ~[stardog-4.1.3.jar:?]
at com.complexible.common.rdf.rio.RDFStreamProcessor$Consumer.work(RDFStreamProcessor.java:978) ~[stardog-utils-rdf-4.1.3.jar:?]
... 6 more

Any tips on how I can fix this?
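Note that this second failure is `java.lang.OutOfMemoryError: Direct buffer memory`, not a heap exhaustion: in HotSpot JVMs the pool for direct (off-heap) NIO buffers is capped separately by the standard `-XX:MaxDirectMemorySize` option. One possible adjustment, sketched for a bash-style shell (the 8g values are illustrative, and it is an assumption that Stardog's launch scripts forward this flag to the JVM):

```shell
# Raise the cap on direct (off-heap) NIO buffer allocations alongside the heap.
# -XX:MaxDirectMemorySize is a standard HotSpot option; sizes are illustrative.
export STARDOG_JAVA_ARGS="-Xmx8g -XX:MaxDirectMemorySize=8g"

# Confirm before restarting the server.
echo "$STARDOG_JAVA_ARGS"
```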






--
Senior Data Scientist 
Geophy 

Nieuwe Plantage 54-55
2611XK  Delft
+31 (0)70 7640725 

1 Fore Street
EC2Y 9DT  London
+44 (0)20 37690760
