Strange error - no space left on device


Tze-John Tang

May 2, 2014, 4:47:39 PM
to sta...@clarkparsia.com
I am seeing the following error, and am just wondering about the cause for it.

com.complexible.stardog.StardogException: An error occurred adding RDF to the index: No space left on device
    at com.complexible.stardog.protocols.client.SPECClientUtil.toStardogException(SPECClientUtil.java:65) ~[client-2.1.1.jar:na]
    at com.complexible.stardog.protocols.snarl.client.SNARLConnection.applyChanges(SNARLConnection.java:314) ~[client-2.1.1.jar:na]
    at com.complexible.stardog.api.impl.AbstractConnection.pushOutstanding(AbstractConnection.java:298) ~[api-2.1.1.jar:na]
    at com.complexible.stardog.api.impl.AbstractConnection.commit(AbstractConnection.java:207) ~[api-2.1.1.jar:na]
    at abbvie.gprd.emr.db.stardog.AdLoader.load(AdLoader.java:161) [classes/:na]
    at abbvie.gprd.emr.db.stardog.AdLoader.main(AdLoader.java:94) [classes/:na]
Caused by: com.complexible.common.protocols.client.ClientException: An error occurred adding RDF to the index: No space left on device
    at com.complexible.common.protocols.client.rpc.DefaultRPCClient.get(DefaultRPCClient.java:335) ~[client-2.1.1.jar:na]
    at com.complexible.common.protocols.client.rpc.DefaultRPCClient.execute(DefaultRPCClient.java:312) ~[client-2.1.1.jar:na]
    at com.complexible.stardog.protocols.snarl.client.SNARLClientImpl.add(SNARLClientImpl.java:281) ~[client-2.1.1.jar:na]
    at com.complexible.stardog.protocols.snarl.client.SNARLClientImpl.add(SNARLClientImpl.java:53) ~[client-2.1.1.jar:na]
    at com.complexible.stardog.protocols.snarl.client.SNARLConnection.change(SNARLConnection.java:321) ~[client-2.1.1.jar:na]
    at com.complexible.stardog.protocols.snarl.client.SNARLConnection.applyChanges(SNARLConnection.java:298) ~[client-2.1.1.jar:na]
    ... 4 common frames omitted
com.complexible.stardog.db.DatabaseException: An error occurred adding RDF to the index: No space left on device
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.7.0_09]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) ~[na:1.7.0_09]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.7.0_09]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:525) ~[na:1.7.0_09]
    at com.complexible.stardog.protocols.snarl.SNARLThrowableCodec.decode(SNARLThrowableCodec.java:123) ~[shared-2.1.1.jar:na]
    at com.complexible.barc.BigPacketCodec.toException(BigPacketCodec.java:267) ~[shared-2.1.1.jar:na]
    at com.complexible.barc.BigPacketCodec.decode(BigPacketCodec.java:239) ~[shared-2.1.1.jar:na]
    at com.complexible.barc.BigPacketDecoder.channelRead(BigPacketDecoder.java:75) ~[shared-2.1.1.jar:na]
    at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:338) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:324) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:338) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:324) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:153) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:338) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:324) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:126) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:485) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:452) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:346) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:101) ~[netty-all-4.0.14.Final.jar:4.0.14.Final]
    at java.lang.Thread.run(Thread.java:722) ~[na:1.7.0_09]

This is occurring on a Connection.commit() call. It came after a Connection.add().graph() call where the graph had about 2,000,000 statements. That call was followed by another Connection.add().statement(). The Connection.remove() line is not executed in this specific case.

// add the new updated records
conn.add().graph(newGraph, adContextUri);

// update the modification date
if (lastModificationDateStmt != null) {
    // remove old date
    conn.remove().statement(lastModificationDateStmt);
}

// add new date
conn.add().statement(adContextUri, IInfrastructureSchema.hasModificationDate, factory.createLiteral(cal.getTime()), adContextUri);

conn.commit();

I thought I would break the graph down into 10,000-statement chunks, so I split it into smaller graphs and committed after each chunk. I got the same error when I reached the final Connection.commit() line. If I comment out the code that adds the 2,000,000 statements, then it works fine. And if I then put the code back in but commit in smaller chunks, it all works fine. I have not experimented to see what number of statements triggers this, but because I had already tried committing every 10,000 statements, I assume it might be something under 10,000. Or maybe it is just a combination of factors. I am sure that I previously had no issues committing the 2,000,000 statements; the addition of the extra add statement seems to have caused issues. I will remove the final add and see if I get the same results.
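For reference, the chunked variant looks roughly like this (a sketch only, assuming the same SNARL Connection and openrdf Graph/URI types as the snippet above; splitIntoChunks is a hypothetical helper that partitions the graph into sub-graphs of at most chunkSize statements):

// Sketch of the chunked-commit approach described above.
void addInChunks(Connection conn, Graph newGraph, URI adContextUri, int chunkSize) throws StardogException {
    // one transaction per chunk instead of a single large commit
    for (Graph chunk : splitIntoChunks(newGraph, chunkSize)) {
        conn.begin();                           // open a transaction for this chunk
        conn.add().graph(chunk, adContextUri);  // same add().graph(...) call as above
        conn.commit();                          // flush this chunk before starting the next
    }
}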

Tze-John Tang

May 2, 2014, 7:06:14 PM
to sta...@clarkparsia.com
Did some more testing.

When I just commit the 2,000,000+ triples in a single graph, I get the same exception. The number of total triples in the graph is 2,640,112.
To summarize, I get the exception when I break the total triples into smaller chunks where the total number of triples being inserted is the same, and I also get the error when I add them all at once.

-tj



Kendall Clark

May 2, 2014, 7:13:36 PM
to sta...@clarkparsia.com
Sorry to ask the obvious, but how much free disk space do you have?




Tze-John Tang

May 2, 2014, 8:09:18 PM
to sta...@clarkparsia.com
On the server I have 40GB+; on the client, 245GB+. But that is a good question. I just checked, and it looks like our /tmp folder on the server is full.
I will retest when I clear it out.

Thanks,

-tj

Tze-John Tang

May 2, 2014, 9:06:45 PM
to sta...@clarkparsia.com
It looks like that fixed the issue. The /tmp dir was full. I noticed that files of the form  data728832225265490554.scr are created in the /tmp directory. When do these files get cleared out?

-tj

Mike Grove

May 2, 2014, 9:45:50 PM
to stardog
On Fri, May 2, 2014 at 9:06 PM, Tze-John Tang <tzejoh...@gmail.com> wrote:
It looks like that fixed the issue. The /tmp dir was full. I noticed that files of the form  data728832225265490554.scr are created in the /tmp directory. When do these files get cleared out?

That's a bug that was fixed in 2.1.2. From your original stack trace it looked like you were using 2.1.1, so if you upgrade, you should not see the problem anymore.

Cheers,

Mike
 

Tze-John Tang

May 3, 2014, 1:33:15 PM
to sta...@clarkparsia.com
I am actually running Stardog 2.1.3 on the server side.

-tj

Mike Grove

May 5, 2014, 2:54:46 PM
to stardog
Are you running the client & server on the same machine?  Are you using the embedded mode at all?

I didn't notice the file name you mentioned at first. Those files are used for temporary storage of data streams during transactions, which is different from what we fixed in 2.1.1. I have an idea of what's going on, but I'm curious what the root cause could be.

Cheers,

Mike

TJ Tang

May 5, 2014, 3:09:28 PM
to stardog
Mike,

I am running client and server on different machines. Client is running on a Windows 7 desktop. Server is on a Linux box. I am not using embedded mode.

-tj



Kendall Clark

May 6, 2014, 9:04:43 AM
to stardog
Have you seen the problem since you deleted the files in /tmp? We think this may have been the bug fixed in 2.1.2, but that fix doesn't delete files already left in /tmp, so 2.1.3 could still fail to write some streaming data to disk because /tmp was, in fact, full.

Please let us know if you've seen it since emptying /tmp.

Cheers,
Kendall

Alex Tucker

May 6, 2014, 9:14:11 AM
to sta...@clarkparsia.com
Hi,

One thing that springs to mind is that Linux distributions are moving toward effectively putting /tmp in RAM (tmpfs), which might not be what Stardog intends. See, e.g., https://fedoraproject.org/wiki/Features/tmp-on-tmpfs

Alex.
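A quick way to check this from the JVM's side is a generic snippet like the following (nothing Stardog-specific; it just reports where java.io.tmpdir points, which filesystem type backs it, for example tmpfs, and how much usable space is left):

import java.io.File;
import java.nio.file.FileStore;
import java.nio.file.Files;

public class TmpDirCheck {
    public static void main(String[] args) throws Exception {
        // resolve the directory the JVM will use for temp files
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        FileStore store = Files.getFileStore(tmp.toPath());
        System.out.println("java.io.tmpdir  = " + tmp.getAbsolutePath());
        System.out.println("filesystem type = " + store.type());  // e.g. "tmpfs" or "ext4"
        System.out.println("usable space MB = " + (store.getUsableSpace() / (1024 * 1024)));
    }
}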

Kendall Clark

May 6, 2014, 9:26:26 AM
to stardog
True...

Stardog simply writes to wherever the JVM says the temp dir is. For a particular Stardog server this can be pointed at a specific place or filesystem by passing the appropriate system property to Java when you start Stardog:

$> java -Djava.io.tmpdir=/path/to/some/thing

Cheers,
Kendall

Tze-John Tang

May 12, 2014, 7:46:51 AM
to sta...@clarkparsia.com
Kendall,

I still see the files in /tmp. They are not getting deleted. My server side is definitely 2.1.3. I am connecting over the SNARL protocol. On the client side, it is possible that not all of the libs are 2.1.3.

-tj

Mike Grove

May 13, 2014, 7:39:50 AM
to stardog
I've pinpointed what I think is causing the problem; we'll have a fix for that in the next release.

Cheers,

Mike