Druid Ingestion Failed: FileNotFoundException **/partitions.json (No space left on device)


Vaisakh R

Jun 30, 2017, 6:11:10 AM
to Druid Development
Every Druid ingestion task fails with a FileNotFoundException on **/partitions.json (No space left on device). I am using the default configuration from conf-quickstart.

System has enough space

Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         4078424       64   4078360   1% /dev
tmpfs            4089312        0   4089312   0% /dev/shm
/dev/xvda1      20509288 12111144   8297896  60% /
/dev/xvdf       51475068 29173904  19663340  60% /opt/xdrive


Please find the log below:

2017-06-30T08:10:02,161 INFO [Thread-21] org.apache.hadoop.mapred.LocalJobRunner - reduce task executor complete.
2017-06-30T08:10:02,181 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2017-06-30T08:10:02,186 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2017-06-30T08:10:02,170 WARN [Thread-21] org.apache.hadoop.mapred.LocalJobRunner - job_local923591174_0001
java.lang.Exception: java.io.FileNotFoundException: /opt/xdrive/druid-0.9.2/var/druid/hadoop-tmp/2017-06-30T080914.274Z_9baa0ee460f04b3f9d52624b0f6f7975/20140821T000000.000Z_20140822T000000.000Z/partitions.json (No space left on device)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529) [hadoop-mapreduce-client-common-2.3.0.jar:?]
Caused by: java.io.FileNotFoundException: /opt/xdrive/druid-0.9.2/var/druid/hadoop-tmp/my_datasource_name/2017-06-30T080914.274Z_9baa0ee460f04b3f9d52624b0f6f7975/20140821T000000.000Z_20140822T000000.000Z/partitions.json (No space left on device)
        at java.io.FileOutputStream.open(Native Method) ~[?:1.7.0_121]
        at java.io.FileOutputStream.<init>(FileOutputStream.java:221) ~[?:1.7.0_121]
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:206) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.<init>(RawLocalFileSystem.java:202) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:265) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:252) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.<init>(ChecksumFileSystem.java:384) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:424) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:907) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:888) ~[hadoop-common-2.3.0.jar:?]
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:785) ~[hadoop-common-2.3.0.jar:?]
        at io.druid.indexer.Utils.makePathAndOutputStream(Utils.java:70) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
        at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityReducer.reduce(DetermineHashedPartitionsJob.java:328) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
        at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityReducer.reduce(DetermineHashedPartitionsJob.java:299) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
        at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:171) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at io.druid.indexer.DetermineHashedPartitionsJob$DetermineCardinalityReducer.run(DetermineHashedPartitionsJob.java:351) ~[druid-indexing-hadoop-0.9.2.jar:0.9.2]
        at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) ~[hadoop-mapreduce-client-core-2.3.0.jar:?]
        at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319) ~[hadoop-mapreduce-client-common-2.3.0.jar:?]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[?:1.7.0_121]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[?:1.7.0_121]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[?:1.7.0_121]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[?:1.7.0_121]
        at java.lang.Thread.run(Thread.java:745) ~[?:1.7.0_121]
2017-06-30T08:10:02,193 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce
2017-06-30T08:10:02,214 INFO [communication thread] org.apache.hadoop.mapred.LocalJobRunner - reduce > reduce


Gian Merlino

Jul 4, 2017, 10:14:50 PM
to druid-de...@googlegroups.com
"No space left on device" could refer to inodes as well as actual disk space. Try checking df -i.

Gian

--
You received this message because you are subscribed to the Google Groups "Druid Development" group.
To unsubscribe from this group and stop receiving emails from it, send an email to druid-development+unsubscribe@googlegroups.com.
To post to this group, send email to druid-development@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/druid-development/af19fe8e-2e84-40ab-bc4c-988d5284a109%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Vaisakh R

Jul 6, 2017, 7:17:58 AM
to Druid Development
I have enough space on the machine.

System has enough space

Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         4078424       64   4078360   1% /dev
tmpfs            4089312        0   4089312   0% /dev/shm
/dev/xvda1      20509288 12111144   8297896  60% /
/dev/xvdf       51475068 29173904  19663340  60% /opt/xdrive


Gian Merlino

Jul 6, 2017, 12:40:23 PM
to druid-de...@googlegroups.com
Did you check on the inodes too?
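If inodes do turn out to be exhausted, a sketch for locating the directory consuming them follows. This is not from the thread; the loop, the TARGET variable, and its default path are illustrative (on the poster's machine TARGET would be /opt/xdrive, the mount holding Druid's hadoop-tmp):

```shell
# Count filesystem entries under each top-level subdirectory of the
# suspect mount. Each entry costs one inode, so the largest count
# usually points at the inode hog (often a tmp dir full of tiny files).
TARGET=${TARGET:-/var/tmp}   # hypothetical default; use your mount point
for d in "$TARGET"/*/ ; do
  [ -d "$d" ] || continue    # skip if the glob matched nothing
  printf '%8d  %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```

`-xdev` keeps `find` from crossing into other mounts, so the counts reflect only the filesystem being investigated.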

Gian


Vaisakh R

Jul 6, 2017, 12:42:53 PM
to Druid Development
Correct, that was the issue. I fixed it, thanks.
