javax.naming.NameNotFoundException: DNS name not found

Tanvir Shahid

Jul 10, 2015, 12:04:12 AM
to predicti...@googlegroups.com
Hi,

I have installed a fully distributed HBASE and my /etc/hosts file is as follows:

192.168.181.155 namenode1 master1
192.168.181.156 region1
192.168.181.157 region2
192.168.181.158 mapreduce
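
In case it matters: /etc/hosts only covers forward lookups, and the warnings below suggest the PTR (reverse) records for these addresses are missing from DNS. A quick way to check reverse resolution (assuming dig is installed):

# Reverse lookup for one of the region servers; an NXDOMAIN answer
# here would match the "DNS name not found" warnings in the log.
dig -x 192.168.181.157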

I have installed PredictionIO on master1. Now, when I run the "pio train" command, I get the following errors:

[WARN] [TableInputFormatBase] Cannot resolve the host name for region2/192.168.181.157 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '157.181.168.192.in-addr.arpa'
[WARN] [TableInputFormatBase] Cannot resolve the host name for region2/192.168.181.157 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '157.181.168.192.in-addr.arpa'
[WARN] [TableInputFormatBase] Cannot resolve the host name for region2/192.168.181.157 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '157.181.168.192.in-addr.arpa'
[WARN] [TableInputFormatBase] Cannot resolve the host name for region2/192.168.181.157 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '157.181.168.192.in-addr.arpa'
[INFO] [Engine$] MySimilarProduct.TrainingData does not support data sanity check. Skipping check.
[INFO] [Engine$] MySimilarProduct.PreparedData does not support data sanity check. Skipping check.
[WARN] [TableInputFormatBase] Cannot resolve the host name for region2/192.168.181.157 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '157.181.168.192.in-addr.arpa'
[WARN] [TableInputFormatBase] Cannot resolve the host name for region2/192.168.181.157 because of javax.naming.NameNotFoundException: DNS name not found [response code 3]; remaining name '157.181.168.192.in-addr.arpa'
[Stage 10:>                                                         (0 + 2) / 2][ERROR] [Executor] Exception in task 0.0 in stage 10.0 (TID 13)
[ERROR] [SparkUncaughtExceptionHandler] Uncaught exception in thread Thread[Executor task launch worker-1,5,main]
[WARN] [TaskSetManager] Lost task 0.0 in stage 10.0 (TID 13, localhost): java.lang.OutOfMemoryError: GC overhead limit exceeded
    at scala.collection.IndexedSeqLike$class.iterator(IndexedSeqLike.scala:91)
    at scala.collection.mutable.WrappedArray.iterator(WrappedArray.scala:34)
    at scala.collection.Iterator$.apply(Iterator.scala:63)
    at org.apache.spark.util.collection.ExternalAppendOnlyMap.insert(ExternalAppendOnlyMap.scala:105)
    at org.apache.spark.Aggregator.combineCombinersByKey(Aggregator.scala:93)
    at org.apache.spark.shuffle.hash.HashShuffleReader.read(HashShuffleReader.scala:44)
    at org.apache.spark.rdd.ShuffledRDD.compute(ShuffledRDD.scala:92)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

[ERROR] [TaskSetManager] Task 0 in stage 10.0 failed 1 times; aborting job
[WARN] [TaskSetManager] Lost task 1.0 in stage 10.0 (TID 14, localhost): TaskKilled (killed intentionally)


Any idea why this happens?

Donald Szeto

Jul 10, 2015, 12:15:47 PM
to predicti...@googlegroups.com, rock...@gmail.com
Hi,

It looks like the real issue is that your training ran out of memory. What was the full command you used when you trained?

If you are running locally, try "pio train -- --driver-memory 8g".
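
If you are submitting the job to a Spark cluster instead, the executors may also need more memory. A sketch, assuming a standalone Spark master on master1 (the URL and memory sizes are examples, not tested values):

# Everything after the first "--" is passed through to spark-submit.
pio train -- --master spark://master1:7077 --driver-memory 8g --executor-memory 4g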

Regards,
Donald