java.lang.IllegalStateException: state should be: open when using Spark with master set to "local[*]"

Jerry Lin

Jun 27, 2015, 8:40:14 PM
to mongod...@googlegroups.com
When I connect to my local MongoDB instance with more than one worker thread, I get the error below. Everything runs fine with a single worker thread (i.e. master = "local[1]"). I'm using Spark 1.3.1 with Scala and mongo-hadoop 1.3.2.
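For reference, here's a minimal sketch of the kind of job that hits this for me (the input URI below is a placeholder, not my actual database/collection):

import com.mongodb.hadoop.MongoInputFormat
import org.apache.hadoop.conf.Configuration
import org.apache.spark.{SparkConf, SparkContext}
import org.bson.BSONObject

object MongoSparkRepro {
  def main(args: Array[String]): Unit = {
    // Fails with local[*] (multiple worker threads); fine with local[1]
    val sc = new SparkContext(
      new SparkConf().setMaster("local[*]").setAppName("mongo-repro"))

    val mongoConfig = new Configuration()
    // Placeholder URI pointing at the local mongod
    mongoConfig.set("mongo.input.uri", "mongodb://127.0.0.1:27017/test.myCollection")

    // mongo-hadoop's MongoInputFormat yields (document _id, BSONObject) pairs
    val documents = sc.newAPIHadoopRDD(
      mongoConfig,
      classOf[MongoInputFormat],
      classOf[Object],
      classOf[BSONObject])

    // Any action that forces the partitions to actually be read triggers it
    println(documents.count())
    sc.stop()
  }
}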

15/06/26 12:18:02 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.IllegalStateException: state should be: open
    at com.mongodb.assertions.Assertions.isTrue(Assertions.java:70)
    at com.mongodb.connection.DefaultServer.getDescription(DefaultServer.java:97)
    at com.mongodb.binding.ClusterBinding$ClusterBindingConnectionSource.getServerDescription(ClusterBinding.java:81)
    at com.mongodb.operation.QueryBatchCursor.<init>(QueryBatchCursor.java:53)
    at com.mongodb.operation.FindOperation$1.call(FindOperation.java:409)
    at com.mongodb.operation.FindOperation$1.call(FindOperation.java:394)
    at com.mongodb.operation.OperationHelper.withConnectionSource(OperationHelper.java:195)
    at com.mongodb.operation.OperationHelper.withConnection(OperationHelper.java:168)
    at com.mongodb.operation.FindOperation.execute(FindOperation.java:394)
    at com.mongodb.operation.FindOperation.execute(FindOperation.java:57)
    at com.mongodb.Mongo.execute(Mongo.java:738)
    at com.mongodb.Mongo$2.execute(Mongo.java:725)
    at com.mongodb.DBCursor.initializeCursor(DBCursor.java:815)
    at com.mongodb.DBCursor.hasNext(DBCursor.java:149)
    at com.mongodb.hadoop.input.MongoRecordReader.nextKeyValue(MongoRecordReader.java:75)
    at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:143)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:413)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:369)
    at org.apache.spark.util.random.SamplingUtils$.reservoirSampleAndCount(SamplingUtils.scala:41)
    at org.apache.spark.RangePartitioner$$anonfun$8.apply(Partitioner.scala:259)
    at org.apache.spark.RangePartitioner$$anonfun$8.apply(Partitioner.scala:257)
    at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:647)
    at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:647)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
    at org.apache.spark.scheduler.Task.run(Task.scala:64)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:203)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
15/06/26 12:18:02 INFO connection: Opened connection [connectionId{localValue:17, serverValue:13403}] to 127.0.0.1:27017
15/06/26 12:18:02 INFO connection: Opened connection [connectionId{localValue:21, serverValue:13408}] to 127.0.0.1:27017
15/06/26 12:18:02 INFO connection: Opened connection [connectionId{localValue:20, serverValue:13407}] to 127.0.0.1:27017
15/06/26 12:18:02 INFO connection: Opened connection [connectionId{localValue:19, serverValue:13406}] to 127.0.0.1:27017
15/06/26 12:18:02 INFO connection: Opened connection [connectionId{localValue:18, serverValue:13405}] to 127.0.0.1:27017
15/06/26 12:18:02 INFO connection: Opened connection [connectionId{localValue:15, serverValue:13404}] to 127.0.0.1:27017
15/06/26 12:18:02 INFO TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, localhost, ANY, 1462 bytes)
15/06/26 12:18:02 INFO Executor: Running task 15.0 in stage 0.0 (TID 15)
15/06/26 12:18:02 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.IllegalStateException: state should be: open
    ... (same stack trace as above)


Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1204)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1193)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1192)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1192)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:693)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:693)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1393)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1354)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
15/06/26 12:18:02 ERROR Executor: Exception in task 14.0 in stage 0.0 (TID 14)
java.lang.IllegalStateException: state should be: open
    ... (same stack trace as above)




Luke Lovett

Jun 29, 2015, 1:06:25 PM
to mongod...@googlegroups.com
This looks like a bug that was fixed in mongo-hadoop 1.4 rc0. See https://jira.mongodb.org/browse/HADOOP-204.

I saw your comment on that ticket, but I wanted to post this here as well in case others are facing the same issue.
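If you're building with sbt, the upgrade is just a dependency bump along these lines (the exact version string is worth double-checking on Maven Central):

// build.sbt -- version string assumed to be 1.4.0-rc0; confirm on Maven Central
libraryDependencies += "org.mongodb.mongo-hadoop" % "mongo-hadoop-core" % "1.4.0-rc0"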