Failed to open native connection to Cassandra

Sumant Deshpande

unread,
Feb 5, 2015, 4:58:58 PM2/5/15
to spark-conn...@lists.datastax.com
Hi,

I am trying to connect to Cassandra from spark-jobserver by creating a Spark context.
I have included the spark-cassandra-connector libraries in spark-jobserver, but when I try to connect to Cassandra I get the error below:

"errorClass": "java.io.IOException",
"cause": "All host(s) tried for query failed (tried: /XX.XXX.XX.XXX:9042 (com.datastax.driver.core.TransportException: [/XX.XXX.XX.XXX:9042] Cannot connect))",

"causingClass": "com.datastax.driver.core.exceptions.NoHostAvailableException",
"message": "Failed to open native connection to Cassandra at {XX.XXX.XX.XXX}:9042"

=========================================================

sbt build file where I have included the spark-cassandra-connector libraries:

libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0"
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector-java" % "1.1.0"
libraryDependencies += "org.apache.cassandra" % "cassandra-thrift" % "2.1.2"
libraryDependencies += "org.apache.cassandra" % "cassandra-clientutil" % "2.1.2"
libraryDependencies += "com.datastax.cassandra" % "cassandra-driver-core" % "2.1.3"

=========================================================

Spark code:

import com.datastax.spark.connector._ // brings sc.cassandraTable into scope

val conf = new SparkConf(true)
  .set("spark.cassandra.connection.host", "127.0.0.1")
val sc = new SparkContext("spark://127.0.0.1:7077", "test", conf)

val rdd = sc.cassandraTable("test", "kv")
println(rdd.count)

=========================================================

cassandra.yaml file properties:

ssl_storage_port: 7001
listen_address: localhost
start_native_transport: true
native_transport_port: 9042
start_rpc: true
rpc_address: localhost
rpc_port: 9160
rpc_keepalive: true

=========================================================

Could someone guide me on how to solve this issue or point out possible causes?

Thanks in advance.
-Sumant

Lishu Liu

unread,
Feb 5, 2015, 5:10:10 PM2/5/15
to spark-conn...@lists.datastax.com
Can you cqlsh to your Cassandra host? If not, the problem might be the connection between your Spark cluster and your Cassandra cluster.
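
For example, straight from a Spark node (XX.XXX.XX.XXX is the placeholder host from your error message; Cassandra 2.1's cqlsh speaks the native protocol on port 9042 by default):

cqlsh XX.XXX.XX.XXX 9042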

Sumant

unread,
Feb 5, 2015, 5:34:19 PM2/5/15
to spark-conn...@lists.datastax.com
Yes, I can cqlsh to the Cassandra host and fetch data using queries.

How can I check whether a firewall issue might be preventing connections on port 9042?
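
This is the kind of check I had in mind, run from the Spark node (a rough sketch in plain Scala using java.net; the host is the same placeholder as in my error message):

import java.net.{InetSocketAddress, Socket}

// Try to open a TCP connection to the native transport port with a 5 s timeout.
val socket = new Socket()
try {
  socket.connect(new InetSocketAddress("XX.XXX.XX.XXX", 9042), 5000)
  println("Port 9042 is reachable")
} catch {
  case e: java.io.IOException => println(s"Cannot connect: ${e.getMessage}")
} finally {
  socket.close()
}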

Piotr Kołaczkowski

unread,
Feb 6, 2015, 3:30:10 AM2/6/15
to spark-conn...@lists.datastax.com

Your rpc_address is localhost and this is definitely not what you want. Put a real address there.
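
For example, in cassandra.yaml (assuming XX.XXX.XX.XXX is the node's real address), then restart the node; the native transport binds to rpc_address:

rpc_address: XX.XXX.XX.XXX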

Sumant

unread,
Feb 6, 2015, 2:09:43 PM2/6/15
to spark-conn...@lists.datastax.com
Thank you very much. The problem was solved after I changed the rpc_address to the real address.

Su

unread,
Mar 11, 2015, 2:09:44 PM3/11/15
to spark-conn...@lists.datastax.com
Hello Everyone,

I just started using Cassandra and I am having the same issue as Sumant, but changing rpc_address to the public IP did not work. Here are my details; they seem to be exactly the same as above. Thank you for the help!

Error: Exception in thread "main" java.io.IOException: Failed to open native connection to Cassandra at {public.ip.address}:9042

Code:

// Static imports needed for javaFunctions(...) and mapToRow(...)
import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;
import static com.datastax.spark.connector.japi.CassandraJavaUtil.mapToRow;

SparkConf sparkConf = new SparkConf()
        .setAppName("SimpleStreamingApp")
        .set("spark.cassandra.connection.host", "public.ip.address");

JavaSparkContext sc = new JavaSparkContext(sparkConf);

CassandraConnector connector = CassandraConnector.apply(sc.getConf());

List<SD> sd = Arrays.asList(new SD("123", "3", "300"));
JavaRDD<SD> sdRDD = sc.parallelize(sd);

javaFunctions(sdRDD).writerBuilder("demo", "tableName", mapToRow(SD.class)).saveToCassandra();



Maven dependencies:

cassandra-clientutil-2.1.3.jar
cassandra-driver-core-2.1.3.jar
cassandra-thrift-2.1.3.jar
libthrift-0.9.2.jar
spark-cassandra-connector_2.10-1.2.0-alpha1.jar
spark-cassandra-connector-java_2.10-1.2.0-alpha1.jar
spark-core_2.10-1.2.0-cdh5.3.0.jar
hadoop-client-2.5.0-mr1-cdh5.3.0.jar

cassandra.yaml file properties:

ssl_storage_port: 7001
listen_address: localhost
start_native_transport: true
native_transport_port: 9042
rpc_address: pu.blic.ip.address (also tried private)
rpc_port: 9160

Other details:
Spark 1.2.0
CDH 5.3.0
Cassandra 2.1.3 (can access cqlsh)

Thank you for the help!

Su

unread,
Mar 11, 2015, 5:36:45 PM3/11/15
to spark-conn...@lists.datastax.com
Whoops, I realized I might not have given important info. This is the end of the error:

Caused by: java.lang.ClassNotFoundException: com.google.common.util.concurrent.Striped

I tried importing com.google.common.util.concurrent.Striped, but the compiler couldn't find the symbol. I also tried importing com.google.common.util.concurrent.*, but got the same Caused by error.

Thank you!

Su She

unread,
Mar 11, 2015, 6:27:49 PM3/11/15
to spark-conn...@lists.datastax.com
I have resolved the issues from this thread.

1) I added another dependency (guava-18.0.jar), which solved my issue; Striped lives in newer Guava releases, and an older Guava elsewhere on my classpath was evidently missing it. This is my final list of dependencies, for anyone else troubleshooting:

cassandra-clientutil-2.1.3.jar
cassandra-driver-core-2.1.3.jar
cassandra-thrift-2.1.3.jar
guava-18.0.jar
hadoop-client-2.5.0-mr1-cdh5.3.0.jar
hadoop-yarn-server-web-proxy-2.5.0.jar
kafka_2.10-0.8.2-beta.jar
kafka-clients-0.8.2-beta.jar
libthrift-0.9.2.jar
metrics-core-2.2.0.jar
scala-compiler-2.10.4.jar
scala-library-2.10.4.jar
spark-cassandra-connector_2.10-1.2.0-alpha1.jar
spark-cassandra-connector-java_2.10-1.2.0-alpha1.jar
spark-core_2.10-1.2.0-cdh5.3.0.jar
spark-streaming-kafka_2.10-1.2.0.jar
zkclient-0.3.jar

2) I could not connect to the single-node cluster I had downloaded because of a host exception, but I was able to connect to the Cassandra AMI. Both rpc_addresses were set to the public IP and both had the same security settings for port 9042.

3) I got a table-not-found exception, which turned out to be because Cassandra lowercases unquoted identifiers: if you create a table called sD, it is stored as sd. See the cqlsh example below.
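
For example, in cqlsh (standard CQL identifier rules; double quotes preserve case, the keyspace and column names here are just for illustration):

CREATE TABLE demo.sD (k text PRIMARY KEY);   -- stored as demo.sd
CREATE TABLE demo."sD" (k text PRIMARY KEY); -- stored as demo."sD"; quotes required on every reference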

Hope this helps, sorry for the spam.
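
For anyone pulling dependencies through Maven rather than collecting jars by hand, the Guava fix from point 1 would look something like this (the standard Maven Central coordinates for Guava 18.0):

<dependency>
<groupId>com.google.guava</groupId>
<artifactId>guava</artifactId>
<version>18.0</version>
</dependency>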

Sindhuja Balaji

unread,
Jun 4, 2016, 10:51:25 PM6/4/16
to DataStax Spark Connector for Apache Cassandra, suhsh...@gmail.com
I am using the Maven dependencies below for Cassandra 3.0.5 and Scala 2.10 and getting the error below. Any help would be highly appreciated. I thought the versions I used were compatible.


<!-- Spark dependencies -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.10</artifactId>
<version>1.4.1</version>
</dependency>

<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.10</artifactId>
<version>1.4.1</version>
</dependency>

<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.4.1</version>
</dependency>

<!-- Connectors -->

<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.5.0-M3</version>
</dependency>


<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector-java_2.10</artifactId>
<version>1.5.0-M2</version>
</dependency>

Exception in thread "main" java.util.NoSuchElementException: key not found: 'int'
at scala.collection.MapLike$class.default(MapLike.scala:228)
at scala.collection.AbstractMap.default(Map.scala:58)
at scala.collection.MapLike$class.apply(MapLike.scala:141)
at scala.collection.AbstractMap.apply(Map.scala:58)
at com.datastax.spark.connector.types.ColumnType$.fromDriverType(ColumnType.scala:81)
at com.datastax.spark.connector.cql.ColumnDef$.apply(Schema.scala:117)
at com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchPartitionKey$1.apply(Schema.scala:199)
at com.datastax.spark.connector.cql.Schema$$anonfun$com$datastax$spark$connector$cql$Schema$$fetchPartitionKey$1.apply(Schema.scala:198)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:722)
at scala.collection.immutable.HashSet$HashSet1.foreach(HashSet.scala:153)
at scala.collection.immutable.HashSet$HashTrieSet.foreach(HashSet.scala:306)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:721)
at com.datastax.spark.connector.cql.Schema$.com$datastax$spark$connector$cql$Schema$$fetchKeyspaces$1(Schema.scala:246)

Russell Spitzer

unread,
Jun 4, 2016, 11:13:13 PM6/4/16
to DataStax Spark Connector for Apache Cassandra, suhsh...@gmail.com
Spark 1.4 is not compatible with connector 1.5.

Sindhuja Balaji

unread,
Jun 4, 2016, 11:17:47 PM6/4/16
to spark-conn...@lists.datastax.com
Thanks Russell. Do you know the exact versions of the connector and Spark to be used? I tried using Spark 1.5.1 with connector 1.5 and am getting errors as well.

Thanks,
Sindhuja

Russell Spitzer

unread,
Jun 4, 2016, 11:30:42 PM6/4/16
to spark-conn...@lists.datastax.com
Well, the Spark version depends on which Spark you are running; then you pick the release of the connector that matches that Spark version. The MX releases are milestones; use the 1.X.Y releases instead.

I.e., if you are running Spark 1.5.1, use SCC 1.5.X where X is the highest release we made.
For 1.4.1, use SCC 1.4.X where X is the highest release we made.
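
For example, against Spark 1.5.1 the Maven entry would look something like the following (1.5.0 shown as a stand-in; check Maven Central for the highest 1.5.x patch release):

<dependency>
<groupId>com.datastax.spark</groupId>
<artifactId>spark-cassandra-connector_2.10</artifactId>
<version>1.5.0</version>
</dependency>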

Sindhuja Balaji

unread,
Jun 6, 2016, 11:59:13 AM6/6/16
to spark-conn...@lists.datastax.com
Thanks Russell, it works. One more question:

Is the DataStax Java connector open source? Do we need a DataStax support agreement to use the connectors?

Russell Spitzer

unread,
Jun 6, 2016, 12:09:44 PM6/6/16
to spark-conn...@lists.datastax.com
No need, it's Apache licensed, which means you don't need anyone's permission :)