I'm trying to read a table that has a set and a map of user-defined types. I had to use a CassandraTableScanJavaRDD and call map() on it to do the row mapping myself, but I haven't been able to figure out how to convert the map or the set columns. I see that you can pass in a TypeConverter, but I'm not sure how to implement one, or whether that is even the right approach.
row.getMap("column_name", keyConverter, valueConverter); // both arguments are TypeConverter instances
If a TypeConverter is the right approach, does anyone have an example for a UDT in Java? Otherwise, what is the best way to convert the entire table to my POJO?
I'm using the DataStax Spark connector 1.5.1.
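For context, here is roughly what my manual mapping looks like so far. It's a simplified sketch: the table, UDT, POJO, and helper names (ks.media, format, MediaItem, Format, addFormat, putLabel) are placeholders for my real schema, sc is a JavaSparkContext, and it uses a Java 8 lambda (on Java 7 you'd use an anonymous Function). I'm also assuming the japi getSet/getMap calls return raw Object collections, and that the elements come back as the connector's UDTValue; if your version hands back the Scala com.datastax.spark.connector.UDTValue instead, the import and cast change accordingly.

import static com.datastax.spark.connector.japi.CassandraJavaUtil.javaFunctions;

import com.datastax.spark.connector.japi.UDTValue;
import org.apache.spark.api.java.JavaRDD;
import java.util.Map;

// Placeholder schema: ks.media (id text,
//                               formats set<frozen<format>>,
//                               labels map<text, frozen<format>>)
JavaRDD<MediaItem> items = javaFunctions(sc)
        .cassandraTable("ks", "media")            // JavaRDD<CassandraRow>
        .map(row -> {
            MediaItem item = new MediaItem();     // my own Serializable POJO
            item.setId(row.getString("id"));
            // Elements of set<frozen<format>> arrive as UDTValue objects
            for (Object o : row.getSet("formats")) {
                UDTValue udt = (UDTValue) o;
                item.addFormat(new Format(udt.getString("name")));
            }
            // Likewise for the values of map<text, frozen<format>>
            for (Map.Entry<Object, Object> e : row.getMap("labels").entrySet()) {
                UDTValue udt = (UDTValue) e.getValue();
                item.putLabel((String) e.getKey(), new Format(udt.getString("name")));
            }
            return item;
        });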
Thank you,
Joe
JavaRDD<Person> rdd = javaFunctions(sc).cassandraTable("ks", "people", mapRowTo(Person.class));
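For what it's worth, my understanding is that mapRowTo expects a JavaBean: a Serializable class with a no-arg constructor and getter/setter pairs whose names match the column names (javaFunctions and mapRowTo are static imports from com.datastax.spark.connector.japi.CassandraJavaUtil). A minimal sketch, with columns id and name assumed:

import java.io.Serializable;

// Minimal JavaBean for mapRowTo: no-arg constructor plus getters/setters
// whose names line up with the column names ("id" -> getId/setId, etc.).
public class Person implements Serializable {
    private Integer id;
    private String name;

    public Person() { }

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

Using the Integer wrapper rather than int avoids surprises when a column comes back null.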
Caused by: java.lang.IllegalArgumentException: Unsupported type: com.test.cassandra.model.Format
at com.datastax.spark.connector.types.TypeConverter$.forCollectionType(TypeConverter.scala:728) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.types.TypeConverter$.forType(TypeConverter.scala:740) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.types.TypeConverter$.forCollectionType(TypeConverter.scala:713) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.types.TypeConverter$.forType(TypeConverter.scala:740) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader$$anonfun$3.apply(ClassBasedRowReader.scala:45) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader$$anonfun$3.apply(ClassBasedRowReader.scala:45) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) ~[scala-library-2.10.4.jar:na]
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244) ~[scala-library-2.10.4.jar:na]
at scala.collection.immutable.HashMap$HashMap1.foreach(HashMap.scala:224) ~[scala-library-2.10.4.jar:na]
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403) ~[scala-library-2.10.4.jar:na]
at scala.collection.immutable.HashMap$HashTrieMap.foreach(HashMap.scala:403) ~[scala-library-2.10.4.jar:na]
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244) ~[scala-library-2.10.4.jar:na]
at scala.collection.AbstractTraversable.map(Traversable.scala:105) ~[scala-library-2.10.4.jar:na]
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReader.<init>(ClassBasedRowReader.scala:45) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:147) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.reader.ClassBasedRowReaderFactory.rowReader(ClassBasedRowReader.scala:145) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.rowReader(CassandraTableRowReaderProvider.scala:46) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader$lzycompute(CassandraTableScanRDD.scala:58) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.rowReader(CassandraTableScanRDD.scala:58) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraTableRowReaderProvider$class.verify(CassandraTableRowReaderProvider.scala:163) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.verify(CassandraTableScanRDD.scala:58) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.getPartitions(CassandraTableScanRDD.scala:117) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:222) ~[spark-core_2.10-1.2.1.jar:1.2.1]
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:220) ~[spark-core_2.10-1.2.1.jar:1.2.1]
at scala.Option.getOrElse(Option.scala:120) ~[scala-library-2.10.4.jar:na]
at org.apache.spark.rdd.RDD.partitions(RDD.scala:220) ~[spark-core_2.10-1.2.1.jar:1.2.1]
at org.apache.spark.rdd.RDD.take(RDD.scala:1077) ~[spark-core_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraRDD.take(CassandraRDD.scala:118) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at com.datastax.spark.connector.rdd.CassandraRDD.take(CassandraRDD.scala:119) ~[spark-cassandra-connector_2.10-1.2.1.jar:1.2.1]
at org.apache.spark.rdd.RDD.first(RDD.scala:1110) ~[spark-core_2.10-1.2.1.jar:1.2.1]
at org.apache.spark.api.java.JavaRDDLike$class.first(JavaRDDLike.scala:437) ~[spark-core_2.10-1.2.1.jar:1.2.1]
at org.apache.spark.api.java.JavaRDD.first(JavaRDD.scala:32) ~[spark-core_2.10-1.2.1.jar:1.2.1]
Are UDTs not supported at all in 1.2? Or do I need to write my own RowReader with a TypeConverter? I haven't had any luck finding info about how to write my own TypeConverter.
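In case it helps anyone else stuck here: TypeConverter is a Scala trait (you have to provide targetTypeTag and a convertPF partial function, and registration goes through the companion object's registerConverter), so implementing one from Java looks painful. The workaround I've been sketching instead is a plain helper that converts the raw UDTValue inside map(), after reading the table as CassandraRow rather than mapping straight to a class, which sidesteps the class-based row reader that blows up above. The Format class and the UDT field names ("name", "mime") are placeholders for whatever your UDT actually holds:

import com.datastax.spark.connector.japi.UDTValue;

// Plain-Java stand-in for a connector TypeConverter: call this inside map()
// on each UDTValue pulled out of the row. Field names are placeholders.
public static Format toFormat(UDTValue udt) {
    Format f = new Format();              // my own Serializable POJO
    f.setName(udt.getString("name"));     // assumes a text field "name"
    f.setMimeType(udt.getString("mime")); // assumes a text field "mime"
    return f;
}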