I am stuck adding options when using the dataframe.save() method. It takes some parameters as options, such as the keyspace and Cassandra hosts, but there is no option for setting a TTL.
I did some investigation in the online documentation, and the only way I found to specify a custom TTL is through the RDD API's saveToCassandra(), which takes a WriteConf as a parameter, like
rdd.saveToCassandra("test", "tab", writeConf = WriteConf(ttl = TTLOption.constant(100)))
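For reference, a slightly fuller sketch of that RDD-level write with the imports it needs (the keyspace, table, and TTL value are just placeholders):

import com.datastax.spark.connector._
import com.datastax.spark.connector.writer.{TTLOption, WriteConf}

// Writes the RDD with a constant TTL of 100 seconds on every row
rdd.saveToCassandra("test", "tab",
  writeConf = WriteConf(ttl = TTLOption.constant(100)))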
I want to do it using dataframe.save() and am trying to find out whether there is a way to pass the TTL as an option. I looked at the source code with a debugger, and the sequence of operations seems to be:
DataFrameWriter.save()
  --> ResolvedDataSource(
        df.sqlContext,
        source,
        partitioningColumns.map(_.toArray).getOrElse(Array.empty[String]),
        mode,
        extraOptions.toMap,
        df)
  --> ResolvedDataSource.apply()
  --> org.apache.spark.sql.cassandra.createRelation()
  --> CassandraSourceRelation.apply(), which creates val writeConf = WriteConf.fromSparkConf(conf).
That call does not read a TTL value from the write options, so there does not seem to be an easy way of doing it.
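The only hook I can see from that chain is the SparkConf itself: since CassandraSourceRelation builds its WriteConf with WriteConf.fromSparkConf(conf), a TTL set on the SparkConf might get picked up (assuming the connector version in use reads spark.cassandra.output.ttl there), but that would apply the same TTL to every write in the application rather than per DataFrame. A rough, untested sketch (host and TTL values are placeholders):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Global workaround: if WriteConf.fromSparkConf(conf) reads this property,
// every Cassandra write in this context would get a 100 second TTL.
val conf = new SparkConf()
  .setAppName("ttl-example")
  .set("spark.cassandra.connection.host", "127.0.0.1")
  .set("spark.cassandra.output.ttl", "100")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)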
Does anyone know a workaround or a hook in the framework?
Thanks
Fraaz
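Depending on the connector version, the TTL can be passed as a regular write option next to the keyspace and table, e.g. (keyspace, table, and TTL value below are placeholders):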
import java.util.HashMap;
import org.apache.spark.sql.SaveMode;

// Pass the TTL (in seconds) as a per-table write option next to keyspace and table
finaldata.write().format("org.apache.spark.sql.cassandra")
    .options(new HashMap<String, String>() {{
        put("keyspace", "keyspacename");
        put("table", "tablename");
        put("spark.cassandra.output.ttl", "ttl time in seconds");
    }})
    .mode(SaveMode.Append).save();
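For completeness, the equivalent write in Scala would look roughly like this (again, keyspace, table, and TTL value are placeholders, and whether the per-table spark.cassandra.output.ttl option is honoured depends on the connector version):

import org.apache.spark.sql.SaveMode

// Same write expressed with the Scala DataFrameWriter API
finaldata.write
  .format("org.apache.spark.sql.cassandra")
  .options(Map(
    "keyspace" -> "keyspacename",
    "table" -> "tablename",
    "spark.cassandra.output.ttl" -> "100"))  // TTL in seconds
  .mode(SaveMode.Append)
  .save()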