FWIW, I got it working, and it did not require a new interpreter. I noticed the bind/unbind interpreters to a note feature that was just recently committed to master. Reading through that, it was fairly clear how to do it. It could have been done without it, but reading through it made clear that the only thing I really needed to do was leverage JAVA_OPTS in the conf/zeppelin-env.sh file.
Here is what I did. There may be a better way, but this works (open to suggestions on a better approach).
1. Modified the $ZEPPELIN_HOME/conf/zeppelin-env.sh and set the JAVA_OPTS to include the additional jars required for the DataStax spark-cassandra-connector:
export ZEPPELIN_JAVA_OPTS="-Dspark.jars=./spark-datastax-connector-lib/cassandra-clientutil-2.1.2.jar,./spark-datastax-connector-lib/cassandra-driver-core-2.1.3.jar,./spark-datastax-connector-lib/cassandra-thrift-2.1.2.jar,./spark-datastax-connector-lib/joda-convert-1.7.jar,./spark-datastax-connector-lib/joda-time-2.4.jar,./spark-datastax-connector-lib/spark-cassandra-connector_2.10-1.1.1.jar"
2. Opted to create a new interpreter and bind it to the notebook with the properties I needed for connecting to Cassandra. One could also add these to the JAVA_OPTS, i.e.
-Dspark.cassandra.connection.host=localhost. The new interpreter's only real difference for now was the addition of the above parameter:
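For reference, the JAVA_OPTS alternative would look something like the following (a sketch only; the host value is an example, and in practice it would be merged with the -Dspark.jars entry from step 1):

```shell
# Sketch: point the connector at Cassandra via JAVA_OPTS instead of
# interpreter properties; combine with the -Dspark.jars entry from step 1.
export ZEPPELIN_JAVA_OPTS="-Dspark.cassandra.connection.host=localhost"
```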

3. Created a very simple notebook for the following column family in Cassandra:
CREATE TABLE sql_demo (
    key int,
    value decimal,
    PRIMARY KEY ((key))
);
Notebook paragraph to query the above table / column family:
import org.apache.spark.sql.cassandra.CassandraSQLContext

// Cassandra-aware SQL context from the connector; "test" below is a
// placeholder keyspace name, substitute your own
val cc = new CassandraSQLContext(sc)
val rdd = cc.sql("select * from test.sql_demo")
rdd.collect

case class Demo(key: Int, value: Double)
val rddRows = rdd.map(r => Demo(r.getInt(0), r.getDouble(1)))
// implicit conversion so the case-class RDD can be registered as a table
import cc.createSchemaRDD
rddRows.registerTempTable("demo")
5. Finally, query the temp table "demo":
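With the temp table registered, this is just a Zeppelin %sql paragraph along these lines (a sketch, assuming a plain select over the registered table):

```
%sql
select * from demo
```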
Works great. I still need to do some additional testing, but so far so good. Hopefully this may help someone else out in the future.
- Todd