|Connecting to cassandra?||Todd Gruben||8/21/12 7:17 PM|
How would you go about connecting a spark cluster to a cassandra data store?
|Re: Connecting to cassandra?||Matei Zaharia||8/22/12 12:19 PM|
You should be able to read it if you create a Hadoop JobConf object configured for reading from Cassandra. I haven't used Cassandra yet, but you can look at the word count example that ships with it to see how they read it from MapReduce. Then create the same JobConf object in Scala and use the SparkContext.hadoopRDD method, which takes the JobConf object, key class, value class, etc., and reads the data.
I'll look into this when I have a moment to install Cassandra, but if you want to see this method used somewhere else, here's some code we had for reading from Hypertable:
val conf = new JobConf  // set the input format's connection/table properties here
val data = sc.hadoopRDD(conf, classOf[TextTableInputFormat], classOf[Text], classOf[Text])
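For Cassandra specifically, a hypothetical sketch along the same lines, assuming Cassandra's Hadoop support classes (ColumnFamilyInputFormat, ConfigHelper, and a Thrift SlicePredicate) and a Spark build that exposes newAPIHadoopRDD. The keyspace/column family names and addresses below are placeholders, and this only runs against a live cluster with the Cassandra jars on the classpath:

```scala
import java.nio.ByteBuffer
import java.util.SortedMap

import org.apache.cassandra.db.IColumn
import org.apache.cassandra.hadoop.{ColumnFamilyInputFormat, ConfigHelper}
import org.apache.cassandra.thrift.{SlicePredicate, SliceRange}
import org.apache.hadoop.mapreduce.Job

val job = new Job()

// Point the input format at the cluster (addresses/ports are placeholders).
ConfigHelper.setInputInitialAddress(job.getConfiguration, "127.0.0.1")
ConfigHelper.setInputRpcPort(job.getConfiguration, "9160")
ConfigHelper.setInputPartitioner(job.getConfiguration, "RandomPartitioner")
// "myKeyspace" / "myCF" stand in for your own schema.
ConfigHelper.setInputColumnFamily(job.getConfiguration, "myKeyspace", "myCF")

// The input format requires a slice predicate saying which columns to read;
// an empty range means "all columns" (up to the given count).
val predicate = new SlicePredicate().setSlice_range(
  new SliceRange(ByteBuffer.wrap(Array.empty[Byte]),
                 ByteBuffer.wrap(Array.empty[Byte]), false, 100))
ConfigHelper.setInputSlicePredicate(job.getConfiguration, predicate)

// Each record is (row key, columns of that row).
val rows = sc.newAPIHadoopRDD(job.getConfiguration,
  classOf[ColumnFamilyInputFormat],
  classOf[ByteBuffer],
  classOf[SortedMap[ByteBuffer, IColumn]])
```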
|Re: Connecting to cassandra?||Erich Nachbar||8/22/12 1:32 PM|
There is a Cassandra Input Format (http://wiki.apache.org/cassandra/HadoopSupport) that should work.
I personally haven't tested it yet, because we create file dumps using parallel fetches: first fetch a batch of row keys only, then distribute those keys to your worker processes/threads and have them retrieve the actual row data.
That works reasonably well if your data size isn't too big or if you do incremental dumps (e.g. by partitioning rows by time).
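A toy sketch of that two-step fetch, using an in-memory map in place of Cassandra and Futures in place of worker processes (all names here are made up for illustration):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

object ParallelDump {
  // Stand-in for the real store: row key -> row data.
  val store: Map[String, String] =
    (1 to 100).map(i => s"key$i" -> s"row-data-$i").toMap

  // Step 1: fetch only the row keys (cheap).
  def fetchKeys(): Seq[String] = store.keys.toSeq

  // Step 2: split the keys into batches and fetch each batch's
  // row data in a separate worker (here: a Future).
  def dump(batchSize: Int): Map[String, String] = {
    val batches = fetchKeys().grouped(batchSize).toSeq
    val fetched = batches.map { keys =>
      Future(keys.map(k => k -> store(k)).toMap)
    }
    Await.result(Future.sequence(fetched), Duration.Inf)
      .foldLeft(Map.empty[String, String])(_ ++ _)
  }
}
```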
If you do have a lot of data, the Input Format should help by taking data locality into consideration. Of course, you would also need to install Spark on the nodes running Cassandra.
We found that most tools for exploring the data in Cassandra are rather awkward, so having file dumps lets people play with and look at the data more easily, and provides a nice "snapshot in time" capability for the rest of our pipeline on Spark.
CTO | Quantifind | 650-430-5500
|Re: Connecting to cassandra?||Siping Xu||3/3/13 8:40 AM|
Hi Todd or others,
Does the integration work? I'm going to work on this, but I'm not sure how tricky it is. Looking forward to your comments.
|Re: Connecting to cassandra?||Prashant Sharma||3/3/13 9:12 PM|
Who is Todd? And what integration are you talking about?
|Re: Connecting to cassandra?||Siping Xu||3/3/13 11:34 PM|
Sorry, part of my message got lost. I'm looking at integrating Spark and Shark with Cassandra. For example, suppose I issue a SQL query in Shark like the following:
select sum(col1), count(col2), col3, col4 from myCF
group by col3, col4
Can Shark/Spark do something like the following:
1. read data locally on each Cassandra node
2. aggregate by key locally
3. reduce the results (i.e. aggregate by key again)
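The three steps above amount to a local-combine-then-global-reduce. A toy sketch of that pattern over plain Scala collections, with no Spark or Cassandra involved (the Row shape and all names are hypothetical, mirroring the query above):

```scala
object TwoPhaseAgg {
  // A row from the hypothetical column family: (col1, col2, col3, col4).
  case class Row(col1: Long, col2: String, col3: String, col4: String)

  // Group key is (col3, col4); partial aggregate is sum(col1) and count(col2).
  type Key = (String, String)
  case class Agg(sum: Long, count: Long) {
    def merge(o: Agg): Agg = Agg(sum + o.sum, count + o.count)
  }

  // Step 2: aggregate locally on one "node" (one partition of the data).
  def localAgg(rows: Seq[Row]): Map[Key, Agg] =
    rows.groupBy(r => (r.col3, r.col4)).map { case (k, rs) =>
      k -> Agg(rs.map(_.col1).sum, rs.size.toLong)
    }

  // Step 3: reduce the per-node partials into the final result.
  def reduceAgg(partials: Seq[Map[Key, Agg]]): Map[Key, Agg] =
    partials.flatten.groupBy(_._1).map { case (k, kvs) =>
      k -> kvs.map(_._2).reduce(_ merge _)
    }
}
```

Because sum and count are associative, merging per-node partials gives the same answer as aggregating all rows in one place, which is what makes the push-down safe.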
|Re: Connecting to cassandra?||Reynold Xin||3/4/13 10:21 PM|
We haven't tried it with Cassandra yet. My guess is the predicate and aggregation push-down parts won't work right now without an extra Cassandra-specific operator.
sent from mobile device. please excuse the brevity.
--You received this message because you are subscribed to the Google Groups "Spark Users" group.
|Re: Connecting to cassandra?||Siping Xu||3/5/13 9:53 PM|
Thanks for your timely reply. I'm trying to build a solution for a real-time monitoring dashboard. There will be 5K-10K events per second (0.5K~1K per event) to monitor, and historical data has to be kept for around 1-2 months. A typical use case is listing all events relating to me (perhaps 1K - 100K of them). Each event has dozens of columns, and I need to order by and filter on the fly.
From your perspective, what datastore would you recommend? Originally I wanted to use Cassandra. BTW, I cannot use licenses such as LGPL, GPL, AGPL, etc. in my organization; Apache, BSD, MIT, and EPL are OK.