Connecting Cassandra Cluster with Titan


Madabhattula Rajesh Kumar

Nov 16, 2015, 1:15:38 PM
to Aurelius
Hi,

I have a Cassandra cluster running on remote machines, and I'm not able to connect to it from Titan in my local JVM.

Could you please share how to connect to a remote Cassandra cluster using Titan?

Regards,
Rajesh

Jason Plurad

Nov 16, 2015, 1:24:59 PM
to Aurelius
In your Titan properties, set storage.hostname to the address of your remote Cassandra cluster. Read more about it in the Titan documentation.

http://s3.thinkaurelius.com/docs/titan/1.0.0/cassandra.html#_remote_server_mode
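
For example, a minimal titan.properties for remote server mode might look like this (the addresses below are placeholders along the lines of the docs, not your actual nodes):

storage.backend=cassandra
storage.hostname=77.77.77.77,77.77.77.78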

Madabhattula Rajesh Kumar

Nov 16, 2015, 9:07:50 PM
to Aurelius
Hi Jason Plurad,

Thank you for the response. In my client program I am using the remote Cassandra cluster's IP address. Please find the code and exception details below.

Could you please help me resolve this issue?

Code :-
import org.apache.commons.configuration.BaseConfiguration
import org.apache.tinkerpop.gremlin.structure.Vertex
import com.thinkaurelius.titan.core.{Multiplicity, TitanFactory}

// Connection settings for the remote Cassandra cluster
val conf = new BaseConfiguration
conf.setProperty("storage.backend", "cassandra")
conf.setProperty("storage.hostname", "REMOTE CASSANDRA CLUSTER IP ADDRESS")
conf.setProperty("storage.cassandra.keyspace", "TITAN_DEMO")
conf.setProperty("ids.renew-timeout", 120000)
conf.setProperty("storage.username", "cassandra")
conf.setProperty("storage.password", "cassandra")

val g = TitanFactory.open(conf)
val mgmt = g.openManagement()

// Property keys
val id = mgmt.makePropertyKey("id").dataType(classOf[String]).make()
val name = mgmt.makePropertyKey("name").dataType(classOf[String]).make()
val role = mgmt.makePropertyKey("role").dataType(classOf[String]).make()

// Unique composite index on the "id" property
mgmt.buildIndex("UID", classOf[Vertex]).addKey(id).unique().buildCompositeIndex()

// Vertex labels
mgmt.makeVertexLabel("ID").make()
mgmt.makeVertexLabel("UI").make()

// Edge labels
mgmt.makeEdgeLabel("friends").multiplicity(Multiplicity.MULTI).make()
mgmt.makeEdgeLabel("access").multiplicity(Multiplicity.MULTI).make()

println(" ### == " + mgmt.getGraphIndex("UID").getFieldKeys)
val a = mgmt.getGraphIndex("UID").getFieldKeys
mgmt.commit()

Exception :-

23:01:26,339  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT0.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.41S => too slow, threshold is: PT0.3S
23:01:28,180  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT1.2S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.41S => too slow, threshold is: PT0.3S
23:01:30,432  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT2.4S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.421S => too slow, threshold is: PT0.3S
23:01:34,025  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT4.8S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.409S => too slow, threshold is: PT0.3S
23:01:40,059  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.41S => too slow, threshold is: PT0.3S
23:01:50,599  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.309S => too slow, threshold is: PT0.3S
23:02:01,122  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.309S => too slow, threshold is: PT0.3S
23:02:11,645  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.307S => too slow, threshold is: PT0.3S
23:02:22,454  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.409S => too slow, threshold is: PT0.3S
23:02:33,206  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.41S => too slow, threshold is: PT0.3S
23:02:43,856  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.326S => too slow, threshold is: PT0.3S
23:02:54,378  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.307S => too slow, threshold is: PT0.3S
23:03:04,950  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.312S => too slow, threshold is: PT0.3S
23:03:15,703  WARN ConsistentKeyIDAuthority:343 - Temporary storage exception while acquiring id block - retrying in PT9.6S: com.thinkaurelius.titan.diskstorage.TemporaryBackendException: Wrote claim for id block [1, 51) in PT0.431S => too slow, threshold is: PT0.3S
Exception in thread "main" com.thinkaurelius.titan.core.TitanException: ID block allocation on partition(0)-namespace(4) timed out in 2.000 min
    at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:150)
    at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextBlock(StandardIDPool.java:172)
    at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.nextID(StandardIDPool.java:198)
    at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:295)
    at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:169)
    at com.thinkaurelius.titan.graphdb.database.idassigner.VertexIDAssigner.assignID(VertexIDAssigner.java:140)
    at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.assignID(StandardTitanGraph.java:437)
    at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.makeSchemaVertex(StandardTitanTx.java:834)
    at com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx.makePropertyKey(StandardTitanTx.java:856)
    at com.thinkaurelius.titan.graphdb.types.StandardPropertyKeyMaker.make(StandardPropertyKeyMaker.java:86)
    at sparkcassandra5.test7$.delayedEndpoint$sparkcassandra5$test7$1(test7.scala:50)
    at sparkcassandra5.test7$delayedInit$body.apply(test7.scala:34)
    at scala.Function0$class.apply$mcV$sp(Function0.scala:40)
    at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
    at scala.App$$anonfun$main$1.apply(App.scala:76)
    at scala.App$$anonfun$main$1.apply(App.scala:76)
    at scala.collection.immutable.List.foreach(List.scala:381)
    at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
    at scala.App$class.main(App.scala:76)
    at sparkcassandra5.test7$.main(test7.scala:34)
    at sparkcassandra5.test7.main(test7.scala)
Caused by: java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask.get(FutureTask.java:205)
    at com.thinkaurelius.titan.graphdb.database.idassigner.StandardIDPool.waitForIDBlockGetter(StandardIDPool.java:129)
    ... 20 more

Jason Plurad

Nov 17, 2015, 10:10:25 AM
to Aurelius
There are a couple of suggestions you could try in this post: https://groups.google.com/forum/#!msg/aureliusgraphs/P2KiXlWoDag/ip1KcrZxdssJ
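
The warnings in your log show each ID-block claim taking roughly 0.4 s against a 0.3 s threshold, which matches Titan's default ids.authority.wait-time. As a sketch based on that log output (the values below are illustrative assumptions, not settings quoted from the linked post), you could raise that threshold and the renew timeout to tolerate the latency to the remote cluster:

// Illustrative values only; tune for your cluster's actual latency
conf.setProperty("ids.authority.wait-time", 1000) // default is ~300 ms, per the warning threshold
conf.setProperty("ids.renew-timeout", 300000)     // allow more time for ID block allocation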