# input
faunus.graph.input.format=com.thinkaurelius.faunus.formats.graphson.GraphSONInputFormat
faunus.input.location=../adam.graphson

# output
faunus.graph.output.format=com.thinkaurelius.faunus.formats.titan.cassandra.TitanCassandraOutputFormat
faunus.graph.output.titan.storage.backend=cassandra
faunus.graph.output.titan.storage.hostname=10.0.0.1
faunus.graph.output.titan.storage.port=9160
faunus.graph.output.titan.storage.keyspace=titan
faunus.graph.output.titan.storage.batch-loading=true
faunus.graph.output.titan.infer-schema=true
faunus.graph.output.blueprints.tx-commit=5000
faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
faunus.output.location=output
faunus.output.location.overwrite=true
gremlin> g = FaunusFactory.open('faunus.properties')
==>faunusgraph[graphsoninputformat->titancassandraoutputformat]
gremlin> g._...
13/04/27 16:35:13 WARN mapred.LocalJobRunner: job_local_0001
java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager
	at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:268)
	at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:226)
	at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:97)
	at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:406)
	at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:62)
	at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
	at com.thinkaurelius.faunus.formats.titan.GraphFactory.generateGraph(GraphFactory.java:20)
	at com.thinkaurelius.faunus.formats.BlueprintsGraphOutputMapReduce.generateGraph(BlueprintsGraphOutputMapReduce.java:61)
	at com.thinkaurelius.faunus.formats.titan.SchemaInferencerMapReduce$Reduce.setup(SchemaInferencerMapReduce.java:71)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:174)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:650)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
	at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:262)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
	at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:257)
	... 12 more
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend
	at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:394)
	at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.<init>(AstyanaxStoreManager.java:164)
	... 17 more
Caused by: com.netflix.astyanax.connectionpool.exceptions.NoAvailableHostsException: NoAvailableHostsException: [host=None(0.0.0.0):0, latency=0(0), attempts=0] No hosts to borrow from
	at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.<init>(RoundRobinExecuteWithFailover.java:31)
	at com.netflix.astyanax.connectionpool.impl.TokenAwareConnectionPoolImpl.newExecuteWithFailover(TokenAwareConnectionPoolImpl.java:74)
	at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
	at com.netflix.astyanax.thrift.ThriftClusterImpl.executeSchemaChangeOperation(ThriftClusterImpl.java:131)
	at com.netflix.astyanax.thrift.ThriftClusterImpl.addKeyspace(ThriftClusterImpl.java:252)
	at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:389)
	... 18 more
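The `NoAvailableHostsException: [host=None(0.0.0.0):0]` means the Astyanax connection pool never got a usable seed host, so it is worth double-checking exactly which host/port the job hands to the storage backend. A minimal sketch of the prefix handling (assuming, as the key names above suggest, that everything under `faunus.graph.output.titan.` is forwarded to Titan with the prefix stripped):

```python
# Sketch: extract the storage options that would be forwarded to the
# Titan/Cassandra backend from a faunus.properties-style snippet.
# The prefix-stripping mirrors the key naming in the config above.

SAMPLE = """\
faunus.graph.output.titan.storage.backend=cassandra
faunus.graph.output.titan.storage.hostname=10.0.0.1
faunus.graph.output.titan.storage.port=9160
"""

def titan_storage_options(text):
    """Return the keys under the faunus.graph.output.titan. prefix, stripped."""
    prefix = "faunus.graph.output.titan."
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        if key.startswith(prefix):
            options[key[len(prefix):]] = value
    return options

opts = titan_storage_options(SAMPLE)
print(opts["storage.hostname"], opts["storage.port"])  # 10.0.0.1 9160
```

Printing these resolved values before submitting the job makes it obvious whether the reducers are really being pointed at the host you expect.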
--
You received this message because you are subscribed to the Google Groups "Gremlin-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gremlin-user...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
Welcome to Ubuntu 12.04.2 LTS (GNU/Linux 3.5.0-27-generic x86_64)

 * Documentation: https://help.ubuntu.com/

System information as of Sat Apr 27 18:01:41 CEST 2013

  System load:  0.06   Users logged in:      1
  Usage of /:   83.9% of 5.67GB   IP address for lo:1:  10.0.0.1
  Memory usage: 54%    IP address for lo:2:  10.0.0.2
  Swap usage:   0%     IP address for lo:3:  10.0.0.3
  Processes:    129    IP address for eth0:  192.168.2.105

Graph this data and manage this system at https://landscape.canonical.com/

Last login: Sat Apr 27 15:19:18 2013 from localhost

daniel@titan:~$ ./cassandra/apache-cassandra-1.2.3/bin/nodetool -h 10.0.0.1 -p 8001 ring
Datacenter: datacenter1
==========
Replicas: 1
Address   Rack   Status  State   Load       Owns    Token
                                                    0
10.0.0.3  rack1  Up      Normal  101,09 KB  33,33%  113427455640312814857969558651062452224
10.0.0.2  rack1  Up      Normal  104,56 KB  33,33%  56713727820156407428984779325531226112
10.0.0.1  rack1  Up      Normal  152,47 KB  33,33%  0

daniel@titan:~$ ./cassandra/apache-cassandra-1.2.3/bin/nodetool -h 10.0.0.1 -p 8001 info
Token            : 0
ID               : 6ae6824d-438f-43af-87a1-e4a4df17e875
Gossip active    : false
Thrift active    : false
Load             : 152,47 KB
Generation No    : 0
Uptime (seconds) : 94682
Heap Memory (MB) : 30,52 / 455,13
Data Center      : datacenter1
Rack             : rack1
Exceptions       : 1
Key Cache        : size 313 (bytes), capacity 1048576 (bytes), 170 hits, 175 requests, NaN recent hit rate, 14400 save period in seconds
Row Cache        : size 0 (bytes), capacity 0 (bytes), 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds
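Note that the `nodetool info` output above reports `Thrift active : false` (and `Gossip active : false`). Titan's Cassandra backend connects over Thrift on port 9160, so a node with Thrift disabled would produce exactly the `NoAvailableHostsException` seen earlier. One thing to try (commands against a live cluster, so shown only as a sketch):

```
# Re-enable Thrift on the node, then confirm it took effect:
./cassandra/apache-cassandra-1.2.3/bin/nodetool -h 10.0.0.1 -p 8001 enablethrift
./cassandra/apache-cassandra-1.2.3/bin/nodetool -h 10.0.0.1 -p 8001 info
```

If `info` then shows `Thrift active : true`, retry the Faunus job; otherwise check `cassandra.yaml` (`start_rpc`, `rpc_address`) on each node.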
You received this message because you are subscribed to the Google Groups "Aurelius" group.
To unsubscribe from this group and stop receiving emails from it, send an email to aureliusgraph...@googlegroups.com.
# input graph parameters
faunus.graph.input.format=org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat
faunus.input.location=friendster/

# output data (graph or statistic) parameters
faunus.graph.output.format=com.thinkaurelius.faunus.formats.titan.cassandra.TitanCassandraOutputFormat
faunus.graph.output.titan.storage.backend=cassandra
faunus.graph.output.titan.storage.hostname=localhost
faunus.graph.output.titan.storage.port=9160
faunus.graph.output.titan.storage.keyspace=titan
faunus.graph.output.titan.storage.batch-loading=true
faunus.graph.output.titan.ids.block-size=100000
faunus.graph.output.titan.storage.idauthority-wait-time=1000
# faunus.graph.output.titan.storage.connection-timeout=60000
# faunus.graph.output.titan.storage.cassandra.thrift.frame_size_mb=49
# faunus.graph.output.titan.storage.cassandra.thrift.max_message_size_mb=50
faunus.graph.output.titan.infer-schema=false
faunus.graph.output.blueprints.tx-commit=10000
mapred.map.tasks=12
mapred.reduce.tasks=12
mapred.map.child.java.opts=-Xmx2G
mapred.reduce.child.java.opts=-Xmx2G
mapred.job.reuse.jvm.num.tasks=-1
mapred.task.timeout=5400000
faunus.sideeffect.output.format=org.apache.hadoop.mapreduce.lib.output.TextOutputFormat
faunus.output.location=output
faunus.output.location.overwrite=true
java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager
	at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:268)
	at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:226)
	at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:97)
	at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:406)
	at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:62)
	at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40)
	at com.thinkaurelius.faunus.formats.titan.GraphFactory.generateGraph(GraphFactory.java:20)
	at com.thinkaurelius.faunus.formats.BlueprintsGraphOutputMapReduce.generateGraph(BlueprintsGraphOutputMapReduce.java:61)
	at com.thinkaurelius.faunus.formats.BlueprintsGraphOutputMapReduce$Reduce.setup(BlueprintsGraphOutputMapReduce.java:159)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:174)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:650)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:418)
	at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:396)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1149)
	at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.reflect.InvocationTargetException
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:257)
	... 16 more
Caused by: com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend
	at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:394)
	at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.<init>(AstyanaxStoreManager.java:164)
	... 21 more
Caused by: com.netflix.astyanax.connectionpool.exceptions.NoAvailableHostsException: NoAvailableHostsException: [host=None(0.0.0.0):0, latency=0(0), attempts=0] No hosts to borrow from
	at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.<init>(RoundRobinExecuteWithFailover.java:31)
	at com.netflix.astyanax.connectionpool.impl.TokenAwareConnectionPoolImpl.newExecuteWithFailover(TokenAwareConnectionPoolImpl.java:74)
	at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
	at com.netflix.astyanax.thrift.ThriftClusterImpl.executeSchemaChangeOperation(ThriftClusterImpl.java:131)
	at com.netflix.astyanax.thrift.ThriftClusterImpl.addKeyspace(ThriftClusterImpl.java:252)
	at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:389)
	... 22 more
gremlin> g.makeType().name('type').unique(OUT).indexed(Vertex.class).dataType(String.class).makePropertyKey()
==>v[36028797018964170]
gremlin> g.makeType().name('domain').unique(BOTH).indexed(Vertex.class).dataType(String.class).makePropertyKey()
==>v[36028797018964178]
gremlin> requests = g.makeType().name('requests').unique(OUT).dataType(Long.class).makePropertyKey()
==>v[36028797018964186]
gremlin> g.makeType().name('tracks').primaryKey(requests).makeEdgeLabel()
==>v[36028797018964198]
gremlin> g.makeType().name('followed_by').primaryKey(requests).makeEdgeLabel()
==>v[36028797018964206]
gremlin> g.commit()
==>null
gremlin> supernode = g.V('type','supernode').next()
8607 [main] WARN  com.thinkaurelius.titan.graphdb.transaction.StandardTitanTx - Query requires iterating over all vertices [(v[36028797018963978]=supernode)]. For better performance, use indexes
gremlin> g.getType('type')
==>v[36028797018963978]
titan.graph.output.infer-schema=false
{"type":"supernode","_id":0}
{"name":"d.co.uk","type":"site","_id":1,"_inE":[{"_label":"tracks","_id":11,"_outV":0}]}
{"name":"t.com","type":"site","_id":2,"_inE":[{"_label":"tracks","_id":12,"_outV":0}]}
{"name":"dt.de","type":"site","_id":3,"_inE":[{"_label":"tracks","_id":13,"_outV":0}]}
{"name":"w.com","type":"site","_id":4,"_inE":[{"_label":"tracks","_id":14,"_outV":0}]}
{"name":"dw.net","type":"site","_id":5,"_inE":[{"_label":"tracks","_id":15,"_outV":0}]}
{"name":"tw.net","type":"site","_id":6,"_inE":[{"_label":"tracks","_id":16,"_outV":0}]}
{"name":"dtw.com","type":"site","_id":7,"_inE":[{"_label":"tracks","_id":17,"_outV":0}]}
{"name":"x.co.uk","type":"site","_id":8,"_inE":[{"_label":"tracks","_id":18,"_outV":0}]}
{"name":"dx.net","type":"site","_id":9,"_inE":[{"_label":"tracks","_id":19,"_outV":0}]}
{"name":"tx.de","type":"site","_id":10,"_inE":[{"_label":"tracks","_id":20,"_outV":0}]}
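Each line above is a standalone GraphSON vertex, with its incoming edges embedded under `_inE` and each edge naming its source vertex via `_outV`. Before a bulk load, it can be worth checking that every `_outV` actually refers to a vertex defined somewhere in the file. A small sketch (the helper name and toy data are mine, not from Faunus):

```python
import json

# Two lines in the same shape as the GraphSON above (a toy subset).
LINES = [
    '{"type":"supernode","_id":0}',
    '{"name":"d.co.uk","type":"site","_id":1,"_inE":[{"_label":"tracks","_id":11,"_outV":0}]}',
]

def dangling_out_vertices(lines):
    """Return _outV ids referenced by embedded in-edges but never defined."""
    vertices = [json.loads(line) for line in lines]
    ids = {v["_id"] for v in vertices}
    referenced = {e["_outV"] for v in vertices for e in v.get("_inE", [])}
    return referenced - ids

print(dangling_out_vertices(LINES))  # set() -> every referenced vertex exists
```

An empty result means the file is self-consistent; any ids it returns would show up as missing-vertex problems during the load.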
An edge v1-has->v2 is always also traversable as v2<-has-v1, so there's actually no need to define both directions.
faunus.graph.input.edge-copy.direction=IN
faunus.graph.input.edge-copy.direction=BOTH
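As I understand the `edge-copy` option, it materializes a copy of each edge in the named direction onto the opposite endpoint during input parsing, so that vertices which only store `_inE` (like the GraphSON above) also expose the corresponding out-adjacency. A toy illustration of that effect (plain Python, not Faunus code; the data and function are hypothetical):

```python
# Toy model: vertices that only record their IN edges, with the OUT
# adjacency reconstructed by "copying" each in-edge onto its source
# vertex -- the effect edge-copy.direction=IN is after.

in_edges = {          # target vertex id -> list of (label, source vertex id)
    1: [("tracks", 0)],
    2: [("tracks", 0)],
}

def copy_in_to_out(in_edges):
    """Build the out-adjacency implied by a pure in-edge representation."""
    out_edges = {}
    for target, edges in in_edges.items():
        for label, source in edges:
            out_edges.setdefault(source, []).append((label, target))
    return out_edges

print(copy_in_to_out(in_edges))  # {0: [('tracks', 1), ('tracks', 2)]}
```

With `BOTH`, the same copying would presumably be applied in both directions at once.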