New Neo4j SPARQL Plugin


Niclas Hoyer

Nov 11, 2014, 8:29:01 AM
to ne...@googlegroups.com
Hi,

As part of my master's thesis I developed a new SPARQL plugin for Neo4j.

The current plugin is implemented as a server plugin and is somewhat limited, as the SPARQL protocol standards are not correctly implemented (regarding result formats and RDF input).

The new plugin is implemented as an unmanaged extension and fully supports the SPARQL 1.1 Protocol standard and the SPARQL 1.1 Graph Store HTTP Protocol standard. That means SPARQL 1.1 queries and update queries are supported, and RDF data can also be updated via HTTP.
Large datasets can be imported in chunks: the plugin commits smaller chunks to the database to reduce memory consumption.
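For readers unfamiliar with the protocol: the SPARQL 1.1 Protocol carries a query in a `query` parameter and negotiates the result format via the `Accept` header. A minimal sketch of building such a request with Python's standard library (the `/rdf/query` endpoint path is an illustrative assumption, not necessarily the extension's actual path):

```python
from urllib.parse import urlencode
from urllib.request import Request

# A SPARQL 1.1 SELECT query to send to the endpoint.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"

# Per the SPARQL 1.1 Protocol, a query sent via GET goes into the
# 'query' URL parameter; the result format is negotiated via Accept.
base_url = "http://localhost:7474/rdf/query"  # hypothetical endpoint path
url = base_url + "?" + urlencode({"query": query})

request = Request(url, headers={"Accept": "application/sparql-results+json"})
```

The same query could of course be POSTed instead; the protocol allows both.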

Moreover, the plugin includes a new approach to OWL 2 inference that uses query rewriting of SPARQL algebra expressions. For SPARQL 1.1 queries, the plugin rewrites the query in such a way that inferred solutions are returned as well.
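To illustrate the query-rewriting idea with a toy example (a sketch of the general principle only, not the plugin's actual algebra-level rewriter): a pattern asking for instances of a class can be expanded into a UNION over the class and all of its subclasses, so inferred solutions are returned without materializing them.

```python
# Toy ontology: maps a class to its direct rdfs:subClassOf children.
SUBCLASSES = {
    "ex:Animal": ["ex:Dog", "ex:Cat"],
    "ex:Dog": ["ex:Puppy"],
}

def all_subclasses(cls):
    """Transitively collect cls together with all of its subclasses."""
    result = {cls}
    for sub in SUBCLASSES.get(cls, []):
        result |= all_subclasses(sub)
    return result

def rewrite_type_pattern(var, cls):
    """Rewrite '?var rdf:type cls' into a UNION over all subclasses so
    that solutions inferred via rdfs:subClassOf are also returned."""
    patterns = ["{ %s rdf:type %s }" % (var, c)
                for c in sorted(all_subclasses(cls))]
    return " UNION ".join(patterns)
```

A real rewriter operates on the parsed SPARQL algebra rather than on strings, but the expansion step is the same idea.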

For more information, downloads, installation and usage instructions, head over to the GitHub page.

Regards,
Niclas Hoyer

Michael Hunger

Nov 12, 2014, 8:57:26 PM
to ne...@googlegroups.com
Niclas,

this is amazing, thanks so much for creating it and for making it available to the open source community.
Do you have any information about the model you use to store RDF efficiently, and any performance numbers? Especially comparing it with Cypher? That would be really interesting.

Do you have any examples for rdf / turtle import using the plugin? 

And if you had a blog post, we could help you promote the plugin and also link it from our website.

Where are you located?

Cheers, Michael


--
You received this message because you are subscribed to the Google Groups "Neo4j" group.
To unsubscribe from this group and stop receiving emails from it, send an email to neo4j+un...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Niclas Hoyer

Nov 13, 2014, 6:11:47 AM
to ne...@googlegroups.com

> Do you have any information about the model you use to store RDF efficiently and any performance numbers? Esp. comparing it with cypher? That would be really interesting.
The plugin is based on the Blueprints framework and its "Sail Ouplementation". An RDF triple is mapped to a directed edge, with all information stored in properties. For example, the RDF URI node <http://example.com> is mapped to ({kind: "uri", value: "http://example.com"}). The predicate URI is represented as the relationship type of the edge in Neo4j. The mapping does not use labels on nodes.
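That mapping can be sketched in a few lines (a simplified illustration of the GraphSail-style model described above, not the extension's actual code; the term-classification heuristic is an assumption for readability):

```python
def term_to_node(term):
    """Classify an RDF term and build its node properties (a crude
    heuristic for illustration; real code inspects the parsed term)."""
    if term.startswith("_:"):
        return {"kind": "bnode", "value": term}
    if term.startswith("http://") or term.startswith("https://"):
        return {"kind": "uri", "value": term}
    return {"kind": "literal", "value": term}

def map_triple(subject, predicate, obj):
    """Map one RDF triple to property-graph elements: subject and
    object become nodes, and the predicate URI becomes the
    relationship type of a directed edge between them."""
    return {
        "start": term_to_node(subject),
        "type": predicate,  # predicate URI = relationship type
        "end": term_to_node(obj),
    }

edge = map_triple("http://example.com", "http://xmlns.com/foaf/0.1/name", "Example")
```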

I tested performance against the Fuseki graph store with datasets of different sizes. Unfortunately, the RDF mapping has its drawbacks: Neo4j needs much more space than Fuseki. The largest dataset I tried was 17.9 GB in N-Triples format.
Fuseki uses about 9 GB of disk space after import, but Neo4j allocated 390 GB. That also results in roughly 27 times slower query execution on this large dataset. On the smallest dataset of just 2 MB, Neo4j is only 2.4 times slower than Fuseki. I used the "Berlin SPARQL Benchmark" for testing.

> Do you have any examples for rdf / turtle import using the plugin?
Yes, on the GitHub page there is an example of Turtle import using curl. A PUT request to the graph resource replaces all data in the graph:

$ curl -v -X PUT \
       localhost:7474/rdf/graph \
       -H "Content-Type:text/turtle" --data-binary @data.ttl
 
> And if you had a blog post, we could help you promote the plugin and also link it from our website.
I don't have a blog post yet, but I'll get back to you as soon as I have something.
 
> Where are you located?
Kiel, Germany.

Regards,
Niclas

Michael Hunger

Nov 13, 2014, 6:47:57 AM
to ne...@googlegroups.com
Interesting. 
I always wondered whether it is possible to transform RDF into a more compact property graph model on import while still allowing RDF queries and export on top of it.
This would be more efficient in both space and performance, but more involved at the import stage.

E.g. all RDF triples that identify "properties" would be transformed into real properties, and relevant type/ontology information would (also) be transformed into labels.
Only the "real" semantic relationships that add value to the domain would be kept as actual relationships, potentially augmented with properties too.

One could also imagine a "graph optimization" applied to the RDF graph that does the above but leaves the original RDF model in Neo4j, using the optimized version for more efficient querying.

Cool, then perhaps we can meet somewhere in Germany (Berlin, Frankfurt), or you can come over to our Malmö office for a meetup to show it off?

Looking forward to your blog post. I'd love to see a complete roundtrip covered, from import to queries and inference.

If you need anything from me, please ping me.

Cheers, Michael



Mike Bryant

Nov 13, 2014, 2:16:57 PM
to ne...@googlegroups.com
First of all, thanks very much for this. It looks great!

My personal interest is in being able to query standard Neo4j property graphs via SPARQL and export them as triples. To some extent you can currently do this using the Blueprints PropertyGraphSail, but where I lack the knowledge and insight (and time to research!) is in creating mapping schemas, or using some inference step, to produce triples such that "property X of node/edge with label Y = Z", adding domain-specific semantics.

I too will be keen to hear more details about this.
Cheers,
~Mike

Jacob Hansson

Nov 13, 2014, 4:46:49 PM
to ne...@googlegroups.com
Just popping in to say wow - that is a fantastic contribution, great work! 

Jim Salmons

Nov 14, 2014, 12:02:14 AM
to ne...@googlegroups.com
Hi Niclas, Michael, Mike, and Jacob,

I concur with the others in congratulating you and encouraging your work. In particular, given your in-country proximity and shared creative spirits, I would encourage Michael to make that proposed get-together with Niclas in Frankfurt happen with Axel and Christian, and to include the Structr team in your conversation. :D

I'm currently working on a cognitive computing initiative in the digital humanities domain via www.FactMiners.org, where we are leveraging a metamodel subgraph design pattern. The idea is to allow as much "pure graph" expressiveness and extensibility inside my "Fact Cloud private garden" as possible, and to push LOD (Linked Open Data) query response formatting/harmonization as much as possible into a dynamic mapping in the FactMiners platform "presentation/publication" layer, where RDF/SPARQL is such an important factor.

I'm planning to "stand on the shoulders of giants" in this regard by making as much use as possible of Karma. Niclas, are you familiar with it? I can't help but think that this project would have some interest to you considering the transformations and "border crossings" you are wrestling with. :-)

Karma is an amazing Open Source "multilingual" ontology-aware cross-model smart-mapper providing "Rosetta Stone"-like powers to users coping with the ever-shifting publication of Linked Open Data (LOD). Karma is the evolving brilliant work from the team of researcher-makers led by Craig Knoblock and Pedro Szekely of the Information Sciences Institute at the University of Southern California. Here's a short blog post at FactMiners on Karma with additional info and links to the project, etc.

Keep up the good work, Niclas.
-: Jim :-

Alex Averbuch

Nov 17, 2014, 4:24:10 AM
to ne...@googlegroups.com, oha...@uwaterloo.ca
Great work Niclas!

Regarding the mapping between Cypher/property graph and SPARQL/RDF, you may find this interesting:
a very recent paper by Olaf Hartig (https://cs.uwaterloo.ca/~ohartig/), CC'd.

Best,
Alex


Niclas Hoyer

Nov 20, 2014, 3:59:40 AM
to ne...@googlegroups.com
Hi,


> Looking forward to your blog post. I'd love to see a complete roundtrip covered, from import to queries and inference.

My blog post is now available [1]. I go through the basic installation steps and add a minimal example that also demonstrates inference.

 [1] http://comsys.informatik.uni-kiel.de/lang/en/res/sparql-and-owl-2-inference-for-neo4j/

Regards,
Niclas

Michael Hunger

Nov 20, 2014, 5:48:03 AM
to ne...@googlegroups.com, Niclas Hoyer
Hi Niclas,

great blog post, thanks a lot.

I have only one textual remark: could you please change all occurrences of "plugin" to "extension" when referring to yours?
Otherwise it gets confusing with the old plugin (perhaps also change "current plugin" to "old plugin").
On your GitHub page you correctly call your extension an "extension".

It would be great if the blog post also mentioned the kind of graph model you chose for representing RDF, as well as the performance implications and future work.

Then I'd be happy to reblog it on neo4j.com/blog.

Btw., I'd love to continue the discussion about the best model for representing RDF in a property graph. It would be cool if we could come up with a "pragmatic" RDF model that maps directly to a property graph, is performant, and can still be exposed as RDF and queried with SPARQL.

Cheers, Michael



Andrii Stesin

Nov 20, 2014, 9:19:01 AM
to ne...@googlegroups.com
Dear Niclas,

thank you and congratulations! This is a really fantastic job.

Would you also mind providing a comment on how your work relates to http://rdf4j.org/?

Thanks in advance!

WBR,
Andrii

Andrii Stesin

Nov 20, 2014, 10:13:46 AM
to ne...@googlegroups.com
On Thursday, November 13, 2014 1:11:47 PM UTC+2, Niclas Hoyer wrote:
> Fuseki uses ~ 9 GB disk space after import, but Neo4j allocated 390 GB. That also results in about 27 times slower query execution on this large dataset.
 
I suspect some data modelling issue here... the difference is far bigger than one would expect. A factor of 10 wouldn't surprise me too much, but 40+? Why and how?

> Using the smallest dataset with just 2 MB Neo4j is just 2.4 times slower than Fuseki.

This also makes me wonder: does Neo4j introduce such a big overhead compared to Fuseki? (A small example should fit completely in memory, shouldn't it?)

WBR,
Andrii

Michael Hunger

Nov 20, 2014, 11:33:47 AM
to ne...@googlegroups.com
That's what I meant about the misfit of modeling RDF data 1:1 in the property graph, instead of having a "sensible" mapping of only real entities to nodes, real semantic tuples to relationships, and everything else to properties.

It would be stellar to resolve that in a good way, with a sensible default mapping that can be augmented.
Wes and I discussed this when importing Freebase data into Neo4j.

Michael


Andrii Stesin

Nov 20, 2014, 12:08:19 PM
to ne...@googlegroups.com
+100500 to you

Bo Ferri

Nov 21, 2014, 3:34:52 AM
to ne...@googlegroups.com
Hi all,

@Niclas: thanks a lot for your efforts in providing an up-to-date SPARQL extension for Neo4j.

On Thursday, November 20, 2014 5:33:47 PM UTC+1, Michael Hunger wrote:
> That's what I meant with the misfit of modeling RDF data 1:1 into the property graph instead of having a "sensible" mapping of only real entities to nodes, real semantic tuples to relationships and everything else to properties.


I have also often thought about this approach of mapping RDF to the property graph model. However, I think it wouldn't really scale, because you would usually double the cost of your query (afaik): for a given attribute you need to look both at the node's properties and at its relationships, since you cannot always assume you will receive literals or resources for that attribute (which you would need to know beforehand in order to request only one of the two types). Furthermore, (afaik) node properties are not designed to store lists of values, i.e., further processing steps are required to store multiple literal values in one node property (i.e. for one attribute).
So I would tend to say that it's best to store everything as a (node)-[edge]->(node) relationship, which perfectly aligns with the basic structure of an RDF statement (a simple subject-predicate-object sentence). Then it doesn't matter whether you are querying for statements with literal values or with resource values. Moreover, you have the opportunity to deal (better) with metadata about statements (external context, ...), since you can also add properties to the edges (relationships): you can do, e.g., statement-based versioning or clustering/partitioning (e.g. à la Named Graphs), introduce qualified attributes for ordering, or simply add a unique identifier for the statement itself. So you can design a (graph) data model with more comprehensive capabilities than RDF ;) (because of the flexibility of the property graph model). Finally, you can create indices as necessary, e.g., for resources (nodes), statements (relationships) or literals, to speed up the queries.
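As an illustration of this statement-per-relationship idea, here is a minimal in-memory sketch (property names such as `graph`, `version` and `id` are my own assumptions for the example, not the actual model):

```python
class StatementGraph:
    """Each RDF statement becomes a (node)-[edge]->(node) relationship;
    edge properties carry statement-level metadata such as a named
    graph, a version and a unique statement identifier."""

    def __init__(self):
        self.nodes = {}  # term value -> node dict (one node per term)
        self.edges = []  # one edge per RDF statement

    def node(self, value):
        return self.nodes.setdefault(value, {"value": value})

    def add_statement(self, s, p, o, graph=None, version=1):
        self.edges.append({
            "start": self.node(s),
            "type": p,
            "end": self.node(o),
            "graph": graph,         # named-graph-style partitioning
            "version": version,     # statement-based versioning
            "id": len(self.edges),  # unique identifier for the statement
        })

g = StatementGraph()
g.add_statement("ex:a", "ex:knows", "ex:b", graph="ex:g1")
g.add_statement("ex:a", "ex:name", '"Alice"', graph="ex:g1")
```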

Last but not least, we implemented this approach (prototypically) as a Neo4j unmanaged extension, which can be found at


More details about the design of the graph data model can be found at


We are happy about every kind of feedback and look forward to an interesting discussion about RDF-based graph data models mapped onto the property graph data model ;)

Cheers,


Bo


Andrii Stesin

Nov 21, 2014, 8:42:27 AM
to ne...@googlegroups.com
How about the following quick and dirty approach?


Andrii Stesin

Nov 21, 2014, 10:12:04 AM
to ne...@googlegroups.com
Maybe this (comparatively) short 2-part article is also worth your attention:

Niclas Hoyer

Nov 21, 2014, 5:00:55 PM
to ne...@googlegroups.com
Hi,


> So I would tend to say that it's best to store everything as (node)-[edge]->(node) relationship, i.e., which perfectly aligns with the basic structure of an RDF statement (simple subject-predicate-object sentence). Then it doesn't matter, whether you are querying for statements with literal values or statements with resource values. Moreover, you have the opportunity to (better) deal with metadata (external context, ...) about statements, since you have the opportunity to also add properties at the edges (relationships).

As far as I understood, that is exactly how the "Sail Ouplementation" from the Blueprints project, which I used in the extension, maps the RDF data. The general problem is that once you decide which RDF properties become edges and which become node properties, it is very complicated to return correct SPARQL results. The drawback of the "(node)-[edge]->(node)" approach seems to be the resulting graph size.

Regards,
Niclas

Bo Ferri

Nov 21, 2014, 5:12:16 PM
to ne...@googlegroups.com
Hi Andrii,

On 11/21/2014 2:42 PM, Andrii Stesin wrote:
> How about the following quick and dirty approach?
>
> <https://lh6.googleusercontent.com/-76j1hFEFcbc/VG9BP-P_dKI/AAAAAAAABbM/HfvEwyWNLD0/s1600/RDF_graph_model_gamma0.png>
>

well, this looks like a reincarnation of RDF reification [1], which is the worst modelling option for subject-predicate-object statements in my mind ;)

Cheers,


Bo


[1] http://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#reification



Andrii Stesin

Nov 21, 2014, 8:10:09 PM
to ne...@googlegroups.com
Hi Bo,

yes, it definitely has something in common. Would you mind pointing me to some explanations of why reification is considered so harmful?

I see some interesting points there, namely

> The subject of a reification is intended to refer to a concrete realization of an RDF triple, such as a document in a surface syntax, rather than a triple considered as an abstract object. This supports use cases where properties such as dates of composition or provenance information are applied to the reified triple, which are meaningful only when thought of as referring to a particular instance or token of a triple.

it seems worth the attention at the very least. And also

> Since the relation between triples and reifications of triples in any RDF graph or graphs need not be one-to-one, asserting a property about some entity described by a reification need not entail that the same property holds of another such entity, even if it has the same components.

This seems interesting to me. I also like the dictionary approach, as in the RDF HDT format. What do you think about this combination as a basic data model?

WBR,
Andrii


Michael Hunger

Nov 21, 2014, 8:13:19 PM
to ne...@googlegroups.com
Hey Niclas, when looking at your post I wondered how you do your Cypher queries, and I think you could easily speed up performance by a factor of 100 or 1000.

Look at these nodes: if you added the "kind" as a label to each node, like :Uri, :Literal, :BNode, and then created an index on :Label(value) for each of those, you could even leave off the "kind" properties.

Alternatively, for a quick win, you can add a "generic" label like :Node and create an index on :Node(value).

Then (depending on the way you resolve things in your SPARQL implementation) you should be able to speed it up massively by using the label + value to find things, either via Cypher or the embedded API (graphdb.findByLabelAndProperty()).

Michael

A URI node:
(a {kind: "uri", value: "http://example.com"})
A literal node:
(a {kind: "literal", value: "Text", type: "http://www.w3.org/2001/XMLSchema#"})
A blank node:
(a {kind: "bnode", value: "genid--b1234"})
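Michael's suggestion could look roughly like this in Cypher (a sketch assuming Neo4j 2.x schema-index syntax; the labels and the migration itself are illustrative, not part of the extension):

```cypher
// Tag each node with a label derived from its "kind" property
MATCH (n {kind: "uri"})     SET n:Uri;
MATCH (n {kind: "literal"}) SET n:Literal;
MATCH (n {kind: "bnode"})   SET n:BNode;

// Create a schema index on "value" for each label
CREATE INDEX ON :Uri(value);
CREATE INDEX ON :Literal(value);
CREATE INDEX ON :BNode(value);

// Lookups can then hit the index via label + property
MATCH (n:Uri {value: "http://example.com"}) RETURN n;
```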



Michael Hunger

Nov 21, 2014, 9:05:17 PM
to ne...@googlegroups.com
Niclas, do you have the perf tests as part of the repository?

Michael Hunger

Nov 21, 2014, 9:26:39 PM
to ne...@googlegroups.com, Wes Freeman, Stefan Plantikow
Hmm,

taking a look at the code, it seems you use the TinkerPop GraphSail. As that implementation doesn't support labels or a better representation of RDF as a property graph, I wonder what it would take to rewrite GraphSail as part of this extension to properly support the Neo4j property graph model.

I think you should be able to get the size requirements down to a minimal level and improve performance a lot.

Bo Ferri

Nov 22, 2014, 10:35:24 AM
to ne...@googlegroups.com
Hi Andrii,

On 11/22/2014 2:10 AM, Andrii Stesin wrote:
> Hi Bo,
>
> yes it definitely has something in common. Would you please mind
> pointing me to some explanations, why reification is considered so harmful?

because it simply blows up the amount of data and makes it more complicated to access your data. Here are two pointers that include some explanations of the drawbacks of the "standardized" RDF reification approach:

- "Handbook of Semantic Web Technologies" by D. Fensel et al.; 2011; page 130 [1]

- "Pattern Representation Model for n-ary Relations in Ontology" by Vinu P.V. et al.; 2014; page 4 [2]

However, there are many other explanations out there in the wild, wild web ;) (unfortunately, answers.semanticweb.com is down at the moment, which is otherwise a good starting point for semantic web related questions).
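For a concrete sense of the blow-up: reifying a single triple with the standard rdf: reification vocabulary already yields four triples about the statement, before any metadata is even attached. A minimal sketch:

```python
def reify(subject, predicate, obj, statement_id):
    """Return the four standard reification triples that describe one
    RDF statement using the rdf: reification vocabulary."""
    return [
        (statement_id, "rdf:type", "rdf:Statement"),
        (statement_id, "rdf:subject", subject),
        (statement_id, "rdf:predicate", predicate),
        (statement_id, "rdf:object", obj),
    ]

triples = reify("ex:a", "ex:knows", "ex:b", "_:stmt1")
```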

>
> I see some interesting points there, namely
>
> The subject of a reification
> <http://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#dfn-reification>
> is intended to refer to a concrete realization of an RDF triple,
> such as a document in a surface syntax, rather than a triple
> considered as an abstract object. This *supports use cases where
> properties such as dates of composition or provenance information
> are applied to the reified triple*, which are meaningful *only when
> thought of as referring to a particular instance* or token of a triple.
>
>
> it seems worth the attention at the very least. And also
>
> Since the relation between triples and reification
> <http://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#dfn-reification>s
> of triples in any RDF graph or graphs need not be one-to-one,
> asserting a property about some entity described by a reification
> <http://www.w3.org/TR/2014/REC-rdf11-mt-20140225/#dfn-reification>
> need not entail that the same property holds of another such entity,
> even if it has the same components.
>
>
> This seems interesting to me. Also I like dictionary approach like in
> RDF HDT format <http://www.rdfhdt.org/technical-specification/#triples>.
> What do you think about this combination as a basic data model?

The dictionary approach as it is applied in RDF HDT is good for
compressing the amount of information and enabling quick access to
certain information pointers (so its an excellent exchange format in my
mind). However, its no graph structure, and hence, bad when working
(querying) with the information.
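The dictionary idea can be sketched simply: each distinct term is assigned a compact integer ID once, and triples are then stored as integer tuples (a toy illustration of the principle only, not the actual HDT encoding):

```python
class Dictionary:
    """Assign each distinct RDF term a compact integer ID, HDT-style,
    so triples can be stored as small integer tuples."""

    def __init__(self):
        self.term_to_id = {}
        self.id_to_term = []

    def encode(self, term):
        if term not in self.term_to_id:
            self.term_to_id[term] = len(self.id_to_term)
            self.id_to_term.append(term)
        return self.term_to_id[term]

    def encode_triple(self, s, p, o):
        return (self.encode(s), self.encode(p), self.encode(o))

    def decode_triple(self, triple):
        return tuple(self.id_to_term[i] for i in triple)

d = Dictionary()
t = d.encode_triple("ex:a", "ex:knows", "ex:b")
```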


Cheers,


Bo


[1] http://books.google.de/books?id=sdEFvSb9WNsC&lpg=PP1&hl=de&pg=PA130#v=onepage&q&f=false
[2] http://www.jatit.org/volumes/Vol60No2/6Vol60No2.pdf


Bo Ferri

Nov 22, 2014, 10:55:37 AM
to ne...@googlegroups.com
Hi Michael,
Hi Niclas,

On 11/22/2014 2:13 AM, Michael Hunger wrote:
> Hey Niclas, when looking at your post, I wondered how you do your cypher
> query and thought you could easily speed up performance by a factor of
> 100 or 1000
>
> look at these nodes -> if you added the "kind" as a label to each node,
> like :Uri, :Literal, :BNode and then created an index on :Label(value)
> for each of those.
> You could even leave off the "kind" properties then.
>

that's also the approach we are following in our unmanaged extension for
our (RDF-based) graph data model and RDF.

> Alternatively for a quick win you can add a "generic" label, like
> ":Node" and create an index on :Node(value)
>
> Then (depending on the way you resolve things in your sparql impl, you
> should be able to speed it up massively, by using the label + value to
> find things (either via cypher or embedded api
> (graphdb.findByLabelAndProperty()

We also experimented with the label-based approach (see [1]) for identifying the node types of the RDF-based graph data model (to make use of Neo4j's schema index approach) in an earlier stage of development. However, we didn't notice any difference in performance when querying our data. Furthermore, the metadata model (the RDF-based graph data model) gets mixed with the content itself. In my mind, it's a better design approach to "separate" both worlds a bit from each other, i.e., to define clear parts of the property graph model where one can find content data and where one can find metadata about the content (e.g., whether a node is a literal node or a resource node). So I prefer to utilise node labels to assign resource types (as I guess that's what they are designed for: to assign classes, types, categories, universals), instead of metadata model node types*.

Cheers,


Bo


*) albeit we also write rdf:type statements into the graph db (which can cause supernodes and doubles this part of the content data) to guarantee compatibility with RDF

[1]
https://github.com/dswarm/dswarm-graph-neo4j/blob/3121b05d7a2f7c66945eb3068f6fde993bea55ce/src/main/java/org/dswarm/graph/rdf/parse/nx/Neo4jRDFHandler.java
