Hello all,
Curious about best approaches/practices for scalable degree-centrality search filters on large JanusGraph instances (millions to billions of vertices), i.e. something like:
g.V()
.has("someProperty",eq("someValue"))
.where(outE().count().is(gt(10)));
Suppose the has()-step narrows the result down to a large number of vertices (hundreds of thousands); performing that form of count on that many vertices will result in timeouts and inefficiencies (at least in my experience). My workaround has been to pre-calculate centrality in another job and write it to a vertex property that can subsequently be included in a mixed index. So we can do:
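For context, the pre-calculation step looks roughly like the following. This is a simplified sketch, not the actual job: the label and property names are illustrative, it assumes a Gremlin console bound to a JanusGraph traversal source `g`, and a real job would batch the commits.

```groovy
// Sketch of the pre-calculation job (label/property names illustrative).
// Assumes `g` is a traversal source for a transactional JanusGraph.
g.V().hasLabel('someLabel').
  project('vertex', 'degree').
    by(identity()).
    by(outE().count()).
  toList().each { row ->
    // Write the precomputed out-degree onto the vertex itself.
    row['vertex'].property('outDegree', row['degree'])
  }
g.tx().commit()
```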
g.V()
.has("someProperty",eq("someValue"))
.has("outDegree", gt(10))
This works, but it is yet another calculation we must maintain in our pipeline, and while it suffices, it seems like more of a workaround than a great solution. I was hoping there was a more optimal approach/strategy. Please let me know.
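For completeness, the schema/index setup behind that query looks something like the following sketch. The index name is illustrative, and "search" is the conventional mixed-index backend name from the JanusGraph docs; adjust to your configuration.

```groovy
// Sketch of the schema and mixed index for the precomputed property.
// Assumes `graph` is an open JanusGraph instance with a mixed indexing
// backend configured under the name 'search'.
mgmt = graph.openManagement()
outDegree = mgmt.makePropertyKey('outDegree').dataType(Long.class).make()
mgmt.buildIndex('vertexByOutDegree', Vertex.class).
     addKey(outDegree).
     buildMixedIndex('search')
mgmt.commit()
```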
Thank you,
Zach
--
You received this message because you are subscribed to the Google Groups "JanusGraph users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to janusgraph-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/janusgraph-users/385af431-d723-4be6-95cb-43b2954f2e58n%40googlegroups.com.
Thank you Boxuan,
I was using the term "job" pretty loosely. Your inference about doing these things within the ingest/deletion process makes sense.
I know there is a lot on the community's plate now, but if my above solution is truly optimal for the current state, I wonder if a JanusGraph feature addition may help tackle this problem more consistently. Something like an additional, third index type (in addition to the "graph" and "vertex-centric" indices), e.g. an "edge-connection" or "degree-centrality" index. The feature would require a mixed indexing backend and, minimally, a mechanism to choose vertex- and edge-label combinations for counting IN, OUT, and/or BOTH degree centrality.
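To make the proposal concrete, here is a purely hypothetical sketch of what managing such an index might look like. None of these management methods exist in JanusGraph today; the names are invented for illustration only.

```groovy
// HYPOTHETICAL ONLY -- no such API exists in JanusGraph.
// Rough idea: a managed index that maintains per-vertex degree counts
// for chosen vertex/edge label combinations and a direction.
mgmt = graph.openManagement()
mgmt.buildDegreeIndex('personOutKnows', Vertex.class).      // invented method
     indexOnly(mgmt.getVertexLabel('person')).
     overEdgeLabel(mgmt.getEdgeLabel('knows'), Direction.OUT). // invented method
     buildMixedIndex('search')
mgmt.commit()
```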
Not sure what the level of effort or the implementation details would be, but this is a very common business requirement for graph-based search. If JanusGraph had native, tested support for it, it would be even easier to champion.
😊
Best,
Zach
g.V()
.has("someProperty",eq("someValue"))
.where(outE().id().count().is(gt(10)));
If this does not work, it should be possible to configure/modify JanusGraph such that it does not fetch edge properties that are not needed for the count.
Hi Marc, Boxuan,
Thank you for the discussion. I have been experimenting with different queries, including your id() suggestion, Marc. In line with Boxuan's feedback, the where() step performs about the same (maybe slightly slower) when the .id() step is added.
My bigger concern for my use case is that this type of operation scales in a manner that appears roughly linear in the sample size, i.e.:
g.V().limit(10).where(inE().count().is(gt(6))).profile()     => ~30 ms
g.V().limit(100).where(inE().count().is(gt(6))).profile()    => ~147 ms
g.V().limit(1000).where(inE().count().is(gt(6))).profile()   => ~1284 ms
g.V().limit(10000).where(inE().count().is(gt(6))).profile()  => ~13779 ms
g.V().limit(100000).where(inE().count().is(gt(6))).profile() => > 120000 ms (timeout)
This behavior makes sense when I think about it, and also when I inspect the profile (example profile of the limit(10) traversal below).
I know the above traversal seems a bit funky, but I am trying to consistently analyze the effect of sample size on the edge count portion of the query.
Looking at the profile, it seems like JanusGraph needs to perform a sliceQuery operation on each vertex sequentially, which isn't well optimized for my use case. I know that if centrality properties were included in a mixed index, it could be configured for scalable performance. However, going back to the original post, I am not sure that is the best/only way. Are there other configurations that could be tuned to make this operation more scalable without resorting to an additional index property?
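One configuration worth trying for the sequential slice queries is batched backend querying. This is a hedged sketch: the option names below are taken from the JanusGraph configuration reference, but their availability and defaults vary by version, so verify against your 0.5.2 deployment.

```properties
# Sketch: enable batched backend queries so that the per-vertex edge
# slice queries can be issued together rather than one vertex at a time.
# Verify option names/defaults for your JanusGraph version.
query.batch=true
query.batch-property-prefetch=true
```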
In case it is relevant, I am using JanusGraph v0.5.2 with the Cassandra CQL backend (Cassandra v3.11).
Thank you,
Zach
Example Profile
gremlin> g.V().limit(10).where(inE().count().is(gt(6))).profile()
==>Traversal Metrics
Step Count Traversers Time (ms) % Dur
=============================================================================================================
JanusGraphStep(vertex,[]) 10 10 8.684 28.71
\_condition=()
\_orders=[]
\_limit=10
\_isFitted=false
\_isOrdered=true
\_query=[]
optimization 0.005
optimization 0.001
scan 0.000
\_query=[]
\_fullscan=true
\_condition=VERTEX
TraversalFilterStep([JanusGraphVertexStep(IN,ed... 21.564 71.29
JanusGraphVertexStep(IN,edge) 13 13 21.350
\_condition=(EDGE AND visibility:normal)
\_orders=[]
\_limit=7
\_isFitted=false
\_isOrdered=true
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_vertices=1
optimization 0.003
backend-query 3 4.434
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 1 1.291
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 2 1.311
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 1 2.483
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 2 1.310
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 2 1.313
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 2 1.192
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 4 1.287
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 3 1.231
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
optimization 0.001
backend-query 2 3.546
\_query=org.janusgraph.diskstorage.keycolumnvalue.SliceQuery@9c76d
\_limit=14
RangeGlobalStep(0,7) 13 13 0.037
CountGlobalStep 10 10 0.041
IsStep(gt(6)) 0.022
>TOTAL - - 30.249 -