Hello gremlin-users,
We run a cluster of:
- 4 JanusGraph instances served with Gremlin Server
- BigTable as the storage backend
- Elasticsearch for indexing
So far we have around 50+ million vertices and plan to have around 2 billion edges.
Since JanusGraph provides a gremlin-server-metrics.csv directory with metrics (on evals, errors, and so on), we have managed to set up a monitoring system based on these Gremlin Server metrics.
Now we would also like metrics on the number of vertices and edges stored in BigTable, as well as metrics on the indexes. What would be a good way to do this, knowing that counting vertices and edges through g.V().count() will be too slow?
For example, we want the vertex and edge count metrics every 30 seconds, but a single g.V().count() already takes more than 5 minutes.
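For reference, the counts we are talking about are plain full-graph traversals in the Gremlin console, which scan every element and cannot complete anywhere near a 30-second interval at our scale:

```groovy
// Full-scan counts over the whole graph -- at ~50M vertices / ~2B edges
// each of these takes well over 5 minutes on our cluster
g.V().count()
g.E().count()
```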
Hoping to read some answers and get some leads,
Best regards,
Marie