IMHO there is a need to get the following issues solved by a companion API to GLVs - which would then be available not only to gremlin-python but also to other languages:
In traditional database APIs like JDBC these needs are all handled by some means or another. For GLVs this is IMHO much harder than necessary. E.g. if you want to use data from two or three graph databases at the same time it gets very awkward. I have many use cases where I need ad-hoc in-memory graph databases and only the computational results should be stored in another graph database that is backed by some provider.

Should there be a ticket for each of these improvement wishes - are any of these things already addressed somewhere? Should the discussion first happen in a forum before this ticket is refined? I don't know what the proper procedure is to get to an improved version of TinkerPop ...
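To make the multi-database use case mentioned above concrete, here is a rough sketch of how this currently has to be done with gremlin-python: one DriverRemoteConnection per graph and a separate traversal source for each. The endpoints, labels and property names are made up for illustration, not taken from any real setup:

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# hypothetical endpoints: an ad-hoc local TinkerGraph server and a provider-backed server
local_conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
provider_conn = DriverRemoteConnection('ws://graph-provider.example.com:8182/gremlin', 'g')

g_local = traversal().withRemote(local_conn)
g_provider = traversal().withRemote(provider_conn)

# compute something on the ad-hoc graph ...
person_count = g_local.V().hasLabel('person').count().next()

# ... and store only the computed result in the provider-backed graph
g_provider.addV('result').property('personCount', person_count).iterate()

local_conn.close()
provider_conn.close()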
scripts/runOrientDB
3.0.23-tp3: Pulling from library/orientdb
Digest: sha256:97770fb0d21f83f68e1613f5b8e05669a373f9db6cc947c2bb73dee2e0a49312
Status: Image is up to date for orientdb:3.0.23-tp3
docker.io/library/orientdb:3.0.23-tp3
ca2ed42e690725b6595b4ea86702235c2b2b2185a1bf2d1e0dc6da4642623529
python3 test_004_io.py
Traceback (most recent call last):
File "test_004_io.py", line 20, in <module>
test_loadGraph()
File "test_004_io.py", line 13, in test_loadGraph
g.V().drop().iterate()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/process/traversal.py", line 65, in iterate
try: self.nextTraverser()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/process/traversal.py", line 70, in nextTraverser
self.traversal_strategies.apply_strategies(self)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/process/traversal.py", line 512, in apply_strategies
traversal_strategy.apply(traversal)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/driver/remote_connection.py", line 148, in apply
remote_traversal = self.remote_connection.submit(traversal.bytecode)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/driver/driver_remote_connection.py", line 54, in submit
results = result_set.all().result()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/driver/resultset.py", line 90, in cb
f.result()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/driver/connection.py", line 80, in _receive
status_code = self._protocol.data_received(data, self._results)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/driver/protocol.py", line 97, in data_received
return self.data_received(data, results_dict)
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/gremlin_python/driver/protocol.py", line 110, in data_received
raise GremlinServerError(message["status"])
gremlin_python.driver.protocol.GremlinServerError: 599: null:none([])
docker stop ca2ed42e690725b6595b4ea86702235c2b2b2185a1bf2d1e0dc6da4642623529
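For context, judging from the traceback above and the later passing run ("air-routes-small.xml has 47 vertices"), the failing test presumably does something along these lines; the connection URL, file path handling and print wording are assumptions, not the actual tutorial code:

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

def test_loadGraph():
    # connect to the locally running Gremlin Server (OrientDB in the failing run above)
    conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
    g = traversal().withRemote(conn)
    # clear the graph - this is the g.V().drop().iterate() call that fails with the 599 error
    g.V().drop().iterate()
    # load the GraphML sample on the server side and count the vertices
    g.io('air-routes-small.xml').read().iterate()
    count = g.V().count().next()
    print('air-routes-small.xml has %d vertices' % count)
    conn.close()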
# start the default server
./run -s
# set the configuration to the default server
ln -f TinkerGraph.yaml server.yaml
# run all pytests of the tutorial
./run -t
============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-5.1.2, py-1.8.0, pluggy-0.12.0
rootdir: /Users/wf/source/python/gremlin-python-tutorial
collecting 0 items g.V().count=6
g.E().count=6
[v[1], v[2], v[3], v[4], v[5], v[6]]
[v[1]]
['marko']
[e[7][1-knows->2], e[8][1-knows->4]]
['vadas', 'josh']
['vadas', 'josh']
6 results
{'host': '/127.0.0.1:51992'}
air-routes-small.xml has 47 vertices
collected 11 items
test_000.py .
test_001.py g.V().count=6
.g.E().count=6
.
test_002_tutorial.py [v[1], v[2], v[3], v[4], v[5], v[6]]
.[v[1]]
.['marko']
.[e[7][1-knows->2], e[8][1-knows->4]]
.['vadas', 'josh']
.['vadas', 'josh']
.
test_003_connection.py 6 results
{'host': '/127.0.0.1:52018'}
.
test_004_io.py air-routes-small.xml has 47 vertices
.
============================== 11 passed in 5.46s ==============================
This is also what the Travis configuration of the tutorial project checks.
git clone https://github.com/WolfgangFahl/gremlin-python-tutorial
cd gremlin-python-tutorial
./run -i
scripts/runNeo4j -rc
./run -n
ln -f Neo4j.yaml server.yaml
./run -t
But I can't see the modifications via http://localhost:7474/. The Cypher query
MATCH (n) RETURN n
doesn't show anything.
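One way to narrow this down (a sketch, assuming the Gremlin Server started by scripts/runNeo4j is still listening on the default ws://localhost:8182/gremlin) is to check through the Gremlin Server itself whether the vertices actually arrived. If this count is non-zero while the Neo4j browser shows nothing, the browser is most likely pointed at a different Neo4j instance or database than the one the Gremlin Server is configured with:

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# count the vertices as seen by the Gremlin Server itself
conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(conn)
print('vertices seen via Gremlin Server:', g.V().count().next())
conn.close()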
First, there's a reasonable attitude within the realm of highly connected data that the value of the data lies in being able to reason over those connections. As such, separating data into distinct data stores can add a lot of cost and overhead. For this line of thinking, it is far preferable to have all of the data in a single data store with all of the relevant connections materialized as edges. I have found that when building graphs we should be strongly biased toward keeping all of the data in a single data store.