Hi there,

I'm currently investigating the various graph DB technologies for building a Python/Flask web service on top of a graph persistence layer. The purpose is to completely remove the traditional SQL layer and only use a graph DB, with blob stores for larger objects.

I was happily using py2neo with Cypher on top of Neo4j running in server mode, until I realized that I couldn't create a transaction in this configuration, simply because Neo4j doesn't offer transactions over its REST API, and py2neo was the only client available for Python (there is no native Python client).

I've since looked at the bulbflow project, but couldn't find anything related to transactional behavior. The Blueprints API does specify TransactionalGraph as part of its specification, but I never found anything related on the client side.

Does anyone know what client library I could use? I'm ready to use a DB other than Neo4j if that's required (I chose it only because of the Heroku integration).

Thanks,
--
Benjamin
A scripts file can contain more than one function.
You add the scripts file to the Scripts object using the g.scripts.update(file_path) method. Then you can get the individual scripts by their function names:
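To make the idea concrete, here is a toy sketch of what "keying scripts by their function names" could look like. This is not Bulbs' actual Scripts implementation, just a minimal illustration: a Groovy source string is split at each top-level `def`, and each chunk is stored under its function name.

```python
import re

def load_scripts(groovy_source):
    """Toy illustration only -- not Bulbs' real parser.

    Split the file at each top-level 'def name(' declaration and
    key each chunk by its function name.
    """
    scripts = {}
    chunks = re.split(r'^(?=def\s+\w+\s*\()', groovy_source, flags=re.MULTILINE)
    for chunk in chunks:
        m = re.match(r'def\s+(\w+)\s*\(', chunk)
        if m:
            scripts[m.group(1)] = chunk.strip()
    return scripts

source = """
def create_vertex(name) { g.addVertex([name: name]) }

def get_vertices() { g.V }
"""
scripts = load_scripts(source)
print(sorted(scripts))  # ['create_vertex', 'get_vertices']
```

In real Bulbs usage you would call `g.scripts.update(file_path)` once and then fetch individual scripts by function name, but the lookup-by-name idea is the same.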
Hi James,

I'm attempting to evaluate and use Bulbs for a new project. I'm new to both Bulbs and Gremlin. I tried to make an example of my own (using Neo4j, hopefully later with Titan), based on the lightbulbs example in your last post, but I'm stuck: I'm getting a NullPointerException from my version of your modified Gremlin script. My two example files can be found here: https://gist.github.com/4212993

The NPE appears to be thrown by the "g.idx(index_name).get(index_key, index_value).toList()" line of the "create_or_update_vertex()" closure. I'm not sure what the best way to debug Gremlin scripts is. Even before I got the NPE, I had to modify my script to add a parameter to the outermost method/closure to access the graph variable, which was out of scope in my example script.

Any tips would be greatly appreciated.

Thanks,
Jumand
Hi James and Blake,
Over time I've found myself having to implement more and more Gremlin-related functionality outside of Bulbs, ... I'm really only using Bulbs to send and execute scripts.
Why? Because in the system I'm currently writing, I'm dealing with dynamically-generated gremlin queries all the time, and Bulbs doesn't help me construct them. In a nutshell, I need to be able to construct gremlin queries directly in python.
So for example, I can do this:
> import python_gremlin as pg
> q = pg.V('element_type','container').out('contains').filter("it.name.matches('foo.*')").out('obj').dedup()
> verts = q.execute(some_bulbs_client)
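A chainable builder along those lines is straightforward to sketch. The following is purely hypothetical (it is not the actual python_gremlin library, and the method names are just illustrative): each step appends a Gremlin fragment, and the finished query serializes to a script string you could then hand to a client for execution.

```python
class Query:
    """Hypothetical sketch of a chainable Gremlin query builder."""

    def __init__(self, parts):
        self._parts = parts

    def _chain(self, step):
        # Each step returns a new Query, so chains can be reused safely.
        return Query(self._parts + [step])

    def out(self, label):
        return self._chain("out('%s')" % label)

    def filter(self, closure):
        return self._chain("filter{%s}" % closure)

    def dedup(self):
        return self._chain("dedup()")

    def to_gremlin(self):
        # Serialize the accumulated steps into one Gremlin script string.
        return ".".join(self._parts)


def V(key, value):
    return Query(["g.V('%s','%s')" % (key, value)])


q = V('element_type', 'container').out('contains').dedup()
print(q.to_gremlin())
# g.V('element_type','container').out('contains').dedup()
```

A real implementation would also bind values as script parameters rather than interpolating them, but this shows the core mechanic: build the query as Python objects, render it to Gremlin at the last moment.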
What we do (and it's been amazing so far) is allow you to attach your Gremlin methods (written in Groovy) directly to Python objects. I've just updated the wiki with some more examples.
Another advantage is we don't require a globally available graph object to be passed around. I never found it to be convenient to have to register my vertices and edges with a central, globally available object. Thunderdome manages that for you automatically whenever you create a Vertex or Edge.
>>> import thunderdome
>>> from thunderdome.connection import setup
>>> setup(['localhost'], 'thunderdome')
>>> import my_models
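The "automatic registration" trick is typically done with a metaclass: defining a model class is enough to record it in a registry, with no global graph object to pass around. Here is a minimal sketch of the general technique (illustrative only; this is not Thunderdome's actual code, and the registry name is made up):

```python
# Registry of known vertex models, keyed by lowercased class name.
vertex_types = {}

class ElementMeta(type):
    """Metaclass that records every concrete subclass as it is defined."""
    def __init__(cls, name, bases, attrs):
        super().__init__(name, bases, attrs)
        if bases:  # skip the abstract base class itself
            vertex_types[name.lower()] = cls

class Vertex(metaclass=ElementMeta):
    pass

class Person(Vertex):   # registered automatically at class-definition time
    pass

print(sorted(vertex_types))  # ['person']
```

This is why `import my_models` alone is sufficient: merely executing the class definitions populates the registry.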
I see your point here: you need to generate the Gremlin query programmatically, so you created a library for that. But what's wrong with programmatically updating a query string? Like:

s = "g.V('element_type','container')"
if cond1:
    value1 = somefunction()
    s += ".out('%s')" % value1
On Friday, January 25, 2013 9:20:41 AM UTC-6, Andras Gefferth wrote:

I see your point here: you need to generate the Gremlin query programmatically, so you created a library for that. But what's wrong with programmatically updating a query string? Like:

s = "g.V('element_type','container')"
if cond1:
    value1 = somefunction()
    s += ".out('%s')" % value1
In fact, you should ALWAYS use query params rather than hard-coding values, because if you don't, the Gremlin script engine thinks it's a new script and has to recompile it each time; you'll blow through its script cache, which will kill your performance.
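The point can be seen from the script text alone. With interpolation, every distinct value yields a distinct script string, so the engine sees a new script each time; with parameters, the script text stays constant and only the bound values change. (The parameter names `etype` and `label` below are just illustrative choices.)

```python
labels = ['contains', 'owns']

# Hard-coded values: each label produces a different script text,
# so the script engine would compile each one separately.
hardcoded = ["g.V('element_type','container').out('%s')" % lb for lb in labels]

# Parameterized: one stable script text, values passed alongside it,
# so the engine can compile once and hit its cache thereafter.
script = "g.V('element_type', etype).out(label)"
param_sets = [{'etype': 'container', 'label': lb} for lb in labels]
```

Two labels give two distinct hard-coded scripts, but only a single parameterized script text.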
- James
--
We handle the registration of the Vertex or Edge within Thunderdome's Element metaclass. When you create the Vertex or Edge, we keep track of it internally. It's a lot more convenient, as the models become immediately available and don't require the global graph object or explicit registration (which we constantly forgot to do; it's not very intuitive).

We also recursively create objects, which is something Bulbs doesn't currently do. If you return a table object, Bulbs won't give you anything useful back; we needed to extend the built-in Bulbs methods for executing Gremlin, and it ended up being very awkward. With Thunderdome, you can pass back a map with a Vertex and a list of edges, and everything will retain the same overall structure but give you Python objects, which is very convenient for tables, trees, and other ad-hoc data.
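Recursive conversion of that kind can be sketched in a few lines. This is illustrative only (not Thunderdome's actual code): it assumes results arrive as JSON-like dicts where vertices are marked with a "_type" key, walks the structure, and replaces vertex dicts with Python objects while preserving the surrounding maps and lists.

```python
class Vertex:
    """Minimal stand-in for a typed vertex model (illustration only)."""
    def __init__(self, data):
        self.eid = data['_id']
        # Keep user properties; drop internal keys like '_id' and '_type'.
        self.properties = {k: v for k, v in data.items()
                           if not k.startswith('_')}

def convert(result):
    """Recursively turn vertex dicts into Vertex objects,
    preserving the overall shape of lists and maps."""
    if isinstance(result, list):
        return [convert(item) for item in result]
    if isinstance(result, dict):
        if result.get('_type') == 'vertex':
            return Vertex(result)
        return {k: convert(v) for k, v in result.items()}
    return result

# A nested result: a map holding a vertex and a list of vertices.
raw = {'root': {'_id': 1, '_type': 'vertex', 'name': 'a'},
       'children': [{'_id': 2, '_type': 'vertex', 'name': 'b'}]}
tree = convert(raw)
```

After conversion, `tree` has the same map/list structure as `raw`, but every vertex dict has become a `Vertex` instance.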
--
In fact, you should ALWAYS use query params rather than hard-coding values, because if you don't, the Gremlin script engine thinks it's a new script and has to recompile it each time; you'll blow through its script cache, which will kill your performance.
Yeah I thought about this, and it's something I plan to do, but I'm leaving it as a later optimisation. I still need to dynamically generate all queries though, so even if I construct parameterised equivalents, I don't know if I'm going to get enough cache hits for it to be a worthwhile exercise. But, time permitting, I'm going to try it anyway.