Thanks for the tips.
I just happened to return to the project (or projects) I had in mind when I first posted this question. It is interesting that, two months after the original post, another reply arrived just recently.
So, I had started with a book written in 2007 called "Programming the Semantic Web." The APIs are vastly different now; many times I'd read steps from the text and wonder, "did I miss something?"
So, here are some of my questions now, having learned much more:
1) When working with Django and rdflib, have you found it to be good practice to save federated SPARQL query results in some kind of local database (meaning on the same server where my app runs)?
2) Alternatively, I could use a relational DB, or Sleepycat, or a Java-based triple store like Jena Fuseki. That seems like a good idea if one is relying on data from several federated triple stores: if my triple store is local and holds the combined results pulled from those other stores, querying it should be faster than running a federated query. In other words, instead of waiting for responses to SPARQL queries against DBpedia, Freebase, and a movie triple store, I could put it all together into one triple store on my server. Does that make sense? The problem with this approach is that one isn't getting the most recent data.
3) Graham, you said you use RDF with Django - that can mean different things: using data from SPARQL queries against triple stores over which one has no control, or using an RDF triple store as the only database serving content, e.g. no PostgreSQL or MySQL db at all - the one and only database used for the web app is a triple store.
Lastly: on Ubuntu I now have PostgreSQL and SQLite3, and I have an RDF/XML file on GitHub. I want to import the triples from the RDF/XML file into both SQLite3 and PostgreSQL. What do I need to do to make that happen?
In other words, first I need to pip3 install the Python drivers for both dbs (actually, let's add Sleepycat too) -> then inside my app I import the store, or stores, I want to use -> then I write the triples from my RDF/XML file into the database.
I've had a hard time finding a snippet of code that covers this scenario. Can someone show a snippet of code one might execute inside IDLE to make this work?
I ran into problems with various syntax errors. I know I get a graph by using rdflib.parse("http://path/to/some/file.rdf") - no, actually it is

g = Graph(store='Sleepycat')

followed by something like

result = g.parse("http://path/to/some/file.rdf")

I found that in the docs, but result is never used afterwards. Isn't result holding the triples from the remote file.rdf? No, that intuition is wrong - it doesn't work. How do I take all the triples from my remote RDF file and save them in a database on my server?
Thanks,
Bruce