> cute:
> http://dataviewer.zitgist.com/?uri=http%3A//apps.myskua.org/qsac-latest/sac/tonylinde
> (but not very meaningful I guess - need to get some html on the end
> of the namespace refs)
It's not at all bad, though. We do get stuff back, which is valid and
grokkable by a tool we haven't explicitly aimed at, which does
something reasonable with it. Well done us!
Problems we (or I) should look at:
* zitgist.com is a LinkingOpenData place (from a quick skim, and
some half-remembered mentions of it elsewhere), so it's not intended
as a general RDF browser, but instead as a way of browsing round RDF
stores which link to each other. We're sort of misusing it in this
example, therefore, but we could probably do better in what we make
available to it.
* One thing is that we shouldn't really be exposing
resources like http://localhost:3000/workflows/3 on the open web
(which we are doing, because they appear in this response from
apps.myskua.org). Hmm: what _should_ be happening here, I wonder.
I've been skirting around the LOD world for an age, and hope to do
something about that before SemAst next month.
Thanks for pointing this out, Tony -- it's very valuable roughage.
See you,
Norman
--
Norman Gray : http://nxg.me.uk
Dept Physics and Astronomy, University of Leicester
On 2009 Feb 18, at 09:21, Linde, A.E. wrote:
> Yes, Exhibit was one I looked at but it was more difficult to set up
> and the project seems dead now. Will take another look.
Ross:
> A much better browser (IMHO) is the Exhibit browser from MIT. You need
> to provide your data in JSON. Exhibit is a client side Javascript app,
> but there is also a server side version.
Well, they're intended to do different things. As Ross points out,
Exhibit is a JSON viewer, not an RDF one, so it won't be able to make any
sense of the RDF which the SAC is serving (it seems that some of the
Simile stuff has moved from MIT to a Google Code project, so it's not
dead yet!).
Tabulator and Disco are intended to be generic RDF browsers, but
zitgist is (I think) intended to be an RDF browser which will work
better with LOD-style RDF. So Tony's experience is telling us that
we've done just about the right thing without really trying, but that
there are a couple of unLODish things still in the RDF we're giving out.
As Tony points out:
> Absolutely. My comment was about the SKUA and spacebook namespaces -
> is there something we should do so that when someone looks up
> spacebook:object, they get a meaningful response?
I think so, but it's probably at the level of tweaking (I _think_),
which will be easier once I/we have more experience of LODifying RDF
stores.
(The LOD stuff isn't anything deep, by the way -- it's just a set of
patterns and practices for making RDF available in such a way that
links between RDF repositories are explicit and can be followed).
> And the localhost is just for test purposes: wanted to see that when
> I put the apps endpoint into spacebook, it posted the info correctly.
Sure. It's just that one of the LOD preoccupations is about how you
mint names for things, and I bet there's a prescription or
proscription about this somewhere!
> It does raise the question of what the identifier for a workflow
> ought to be. The '.../workflows/3' takes you to the webpage for that
> workflow (e.g. http://www.myexperiment.org/workflows/173). The other
> option is something like http://www.myexperiment.org/workflow.xml?id=173
> which uses the myExperiment api to return an XML representation of
> the workflow. I'm not sure which to use. Maybe I ought to ask on the
> myExperiment list?
Ideally, that URL should return XML when it's queried with an Accept
header which permits that, and return RDF when the Accept header
permits only _that_. It wouldn't be that hard for them to do. Hmm:
in fact, why not try
% curl -H accept:application/rdf+xml http://www.myexperiment.org/workflow.xml?id=173
(for a suitable URL) and see if they've implemented this already!
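For anyone who'd rather script this probe than use curl, here's a minimal standard-library sketch of the same thing; the URL is the one above, and whether the server actually honours the Accept header is precisely what's being tested, so no particular response is assumed:

```python
import urllib.request

# Same probe as the curl command above: ask the workflow URL for RDF/XML
# and see what Content-Type comes back.  If myExperiment implements
# content negotiation, it should be application/rdf+xml; if not, we'll
# just get the plain XML representation.
url = "http://www.myexperiment.org/workflow.xml?id=173"
req = urllib.request.Request(url, headers={"Accept": "application/rdf+xml"})

# Uncomment to actually perform the request:
# with urllib.request.urlopen(req) as resp:
#     print(resp.headers.get("Content-Type"))

print(req.get_header("Accept"))
```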
All the best,
Not dead; complete for the purposes of those working on it. If you
need more features, they are active enough to accept them. Personally I
find the feature set to be very complete for generic data sets.
Ross
On 2009 Feb 18, at 14:20, Linde, A.E. wrote:
> Come to think of it, if it returned RDF, we wouldn't need a SAC
> behind it, just an extension to the api. So the api would present
> myExperiment as if it was a SAC but with nothing federated from it.
> Hmm, worth thinking about.
Yes, all it'd need would be a SPARQL endpoint, and it'd be a read-only
SAC!
If the XML that you get back from myExperiment were browsable, in that
you could go from one retrieved object to another, then it might only
need a fairly thin wrapper (which could be client-side) to make it
LODish RDF. That's not SPARQL, and it's not a SAC, but it's an
interesting way of getting at the myExperiment stuff.
You mention, though, <http://rdf.myexperiment.org/> -- verrrrrry
interesting. I'm not sure I really follow the other message: they
generate the RDF based on the previous evening's snapshot (that seems
a little weird, but...), and they could add RDF representations of
other myExperiment objects on an ad hoc basis?
The sparql endpoint at <http://rdf.myexperiment.org/sparql> is
interesting. However, when I retrieve all of the triples from there,
they appear to be only ontology triples -- all under http://rdf.myexperiment.org/ontologies/
-- and no instance data.
See you,
On 2009 Feb 18, at 18:00, Linde, A.E. wrote:
> Isn't http://rdf.myexperiment.org/Workflow/173 an instance? In which
> case it ought to be SPARQL-able.
It is, but it doesn't seem to appear in the model queried by the
SPARQL interface. Going to <http://rdf.myexperiment.org/sparql> and
putting in
select ?p ?o where { <http://rdf.myexperiment.org/Workflow/173> ?p ?o. }
...gets no results.
Putting in
select ?s ?p ?o where { ?s ?p ?o. }
...gets all the triples in the store, but they all appear to be
ontology/TBox ones.
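(Incidentally, you don't need the web form to run these probes: the SPARQL protocol lets a query be passed as an ordinary URL parameter, so a script only needs to URL-encode it. A sketch, assuming nothing beyond the endpoint address above:)

```python
import urllib.parse

# Build a SPARQL-protocol GET request for the probe query above.  The
# endpoint address is the one from the thread; the query text is passed
# as a URL-encoded "query" parameter.
endpoint = "http://rdf.myexperiment.org/sparql"
query = ("select ?p ?o where "
         "{ <http://rdf.myexperiment.org/Workflow/173> ?p ?o. }")
url = endpoint + "?" + urllib.parse.urlencode({"query": query})

# Fetching this URL (e.g. with urllib.request.urlopen) would return the
# endpoint's result set in whatever format it defaults to.
print(url)
```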
It looks as if the data for the
http://rdf.myexperiment.org/Workflow/173 URLs is generated on the fly
from the underlying database (and this is consistent with your earlier
message), but that's distinct from the SPARQL store. If there's an
RDBMS in the background there,
they could potentially query it directly using something like D2RQ,
providing a SPARQL endpoint immediately, and skipping all the custom
DB-to-RDF code. But there may well be other complications.
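To illustrate what the D2RQ route would involve: a D2RQ server sits in front of the RDBMS and is driven by a declarative mapping file. The fragment below is purely a sketch; the table and column names (workflows, id, title) are guesses at what the myExperiment schema might look like, not its actual schema.

```turtle
@prefix map:     <#> .
@prefix d2rq:    <http://www.wiwiss.fu-berlin.de/suhl/bizer/D2RQ/0.1#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Connection to the (hypothetical) underlying database.
map:db a d2rq:Database ;
    d2rq:jdbcDSN "jdbc:mysql://localhost/myexperiment" ;
    d2rq:jdbcDriver "com.mysql.jdbc.Driver" .

# Mint the existing Workflow URIs directly from the primary key.
map:Workflow a d2rq:ClassMap ;
    d2rq:dataStorage map:db ;
    d2rq:uriPattern "http://rdf.myexperiment.org/Workflow/@@workflows.id@@" .

# One example property bridge; the column name is a guess.
map:workflowTitle a d2rq:PropertyBridge ;
    d2rq:belongsToClassMap map:Workflow ;
    d2rq:property dcterms:title ;
    d2rq:column "workflows.title" .
```

The attraction is that the mapping file is the only artefact: the instance data would then come straight out of the live database, rather than from a nightly snapshot.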