You received this message because you are subscribed to the Google Groups "Getty Vocabularies as Linked Open Data" group.
>>>>>>Will it be possible for providers of images to extract all their Linked Data metadata, or will it remain 'locked in' to the IS system?
Answer: yes, it is already open. The idea of IS is to let content creators or curators of collections assert their desired copyrights, even Public Domain, with the image metadata and rights openly published. All metadata in ImageSnippets is published daily to an open triple store, which is queryable from our SPARQL endpoint and also published on datahub.io.
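For readers who want to try the open triple store, a query against a SPARQL endpoint can be sketched as below. The endpoint URL, prefix, and property names here are placeholders, not the actual ImageSnippets endpoint or vocabulary; check the ImageSnippets documentation for the real values.

```python
# Sketch: querying an open SPARQL endpoint for image rights metadata.
# ENDPOINT and the dc:rights property below are ASSUMPTIONS for illustration.
import urllib.parse
import urllib.request

ENDPOINT = "https://example.org/sparql"  # placeholder, not the real endpoint

query = """
PREFIX dc: <http://purl.org/dc/terms/>
SELECT ?image ?rights WHERE {
  ?image dc:rights ?rights .
} LIMIT 10
"""

# Encode the query as a GET parameter and ask for JSON results.
params = urllib.parse.urlencode({"query": query})
req = urllib.request.Request(
    f"{ENDPOINT}?{params}",
    headers={"Accept": "application/sparql-results+json"},
)
# urllib.request.urlopen(req) would return the result bindings;
# it is not executed here because the endpoint URL is a placeholder.
```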
>>>>>> Are you planning to support Linked Data content negotiation, so that your image URLs could also deliver RDF/XML, Turtle, etc. in response to a suitable Accept header in the HTTP request?
Answer: yes, we support content negotiation (part of publishing 5-star linked data); see the first answer above for clarification. We are also continuing to expand the export options: an export button shows the options that currently work (simple web gallery, JSON, etc.), and we are improving on these now.
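As a rough illustration of what content negotiation means in practice: the same resource URL can be asked for different RDF serializations via the HTTP Accept header. The resource URI below is a placeholder, but the MIME types are the standard ones for Turtle and RDF/XML.

```python
# Sketch: HTTP content negotiation against a Linked Data resource URL.
# RESOURCE is a placeholder URI, not a real ImageSnippets identifier.
import urllib.request

RESOURCE = "https://example.org/image/123"  # placeholder resource URI

def rdf_request(uri: str, mime: str) -> urllib.request.Request:
    """Build a GET request asking the server for a specific RDF serialization."""
    return urllib.request.Request(uri, headers={"Accept": mime})

turtle_req = rdf_request(RESOURCE, "text/turtle")
rdfxml_req = rdf_request(RESOURCE, "application/rdf+xml")
# A conforming server inspects the Accept header and returns the matching
# serialization (or 406 Not Acceptable if it cannot honor the request).
```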
>>>>>> Do you have plans to support external authorities by defining a web service interface, as a complement to the current option of hand-entering the whole framework? This is presumably how you already interface to Geonames, DBpedia and AAT.
Answer: if you click the 'user datasets' button in the interface, you can see the list of datasets a user can subscribe to, and users can also build their own custom datasets. We currently either pull directly from the services (as with Geonames) or use a local copy of DBpedia (which is faster and less problematic). Adding datasets is extremely easy for us: we added AAT and ULAN each within 24 hours of its release (both are local copies on our server). Bottom line: we can very quickly build custom interfaces that add whatever dataset you would like to use.
NOTE (people may need a little more guidance to see this at work): to look up entities, type a word for the 'object value' while building a triple, and a small exclamation point will appear. If it is black, the system has found matches in one or more of your subscribed datasets: click the black exclamation point and double-click to choose the most appropriate match, or build multiple triples, one per match, if desired. If the exclamation point comes up red, you have the option of creating a new entity, but I caution that people should try another word or phrase first, at least until they are more familiar with the system. We are currently working on dramatically improving this interface to make entity resolution much simpler.
Of course, while you are beta testing, you can add nonsensical entities if you want.
Happy to answer your questions, and I would be glad to give a personalized demo.
Best,
Margaret