If you'd like to help, here are some initial questions to help us scope the project:
--
---
You received this message because you are subscribed to the Google Groups "DigitalNZ" group.
To unsubscribe from this group and stop receiving emails from it, send an email to digitalnz+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
Which is fair enough if you're going to go full REST with HATEOAS, but almost every "REST" API is actually an HTTP API [1].
I think versioning needs to be considered because it's hard to make a business case for a general client that can do hypermedia.
DigitalNZ bakes in a version in the URL (e.g. http://api.digitalnz.org/v3/records.json) but I think it's better to include a version via an Accept header (see the discussion at [3]) so the resource URL is more long-term stable.
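As a sketch of the Accept-header approach (the vendor media type below is hypothetical, not DigitalNZ's actual one), the client pins the version in a header while the resource URL stays stable:

```python
# Sketch only: "application/vnd.example.v3+json" is a made-up vendor media
# type; a real API would document its own. The point is that the version
# travels in the Accept header, not the URL.
def versioned_headers(version: int) -> dict:
    """Headers that pin the API version without baking it into the URL."""
    return {"Accept": f"application/vnd.example.v{version}+json"}

# The resource URL never changes across versions, e.g.:
#   GET http://api.digitalnz.org/records.json  with versioned_headers(3)
```

Old clients keep working against the same URL; bumping the version is just a header change on the client side.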
[1] http://martinfowler.com/articles/richardsonMaturityModel.html
[2] https://github.com/18F/api-standards
[3] https://github.com/18F/api-standards/issues/5
One question if I may: are there one or two places you would point to that are 'doing it right'?
I'm also more generally interested in "distant reading" / "distant viewing" i.e. the application of "big data" techniques to the study of culture, and there the requirement is different. Here I've tended to use OAI-PMH or simply HTTP bulk download.
Hey Con, are you able to give some specific examples of the types of big data culture study you have, or might, do?
Hmm, I might need to read Roy Fielding's work a bit more, but it's difficult to see how he thinks it would work without seeing a worked example.

I can imagine some future changes can't be handled gracefully via REST. The 'Web' has traditionally solved this (badly) with "let's just create a different website" and with 404s and visitors eventually give up.
Hi Douglas
Yes, I was mostly trying out the tech as a learning exercise. But I also think it's in the nature of those techniques that they are exploratory and serendipitous. I did have something in mind, which was exploring the differences between the colonial wars in NZ and Australia through the prism of newspaper coverage. Unfortunately Trove's API is a serious bottleneck - it's much, much slower and less reliable than DigitalNZ's, and of course the Aussie newspaper corpus is larger. :-(
BTW here's another "big data" project using images that's perhaps more relevant: http://ryanfb.github.io/etc/2015/11/03/finding_near-matches_in_the_rijksmuseum_with_pastec.html
> I can imagine some future changes can't be handled gracefully via REST. The 'Web' has traditionally solved this (badly) with "let's just create a different website" and with 404s and visitors eventually give up.

Could you give an example of the kind of thing that you think couldn't work?
It's worth noting that if you do go down the SPARQL/RDF route (i.e. deploying a SPARQL Query server as your API provider) then you will get CSV and TSV for free: the SPARQL Query protocol offers these two along with a JSON- and an XML-based format for tabular results.
https://www.w3.org/TR/2013/REC-sparql11-protocol-20130321/#query-success
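For example (the endpoint URL here is just a placeholder), picking CSV versus TSV versus JSON results is plain content negotiation on the query request. A minimal sketch that builds - but doesn't send - such a request:

```python
from urllib.parse import urlencode
from urllib.request import Request

def sparql_request(endpoint: str, query: str,
                   fmt: str = "text/csv") -> Request:
    """Build (but don't send) a SPARQL 1.1 Protocol query request.

    fmt may be text/csv, text/tab-separated-values,
    application/sparql-results+json or application/sparql-results+xml -
    the four tabular result formats the protocol defines.
    """
    body = urlencode({"query": query}).encode("utf-8")
    return Request(endpoint, data=body, headers={
        "Accept": fmt,
        "Content-Type": "application/x-www-form-urlencoded",
    })

req = sparql_request("http://example.org/sparql",
                     "SELECT ?s WHERE { ?s ?p ?o } LIMIT 5")
```

The same query text serves all four formats; only the Accept header changes.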
On 29 Apr 2016 08:19, "Douglas Campbell" <douglas....@tepapa.govt.nz> wrote:
>>>
>>> I can imagine some future changes can't be handled gracefully via REST. The 'Web' has traditionally solved this (badly) with "let's just create a different website" and with 404s and visitors eventually give up.
>>
>>
>> Could you give an example of the kind of thing that you think couldn't work?
>
>
> I can't recall an exact example, but I'm thinking maybe when your data modelling evolves and reveals that what you previously lumped together as a single resource should really have part separated off as a separate resource. I'm not sure how it is possible to do this 'gracefully' without versioning since the main resource now returns half the data (possibly records and/or fields are missing), which may break some existing apps.
You could handle that by versioning the response type.
New clients would request the resource with an Accept header of "application/my-api-fine-grained+json" to receive the fine-grained resource representation. Without that header you'd get the old coarse-grained one. The URI remains the same, because "cool URIs don't change".
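On the server side, that dispatch might look like the sketch below (using the same hypothetical media type; a production server would do full Accept parsing with quality values rather than a substring check):

```python
# Hypothetical fine-grained media type from the example above.
FINE_GRAINED = "application/my-api-fine-grained+json"

def choose_representation(accept: str) -> str:
    """Pick the representation for one stable URI from the Accept header.

    Old clients sending plain JSON (or nothing) keep getting the original
    coarse-grained resource; only clients that opt in get the new shape.
    """
    if FINE_GRAINED in accept:
        return "fine-grained"
    return "coarse-grained"
```

So `choose_representation("application/json")` keeps existing apps on the coarse-grained representation, and nothing at the URI level ever breaks.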
>
> Fielding's post seems to recommend abandoning it and making two new resources, which is just messy. That is still versioning, except it is done via documentation - "This resource is the older version that combines resources X and Y, you should now use X or Y instead".
>
Thank you for the time you have invested in answering our questions. The discussion has helped our scoping of Te Papa's collections API considerably. Some of the points may seem obvious but it really helps to get validation, and a couple of unexpected perspectives also popped up.
Here is a summary of the discussion.
Use cases
Other points
Not necessarily; you can use a SPARQL store as a back end to "canned" queries and not require front end devs to know SPARQL, but still get the benefits of SPARQL's power, such as conneg.
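A sketch of that "canned query" layer (the query names and the SPARQL itself are made up for illustration): the API holds the SPARQL, and front-end code only ever supplies a name and parameters:

```python
# Hypothetical canned queries - front-end devs never write SPARQL themselves.
CANNED_QUERIES = {
    "records_by_creator": (
        "PREFIX dct: <http://purl.org/dc/terms/>\n"
        'SELECT ?record WHERE {{ ?record dct:creator "{creator}" }}'
    ),
}

def build_query(name: str, **params: str) -> str:
    """Expand a named canned query; raises KeyError for unknown names."""
    return CANNED_QUERIES[name].format(**params)
```

The SPARQL store stays the back end, so you keep its result-format negotiation, while the public surface is just a small set of named, parameterised queries.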