Dear all,
The AV working group has collected many examples of the things we need to be able to model in Presentation API 3 – the content found in AV collections that we want to open up to the same annotation and interoperability patterns we already have for image-based objects.
These use cases led to test manifests and the evolving Presentation 3 spec.
These models need to be tested against reality by verifying that the things we want to do can really be done in a web browser. Annotating content onto a canvas with a time dimension raises technical challenges. [1]
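To make that concrete, here is a minimal sketch (TypeScript object literals, with hypothetical identifiers and durations – not taken from the test fixtures) of the kind of structure involved: a canvas that carries a duration, and a painting annotation that targets a temporal fragment of that canvas using media-fragment syntax.

    // A canvas with a time dimension, as introduced in the Presentation 3 drafts.
    // Identifiers and values below are illustrative only.
    const canvas = {
      id: "https://example.org/iiif/canvas/1",
      type: "Canvas",
      duration: 120.0, // seconds
    };

    // A painting annotation that places a video clip onto seconds 10–20
    // of the canvas timeline via a #t= media fragment.
    const annotation = {
      id: "https://example.org/iiif/anno/1",
      type: "Annotation",
      motivation: "painting",
      body: {
        id: "https://example.org/media/clip.mp4",
        type: "Video",
        format: "video/mp4",
      },
      target: canvas.id + "#t=10,20",
    };

The challenge the proof of concept explores is what a browser-based player has to do to honour structures like this at playback time.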
To help explore those challenges, we have been working on a proof of concept with Joscha Jaeger of Filmicweb. This work is funded by the British Library’s Enhancing Discovery and Access for Sound Collections project.
https://github.com/digirati-co-uk/iiif-av-bl
The GitHub repo includes the test fixtures, implementation details, and an evaluation of approaches to media synchronisation and playback.
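As one illustration of the kind of approach evaluated there, the sketch below (TypeScript, standard browser APIs only; the function name and drift threshold are my own illustrative choices, not the player's actual code) drives a video element from a shared canvas clock and nudges it back into line when it drifts.

    // Keep a video element in step with a shared canvas clock.
    // canvasTime() returns the current position on the canvas timeline, in seconds;
    // offset is where this video's annotation starts on that timeline.
    function syncToCanvasClock(
      video: HTMLVideoElement,
      canvasTime: () => number,
      offset: number
    ): void {
      const DRIFT_TOLERANCE = 0.25; // seconds of drift tolerated before a corrective seek

      function tick(): void {
        const expected = canvasTime() - offset; // where the video should be right now
        if (expected >= 0 && expected <= video.duration) {
          if (video.paused) {
            void video.play();
          }
          if (Math.abs(video.currentTime - expected) > DRIFT_TOLERANCE) {
            video.currentTime = expected; // hard seek to re-align with the canvas clock
          }
        } else if (!video.paused) {
          video.pause(); // outside this resource's span on the canvas timeline
        }
        requestAnimationFrame(tick);
      }

      requestAnimationFrame(tick);
    }

The evaluation in the repo compares this kind of clock-driven correction with other synchronisation strategies, along with their playback trade-offs.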
The proof of concept player can be found here:
https://digirati-co-uk.github.io/iiif-av-bl/
Feedback most welcome!
Tom
[1] IIIF AV Technical challenges
https://docs.google.com/document/d/1lcef8tjqfzBqRSmWLkJZ46Pj0pm8nSD11hbCAd7Hqxg/edit