What is the benefit of this approach instead of, say, implementing an
Airplay-like mirroring/streaming solution? The only aspect of
mirroring/sharing exposed to web pages there is an x-webkit-airplay
attribute on media elements.
For Chromecast-like use cases (i.e. whenever a data communications
channel is required to a local device/service) we could provide a more
generic network communication primitive. Web browsers could discover
suitable local devices and provide endpoint control URLs that web
developers could use to pass e.g. JSON messages via XMLHttpRequest to those
services. Chromecast provides such an HTTP-level interface that we
could harness here:
https://developers.google.com/cast/docs/reference/messages. There is a
proposal that supports communicating exactly this way with e.g. a
Chromecast device at
https://dvcs.w3.org/hg/dap/raw-file/tip/discovery-api/Overview.html.
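As a hedged sketch of what that HTTP-level messaging could look like from a page once a control URL has been discovered: the message fields and endpoint below are illustrative assumptions only, not the actual Chromecast message format from the linked docs.

```javascript
// Illustrative only: the message shape and endpoint URL are assumptions,
// not the real Chromecast protocol.
function buildLoadMessage(contentUrl) {
  // A hypothetical JSON control message asking the device to load media.
  return JSON.stringify({ type: 'LOAD', src: contentUrl, autoplay: true });
}

function sendToDevice(endpointUrl, message) {
  // Plain XMLHttpRequest POST to the discovered control URL.
  var xhr = new XMLHttpRequest();
  xhr.open('POST', endpointUrl);
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(message);
}

// Hypothetical usage against a discovered endpoint:
// sendToDevice('http://192.168.1.20:8008/apps/YouTube',
//              buildLoadMessage('http://example.com/video.mp4'));
```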
Providing a more generic local service discovery mechanism and a
better network communications abstraction point (i.e. HTTP) would
allow web developers to communicate with a wider range of local
services and devices than just those a web browser supports natively.
It would also reduce implementation effort, since browsers would not
need to implement and maintain multiple messaging protocol stacks:
communication with UPnP/DLNA/Chromecast/NextBigServiceHere services
could be left to web developers and suitable JavaScript libraries,
e.g. https://richtr.github.io/plug.play.js/.
Why would we not discover and provide HTTP URLs toward local services
and re-use existing platform APIs like XMLHttpRequest and WebSockets
for this type of network communication?
That does still require a way for the user to pass that URL to a
second screen, but it is unclear why we need a JavaScript API rather
than a feature in the browser UI that can do that.
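A rough sketch of how the discovery proposal's getNetworkServices entry point could feed ordinary HTTP APIs follows. The stub stands in for a browser implementation of the draft; the service type string, record values, and helper name are assumptions for illustration.

```javascript
// Stub standing in for a browser's implementation of the proposed
// Network Service Discovery API. The service record shape (name/type/url)
// follows the draft; the concrete values here are made up.
var navigatorStub = {
  getNetworkServices: function (type, onSuccess) {
    onSuccess([
      { name: 'Living Room TV', type: type, url: 'http://192.168.1.20:8008/apps' }
    ]);
  }
};

// Collect the control URLs of discovered services of a given type; a page
// could then talk to them with XMLHttpRequest or WebSockets as usual.
function controlUrlsFor(type, nav, done) {
  nav.getNetworkServices(type, function (services) {
    done(services.map(function (s) { return s.url; }));
  });
}
```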
On Sep 11, 2014 2:10 AM, "Rich Tibbett" <ri...@opera.com> wrote:
>
> On Wed, Sep 10, 2014 at 3:27 PM, Anton Vayvod <ava...@chromium.org> wrote:
> >
> > On Wed, Sep 10, 2014 at 10:28 AM, Rich Tibbett <ri...@opera.com> wrote:
> >>
> >> What is the benefit of this approach instead of, say, implementing an
> >> Airplay-like mirroring/streaming solution? The only aspect of
> >> mirroring/sharing exposed to web pages there is an x-webkit-airplay
> >> attribute on media elements.
> >
> >
> > With the Presentation API one can stream a presentation or a game. If
> > remote rendering is possible, one can achieve a much better user
> > experience in many respects.
>
> Second screen sharing implemented in OS and browser UI that a user can
> start/stop by selecting a discovered second screen provides the very
> best permissions model for this kind of sharing. It reminds me of
> <input type=file> in that, in many respects, the user is not even
> aware this is a permissions dialog.
I know that we at some point talked about exposing the ability for a page to detect that it was being mirrored to a TV, and then letting it call something like startSession, which would automatically and transparently to the user, without any permission prompts, display a separate URL on the mirroring target.
It doesn't appear that we added that to the spec, though. It is something that would provide the flow that you are talking about.
It seems worth exploring since I think it would fit very naturally in the current spec model.
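To make the idea concrete, here is a minimal sketch of that flow. The startSession shape loosely follows the current draft; the 'mirrored' flag and the stub below are assumptions standing in for the detection signal that is not yet in the spec.

```javascript
// Stub for a draft-Presentation-API-like surface. The 'mirrored' flag is a
// hypothetical signal that the page is already being mirrored to a TV,
// which is the piece noted above as missing from the spec.
var presentationStub = {
  mirrored: true,
  startSession: function (url) {
    // Under this proposal no permission prompt is needed here, because the
    // user already opted in by starting mirroring from the browser UI.
    return { url: url, state: 'connected' };
  }
};

// Only switch to a dedicated second-screen URL if we are being mirrored.
function maybePresent(presentation, url) {
  if (!presentation.mirrored) return null;
  return presentation.startSession(url);
}
```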
/ Jonas