Intent to Implement: Revised WebVR API


Brandon Jones

unread,
Sep 7, 2017, 7:20:02 PM
to blink-dev

Contact emails

baj...@chromium.org, meganl...@chromium.org


Explainer

https://github.com/w3c/webvr/blob/master/explainer.md


There is also an input portion of the API that is detailed in a separate explainer for clarity.

The work-in-progress spec is available here, but the explainers are more complete and authoritative as of the writing of this email.

We have requested a tag review and received partial feedback thus far.


Summary

WebVR is an API that provides access to input and output capabilities commonly associated with Virtual Reality hardware, ranging in capabilities from mobile-based 3DoF headsets (e.g., Google Cardboard, Daydream) to PC-based 6DoF systems (e.g., Vive, Oculus Rift) and everything in between.


The API covered by this intent to implement is a complete refactoring of the previous version of WebVR. We have been running Origin Trials with the previous API version and gathered a significant amount of developer, web platform, hardware manufacturer, and implementer feedback which has informed the revised API design.


Motivation

The WebVR API has been available experimentally for a while now and has seen great interest from developers and users alike, with many compelling applications supporting the feature. However, as hardware has evolved and we have received feedback from both developers and web platform experts, it’s become apparent that there were several issues with the original form of the API that we did not want to make part of the web platform long term. As a result, this revised version of the API has been developed as a collaboration between the major browser vendors and several VR hardware manufacturers. The product of that collaboration is an API that’s more forward looking, user friendly, and “webby”. Thus we want to implement it with the intention of replacing the original version of the API once the implementation is complete.


We intend to run additional Origin Trials with the new API once implemented to gauge developer response and look for potential issues before shipping.


Interoperability and Compatibility Risk

Other vendors have already committed to implementing this updated version of the API, so the primary risk is actually that the previous version of the API sticks around in other browsers longer than intended, creating some user confusion about what “WebVR support” actually entails. Chrome has only provided access to the previous version of the API via Origin Trials.


Edge: Shipped previous WebVR API, has committed to deprecating it in favor of the new API when implemented.

Firefox: Shipped previous WebVR API, has committed to shipping the new API and supporting the previous API as a JavaScript shim on top of it.

Safari: No commitment yet to implement either version of the API, but participating in spec development.

Web developers: Positive


Ongoing technical constraints

None.


Will this feature be supported on all six Blink platforms (Windows, Mac, Linux, Chrome OS, Android, and Android WebView)?

Yes, though not all platforms may support VR devices. (The API would return an empty device array in that case.)


OWP launch tracking bug

http://crbug.com/670502


Link to entry on the feature dashboard

https://www.chromestatus.com/feature/5680169905815552


Requesting approval to ship?

No.


Emily Stark

unread,
Sep 10, 2017, 4:53:21 PM
to Brandon Jones, blink-dev
I might be missing it, but is there a description of the changes in the revised API over the previous version?


wisanthama

unread,
Sep 10, 2017, 5:14:47 PM
to Emily Stark, Brandon Jones, blink-dev




Sent from my Samsung Galaxy phone.

-------- Original message --------
From: 'Emily Stark' via blink-dev <blin...@chromium.org>
Date: 11/9/2017 03:52 (GMT+07:00)
To: Brandon Jones <baj...@google.com>
Cc: blink-dev <blin...@chromium.org>
Subject: Re: [blink-dev] Intent to Implement: Revised WebVR API


Brandon Jones

unread,
Sep 11, 2017, 1:18:33 PM
to wisanthama, Emily Stark, blink-dev
Hm... it would almost be easier to list the things that didn't change! We really did take the feedback to heart and overhauled the API from top to bottom. The end result is an interface that supports most of the same concepts as the previous one (and some important new ones) but approaches them in more precise, better defined, and selectively constrained ways. The side effect is that pretty much the entire API surface is updated.

Still, I can definitely run through some of the high-level changes and their intentions to give people who aren't familiar with the previous API better footing:
  • Out of concern for users' privacy, and to spin up device resources more intelligently, we've limited the amount of information that the API surfaces during device enumeration. It's now a very small amount of data that enables users to pick their intended output destination (in the case that you have multiple headsets connected to a single machine, for example). To get anything more than that (and to display any imagery) the developer must request a "session" from the device, which we can control much more tightly and place various security restrictions on depending on the requested level of functionality.
  • We've switched from a "descriptive" to a "prescriptive" rendering model. In the previous API you would draw a scene and then describe to the API how you did it ("I rendered the left eye's content into this viewport", etc.). The new API instead describes to the developer precisely how they need to render their content in order for it to be displayed correctly. ("You must render the left eye content into this viewport.") This resolves multiple points of ambiguity and gives the UA much more control over quality, performance, and security.
  • We're no longer assuming that the app only needs to render a left and right eye; instead we describe N views that the developer must render. N can be 1 for mono displays, 2 for stereo displays, or 64+ for some sort of crazy lightfield display in the future. Again, the point is to give the UA the flexibility it needs to make intelligent decisions regarding performance, hardware capabilities, and security.
  • As part of the above changes, we've also made it so that all uses of the API can use a single render path. The developer largely doesn't have to know or care about the exact details of how their imagery will be displayed; they just know that if they render the N views they were given, the UA will work out the rest. This means that a much larger array of devices have the potential to "just work" with a much larger subset of the content on the web. (We can't entirely stop developers from trying to be clever in ways that break devices other than the set they tested with, but we're trying.)
  • We've also been working on a new, VR-centric input system that abstracts away some of the most common input patterns we saw emerge from the previous API and enables it to work the same way on a 2D screen, a gaze-and-click style cardboard device, an orientation-only pointing device like Daydream, or a fully tracked controller like the high-end desktop headsets. This will hopefully encourage more apps to use these generic input patterns instead of checking device names and doing customized logic for each as we had seen previously.
There are a lot of details that list glosses over, but those are the highlights. In general we've been working on ensuring that the API will be applicable to the widest range of future hardware we reasonably can, and ensuring that applications that users create with the API will work automatically with as much of the ecosystem as possible.
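To make the prescriptive, N-view rendering model above concrete, here is a minimal sketch of the render loop's shape. The `frame`, `views`, `eye`, and `viewport` names are modeled loosely on the work-in-progress explainer and may not match the final API; the session/frame objects are stubbed here so the loop is self-contained rather than tied to real VR hardware.

```javascript
// Stub: a frame as the UA might present it to the page. A mono
// device would supply one view, a stereo headset two, and a future
// multi-view display could supply many more. (Hypothetical shape,
// based on the explainer's description, not the shipped API.)
const stubFrame = {
  views: [
    { eye: 'left',  viewport: { x: 0,    y: 0, width: 1024, height: 1024 } },
    { eye: 'right', viewport: { x: 1024, y: 0, width: 1024, height: 1024 } },
  ],
};

// Prescriptive model: the app renders whatever views the UA hands
// it, rather than assuming a fixed left/right pair and describing
// its own layout to the API afterwards.
function renderFrame(frame, drawView) {
  for (const view of frame.views) {
    // A real implementation would set the GL viewport and the
    // view/projection matrices from `view` before issuing draws.
    drawView(view);
  }
}

const drawn = [];
renderFrame(stubFrame, (view) => drawn.push(view.eye));
console.log(drawn); // the views rendered, in UA-given order
```

Because the loop never hard-codes the number of views, the same app code covers mono, stereo, and hypothetical many-view devices, which is the "single render path" point above.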

Hope that helps!
--Brandon