Intent to Experiment: WebCodecs


Husain Bengali

Aug 27, 2020, 2:15:19 PM
to blink-dev, Chris Cunningham, Dan Sanders, Dale Curtis

Contact emails


Design docs

TAG review

Draft Spec 


Summary

Provides efficient, low-level access to built-in (software and hardware) media encoders and decoders.

Link to “Intent to Implement” blink-dev discussion


Interoperability and Compatibility

The main risk is that other browsers do not implement it.

Gecko: Positive; co-editing the spec (Paul Adenot)

WebKit: Negative; concerns over keeping it fingerprinting-neutral

Web developers: Positive (Wikipedia, Twitch, Sony)


The proposed API shape enables the core use cases (see explainer) in a performant manner. We've left room for future optimizations and generally minimized complexity, even if that means we don't support all codec features. We intend for this shape to be friendly to both WASM and JS users. 

Decoder outputs will typically be rendered with Canvas and Web Audio. Developers have asked for this low level rendering control. The Canvas rendering path also allows VideoFrames to be manipulated via WebGL. The WebAudio rendering path will often leverage AudioWorklet. The AudioDecoder output uses Web Audio primitives to make this easy.
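As a hedged sketch of that Canvas path (the codec string, the canvas lookup, and the demuxer that supplies chunks are all illustrative assumptions, not part of this intent):

```javascript
// Sketch: a VideoDecoder whose output callback paints each VideoFrame
// onto a 2D canvas. 'vp8' is an assumed codec string; chunk delivery
// (demuxing) is left to the application or a library.

function startCanvasDecoder(canvas) {
  const ctx = canvas.getContext('2d');
  const decoder = new VideoDecoder({
    output: (frame) => {
      // drawImage accepts a VideoFrame directly; close() promptly
      // releases the (possibly hardware-backed) frame buffer.
      ctx.drawImage(frame, 0, 0);
      frame.close();
    },
    error: (e) => console.error(e),
  });
  // Configuration would normally be derived from the container.
  decoder.configure({ codec: 'vp8' });
  return decoder;
}

// Callers would then feed it EncodedVideoChunks from a demuxer, e.g.:
//   decoder.decode(new EncodedVideoChunk({
//     type: 'key', timestamp: 0, data: encodedBytes }));
```

The same output frames can instead be handed to WebGL for manipulation, since VideoFrame is usable as a texture source.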

Encoder inputs will often come from getUserMedia() and getDisplayMedia().
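A hedged sketch of that encode path, assuming MediaStreamTrackProcessor (Chromium's insertable-streams API) to lift frames off the track; the codec, resolution, and `onChunk` callback are illustrative assumptions:

```javascript
// Sketch: camera frames from getUserMedia() fed into a VideoEncoder.
// Each encoder output is an EncodedVideoChunk, ready to mux or send.

async function encodeCamera(onChunk) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const [track] = stream.getVideoTracks();

  const encoder = new VideoEncoder({
    output: (chunk, metadata) => onChunk(chunk, metadata),
    error: (e) => console.error(e),
  });
  encoder.configure({ codec: 'vp8', width: 640, height: 480 });

  // MediaStreamTrackProcessor exposes the track as a ReadableStream
  // of VideoFrames.
  const reader = new MediaStreamTrackProcessor({ track }).readable.getReader();
  for (;;) {
    const { value: frame, done } = await reader.read();
    if (done) break;
    encoder.encode(frame);
    frame.close(); // the encoder holds its own reference to the frame
  }
  await encoder.flush();
}
```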


WebCodecs would benefit from having libraries built on top that do things such as: 

  • containerize (mux) and de-containerize (demux) to/from media file formats (e.g. mp4, webm)

  • render media with low latency from an unreliable media transport (in other words, a jitter buffer)

  • RTC client logic / signalling

WebCodecs could benefit somewhat from polyfills to experiment with the API, but that would not bring any of the performance benefits or access to hardware codecs. Significant documentation and outreach would likely be helpful, especially for advanced uses of codecs, such as spatial and temporal scalability.


The implementation is thoroughly fuzz tested. There may be marginally increased fingerprinting surface due to support for detecting the presence of hardware encode capabilities. This has been reviewed and deemed acceptable by the privacy sandbox team.

Goals for experimentation

We need feedback on most of the API surfaces, which roughly amount to decode for video, audio, and images, and encode for video and audio. Multiple partners have eagerly committed to participating in our origin trial; we will look to them for feedback and for validation that the API shape suits their use cases (e.g. real-time communications, low-latency streaming).

Experimental timeline

The initial goal of the origin trial is to run for 3 milestones, from M86 Stable through M88 Stable. We may end the origin trial early if feedback indicates the API is ready to ship.

Ongoing technical constraints


Will this feature be supported on all six Blink platforms (Windows, Mac, Linux, Chrome OS, Android, and Android WebView)?


Is this feature fully tested by web-platform-tests?

We have coverage for all basic uses of the API, including exception and error states, which should ensure base-level uniformity across browser implementations of the API surface. Some deeper aspects of the implementation, such as which codecs are supported, vary from browser to browser and hence may not need to be tested by WPTs. Test status can be found at:

Link to entry on the Chrome Platform Status

Aug 27, 2020, 3:26:19 PM
to blink-dev
LGTM. Massively excited about this work.

Will codec pluggability be explored as a separate proposal/intent?

Chris Cunningham

Aug 27, 2020, 3:49:41 PM
to blink-dev, Dan Sanders
Thanks Alex!

Re: pluggability, WASM (or even just JavaScript) users may choose to provide their own implementations of encoders/decoders using the same WebCodecs interface. This would let their apps swap out one of our decoders for one of their own (e.g. gracefully fall back if the UA didn't support a desired format). They can construct all the same input/output types and use the same rendering mechanisms as if the codec were built in.
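One way to picture that fallback pattern (a hedged sketch; `FallbackVideoDecoder`, the codec string, and the WASM decode step it stands in for are all hypothetical, not anything shipped):

```javascript
// Sketch: an app-level decoder that mirrors the WebCodecs VideoDecoder
// shape ({configure, decode, flush, close} plus an output callback),
// so callers can swap it in when the UA lacks a format. A real one
// would call into a WASM build of the codec inside decode().

class FallbackVideoDecoder {
  constructor({ output, error }) {
    this.output = output;
    this.error = error;
    this.state = 'unconfigured';
  }
  configure(config) {
    this.config = config;
    this.state = 'configured';
  }
  decode(chunk) {
    try {
      // Hypothetical: run the WASM codec here and wrap the result in a
      // VideoFrame. For illustration we just echo the chunk timestamp.
      this.output({ timestamp: chunk.timestamp });
    } catch (e) {
      this.error(e);
    }
  }
  async flush() {}
  close() { this.state = 'closed'; }
}

// Callers prefer the built-in decoder when the UA supports it:
function makeDecoder(init) {
  return typeof VideoDecoder !== 'undefined'
    ? new VideoDecoder(init)
    : new FallbackVideoDecoder(init);
}
```

Because both decoders expose the same surface and produce the same output types, the rendering code downstream does not need to know which one is in use.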

We don't currently have any mechanism to allow authors to register their implementation for participation in other parts of the UA stack. We're definitely happy to consider it. Users so far have asked for total control (e.g. fully manual rendering), such that they don't have much use for our existing <video> stack.


Ashley Gullen

Aug 27, 2020, 7:04:50 PM
to Chris Cunningham, blink-dev, Dan Sanders
Regarding pluggable codecs for the rest of the UA stack, I wrote about this back in 2016:

I think it's a great idea for encouraging innovation with image file formats like FLIF, or other audio or even video codecs. Previously this has been difficult and hacky, and even compelling experiments have got nowhere due to lack of support. If we could, say, use a WebAssembly module to teach the UA how to decode a whole new image format, it could be both high performance and act as if the browser fully supported that format, without the monumental web compatibility concerns that come with adopting new formats/codecs.
