How can I use pp::VideoDecoder after PNaCl is gone?


Brian Pearce

Sep 29, 2017, 6:59:55 PM
to Native-Client-Discuss
Because web standards are sorely lacking when it comes to live low-latency H.264 video streaming, I have been developing an H.264 player that I can embed in web pages in Chrome using PNaCl.  To do this, I expanded on the VideoDecoder example from the Native Client SDK, which uses pp::VideoDecoder.  I embed the .pexe and feed H.264 NAL units into the module from JavaScript, then parse messages that the module sends back to find out when each frame is finished rendering and what its resolution is, etc.  It is far more efficient and more powerful (and more reliable) than any of the Emscripten / JavaScript H.264 decoder ports I've tried.
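
For context, here's a minimal sketch of the JS side of that setup, following the Native Client SDK examples. The element ids, the "decoder.nmf" manifest name, and the shape of the status message are placeholders for whatever your own module uses:

  // Markup (roughly): <div id="listener">
  //   <embed id="decoder" src="decoder.nmf" type="application/x-pnacl">
  // </div>
  var listener = document.getElementById('listener');

  // The SDK examples attach 'load'/'message' handlers to a wrapper div
  // with capture=true so they fire for the <embed> inside it.
  listener.addEventListener('load', function () {
    console.log('PNaCl module loaded');
  }, true);

  // The module calls pp::Instance::PostMessage() when a frame finishes
  // rendering; a pp::VarDictionary arrives here as a plain JS object.
  listener.addEventListener('message', function (e) {
    var msg = e.data;  // e.g. { frame: 42, width: 1280, height: 720 }
    console.log('decoded frame', msg.frame, msg.width + 'x' + msg.height);
  }, true);

  // Feed one H.264 NAL unit into the module. An ArrayBuffer posted from
  // JS arrives in HandleMessage() as a pp::VarArrayBuffer.
  function pushNalUnit(nalu /* ArrayBuffer */) {
    document.getElementById('decoder').postMessage(nalu);
  }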

However, today I learned that PNaCl is deprecated and that my player will stop working early next year :(

The replacement technology, WebAssembly, doesn't come with a free video decoder the way PNaCl did, and I can't find an H.264 decoder that someone else has already ported. I don't have the time or the experience to port one myself, so right now WebAssembly is not looking like an option.

So my question is, what must I do differently to keep using pp::VideoDecoder?

It doesn't look like pp::VideoDecoder or PPAPI is going away, just the ability to build and consume .pexe files (or Chrome Apps outside of Chrome OS).  Right?

Can I create a Chrome Extension that will enable me to embed a pp::VideoDecoder-powered video player on the web again (with the obvious extra step of installing the extension on each client machine)?  Will I have to build separate executables for each platform I wish to run it on?

Or should I start getting comfortable with the idea of decoding H.264 in JavaScript again?

Some One

Oct 31, 2017, 6:42:30 AM
to Native-Client-Discuss
I have the same question. We need to sync the video frames with vector/text labels precisely. Will WebAssembly ship a similar API eventually?

Brian Pearce

Nov 14, 2017, 1:02:57 PM
to Native-Client-Discuss
I ended up transitioning my project to use this: https://github.com/kazuki/video-codec.js

Performance (decoding speed) is about 40% of what Chrome's native video decoder was capable of :(

van...@gmail.com

Dec 18, 2017, 6:58:46 PM
to Native-Client-Discuss
Yeah, we also need pp::VideoDecoder and rely on it heavily. We are thinking that, unfortunately, we would have to move to an NW.js / Electron / CEF app that runs outside the browser if Chrome extensions also won't support NaCl.

Brian Pearce

Jul 15, 2018, 8:34:43 PM
to Native-Client-Discuss
When I started this thread about a year ago I was under the impression that the Media Source Extensions API (HTML5 audio/video) required each video chunk (MP4 file) to begin with a keyframe, which would mean a minimum streaming latency equal to the keyframe interval. That turned out to be an incorrect assumption. You can in fact feed individual frames into Media Source Extensions and achieve low-latency playback. The complex part is that you still need to provide the data in the form of fragmented MP4 files. Fortunately I found a fantastic little library that hides all that needless complexity and gives you a clean API for submitting raw H.264 frames: https://github.com/samirkumardas/jmuxer
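
To give a flavor of how little code this takes, here's a sketch based on jmuxer's README (the option names like node/mode/flushingTime/fps come from that README; the WebSocket transport is just an example):

  var jmuxer = new JMuxer({
    node: 'player',    // id of the <video> element to drive
    mode: 'video',     // H.264 video only ('audio' and 'both' also exist)
    flushingTime: 0,   // flush each chunk immediately for low latency
    fps: 30            // nominal frame rate of the stream
  });

  // Example transport: one access unit (one frame's worth of NAL units)
  // per WebSocket message. The endpoint is hypothetical.
  var ws = new WebSocket('wss://example.com/h264');
  ws.binaryType = 'arraybuffer';
  ws.onmessage = function (e) {
    jmuxer.feed({ video: new Uint8Array(e.data) });
  };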

And just like that you have a passable replacement for many pp::VideoDecoder use cases, and it works in most browsers, not just Chrome on desktops/notebooks.

Now there are a few limitations to using Media Source Extensions:
  1. There's no "frame rendered" event, so it's difficult to do additional processing on decoded frames or to perfectly synchronize another page element with the video (a rough workaround is sketched after this list).
  2. Low-latency streaming in MS Edge is currently impossible because the browser imposes about 3 seconds of delay.
  3. Low-latency streaming doesn't work well in Firefox due to this: https://bugzilla.mozilla.org/show_bug.cgi?id=1290840
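
For #1, the closest thing I know of to a workaround is polling video.currentTime from requestAnimationFrame. It doesn't give true per-frame events, but it can keep an overlay roughly in sync:

  // Fires onNewFrame whenever currentTime advances; this only
  // approximates frame boundaries. onNewFrame is your own handler,
  // e.g. repositioning vector/text labels over the video.
  function watchFrames(video, onNewFrame) {
    var lastTime = -1;
    function tick() {
      if (video.currentTime !== lastTime) {
        lastTime = video.currentTime;
        onNewFrame(lastTime);
      }
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);
  }

  watchFrames(document.getElementById('player'), function (t) {
    console.log('new frame around t =', t.toFixed(3), 's');
  });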

wole...@chromium.org

Aug 3, 2018, 1:14:46 PM
to Native-Client-Discuss
Yes, MSE is not a low-level decoder API, nor was it really meant to be one. However, on improvements around latency/liveness, the MSE spec issue tracker currently has:
  1. https://github.com/w3c/media-source/issues/184 (expose the underlying decoder, with some discussion of MSE implementations and low-latency decoding),
  2. https://github.com/w3c/media-source/issues/21 (approximately, let web apps observe/set the latency model),
  3. https://github.com/w3c/media-source/issues/160 (for streams that have gaps, i.e. where frames' buffered presentation ranges are disjoint, let apps control what playback does: stall (the current behavior), possibly avoid producing a gap, or possibly play through gaps in various ways)
Comments on existing MSE spec issues, or filing new issues, are welcome :)