Sending pre-encoded h264 bitstream via WebRTC (native code)


mike ads

Mar 11, 2016, 9:15:52 AM3/11/16
to discuss-webrtc
Hello!

I'm using an Nvidia h264 hardware encoder to encode some live video and output a constant bitstream. I would like to send that bitstream to some clients via WebRTC.

I've already gotten WebRTC running in my native application, and I've extended the VideoCapturer class to be able to send my own custom video stream (with help of OpenCV). Everything is working fine.

I've been reading all I could find about sending h264 streams via WebRTC, and I'm not sure where to begin. Could I simply create VideoFrames with fourcc type FOURCC_H264 and pass them down to be sent to my clients? Or will it be more difficult than that?

Any pointer in the right direction would be really appreciated. Thanks!

Oren Shir

Mar 11, 2016, 1:21:36 PM3/11/16
to discuss-webrtc
I've been looking into doing the same thing, and all the evidence I've seen suggests it is not easy with the current architecture.
I haven't tried it yet, but from what I understand you will need a capturer that passes the H264 frames through, and a custom encoder that doesn't try to encode them again. The custom encoder should communicate with your Nvidia encoder to respond to feedback from the peer (for example, a keyframe might be needed right away, or the client can't keep up and needs lower quality/resolution).
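For what it's worth, the "custom encoder that doesn't try to encode them again" idea can be sketched in isolation. The types below are simplified stand-ins, not the real webrtc::VideoEncoder API; they only illustrate the shape such a passthrough encoder takes: accept already-encoded access units, hand them to whatever packetizes them, and record peer feedback (like keyframe requests) so the external hardware encoder can be told to react.

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

// Simplified stand-in for webrtc's encoded-frame type -- illustrative only.
struct EncodedImage {
  std::vector<uint8_t> data;
  bool is_keyframe = false;
};

// A "passthrough" encoder: it never encodes anything itself. It hands
// pre-encoded H.264 access units straight to whatever packetizes them,
// and records peer feedback so the external HW encoder can act on it.
class PassthroughH264Encoder {
 public:
  using Callback = std::function<void(const EncodedImage&)>;
  explicit PassthroughH264Encoder(Callback on_encoded)
      : on_encoded_(std::move(on_encoded)) {}

  // Called from the capture path with an already-encoded access unit.
  void OnPreEncodedFrame(std::vector<uint8_t> h264_au, bool keyframe) {
    EncodedImage img;
    img.data = std::move(h264_au);
    img.is_keyframe = keyframe;
    if (keyframe) keyframe_pending_ = false;  // request satisfied
    on_encoded_(img);  // in real webrtc this would reach the RTP packetizer
  }

  // Peer feedback (e.g. a PLI) that must percolate back to the HW encoder.
  void RequestKeyFrame() { keyframe_pending_ = true; }
  bool KeyFramePending() const { return keyframe_pending_; }

 private:
  Callback on_encoded_;
  bool keyframe_pending_ = false;
};
```

The point of the sketch is the second half: the passthrough class is trivial, but it has to expose the feedback channel back to the hardware encoder, which is exactly the part the stock pipeline doesn't hand you.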

I hope I'm wrong and there is an easier solution. Perhaps using the low level mediaengine2 API?
If anyone has a solution they can share on github it would be great.

Alexandre GOUAILLARD

Mar 11, 2016, 4:52:52 PM3/11/16
to discuss...@googlegroups.com
if you modify a capturer, you need to decode and let webrtc re-encode (which defeats the purpose, really).

if you brutally push your encoded frames deeper into the code, you will need to disable/fool all the bandwidth adaptation and ignore most of the RTCP feedback. It is unlikely you would achieve good quality that way.

if you want to do things nicely, you would have to find a way to percolate all that feedback and information all the way back to the hardware encoder.

The only code in webrtc that does that is the part that handles the hardware encoders on iOS and Android. That's where you should look for hints about what needs to be passed over, and which webrtc APIs/classes to use with external encoders.

The design has recently evolved to support external (video) codecs, but I do not know how stable it is yet. Maybe someone from google can comment?

--

---
You received this message because you are subscribed to the Google Groups "discuss-webrtc" group.
To unsubscribe from this group and stop receiving emails from it, send an email to discuss-webrt...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/discuss-webrtc/79fae00c-eb5d-4ff2-a60f-689de46105ed%40googlegroups.com.

For more options, visit https://groups.google.com/d/optout.



--
Alex. Gouaillard, PhD, PhD, MBA
------------------------------------------------------------------------------------
Principal Architect - Citrix, San Francisco
President - CoSMo Software Consulting, Singapore
------------------------------------------------------------------------------------

Oren Shir

Mar 11, 2016, 6:41:46 PM3/11/16
to discuss-webrtc
If you modify the capturer AND the encoder, and a few other parts of the pipe between the two, you can capture H264 frames, not decode them, not encode them, and use them as the decoder output. Everything else should behave the same.
Sadly, it's easier said than done. It doesn't look like the design had this use case in mind, or if it did, this isn't the approach they were planning for. I still hope someone will come up with a better solution.

mike ads

Mar 11, 2016, 7:00:47 PM3/11/16
to discuss...@googlegroups.com
@Alexandre: I do have access to the RAW image data for the video frames before they are encoded by my hardware encoder. So it seems like there are two ways of going about this:

1) I could pass the raw frames into my video capturer and then add support for my NVENC HW encoder to WebRTC, allowing it to use NVENC to encode the frames and send them to my clients, along with all the features you mentioned (adaptive bitrate, etc.). One issue, however, is that these raw frames are all initially rendered in video memory and are encoded there as well, so WebRTC would have to work with pointers into video memory without making unnecessary copies of the frame data to RAM.

OR

2) Handle all the encoding on my own, make just enough modifications to WebRTC to bypass all of its own encoding, and have it simply send the frames I gave it to my clients. This sounds much simpler, but has the drawback of not allowing WebRTC to control the encoder settings and adjust them so clients have a good experience.


Alexandre GOUAILLARD

Mar 11, 2016, 7:08:32 PM3/11/16
to discuss...@googlegroups.com
"and a few other parts" LOL.

if you include the higher levels (SDP parsing and generation), there are 13+ layers involved in the code, with interdependencies between the MTU size, the type of packetization (mode 0 and mode 1), the number of chunks per frame, and the available bandwidth, among other things ...
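One of those interdependencies is concrete enough to sketch: under RFC 6184 packetization-mode 1, a NAL unit larger than the payload MTU is split into FU-A fragments, each replacing the original NAL header with a one-byte FU indicator and a one-byte FU header. A self-contained illustration (not webrtc's actual packetizer code):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// RFC 6184 FU-A fragmentation (packetization-mode 1): a NAL unit larger
// than the payload MTU is split into fragments. Each fragment carries a
// 1-byte FU indicator (F+NRI from the original header, type 28) and a
// 1-byte FU header (S/E bits + the original NAL type).
std::vector<std::vector<uint8_t>> FragmentFuA(const std::vector<uint8_t>& nal,
                                              size_t max_payload) {
  std::vector<std::vector<uint8_t>> packets;
  if (nal.size() <= max_payload) {  // fits: send as a single NAL unit packet
    packets.push_back(nal);
    return packets;
  }
  const uint8_t hdr = nal[0];
  const uint8_t fu_indicator = (hdr & 0xE0) | 28;  // F+NRI, type 28 = FU-A
  const uint8_t nal_type = hdr & 0x1F;
  size_t pos = 1;                        // skip the original NAL header byte
  const size_t chunk = max_payload - 2;  // room left after the two FU bytes
  while (pos < nal.size()) {
    size_t len = std::min(chunk, nal.size() - pos);
    uint8_t fu_header = nal_type;
    if (pos == 1) fu_header |= 0x80;                 // S bit: first fragment
    if (pos + len == nal.size()) fu_header |= 0x40;  // E bit: last fragment
    std::vector<uint8_t> pkt{fu_indicator, fu_header};
    pkt.insert(pkt.end(), nal.begin() + pos, nal.begin() + pos + len);
    packets.push_back(std::move(pkt));
    pos += len;
  }
  return packets;
}
```

This is why the MTU, packetization mode, and chunks-per-frame are coupled the way Alex describes: change one and the layers above and below all see different packet counts and sizes.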

You can look at the early patches of people that did it for inspiration.

kaiduan xue, at BlackBerry/RIM, did an implementation of h264 in the webrtc lib (and WebKit, with hardware acceleration for BB phones) and proposed the first patches almost two years ago:

randel jesup, principal engineer at Mozilla and an expert in H.264 (author of the latest specifications), ported the previous patch to Firefox's webrtc version and proposed the patch upstream more than a year and a half ago:

He then added some corrections in firefox not present in the patch:

The latest implementation, by (Google's) henrik, is different: it leverages two separate libs for encoding and decoding. https://bugs.chromium.org/p/chromium/issues/detail?id=500605

...

Or you can read my previous message in this thread for a digest of all of the above.

Alex.




Alexandre GOUAILLARD

Mar 11, 2016, 7:16:17 PM3/11/16
to discuss...@googlegroups.com
IMHO the best way is to ask Google how far along they are in supporting external codecs in the video_engine, and start from there. Anything else would be extremely hacky, involving preprocessor-time codec-selection flags and other horrors.

Filing one bug in their new ticket system should start it; then you might want to have as many people as possible star it to get attention.

I know of a lot of companies (IoT, drones, webcams) that would love a clean way to do that (i.e. a webrtc lib supporting an H.264 HW encoder, to embed in a device that could then stream browser-compliant webrtc streams) instead of the current hacks (embedded ffmpeg streaming to a server, transcoding, ...). So you're likely to get support on the bug.

Since Google went through the entire process for H.264 (over the last two years ...), it should still be fresh in the engineers' memory, and it could be a good time to document "how to add a video codec to the webrtc lib".





Bill Gibson

Mar 12, 2016, 2:59:26 PM3/12/16
to discuss-webrtc
I have successfully done this, although I am only using the data channel layer. MediaSource playback on Chrome and FF works. Embedded apps like iOS and Android require new code logic to render the video, but both have native support for h264 streams.

Alexandre GOUAILLARD

Mar 15, 2016, 3:52:11 PM3/15/16
to discuss...@googlegroups.com
@mike

the latest H264 CL from this morning shows how to encapsulate an external codec (VideoToolbox in that case, on Mac). It's smaller than the previous patches I pointed you to.
You could take a look there:

Note that this code lives in Chrome, not in standalone webrtc. How it eventually leverages the webrtc classes is what you want to look at.




mike ads

Mar 16, 2016, 9:02:14 AM3/16/16
to discuss-webrtc
@Alex: Thanks a lot for your help! The information there looks like it will be very useful when I try to add the NVENC support I'm looking for.

I'll be sure to post an update here and let everyone know how it goes.

Nicolas Tizon

Dec 29, 2016, 3:46:18 AM12/29/16
to discuss-webrtc
Hi all,

A fairly easy way to do this is to extend the existing cricket::VideoCapturer class to push the encoded video frames directly. In addition, by using the FFmpeg API it is possible to support a wide range of video sources (RTSP, MPEG2-TS, ...).
If you are interested, I've committed my last experiments: https://github.com/nicotyze/Webrtc-H264Capturer

As previously mentioned, this breaks some bitrate-adaptation mechanisms, but in many cases it is not really possible to control the external encoder anyway (especially hardware encoders).
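One practical detail when pushing encoded frames through a capturer like this: hardware encoders typically emit an Annex B byte stream, which has to be split into individual NAL units on the 00 00 01 / 00 00 00 01 start codes before packetization. A self-contained sketch of that split (independent of the repository above):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Split an Annex B H.264 byte stream (the format most hardware encoders
// and FFmpeg produce) into individual NAL units by scanning for the
// 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes.
std::vector<std::vector<uint8_t>> SplitAnnexB(const std::vector<uint8_t>& bs) {
  // First locate every start code: {offset of its first byte, its length}.
  std::vector<std::pair<size_t, size_t>> sc;
  for (size_t i = 0; i + 3 <= bs.size();) {
    if (bs[i] == 0 && bs[i + 1] == 0 && bs[i + 2] == 1) {
      sc.push_back({i, 3});
      i += 3;
    } else if (i + 4 <= bs.size() && bs[i] == 0 && bs[i + 1] == 0 &&
               bs[i + 2] == 0 && bs[i + 3] == 1) {
      sc.push_back({i, 4});
      i += 4;
    } else {
      ++i;
    }
  }
  // Each NAL unit runs from the end of its start code to the start of
  // the next start code (or the end of the buffer).
  std::vector<std::vector<uint8_t>> nals;
  for (size_t n = 0; n < sc.size(); ++n) {
    size_t begin = sc[n].first + sc[n].second;
    size_t end = (n + 1 < sc.size()) ? sc[n + 1].first : bs.size();
    nals.emplace_back(bs.begin() + begin, bs.begin() + end);
  }
  return nals;
}
```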

nolaan.ge...@gmail.com

Jan 23, 2017, 2:43:34 AM1/23/17
to discuss-webrtc
Hi,
Unfortunately your instructions don't work. :/

nicolas tizon

Jan 23, 2017, 3:23:47 AM1/23/17
to discuss...@googlegroups.com
For the time being, this approach has not been widely tested...
Can you please tell me a bit more about the problem you encountered, so I can improve/fix the "instructions"?



Vitaut Tryputsin

Apr 26, 2018, 4:15:03 AM4/26/18
to discuss-webrtc
I have the same problem. I have an encoded h264 bitstream from the Nvidia Capture SDK (NVIFR). I can write it to a native h264 video file, but the main goal is sending it to the browser as a live video stream with low latency. Is it possible to send an encoded h264 bitstream to browsers using the webrtc peerconnection client and server? I saw Nicolas's example, but I don't clearly understand how I can use it in my case. What is the entry point for my investigation? Should I use ffmpeg with webrtc, or just override some classes from webrtc? Thank you for any help or plan. I am new to encoding and webrtc.

mike ads

Apr 26, 2018, 4:20:28 AM4/26/18
to discuss...@googlegroups.com
I figured it out on my end eventually. There's an encoder class buried somewhere in webrtc where the actual call to OpenH264 is made. Once the bitstream is returned it's processed slightly and passed down through webrtc until it eventually gets transmitted.

So just go there, replace the OpenH264 setup and calls with your own encoder, and pass your bitstream down. Worked for me.
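One detail worth noting if you go this route: wherever you hand your bitstream down, webrtc still needs to know which access units are keyframes, and for H.264 that can be read off the NAL unit type (the low 5 bits of the header byte; 5 = IDR slice). A minimal, self-contained sketch:

```cpp
#include <cstdint>
#include <vector>

// The NAL unit type lives in the low 5 bits of the header byte (the first
// byte after the start code). Type 5 is an IDR slice, i.e. a keyframe;
// types 7/8 are the SPS/PPS that typically accompany one.
inline int NalType(uint8_t nal_header) { return nal_header & 0x1F; }

inline bool IsIdr(uint8_t nal_header) { return NalType(nal_header) == 5; }

// An access unit should be flagged as a keyframe if any of its NAL units
// is an IDR slice. (Input here is the header byte of each NAL unit.)
bool IsKeyframeAccessUnit(const std::vector<uint8_t>& nal_headers) {
  for (uint8_t h : nal_headers)
    if (IsIdr(h)) return true;
  return false;
}
```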

Vitaut Tryputsin

Apr 30, 2018, 3:37:58 AM4/30/18
to discuss-webrtc
Thanks. It will be the starting point for me.


Kiran Raj

Jan 24, 2019, 2:12:49 AM1/24/19
to discuss-webrtc
Hi, 
I am trying to stream raw h264 NAL units from a native webrtc peer to a browser. I've been working with the capturer at https://github.com/nicotyze/Webrtc-H264Capturer. As of now I have a file containing raw h264 and am trying to play it in the web client. I can successfully play most 720p files but couldn't play 1080p resolutions: the playback is very slow, and I see only 2 or 3 frames every 5 seconds in the browser. Any suggestions?

Bo Zhou

Jun 26, 2019, 7:10:07 AM6/26/19
to discuss-webrtc
Hi Kiran! Did you solve this?