I have been reading various online sources and code examples to try to understand whether the following is possible and what the best way to go about it would be:
I would like to create a native (C/C++) WebRTC peer that captures video frames from ffmpeg desktop capture (this part I know how to do) as raw H.264 NAL units and sends those NAL units via WebRTC for decoding and display in the browser as a MediaStream (so that I can attach it directly to a video tag for the best rendering performance). Is this even remotely possible?
Looking at the peerconnection_client example, it seems like a reasonable starting point, but any tips or pointers in the right direction would be appreciated. The biggest unanswered question I have (after studying the example) is which class(es) or functions I should implement/override in order to feed the data I receive from ffmpeg into a MediaStream so that it is properly sent and received on the other side; a rough sketch of what I have in mind is below. Additionally, I am wondering whether the browser-side MediaStream decoding can handle frame loss, and whether it supports both I-frames and P-frames.
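For concreteness, here is a rough sketch of the kind of source class I imagine having to implement, assuming rtc::AdaptedVideoTrackSource is the appropriate base class and that I convert each captured frame to I420 before handing it over (so this is the raw-frame path rather than the pre-encoded NAL path I actually want; the class and helper names like ScreenCaptureSource and PushCapturedFrame are just placeholders of mine):

```cpp
#include "absl/types/optional.h"
#include "api/video/i420_buffer.h"
#include "api/video/video_frame.h"
#include "media/base/adapted_video_track_source.h"
#include "rtc_base/time_utils.h"

// Placeholder source class: accepts raw I420 frames from my ffmpeg capture
// loop and forwards them into the WebRTC pipeline via OnFrame().
class ScreenCaptureSource : public rtc::AdaptedVideoTrackSource {
 public:
  // Called from my capture loop with a frame already converted to I420.
  void PushCapturedFrame(const uint8_t* y, int stride_y,
                         const uint8_t* u, int stride_u,
                         const uint8_t* v, int stride_v,
                         int width, int height) {
    rtc::scoped_refptr<webrtc::I420Buffer> buffer = webrtc::I420Buffer::Copy(
        width, height, y, stride_y, u, stride_u, v, stride_v);
    webrtc::VideoFrame frame = webrtc::VideoFrame::Builder()
                                   .set_video_frame_buffer(buffer)
                                   .set_timestamp_us(rtc::TimeMicros())
                                   .build();
    // OnFrame() (inherited, protected) hands the frame to WebRTC, which then
    // adapts and encodes it before sending.
    OnFrame(frame);
  }

  // Boilerplate required by the VideoTrackSourceInterface contract.
  webrtc::MediaSourceInterface::SourceState state() const override {
    return webrtc::MediaSourceInterface::kLive;
  }
  bool remote() const override { return false; }
  bool is_screencast() const override { return true; }
  absl::optional<bool> needs_denoising() const override { return false; }
};
```

My understanding is that I would then wrap this in a ref-counted object, create a track with the peer connection factory's CreateVideoTrack(), and add it with AddTrack(), but I am not sure whether that path can be adapted to accept already-encoded H.264 NAL units, or whether that would instead require a custom VideoEncoderFactory that passes my NALs through untouched.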
I am aware of the MediaSource API (mediaSource.addSourceBuffer and sourceBuffer.appendBuffer), but I would prefer to use MediaStream, since I believe that avoids any JavaScript code touching the frame data/buffers, can take advantage of hardware H.264 decoding where available, and should provide the lowest-latency, lowest-CPU-usage experience.