Hi,
I am developing a streaming server in native C++ running on Windows, and I would like to understand how WebRTC handles UDP packet loss.
I have done this in the past using my own UDP-based streaming protocol. There, I don't care that much about packet loss; I just want frames to be sent to the decoder even if they have missing packets.
Is this possible using RTP or WebRTC? I mean, if it is possible over RTP, can WebRTC be configured to do it?
The thing is that with my protocol I get smooth 60 FPS playback and pretty decent video quality even at 10% packet loss. With WebRTC I get perfect video quality (I guess only complete video frames are rendered) but a very uneven framerate (a lot of display jitter). Can WebRTC be made to behave the way my protocol does, i.e. prioritize framerate over video packet loss?
My main questions would be:
- Can WebRTC be configured with a very small jitter buffer (a single video frame, for example)?
- Can WebRTC then be configured to push frames with missing packets to the decoder, so as to keep a smooth framerate and let the video codec do the error concealment?
- If this behaviour is possible, can we get feedback for every frame that was sent incomplete to the decoder (frame timestamp or ID), so we can react with FEC or reference frame invalidation?
- If all these features are still to be discussed for a future version of WebRTC, who should we speak with, or what is the way to make such suggestions to this beautiful project?
Sorry if I seem pretty lost on this subject, but we are just starting out and we want to build a worldwide disruptive service on top of WebRTC :)
This is critical for us, as we want to build a whole new startup service on top of WebRTC, and this behaviour is a must.
Thanks a lot; I will greatly appreciate any help on this subject.
Best regards,