The workflow of the video jitter buffer in WebRTC


xfengtes...@gmail.com

Oct 13, 2014, 10:43:30 AM
to discuss...@googlegroups.com
Dear All,

I have been researching the video engine of WebRTC and took a quick look at how video frames are tracked, but I cannot find where the video frame buffer queue is or how the jitter buffer workflow operates. Could anyone share something about this? Thanks.

From my investigation, the video frame data flow is as below:
a. I420 video frame: a VideoCaptureModule such as VideoCaptureModuleV4L2 provides the raw video frame, then uses a callback function to send the frame to WebRtcVideoCapturer.
b. WebRtcVideoCapturer uses VideoCapturer to process the I420 video frame; after some processing, the I420 frame is sent to ViECapturer.
c. _encoder->Encode()
d. default_rtp_rtcp_->SendOutgoingData
e. rtp_sender_.SendOutgoingData
f. video_->SendVideo
g. RTPSenderVideo::SendVideo()
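
To make the pipeline concrete, here is a toy, self-contained model of that callback chain. Every type below is an illustrative stand-in of my own, not the real WebRTC classes:

#include <cstdint>
#include <cstdio>
#include <functional>
#include <vector>

// Toy model of the send-side flow traced above:
// capture -> callback -> encode -> RTP send.

struct I420Frame {             // raw frame handed out by the capture module
  int width = 0, height = 0;
  uint32_t timestamp = 0;      // 90 kHz RTP-style timestamp
};

struct EncodedFrame {          // encoder output, packetizer input
  uint32_t timestamp = 0;
  bool keyframe = false;
  std::vector<uint8_t> payload;
};

class RtpSenderModel {
 public:
  void SendOutgoingData(const EncodedFrame& f) {
    // The real code packetizes into RTP here (RTPSenderVideo::SendVideo).
    std::printf("RTP: ts=%u key=%d bytes=%zu\n",
                f.timestamp, f.keyframe, f.payload.size());
  }
};

class EncoderModel {
 public:
  explicit EncoderModel(RtpSenderModel* rtp) : rtp_(rtp) {}
  void Encode(const I420Frame& raw) {
    EncodedFrame out;
    out.timestamp = raw.timestamp;
    out.keyframe = (frames_++ % 30 == 0);           // fake GOP structure
    out.payload.resize(out.keyframe ? 8000 : 1200); // fake bitstream
    rtp_->SendOutgoingData(out);
  }
 private:
  RtpSenderModel* rtp_;
  int frames_ = 0;
};

class CaptureModel {
 public:
  // The capture module delivers frames through a registered callback,
  // much like VideoCaptureModule does with its data callback.
  void RegisterFrameCallback(std::function<void(const I420Frame&)> cb) {
    callback_ = std::move(cb);
  }
  void DeliverFrame(uint32_t ts) {
    I420Frame f;
    f.width = 640; f.height = 480; f.timestamp = ts;
    if (callback_) callback_(f);
  }
 private:
  std::function<void(const I420Frame&)> callback_;
};

int main() {
  RtpSenderModel rtp;
  EncoderModel encoder(&rtp);
  CaptureModel capture;
  capture.RegisterFrameCallback(
      [&](const I420Frame& f) { encoder.Encode(f); });
  for (uint32_t ts = 0; ts < 3 * 3000; ts += 3000)  // 3 frames at 30 fps
    capture.DeliverFrame(ts);
}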

xfengtes...@gmail.com

Oct 15, 2014, 12:34:50 AM
to discuss...@googlegroups.com
It seems VideoSender uses MediaOptimization to check whether the raw video frame needs to be dropped before it is sent to the encoder.
My question is: is there a raw video buffer queue or an encoded video frame buffer queue, and where is the jitter buffer? Thanks.
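
For intuition, the pre-encode drop decision can be modeled as a leaky-bucket check against the target bitrate. This is my own simplification of the idea, not MediaOptimization's actual algorithm:

#include <cstdio>

// Toy leaky-bucket frame dropper: if the bits already spent against the
// target bitrate overflow the budget, drop the incoming raw frame before
// it ever reaches the encoder.
class FrameDropper {
 public:
  FrameDropper(double target_bps, double fps)
      : budget_per_frame_(target_bps / fps) {}

  // Called once per captured frame with the size of the previous encoded
  // frame; returns true if the next raw frame should be dropped.
  bool DropFrame(double last_encoded_bits) {
    debt_ += last_encoded_bits - budget_per_frame_;
    if (debt_ < 0) debt_ = 0;
    if (debt_ > budget_per_frame_ * kMaxDebtFrames) {
      debt_ -= budget_per_frame_;  // dropping a frame repays one budget
      return true;
    }
    return false;
  }

 private:
  static constexpr double kMaxDebtFrames = 4.0;  // tolerance before dropping
  double budget_per_frame_;
  double debt_ = 0;
};

int main() {
  FrameDropper dropper(/*target_bps=*/500000, /*fps=*/30);
  // Simulate a burst of oversized frames (e.g., a big keyframe).
  double sizes[] = {40000, 40000, 40000, 12000, 12000, 12000};
  for (double bits : sizes)
    std::printf("encoded=%.0f bits -> %s\n", bits,
                dropper.DropFrame(bits) ? "DROP next frame" : "keep");
}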


On Monday, October 13, 2014 at 10:43:30 PM UTC+8, xfengtes...@gmail.com wrote:

Kaiduan Xie

Oct 15, 2014, 9:20:50 AM
to discuss...@googlegroups.com
Please refer to jitter_buffer.cc and receiver.cc in webrtc/modules/video_coding/main/source.

/Kaiduan


kapil kumar

Oct 16, 2014, 6:37:57 AM
to discuss...@googlegroups.com
As per my understanding, once you send data through RTPSender, it is received by RTPReceiver at the other end, where the packet headers are used to find the sequence number, timestamp, frame height/width, keyframe flag, etc.

The packet is then added to the jitter buffer according to its timestamp and keyframe status. The jitter buffer compares timestamps and sequence numbers to detect whether any packet has been dropped; if none is missing, it creates a video frame for the decodable session and fires the callback into the decoder implementation (hardware/software).
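
To illustrate that ordering/gap logic, here is a toy jitter buffer that groups packets per RTP timestamp, orders them by sequence number, and releases a frame only when it is complete. Illustrative only (sequence-number wraparound is ignored); the real logic lives in jitter_buffer.cc:

#include <cstdint>
#include <cstdio>
#include <map>

// One frame = all packets sharing an RTP timestamp; the marker bit is
// set on the last packet of the frame.
struct Packet {
  uint16_t seq;
  uint32_t timestamp;
  bool marker;
};

class ToyJitterBuffer {
 public:
  void Insert(const Packet& p) { frames_[p.timestamp][p.seq] = p; }

  // Returns true and fills |timestamp| if some frame is complete:
  // contiguous sequence numbers ending in a marker packet.
  bool NextCompleteFrame(uint32_t* timestamp) {
    for (auto& [ts, pkts] : frames_) {
      uint16_t expected = pkts.begin()->first;
      bool saw_marker = false, gap = false;
      for (auto& [seq, pkt] : pkts) {
        if (seq != expected++) { gap = true; break; }
        saw_marker = pkt.marker;
      }
      if (!gap && saw_marker) {
        *timestamp = ts;
        frames_.erase(ts);
        return true;
      }
    }
    return false;
  }

 private:
  // timestamp -> (seq -> packet); std::map keeps both sorted.
  std::map<uint32_t, std::map<uint16_t, Packet>> frames_;
};

int main() {
  ToyJitterBuffer jb;
  jb.Insert({101, 3000, false});
  jb.Insert({103, 3000, true});   // seq 102 still missing -> frame held
  uint32_t ts;
  std::printf("complete? %d\n", jb.NextCompleteFrame(&ts));  // prints 0
  jb.Insert({102, 3000, false});  // gap filled -> frame decodable
  std::printf("complete? %d ts=%u\n", jb.NextCompleteFrame(&ts), ts);
}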

Add logs for the sender/receiver and you can analyze the flow.

Hope it helps.

xfengtes...@gmail.com

Oct 21, 2014, 2:03:37 AM
to discuss...@googlegroups.com
Got it. Thanks very much.
So the video engine uses MediaOptimization to check whether the raw video frame needs to be dropped before it reaches the encoder, and sends data through RTPSender after encoding.
BTW, I have another question: I can only capture encoded video frames (such as VP8), not YUV or I420 video data. How can I handle this in WebRTC?
For now, my approach is to bypass all the video processing (videocapture, WebrtcCapture, VieCapture, ...) and copy the encoded video frame into an I420 YUV buffer, but that seems like a bad idea and is not efficient.



On Thursday, October 16, 2014 at 6:37:57 PM UTC+8, kapil kumar wrote:

xfengtes...@gmail.com

Oct 21, 2014, 2:12:27 AM
to discuss...@googlegroups.com
A follow-up to my previous message: the processTime shown in VideoCaptureImpl::IncomingFrame is larger than 10 ms, e.g., 20 ms.



On Tuesday, October 21, 2014 at 2:03:37 PM UTC+8, xfengtes...@gmail.com wrote:

KaPiL.rIcKy

Oct 21, 2014, 12:42:19 PM
to discuss...@googlegroups.com
- You can capture the YUV frame anywhere before it is handed to the encoder: video_capture-device_* or ViECapturer (see the sketch below). What exactly is your intention?

- The timestamp of an encoded frame will of course be around 20-30 ms later than the captured frame, the same as the time taken for encoding. Was this your question?
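
If you just want to verify the tapped YUV data, a minimal dump helper like the following works (self-contained sketch; it assumes tightly packed planes with stride == width, which real capture buffers may not guarantee):

#include <cstdint>
#include <cstdio>

// Append the Y, U, V planes of one I420 frame to a raw .yuv file.
void DumpI420(std::FILE* out, const uint8_t* y, const uint8_t* u,
              const uint8_t* v, int width, int height) {
  std::fwrite(y, 1, static_cast<size_t>(width) * height, out);      // Y plane
  std::fwrite(u, 1, static_cast<size_t>(width) * height / 4, out);  // U plane
  std::fwrite(v, 1, static_cast<size_t>(width) * height / 4, out);  // V plane
}

int main() {
  const int w = 640, h = 480;
  static uint8_t y[w * h], u[w * h / 4], v[w * h / 4];
  for (auto& px : y) px = 128;  // flat gray test frame
  for (auto& px : u) px = 128;
  for (auto& px : v) px = 128;
  std::FILE* out = std::fopen("out.yuv", "wb");
  if (!out) return 1;
  DumpI420(out, y, u, v, w, h);
  std::fclose(out);
}

The resulting file can be inspected with, e.g., ffplay -f rawvideo -pixel_format yuv420p -video_size 640x480 out.yuv.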






--

~~~~~~~~~~~~~~~~~~~~~~~~~~
Thanks and regards
Kapil Kumar
~~~~~~~~~~~~~~~~~~~~~~~~~~

xfengtes...@gmail.com

Oct 21, 2014, 11:24:14 PM
to discuss...@googlegroups.com
I have a camera that can only provide encoded video frames, such as VP8 or H.264 frame data, with no YUV frames. My plan is to make this camera work with WebRTC, with the camera supplying each encoded VP8 or H.264 frame directly to the WebRTC video engine. I have done some research and have not found a good way to support this.
Do you have any advice or suggestions? I did find a function OnIncomingCapturedEncodedFrame() in the webrtcvideocapture class.


On Wednesday, October 22, 2014 at 12:42:19 AM UTC+8, kapil kumar wrote:

Kaiduan Xie

Oct 22, 2014, 9:58:57 AM
to discuss...@googlegroups.com
xfengtest01.china,

Do you want to support pre-encoded video frames from the camera in the Chrome browser or in a native application?

If you want the Chrome browser to support pre-encoded video frames, then you have to take getUserMedia into consideration too: getUserMedia needs un-encoded YUV data to show local video.

If you want to support pre-encoded video frames in a native application, you can bypass the video encoder in the webrtc media engine and encapsulate the pre-encoded video frames into RTP to send out to the network. You can contact me offline to discuss this further.
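
To picture the bypass concretely: the encoder slot is occupied by a "passthrough" object whose Encode() performs no compression and simply forwards the camera's bitstream to the packetization callback. The sketch below uses stand-in types of my own; the real integration point would be webrtc's external/registered encoder interface, whose exact signatures vary across versions:

#include <cstdint>
#include <cstdio>
#include <vector>

// What the camera hands us: an already-encoded VP8/H.264 frame.
struct PreEncodedFrame {
  uint32_t timestamp;
  bool keyframe;
  std::vector<uint8_t> bitstream;
};

// Stand-in for the callback that feeds the RTP packetizer.
class EncodedImageCallbackModel {
 public:
  void OnEncodedImage(const PreEncodedFrame& f) {
    std::printf("to RTP: ts=%u key=%d bytes=%zu\n",
                f.timestamp, f.keyframe, f.bitstream.size());
  }
};

class PassthroughEncoder {
 public:
  void RegisterCallback(EncodedImageCallbackModel* cb) { callback_ = cb; }

  // A real encoder would compress a raw frame here; our "input" is
  // already a finished bitstream and is forwarded untouched.
  void Encode(const PreEncodedFrame& from_camera) {
    if (callback_) callback_->OnEncodedImage(from_camera);
  }

 private:
  EncodedImageCallbackModel* callback_ = nullptr;
};

int main() {
  EncodedImageCallbackModel packetizer;
  PassthroughEncoder encoder;
  encoder.RegisterCallback(&packetizer);
  encoder.Encode({/*timestamp=*/0, /*keyframe=*/true,
                  std::vector<uint8_t>(9000)});   // fake keyframe
  encoder.Encode({3000, false, std::vector<uint8_t>(1500)});
}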

/Kaiduan

KaPiL.rIcKy

Oct 24, 2014, 5:03:53 AM
to discuss...@googlegroups.com
As Kaiduan mentioned, the main thing is handling local video rendering, since it needs a YUV frame (through webmediaplayer_ms).

So if you need to couple a camera with encoded output in Chrome, then maybe a new interface needs to be defined that passes the encoded frame through the video capture device to ViECapturer (correct me if I am wrong). And for local video, the frame needs to be decoded for playback (meaning one decode for local video and one for remote).

- Is there any plan in webrtc/chromium to handle this case?
(I saw a similar post some time back about handling encoded frames directly from the camera.)
- Has anyone tried implementing this approach?
- Or, if someone does do it, will it be accepted upstream?

Thanks,
Kapil Kumar


xfengtes...@gmail.com

Oct 26, 2014, 10:25:09 PM
to discuss...@googlegroups.com
Thanks Kaiduan & Kapil,
My use case is supporting pre-encoded video frames in a native application.
Based on my current research, it seems we can feed the pre-encoded video frames into a dummy encoder, and the encoder passes the pre-encoded video frames on into RTP to be sent out to the network.

> - Is there any plan in webrtc/chromium to handle this case?
> (I saw a similar post some time back about handling encoded frames directly from the camera.)

Dear Kapil, could you provide any suggestions for handling encoded frames directly from the camera? It would be very much appreciated.



On Friday, October 24, 2014 at 5:03:53 PM UTC+8, kapil kumar wrote:

KaPiL.rIcKy

Oct 30, 2014, 12:45:09 PM
to discuss...@googlegroups.com
Sorry to say, but I have not done anything for this use case yet :(
If you work something out, please share it if possible.

Cheers,
KK