Hi,
My replies inline:
2014-06-04 14:04 GMT+02:00 james gordon :
>
> Video capture is currently done using WebRTC and the audio with either Java
> or OpenSl (correct?).
Yep, that's correct.
> Is there any reason why WebRTC is not used for the audio?
The CSipSimple audio code was written before the WebRTC code was
released. I also worked myself on a WebRTC project for a big telecom
company and we used the audio stack from the WebRTC project there. The
results were (about one year ago) not as good as the CSipSimple
implementation in terms of support for various devices, and totally
similar in terms of performance. There is no magic, the code is pretty
similar ;). The only big difference is that the CSipSimple
implementation is made to bind directly to the pjsip callbacks and
approach. Integrating WebRTC would mean integrating another
abstraction layer on top of the already existing pjsip abstraction
layer. So the current implementation (proven to work with many
devices) talks directly to the Java and OpenSL ES APIs... the decision
was taken not to try to integrate WebRTC on this point.
For video, things were very different: nothing was available in
CSipSimple yet (except some proofs of concept not made with the OpenGL
APIs), and the WebRTC code had some features to support more devices
with video cameras.
> As if it was it
> might be possible to use the VoEVideoSync from WebRTC, do you think this is
> possible?
I don't think so. WebRTC is somehow equivalent to "pjmedia" (the
module in pjsip responsible for managing A/V media). The audio/video
sync has to be done in a central point that you will not be able to
override in pjmedia (unless you patch pjmedia... but then it would
probably be just as fast to implement the feature in pjmedia itself, I
think).
However, in case you'd like to go the way of completely integrating
WebRTC, there is a solution offered by pjsip: add your own media
adapter based on WebRTC. This way you completely replace pjmedia with
the WebRTC media layer and use pjsip only for negotiation.
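To make that split concrete, here is a minimal, self-contained sketch
of the idea. All names in it (NegotiatedMedia, WebRtcMediaBridge) are
invented for illustration only, not pjsip or WebRTC APIs; the point is
just that pjsip does the SDP negotiation and the result is handed to a
WebRTC-based media layer, which then also owns the A/V sync:

// Hypothetical sketch of "pjsip for signalling, WebRTC for media".
// None of these types exist in pjsip, WebRTC or CSipSimple.
#include <cstdint>
#include <iostream>
#include <string>

// What pjsip gives you once SDP negotiation is done: where to send
// media and which codec was agreed on.
struct NegotiatedMedia {
    std::string remote_ip;      // remote RTP address from the SDP
    uint16_t    remote_port;    // remote RTP port
    std::string codec;          // negotiated codec name, e.g. "VP8"
    int         payload_type;   // negotiated RTP payload type
};

// Thin wrapper around the WebRTC media layer (voice/video engines).
class WebRtcMediaBridge {
public:
    // Start sending/receiving using the parameters pjsip negotiated.
    void start(const NegotiatedMedia& m) {
        // Here you would create the WebRTC audio/video channels, set
        // the codec and remote address, and start the streams. Since
        // audio and video live in the same engine, lip sync can be
        // handled by WebRTC itself instead of pjmedia.
        std::cout << "Starting WebRTC media to " << m.remote_ip << ":"
                  << m.remote_port << " codec=" << m.codec << "\n";
        running_ = true;
    }

    void stop() { running_ = false; }

private:
    bool running_ = false;
};

// In a real integration this would be driven from pjsip's call/media
// state callbacks: once the call is confirmed and SDP is negotiated,
// hand the result to the WebRTC bridge. pjmedia is bypassed entirely
// for the media path.
int main() {
    NegotiatedMedia media{"192.0.2.10", 4000, "VP8", 100};  // example
    WebRtcMediaBridge bridge;
    bridge.start(media);
    bridge.stop();
    return 0;
}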
>
> Finally, I have noticed in the webrtc_android_video_render_dev.cpp that
> 400ms is added to the render time, is there a reason for this? see below;
>
> // TODO : shall we try to use frame timestamp?
> WebRtc_Word64 nowMs = TickTime::MillisecondTimestamp();
> stream->_videoFrame.SetRenderTime( nowMs + 400);
> stream->_videoFrame.SetTimeStamp(frame->timestamp.u64/*nowMs*/);
> stream->_renderProvider->RenderFrame(0, stream->_videoFrame);
>
It looks like a value I copy-pasted from some WebRTC sample, and it's
necessary to make the Android rendering layer consider that this frame
should not be skipped (basically, when the rendering layer of WebRTC
gets a frame it can decide to drop frames if it looks like it's
currently overrunning).
But if I added a TODO before it, it's because I was not sure about the
way it's done.
And indeed 400 ms seems pretty big, and it should actually even change
depending on the video frame size + the CPU load of your device (if
your device is able to reach the rendering loop very quickly, it
should be able to process the frame sooner).
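To make that concrete, here is a small self-contained sketch (not
CSipSimple or WebRTC code, all names are mine) of one way the fixed
400 ms could be replaced by a measured value: keep a smoothed estimate
of how long the render path actually takes on the current device and
add a safety margin. Where exactly it plugs into
webrtc_android_video_render_dev.cpp is only indicated in a comment,
since that depends on the WebRTC snapshot you use:

// Hypothetical adaptive render-delay estimator (illustrative only).
#include <algorithm>
#include <cstdint>
#include <iostream>

class RenderDelayEstimator {
public:
    // Feed it the measured time (ms) between handing a frame to the
    // renderer and the renderer actually drawing it.
    void AddMeasurement(int64_t render_latency_ms) {
        // Exponential smoothing so one slow frame doesn't blow it up.
        smoothed_ms_ = (smoothed_ms_ * 7 + render_latency_ms) / 8;
    }

    // Delay to add to "now" when stamping the frame's render time:
    // smoothed latency plus a margin, clamped to a sane range.
    int64_t RenderDelayMs() const {
        const int64_t kMarginMs = 50;
        return std::clamp<int64_t>(smoothed_ms_ + kMarginMs, kMinMs, kMaxMs);
    }

private:
    static constexpr int64_t kMinMs = 50;   // fast devices: near real time
    static constexpr int64_t kMaxMs = 400;  // slow devices: old behaviour
    int64_t smoothed_ms_ = kMaxMs;          // start pessimistic
};

int main() {
    RenderDelayEstimator estimator;
    // Pretend the device renders frames in ~30 ms.
    for (int i = 0; i < 20; ++i) estimator.AddMeasurement(30);
    std::cout << "render delay: " << estimator.RenderDelayMs() << " ms\n";
    // In webrtc_android_video_render_dev.cpp this value would replace
    // the literal 400:
    //   stream->_videoFrame.SetRenderTime(nowMs + estimator.RenderDelayMs());
    return 0;
}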
>
> In webrtc_android_video_render_dev.cpp at line 558 the frame timestamp is
> set, if I -400ms from this the video is pretty much in sync.
>
Well, if you have a device with high CPU capabilities I think
rendering is almost real time, so that works. However, if you plan to
get things running on other (older) devices it may be risky.
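If you do change it, rather than hard-coding your -400 ms, one option
is to keep the offset as a single tunable value chosen per device, so
older hardware keeps the conservative default. A tiny sketch of that
idea (the names and the core-count heuristic are purely illustrative,
not CSipSimple code):

#include <cstdint>
#include <iostream>
#include <thread>

// Pick the extra render delay (ms) added when stamping a frame's
// render time. Illustrative heuristic only: fast multi-core devices
// get a small offset, older ones keep the conservative 400 ms.
int64_t RenderOffsetMs() {
    const unsigned cores = std::thread::hardware_concurrency();
    return (cores >= 4) ? 50 : 400;
}

int main() {
    std::cout << "using render offset of " << RenderOffsetMs() << " ms\n";
    // This value would replace the literal 400 in the SetRenderTime()
    // call quoted above from webrtc_android_video_render_dev.cpp.
    return 0;
}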