RTCVideoFrame and Image format conversion on iOS


alfredj...@gmail.com

Aug 21, 2017, 10:29:40 AM
to discuss-webrtc
Happy Solar Eclipse day, everyone!!!

I am using a library which gives me an RGB565 image stream. Each frame is a byte array, and I have all the other information I need, such as the resolution. I am having a hard time converting this byte array to the CVPixelBuffer (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) that WebRTC expects; with everything I have tried so far, I end up displaying garbage on the remote side.

Next I tried RTCVideoFrame::newI420VideoFrame, hoping it would convert the image to I420 format, but that also failed. Then I thought of not passing a CVPixelBuffer to WebRTC at all, and instead modifying the WebRTC library and calling libyuv::ConvertToI420() from my VideoCapturer. This is what I do in my Windows implementation as well (there I am using a somewhat older version of WebRTC): I first create an empty webrtc::VideoFrame with the given width and height, and then call ConvertToI420. This works on Windows, but on iOS I don't see a similar call in RTCVideoFrame; it seems a CVPixelBuffer is required. Could someone tell me how to create an RTCVideoFrame that I can inject into WebRTC?

Thanks,
AJ.
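
For reference: libyuv (which WebRTC bundles) has a direct RGB565ToI420 entry point, and I420ToNV12 can then fill the bi-planar CVPixelBuffer mentioned above. A minimal C++ sketch along those lines, assuming little-endian RGB565 input and even frame dimensions (RGB565ToI420 and I420ToNV12 are real libyuv calls; the surrounding buffer handling and function name are illustrative):

#include <CoreVideo/CoreVideo.h>
#include <libyuv.h>
#include <vector>

// Convert one RGB565 frame into an NV12 CVPixelBuffer
// (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange).
CVPixelBufferRef PixelBufferFromRGB565 (const uint8_t *rgb565, int width, int height) {
    // Intermediate I420 planes; libyuv has no single-step RGB565 -> NV12 path.
    // Strides below assume even width/height.
    std::vector<uint8_t> y (width * height);
    std::vector<uint8_t> u ((width / 2) * (height / 2));
    std::vector<uint8_t> v ((width / 2) * (height / 2));
    libyuv::RGB565ToI420 (rgb565, width * 2,  // 2 bytes per RGB565 pixel
                          y.data (), width,
                          u.data (), width / 2,
                          v.data (), width / 2,
                          width, height);

    // NULL attributes keep the sketch short; for capture use you may want
    // IOSurface-backed pixel buffer attributes instead.
    CVPixelBufferRef pixelBuffer = nullptr;
    CVPixelBufferCreate (kCFAllocatorDefault, width, height,
                         kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                         nullptr, &pixelBuffer);
    CVPixelBufferLockBaseAddress (pixelBuffer, 0);
    libyuv::I420ToNV12 (y.data (), width,
                        u.data (), width / 2,
                        v.data (), width / 2,
                        (uint8_t *) CVPixelBufferGetBaseAddressOfPlane (pixelBuffer, 0),
                        (int) CVPixelBufferGetBytesPerRowOfPlane (pixelBuffer, 0),
                        (uint8_t *) CVPixelBufferGetBaseAddressOfPlane (pixelBuffer, 1),
                        (int) CVPixelBufferGetBytesPerRowOfPlane (pixelBuffer, 1),
                        width, height);
    CVPixelBufferUnlockBaseAddress (pixelBuffer, 0);
    return pixelBuffer;  // caller releases with CVPixelBufferRelease
}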

Štěpán Votava

Feb 19, 2018, 11:19:36 AM
to discuss-webrtc
Have you been able to convert a CVPixelBuffer to I420?
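
For this direction, libyuv's NV12ToI420 maps directly onto the two planes of a bi-planar CVPixelBuffer. A minimal sketch, assuming the buffer really is NV12, the width is even, and the caller provides I420 destination planes sized for the frame:

#include <CoreVideo/CoreVideo.h>
#include <libyuv.h>

// Copy an NV12 CVPixelBuffer into caller-provided I420 planes.
void I420FromPixelBuffer (CVPixelBufferRef pb, uint8_t *dstY, uint8_t *dstU, uint8_t *dstV) {
    CVPixelBufferLockBaseAddress (pb, kCVPixelBufferLock_ReadOnly);
    int width = (int) CVPixelBufferGetWidth (pb);
    int height = (int) CVPixelBufferGetHeight (pb);
    libyuv::NV12ToI420 (
        (const uint8_t *) CVPixelBufferGetBaseAddressOfPlane (pb, 0),
        (int) CVPixelBufferGetBytesPerRowOfPlane (pb, 0),
        (const uint8_t *) CVPixelBufferGetBaseAddressOfPlane (pb, 1),
        (int) CVPixelBufferGetBytesPerRowOfPlane (pb, 1),
        dstY, width,
        dstU, width / 2,
        dstV, width / 2,
        width, height);
    CVPixelBufferUnlockBaseAddress (pb, kCVPixelBufferLock_ReadOnly);
}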

Patrick Weber

Mar 25, 2018, 5:26:27 AM
to discuss-webrtc
Don't know if you ever got an answer on this. I'm just getting started with WebRTC, so I don't really know yet if the libraries support this, but it is easily done with FFmpeg. First you need to set up a color space conversion context:

// This is the scaling context, used to convert from RGB to YUV420P format.
// NOTE: While FFmpeg refers to things in terms of "pixel format", what is really
// happening is a color space conversion. See "Video Demystified", Chapter 3 for
// more details. We are converting from the RGB color space of the camera (pixFmt
// in the call below) to the YUV color space (INTERNAL_FORMAT), which the encoding
// standards (MPEG-1/2/4 and H.264/265) specify as input.
#define INTERNAL_FORMAT AV_PIX_FMT_YUV420P

_swsContext = sws_getContext (m_iFrameWidth, m_iFrameHeight, pixFmt,
                              _avFrame->width, _avFrame->height, INTERNAL_FORMAT,
                              SWS_BICUBIC, 0, 0, 0);
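
For the RGB565 source described at the top of the thread, pixFmt here would presumably be AV_PIX_FMT_RGB565LE (or AV_PIX_FMT_RGB565BE, depending on the byte order of the stream).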

Then you can do the conversion using sws_scale:

// Source is packed RGB: a single plane, so one data pointer and one stride.
uint8_t *inData [1] = {m_pImageBuffer};
int lineSize [1] = {m_iFrameBytesPerPixel * m_iFrameWidth};

sws_scale (_swsContext, inData, lineSize, 0, m_iFrameHeight,
           _avFrame->data, _avFrame->linesize);

I haven't tried this yet, but if WebRTC doesn't support the color space conversion natively, you should be able to inject the avFrame->data into WebRTC. Hope this helps, cuz I need to do the same thing!
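
On the native (C++) side, one way to hand those planes to WebRTC is webrtc::I420Buffer::Copy, which takes the three plane pointers and strides. The exact VideoFrame constructor varies across WebRTC revisions, so treat this as a sketch:

// Headers (paths vary by checkout): api/video/i420_buffer.h, api/video/video_frame.h
rtc::scoped_refptr<webrtc::I420Buffer> buffer = webrtc::I420Buffer::Copy (
    _avFrame->width, _avFrame->height,
    _avFrame->data[0], _avFrame->linesize[0],   // Y plane
    _avFrame->data[1], _avFrame->linesize[1],   // U plane
    _avFrame->data[2], _avFrame->linesize[2]);  // V plane
webrtc::VideoFrame frame (buffer, webrtc::kVideoRotation_0,
                          timestamp_us);  // capture time in microseconds (assumed)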
