I don't know if you ever got an answer on this. I'm just getting started with WebRTC myself, so I don't know yet whether the libraries support this, but it is easily done with FFmpeg. First you need to set up a color space conversion context:
// This is the scaling context, used to convert from RGB to YUV420P format.
// NOTE: While FFmpeg refers to things in terms of "pixel format", what is really
// happening is a color space conversion. See "Video Demystified", Chapter 3 for more
// details. We are converting from the RGB color space of the camera (pixFmt in the
// call below) to the YUV color space (INTERNAL_FORMAT) which the encoding standards
// (MPEG-1/2/4 and H.264/265) specify as input.
// (INTERNAL_FORMAT = AV_PIX_FMT_YUV420P)
_swsContext = sws_getContext (m_iFrameWidth, m_iFrameHeight, pixFmt,
                              _avFrame->width, _avFrame->height, INTERNAL_FORMAT,
                              SWS_BICUBIC, nullptr, nullptr, nullptr);
Then you can do the conversion using sws_scale:
uint8_t *inData [1] = {m_pImageBuffer};                      // packed RGB source buffer
int lineSize [1] = {m_iFrameBytesPerPixel * m_iFrameWidth};  // bytes per source row
sws_scale (_swsContext, inData, lineSize, 0, m_iFrameHeight,
           _avFrame->data, _avFrame->linesize);
I haven't tried this yet, but if WebRTC doesn't support the color space conversion natively, you should be able to inject `_avFrame->data` into WebRTC. Hope this helps, because I need to do the same thing!