Custom video capture (on Linux)


Rob Agar

Oct 14, 2016, 11:26:35 AM
to discuss-webrtc
(cross posted from webrtcbuilds)

Hi all!

I'm after some advice on how best to pass raw video frames to WebRTC in the current system. We have working code built on the old Chromium codebase from webrtcbuilds-builder rev a9046d0 (see below), but it seems there has been some fairly major refactoring since then. Apart from anything else, the DeviceManager we used to create the video capturer has gone completely.

The basic question now is: is it even possible to register a custom source of video frame data? Looking at video_capture_linux.cc, it appears that on Linux video capture is hard-coded to the /dev/videoX devices only, with no allowance for other sources. Please tell me I'm missing something!

Here's our previous working (if kludgy) code:

void PeerConnectionImpl::initMediaStream(webrtc::MediaConstraintsInterface *connectionConstraints)
{
  const char *mediaStreamLabel = "the_media_stream";
  scoped_refptr<webrtc::MediaStreamInterface> ms = factory->CreateLocalMediaStream(mediaStreamLabel);

  mediaStream = ms.get();
  mediaStream->AddRef();

  rtc::scoped_refptr<webrtc::VideoSourceInterface> vs;
  rtc::scoped_refptr<webrtc::VideoTrackInterface> vt;
  if (withVideo)
  {
    const char *videoTrackLabel = "the_video_track";

    // create our custom video capturer
    cricket::VideoCapturer *vc = initVideoCapturer();

    vs = factory->CreateVideoSource(vc, NULL);
    vt = factory->CreateVideoTrack(videoTrackLabel, vs);

    mediaStream->AddTrack(vt);
  }

  connection->AddStream(mediaStream);
}

cricket::VideoCapturer* PeerConnectionImpl::initVideoCapturer()
{
  rtc::scoped_ptr<cricket::DeviceManagerInterface> deviceManager(cricket::DeviceManagerFactory::Create());

  // use our factory which only creates our custom capturer
  RawVideoCapturerFactory* f = new RawVideoCapturerFactory();
  cricket::VideoDeviceCapturerFactory *cf = static_cast<cricket::VideoDeviceCapturerFactory*>(static_cast<void*>(f));
  deviceManager->SetVideoDeviceCapturerFactory(cf);

  // create the capturer
  std::vector<cricket::Device> devices;
  deviceManager->GetVideoCaptureDevices(&devices);
  cricket::VideoCapturer* c = NULL;
  for (auto d : devices)
  {
    c = deviceManager->CreateVideoCapturer(d);
    if (c != NULL)
    {
      // this must be our custom one
      videoCapturer = static_cast<RawVideoCapturer*>(c);
      break;
    }
  }

  return c;
}

Niels Moller

Oct 18, 2016, 6:06:53 AM
to discuss...@googlegroups.com
On Fri, Oct 14, 2016 at 5:26 PM, Rob Agar <ea.ro...@gmail.com> wrote:
(cross posted from webrtcbuilds)

The basic question now is: is it even possible to register a custom source of video frame data? Looking at video_capture_linux.cc, it appears that on Linux video capture is hard-coded to the /dev/videoX devices only, with no allowance for other sources. Please tell me I'm missing something!

cricket::VideoCapturer is soon to be deprecated. The way to do a custom capturer now is to implement VideoTrackSourceInterface. Video sinks (typically the encoder and local renderer) register themselves using the AddOrUpdateSink method, and you are supposed to call each sink's OnFrame method for each captured frame.

If you want to have video adaptation (i.e., automatic reduction of resolution or frame rate if the encoder can't keep up with the original resolution and frame rate), also have a look at the class AdaptedVideoTrackSource.
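For illustration, here is a rough, untested sketch of a source implementing VideoTrackSourceInterface directly, assuming the webrtc::Notifier and rtc::VideoBroadcaster helpers from around this revision (RawVideoSource and DeliverFrame are made-up names, and the include paths may differ between checkouts):

#include "webrtc/api/mediastreaminterface.h"
#include "webrtc/api/notifier.h"
#include "webrtc/media/base/videobroadcaster.h"

// Hypothetical custom source: sinks subscribe, and our own capture code
// pushes raw frames to all of them via DeliverFrame().
class RawVideoSource : public webrtc::Notifier<webrtc::VideoTrackSourceInterface> {
 public:
  // Sinks (typically the encoder and a local renderer) attach here.
  void AddOrUpdateSink(rtc::VideoSinkInterface<webrtc::VideoFrame>* sink,
                       const rtc::VideoSinkWants& wants) override {
    broadcaster_.AddOrUpdateSink(sink, wants);
  }
  void RemoveSink(rtc::VideoSinkInterface<webrtc::VideoFrame>* sink) override {
    broadcaster_.RemoveSink(sink);
  }

  // Call this from your own capture thread for every raw frame you produce.
  void DeliverFrame(const webrtc::VideoFrame& frame) {
    broadcaster_.OnFrame(frame);
  }

  // Remaining pure virtuals of the interface, stubbed for brevity.
  SourceState state() const override { return kLive; }
  bool remote() const override { return false; }
  bool is_screencast() const override { return false; }
  rtc::Optional<bool> needs_denoising() const override {
    return rtc::Optional<bool>(false);
  }
  bool GetStats(Stats* stats) override { return false; }

 private:
  rtc::VideoBroadcaster broadcaster_;
};

You would instantiate it as new rtc::RefCountedObject<RawVideoSource>() and pass it straight to CreateVideoTrack.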
 
Here's our previous working (if kludgy) code:

void PeerConnectionImpl::initMediaStream(webrtc::MediaConstraintsInterface *connectionConstraints)
{
  const char *mediaStreamLabel = "the_media_stream";
  scoped_refptr<webrtc::MediaStreamInterface> ms = factory->CreateLocalMediaStream(mediaStreamLabel);

  mediaStream = ms.get();
  mediaStream->AddRef();

  rtc::scoped_refptr<webrtc::VideoSourceInterface> vs;
  rtc::scoped_refptr<webrtc::VideoTrackInterface> vt;
  if (withVideo)
  {
    const char *videoTrackLabel = "the_video_track";

    // create our custom video capturer
    cricket::VideoCapturer *vc = initVideoCapturer();

    vs = factory->CreateVideoSource(vc, NULL);
    vt = factory->CreateVideoTrack(videoTrackLabel, vs);

So you don't need vc, and you shouldn't call CreateVideoSource. Instead, let vs be an instance of your own implementation of VideoTrackSourceInterface, and pass that to CreateVideoTrack.

Note that you also no longer need a custom VideoFrameFactory. I hope you find the new organization of these things saner than the old way.

A good example of a recently rewritten custom capturer is the class AndroidVideoTrackSource.

Regards,
/Niels

Rob Agar

Oct 19, 2016, 5:50:26 AM
to discuss-webrtc
Awesome, thanks Niels. You've given me hope!
 

Pablo Iris

Oct 24, 2016, 12:38:21 PM
to discuss-webrtc
Hello guys,

Basically I wanted to do the same thing. Thanks Niels for your help.

Before starting to implement a new VideoTrackSourceInterface, I want to test the FakeVideoTrackSource first. At the moment I have modified the class RTCAVFoundationVideoSource with this code:


#import "IrisRTCAVFoundationVideoSource+Private.h"
#import "RTCMediaConstraints+Private.h"
#import "RTCPeerConnectionFactory+Private.h"
#import "RTCVideoSource+Private.h"

#include "webrtc/media/base/fakevideocapturer.h"
#include "webrtc/api/test/fakevideotracksource.h"
//#include "webrtc/media/base/adaptedvideotracksource.h"

@implementation IrisRTCAVFoundationVideoSource {
  cricket::FakeVideoCapturer *_capturer;
}

- (instancetype)initWithFactory:(RTCPeerConnectionFactory *)factory
                    constraints:(RTCMediaConstraints *)constraints {
  NSParameterAssert(factory);
  // We pass ownership of the capturer to the source, but since we own
  // the source, it should be ok to keep a raw pointer to the
  // capturer.
  //_capturer = new cricket::FakeVideoCapturer();
  rtc::scoped_refptr<webrtc::VideoTrackSourceInterface> source =
      webrtc::FakeVideoTrackSource::Create();

  return [super initWithNativeVideoSource:source];
}

@end


Can anyone tell me why it's not showing anything at all? I can see a new peer appearing in the browser, but the preview is blank. Thanks guys, I'm stuck.

Niels Moller

Oct 25, 2016, 3:21:51 AM
to discuss...@googlegroups.com
On Mon, Oct 24, 2016 at 6:38 PM, Pablo Iris <pa...@irisconnect.co.uk> wrote:
Before starting to implement a new VideoTrackSourceInterface, I want to test the FakeVideoTrackSource first.

I'm not very familiar with all these test classes (and not at all with the objc interfaces).

FakeVideoTrackSource looks like a source backed by a FakeVideoCapturer. You're aware that class doesn't generate any frames spontaneously? You'd have to call CaptureFrame for each frame, something like

  source = FakeVideoTrackSource::Create();
  ...
  source->fake_video_capturer()->CaptureFrame()

For testing, you might want to try using a VideoCapturerTrackSource backed by a FrameGeneratorCapturer. If I understand correctly, that will create a thread which inserts frames into the pipeline for you. It would be nice to have a VideoTrackSource made specifically for testing, generating a test video stream, but I suspect the FrameGeneratorCapturer is the closest we have now.
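In the same spirit, a minimal untested sketch of driving the FakeVideoCapturer from a dedicated thread, so frames arrive continuously instead of only on a button press (the running flag and the ~30 fps pacing are illustrative, not part of any webrtc API):

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};  // your own shutdown flag

void PumpFakeFrames(rtc::scoped_refptr<webrtc::FakeVideoTrackSource> source) {
  while (running) {
    // Each call synthesizes one fake frame and pushes it into the pipeline.
    source->fake_video_capturer()->CaptureFrame();
    std::this_thread::sleep_for(std::chrono::milliseconds(33));  // ~30 fps
  }
}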
 
Regards,
/Niels

Pablo Iris

Oct 25, 2016, 4:10:28 AM
to discuss-webrtc

Hello Niels,

Thanks again.

I have been trying this code, but still no luck:



#import "IrisRTCAVFoundationVideoSource+Private.h"
#import "RTCMediaConstraints+Private.h"
#import "RTCPeerConnectionFactory+Private.h"
#import "RTCVideoSource+Private.h"

#include "webrtc/media/base/fakevideocapturer.h"
#include "webrtc/api/test/fakevideotracksource.h"
//#include "webrtc/media/base/adaptedvideotracksource.h"

@implementation IrisRTCAVFoundationVideoSource {
  cricket::FakeVideoCapturer *_capturer;
  rtc::scoped_refptr<webrtc::FakeVideoTrackSource> _trackSource;
}

- (instancetype)initWithFactory:(RTCPeerConnectionFactory *)factory
                    constraints:(RTCMediaConstraints *)constraints {
  NSParameterAssert(factory);
  // We pass ownership of the capturer to the source, but since we own
  // the source, it should be ok to keep a raw pointer to the
  // capturer.
  //_capturer = new cricket::FakeVideoCapturer();
  _trackSource = webrtc::FakeVideoTrackSource::Create();
  rtc::scoped_refptr<webrtc::FakeVideoTrackSource> source = _trackSource;
  //factory.nativeFactory->CreateVideoSource(
  //    _capturer, constraints.nativeConstraints.get());

  return [super initWithNativeVideoSource:source];
}

/*- (void)sendFrame:(CVImageBufferRef)pixelBuffer {
    _trackSource->fake_video_capturer()->CaptureFrame();
}*/

- (void)sendSomething {
  _trackSource->fake_video_capturer()->CaptureFrame();
}

@end


I call the sendSomething method when I press a button in the UI, but I don't receive anything on the other side.

I'm doing this only to test that a custom VideoTrackSource works. Once I get this working, I will build a custom VideoTrackSource to send frames directly from the iOS app. Do you have an example, or any idea why my code is not working?

Thanks

Niels Moller

Oct 25, 2016, 6:12:51 AM
to discuss...@googlegroups.com
On Tue, Oct 25, 2016 at 10:10 AM, Pablo Iris <pa...@irisconnect.co.uk> wrote:
>
>
> I have been trying this code but still no luck
...
> I call sendSomething method when I press a button from the UI. But I don't receive anything in the other side.
>
> I'm doing this only for test that a custom VideoTrackSource is working. Once I get this working I will build a custom VideoTrackSource to send frames directly from the iOS app. Do you have any example or do you have any idea why my code is not working?

To debug, I'd suggest setting a breakpoint on sendSomething, and then
step down to see what happens to the frame. If the pipeline was
working, one would expect calls to OnFrame on various objects,
ultimately getting into the video encoder object. Maybe logs can give
you some clue. You could also add debug logging or breakpoints on
AddOrUpdateSink, to verify that there's some sink to deliver the
frames to.

I'm not aware of any example on how to do this on iOS. For a non-iOS
example, check AndroidVideoTrackSource.

It might also make the code a bit clearer to skip the
FakeVideoCapturer and FakeVideoTrackSource classes, and implement
VideoTrackSourceInterface directly; in your setup, I think they add
complexity, for a pretty small benefit.
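For what it's worth, a hedged sketch of how the Objective-C++ wrapper above might look with a direct VideoTrackSourceInterface implementation, skipping the capturer classes entirely (RawVideoSource is the hypothetical class sketched earlier in this thread, not an existing webrtc class):

- (instancetype)initWithFactory:(RTCPeerConnectionFactory *)factory
                    constraints:(RTCMediaConstraints *)constraints {
  NSParameterAssert(factory);
  // Our own VideoTrackSourceInterface implementation; no capturer involved.
  rtc::scoped_refptr<webrtc::VideoTrackSourceInterface> source(
      new rtc::RefCountedObject<RawVideoSource>());
  return [super initWithNativeVideoSource:source];
}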

Regards,
/Niels

Diego Bonesso

Mar 6, 2018, 9:57:48 AM
to discuss-webrtc

I tried implementing the interface and it didn't work. I implemented the interface, and when I have a frame I call OnFrame(const webrtc::VideoFrame& frame) as follows:

void StreamSource::OnFrame(const webrtc::VideoFrame& frame)
{
  rtc::scoped_refptr<webrtc::VideoFrameBuffer> buffer(frame.video_frame_buffer());
  broadcaster_.OnFrame(frame);
}


In conductor.cc, in AddStreams(), I create a video source with the following code:

rtc::scoped_refptr<webrtc::VideoTrackInterface> video_track(
    peer_connection_factory_->CreateVideoTrack(kVideoLabel,
                                               new mystream::StreamSource()));

Niels Moller

Mar 7, 2018, 6:37:14 AM
to discuss...@googlegroups.com
On Tue, Mar 6, 2018 at 1:55 PM, Diego Bonesso <diego....@gmail.com> wrote:
> I tried implementing the Interface and didn't work. I implemented the
> interface an when I have a frame then called the event OnFrame(const
> webrtc::VideoFrame& frame) as following:
>
> void StreamSource::OnFrame(const webrtc::VideoFrame& frame)
> {
> rtc::scoped_refptr<webrtc::VideoFrameBuffer
> buffer(frame.video_frame_buffer());
> broadcaster_.OnFrame(frame);
>
> }

That looks somewhat strange. There is nothing in webrtc to call
OnFrame on a *source*. A source has the AddOrUpdateSink method, and is
expected to spontaneously call OnFrame on registered *sinks*, e.g, by
spawning a separate thread reading the camera.

Now, the helper class AdaptedVideoTrackSource has an OnFrame method,
which can handle rotation for child classes. Child classes are then
expected to call this method spontaneously, and it will rotate frames
if appropriate and broadcast to registered sinks.
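A bare-bones, untested sketch of that pattern, assuming the rtc::AdaptedVideoTrackSource base from webrtc/media/base/adaptedvideotracksource.h (StreamSource and OnFrameCaptured are illustrative names, and the exact set of pure virtuals may vary by revision):

class StreamSource : public rtc::AdaptedVideoTrackSource {
 public:
  // Called spontaneously from your own capture thread.
  void OnFrameCaptured(const webrtc::VideoFrame& frame) {
    // The base-class OnFrame applies rotation/adaptation as needed and
    // broadcasts to every sink registered via AddOrUpdateSink.
    OnFrame(frame);
  }

  // Pure virtuals inherited via VideoTrackSourceInterface, stubbed.
  SourceState state() const override { return kLive; }
  bool remote() const override { return false; }
  bool is_screencast() const override { return false; }
  rtc::Optional<bool> needs_denoising() const override {
    return rtc::Optional<bool>(false);
  }
};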

I'd suggest you add debug printouts and/or breakpoints where you
insert frames into the pipeline (i.e., the calls which appear
spontaneous to webrtc), and try to follow how far they get.

Diego Bonesso

Mar 13, 2018, 5:59:11 AM
to discuss-webrtc
Thanks, I used the base class AdaptedVideoTrackSource and created a method OnFrameCaptured that is called from my thread; in this method I call OnFrame. It works fine!

class StreamSource : public rtc::AdaptedVideoTrackSource
{
  ...
  void OnFrameCaptured(const webrtc::VideoFrame& frame);
  ...
};

void StreamSource::OnFrameCaptured(const webrtc::VideoFrame& frame) {
    OnFrame(frame);
}

Xavier Baró Solé

Aug 20, 2018, 9:38:14 AM
to discuss-webrtc
Hello Diego,

I'm trying to do exactly the same.

I started with an extension of VideoCaptureImpl, using the external interface, but it doesn't seem to send any frames to the peer. Following your discussion, I'm moving to extending AdaptedVideoTrackSource, but it results in an abstract class. Can you share your implementation of the child class as an example?

Thanks in advance.


On Tuesday, 13 March 2018 10:59:11 UTC+1, Diego Bonesso wrote:

Muhammad Firdaus Syawaludin Lubis

Aug 21, 2018, 9:03:24 AM
to discuss-webrtc
I am also doing this now for my project. I am not really sure whether this is the best way to implement it or not, but at least it works for my project:

class CustomVideoTrackSourceTest : public rtc::AdaptedVideoTrackSource {
public:
    CustomVideoTrackSourceTest() {};

    void OnByteBufferFrameCaptured() {
        while (1)
        {
            int width = getMatWidth();
            int height = getMatHeight();
            int adapted_width = getMatWidth();
            int adapted_height = getMatHeight();

            int stride_y = adapted_width;
            int stride_uv = (adapted_width + 1) / 2;
            int target_width = adapted_width;
            int target_height = adapted_height;

            rtc::scoped_refptr<webrtc::I420Buffer> buffer = webrtc::I420Buffer::Create(
                target_width, abs(target_height), stride_y, stride_uv, stride_uv);
            const int conversionResult = libyuv::ConvertToI420(
                static_cast<const uint8_t*>(getMatConvertedData()), adapted_width*adapted_height * 4, buffer.get()->MutableDataY(),  // No cropping
                buffer.get()->StrideY(), buffer.get()->MutableDataU(), buffer.get()->StrideU(),
                buffer.get()->MutableDataV(), buffer.get()->StrideV(), 0, 0,
                width, height, target_width, target_height, libyuv::kRotate0, webrtc::ConvertVideoType(webrtc::VideoType::kABGR));
            OnFrame(webrtc::VideoFrame(buffer, static_cast<webrtc::VideoRotation>(0), 0));
        }
    }

    bool is_screencast() const override { return false; }
    rtc::Optional<bool> needs_denoising() const override {
        return rtc::Optional<bool>(false);
    }
    SourceState state() const override { return SourceState(); }
    bool remote() const override { return false; }
};

As you see, because I don't really know the correct way to implement those virtual functions, I just stubbed them out (as I said, I don't know whether this is correct or not).

And, in the addStreams() function, I modified several lines as follows:

////////////////////////////////////part of AddStreams()////////////////////////////////////
  rtc::scoped_refptr<CustomVideoTrackSourceTest> empty(new rtc::RefCountedObject<CustomVideoTrackSourceTest>());
  std::thread testThread = std::thread(&CustomVideoTrackSourceTest::OnByteBufferFrameCaptured, empty);
  testThread.detach();

  rtc::scoped_refptr<webrtc::VideoTrackInterface> video_track(
      peer_connection_factory_->CreateVideoTrack(
          kVideoLabel, empty));
////////////////////////////////////part of AddStreams()////////////////////////////////////

For now, the obvious fault in my code is that if I disconnect the peer, the thread seems to keep running. Probably a small modification would solve it; see the sketch below.
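One hypothetical tidy-up, assuming you control the thread's lifetime (running_ and capture_thread_ are made-up member names added to CustomVideoTrackSourceTest, not part of the original code):

#include <atomic>
#include <thread>

// Suggested members of CustomVideoTrackSourceTest:
std::atomic<bool> running_{true};
std::thread capture_thread_;

void Start() {
  capture_thread_ = std::thread([this] {
    while (running_) {          // replaces the unbounded while (1)
      // ... convert the frame and call OnFrame() as above ...
    }
  });
}

void Stop() {                   // call this when the peer disconnects
  running_ = false;             // let the loop exit...
  if (capture_thread_.joinable())
    capture_thread_.join();     // ...and wait for it before tearing down
}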

Anyway, I am doing it on Windows, but I think it should work for Linux too.