Use an existing AVCaptureSession with WebRTC on iOS devices


Pablo Martinez Piles

Feb 1, 2015, 12:51:55 PM
to discuss...@googlegroups.com
Hello guys, I'm going to explain my current project and what I want to do.

Current project:
I have an iOS app that currently records video and saves it to disk. I'm using Apple's AVFoundation framework to record and to show the capture preview on the device.

What I want to do:

I want to keep the current functionality while adding WebRTC. The problem is that the WebRTC project already uses its own AVCaptureSession, and you can't have two capture sessions in the same app.

I've asked around about this, but it seems to be complicated. Someone suggested writing a subclass of cricket::VideoCapturer, but I'm not sure whether I'd have to rewrite every class behind it in C++. I also saw that the AVCaptureSession lives in rtc_video_capturer_ios.h, but I don't understand how I can pass my own AVCaptureSession to that class from my current project.

Does anyone have an example of this? I could use some guidance.

Thanks so much for your help.

you...@gmail.com

Feb 2, 2015, 4:25:41 AM
to discuss...@googlegroups.com
Hi Pablo,

I don't know the details of your app, so it is hard to suggest something concrete.
WebRTC manages its AVCaptureSession in rtc_video_capture_ios_objc.h/mm, so I don't think passing your AVCaptureSession to WebRTC is a good idea: you would have to make sure the session has the parameters you want and that those are compatible with what WebRTC wants to use.

1. You could think about it the other way around: why not pass the AVCaptureSession from WebRTC to your app, or even just send the frames? For example, you could post notifications with the frames from captureOutput:didOutputSampleBuffer:fromConnection: in rtc_video_capture_ios_objc.mm to your app and then encode and save them there (a sketch follows this list).

2. If you use the C++ API, then you could indeed subclass cricket::VideoCapturer. You will probably have to modify things starting from the DeviceManager (cricket::DeviceManagerInterface), which creates capturers, so that it creates the proper capturer for you. Then you could just feed frames from your app to your cricket::VideoCapturer subclass, which will pass them on to WebRTC.

3. You could move the video functionality of your app directly into rtc_video_capture_ios_objc.h/mm, where you have the AVCaptureSession created by WebRTC.
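
For option 1, a rough sketch of what I mean (untested, and the notification name is one I just made up):

// In rtc_video_capture_ios_objc.mm, inside WebRTC's capture callback.
// kMyFrameNotification is a made-up name; pick your own.
static NSString* const kMyFrameNotification = @"MyAppDidCaptureVideoFrame";

- (void)captureOutput:(AVCaptureOutput*)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection*)connection {
  // ... keep the existing WebRTC code that feeds the frame to _owner ...

  // NSNotificationCenter delivers synchronously on the posting thread,
  // so the sample buffer is still valid while the observers run. Any
  // observer that keeps it for later must CFRetain/CFRelease it.
  [[NSNotificationCenter defaultCenter]
      postNotificationName:kMyFrameNotification
                    object:(__bridge id)sampleBuffer];
}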

Pablo Martinez Piles

Feb 2, 2015, 5:55:48 AM
to discuss...@googlegroups.com
Hi,

Thanks for your response!

I have a couple of questions about your 3 points (thanks for those):

1) I would like to use my own AVCaptureSession because I show the preview to the user before starting to record or connecting to the WebRTC server. With this solution I would have to wait until the connection with the server has been established.

2) If I rewrite the video capturer, the class rtc_video_capture_ios_objc.mm will be unused, right? Because I will pass the buffers or frames from my current app.

3) I can't do that, for the same reason as in 1).

Thanks so much!

you...@gmail.com

Feb 3, 2015, 8:49:36 AM
to discuss...@googlegroups.com

On Monday, February 2, 2015 at 11:55:48 AM UTC+1, Pablo Martinez Piles wrote:


1) I would like to use my own AVCaptureSession because I show the preview to the user before starting to record or connecting to the WebRTC server. With this solution I would have to wait until the connection with the server has been established.

You could still fake it: show your users video from your own AVCaptureSession, and when you start WebRTC, switch to showing the frames from WebRTC's AVCaptureSession.


2) If I rewrite the video capturer, the class rtc_video_capture_ios_objc.mm will be unused, right? Because I will pass the buffers or frames from my current app.
Well, I haven't done that myself, but I believe it will be so. I don't think writing your own video capturer subclass is that much work, but you should probably be prepared to maintain your own branch of WebRTC then. Although that seems to be the case for you anyway.
 

3) I can't do that, for the same reason as in 1).
Yep, it seems that option is not feasible for you.
 

Pablo Martinez Piles

Feb 3, 2015, 11:37:08 AM
to discuss...@googlegroups.com
Thanks for your response,

Indeed, I'm writing a subclass of cricket::VideoCapturer because I think it's the best way to do this, but I'm having problems :) that are driving me crazy.

1) I have created a new subclass of VideoCapturer and did this (see the comments marking my custom code):


+ (RTCVideoCapturer*)capturerWithDeviceName:(NSString*)deviceName {
  const std::string& device_name = std::string([deviceName UTF8String]);
  rtc::scoped_ptr<cricket::DeviceManagerInterface> dev_manager(
      cricket::DeviceManagerFactory::Create());
  bool initialized = dev_manager->Init();
  NSAssert(initialized, @"DeviceManager::Init() failed");

  // HERE: add my VideoCapturer factory.
  cricket::DeviceManager* device_manager =
      static_cast<cricket::DeviceManager*>(dev_manager.get());
  device_manager->SetVideoDeviceCapturerFactory(
      new cricket::VideoCapturerFactoryCustom());
  // End of my custom code.

  cricket::Device device;
  if (!dev_manager->GetVideoCaptureDevice(device_name, &device)) {
    LOG(LS_ERROR) << "GetVideoCaptureDevice failed";
    return 0;
  }
  rtc::scoped_ptr<cricket::VideoCapturer> capturer(
      dev_manager->CreateVideoCapturer(device));
  RTCVideoCapturer* rtcCapturer =
      [[RTCVideoCapturer alloc] initWithCapturer:capturer.release()];
  return rtcCapturer;
}


This is based on this example: http://sourcey.com/webrtc-custom-opencv-video-capture/#videocapturerocvh
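
For reference, the factory I plug in above follows that example. Assuming the base class is cricket::VideoDeviceCapturerFactory (the type that SetVideoDeviceCapturerFactory takes), and with CustomVideoCapturer being the capturer subclass I'm writing, it looks roughly like this:

namespace cricket {

// Rough shape of the custom factory, modeled on the sourcey example.
// CustomVideoCapturer is my own cricket::VideoCapturer subclass.
class VideoCapturerFactoryCustom : public VideoDeviceCapturerFactory {
 public:
  VideoCapturerFactoryCustom() {}
  virtual ~VideoCapturerFactoryCustom() {}

  virtual VideoCapturer* Create(const Device& device) {
    // Hand WebRTC my capturer instead of the built-in iOS one.
    return new CustomVideoCapturer(device);
  }
};

}  // namespace cricket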


2) What is the best way to send frames from my captureOutput:didOutputSampleBuffer:fromConnection: method to WebRTC? Notifications? Delegates? And should I send them directly to my VideoCapturer, or to some other class?


Thanks so much! I'm looking forward to your response!

you...@gmail.com

Feb 5, 2015, 5:44:18 AM
to discuss...@googlegroups.com
Hmm, what I had in mind is not exactly what you are trying to do (if, of course, I understood your idea correctly).

As you can see in rtc_video_capture_ios_objc.mm, in

- (void)captureOutput:(AVCaptureOutput*)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection*)connection

you get a data buffer, which is then fed to _owner->IncomingFrame(...);

So I would try (please remember that I haven't done this myself) to create a subclass of cricket::VideoCapturer that implements the required virtual methods and is connected to the class of yours that manages the AVCaptureSession (let's call that class YourManager). Then, in YourManager's

- (void)captureOutput:(AVCaptureOutput*)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection*)connection

you can do whatever you need for your app and then pass the frame data on to YourVideoCapturer. You can probably construct the correct frame the same way video_capture_impl.cc does in int32_t VideoCaptureImpl::IncomingFrame(...).

So basically that is the answer to your second question: performance-wise it is best to pass the frames directly; I believe notifications are a bad idea in this case.
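
To make the shape concrete, here is a rough sketch of such a subclass. Again, I haven't built this; it follows the sourcey example you linked, so double-check the virtual methods and the CapturedFrame fields against your WebRTC revision (YourVideoCapturer and OnFrameFromYourManager are made-up names):

// Untested sketch of a capturer that YourManager pushes frames into.
class YourVideoCapturer : public cricket::VideoCapturer {
 public:
  YourVideoCapturer() : running_(false) {}

  // WebRTC calls Start/Stop, but your AVCaptureSession is already
  // running, so just record the state.
  virtual cricket::CaptureState Start(const cricket::VideoFormat& format) {
    SetCaptureFormat(&format);
    running_ = true;
    return cricket::CS_RUNNING;
  }
  virtual void Stop() {
    running_ = false;
    SetCaptureFormat(NULL);
  }
  virtual bool IsRunning() { return running_; }
  virtual bool IsScreencast() const { return false; }

  // Called by YourManager from captureOutput:didOutputSampleBuffer:...
  // with the raw pixels of one frame (NV12 is what the iPhone camera
  // delivers by default).
  void OnFrameFromYourManager(void* data, size_t size, int width, int height) {
    cricket::CapturedFrame frame;
    frame.width = width;
    frame.height = height;
    frame.fourcc = cricket::FOURCC_NV12;
    frame.data = data;
    frame.data_size = size;
    frame.elapsed_time = frame.time_stamp = 0;  // or real timestamps
    SignalFrameCaptured(this, &frame);  // hands the frame to WebRTC
  }

 protected:
  virtual bool GetPreferredFourccs(std::vector<uint32>* fourccs) {
    fourccs->push_back(cricket::FOURCC_NV12);
    return true;
  }

 private:
  bool running_;
};

On the YourManager side you would get at the pixels with CMSampleBufferGetImageBuffer() and CVPixelBufferLockBaseAddress()/CVPixelBufferGetBaseAddress(), call OnFrameFromYourManager, then unlock. Also pay attention to which thread the frames arrive on; WebRTC may expect them on its worker thread.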

Hope this helps.

jordi domenech flores

Feb 8, 2015, 5:54:10 PM
to discuss...@googlegroups.com
If you just want to get the AVCaptureSession, there's a simpler way: just add an observer for the capture-session-related notifications:
https://developer.apple.com/library/prerelease/ios/documentation/AVFoundation/Reference/AVCaptureSession_Class/index.html#//apple_ref/doc/constant_group/Notification_User_Info_Key

Hope it helps.

jordi domenech flores

Feb 8, 2015, 5:55:58 PM
to discuss...@googlegroups.com
Actually, the relevant notifications are:
AVCaptureSessionDidStartRunningNotification
AVCaptureSessionDidStopRunningNotification

jordi domenech flores

Feb 8, 2015, 5:59:13 PM
to discuss...@googlegroups.com
Sorry for the partitioned answer :) I just forgot to mention that the notification's 'object' property is the AVCaptureSession instance.
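
In code, something like this (the method names are just for illustration; register the observer before WebRTC starts its session):

#import <AVFoundation/AVFoundation.h>

// Made-up observer methods; only the notification name and the
// 'object' property come from the AVFoundation docs.
- (void)startWatchingForCaptureSessions {
  [[NSNotificationCenter defaultCenter]
      addObserver:self
         selector:@selector(captureSessionDidStart:)
             name:AVCaptureSessionDidStartRunningNotification
           object:nil];
}

- (void)captureSessionDidStart:(NSNotification*)notification {
  // The notification's object is the AVCaptureSession that started.
  AVCaptureSession* session = (AVCaptureSession*)notification.object;
  NSLog(@"Got the capture session: %@", session);
}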