Feed video source from CMSampleBuffer (screen capture) from ReplayKit in iOS


Rakesh E

Jan 29, 2018, 3:00:26 AM
to discuss-webrtc
I am using the latest version of WebRTC updated from CocoaPods, GoogleWebRTC (1.1.21631). There is an option to capture from the camera, but I did not find any option in RTCPeerConnectionFactory to create an RTCVideoSource from an RTCVideoFrame, so I have been stuck for the last week. Can anyone help me out? I am capturing the screen using the ReplayKit framework, which generates a CMSampleBuffer; from that I can create an RTCVideoFrame. Now I need a method to create an RTCVideoSource from an RTCVideoFrame in RTCPeerConnectionFactory.

VladimirTechMan

Jan 29, 2018, 10:24:15 PM
to discuss-webrtc
Hi Rakesh,

As long as you can create an instance of RTCVideoFrame with the YUV data in it coming from, or converted from, the captured frame, the rest is not that difficult. I will use some Swift pseudo-code below; I hope you will be comfortable with it (even pure Objective-C developers should be able to figure out what the related API methods are quite easily).

Now, let's assume that you have an instance of RTCPeerConnectionFactory, referenced by a variable or constant called "peerConnectionFactory". Let's also assume that you have used that factory to create an instance of RTCPeerConnection, referenced by a constant or variable called "newPeerConnection". With those assumptions, you can now create a new video source, a new video track, and a new video sender, bound to each other:

let localVideoSource = peerConnectionFactory.videoSource()
let localVideoTrack = peerConnectionFactory.videoTrack(with: localVideoSource, trackId: trackIdString)
let videoSender = newPeerConnection.sender(withKind: kRTCMediaStreamTrackKindVideo, streamId: streamIdString)
videoSender.track = localVideoTrack


Here, "localVideoSource" references an instance of class implementing the RTCVideoSource interface. And RTCVideoSource itself implements the RTCVideoCapturerDelegate protocol, which only has one method, to pass a new available video frame to it. Thus, you could do:

let videoCapturer = RTCVideoCapturer(...)
let videoFrame = RTCVideoFrame(...)
localVideoSource.capturer(videoCapturer, didCapture: videoFrame)

In reality, you are not supposed to create a new video capturer for every frame. Instead, your class implementing frame capture with the ReplayKit framework can either be derived from RTCVideoCapturer, or it can create and own an instance of RTCVideoCapturer for the duration of the capture session or even the application lifetime, depending on the required logic.

The RTCVideoCapturer class itself, right now, is nothing more than a holder of a reference to an instance of some other class implementing the RTCVideoCapturerDelegate protocol. This design may at first look like an unnecessary complication in the Objective-C / Swift WebRTC API, but it simply follows the long-established Objective-C pattern for delegate protocols, where a delegate can be assigned to more than one owner and thus each delegate method receives a reference to the actual calling owner as its first argument. Yet, at this point, the passed video capturer reference is simply ignored by the internal video source implementation; only the frame is used.
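To illustrate, here is a minimal sketch of such a capturing class. The ScreenCapturer name and its didReceive(sampleBuffer:) entry point are just my assumptions for the example; only the RTCVideoCapturer / RTCVideoCapturerDelegate calls are the actual API:

import CoreMedia
import ReplayKit
import WebRTC

// Illustrative only: a capturer derived from RTCVideoCapturer, fed by ReplayKit.
// The delegate passed at init time is typically the RTCVideoSource instance.
final class ScreenCapturer: RTCVideoCapturer {
    // Hypothetical entry point, called with each CMSampleBuffer from ReplayKit.
    func didReceive(sampleBuffer: CMSampleBuffer) {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: imageBuffer)
        let timeStampNs = Int64(CMTimeGetSeconds(
            CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1_000_000_000)
        let frame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: ._0, timeStampNs: timeStampNs)
        // Forward the frame; as noted above, the capturer argument is currently
        // ignored by the internal video source implementation.
        delegate?.capturer(self, didCapture: frame)
    }
}

// Wiring it up, with the names from the snippet above:
// let screenCapturer = ScreenCapturer(delegate: localVideoSource)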

I hope the above summary helps you figure out how things work and how to apply them in your app. Happy coding!

Štěpán Votava

Feb 12, 2018, 4:40:18 PM
to discuss-webrtc
Hi Rakesh E, did you manage to build a working prototype that you could share?

BWSwift

Jun 18, 2018, 4:15:53 AM
to discuss-webrtc
I've created the structure that you described, but there is no stream on the receiver side. It looks like there is some issue with frame sending.

I get a callback from the system that provides a CMSampleBuffer; I convert it to an RTCVideoFrame and send it to the videoSource (emulating a video capturer):
override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    switch sampleBufferType {
    case RPSampleBufferType.video:
        // Handle video sample buffer
        guard let peerManager = peerManager,
              let imageBuffer: CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
            break
        }

        let pixelFormat = CVPixelBufferGetPixelFormatType(imageBuffer) // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange

        let timeStampNs: Int64 = Int64(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1000000000)

        let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: imageBuffer)
        let rtcVideoFrame = RTCVideoFrame(buffer: rtcPixelBuffer, rotation: ._0, timeStampNs: timeStampNs)

        peerManager.push(videoFrame: rtcVideoFrame)

    case RPSampleBufferType.audioApp:
        break

    case RPSampleBufferType.audioMic:
        break
    }
}

Code from peerManager; this is the implementation of the push function used in the code above.
func push(videoFrame: RTCVideoFrame) {
    guard isConnected, videoCapturer != nil, isProcessed else {
        return
    }
    videoSource.capturer(videoCapturer, didCapture: videoFrame)
}

Setup code for the videoTrack; I call it after generating the local offer.

private func setupVideoStreaming() {
    videoSource = webRTCPeer.peerConnectionFactory.videoSource()
    let videoTrack = webRTCPeer.peerConnectionFactory.videoTrack(with: videoSource, trackId: "screen_share_track_id")
    let videoSender = webRTCPeer.localPeerConnection.peerConnection.sender(withKind: kRTCMediaStreamTrackKindVideo, streamId: "\(personID)_screen_sharing")
    videoSender.track = videoTrack
    videoCapturer = RTCVideoCapturer(delegate: videoSource)
}


I also tried another way to set up the video track, with a local stream, but it doesn't help.

private func setupVideoStreaming() {
    localStream = webRTCPeer.peerConnectionFactory.mediaStream(withStreamId: "\(personID)_screen_sharing")

    videoSource = webRTCPeer.peerConnectionFactory.videoSource()
    videoCapturer = RTCVideoCapturer(delegate: videoSource)
    videoSource.adaptOutputFormat(toWidth: 441, height: 736, fps: 15)

    let videoTrack = webRTCPeer.peerConnectionFactory.videoTrack(with: videoSource, trackId: "screen_share_track_id")
    videoTrack.isEnabled = true
    localStream.addVideoTrack(videoTrack)

    for localStream in webRTCPeer.localPeerConnection.peerConnection.localStreams {
        webRTCPeer.localPeerConnection.peerConnection.remove(localStream)
    }
    webRTCPeer.localPeerConnection.peerConnection.add(localStream)
}

VladimirTechMan

Jun 19, 2018, 10:48:12 PM
to discuss-webrtc
Hello BWSwift,

Actually, there is a working code example of a broadcast extension for iOS that does screen sharing over the WebRTC mechanisms: the source code of the AppRTCMobile app for iOS, which is part of the examples in the WebRTC repository, now includes a demo implementation of a broadcast extension.

I would definitely recommend looking at it – and also building that sample app on your Mac, installing it on an iPhone or iPad, and seeing that the broadcast extension works.

As ReplayKit 2 was only introduced last year, the extension source code is not included and built by default when you run "gn". To include it, and to generate an Xcode workspace, you can do:

  gn gen out/ios --args='target_os="ios" target_cpu="arm64" rtc_apprtcmobile_broadcast_extension=true' --ide=xcode
  open -a Xcode.app out/ios/all.xcworkspace

After that, open the workspace in Xcode, enable code signing with your developer signature, and update the bundle identifiers for the app and the two extensions as necessary.

BWSwift

Jul 3, 2018, 8:29:00 AM
to discuss-webrtc
Thank you for the help. I've checked AppRTCMobile and found ARDExternalSampleCapturer; I am doing the same, so no problems there. I did find the issue in my code. Above I wrote: "Setup code for the videoTrack; I call it after generating the local offer." That was wrong: I need to set up the track and stream first, and then generate the offer. Because of the wrong order I got a wrong SDP offer and didn't see anything on the other side; when I did it in the correct order, I got a correct SDP offer with a "sendonly" description, and it works.
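In other words, the corrected ordering looks roughly like this (a sketch reusing the names from my snippets above):

// Correct order: set up the source, track, and sender first...
setupVideoStreaming()

// ...and only then generate the offer, so the SDP contains the video media
// section with the "sendonly" direction.
let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
webRTCPeer.localPeerConnection.peerConnection.offer(for: constraints) { sdp, error in
    guard let sdp = sdp else { return }
    // setLocalDescription(...) and send the offer over your signaling channel.
}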

Now I've run into a memory leak issue (screenshot attached). I am using the VP8 codec, and transferring frames to WebRTC leads to allocations of 208-byte, 48-byte, and 1.5 KB pieces; for some reason __DIRTY_DATA holds a strong pointer to them, and they increase the extension's memory usage to 35+ MB, after which the system decides to kill the extension with a (null) reason. I switched to H.264 and the memory leak was gone, but the stream didn't appear on the other side; changing the codec back to VP8 fixed the disappearing stream but brought back the leak. I've checked the supported codecs for encode/decode: H.264 and VP8 (which explains why VP9 didn't work when I tried it). I've found that when I push a frame buffer to the videoSource, WebRTC allocates a new buffer inside void ObjCVideoTrackSource::OnCapturedFrame(RTCVideoFrame *frame) to adapt it to the width and height and crop it, but it doesn't release the old buffer, so I am guessing the problem may be there. Do we have some API or instrument to release the RTCVideoFrame buffer manually after it has been used (or should I release the sampleBuffer manually)? I've tried moving my code into an autoreleasepool {} but it doesn't help.
Also, there is a correlation between frame size and the time before the kill (large size -> short time, small size -> long time). I call videoSource.adaptOutputFormat(toWidth: Int32(UIScreen.main.bounds.width/2), height: Int32(UIScreen.main.bounds.height/2), fps: 15) before pushing each buffer to the videoSource, to control the frame size.

Do you have any ideas in which direction to dig?
[Attachment: Screen Shot 2018-07-03 at 11.26.14.png]

BWSwift

Jul 11, 2018, 5:06:42 AM
to discuss-webrtc
Resolved the issue by removing the call to videoSource.adaptOutputFormat(toWidth: Int32(UIScreen.main.bounds.width/2), height: Int32(UIScreen.main.bounds.height/2), fps: 15) before pushing every frame. (Call it once, when setting up the videoTrack and localVideoStream.)
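So the setup looks roughly like this now (a sketch with the same names as before):

private func setupVideoStreaming() {
    videoSource = webRTCPeer.peerConnectionFactory.videoSource()
    videoCapturer = RTCVideoCapturer(delegate: videoSource)

    // Called once here, at setup time. Calling this before every pushed frame
    // was what led to the growing allocations described above.
    videoSource.adaptOutputFormat(toWidth: Int32(UIScreen.main.bounds.width / 2),
                                  height: Int32(UIScreen.main.bounds.height / 2),
                                  fps: 15)

    let videoTrack = webRTCPeer.peerConnectionFactory.videoTrack(with: videoSource,
                                                                 trackId: "screen_share_track_id")
    let videoSender = webRTCPeer.localPeerConnection.peerConnection.sender(
        withKind: kRTCMediaStreamTrackKindVideo,
        streamId: "\(personID)_screen_sharing")
    videoSender.track = videoTrack
}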

Zach W

Jul 15, 2018, 4:14:09 PM
to discuss-webrtc
Hi BWSwift,

Does the code above that you have work as a basic screen broadcast extension in Swift? I've been having issues, as I'm not too experienced in Objective-C, and the example broadcaster keeps crashing with a null error when someone joins the room (VladimirTechMan, do you know why?) that I can't seem to track down. I'd prefer to use a Swift baseline. Also, which CocoaPod did you use for this that supports Swift? Thanks!

Diego Quimbo

Aug 21, 2018, 9:03:24 AM
to discuss-webrtc
Hi, 
To get the CMSampleBufferRef and convert it to an RTCVideoFrame inside my app, I'm using:

RPScreenRecorder.shared().startCapture(handler: { (sampleBuffer, bufferType, error) in

Inside the handler I convert the CMSampleBufferRef to an RTCVideoFrame and send it to the RTCVideoSource.

This works perfectly. The problem arises when I send the app to the background and come back several times; then ReplayKit stops calling its capture handler.
The only way I can get it to work again is to restart the device. Even killing the app and restarting it doesn't work.
Do you have any ideas about the problem? 

BWSwift

Aug 21, 2018, 9:12:17 AM
to discuss-webrtc
Hi, try checking the extension's memory use. There are a lot of restrictions on extension resources; if your extension uses too much memory, it will be killed by the system. As a hot fix, try reducing the quality of the stream from the extension, and in addition check that you create a SENDONLY peer.
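For the SENDONLY part, one way to end up with a send-only video description in this SDK (a sketch; "peerConnection" stands for your RTCPeerConnection instance) is to add your local track and then offer without requesting any incoming media:

// With a local video track added and no media requested in return, the offer's
// video section comes out "sendonly".
let constraints = RTCMediaConstraints(
    mandatoryConstraints: ["OfferToReceiveAudio": "false",
                           "OfferToReceiveVideo": "false"],
    optionalConstraints: nil)
peerConnection.offer(for: constraints) { sdp, error in
    // Inspect sdp?.sdp for "a=sendonly", then setLocalDescription(...) as usual.
}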

BWSwift

Aug 21, 2018, 9:19:36 AM
to discuss-webrtc
I had the same behavior when I killed the app during a broadcast (also when the debugger is attached), but it was resolved after I created a notification layer between the app and the extension. When the app goes away, I send a notification to the extension; on the extension side I handle it and stop the extension with an error. You can build logic on top of app groups to notify the extension about events from the main app and cover all the cases in which you see this behavior. First of all, test without the debugger, and then try to cover the cases one by one.
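One common mechanism for such a layer is Darwin notifications, which cross the app/extension boundary (app groups are only needed if you also share data; the notification name below is just an example):

import Foundation

let stopName = "com.example.broadcast.stop" as CFString  // hypothetical name

// Main app side: post when the app is about to go away.
func notifyExtensionToStop() {
    CFNotificationCenterPostNotification(CFNotificationCenterGetDarwinNotifyCenter(),
                                         CFNotificationName(rawValue: stopName),
                                         nil, nil, true)
}

// Extension side (e.g. in your RPBroadcastSampleHandler): observe the name and
// finish the broadcast. The callback is a C function pointer, so it cannot
// capture context; route to your handler through a static reference and call
// finishBroadcastWithError(_:) from there.
func observeStopNotification() {
    CFNotificationCenterAddObserver(CFNotificationCenterGetDarwinNotifyCenter(), nil,
                                    { _, _, _, _, _ in
                                        // Forward to the sample handler here.
                                    },
                                    stopName, nil, .deliverImmediately)
}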

Diego Quimbo

Aug 24, 2018, 6:19:24 AM
to discuss-webrtc
Thanks for your reply,
I guess there is a misunderstanding: I'm trying to record the screen inside my app, without using an extension.
For that I'm using:

RPScreenRecorder.shared().startCapture(handler: { (sampleBuffer, bufferType, error) in
    // Getting buffers
}, completionHandler: nil)


I will try to reduce the quality of the stream and check the SENDONLY peer. 

maparthi vengababu

Sep 20, 2018, 9:33:56 AM
to discuss-webrtc
Hi BWSwift,

Can you share, in a little more detail, how you got the video source from the CMSampleBuffer (screen capture)?
It would be a great help.
Thank you

BWSwift

Sep 20, 2018, 9:49:25 AM
to discuss-webrtc

maparthi vengababu

Sep 24, 2018, 3:34:06 AM
to discuss-webrtc
Hi BWSwift,
Can you please provide your email address? I have some doubts to clarify.
Thank you.



utkarsh agarwal

Jan 4, 2019, 12:53:19 PM
to discuss-webrtc
I have used the approach suggested by BWSwift for screen sharing using ReplayKit.
I am able to send the video packets to WebRTC, but a green screen appears on the receiver side.
Can anyone suggest what the issue could be?
I think the issue might be with the media configuration.
Currently I have the configuration below:


NBMMediaConfiguration *config = [[NBMMediaConfiguration alloc] init];
config.rendererType = NBMRendererTypeOpenGLES;
config.audioBandwidth = 0;
config.videoBandwidth = 0;
config.audioCodec = NBMAudioCodecOpus;
config.videoCodec = NBMVideoCodecVP8;

NBMVideoFormat format;
format.dimensions = (CMVideoDimensions){720, 480};
format.frameRate = 30;
format.pixelFormat = NBMPixelFormat420f;
config.receiverVideoFormat = format;

config.cameraPosition = NBMCameraPositionAny;

Ashish Verma

Jan 13, 2019, 1:50:01 AM
to discuss-webrtc
Hi VladimirTechMan,

I have checked the source code of the AppRTCMobile app for iOS for the broadcast extension.
I have a few things to ask:
1. How is the code shared between the main app and the broadcast extension, i.e. which classes are shared with the broadcast upload extension?
2. In the AppRTC demo app, we can have at most two participants. So, when we share our screen with the broadcast extension, do we need to join another room?
3. I have implemented the code to share the screen using a broadcast extension, as provided in the AppRTCMobile demo app. I am able to join the room, but no video packets are being sent. What could be the possible reason for this?

Bhamin Patel

Feb 28, 2020, 9:55:41 AM
to discuss-webrtc
Hi @BWSwift! Your efforts saved me a lot of time. I successfully implemented video streaming using ReplayKit and GoogleWebRTC, but now I'm stuck on audio streaming. How do I feed the audio samples (CMSampleBuffer) received in processSampleBuffer() to an RTCAudioTrack? Any help would be appreciated!

Vishal Dalsania

Feb 28, 2020, 10:06:18 AM
to discuss...@googlegroups.com
You will have to create a custom ADM (audio device module) and inject it into the peer connection. This is not possible as a direct implementation in the iOS SDK of WebRTC; it must be implemented against the native API, using C++.



Bhamin Patel

Mar 3, 2020, 4:13:20 AM
to discuss-webrtc
I have a custom audio source implemented in C++ in another project. Is there a way I can reuse that C++ code in my iOS Swift project, somehow integrating it as a dependency? I am new to iOS development, so any help with guiding resources would be appreciated! Thanks in advance!



Vishal Dalsania

Mar 3, 2020, 6:36:21 AM
to discuss-webrtc
Here is a sample implemented for .NET.

You will have to do your own implementation for the iOS SDK.

Manpreet Singh

Mar 17, 2020, 5:36:10 AM
to discuss-webrtc
Hello @Bhamin,

Are you using an extension to upload the screen capture?

Thanks,
Manpreet

WebrtcTest None

May 9, 2020, 9:01:49 AM
to discuss-webrtc
Any update on how to get screen sharing working on iOS using an extension?

WebrtcTest None

May 11, 2020, 12:10:28 AM
to discuss-webrtc
Hey guys, I have been waiting to get this working. Has anyone been able to successfully share the screen to AppRTC?

Manpreet Singh

May 11, 2020, 4:53:00 AM
to discuss-webrtc
The above works, but not with a Broadcast Extension.

WebrtcTest None

May 12, 2020, 11:25:41 AM
to discuss-webrtc
Hey @Manpreet,
Can you give me an example of how you got it to work without a broadcast extension? A GitHub project or some code would be really helpful. Thanks!