Q: Since navigator.mediaDevices.getDisplayMedia is only available on some desktop browsers...


Neil Young

unread,
Apr 15, 2019, 9:46:00 AM4/15/19
to discuss-webrtc
Does anyone know if there is a way to use a screen capture as input for WebRTC on mobile devices?


Message has been deleted

Neil Young

unread,
Apr 16, 2019, 10:12:22 AM4/16/19
to discuss-webrtc
Sorry, the other post went out too early.

Again: I think I have a solution. I have a working setup based on the camera as input; I just want to replace the camera with a screen-capture stream.

My working solution is like so:

1) Preparing track and stream
2) Creating OFFER and send it to remote
3) Start capturing if ICE is connected


Regarding 1)

    private func createMediaSenders() {
        let videoSource = self.factory.videoSource()
        self.videoCapturer = MyRTCCameraVideoCapturer(delegate: videoSource)
        let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "track0")
        self.peerConnection.add(videoTrack, streamIds: ["stream0"])
        self.localVideoTrack = videoTrack
    }

I'm using a slightly modified RTCCameraVideoCapturer class, since I had orientation issues with the original class; that is not the issue here.

Regarding 3)
    
    func startCaptureLocalVideo() {
        guard let capturer = self.videoCapturer as? MyRTCCameraVideoCapturer else {
            return
        }

        guard let camera = (MyRTCCameraVideoCapturer.captureDevices().first {
            $0.position == (AppSettings.useFrontCamera ? .front : .back)
        }) else {
            // Alert here
            return
        }

        // Stick to 640x480 and 30 fps
        for format in MyRTCCameraVideoCapturer.supportedFormats(for: camera) {
            let dimensions = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
            if dimensions.width == 640 && dimensions.height == 480 {
                capturer.startCapture(with: camera,
                                      format: format,
                                      fps: 30)
                break
            }
        }
    }



As I said, this works. And it is now replaced by the following:

Regarding 1)

    private func createMediaSenders() {
        let videoSource = self.factory.videoSource()
        self.videoCapturer = RTCVideoCapturer(delegate: videoSource)
        videoSource.adaptOutputFormat(toWidth: 640, height: 480, fps: 30)
        let videoTrack = self.factory.videoTrack(with: videoSource, trackId: "track0")
        self.peerConnection.add(videoTrack, streamIds: ["stream0"])
        self.localVideoTrack = videoTrack
    }

Regarding 3)

    func startCaptureLocalVideo() {
        guard self.videoCapturer != nil else {
            return
        }

        self.startRecording()
    }



with

    func startRecording() {
        guard recorder.isAvailable else {
            print("Recording is not available at this time.")
            return
        }

        recorder.isMicrophoneEnabled = false

        if #available(iOS 11.0, *) {
            recorder.startCapture(handler: { (sampleBuffer, bufferType, error) in
                if bufferType == .video {
                    guard CMSampleBufferIsValid(sampleBuffer),
                          CMSampleBufferDataIsReady(sampleBuffer),
                          CMSampleBufferGetNumSamples(sampleBuffer) == 1,
                          let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
                        print("invalid sampleBuffer")
                        return
                    }

                    let rtcPixelBuffer = RTCCVPixelBuffer(pixelBuffer: pixelBuffer)
                    let timestamp = NSDate().timeIntervalSince1970 * 1000 * 1000
                    let videoFrame = RTCVideoFrame(buffer: rtcPixelBuffer,
                                                   rotation: RTCVideoRotation._0,
                                                   timeStampNs: Int64(timestamp))
                    //print("vfw \(videoFrame.width) \(videoFrame.height)")
                    self.factory.videoSource().capturer(self.videoCapturer!, didCapture: videoFrame)
                }
            })
        } else {
            // Fallback on earlier versions
        }
    }
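One pattern worth double-checking in the snippet above is the call to self.factory.videoSource() inside the capture handler: if the factory hands back a fresh RTCVideoSource on every call, rather than the instance the track was created with, the frames are delivered to a source nothing is connected to. Whether GoogleWebRTC's factory behaves this way should be verified against its headers; here is the reference-identity pitfall in miniature, with hypothetical stand-in types instead of the WebRTC classes:

```swift
import Foundation

// Hypothetical stand-ins for RTCPeerConnectionFactory / RTCVideoSource,
// illustrating the pitfall: if the factory method returns a NEW object
// each call, frames fed to factory.videoSource() never reach the source
// the video track was built from.
final class Source {}
final class Factory {
    func videoSource() -> Source { Source() }  // fresh instance every call
}

let factory = Factory()
let trackSource = factory.videoSource()   // the source a track would be created with
let frameTarget = factory.videoSource()   // the source frames would be delivered to
print(trackSource === frameTarget)        // prints "false": two different objects
```

If that is what the real factory does, keeping a reference to the videoSource created in createMediaSenders() and feeding frames to that same instance would be the fix to try.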


This is pretty much the approach shown in various samples on the web, for instance here: https://stackanswers.net/questions/replaykit-using-webrtc-stops-working-after-going-to-background-repeatedly

What I can see is:

1) The screen capturer is asking for my OK, once in the beginning and after 8 minutes of inactivity (as documented)
2) I'm getting sampleBuffers (which pass my validity checks) at 60 fps (measuring code not shown here)
3) The width and height of the final video frame are 1920×886 (iPhone XS, iOS 12.4, landscape), so my attempt to adjust the frame rate and dimensions seems to have no impact
4) The sequence of events is kept as shown in the initial statement
5) My SDP offer is the same in both cases and contains H.264 elements as well as VP8

But: There is no video sent...

Anybody able to see, what I'm missing?

Neil Young

unread,
Apr 16, 2019, 10:18:11 AM4/16/19
to discuss-webrtc
Forgot to mention: the GoogleWebRTC pod version I'm using is 1.1.25744

Neil Young

unread,
Apr 16, 2019, 12:01:23 PM4/16/19
to discuss-webrtc
Nobody? Can't believe that...

Nagamuni reddy

unread,
Apr 16, 2019, 1:47:26 PM4/16/19
to discuss-webrtc
Hey Neil,

Can you tell me how you calculated the timestamp?

Neil Young

unread,
Apr 16, 2019, 1:52:57 PM4/16/19
to discuss...@googlegroups.com
Sure. I have tried two ways, which seem to make no difference:

First:

let timestampNs = NSDate().timeIntervalSince1970 * 1000 * 1000
                    
Second:

let timeStampNs: Int64 = Int64(CMTimeGetSeconds(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)) * 1000000000)

Both produce different values, but make no difference in the result.
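A side note on units: timeIntervalSince1970 is in seconds, so multiplying by 1000 * 1000 yields microseconds, while RTCVideoFrame's timeStampNs parameter expects nanoseconds; only the second variant, which multiplies by 1,000,000,000, is in the right unit. A minimal check of the two conversions (plain Foundation, with a fixed value standing in for the wall clock):

```swift
import Foundation

// timeIntervalSince1970 returns SECONDS. Multiplying by 1000 * 1000 gives
// microseconds, not the nanoseconds that timeStampNs expects.
let seconds = 2.5  // stand-in for NSDate().timeIntervalSince1970

let microVariant = Int64(seconds * 1000 * 1000)    // first variant: microseconds
let nanoVariant = Int64(seconds * 1_000_000_000)   // second variant: nanoseconds

print(microVariant)                 // 2500000
print(nanoVariant)                  // 2500000000
print(nanoVariant / microVariant)   // the two differ by a factor of 1000
```

That mismatch alone would not necessarily explain a completely missing video stream, but it is worth normalizing to nanoseconds before ruling timestamps out.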

What drives me crazy currently is the following: My derived MyRTCCameraVideoCapturer class clearly calls

[self.delegate capturer:self didCaptureVideoFrame:videoFrame];

with every video frame. This works.


This delegate method seems to be renamed in Swift and replaced by

                    self.factory.videoSource().capturer(self.videoCapturer!, didCapture: videoFrame)

At least this is what the compiler tells me. Any attempt to change didCapture to didCaptureVideoFrame fails, but the selector is there (checked with "respondsToSelector").

I thought this might be the reason: I'm just feeding the wrong callback....

Not sure...



Neil Young

unread,
Apr 16, 2019, 7:08:25 PM4/16/19
to discuss...@googlegroups.com
No, I cannot find any way to make this work :( I don't know what's happening, but this delegate call obviously does nothing at all:

                    videoSource.capturer(self.videoCapturer!, didCapture: videoFrame)

What a mess...

Daxesh Nagar

unread,
Jul 8, 2020, 2:37:11 AM7/8/20
to discuss-webrtc
did you find any solution?