Hi folks,
I've recently taken over a project that uses WebRTC's native code on macOS. It's reasonably simple: it captures the screen and streams it to a remote browser. Our code is based on RTCCameraVideoCapturer, except that it uses AVCaptureScreenInput to capture the screen in place of a camera device. However, we're seeing a fairly consistent bad-access exception after the stream has been running for a short time (just a few seconds in many cases).
The bad access occurs in sdk/objc/components/video_codec/RTCVideoEncoderH264.mm, in a method called resetCompressionSessionIfNeededWithFrame. The problem is the line shown below, which eventually attempts to access a pixel buffer pool after it's been deallocated:
NSDictionary *poolAttributes =
(__bridge NSDictionary *)CVPixelBufferPoolGetPixelBufferAttributes(_pixelBufferPool);
This seems to happen after the resolution of the video stream has increased; a slightly unreliable network connection appears to make it more likely. After the resolution change, the pixel buffer pool is released and reallocated during a call to VTCompressionSessionEncodeFrame, but RTCVideoEncoderH264 is still holding the old address in _pixelBufferPool. The next time it tries to use it, the bad access occurs.
I'm currently working around the problem in our application with a fairly simple fix: near the top of RTCVideoEncoderH264::encode(), I've added this line to refresh the stored reference:
_pixelBufferPool = VTCompressionSessionGetPixelBufferPool(_compressionSession);
I suspect a better approach would be to avoid caching a reference to the pixel buffer pool at all, and instead fetch it from the compression session every time it's needed. There may be performance implications in doing that, though.
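For illustration, a fetch-on-demand version of the failing code might look something like the sketch below. This is untested against the actual WebRTC source and the surrounding method body is abbreviated; it just shows the pattern I have in mind. Note that VTCompressionSessionGetPixelBufferPool follows Core Foundation's Get rule, so the returned pool is owned by the session and (as we've seen) shouldn't be cached across a session reset:

// Hypothetical sketch: look the pool up at each point of use rather than
// caching the CVPixelBufferPoolRef in an ivar across frames.
CVPixelBufferPoolRef pixelBufferPool =
    VTCompressionSessionGetPixelBufferPool(_compressionSession);
if (pixelBufferPool != NULL) {
  // Same attribute lookup as before, but against a pool reference that is
  // known to belong to the current compression session.
  NSDictionary *poolAttributes =
      (__bridge NSDictionary *)CVPixelBufferPoolGetPixelBufferAttributes(pixelBufferPool);
  // ... use poolAttributes as the existing code does ...
}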
Is anyone else able to confirm this issue, please? I'm very new to macOS, so it's possible I've got something wrong.
Many thanks.