I am trying to use the multiplex encoder to encode some custom data into video frames. I do this by creating a custom capturer and building an AugmentedVideoFrameBuffer with my custom data. But I found that the current webrtc video pipeline does not handle multiplex-encoded frames gracefully, i.e., the internal video-frame conversions do not preserve multiplex frames and their custom data.
To name a few examples:
These conversions discard the custom data and replace the AugmentedVideoFrameBuffer with a plain I420 buffer, which in turn breaks the MultiplexEncoderAdapter::Encode function when it tries to static_cast the frame buffer back to AugmentedVideoFrameBuffer, and the application crashes.
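To illustrate the failure mode, here is a minimal self-contained sketch. The class names here are my own simplified stand-ins, not the real WebRTC types, but the pattern is the same: Encode assumes the buffer is still augmented and static_casts without checking, whereas a type check before the cast would at least fail cleanly instead of crashing.

```cpp
#include <cassert>

// Simplified stand-ins (NOT the real WebRTC classes) for
// VideoFrameBuffer / I420Buffer / AugmentedVideoFrameBuffer.
enum class BufferType { kI420, kAugmented };

struct VideoFrameBuffer {
  virtual ~VideoFrameBuffer() = default;
  virtual BufferType type() const = 0;
};

struct I420Buffer : VideoFrameBuffer {
  BufferType type() const override { return BufferType::kI420; }
};

struct AugmentedBuffer : VideoFrameBuffer {
  BufferType type() const override { return BufferType::kAugmented; }
  int custom_data = 42;  // stand-in for the attached custom data
};

// Defensive version of the cast inside Encode: check the dynamic type
// before casting. If an intermediate conversion replaced the buffer with
// a plain I420 buffer, we report failure instead of invoking undefined
// behavior via static_cast on the wrong type.
int ReadCustomData(const VideoFrameBuffer& buffer) {
  if (buffer.type() != BufferType::kAugmented) {
    return -1;  // conversion stripped the custom data
  }
  return static_cast<const AugmentedBuffer&>(buffer).custom_data;
}
```

Of course, a check like this only turns the crash into a dropped frame; the custom data is still lost once a conversion has happened.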
One way to work around this is to create multiplex-aware versions of the involved classes, so that when they convert video frames they handle multiplex frames correctly. But I don't think this is a good approach, because it essentially means I would have to maintain my own video pipeline.
Another option might be to build the AugmentedVideoFrameBuffer later in the pipeline: instead of encoding the custom data in my custom capturer, I could create my own MultiplexEncoderAdapter class and attach the custom data there.
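As a rough sketch of what I mean by attaching the data later: the capturer could park the custom data in a side table keyed by the frame's capture timestamp, and a custom encoder adapter could look it up at Encode time, after all intermediate conversions have already happened. This is purely my own hypothetical helper, not an existing WebRTC API:

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical side channel: custom data is queued by the capturer keyed
// by capture timestamp, and consumed later by a custom encoder adapter.
// The frame buffers themselves stay as ordinary I420 buffers, so the
// pipeline's conversions can no longer strip the custom data.
class CustomDataSideChannel {
 public:
  // Called from the capturer when a frame is produced.
  void OnCapturedFrame(int64_t timestamp_us, std::vector<uint8_t> data) {
    pending_[timestamp_us] = std::move(data);
  }

  // Called from the encoder adapter's Encode; returns empty if no data
  // was queued for this timestamp. Entries are consumed exactly once.
  std::vector<uint8_t> TakeDataFor(int64_t timestamp_us) {
    auto it = pending_.find(timestamp_us);
    if (it == pending_.end()) return {};
    std::vector<uint8_t> data = std::move(it->second);
    pending_.erase(it);
    return data;
  }

 private:
  std::map<int64_t, std::vector<uint8_t>> pending_;
};
```

The obvious weakness is that this relies on the capture timestamp surviving the pipeline unchanged, which I believe it does, but I would appreciate confirmation.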
Is there a better way to handle such multiplex encoder problems?
Thanks,
Bin