I currently have software decoding working in WebRTC via H264DecoderImpl::Decode, and I am now planning to implement hardware decoding. My understanding is that the decoder must decode each frame and hand it back to the compositor; in WebRTC, decoded frames are delivered to the rendering pipeline through the registered decode-complete callback.
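
For reference, here is a minimal sketch of the shape I think a hardware decoder would take, assuming the webrtc::VideoDecoder interface. The class and the comments about platform plumbing are placeholders, and the exact Configure()/Decode() signatures vary across WebRTC revisions (older trees use InitDecode() and a three-argument Decode()):

#include "api/video/encoded_image.h"
#include "api/video_codecs/video_decoder.h"
#include "modules/video_coding/include/video_error_codes.h"

// Hypothetical hardware-backed decoder skeleton.
class HwH264Decoder : public webrtc::VideoDecoder {
 public:
  bool Configure(const Settings& settings) override {
    // Open/configure the platform decoder here.
    return true;
  }

  int32_t RegisterDecodeCompleteCallback(
      webrtc::DecodedImageCallback* callback) override {
    // WebRTC's rendering pipeline sits behind this callback.
    callback_ = callback;
    return WEBRTC_VIDEO_CODEC_OK;
  }

  int32_t Decode(const webrtc::EncodedImage& input_image,
                 int64_t render_time_ms) override {
    // 1. Feed input_image.data()/size() to the hardware decoder.
    // 2. When a decoded buffer comes back, wrap it in a
    //    webrtc::VideoFrameBuffer (e.g. one holding a native handle),
    //    build a webrtc::VideoFrame, and hand it to the pipeline:
    //        callback_->Decoded(frame);
    return WEBRTC_VIDEO_CODEC_OK;
  }

  int32_t Release() override {
    callback_ = nullptr;
    return WEBRTC_VIDEO_CODEC_OK;
  }

 private:
  webrtc::DecodedImageCallback* callback_ = nullptr;
};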
While this approach works for clear (non-DRM) content, how can it be achieved for DRM-protected content? For MSE playback we already have a custom renderer that drives our embedded hardware using CreateVideoHoleFrame. Can that same hole-punching mechanism be reused for WebRTC? Additionally, why is it necessary to deliver decoded frames back to WebRTC's rendering pipeline via the frame callback at all, if the hardware composites the video itself?
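
For context, this is roughly how our MSE path produces hole frames today. The CreateVideoHoleFrame signature is from Chromium's media/base/video_frame.h; everything else here (the wrapper name, how the overlay plane id is obtained) is specific to our port, not stock Chromium or WebRTC:

#include "base/memory/scoped_refptr.h"
#include "base/time/time.h"
#include "base/unguessable_token.h"
#include "media/base/video_frame.h"
#include "ui/gfx/geometry/size.h"

// The compositor punches a transparent hole where this frame's quad
// is drawn, so the protected video decoded in the secure hardware
// pipeline shows through from the plane underneath.
scoped_refptr<media::VideoFrame> MakeHoleFrame(
    const base::UnguessableToken& overlay_plane_id,
    const gfx::Size& natural_size,
    base::TimeDelta timestamp) {
  return media::VideoFrame::CreateVideoHoleFrame(
      overlay_plane_id, natural_size, timestamp);
}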
Is there a sample hardware decoder implementation available anywhere that I could use as a reference? I am guessing something similar to NVIDIA's Jetson integration needs to be implemented: https://docs.nvidia.com/jetson/archives/r36.4/DeveloperGuide/SD/HardwareAccelerationInTheWebrtcFramework.html
