So, as I understand it, it could work like this:

- The demuxer from the media pipeline would provide encoded buffers containing frames to the Ozone platform using the VideoDecodeAccelerator.
- The Ozone platform, interfaced with the TrustZone code, would decode these buffers and extract frames. An arbitrary handle would be assigned to each frame.
- Those handles would then be given back to Chromium to live in the media pipeline.
- When it is time to display a frame, Chromium would use the ScheduleOverlayPlane interface to notify the Ozone platform which frame should be displayed.
- Finally, on the Ozone platform side, the notification would be forwarded to the TrustZone code, which would be in charge of rendering the frame.
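To make the flow concrete, here is a rough standalone sketch of what I have in mind. None of these names (TrustZoneDecoder, FrameHandle, RenderFrame) are Chromium or TrustZone API; they are hypothetical stand-ins for illustration only:

#include <cstdint>
#include <vector>

// Opaque token handed back to Chromium in place of the real frame data.
using FrameHandle = uint64_t;

// Hypothetical stand-in for the TrustZone decoder the Ozone platform
// would talk to.
class TrustZoneDecoder {
 public:
  // Decodes one encoded buffer; the decoded frame stays inside the
  // trusted world and only an opaque handle escapes.
  FrameHandle Decode(const std::vector<uint8_t>& encoded) {
    (void)encoded;  // a real implementation would pass this to the secure world
    return next_handle_++;
  }

  // Called when Chromium schedules the frame for display; Chromium
  // never sees the pixels.
  void RenderFrame(FrameHandle handle) {
    (void)handle;  // forward to the trusted renderer
  }

 private:
  FrameHandle next_handle_ = 1;
};

int main() {
  TrustZoneDecoder decoder;
  std::vector<uint8_t> encoded_frame;             // would come from the demuxer
  FrameHandle h = decoder.Decode(encoded_frame);  // decode, receive handle
  // ... handle travels through the media pipeline ...
  decoder.RenderFrame(h);                         // display via TrustZone
  return 0;
}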
Feel free to correct me or add details!

Additionally, I took a look at the webmediaplayer_impl.* files. How difficult do you think it would be to replace this implementation with our own (backed by the library and the low-level video player)?

There is also code from Chromecast in Chromium, especially the CMA part, which seems to expose interfaces corresponding to our use case. What do you think of it? Is it usable?

Thank you very much,
Aubin
Hi Aubin,

Your understanding is correct. The interface we expose for the handle is NativePixmap. The expectation is that you provide either a dmabuf fd or an EGLClientBuffer containing video data to Chromium. This way we could render the frame on the GPU using an EGLImage if you are doing a CSS transformation or if your overlay engine can't handle some scenario.

In your particular case, it sounds like you never want the video data to be accessible to the GPU, so you could provide a small 16x16 (or similar) dummy buffer, or maybe even a low-res frame, for Chrome to use as an EGLClientBuffer. You could stash the hi-res internal handle in your NativePixmap implementation, which will be given back to you via ScheduleOverlayPlane. This way, Chrome has something to display as a fallback and your video data remains private in your TrustZone.
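Roughly, the "stash the real handle" idea could look like the sketch below. SecureNativePixmap and TrustZoneFrameHandle are illustrative names, not Chromium API, and the real NativePixmap interface has more methods than shown here:

#include <cstdint>

using TrustZoneFrameHandle = uint64_t;  // your own secure-world handle type

// Simplified stand-in for a NativePixmap implementation that pairs a
// GPU-visible placeholder with a private secure-world handle.
class SecureNativePixmap {
 public:
  SecureNativePixmap(int dummy_dmabuf_fd, TrustZoneFrameHandle secure_handle)
      : dummy_dmabuf_fd_(dummy_dmabuf_fd), secure_handle_(secure_handle) {}

  // What Chromium's GPU process sees: a small (e.g. 16x16) placeholder
  // buffer it can wrap in an EGLImage as a fallback.
  int GetDmaBufFd() const { return dummy_dmabuf_fd_; }

  // Called from the platform's ScheduleOverlayPlane path: recover the
  // private handle and hand it to the TrustZone renderer.
  bool ScheduleOverlayPlane(/* widget, z-order, bounds, crop, ... */) {
    // trust_zone::RenderFrame(secure_handle_);  // hypothetical call
    return true;
  }

 private:
  int dummy_dmabuf_fd_;                 // low-res placeholder, GPU-visible
  TrustZoneFrameHandle secure_handle_;  // hi-res frame, secure-world only
};

The key design point is that the hi-res frame never leaves the trusted world: Chromium only ever holds the placeholder fd plus an opaque token that round-trips back to you.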
Thank you, Alex, for these explanations; this is much clearer now.

However, we want to decode UHD streams, and the framebuffer is not large enough for them, so the low-level player library is our best solution for now. Nevertheless, the ScheduleOverlayPlane method could be useful as a fallback if other methods fail.

Andrew, do you know if the CMA path is usable yet, and starting from which version of Chromium? This solution, if available, would be of great help to us.