Sending out this CL to kick off review (modulo a WebCodecs test hang which was recently added and is being suppressed shortly by eugene@).
Code-Review | +1 |
"AlwaysUseMappablSIForRenderableGpuMemoryBufferVideoFramePool",
s/Always// — also the spelling is wrong (`Mappabl` should be `Mappable`).
if (base::FeatureList::IsEnabled(
Should this be injected based on the bool?
Commit-Queue | +1 |
Done
Updated the code to remove passing and caching the bool in FrameResources, since we actually don't need it and can just use FeatureList::IsEnabled() here.
Question: we don't serialize the native buffer inside the CSI over mojo yet, so how does this work?
Do we never send these VideoFrames over mojo, or did they not need to be mappable in the first place? Or do we need them to be mappable only in the same process?
I looked into all the clients which use RenderableGpuMemoryBufferVideoFramePool.
1. WebRTC uses it in its readback path [1]: WebRTC gets a VideoFrame backed by a GMB from this pool and then does a readback from the source texture into this GMB. The GMB is always mapped in the renderer, i.e. the same process.
2. WebGraphicsContext3DProvider uses it to blit shared images into the GpuMemoryBuffer-backed VideoFrames coming from the pool [2].
It seems like we never map the VideoFrame's CSI across processes in any of the above cases.
[2] passes the frame to the [callback](https://source.chromium.org/chromium/chromium/src/+/main:third_party/blink/renderer/platform/graphics/web_graphics_context_3d_video_frame_pool.cc;drc=4268052f5025da8b928c9e59a04493b396acaad3;l=340) after that, so the VideoFrame can end up in many places.
I think it should be on a Meet path, and Meet was sometimes using hardware encoders. But it's hard to track all destinations of the VideoFrame.
Yes, the callback goes to the GPU process, and after that it's difficult to track all the destinations. I guess we have a kill-switch here in case the tests don't cover some use case we're missing; we can disable it if needed.
At least from looking at VideoFrame::MapGMBOrSharedImage() [1], it doesn't seem like we might be mapping cross-process.
Except on ChromeOS, probably.
OK. I was looking into adding support for passing GMB handles in ExportedSharedImage, and just realized that we already send the GMB handle in mojo for VideoFrames via VideoFrame::GetGpuMemoryBufferHandle() here [1], which is very nice. It grabs the handle from the MappableSI.
So ideally we can convert all uses of VideoFrame::WrapExternalGpuMemoryBuffer() first and update this mojo site as the last client.
WDYT?
Code-Review | +1 |
lgtm, thanks.
Ah, that's how it works. The VideoFrame is still mappable, just using the old path. Then we can convert in any order, I guess.
Auto-Submit | +1 |
[MappableSI] Enabled MappableSI in
RenderableGpuMemoryBufferVideoFramePool.
1. Enable MappableSI in RenderableGpuMemoryBufferVideoFramePool.
2. Add a kill-switch for it.
3. Update relevant unittests.