YUV SkImages (CVPixelBuffer and AHardwareBuffer)



Apr 10, 2024, 1:05:43 PM
to skia-discuss
Hey all!

I'm working on a Skia project where frames from the Camera are streamed in realtime.

1. On iOS, I receive CMSampleBuffers (each containing a CVPixelBuffer, which can be converted to an MTLTexture), and they are normally 8-bit full-range YUV (4:2:0).
I found a way to convert those YUV buffers to a SkImage by using GrYUVABackendTextures.
Here is my implementation: react-native-skia/SkiaCVPixelBufferUtils.h - it is quite advanced in that it correctly reads the CVPixelBuffer's format to generate the right GrYUVABackendTexture (e.g. supporting 4:4:4, 4:2:0, and 4:2:2 layouts, 8-bit and 10-bit depths, full- and video-range, and single-, bi-, and tri-planar buffers).
(As of writing this, I have a small memory leak somewhere in that code, and I'm not sure where. If anyone reading this spots the mistake in my code, please let me know!)
2. On Android, I receive AHardwareBuffers, which I convert to an SkImage by using SkImages::DeferredFromAHardwareBuffer.
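As a quick sketch of what the full- vs. video-range distinction above means in practice (not from the thread; helper names are mine): video range reserves headroom and footroom in the code space, so samples have to be rescaled before they can be treated as [0, 1] values.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Video (limited) range: n-bit luma occupies [16, 235] << (n - 8),
// so 8-bit Y spans [16, 235] and 10-bit Y spans [64, 940].
double videoRangeLumaToUnit(int y, int bitDepth) {
  const double lo = 16.0 * (1 << (bitDepth - 8));
  const double hi = 235.0 * (1 << (bitDepth - 8));
  return std::clamp((y - lo) / (hi - lo), 0.0, 1.0);
}

// Full range: luma occupies the entire code space [0, 2^n - 1].
double fullRangeLumaToUnit(int y, int bitDepth) {
  return y / double((1 << bitDepth) - 1);
}
```

In Skia this choice is expressed through the SkYUVColorSpace you hand to the YUVA APIs (the enum has separate limited- and full-range variants), rather than by rescaling samples yourself.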

I have two questions now:

1. iOS: is there an API similar to the Android one for creating an SkImage easily, or do I have to do all of the format-checking logic myself (as seen in the SkiaCVPixelBufferUtils.h file I shared above)? A one-stop method like SkImages::DeferredFromCVPixelBuffer would be dope.
2. Android: the SkImages::DeferredFromAHardwareBuffer API doesn't seem to support YUV buffers (AHARDWAREBUFFER_FORMAT_Y8Cb8Cr8_420 for default YUV_420, or AHARDWAREBUFFER_FORMAT_YCbCr_P010 for HDR), only RGB. Since YUV_420 is the de facto standard format for getting application access to such realtime frames (e.g. in ImageReader instances), is there any suggested workaround, or maybe a future plan to support YUV AHardwareBuffers?
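For anyone following along, the subsampling layouts listed above differ only in their chroma-plane dimensions; a minimal sketch of the arithmetic (function name is mine):

```cpp
#include <cassert>
#include <cstddef>

struct PlaneSize { std::size_t width, height; };

// Chroma-plane dimensions for the common Y'CbCr layouts:
// 4:2:0 halves both axes (hSub = vSub = 2), 4:2:2 halves only the
// width (hSub = 2, vSub = 1), and 4:4:4 subsamples nothing (1, 1).
// Odd dimensions round up, matching how planar buffers are allocated.
PlaneSize chromaPlaneSize(std::size_t w, std::size_t h, int hSub, int vSub) {
  return { (w + hSub - 1) / hSub, (h + vSub - 1) / vSub };
}
```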



Apr 10, 2024, 1:09:17 PM
to skia-discuss
Note: 8-bit YUV_420 is the default format when streaming frames from the camera (on iOS, AVCaptureVideoDataOutput; on Android, ImageReader), which is why I'm asking for direct YUV support in those APIs.

It is possible for me to do the YUV -> RGB conversion on my end and then work with RGB CVPixelBuffers/AHardwareBuffers, but the conversion comes with an overhead (on older Android phones it runs on the CPU without NEON), and I wanted to experiment with whether YUV textures (which get converted to RGB on the GPU, so less latency in my render code) are more efficient than RGB textures.
Also, YUV supports 10-bit HDR, which RGB does not support in camera frames (CVPixelBuffer/AHardwareBuffer), so this would be the only way to use Skia for rendering realtime HDR frames - assuming Skia supports 10-bit HDR (haven't reached that point yet lol)
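To make that trade-off concrete, this is roughly the per-pixel math a YUV -> RGB conversion performs (full-range BT.601 assumed here as an example; the coefficients differ for BT.709 and for video range). A GPU shader does this per sampled texel essentially for free, while a CPU path must loop over every pixel:

```cpp
#include <algorithm>
#include <cassert>

struct RGB { int r, g, b; };

// Full-range BT.601 conversion for 8-bit samples (Cb/Cr centered at 128).
RGB yuvFullRangeToRgb601(int y, int cb, int cr) {
  auto clamp8 = [](double v) {
    return (int)std::clamp(v + 0.5, 0.0, 255.0);  // round and clamp to [0, 255]
  };
  const double u = cb - 128.0, v = cr - 128.0;
  return { clamp8(y + 1.402 * v),
           clamp8(y - 0.344136 * u - 0.714136 * v),
           clamp8(y + 1.772 * u) };
}
```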

Jim Van Verth

Apr 10, 2024, 1:12:34 PM
to skia-d...@googlegroups.com
1. There is no iOS API like that, and we don't really have the bandwidth to work on one at the moment. I think we'd be happy to add one though if you'd be willing to code it up.
2. I defer to the Android experts on this one.





Apr 10, 2024, 1:46:15 PM
to skia-discuss
Cool - how do I get familiar with the contributing flow for Skia? Can I contribute through the GitHub mirror? I can submit the code I wrote - it supports BGRA_32 as well as YUV 4:2:0, 4:4:4, and 4:2:2, all in video- and full-range, and both 8-bit and 10-bit. I just need help with writing tests.


Apr 11, 2024, 4:10:59 AM
to skia-discuss

Btw, a quick side question: which approach is faster for converting a CVPixelBuffer to an SkImage?

1. Using CVMetalTextureCacheCreateTextureFromImage -> MTLTexture

CVMetalTextureCacheRef textureCache = getTextureCache();

// Convert CVPixelBufferRef -> CVMetalTextureRef
CVMetalTextureRef cvTexture;
CVReturn result = CVMetalTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache, pixelBuffer, nil,
    MTLPixelFormatBGRA8Unorm, width, height,
    /* planeIndex */ 0, &cvTexture);
if (result != kCVReturnSuccess) {
  // handle the error
}
id<MTLTexture> mtlTexture = CVMetalTextureGetTexture(cvTexture);
// Note: cvTexture must stay alive as long as the MTLTexture is in use,
// and must eventually be CFRelease'd - forgetting that is a leak.

// Wrap the MTLTexture in an SkImage
GrMtlTextureInfo textureInfo;
textureInfo.fTexture.retain((__bridge void *)mtlTexture);
GrBackendTexture backendTexture((int)mtlTexture.width, (int)mtlTexture.height,
                                skgpu::Mipmapped::kNo, textureInfo);

// TODO: Adopt or Borrow?
auto image = SkImages::AdoptTextureFrom(
    context.get(), backendTexture, kTopLeft_GrSurfaceOrigin,
    kBGRA_8888_SkColorType, kOpaque_SkAlphaType, SkColorSpace::MakeSRGB());

2. Using SkData Without Copy?

// Step 1: Extract the CVPixelBufferRef from the CMSampleBufferRef
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// Step 2: Lock the pixel buffer to access the raw pixel data
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Step 3: Get information about the image
void *baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);

// Assuming the pixel format is 32BGRA, which is common for iOS video frames.
// You might need to adjust this based on the actual pixel format.
SkImageInfo info = SkImageInfo::Make(width, height, kBGRA_8888_SkColorType,
                                     kOpaque_SkAlphaType);

// Step 4: Create an SkImage from the pixel buffer
sk_sp<SkData> data =
    SkData::MakeWithoutCopy(baseAddress, height * bytesPerRow);
sk_sp<SkImage> image = SkImages::RasterFromData(info, data, bytesPerRow);
auto texture = SkiaMetalSurfaceFactory::makeTextureFromImage(image);
// Step 5: Unlock the pixel buffer
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
return texture;

I have a feeling that the SkData approach keeps the pixels in CPU memory (so they still have to be uploaded), while the MTLTexture stays on the GPU, but I wanted to confirm with you guys, since the time it takes to generate the SkImage is almost identical for both. I haven't benchmarked actual GPU frame time/flush differences yet.
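One detail worth noting about the SkData variant: CVPixelBufferGetBytesPerRow can be larger than width * 4 because rows may be padded for alignment, which is why the real stride is passed to SkImages::RasterFromData instead of a computed one. A trivial sketch of the addressing (function name is mine):

```cpp
#include <cassert>
#include <cstddef>

// Byte offset of pixel (x, y) in a 4-bytes-per-pixel (e.g. BGRA) buffer
// whose rows are padded out to bytesPerRow. Using width * 4 instead of
// the true stride would skew every row after the first.
std::size_t pixelOffset(std::size_t x, std::size_t y, std::size_t bytesPerRow) {
  return y * bytesPerRow + x * 4;
}
```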


Apr 11, 2024, 5:17:24 AM
to skia-discuss
I just found an interesting "bug" (?) in the code with MTLTexture (the first one I sent).

This is how I create the offscreen surface that I use to render my Frames into:

auto ctx = new OffscreenRenderContext(
    device, ThreadContextHolder::ThreadSkiaMetalContext.skContext,
    ThreadContextHolder::ThreadSkiaMetalContext.commandQueue, width, height);

// Create a GrBackendTexture from the Metal texture
GrMtlTextureInfo info;
info.fTexture.retain((__bridge void *)ctx->texture);
GrBackendTexture backendTexture(width, height, skgpu::Mipmapped::kNo, info);

// Create an SkSurface from the GrBackendTexture
auto surface = SkSurfaces::WrapBackendTexture(
    ThreadContextHolder::ThreadSkiaMetalContext.skContext.get(),
    backendTexture, kTopLeft_GrSurfaceOrigin, /* sampleCnt */ 0,
    kBGRA_8888_SkColorType, nullptr, nullptr,
    [](void *addr) { delete (OffscreenRenderContext *)addr; }, ctx);

And then I just render like this:

const surface = getOffscreenSurface(1080, 1920)
const onFrame = (frame) => {
  const canvas = surface.getCanvas()
  canvas.clear(red) // clear with red background
  const image = cvPixelBufferToSkImage(frame.pixelBuffer)
  canvas.drawImage(image, 0, 0)
}

When I re-use that single surface for my rendering on each frame (as seen above), it starts to flicker a lot: for some reason the SkImage that contains the contents of the CVPixelBuffer (the MTLTexture) just doesn't draw, so the red background shows through - hence the flickering.

When I re-create the surface on each render (i.e. when I move the getOffscreenSurface(..) call into the onFrame function), it draws perfectly fine, and to my surprise also pretty smoothly.

Is there anything I'm doing wrong? Why does it seemingly randomly fail to draw the SkImage when I re-use the surface?
It works fine with the SkData approach (the second one); it only fails when using MTLTexture.

Jim Van Verth

Apr 19, 2024, 12:10:36 PM
to skia-d...@googlegroups.com
Sorry about the delay -- been swamped.

I don't think there's much difference between the approaches in terms of performance. CVMetalTextureCacheCreateTextureFromImage will, as you said, create a GPU image that wraps the MTLTexture in GPU memory. That said, I think there's still an upload that happens from the CVPixelBuffer to the MTLTexture, as the documentation says that the CVPixelBuffer is in main memory, similar to our SkPixmap. So there's an upload either way. The SkData approach might be slightly better as the upload is batched with our rendering commands as opposed to whatever Apple is doing, but I wouldn't be surprised if they're quite similar.

And I don't have quite enough context, but that's probably related to what's going on with your offscreen surface rendering issue -- the CVPixelBuffer isn't getting consistently uploaded to the MTLTexture you created the first time through. When you recreate the surface it creates a new MTLTexture and performs the upload, and the same thing happens when you use the SkData approach.


Apr 20, 2024, 4:09:12 AM
to skia-discuss
Hey no worries, thank you for your detailed reply!

I've actually been measuring much better results with the MTLTexture approach - less CPU usage and faster frame time (in Camera 2x, in Video/AssetReader almost 5x) - not sure if my SkData code does something wrong but it's much slower.

Anyways, I fixed all the issues and now it renders perfectly.
I can render YUV and RGB CVPixelBuffers on iOS, and RGB buffers on Android.
It would be dope to have YUV AHardwareBuffer support on Android.
I'll try to make the iOS contribution in the coming days, adding CVPixelBuffer methods just like the Android ones.
