Implementing a High-Performance Film Emulation Pipeline in CameraX 1.5: Best Practices for Pre-Save Processing


Akshay Sharma

Apr 19, 2026, 4:41:16 PM
to Android CameraX Discussion Group
Hi everyone,

I am working on integrating a custom film emulation pipeline into an Android application using CameraX 1.5. The goal is to apply complex color science—specifically 3D LUTs, film grain, and halation—directly to the image data before it is persisted to storage via ImageCapture.

Given the updates in version 1.5, I want to ensure I'm using the most efficient "CameraX-native" way to intercept the buffer without introducing significant shutter lag or memory overhead.

I have a few specific questions regarding the implementation architecture:

1. The Interception Point: Effects API vs. ImageAnalysis
With the maturing of the CameraEffect API, is it now the recommended standard for heavy-duty color grading intended for the final save (3D LUTs, or custom GLSL shaders performing the color math), or is it still more performant to process an ImageProxy via ImageAnalysis and save the bitmap manually?

2. Color Space & Format Handling
When applying film simulation, working in the correct color space is critical.

Does CameraX 1.5 provide a streamlined way to ensure the CameraEffect receives a linear RGB buffer, or are we still primarily limited to YUV_420_888 which requires manual conversion before LUT application?

How can we best handle 10-bit HDR input if the device supports it, ensuring the emulation doesn't clip the highlights before the final encode?

3. NPU/GPU Interleaving for Film Grain
Film grain and bloom often require spatial processing that is heavy for the CPU.

What is the best practice for interleaving Vulkan compute shaders within the CameraX pipeline to minimize latency?

Are there specific hooks in 1.5 to prevent the "double-buffering" overhead when moving data between the camera producer and the GPU consumer?

4. Zero Shutter Lag (ZSL) Compatibility
If I implement a custom CameraEffect, will it break the Zero Shutter Lag functionality? How does the pipeline prioritize the "processed" frame versus the "raw" frame in the ZSL ring buffer?

Technical Stack Context:

Target: CameraX 1.5

Processing: 3D LUTs (Cube/PNG), Film Grain (Procedural), Halation (Gaussian Blur/Threshold)

Hardware: Focusing on devices with dedicated NPU/high-end Adreno GPUs.

Looking forward to hearing how others are structuring their imaging engines in the latest builds!

Charcoal Chen

May 4, 2026, 3:34:19 AM
to Android CameraX Discussion Group, thetec...@gmail.com

Thank you for this detailed technical discussion! I’ve been researching the artistic terms you mentioned—like 3D LUTs and halation—to better map them to our library. Please note that I am less experienced with the specific math of film science. My reply is based on how these technical requirements fit into our current and future APIs.

If I've misunderstood your implementation goals, please let me know and we can clarify further!

Critical Note on Resolution (Still Images)

Before choosing an architecture, it's important to note a key trade-off with the CameraEffect API: If you apply a CameraEffect to ImageCapture, the final image is generated through the OpenGL pipeline. This typically limits the resolution to the size of the OpenGL surface (often 1080p for preview consistency) rather than the camera sensor's maximum resolution (e.g., 12MP or 50MP). If high-resolution still capture is your priority, you may need to process the full-resolution ImageProxy via ImageAnalysis or ImageCapture callbacks instead.
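To make the callback path concrete, here is a minimal wiring sketch of the full-resolution in-memory capture route (the executor and the `processAndSave` hook are placeholders for your own film-emulation pass; this is a sketch, not a complete implementation):

```java
import java.util.concurrent.Executor;
import androidx.annotation.NonNull;
import androidx.camera.core.ImageCapture;
import androidx.camera.core.ImageCaptureException;
import androidx.camera.core.ImageProxy;

public final class FullResCapture {
    // Full-resolution capture via ImageCapture's in-memory callback,
    // bypassing the OpenGL effect pipeline entirely.
    public static void capture(ImageCapture imageCapture, Executor executor) {
        imageCapture.takePicture(executor, new ImageCapture.OnImageCapturedCallback() {
            @Override
            public void onCaptureSuccess(@NonNull ImageProxy image) {
                try {
                    // image.getFormat() is typically JPEG or YUV_420_888 here;
                    // decode/convert, run LUT/grain/halation, then encode and persist.
                    processAndSave(image); // hypothetical hook
                } finally {
                    image.close(); // always release the buffer back to the pipeline
                }
            }

            @Override
            public void onError(@NonNull ImageCaptureException exception) {
                // Surface the failure to the app layer.
            }
        });
    }

    private static void processAndSave(ImageProxy image) { /* placeholder */ }
}
```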

1. Effects API vs. ImageAnalysis: Performance Trade-offs

There isn't a single "recommended" solution; it depends on your performance requirements and on whether your processing can be expressed in the selected path:

  • CameraX 1.5/1.6: ImageAnalysis in these versions is designed for CPU-output (YUV/RGB). Running complex math like 3D LUTs on the CPU can be intensive. If your emulation logic can be handled in a GLSL shader, the CameraEffect (GPU) approach will likely offer better performance.
  • Upcoming CameraX 1.7: We are merging support for GPU-based ImageAnalysis using ImageFormat.PRIVATE. This allows you to access buffers as a HardwareBuffer, enabling a zero-copy path to the NPU or Vulkan (Adreno GPUs). You may find that this 1.7 enhancement provides even better performance for heavy ML/AI pipelines than the standard GLSL shader approach.
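For a sense of the per-pixel math involved, here is a plain-Java sketch of the trilinear interpolation at the heart of a 3D LUT lookup — the same arithmetic a GPU texture sampler performs in hardware. The class and layout (R as the fastest axis) are illustrative assumptions, not part of any CameraX API:

```java
// Minimal 3D LUT applier: trilinear interpolation in a size^3 RGB lattice.
public final class Lut3D {
    private final float[] table; // size*size*size*3 floats, R as the fastest axis
    private final int size;

    public Lut3D(float[] table, int size) {
        this.table = table;
        this.size = size;
    }

    // Identity LUT: each lattice point stores its own normalized coordinate.
    public static Lut3D identity(int size) {
        float[] t = new float[size * size * size * 3];
        int i = 0;
        for (int b = 0; b < size; b++)
            for (int g = 0; g < size; g++)
                for (int r = 0; r < size; r++) {
                    t[i++] = r / (float) (size - 1);
                    t[i++] = g / (float) (size - 1);
                    t[i++] = b / (float) (size - 1);
                }
        return new Lut3D(t, size);
    }

    private float at(int r, int g, int b, int ch) {
        return table[((b * size + g) * size + r) * 3 + ch];
    }

    // Map one normalized RGB pixel through the LUT with trilinear interpolation.
    public float[] apply(float r, float g, float b) {
        float x = r * (size - 1), y = g * (size - 1), z = b * (size - 1);
        int x0 = Math.min((int) x, size - 2);
        int y0 = Math.min((int) y, size - 2);
        int z0 = Math.min((int) z, size - 2);
        float fx = x - x0, fy = y - y0, fz = z - z0;
        float[] out = new float[3];
        for (int ch = 0; ch < 3; ch++) {
            // Interpolate along R, then G, then B.
            float c00 = at(x0, y0, z0, ch) * (1 - fx) + at(x0 + 1, y0, z0, ch) * fx;
            float c10 = at(x0, y0 + 1, z0, ch) * (1 - fx) + at(x0 + 1, y0 + 1, z0, ch) * fx;
            float c01 = at(x0, y0, z0 + 1, ch) * (1 - fx) + at(x0 + 1, y0, z0 + 1, ch) * fx;
            float c11 = at(x0, y0 + 1, z0 + 1, ch) * (1 - fx) + at(x0 + 1, y0 + 1, z0 + 1, ch) * fx;
            float c0 = c00 * (1 - fy) + c10 * fy;
            float c1 = c01 * (1 - fy) + c11 * fy;
            out[ch] = c0 * (1 - fz) + c1 * fz;
        }
        return out;
    }
}
```

Doing this per pixel in Java for a 50MP frame is exactly the CPU cost the bullet above warns about, which is why the GPU (shader) path usually wins for LUTs.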
2. Color Space & Linear RGB

CameraX does not currently provide an automatic toggle for linear RGB.

  • The automatic YUV-to-RGB conversion handled by samplerExternalOES in shaders typically results in non-linear (gamma-corrected) sRGB.
  • For professional LUT application, we recommend a manual "de-gamma" step at the start of your shader to move the data into linear space.
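For reference, the de-gamma/re-gamma round trip with the standard sRGB transfer curve looks like this in plain Java (in a real pipeline this math runs per-texel inside the shader; the class name is illustrative):

```java
// sRGB transfer functions (IEC 61966-2-1 piecewise curve).
public final class SrgbTransfer {
    // Decode: sRGB-encoded [0,1] -> linear light ("de-gamma").
    public static double toLinear(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // Encode: linear light -> sRGB-encoded [0,1] ("re-gamma").
    public static double toEncoded(double c) {
        return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
    }
}
```

The usual ordering is toLinear → LUT/grade → toEncoded, so the LUT operates on linear-light values rather than gamma-corrected ones.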
3. NPU/GPU Interleaving for Film Grain
    3.1 Vulkan support
    Please try ImageAnalysis with OUTPUT_IMAGE_FORMAT_PRIVATE in CameraX 1.7.
    3.2 Handling 10-bit Data

    While ImageAnalysis is currently limited to SDR, you can use the CameraEffect API as a workaround for 10-bit processing:

  • By configuring your session for a 10-bit dynamic range (e.g., HLG10), the shader will receive 10-bit data. You can then extract, analyze, and convert this high-fidelity data entirely within the GPU layer before the final save.
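For context on why the highlights survive, here is a plain-Java sketch of the HLG opto-electrical transfer function from BT.2100 that an HLG10 session encodes with (constants are the published spec values; the class name is illustrative):

```java
// HLG OETF (ITU-R BT.2100): scene-linear [0,1] -> HLG signal [0,1].
public final class HlgTransfer {
    static final double A = 0.17883277;
    static final double B = 1 - 4 * A;                // 0.28466892
    static final double C = 0.5 - A * Math.log(4 * A); // 0.55991073

    public static double oetf(double e) {
        // Square-root segment for shadows, logarithmic segment for highlights,
        // which is what keeps bright regions from clipping in a 10-bit encode.
        return e <= 1.0 / 12 ? Math.sqrt(3 * e) : A * Math.log(12 * e - B) + C;
    }
}
```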
4. Zero Shutter Lag (ZSL)

Applying a CameraEffect to ImageCapture will break Zero Shutter Lag. ZSL requires a raw sensor reprocessing path that bypasses the OpenGL effect pipeline.
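Since ZSL is requested at use-case configuration time, the choice is effectively made when you build ImageCapture. A minimal configuration sketch (no effect attached, per the incompatibility noted above):

```java
import androidx.camera.core.ImageCapture;

public final class ZslConfig {
    // Config sketch: request ZSL on the ImageCapture use case. Attaching a
    // CameraEffect to this use case is incompatible with ZSL.
    public static ImageCapture build() {
        return new ImageCapture.Builder()
                .setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG)
                .build();
    }
}
```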


PS: The CameraX 1.6 stable release is now available. I'd recommend upgrading to 1.6.

