Akshay Sharma
Apr 19, 2026, 4:41:16 PM
to Android CameraX Discussion Group
Hi everyone,
I am working on integrating a custom film emulation pipeline into an Android application using CameraX 1.5. The goal is to apply complex color science—specifically 3D LUTs, film grain, and halation—directly to the image data before it is persisted to storage via ImageCapture.
Given the updates in version 1.5, I want to ensure I'm using the most efficient "CameraX-native" way to intercept the buffer without introducing significant shutter lag or memory overhead.
I have a few specific questions regarding the implementation architecture:
1. The Interception Point: Effects API vs. ImageAnalysis
With the maturing of the CameraEffect API, is it now the recommended standard for heavy-duty color grading (3D LUTs or custom GLSL shaders) intended for the final save, or is it still more performant to process an ImageProxy via ImageAnalysis and save the bitmap manually?
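For concreteness, the LUT step I'm talking about is plain trilinear interpolation over a .cube-style grid. Here's a pure-Kotlin CPU sketch of it (no CameraX dependencies; the `Lut3d` class and its red-fastest memory layout are just my illustration, not any library API — the real pipeline would do this in a shader):

```kotlin
// Trilinear interpolation through a 3D LUT (e.g. parsed from a .cube file).
// Pure-Kotlin sketch for illustration; in production this runs on the GPU.
class Lut3d(private val size: Int, private val table: FloatArray) {
    // table holds size^3 RGB triples; red varies fastest, then green, then blue
    private fun tap(ri: Int, gi: Int, bi: Int, c: Int): Float =
        table[3 * ((bi * size + gi) * size + ri) + c]

    fun sample(r: Float, g: Float, b: Float): FloatArray {
        val n = size - 1
        val x = r.coerceIn(0f, 1f) * n
        val y = g.coerceIn(0f, 1f) * n
        val z = b.coerceIn(0f, 1f) * n
        val x0 = x.toInt().coerceAtMost(n - 1); val fx = x - x0
        val y0 = y.toInt().coerceAtMost(n - 1); val fy = y - y0
        val z0 = z.toInt().coerceAtMost(n - 1); val fz = z - z0
        return FloatArray(3) { c ->
            fun lerp(a: Float, bb: Float, t: Float) = a + (bb - a) * t
            // interpolate along red first, then green, then blue
            val c00 = lerp(tap(x0, y0, z0, c), tap(x0 + 1, y0, z0, c), fx)
            val c10 = lerp(tap(x0, y0 + 1, z0, c), tap(x0 + 1, y0 + 1, z0, c), fx)
            val c01 = lerp(tap(x0, y0, z0 + 1, c), tap(x0 + 1, y0, z0 + 1, c), fx)
            val c11 = lerp(tap(x0, y0 + 1, z0 + 1, c), tap(x0 + 1, y0 + 1, z0 + 1, c), fx)
            lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)
        }
    }
}
```

The question is really about where this sampling should live in the CameraX graph, not the math itself.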
2. Color Space & Format Handling
When applying film simulation, working in the correct color space is critical.
Does CameraX 1.5 provide a streamlined way to ensure the CameraEffect receives a linear RGB buffer, or are we still primarily limited to YUV_420_888, which requires manual conversion before the LUT can be applied?
How can we best handle 10-bit HDR input if the device supports it, ensuring the emulation doesn't clip the highlights before the final encode?
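To illustrate the manual-conversion cost I'd like to avoid: this is the per-pixel BT.601 full-range math for turning YUV_420_888 samples into RGB. Pure-Kotlin sketch only — it deliberately ignores the row/pixel strides a real ImageProxy plane walk has to handle, and the function name is mine:

```kotlin
// Per-pixel YUV -> RGB (BT.601 full-range), the manual step needed before a
// LUT can be applied to YUV_420_888 data. Sketch; on-device this belongs in
// a shader, not a per-pixel CPU loop.
fun yuvToRgb(y: Int, u: Int, v: Int): IntArray {
    val yf = y.toFloat()
    val uf = u - 128f          // chroma is stored offset by 128
    val vf = v - 128f
    val r = yf + 1.402f * vf
    val g = yf - 0.344136f * uf - 0.714136f * vf
    val b = yf + 1.772f * uf
    return intArrayOf(
        r.toInt().coerceIn(0, 255),
        g.toInt().coerceIn(0, 255),
        b.toInt().coerceIn(0, 255),
    )
}
```

Doing this (plus the inverse after grading) per frame is exactly the overhead that makes a native RGB surface from the Effects pipeline attractive.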
3. NPU/GPU Interleaving for Film Grain
Film grain and bloom often require spatial processing that is too heavy for the CPU.
What is the best practice for interleaving Vulkan compute shaders within the CameraX pipeline to minimize latency?
Are there specific hooks in 1.5 to prevent the "double-buffering" overhead when moving data between the camera producer and the GPU consumer?
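For reference, the grain I'm generating today is just a seeded, zero-centred noise plane built on the CPU (function name and shape are my own; a production version would be a compute-shader noise function, which is why the interleaving question matters):

```kotlin
import kotlin.random.Random

// Procedural monochrome grain: one roughly-Gaussian sample per pixel, seeded
// per frame so the pattern animates between frames. CPU sketch only.
fun grainPlane(width: Int, height: Int, frameSeed: Long, strength: Float): FloatArray {
    val rng = Random(frameSeed)
    return FloatArray(width * height) {
        // averaging three uniforms approximates a Gaussian (central limit theorem)
        val n = (rng.nextFloat() + rng.nextFloat() + rng.nextFloat()) / 3f - 0.5f
        n * 2f * strength   // zero-centred, roughly in [-strength, strength]
    }
}
```

Adding this plane to the luma channel is cheap; it's the blur-based passes (bloom, halation) that need the GPU, hence the double-buffering concern above.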
4. Zero Shutter Lag (ZSL) Compatibility
If I implement a custom CameraEffect, will it break the Zero Shutter Lag functionality? How does the pipeline prioritize the "processed" frame versus the "raw" frame in the ZSL ring buffer?
Technical Stack Context:
Target: CameraX 1.5
Processing: 3D LUTs (Cube/PNG), Film Grain (Procedural), Halation (Gaussian Blur/Threshold)
Hardware: Focusing on devices with dedicated NPUs / high-end Adreno GPUs.
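For anyone unfamiliar with the halation step listed above: the core of it is threshold, blur, then add the glow back. A 1-D pure-Kotlin sketch of one pass (names are mine; the real implementation is a separable 2-D Gaussian on the GPU):

```kotlin
// Halation as threshold-then-blur: keep only energy above `threshold`,
// blur it, and add the result back onto the original signal.
// Horizontal 1-D pass shown for brevity; a real pass is separable 2-D.
fun halationPass1d(row: FloatArray, threshold: Float, kernel: FloatArray): FloatArray {
    val radius = kernel.size / 2
    // isolate the highlight energy that will "bleed"
    val bright = FloatArray(row.size) { i -> if (row[i] > threshold) row[i] - threshold else 0f }
    return FloatArray(row.size) { i ->
        var acc = 0f
        for (k in kernel.indices) {
            val j = (i + k - radius).coerceIn(0, row.size - 1) // clamp at edges
            acc += bright[j] * kernel[k]
        }
        row[i] + acc  // original signal plus the blurred glow
    }
}
```

(A film look would also tint the glow toward red/orange before the add; omitted here to keep the sketch short.)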
Looking forward to hearing how others are structuring their imaging engines in the latest builds!