Thank you for this detailed technical discussion! I’ve been researching the artistic terms you mentioned—like 3D LUTs and halation—to better map them to our library. Please note that I am less experienced with the specific math of film science. My reply is based on how these technical requirements fit into our current and future APIs.
If I've misunderstood your implementation goals, please let me know and we can clarify further!
**Critical Note on Resolution (Still Images)**

Before choosing an architecture, it's important to note a key trade-off with the CameraEffect API: if you apply a CameraEffect to ImageCapture, the final image is generated through the OpenGL pipeline. This typically limits the resolution to the size of the OpenGL surface (often 1080p for preview consistency) rather than the camera sensor's maximum resolution (e.g., 12MP or 50MP). If high-resolution still capture is your priority, you may need to process the full-resolution ImageProxy via ImageAnalysis or ImageCapture callbacks instead.
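To make the callback path concrete, here is a minimal sketch of capturing at full sensor resolution with a plain ImageCapture (no effect attached) and applying the look in app code afterward. `applyFilmLook` and `saveToGallery` are hypothetical helpers standing in for your own grading and storage logic:

```kotlin
// Sketch: keep ImageCapture free of CameraEffect so stills arrive at full
// sensor resolution, then process the ImageProxy yourself.
imageCapture.takePicture(
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageCapturedCallback() {
        override fun onCaptureSuccess(image: ImageProxy) {
            try {
                val bitmap = image.toBitmap()       // full-resolution frame
                val graded = applyFilmLook(bitmap)  // hypothetical: 3D LUT + halation
                saveToGallery(graded)               // hypothetical save helper
            } finally {
                image.close()                       // always release the buffer
            }
        }

        override fun onError(exception: ImageCaptureException) {
            Log.e("Capture", "Capture failed", exception)
        }
    }
)
```

The trade-off is that the look is applied on the CPU (or your own GPU code) after capture, so it will not be visible in the live preview unless you also attach a preview-only effect.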
**1. Effects API vs. ImageAnalysis: Performance Trade-offs**

There isn't a single "recommended" solution; it depends on your performance requirements and on which constraints of the selected path your solution can accept:
CameraX does not currently provide an automatic toggle for linear RGB.
While ImageAnalysis is currently limited to SDR, you can use the CameraEffect API as a workaround for 10-bit processing:
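As a rough sketch of that workaround, a custom effect is attached through a UseCaseGroup at bind time. `FilmGrainEffect` below is a hypothetical `CameraEffect` subclass standing in for your own GPU shader (LUT, halation, etc.); the rest uses the standard CameraX binding API:

```kotlin
// Sketch: attach a custom CameraEffect via a UseCaseGroup.
// FilmGrainEffect is hypothetical; its constructor declares which
// targets (PREVIEW, IMAGE_CAPTURE, VIDEO_CAPTURE) it applies to.
val effect: CameraEffect = FilmGrainEffect()

val useCaseGroup = UseCaseGroup.Builder()
    .addUseCase(preview)
    .addUseCase(imageCapture)
    .addEffect(effect)
    .build()

cameraProvider.bindToLifecycle(
    lifecycleOwner,
    CameraSelector.DEFAULT_BACK_CAMERA,
    useCaseGroup
)
```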
Applying a CameraEffect to ImageCapture will break Zero Shutter Lag. ZSL requires a raw sensor reprocessing path that bypasses the OpenGL effect pipeline.
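For reference, ZSL is requested through the capture mode on the ImageCapture builder; it is this configuration that conflicts with an effect targeting still capture:

```kotlin
// Sketch: requesting Zero Shutter Lag. If a CameraEffect targets
// IMAGE_CAPTURE, this reprocessing path is unavailable and CameraX
// falls back to a regular (non-ZSL) capture.
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_ZERO_SHUTTER_LAG)
    .build()
```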
P.S. CameraX 1.6 has been released as stable; I'd recommend upgrading to 1.6.