Hi, based on my personal experience, I'd say there are at least three ways to interface with the camera on Android:
1) use CameraX in Java/Kotlin; this is probably the easiest, but also the least flexible:
https://developer.android.com/media/camera/camerax
2) use Camera2 in Java/Kotlin; this is the lower-level API on the Java/Kotlin side:
https://developer.android.com/media/camera/camera2
3) use the Camera2 NDK, which is the equivalent of Camera2 with a C API:
https://developer.android.com/ndk/reference/group/camera

If you decide to go with point 3, some things to remember are:
1) the docs on the NDK side are ... well, you will find out, but you can always refer to the equivalent Java documentation, where you may find the details you were looking for.
2) you will need to write your own camera manager, and that can become quite complex, as you have to deal with details specific to camera management, like auto-exposure, auto-focus, frame-rate intervals, etc.
3) you have different destinations for the images you grab: you can direct them to textures (SurfaceTexture), to media writers (for example, to save a short clip to disk), or to ImageReader, which is the class that allows you to get the buffer in CPU memory.
4) you will have to deal with different image formats, and not all of them are supported on all devices. I went with YUV_420_888, but then you have to convert to RGB on your own (a little fun of its own).
5) some links to start with:
a tutorial: https://www.sisik.eu/blog/android/ndk/camera and its codebase: https://github.com/sixo/native-camera
the NDK sample: https://github.com/android/ndk-samples/tree/main/camera
an example (by myself) with the native camera and ImageReader only: https://github.com/AndrewBloom/AndroidOpenCVCamera
This last one has two branches: one is a basic example I found with OpenCV image processing, and the other uses only ImageReader to get the image data. There may be errors, but also some nice things, like the lockless triple buffer it uses.
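To give an idea of the bookkeeping point 2 involves: on the NDK side the device reports its supported AE target FPS ranges through the ACAMERA_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES characteristic, and picking one is up to you. A minimal sketch of that selection logic (pickFpsRange is my own hypothetical helper, not an NDK function):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical helper: choose an AE target FPS range from the list the
// device reports. Prefers a fixed range matching the desired rate, then the
// tightest range containing it, otherwise falls back to the first entry.
// Assumes 'supported' is non-empty.
std::pair<int32_t, int32_t> pickFpsRange(
        const std::vector<std::pair<int32_t, int32_t>>& supported,
        int32_t desired) {
    std::pair<int32_t, int32_t> best = supported.front();
    for (const auto& r : supported) {
        if (r.first == desired && r.second == desired) return r;  // exact fixed rate
        if (r.first <= desired && desired <= r.second &&
            (best.first > desired || best.second < desired ||
             r.second - r.first < best.second - best.first)) {
            best = r;  // tightest range that still contains the desired rate
        }
    }
    return best;
}
```

A fixed range like {30, 30} locks the exposure time to the frame rate, while a wide range like {7, 60} lets AE slow the camera down in low light; which one you want depends on your use case.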
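For the YUV_420_888 to RGB conversion in point 4, the per-pixel math is the standard BT.601 full-range transform; the fiddly part (row/pixel strides and the 2x2 chroma subsampling of the three planes) is left to the caller in this sketch:

```cpp
#include <algorithm>
#include <cstdint>

// Clamp an intermediate result back into the 0..255 byte range.
static inline uint8_t clampToByte(int v) {
    return static_cast<uint8_t>(std::min(255, std::max(0, v)));
}

// Convert one (Y, U, V) triple to RGB using fixed-point BT.601 full-range
// coefficients: R = Y + 1.402 V', G = Y - 0.344 U' - 0.714 V',
// B = Y + 1.772 U', with U' = U - 128 and V' = V - 128.
void yuvToRgb(uint8_t y, uint8_t u, uint8_t v,
              uint8_t* r, uint8_t* g, uint8_t* b) {
    const int c = y;
    const int d = u - 128;
    const int e = v - 128;
    *r = clampToByte(c + ((359 * e) >> 8));           // 1.402 * 256 ≈ 359
    *g = clampToByte(c - ((88 * d + 183 * e) >> 8));  // 0.344 * 256 ≈ 88, 0.714 * 256 ≈ 183
    *b = clampToByte(c + ((454 * d) >> 8));           // 1.772 * 256 ≈ 454
}
```

The caller still has to walk the three planes using the row stride and pixel stride reported for each plane (the chroma planes may be interleaved with a pixel stride of 2), sharing each U/V sample across a 2x2 block of Y samples.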
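And about the lockless triple buffer mentioned in point 5: the idea is that the camera thread always has a free slot to write into while the render thread always reads the newest completed frame, with a single atomic holding the "middle" slot. This is a simplified sketch of the concept, not the actual code from my repo:

```cpp
#include <atomic>
#include <cstdint>

// Single-producer / single-consumer triple buffer. The producer and consumer
// each own one slot; the third slot sits in an atomic "middle" along with a
// flag bit saying whether it holds fresh, unread data.
template <typename T>
class TripleBuffer {
public:
    // Producer: the slot to fill with the next frame.
    T& writeSlot() { return slots_[write_]; }

    // Producer: publish the slot just written and take a free one back.
    void publish() {
        uint_fast8_t prev = middle_.exchange(write_ | kFresh,
                                             std::memory_order_acq_rel);
        write_ = prev & kIndexMask;  // old middle becomes the next write slot
    }

    // Consumer: the newest published frame, or the previously read frame
    // again if nothing new has arrived.
    const T& read() {
        if (middle_.load(std::memory_order_acquire) & kFresh) {
            uint_fast8_t prev = middle_.exchange(read_,
                                                 std::memory_order_acq_rel);
            read_ = prev & kIndexMask;  // take the fresh slot, give back ours
        }
        return slots_[read_];
    }

private:
    static constexpr uint_fast8_t kFresh = 4;      // "unread data" flag bit
    static constexpr uint_fast8_t kIndexMask = 3;  // slot index bits
    T slots_[3] = {};
    uint_fast8_t write_ = 0;                 // owned by the producer thread
    uint_fast8_t read_ = 1;                  // owned by the consumer thread
    std::atomic<uint_fast8_t> middle_{2};    // shared hand-off slot
};
```

Neither side ever blocks: if the camera outruns the renderer, old frames are silently overwritten, which is usually exactly what you want for a live preview.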
Hope this helps,
Andrea