1. Currently we use every use-case CameraX can give us :D We use:
- a preview use-case
- an image-analysis (YUV) use-case that searches with TensorFlow Lite for the object we want to analyze (really low res here, like 96x96)
- we use the bounding box of the object-detection result to configure the camera --> focus point, AE point and white balance (metering points in general)
- if everything is metered and our quality assumptions from the image analysis are met, we assume we have a good capture situation and trigger the capture use-case in minimal-latency mode (since everything is already metered)
- an image-capture (YUV, or JPEG fallback) use-case (without saving) in minimal-latency mode, since we expect everything to be correctly metered by the time the capture is triggered. The result is passed into a YUV-to-RGB conversion and analyzed in a C++ core implementation. This is the use-case where we need more fine-grained resolution control. The YUV config is achieved by `captureBuilder.setBufferFormat(ImageFormat.YUV_420_888);` (a sketch of this metering + capture wiring follows this list)
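To make the wiring concrete, here is a minimal sketch (not our production code) of the metering and capture setup with the public CameraX API; `previewView`, `camera` and `boundingBox` (the detection result, already mapped into PreviewView coordinates) are assumed names from our code:
```
// Sketch: derive metering points from the detection bounding box.
MeteringPointFactory factory = previewView.getMeteringPointFactory();
MeteringPoint point = factory.createPoint(boundingBox.centerX(), boundingBox.centerY());
FocusMeteringAction action = new FocusMeteringAction.Builder(point,
        FocusMeteringAction.FLAG_AF | FocusMeteringAction.FLAG_AE | FocusMeteringAction.FLAG_AWB)
        .build();
camera.getCameraControl().startFocusAndMetering(action);

// Capture use-case: minimal latency (everything is metered already) + YUV buffers.
ImageCapture.Builder captureBuilder = new ImageCapture.Builder()
        .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY);
captureBuilder.setBufferFormat(ImageFormat.YUV_420_888);
ImageCapture mImageCaptureUseCase = captureBuilder.build();
```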
Our core image-scan pipeline can handle arbitrary aspect ratios. The object we scan always has a 1:1 aspect ratio, so everything above works fine, but it involves unnecessary copy operations. Since we never need more than a 1:1 aspect ratio, we try to spare some resources by getting already-cropped images. This is achieved by putting all use-cases into a group and applying the desired aspect ratio there:
```
// All use-cases share one ViewPort, so they get a consistent 1:1 crop.
UseCaseGroup useCaseGroup = new UseCaseGroup.Builder()
        .addUseCase(mPreviewUseCase)
        .addUseCase(mImageAnalysisUseCase)
        .addUseCase(mImageCaptureUseCase)
        .setViewPort(new ViewPort.Builder(new Rational(1, 1),
                activity.getWindowManager().getDefaultDisplay().getRotation()).build())
        .build();
```
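One caveat here, as far as we understand it: for the YUV outputs the viewport does not physically crop the buffers, it only attaches crop-rect metadata, which is exactly where our unnecessary copies come from. A minimal sketch of how we read it back in the capture callback:
```
// Sketch: inside ImageCapture.OnImageCapturedCallback. The buffer is still
// full stream resolution; the 1:1 viewport arrives only as a crop rect.
@Override
public void onCaptureSuccess(@NonNull ImageProxy image) {
    Rect cropRect = image.getCropRect();  // the 1:1 region selected by the ViewPort
    // pass the planes plus cropRect into our YUV-to-RGB conversion / C++ core
    image.close();
}
```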
2. Yes, we do use this interface --> see the answer in part 1 (the image-capture use-case).
When I try to summarize, what we really need here is:
1. explicitly set the image format --> already achievable via ImageCapture.Builder#setBufferFormat(int)
2. explicitly set the capture resolution --> some way of writing our own resolution selector --> can we implement our own selector, or bypass the CameraX setTargetResolution mechanics here (see the snippet after this list)? It seems we would also have to handle the JPEG fallback ourselves if the camera hardware level does not provide sufficient resolution for YUV capture.
3. explicitly handle the cropping, i.e. crop all use-cases to a 1:1 aspect ratio while keeping the shorter axis at maximum resolution --> does the use-case group already achieve this?
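To illustrate point 2: the only knob we know of today is the target-resolution hint, which CameraX treats as best-effort, e.g. (example values, not our real config):
```
// Sketch of the current mechanics: setTargetResolution() is only a hint and
// CameraX picks the closest supported size; we would prefer a hook where we
// choose from the list of supported sizes ourselves.
ImageCapture.Builder captureBuilder = new ImageCapture.Builder()
        .setTargetResolution(new Size(2448, 2448))
        .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY);
```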