We are updating our legacy camera application to use the CameraX APIs instead of the Camera1 APIs. We use the same camera configuration in both: we read the JPEG image byte array from the camera in memory and then upload it to a server. However, we have noticed that the byte array we get from CameraX is roughly 2x the length of the one we get from Camera1. Does the CameraX output contain additional data or metadata in the byte array delivered to the callback?
Camera initialization in Camera1, setting the target resolution and JPEG quality:
camera = Camera.open(cameraId); //cameraId is the ID of back camera
parameters = camera.getParameters();
//set target resolution to 1600 * 1200
parameters.setPictureSize(1600, 1200);
//set JPEG quality to 100
parameters.setJpegQuality(100);
//set image format to JPEG
parameters.setPictureFormat(ImageFormat.JPEG);
Camera initialization in CameraX, setting the target resolution and JPEG quality:
//select back camera
CameraSelector cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA;
//create preview
Preview preview = new Preview.Builder().build();
//create Image Capture use case, set JPEG quality to 100, target resolution to 1600*1200
ImageCapture imageCapture = new ImageCapture.Builder()
.setJpegQuality(100)
.setTargetResolution(new Size(1600, 1200))
.build();
//start camera
provider.bindToLifecycle(
this, cameraSelector, preview, imageCapture
);
Callback method passed to the takePicture method in the Camera1 API:
camera.takePicture(shutterCallback, null, pictureTaken);
private Camera.PictureCallback pictureTaken = new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(final byte[] data, final Camera camera) {
        // for the same scene this is around 400000 bytes
        Log.d("Data byte array length", String.valueOf(data.length));
        // logic to upload data to the server
    }
};
Callback method passed to the takePicture method in the CameraX API:
imageCapture.takePicture(ContextCompat.getMainExecutor(this),
        new ImageCapture.OnImageCapturedCallback() {
            @Override
            public void onCaptureSuccess(@NonNull ImageProxy imageProxy) {
                // convert the JPEG ImageProxy to a byte array
                ByteBuffer buffer = imageProxy.getPlanes()[0].getBuffer();
                buffer.rewind();
                byte[] data = new byte[buffer.remaining()];
                buffer.get(data);
                // for the same scene this is around 600000 bytes
                Log.d("Data byte array length", String.valueOf(data.length));
                imageProxy.close(); // release the image back to the capture pipeline
            }
        });
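One way to answer the metadata question directly is to inspect the JPEG markers in the byte array itself. Below is a minimal, plain-Java sketch (no Android dependencies; the `JpegMeta` class and method names are ours, not from this thread) that sums the APPn segments, which is where JFIF/EXIF/XMP metadata lives:

```java
// Minimal JPEG marker walk to measure metadata overhead (APPn segments).
// Plain Java, no Android dependencies; class and method names are ours.
public class JpegMeta {

    /** Sums the bytes occupied by APPn (0xFFE0..0xFFEF) segments, which is
     *  where JFIF/EXIF/XMP metadata lives, up to the start-of-scan marker. */
    public static int metadataBytes(byte[] jpeg) {
        int total = 0;
        int i = 2; // skip the SOI marker (0xFFD8)
        while (i + 3 < jpeg.length) {
            if ((jpeg[i] & 0xFF) != 0xFF) break;   // lost marker sync: stop
            int marker = jpeg[i + 1] & 0xFF;
            if (marker == 0xDA) break;             // SOS: entropy-coded data follows
            int len = ((jpeg[i + 2] & 0xFF) << 8) | (jpeg[i + 3] & 0xFF);
            if (marker >= 0xE0 && marker <= 0xEF) {
                total += len + 2;                  // segment payload + 2 marker bytes
            }
            i += 2 + len;
        }
        return total;
    }
}
```

Running `metadataBytes(data)` on both the Camera1 and CameraX arrays should show whether metadata accounts for the gap; EXIF (even with an embedded thumbnail) is usually tens of KB at most, so it is unlikely to explain a 200-300 KB difference on its own.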
The final image size is also considerably higher in the CameraX implementation: around 400 KB with Camera1 versus around 700 KB with CameraX. Is there any configuration we are missing, or is there metadata in the byte array that we need to strip off?
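A more likely cause than metadata is the actual capture resolution: setTargetResolution() is only a preference, and CameraX may select a different supported size than Camera1's setPictureSize(1600, 1200). You can verify what was actually encoded by parsing the SOFn marker in the JPEG bytes. A minimal sketch (plain Java; the `JpegSize` class is ours, not part of any API in this thread):

```java
// Reads the encoded width/height from a JPEG byte array by locating the
// first SOFn marker. Plain Java, no Android dependencies; names are ours.
public class JpegSize {

    /** Returns {width, height} from the first SOFn segment, or null if none found. */
    public static int[] sofDimensions(byte[] jpeg) {
        int i = 2; // skip SOI (0xFFD8)
        while (i + 8 < jpeg.length) {
            if ((jpeg[i] & 0xFF) != 0xFF) break;   // lost marker sync: stop
            int marker = jpeg[i + 1] & 0xFF;
            if (marker == 0xDA) break;             // start of scan: no SOF before it
            int len = ((jpeg[i + 2] & 0xFF) << 8) | (jpeg[i + 3] & 0xFF);
            // SOF0..SOF15 are 0xC0..0xCF, excluding DHT (0xC4), JPG (0xC8), DAC (0xCC)
            if (marker >= 0xC0 && marker <= 0xCF
                    && marker != 0xC4 && marker != 0xC8 && marker != 0xCC) {
                int height = ((jpeg[i + 5] & 0xFF) << 8) | (jpeg[i + 6] & 0xFF);
                int width  = ((jpeg[i + 7] & 0xFF) << 8) | (jpeg[i + 8] & 0xFF);
                return new int[] { width, height };
            }
            i += 2 + len;
        }
        return null;
    }
}
```

If `sofDimensions(data)` reports a larger size for the CameraX capture than 1600x1200, the size gap is explained by resolution selection rather than metadata.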
The resolution Size should be expressed in the coordinate frame after rotating the supported sizes by the target rotation. For example, a device with portrait natural orientation in natural target rotation requesting a portrait image may specify 480x640, and the same device, rotated 90 degrees and targeting landscape orientation, may specify 640x480.
--
You received this message because you are subscribed to the Google Groups "Android CameraX Discussion Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to camerax-develop...@android.com.
To view this discussion on the web visit https://groups.google.com/a/android.com/d/msgid/camerax-developers/5374ccae-fd41-41f7-b8b8-c1ba43ccac96n%40android.com.