Android started out with android.hardware.Camera. This was a simple set of APIs that enabled developers to quickly implement a camera feature in their app. But having been added back in Android SDK 1, it started to show its age. The market changed and users began to demand a lot more from their phone's camera.
Android then introduced the Camera2 APIs. These gave manufacturers and developers the ability to build more complex camera features. But although the API is comprehensive, many developers found it too complicated to implement for a simple use case.
Recognising the complexity of Camera2, Google has now released a Jetpack library CameraX. This set of APIs wraps around the Camera2 APIs. CameraX introduces the idea of use-cases. These use-cases wrap up a whole set of functionality into a simple API.
CameraX is launching with three use-cases: preview, which gets an image onto a display; image analysis, which gives you access to a stream of images for use in your own algorithms, such as passing them into ML Kit; and image capture, which saves high-quality images.
These three use-cases will cover the vast majority of developers' needs. They can be combined together or used individually.
Not only does CameraX provide use-cases, but it also abstracts away the device-specific nuances. Google is investing in an automated test lab to ensure that the CameraX APIs behave the same regardless of what device you are on.
Of course, Google doesn't want to reduce the number of features available to manufacturers (including themselves for the Pixel range). To cater for these weird and wonderful camera features, they have created an extensions API. With it, you can query for the availability of a particular extension and enable it; if it's not available, graceful fallbacks are used.
First, you need to add the required dependencies. Ensure that you add both of these, even though the second one might sound optional in the developer documentation.
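At the time of writing the library was at alpha02, so the coordinates below pin that version; check for the latest release before copying them.

```groovy
// Both artifacts are required: camera-core provides the API,
// camera-camera2 provides the implementation that backs it.
def camerax_version = "1.0.0-alpha02"
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"
```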
If you don’t add the second one you will see an exception like this:
Caused by: java.lang.IllegalStateException: CameraX not initialized yet.
Next, you will need to add a simple TextureView to your fragment's or activity's layout.
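A minimal layout entry might look like this; the `view_finder` id is an assumption and can be anything, as long as the Kotlin code references the same view.

```xml
<TextureView
    android:id="@+id/view_finder"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```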
You will need to request the android.permission.CAMERA permission at runtime, as you have come to expect.
Once you have the permission, you can start the camera.
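As a sketch of the permission flow, assuming an activity with a `viewFinder` TextureView, a `startCamera()` function defined later, and a hypothetical request code (and `<uses-permission android:name="android.permission.CAMERA" />` declared in the manifest):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Arbitrary request code used to match the permission callback.
private const val REQUEST_CODE_PERMISSIONS = 10

class MainActivity : AppCompatActivity() {

    private fun cameraPermissionGranted() =
        ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
            PackageManager.PERMISSION_GRANTED

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        if (cameraPermissionGranted()) {
            // Defer until the TextureView has been inflated and laid out.
            viewFinder.post { startCamera() }
        } else {
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), REQUEST_CODE_PERMISSIONS)
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int, permissions: Array<String>, grantResults: IntArray
    ) {
        if (requestCode == REQUEST_CODE_PERMISSIONS && cameraPermissionGranted()) {
            viewFinder.post { startCamera() }
        }
    }
}
```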
Notice that we called startCamera() within a post on the TextureView. This is to ensure that the TextureView has been inflated.
We use setOnPreviewOutputUpdateListener() to attach the preview to our TextureView. We do, however, have to remove and re-add the TextureView to the layout, because TextureView internally creates its own SurfaceTexture when it is attached to its parent. If you don't do this, you won't see a camera preview and you will get a message in the error logs: "SurfaceTexture is not attached to a View".
In updateTransform() we correct for the changes in device orientation.
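Here is a sketch of startCamera() and updateTransform() against the alpha02 API (which has since changed in later releases); `viewFinder` and the target sizes are assumptions:

```kotlin
import android.graphics.Matrix
import android.util.Rational
import android.util.Size
import android.view.Surface
import android.view.ViewGroup
import androidx.camera.core.CameraX
import androidx.camera.core.Preview
import androidx.camera.core.PreviewConfig

private fun startCamera() {
    // Configure the preview use-case.
    val previewConfig = PreviewConfig.Builder()
        .setTargetAspectRatio(Rational(1, 1))
        .setTargetResolution(Size(640, 640))
        .build()
    val preview = Preview(previewConfig)

    preview.setOnPreviewOutputUpdateListener { previewOutput ->
        // Re-attach the TextureView so it picks up the new SurfaceTexture.
        val parent = viewFinder.parent as ViewGroup
        parent.removeView(viewFinder)
        parent.addView(viewFinder, 0)

        viewFinder.surfaceTexture = previewOutput.surfaceTexture
        updateTransform()
    }

    // Bind the use-case to this activity's lifecycle.
    CameraX.bindToLifecycle(this, preview)
}

private fun updateTransform() {
    val matrix = Matrix()

    // Rotate around the centre of the TextureView.
    val centerX = viewFinder.width / 2f
    val centerY = viewFinder.height / 2f

    // Compensate for the current display rotation.
    val rotationDegrees = when (viewFinder.display.rotation) {
        Surface.ROTATION_0 -> 0
        Surface.ROTATION_90 -> 90
        Surface.ROTATION_180 -> 180
        Surface.ROTATION_270 -> 270
        else -> return
    }
    matrix.postRotate(-rotationDegrees.toFloat(), centerX, centerY)

    viewFinder.setTransform(matrix)
}
```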
To capture an image, we use the ImageCapture use-case we created in startCamera() and call takePicture(), passing it a File and an OnImageSavedListener.
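A sketch of the capture flow against the alpha02 API; `captureButton`, the output directory, and the log tag are assumptions:

```kotlin
import android.util.Log
import androidx.camera.core.CameraX
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureConfig
import java.io.File

// Create the image capture use-case and bind it alongside the preview
// (inside startCamera()).
val imageCaptureConfig = ImageCaptureConfig.Builder()
    .setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)
    .build()
val imageCapture = ImageCapture(imageCaptureConfig)
CameraX.bindToLifecycle(this, preview, imageCapture)

// On a button click, take a picture and save it to a file.
captureButton.setOnClickListener {
    val file = File(externalMediaDirs.first(), "${System.currentTimeMillis()}.jpg")
    imageCapture.takePicture(file, object : ImageCapture.OnImageSavedListener {
        override fun onImageSaved(file: File) {
            Log.d("CameraXApp", "Photo saved: ${file.absolutePath}")
        }

        override fun onError(
            error: ImageCapture.UseCaseError, message: String, cause: Throwable?
        ) {
            Log.e("CameraXApp", "Photo capture failed: $message", cause)
        }
    })
}
```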
To scan barcodes in real time we make use of the image analysis use-case.
In addition to the CameraX dependencies, you need to follow the Firebase ML Kit setup steps.
Most of the rest of the code matches the image capture detailed above.
Notably, we use an ImageAnalysisConfig.Builder to create an ImageAnalysis use-case, which gives you access to the stream of images coming from the camera.
We call setAnalyzer() to add an Analyzer which is where we will call into Firebase ML. We use fromMediaImage() to get a FirebaseVisionImage, and pass it to the FirebaseVisionBarcodeDetector.
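A sketch of the analyzer against the alpha02 CameraX API and the firebase-ml-vision barcode detector; the frame-skipping flag, log tag, and rotation mapping are assumptions:

```kotlin
import android.util.Log
import androidx.camera.core.CameraX
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageAnalysisConfig
import androidx.lifecycle.Lifecycle
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata
import java.util.concurrent.atomic.AtomicBoolean

val analysisConfig = ImageAnalysisConfig.Builder()
    // Only ever analyse the latest available frame.
    .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    .build()
val imageAnalysis = ImageAnalysis(analysisConfig)

// Tracks whether Firebase is still busy with a previous frame.
val isProcessing = AtomicBoolean(false)
val detector = FirebaseVision.getInstance().visionBarcodeDetector

imageAnalysis.setAnalyzer { image, rotationDegrees ->
    // Skip this frame if the previous one is still being processed,
    // or if the lifecycle has not yet resumed.
    if (!isProcessing.compareAndSet(false, true)) return@setAnalyzer
    if (!lifecycle.currentState.isAtLeast(Lifecycle.State.RESUMED)) {
        isProcessing.set(false)
        return@setAnalyzer
    }

    val mediaImage = image.image
    if (mediaImage == null) {
        isProcessing.set(false)
        return@setAnalyzer
    }
    val rotation = when (rotationDegrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        else -> FirebaseVisionImageMetadata.ROTATION_270
    }

    detector.detectInImage(FirebaseVisionImage.fromMediaImage(mediaImage, rotation))
        .addOnSuccessListener { barcodes ->
            barcodes.firstOrNull()?.rawValue?.let {
                Log.d("CameraXApp", "Barcode: $it")
            }
        }
        .addOnCompleteListener { isProcessing.set(false) }
}

CameraX.bindToLifecycle(this, imageAnalysis)
```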
In the snippet above we have an AtomicBoolean, which keeps track of whether Firebase is currently processing a frame, so that we don't start on a new frame whilst a previous one is still being processed.
We also check the Lifecycle.State to stop Firebase from processing frames before the lifecycle has resumed.
The CameraX library has brought a lot of simplicity back to the camera universe. It simplifies the code for the majority of use-cases, whilst still providing access to the more niche capabilities.
It felt natural to use and performed well (written at the time of alpha02). Some features, such as tap to focus, still feel a little complicated, but hopefully these will be addressed as the library progresses.