Building Barcode/QR code scanner for Android using Google ML Kit and CameraX

In this article, we will learn how to create a barcode scanner using Google ML Kit and Jetpack CameraX.

June 3, 2020 (Android: 16.0.0 / iOS: 0.60.0):
This is the first release of ML Kit as a standalone SDK, independent from Firebase. This SDK offers all the on-device APIs that were previously offered through the ML Kit for Firebase SDK. See the ML Kit release notes.

A standalone library for on-device ML, which you can use with or without Firebase.

Introduction

What’s CameraX?

CameraX is a Jetpack support library, built to help you make camera app development easier. It provides a consistent and easy-to-use API surface that works across most Android devices, with backward compatibility to Android 5.0 (API level 21). While it leverages the capabilities of camera2, it uses a simpler, use case-based approach that is lifecycle-aware.

What’s Google ML Kit?

ML Kit brings Google’s machine learning expertise to mobile developers in a powerful and easy-to-use package. Make your iOS and Android apps more engaging, personalized, and helpful with solutions that are optimized to run on device.

ML Kit’s Barcode Scanning API

With ML Kit’s barcode scanning API, you can read data encoded using most standard barcode formats. Barcode scanning happens on the device, and doesn’t require a network connection.
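As a quick sketch of the API: you can configure a scanner for just the formats you expect (faster detection), or omit the options to scan all supported formats. The formats chosen below are only illustrative.

```kotlin
import com.google.mlkit.vision.barcode.Barcode
import com.google.mlkit.vision.barcode.BarcodeScannerOptions
import com.google.mlkit.vision.barcode.BarcodeScanning

// Restrict the scanner to the formats you expect; this speeds up detection.
// The two formats here are just examples.
val options = BarcodeScannerOptions.Builder()
    .setBarcodeFormats(Barcode.FORMAT_QR_CODE, Barcode.FORMAT_EAN_13)
    .build()

// Get a scanner instance; call BarcodeScanning.getClient() with no
// arguments to detect all supported formats instead.
val barcodeScanner = BarcodeScanning.getClient(options)
```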

Setting up the project

  1. Create a new project in Android Studio from File ⇒ New Project and select Empty Activity from templates.
  2. Open app/build.gradle and add Google ML Kit barcode and Jetpack CameraX dependencies:
// ViewModel and LiveData
implementation "androidx.lifecycle:lifecycle-livedata:2.2.0"
implementation "androidx.lifecycle:lifecycle-viewmodel:2.2.0"

// Barcode model dependencies
implementation 'com.google.mlkit:barcode-scanning:16.0.1'

// CameraX dependencies
implementation "androidx.camera:camera-camera2:1.0.0-beta06"
implementation "androidx.camera:camera-lifecycle:1.0.0-beta06"
implementation "androidx.camera:camera-view:1.0.0-alpha13"
android {
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
}

3. Open your AndroidManifest.xml file to add the required camera permission and the Google Play services meta-data:

<uses-sdk tools:overrideLibrary=
    "androidx.camera.camera2, androidx.camera.core,
    androidx.camera.view, androidx.camera.lifecycle" />

<uses-permission android:name="android.permission.CAMERA" />

<application
    android:allowBackup="true"
    android:icon="@mipmap/ic_launcher"
    android:label="@string/app_name"
    android:roundIcon="@mipmap/ic_launcher_round"
    android:supportsRtl="true"
    android:theme="@style/AppTheme">

    <meta-data
        android:name="com.google.android.gms.version"
        android:value="@integer/google_play_services_version" />
    ...
</application>

4. Add PreviewView to the main activity layout (activity_main.xml).

PreviewView

Custom View that displays the camera feed for CameraX’s Preview use case. This class manages the Surface lifecycle, as well as the preview aspect ratio and orientation. Internally, it uses either a TextureView or SurfaceView to display the camera feed.


<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <androidx.camera.view.PreviewView
        android:id="@+id/preview_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

5. Now we need to check the camera permission. Here is code that checks whether the camera permission was granted and requests it if not.

Request camera permission in an Android activity
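A minimal sketch of such a check in MainActivity. The request-code constant is an arbitrary value chosen here, and setupCamera() is the function we implement in step 6.

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {

    private fun isCameraPermissionGranted(): Boolean =
        ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
            PackageManager.PERMISSION_GRANTED

    // Call this from onCreate(): start the camera if we already have the
    // permission, otherwise ask the user for it.
    private fun requestCameraPermissionIfNeeded() {
        if (isCameraPermissionGranted()) {
            setupCamera() // implemented in step 6
        } else {
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), PERMISSION_CAMERA_REQUEST
            )
        }
    }

    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        if (requestCode == PERMISSION_CAMERA_REQUEST && isCameraPermissionGranted()) {
            setupCamera()
        }
    }

    companion object {
        private const val PERMISSION_CAMERA_REQUEST = 1 // arbitrary request code
    }
}
```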

Implement Preview use case

CameraX introduces use cases, which allow you to focus on the task you need to get done instead of spending time managing device-specific nuances. There are several basic use cases:

Preview: get an image on the display

Image analysis: access a buffer seamlessly for use in your algorithms, such as passing frames into ML Kit; we will use it to detect barcodes.

Image capture: save high-quality images

6. Implement camera preview use case
In a camera application, the viewfinder is used to let the user preview the photo they will be taking. We can implement a viewfinder using the CameraX Preview class.

To use Preview, we first need to define a configuration, which is then used to create an instance of the use case. The resulting instance is what you bind to the CameraX lifecycle.

a- Let’s do this in a view model. Create a new class CameraXViewModel and create a LiveData instance of ProcessCameraProvider. This is used to bind the lifecycle of cameras to the lifecycle owner, so you don’t have to worry about opening and closing the camera, since CameraX is lifecycle aware.

b- Add a listener to cameraProviderFuture. Pass a Runnable as the first argument that sets cameraProviderLiveData’s value from cameraProviderFuture. Pass ContextCompat.getMainExecutor() as the second argument; this returns an Executor that runs on the main thread.

CameraXViewModel
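Putting steps a and b together, the view model might look like the sketch below. Note that cameraProviderFuture.get() can throw, so the listener catches the checked exceptions.

```kotlin
import android.app.Application
import android.util.Log
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.core.content.ContextCompat
import androidx.lifecycle.AndroidViewModel
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import java.util.concurrent.ExecutionException

class CameraXViewModel(application: Application) : AndroidViewModel(application) {

    private var cameraProviderLiveData: MutableLiveData<ProcessCameraProvider>? = null

    // Lazily create the LiveData and resolve the ProcessCameraProvider
    // future on the main thread.
    val processCameraProvider: LiveData<ProcessCameraProvider>
        get() {
            if (cameraProviderLiveData == null) {
                cameraProviderLiveData = MutableLiveData()
                val cameraProviderFuture =
                    ProcessCameraProvider.getInstance(getApplication())
                cameraProviderFuture.addListener(
                    {
                        try {
                            cameraProviderLiveData?.setValue(cameraProviderFuture.get())
                        } catch (e: ExecutionException) {
                            Log.e(TAG, "Unhandled exception", e)
                        } catch (e: InterruptedException) {
                            Log.e(TAG, "Unhandled exception", e)
                        }
                    },
                    ContextCompat.getMainExecutor(getApplication())
                )
            }
            return cameraProviderLiveData!!
        }

    companion object {
        private const val TAG = "CameraXViewModel"
    }
}
```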

c- Now, let’s set up the camera in MainActivity. Create a new function setupCamera, get the previewView value, create a CameraSelector object, and use the CameraSelector.Builder.requireLensFacing method to pass in the lens you prefer.

private var lensFacing = CameraSelector.LENS_FACING_BACK
val cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()

Observe processCameraProvider from CameraXViewModel; our function will look like this:

Set up CameraX by observing processCameraProvider from CameraXViewModel
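Combined with the snippet above, setupCamera could be sketched as follows (these are members of MainActivity; bindPreviewUseCase is implemented in the next step):

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.lifecycle.ViewModelProvider

private var cameraProvider: ProcessCameraProvider? = null
private var cameraSelector: CameraSelector? = null
private var lensFacing = CameraSelector.LENS_FACING_BACK

private fun setupCamera() {
    cameraSelector = CameraSelector.Builder().requireLensFacing(lensFacing).build()
    // Observe the provider; once it is available, bind the use cases.
    ViewModelProvider(this)
        .get(CameraXViewModel::class.java)
        .processCameraProvider
        .observe(this) { provider ->
            cameraProvider = provider
            bindPreviewUseCase() // implemented in step d
        }
}
```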

d- Inside the bindPreviewUseCase function, make sure nothing is bound to your cameraProvider, then bind your cameraSelector and preview object to the cameraProvider, and attach the viewFinder’s surface provider to the preview use case.

Bind camera preview use case
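A sketch of that binding. Note that screenAspectRatio comes from the aspect-ratio helper mentioned in step e, and that in camera-view 1.0.0-alpha13 the surface provider is obtained with previewView.createSurfaceProvider(); newer releases expose a previewView.surfaceProvider property instead.

```kotlin
import android.util.Log
import androidx.camera.core.Preview

private var previewUseCase: Preview? = null

private fun bindPreviewUseCase() {
    val cameraProvider = cameraProvider ?: return
    // Unbind any previous preview instance before rebinding.
    previewUseCase?.let { cameraProvider.unbind(it) }

    previewUseCase = Preview.Builder()
        .setTargetAspectRatio(screenAspectRatio)
        .setTargetRotation(previewView.display.rotation)
        .build()
    // Attach the PreviewView's surface provider so frames are rendered.
    previewUseCase?.setSurfaceProvider(previewView.createSurfaceProvider())

    try {
        cameraProvider.bindToLifecycle(this, cameraSelector!!, previewUseCase)
    } catch (e: IllegalStateException) {
        Log.e(TAG, e.message ?: "IllegalStateException")
    } catch (e: IllegalArgumentException) {
        Log.e(TAG, e.message ?: "IllegalArgumentException")
    }
}
```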

e- Please check my code sample on GitHub to see how to detect the most suitable aspect ratio for the dimensions provided; I added a function for this there. You can see the full code on the camera_preview branch.
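For reference, such a helper typically looks like the sketch below (not necessarily the exact code from the branch): it picks whichever of CameraX’s two AspectRatio buckets is closest to the view’s proportions.

```kotlin
import androidx.camera.core.AspectRatio
import kotlin.math.abs
import kotlin.math.max
import kotlin.math.min

private const val RATIO_4_3_VALUE = 4.0 / 3.0
private const val RATIO_16_9_VALUE = 16.0 / 9.0

// Returns the CameraX AspectRatio constant closest to width:height.
private fun aspectRatio(width: Int, height: Int): Int {
    val previewRatio = max(width, height).toDouble() / min(width, height)
    return if (abs(previewRatio - RATIO_4_3_VALUE) <= abs(previewRatio - RATIO_16_9_VALUE)) {
        AspectRatio.RATIO_4_3
    } else {
        AspectRatio.RATIO_16_9
    }
}
```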

With that, the camera preview is handled; now let’s handle the analyzer part to detect barcodes.

Detecting Barcode 🚀🚀🚀

We have a great way to implement this using the ImageAnalysis use case. It allows us to define a custom class implementing the ImageAnalysis.Analyzer interface, which will be called with incoming camera frames. We won’t have to worry about managing the camera session state or even disposing of images; binding to our app’s desired lifecycle is sufficient, like with other lifecycle-aware components.

7. Implement ImageAnalysis use case

a- Create a new function bindAnalyseUseCase() to implement the analysis use case: instantiate analysisUseCase and set its analyzer.

analysisUseCase = ImageAnalysis.Builder()
    .setTargetAspectRatio(screenAspectRatio)
    .setTargetRotation(previewView.display.rotation)
    .build()
analysisUseCase?.setAnalyzer(cameraExecutor) { imageProxy ->
    processImageProxy(barcodeScanner, imageProxy)
}

b- In processImageProxy we process the imageProxy and detect barcodes with the ML Kit barcode scanner. To recognize barcodes in an image, create an InputImage object, then pass the InputImage object to the BarcodeScanner’s process method.

Process camera image proxy function
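A sketch of that function. Accessing imageProxy.image is marked experimental in this CameraX release, hence the suppression; logging the rawValue here simply stands in for whatever your app does with a detected barcode.

```kotlin
import android.annotation.SuppressLint
import android.util.Log
import androidx.camera.core.ImageProxy
import com.google.mlkit.vision.barcode.BarcodeScanner
import com.google.mlkit.vision.common.InputImage

@SuppressLint("UnsafeExperimentalUsageError")
private fun processImageProxy(barcodeScanner: BarcodeScanner, imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image ?: run {
        imageProxy.close()
        return
    }
    // Wrap the camera frame, passing its rotation so ML Kit sees it upright.
    val inputImage =
        InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)

    barcodeScanner.process(inputImage)
        .addOnSuccessListener { barcodes ->
            barcodes.forEach { barcode ->
                Log.d(TAG, "Barcode value: ${barcode.rawValue}")
            }
        }
        .addOnFailureListener { e ->
            Log.e(TAG, e.message ?: "Barcode scanning failed")
        }
        .addOnCompleteListener {
            // Close the ImageProxy so CameraX can deliver the next frame.
            imageProxy.close()
        }
}
```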

Once the image has been analyzed, it is closed by calling ImageProxy.close().

By the way, implementing preview, image capture, and image analysis concurrently will not work on the Android Studio device emulator if you are running Android Q or lower. We recommend using a real device to test this portion of the code.

That’s how to detect barcodes with the new ML Kit released on June 30, 2020. You can see the full code here on GitHub.