In our earlier blog post, we built a React Native app for detecting objects in an image using TensorFlow.js.

In this post, we are going to build a React Native app for detecting objects from an image using TensorFlow.js and React Hooks.

Assuming you have followed our earlier blog post and created the Object Detection app, we will now proceed to build the new React Native Object Detection app using React Hooks.

What are React Hooks?

React Hooks are functions that let us use state and other React features, such as lifecycle behaviour, without writing a class, i.e., inside function components.

This means that React Hooks offer us the flexibility to easily manipulate the state of our function components without converting them into class components.

Note: React Hooks don’t work inside classes and were added to React in version 16.8.

Why React Hooks?

In earlier versions of React (React <= 16.7), a component that needed state or access to lifecycle methods had to be a class component. In newer versions (React > 16.7), with the introduction of hooks, a function component can also access state and lifecycle behaviour.

Apart from enabling function components to use state and to access React lifecycle methods, hooks also make it effortless to reuse stateful logic between components.

By using React Hooks, one can completely avoid lifecycle methods such as componentDidMount, componentDidUpdate, and componentWillUnmount.

Types of Hooks:

  1. State Hook
  2. Effect Hook
  3. Other Hooks

For more information, see the React Hooks documentation. A minimal example of the State and Effect Hooks is shown below.
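As a quick illustration (a minimal sketch, not part of the app we build in this post), the State Hook (useState) and the Effect Hook (useEffect) together cover the state and lifecycle needs that previously required a class component:

import React, { useState, useEffect } from 'react';
import { Text } from 'react-native';

function Counter() {
	// State Hook: declares a piece of state and its updater function
	const [count, setCount] = useState(0);

	// Effect Hook: with an empty dependency array it runs once after the first
	// render, much like componentDidMount in a class component
	useEffect(() => {
		const timer = setInterval(() => setCount(c => c + 1), 1000);
		// The cleanup function plays the role of componentWillUnmount
		return () => clearInterval(timer);
	}, []);

	return <Text>Seconds elapsed: {count}</Text>;
}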

Tensorflow.js:

TensorFlow.js is a JavaScript library for training and deploying machine learning models in the browser and in Node.js. It provides many pre-trained models that ease the time-consuming task of training a new machine learning model from scratch.
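For instance, loading and using one of these pre-trained models takes only a couple of lines. The sketch below is a minimal browser-side example of the MobileNet model we will use later in this post (imgElement is assumed to be an HTML image element):

import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyElement(imgElement) {
	// Load the pre-trained MobileNet model (the weights are downloaded on demand)
	const model = await mobilenet.load();
	// Returns an array like [{ className: 'Labrador retriever', probability: 0.82 }, ...]
	const predictions = await model.classify(imgElement);
	console.log(predictions);
}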

Overview:

Here we will capture an image or select one from the file system. We will send that image to API Gateway, which triggers a Lambda function that stores the image in an S3 bucket and returns the stored image's URL. The app then fetches the image from that URL and classifies it with the MobileNet model.

Installing Dependencies: 

Let's go to the React Native docs, select React Native CLI Quickstart, and choose our Development OS along with Android as the Target OS, since we are going to build an Android application.

Follow the docs for installing dependencies, then create a new React Native Application. 

Use the command-line interface to generate a new React Native project called ObjectDetection.

npx react-native init ObjectDetection 

Preparing the Android device:

We will need an Android device to run our React Native Android app. If you have a physical Android device, you can use it for development by connecting it to your computer with a USB cable and following the instructions here.

Now go to the command line and run the following command inside your React Native app directory.

cd ObjectDetection && npx react-native run-android

If everything is set up correctly, you should see your new app running on your physical device.

Next, we need to install the react-native-image-picker package to capture or select an image. To install the package, run the following command inside the project directory.

npm install react-native-image-picker --save

We will also need a few other packages. To install them, run the following commands inside the project directory.

npm install expo-permissions --save
npm install expo-constants --save
npm install jpeg-js --save

We are using the expo-permissions package, which lets us prompt the user for the various permissions needed to access device sensors, the camera, and so on.

We are using the expo-constants package, which provides system information that remains constant throughout the lifetime of the app.

We are using the jpeg-js package to decode the pixel data from the image.
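As a rough sketch of how these three packages come together later in App.js (the exact usage appears in the full code below; checkPermissionsAndDecode here is just a hypothetical helper):

import Constants from 'expo-constants';
import * as Permissions from 'expo-permissions';
import * as jpeg from 'jpeg-js';

async function checkPermissionsAndDecode(rawJpegBytes) {
	// expo-constants: tells us which platform the app is running on
	if (Constants.platform.android) {
		// expo-permissions: prompt the user for camera-roll access
		const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL);
		console.log('Camera roll permission:', status);
	}
	// jpeg-js: decode raw JPEG bytes into width, height, and RGBA pixel data
	const { width, height, data } = jpeg.decode(rawJpegBytes, true);
	console.log(width, height, data.length);
}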

Integrating TensorFlow.js in our React Native App:

Follow this link to integrate TensorFlow.js into our React Native app. After that, we must also install @tensorflow-models/mobilenet. To install it, run the following command inside the project directory.

npm install @tensorflow-models/mobilenet --save
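For reference, the linked guide also has you install the core TensorFlow.js packages along the lines of the commands below (treat them as a sketch; the exact package list and peer dependencies are in that guide):

npm install @tensorflow/tfjs --save
npm install @tensorflow/tfjs-react-native --save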

We also need to set up an API in the AWS console and create a Lambda function that stores the image in an S3 bucket and returns the stored image URL.

API Creation in AWS Console:

Before going further, create an API in your AWS console by following the Working with API Gateway section in the following post:

https://medium.com/zenofai/serverless-web-application-architecture-using-react-with-amplify-part1-5b4d89f384f7

After you are done creating the API, come back to the React Native application. Go to your project directory and replace your App.js file with the following code.

App.js:

import React, { useState, useEffect } from 'react';
import {
	StyleSheet,
	Text,
	View,
	ScrollView,
	TouchableHighlight,
	Image
} from 'react-native';
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import { fetch } from '@tensorflow/tfjs-react-native';
import Constants from 'expo-constants';
import * as Permissions from 'expo-permissions';
import * as jpeg from 'jpeg-js';
import ImagePicker from "react-native-image-picker";
import Amplify, { API } from "aws-amplify";

Amplify.configure({
	API: {
		endpoints: [
			{
				name: "<Your-API-Name>",
				endpoint: "<Your-API-Endpoint-URL>"
			}
		]
	}
});

let model = null; // will hold the loaded MobileNet model, shared by the functions below

function App() {
	const [isTfReady, setIsTfReady] = useState(false);
	const [isModelReady, setIsModelReady] = useState(false);
	const [predictions, setPredictions] = useState(null);
	const [base64String, setBase64String] = useState('');
	const [capturedImage, setCapturedImage] = useState('');
	const [imageSubmitted, setImageSubmitted] = useState(false);

	useEffect(() => {
		loadTensorflowModel();
	}, []);

	async function loadTensorflowModel() {
		await tf.ready();
		setIsTfReady(true);
		model = await mobilenet.load();
		setIsModelReady(true);
		console.log("Model Ready");
		if (Constants.platform.android) {
			const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL);
			if (status !== 'granted') {
				alert('We need camera roll permissions to make this work!');
			}
		}
	}

	function imageToTensor(rawImageData) {
		const TO_UINT8ARRAY = true;
		const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY);
		// Drop the alpha channel info for mobilenet
		const buffer = new Uint8Array(width * height * 3);
		let offset = 0; // offset into original data
		for (let i = 0; i < buffer.length; i += 3) {
			buffer[i] = data[offset];
			buffer[i + 1] = data[offset + 1];
			buffer[i + 2] = data[offset + 2];

			offset += 4;
		}
		return tf.tensor3d(buffer, [height, width, 3]);
	}

	async function classifyImage(imageUrl) {
		try {
			const imageAssetPath = imageUrl;
			const response = await fetch(imageAssetPath, {}, { isBinary: true });
			const rawImageData = await response.arrayBuffer();
			const imageTensor = imageToTensor(rawImageData);
			predictionsResult(imageTensor);
		} catch (error) {
			console.log(error);
		}
	}

	async function predictionsResult(imageTensor) {
		const predictions = await model.classify(imageTensor);
		setPredictions(predictions);
	}

	const renderPrediction = prediction => {
		return (
			<Text key={prediction.className} style={styles.text}>
				{prediction.className}
			</Text>
		)
	}

	function captureImageButtonHandler() {
		setPredictions(null);
		setImageSubmitted(false);
		ImagePicker.showImagePicker({ title: "Pick an Image", maxWidth: 800, maxHeight: 600 }, (response) => {
			if (response.didCancel) {
				console.log('User cancelled image picker');
			} else if (response.error) {
				console.log('ImagePicker Error: ', response.error);
			} else if (response.customButton) {
				console.log('User tapped custom button: ', response.customButton);
			} else {
				const source = { uri: 'data:image/jpeg;base64,' + response.data };
				setCapturedImage(response.uri);
				setBase64String(source.uri);
			}
		});
	}

	function submitButtonHandler() {
		if (capturedImage === '' || capturedImage === undefined || capturedImage === null) {
			alert("Please Capture the Image");
		} else {
			setImageSubmitted(true);
			const apiName = "Your-API-Name";
			const path = "<Your-API-Path>";
			const init = {
				headers: {
					'Accept': 'application/json',
					"Content-Type": "application/x-amz-json-1.1"
				},
				body: JSON.stringify({
					Image: base64String,
					name: "testImage.jpg"
				})
			}

			API.post(apiName, path, init).then(response => {
				// The Lambda returns the stored image's S3 URL as the response;
				// pass it straight to classifyImage.
				if (response) {
					classifyImage(response);
				}
			});
		}
	}

	const capturedImageUri = capturedImage;
	const imageSubmittedCheck = imageSubmitted;
	return (
		<View style={styles.MainContainer}>
			<ScrollView>
				<Text style={{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Object Detection</Text>
				{capturedImage !== "" && <View style={styles.imageholder} >
					<Image source={{ uri: capturedImage }} style={styles.previewImage} />
				</View>}

				{capturedImage != '' && imageSubmittedCheck && (
					<View style={styles.predictionWrapper}>
						{capturedImageUri && imageSubmittedCheck && (
							<Text style={styles.text}>
								Predictions: {predictions ? '' : 'Loading...'}
							</Text>
						)}
						{predictions &&
							predictions.map(p => renderPrediction(p))}
					</View>
				)
				}

				<TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={captureImageButtonHandler}>
					<Text style={styles.buttonText}>Capture Image</Text>
				</TouchableHighlight>

				<TouchableHighlight style={[styles.buttonContainer, styles.submitButton]} onPress={submitButtonHandler}>
					<Text style={styles.buttonText}>Submit</Text>
				</TouchableHighlight>

			</ScrollView>
		</View>
	);
}

const styles = StyleSheet.create({
	MainContainer: {
		flex: 1,
		backgroundColor: '#CCFFFF',
	},
	text: {
		color: '#000000',
		fontSize: 16
	},
	predictionWrapper: {
		height: 100,
		width: '100%',
		flexDirection: 'column',
		alignItems: 'center'
	},
	buttonContainer: {
		height: 45,
		flexDirection: 'row',
		alignItems: 'center',
		justifyContent: 'center',
		marginBottom: 20,
		width: "80%",
		borderRadius: 30,
		marginTop: 20,
		marginLeft: 30,
	},
	captureButton: {
		backgroundColor: "#337ab7",
		width: 350,
	},
	buttonText: {
		color: 'white',
		fontWeight: 'bold',
	},
	submitButton: {
		backgroundColor: "#C0C0C0",
		width: 350,
		marginTop: 5,
	},
	imageholder: {
		borderWidth: 1,
		borderColor: "grey",
		backgroundColor: "#eee",
		width: "50%",
		height: 150,
		marginTop: 10,
		marginLeft: 100,
		flexDirection: 'row',
		alignItems: 'center'
	},
	previewImage: {
		width: "100%",
		height: "100%",
	}
})

export default App;

Note: Before using TensorFlow.js in a React Native app, you need to call tf.ready() and wait for it to complete. This is an async function, so you might want to do this before the app is rendered.

Once that completes, we also load the MobileNet model and store it in the module-level model variable, using the following line of code.

model = await mobilenet.load();

As this also executes asynchronously, you need to wait for it to complete. In the code above, we also configure Amplify with the API name and endpoint URL that you created, as shown below.

Amplify.configure({
 API: {
   endpoints: [
     {
       name: '<Your-API-Name>',
       endpoint: '<Your-API-Endpoint-URL>',
     },
   ],
 },
});

Clicking on the Capture Image button triggers the captureImageButtonHandler function. It then asks the user to take a picture or select an image from the file system. When the user captures or selects an image, we store that image in state, as shown below.

function captureImageButtonHandler() {
		setPredictions(null);
		setImageSubmitted(false);
		ImagePicker.showImagePicker({ title: "Pick an Image", maxWidth: 800, maxHeight: 600 }, (response) => {
			if (response.didCancel) {
				console.log('User cancelled image picker');
			} else if (response.error) {
				console.log('ImagePicker Error: ', response.error);
			} else if (response.customButton) {
				console.log('User tapped custom button: ', response.customButton);
			} else {
				const source = { uri: 'data:image/jpeg;base64,' + response.data };
				setCapturedImage(response.uri);
				setBase64String(source.uri);
			}
		});
	}

After capturing the image, we preview it. Clicking on the Submit button triggers the submitButtonHandler function, where we send the image to the endpoint, as shown below.

function submitButtonHandler() {
		if (capturedImage === '' || capturedImage === undefined || capturedImage === null) {
			alert("Please Capture the Image");
		} else {
			setImageSubmitted(true);
			const apiName = "<Your-API-Name>";
			const path = "<Path-to-your-API>";
			const init = {
				headers: {
					'Accept': 'application/json',
					"Content-Type": "application/x-amz-json-1.1"
				},
				body: JSON.stringify({
					Image: base64String,
					name: "testImage.jpg"
				})
			}

			API.post(apiName, path, init).then(response => {
				// The Lambda returns the stored image's S3 URL as the response;
				// pass it straight to classifyImage.
				if (response) {
					classifyImage(response);
				}
			});
		}
	}

After the image is submitted, API Gateway triggers the Lambda function. The Lambda function stores the submitted image in the S3 bucket and returns its URL as the response. The received URL is then passed to the classifyImage function, as shown above.

The classifyImage function fetches the raw data of the image from that URL and classifies it, yielding the results as predictions.


async function classifyImage(imageUrl) {
		try {
			const imageAssetPath = imageUrl;
			const response = await fetch(imageAssetPath, {}, { isBinary: true });
			const rawImageData = await response.arrayBuffer();
			const imageTensor = imageToTensor(rawImageData);
			predictionsResult(imageTensor);
		} catch (error) {
			console.log(error);
		}
	}

	
async function predictionsResult(imageTensor) {
		const predictions = await model.classify(imageTensor);
		setPredictions(predictions);
	}

The image is read from a remote source, so the URL returned by the Lambda is handed to classifyImage. The results yielded by this asynchronous classification must also be saved; we store them in the predictions state variable.

The jpeg-js package decodes the width, height, and binary pixel data of the image inside the helper method imageToTensor, which accepts the raw image data as its parameter.

function imageToTensor(rawImageData) {
		const TO_UINT8ARRAY = true;
		const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY);
		// Drop the alpha channel info for mobilenet
		const buffer = new Uint8Array(width * height * 3);
		let offset = 0; // offset into original data
		for (let i = 0; i < buffer.length; i += 3) {
			buffer[i] = data[offset];
			buffer[i + 1] = data[offset + 1];
			buffer[i + 2] = data[offset + 2];

			offset += 4;
		}
		return tf.tensor3d(buffer, [height, width, 3]);
	}

Here the TO_UINT8ARRAY flag tells jpeg-js to return the decoded pixel data as a Uint8Array, i.e., an array of 8-bit unsigned integers. The decoded data contains four bytes per pixel (RGBA), which is why the loop above skips every fourth byte when building the RGB tensor.

Lambda Function:

Add the code below to the Lambda function (Node.js) that you created in your AWS console. It stores the captured image in the S3 bucket and returns the URL of that image. Note that the app later fetches the image from this URL, so the object must be readable by the app (for example, via public read access on the bucket).

const AWS = require('aws-sdk');
var s3BucketName = "<Your-S3-BucketName>";
var s3Bucket = new AWS.S3( { params: {Bucket: s3BucketName, Region: "<Your-S3-Bucket-Region>"} } );

exports.handler = (event, context, callback) => {
    // Assumes the API Gateway integration passes the request body through as a JSON string
    let parsedData = JSON.parse(event);
    let encodedImage = parsedData.Image;
    var filePath = parsedData.name;
    // Strip the data-URL prefix and decode the base64 image payload
    let buf = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""), 'base64');
    var data = {
        Key: filePath, 
        Body: buf,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            callback(err, null);
        } else {
            var s3Url = "https://" + s3BucketName + '.' + "s3.amazonaws.com/" + filePath;
            callback(null, s3Url);
        }
    });
};
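For reference, the request body that the app's submitButtonHandler sends (and that this handler parses) looks roughly like the following, assuming the API Gateway integration passes the JSON body through to the Lambda:

{
    "Image": "data:image/jpeg;base64,/9j/4AAQSkZJRg... (truncated)",
    "name": "testImage.jpg"
}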

Running the App:

Run the application by executing the npx react-native run-android command from a terminal window inside the project directory.

Below are the screenshots of the app running on an Android device.

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in cloud services based development.
