Building a React Native object detection app using TensorFlow.js with React Hooks

In our earlier blog post, we built a React Native app for detecting objects in an image using TensorFlow.js.

In this post, we are going to build a React Native app for detecting objects from an image using TensorFlow.js and React Hooks.

Assuming that you have followed our earlier blog and created the Object Detection app, we will proceed with building the new React Native Object Detection app using React Hooks.

What are React Hooks?

React Hooks are functions that let us use state and other React features, like lifecycle behaviour, without writing a class, i.e., inside function components.

This means React Hooks offer us the flexibility to easily manage the state of our function components without converting them into class components.

Note: React Hooks don’t work inside classes and were added to React in version 16.8.

Why React Hooks?

In the earlier versions of React (React <= 16.7), if a component needed state or access to lifecycle methods, it had to be a class component. In the newer versions of React (16.8 and above), where hooks were introduced, a function component can also have state and hook into the lifecycle.

Apart from enabling function components to use state and to access React lifecycle methods, hooks also make it effortless to reuse stateful logic between components.

By using React Hooks, one can completely avoid using lifecycle methods such as componentDidMount, componentDidUpdate, and componentWillUnmount.
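As a quick illustration, here is a minimal counter written as a function component, using useState in place of this.state and useEffect in place of componentDidMount (a small standalone sketch, not part of the app we build below):

import React, { useState, useEffect } from 'react';
import { Text } from 'react-native';

function Counter() {
	// useState replaces this.state / this.setState
	const [count, setCount] = useState(0);

	// useEffect with an empty dependency array runs once after the first render,
	// much like componentDidMount in a class component
	useEffect(() => {
		console.log('Counter mounted');
	}, []);

	// Tapping the text increments the counter and re-renders the component
	return <Text onPress={() => setCount(count + 1)}>Count: {count}</Text>;
}

export default Counter;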

Types of Hooks:

  1. State Hook
  2. Effect Hook
  3. Other Hooks

For more information visit here.

TensorFlow.js:

TensorFlow.js is a JavaScript library for training and deploying machine learning models in the browser and in Node.js. It provides many pre-trained models that ease the time-consuming task of training a new machine learning model from scratch.
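For example, with a pre-trained model such as MobileNet, classification comes down to a couple of calls (a rough sketch of the general pattern; the React Native specific wiring is covered later in this post):

import * as mobilenet from '@tensorflow-models/mobilenet';

async function classify(imageTensor) {
	// Downloads the pre-trained weights on first use
	const model = await mobilenet.load();
	// Returns an array of { className, probability } objects
	return model.classify(imageTensor);
}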

Overview:

Here we will capture an image or select one from the file system. We will send that image to API Gateway, which triggers a Lambda function that stores the image in an S3 bucket and returns the stored image's URL. The app then fetches the image from that URL and classifies it on the device using TensorFlow.js.
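Concretely, the app POSTs a small JSON body containing the base64-encoded image and a file name, and the Lambda function replies with the plain S3 URL of the stored object, roughly like this (illustrative values only):

// Request body sent by the app
{
	"Image": "data:image/jpeg;base64,/9j/4AAQ...",
	"name": "testImage.jpg"
}

// Response returned by the Lambda function
"https://<your-bucket>.s3.amazonaws.com/testImage.jpg"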

Installing Dependencies: 

Let's go to the React Native docs, select React Native CLI Quickstart, and choose our appropriate Development OS and Android as the Target OS, as we are going to build an Android application.

Follow the docs for installing dependencies, then create a new React Native Application. 

Use the command-line interface to generate a new React Native project called ObjectDetection.

npx react-native init ObjectDetection 

Preparing the Android device:

We shall need an Android device to run our React Native Android app. If you have a physical Android device, you can use it for development by connecting it to your computer with a USB cable and following the instructions here.

Now go to the command line and run the react-native run-android command inside your React Native app directory.

cd ObjectDetection && react-native run-android

If everything is set up correctly, you should see your new app running on your physical device.

Next, we need to install the react-native-image-picker package to capture or select an image. To install the package run the following command inside the project directory.

npm install react-native-image-picker --save

We will also need a few other packages. To install them, run the following commands inside the project directory.

npm install expo-permissions --save
npm install expo-constants --save
npm install jpeg-js --save

We are using the expo-permissions package, which allows us to prompt the user for permissions to access the device's sensors, camera, camera roll, etc.

The expo-constants package provides system information that remains constant throughout the lifetime of the app.

The jpeg-js package will be used to decode the raw pixel data from the image.

Integrating TensorFlow.js in our React Native App:

Follow this link to integrate TensorFlow.js into our React Native app. After that, we must also install @tensorflow-models/mobilenet. To install it, run the following command inside the project directory.

npm install @tensorflow-models/mobilenet --save

We also need to set up an API in the AWS console and create a Lambda function that stores the image in an S3 bucket and returns the stored image's URL.

API Creation in AWS Console:

Before going further, create an API in your AWS console by following the Working with API Gateway section in the following post:

https://medium.com/zenofai/serverless-web-application-architecture-using-react-with-amplify-part1-5b4d89f384f7

After you are done creating the API, come back to the React Native application. Go to your project directory and replace your App.js file with the following code.

App.js:

import React, { useState, useEffect, useRef } from 'react';
import {
	StyleSheet,
	Text,
	View,
	ScrollView,
	TouchableHighlight,
	Image
} from 'react-native';
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';
import { fetch } from '@tensorflow/tfjs-react-native';
import Constants from 'expo-constants';
import * as Permissions from 'expo-permissions';
import * as jpeg from 'jpeg-js';
import ImagePicker from "react-native-image-picker";
import Amplify, { API } from "aws-amplify";

Amplify.configure({
	API: {
		endpoints: [
			{
				name: "<Your-API-Name>",
				endpoint: "<Your-API-Endpoint-URL>"
			}
		]
	}
});

function App() {
	const [isTfReady, setIsTfReady] = useState(false);
	const [isModelReady, setIsModelReady] = useState(false);
	const [predictions, setPredictions] = useState(null);
	const [base64String, setBase64String] = useState('');
	const [capturedImage, setCapturedImage] = useState('');
	const [imageSubmitted, setImageSubmitted] = useState(false);
	const model = useRef(null); // holds the loaded MobileNet model across renders

	useEffect(() => {
		loadTensorflowModel();
	}, []);

	async function loadTensorflowModel() {
		await tf.ready();
		setIsTfReady(true);
		model.current = await mobilenet.load();
		setIsModelReady(true);
		console.log("Model Ready");
		if (Constants.platform.android) {
			const { status } = await Permissions.askAsync(Permissions.CAMERA_ROLL);
			if (status !== 'granted') {
				alert('We need camera roll permissions to make this work!');
			}
		}
	}

	function imageToTensor(rawImageData) {
		const TO_UINT8ARRAY = true;
		const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY);
		// Drop the alpha channel info for mobilenet
		const buffer = new Uint8Array(width * height * 3);
		let offset = 0; // offset into original data
		for (let i = 0; i < buffer.length; i += 3) {
			buffer[i] = data[offset];
			buffer[i + 1] = data[offset + 1];
			buffer[i + 2] = data[offset + 2];

			offset += 4;
		}
		return tf.tensor3d(buffer, [height, width, 3]);
	}

	async function classifyImage(s3ImageUrl) {
		try {
			const imageAssetPath = s3ImageUrl;
			const response = await fetch(imageAssetPath, {}, { isBinary: true });
			const rawImageData = await response.arrayBuffer();
			const imageTensor = imageToTensor(rawImageData);
			predictionsResult(imageTensor);
		} catch (error) {
			console.log(error);
		}
	}

	async function predictionsResult(imageTensor) {
		const predictions = await model.current.classify(imageTensor);
		setPredictions(predictions);
	}

	const renderPrediction = prediction => {
		return (
			<Text key={prediction.className} style={styles.text}>
				{prediction.className}
			</Text>
		)
	}

	function captureImageButtonHandler() {
		setPredictions(null);
		setImageSubmitted(false);
		ImagePicker.showImagePicker({ title: "Pick an Image", maxWidth: 800, maxHeight: 600 }, (response) => {
			if (response.didCancel) {
				console.log('User cancelled image picker');
			} else if (response.error) {
				console.log('ImagePicker Error: ', response.error);
			} else if (response.customButton) {
				console.log('User tapped custom button: ', response.customButton);
			} else {
				const source = { uri: 'data:image/jpeg;base64,' + response.data };
				setCapturedImage(response.uri);
				setBase64String(source.uri);
			}
		});
	}

	function submitButtonHandler() {
		if (capturedImage === '' || capturedImage === undefined || capturedImage === null) {
			alert("Please Capture the Image");
		} else {
			setImageSubmitted(true);
			const apiName = "Your-API-Name";
			const path = "<Your-API-Path>";
			const init = {
				headers: {
					'Accept': 'application/json',
					"Content-Type": "application/x-amz-json-1.1"
				},
				body: JSON.stringify({
					Image: base64String,
					name: "testImage.jpg"
				})
			}

			API.post(apiName, path, init).then(response => {
				// response is the S3 URL of the stored image returned by the Lambda function
				if (response !== '') {
					classifyImage(response);
				}
			});
		}
	}

	const capturedImageUri = capturedImage;
	const imageSubmittedCheck = imageSubmitted;
	return (
		<View style={styles.MainContainer}>
			<ScrollView>
				<Text style={{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Object Detection</Text>
				{capturedImage !== "" && <View style={styles.imageholder} >
					<Image source={{ uri: capturedImage }} style={styles.previewImage} />
				</View>}

				{capturedImage != '' && imageSubmittedCheck && (
					<View style={styles.predictionWrapper}>
						{capturedImageUri && imageSubmittedCheck && (
							<Text style={styles.text}>
								Predictions: {predictions ? '' : 'Loading...'}
							</Text>
						)}
						{predictions &&
							predictions.map(p => renderPrediction(p))}
					</View>
				)
				}

				<TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={captureImageButtonHandler}>
					<Text style={styles.buttonText}>Capture Image</Text>
				</TouchableHighlight>

				<TouchableHighlight style={[styles.buttonContainer, styles.submitButton]} onPress={submitButtonHandler}>
					<Text style={styles.buttonText}>Submit</Text>
				</TouchableHighlight>

			</ScrollView>
		</View>
	);
}

const styles = StyleSheet.create({
	MainContainer: {
		flex: 1,
		backgroundColor: '#CCFFFF',
	},
	text: {
		color: '#000000',
		fontSize: 16
	},
	predictionWrapper: {
		height: 100,
		width: '100%',
		flexDirection: 'column',
		alignItems: 'center'
	},
	buttonContainer: {
		height: 45,
		flexDirection: 'row',
		alignItems: 'center',
		justifyContent: 'center',
		marginBottom: 20,
		width: "80%",
		borderRadius: 30,
		marginTop: 20,
		marginLeft: 30,
	},
	captureButton: {
		backgroundColor: "#337ab7",
		width: 350,
	},
	buttonText: {
		color: 'white',
		fontWeight: 'bold',
	},
	submitButton: {
		backgroundColor: "#C0C0C0",
		width: 350,
		marginTop: 5,
	},
	imageholder: {
		borderWidth: 1,
		borderColor: "grey",
		backgroundColor: "#eee",
		width: "50%",
		height: 150,
		marginTop: 10,
		marginLeft: 100,
		flexDirection: 'row',
		alignItems: 'center'
	},
	previewImage: {
		width: "100%",
		height: "100%",
	}
})

export default App;

Note: Before using TensorFlow.js in a React Native app, you need to call tf.ready() and wait for it to complete. This is an async function, so you might want to do this before the app renders its main content.
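For instance, the isTfReady and isModelReady flags already held in state could drive a simple loading indicator inside the ScrollView until both tf.ready() and the model load have finished (a small sketch; the code above renders the rest of the UI regardless):

{!isModelReady && (
	<Text style={styles.text}>
		TensorFlow ready? {isTfReady ? 'yes' : 'loading...'}{'\n'}
		Model ready? {isModelReady ? 'yes' : 'loading...'}
	</Text>
)}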

Once that completes, we also have to load the MobileNet model using the following line of code (we keep the model in a ref so it survives re-renders of the function component).

model.current = await mobilenet.load();

As this also executes asynchronously, you need to wait for it to complete. In the above code, we are also configuring Amplify with the API name and endpoint URL that you created, as shown below.

Amplify.configure({
 API: {
   endpoints: [
     {
       name: '<Your-API-Name>',
       endpoint: '<Your-API-Endpoint-URL>',
     },
   ],
 },
});

Clicking the Capture Image button triggers the captureImageButtonHandler function, which asks the user to take a picture or select an image from the file system. When the user captures or selects an image, we store it in the state as shown below.

function captureImageButtonHandler() {
		setPredictions(null);
		setImageSubmitted(false);
		ImagePicker.showImagePicker({ title: "Pick an Image", maxWidth: 800, maxHeight: 600 }, (response) => {
			if (response.didCancel) {
				console.log('User cancelled image picker');
			} else if (response.error) {
				console.log('ImagePicker Error: ', response.error);
			} else if (response.customButton) {
				console.log('User tapped custom button: ', response.customButton);
			} else {
				const source = { uri: 'data:image/jpeg;base64,' + response.data };
				setCapturedImage(response.uri);
				setBase64String(source.uri);
			}
		});
	}

After capturing the image, we preview it. Clicking the Submit button triggers the submitButtonHandler function, which sends the image to the endpoint as shown below.

function submitButtonHandler() {
		if (capturedImage === '' || capturedImage === undefined || capturedImage === null) {
			alert("Please Capture the Image");
		} else {
			setImageSubmitted(true);
			const apiName = "<Your-API-Name>";
			const path = "<Path-to-your-API>";
			const init = {
				headers: {
					'Accept': 'application/json',
					"Content-Type": "application/x-amz-json-1.1"
				},
				body: JSON.stringify({
					Image: base64String,
					name: "testImage.jpg"
				})
			}

			API.post(apiName, path, init).then(response => {
				// response is the S3 URL of the stored image
				if (response !== '') {
					classifyImage(response);
				}
			});
		}
	}

After submitting the image, API Gateway triggers the Lambda function. The Lambda function stores the submitted image in an S3 bucket and returns its URL as the response. The received URL is then passed to the classifyImage function, as shown above.

The classifyImage function fetches the raw data of the image from that URL and yields the classification results in the form of predictions.


async function classifyImage(s3ImageUrl) {
		try {
			const imageAssetPath = s3ImageUrl;
			const response = await fetch(imageAssetPath, {}, { isBinary: true });
			const rawImageData = await response.arrayBuffer();
			const imageTensor = imageToTensor(rawImageData);
			predictionsResult(imageTensor);
		} catch (error) {
			console.log(error);
		}
	}

	
async function predictionsResult(imageTensor) {
		const predictions = await model.current.classify(imageTensor);
		setPredictions(predictions);
	}

The image is read from the S3 URL passed into classifyImage. The results yielded by this asynchronous classification must be saved as well; we store them in the predictions state variable.

The jpeg-js package decodes the width, height, and binary pixel data of the image inside the helper function imageToTensor, which accepts the raw image data as a parameter.

function imageToTensor(rawImageData) {
		const TO_UINT8ARRAY = true;
		const { width, height, data } = jpeg.decode(rawImageData, TO_UINT8ARRAY);
		// Drop the alpha channel info for mobilenet
		const buffer = new Uint8Array(width * height * 3);
		let offset = 0; // offset into original data
		for (let i = 0; i < buffer.length; i += 3) {
			buffer[i] = data[offset];
			buffer[i + 1] = data[offset + 1];
			buffer[i + 2] = data[offset + 2];

			offset += 4;
		}
		return tf.tensor3d(buffer, [height, width, 3]);
	}

Here the TO_UINT8ARRAY flag tells jpeg-js to return the decoded pixel data as a Uint8Array (an array of 8-bit unsigned integers). The decoded data holds four bytes per pixel (RGBA); the loop above repacks it into three bytes per pixel (RGB), which is what MobileNet expects.

Lambda Function:

Add the code below to the Lambda function (Node.js) that you created in your AWS Console. This Lambda function stores the captured image in the S3 bucket and returns the URL of that image.

const AWS = require('aws-sdk');
var s3BucketName = "<Your-S3-BucketName>";
var s3Bucket = new AWS.S3({ params: { Bucket: s3BucketName }, region: "<Your-S3-Bucket-Region>" });

exports.handler = (event, context, callback) => {
    let parsedData = JSON.parse(event);
    let encodedImage = parsedData.Image;
    var filePath = parsedData.name;
    let buf = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""),'base64');
    var data = {
        Key: filePath, 
        Body: buf,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            callback(err, null);
        } else {
            var s3Url = "https://" + s3BucketName + '.' + "s3.amazonaws.com/" + filePath;
            callback(null, s3Url);
        }
    });
};
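As a quick sanity check, the handler can be invoked locally with a stringified test event shaped like the request body the app sends (a hypothetical snippet appended to the same file; the real event shape also depends on how your API Gateway integration is configured):

// Hypothetical local test, not part of the deployed function
const testEvent = JSON.stringify({
    Image: "data:image/jpeg;base64,/9j/4AAQ...", // shortened sample data
    name: "testImage.jpg"
});
exports.handler(testEvent, {}, (err, url) => console.log(err || url));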

Running the App:

Run the application by executing the react-native run-android command from the terminal window. 

Below are the screenshots of the app running on an Android device.

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in Cloud Services based development.

Create Voice Driven HealthCare Chatbot in React Native Using Google DialogFlow

In our earlier blog post, we built a healthcare chatbot in React Native using the Dialogflow API. This blog is an extension of that chatbot: we shall learn how to add support for voice-based user interaction to it. Assuming you have followed our earlier blog and created the chatbot, we will proceed further.

The first thing is to request permission to access the device's microphone so the app can record audio. For that, we just need to add the below code to AndroidManifest.xml, located at project-name > android > app > src > main.

AndroidManifest.xml

<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
<uses-permission android:name="android.permission.RECORD_AUDIO" />

We also need to make a few changes in the MainApplication.java file located at project-name > android > app > src > main > java > com > project-name.
Modify the content of MainApplication.java as below.

MainApplication.java

import android.app.Application;
import android.content.Context;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.soloader.SoLoader;
import java.util.List;


// Additional packages which we need to add
import net.no_mad.tts.TextToSpeechPackage;
import com.oblador.vectoricons.VectorIconsPackage;
import com.facebook.react.shell.MainReactPackage;
import com.reactnativecommunity.rnpermissions.RNPermissionsPackage;
import com.wmjmc.reactspeech.VoicePackage;
import java.util.Arrays;

public class MainApplication extends Application implements ReactApplication {

  private final ReactNativeHost mReactNativeHost =
      new ReactNativeHost(this) {
        @Override
        public boolean getUseDeveloperSupport() {
          return BuildConfig.DEBUG;
        }
        // Replace your getPackages() method with below getPackages() method
        @Override
        protected List<ReactPackage> getPackages() {
          return Arrays.<ReactPackage>asList(
              new MainReactPackage(),
              new VectorIconsPackage(),
              new RNPermissionsPackage(),
              new VoicePackage(),
              new TextToSpeechPackage()
          );
        }

        @Override
        protected String getJSMainModuleName() {
          return "index";
        }
      };

  @Override
  public ReactNativeHost getReactNativeHost() {
    return mReactNativeHost;
  }

  @Override
  public void onCreate() {
    super.onCreate();
    SoLoader.init(this, /* native exopackage */ false);
    // initializeFlipper(this); // Remove this line if you don't want Flipper enabled
  }

  /**
   * Loads Flipper in React Native templates.
   *
   * @param context
   */

  // private static void initializeFlipper(Context context) {
  //   if (BuildConfig.DEBUG) {
  //     try {
  //       /*
  //        We use reflection here to pick up the class that initializes Flipper,
  //       since Flipper library is not available in release mode
  //       */
  //       Class<?> aClass = Class.forName("com.facebook.flipper.ReactNativeFlipper");
  //       aClass.getMethod("initializeFlipper", Context.class).invoke(null, context);
  //     } catch (ClassNotFoundException e) {
  //       e.printStackTrace();
  //     } catch (NoSuchMethodException e) {
  //       e.printStackTrace();
  //     } catch (IllegalAccessException e) {
  //       e.printStackTrace();
  //     } catch (InvocationTargetException e) {
  //       e.printStackTrace();
  //     }
  //   }
  // }
}

We have added import net.no_mad.tts.TextToSpeechPackage so the app supports text-to-speech conversion, letting the user receive the response not only as text but also as voice.

To capture voice input, we have to add import com.wmjmc.reactspeech.VoicePackage.

If you intend to use react-native-vector-icons, you also need to add import com.oblador.vectoricons.VectorIconsPackage.

The initializeFlipper() method has to be commented out (as in the code above) to avoid encountering an error, and the initializeFlipper(this) call inside the onCreate() method has to be commented out as well.

Moving on from MainApplication.java, we also need to modify our existing code in the App.js file.

Initially, we need to install some packages. To do so, navigate to your project folder in the terminal and execute the below command.

npm install react-native-elements react-native-vector-icons react-native-tts uuid react-native-android-voice --save

We are using react-native-elements for its ready-made UI components. The react-native-vector-icons package is required for the icons used by those components (the mic button below uses a FontAwesome icon).

React Native TTS is a text-to-speech library for react-native on iOS and Android.

The uuid package is used to generate UUIDs (unique IDs), which we use for the message ids in the app.
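For example, generating an id takes a single call (the same call the uuidGen() helper in the code below wraps):

import uuid from 'uuid';

// Returns a random version-4 UUID string, e.g. '110ec58a-a0f2-4ac4-8393-c866d813b8d1'
const messageId = uuid.v4();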

react-native-android-voice is a speech-to-text library for react-native for the Android platform.

Now modify the content of the App.js file as below.
App.js:

import React, { Component } from 'react';
import { View, StyleSheet, ToastAndroid } from 'react-native';
import { GiftedChat } from 'react-native-gifted-chat';
import { Dialogflow_V2 } from 'react-native-dialogflow';
import { dialogflowConfig } from './env';
import SpeechAndroid from 'react-native-android-voice';
import uuid from "uuid";
import Tts from 'react-native-tts';
import Icon from 'react-native-vector-icons/FontAwesome';
import { Button } from 'react-native-elements';

const BOT_USER = {
  _id: 2,
  name: 'Health Bot',
  avatar: 'https://previews.123rf.com/images/iulika1/iulika11909/iulika1190900021/129697389-medical-worker-health-professional-avatar-medical-staff-doctor-icon-isolated-on-white-background-vec.jpg'
};

var speak = 0; // set to 1 when the last input came from voice, so the bot also speaks its reply

class App extends Component {
  state = {
    messages: [
      {
        _id: 1,
        text: 'Hi! I am the Healthbot 🤖.\n\nHow may I help you today?',
        createdAt: new Date(),
        user: BOT_USER,
      }
    ],
  };

  componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

  onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

  handleGoogleResponse(result) {
    let receivedText = '';
    let splitReceivedText = '';
    var extractDate = '';
    const dateRegex = /\d{4}\-\d{2}\-\d{2}?/gm;

    receivedText = result.queryResult.fulfillmentMessages[0].text.text[0];
    splitReceivedText = receivedText.split('on')[0];
    extractDate = receivedText.match(dateRegex);

    if (extractDate != null) {
      var completeTimeValue = splitResponseForTime(receivedText);
      var timeValue = getTimeValue(completeTimeValue);
      function splitResponseForTime(str) {
        return str.split('at')[1];
      }

      function getTimeValue(str) {
        let time1 = str.split('T')[1];
        let hour = time1.split(':')[0];
        let min = time1.split(':')[1];
        return hour + ":" + min;
      }
      splitReceivedText = splitReceivedText + 'on ' + extractDate[0] + ' at ' + timeValue;
    }
    this.sendBotResponse(splitReceivedText);
  }

  sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, msg)
    }));

    if (speak) {
      speak=0; 
      Tts.getInitStatus().then(() => {
        Tts.speak(text);
      }, (err) => {
        if (err.code === 'no_engine') {
          Tts.requestInstallEngine();
        }
      });
    }
  }

  uuidGen = () => {
    return uuid.v4();
  }

  micHandler = async () => {
    try {
      let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.ENGLISH);
      speak=1;
      let messagesArray = [{
        "text": textSpeech,
        "user": {
          "_id": 1
        },
        "createdAt": new Date(),
        "_id": this.uuidGen()
      }];
      await this.onSend(messagesArray);

      ToastAndroid.show(textSpeech, ToastAndroid.LONG);
    } catch (error) {
      switch (error) {
        case SpeechAndroid.E_VOICE_CANCELLED:
          ToastAndroid.show("Voice Recognizer cancelled", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_NO_MATCH:
          ToastAndroid.show("No match for what you said", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_SERVER_ERROR:
          ToastAndroid.show("Google Server Error", ToastAndroid.LONG);
          break;
      }
    }
  }


  render() {
    return (
      <View style={styles.screen}>
        <GiftedChat
          messages={this.state.messages}
          onSend={messages => this.onSend(messages)}
          user={{
            _id: 1
          }}
        />
        <Button
          icon={
            <Icon
              name="microphone"
              size={24}
              color="white"
            />
          }
          onPress={this.micHandler}
          />
        </View>
    );
  }
}

const styles = StyleSheet.create({
  screen: {
    flex: 1,
    backgroundColor: '#fff'
  },
});
export default App;

Now when we run the app using the react-native run-android command, we can see a microphone button that enables the speech-to-text feature in the app.

Once we click the mic button, it triggers the micHandler() function, which contains the logic to convert speech into text. This automatically starts recognizing and adjusting for the English language.

However, you can use different languages for speech. Read this for more information.
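For instance, the locale constant passed to startSpeech can be switched to another supported language (a hedged example; check the react-native-android-voice documentation for the exact constants available in your version):

// Recognize German instead of English (assumes SpeechAndroid.GERMAN exists in your version)
let textSpeech = await SpeechAndroid.startSpeech("Sprich jetzt", SpeechAndroid.GERMAN);

The full micHandler() function is repeated below for reference.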

micHandler = async () => {
    try {
      let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.ENGLISH);
      speak=1;
      let messagesArray = [{
        "text": textSpeech,
        "user": {
          "_id": 1
        },
        "createdAt": new Date(),
        "_id": this.uuidGen()
      }];
      await this.onSend(messagesArray);
      ToastAndroid.show(textSpeech, ToastAndroid.LONG);
    } catch (error) {
      switch (error) {
        case SpeechAndroid.E_VOICE_CANCELLED:
          ToastAndroid.show("Voice Recognizer cancelled", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_NO_MATCH:
          ToastAndroid.show("No match for what you said", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_SERVER_ERROR:
          ToastAndroid.show("Google Server Error", ToastAndroid.LONG);
          break;
      }
    }
  }

micHandler() is an asynchronous function. Here, we capture the recognized text, wrap it in a message object inside the messagesArray array, and pass it as an argument to the onSend() function. As we need a unique id for every message, we use the uuidGen() function, which simply returns a new unique id every time we call it.

If the Dialogflow request is successful, the onSend() function triggers the handleGoogleResponse() function, shown below.

handleGoogleResponse(result) {
    let receivedText = '';
    let splitReceivedText = '';
    var extractDate = '';
    const dateRegex = /\d{4}\-\d{2}\-\d{2}?/gm;

    receivedText = result.queryResult.fulfillmentMessages[0].text.text[0];
    splitReceivedText = receivedText.split('on')[0];
    extractDate = receivedText.match(dateRegex);

    if (extractDate != null) {
      var completeTimeValue = splitResponseForTime(receivedText);
      var timeValue = getTimeValue(completeTimeValue);
      function splitResponseForTime(str) {
        return str.split('at')[1];
      }

      function getTimeValue(str) {
        let time1 = str.split('T')[1];
        let hour = time1.split(':')[0];
        let min = time1.split(':')[1];
        return hour + ":" + min;
      }
      splitReceivedText = splitReceivedText + 'on ' + extractDate[0] + ' at ' + timeValue;
    }
    this.sendBotResponse(splitReceivedText);
  }

In this method, we have written the logic that restructures the response sent by the Dialogflow API into the format we require (extracting the date and time from the fulfillment text). The structured text is then passed on to the sendBotResponse() function, which updates the messages array in state.

 sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, msg)
    }));

    if (speak) {
      speak=0; 
      Tts.getInitStatus().then(() => {
        Tts.speak(text);
      }, (err) => {
        if (err.code === 'no_engine') {
          Tts.requestInstallEngine();
        }
      });
    }
  }

If the user interacts with the bot using voice, the response from the bot is delivered not only as text but also as voice. That is handled using Tts.speak(text). It can take some time to initialize the TTS engine, and Tts.speak() will fail until the engine is ready, so we wait for successful initialization with the Tts.getInitStatus() call.

Below are the snapshots of the app running on an Android device.

That’s all. Thank you for the read!

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in Cloud Services based development.

Creating a Chatbot for Healthcare in React Native using Dialogflow

In this blog, we shall learn how to build an AI virtual assistant or a Chatbot using React Native and Dialogflow API.

Why are chatbots important?
A chatbot is a piece of software that conducts a conversation through voice-based or textual methods. Chatbots offer companies new opportunities to improve the customer engagement process and operational efficiency by reducing the typical cost of customer service.


What is Dialogflow?
Dialogflow (previously known as API.AI) is a Natural Language Processing (NLP) platform that helps build conversational applications for a company's customers in various languages and across multiple platforms. It enables developers to create text-based and voice conversation interfaces for responding to customer queries in different languages.

Why Dialogflow?
There are different chatbot SDKs like Dialogflow, Amazon Lex, IBM Watson, Microsoft Bot Framework, etc. The reasons why we chose Dialogflow are:

  1. Dialogflow supports multiple platforms.
  2. Dialogflow supports devices of all kinds, like wearables, phones and others.
  3. Dialogflow also supports multiple languages.

How does Dialogflow work?
In Dialogflow, the typical flow of any conversation involves these steps:

  1. The user providing an input.
  2. Dialogflow agent parsing that input based on the intent.
  3. Agent returning a response to the user.

Setting up Dialogflow account:

Navigate to the console on the official website. You will be prompted to sign in with Google; go ahead and sign in. After successfully signing in, you will see a dashboard.

Before we dive into the platform and start building the bot/agent, let us learn about the terms used in Dialogflow.

After signing in, you will see a Create Agent tab. An agent is nothing but the bot that you would like to create. Give it a name of your choice and click on the Create button. After it is created successfully, you will see multiple tabs on the left side of the screen, like:

  1. Intents
  2. Entities
  3. Fulfillment etc

Intents:
An Intent is a specific action that the user can invoke by using one of the defined terms in the Dialogflow console. 

For example, the user could ask “What’s the time?” or “What is today’s date?”. If these phrases are defined within the console, they will be detected by Dialogflow and the intents they are defined under will be triggered.

You can create an intent by clicking on create intent as shown below.

You shall see some default intents already available. We can create the new intents here.

Entities:
An Entity is a property which can be used by Dialogflow to answer the request from the user. The entity will usually be a keyword within the request such as a name, date, time etc. 

Dialogflow has a rich set of predefined entities and also has an option that enables the developer to define custom entities as well.

Fulfillment:
When the user provides input, Dialogflow processes it, and the input might contain entities as well. To fulfill the user's request, Dialogflow can request information from a web-hook: the input provided by the user, along with the extracted entities, is sent to the web-hook so that the required information can be retrieved. Once Dialogflow receives the information from the web-hook, it sends the response back to the user in the desired manner.

For example, if the user wants to know about weather conditions, a web-hook could be used to get info about weather and pass it on to the user.
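As a rough illustration, a fulfillment web-hook is just an HTTPS endpoint that receives the matched intent and parameters and returns text for Dialogflow to relay back to the user. Below is a minimal hypothetical sketch using Express; the route name, parameter name and canned reply are placeholders, not part of the bot built in this post:

const express = require('express');
const app = express();
app.use(express.json());

app.post('/dialogflow-webhook', (req, res) => {
	// Dialogflow v2 sends the matched intent and extracted entities in queryResult
	console.log('Matched intent:', req.body.queryResult.intent.displayName);
	const city = req.body.queryResult.parameters['geo-city'];

	// Look up the weather however you like, then reply with fulfillmentText
	res.json({ fulfillmentText: `Looks like clear skies in ${city} today.` });
});

app.listen(3000);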

Response:
It is the content which Dialogflow sends back to the user once the user’s query is processed.

Creating a ChatBot for Health care:

Now that we have learnt about some basic terms of Dialogflow, let us start building a chatbot (in this case Healthbot) which helps the user (patient) to schedule an appointment with a specific doctor in an organization.

Let’s go ahead and create an agent first. Here we are creating an agent with the name HealthBot.

After clicking the create button, the HealthBot agent would be created. It would look like below.

You could see some default intents there. We can create our own intents here. So, let’s move forward and start creating the intents.

The intent we will be creating here is “Schedule an Appointment”.

Save the intent after creating it. In the Training Phrases section, we can add our own training phrases to train the agent.

When we add a particular training phrase, Dialogflow looks for predefined entities in the phrase; if any are found, it highlights them as shown.

Add a few other related training phrases and click on Save.

Next, in the Action and Parameters section, we can mark @sys.person, @sys.date and @sys.time as required by checking the Required checkbox. We can also define prompts for the required fields, so that if the user does not provide any one of them, the defined prompt is shown, asking the user to supply the missing parameter.

The prompts for the entities could be defined by clicking define prompts under Prompts. Below are the prompts for the respective entities.

Next we have to add the response in the Response section.

After receiving all the required parameters from the user, we can phrase a response as shown.

Now we have to create a front end app using React Native which would communicate with the HealthBot agent.

Let's go to React Native Docs, select React Native CLI Quickstart and select the appropriate development OS and the target OS as Android, as we are going to build an Android application.

Follow the docs for installing dependencies, then create a new React Native application. Use the command line interface to create a new React Native project.

react-native init <project-name>

By using the below commands you can run the app on an Android device. You should see the default welcome page.

cd <project-name>
npm install
react-native run-android

Note: If you face an issue like “Failed to install the app. Make sure you have the Android development environment set up”, just go to the <project-name>/android folder, create a file named local.properties, and add the Android SDK path to it as shown below.

sdk.dir = Your Android SDK Path

We also need to install some dependencies using below command.

npm install react-native-gifted-chat react-native-dialogflow --save

We are using the react-native-gifted-chat package as it provides a customizable and complete chat UI.

We are also using react-native-dialogflow so that we can bridge our app with Google Dialogflow's SDK.

For our app to communicate with the Dialogflow agent, we need to configure a few things. For that, create a .js file in your project root folder (in this case env.js); we will add a few configuration values to it.

To get the values, click on the Service Account link; you can find it by clicking on the gear icon beside the agent name on the left side of the screen.

After clicking the link, you will be shown a table called Service accounts for project “<Agent Name>”. Click on Actions and select the Create key option. A prompt will appear asking you to choose a key type. Select JSON and click on Create; a JSON file will be downloaded. Just copy the contents of that JSON file into env.js.

Your env.js file would look like below.

env.js

export const dialogflowConfig = {
  "type": "service_account",
  "project_id": "Health-bot",
  "private_key_id": "xxxx",
  "private_key": "-----BEGIN PRIVATE KEY-----\n xxxx\n-----END PRIVATE KEY-----\n",
  "client_email": "xxxx",
  "client_id": "xxxx",
  "auth_uri": "xxxx",
  "token_uri": "xxxx",
  "auth_provider_x509_cert_url": "xxxx",
  "client_x509_cert_url": "xxxx"
}

Now go to the <project-name> directory and open App.js. Modify the content of App.js as below.

App.js

import React, { Component } from 'react';
import {View} from 'react-native';
import { GiftedChat } from 'react-native-gifted-chat';
import { Dialogflow_V2 } from 'react-native-dialogflow';
import { dialogflowConfig } from './env';

const BOT_USER = {
  _id: 2,
  name: 'Health Bot',
  avatar: 'https://previews.123rf.com/images/iulika1/iulika11909/iulika1190900021/129697389-medical-worker-health-professional-avatar-medical-staff-doctor-icon-isolated-on-white-background-vec.jpg'
};
class App extends Component {

  state = {
    messages: [
      {
        _id: 1,
        text: 'Hi! I am the Healthbot 🤖.\n\nHow may I help you today?',
        createdAt: new Date(),
        user: BOT_USER
      }
    ]
  };

  componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

  onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

  handleGoogleResponse(result) {
    let text = result.queryResult.fulfillmentMessages[0].text.text[0];
    this.sendBotResponse(text);
  }

  sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, [msg])
    }));
  }

  render() {
    return (
      <View style={{ flex: 1, backgroundColor: '#fff' }}>
        <GiftedChat
          messages={this.state.messages}
          onSend={messages => this.onSend(messages)}
          user={{
            _id: 1
          }}
        />
      </View>
    );
  }
}
export default App;

When the App component mounts, componentDidMount() runs, where we set the configuration of Dialogflow as given below.


componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

When you click on Send, it triggers the onSend() method, where the user's message is stored in state and a request is sent to Dialogflow using Dialogflow_V2.requestQuery. If the response is successful, the handleGoogleResponse() method gets triggered.

onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

handleGoogleResponse() gets the text from the response and triggers the sendBotResponse() method, which adds the bot's reply to the messages state as shown below.

handleGoogleResponse(result) {
    let text = result.queryResult.fulfillmentMessages[0].text.text[0];
    this.sendBotResponse(text);
  }

  sendBotResponse(text) {
      let msg = {
        _id: this.state.messages.length + 1,
        text,
        createdAt: new Date(),
        user: BOT_USER
      };

      this.setState(previousState => ({
        messages: GiftedChat.append(previousState.messages, [msg])
      }));
    }

Below are the images of the app running on an Android device.

That’s it folks, we hope it was fun and useful.

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in Cloud Services based development.

Object Detection in React Native App using AWS Rekognition

In this post, we are going to build a React Native app for detecting objects from an image using Amazon Rekognition.

Here we will capture an image or select one from the file system. We will send that image to API Gateway, which triggers a Lambda function that stores it in an S3 bucket. The stored image is then sent to Amazon Rekognition, which detects the objects in the image.

Installing dependencies:

Let's go to React Native Docs, select React Native CLI Quickstart and select our appropriate Development OS and the Target OS as Android, as we are going to build an Android application.

Follow the docs for installing dependencies, then create a new React Native Application. Use the command line interface to generate a new React Native project called ObjectDetection.

react-native init ObjectDetection

Preparing the Android device:

We shall need an Android device to run our React Native Android app. This can be either a physical Android device, or more commonly, we can use an Android Virtual Device (AVD) which allows us to emulate an Android device on our computer (using Android Studio).

Either way, we shall need to prepare the device to run Android apps for development. If you have a physical Android device, you can use it for development in place of an AVD by connecting it to your computer using a USB cable and following the instructions here.

If you are using a virtual device follow this link. I shall be using a physical Android device.

Now go to the command line and run react-native run-android inside your React Native app directory.

cd ObjectDetection && react-native run-android

If everything is set up correctly, you should see your new app running on your physical device or Android emulator.

API Creation in AWS Console: 

Before going further, create an API in your AWS console following Working with API Gateway paragraph in the following post:
https://medium.com/zenofai/serverless-web-application-architecture-using-react-with-amplify-part1-5b4d89f384f7
Once you are done with creating the API, come back to the React Native application.
Now, go to your project directory and replace your App.js file with the following code.

import React, {Component} from 'react';
import {
 StyleSheet,
 View,
 Text,
 TextInput,
 Image,
 ScrollView,
 TouchableHighlight,
} from 'react-native';
import ImagePicker from 'react-native-image-picker';
import Amplify, {API} from 'aws-amplify';
 
// Amplify configuration for API-Gateway
Amplify.configure({
 API: {
   endpoints: [
     {
       name: 'LabellingAPI',   //your api name
       endpoint: '<Endpoint-URL>', // your endpoint URL
     },
   ],
 },
});
 
class Registration extends Component {
 constructor(props) {
   super(props);
   this.state = {
     username: 'storeImage.png',
     userId: '',
     image: '',
     capturedImage: '',
     objectName: '',
   };
 }
 
// It selects image from filesystem or capture from camera
 captureImageButtonHandler = () => {
   this.setState({
     objectName: '',
   });
 
   ImagePicker.showImagePicker(
     {title: 'Pick an Image', maxWidth: 800, maxHeight: 600},
     response => {
       console.log('Response = ', response);
       if (response.didCancel) {
         console.log('User cancelled image picker');
       } else if (response.error) {
         console.log('ImagePicker Error: ', response.error);
       } else if (response.customButton) {
         console.log('User tapped custom button: ', response.customButton);
       } else {
         // You can also display the image using data:
         const source = {uri: 'data:image/jpeg;base64,' + response.data};
         this.setState({
           capturedImage: response.uri,
           base64String: source.uri,
         });
       }
     },
   );
 };
 
// this method triggers when you click submit. If the image is valid then It will send the image to API Gateway. 
 submitButtonHandler = () => {
   if (
     this.state.capturedImage == '' ||
     this.state.capturedImage == undefined ||
     this.state.capturedImage == null
   ) {
     alert('Please Capture the Image');
   } else {
     const apiName = 'LabellingAPI';
     const path = '/storeimage';
     const init = {
       headers: {
         Accept: 'application/json',
         'Content-Type': 'application/x-amz-json-1.1',
       },
       body: JSON.stringify({
         Image: this.state.base64String,
         name: 'storeImage.png',
       }),
     };
 
     API.post(apiName, path, init).then(response => {
        if (response.Labels.length > 0) {
         this.setState({
           objectName: response.Labels[0].Name,
         });
       } else {
         alert('Please Try Again.');
       }
     });
   }
 };
 
 render() {
   return (
     <View style={styles.MainContainer}>
       <ScrollView>
         <Text
           style={{
             fontSize: 20,
             color: '#000',
             textAlign: 'center',
             marginBottom: 15,
             marginTop: 10,
           }}>
           Capture Image
         </Text>
         {this.state.capturedImage !== '' && (
           <View style={styles.imageholder}>
             <Image
               source={{uri: this.state.capturedImage}}
               style={styles.previewImage}
             />
           </View>
         )}
         {this.state.objectName ? (
           <TextInput
             underlineColorAndroid="transparent"
             style={styles.TextInputStyleClass}
             value={this.state.objectName}
           />
         ) : null}
         <TouchableHighlight
           style={[styles.buttonContainer, styles.captureButton]}
           onPress={this.captureImageButtonHandler}>
           <Text style={styles.buttonText}>Capture Image</Text>
         </TouchableHighlight>
 
         <TouchableHighlight
           style={[styles.buttonContainer, styles.submitButton]}
           onPress={this.submitButtonHandler}>
           <Text style={styles.buttonText}>Submit</Text>
         </TouchableHighlight>
       </ScrollView>
     </View>
   );
 }
}
 
const styles = StyleSheet.create({
 TextInputStyleClass: {
   textAlign: 'center',
   marginBottom: 7,
   height: 40,
   borderWidth: 1,
   marginLeft: 90,
   width: '50%',
   justifyContent: 'center',
   borderColor: '#D0D0D0',
   borderRadius: 5,
 },
 inputContainer: {
   borderBottomColor: '#F5FCFF',
   backgroundColor: '#FFFFFF',
   borderRadius: 30,
   borderBottomWidth: 1,
   width: 300,
   height: 45,
   marginBottom: 20,
   flexDirection: 'row',
   alignItems: 'center',
 },
 buttonContainer: {
   height: 45,
   flexDirection: 'row',
   alignItems: 'center',
   justifyContent: 'center',
   marginBottom: 20,
   width: '80%',
   borderRadius: 30,
   marginTop: 20,
   marginLeft: 5,
 },
 captureButton: {
   backgroundColor: '#337ab7',
   width: 350,
 },
 buttonText: {
   color: 'white',
   fontWeight: 'bold',
 },
 horizontal: {
   flexDirection: 'row',
   justifyContent: 'space-around',
   padding: 10,
 },
 submitButton: {
   backgroundColor: '#C0C0C0',
   width: 350,
   marginTop: 5,
 },
 imageholder: {
   borderWidth: 1,
   borderColor: 'grey',
   backgroundColor: '#eee',
   width: '50%',
   height: 150,
   marginTop: 10,
   marginLeft: 90,
   flexDirection: 'row',
   alignItems: 'center',
 },
 previewImage: {
   width: '100%',
   height: '100%',
 },
});
 
export default Registration;

In the above code, we are configuring Amplify with the API name and endpoint URL that you created, as shown below.

Amplify.configure({
 API: {
   endpoints: [
     {
       name: '<Your-API-Name>',
       endpoint: '<Endpoint-URL>',
     },
   ],
 },
});

Clicking the Capture Image button triggers the captureImageButtonHandler function, which asks the user to take a picture or select one from the file system. When the user captures or selects an image, we store it in the state as shown below.

captureImageButtonHandler = () => {
   this.setState({
     objectName: '',
   });
 
   ImagePicker.showImagePicker(
     {title: 'Pick an Image', maxWidth: 800, maxHeight: 600},
     response => {
       console.log('Response = ', response);
       if (response.didCancel) {
         console.log('User cancelled image picker');
       } else if (response.error) {
         console.log('ImagePicker Error: ', response.error);
       } else if (response.customButton) {
         console.log('User tapped custom button: ', response.customButton);
       } else {
         // You can also display the image using data:
         const source = {uri: 'data:image/jpeg;base64,' + response.data};
         this.setState({
           capturedImage: response.uri,
           base64String: source.uri,
         });
       }
     },
   );
 };

After capturing the image, we preview it. Clicking the Submit button triggers the submitButtonHandler function, which sends the image to the endpoint as shown below.

submitButtonHandler = () => {
   if (
     this.state.capturedImage == '' ||
     this.state.capturedImage == undefined ||
     this.state.capturedImage == null
   ) {
     alert('Please Capture the Image');
   } else {
     const apiName = 'LabellingAPI';
     const path = '/storeimage';
     const init = {
       headers: {
         Accept: 'application/json',
         'Content-Type': 'application/x-amz-json-1.1',
       },
       body: JSON.stringify({
         Image: this.state.base64String,
         name: 'storeImage.png',
       }),
     };
 
     API.post(apiName, path, init).then(response => {
        if (response.Labels.length > 0) {
         this.setState({
           objectName: response.Labels[0].Name,
         });
       } else {
         alert('Please Try Again.');
       }
     });
   }
 };

Lambda Function:

Add the following code to the Lambda function that you created in your AWS Console.

const AWS = require('aws-sdk')
var rekognition = new AWS.Rekognition()
var s3Bucket = new AWS.S3( { params: {Bucket: "<Your-Bucket>"} } );
exports.handler = (event, context, callback) => {
   let parsedData = JSON.parse(event)
   let encodedImage = parsedData.Image;
   var filePath = parsedData.name;
    let buf = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""),'base64')
   var data = {
       Key: filePath,
       Body: buf,
       ContentEncoding: 'base64',
       ContentType: 'image/jpeg'
   };
   s3Bucket.putObject(data, function(err, data){
       if (err) {
           console.log('Error uploading data: ', data);
           callback(err, null);
       } else {
           var params = {
             Image: {
              S3Object: {
               Bucket: "<Your-Bucket>",
               Name: filePath
              }
             },
             MaxLabels: 10,
             MinConfidence: 90
            };
           rekognition.detectLabels(params, function(err, data) {
               if (err){
                   console.log(err, err.stack);
                   callback(err)
               }
               else{
                   console.log(data);
                   callback(null, data);
               }
           });
       }
   });
};

In the above code, we receive the image from the React Native app and store it in the S3 bucket. The stored image is then passed to Amazon Rekognition, whose detectLabels method detects the labels in the image and returns them in JSON format.
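A trimmed detectLabels response looks roughly like the following (illustrative values only), which is why the app simply reads response.Labels[0].Name:

{
	"Labels": [
		{ "Name": "Backpack", "Confidence": 98.7 },
		{ "Name": "Bag", "Confidence": 96.2 }
	]
}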

capture image screen

Once you capture an image you can see a preview of that image as shown below.

Nike backpack

On submitting the captured image you can see the label of that image as shown below:

Object recognised as backpack

That’s all folks! I hope it was helpful.
For any queries drop them in the comments section.

This story is authored by Dheeraj Kumar and Venu Vaka. Dheeraj is a software engineer specializing in React Native and React based frontend development. Venu is a software engineer specializing in ReactJS and AWS Cloud.

Understanding React Native Navigation

This post focuses on building a react-native app which supports navigation using react-native-navigation.

Why React Native Navigation?

There are many ways in which we can implement navigation functionality in mobile apps developed using React Native. The two most popular among them are
‘React Navigation’ and ‘React Native Navigation’.

As React Native Navigation uses the native modules with a JS bridge, the performance will be better when compared to React Navigation.

In the newer versions of react-native it is not necessary to link the libraries manually. The autolinking feature was introduced in react-native version 0.60. The newer versions of react-native (0.60 and above) use CocoaPods by default.

Hence it is not necessary to link the libraries in Xcode. The libraries will be linked automatically when we install the libraries using the ‘pod install’ command. Over the course of this post, we will be building a simple app for iOS in react-native (version above 0.60) which supports navigation using ‘react-native-navigation’.

Create a react-native app:

To set up react-native environment in your machine refer to the link below.

https://facebook.github.io/react-native/docs/getting-started.html

Create a react-native app using the following command.

react-native init sample-app

It will create a project folder with the name sample-app. Then run the following commands to install the dependencies.

cd sample-app
npm install

After that run the following command inside the project directory to open the app in an iPhone Simulator.

react-native run-ios

You should see the app running in an iOS simulator in a while. It will display the default welcome screen if you haven't made any code changes in the App.js file.

Steps to install React-Native-Navigation:

To install react-native-navigation, run the following command inside the project directory.

npm install react-native-navigation --save

It will install the react-native-navigation package into the project. Add the following line to the Podfile.

pod 'ReactNativeNavigation', :podspec => '../node_modules/react-native-navigation/ReactNativeNavigation.podspec'

Then move to the ios folder in the project and install the dependencies using the following commands.

cd ios
pod install

The pod install command installs the pods specified in the Podfile, along with any dependencies they may have.

After that, we need to edit the AppDelegate.m file in Xcode.
To do so, open the .xcworkspace file located in the sample-app/ios folder. Remove the content of the AppDelegate.m file and add the following code to it.

File : AppDelegate.m

#import "AppDelegate.h"

#import <React/RCTBundleURLProvider.h>
#import <React/RCTRootView.h>
#import <ReactNativeNavigation/ReactNativeNavigation.h>

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
  NSURL *jsCodeLocation = [[RCTBundleURLProvider sharedSettings] jsBundleURLForBundleRoot:@"index" fallbackResource:nil];
  [ReactNativeNavigation bootstrap:jsCodeLocation launchOptions:launchOptions];
  
  return YES;
}

@end

Navigation Workflow:

First we will create a landing/home screen for the app. Create a Landing Screen inside the project folder by following the below steps.

  1. Create a folder named src inside the project.
  2. Add another folder inside src and name it screens.
  3. Create Welcome.js file in screens folder and add the following code into it.

File : Welcome.js

import React, {Component} from 'react';
import {View, Text, StyleSheet, Button} from 'react-native';

class WelcomeScreen extends Component {
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.textStyles}>React Native Navigation</Text>
        <View style={styles.loginButtonContainer}>
          <Button
            title="Login"
            color="#FFFFFF"
          />
        </View>
        <View style={styles.signUpButtonContainer}>
          <Button
            title="Signup"
            color="#FFFFFF"
          />
        </View>
      </View>
    );
  }
}

export default WelcomeScreen;

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'flex-start',
    backgroundColor: '#CCCCCC',
  },
  textStyles: {
    alignItems: 'flex-start',
    fontSize: 25,
    fontWeight: 'bold',
    marginTop: 200,
  },
  loginButtonContainer: {
    backgroundColor: '#2494fb',
    borderRadius: 10,
    marginTop: 40,
    width: '50%',
    shadowColor: '#000000',
  },
  signUpButtonContainer: {
    backgroundColor: '#2494fb',
    borderRadius: 10,
    marginTop: 10,
    width: '50%',
    shadowColor: '#000000',
  },
});

We need to register the screens for the navigator to work. For that, replace the contents of the index.js file with the following code.

File : index.js

import { Navigation } from 'react-native-navigation';
import Welcome from './src/screens/Welcome'; 

Navigation.registerComponent('Welcome', ()=>Welcome)
Navigation.events().registerAppLaunchedListener(()=>{
    Navigation.setRoot({
        root:{
            component:{
                name:'Welcome'
            }
        }
    })
})

Explanation:

Navigation.registerComponent() is used to register the screen. As we have already created the Welcome screen component, we would use that to register.

Navigation.registerComponent('Welcome', () => Welcome).

The above step only registers our screen but not the application. So to register our application we use Navigation.events().registerAppLaunchedListener().

Inside the callback of the registerAppLaunchedListener we will set the root of the application using Navigation.setRoot(). The code would be like the following.

Navigation.events().registerAppLaunchedListener(()=>{
    Navigation.setRoot({
        root:{
            component:{
                name:'Welcome'
            }
        }
    })
})

Here the component name is Welcome which is the screen we created and registered.

Note: The name of the component and the name used in registerComponent must be the same.

Run the project from Xcode, which you opened earlier via the .xcworkspace file. You will see the landing screen we created in the iPhone Simulator. It would look like below.

Now that we have the landing page of the app, let's navigate to other screens when the Login or SignUp button is clicked. To achieve that, we will create two more screens inside the screens folder: Login.js and SignUp.js.

Place the below code snippet in Login.js file.

File : Login.js

import React, { Component } from "react";
import { View, Text, StyleSheet } from "react-native";

class Login extends Component {
    render() {
        return(
            <View style={styles.container}>
                <Text style={styles.textStyles}>
                    Welcome to Login Page.
                </Text>
            </View> 
        );
    }
}

export default Login;

const styles = StyleSheet.create({
    container: {
        flex:1,
        alignItems:'center',
        justifyContent: 'center',
        backgroundColor: '#CCCCCC',
    },
    textStyles: {
        alignItems:'center',
        fontSize: 25
    }
});

After that place the below code in SignUp.js

File : SignUp.js

import React, { Component } from "react";
import { View, Text, StyleSheet } from "react-native";

class SignUp extends Component {
    render() {
        return(
            <View style={styles.container}>
                <Text style={styles.textStyles}>
                    Welcome to SignUp Page.
                </Text>
            </View> 
        );
    }
}

export default SignUp;

const styles = StyleSheet.create({
    container: {
        flex:1,
        alignItems:'center',
        justifyContent: 'center',
        backgroundColor: '#CCCCCC',
    },
    textStyles: {
        alignItems:'center',
        fontSize: 25
    }
});

Since we have to navigate between screens, we cannot set the root to a single component. Instead, we must use a stack (an array of screens). The stack has a children array in which we set the component name of the app's landing page. Hence we also need to replace the contents of the index.js file as below.

File : index.js

import { Navigation } from 'react-native-navigation';
import Welcome from './src/screens/Welcome';
import Login from './src/screens/Login';
import SignUp from './src/screens/SignUp';

Navigation.registerComponent('Welcome', ()=>Welcome)
Navigation.registerComponent('Login', () => Login)
Navigation.registerComponent('SignUp', () => SignUp)

Navigation.events().registerAppLaunchedListener(()=>{
    Navigation.setRoot({
        root:{
            stack:{
                id:'navigationStack',
                children: [
                        {
                            component :{
                                name: 'Welcome'
                            },
                        },
                ]
            }
        }
    })
})

To add a title to the landing page, replace the Navigation.setRoot({}) call in the index.js file as below.

Navigation.setRoot({
        root:{
            stack:{
                id:'navigationStack',
                children: [
                        {
                            component :{
                                name: 'Welcome',
                                options: {
                                    topBar: {
                                        title: {
                                            text: 'Home'
                                        }
                                    }
                                }
                            },
                        },
                ]
            }
        }
    })

The landing screen will now look like below, with Home as its title.

Now we will write the event handler function in Welcome.js file so that we can navigate between the screens when we click on either the Login or SignUp button. For that replace the content of Welcome.js file as below.

File : Welcome.js

import React, {Component} from 'react';
import {View, Text, StyleSheet, Button} from 'react-native';
import {Navigation} from 'react-native-navigation';

class WelcomeScreen extends Component {
  goToScreen = screenName => {
    Navigation.push(this.props.componentId, {
      component: {
        name: screenName,
      },
    });
  };
  render() {
    return (
      <View style={styles.container}>
        <Text style={styles.textStyles}>React Native Navigation</Text>
        <View style={styles.loginButtonContainer}>
          <Button
            title="Login"
            color="#FFFFFF"
            onPress={() => this.goToScreen('Login')}
          />
        </View>
        <View style={styles.signUpButtonContainer}>
          <Button
            title="Signup"
            color="#FFFFFF"
            onPress={() => this.goToScreen('SignUp')}
          />
        </View>
      </View>
    );
  }
}

export default WelcomeScreen;

const styles = StyleSheet.create({
  container: {
    flex: 1,
    alignItems: 'center',
    justifyContent: 'flex-start',
    backgroundColor: '#CCCCCC',
  },
  textStyles: {
    alignItems: 'flex-start',
    fontSize: 25,
    fontWeight: 'bold',
    marginTop: 200,
  },
  loginButtonContainer: {
    backgroundColor: '#2494fb',
    borderRadius: 10,
    marginTop: 40,
    width: '50%',
    shadowColor: '#000000',
  },
  signUpButtonContainer: {
    backgroundColor: '#2494fb',
    borderRadius: 10,
    marginTop: 10,
    width: '50%',
    shadowColor: '#000000',
  },
});

To navigate to a particular screen we need to import the Navigation component from react-native-navigation. When we click either the Login button or the SignUp button, the event is handled in the goToScreen method.

We pass the name of the screen as an argument to the method, and it must be the same as the name used to register the screen with Navigation.registerComponent() in the index.js file.

In the goToScreen method we use this.props.componentId, which is the id of the current screen and is made available automatically by react-native-navigation. Inside the Navigation.push call we assign the name of the screen to navigate to in the 'component' object; this is the argument we pass to the method when we press either the Login or SignUp button.

Then run the project using Xcode. The Welcome screen appears in a while. If we click the Login button, the app navigates to the Login page, and it navigates to the SignUp page if we click the SignUp button. React Native Navigation also adds a back button at the top, which takes us back to the screen we navigated from.
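If you ever need to go back programmatically instead of using that back button, react-native-navigation also exposes Navigation.pop. A minimal sketch (this hypothetical component is not part of the screens built above):

import React, {Component} from 'react';
import {View, Button} from 'react-native';
import {Navigation} from 'react-native-navigation';

// Hypothetical example: a button that pops the current screen off the
// navigation stack and returns to the previous screen.
class BackExample extends Component {
  goBack = () => {
    Navigation.pop(this.props.componentId);
  };
  render() {
    return (
      <View>
        <Button title="Back" onPress={this.goBack} />
      </View>
    );
  }
}

export default BackExample;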

The Login and SignUp screens would look like below.

I hope, the article was helpful in understanding React Native Navigation. Thanks for the read.

This story is authored by Dheeraj Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development.

Face Recognition App In React Native using AWS Rekognition


In this blog we are going to build an app for registering faces and verifying faces using Amazon Rekognition in React Native.

Installing dependencies:

Let’s go to React Native Docs, select React Native CLI Quickstart and select our appropriate Development OS and the Target OS as Android, as we are going to build an android application.

Follow the docs for installing dependencies, after installing create a new React Native Application. Use the command line interface to generate a new React Native project called FaceRegister.

react-native init FaceRegister

Preparing the Android device:

We shall need an Android device to run our React Native Android app. This can be either a physical Android device, or more commonly, we can use an Android Virtual Device (AVD) which allows us to emulate an Android device on our computer (using Android Studio).

Either way, we shall need to prepare the device to run Android apps for development.
If you have a physical Android device, you can use it for development in place of an AVD by connecting it to your computer using a USB cable and following the instructions here.

If you are using a virtual device, follow this link. I shall be using a physical Android device.
Now go to the command line and run react-native run-android inside your React Native app directory:

cd FaceRegister
react-native run-android

If everything is set up correctly, you should see your new app running in your physical device or Android emulator.

In your system, you should see a folder named FaceRegister created. Now open the FaceRegister folder with your favorite code editor and create a file called Register.js. We need an input box for the username or id to refer to the image, a placeholder to preview the captured image, and a submit button to register.

Open your Register.js file and copy the below code:

import React from 'react';
import { StyleSheet, View, Text, TextInput, Image, ScrollView, TouchableHighlight } from 'react-native';

class LoginScreen extends React.Component {
    constructor(props){
       super(props);
       this.state =  {
           username : '',
           capturedImage : ''
       };
   }

  
   render() {
       return (
           <View style={styles.MainContainer}>
               <ScrollView>
                   <Text style= {{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Register Face</Text>
              
                   <TextInput
                       placeholder="Enter Username"
                       onChangeText={UserName => this.setState({username: UserName})}
                       underlineColorAndroid='transparent'
                       style={styles.TextInputStyleClass}
                   />
                   {this.state.capturedImage !== "" && <View style={styles.imageholder} >
                       <Image source={{uri : this.state.capturedImage}} style={styles.previewImage} />
                   </View>}
                  

                   <TouchableHighlight style={[styles.buttonContainer, styles.captureButton]}>
                       <Text style={styles.buttonText}>Capture Image</Text>
                   </TouchableHighlight>

                   <TouchableHighlight style={[styles.buttonContainer, styles.submitButton]}>
                       <Text style={styles.buttonText}>Submit</Text>
                   </TouchableHighlight>
               </ScrollView>
           </View>
       );
   }
}

const styles = StyleSheet.create({
   MainContainer: {
       marginTop: 60
   },
   TextInputStyleClass: {
     textAlign: 'center',
     marginBottom: 7,
     height: 40,
     borderWidth: 1,
     margin: 10,
     borderColor: '#D0D0D0',
     borderRadius: 5 ,
   },
   inputContainer: {
     borderBottomColor: '#F5FCFF',
     backgroundColor: '#FFFFFF',
     borderRadius:30,
     borderBottomWidth: 1,
     width:300,
     height:45,
     marginBottom:20,
     flexDirection: 'row',
     alignItems:'center'
   },
   buttonContainer: {
     height:45,
     flexDirection: 'row',
     alignItems: 'center',
     justifyContent: 'center',
     marginBottom:20,
     width:"80%",
     borderRadius:30,
     marginTop: 20,
     marginLeft: 5,
   },
   captureButton: {
     backgroundColor: "#337ab7",
     width: 350,
   },
   buttonText: {
     color: 'white',
     fontWeight: 'bold',
   },
   horizontal: {
     flexDirection: 'row',
     justifyContent: 'space-around',
     padding: 10
   },
   submitButton: {
     backgroundColor: "#C0C0C0",
     width: 350,
     marginTop: 5,
   },
   imageholder: {
     borderWidth: 1,
     borderColor: "grey",
     backgroundColor: "#eee",
     width: "50%",
     height: 150,
     marginTop: 10,
     marginLeft: 90,
     flexDirection: 'row',
     alignItems:'center'
   },
   previewImage: {
     width: "100%",
     height: "100%",
   }
});

export default LoginScreen;

Now import your Register file in your App.js file which is located in your project root folder. Open your App.js file and replace it with the below code:

import React, {Component} from 'react';
import {View} from 'react-native';
import LoginScreen from './Register';

class App extends Component {
   render() {
       return (
       <View>
           <LoginScreen />
       </View>
       );
   }
}

export default App;

Now run your app again. Run below command in the project directory:

react-native run-android

You can see a text input for the username and two buttons, one (Capture Image) for capturing an image and another (Submit) for submitting the details, as shown below:

Let's add the functionality to preview the captured image. We have a package called react-native-image-picker that enables us to capture a picture with the device's camera or to upload an image from the gallery. Go to the command line and, in the project directory, run the below command to install the react-native-image-picker library:

yarn add react-native-image-picker || npm install --save react-native-image-picker

react-native link react-native-image-picker

Add the required permissions in the AndroidManifest.xml file which is located at android/app/src/main/:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

For more information about this package follow this link.
Now add the below code in your Register.js file.

import React from 'react';
...
...
import ImagePicker from "react-native-image-picker"; //import this

class LoginScreen extends React.Component {
    constructor(props){
      ...
   }

//Add the below method...

   captureImageButtonHandler = () => {
       ImagePicker.showImagePicker({title: "Pick an Image", maxWidth: 800, maxHeight: 600}, (response) => {
           console.log('Response = ', response);
           // alert(response)
           if (response.didCancel) {
               console.log('User cancelled image picker');
           } else if (response.error) {
               console.log('ImagePicker Error: ', response.error);
           } else if (response.customButton) {
               console.log('User tapped custom button: ', response.customButton);
           } else {
               // You can also display the image using data:
               const source = { uri: 'data:image/jpeg;base64,' + response.data };
          
               this.setState({capturedImage: response.uri, base64String: source.uri });
           }
       });
   }
  
   render() {
       return (
           <View style={styles.MainContainer}>
               ...
               ...
             // Add onPress property to capture image button //
           
               <TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={this.captureImageButtonHandler}>
                        <Text style={styles.buttonText}>Capture Image</Text>
               </TouchableHighlight>
              ...
              ...   
           </View>
       );
   }
}

const styles = StyleSheet.create({
...
...
...

});

export default LoginScreen;

Add the captureImageButtonHandler() method in the file and add the onPress property to the Capture Image button to call this method. After updating the code, reload your app. Now you can access your camera and gallery by clicking on the Capture Image button. Once you capture an image, you can see the preview of that image on your screen as below:

Now we need to register the captured image by storing it in the S3 bucket.

I have created an API endpoint in API Gateway from the AWS console which invokes a lambda function (register-face). All I have to do is send a POST request to the API endpoint URL from the client-side.
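Stripped of the Amplify setup shown later, that POST request is conceptually just the sketch below (the endpoint URL and path are placeholders; the body fields match what the register-face lambda parses, though the headers used in the actual app differ):

// Minimal sketch of the client-side POST; "<your-endpoint-url>/<your-path>"
// is a placeholder for the API Gateway invoke URL created below.
const registerFace = (base64Image, username) => {
    return fetch("<your-endpoint-url>/<your-path>", {
        method: "POST",
        headers: {
            "Accept": "application/json",
            "Content-Type": "application/json"
        },
        body: JSON.stringify({
            Image: base64Image,   // base64 data URI of the captured face
            name: username        // used as the S3 key suffix and ExternalImageId
        })
    }).then(response => response.json());
};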

Creating API endpoint:

View the Working with API Gateway section in the following post:
https://medium.com/zenofai/serverless-web-application-architecture-using-react-with-amplify-part1-5b4d89f384f7

In the below image I created two resources one for adding faces and another for searching face:

This is the lambda function called register-face that is invoked when we click on the Submit button.

const AWS = require('aws-sdk')
var rekognition = new AWS.Rekognition()
var s3Bucket = new AWS.S3( { params: {Bucket: "<bucket-name>"} } );
var fs = require('fs');

exports.handler = (event, context, callback) => {
    let parsedData = JSON.parse(event)
    let encodedImage = parsedData.Image;
    var filePath = "registered/" + parsedData.name;
    console.log(filePath)
    let decodedImage = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""), 'base64')
    var data = {
        Key: filePath, 
        Body: decodedImage,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            console.log('Error uploading data: ', data);
            callback(err, null);
        } else {
            console.log('succesfully uploaded the image!');
            callback(null, data);
        }
    });
};

In the above code, I am storing the image in the registered folder (prefix) in the S3 bucket.

Just uploading faces to the S3 bucket is not enough; we also need to create a collection in an AWS region to store the registered faces from the S3 bucket, because we need a verification (recognition) step that checks whether a face is registered or not. For that, we shall be using Amazon Rekognition to search faces in the collection. Amazon Rekognition provides an operation called SearchFacesByImage which searches for a face in the collection. Go through Searching Faces in a Collection to know more.
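The collection itself has to exist before indexFaces can add faces to it. If you haven't created one yet, it can be created once with the AWS CLI (the collection id is a placeholder and must match the one used in the lambda):

aws rekognition create-collection --collection-id "<collection-id>"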
Add the below code to the register-face lambda function.

var params ={
        CollectionId: "<collection-id>", 
        DetectionAttributes: [], 
        ExternalImageId: parsedData.name, 
        Image: {
            S3Object: {
                Bucket: "<bucket-name>", 
                Name: filePath
            }
        }
    }
    setTimeout(function () {
        rekognition.indexFaces(params, function(err, data) {
            if (err){
                console.log(err, err.stack); // an error occurred
                callback(err)
            }
            else{
                console.log(data); // successful response
                callback(null,data);
            }
        });
    }, 3000);

So, the final lambda function looks as below:

const AWS = require('aws-sdk')
var rekognition = new AWS.Rekognition()
var s3Bucket = new AWS.S3( { params: {Bucket: "<bucket-name>"} } );
var fs = require('fs');

exports.handler = (event, context, callback) => {
    console.log(event);
    console.log(typeof event);
    console.log(JSON.parse(event));
    let parsedData = JSON.parse(event)
    let encodedImage = parsedData.Image;
    var filePath = "registered/" + parsedData.name;
    console.log(filePath)
    let buf = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""), 'base64')
    var data = {
        Key: filePath, 
        Body: buf,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            console.log('Error uploading data: ', data);
            callback(err, null);
        } else {
            console.log('succesfully uploaded the image!');
            // callback(null, data);
        }
    });
    var params ={
        CollectionId: "face-collection", 
        DetectionAttributes: [], 
        ExternalImageId: parsedData.name, 
        Image: {
            S3Object: {
                Bucket: "face-recognise-test", 
                Name: filePath
            }
        }
    }
    setTimeout(function () {
        rekognition.indexFaces(params, function(err, data) {
            if (err){
                console.log(err, err.stack); // an error occurred
                callback(err)
            }
            else{
                console.log(data);           // successful response
                callback(null,data);
            }
        });
    }, 3000);
};

In this lambda function, we first store the image (face) in the S3 bucket and then add the same face to the collection from the S3 bucket.

Let’s get back to our client-side and install the aws-amplify library in our project root directory from the command line with below commands:

npm install --save aws-amplify
npm install --save aws-amplify-react-native
(or)
yarn add aws-amplify
yarn add aws-amplify-react-native

Now add the below code in Register.js file:

import React from 'react';
...
...
import Amplify, {API} from "aws-amplify";
Amplify.configure({
   API: {
       endpoints: [
           {
               name: "<API-name>",
               endpoint: "<your endpoint url>"
           }
       ]
   }
});

class Registration extends React.Component {
    constructor(props){
      ...
      ...
    }
   submitButtonHandler = () => {
       if (this.state.username == '' || this.state.username == undefined || this.state.username == null) {
           alert("Please Enter the Username");
       } else if(this.state.capturedImage == '' || this.state.capturedImage == undefined || this.state.capturedImage == null) {
           alert("Please Capture the Image");
       } else {
           const apiName = "<API-name>";
           const path = "<your path>";
           const init = {
               headers : {
                   'Accept': 'application/json',
                   "X-Amz-Target": "RekognitionService.IndexFaces",
                   "Content-Type": "application/x-amz-json-1.1"
               },
               body : JSON.stringify({
                   Image: this.state.base64String,
                   name: this.state.username
               })
           }
          
           API.post(apiName, path, init).then(response => {
               alert(JSON.stringify(response))
           });
       }
   }
   
   render() {
       if(this.state.image!=="") {
           // alert(this.state.image)
       }
       return (
           <View style={styles.MainContainer}>
               <ScrollView>
...
...

                   <TouchableHighlight style={[styles.buttonContainer, styles.signupButton]} onPress={this.submitButtonHandler}>
                       <Text style={styles.signupText}>Submit</Text>
                   </TouchableHighlight>
...
...   
            </ScrollView>
           </View>
       );
   }
}

In the above code, we added the API Gateway configuration using Amplify and created a method called submitButtonHandler(), where we make a POST request to the lambda function to register the face when the user clicks on the Submit button. So, we have added the onPress property to the Submit button, which calls submitButtonHandler().

Here is the complete code for Register.js file:

import React, {Component} from 'react';
import { StyleSheet, View, Text, TextInput, Image, ScrollView, TouchableHighlight } from 'react-native';
import ImagePicker from "react-native-image-picker";
import Amplify, {API} from "aws-amplify";
Amplify.configure({
    API: {
        endpoints: [
            {
                name: "<api-name>",
                Endpoint: "<your endpoint url>"
            }
        ]
    }
});

class Registration extends Component {
  
    constructor(props){
        super(props);
        this.state =  {
            username : '',
            capturedImage : ''
        };
        // this.submitButtonHandler = this.submitButtonHandler.bind(this);
    }

    captureImageButtonHandler = () => {
        ImagePicker.showImagePicker({title: "Pick an Image", maxWidth: 800, maxHeight: 600}, (response) => {
            console.log('Response = ', response);
            // alert(response)
            if (response.didCancel) {
                console.log('User cancelled image picker');
            } else if (response.error) {
                console.log('ImagePicker Error: ', response.error);
            } else if (response.customButton) {
                console.log('User tapped custom button: ', response.customButton);
            } else {
                // You can also display the image using data:
                const source = { uri: 'data:image/jpeg;base64,' + response.data };
            
                this.setState({capturedImage: response.uri, base64String: source.uri });
            }
        });
    }

    submitButtonHandler = () => {
        if (this.state.username == '' || this.state.username == undefined || this.state.username == null) {
            alert("Please Enter the Username");
        } else if(this.state.capturedImage == '' || this.state.capturedImage == undefined || this.state.capturedImage == null) {
            alert("Please Capture the Image");
        } else {
            const apiName = "<api-name>";
            const path = "<your path>";
            const init = {
                headers : {
                    'Accept': 'application/json',
                    "X-Amz-Target": "RekognitionService.IndexFaces",
                    "Content-Type": "application/x-amz-json-1.1"
                },
                body : JSON.stringify({ 
                    Image: this.state.base64String,
                    name: this.state.username
                })
            }
            
            API.post(apiName, path, init).then(response => {
                alert(response);
            });
        }
    }

    render() {
        if(this.state.image!=="") {
            // alert(this.state.image)
        }
        return (
            <View style={styles.MainContainer}>
                <ScrollView>
                    <Text style= {{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Register Face</Text>
                
                    <TextInput
                        placeholder="Enter Username"
                        onChangeText={UserName => this.setState({username: UserName})}
                        underlineColorAndroid='transparent'
                        style={styles.TextInputStyleClass}
                    />


                    {this.state.capturedImage !== "" && <View style={styles.imageholder} >
                        <Image source={{uri : this.state.capturedImage}} style={styles.previewImage} />
                    </View>}

                    <TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={this.captureImageButtonHandler}>
                        <Text style={styles.buttonText}>Capture Image</Text>
                    </TouchableHighlight>

                    <TouchableHighlight style={[styles.buttonContainer, styles.submitButton]} onPress={this.submitButtonHandler}>
                        <Text style={styles.buttonText}>Submit</Text>
                    </TouchableHighlight>
                </ScrollView>
            </View>
        );
    }
}

const styles = StyleSheet.create({
    TextInputStyleClass: {
      textAlign: 'center',
      marginBottom: 7,
      height: 40,
      borderWidth: 1,
      margin: 10,
      borderColor: '#D0D0D0',
      borderRadius: 5 ,
    },
    inputContainer: {
      borderBottomColor: '#F5FCFF',
      backgroundColor: '#FFFFFF',
      borderRadius:30,
      borderBottomWidth: 1,
      width:300,
      height:45,
      marginBottom:20,
      flexDirection: 'row',
      alignItems:'center'
    },
    buttonContainer: {
      height:45,
      flexDirection: 'row',
      alignItems: 'center',
      justifyContent: 'center',
      marginBottom:20,
      width:"80%",
      borderRadius:30,
      marginTop: 20,
      marginLeft: 5,
    },
    captureButton: {
      backgroundColor: "#337ab7",
      width: 350,
    },
    buttonText: {
      color: 'white',
      fontWeight: 'bold',
    },
    horizontal: {
      flexDirection: 'row',
      justifyContent: 'space-around',
      padding: 10
    },
    submitButton: {
      backgroundColor: "#C0C0C0",
      width: 350,
      marginTop: 5,
    },
    imageholder: {
      borderWidth: 1,
      borderColor: "grey",
      backgroundColor: "#eee",
      width: "50%",
      height: 150,
      marginTop: 10,
      marginLeft: 90,
      flexDirection: 'row',
      alignItems:'center'
    },
    previewImage: {
      width: "100%",
      height: "100%",
    }
});

export default Registration;

Now reload your application and register the image(face).

After registering successfully you will receive an alert message as below:

Now go to your S3 bucket and check if the image is stored as below:
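If you prefer the command line, you can also list the uploaded objects under the registered/ prefix (assuming the same placeholder bucket name used in the lambda):

aws s3 ls s3://<bucket-name>/registered/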

Also check your collection using the below command from your command line:

aws rekognition list-faces --collection-id "<your collection id>"

You will get JSON output with the list of registered faces. So, the registration process is working successfully. Now we need to add the verification (search face) step to our application. I created another lambda function (searchFace) for face verification. Here is the code for the face verification lambda function:

const AWS = require('aws-sdk')
var rekognition = new AWS.Rekognition()
var s3Bucket = new AWS.S3( { params: {Bucket: "<bucket-name>"} } );
var fs = require('fs');

exports.handler = (event, context, callback) => {
    let parsedData = JSON.parse(event)
    let encodedImage = parsedData.Image;
    var filePath = parsedData.name + ".jpg";
    console.log(filePath)
    let decodedImage = Buffer.from(encodedImage.replace(/^data:image\/\w+;base64,/, ""), 'base64')
    var data = {
        Key: filePath, 
        Body: decodedImage,
        ContentEncoding: 'base64',
        ContentType: 'image/jpeg'
    };
    s3Bucket.putObject(data, function(err, data){
        if (err) { 
            console.log('Error uploading data: ', data);
            callback(err);
        } else {
            console.log('succesfully uploaded the image!');
            // callback(null, data);
        }
    });
    var params2 ={
        CollectionId: "<collectio-id>", 
        FaceMatchThreshold: 85, 
        Image: {
            S3Object: {
                Bucket: "<bucket-name>", 
                Name: filePath
            }
        }, 
        MaxFaces: 5
    }
    setTimeout(function () {
        rekognition.searchFacesByImage(params2, function(err, data) {
            if (err){
                console.log(err, err.stack); // an error occurred
                callback(err)
            }
            else{
                console.log(data);           // successful response
                callback(null,data);
            }
        });
    }, 2000);
};

In the above lambda function, we are using SearchFacesByImage, which searches for the face in the collection; the response will be a JSON object. Now create a new file called Verification.js in your project root directory and copy the below code into it:

import React, {Component} from 'react';
import { StyleSheet, View, Text, TextInput, Image, ScrollView, TouchableHighlight } from 'react-native';
import ImagePicker from "react-native-image-picker";
import Amplify, {API} from "aws-amplify";
Amplify.configure({
   API: {
       endpoints: [
           {
               name: "<API-name>",
               endpoint: "<your endpoint url>"
           }
       ]
   }
});

class Verification extends Component {
    constructor(props){
       super(props);
       this.state =  {
           username: '',
           capturedImage : ''
       };
   }

   captureImageButtonHandler = () => {
       ImagePicker.showImagePicker({title: "Pick an Image", maxWidth: 800, maxHeight: 600}, (response) => {
           console.log('Response = ', response);
           // alert(response)
           if (response.didCancel) {
               console.log('User cancelled image picker');
           } else if (response.error) {
               console.log('ImagePicker Error: ', response.error);
           } else if (response.customButton) {
               console.log('User tapped custom button: ', response.customButton);
           } else {
               // You can also display the image using data:
               const source = { uri: 'data:image/jpeg;base64,' + response.data };
          
               this.setState({capturedImage: response.uri, base64String: source.uri });
           }
       });
   }

   verification = () => {
       if(this.state.capturedImage == '' || this.state.capturedImage == undefined || this.state.capturedImage == null) {
           alert("Please Capture the Image");
       } else {
           const apiName = "<api-name>";
           const path = "<your path>";
          
           const init = {
               headers : {
                   'Accept': 'application/json',
                   "X-Amz-Target": "RekognitionService.SearchFacesByImage",
                   "Content-Type": "application/x-amz-json-1.1"
               },
               body : JSON.stringify({
                   Image: this.state.base64String,
                   name: this.state.username
               })
           }
          
           API.post(apiName, path, init).then(response => {
               if (response.FaceMatches && response.FaceMatches.length > 0) {
                   alert(response.FaceMatches[0].Face.ExternalImageId)
               } else {
                   alert("No matches found.")
               }
           });
       }
   }

  
  
    render() {
       if(this.state.image!=="") {
           // alert(this.state.image)
       }
       return (
           <View style={styles.MainContainer}>
               <ScrollView>
                   <Text style= {{ fontSize: 20, color: "#000", textAlign: 'center', marginBottom: 15, marginTop: 10 }}>Verify Face</Text>
              
                   {this.state.capturedImage !== "" && <View style={styles.imageholder} >
                       <Image source={{uri : this.state.capturedImage}} style={styles.previewImage} />
                   </View>}

                   <TouchableHighlight style={[styles.buttonContainer, styles.captureButton]} onPress={this.captureImageButtonHandler}>
                       <Text style={styles.buttonText}>Capture Image</Text>
                   </TouchableHighlight>

                   <TouchableHighlight style={[styles.buttonContainer, styles.verifyButton]} onPress={this.verification}>
                       <Text style={styles.buttonText}>Verify</Text>
                   </TouchableHighlight>
               </ScrollView>
           </View>
       );
   }
}

const styles = StyleSheet.create({
   container: {
     flex: 1,
     backgroundColor: 'white',
     alignItems: 'center',
     justifyContent: 'center',
   },
   buttonContainer: {
     height:45,
     flexDirection: 'row',
     alignItems: 'center',
     justifyContent: 'center',
     marginBottom:20,
     width:"80%",
     borderRadius:30,
     marginTop: 20,
     marginLeft: 5,
   },
   captureButton: {
     backgroundColor: "#337ab7",
     width: 350,
   },
   buttonText: {
     color: 'white',
     fontWeight: 'bold',
   },
   verifyButton: {
     backgroundColor: "#C0C0C0",
     width: 350,
     marginTop: 5,
   },
   imageholder: {
     borderWidth: 1,
     borderColor: "grey",
     backgroundColor: "#eee",
     width: "50%",
     height: 150,
     marginTop: 10,
     marginLeft: 90,
     flexDirection: 'row',
     alignItems:'center'
   },
   previewImage: {
     width: "100%",
     height: "100%",
   }
});

export default Verification;

In the above code there are two buttons: one (Capture Image) for capturing the face that needs to be verified and another (Verify) for checking whether the captured face is registered or not. When a user clicks the Verify button, the verification() method is called, in which we make a POST request (invoking the searchFace lambda function via API Gateway).

Now we have two screens, one for Registration and another for Verification. Let's add navigation between the two screens using react-navigation. The first step is to install react-navigation in your project:

npm install --save react-navigation

The second step is to install react-native-gesture-handler:

yarn add react-native-gesture-handler
# or with npm
# npm install --save react-native-gesture-handler

Now we need to link our react-native with react-native-gesture-handler:

react-native link react-native-gesture-handler

After that go back to your App.js file and replace it with the below code:

import React, {Component} from 'react';
import {View, Text, TouchableHighlight, StyleSheet} from 'react-native';
import Registration from './Register';
import {createStackNavigator, createAppContainer} from 'react-navigation';
import Verification from './Verification';

class HomeScreen extends React.Component {
   render() {
       return (
           <View style={{ flex: 1, alignItems: "center" }}>
               <Text style= {{ fontSize: 30, color: "#000", marginBottom: 50, marginTop: 100 }}>Register Face ID</Text>
               <TouchableHighlight style={[styles.buttonContainer, styles.button]} onPress={() => this.props.navigation.navigate('Registration')}>
                   <Text style={styles.buttonText}>Registration</Text>
               </TouchableHighlight>
               <TouchableHighlight style={[styles.buttonContainer, styles.button]} onPress={() => this.props.navigation.navigate('Verification')}>
                   <Text style={styles.buttonText}>Verification</Text>
               </TouchableHighlight>
           </View>
       );
   }
}

const MainNavigator = createStackNavigator(
   {
       Home: {screen: HomeScreen},
       Registration: {screen: Registration},
       Verification: {screen: Verification}
   },
   {
       initialRouteName: 'Home',
   }
);

const AppContainer = createAppContainer(MainNavigator);

export default class App extends Component {
   render() {
       return <AppContainer />;
   }
}

const styles = StyleSheet.create({
   buttonContainer: {
       height:45,
       flexDirection: 'row',
       alignItems: 'center',
       justifyContent: 'center',
       marginBottom:20,
       width:"80%",
       borderRadius:30,
       marginTop: 20,
       marginLeft: 5,
   },
   button: {
       backgroundColor: "#337ab7",
       width: 350,
       marginTop: 5,
   },
   buttonText: {
       color: 'white',
       fontWeight: 'bold',
   },
})

Now reload your app and you should see your home screen as below:

When the user clicks the Registration button, it navigates to the Registration screen as below:

When the user clicks the Verification button, it navigates to the Verification screen as below:

Now let’s check the verification process.

Step 1: Navigate to Verification screen.

Step 2: Capture the registered image(face).

Step 3: Click on the verify button.

If everything is fine then you will receive an alert message with the face name as below:

If there are no matches for the captured face, the user receives an alert message saying "No matches found".

Thanks for the read, I hope it was useful.

This story is authored by Venu Vaka. Venu is a software engineer and machine learning enthusiast.

Create a Language Translation Mobile App using React Native and Google APIs

In this blog, we are going to learn how to create a simple React Native based Language Translation Android app with Speech to Text and Text to Speech capabilities powered by Google APIs.

Installing dependencies:

Go to React Native Docs, select React Native CLI Quickstart and select your Development OS and Target OS -> Android, as we are going to build an Android application.

Follow the docs for installing dependencies and create a new React Native Application. Use the command line interface to generate a new React Native project called “Translator“:

react-native init Translator

You should see a folder named Translator created. Now open the Translator folder with your favourite code editor and create a file called Translator.js. We need an input box for the text that needs to be translated and an output section to display the translated text. We also need a select box that lists the different languages to choose from for translation. Let's create a JSON file and call it languages.json.

Go to the languages.json file and copy the code below:

{
   "auto": "Auto Detect",
   "af": "Afrikaans",
   "sq": "Albanian",
   "am": "Amharic",
   "ar": "Arabic",
   "hy": "Armenian",
   "az": "Azerbaijani",
   "eu": "Basque",
   "be": "Belarusian",
   "bn": "Bengali",
   "bs": "Bosnian",
   "bg": "Bulgarian",
   "ca": "Catalan",
   "ceb": "Cebuano",
   "ny": "Chichewa",
   "zh-cn": "Chinese Simplified",
   "zh-tw": "Chinese Traditional",
   "co": "Corsican",
   "hr": "Croatian",
   "cs": "Czech",
   "da": "Danish",
   "nl": "Dutch",
   "en": "English",
   "eo": "Esperanto",
   "et": "Estonian",
   "tl": "Filipino",
   "fi": "Finnish",
   "fr": "French",
   "fy": "Frisian",
   "gl": "Galician",
   "ka": "Georgian",
   "de": "German",
   "el": "Greek",
   "gu": "Gujarati",
   "ht": "Haitian Creole",
   "ha": "Hausa",
   "haw": "Hawaiian",
   "iw": "Hebrew",
   "hi": "Hindi",
   "hmn": "Hmong",
   "hu": "Hungarian",
   "is": "Icelandic",
   "ig": "Igbo",
   "id": "Indonesian",
   "ga": "Irish",
   "it": "Italian",
   "ja": "Japanese",
   "jw": "Javanese",
   "kn": "Kannada",
   "kk": "Kazakh",
   "km": "Khmer",
   "ko": "Korean",
   "ku": "Kurdish (Kurmanji)",
   "ky": "Kyrgyz",
   "lo": "Lao",
   "la": "Latin",
   "lv": "Latvian",
   "lt": "Lithuanian",
   "lb": "Luxembourgish",
   "mk": "Macedonian",
   "mg": "Malagasy",
   "ms": "Malay",
   "ml": "Malayalam",
   "mt": "Maltese",
   "mi": "Maori",
   "mr": "Marathi",
   "mn": "Mongolian",
   "my": "Myanmar (Burmese)",
   "ne": "Nepali",
   "no": "Norwegian",
   "ps": "Pashto",
   "fa": "Persian",
   "pl": "Polish",
   "pt": "Portuguese",
   "ma": "Punjabi",
   "ro": "Romanian",
   "ru": "Russian",
   "sm": "Samoan",
   "gd": "Scots Gaelic",
   "sr": "Serbian",
   "st": "Sesotho",
   "sn": "Shona",
   "sd": "Sindhi",
   "si": "Sinhala",
   "sk": "Slovak",
   "sl": "Slovenian",
   "so": "Somali",
   "es": "Spanish",
   "su": "Sundanese",
   "sw": "Swahili",
   "sv": "Swedish",
   "tg": "Tajik",
   "ta": "Tamil",
   "te": "Telugu",
   "th": "Thai",
   "tr": "Turkish",
   "uk": "Ukrainian",
   "ur": "Urdu",
   "uz": "Uzbek",
   "vi": "Vietnamese",
   "cy": "Welsh",
   "xh": "Xhosa",
   "yi": "Yiddish",
   "yo": "Yoruba",
   "zu": "Zulu"
}

Now modify the Translator.js file with the code below:

import React, { Component } from 'react';
import { View, TextInput, StyleSheet, TouchableOpacity, TouchableHighlight, Text, Picker, Image } from 'react-native';
import Languages from './languages.json';

export default class Translator extends Component {

   constructor(props) {
       super(props);
       this.state = {
           languageFrom: "",
           languageTo: "",
           languageCode: 'en',
           inputText: "",
           outputText: "",
           submit: false,
       };
   }

   render() {
       return (
           <View style = {styles.container}>
               <View style={styles.input}>
                   <TextInput
                       style={{flex:1, height: 80}}
                       placeholder="Enter Text"
                       underlineColorAndroid="transparent"
                       onChangeText = {inputText => this.setState({inputText})}
                       value={this.state.inputText}
                   />
               </View>

               <Picker
               selectedValue={this.state.languageTo}
               onValueChange={ lang => this.setState({languageTo: lang, languageCode: lang})}
               >
                   {Object.keys(Languages).map(key => (
                        <Picker.Item key={key} label={Languages[key]} value={key} />
                   ))}
               </Picker>

               <View style = {styles.output}>
                  {/* output text displays here.. */}
               </View>
               <TouchableOpacity
                   style = {styles.submitButton}
                   onPress = {this.handleTranslate}
               >
                   <Text style = {styles.submitButtonText}> Submit </Text>
               </TouchableOpacity>
           </View>
       )
   }
}

const styles = StyleSheet.create({
   container: {
       paddingTop: 53
   },
   input: {
       flexDirection: 'row',
       justifyContent: 'center',
       alignItems: 'center',
       backgroundColor: '#fff',
       borderWidth: .5,
       borderColor: '#000',
       // height: 40,
       borderRadius: 5 ,
       margin: 10
   },
   output: {
       flexDirection: 'row',
       justifyContent: 'center',
       alignItems: 'center',
       backgroundColor: '#fff',
       borderWidth: .5,
       borderColor: '#000',
       borderRadius: 5 ,
       margin: 10,
       height: 80,
   },
   submitButton: {
       backgroundColor: '#7a42f4',
       padding: 10,
       margin: 15,
       borderRadius: 5 ,
       height: 40,
   },
   submitButtonText:{
       color: 'white'
   },
})

Now import Translator.js into your App.js file.
Replace your App.js file with the below code:

import React, {Component} from 'react';
import {View} from 'react-native';
import Translator from './Translator';

export default class App extends Component {
   render() {
       return (
       <View>
           <Translator />
       </View>
       );
   }
}

Preparing the Android device

You will need an Android device to run your React Native Android app. This can be either a physical Android device, or more commonly, you can use an Android Virtual Device (AVD) which allows you to emulate an Android device on your computer (using Android Studio).

Either way, you will need to prepare the device to run Android apps for development.

Using a physical device

If you have a physical Android device, you can use it for development in place of an AVD by connecting it to your computer using a USB cable and following the instructions here.

If you are using a virtual device, follow this link.

Now go to command line and run react-native run-android inside your React Native app directory:

cd Translator
react-native run-android

If everything is set up correctly, you should see your new app running in your physical device or Android emulator shortly as below.

That’s great. We got the basic UI for our Translator app. Now we need to translate the input text into the selected language on submit. In React Native we have a library called react-native-power-translator for translating the text.

Let’s install the react-native-power-translator library. Go to the project root directory in command line and run the below command:

npm i react-native-power-translator --save

Usage:

import { PowerTranslator, ProviderTypes, TranslatorConfiguration, TranslatorFactory } from 'react-native-power-translator';

//Example
TranslatorConfiguration.setConfig('Provider_Type', 'Your_API_Key','Target_Language', 'Source_Language');

//Fill with your own details
TranslatorConfiguration.setConfig(ProviderTypes.Google, 'xxxx','fr');
  • PowerTranslator: a simple component to translate your texts.
  • ProviderTypes: type of cloud provider you want to use. There are two providers you can specify. ProviderTypes.Google for Google translate and ProviderTypes.Microsoft for Microsoft translator text cloud service.
  • TranslatorFactory: It returns a suitable translator instance, based on your configuration (see the sketch after this list).
  • TranslatorConfiguration: It is a singleton class that keeps the translator configuration.
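For instance, the factory can be used imperatively outside of the PowerTranslator component, which is how we will trigger translation on submit later. A minimal sketch (the API key is a placeholder):

import { TranslatorFactory, TranslatorConfiguration, ProviderTypes } from 'react-native-power-translator';

// Configure once with a placeholder API key and a target language.
TranslatorConfiguration.setConfig(ProviderTypes.Google, '<your-api-key>', 'fr');

// createTranslator() returns a translator based on that configuration;
// translate() resolves with the translated text.
const translator = TranslatorFactory.createTranslator();
translator.translate('Hello world').then(translated => {
    console.log(translated);
});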

Now add the following code in your Translator.js file. In the code below I'm using the Google provider; you can use either the Google or the Microsoft provider.

import React, { Component } from 'react';
...
...
import { PowerTranslator, ProviderTypes, TranslatorConfiguration, TranslatorFactory } from 'react-native-power-translator';

export default class Translator extends Component {
...
...
render() {
       TranslatorConfiguration.setConfig(ProviderTypes.Google, 'XXXX', this.state.languageCode);
       return (
             ...
             ...
             ...
             <View style = {styles.output}>
                  {/* output text displays here.. */}
              {this.state.submit && <PowerTranslator  text={this.state.inputText} />}
              </View>

             ...
...
    
}
}

Save all the files and run your app again; you should see a working app that translates text from one language to another. In the below image you can see text translated from English to French.

On Android devices you can download different language keyboards, so that you can translate from your local language into other languages.

For speech to text we have a library called react-native-android-voice. Let’s install this library in our project.
Go to the command line, navigate to the project root directory, and run the below command:

npm install --save react-native-android-voice

After installing it successfully, please follow the steps in this link to link the library to your Android project.

Once you have completed linking the library to your Android project, let's start implementing it in our Translator.js file.

Let's add a mic icon to our input box; when the user taps the mic icon, the speech feature will be enabled. For the icon we use a library called react-native-vector-icons. For installation, follow the steps in this link.

In this project I'm using Ionicons icons; you can change this via iconFontNames in your android/app/build.gradle file as:

project.ext.vectoricons = [
   iconFontNames: [ 'Ionicons.ttf' ] // Name of the font files you want to copy
]

Now add the following code in Translator.js file.

import React, { Component } from 'react';
...
...
import Icon from "react-native-vector-icons/Ionicons";
import SpeechAndroid from 'react-native-android-voice';

export default class Translator extends Component {
constructor(props) {
      super(props);
      this.state = {
          languageFrom: "",
          ....
          ....
          micOn: false, //Add this
      };
      this._buttonClick = this._buttonClick.bind(this); //Add this
  }

...
async _buttonClick(){
       await this.setState({micOn: true})
       try{
           var spokenText = await SpeechAndroid.startSpeech("", SpeechAndroid.ENGLISH);
           await this.setState({inputText: spokenText});
           await ToastAndroid.show(spokenText , ToastAndroid.LONG);
       }catch(error){
           switch(error){
               case SpeechAndroid.E_VOICE_CANCELLED:
                   ToastAndroid.show("Voice Recognizer cancelled" , ToastAndroid.LONG);
                   break;
               case SpeechAndroid.E_NO_MATCH:
                   ToastAndroid.show("No match for what you said" , ToastAndroid.LONG);
                   break;
               case SpeechAndroid.E_SERVER_ERROR:
                   ToastAndroid.show("Google Server Error" , ToastAndroid.LONG);
                   break;
           }
       }
       this.setState({micOn: false})
   }

render() {
       TranslatorConfiguration.setConfig(ProviderTypes.Google,'XXXX', this.state.languageCode);
       return (
             <View style = {styles.container}>
              <View style={styles.input}>
                  <TextInput
                      ...
                      ...
                      ...
                  />
                  <TouchableOpacity onPress={this._buttonClick}>
                       {this.state.micOn ? <Icon size={30} name="md-mic" style={styles.micStyle}/> : <Icon size={30} name="md-mic-off" style={styles.micStyle}/>}
                   </TouchableOpacity>
              </View>
...
...
</View>
    )
}
}

const styles = StyleSheet.create({
  container: {
      paddingTop: 53
  },
...
...
...
  micStyle: {
      padding: 10,
      margin: 5,
      alignItems: 'center'
  }
})

After adding the code correctly, save all the changes and run your app. Now you can see a mic icon in the text input box which enables the speech to text feature.

In the above code we are calling a function called _buttonClick(), which contains the speech to text logic. This will automatically start recognizing and adjusting for the English language. You can use different languages for speech; check here for more information.

Now we have successfully implemented speech to text in our Translator app. Let's add a text to speech feature which will turn the translated text into speech. For that we have a library called react-native-tts, which converts text to speech.

Install react-native-tts in our project. Go to the command line, navigate to the project root directory, and run the following commands:

npm install --save react-native-tts
react-native link react-native-tts

The first command installs the library.
The second command links the library to your Android project.

Now add the following code in your Translator.js file

import React, { Component } from 'react';
...
...
import Icon from "react-native-vector-icons/Ionicons";
import SpeechAndroid from 'react-native-android-voice';
import Tts from 'react-native-tts'; // needed for the text to speech calls below

export default class Translator extends Component {
constructor(props) {
      super(props);
      this.state = {
          languageFrom: "",
          ...
          ...
          micOn: false, //Add this
      };
      this._buttonClick = this._buttonClick.bind(this); //Add this
  }


handleTranslate = () => {
       this.setState({submit: true})
       const translator = TranslatorFactory.createTranslator();
       translator.translate(this.state.inputText).then(translated => {
           // alert(translated)
           // Wait for the TTS engine to be ready, then speak the translation.
           Tts.getInitStatus().then(() => {
               Tts.speak(translated);
           });
           // Runs before the callback above, so it only cuts off any previous utterance.
           Tts.stop();
       });
   }
...

render() {
         ...
    )
}
}

In the above code we added the text-to-speech logic to the handleTranslate function, which is called when the submit button is clicked.
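If you also want the spoken output to follow the selected target language, a hedged variation of handleTranslate could look like the sketch below (it assumes react-native-tts's setDefaultLanguage accepts the picker's language code, and simply falls back to the default voice if it rejects):

handleTranslate = () => {
    this.setState({ submit: true });
    const translator = TranslatorFactory.createTranslator();
    translator.translate(this.state.inputText).then(translated => {
        Tts.stop(); // cut off any previous utterance first
        Tts.getInitStatus()
            .then(() => Tts.setDefaultLanguage(this.state.languageCode))
            .catch(() => {}) // fall back to the default voice if the language is unavailable
            .then(() => Tts.speak(translated));
    });
}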

Our final Translator.js file will now look like this:

import React, { Component } from 'react';
import { PowerTranslator, ProviderTypes, TranslatorConfiguration, TranslatorFactory } from 'react-native-power-translator';
import { View, TextInput, StyleSheet, TouchableOpacity, TouchableHighlight, Text, Picker, Image, ToastAndroid } from 'react-native';
import Icon from "react-native-vector-icons/Ionicons";
import Tts from 'react-native-tts';
import Languages from './languages.json';
import SpeechAndroid from 'react-native-android-voice';

export default class Translator extends Component {

   constructor(props) {
       super(props);
       this.state = {
           languageFrom: "",
           languageTo: "",
           languageCode: 'en',
           inputText: "",
           outputText: "",
           submit: false,
           micOn: false,
       };
       this._buttonClick = this._buttonClick.bind(this);
   }
   handleTranslate = () => {
       this.setState({submit: true})
       const translator = TranslatorFactory.createTranslator();
       translator.translate(this.state.inputText).then(translated => {
           Tts.getInitStatus().then(() => {
               Tts.speak(translated);
           });
           Tts.stop();
       });
   }
   async _buttonClick(){
       await this.setState({micOn: true})
       try{
           var spokenText = await SpeechAndroid.startSpeech("", SpeechAndroid.DEFAULT);
           await this.setState({inputText: spokenText});
           await ToastAndroid.show(spokenText , ToastAndroid.LONG);
       }catch(error){
           switch(error){
               case SpeechAndroid.E_VOICE_CANCELLED:
                   ToastAndroid.show("Voice Recognizer cancelled" , ToastAndroid.LONG);
                   break;
               case SpeechAndroid.E_NO_MATCH:
                   ToastAndroid.show("No match for what you said" , ToastAndroid.LONG);
                   break;
               case SpeechAndroid.E_SERVER_ERROR:
                   ToastAndroid.show("Google Server Error" , ToastAndroid.LONG);
                   break;
           }
       }
       this.setState({micOn: false})
   }

   render() {
       TranslatorConfiguration.setConfig(ProviderTypes.Google, 'XXXXXXXXX', this.state.languageCode);
       return (
           <View style = {styles.container}>
               <View style={styles.input}>
                   <TextInput
                       style={{flex:1, height: 80}}
                       placeholder="Enter Text"
                       underlineColorAndroid="transparent"
                       onChangeText = {inputText => this.setState({inputText})}
                       value={this.state.inputText}
                   />
                   <TouchableOpacity onPress={this._buttonClick}>
                       {this.state.micOn ? <Icon size={30} name="md-mic" style={styles.ImageStyle}/> : <Icon size={30} name="md-mic-off" style={styles.ImageStyle}/>}
                   </TouchableOpacity>
               </View>

               <Picker
               selectedValue={this.state.languageTo}
               onValueChange={ lang => this.setState({languageTo: lang, languageCode: lang})}
               >
                   {Object.keys(Languages).map(key => (
                        <Picker.Item label={Languages[key]} value={key} key={key} />
                   ))}
               </Picker>

               <View style = {styles.output}>
                   {this.state.submit && <PowerTranslator text={this.state.inputText} />}
                   {/* onTranslationEnd={this.textToSpeech} */}
               </View>
               <TouchableOpacity
                   style = {styles.submitButton}
                   onPress = {this.handleTranslate}
               >
                   <Text style = {styles.submitButtonText}> Submit </Text>
               </TouchableOpacity>
           </View>
       )
   }
}

const styles = StyleSheet.create({
   container: {
       paddingTop: 53
   },
   input: {
       flexDirection: 'row',
       justifyContent: 'center',
       alignItems: 'center',
       backgroundColor: '#fff',
       borderWidth: .5,
       borderColor: '#000',
       // height: 40,
       borderRadius: 5 ,
       margin: 10
   },
   output: {
       flexDirection: 'row',
       justifyContent: 'center',
       alignItems: 'center',
       backgroundColor: '#fff',
       borderWidth: .5,
       borderColor: '#000',
       borderRadius: 5 ,
       margin: 10,
       height: 80,
   },
   ImageStyle: {
       padding: 10,
       margin: 5,
       alignItems: 'center'
   },
   submitButton: {
       backgroundColor: '#7a42f4',
       padding: 10,
       margin: 15,
       borderRadius: 5 ,
       height: 40,
   },
   submitButtonText:{
       color: 'white'
   },
})

Make sure you replace the ‘XXXXXXXXX’ placeholder with your Google/Microsoft API key in TranslatorConfiguration in the render method.
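Rather than hard-coding the key inside render, one option (a sketch; apiKey.js is a hypothetical file you would keep out of version control) is to load it from a small module:

// apiKey.js — hypothetical helper module; add it to .gitignore
export const TRANSLATOR_API_KEY = 'XXXXXXXXX';

// Translator.js
import { TRANSLATOR_API_KEY } from './apiKey';
// ...
TranslatorConfiguration.setConfig(ProviderTypes.Google, TRANSLATOR_API_KEY, this.state.languageCode);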

That’s it. Our Translator application now has language translation, speech-to-text, and text-to-speech features. Reload/run your app and you will see a fully functional app.

When the user taps the mic icon, the Android speech recognizer popup is displayed as below.

If the user doesn’t speak or Google doesn’t recognize the speech, it shows up as below:

Once Google recognizes the speech, select the language you want to translate to and tap the Submit button; you will then receive the translated text as speech.

That’s it folks!

This story is authored by Venu Vaka. Venu is a software engineer and machine learning enthusiast.