Create a Voice-Driven Healthcare Chatbot in React Native Using Google Dialogflow

In our earlier blog post, we built a healthcare chatbot in React Native using the Dialogflow API. This blog is an extension of that chatbot: we will learn how to add support for voice-based user interaction. Assuming you have followed the earlier blog and created the chatbot, let us proceed.

The first thing to do is declare the voice permission the app needs to access the device’s microphone and record audio. For that, add the lines below to AndroidManifest.xml, located at project-name > android > app > src > main.

AndroidManifest.xml

<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
<uses-permission android:name="android.permission.RECORD_AUDIO" />

We also need to make a few changes in the MainApplication.java file located at project-name > android > app > src > main > java > com > project-name. Modify its content as below.

MainApplication.java

import android.app.Application;
import android.content.Context;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.soloader.SoLoader;
import java.util.List;


// Additional packages which we need to add
import net.no_mad.tts.TextToSpeechPackage;
import com.oblador.vectoricons.VectorIconsPackage;
import com.facebook.react.shell.MainReactPackage;
import com.reactnativecommunity.rnpermissions.RNPermissionsPackage;
import com.wmjmc.reactspeech.VoicePackage;
import java.util.Arrays;

public class MainApplication extends Application implements ReactApplication {

  private final ReactNativeHost mReactNativeHost =
      new ReactNativeHost(this) {
        @Override
        public boolean getUseDeveloperSupport() {
          return BuildConfig.DEBUG;
        }
        // Replace your getPackages() method with below getPackages() method
        @Override
        protected List<ReactPackage> getPackages() {
          return Arrays.<ReactPackage>asList(
              new MainReactPackage(),
              new VectorIconsPackage(),
              new RNPermissionsPackage(),
              new VoicePackage(),
              new TextToSpeechPackage()
          );
        }

        @Override
        protected String getJSMainModuleName() {
          return "index";
        }
      };

  @Override
  public ReactNativeHost getReactNativeHost() {
    return mReactNativeHost;
  }

  @Override
  public void onCreate() {
    super.onCreate();
    SoLoader.init(this, /* native exopackage */ false);
    // initializeFlipper(this); // Commented out to avoid a Flipper-related error (see note below)
  }

  /**
   * Loads Flipper in React Native templates.
   *
   * @param context
   */

  // private static void initializeFlipper(Context context) {
  //   if (BuildConfig.DEBUG) {
  //     try {
  //       /*
  //        We use reflection here to pick up the class that initializes Flipper,
  //       since Flipper library is not available in release mode
  //       */
  //       Class<?> aClass = Class.forName("com.facebook.flipper.ReactNativeFlipper");
  //       aClass.getMethod("initializeFlipper", Context.class).invoke(null, context);
  //     } catch (ClassNotFoundException e) {
  //       e.printStackTrace();
  //     } catch (NoSuchMethodException e) {
  //       e.printStackTrace();
  //     } catch (IllegalAccessException e) {
  //       e.printStackTrace();
  //     } catch (InvocationTargetException e) {
  //       e.printStackTrace();
  //     }
  //   }
  // }
}

We added import net.no_mad.tts.TextToSpeechPackage so the app supports text-to-speech conversion, letting it deliver the bot’s response not only as text but also as voice.

To capture voice input, we added import com.wmjmc.reactspeech.VoicePackage.

If you intend to use react-native vector icons, you also need to add import com.oblador.vectoricons.VectorIconsPackage.

The initializeFlipper() method and the initializeFlipper(this) call inside onCreate() have to be commented out, as shown above, to avoid a Flipper-related build error.

Moving on from MainApplication.java, we also need to modify the existing code in the App.js file.

First, we need to install some packages. To do so, navigate to your project folder in the terminal and execute the command below.

npm install react-native-elements react-native-vector-icons react-native-tts uuid react-native-android-voice --save

We are using react-native-elements for its ready-made UI components, and react-native-vector-icons if you want to use vector icons in the app.

React Native TTS is a text-to-speech library for react-native on iOS and Android.

The uuid package is used to generate UUIDs (unique ids), which we will use as message ids in the app.
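As a quick illustration, generating an id looks like this (assuming uuid 3.x, whose default export exposes v4() directly, matching how it is used in the code below):

import uuid from 'uuid';

const messageId = uuid.v4(); // e.g. '110ec58a-a0f2-4ac4-8393-c866d813b8d1'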

react-native-android-voice is a speech-to-text library for react-native for the Android platform.

Now modify the content of the App.js file as below.
App.js:

import React, { Component } from 'react';
import { View, StyleSheet, ToastAndroid } from 'react-native';
import { GiftedChat } from 'react-native-gifted-chat';
import { Dialogflow_V2 } from 'react-native-dialogflow';
import { dialogflowConfig } from './env';
import SpeechAndroid from 'react-native-android-voice';
import uuid from "uuid";
import Tts from 'react-native-tts';
import Icon from 'react-native-vector-icons/FontAwesome';
import { Button } from 'react-native-elements';

const BOT_USER = {
  _id: 2,
  name: 'Health Bot',
  avatar: 'https://previews.123rf.com/images/iulika1/iulika11909/iulika1190900021/129697389-medical-worker-health-professional-avatar-medical-staff-doctor-icon-isolated-on-white-background-vec.jpg'
};

var speak = 0; // set to 1 when the input came from the mic, so the bot's reply is also spoken

class App extends Component {
  state = {
    messages: [
      {
        _id: 1,
        text: 'Hi! I am the Healthbot 🤖.\n\nHow may I help you today?',
        createdAt: new Date(),
        user: BOT_USER,
      }
    ],
  };

  componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

  onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

  handleGoogleResponse(result) {
    let receivedText = '';
    let splitReceivedText = '';
    var extractDate = '';
    const dateRegex = /\d{4}\-\d{2}\-\d{2}?/gm;

    receivedText = result.queryResult.fulfillmentMessages[0].text.text[0];
    splitReceivedText = receivedText.split('on')[0];
    extractDate = receivedText.match(dateRegex);

    if (extractDate != null) {
      var completeTimeValue = splitResponseForTime(receivedText);
      var timeValue = getTimeValue(completeTimeValue);
      function splitResponseForTime(str) {
        return str.split('at')[1];
      }

      function getTimeValue(str) {
        let time1 = str.split('T')[1];
        let hour = time1.split(':')[0];
        let min = time1.split(':')[1];
        return hour + ":" + min;
      }
      splitReceivedText = splitReceivedText + 'on ' + extractDate[0] + ' at ' + timeValue;
    }
    this.sendBotResponse(splitReceivedText);
  }

  sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, [msg])
    }));

    if (speak) {
      speak=0; 
      Tts.getInitStatus().then(() => {
        Tts.speak(text);
      }, (err) => {
        if (err.code === 'no_engine') {
          Tts.requestInstallEngine();
        }
      });
    }
  }

  uuidGen = () => {
    return uuid.v4();
  }

  micHandler = async () => {
    try {
      let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.ENGLISH);
      speak = 1;
      let messagesArray = [{
        "text": textSpeech,
        "user": {
          "_id": 1
        },
        "createdAt": new Date(),
        "_id": this.uuidGen()
      }];
      this.onSend(messagesArray);

      ToastAndroid.show(textSpeech, ToastAndroid.LONG);
    } catch (error) {
      switch (error) {
        case SpeechAndroid.E_VOICE_CANCELLED:
          ToastAndroid.show("Voice Recognizer cancelled", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_NO_MATCH:
          ToastAndroid.show("No match for what you said", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_SERVER_ERROR:
          ToastAndroid.show("Google Server Error", ToastAndroid.LONG);
          break;
      }
    }
  }


  render() {
    return (
      <View style={styles.screen}>
        <GiftedChat
          messages={this.state.messages}
          onSend={messages => this.onSend(messages)}
          user={{
            _id: 1
          }}
        />
        <Button
          icon={
            <Icon
              name="microphone"
              size={24}
              color="white"
            />
          }
          onPress={this.micHandler}
        />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  screen: {
    flex: 1,
    backgroundColor: '#fff'
  },
});
export default App;

Now when we run the app using the react-native run-android command, we can see a microphone button that enables the speech-to-text feature in the app.

Once we tap the mic button, it triggers the micHandler() function, which contains the logic to convert speech into text. Recognition starts automatically, configured for the English language.

However, you can use different languages for speech input. Refer to the react-native-android-voice documentation for more information.
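For example, to recognize German instead of English, you would replace the startSpeech line with a different locale constant. SpeechAndroid.GERMAN below is taken from the library's examples; check its docs for the full list of supported locales:

let textSpeech = await SpeechAndroid.startSpeech("Sprechen Sie jetzt", SpeechAndroid.GERMAN);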

micHandler = async () => {
    try {
      let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.ENGLISH);
      speak=1;
      let messagesArray = [{
        "text": textSpeech,
        "user": {
          "_id": 1
        },
        "createdAt": new Date(),
        "_id": this.uuidGen()
      }];
      this.onSend(messagesArray);
      ToastAndroid.show(textSpeech, ToastAndroid.LONG);
    } catch (error) {
      switch (error) {
        case SpeechAndroid.E_VOICE_CANCELLED:
          ToastAndroid.show("Voice Recognizer cancelled", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_NO_MATCH:
          ToastAndroid.show("No match for what you said", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_SERVER_ERROR:
          ToastAndroid.show("Google Server Error", ToastAndroid.LONG);
          break;
      }
    }
  }

micHandler() is an asynchronous function. Here we capture the recognized text, store it in the messagesArray array, and pass it as an argument to the onSend() function. As every message needs a unique id, we use the uuidGen() function, which simply returns a new unique id each time we call it.

If the request is successful, the onSend() function triggers the handleGoogleResponse() function, which is shown below.

handleGoogleResponse(result) {
    let receivedText = '';
    let splitReceivedText = '';
    var extractDate = '';
    const dateRegex = /\d{4}\-\d{2}\-\d{2}?/gm;

    receivedText = result.queryResult.fulfillmentMessages[0].text.text[0];
    splitReceivedText = receivedText.split('on')[0];
    extractDate = receivedText.match(dateRegex);

    if (extractDate != null) {
      var completeTimeValue = splitResponseForTime(receivedText);
      var timeValue = getTimeValue(completeTimeValue);
      function splitResponseForTime(str) {
        return str.split('at')[1];
      }

      function getTimeValue(str) {
        let time1 = str.split('T')[1];
        let hour = time1.split(':')[0];
        let min = time1.split(':')[1];
        return hour + ":" + min;
      }
      splitReceivedText = splitReceivedText + 'on ' + extractDate[0] + ' at ' + timeValue;
    }
    this.sendBotResponse(splitReceivedText);
  }

In this method, we have written the logic that restructures the response sent by the Dialogflow API into the form we require: the text before the date is kept, while the date and time are extracted and re-appended in a readable format.
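For instance, suppose the agent returns the hypothetical fulfillment text below (the exact wording depends on the response configured in the console). The comments trace what each step produces:

// Hypothetical fulfillment text from the agent:
// "Your appointment is booked with Dr. Strange on 2020-03-10 at 2020-03-10T15:30:00+05:30"
//
// receivedText.split('on')[0]
//   -> "Your appointment is booked with Dr. Strange "
// receivedText.match(dateRegex)
//   -> ["2020-03-10", "2020-03-10"]
// splitResponseForTime(receivedText)   // text after 'at'
//   -> " 2020-03-10T15:30:00+05:30"
// getTimeValue(...)                    // hour and minute after 'T'
//   -> "15:30"
//
// Final structured text:
// "Your appointment is booked with Dr. Strange on 2020-03-10 at 15:30"

The structured text is then passed on to the sendBotResponse() function to set the state of the messages array.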

 sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, [msg])
    }));

    if (speak) {
      speak=0; 
      Tts.getInitStatus().then(() => {
        Tts.speak(text);
      }, (err) => {
        if (err.code === 'no_engine') {
          Tts.requestInstallEngine();
        }
      });
    }
  }

If the user interacts with the bot using voice, the response from the bot comes back not only as text but also as voice. That is handled using Tts.speak(text). The TTS engine can take some time to initialize, and Tts.speak() will fail until the engine is ready, so we wait for successful initialization using the Tts.getInitStatus() call.
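react-native-tts also lets you tune the voice before speaking, if you wish. The values below are illustrative:

// Optional TTS tuning (illustrative values; see the react-native-tts docs for platform-specific ranges):
Tts.setDefaultLanguage('en-US'); // language of the synthesized voice
Tts.setDefaultRate(0.5);         // speaking rate
Tts.setDefaultPitch(1.0);        // voice pitch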

Below are the snapshots of the app running on an Android device.

That’s all. Thank you for the read!

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in Cloud Services based development.

Creating a Chatbot for Healthcare in React Native using Dialogflow

In this blog, we shall learn how to build an AI virtual assistant or a Chatbot using React Native and Dialogflow API.

Why are chatbots important?
A chatbot is a piece of software that conducts a conversation through voice-based or textual methods. Chatbots offer companies new opportunities to improve the customer engagement process and operational efficiency by reducing the typical cost of customer service.

What is Dialogflow?
Dialogflow (previously known as API.AI) is a Natural Language Processing (NLP) platform that helps build conversational applications for a company’s customers in various languages and across multiple platforms. It enables developers to create text-based and voice conversation interfaces for responding to customer queries.

Why Dialogflow?
There are different chatbot SDKs like Dialogflow, Amazon Lex, IBM Watson, Microsoft Bot Framework, etc. The reasons why we chose Dialogflow are:

  1. Dialogflow supports multiple platforms.
  2. Dialogflow supports a wide range of devices like wearables and phones.
  3. Dialogflow also supports multiple languages.

How does Dialogflow work?
In Dialogflow, the typical flow of any conversation involves these steps:

  1. The user provides an input.
  2. The Dialogflow agent parses that input and matches it to an intent.
  3. The agent returns a response to the user.
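To make this concrete, here is a trimmed sketch of a Dialogflow V2 detectIntent response. Real responses carry more fields; these are the ones we will use later:

{
  "queryResult": {
    "queryText": "I want to book an appointment",
    "fulfillmentText": "What is the name of the doctor?",
    "fulfillmentMessages": [
      { "text": { "text": ["What is the name of the doctor?"] } }
    ],
    "intent": { "displayName": "Schedule an Appointment" }
  }
}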

Setting up Dialogflow account:

Navigate to the console from the official website. You will be prompted to sign in with Google; go ahead and sign in. After successfully signing in, you will see a dashboard.

Before we dive into the platform and start building the bot/agent, let us learn about the terms used in Dialogflow.

After signing in, you will see a Create Agent tab. An agent is nothing but the bot that you would like to create. Give it a name of your choice and click on the Create button. After it is created successfully, you will see multiple tabs on the left side of the screen, like:

  1. Intents
  2. Entities
  3. Fulfillment, etc.

Intents:
An Intent is a specific action that the user can invoke by using one of the defined terms in the Dialogflow console. 

For example, the user could ask “What’s the time?” or “What is today’s date?”. If these terms are defined within the console, Dialogflow will detect them and trigger the intents defined under them.

You can create an intent by clicking on create intent as shown below.

You shall see some default intents already available. We can create new intents here.

Entities:
An Entity is a property that Dialogflow can use to answer the user’s request. An entity is usually a keyword within the request, such as a name, date or time.

Dialogflow has a rich set of predefined entities and also has an option that enables the developer to define custom entities as well.

Fulfillment:
When the user provides an input, Dialogflow needs to process it, and the input may contain entities as well. To fulfill the user’s request, Dialogflow sends the input along with the extracted entities to a webhook, which retrieves the required information. Once Dialogflow receives the information from the webhook, it sends the response back to the user in the desired manner.

For example, if the user wants to know about weather conditions, a webhook could be used to get info about the weather and pass it on to the user.
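A fulfillment webhook is just an HTTPS endpoint that receives the detected intent and parameters and returns a response. Below is a minimal, hypothetical sketch in Node.js with Express; the endpoint path and the getWeather() helper are assumptions, not part of Dialogflow:

const express = require('express');
const app = express();
app.use(express.json());

// Dialogflow POSTs the detected intent and parameters to this endpoint.
app.post('/dialogflow-webhook', (req, res) => {
  const params = req.body.queryResult.parameters; // entities extracted by Dialogflow
  const weather = getWeather(params['geo-city']); // hypothetical weather lookup
  // fulfillmentText is shown (or spoken) back to the user by the agent.
  res.json({ fulfillmentText: 'It is ' + weather + ' in ' + params['geo-city'] + ' today.' });
});

app.listen(3000);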

Response:
It is the content which Dialogflow sends back to the user once the user’s query is processed.

Creating a ChatBot for Health care:

Now that we have learnt some basic terms of Dialogflow, let us start building a chatbot (in this case, a Healthbot) that helps the user (patient) schedule an appointment with a specific doctor in an organization.

Let’s go ahead and create an agent first. Here we are creating an agent with the name HealthBot.

After clicking the create button, the HealthBot agent would be created. It would look like below.

You could see some default intents there. We can create our own intents here. So, let’s move forward and start creating the intents.

The intent we will be creating here is “Schedule an Appointment”.

Save the intent after creating it. In the Training Phrases section, we can add our own training phrases to train the agent.

When we add a particular training phrase, Dialogflow looks for predefined entities in the phrase; if any are found, it highlights them as shown.

Add a few other related training phrases and click on Save.

Next, in the Action and Parameters section, we can mark @sys.person, @sys.date and @sys.time as required by ticking the Required checkbox. We can also define prompts for the required fields, so that if the user does not provide one of them, the defined prompt is shown, asking the user to provide the missing parameter.

The prompts for the entities can be defined by clicking Define prompts under Prompts. Below are the prompts for the respective entities.

Next, we have to add the response in the Responses section.

After receiving all the required parameters from the user, we can phrase a response as shown.

Now we have to create a front-end app using React Native that communicates with the HealthBot agent.

Let’s go to the React Native docs, select React Native CLI Quickstart, and choose the appropriate development OS with Android as the target OS, since we are going to build an Android application.

Follow the docs for installing dependencies, then create a new React Native project using the command line interface.

react-native init <project-name>

Run the commands below to launch the app on an Android device. You should see the default welcome page.

cd <project-name>
npm install
react-native run-android

Note: If you face an issue like “Failed to install the app. Make sure you have the Android development environment set up”, just go to the <project-name>/android folder, create a file named local.properties, and add your Android SDK path to it as shown below.

sdk.dir = Your Android SDK Path

We also need to install some dependencies using the command below.

npm install react-native-gifted-chat react-native-dialogflow --save

We are using the react-native-gifted-chat package as it provides a customizable and complete chat UI.

We are also using react-native-dialogflow so that we can bridge our app with Google Dialogflow’s SDK.

For our app to communicate with the Dialogflow agent, we need to configure a few things. For that, create a .js file in your project root folder (in this case, env.js); we will configure a few values in it.

To get these values, click on the gear icon beside the agent name on the left side of the screen to open the agent settings, then click on the Service Account link.

After clicking the link, you will see a table called Service accounts for project “<Agent Name>”. Click on Actions and select the Create key option. A prompt will appear asking you to choose a key type; select JSON and click on Create. A JSON file will be downloaded. Just copy the contents of the JSON file into env.js.

Your env.js file would look like below.

env.js

export const dialogflowConfig = {
  "type": "service_account",
  "project_id": "Health-bot",
  "private_key_id": "xxxx",
  "private_key": "-----BEGIN PRIVATE KEY-----\n xxxx\n-----END PRIVATE KEY-----\n",
  "client_email": "xxxx",
  "client_id": "xxxx",
  "auth_uri": "xxxx",
  "token_uri": "xxxx",
  "auth_provider_x509_cert_url": "xxxx",
  "client_x509_cert_url": "xxxx"
}

Now go to the <project-name> directory and open App.js. Modify the content of App.js as below.

App.js

import React, { Component } from 'react';
import { View } from 'react-native';
import { GiftedChat } from 'react-native-gifted-chat';
import { Dialogflow_V2 } from 'react-native-dialogflow';
import { dialogflowConfig } from './env';

const BOT_USER = {
  _id: 2,
  name: 'Health Bot',
  avatar: 'https://previews.123rf.com/images/iulika1/iulika11909/iulika1190900021/129697389-medical-worker-health-professional-avatar-medical-staff-doctor-icon-isolated-on-white-background-vec.jpg'
};
class App extends Component {

  state = {
    messages: [
      {
        _id: 1,
        text: 'Hi! I am the Healthbot 🤖.\n\nHow may I help you today?',
        createdAt: new Date(),
        user: BOT_USER
      }
    ]
  };

  componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

  onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

  handleGoogleResponse(result) {
    let text = result.queryResult.fulfillmentMessages[0].text.text[0];
    this.sendBotResponse(text);
  }

  sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, [msg])
    }));
  }

  render() {
    return (
      <View style={{ flex: 1, backgroundColor: '#fff' }}>
        <GiftedChat
          messages={this.state.messages}
          onSend={messages => this.onSend(messages)}
          user={{
            _id: 1
          }}
        />
      </View>
    );
  }
}
export default App;

When the App component mounts, componentDidMount() is invoked. There we set the configuration for Dialogflow as given below.


componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

When you click on Send, the onSend() method is triggered: the user’s message is stored in the state variable and a request is sent to Dialogflow using Dialogflow_V2.requestQuery. If the request is successful, the handleGoogleResponse() method gets triggered.

onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

handleGoogleResponse() extracts the text from the response and triggers the sendBotResponse() method, which appends the response to the messages state, as shown below.

handleGoogleResponse(result) {
    let text = result.queryResult.fulfillmentMessages[0].text.text[0];
    this.sendBotResponse(text);
  }

  sendBotResponse(text) {
      let msg = {
        _id: this.state.messages.length + 1,
        text,
        createdAt: new Date(),
        user: BOT_USER
      };

      this.setState(previousState => ({
        messages: GiftedChat.append(previousState.messages, [msg])
      }));
    }

Below are the images of the app running on an Android device.

That’s it folks, we hope it was fun and useful.

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in Cloud Services based development.