In our earlier blog post, we built a Healthcare Chatbot in React Native using the Dialogflow API. This post is an extension of that chatbot: we will learn how to add support for voice-based user interaction. Assuming you have followed the earlier blog and created the chatbot, let us proceed.

The first thing is to declare the permissions the app needs to access the device's microphone and record audio. For that, add the lines below to AndroidManifest.xml, located at project-name > android > app > src > main.

AndroidManifest.xml

<uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/>
<uses-permission android:name="android.permission.RECORD_AUDIO" />

We also need to make a few changes in the MainApplication.java file located at project-name > android > app > src > main > java > com > project-name.
Modify the content of MainApplication.java as below.

MainApplication.java

import android.app.Application;
import android.content.Context;
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactNativeHost;
import com.facebook.react.ReactPackage;
import com.facebook.soloader.SoLoader;
import java.util.List;


// Additional packages which we need to add
import net.no_mad.tts.TextToSpeechPackage;
import com.oblador.vectoricons.VectorIconsPackage;
import com.facebook.react.shell.MainReactPackage;
import com.reactnativecommunity.rnpermissions.RNPermissionsPackage;
import com.wmjmc.reactspeech.VoicePackage;
import java.util.Arrays;

public class MainApplication extends Application implements ReactApplication {

  private final ReactNativeHost mReactNativeHost =
      new ReactNativeHost(this) {
        @Override
        public boolean getUseDeveloperSupport() {
          return BuildConfig.DEBUG;
        }
        // Replace your getPackages() method with below getPackages() method
        @Override
        protected List<ReactPackage> getPackages() {
          return Arrays.<ReactPackage>asList(
              new MainReactPackage(),
              new VectorIconsPackage(),
              new RNPermissionsPackage(),
              new VoicePackage(),
              new TextToSpeechPackage()
          );
        }

        @Override
        protected String getJSMainModuleName() {
          return "index";
        }
      };

  @Override
  public ReactNativeHost getReactNativeHost() {
    return mReactNativeHost;
  }

  @Override
  public void onCreate() {
    super.onCreate();
    SoLoader.init(this, /* native exopackage */ false);
    // initializeFlipper(this); // Commented out to avoid the Flipper initialization error
  }

  /**
   * Loads Flipper in React Native templates.
   *
   * @param context
   */

  // private static void initializeFlipper(Context context) {
  //   if (BuildConfig.DEBUG) {
  //     try {
  //       /*
  //        We use reflection here to pick up the class that initializes Flipper,
  //       since Flipper library is not available in release mode
  //       */
  //       Class<?> aClass = Class.forName("com.facebook.flipper.ReactNativeFlipper");
  //       aClass.getMethod("initializeFlipper", Context.class).invoke(null, context);
  //     } catch (ClassNotFoundException e) {
  //       e.printStackTrace();
  //     } catch (NoSuchMethodException e) {
  //       e.printStackTrace();
  //     } catch (IllegalAccessException e) {
  //       e.printStackTrace();
  //     } catch (InvocationTargetException e) {
  //       e.printStackTrace();
  //     }
  //   }
  // }
}

We have added import net.no_mad.tts.TextToSpeechPackage so that the app supports text-to-speech conversion, letting it deliver responses not only as text but also as voice.

To capture voice input, we have to add import com.wmjmc.reactspeech.VoicePackage.

If you intend to use react-native vector icons, you will also need to add import com.oblador.vectoricons.VectorIconsPackage.

The initializeFlipper() method has to be commented out to avoid a build error, and the initializeFlipper(this) call inside the onCreate() method has to be commented out along with it, as shown in the code above.

Moving on from MainApplication.java, we also need to modify our existing code in the App.js file.

First, we need to install some packages. To do so, navigate to your project folder in the terminal and execute the command below.

npm install react-native-elements react-native-vector-icons react-native-tts uuid react-native-android-voice --save
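
Since we are registering the native packages manually in MainApplication.java, this walkthrough assumes a React Native version below 0.60, which has no autolinking. On such versions you may also need to link the native modules after installing them, for example:

react-native link react-native-tts
react-native link react-native-vector-icons
react-native link react-native-android-voice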

We are using react-native-elements for its ready-built UI components, and react-native-vector-icons if you want to show vector icons (such as the microphone icon on our button).

React Native TTS is a text-to-speech library for react-native on iOS and Android.

The uuid package is used to generate UUIDs (unique IDs), which we use to identify each message in the app.
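
For example, every call to uuid.v4() returns a fresh, random version 4 UUID string:

import uuid from 'uuid';

// Each call yields a new random ID, e.g. '110ec58a-a0f2-4ac4-8393-c866d813b8d1'
const messageId = uuid.v4();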

react-native-android-voice is a speech-to-text library for react-native for the Android platform.

Now modify the content of the App.js file as below.
App.js:

import React, { Component } from 'react';
import { View, StyleSheet, ToastAndroid } from 'react-native';
import { GiftedChat } from 'react-native-gifted-chat';
import { Dialogflow_V2 } from 'react-native-dialogflow';
import { dialogflowConfig } from './env';
import SpeechAndroid from 'react-native-android-voice';
import uuid from "uuid";
import Tts from 'react-native-tts';
import Icon from 'react-native-vector-icons/FontAwesome';
import { Button } from 'react-native-elements';

const BOT_USER = {
  _id: 2,
  name: 'Health Bot',
  avatar: 'https://previews.123rf.com/images/iulika1/iulika11909/iulika1190900021/129697389-medical-worker-health-professional-avatar-medical-staff-doctor-icon-isolated-on-white-background-vec.jpg'
};

// Set to 1 when the user sends a message by voice, so the bot's reply is spoken aloud
var speak = 0;

class App extends Component {
  state = {
    messages: [
      {
        _id: 1,
        text: 'Hi! I am the Healthbot 🤖.\n\nHow may I help you today?',
        createdAt: new Date(),
        user: BOT_USER,
      }
    ],
  };

  componentDidMount() {
    Dialogflow_V2.setConfiguration(
      dialogflowConfig.client_email,
      dialogflowConfig.private_key,
      Dialogflow_V2.LANG_ENGLISH_US,
      dialogflowConfig.project_id
    );
  }

  onSend(messages = []) {
    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, messages)
    }));

    let message = messages[0].text;
    Dialogflow_V2.requestQuery(
      message,
      result => this.handleGoogleResponse(result),
      error => console.log(error)
    );
  }

  handleGoogleResponse(result) {
    let receivedText = '';
    let splitReceivedText = '';
    var extractDate = '';
    const dateRegex = /\d{4}\-\d{2}\-\d{2}?/gm;

    receivedText = result.queryResult.fulfillmentMessages[0].text.text[0];
    splitReceivedText = receivedText.split('on')[0];
    extractDate = receivedText.match(dateRegex);

    if (extractDate != null) {
      var completeTimeValue = splitResponseForTime(receivedText);
      var timeValue = getTimeValue(completeTimeValue);
      function splitResponseForTime(str) {
        return str.split('at')[1];
      }

      function getTimeValue(str) {
        let time1 = str.split('T')[1];
        let hour = time1.split(':')[0];
        let min = time1.split(':')[1];
        return hour + ":" + min;
      }
      splitReceivedText = splitReceivedText + 'on ' + extractDate[0] + ' at ' + timeValue;
    }
    this.sendBotResponse(splitReceivedText);
  }

  sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, msg)
    }));

    if (speak) {
      speak = 0; // Reset the flag so typed messages are not spoken aloud
      Tts.getInitStatus().then(() => {
        Tts.speak(text);
      }, (err) => {
        if (err.code === 'no_engine') {
          Tts.requestInstallEngine();
        }
      });
    }
  }

  uuidGen = () => {
    return uuid.v4();
  }

  micHandler = async () => {
    try {
      let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.ENGLISH);
      speak = 1; // The user spoke, so the bot's reply should be spoken back
      let messagesArray = [{
        "text": textSpeech,
        "user": {
          "_id": 1
        },
        "createdAt": new Date(),
        "_id": this.uuidGen()
      }];
      await this.onSend(messagesArray);

      // Also show the recognized text as a toast
      ToastAndroid.show(textSpeech, ToastAndroid.LONG);
    } catch (error) {
      switch (error) {
        case SpeechAndroid.E_VOICE_CANCELLED:
          ToastAndroid.show("Voice Recognizer cancelled", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_NO_MATCH:
          ToastAndroid.show("No match for what you said", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_SERVER_ERROR:
          ToastAndroid.show("Google Server Error", ToastAndroid.LONG);
          break;
      }
    }
  }


  render() {
    return (
      <View style={styles.screen}>
        <GiftedChat
          messages={this.state.messages}
          onSend={messages => this.onSend(messages)}
          user={{
            _id: 1
          }}
        />
        <Button
          icon={
            <Icon
              name="microphone"
              size={24}
              color="white"
            />
          }
          onPress={this.micHandler}
        />
      </View>
    );
  }
}

const styles = StyleSheet.create({
  screen: {
    flex: 1,
    backgroundColor: '#fff'
  },
});
export default App;

Now when we run the app using the react-native run-android command, we can see a microphone button that enables the speech-to-text feature in the app.

Once we tap the mic button, it triggers the micHandler() function, which contains the logic to convert speech into text. Recognition starts automatically and is configured for the English language.

However, you can use different languages for speech recognition. Read this for more information; a small sketch is shown below.
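
For instance, here is a hedged sketch of requesting recognition in another locale; the exact set of language constants (such as SpeechAndroid.DEFAULT for the device locale) is defined by the react-native-android-voice library, so check its documentation for the ones available:

// Sketch: recognize speech in the device's default locale instead of English
// (SpeechAndroid.DEFAULT is assumed here based on the library's documentation)
let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.DEFAULT);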

micHandler = async () => {
    try {
      let textSpeech = await SpeechAndroid.startSpeech("Speak now", SpeechAndroid.ENGLISH);
      speak = 1; // The user spoke, so the bot's reply should be spoken back
      let messagesArray = [{
        "text": textSpeech,
        "user": {
          "_id": 1
        },
        "createdAt": new Date(),
        "_id": this.uuidGen()
      }];
      await this.onSend(messagesArray);
      ToastAndroid.show(textSpeech, ToastAndroid.LONG);
    } catch (error) {
      switch (error) {
        case SpeechAndroid.E_VOICE_CANCELLED:
          ToastAndroid.show("Voice Recognizer cancelled", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_NO_MATCH:
          ToastAndroid.show("No match for what you said", ToastAndroid.LONG);
          break;
        case SpeechAndroid.E_SERVER_ERROR:
          ToastAndroid.show("Google Server Error", ToastAndroid.LONG);
          break;
      }
    }
  }

micHandler() is an asynchronous function. Here, we capture the recognized speech, store it in the messagesArray array, and pass it as an argument to the onSend() function. As we need a unique ID for every message, we use the uuidGen() function, which simply returns a new unique ID every time we call it.

If the request is successful, the onSend() function will trigger the handleGoogleResponse() function, shown below.

handleGoogleResponse(result) {
    let receivedText = '';
    let splitReceivedText = '';
    var extractDate = '';
    const dateRegex = /\d{4}\-\d{2}\-\d{2}?/gm;

    receivedText = result.queryResult.fulfillmentMessages[0].text.text[0];
    splitReceivedText = receivedText.split('on')[0];
    extractDate = receivedText.match(dateRegex);

    if (extractDate != null) {
      var completeTimeValue = splitResponseForTime(receivedText);
      var timeValue = getTimeValue(completeTimeValue);
      function splitResponseForTime(str) {
        return str.split('at')[1];
      }

      function getTimeValue(str) {
        let time1 = str.split('T')[1];
        let hour = time1.split(':')[0];
        let min = time1.split(':')[1];
        return hour + ":" + min;
      }
      splitReceivedText = splitReceivedText + 'on ' + extractDate[0] + ' at ' + timeValue;
    }
    this.sendBotResponse(splitReceivedText);
  }

In this method, we have written the logic that structures the response sent by the Dialogflow API in the way we require. The structured text is then passed on to the sendBotResponse() function to set the state of the messages array.
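
To make the parsing concrete, here is a quick trace over a hypothetical fulfillment text; the exact wording depends on the responses configured in your Dialogflow agent:

// Hypothetical fulfillment text, for illustration only:
const receivedText =
  'Your appointment is booked on 2019-12-20 at 2019-12-20T15:30:00+05:30';

// receivedText.split('on')[0]        -> 'Your appointment is booked '
// receivedText.match(dateRegex)[0]   -> '2019-12-20'
// splitResponseForTime(receivedText) -> ' 2019-12-20T15:30:00+05:30'
// getTimeValue(...)                  -> '15:30'
// Text finally passed to sendBotResponse():
// 'Your appointment is booked on 2019-12-20 at 15:30'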

 sendBotResponse(text) {
    let msg = {
      _id: this.state.messages.length + 1,
      text,
      createdAt: new Date(),
      user: BOT_USER
    };

    this.setState(previousState => ({
      messages: GiftedChat.append(previousState.messages, msg)
    }));

    if (speak) {
      speak = 0; // Reset the flag so typed messages are not spoken aloud
      Tts.getInitStatus().then(() => {
        Tts.speak(text);
      }, (err) => {
        if (err.code === 'no_engine') {
          Tts.requestInstallEngine();
        }
      });
    }
  }

If the user interacts with the bot using voice, the response from the bot is delivered not only as text but also as voice. That is handled using Tts.speak(text). The TTS engine can take some time to initialize, and Tts.speak() will fail until the engine is ready, so we wait for successful initialization using the Tts.getInitStatus() call.
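
react-native-tts also lets you tune how the bot sounds. A small optional sketch, with illustrative values, that could go in componentDidMount():

// Optional voice tuning for the bot's spoken replies (values are illustrative)
Tts.setDefaultLanguage('en-US'); // language of the TTS voice
Tts.setDefaultRate(0.5);         // speaking rate; ~0.5 is a comfortable pace
Tts.setDefaultPitch(1.0);        // voice pitch; 1.0 is the default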

Below are the snapshots of the app running on an Android device.

That’s all. Thank you for the read!

This story is authored by Dheeraj Kumar and Santosh Kumar. Dheeraj is a software engineer specializing in React Native and React based frontend development. Santosh specializes in Cloud Services based development.
