
The Orbita Voice Android SDK enables the integration of Orbita Voice services into Android apps. Developing services such as audio processing and speech-to-text conversion from scratch can be time-consuming, so Orbita provides an SDK that bundles the voice-related services a business requires for quick and easy implementation.

Prerequisites

Get started with Orbita Voice SDK Installation

Unzip the two SDK files named:

  • orbitaspeech (Voice SDK Package)

  • sdk (Web Service SDK Package)

Contact Orbita Support to access the above-mentioned files.

SDK installation in Android Studio

  1. Open Android Studio.

  2. Create or open a project in which the SDK has to be installed.

  3. To import the unzipped files, select File > New > Import Module from the main menu.

  4. Browse to the directory where you unzipped the orbitaspeech file.

  5. After selecting the directory, please make sure the module name is “:orbitaspeech”.

  6. Click Finish.

  7. Repeat Step 3 to import the next module.

  8. Browse to the directory where you unzipped the sdk file.

  9. After selecting the directory, please make sure the module name is “:sdk”.

  10. Click Finish.

  11. You should now see both orbitaspeech and sdk modules in the current project directory.
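After a successful import, Android Studio records the modules in settings.gradle; to call into them from your app module you would typically also declare them as dependencies. The snippet below is only a sketch of a typical setup; the ':app' module name and the implementation configuration are assumptions, not something the SDK dictates.

// settings.gradle - the Import Module wizard normally adds these entries
include ':app', ':orbitaspeech', ':sdk'

// app/build.gradle - reference the imported modules from the app module (assumed setup)
dependencies {
    implementation project(':orbitaspeech')
    implementation project(':sdk')
}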

Error Handling

  1. The protobuf Gradle plugin version used by the Orbita SDK is 0.8.2.

  2. If the Gradle version in Android Studio is 8.3 or newer, Gradle sync will fail.

  3. We will have to change the protobuf plugin to a newer version. To do so, open “build.gradle” under <Project name> > orbitaspeech > src.

  4. Select the dependency string ‘com.google.protobuf:protobuf-gradle-plugin:0.8.2’.

  5. Press Alt+Enter to open the quick-fix menu.

  6. Select “Change to <new version number>”.

  7. Click the “Try Again” link at the top of the tab to re-run Gradle sync.
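For reference, the line that changes is the protobuf classpath entry in the buildscript block of that build.gradle. The snippet below is only a sketch; the placeholder is kept because the exact replacement version is whatever the quick-fix suggests.

// Sketch of the relevant buildscript section in the orbitaspeech build.gradle
buildscript {
    dependencies {
        // Old: classpath 'com.google.protobuf:protobuf-gradle-plugin:0.8.2'
        // After the quick-fix it points at the suggested newer version:
        classpath 'com.google.protobuf:protobuf-gradle-plugin:<new version number>'
    }
}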

Using the SDK

Login to Orbita services

Request payload: {"username": "", "password": ""}

// Prepare the user info to get the user name and password
final User userInfo = new User();
userInfo.setUsername(email);
userInfo.setPassword(password);
 
// Start the login task
ApiService.login(getApplicationContext(), userInfo, new ApiService.AuthorizationCallback() {
    @Override
    public void onSuccess() {
        // Login succeeded
    }
    @Override
    public void onFailure(String info) {
        // Show an error message to the user
    }
});

  • The SDK will decode the JSON Web Token (JWT) and save the authorization token in shared preferences.

  • The response will be the decoded JWT.

Sample Response

{
  "_id": "58v72a68434f94a80803f3bc",
  "roles": [
    "admin"
  ],
  "firstName": "John",
  "lastName": "Smith",
  "avatarSrc": "img/avatar0.jpg",
  "attributes": {
    "oauthSettings": {
      "alexa": {
        "userId": "G2M740NPXVIGU5",
        "access": {
          "expiresAt": "2018-05-15T18:36:55+05:30",
          "tokenSecret": "",
          "token": ""
        }
      }
    },
    "email": "johnsmith@example.com",
    "title": "Patient",
    "pinSecurityExpired": "2018-05-03T12:07:46.586Z"
  },
  "token": "",
  "personaType": "655f9e309g0h67381784i15j",
  "personaProfile": {
    "username": "johnsmith@example.com",
    "firstName": "John",
    "lastName": "smith",
    "securityPin": "1234",
    "securityPinInterval": "1",
    "timezone": "America/Chicago"
  },
  "iat": 1526480494,
  "exp": 1526494894
}

Using OrbitaSpeechRecognizer Class for Audio Processing

Post Utterance to Orbita Voice server

  • Instantiate the OrbitaSpeechRecognizer class.

private OrbitaSpeechRecognizer mRecognizer;
mRecognizer = new OrbitaSpeechRecognizer(context.getApplicationContext());
  • Instantiate a listener for the recognizer (here, RecognitionListener is your implementation of OrbitaSpeechRecognizerListener).

private OrbitaSpeechRecognizerListener mRecognizerListener;
mRecognizerListener = new RecognitionListener();
  • Set the listener on the speech recognizer.

mRecognizer.setListener(mRecognizerListener);
  • To start recognizing, use this code:

if (mRecognizer != null) {
    mRecognizer.startListening();
}
  • Once a recognition session starts, it stops automatically when speech ends. If you want to stop recognition manually, use this method:

if (mRecognizer != null) {
    mRecognizer.stopListening();
}
  • We get the partial result of the speech in the onPartialResults method; this method is used to display the text while the user is speaking.

@Override
public void onPartialResults(String text) {
    // Use the partial ‘text’, e.g. to show it while the user is speaking
}
  • onResult delivers the final text output of the recognizer class (a consolidated listener sketch follows this list).

@Override
public void onResult(String text) {
    // this is where we get the final result from the voice recognizer.
}
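To tie the callbacks together, here is a minimal sketch of a listener implementation. It assumes OrbitaSpeechRecognizerListener declares exactly the two callbacks described above; if the interface has additional methods, they would need to be implemented as well.

// Sketch of an app-side listener; the implementation details are illustrative only
public class RecognitionListener implements OrbitaSpeechRecognizerListener {

    @Override
    public void onPartialResults(String text) {
        // Interim transcription; update the UI while the user is speaking
    }

    @Override
    public void onResult(String text) {
        // Final transcription; send it to the Orbita Voice server as the utterance
    }
}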

We send the request with the final result text to the server.

Sample Request:

{"utterance":"launch","sessionID":"wwerfnr9lsdnfskfspj"}

Response:

{
    "orbitaPayload": {
        "payload": {
            "multiagent": {
                "voice": {
                    "sayText": "<p>Hello World! <break strength=\"none\" time=\"5s\"/> How can I help?</p><p> </p> <p>hello how are you</p><p> </p>",
                    "rePrompt": "<p>how can i help?<audio src=\"https://domainname.com/assets/editor/audio/SampleAudio_0.4mbd7504e20-5ec6-11e8-91c4-cf90c900530e.mp3\"/></p>"
                },
                "chat": {
                    "chatText": "<p>Hello World! How can I help?</p>",
                    "rePrompt": "<p>how can I help you?</p>"
                },
                "screen": {
                    "shortTitle": "Hi",
                    "longTitle": "Hi there",
                    "body": "<p>Hello World! How can I help?</p>",
                    "smallImage": "",
                    "largeImage": ""
                },
                "buttons": {
                    "type": "",
                    "name": "buttons",
                    "choices": []
                }
            }
        },
        "type": "4",
        "name": "orbita"
    },
    "text": "Hello World!  How can I help?  hello how are you ",
    "reprompt": "Hello World!  How can I help?  hello how are you ",
    "sessionEnd": false,
    "replaceWord": {
        "two": [
            "to",
            "too"
        ],
        "six": [
            "sex"
        ],
        "ten": [
            "tan",
            "tin"
        ]
    },
    "sayTextAudio": [],
    "audioContent": [],
    "repromptAudio": []
}
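The fields an app typically consumes live under orbitaPayload.payload.multiagent. Below is a minimal sketch of extracting them with org.json (bundled with Android); the variable responseJson is assumed to hold the raw response body shown above, and JSONException handling is omitted for brevity.

// Sketch: pull the voice, chat and session fields out of the service response.
// Assumes 'responseJson' is the JSON string returned by the server. Uses org.json.JSONObject.
JSONObject response = new JSONObject(responseJson);
JSONObject multiagent = response
        .getJSONObject("orbitaPayload")
        .getJSONObject("payload")
        .getJSONObject("multiagent");

String sayText   = multiagent.getJSONObject("voice").getString("sayText");  // SSML to speak
String chatText  = multiagent.getJSONObject("chat").getString("chatText");  // text for a chat UI
String plainText = response.getString("text");                              // plain-text fallback
boolean sessionEnd = response.getBoolean("sessionEnd");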

Play Audio Content

To play the audio content from the Orbita Voice service:

serviceResponse - the response from the Orbita Voice service.

audioContent - the property name for the audio file in the response JSON.

  • We will be receiving the audio as a Base64-encoded string, which needs to be decoded into a byte array (see the playback sketch below).

  • The Delegate method AudioFileDidFinishPlaying will be called after the audio has finished playing.

  • We can use StopAudio to stop the audio player at any time while it is playing.

SpeechAudioProcessor.shared.StopAudio()
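A minimal sketch of the decode-and-play step on Android is shown below. It assumes one entry of audioContent is a Base64 string holding MP3 data; the temp-file approach and MediaPlayer usage are illustrative, not part of the Orbita SDK, and IOException handling is omitted.

// Sketch: decode a Base64 audio string and play it with MediaPlayer.
// Requires android.media.MediaPlayer, android.util.Base64, java.io.File, java.io.FileOutputStream.
byte[] audioBytes = Base64.decode(base64Audio, Base64.DEFAULT);

File tempFile = File.createTempFile("orbita_audio", ".mp3", context.getCacheDir());
try (FileOutputStream out = new FileOutputStream(tempFile)) {
    out.write(audioBytes);
}

MediaPlayer player = new MediaPlayer();
player.setDataSource(tempFile.getAbsolutePath());
player.prepare();
player.start();
player.setOnCompletionListener(mp -> mp.release()); // free the player when playback finishes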

To synthesize text from the Orbita Voice service to speech (TTS):

/**
 * Speaks the given text
 * @param textToSpeak text to speak
 */
public void startTtsUtterance(String textToSpeak) {
    if (mTts != null) {
        mTts.speak(textToSpeak);
    }
}

To stop the utterance manually, you can call this method:

public void stopTtsUtterance() {
    if (mTts != null) {
        mTts.stopSpeaking();
    }
}

In Android, TTS is not part of the SDK; the user has to write the code for speech synthesis.
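One common approach, shown here only as a sketch, is to back mTts with Android's built-in android.speech.tts.TextToSpeech engine. The wrapper below is an assumption about how mTts could be implemented; it simply exposes the speak and stopSpeaking calls used by startTtsUtterance and stopTtsUtterance above.

import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

// Hypothetical TTS wrapper built on the platform TextToSpeech engine (not part of the Orbita SDK)
public class SimpleTts {

    private final TextToSpeech tts;
    private boolean ready = false;

    public SimpleTts(Context context) {
        tts = new TextToSpeech(context.getApplicationContext(), status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
                ready = true;
            }
        });
    }

    public void speak(String textToSpeak) {
        if (ready) {
            // QUEUE_FLUSH replaces anything that is currently being spoken
            tts.speak(textToSpeak, TextToSpeech.QUEUE_FLUSH, null, "orbitaUtterance");
        }
    }

    public void stopSpeaking() {
        tts.stop();
    }

    public void shutdown() {
        tts.shutdown(); // call when the screen is destroyed to release the engine
    }
}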
