Guide to Orbita Voice Mobile SDK (iOS)

The Orbita Voice SDK enables the integration of Orbita Voice services into iOS apps. Building services such as audio processing and speech-to-text conversion from scratch can be time-consuming, so Orbita provides an SDK that bundles the voice-related services a business requires for quick and easy implementation.

Prerequisites

- iOS 10.0 or above

- Xcode 9.0 or above

Orbita Voice SDK Installation

Contact Orbita Support to obtain these files:

  • iOS-SDK.zip

  • iOS-SDK-sample.zip

Creating a new project in Xcode

1. Open Xcode.

2. Select File > New > Project.

3. Select “Single View App” (the default selection) and click “Next”.

4. Fill in the product name and click “Next”.

5. Choose the directory where the project should be created and click “Create”.

6. A new project is created with the product name given in the previous step.

Import the SDK files to the project

1. Unzip the iOS-SDK.zip file.

2. Copy the unzipped folder named “SDK” and paste it into the project’s directory.

3. Move the ‘googleapis.podspec’ file and the ‘google’ folder from the “SDK” folder to the “<Project name>” folder.

4. In Xcode, right-click on the project directory and select “Add files to <Project name>”.

5. Select the folder named “SDK” and click “Add” to add it to the Xcode project.

Pod Installation

1. Go to Terminal and change the directory to the project directory:

“cd /Users/username/Desktop/<Project name>”

2. Type “pod init” and press Enter. This creates a Podfile in the project directory.

3. Open the Podfile in a text editor and add the lines below after “use_frameworks!” (a complete Podfile sketch follows this list):

pod 'Alamofire'
pod 'googleapis', :path => '.'

4. Save the Podfile.

5. Go back to Terminal, type “pod install”, and press Enter. This installs all the dependencies that were added to the Podfile in the previous steps.

6. CocoaPods creates a “Pods” folder, a “Podfile.lock” file, and a “<Project name>.xcworkspace” file in the project directory.

7. Close the project in Xcode and open the ‘<Project name>.xcworkspace’ file from the project directory.
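
For reference, here is a minimal sketch of how the finished Podfile might look; the target name “MyApp” and the platform line are assumptions, since the source shows only the two pod entries:

# Podfile (sketch; the target name and platform line are assumptions)
platform :ios, '10.0'

target 'MyApp' do
  use_frameworks!

  pod 'Alamofire'
  pod 'googleapis', :path => '.'
end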

Build procedure

1. Go to Product > Run in the menu bar. The build will run into errors in the Google STT (speech-to-text) code. Follow the link below to resolve them.

https://github.com/GoogleCloudPlatform/ios-docs-samples/blob/master/speech/Swift/Speech-gRPC-Streaming/BUILDFIXES

On the above-mentioned web page, ignore the first step and proceed with the second step.

2. Run the project again (Product > Run) until the build completes with no errors.

3. Replace the Google API key in SpeechRecognitionService.swift with your own key.
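
The sketch below assumes the key is stored in an API_KEY constant, as in Google’s streaming-speech sample; check SpeechRecognitionService.swift for the exact name:

// In SpeechRecognitionService.swift (sketch; the constant name is an
// assumption based on Google's streaming-speech sample).
let API_KEY = "YOUR_GOOGLE_API_KEY"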

Using the SDK

Login to Orbita services

let servicePayload = ["username": "", "password": ""]
WebServices.sharedManager.LoginRequest(servicePayload as NSDictionary, success: { (response, requestName, status) in
    // Handle a successful login here.
}) { (error, requestName, status) in
    // Handle a failed login here.
}

The SDK decodes the JSON Web Token (JWT) and saves the authorization token in UserDefaults.

The response is the decoded JWT payload.
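
As an illustration, the saved token could later be read back from UserDefaults; the key name “authToken” below is hypothetical, since the source does not document the key the SDK writes to:

import Foundation

// Hypothetical key name; the source does not document the exact
// UserDefaults key the SDK uses.
if let token = UserDefaults.standard.string(forKey: "authToken") {
    print("Saved authorization token: \(token)")
}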

Sample Response

{ "_id": "58e72a68343f94a80803e3ba", "roles": [ "admin" ], "firstName": "Alpesh", "lastName": "Patel", "avatarSrc": "img/avatar0.jpg", "attributes": { "oauthSettings": { "alexa": { "userId": "M2G739YPXVIGU8", "access": { "expiresAt": "2018-05-15T18:36:55+05:30", "tokenSecret": "Your_Token_Secret", "token": "Your_Token" } } }, "email": "alpesh@bebaio.com", "title": "CTO", "pinSecurityExpired": "2018-05-03T12:07:46.586Z" }, "token": "", "personaType": "592e9e508c0f67381784b15a", "personaProfile": { "username": "alpesh@bebaio.com", "firstName": "Alpesh", "lastName": "Patel", "securityPin": "2548", "securityPinInterval": "1", "timezone": "America/Chicago" }, "iat": 1526480494, "exp": 1526494894 }

Using SpeechAudioProcessor Class for Audio Processing

Start the recognition task

The app must request the user’s authorization to use the microphone for voice recognition.

The AuthorizationStatus delegate method returns true when the request is granted and false when it is declined.
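
On iOS this also requires an NSMicrophoneUsageDescription entry in Info.plist. For context, the underlying system request looks like the sketch below (standard AVFoundation API, not taken from the SDK):

import AVFoundation

// Requires an NSMicrophoneUsageDescription entry in Info.plist.
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    // granted mirrors what the SDK reports through AuthorizationStatus.
    print("Microphone permission granted: \(granted)")
}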

To stop recognition, cancel the recognition task. The delegate method recognitionCancel() is called when the task has been canceled, whether by the client app, the user, or the system.

The delegate method recognitionResult provides the recognized text, including non-final hypotheses.

isFinal = true - reported only for the final recognition of an utterance; nothing more about the utterance will be reported.

isFinal = false - reported for all recognitions, including non-final hypotheses.

text - the recognized text from the speech recognizer.
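
A sketch of how these callbacks might be implemented in a delegate; the method names come from this guide, but the parameter lists are assumptions, so check the SDK headers for the exact signatures:

// Sketch of the delegate callbacks named in this guide; parameter
// lists are assumptions.
func recognitionResult(text: String, isFinal: Bool) {
    if isFinal {
        print("Final transcript: \(text)")   // no more updates for this utterance
    } else {
        print("Partial hypothesis: \(text)") // may still change
    }
}

func recognitionCancel() {
    // The recognition task was canceled by the app, the user, or the system.
}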

Post Utterance to Orbita Voice server

sessionID - a random 16-digit string.

UtteranceString - the string from the device’s voice recognition result, sent to Orbita for voice processing.
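
A sketch of preparing these parameters; generating the 16-digit sessionID is plain Swift, while the posting call is left as a commented-out hypothetical, since the source does not show the SDK’s exact method name:

import Foundation

// Build a random 16-digit string to use as the sessionID.
let sessionID = String((0..<16).map { _ in "0123456789".randomElement()! })

// Hypothetical call; the source does not show the SDK's actual
// posting method.
// WebServices.sharedManager.postUtterance(sessionID: sessionID,
//                                         utteranceString: recognizedText)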


Play Audio Content

To play the audio content from the Orbita Voice service:

serviceResponse - the response from the Orbita Voice service.

audioContent - the property name of the audio file in the response JSON.

The audio content is received as a Base64-encoded string, which must be decoded before playback.
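
A minimal sketch of the decoding step, assuming the service response is a JSON dictionary containing the audioContent property described above:

import Foundation

// Extract the Base64-encoded audio string from the response and
// decode it into raw audio data for playback.
func decodeAudioContent(from serviceResponse: [String: Any]) -> Data? {
    guard let base64String = serviceResponse["audioContent"] as? String else {
        return nil
    }
    return Data(base64Encoded: base64String)
}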

The delegate method AudioFileDidFinishPlaying is called after the audio has finished playing.

Use stopAudio to stop the audio player at any time while it is playing.
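
A sketch of these playback hooks; the callback name comes from this guide, but its exact signature and the object that exposes stopAudio are assumptions:

// Sketch; the exact signature is an assumption.
func AudioFileDidFinishPlaying() {
    // Playback completed; update the UI or trigger the next action here.
}

// Stop playback at any time (the owning object is an assumption):
// speechAudioProcessor.stopAudio()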

To synthesize text from the Orbita Voice service to speech (TTS), use the SpeechSynthesizer.

The delegate method SpeechSynthesizer(didFinishSpeaking text: String) is called after the SpeechSynthesizer has finished speaking the text.

Use stopAudio to stop the SpeechSynthesizer at any time while it is speaking.
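
A sketch of the TTS completion callback using the method name given above; the enclosing delegate protocol is not named in the source:

// Called after the SpeechSynthesizer finishes speaking the text.
func SpeechSynthesizer(didFinishSpeaking text: String) {
    print("Finished speaking: \(text)")
}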