Bot Provider V2 node

Bot Provider V2 node lets you integrate the Web chatbot with your project in Experience Designer.

 

Bot Provider V2 node

The Bot Provider V2 node is responsible for integrating the Orbita chatbot with Google Dialogflow.

Property

Description


Name

This field contains the name of the node (if given).

Skill

This field will be populated with the current project name automatically. You cannot edit this property.

NLP Type

Choose the NLP type from the dropdown. By default, it is set to Google.

Enable Transcript Capturing (Chat)

Enable this checkbox if you want to save the transcript of the chatbot conversation.
You can view the chatbot conversation in the content schema named "Transcriptdata".

Raw NLP result output

If you enable this checkbox, you can get the raw response payload from the NLP.

Bot In Parser

The Bot In Parser processes the input data object.
Query string example: /oeapi/bot/float?type=survey&id=423
After the NLP processes the input, msg.req changes. Store anything you will need while processing an intent or launch in msg.payload.originalRequest.

    if (msg.req.query.id) {
        msg.payload.originalRequest.id = msg.req.query.id;
    }
    if (msg.req.query.type) {
        msg.payload.originalRequest.type = msg.req.query.type;
    }

Bot Out Parser

The Bot Out Parser lets you change output payload values, such as the wait time between chat bubbles.

    msg.payload.micInput = false;
    msg.payload.audio = false;
    msg.payload.waitTimeNoAnimation = false;
    msg.payload.waitSettings = { mode: 'dynamic', wpm: 300 };

Text to Speech Config

You can change the output voice settings of your voice agent by changing the properties in the Bot Provider V2 node > Text To Speech Config.

Property

Description


"languageCode": "en-US"

You can select the language accent in which you want the output voice using this property.

You can choose from the below:

Arabic — ar-XA, Bengali (India) — bn-IN, Chinese (Hong Kong) — yue-HK, Czech (Czech Republic) — cs-CZ, Danish (Denmark) — da-DK, Dutch (Netherlands) — nl-NL, English (Australia) — en-AU, English (India) — en-IN, English (UK) — en-GB, English (US) — en-US, Filipino (Philippines) — fil-PH, Finnish (Finland) — fi-FI, French (Canada) — fr-CA, French (France) — fr-FR, German (Germany) — de-DE, Greek (Greece) — el-GR, Gujarati (India) — gu-IN, Hindi (India) — hi-IN, Hungarian (Hungary) — hu-HU, Indonesian (Indonesia) — id-ID, Italian (Italy) — it-IT, Japanese (Japan) — ja-JP, Kannada (India) — kn-IN, Korean (South Korea) — ko-KR, Malayalam (India) — ml-IN, Mandarin Chinese — cmn-CN, Mandarin Chinese (Taiwan) — cmn-TW, Norwegian (Norway) — nb-NO, Polish (Poland) — pl-PL, Portuguese (Brazil) — pt-BR, Portuguese (Portugal) — pt-PT, Romanian (Romania) — ro-RO, Russian (Russia) — ru-RU, Slovak (Slovakia) — sk-SK, Spanish (Spain) — es-ES, Swedish (Sweden) — sv-SE, Tamil (India) — ta-IN, Telugu (India) — te-IN, Thai (Thailand) — th-TH, Turkish (Turkey) — tr-TR, Ukrainian (Ukraine) — uk-UA, Vietnamese (Vietnam) — vi-VN.

"ssmlGender": "FEMALE"

You can also choose the gender for the voice output.

You can choose from FEMALE and MALE.

"name": "en-US-Wavenet-G"

You can try different voice variants with this property.
Refer to the Google Cloud Text-to-Speech documentation on supported voices.

This property will override the language and ssmlGender properties.

"audioEncoding": "MP3"

You can choose one of the following audio encodings:

  1. LINEAR16 - Uncompressed 16-bit signed little-endian samples (Linear PCM).

  2. MP3 - MP3 audio.

  3. OGG_OPUS - Opus encoded audio wrapped in an ogg container.
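Putting the properties above together, a complete Text to Speech Config might look like the following sketch (the specific voice name is only an example):

```
{
    "languageCode": "en-US",
    "ssmlGender": "FEMALE",
    "name": "en-US-Wavenet-G",
    "audioEncoding": "MP3"
}
```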

Output Payload

The sample format of the output payload is as follows.

    {
        "orbitaPayload": {},
        "text": "",
        "reprompt": "",
        "sessionEnd": false,
        "ssml": "",
        "waitTime": 250,
        "ssmlReprompt": "",
        "clearSession": true,
        "replaceWord": {},
        "waitTimeNoAnimation": false,
        "micInput": true,
        "validRegexArray": {},
        "keyboardInput": true,
        "waitSettings": { "mode": "dynamic", "wpm": 300 },
        "audio": false
    }

Property

Description


orbitaPayload

This property carries the content from the Multi-Modal Content editor.

text

This property carries the content from the Chat Text window in the Text tab of the Multi-Modal Content editor.

reprompt

This property carries the content from the Reprompt window in the Text tab of the Multi-Modal Content editor.

sessionEnd

This property is a Boolean, true or false. If the End Session checkbox is enabled, the sessionEnd value is true.

ssml and ssmlReprompt

The Voice tab text, when converted to speech, is in SSML format. In the Voice tab, the content windows and their corresponding payload properties are:

  • Say text window (ssml)

  • Reprompt window (ssmlReprompt)

waitTime

The time (in milliseconds) the bot must wait before displaying the next chat bubble in a sequence. The default value for waitTime is 250.

To change the waitTime, use the following JSON code.
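The original code sample is not reproduced in the source; a minimal sketch for a function node follows, where `msg` is normally supplied by the flow and is stubbed here so the snippet is self-contained:

```javascript
// In Experience Designer this runs inside a function node, where `msg`
// is provided by the flow; it is stubbed here for illustration.
var msg = { payload: {} };

// Wait 500 ms between chat bubbles instead of the default 250 ms.
msg.payload.waitTime = 500;
```

A function node would end with `return msg;` to pass the message downstream.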

clearSession

clearSession gives you a Boolean output, true or false. If this value is true, the current session details are erased. The default value for clearSession is false.

Use the following sample code to set clearSession to true.
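The original sample is not reproduced in the source; a minimal function-node sketch, with `msg` stubbed for illustration:

```javascript
// Sketch of a function node body; `msg` is normally provided by the flow.
var msg = { payload: {} };

// Erase the current session details once this response is delivered.
msg.payload.clearSession = true;
```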

replaceWord

A homophone is a word that is pronounced the same as another word but differs in meaning and/or spelling (such as read and reed, or wind and whined). While giving voice commands to the bot, homophones can be misinterpreted. You can replace such words with the words intended in context using replaceWord.

The following sample code is for a replaceWord array.
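The original array sample is not reproduced in the source, and the exact replaceWord schema is not documented here; the mappings below (misheard word to intended word) are hypothetical:

```javascript
// Sketch only; `msg` is normally provided by the flow, and the
// misheard -> intended word pairs below are hypothetical examples.
var msg = { payload: {} };

msg.payload.replaceWord = {
    "reed": "read",
    "whined": "wind"
};
```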

waitTimeNoAnimation

This property controls the wait time animation. The default value for waitTimeNoAnimation is false.

The following code sets waitTimeNoAnimation to true.
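The original code is not reproduced in the source; a minimal function-node sketch, with `msg` stubbed for illustration:

```javascript
// Sketch only; `msg` is normally provided by the flow.
var msg = { payload: {} };

// Disable the animation shown while the bot waits between bubbles.
msg.payload.waitTimeNoAnimation = true;
```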

micInput

This property controls the microphone input. The default value for micInput is true.

micInput provided in this node will take priority over the Flow Studio Directive.

The following code shows how to set micInput to false.
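The original code is not reproduced in the source; a minimal function-node sketch, with `msg` stubbed for illustration:

```javascript
// Sketch only; `msg` is normally provided by the flow.
var msg = { payload: {} };

// Disable the microphone input for this response.
msg.payload.micInput = false;
```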

validRegexArray

This property lets you validate user input against regular expressions. In the sample payload above, its default value is an empty object.

The following code sets the validRegexArray array.
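The original code is not reproduced in the source, and the expected shape of validRegexArray is not documented here; the key and pattern below are hypothetical:

```javascript
// Sketch only; `msg` is normally provided by the flow, and the
// field name and regex pattern are hypothetical examples.
var msg = { payload: {} };

msg.payload.validRegexArray = {
    "zipCode": "^[0-9]{5}$"
};
```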

keyboardInput

This property controls the text input field. The default value for keyboardInput is true.

If this value is set to false, the text input field will be hidden.
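A minimal function-node sketch for hiding the text input field, with `msg` stubbed for illustration:

```javascript
// Sketch only; `msg` is normally provided by the flow.
var msg = { payload: {} };

// Hide the text input field for this response.
msg.payload.keyboardInput = false;
```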

audio

Audio output from the chatbot can be controlled using this property.

Setting the property value to true will enable the speaker and generate audio output even if the user has manually disabled the speaker using the speaker icon at the top right corner of the chatbot.

Setting the property value to false will hide the speaker icon and will not generate the audio output.

Sample Code:
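The sample code is not reproduced in the source; a sketch assuming the same function-node pattern, with `msg` stubbed for illustration:

```javascript
// Sketch only; `msg` is normally provided by the flow.
var msg = { payload: {} };

// Force audio output on, even if the user muted the speaker icon.
msg.payload.audio = true;
```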

 

Example flow in Experience Designer

The following flow changes the default values of micInput and keyboardInput properties.

The diagram has two flows.

  • The first flow integrates the Experience Designer with the Web chatbot.

  • The second flow has a function node that sets the property values for micInput and keyboardInput.

In this example, when the stop intent is triggered, the bot should say the content from the Say node and should not take any input; that is, the Text input field and the Mic input should be disabled and not visible.

The output shows that the launch and SayHello intents are triggered, and the respective output messages are captured in the following screenshot.

Refer to Orbita Web Chat to set up Orbita Web Chat on your website.
