
The Bot Provider V2 node lets you integrate the Web chatbot with your project in Experience Designer.

Bot Provider V2 node

  • Name. This field contains the name of the node (if given).

  • Skill. This field will be populated with the current project name automatically. You cannot edit this property.

  • NLP Type. You can choose one of the NLPs (Google, Lex, Cortana) from the dropdown.

  • Access Token. Locate the access token in your Google Agent settings; see Developer access token in How do I set up Orbita Web Chat on my site?

  • Raw NLP result output. If this checkbox is checked, the node outputs the raw response payload from the NLP that you have chosen.

  • Bot In Parser. Lets you override the input properties passed to the Bot Provider V2 node, such as the access token. Sample code:

    msg.payload = {
      "query": msg.orginalPayload.text || msg.orginalPayload.utterance,
      //"sessionId": msg.payload.sessionId,
      "originalRequest": {
        "source": "orbita",
        "data": {
          "user": {
            "accessToken": msg.orginalPayload.accessToken || '',
            "orbitaToken": msg.orginalPayload.orbitaToken || '',
            "clearSession": "",
            "audio": msg.orginalPayload.audio
          }
        }
      }
    }
  • Bot Out Parser. Lets you change the output payload values, such as the wait time between bubbles or showing and hiding the keyboard input.
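
For example, a minimal Bot Out Parser sketch (the property names match the output payload documented below; the values shown are illustrative, not defaults):

    // Slow the bubble sequence and hide the keyboard so the user answers by voice
    msg.payload.waitTime = 1000        // milliseconds between chat bubbles
    msg.payload.keyboardInput = false  // hide the text input field
    return msg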

Output Payload

The sample format of the output payload is as follows.

{
  "orbitaPayload"       : {},
  "text"                : "",
  "reprompt"            : "",
  "sessionEnd"          : false,
  "ssml"                : "",
  "waitTime"            : 250,
  "ssmlReprompt"        : "",
  "clearSession"        : false,
  "replaceWord"         : {},
  "waitTimeNoAnimation" : false,
  "micInput"            : true,
  "validRegexArray"     : {},
  "keyboardInput"       : true
}

orbitaPayload

The following payload displays the contents from the Multi-Modal Content editor.

msg.payload.orbitaPayload.payload.multiagent

text

The following payload displays the content from the Chat Text window in the Text tab of the Multi-Modal Content editor.

msg.payload.text

reprompt

The following payload displays the content from the Reprompt window in the Text tab of the Multi-Modal Content editor.

msg.payload.reprompt

sessionEnd

The following property gives a Boolean value, true or false. If the End Session checkbox is enabled, then the sessionEnd value is true.

msg.payload.sessionEnd

ssml and ssmlReprompt

Text in the Voice tab is converted to speech using SSML format. The content windows and their corresponding payloads are:

  • Say text window

    msg.payload.ssml
  • Reprompt window

    msg.payload.ssmlReprompt
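
Both values can be overridden in a Bot Out Parser function. A minimal sketch, assuming standard <speak> SSML markup (the text shown is illustrative):

msg.payload.ssml = "<speak>Welcome back!</speak>"
msg.payload.ssmlReprompt = "<speak>Are you still there?</speak>"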

waitTime

The time (in milliseconds) the bot waits before displaying the next chat bubble in a sequence. The default value for waitTime is 250.

msg.payload.waitTime

To change the waitTime, use the following code.

msg.payload.waitTime = 1000

clearSession

The clearSession property gives you a Boolean output, true or false. If this value is true, the current session details are erased. The default value for clearSession is false.

msg.payload.clearSession

Use the following sample code to set clearSession to true.

msg.payload.clearSession = true

replaceWord

A homophone is a word that is pronounced the same as another word but differs in meaning and/or spelling (such as to, too, and two). When giving voice commands to the bot, homophones can be misinterpreted. You can use replaceWord to replace such misrecognized words with the intended words in context.

msg.payload.replaceWord

The following sample code shows a replaceWord map; each key is the intended word, and its value lists the homophones to replace.

msg.payload.replaceWord =  
  {
    "one"   : ["won"],
    "two"   : ["to", "too"],
    "four"  : ["for","fore"],
    "six"   : ["sex"],
    "eight" : ["ate"],
    "ten"   : ["tan", "tin"]
  }

waitTimeNoAnimation

The status of wait time animation is shown in the following payload. The default value for waitTimeNoAnimation is false.

msg.payload.waitTimeNoAnimation

The following code sets waitTimeNoAnimation to true.

msg.payload.waitTimeNoAnimation = true

micInput

The status of the microphone input setting is shown in the following payload. The default value for micInput is true.

The micInput value provided in this node takes priority over the Flow Studio Directive.

msg.payload.micInput

The following code shows how to set micInput to false.

msg.payload.micInput = false

validRegexArray

The validRegexArray payload holds regular expressions for validating user input against expected answers. The default value for validRegexArray is an empty object.

msg.payload.validRegexArray

The following code sets the validRegexArray. The sample regex matches spoken variants of "Q1", such as "queue one" or "cue wine".

msg.payload.validRegexArray = 
  {
    "Q1":["\\b(q|cute|queue|cue|quarter|Q on|queue on|21|key wine|key one|you on|you one|into one)\\s+(one|fun|wine)\\b"],
  }

keyboardInput

The status of the text input field is shown in the following payload. The default value for keyboardInput is true.

msg.payload.keyboardInput

If this value is set to false, the text input field will be hidden.

msg.payload.keyboardInput = false

audio

Audio output from the chatbot can be controlled using this property.

msg.payload.audio

Setting the property value to true will enable the speaker and generate audio output even if the user has manually disabled the speaker using the speaker icon at the top right corner of the chatbot.

Setting the property value to false will hide the speaker icon and will not generate the audio output.

Sample Code:

msg.payload.audio = false

Example flow in Experience Designer

The following flow changes the default values of micInput and keyboardInput properties.

The diagram has two flows.

  • The first flow integrates the Experience Designer with the Web chatbot.

  • The second flow has a function node that sets the property values for micInput and keyboardInput.

In this example, when the stop intent is triggered, the bot should speak the content from the Say node and should not take any input; that is, the text input field and the mic input should be disabled and not visible.
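
A minimal sketch of that function node (the property names are documented above; the wiring follows this example):

// Stop intent: after the Say node content, block further input
msg.payload.micInput = false      // hide the mic input
msg.payload.keyboardInput = false // hide the text input field
return msg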

When the launch and SayHello intents are triggered, the respective output messages are captured in the following screenshot.

Refer to Orbita Web Chat to set up Orbita Web Chat on your website.
