How do I create a Simple FAQ?
A Simple FAQ is a set of questions that share a common answer. For example, if a user asks:
What are the symptoms of diabetes
What are diabetes symptoms
You can specify the answer as “Diabetes is complicated. You should check with your doctor if you have concerns.”
You can specify as many questions as you want, and make the answer as long or short as you like. For example:
When was Orbita founded
When did Orbita launch
When did Orbita start
When was the company founded
How long has the company been around
How long has the company been in business
Response: “Orbita was founded in 2015”
Configuring Simple FAQ
Experience Manager
A Simple FAQ is a quick way to create a question and answer without doing any coding.
You can find Simple FAQ in the side menu within a project.
The Simple FAQ screen lists all the existing Simple FAQs.
To create a new Simple FAQ, click Add. The Simple FAQ definition screen appears.
The top section of the Simple FAQ definition specifies the following information (an illustrative sketch follows the list):
Simple FAQ. Enter the name of the Simple FAQ, such as CompanyInfo.
Intent Name. Enter the name of the Intent. (This is automatically entered as the same as the Simple FAQ name.)
User Says. Specify one or more ways a person might ask the question for the answer you define. Click Add Utterance for each separate way a person might express the intent, or click the double arrow to bulk-add utterances.
NLP. Choose the provider to which you want to deploy the Simple FAQ.
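Conceptually, the top section maps a set of utterances to a single intent. The following JavaScript sketch is purely illustrative; the field names are assumptions, not Orbita's actual export format:

// Illustrative only: the shape of a Simple FAQ's top section
const simpleFaq = {
  name: "CompanyInfo",           // Simple FAQ name
  intentName: "CompanyInfo",     // defaults to the Simple FAQ name
  userSays: [                    // utterances added one by one or in bulk
    "When was Orbita founded",
    "When did Orbita launch",
    "When did Orbita start"
  ],
  nlp: "yourNlpProvider"         // provider the FAQ is deployed to
};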
The bottom section of the Dialog definition specifies the Response.
If you choose the Content Repository option, click the Content link. A Browse Content dialog box appears. Select a content type from the dropdown, and then select the data item that contains the response. For more information, see How do I create a content type and content item?
If you choose the Custom Content option, specify the response in the Say Text field.
You can specify responses for Voice, Text, Screen, and Button in the multimodal content editor to cover all devices that use the voice assistant.
The multimodal content editor contains the following fields; an illustrative sketch of the resulting response structure follows the list. For more information, see What is the Multimodal Content Editor?
Voice. Select this tab for audio responses.
Say Text. Enter content for the voice assistant to speak.
Reprompt. Enter content for the voice assistant to speak when there is no response, or when the response was not understood.
Text. Select this tab for textual responses, such as for an automatic chatbot.
Chat Text. Enter content for the chatbot or other text display.
Reprompt. Enter content to display when there is no response, or when the response was not understood.
Screen. Select this tab for responses on a display device.
Short Title. Enter a short title to display on the screen.
Long Title. Enter a long title to display on the screen.
Body. Enter the body content for the screen display.
Image Small. Click Upload to open the Asset browser where you can select an image and click OK. The display device uses either the small image or the large image, depending on its capability.
Image Large. Click Upload to open the Asset browser where you can select an image and click OK.
Button. Select this tab to define push-button responses, which are useful for chatbots.
Value. Enter the value of the button.
Text. Enter the text on the button (such as Upper Back, Lower Back, Neck, and so on).
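To make the tabs concrete, here is an illustrative sketch of the kind of multimodal response these fields describe. The property names are assumptions for illustration only, not Orbita's actual schema:

// Illustrative only: one response rendered for every modality
const response = {
  voice: {
    sayText: "Orbita was founded in 2015.",
    reprompt: "Sorry, I didn't hear you. When was Orbita founded?"
  },
  text: {
    chatText: "Orbita was founded in 2015.",
    reprompt: "Sorry, I didn't understand that."
  },
  screen: {
    shortTitle: "Orbita",
    longTitle: "About Orbita",
    body: "Orbita was founded in 2015.",
    imageSmall: "orbita-small.png",  // selected from the Asset browser
    imageLarge: "orbita-large.png"   // used when the device supports it
  },
  buttons: [
    { value: "upperBack", text: "Upper Back" },
    { value: "lowerBack", text: "Lower Back" },
    { value: "neck", text: "Neck" }
  ]
};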
Click Save when you complete the Dialog.
You can use only images that are in the Asset browser. To upload an image from your local system to the Asset browser, click Browse. After the image is added, you can select it and click OK.
Experience Designer
In the Experience Designer, search for the Dialog Request node and drop it onto the canvas.
Connect the Dialog Request node to a Say node and a Response node to return the output in your chatbot.
In Experience Manager > Simple FAQ, if you chose the Custom Content option, use the following path in the Say node to get the output from the chatbot.
msg.payload.dialogData.voiceResponse.contentLib.voice.sayText
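For example, instead of referencing the path in the Say node directly, you could read it in a Node-RED function node placed before the Response node. This is a minimal sketch assuming the msg shape above; where the text is routed (msg.sayText here) is an assumption for illustration:

// Function node sketch: pull the Custom Content say text off the message
const lib = msg.payload.dialogData.voiceResponse.contentLib;
msg.sayText = (lib && lib.voice && lib.voice.sayText) ||
    "Sorry, I don't have an answer for that.";
return msg;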
If you chose the Content Repository option, the content from the Content Repository is available at the following path.
{{msg.payload.dialogData.voiceResponse.contentLib.contentRef.sample.voice.sayText}}
If you use the Date and time schema type, the output is available at the following path.
{{msg.payload.dialogData.voiceResponse.contentLib.contentRef.time}}
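If one flow has to handle all three cases, a function node can check which path is populated. This is a sketch under the same assumptions as above, with msg.sayText again an illustrative output property:

// Function node sketch: pick whichever response path is present
const lib = msg.payload.dialogData.voiceResponse.contentLib;
const fromRepo = lib.contentRef && lib.contentRef.sample &&
    lib.contentRef.sample.voice && lib.contentRef.sample.voice.sayText;
const fromCustom = lib.voice && lib.voice.sayText;
const fromTime = lib.contentRef && lib.contentRef.time;  // Date and time schema type
msg.sayText = fromRepo || fromCustom || fromTime || "";
return msg;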