Voice Call Properties

You can enable voice interactions with your virtual assistant so that users can talk to it. To do this, you need to enable one of the voice channels, such as IVR, Twilio, or IVR-AudioCodes, and publish the bot on those channels.

There are some Voice Properties you can configure to streamline the user experience across the above-mentioned channels. These configurations can be made at multiple levels:

  • Bot level – at the time of channel enablement;
  • Component level – once you enable the voice properties at the bot level, you can define the behavior for various components, such as:
    • Entity Node
    • Message Node
    • Confirmation Node
    • Standard Responses
    • Welcome Message

This document details the voice call properties and how they vary across channels.

Channel Settings

Each of the following settings lists the channels it applies to.

IVR Data Extraction Key
Specify the syntax to extract the filled data.
For Entity and Confirmation nodes, you can define an extraction rule that overrides this channel-level setting. This is particularly helpful with ASR engines that return transcription results in a different format depending on the input type. For example, the VXML result can contain the word format of a credit card number in one key and the numeric format in another key, as in the illustrative snippet below.
Applies to: IVR
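
The following is a minimal, illustrative VoiceXML sketch (not a Kore.ai-specific definition) of how the same input can be returned under two keys – the interpreted digits and the words as spoken. The field and variable names are hypothetical; the extraction key you configure should point at whichever key your ASR engine actually returns.

  <!-- Illustrative only: field and variable names are hypothetical. -->
  <form id="collectCard">
    <var name="cardNumberWords"/>
    <field name="cardNumber">
      <grammar src="https://example.com/grammars/cardnumber.grxml"
               type="application/srgs+xml"/>
      <prompt>Please say or enter your card number.</prompt>
      <filled>
        <!-- cardNumber holds the interpreted (numeric) value;
             cardNumber$.utterance holds the words as spoken. -->
        <assign name="cardNumberWords" expr="cardNumber$.utterance"/>
        <submit next="https://example.com/ivr/handler"
                namelist="cardNumber cardNumberWords"/>
      </filled>
    </field>
  </form>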

End of Conversation Behavior (post ver7.1)
Defines the bot behavior at the end of the conversation. The options are:
  • Trigger End of Conversation Behavior and configure the Task, Script, or Message to be initiated. See here for details.
  • Terminate the call.
Applies to: IVR, Twilio, IVR-AudioCodes

Call Termination Handler
Select the name of the Dialog task that you want to use as the call termination handler when the call ends in error.
Applies to: IVR, Twilio, IVR-AudioCodes

Call Control Parameters
Click Add Parameter, then enter property names and values that define the call behavior; see the illustrative snippet below.
Note: You should use these properties and values in the VXML files for all call flows in the IVR system, and as Session Parameters in the AudioCodes channel.
Applies to: IVR, IVR-AudioCodes
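
As a rough sketch of how such parameters typically surface on the IVR side, the VoiceXML fragment below declares two standard properties; the names and values are examples only, and the actual set depends on your IVR system and call flows.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Illustrative only: mirror the parameters you add in the bot here,
       using the names and values your call flows expect. -->
  <vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
    <property name="timeout" value="10s"/>
    <property name="inputmodes" value="dtmf voice"/>
    <form id="main">
      <block>
        <prompt>Welcome.</prompt>
      </block>
    </form>
  </vxml>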

ASR Confidence Threshold
  • Threshold Key – The variable where the ASR confidence levels are stored. This field is pre-populated; do not change it unless you are familiar with the internal workings of VXML.
    Applies to: IVR
  • Define ASR threshold confidence – A value in the range 0 to 1.0 that defines when the IVR system hands over control to the bot.
    Applies to: IVR

Timeout Prompt
Enter the default prompt text to play when the user does not provide input within the timeout period. If you do not specify a Timeout Prompt for a node, this prompt takes its place.
Applies to: IVR, Twilio, IVR-AudioCodes

Grammar
Define the grammar that should be used to interpret the user's utterance:
  • The input type can be Speech or DTMF.
  • The source of the grammar can be Custom or Link.
    • For Custom, write the VXML grammar in the text box; see the illustrative example below.
    • For Link, enter the URL of the grammar. The URL should be accessible to the IVR system so that the grammar can be fetched at runtime while the call is executing.
See the Configuring Grammar section below for detailed configuration of the grammar syntax.
Note: If the Enable Transcription option is enabled for the bot and a transcription engine source is specified, defining a grammar is not mandatory.
Applies to: IVR
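
As a sketch of what a Custom grammar can look like, the following is a minimal SRGS XML grammar for a yes/no confirmation in speech mode; the rule name and vocabulary are illustrative and should be extended to cover the values your node needs to capture.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Minimal illustrative speech grammar for a yes/no confirmation. -->
  <grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0"
           xml:lang="en-US" mode="voice" root="confirm"
           tag-format="semantics/1.0">
    <rule id="confirm" scope="public">
      <one-of>
        <item>yes <tag>out = "yes";</tag></item>
        <item>yeah <tag>out = "yes";</tag></item>
        <item>no <tag>out = "no";</tag></item>
        <item>nope <tag>out = "no";</tag></item>
      </one-of>
    </rule>
  </grammar>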

No Match Prompt
Enter the default prompt text to play when the user's input does not match the defined grammar. If you do not specify a No Match Prompt for a node, this prompt takes its place.
Applies to: IVR

Barge-In
Select whether to allow user input while a prompt is in progress. If you select No, the user cannot provide input until the IVR completes the prompt.
Applies to: IVR, Twilio, IVR-AudioCodes

Timeout
Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds.
Applies to: IVR, Twilio, IVR-AudioCodes

No. of Retries
Select the maximum number of retries to allow, from 1 up to 10.
Applies to: IVR, Twilio, IVR-AudioCodes

Log
Select Yes if you want to send the chat log to the IVR system.
Applies to: IVR

Dialog Node Settings

On the Voice Call Properties panel for a node, you can enter node-specific prompts and grammar, as well as parameters for call-flow behavior such as timeout and retries.

Voice Call Properties apply only to the following nodes and message types:

  • Entity Node
  • Message Node
  • Confirmation Node
  • Standard Responses
  • Welcome Message
Note: Most settings are the same for all nodes, with a few exceptions.

Voice Call Settings Field Reference
The following sections describe each IVR setting in detail, including its applicability to nodes, default values, and other key information.

Notes on Prompts: 

  • You can enter prompts in one of these formats: plain text, script, or the file location of an audio file. To define JavaScript or attach an audio file, click the icon before the prompt text box and select a mode. By default, the mode is set to Text.
  • You can enter more than one prompt message of different types and define their sequence by dragging and dropping them.
  • Multiple prompts are useful when a prompt has to be played more than once: because the prompts are played in order, repetition is avoided.

Each of the following settings lists the nodes and channels it applies to.

Initial Prompts
Prompts that are played when the IVR first executes the node. If you do not enter a prompt for a node, the node's default user prompt plays instead. If you do not enter a prompt for Standard Responses or the Welcome Message, the default Standard Response or Welcome Message plays instead.
Applies to nodes: Entity, Confirmation, and Message nodes; Standard Responses and Welcome Message
Applies to channels: IVR, Twilio, AudioCodes

Timeout Prompts
Prompts that are played on the IVR channel when the user has not given any input within the specified time. If you do not enter a prompt for a node, the default Error Prompt of the node is played. Standard Responses and the Welcome Message have a default Timeout Prompt that plays if you do not define one.
Applies to nodes: Entity and Confirmation nodes; Standard Responses and Welcome Message
Applies to channels: IVR, Twilio, AudioCodes

No Match Prompts
Prompts that are played on the IVR channel when the user's input does not match any value in the defined grammar. If you do not enter a prompt here, or select the No Grammar option for an Entity or Confirmation node, the default Error Prompt of the node is played. Standard Responses and the Welcome Message have a default No Match Prompt that plays if you do not enter one.
Applies to nodes: Entity and Confirmation nodes; Standard Responses and Welcome Message
Applies to channels: IVR

Error Prompts
Prompts that are played on the IVR channel when the user input is not a valid value for the entity type. If you do not enter a prompt here, the default Error Prompt of the node is played.
Applies to nodes: Entity and Confirmation nodes
Applies to channels: IVR, Twilio, AudioCodes

Grammar
Define the grammar that should be used to interpret the user's utterance:
  • The input type can be Speech or DTMF; see the DTMF example below.
  • The source of the grammar can be Custom or Link.
    • For Custom, write the VXML grammar in the text box.
    • For Link, enter the URL of the grammar. The URL should be accessible to the IVR system so that the grammar can be fetched at runtime while the call is executing.
See the Configuring Grammar section below for detailed configuration of the grammar syntax.
Note: If the Enable Transcription option is enabled for the bot and a transcription engine source is specified, defining a grammar is not mandatory.
Applies to nodes: Confirmation node; Standard Responses and Welcome Message
Applies to channels: IVR, Twilio
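
To complement the speech grammar shown earlier, the following is a minimal, illustrative SRGS grammar in DTMF mode that maps 1 to yes and 2 to no; the digit assignments are assumptions for the example.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- Minimal illustrative DTMF grammar: 1 maps to yes, 2 maps to no. -->
  <grammar xmlns="http://www.w3.org/2001/06/grammar" version="1.0"
           mode="dtmf" root="confirm" tag-format="semantics/1.0">
    <rule id="confirm" scope="public">
      <one-of>
        <item>1 <tag>out = "yes";</tag></item>
        <item>2 <tag>out = "no";</tag></item>
      </one-of>
    </rule>
  </grammar>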

Advanced Controls
These properties override the properties set in the Bot IVR Settings page.

Timeout
Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds. The default value is the same as defined in the Bot IVR Settings page.
Applies to nodes: N/A
Applies to channels: IVR, Twilio, AudioCodes

No. of Retries
Select the maximum number of retries to allow, from 1 up to 10. The default value is the same as defined in the Bot IVR Settings page.
Applies to nodes: N/A
Applies to channels: IVR, Twilio, AudioCodes

Behavior on Exceeding Retries (applies only to the Entity node)
Define the behavior when either the timeout or the number of retry attempts exceeds the specified limit. The options include:
  • Invoke Call Termination Handler
  • Initiate Dialog: Select a Dialog task from the list of bot tasks.
  • Jump to a specific node in the current task: Select a node from the list of nodes in the current Dialog task.
Post v7.3, when transcription is enabled, the platform also triggers this behavior when the entity error count is exceeded.
Applies to nodes: N/A
Applies to channels: IVR, Twilio, AudioCodes

Barge-In
Select whether to allow user input while a prompt is in progress. If you select No, the user input is not considered until the prompt is completed. The default value is No.
Applies to nodes: N/A
Applies to channels: IVR, Twilio, AudioCodes

Call Control Parameters
Click Add Property, then enter property names and values to use in the VXML definition in the IVR system and as Session Parameters in the AudioCodes channel. Values defined for a node or a standard response override the global Call Control Parameters defined in the Bot IVR/AudioCodes settings page.
Applies to nodes: N/A
Applies to channels: IVR, AudioCodes

Log
Select Yes if you want to send the chat log to the IVR system. The default value is No.
Applies to nodes: N/A
Applies to channels: IVR

Recording
Define the state of recording to be initiated. The default value is Stop.
Applies to nodes: N/A
Applies to channels: IVR

Configuring Grammar

You need to define at least one speech grammar for the IVR system; the platform does not consider any grammar by default. This section walks you through the steps needed to configure a grammar system so that the bot can function on the IVR system.

Typically, for an IVR-enabled bot, the user's speech utterance is vetted and parsed against the grammar at the IVR system before being passed to the bot.

Kore.ai supports the following Grammar systems:

  • Nuance
  • Voximal
  • UniMRCP

Each one requires its own configuration.

Nuance

To use grammar syntax rules from the Nuance Speech Recognition System, you need to obtain a license from Nuance. Once you register and obtain a license, you are given access to two files – dlm.zip and nle.zip. Ensure that the paths at which these files are hosted are accessible to the bot.

Configurations:

  1. Set Enable Transcription to No.
  2. In the Grammar section:
    • Select the Speech or DTMF option as per your requirement.
    • In the text box used to define the VXML grammar, enter the path to the dlm.zip file. The URL is of the format: http://nuance.kore.ai/downloads/kore_dlm.zip?nlptype=krypton&dlm_weight=0.2&lang=en-US
    • Replace the above path according to your setup.
    • The language code “lang=en-US” also depends on your setup.
  3. Click Add Grammar to add another path, this time pointing to the nle.zip file, and follow the steps above.
  4. Save the settings.

Voximal/UniMRCP

To use grammar syntax rules from Voximal or UniMRCP, you need to specify the transcription source.

Configurations:

  1. Set Enable Transcription to Yes.
  2. In the Transcription engine source text box that appears:
    • for Voximal, enter “builtin:grammar/text”
    • for UniMRCP, enter “builtin:grammar/transcribe”
  3. You can leave the Grammar section blank; the transcription source URI above handles the syntax and grammar vetting of the speech. See the illustrative snippet after these steps.
  4. Save the settings.
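
For context, built-in grammar URIs of this kind are referenced on the IVR side in the same way as any other grammar source. The fragment below is a minimal, illustrative VoiceXML field using the Voximal URI above; the field name and prompt are hypothetical, and your IVR platform may wire the transcription source differently.

  <!-- Illustrative only: a field that relies on the built-in transcription
       grammar instead of a custom SRGS definition. -->
  <field name="userInput">
    <grammar src="builtin:grammar/text"/>
    <prompt>How can I help you today?</prompt>
  </field>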