{"id":24745,"date":"2020-09-28T06:21:52","date_gmt":"2020-09-28T05:21:52","guid":{"rendered":"https:\/\/multisite.korebots.com\/v9-0\/?p=24745"},"modified":"2021-07-12T09:29:44","modified_gmt":"2021-07-12T08:29:44","slug":"voice-call-properties","status":"publish","type":"post","link":"https:\/\/multisite.korebots.com\/v9-0\/docs\/bots\/bot-builder-tool\/dialog-task\/voice-call-properties\/","title":{"rendered":"Voice Call Properties"},"content":{"rendered":"<section class=\"l-section wpb_row height_auto\"><div class=\"l-section-h i-cf\"><div class=\"g-cols vc_row via_grid cols_1 laptops-cols_inherit tablets-cols_inherit mobiles-cols_1 valign_top type_default stacking_default\"><div class=\"wpb_column vc_column_container\"><div class=\"vc_column-inner\"><div class=\"wpb_text_column\"><div class=\"wpb_wrapper\"><p>You can enable voice interaction with your virtual assistant, i.e., users can talk to the virtual assistant. For this, you need to enable one of the voice channels like <a href=\"\/docs\/bots\/advanced-topics\/ivr-integration\/ivr-integration\/\" target=\"_blank\" rel=\"noopener noreferrer\">IVR<\/a>, <a href=\"\/docs\/bots\/channel-enablement\/adding-the-twilio-voice-channel\/\" target=\"_blank\" rel=\"noopener noreferrer\">Twilio<\/a>, <a href=\"\/docs\/bots\/channel-enablement\/adding-the-ivr-audiocodes-channel\/\" target=\"_blank\" rel=\"noopener noreferrer\">IVR-AudioCodes<\/a>, etc and publish the bot on those channels.<\/p>\n<p>There are some Voice Properties you can configure to streamline the user experience across the above-mentioned channels. 
These configurations can be done at multiple levels:<\/p>\n<ul>\n<li>Bot level &#8211; at the time of channel enablement;<\/li>\n<li>Component level &#8211; once you enable the voice properties at the bot level, you can define the behavior for various components like:\n<ul>\n<li>Entity Node<\/li>\n<li>Message Node<\/li>\n<li>Confirmation Node<\/li>\n<li>Standard Responses<\/li>\n<li>Welcome Message<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>This document details the voice call properties and how they vary across these channels.<\/p>\n<\/div><\/div><div class=\"w-separator size_small with_line width_default thick_1 style_solid color_border align_center\"><div class=\"w-separator-h\"><\/div><\/div><div class=\"wpb_text_column\"><div class=\"wpb_wrapper\"><h2><span class=\"ez-toc-section\" id=\"Channel_Settings\"><\/span>Channel Settings<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<th>Applicable<br \/>\nto<br \/>\nChannel<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>IVR Data Extraction Key<\/td>\n<td>Specify the syntax to extract the filled data.<\/p>\n<p>For Entity and Confirmation nodes, you can define the extraction rule overriding the channel-level setting. This is particularly helpful with ASR engines that provide transcription results in a different format based on the input type. For example, VXML can contain the word format of the credit card in one key and the number format in another key.<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>End of Conversation Behavior<br \/>\n(post ver7.1)<\/td>\n<td>This property can be used to define the bot behavior at the end of the conversation. The options are:<\/p>\n<ul>\n<li>Trigger End of Conversation Behavior and configure the Task, Script or Message to be initiated. 
<a href=\"\/docs\/bots\/advanced-topics\/event-based-bot-actions\/#End_of_Conversation\" target=\"_blank\" rel=\"noopener noreferrer\">See here for details<\/a>.<\/li>\n<li>Terminate the call.<\/li>\n<\/ul>\n<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Call Termination Handler<\/td>\n<td>Select the name of the Dialog task that you want to use as the call termination handler when the call ends in error.<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Call Control Parameters<\/td>\n<td>Click <b>Add Parameter<\/b>. Enter property names and values to use in defining the call behavior.<\/p>\n<div class=\"alert alert-info\"><b>Note<\/b>: You should use these properties and values in the VXML files for all call flows in the IVR system and Session Parameters in AudioCodes channel.<\/div>\n<\/td>\n<td>IVR,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td colspan=\"3\">ASR Confidence Threshold<\/td>\n<\/tr>\n<tr>\n<td>Threshold Key<\/td>\n<td>This is the variable where the ASR confidence levels are stored. This field is pre-populated, do not change it unless you are aware of the internal working of VXML.<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>Define ASR threshold confidence<\/td>\n<td>In the range between 0 to 1.0 which defines when the IVR system hands over the control to the Bot.<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>Timeout Prompt<\/td>\n<td>Enter the default prompt text to play when the user doesn\u2019t provide the input within the timeout period. 
If a node does not have its own Timeout Prompt, this default prompt is used.<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Grammar<\/td>\n<td>Define the grammar that should be used to detect the user\u2019s utterance.<\/p>\n<ul>\n<li>The input type can be Speech or DTMF.<\/li>\n<li>Source of grammar can be Custom or Link:\n<ul>\n<li>For Custom, write the VXML grammar in the textbox.<\/li>\n<li>For Link, enter the URL of the grammar. The URL should be accessible to the IVR system so that the resource can be accessed while executing the calls at runtime.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><a href=\"#Configuring_Grammar\">See below for a detailed configuration for Grammar syntax<\/a>.<br \/>\n<b>Note<\/b>: If the\u00a0<b>Enable Transcription\u00a0<\/b>option is enabled for the bot and the source of the transcription engine is specified, defining grammar isn\u2019t mandatory.<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>No Match Prompt<\/td>\n<td>Enter the default prompt text to play when user input is not present in the defined grammar. If a node does not have its own <em>No Match Prompt<\/em>, this default prompt is used.<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>Barge-In<\/td>\n<td>Select whether you want to allow user input while a prompt is in progress. If you select no, the user cannot provide input until the IVR completes the prompt.<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Timeout<\/td>\n<td>Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds.<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td>No. of Retries<\/td>\n<td>Select the maximum number of retries to allow. 
You can select from 1 to 10 retries.<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nIVR-AudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Log<\/td>\n<td>Select <b>Yes<\/b> if you want to send the chat log to the IVR system.<\/td>\n<td>IVR<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div><\/div><div class=\"wpb_text_column\"><div class=\"wpb_wrapper\"><h2><span class=\"ez-toc-section\" id=\"Dialog_Node_Settings\"><\/span>Dialog Node Settings<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>On the Voice Call Properties panel for a node, you can enter node-specific prompts and grammar, as well as parameters for call-flow behavior such as time-out and retries.<\/p>\n<p>Voice Call Properties apply only to the following nodes and message types:<\/p>\n<ul>\n<li>Entity Node<\/li>\n<li>Message Node<\/li>\n<li>Confirmation Node<\/li>\n<li>Standard Responses<\/li>\n<li>Welcome Message<\/li>\n<\/ul>\n<div class=\"alert alert-info\"><b>Note<\/b>: Most settings are the same for all nodes, with a few exceptions.<\/div>\n<div><\/div>\n<p><strong>Voice Call Settings Field Reference<\/strong><br \/>\nThe following sections describe each IVR setting in detail, including applicability to nodes, default values, and other key information.<\/p>\n<p><strong>Notes on Prompts:\u00a0<\/strong><\/p>\n<ul>\n<li>You can enter prompts in one of these formats: plain text, script, or the file location of an audio file. If you want to define JavaScript or attach an audio file, click the icon before the prompt text message box and select a mode. By default, it is set to Text mode.<\/li>\n<li>You can enter multiple prompt messages of different types. 
You can define their sequence by dragging and dropping them.<\/li>\n<li>Multiple prompts are useful when a prompt has to be played more than once: since the prompts are played in order, repetition is avoided.<\/li>\n<\/ul>\n<table>\n<thead>\n<tr>\n<th>Field<\/th>\n<th>Description<\/th>\n<th>Applicable<br \/>\nto<br \/>\nNodes<\/th>\n<th>Applicable<br \/>\nto<br \/>\nChannel<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>Initial Prompts<\/td>\n<td>Prompts that are played when the IVR first executes the node. If you do not enter a prompt for a node, the node\u2019s default user prompt is played. If you do not enter a prompt for Standard Responses and Welcome Message, the default Standard Response and Welcome Message are played.<\/td>\n<td>Entity,<br \/>\nConfirmation,<br \/>\nMessage nodes;<br \/>\nStandard Responses and<br \/>\nWelcome Message<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Timeout Prompts<\/td>\n<td>Prompts that are played on the IVR channel when the user has not given any input within the specified time. If you do not enter a prompt for a node, the default Error Prompt of the node is played. Standard Responses and Welcomes have a default Timeout Prompt that plays if you don&#8217;t define Timeout Prompts.<\/td>\n<td>Entity,<br \/>\nConfirmation;<br \/>\nStandard Responses<br \/>\nand Welcome Message<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>No Match Prompts<\/td>\n<td>Prompts that are played on the IVR channel when the user&#8217;s input has not matched any value in the defined grammar. If you do not enter a prompt here or select the <b>No Grammar<\/b> option for an Entity or Confirmation node, the default Error Prompt of the node is played. 
Standard Responses and Welcomes have a default No Match Prompt that plays if you do not enter one.<\/td>\n<td>Entity,<br \/>\nConfirmation;<br \/>\nStandard Responses and<br \/>\nWelcome Message<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>Error Prompts<\/td>\n<td>Prompts that are played on the IVR channel when user input is an invalid Entity type. If you do not enter a prompt here, the default Error Prompt of the node is played.<\/td>\n<td>Entity,<br \/>\nConfirmation<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Grammar<\/td>\n<td>Define the grammar that should be used to detect a user\u2019s utterance.<\/p>\n<ul>\n<li>The input type can be Speech or DTMF.<\/li>\n<li>Source of grammar can be Custom or Link:\n<ul>\n<li>For Custom, write the VXML grammar in the textbox.<\/li>\n<li>For Link, enter the URL of the grammar. The URL should be accessible to the IVR system so that the resource can be accessed while executing the calls at runtime.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p><a href=\"#Configuring_Grammar\">See below for a detailed configuration for Grammar syntax<\/a>.<br \/>\n<b>Note<\/b>: If the\u00a0<b>Enable Transcription\u00a0<\/b>option is enabled for the bot and the source of the transcription engine is specified, defining grammar isn\u2019t mandatory.<\/td>\n<td>Confirmation;<br \/>\nStandard Responses and<br \/>\nWelcome Message<\/td>\n<td>IVR,<br \/>\nTwilio<\/td>\n<\/tr>\n<tr>\n<td colspan=\"4\"><strong>Advanced Controls<\/strong><\/td>\n<\/tr>\n<tr>\n<td colspan=\"4\">These properties override the properties set in the Bot IVR Settings page.<\/td>\n<\/tr>\n<tr>\n<td>Timeout<\/td>\n<td>Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds. The default value is the same as defined in the Bot IVR Settings page.<\/td>\n<td>N\/A<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>No. of Retries<\/td>\n<td>Select the maximum number of retries to allow. 
You can select from 1 to 10 retries.<br \/>\nThe default value is the same as defined in the Bot IVR Settings page.<\/td>\n<td>N\/A<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Behavior on Exceeding Retries<br \/>\n(applies only to <span style=\"text-decoration: underline;\">entity node<\/span>)<\/td>\n<td>Define the behavior when the timeout or the number of retry attempts exceeds the specified limit. Options include:<\/p>\n<ul>\n<li>Invoke Call Termination Handler<\/li>\n<li>Initiate Dialog: Select a Dialog task from the list of bot tasks.<\/li>\n<li>Jump to a specific node in the current task: Select a node from the list of nodes in the current Dialog task.<\/li>\n<\/ul>\n<p>Post v7.3, when transcription is enabled, exceeding the entity error count also triggers the Behavior on Exceeding Retries configuration.<\/td>\n<td>N\/A<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Barge-In<\/td>\n<td>Select whether you want to allow user input while a prompt is in progress. If you select no, the user input is not considered until the prompt is completed. The default value is <b>No<\/b>.<\/td>\n<td>N\/A<\/td>\n<td>IVR,<br \/>\nTwilio,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Call Control Parameters<\/td>\n<td>Click <b>Add Property<\/b>. Enter property names and values to use in the VXML definition in the IVR system and in Session Parameters in the AudioCodes channel. Values defined for a node or a standard response override the global Call Control Parameters defined in the Bot IVR\/AudioCodes settings page.<\/td>\n<td>N\/A<\/td>\n<td>IVR,<br \/>\nAudioCodes<\/td>\n<\/tr>\n<tr>\n<td>Log<\/td>\n<td>Select <b>Yes<\/b> if you want to send the chat log to the IVR system. The default value is <b>No<\/b>.<\/td>\n<td>N\/A<\/td>\n<td>IVR<\/td>\n<\/tr>\n<tr>\n<td>Recording<\/td>\n<td>Define the state of recording to be initiated. 
The default value is <b>Stop<\/b>.<\/td>\n<td>N\/A<\/td>\n<td>IVR<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div><\/div><div class=\"w-separator size_small with_line width_default thick_1 style_solid color_border align_center\"><div class=\"w-separator-h\"><\/div><\/div><div class=\"wpb_text_column\"><div class=\"wpb_wrapper\"><h2 id=\"config-grammar\"><span class=\"ez-toc-section\" id=\"Configuring_Grammar\"><\/span>Configuring Grammar<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>You will need to define at least one Speech Grammar for the IVR system.<br \/>\nThere is no default Grammar considered by the system. In this section, we walk you through the steps needed to configure a Grammar system for the bot to function on the IVR system.<\/p>\n<p>Typically, for an IVR-enabled bot, the user\u2019s speech utterance is vetted and parsed against the Grammar syntax at the IVR system before being passed to the bot.<\/p>\n<p>Kore.ai supports the following Grammar systems:<\/p>\n<ul>\n<li>Nuance<\/li>\n<li>Voximal<\/li>\n<li>UniMRCP<\/li>\n<\/ul>\n<p>Each one requires its own configuration.<\/p>\n<h4>Nuance<\/h4>\n<p>To use grammar syntax rules from the Nuance Speech Recognition System, you need a license. Once you register and obtain a license from Nuance, you are given access to two files &#8211; <strong>dlm.zip<\/strong> &amp; <strong>nle.zip<\/strong>. Ensure that the path to these files is accessible to the bot.<\/p>\n<p><strong>Configurations<\/strong>:<\/p>\n<ol>\n<li>Set\u00a0<strong>Enable Transcription<\/strong> to\u00a0<em>no<\/em>.<\/li>\n<li>In the <strong>Grammar<\/strong> section:\n<ul>\n<li>Select the <strong>Speech<\/strong> or <strong>DTMF<\/strong> option as per your requirement.<\/li>\n<li>In the text box to define VXML, enter the path to the dlm.zip file. 
The URL will be of the format: <code>http:\/\/nuance.kore.ai\/downloads\/kore_dlm.zip?nlptype=krypton&amp;dlm_weight=0.2&amp;lang=en-US<\/code><\/li>\n<li>Replace the above path according to your setup.<\/li>\n<li>The language code &#8220;<em>lang=en-US<\/em>&#8221; depends on your setup.<\/li>\n<\/ul>\n<\/li>\n<li>Click <strong>Add Grammar<\/strong> to add the path to <code>nle.zip<\/code>, following the above-mentioned steps.<\/li>\n<li><strong>Save<\/strong> the settings.<\/li>\n<\/ol>\n<h4>Voximal\/UniMRCP<\/h4>\n<p>To use grammar syntax rules from Voximal or UniMRCP, you need to specify the transcription source.<\/p>\n<p><strong>Configurations<\/strong>:<\/p>\n<ol>\n<li>Set\u00a0<strong>Enable Transcription<\/strong> to <em>yes<\/em>.<\/li>\n<li>In the\u00a0<strong>Transcription engine source<\/strong> text box that appears:\n<ul>\n<li>for Voximal, enter <em>&#8220;builtin:grammar\/text&#8221;<\/em><\/li>\n<li>for UniMRCP, enter <em>&#8220;builtin:grammar\/transcribe&#8221;<\/em><\/li>\n<\/ul>\n<\/li>\n<li>You can leave the <strong>Grammar<\/strong> section blank; the above transcription source URI handles the syntax and grammar vetting of the speech.<\/li>\n<li><strong>Save<\/strong> the settings.<\/li>\n<\/ol>\n<\/div><\/div><div class=\"w-separator size_small with_line width_default thick_1 style_solid color_border align_center\"><div class=\"w-separator-h\"><\/div><\/div><\/div><\/div><\/div><\/div><\/section>\n","protected":false},"excerpt":{"rendered":"You can enable voice interaction with your virtual assistant, i.e., users can talk to it. For this, you need to enable one of the voice channels like IVR, Twilio, IVR-AudioCodes, etc., and publish the bot on those channels. 
There are some Voice Properties you can configure to streamline the user experience across the...","protected":false},"author":9,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29],"tags":[],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/posts\/24745"}],"collection":[{"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/comments?post=24745"}],"version-history":[{"count":17,"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/posts\/24745\/revisions"}],"predecessor-version":[{"id":29061,"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/posts\/24745\/revisions\/29061"}],"wp:attachment":[{"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/media?parent=24745"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/categories?post=24745"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/multisite.korebots.com\/v9-0\/wp-json\/wp\/v2\/tags?post=24745"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}