Abstract:
A method for prompting user input for a multimodal interface including the steps of providing a multimodal interface to a user, where the interface includes a visual interface having a plurality of input regions, each having at least one input field; selecting an input region and processing a multi-token speech input provided by the user, where the processed speech input includes at least one value for at least one input field of the selected input region; and storing at least one value in at least one input field.
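Below is a minimal Python sketch of the flow described in the abstract above. The InputField and InputRegion classes and the recognize() stub are illustrative assumptions standing in for the patent's visual interface and speech recognizer, not the actual implementation.

class InputField:
    def __init__(self, name):
        self.name = name
        self.value = None

class InputRegion:
    def __init__(self, name, fields):
        self.name = name
        self.fields = {f: InputField(f) for f in fields}

def recognize(audio):
    # Placeholder for the speech recognizer: returns field/value pairs
    # extracted from a multi-token utterance such as "from Boston to Denver".
    return {"origin": "Boston", "destination": "Denver"}

def fill_region(region, audio):
    """Process a multi-token speech input and store each recognized value
    in the corresponding input field of the selected input region."""
    for field_name, value in recognize(audio).items():
        if field_name in region.fields:
            region.fields[field_name].value = value

trip = InputRegion("trip", ["origin", "destination", "date"])
fill_region(trip, audio=None)  # the user has selected the "trip" region
print({f: fld.value for f, fld in trip.fields.items()})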
Abstract:
Methods, apparatus, and computer program products are described for automatic speech recognition (‘ASR’) that include accepting by the multimodal application speech input and visual input for selecting or deselecting items in a selection list, the speech input enabled by a speech recognition grammar; providing, from the multimodal application to the grammar interpreter, the speech input and the speech recognition grammar; receiving, by the multimodal application from the grammar interpreter, interpretation results including matched words from the grammar that correspond to items in the selection list and a semantic interpretation token that specifies whether to select or deselect items in the selection list; and determining, by the multimodal application in dependence upon the value of the semantic interpretation token, whether to select or deselect items in the selection list that correspond to the matched words.
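A minimal Python sketch of the select/deselect decision described above. The interpret() stub and its return shape (matched words plus an action token) are assumptions standing in for the grammar interpreter's interpretation results.

def interpret(speech_input, grammar):
    # Placeholder: a real interpreter would match the utterance against the
    # speech recognition grammar and attach the semantic interpretation token.
    return {"matched_words": ["coffee", "tea"], "action": "deselect"}

def apply_interpretation(selection_list, selected, speech_input, grammar):
    """Select or deselect list items that correspond to the matched words,
    depending on the value of the semantic interpretation token."""
    result = interpret(speech_input, grammar)
    select = result["action"] == "select"
    for item in result["matched_words"]:
        if item in selection_list:
            if select:
                selected.add(item)
            else:
                selected.discard(item)
    return selected

items = ["coffee", "tea", "milk"]
print(apply_interpretation(items, {"coffee", "tea", "milk"}, None, None))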
Abstract:
Improving speech capabilities of a multimodal application including receiving, by the multimodal browser, a media file having a metadata container; retrieving, by the multimodal browser, from the metadata container a speech artifact related to content stored in the media file for inclusion in the speech engine available to the multimodal browser; determining whether the speech artifact includes a grammar rule or a pronunciation rule; if the speech artifact includes a grammar rule, modifying, by the multimodal browser, the grammar of the speech engine to include the grammar rule; and if the speech artifact includes a pronunciation rule, modifying, by the multimodal browser, the lexicon of the speech engine to include the pronunciation rule.
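A minimal Python sketch of routing a speech artifact retrieved from a media file's metadata container. The SpeechEngine fields and the artifact layout are illustrative assumptions, not the patent's actual data structures.

class SpeechEngine:
    def __init__(self):
        self.grammar = []   # active grammar rules
        self.lexicon = {}   # word -> pronunciation

def retrieve_artifact(media_file):
    # Placeholder for reading the metadata container (e.g. an ID3-style tag).
    return {"kind": "pronunciation", "word": "Sade", "phones": "SH AA D EY"}

def incorporate(engine, media_file):
    """Add a grammar rule or a pronunciation rule from the media file's
    metadata to the speech engine, depending on the artifact type."""
    artifact = retrieve_artifact(media_file)
    if artifact["kind"] == "grammar":
        engine.grammar.append(artifact["rule"])
    elif artifact["kind"] == "pronunciation":
        engine.lexicon[artifact["word"]] = artifact["phones"]

engine = SpeechEngine()
incorporate(engine, media_file=None)
print(engine.lexicon)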
Abstract:
A method (10) in a speech recognition application callflow can include the steps of assigning (11) an individual option and a pre-built grammar to a same prompt, treating (15) the individual option as a valid output of the pre-built grammar if the individual option is a potential valid match to a recognition phrase (12) or an annotation (13) in the pre-built grammar, and treating (14) the individual option as an independent grammar from the pre-built grammar if the individual option fails to be a potential valid match to the recognition phrase or the annotation in the pre-built grammar.
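A minimal Python sketch of the branching described above, assuming the pre-built grammar is represented as (recognition phrase, annotation) pairs; this representation and the return shape are illustrative, not taken from the patent.

prebuilt_grammar = [
    ("new york city", "NYC"),
    ("los angeles", "LAX"),
]

def attach_option(option, grammar):
    """Treat the individual option as a valid output of the pre-built grammar
    when it matches a recognition phrase or annotation; otherwise return it
    as an independent grammar alongside the pre-built one."""
    for phrase, annotation in grammar:
        if option.lower() in (phrase.lower(), annotation.lower()):
            return {"prompt_grammars": [grammar], "valid_output": option}
    independent = [(option, option)]
    return {"prompt_grammars": [grammar, independent], "valid_output": option}

print(attach_option("NYC", prebuilt_grammar))       # matches an annotation
print(attach_option("operator", prebuilt_grammar))  # treated as independent grammar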
Abstract:
A method of adjusting music length to expected waiting time while a caller is on hold includes choosing one or more media selections based upon their play duration and matching the selection(s) to the expected waiting time.
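A minimal Python sketch of matching selections to the expected waiting time, assuming a greedy longest-first fill over a hypothetical catalogue of (title, duration) pairs; the patent does not specify this particular matching strategy.

def choose_hold_music(catalogue, expected_wait_s):
    """Pick media selections whose combined play duration approximates the
    caller's expected waiting time."""
    playlist, remaining = [], expected_wait_s
    for title, duration in sorted(catalogue, key=lambda t: -t[1]):
        if duration <= remaining:
            playlist.append(title)
            remaining -= duration
    return playlist

catalogue = [("Track A", 180), ("Track B", 240), ("Track C", 90), ("Track D", 60)]
print(choose_hold_music(catalogue, expected_wait_s=330))  # Track B + Track C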
Abstract:
The present invention discloses a system and a method for creating a reduced script, which is read by a voice talent to create a concatenative text-to-speech (TTS) voice. The method can automatically process pre-recorded audio to derive speech assets for a concatenative TTS voice. The pre-recorded audio can include sets of recorded phrases used by a speech user interface (SUI). A set of unfulfilled speech assets needed for full phonetic coverage of the concatenative TTS voice can be determined. A reduced script can be constructed that includes a set of phrases which, when read by a voice talent, result in a reduced corpus. When the reduced corpus is automatically processed, a reduced set of speech assets results. The reduced set includes each of the unfulfilled speech assets. When this reduced corpus is combined with existing speech assets, the result will be a voice with a complete set of speech assets.
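A minimal Python sketch of one way to realize the coverage step described above, assuming diphone-level speech assets and a greedy phrase selection; both are modeling assumptions, as is the phonemize() argument used in the toy example.

def diphones(phones):
    return {(a, b) for a, b in zip(phones, phones[1:])}

def reduced_script(existing_phrases, candidate_phrases, required_diphones, phonemize):
    """Return a greedy set of candidate phrases whose recordings would supply
    every speech asset not already covered by the existing SUI recordings."""
    covered = set()
    for phrase in existing_phrases:
        covered |= diphones(phonemize(phrase))
    unfulfilled = required_diphones - covered
    script = []
    while unfulfilled and candidate_phrases:
        best = max(candidate_phrases,
                   key=lambda p: len(diphones(phonemize(p)) & unfulfilled))
        gain = diphones(phonemize(best)) & unfulfilled
        if not gain:
            break
        script.append(best)
        unfulfilled -= gain
        candidate_phrases = [p for p in candidate_phrases if p != best]
    return script, unfulfilled

# Toy usage with a letter-level stand-in for a real phonemizer.
phonemize = lambda phrase: list(phrase.replace(" ", ""))
script, gaps = reduced_script(["call home"], ["check voicemail", "dial again"],
                              diphones(list("voicemail")), phonemize)
print(script, gaps)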
Abstract:
A method and system for addressing disambiguation issues in interactive applications by creating a disambiguation system for generating complex grammars that includes homonym detection and grouping, and provides optimization feedback that eliminates time-consuming and repetitive iterative steps during the grammar generation portion of the interactive application configuration.
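A minimal Python sketch of the homonym detection and grouping step, assuming pronunciations come from a lookup function; the toy lexicon below is illustrative only.

from collections import defaultdict

def group_homonyms(vocabulary, pronounce):
    """Group words that share a pronunciation so the grammar generator can
    flag them for disambiguation instead of emitting ambiguous rules."""
    groups = defaultdict(list)
    for word in vocabulary:
        groups[pronounce(word)].append(word)
    return [words for words in groups.values() if len(words) > 1]

toy_lexicon = {"read": "R EH D", "red": "R EH D",
               "sale": "S EY L", "sail": "S EY L", "blue": "B L UW"}
print(group_homonyms(toy_lexicon, toy_lexicon.get))
# [['read', 'red'], ['sale', 'sail']]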