Abstract:
The present invention relates to an automatic interpretation method and system for converting only the voice of a speaker into a target language. By utilizing the voice and image information input to a smart device in combination, the present invention may significantly improve the performance of automatic interpretation with a foreign interlocutor, even in a high-noise environment in which multiple speakers utter at the same time. In addition, the present invention may determine a situation based on text information and image information existing around a user, and reflect the situation information, together with the multimodal information, in an interpretation engine in real time. In addition, the present invention may significantly improve the user convenience of an automatic interpretation system by augmenting and displaying an interpreted sentence directly next to the corresponding speaker's image, or by generating a synthesized sound that distinguishes the interpreted sentence from other speech.
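As a rough illustration of the multimodal idea only (the scoring functions, field names, and weights below are hypothetical, not the claimed implementation), a minimal Python sketch that selects the target speaker by fusing an audio voice-activity score with a visual lip-movement score before passing that speaker's audio to interpretation:

def select_target_speaker(speakers, w_audio=0.5, w_visual=0.5):
    # speakers: list of dicts with per-speaker 'voice_activity' (0..1)
    # and 'lip_motion' (0..1) scores; both keys are illustrative.
    def fused_score(s):
        # Weighted fusion of audio and visual evidence, so that a face
        # whose lips move in sync with detected speech wins in noise.
        return w_audio * s["voice_activity"] + w_visual * s["lip_motion"]
    return max(speakers, key=fused_score)

speakers = [
    {"id": "A", "voice_activity": 0.9, "lip_motion": 0.1},  # loud background talker
    {"id": "B", "voice_activity": 0.7, "lip_motion": 0.8},  # speaker facing the camera
]
target = select_target_speaker(speakers)
print(target["id"])  # -> "B": only this speaker's voice would be interpreted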
Abstract:
A method of correcting errors in a speech recognition system includes a process of searching a speech recognition error-answer pair DB, built on an acoustic model, for a first candidate answer group for a speech recognition error; a process of searching a word relationship information DB for a second candidate answer group; a process of searching a user error correction information DB for a third candidate answer group; a process of searching a domain articulation pattern DB and a proper noun DB for a fourth candidate answer group; and a process of aligning the candidate answers within each of the retrieved candidate answer groups and displaying the aligned candidate answers.
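A minimal sketch of this retrieve-from-several-DBs-then-rank flow (the DB contents and scores below are invented for illustration; real systems would query indexed databases rather than dicts):

# Each DB is modeled as a mapping from a misrecognized word to
# scored correction candidates.
ERROR_ANSWER_DB    = {"recognise": [("recognize", 0.9)]}    # acoustic error-answer pairs
WORD_RELATION_DB   = {"recognise": [("realize", 0.4)]}      # word relationship information
USER_CORRECTION_DB = {"recognise": [("recognize", 0.95)]}   # the user's past corrections
DOMAIN_NOUN_DB     = {"recognise": [("ReKognition", 0.3)]}  # domain patterns / proper nouns

def collect_candidates(error_word):
    groups = [ERROR_ANSWER_DB, WORD_RELATION_DB, USER_CORRECTION_DB, DOMAIN_NOUN_DB]
    merged = {}
    for db in groups:
        for cand, score in db.get(error_word, []):
            merged[cand] = max(merged.get(cand, 0.0), score)  # keep best score per candidate
    # Align (sort) the candidates by score before display.
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

print(collect_candidates("recognise"))
# -> [('recognize', 0.95), ('realize', 0.4), ('ReKognition', 0.3)]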
Abstract:
A voice recognition device having a barge-in function and a method thereof are proposed. In an exemplary embodiment, there are disclosed an intelligent robot and a method for operating the intelligent robot, including an input unit for receiving a user's voice data, one or more processors, and an output unit for outputting a response generated on a basis of the user's voice data, wherein the processors generate the response corresponding to the users' voice data while maintaining a listening mode for identifying a dialogue partner by using the user's face image data and the user's voice data, and perform a speaking mode for control so as to perform an operation corresponding to the response.
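A minimal sketch of how listening and speaking modes might interact to allow barge-in (the class, method names, and partner check are hypothetical stand-ins, not the disclosed design):

class BargeInRobot:
    def __init__(self):
        self.mode = "listening"  # the listening mode stays active even while speaking

    def identify_partner(self, face_image, voice_sample):
        # Placeholder standing in for face/voice matching; a real system
        # would compare embeddings of both modalities against the partner.
        return face_image is not None and voice_sample is not None

    def on_user_voice(self, face_image, voice_sample, text):
        if not self.identify_partner(face_image, voice_sample):
            return None  # ignore speech from non-partners (e.g., TV, bystanders)
        if self.mode == "speaking":
            self.stop_speaking()  # barge-in: the user interrupts the robot
        return self.respond(text)

    def respond(self, text):
        self.mode = "speaking"
        return f"response to: {text}"

    def stop_speaking(self):
        self.mode = "listening"

robot = BargeInRobot()
print(robot.on_user_voice("face", "voice", "hello"))  # robot starts speaking
print(robot.on_user_voice("face", "voice", "wait!"))  # barge-in while speaking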
Abstract:
Provided is an end-to-end speech recognition technology capable of improving speech recognition performance in a desired specific domain. The technology includes collecting text data of the domain to be specialized, comparing the data with a basic transcript text DB to determine the domain text that is not included in the basic transcript text DB and therefore requires additional training, and constructing a specialization target domain text DB from that text. The end-to-end speech recognition technology generates speech signals from the domain text of the specialization target domain text DB, and trains a speech recognition neural network with the generated speech signals to generate an end-to-end speech recognition model specialized for the target domain. The specialized speech recognition model may be applied to an end-to-end speech recognizer to perform domain-specific end-to-end speech recognition.
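A minimal sketch of this text-selection and synthesis-based specialization loop (the TTS and training calls are stubs, and all names and example sentences are illustrative assumptions):

BASIC_TRANSCRIPT_DB = {"hello world", "how are you"}

def build_specialization_db(domain_texts):
    # Keep only the domain sentences absent from the basic transcript DB:
    # these are the ones that require additional training.
    return [t for t in domain_texts if t not in BASIC_TRANSCRIPT_DB]

def synthesize_speech(text):
    # Stub for a TTS front end that turns domain text into a speech signal.
    return f"<waveform for '{text}'>"

def fine_tune(model, pairs):
    # Stub: fine-tune the end-to-end ASR network on (speech, text) pairs.
    model["trained_on"] = model.get("trained_on", []) + [t for _, t in pairs]
    return model

domain_texts = ["hello world", "administer epinephrine", "check troponin levels"]
target_db = build_specialization_db(domain_texts)
pairs = [(synthesize_speech(t), t) for t in target_db]
model = fine_tune({"name": "e2e-asr"}, pairs)
print(model["trained_on"])  # only the out-of-DB, domain-specific sentences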
Abstract:
A method of generating a sympathetic back-channel signal is provided. The method includes receiving a voice signal from a user; determining, at a predetermined timing, whether that timing is a timing at which a back-channel signal should be output in response to the voice signal input so far; storing the voice signal that has been input so far if the determination indicates that a back-channel signal should be output; determining back-channel signal information based on the stored voice signal; and outputting the determined back-channel signal information.
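A minimal sketch of the buffer-then-decide logic (detecting the timing via a frame-energy pause threshold is an assumption; the abstract only specifies a "predetermined timing", and the decision rule below is a toy):

def is_backchannel_timing(frame_energies, threshold=0.1, silence_frames=3):
    # Assumed rule: the timing arrives right after a short pause,
    # i.e., the last few frames all fall below an energy threshold.
    tail = frame_energies[-silence_frames:]
    return len(tail) == silence_frames and all(e < threshold for e in tail)

def choose_backchannel(stored_energies):
    # Toy decision rule: louder speech gets a more engaged response.
    avg = sum(stored_energies) / len(stored_energies)
    return "Wow, really?" if avg > 0.5 else "Mm-hmm."

buffer = []
for energy in [0.6, 0.7, 0.8, 0.05, 0.04, 0.03]:  # synthetic frame energies
    buffer.append(energy)                         # store the voice input so far
    if is_backchannel_timing(buffer):
        print(choose_backchannel(buffer))         # output the back-channel signal
        buffer = []                               # start buffering anew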
Abstract:
An apparatus for improving context-based automatic interpretation performance includes: an uttered voice input unit configured to receive a voice signal from a user; a previous sentence input unit configured to determine whether there is a previous utterance of the user when the voice signal is input through the uttered voice input unit; a voice encoding processing unit configured to decode only the voice signal received through the uttered voice input unit when it is determined that there is no previous utterance, and to extract a vector of the voice signal when it is determined that there is a previous utterance; a context encoding processing unit configured to extract a context vector from the previous utterance and transmit the extracted context vector; and an interpretation decoding processing unit configured to output an interpretation result text.
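A minimal sketch of the branch between context-free and context-conditioned decoding (the encoders are faked with fixed-size lists and every function name is hypothetical):

def encode_speech(voice_signal):
    # Stand-in for a speech encoder producing a fixed-size vector.
    return [float(len(voice_signal)), 1.0]

def encode_context(previous_utterance):
    # Stand-in for a context encoder over the previous utterance.
    return [float(len(previous_utterance.split())), 0.5]

def decode(speech_vec, context_vec=None):
    # Stand-in for the interpretation decoder producing target-language text.
    tag = "with context" if context_vec is not None else "no context"
    return f"<interpreted text ({tag})>"

def interpret(voice_signal, previous_utterance=None):
    speech_vec = encode_speech(voice_signal)
    if previous_utterance is None:
        # No previous utterance: decode from the voice signal alone.
        return decode(speech_vec)
    # Previous utterance available: condition decoding on its context vector.
    context_vec = encode_context(previous_utterance)
    return decode(speech_vec, context_vec)

print(interpret("audio-bytes"))
print(interpret("audio-bytes", previous_utterance="Where is the station?"))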