Abstract:
Provided are an automatic interpretation system and method for generating a synthetic sound having characteristics similar to those of an original speaker's voice. The automatic interpretation system includes a speech recognition module configured to generate text data by performing speech recognition on an original speech signal of an original speaker and to extract at least one piece of characteristic information among pitch information, vocal intensity information, speech speed information, and vocal tract characteristic information of the original speech, an automatic translation module configured to generate a synthesis-target translation by translating the text data, and a speech synthesis module configured to generate a synthetic sound of the synthesis-target translation.
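A minimal structural sketch of the three-module pipeline in Python. All class and function names (SpeechRecognitionModule, AutomaticTranslationModule, SpeechSynthesisModule, interpret) and the placeholder return values are illustrative assumptions, not taken from the abstract; only the module roles and the extracted characteristics follow the description above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeakerCharacteristics:
    # At least one of these is extracted from the original speech.
    pitch: Optional[float] = None            # fundamental frequency (Hz)
    vocal_intensity: Optional[float] = None  # energy / loudness
    speech_speed: Optional[float] = None     # e.g. syllables per second
    vocal_tract: Optional[list] = None       # e.g. spectral envelope features

class SpeechRecognitionModule:
    def recognize(self, speech: bytes) -> tuple[str, SpeakerCharacteristics]:
        """Return recognized text and extracted speaker characteristics."""
        text = "..."                          # placeholder ASR result
        chars = SpeakerCharacteristics(pitch=180.0, speech_speed=4.2)
        return text, chars

class AutomaticTranslationModule:
    def translate(self, text: str, target_lang: str) -> str:
        """Produce the synthesis-target translation."""
        return text                           # placeholder MT result

class SpeechSynthesisModule:
    def synthesize(self, text: str, chars: SpeakerCharacteristics) -> bytes:
        """Generate a synthetic sound conditioned on the original speaker's
        characteristics so it resembles the original voice."""
        return b""                            # placeholder audio

def interpret(speech: bytes, target_lang: str) -> bytes:
    asr, mt, tts = SpeechRecognitionModule(), AutomaticTranslationModule(), SpeechSynthesisModule()
    text, chars = asr.recognize(speech)
    translation = mt.translate(text, target_lang)
    return tts.synthesize(translation, chars)
```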
Abstract:
Provided is a method performed by an automatic interpretation server based on a zero user interface (UI), which communicates with a plurality of terminal devices having a microphone function, a speaker function, a communication function, and a wearable function. The method includes connecting terminal devices disposed within a designated automatic interpretation zone, receiving a voice signal of a first user from a first terminal device among the terminal devices within the automatic interpretation zone, matching a plurality of users placed within a speech-receivable distance of the first terminal device, and performing automatic interpretation on the voice signal and transmitting results of the automatic interpretation to a second terminal device of at least one second user corresponding to a result of the matching.
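A rough server-side sketch of the zone, matching, and delivery steps, assuming a hypothetical data model: Terminal, InterpretationZone, 2-D positions, and the SPEECH_RECEIVABLE_DISTANCE_M threshold are invented for illustration; the abstract does not specify how the speech-receivable distance is determined.

```python
from dataclasses import dataclass, field

SPEECH_RECEIVABLE_DISTANCE_M = 3.0           # assumed matching threshold

@dataclass
class Terminal:
    terminal_id: str
    user_id: str
    position: tuple[float, float]            # assumed 2-D coordinates in the zone

@dataclass
class InterpretationZone:
    terminals: list[Terminal] = field(default_factory=list)

    def connect(self, terminal: Terminal) -> None:
        """Connect a terminal device disposed within the interpretation zone."""
        self.terminals.append(terminal)

    def match_listeners(self, speaker: Terminal) -> list[Terminal]:
        """Match users placed within a speech-receivable distance of the speaker's terminal."""
        def dist(a, b):
            return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
        return [t for t in self.terminals
                if t.terminal_id != speaker.terminal_id
                and dist(t.position, speaker.position) <= SPEECH_RECEIVABLE_DISTANCE_M]

def handle_voice_signal(zone: InterpretationZone, speaker: Terminal, voice: bytes) -> None:
    """Receive a voice signal, interpret it, and deliver results to matched terminals."""
    listeners = zone.match_listeners(speaker)
    result = interpret(voice)
    for listener in listeners:
        send_to_terminal(listener, result)

def interpret(voice: bytes) -> bytes:
    return voice                              # placeholder for the interpretation pipeline

def send_to_terminal(terminal: Terminal, result: bytes) -> None:
    print(f"sending {len(result)} bytes to {terminal.terminal_id}")
```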
Abstract:
Provided are an apparatus and method for providing a personal assistant service based on automatic translation. The apparatus includes an input section configured to receive a command of a user, a memory in which a program for providing a personal assistant service according to the command of the user is stored, and a processor configured to execute the program. The processor updates at least one of a speech recognition model, an automatic interpretation model, and an automatic translation model on the basis of the intention of the user's command, using a recognition result of the command, and provides the personal assistant service on the basis of an automatic translation call.
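A minimal sketch of the intent-driven model update and the translation-backed service, assuming hypothetical names (PersonalAssistant, classify_intent, call_automatic_translation) and a toy keyword-based intent classifier; the abstract does not describe how intents are detected or how the models are adapted.

```python
from dataclasses import dataclass

@dataclass
class Models:
    speech_recognition: dict
    automatic_interpretation: dict
    automatic_translation: dict

class PersonalAssistant:
    def __init__(self) -> None:
        self.models = Models({}, {}, {})

    def recognize_command(self, command_audio: bytes) -> str:
        return "translate this into French"   # placeholder recognition result

    def classify_intent(self, recognized: str) -> str:
        # Simplistic keyword intent detection, used only for illustration.
        if "translate" in recognized:
            return "translation"
        if "interpret" in recognized:
            return "interpretation"
        return "general"

    def update_models(self, intent: str, recognized: str) -> None:
        """Update at least one model on the basis of the command's intent."""
        if intent == "translation":
            self.models.automatic_translation[recognized] = "adapted"
        elif intent == "interpretation":
            self.models.automatic_interpretation[recognized] = "adapted"
        else:
            self.models.speech_recognition[recognized] = "adapted"

    def serve(self, command_audio: bytes) -> str:
        recognized = self.recognize_command(command_audio)
        intent = self.classify_intent(recognized)
        self.update_models(intent, recognized)
        # The service itself is provided through an automatic translation call.
        return call_automatic_translation(recognized)

def call_automatic_translation(text: str) -> str:
    return text                                # placeholder translation call
```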
Abstract:
An automatic translation device includes a communications module transmitting and receiving data to and from an ear-set device including a speaker, a first microphone, and a second microphone, a memory storing a program that generates a result of translation using a dual-channel audio signal, and a processor executing the program stored in the memory. When the program is executed, the processor compares a first audio signal including a voice signal of a user, received using the first microphone, with a second audio signal including a noise signal and the voice signal of the user, received using the second microphone, and entirely or selectively extracts the voice signal of the user from the first and second audio signals, based on a result of the comparison, to perform automatic translation.
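A simplified sketch of the dual-channel comparison, assuming equal-length NumPy float arrays, a 16 kHz sample rate, and an invented noise_ratio_threshold; the abstract does not specify the comparison criterion or how "entirely or selectively" is decided, so this is one plausible reading, not the patented method.

```python
import numpy as np

def extract_user_voice(ch1: np.ndarray, ch2: np.ndarray,
                       noise_ratio_threshold: float = 0.5) -> np.ndarray:
    """Compare the two channels and extract the user's voice.

    ch1: first-microphone signal, dominated by the user's voice.
    ch2: second-microphone signal, containing noise plus the user's voice.
    The threshold is an assumed tuning parameter.
    """
    # Estimate the noise component as the part of ch2 not explained by ch1.
    noise_estimate = ch2 - ch1
    noise_energy = float(np.mean(noise_estimate ** 2))
    voice_energy = float(np.mean(ch1 ** 2)) + 1e-12

    if noise_energy / voice_energy < noise_ratio_threshold:
        # Low noise overall: use the first channel entirely.
        return ch1

    # Otherwise extract selectively: zero out frames where noise dominates.
    frame = 160                                # 10 ms frames at 16 kHz (assumed)
    out = ch1.copy()
    for start in range(0, len(ch1) - frame + 1, frame):
        n = float(np.mean(noise_estimate[start:start + frame] ** 2))
        v = float(np.mean(ch1[start:start + frame] ** 2)) + 1e-12
        if n / v >= noise_ratio_threshold:
            out[start:start + frame] = 0       # drop noise-dominated frames
    return out
```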
Abstract:
A voice signal processing apparatus includes: an input unit which receives a voice signal of a user; a detecting unit which detects an auxiliary signal; and a signal processing unit which transmits the voice signal to an external terminal in a first operation mode and transmits the voice signal and the auxiliary signal to the external terminal using the same or different protocols in a second operation mode.
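A small sketch of the two operation modes, assuming hypothetical names (OperationMode, Packet, process) and placeholder protocol labels; the abstract does not name the protocols, only that the second mode may use the same or different ones.

```python
from dataclasses import dataclass
from enum import Enum, auto

class OperationMode(Enum):
    VOICE_ONLY = auto()            # first operation mode
    VOICE_AND_AUXILIARY = auto()   # second operation mode

@dataclass
class Packet:
    protocol: str
    payload: bytes

def process(voice: bytes, auxiliary: bytes, mode: OperationMode,
            same_protocol: bool = True) -> list[Packet]:
    """Build the packets the signal processing unit sends to the external terminal."""
    if mode is OperationMode.VOICE_ONLY:
        # First mode: transmit only the voice signal.
        return [Packet("voice-protocol", voice)]
    # Second mode: transmit both signals, over the same or different protocols.
    aux_protocol = "voice-protocol" if same_protocol else "aux-protocol"
    return [Packet("voice-protocol", voice), Packet(aux_protocol, auxiliary)]
```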