Abstract:
Provided is a method performed by an automatic interpretation server based on a zero user interface (UI), which communicates with a plurality of terminal devices having a microphone function, a speaker function, a communication function, and a wearable function. The method includes connecting terminal devices disposed within a designated automatic interpretation zone, receiving a voice signal of a first user from a first terminal device among the terminal devices within the automatic interpretation zone, matching a plurality of users located within a speech-receivable distance of the first terminal device, and performing automatic interpretation on the voice signal and transmitting results of the automatic interpretation to a second terminal device of at least one second user corresponding to a result of the matching.
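The zone-based matching step can be read as: find every connected terminal whose registered position lies within the speech-receivable radius of the speaking terminal, then fan the interpreted result out to those terminals. A minimal sketch of that selection logic follows; the `Terminal` record, the `radius_m` threshold, and the `interpret` placeholder are illustrative assumptions, not part of the claimed method.

```python
import math
from dataclasses import dataclass

@dataclass
class Terminal:
    device_id: str
    user_id: str
    x: float          # position inside the interpretation zone (metres)
    y: float
    language: str

def match_listeners(speaker: Terminal, terminals: list[Terminal],
                    radius_m: float = 3.0) -> list[Terminal]:
    """Return terminals whose users are within speech-receivable distance."""
    return [
        t for t in terminals
        if t.device_id != speaker.device_id
        and math.hypot(t.x - speaker.x, t.y - speaker.y) <= radius_m
    ]

def interpret(voice_signal: bytes, source_lang: str, target_lang: str) -> bytes:
    """Placeholder for the server-side speech-to-speech interpretation engine."""
    raise NotImplementedError

def handle_utterance(speaker: Terminal, voice_signal: bytes,
                     zone_terminals: list[Terminal]) -> dict[str, bytes]:
    """Interpret once per target language and address the result per listener."""
    listeners = match_listeners(speaker, zone_terminals)
    languages = {t.language for t in listeners}
    translated = {lang: interpret(voice_signal, speaker.language, lang)
                  for lang in languages}
    return {t.device_id: translated[t.language] for t in listeners}
```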
Abstract:
Provided are an automatic interpretation system and method for generating a synthetic sound having characteristics similar to those of an original speaker's voice. The automatic interpretation system for generating a synthetic sound having characteristics similar to those of an original speaker's voice includes a speech recognition module configured to generate text data by performing speech recognition on an original speech signal of an original speaker and to extract at least one piece of characteristic information among pitch information, vocal intensity information, speech speed information, and vocal tract characteristic information of the original speech, an automatic translation module configured to generate a synthesis-target translation by translating the text data, and a speech synthesis module configured to generate a synthetic sound of the synthesis-target translation.
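Reading the three modules as a pipeline, the flow is speech recognition plus characteristic extraction, then translation, then synthesis conditioned on the extracted characteristics. A rough sketch under that reading follows; all function names are hypothetical stand-ins for the recognition, translation, and synthesis engines.

```python
from dataclasses import dataclass

@dataclass
class SpeakerCharacteristics:
    pitch_hz: float            # average fundamental frequency
    intensity_db: float        # vocal intensity
    speech_rate: float         # e.g. syllables per second
    vocal_tract: list[float]   # e.g. formant- or embedding-based features

def recognize(original_speech: bytes) -> tuple[str, SpeakerCharacteristics]:
    """Speech recognition module: text plus at least one characteristic."""
    raise NotImplementedError

def translate(text: str, source_lang: str, target_lang: str) -> str:
    """Automatic translation module: produce the synthesis-target translation."""
    raise NotImplementedError

def synthesize(translation: str, characteristics: SpeakerCharacteristics) -> bytes:
    """Speech synthesis module: synthetic sound shaped by the original voice."""
    raise NotImplementedError

def interpret_with_original_voice(original_speech: bytes,
                                  source_lang: str, target_lang: str) -> bytes:
    text, characteristics = recognize(original_speech)
    translation = translate(text, source_lang, target_lang)
    return synthesize(translation, characteristics)
```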
Abstract:
Provided is a zero user interface (UI)-based automatic interpretation method including receiving a plurality of speech signals uttered by a plurality of users from a plurality of terminal devices, acquiring a plurality of speech energies from the plurality of received speech signals, determining a main speech signal uttered in a current utterance turn from among the plurality of speech signals by comparing the plurality of acquired speech energies, and transmitting an automatic interpretation result acquired by performing automatic interpretation on the determined main speech signal to the plurality of terminal devices.
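The main-speech decision can be illustrated with a simple energy comparison: compute an energy measure for each terminal's signal over the same utterance window and keep the strongest one. The RMS-energy choice below is one reasonable measure, not necessarily the one used by the method.

```python
import numpy as np

def speech_energy(signal: np.ndarray) -> float:
    """Root-mean-square energy of one terminal's speech signal."""
    return float(np.sqrt(np.mean(np.square(signal, dtype=np.float64))))

def select_main_speech(signals: dict[str, np.ndarray]) -> str:
    """Return the terminal id whose signal has the highest energy this turn."""
    return max(signals, key=lambda terminal_id: speech_energy(signals[terminal_id]))

# Example: three terminals pick up the same utterance at different levels.
rng = np.random.default_rng(0)
signals = {
    "terminal_a": 0.8 * rng.standard_normal(16000),   # closest to the speaker
    "terminal_b": 0.2 * rng.standard_normal(16000),
    "terminal_c": 0.1 * rng.standard_normal(16000),
}
assert select_main_speech(signals) == "terminal_a"
```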
Abstract:
An automatic interpretation method performed by a correspondent terminal communicating with an utterer terminal includes receiving, by a communication unit, voice feature information about an utterer and an automatic translation result, obtained by automatically translating a voice uttered by the utterer in a source language into a target language, from the utterer terminal, and performing, by a sound synthesizer, voice synthesis on the basis of the automatic translation result and the voice feature information to output a personalized synthesis voice as an automatic interpretation result. The voice feature information about the utterer includes a hidden variable, comprising a first additional voice feature and a voice feature parameter, and a second additional voice feature, which are extracted from a voice of the utterer.
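On the correspondent terminal, the receive-then-synthesize step amounts to pairing the translated text with the transmitted voice features and handing both to a speaker-conditioned synthesizer. The message layout and the `synthesize_personalized` call below are illustrative assumptions about how that pairing could look.

```python
from dataclasses import dataclass

@dataclass
class UttererVoiceFeatures:
    hidden_variable: list[float]            # compact representation of the utterer's voice
    additional_feature: list[float]         # e.g. prosody-related parameters

@dataclass
class InterpretationMessage:
    translation_text: str                   # automatic translation result
    voice_features: UttererVoiceFeatures    # transmitted with the translation

def synthesize_personalized(text: str, features: UttererVoiceFeatures) -> bytes:
    """Sound synthesizer conditioned on the utterer's voice features."""
    raise NotImplementedError

def on_message_received(message: InterpretationMessage) -> bytes:
    """Correspondent-terminal handler: output the personalized synthesis voice."""
    return synthesize_personalized(message.translation_text, message.voice_features)
```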
Abstract:
Provided are an apparatus and method for providing a personal assistant service based on automatic translation. The apparatus for providing a personal assistant service based on automatic translation includes an input section configured to receive a command of a user, a memory in which a program for providing a personal assistant service according to the command of the user is stored, and a processor configured to execute the program. The processor updates at least one of a speech recognition model, an automatic interpretation model, and an automatic translation model on the basis of an intention of the user's command, identified from a recognition result of the command, and provides the personal assistant service on the basis of an automatic translation call.
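One way to picture the intent-driven update is a small dispatcher that classifies the recognized command's intent and routes the update to the matching model. Everything named below (the intent labels, the model interface) is a hypothetical illustration, not the apparatus's actual interface.

```python
from enum import Enum, auto

class Intent(Enum):
    UPDATE_SPEECH_RECOGNITION = auto()
    UPDATE_INTERPRETATION = auto()
    UPDATE_TRANSLATION = auto()
    OTHER = auto()

class Model:
    def update(self, recognized_command: str) -> None:
        """Adapt the model using information carried by the user's command."""
        raise NotImplementedError

def classify_intent(recognized_command: str) -> Intent:
    """Intent classifier over the recognition result (placeholder)."""
    raise NotImplementedError

def handle_command(recognized_command: str,
                   asr_model: Model, interp_model: Model, mt_model: Model) -> None:
    intent = classify_intent(recognized_command)
    if intent is Intent.UPDATE_SPEECH_RECOGNITION:
        asr_model.update(recognized_command)
    elif intent is Intent.UPDATE_INTERPRETATION:
        interp_model.update(recognized_command)
    elif intent is Intent.UPDATE_TRANSLATION:
        mt_model.update(recognized_command)
    # otherwise the command is served directly as a personal assistant request
```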
Abstract:
An automatic translation device includes a communications module transmitting and receiving data to and from an ear-set device including a speaker, a first microphone, and a second microphone, a memory storing a program generating a result of translation using a dual-channel audio signal, and a processor executing the program stored in the memory. When the program is executed, the processor compares a first audio signal including a voice signal of a user, received using the first microphone, with a second audio signal including a noise signal and the voice signal of the user, received using the second microphone, and entirely or selectively extracts the voice signal of the user from the first and second audio signals, based on a result of the comparison, to perform automatic translation.
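The "entirely or selectively" extraction can be illustrated with a frame-wise comparison of the two channels: where the voice channel clearly dominates the noise-plus-voice channel, the frame is kept as is; otherwise it is attenuated according to the estimated noise level. This is only one plausible reading of the comparison step, with an assumed frame length and dominance threshold.

```python
import numpy as np

def extract_user_voice(voice_ch: np.ndarray, noise_ch: np.ndarray,
                       frame_len: int = 512, dominance: float = 2.0) -> np.ndarray:
    """Frame-wise voice extraction from a dual-channel ear-set recording.

    voice_ch: first-microphone signal (user's voice dominant)
    noise_ch: second-microphone signal (ambient noise plus the user's voice)
    Frames where the voice channel clearly dominates are kept entirely;
    the rest are attenuated in proportion to the estimated noise level.
    """
    n_frames = min(len(voice_ch), len(noise_ch)) // frame_len
    out = np.zeros(n_frames * frame_len)
    for i in range(n_frames):
        sl = slice(i * frame_len, (i + 1) * frame_len)
        voice_energy = np.mean(voice_ch[sl] ** 2)
        noise_energy = np.mean(noise_ch[sl] ** 2) + 1e-12
        if voice_energy >= dominance * noise_energy:
            out[sl] = voice_ch[sl]                       # take the frame entirely
        else:
            gain = voice_energy / (voice_energy + noise_energy)
            out[sl] = gain * voice_ch[sl]                # selective attenuation
    return out
```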
Abstract:
Provided is a voice recognition system that divides a search space for voice recognition into a general domain search space and a specific domain search space. A mobile terminal receives a voice recognition target word from a user, and a voice recognition server divides the search space for voice recognition into the general domain search space and the specific domain search space, stores the two spaces, and performs voice recognition for the voice recognition target word through linkage of the general domain search space and the specific domain search space.
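A simple way to illustrate the linkage of the two search spaces is score interpolation: a hypothesis receives a weighted combination of the scores assigned by the general-domain space and the specific-domain space. The interpolation weight and the dictionary-backed scorers below are illustrative, not the system's actual decoder.

```python
def linked_score(hypothesis: str,
                 general_scores: dict[str, float],
                 domain_scores: dict[str, float],
                 domain_weight: float = 0.3) -> float:
    """Combine general-domain and specific-domain search-space scores."""
    general = general_scores.get(hypothesis, float("-inf"))
    domain = domain_scores.get(hypothesis, float("-inf"))
    if domain == float("-inf"):
        return general                     # hypothesis only exists in the general space
    return (1.0 - domain_weight) * general + domain_weight * domain

def recognize(candidates: list[str],
              general_scores: dict[str, float],
              domain_scores: dict[str, float]) -> str:
    """Pick the candidate word best supported by the linked search spaces."""
    return max(candidates,
               key=lambda w: linked_score(w, general_scores, domain_scores))
```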
Abstract:
Provided are an apparatus and method for selecting a speaker by using smart glasses. The apparatus includes a camera configured to capture a front angle video of a user and track guest interpretation interlocutors in the captured video, smart glasses configured to display a virtual space map image including the guest interpretation interlocutors tracked through the camera, a gaze-tracking camera configured to select a target person for interpretation by tracking a gaze of the user so that a guest interpretation interlocutor displayed in the video may be selected, and an interpretation target processor configured to provide an interpretation service in connection with the target person selected through the gaze-tracking camera.
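The gaze-based selection step reduces to a point-in-box test: project the tracked gaze onto the displayed video frame and pick the tracked interlocutor whose bounding box contains the gaze point. The box format and tie-breaking rule below are assumptions made only for illustration.

```python
from dataclasses import dataclass

@dataclass
class TrackedPerson:
    person_id: str
    x1: float   # bounding box in the displayed video frame (pixels)
    y1: float
    x2: float
    y2: float

def select_interlocutor(gaze_x: float, gaze_y: float,
                        tracked: list[TrackedPerson]) -> TrackedPerson | None:
    """Return the tracked interlocutor whose bounding box contains the gaze point.

    If several boxes overlap the gaze point, the smallest (most specific) box wins.
    """
    hits = [p for p in tracked
            if p.x1 <= gaze_x <= p.x2 and p.y1 <= gaze_y <= p.y2]
    if not hits:
        return None
    return min(hits, key=lambda p: (p.x2 - p.x1) * (p.y2 - p.y1))
```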
Abstract:
A speech recognition method capable of automatic generation of phones according to the present invention includes: unsupervisedly learning a feature vector of speech data; generating a phone set by clustering acoustic features selected based on an unsupervised learning result; allocating a sequence of phones to the speech data on the basis of the generated phone set; and generating an acoustic model on the basis of the sequence of phones and the speech data to which the sequence of phones is allocated.
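The clustering step can be pictured as grouping frame-level feature vectors and treating each cluster as an automatically generated phone; the frame sequence is then relabeled with the nearest cluster per frame. The sketch below uses k-means purely as a stand-in for whatever clustering the method applies, and assumes scikit-learn is available.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_phone_set(features: np.ndarray, n_phones: int = 40):
    """Cluster frame-level acoustic features into an automatically generated phone set.

    features: array of shape (n_frames, feature_dim), e.g. unsupervisedly
    learned frame embeddings of the speech data.
    Returns the fitted clusterer (the phone set) and the per-frame phone labels.
    """
    clusterer = KMeans(n_clusters=n_phones, n_init=10, random_state=0)
    labels = clusterer.fit_predict(features)
    return clusterer, labels

def allocate_phone_sequence(labels: np.ndarray) -> list[int]:
    """Collapse consecutive identical frame labels into a phone sequence."""
    sequence = []
    for label in labels:
        if not sequence or sequence[-1] != int(label):
            sequence.append(int(label))
    return sequence
```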
Abstract:
Provided is a method of performing automatic interpretation based on speaker separation by a user terminal, the method including: receiving a first speech signal including at least one of a user speech of a user and a user surrounding speech around the user from an automatic interpretation service providing terminal, separating the first speech signal into speaker-specific speech signals, performing interpretation on the speaker-specific speech signals in a language selected by the user on the basis of an interpretation mode, and providing a second speech signal generated as a result of the interpretation to at least one of a counterpart terminal and the automatic interpretation service providing terminal according to the interpretation mode.
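The flow described above is essentially separate-then-route: split the incoming mixture into per-speaker streams, interpret each stream into the user-selected language, and send the result to the terminal designated by the interpretation mode. The mode names and the separation/interpretation placeholders below are illustrative assumptions, not the claimed interface.

```python
from enum import Enum

class InterpretationMode(Enum):
    CONVERSATION = "conversation"   # results go to the counterpart terminal
    LISTENING = "listening"         # results go back to the service-providing terminal

def separate_speakers(mixed_signal: bytes) -> dict[str, bytes]:
    """Speaker separation: split the first speech signal into per-speaker signals."""
    raise NotImplementedError

def interpret(signal: bytes, target_language: str) -> bytes:
    """Interpret one speaker-specific signal into the user-selected language."""
    raise NotImplementedError

def route_interpretation(mixed_signal: bytes, target_language: str,
                         mode: InterpretationMode) -> dict[str, dict[str, bytes]]:
    """Return, per destination terminal, the interpreted speaker-specific signals."""
    per_speaker = {spk: interpret(sig, target_language)
                   for spk, sig in separate_speakers(mixed_signal).items()}
    destination = ("counterpart_terminal" if mode is InterpretationMode.CONVERSATION
                   else "service_providing_terminal")
    return {destination: per_speaker}
```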