Abstract:
Provided is a zero user interface (UI)-based automatic interpretation method including receiving a plurality of speech signals uttered by a plurality of users from a plurality of terminal devices, acquiring a plurality of speech energies from the plurality of received speech signals, determining a main speech signal uttered in a current utterance turn from among the plurality of speech signals by comparing the plurality of acquired speech energies, and transmitting an automatic interpretation result, acquired by performing automatic interpretation on the determined main speech signal, to the plurality of terminal devices.
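The core selection step described above, picking the current turn's main speech signal as the one with the highest speech energy, can be illustrated with a short sketch. This is not the patented implementation; the energy measure (mean squared amplitude), function names, and terminal identifiers are assumptions for illustration only.

```python
# Illustrative sketch: choose the main speech signal of the current
# utterance turn by comparing speech energies across terminals.
import numpy as np

def speech_energy(samples: np.ndarray) -> float:
    """Mean energy (mean squared amplitude) of a speech signal."""
    return float(np.mean(np.square(samples)))

def select_main_speech_signal(signals: dict[str, np.ndarray]) -> str:
    """Return the terminal ID whose received signal has the highest energy."""
    energies = {terminal_id: speech_energy(s) for terminal_id, s in signals.items()}
    return max(energies, key=energies.get)

# Usage: signals received from several terminal devices in one utterance turn.
signals = {
    "terminal_A": np.random.randn(16000) * 0.9,   # louder (active) utterer
    "terminal_B": np.random.randn(16000) * 0.1,   # quieter background speech
}
main_terminal = select_main_speech_signal(signals)
# The automatic interpretation result for signals[main_terminal] would then
# be transmitted back to all connected terminal devices.
```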
Abstract:
An automatic interpretation method performed by a correspondent terminal communicating with an utterer terminal includes receiving, by a communication unit, from the utterer terminal, voice feature information about an utterer and an automatic translation result obtained by automatically translating, into a target language, a voice uttered by the utterer in a source language, and performing, by a sound synthesizer, voice synthesis on the basis of the automatic translation result and the voice feature information to output a personalized synthesized voice as an automatic interpretation result. The voice feature information about the utterer includes a hidden variable, which includes a first additional voice feature and a voice feature parameter, and a second additional voice feature, all of which are extracted from a voice of the utterer.
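The correspondent-terminal flow can be sketched structurally: the terminal receives a translation result together with the utterer's voice features and conditions synthesis on those features. The payload fields, the `synthesize` signature, and the stand-in synthesizer below are assumptions for illustration; no specific TTS engine is implied.

```python
# Minimal structural sketch of personalized synthesis on the correspondent terminal.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class VoiceFeatureInfo:
    hidden_variable: list[float]                      # e.g., a speaker-embedding vector
    voice_feature_params: dict[str, float] = field(default_factory=dict)
    additional_features: dict[str, float] = field(default_factory=dict)

@dataclass
class InterpretationPayload:
    translation_text: str                             # automatic translation result (target language)
    voice_features: VoiceFeatureInfo                  # extracted from the utterer's voice

def output_personalized_voice(payload: InterpretationPayload,
                              synthesize: Callable[[str, VoiceFeatureInfo], bytes]) -> bytes:
    """Run voice synthesis conditioned on the utterer's voice features."""
    return synthesize(payload.translation_text, payload.voice_features)

# Usage with a stand-in synthesizer (a real system would call a TTS model here).
dummy_tts = lambda text, feats: f"<audio: '{text}' in utterer's voice>".encode()
payload = InterpretationPayload(
    translation_text="Nice to meet you.",
    voice_features=VoiceFeatureInfo(hidden_variable=[0.12, -0.55, 0.90]),
)
audio = output_personalized_voice(payload, dummy_tts)
```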
Abstract:
Provided is a method of providing an automatic speech translation service. The method includes, by an automatic speech translation device of a user, searching for and finding a nearby automatic speech translation device based on the strength of a signal for wireless communication, exchanging information for automatic speech translation with the found automatic speech translation device, generating a list of candidate devices for the automatic speech translation using the exchanged information and the signal strength, and connecting to a candidate device having the greatest variation in signal strength among the devices in the generated list.
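The final selection step, connecting to the candidate whose signal strength varies the most (for example, a device that is approaching the user), can be illustrated as below. The RSSI history structure, the use of standard deviation as the variation measure, and all names are assumptions for illustration.

```python
# Illustrative sketch: pick the candidate device with the greatest
# signal-strength variation among devices that support speech translation.
from statistics import pstdev

def pick_candidate(rssi_history: dict[str, list[int]],
                   supports_translation: dict[str, bool]) -> str | None:
    """Return the candidate device ID with the greatest RSSI variation."""
    candidates = {
        dev: samples for dev, samples in rssi_history.items()
        if supports_translation.get(dev) and len(samples) >= 2
    }
    if not candidates:
        return None
    # Variation measured here as the population standard deviation of RSSI (dBm).
    return max(candidates, key=lambda dev: pstdev(candidates[dev]))

# Usage: RSSI samples (dBm) collected while scanning for nearby devices.
rssi_history = {
    "device_1": [-70, -68, -60, -52],   # getting closer -> large variation
    "device_2": [-55, -56, -55, -54],   # stationary -> small variation
}
supports_translation = {"device_1": True, "device_2": True}
target = pick_candidate(rssi_history, supports_translation)  # "device_1"
```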
Abstract:
The present invention provides an interface device for processing a voice of a user, and a method thereof, which efficiently output various information so as to allow the user to contribute to voice recognition or automatic interpretation. To this end, the interface device for processing a voice of a user includes an utterance input unit configured to receive an utterance of a user, an utterance end recognizing unit configured to recognize the end of the input utterance, and an utterance result output unit configured to output at least one of a voice recognition result, a translation result, and an interpretation result of the ended utterance.
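The three units named in the abstract can be sketched as a single interface class: an utterance input unit that buffers audio frames, an utterance end recognizing unit (here approximated by a simple silence-duration rule), and an utterance result output unit. The thresholds, method names, and silence heuristic are illustrative assumptions, not the claimed design.

```python
# Minimal sketch of the interface device's three units.
class VoiceInterface:
    def __init__(self, silence_threshold: float = 0.01, max_silent_frames: int = 30):
        self.silence_threshold = silence_threshold
        self.max_silent_frames = max_silent_frames
        self._silent_frames = 0
        self._frames: list[list[float]] = []

    def input_frame(self, frame: list[float]) -> None:
        """Utterance input unit: buffer one audio frame from the user."""
        self._frames.append(frame)
        energy = sum(x * x for x in frame) / len(frame)
        self._silent_frames = self._silent_frames + 1 if energy < self.silence_threshold else 0

    def utterance_ended(self) -> bool:
        """Utterance end recognizing unit: end after a run of silent frames."""
        return self._silent_frames >= self.max_silent_frames

    def output_results(self, recognition: str, translation: str, interpretation: str) -> None:
        """Utterance result output unit: present recognition, translation, and interpretation."""
        print("Recognition:   ", recognition)
        print("Translation:   ", translation)
        print("Interpretation:", interpretation)
```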