Abstract:
A voice recognition system divides the search space for voice recognition into a general domain search space and a specific domain search space. A mobile terminal receives a voice recognition target word from a user; a voice recognition server divides the search space into the two domain search spaces, stores them, and performs voice recognition on the target word by linking the general domain search space with the specific domain search space.
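The two-tier search described above can be sketched as follows. This is a minimal illustration, not the patent's method: the vocabularies, scores, and domain-boost heuristic are all invented for the example.

```python
# Hypothetical sketch of a two-tier search: hypotheses are first filtered
# against a general-domain vocabulary, then "linked" with a specific-domain
# vocabulary by boosting hypotheses that also appear there.

def recognize(word_scores, general_vocab, domain_vocabs, domain):
    """Score hypotheses against the general search space, then rescore
    via linkage with the selected specific-domain search space."""
    # First pass: keep only hypotheses present in the general domain.
    candidates = {w: s for w, s in word_scores.items() if w in general_vocab}
    # Linkage: boost hypotheses also present in the specific domain.
    specific = domain_vocabs.get(domain, set())
    return max(candidates, key=lambda w: candidates[w] + (1.0 if w in specific else 0.0))

general = {"call", "map", "weather"}
domains = {"navigation": {"map", "route"}}
scores = {"call": 0.4, "map": 0.35}
print(recognize(scores, general, domains, "navigation"))  # "map" wins after the domain boost
```

Without the domain linkage, "call" would win on raw score alone; the specific-domain boost is what lets the narrower context override the general-domain ranking.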
Abstract:
A speech recognition method capable of automatic generation of phones according to the present invention includes: unsupervisedly learning a feature vector of speech data; generating a phone set by clustering acoustic features selected based on an unsupervised learning result; allocating a sequence of phones to the speech data on the basis of the generated phone set; and generating an acoustic model on the basis of the sequence of phones and the speech data to which the sequence of phones is allocated.
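The pipeline above can be illustrated with a toy sketch: cluster frame-level acoustic feature vectors into a phone set, then label each frame with its nearest cluster to form a phone sequence. Plain k-means stands in for the unsupervised learning step; every detail here is an assumption for illustration, not the patent's actual method.

```python
import random

def kmeans(frames, k, iters=20, seed=0):
    """Cluster feature vectors; each cluster acts as an auto-generated phone."""
    rng = random.Random(seed)
    centers = rng.sample(frames, k)
    for _ in range(iters):
        # Assign each frame to its nearest center (squared Euclidean distance).
        groups = [[] for _ in range(k)]
        for f in frames:
            i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
            groups[i].append(f)
        # Recompute each center as the mean of its assigned frames.
        for i, g in enumerate(groups):
            if g:
                centers[i] = tuple(sum(x) / len(g) for x in zip(*g))
    return centers

def phone_sequence(frames, centers):
    """Allocate a sequence of phones (cluster indices) to the speech frames."""
    return [min(range(len(centers)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
            for f in frames]

frames = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 4.9)]
centers = kmeans(frames, k=2)
print(phone_sequence(frames, centers))  # two frames per discovered phone class
```

The resulting phone sequence, paired with the original speech data, is what an acoustic model would then be trained on in the final step of the method.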
Abstract:
A method of generating a sympathetic back-channel signal is provided. The method includes receiving a voice signal from a user; determining, at a predetermined timing, whether that timing is a timing at which a back-channel signal should be output in response to the input voice signal; storing the voice signal input so far if the determination finds that the timing is a back-channel timing; determining back-channel signal information based on the stored voice signal; and outputting the determined back-channel signal information.
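The flow above can be sketched as a small buffering loop. This is a stand-in illustration: the timing rule (a low-energy pause after enough buffered speech), the thresholds, and the response strings are all invented, not taken from the patent.

```python
class BackChanneler:
    def __init__(self, min_frames=3):
        self.buffer = []          # voice signal stored so far
        self.min_frames = min_frames

    def feed(self, frame_energy):
        """Feed one frame; return a back-channel string at timing, else None."""
        self.buffer.append(frame_energy)
        # Timing rule (assumed): a low-energy frame (pause) after enough speech.
        if frame_energy < 0.1 and len(self.buffer) >= self.min_frames:
            signal = self._choose(self.buffer)
            self.buffer = []      # outputting consumes the stored signal
            return signal
        return None

    def _choose(self, stored):
        # Determine back-channel information from the stored signal:
        # here, map its average energy to a sympathetic response.
        avg = sum(stored) / len(stored)
        return "Wow!" if avg > 0.5 else "Mm-hmm."

bc = BackChanneler()
out = None
for e in [0.8, 0.9, 0.7, 0.05]:
    out = bc.feed(e) or out
print(out)  # "Wow!" — emitted at the pause after energetic speech
```

The key structural points mirror the claim: nothing is output until the timing check fires, and the response is derived from the whole buffered signal, not just the last frame.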
Abstract:
Disclosed is an apparatus for speech recognition and automatic translation that operates on a PC or a mobile device. The apparatus for speech recognition according to the present invention includes a display unit that displays to a user a screen for selecting a domain, a unit of the speech recognition region sorted in advance for speech recognition; a user input unit that receives the user's domain selection; and a communication unit that transmits the user's selection information for the domain. According to the present invention, an intuitive and simple user interface enables the user to easily select or correct the designated domain of a speech recognition system, improving the accuracy and performance of speech recognition and automatic translation by the designated speech recognition system.
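The client-side flow described above can be sketched as follows. The domain names and the message format are illustrative assumptions; the patent does not specify them.

```python
import json

# Previously sorted speech recognition regions (domain names are invented).
DOMAINS = ["travel", "medical", "business"]

def select_domain(choice_index):
    """Validate the user's selection and build the message the
    communication unit would transmit to the recognition server."""
    if not 0 <= choice_index < len(DOMAINS):
        raise ValueError("unknown domain selection")
    return json.dumps({"type": "domain_select", "domain": DOMAINS[choice_index]})

print(select_domain(1))  # {"type": "domain_select", "domain": "medical"}
```

Transmitting the selection rather than resolving it locally matches the apparatus split in the abstract: the display and input units handle the choice, and the communication unit forwards it to the system that performs recognition and translation.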