Abstract:
The present invention provides a method and apparatus for training a duration prediction model, a method and apparatus for duration prediction, and a method and apparatus for speech synthesis. Said method for training a duration prediction model comprises: generating an initial duration prediction model with a plurality of attributes related to duration prediction and at least part of the possible attribute combinations of said plurality of attributes, in which each of said plurality of attributes and said attribute combinations is included as an item; calculating the importance of each said item in said duration prediction model; deleting the item having the lowest calculated importance; re-generating a duration prediction model with the remaining items; determining whether said re-generated duration prediction model is an optimal model; and repeating said step of calculating importance and the following steps if said re-generated duration prediction model is determined not to be an optimal model.
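The training loop described above, which builds a full model, scores each item, drops the weakest, rebuilds, and stops when the model is judged optimal, can be sketched as a generic backward-elimination loop. This is a minimal illustration, not the patent's implementation; `fit`, `importance`, and `is_optimal` are hypothetical placeholders supplied by the caller.

```python
# Hypothetical sketch of the backward-elimination training loop from the
# abstract; fit, importance, and is_optimal are caller-supplied placeholders.

def train_duration_model(attributes, combinations, fit, importance, is_optimal):
    """Iteratively drop the least important item until the model is optimal.

    attributes/combinations: candidate items for the duration model.
    fit(items)              -> model trained on the given items.
    importance(model, item) -> numeric importance of item in model.
    is_optimal(model)       -> True when the stopping criterion is met.
    """
    items = list(attributes) + list(combinations)
    model = fit(items)                      # initial model with all items
    while not is_optimal(model) and len(items) > 1:
        # delete the item having the lowest calculated importance
        weakest = min(items, key=lambda it: importance(model, it))
        items.remove(weakest)
        model = fit(items)                  # re-generate with remaining items
    return model, items
```

With toy stand-ins (a tuple "model", a fixed importance table, and "optimal" meaning two items remain), the loop drops the lowest-scoring items one at a time.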
Abstract:
One provides (101) a plurality of frames of sampled audio content and then processes (102) that plurality of frames using a speech recognition search process that comprises, at least in part, searching for at least two of state boundaries, subword boundaries, and word boundaries using different search resolutions.
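A coarse-to-fine boundary search of this general kind, where a boundary is located on a coarse frame grid and then refined at a finer resolution, can be sketched generically. The abstract does not specify the mechanics, so the windowing scheme and the `score` function below are assumptions, standing in for the recognizer's boundary likelihood.

```python
# Hypothetical sketch of searching for a boundary at two different
# resolutions: a coarse pass over the frames, then a fine pass in a window
# around the coarse estimate. The score function is a stand-in for the
# recognizer's boundary likelihood and is an assumption, not the patent's.

def find_boundary(num_frames, score, coarse_step, fine_step):
    """Return the frame index maximizing score: coarse pass, then fine pass."""
    coarse = max(range(0, num_frames, coarse_step), key=score)
    lo = max(0, coarse - coarse_step)           # refine around coarse estimate
    hi = min(num_frames, coarse + coarse_step + 1)
    return max(range(lo, hi, fine_step), key=score)
```

Word boundaries might use a coarse step and state boundaries a fine one, which is the sense in which different boundary types get different search resolutions.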
Abstract:
The acoustic model generating method for speech recognition achieves high representational power with the minimum possible number of model parameters. Starting from an initial model having a small number of signal sources, the acoustic model for speech recognition is generated by successively and repeatedly selecting either splitting processing or merging processing for the signal sources. The merging processing is executed prior to the splitting processing. In the merging processing, when the merged result is not appropriate, the splitting processing is executed on the model obtained before the merging processing (without use of the merged result). Further, the splitting processing offers three methods: (1) splitting a signal source into two and reconstructing a shared structure between a plurality of states having common signal sources to be split, (2) splitting one state into two states corresponding to different phoneme context categories in the phoneme-context direction, and (3) splitting one state into two states corresponding to different speech sections in the time direction. One of the three methods is selected by computing the maximum likelihood for each of the three splitting candidates and choosing the splitting step for which the highest likelihood is obtained.
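The final selection step, evaluating the three candidate splits and keeping the one with the highest likelihood, is an argmax over candidates. A minimal sketch, in which the three split functions and the likelihood scorer are hypothetical placeholders rather than the patent's procedures:

```python
# Hypothetical sketch of selecting among three splitting methods by maximum
# likelihood; the split functions and the scorer are placeholders.

def select_split(model, split_fns, likelihood):
    """Apply each candidate split, score it, and return the best result.

    split_fns: e.g. [split_signal_source, split_phoneme_context, split_time]
    likelihood(split_model) -> likelihood of the split model on training data.
    """
    candidates = [fn(model) for fn in split_fns]
    scores = [likelihood(c) for c in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best], scores[best]
```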
Abstract:
A method of enrolling phone-based speaker-specific commands includes the first step of providing a set (H) of speaker-independent phone-based Hidden Markov Models (HMMs), a grammar (G) comprising a loop of phones with optional between-word silence (BWS), and two utterances U1 and U2 of the command produced by the enrollment speaker, wherein the first frames of the first utterance contain only background noise. The processor generates a sequence of phone-like HMMs and the number of HMMs in that sequence as output. The second step performs model mean adjustment to suit the enrollment microphone and speaker characteristics and performs segmentation. The third step generates an HMM for each segment of utterance U1, except for silence. The fourth step re-estimates the HMM using both utterances U1 and U2.
Abstract:
A voice recognition system (204, 206, 207, 208) assigns a penalty to a recognition score. The system generates a lower threshold and an upper threshold for the number of frames assigned to at least one state of at least one model. The system assigns an out-of-state transition penalty to an out-of-state transition score in an allocation assignment algorithm if the lower threshold has not been met. The out-of-state transition penalty is proportional to the number of frames by which the dwell time falls below the lower threshold. A self-loop penalty is applied to a self-loop score if the upper threshold number of frames assigned to a state has been exceeded. The self-loop penalty is proportional to the number of frames by which the dwell time exceeds the upper threshold.
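The two penalties described, one proportional to how far the dwell time falls below the lower threshold and one proportional to how far it exceeds the upper threshold, can be sketched as simple piecewise-linear functions. The proportionality constants here are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch of the dwell-time penalties from the abstract; the
# proportionality constant k is illustrative, not from the patent.

def out_of_state_penalty(dwell, lower, k=1.0):
    """Penalty for leaving a state before its lower dwell-time threshold."""
    return k * max(0, lower - dwell)

def self_loop_penalty(dwell, upper, k=1.0):
    """Penalty for staying in a state beyond its upper dwell-time threshold."""
    return k * max(0, dwell - upper)
```

Within the thresholds, both penalties are zero, so durations in the expected range are unaffected by the adjustment.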
Abstract:
A method and an apparatus for a parameter sharing speech recognition system are provided. Speech signals are received into a processor of a speech recognition system. The speech signals are processed using a speech recognition system hosting a shared hidden Markov model (HMM) produced by generating a number of phoneme models, some of which are shared. The phoneme models are generated by retaining as a separate phoneme model any triphone model having a number of trained frames available that exceeds a prespecified threshold. A shared phoneme model is generated to represent each of the groups of triphone phoneme models for which the number of trained frames having a common biphone exceeds the prespecified threshold. A shared phoneme model is generated to represent each of the groups of triphone phoneme models for which the number of trained frames having an equivalent effect on a phonemic context exceeds the prespecified threshold. A shared phoneme model is generated to represent each of the groups of triphone phoneme models having the same center context. The generated phoneme models are trained, and shared phoneme model states are generated that are shared among the phoneme models. Shared probability distribution functions are generated that are shared among the phoneme model states. Shared probability sub-distribution functions are generated that are shared among the phoneme model probability distribution functions. The shared phoneme model hierarchy is reevaluated for further sharing in response to the shared probability sub-distribution functions. Signals representative of the received speech signals are generated.
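The threshold-based sharing decision at the top of this hierarchy can be sketched as a back-off: a triphone keeps its own model only if enough trained frames are available, otherwise it falls back to a shared biphone-group model, and finally to a center-context model. The key structure and function names are assumptions for illustration; the patent's grouping criteria are richer than this two-level back-off.

```python
# Hypothetical sketch of threshold-based model sharing: back off from a
# separate triphone model to a shared biphone model to a shared center-phone
# model, based on available trained frames. Names are illustrative only.

def choose_model_key(triphone, frame_counts, threshold):
    """Return the sharing level for a (left, center, right) triphone.

    frame_counts maps a candidate model key to its number of trained frames.
    """
    left, center, right = triphone
    if frame_counts.get(triphone, 0) >= threshold:
        return triphone                    # enough data: separate triphone model
    biphone = (center, right)              # shared model for a common biphone
    if frame_counts.get(biphone, 0) >= threshold:
        return biphone
    return (center,)                       # shared center-context model
```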
Abstract:
A speech synthesis device of an embodiment includes a memory unit, a creating unit, a deciding unit, a generating unit and a waveform generating unit. The memory unit stores, as statistical model information of a statistical model, an output distribution of acoustic feature parameters including pitch feature parameters and a duration distribution. The creating unit creates a statistical model sequence from context information and the statistical model information. The deciding unit decides a pitch-cycle waveform count of each state using a duration based on the duration distribution of each state of each statistical model in the statistical model sequence, and pitch information based on the output distribution of the pitch feature parameters. The generating unit generates an output distribution sequence based on the pitch-cycle waveform count, and acoustic feature parameters based on the output distribution sequence. The waveform generating unit generates a speech waveform from the generated acoustic feature parameters.
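The deciding unit's core computation, turning a state's duration and pitch into a count of pitch-cycle waveforms, amounts to duration times pitch frequency. A minimal sketch with assumed units (seconds and Hz); the rounding and floor-of-one behavior are illustrative choices, not taken from the patent:

```python
# Hypothetical sketch: the number of pitch-cycle waveforms in a state is
# roughly its duration times its pitch frequency. Units (seconds, Hz) and the
# minimum of one cycle are assumptions for illustration.

def pitch_cycle_waveform_count(duration_sec, f0_hz):
    """Round duration * f0 to a whole number of pitch cycles (at least one)."""
    return max(1, round(duration_sec * f0_hz))
```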
Abstract:
In speech recognition, the duration of a phoneme is taken into account when determining recognition scores. Specifically, the duration of a phoneme may be evaluated relative to the duration of neighboring phonemes. A phoneme that is interpreted to be significantly longer or shorter than its neighbors may be given a lower duration score. A duration score for a phoneme may be calculated and used to adjust a recognition score. In this manner a duration model may supplement an acoustic model and language model to improve speech recognition results.
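A duration score evaluated relative to neighboring phonemes, as described, could be as simple as penalizing deviation from the neighbors' mean duration. The log-ratio form below is an illustrative assumption, not the patent's formula; it scores zero for a phoneme matching its neighbors and goes increasingly negative as the phoneme becomes much longer or shorter than they are:

```python
# Hypothetical sketch of a neighbor-relative duration score: a phoneme much
# longer or shorter than its neighbors' average gets a lower (more negative)
# score. The symmetric log-ratio is an illustrative choice.

import math

def duration_score(duration, neighbor_durations):
    """Score 0 when duration matches the neighbors' mean; negative otherwise."""
    mean = sum(neighbor_durations) / len(neighbor_durations)
    return -abs(math.log(duration / mean))
```

Such a score could then be added, with some weight, to the acoustic and language model scores when ranking hypotheses.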
Abstract:
Exemplary embodiments of the present invention enhance recognition ability by optimizing the state numbers of the respective HMMs. Exemplary embodiments provide a description length computing unit that finds, using the Minimum Description Length (MDL) criterion, the description length of each syllable HMM for each of a plurality of candidate state numbers, ranging from a given value up to a maximum state number. An HMM selecting unit selects the HMM having the state number for which the description length found by the description length computing unit is a minimum. An HMM re-training unit re-trains the syllable HMM selected by the HMM selecting unit using the training speech data.
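Selecting the state number that minimizes the description length is an argmin over the candidate range. A minimal sketch in which `description_length` is a placeholder for the MDL computation on a trained syllable HMM, not the patent's actual routine:

```python
# Hypothetical sketch of MDL-based state-count selection: evaluate the
# description length for each candidate state number and keep the minimum.
# description_length is a placeholder for the MDL computation.

def select_state_count(min_states, max_states, description_length):
    """Return the state count in [min_states, max_states] with minimal MDL.

    description_length(n) -> MDL of the syllable HMM trained with n states.
    """
    return min(range(min_states, max_states + 1), key=description_length)
```

The selected HMM would then be re-trained on the training speech data, as the abstract describes.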