Abstract:
In a speech synthesizer apparatus, a weighting-coefficient training controller calculates acoustic distances, in second acoustic feature parameters, between a target phoneme and the phoneme candidates of the same phoneme other than the target phoneme, based on first acoustic feature parameters and prosodic feature parameters, and determines weighting-coefficient vectors for the respective target phonemes, defining the degrees of contribution of the second acoustic feature parameters for the respective phoneme candidates, by executing a predetermined statistical analysis. A speech unit selector then searches for the combination of phoneme candidates that corresponds to the phoneme sequence of an input sentence and minimizes a cost comprising a target cost, representing approximate costs between a target phoneme and the phoneme candidates, and a concatenation cost, representing approximate costs between two adjacently concatenated phoneme candidates, and outputs index information on the selected combination of phoneme candidates. Finally, a speech synthesizer synthesizes a speech signal corresponding to the input phoneme sequence by sequentially reading out the speech segments of the speech waveform signals corresponding to the index information and concatenating them.
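The cost-minimizing candidate search described above is, in essence, a dynamic-programming (Viterbi) pass over per-phoneme candidate lists. The sketch below assumes toy one-dimensional features and illustrative cost functions; the patent's actual feature parameters and weighting are not reproduced.

```python
# Hypothetical sketch of a unit-selection search: a Viterbi pass that
# minimizes the sum of per-candidate target costs and pairwise
# concatenation costs. Feature vectors and cost functions are toy
# stand-ins, not the patent's trained weighting coefficients.

def select_units(targets, candidates, target_cost, concat_cost):
    """targets: per-slot target features; candidates: per-slot candidate
    lists. Returns the candidate-index sequence with minimal total cost."""
    n = len(targets)
    # best[i][j] = minimal cost of a path ending at candidate j of slot i
    best = [[target_cost(targets[0], c) for c in candidates[0]]]
    back = [[-1] * len(candidates[0])]
    for i in range(1, n):
        row, brow = [], []
        for cand in candidates[i]:
            tc = target_cost(targets[i], cand)
            costs = [best[i - 1][k] + concat_cost(prev, cand)
                     for k, prev in enumerate(candidates[i - 1])]
            k = min(range(len(costs)), key=costs.__getitem__)
            row.append(costs[k] + tc)
            brow.append(k)
        best.append(row)
        back.append(brow)
    # backtrack from the cheapest final candidate
    j = min(range(len(best[-1])), key=best[-1].__getitem__)
    path = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        path.append(j)
    return path[::-1]

# Toy 1-D features: target cost = |candidate - target|,
# concatenation cost = |join discontinuity|.
idx = select_units(
    targets=[1.0, 2.0, 3.0],
    candidates=[[0.9, 5.0], [2.1, 9.0], [2.9, 0.0]],
    target_cost=lambda t, c: abs(t - c),
    concat_cost=lambda a, b: abs(b - a),
)
print(idx)  # the low-cost candidates: [0, 0, 0]
```

The returned indices play the role of the abstract's "index information": they identify which stored speech segment to read out for each slot.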
Abstract:
Voice synthesis with improved expressivity is obtained in a voice synthesiser of source-filter type by making use of a library of source sound categories in the source module. Each source sound category corresponds to a particular morphological category and is derived from analysis of real vocal sounds, by inverse filtering so as to subtract the effect of the vocal tract. The library may be parametrical, that is, the stored data corresponds not to the inverse-filtered sounds themselves but to synthesis coefficients for resynthesising the inverse-filtered sounds using any suitable re-synthesis technique, such as the phase vocoder technique. The coefficients are derived by STFT analysis.
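The inverse-filtering step above can be sketched with an all-pole (LPC) vocal-tract model: fit the model to a recorded frame, then apply the inverse prediction-error filter to recover the source residual. The LPC model and its order are assumptions for illustration; the patent states only that the vocal-tract effect is subtracted by inverse filtering.

```python
# Minimal inverse-filtering sketch under an assumed all-pole model:
# estimate LPC coefficients (Levinson-Durbin) and convolve the frame
# with the prediction-error filter A(z) to recover the excitation.
import math

def lpc(frame, order):
    """Levinson-Durbin solution of the autocorrelation equations."""
    n = len(frame)
    r = [sum(frame[i] * frame[i + k] for i in range(n - k))
         for k in range(order + 1)]
    a = [1.0] + [0.0] * order
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a = [a[j] + k * a[i - j] for j in range(i + 1)] + a[i + 1:]
        err *= 1 - k * k
    return a  # prediction-error filter A(z)

def inverse_filter(frame, a):
    """Convolve the frame with A(z) to obtain the source residual."""
    return [sum(a[k] * frame[t - k] for k in range(len(a)) if t - k >= 0)
            for t in range(len(frame))]

# A decaying resonance: a unit impulse through a two-pole "vocal tract".
b1, b2 = 2 * 0.9 * math.cos(0.3), -0.81
frame = [0.0] * 200
frame[0] = 1.0
for t in range(1, 200):
    frame[t] = b1 * frame[t - 1] + b2 * (frame[t - 2] if t >= 2 else 0.0)

residual = inverse_filter(frame, lpc(frame, order=2))
# The residual is (near) a unit impulse: the resonance is removed.
```

In the patent's scheme, such residuals (the source-sound categories) would then be re-analyzed, e.g. by STFT, and stored as resynthesis coefficients rather than as raw waveforms.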
Abstract:
Data in the same fundamental-frequency (F0) range as the speech segments are used as learning data to prepare a reference codebook CB_M for the spectrum envelope. The same learning data for a range higher than F0 and for a lower range are subjected to linear stretch matching with respect to the learning data for the F0 range. For each code vector in the reference codebook CB_M, the spectrum envelope is clustered to prepare a high-range codebook CB_H and a low-range codebook CB_L. The spectrum envelope of the input speech segments is fuzzy vector quantized (S402) with the reference codebook, and depending on the synthesized F0, the high, middle, or low codebook is selected. The selected codebook is used to decode the fuzzy-vector-quantized code, and the decoded output is subjected to an inverse FFT. Alternatively, codebooks CB_MH and CB_ML are prepared, each comprising the differential vectors between corresponding code vectors of CB_M and CB_H and of CB_M and CB_L, respectively. The quantized code is decoded using either CB_MH or CB_ML, and the decoded differential vector is stretched in accordance with the difference in fundamental frequency between the synthesized speech and the original speech for CB_M. The stretched differential vector is added to the code vector that was used for the fuzzy vector quantization.
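The first decode path above (quantize against CB_M, decode with the F0-selected codebook) can be sketched as follows. Fuzzy-c-means-style memberships and the toy 2-D "envelopes" are assumptions; the patent does not fix the membership function.

```python
# Illustrative fuzzy-VQ sketch: soft memberships against the reference
# codebook CB_M, then decoding with the corresponding code vectors of
# the codebook selected by the synthesized F0 (here, CB_H).

def memberships(x, codebook, m=2.0):
    """Fuzzy-c-means-style memberships of x in each code vector."""
    d = [sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5 for c in codebook]
    if any(di == 0 for di in d):               # exact hit on a code vector
        return [1.0 if di == 0 else 0.0 for di in d]
    p = 2.0 / (m - 1.0)
    return [1.0 / sum((di / dj) ** p for dj in d) for di in d]

def decode(weights, codebook):
    """Weighted sum of code vectors under the fuzzy memberships."""
    return [sum(w * c[k] for w, c in zip(weights, codebook))
            for k in range(len(codebook[0]))]

CB_M = [[1.0, 0.0], [0.0, 1.0]]   # reference (middle-F0) codebook
CB_H = [[2.0, 0.0], [0.0, 2.0]]   # clustered high-F0 counterpart

w = memberships([0.8, 0.2], CB_M)      # quantize with the reference
env_h = decode(w, CB_H)                # decode with the high codebook
print(env_h)
```

The alternative path would instead decode a differential vector from CB_MH or CB_ML, stretch it by the F0 difference, and add it to the CB_M code vector used in quantization.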
Abstract:
An envelope-invariant method for synthesizing an audio signal from elementary audio waveforms stored in a dictionary. The waveforms are perfectly periodic, and each is stored as a single one of its periods. Each period is extracted from a reharmonized, and thus periodic, waveform obtained by modifying, without changing the spectral envelope, the frequencies and amplitudes of the harmonics in the spectrum of a frame of the original continuous speech waveform. Synthesis is obtained by overlap-adding waveforms produced by time-domain repetition of the periodic waveforms, each weighted by a window whose size is approximately twice the period of the signal to be weighted and whose relative position inside the period is fixed to a value identical for all periods. The time shift between two successive weighted waveforms is set according to the imposed fundamental frequency of the signal to be synthesized.
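The overlap-add step can be sketched as below: one stored period is repeated, each repetition is weighted by a two-period window, and the windowed segments are added at a spacing set by the target fundamental frequency. A Hann window is an assumption; the patent requires only a fixed relative window position for every period.

```python
# Overlap-add sketch: repeat a single stored period, weight each
# repetition with a Hann window of twice the period, and sum the
# windowed segments at the target pitch spacing.
import math

def overlap_add(period, target_shift, n_out):
    p = len(period)
    # Hann window spanning two periods; overlapped copies at shift p sum to 1
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / (2 * p))
           for n in range(2 * p)]
    out = [0.0] * n_out
    t = 0
    while t + 2 * p <= n_out:
        for n in range(2 * p):
            # time-domain repetition of the single stored period
            out[t + n] += win[n] * period[n % p]
        t += target_shift            # shift = target pitch period
    return out

period = [math.sin(2 * math.pi * n / 50) for n in range(50)]
raised = overlap_add(period, target_shift=40, n_out=400)  # higher pitch
```

With the shift equal to the stored period, the Hann windows satisfy the constant-overlap-add property and the original periodic signal is reproduced exactly in the fully overlapped region; a smaller shift raises the pitch while the window preserves the spectral envelope.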
Abstract:
Synthetic speech is generated by producing a digital waveform from a text in phonemes. A linked database is used which comprises an extended text in phonemes and its equivalent in the form of a digital waveform. The two portions of the database are linked by a parameter which establishes equivalent points in both the phoneme text and the digital waveform. The input text (in phonemes) is analyzed to locate a matching portion in the phoneme portion of the database. This matching uses exact equivalence of phonemes where possible; otherwise a relation between phonemes is used. The selection process identifies input phonemes in context, whereby improved conversions are obtained. Once the input text has been analyzed into matching strings in the phoneme portion of the database, beginning and ending parameters for the sections are established. The output is produced by abutting the sections of the digital waveform defined by the beginning and ending parameters.
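The matching-and-linking idea can be sketched as follows: the input phoneme string is covered by sections that occur in the database's phoneme text, and each matched section is mapped to its waveform span via per-phoneme alignment marks (the "linking parameter"). The greedy longest-match policy and all names here are illustrative assumptions.

```python
# Sketch: cover the input phoneme string with the longest substrings
# of the database phoneme text, then translate each matched section
# into a digital-waveform span through boundary alignment marks.

def cover_with_longest_matches(text, db_text):
    """Greedily cover `text` with the longest substrings of `db_text`;
    returns (database offset, length) per section."""
    sections = []
    i = 0
    while i < len(text):
        best = None
        for j in range(len(text), i, -1):       # try longest first
            k = db_text.find(text[i:j])
            if k >= 0:
                best = (k, j - i)
                break
        if best is None:
            raise ValueError(f"phoneme {text[i]!r} not in database")
        sections.append(best)
        i += best[1]
    return sections

db_text = "hɛloʊwɜrld"                  # database phoneme text (10 phonemes)
marks = list(range(0, 11 * 100, 100))   # waveform sample index of each
                                        # phoneme boundary (the link)
secs = cover_with_longest_matches("wɜloʊ", db_text)
# Beginning and ending parameters for each section's waveform:
spans = [(marks[k], marks[k + n]) for k, n in secs]
print(secs, spans)
```

The final waveform would then be produced by abutting the samples of each span in order.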
Abstract:
A speech synthesis apparatus includes: a memory for storing a plurality of typical waveforms corresponding to a plurality of frames, each typical waveform previously obtained by extraction, in units of at least one frame, from a prediction error signal formed in predetermined units; a voiced speech source generator including an interpolation circuit for interpolating between the typical waveforms read out from the memory to obtain a plurality of interpolation signals, each having at least one of an interpolation pitch period and a signal level which changes smoothly between the corresponding frames; a superposition circuit for superposing the interpolation signals obtained by the interpolation circuit to form a voiced speech source signal; an unvoiced speech source generator for generating an unvoiced speech source signal; and a vocal tract filter selectively driven by the voiced speech source signal output from the voiced speech source generator and the unvoiced speech source signal from the unvoiced speech source generator to generate synthetic speech. Further, the interpolation positions can be determined based on the pitch period.
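The interpolation-and-superposition stage can be sketched as below: two "typical" one-period excitation waveforms from adjacent frames are linearly interpolated so each pitch period placed between the frames changes smoothly in shape and level, and the results are superposed at successive pitch marks. Linear interpolation and the toy waveforms are assumptions for illustration.

```python
# Sketch of the voiced-source generator: morph between two typical
# waveforms, then superpose the morphed periods at the pitch spacing.

def interpolate_periods(w_a, w_b, n_periods):
    """Generate n_periods waveforms morphing linearly from w_a to w_b."""
    out = []
    for i in range(n_periods):
        alpha = i / (n_periods - 1) if n_periods > 1 else 0.0
        out.append([(1 - alpha) * a + alpha * b
                    for a, b in zip(w_a, w_b)])
    return out

def superpose(periods, pitch_period):
    """Place each interpolated waveform at successive pitch marks."""
    total = pitch_period * (len(periods) - 1) + len(periods[0])
    out = [0.0] * total
    for i, w in enumerate(periods):
        for n, v in enumerate(w):
            out[i * pitch_period + n] += v
    return out

w_a = [1.0, 0.5, 0.0, 0.0]     # typical waveform, frame k
w_b = [0.0, 0.0, 0.5, 1.0]     # typical waveform, frame k+1
src = superpose(interpolate_periods(w_a, w_b, 3), pitch_period=4)
print(src)
```

In the apparatus, this signal would drive the vocal tract filter for voiced segments, while an unvoiced source (e.g. noise) drives it elsewhere.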
Abstract:
The pitch of synthesized speech signals is varied by separating the speech signals into a spectral component and an excitation component. The latter is multiplied by a series of overlapping window functions synchronous, in the case of voiced speech, with pitch timing mark information corresponding at least approximately to instants of vocal excitation, to separate it into windowed speech segments which are added together again after the application of a controllable time-shift. The spectral and excitation components are then recombined. The multiplication employs at least two windows per pitch period, each having a duration of less than one pitch period. Alternatively each window has a duration of less than twice the pitch period between timing marks and is asymmetric about the timing mark.
Abstract:
A synthesis method in which the part of each interval of the original signal that contains the fundamental information is left unchanged, and only the remaining part of the interval is altered. In this way, not only is processing time reduced, but the natural sound of the synthetic signal is also improved, since the main part of each interval is an exact reproduction of the original signal. At least the waveforms associated with voiced sounds are subdivided into a plurality of intervals, corresponding to the responses of the vocal tract to a series of excitation impulses of the vocal cords, synchronous with the fundamental frequency of the signal. Each interval is subjected to a weighting. The signals resulting from the weighting are replaced with replicas thereof shifted in time by an amount that depends on prosodic information. Synthesis is then carried out by overlapping and adding the shifted signals. In each interval of the original signal to be reproduced in synthesis, an unchanging part is identified which contains the fundamental information and is reproduced unaltered in the synthesized signal; the operations of weighting, overlapping, and adding involve only the remaining part of the interval. A suitable division between the unchanging and variable parts is found by searching among all the zero crossings.
Abstract:
The invention relates to a method and an arrangement for speech synthesis and provides an automatic mechanism for simulating human speech. The method provides a number of control parameters for controlling a speech synthesis device. The invention solves the problem of coarticulation by using an interpolation mechanism. The control parameters are stored in a matrix or a sequence list for each polyphone. The behaviour of each parameter over time is defined around each phoneme boundary, and polyphones are joined by forming a weighted mean value of the curves defined by their two associated matrices/sequence lists. The invention also provides an arrangement for carrying out the method.
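The weighted-mean joining rule can be sketched as a cross-fade between the two parameter curves defined around a shared phoneme boundary. The linear weight ramp and the example formant values are assumptions; the patent calls only for a weighted mean of the two curves.

```python
# Sketch of joining two polyphones: cross-fade the control-parameter
# curve of polyphone A into that of polyphone B over the overlap region
# around the common phoneme boundary.

def join_tracks(track_a, track_b):
    """Weighted mean of two equal-length parameter curves, with the
    weight sliding linearly from A to B."""
    n = len(track_a)
    joined = []
    for i in range(n):
        w = i / (n - 1)               # 0 -> all A, 1 -> all B
        joined.append((1 - w) * track_a[i] + w * track_b[i])
    return joined

# e.g. a (hypothetical) formant-frequency parameter in Hz, as defined
# by each polyphone over the frames surrounding the boundary:
a = [500.0, 520.0, 540.0, 560.0, 580.0]
b = [700.0, 680.0, 660.0, 640.0, 620.0]
print(join_tracks(a, b))  # starts at A's value, ends at B's
```

The same rule applies per parameter row of the stored matrices, so the whole control-parameter set transitions smoothly across each join.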
Abstract:
Speech generation from phonetic code is carried out by a microcomputer based system which stores digitized waveform segments and appropriately joins the segments and outputs them to a digital to analog converter and then to a speaker. An allophone is generated for each phoneme designated by the phonetic codes according to the articulation type of each adjacent phoneme. Each phoneme is classified as neutral, labial, glottal, or medial according to its effect on the articulation of adjacent phonemes. Each phoneme is characterized by at least one center waveform dependent on the phonetic code, and an initial waveform and a final waveform, each of which depend on the phonetic code and the articulation type of the neighboring phoneme. Tables of waveform pointers are accessed according to phonetic code and articulation type, and other tables provide articulation types, times of each waveform portion, transition rate, fricative state, and pitch for each phonetic code. Adjacent waveforms are gradually blended together. Continuously varying center waveforms are afforded by indexing through successive waveform pointers at a given rate during the center phoneme period, the rate and the period being retrieved from the tables.
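The table-driven allophone construction above can be sketched as follows: each phoneme receives an initial waveform chosen by the articulation type of its left neighbor, a center waveform, and a final waveform chosen by the type of its right neighbor. The articulation classes come from the abstract, but the table contents and waveform IDs are invented placeholders.

```python
# Sketch of allophone selection by adjacent articulation type. The
# phoneme-to-class table and the waveform-ID naming are hypothetical.

ARTICULATION = {"p": "labial", "b": "labial", "h": "glottal",
                "t": "medial", "d": "medial", "a": "neutral",
                "o": "neutral"}

def waveform_ids(phonemes):
    """Return (initial, center, final) waveform IDs for each phoneme."""
    out = []
    for i, ph in enumerate(phonemes):
        left = ARTICULATION[phonemes[i - 1]] if i > 0 else "neutral"
        right = (ARTICULATION[phonemes[i + 1]]
                 if i + 1 < len(phonemes) else "neutral")
        out.append((f"{ph}-init-{left}",     # depends on left neighbor
                    f"{ph}-center",          # depends on phoneme only
                    f"{ph}-final-{right}"))  # depends on right neighbor
    return out

print(waveform_ids(["h", "a", "t"]))
```

In the described system these IDs would index tables of waveform pointers, with further tables supplying durations, transition rates, fricative state, and pitch, and adjacent waveforms blended gradually at the joins.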