Abstract:
A method of analysing a sequence of images, for example a sequence of images from a video signal, in which the amount by which the image changes between two images in the sequence is used to classify the sequence as either a cartoon or a non-cartoon sequence.
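The abstract does not give a concrete decision rule, but one plausible reading is that cartoons, being animated at a lower effective frame rate, show many consecutive frame pairs with almost no change punctuated by occasional large jumps. The sketch below (all thresholds and the greyscale-frame representation are assumptions, not from the abstract) classifies on the fraction of near-static frame pairs:

```python
def classify_sequence(frames, change_threshold=10.0, static_fraction=0.5):
    """Classify a sequence of greyscale frames (flat lists of pixel values)
    as 'cartoon' or 'non-cartoon' from inter-frame change.

    Hypothetical heuristic: cartoons hold each drawing for several frames,
    so a large fraction of consecutive frame pairs are nearly identical.
    """
    diffs = []
    for a, b in zip(frames, frames[1:]):
        # Mean absolute pixel difference between consecutive frames.
        diffs.append(sum(abs(x - y) for x, y in zip(a, b)) / len(a))
    static = sum(d < change_threshold for d in diffs) / len(diffs)
    return "cartoon" if static >= static_fraction else "non-cartoon"
```

A held-drawing sequence (each image repeated for three frames) scores a high static fraction, while smooth live-action motion changes a little on every frame.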
Abstract:
Fraudulent callers that masquerade as legitimate callers in order to discover details of bank accounts or other accounts are an increasing problem. In order to detect possible fraudsters and prevent them from obtaining such details, a method and system are proposed that transform the recorded speech of a batch of incoming calls into strings of phonemes or text. Thereafter, similar speech patterns in the recorded speech, such as distinct similar phrases or wording, are determined, and calls having similar speech patterns, and preferably also similar acoustic properties, are grouped together and identified as being from the same fraudulent caller. Transactions initiated by the fraudulent caller can as a result be stopped, and preferably a voiceprint of the fraudulent caller's speech is generated and stored in a database for further use.
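The abstract leaves the similarity measure and grouping algorithm open. As one minimal sketch of the grouping step (the use of `difflib.SequenceMatcher` and the 0.8 threshold are assumptions for illustration), calls whose transcripts are highly similar in wording can be placed in the same group:

```python
from difflib import SequenceMatcher

def group_similar_calls(transcripts, threshold=0.8):
    """Group call transcripts (phoneme strings or text) whose wording is
    highly similar; calls in the same group are candidates for coming
    from the same repeat (potentially fraudulent) caller.

    Greedy single-pass clustering: each transcript joins the first group
    whose representative it resembles closely enough, else starts a new group.
    """
    groups = []  # each group is a list of transcript indices
    for i, t in enumerate(transcripts):
        for group in groups:
            representative = transcripts[group[0]]
            if SequenceMatcher(None, t, representative).ratio() >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

Two calls repeating an almost identical scripted phrase land in one group, while an unrelated enquiry forms its own group; a production system would, per the abstract, additionally compare acoustic properties before flagging a caller.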
Abstract:
A speaker verification method is proposed that first builds a general model of user utterances using a set of general training speech data. The user also trains the system by providing a training utterance, such as a passphrase or other spoken utterance. Then in a test phase, the user provides a test utterance which includes some background noise as well as a test voice sample. The background noise is used to bring the condition of the training data closer to that of the test voice sample by modifying the training data and a reduced set of the general data, before creating adapted training and general models. Match scores are generated based on the comparison between the adapted models and the test voice sample, with a final match score calculated based on the difference between the match scores. This final match score gives a measure of the degree of matching between the test voice sample and the training utterance and is based on the degree of matching between the speech characteristics from extracted feature vectors that make up the respective speech signals, and is not a direct comparison of the raw signals themselves. Thus, the method can be used to verify a speaker without necessarily requiring the speaker to provide an identical test phrase to the phrase provided in the training sample.
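The final score described above, the difference between the match score against the adapted training (speaker) model and the match score against the adapted general model, has the shape of a likelihood-ratio test. The sketch below illustrates only that scoring step, with single 1-D Gaussians standing in for the adapted models (the Gaussian form and all values are illustrative assumptions, not the abstract's models):

```python
import math

def log_gaussian(x, mean, var):
    """Log-likelihood of a 1-D feature value under a Gaussian model."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def verification_score(test_features, speaker_model, general_model):
    """Average log-likelihood ratio of the test features under the
    (noise-adapted) speaker model versus the (noise-adapted) general
    model; a positive score favours the claimed speaker.

    Models are (mean, variance) pairs -- stand-ins for the adapted
    models built in the method described above.
    """
    s = sum(log_gaussian(x, *speaker_model) for x in test_features)
    g = sum(log_gaussian(x, *general_model) for x in test_features)
    return (s - g) / len(test_features)
```

Features drawn near the speaker model's mean yield a positive score (accept), while features near the general model's mean yield a negative one (reject).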
Abstract:
A method and apparatus for pattern recognition comprising comparing an input signal representing an unknown pattern with reference data representing each of a plurality of pre-defined patterns, at least one of the pre-defined patterns being represented by at least two instances of reference data. Successive segments of the input signal are compared with successive segments of the reference data and comparison results for each successive segment are generated. For each pre-defined pattern having at least two instances of reference data, the comparison results for the closest matching segment of reference data for each segment of the input signal are recorded to produce a composite comparison result for the said pre-defined pattern. The unknown pattern is then identified on the basis of the comparison results. Thus the effect of a mismatch between the input signal and each instance of the reference data is reduced by selecting the best segments from the instances of reference data for each pre-defined pattern.
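The per-segment best-instance selection above can be sketched as follows, with scalar segments and absolute difference as the comparison measure (both simplifying assumptions; real reference data would be feature vectors with an appropriate distance):

```python
def composite_distance(input_segments, reference_instances):
    """Composite comparison result for one pre-defined pattern that has
    several reference instances: for each input segment, keep the
    distance to the closest-matching corresponding segment across all
    instances, and sum these per-segment minima.

    A local mismatch with one instance is thus masked whenever another
    instance matches that segment better.
    """
    total = 0.0
    for i, seg in enumerate(input_segments):
        total += min(abs(seg - inst[i]) for inst in reference_instances)
    return total

def recognise(input_segments, patterns):
    """Identify the pattern (a dict of name -> list of reference
    instances) with the smallest composite distance."""
    return min(patterns,
               key=lambda name: composite_distance(input_segments, patterns[name]))
```

For example, an input that mismatches each single instance of pattern "A" in one segment can still score a perfect composite for "A", because the bad segment of one instance is replaced by the good segment of the other.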
Abstract:
Apparatus and method for speaker recognition includes generating, in response to a speech signal, a plurality of feature data having a series of coefficient sets, each set having a plurality of coefficients indicating the short-term spectral amplitude in a plurality of frequency bands. The feature data is compared with predetermined speaker reference data, and recognition of a corresponding speaker is indicated in dependence upon such comparison. The frequency bands are unevenly spaced along the frequency axis, and a long-term average spectral magnitude of at least one of said coefficients is derived and used for normalizing the at least one coefficient.
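The abstract does not specify the band spacing, but uneven spacing along the frequency axis is commonly realised with the mel scale (narrow bands at low frequencies, wide at high); the mel spacing and the divide-by-average normalisation below are illustrative assumptions, not details from the abstract:

```python
import math

def mel_band_edges(n_bands, f_min=0.0, f_max=4000.0):
    """Band edges evenly spaced on the mel scale, hence unevenly
    spaced in hertz: bands widen as frequency increases."""
    def to_mel(f):
        return 2595.0 * math.log10(1.0 + f / 700.0)
    def from_mel(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    lo, hi = to_mel(f_min), to_mel(f_max)
    return [from_mel(lo + (hi - lo) * i / n_bands) for i in range(n_bands + 1)]

def normalise(coeff_sets):
    """Divide each coefficient by its long-term average over the series
    of coefficient sets, so slowly varying channel effects are
    flattened while short-term variation is preserved."""
    n = len(coeff_sets)
    avgs = [sum(frame[k] for frame in coeff_sets) / n
            for k in range(len(coeff_sets[0]))]
    return [[c / a for c, a in zip(frame, avgs)] for frame in coeff_sets]
```

With mel spacing, the lowest band is much narrower in hertz than the highest; after normalisation, each coefficient's long-term average is exactly 1, which is the channel-flattening property the abstract relies on.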