Abstract:
A mixed audio separation system (100), which separates a specific audio from a mixed audio (S100), includes a local frequency information generation unit (105) which obtains pieces of local frequency information (S103) corresponding to local reference waveforms (S102), based on the local reference waveforms (S102) and an analysis waveform, which is the waveform of the mixed audio (S100). Each of the local reference waveforms (S102) (i) constitutes a part of a reference waveform for analyzing a predetermined frequency, (ii) has a predetermined temporal/spatial resolution, and (iii) includes at least one of an amplitude spectrum and a phase spectrum at the predetermined frequency. The system further includes: a specific audio frequency feature value extraction unit (106) which performs pattern matching between a first set, which is the pieces of local frequency information (S103), and a second set, which is pieces of frequency information of a predetermined specific audio, and extracts, based on a result of the pattern matching, the pieces of local frequency information (S103) in the first set; and an audio signal generation unit which generates a signal of the specific audio, based on the pieces of local frequency information (S103) extracted by the specific audio frequency feature value extraction unit.
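As a rough illustration of the pattern-matching step described above, the sketch below keeps only the pieces of local frequency information (here simplified to per-bin amplitudes keyed by (time, frequency)) that match the specific audio's reference amplitudes within a tolerance. The function name, the dictionary representation, and the tolerance test are illustrative assumptions, not the patented method itself.

```python
def extract_matching_bins(local_info, reference_info, tolerance):
    """Keep the pieces of local frequency information whose amplitudes
    pattern-match the specific audio's reference amplitudes.

    local_info, reference_info: dicts mapping (time, frequency) -> amplitude
    (a toy stand-in for the pieces of local frequency information S103).
    """
    return {
        key: amp
        for key, amp in local_info.items()
        # A bin matches when the reference has the same (time, frequency)
        # slot and the amplitudes agree within the tolerance.
        if key in reference_info and abs(amp - reference_info[key]) <= tolerance
    }
```

The extracted bins would then feed an inverse transform to resynthesize the specific audio signal.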
Abstract:
A sound source direction detector comprises: FFT analysis sections (103(1) to 103(3)) which generate, for each of the acoustic signals collected by two or more microphones arranged apart from one another, a frequency spectrum in at least one frequency band of the acoustic signal; detection sound identifying sections (104(1) to 104(3)) which identify, from the frequency spectrum in the frequency band, the time portion of the frequency spectrum of a detection sound whose sound source direction is to be obtained; and a direction detecting section (105) which, in each time interval that serves as the time unit for detecting the sound source direction, obtains the difference between the times at which the detection sound reaches the microphones according to the degree of coincidence, between the microphones, of the frequency spectra in the time portions identified by the detection sound identifying sections (104(1) to 104(3)), obtains the sound source direction from the time difference, the distance between the microphones, and the sound velocity, and outputs it.
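The geometric step the abstract describes, recovering the direction from the arrival-time difference, the microphone spacing, and the sound velocity, can be sketched as follows. The function name and the far-field arcsine model are assumptions for illustration, not the detector's specified computation.

```python
import math

def source_direction(delta_t, mic_distance, sound_velocity=343.0):
    """Estimate the sound source direction, in degrees from the broadside
    of a microphone pair, from the arrival-time difference.

    delta_t: arrival-time difference between the two microphones (s)
    mic_distance: spacing between the microphones (m)
    sound_velocity: speed of sound (m/s); ~343 m/s in air at 20 degrees C
    """
    # The path-length difference is sound_velocity * delta_t; its ratio to
    # the microphone spacing gives the sine of the arrival angle.
    ratio = sound_velocity * delta_t / mic_distance
    # Clamp against measurement noise so asin stays defined.
    ratio = max(-1.0, min(1.0, ratio))
    return math.degrees(math.asin(ratio))
```

A zero time difference yields 0 degrees (the source lies on the perpendicular bisector of the pair), and larger differences bend the estimate toward the microphone axis.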
Abstract:
An audio restoration apparatus restores an audio to be restored which has a missing audio part and is included in a mixed audio. The audio restoration apparatus includes: a mixed audio separation unit which extracts the audio to be restored from the mixed audio; an audio structure analysis unit which generates at least one of a phoneme sequence, a character sequence, and a musical note sequence of the missing audio part in the extracted audio to be restored, based on an audio structure knowledge database in which semantics of audio are registered; an unchanged audio characteristic domain analysis unit which segments the extracted audio to be restored into time domains in each of which an audio characteristic remains unchanged; an audio characteristic extraction unit which identifies, from among the segmented time domains, the time domain where the missing audio part is located, and extracts the audio characteristics of the identified time domain in the audio to be restored; and an audio restoration unit which restores the missing audio part in the audio to be restored, using the extracted audio characteristics and the generated phoneme sequence, character sequence, and/or musical note sequence.
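The segmentation into time domains of unchanged audio characteristic could, under a simple threshold assumption, look like the sketch below. The per-frame scalar feature and the threshold test are illustrative stand-ins for whatever characteristic the apparatus actually tracks.

```python
def segment_unchanged_domains(features, threshold):
    """Segment a sequence of per-frame audio-characteristic values into
    time domains within which the characteristic remains nearly unchanged.

    features: list of per-frame scalar characteristics (toy stand-in)
    Returns a list of (start_frame, end_frame) tuples, inclusive.
    """
    segments = [[0]]
    for i in range(1, len(features)):
        # Extend the current domain while consecutive frames stay close;
        # otherwise start a new domain at this frame.
        if abs(features[i] - features[i - 1]) <= threshold:
            segments[-1].append(i)
        else:
            segments.append([i])
    return [(seg[0], seg[-1]) for seg in segments]
```

The domain containing the missing part would then supply the audio characteristics used for restoration.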
Abstract:
Provided is a speech recognition apparatus which appropriately performs speech recognition by generating, in real time, language models adapted to a new topic even when the topic changes. The speech recognition apparatus includes: a word specification unit for obtaining and specifying a word; a language model information storage unit for storing language models for recognizing speech and their respectively corresponding pieces of tag information; a combination coefficient calculation unit for calculating the weights of the respective language models, as combination coefficients, for the word obtained by the word specification unit, based on the degree of relevance between that word and the tag information of each language model; a language probability calculation unit for calculating the probabilities of word appearance by combining the respective language models according to the calculated combination coefficients; and a speech recognition unit for recognizing speech using the calculated probabilities of word appearance.
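The coefficient calculation and the linear combination of language models can be sketched as below. The dictionary stand-ins for n-gram models and the normalization of relevance degrees into coefficients are illustrative assumptions.

```python
def combination_coefficients(relevances):
    """Turn per-model relevance degrees into combination coefficients
    (weights) that sum to 1 (a simple normalization assumption)."""
    total = sum(relevances)
    return [r / total for r in relevances]

def combined_probability(word, language_models, coefficients):
    """Probability of word appearance from the weighted combination of
    language models.

    language_models: list of dicts mapping word -> probability, toy
    stand-ins for full n-gram models.
    """
    return sum(c * lm.get(word, 0.0)
               for c, lm in zip(coefficients, language_models))
```

A model whose tag information is more relevant to the specified word thus contributes more to the final word-appearance probability.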
Abstract:
Provided is a sound source localization device which can detect the source location of an extraction sound. The device includes: at least two microphones; an analysis unit (103) which (i) analyzes the frequencies of a mixed sound, including noise, received by each microphone, and (ii) generates frequency signals; and an extraction unit (105) which, for each source location candidate, (a) adjusts the time axes of the frequency signals corresponding to the microphones so that there is no time difference between when the mixed sound reaches one microphone from the source location candidate and when it reaches another microphone from the source location candidate, (b) determines, from among the time-adjusted frequency signals corresponding to the microphones, the frequency signals whose difference distance, a measure of the difference in the frequency signals between the microphones, is equal to or smaller than a threshold value, and (c) extracts the source location of the extraction sound from among the source location candidates, in accordance with the degree of matching of the determined frequency signals between the microphones.
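The per-candidate time-axis adjustment can be sketched as a search over candidate inter-microphone delays. Treating the frequency signals as plain sample sequences and scoring each candidate with a mean absolute difference are simplifying assumptions for illustration.

```python
def aligned_difference(sig_a, sig_b, delay_samples):
    """For one source-location candidate, shift sig_b by the candidate's
    inter-microphone delay and measure how different the aligned signals
    are (a toy 'difference distance'; smaller means better alignment)."""
    n = min(len(sig_a), len(sig_b) - delay_samples)
    return sum(abs(sig_a[i] - sig_b[i + delay_samples]) for i in range(n)) / n

def best_candidate(sig_a, sig_b, candidate_delays):
    """Pick the source-location candidate (here just its delay) whose
    time-axis adjustment best matches the two microphones' signals."""
    return min(candidate_delays, key=lambda d: aligned_difference(sig_a, sig_b, d))
```

In the device, signals whose difference distance stays under the threshold would then be compared across microphones to score each candidate location.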
Abstract:
A sound identification apparatus reduces the chance of a drop in the identification rate. The apparatus includes: a frame sound feature extraction unit which extracts a sound feature per frame of an inputted audio signal; a frame likelihood calculation unit which calculates, for each of a plurality of sound models, a frame likelihood of the sound feature in each frame; a confidence measure judgment unit which judges a confidence measure based on the frame likelihoods; a cumulative likelihood output unit time determination unit which determines a cumulative likelihood output unit time based on the confidence measure; a cumulative likelihood calculation unit which calculates, for each sound model, a cumulative likelihood in which the frame likelihoods of the frames included in the cumulative likelihood output unit time are accumulated; a sound type candidate judgment unit which determines, for each cumulative likelihood output unit time, a sound type candidate corresponding to the sound model that has the maximum cumulative likelihood; a sound type frequency calculation unit which calculates the frequency of each sound type candidate; and a sound type interval determination unit which determines the sound type of the inputted audio signal and the interval of that sound type, based on the frequencies of the sound type candidates.
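The cumulative-likelihood decision for one output unit time reduces to summing per-frame log-likelihoods per model and taking the maximum. The dictionary input format below is an illustrative assumption.

```python
def classify_interval(frame_log_likelihoods):
    """Determine the sound type candidate for one cumulative likelihood
    output unit time.

    frame_log_likelihoods: dict mapping model name -> list of per-frame
    log-likelihoods over the unit time. Summing log-likelihoods
    corresponds to accumulating the frame likelihoods.
    Returns the name of the sound model with maximum cumulative likelihood.
    """
    cumulative = {name: sum(lls) for name, lls in frame_log_likelihoods.items()}
    return max(cumulative, key=cumulative.get)
```

Per the abstract, these per-unit-time candidates would then be tallied, and the most frequent candidate determines the sound type and its interval.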
Abstract:
A target sound analysis apparatus analyzes whether or not a target sound is included in an evaluation sound, and can distinguish the target sound from a sound that has the same fundamental period as the target sound but differs from it. The apparatus includes: a target sound preparation unit that prepares the target sound, which is an analysis waveform to be used for analyzing a fundamental period; an evaluation sound preparation unit that prepares the evaluation sound, which is the analyzed waveform whose fundamental period is to be analyzed; and an analysis unit that temporally shifts the target sound with respect to the evaluation sound to sequentially calculate differential values between the evaluation sound and the target sound at corresponding points in time, calculates the iterative interval between the points in time at which the differential value is equal to or lower than a predetermined threshold value, and judges whether or not the target sound exists in the evaluation sound based on the period of the iterative interval and the fundamental period of the target sound.
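The shift-and-difference analysis can be sketched as follows. Representing the waveforms as plain sample lists and summing absolute differences at each shift are illustrative assumptions.

```python
def below_threshold_times(evaluation, target, threshold):
    """Slide the target waveform along the evaluation waveform and record
    the shifts at which the summed absolute difference falls at or below
    the threshold (the low-differential points in time)."""
    times = []
    for shift in range(len(evaluation) - len(target) + 1):
        diff = sum(abs(evaluation[shift + i] - t) for i, t in enumerate(target))
        if diff <= threshold:
            times.append(shift)
    return times

def iterative_intervals(times):
    """Intervals between consecutive low-difference points; if these match
    the target's fundamental period, the target sound is judged present."""
    return [b - a for a, b in zip(times, times[1:])]
```

A sound that merely shares the target's fundamental period would produce large differential values everywhere, so no matching iterative interval would be found.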
Abstract:
A vehicle-in-blind-spot detecting apparatus, mounted on an operator's vehicle, detects a vehicle positioned in a blind spot. The vehicle-in-blind-spot detecting apparatus includes: a presenting unit which presents information; at least one microphone which detects a sound; a vehicle sound extracting unit which extracts a vehicle sound from the sound detected by the microphone; and a sound source direction detecting unit which detects the sound source direction of the vehicle sound extracted by the vehicle sound extracting unit. A vehicle-in-blind-spot determining unit causes the presenting unit to present information indicating that a vehicle is in the blind spot when the sound source direction of the vehicle sound detected by the sound source direction detecting unit is a first direction, representing above the vehicle-in-blind-spot detecting apparatus with respect to the ground.