Abstract:
A system for estimating the location of a stationary or moving sound source includes multiple microphones, which need not be physically aligned in a linear array or a regular geometric pattern in a given environment. An auralizer generates auralized multi-channel signals based at least on array-related transfer functions and room impulse responses of the microphones, as well as signal labels corresponding to the auralized multi-channel signals. A feature extractor extracts features from the auralized multi-channel signals for efficient processing, and a neural network can be trained to estimate the location of the sound source based at least on the features extracted from the auralized multi-channel signals and the corresponding signal labels.
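As a rough illustration of the auralizer stage, the sketch below convolves a dry source signal with per-microphone ARTFs and room impulse responses to produce labeled multi-channel training signals. All function names, shapes, and the (azimuth, elevation, distance) label format are assumptions for illustration, not details from the abstract.

```python
import numpy as np

def auralize(dry_signal, artfs, rirs, label):
    """Convolve a dry source signal with each microphone's array-related
    transfer function (ARTF) and room impulse response (RIR) to produce
    one auralized channel per microphone, paired with the signal label
    used for supervised training."""
    channels = []
    for artf, rir in zip(artfs, rirs):
        combined = np.convolve(rir, artf)              # room -> array/device path
        channels.append(np.convolve(dry_signal, combined))
    n = max(len(c) for c in channels)
    multi = np.stack([np.pad(c, (0, n - len(c))) for c in channels])
    return multi, label

# Hypothetical example: 4 microphones with placeholder responses.
rng = np.random.default_rng(0)
dry = rng.standard_normal(16000)                       # 1 s of source audio at 16 kHz
artfs = [rng.standard_normal(64) for _ in range(4)]
rirs = [rng.standard_normal(2048) for _ in range(4)]
signals, label = auralize(dry, artfs, rirs,
                          label=(30.0, 0.0, 1.5))      # (azimuth deg, elevation deg, distance m)
```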
Abstract:
Methods and apparatus relating to microphone devices and signal processing techniques are provided. In an example, a microphone device can detect sound, as well as enhance the ability to perceive at least a general direction from which the sound arrives at the microphone device. In an example, a case of the microphone device has an external surface which at least partially defines funnel-shaped surfaces. Each funnel-shaped surface is configured to direct the sound to a respective microphone diaphragm to produce an auralized multi-microphone output. The funnel-shaped surfaces are configured to cause direction-dependent variations in spectral notches and frequency response of the sound as received by the microphone diaphragms. A neural network can then shape the auralized multi-microphone output to create a binaural output. The binaural output can be auralized with respect to a human listener.
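To make the spectral-notch cue concrete, here is a toy model (not the patent's geometry or network) in which each funnel behaves like a short, direction-dependent echo; the echo delay sets the notch spacing, and a coarse direction can be recovered by matching notch patterns. The delay values are invented, and a real device would have the neural network learn this mapping from the received audio alone.

```python
import numpy as np

sr = 48000
# Assumed per-direction echo delays; a delay d carves spectral notches
# near the frequencies f = (2k + 1) / (2 * d).
direction_delay_s = {'front': 2e-5, 'side': 5e-5, 'rear': 9e-5}

def apply_funnel(x, delay_s):
    """Toy funnel model: direct path plus one attenuated, delayed copy."""
    d = max(1, int(round(delay_s * sr)))
    y = x.copy()
    y[d:] += 0.9 * x[:-d]
    return y

rng = np.random.default_rng(7)
dry = rng.standard_normal(sr)
received = apply_funnel(dry, direction_delay_s['side'])

# Coarse direction estimate: pick the direction whose simulated notch
# pattern correlates best with the received spectrum.
spec = np.abs(np.fft.rfft(received))
scores = {name: np.corrcoef(spec, np.abs(np.fft.rfft(apply_funnel(dry, d))))[0, 1]
          for name, d in direction_delay_s.items()}
best = max(scores, key=scores.get)   # 'side'
```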
Abstract:
Summaries of audio or audio-video events are created from audio or audio-video recordings based on the needs of a particular user. The summarized events may have shorter timespans than the actual timespans of audio or audio-video recordings. Audio or audio-video recordings may be provided by one or more recording devices or sensors to a network, such as a cloud. A summarizer is provided in the network, and may include an audio marker, an audio enhancer, and an audio compiler. The audio marker tags segments of an audio or audio-video stream using one or more audio detectors based on user preferences. The audio enhancer may enhance the quality of tagged audio segments by enhancing desired sound features and suppressing undesired sound features. The audio compiler compiles the tagged audio segments based on event scores and generates audio or audio-video summaries for the user.
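A minimal sketch of the marker and compiler stages, assuming detectors are supplied as callables and event scores are preference-weighted detector outputs; the function and field names are illustrative, and the enhancer stage is omitted.

```python
import numpy as np

def summarize(segments, detectors, preferences, budget_s):
    """Tag each segment with preference-weighted detector scores (the
    marker), then greedily compile the highest-scoring segments into a
    summary no longer than budget_s seconds (the compiler)."""
    tagged = []
    for seg in segments:                       # seg: {'audio', 'start', 'end'}
        score = sum(preferences.get(name, 0.0) * det(seg['audio'])
                    for name, det in detectors.items())
        tagged.append((score, seg))
    tagged.sort(key=lambda t: t[0], reverse=True)
    summary, used = [], 0.0
    for score, seg in tagged:
        duration = seg['end'] - seg['start']
        if used + duration <= budget_s:
            summary.append(seg)
            used += duration
    return sorted(summary, key=lambda s: s['start'])   # back in time order

# Hypothetical one-second segments and a trivial energy 'detector'.
segments = [{'audio': np.zeros(16000), 'start': float(i), 'end': float(i + 1)}
            for i in range(10)]
detectors = {'speech': lambda a: float(np.mean(np.abs(a)))}
summary = summarize(segments, detectors, {'speech': 1.0}, budget_s=3.0)
```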
Abstract:
This application discloses a method implemented by an electronic device to detect a signature event (e.g., a baby cry event) associated with an audio feature (e.g., baby sound). The electronic device obtains a classifier model from a remote server. The classifier model is determined according to predetermined capabilities of the electronic device and ambient sound characteristics of the electronic device, and distinguishes the audio feature from a plurality of alternative features and ambient noises. When the electronic device obtains audio data, it splits the audio data into a plurality of sound components, each associated with a respective frequency or frequency band and including a series of time windows. The electronic device further extracts a feature vector from the sound components, classifies the extracted feature vector to obtain a probability value according to the classifier model, and detects the signature event based on the probability value.
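As a hedged sketch of the on-device pipeline, the code below splits audio into per-band components over a series of time windows with an STFT, summarizes them into a feature vector, and scores it with a logistic classifier standing in for the server-provided classifier model. Window sizes, features, and the classifier form are assumptions.

```python
import numpy as np

def detect_signature_event(audio, weights, bias, threshold=0.8):
    """Split audio into frequency-band components over a series of time
    windows, extract a feature vector, and score it with a classifier."""
    win, hop = 1024, 512
    frames = np.lib.stride_tricks.sliding_window_view(audio, win)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))  # (windows, bands)
    # Feature vector: per-band mean and variance across the time windows.
    feats = np.concatenate([spec.mean(axis=0), spec.var(axis=0)])
    prob = 1.0 / (1.0 + np.exp(-(feats @ weights + bias)))        # probability value
    return prob, prob >= threshold

# Hypothetical model parameters; in the abstract these come from the server.
rng = np.random.default_rng(1)
audio = rng.standard_normal(16000)
weights = rng.standard_normal(2 * (1024 // 2 + 1)) * 0.01
prob, detected = detect_signature_event(audio, weights, bias=-2.0)
```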
Abstract:
Systems and methods are described for improving endpoint detection of a voice query submitted by a user. In some implementations, synchronized video data and audio data are received. A sequence of frames of the video data that includes images corresponding to lip movement on a face is determined. The audio data is endpointed based on first audio data that corresponds to a first frame of the sequence of frames and second audio data that corresponds to a last frame of the sequence of frames. A transcription of the endpointed audio data is generated by an automated speech recognizer. The generated transcription is then provided for output.
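A minimal sketch of the endpointing arithmetic, assuming the audio and video share a common clock and a separate detector supplies the lip-movement frame sequence; all numbers and names are hypothetical.

```python
import numpy as np

def endpoint_audio(audio, audio_rate, first_frame, last_frame, video_fps):
    """Crop the audio between the first and last frames of the detected
    lip-movement sequence, mapping frame indices to sample offsets."""
    start = int(first_frame / video_fps * audio_rate)        # first audio data
    end = int((last_frame + 1) / video_fps * audio_rate)     # second audio data
    return audio[start:end]

# Hypothetical: lip movement spans video frames 12..57 at 30 fps.
audio = np.zeros(16000 * 5)                                  # 5 s at 16 kHz
query = endpoint_audio(audio, 16000, first_frame=12, last_frame=57, video_fps=30)
```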
Abstract:
Apparatus and method for training a neural network for signal recognition in multiple sensory domains, such as audio and video domains, are provided. For example, the identity of a speaker in a video clip may be determined based on audio and video features extracted from the video clip and comparisons of the extracted audio and video features to stored audio and video features with their associated labels obtained from one or more training video clips. In another example, a direction of sound propagation or a location of a sound source in a video clip may be determined based on the audio and video features extracted from the video clip and comparisons of the extracted audio and video features to stored audio and video features with their associated direction or location labels obtained from one or more training video clips.
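As an illustration of the comparison step, the sketch below matches a clip's concatenated audio and video features against stored labeled features by nearest neighbor; a trained network would use a learned similarity instead, and all dimensions and labels here are invented.

```python
import numpy as np

def identify(audio_feat, video_feat, stored_feats, stored_labels):
    """Concatenate the audio and video features extracted from a clip and
    return the label of the nearest stored training feature vector."""
    query = np.concatenate([audio_feat, video_feat])
    dists = np.linalg.norm(stored_feats - query, axis=1)
    return stored_labels[int(np.argmin(dists))]

# Hypothetical 128-d audio + 128-d video embeddings for three speakers.
rng = np.random.default_rng(2)
stored = rng.standard_normal((3, 256))
labels = ['alice', 'bob', 'carol']
who = identify(rng.standard_normal(128), rng.standard_normal(128), stored, labels)
```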
Abstract:
A neural network is provided for recognition and enhancement of multi-channel sound signals received by multiple microphones, which need not be aligned in a linear array in a given environment. Directions and distances of sound sources may also be detected by the neural network without the need for a beamformer connected to the microphones. The neural network may be trained by knowledge gained from free-field array impulse responses obtained in an anechoic chamber, array impulse responses that model simulated environments of different reverberation times, and array impulse responses obtained in actual environments.
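To make the "no beamformer" point concrete: direction cues live in inter-channel relationships such as time delays, which the network can learn from the impulse-response-based training data. Below is the classical GCC-PHAT delay estimate as a hedged stand-in for one such cue; it is not the patent's network, and the parameters are assumed.

```python
import numpy as np

def gcc_phat_delay(sig, ref, sr, max_delay=0.001):
    """Classical GCC-PHAT time-delay estimate between two channels;
    positive values mean `sig` lags `ref`. max_delay bounds the search
    to physically plausible lags for the array geometry."""
    n = len(sig) + len(ref)
    S, R = np.fft.rfft(sig, n), np.fft.rfft(ref, n)
    cross = S * np.conj(R)
    cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    max_shift = int(sr * max_delay)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / sr

# Hypothetical check: the second channel lags the first by 8 samples.
rng = np.random.default_rng(4)
ch0 = rng.standard_normal(4096)
ch1 = np.roll(ch0, 8)
tau = gcc_phat_delay(ch1, ch0, sr=16000)   # ≈ 8 / 16000 s
```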
Abstract:
An example method includes receiving, by a computing system, an indication of one or more audible sounds that are detected by a first sensing device, the one or more audible sounds originating from a user; determining, by the computing system and based at least in part on an indication of one or more signals detected by a second sensing device, a distance between the user and the second sensing device; determining, by the computing system and based at least in part on the indication of the one or more audible sounds, one or more acoustic features that are associated with the one or more audible sounds; and determining, by the computing system, and based at least in part on the one or more acoustic features and the distance between the user and the second sensing device, one or more words that correspond to the one or more audible sounds.
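A minimal sketch of one plausible way to combine the two inputs, assuming frame-level acoustic features: append the measured distance to each frame so a downstream recognizer can condition on it. The feature shapes are assumptions, and the recognizer itself is omitted.

```python
import numpy as np

def add_distance(acoustic_feats, distance_m):
    """Append the user-to-device distance to every acoustic feature frame
    so a downstream recognizer can account for distance-dependent effects
    such as level and reverberation when choosing words."""
    d = np.full((acoustic_feats.shape[0], 1), distance_m)
    return np.hstack([acoustic_feats, d])          # (frames, feat_dim + 1)

log_mel = np.zeros((100, 40))                      # e.g., 100 frames of 40-d log-mel
conditioned = add_distance(log_mel, distance_m=2.3)
```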
Abstract:
A sensor device may include a computing device in communication with multiple microphones. A neural network executing on the computing device may receive audio signals from each microphone. One microphone signal may serve as a reference signal. The neural network may extract differences in signal characteristics of the other microphone signals as compared to the reference signal. The neural network may combine these signal differences into a lossy compressed signal. The sensor device may transmit the lossy compressed signal and the lossless reference signal to a remote neural network executing in a cloud computing environment for decompression and sound recognition analysis.
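A fixed subtract-and-quantize sketch of the difference-based compression, standing in for what the on-device network would learn; the step size, channel count, and function names are assumptions.

```python
import numpy as np

def compress(channels, ref_idx=0, step=0.01):
    """Keep the reference channel lossless; represent every other channel
    as its difference from the reference and quantize it (the lossy part)."""
    ref = channels[ref_idx]
    diffs = np.delete(channels, ref_idx, axis=0) - ref
    quantized = np.round(diffs / step).astype(np.int16)
    return ref, quantized

def decompress(ref, quantized, ref_idx=0, step=0.01):
    """Cloud-side reconstruction: de-quantize the differences and add the
    lossless reference back to approximate the original channels."""
    approx = quantized.astype(float) * step + ref
    return np.insert(approx, ref_idx, ref, axis=0)

rng = np.random.default_rng(5)
mics = 0.1 * rng.standard_normal((4, 16000))   # 4 hypothetical microphone channels
ref, quantized = compress(mics)
recovered = decompress(ref, quantized)          # recovered[0] is bit-exact
```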
Abstract:
A method for auralizing a multi-microphone device is provided. Path information for one or more sound paths is determined using dimensions and room reflection coefficients of a simulated room, for one of a plurality of microphones included in a multi-microphone device. Array-related transfer functions (ARTFs) for the one of the plurality of microphones are retrieved. An auralized impulse response for the one of the plurality of microphones is generated based at least on the retrieved ARTFs and the determined path information.
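A sketch of the final generation step, assuming path information arrives as (delay, gain, direction) triples, e.g. from an image-source room model, and that ARTFs are stored per quantized direction; all names and numbers are illustrative.

```python
import numpy as np

def auralized_impulse_response(paths, artfs, sr, length):
    """Combine the determined path information (per-path delay in seconds,
    gain from wall reflection coefficients, arrival direction) with the
    retrieved ARTFs to build one microphone's auralized impulse response."""
    h = np.zeros(length)
    for delay_s, gain, direction in paths:
        artf = artfs[direction]                    # (taps,) impulse response
        start = int(round(delay_s * sr))
        if start < length:
            end = min(start + len(artf), length)
            h[start:end] += gain * artf[:end - start]
    return h

# Hypothetical: direct path plus one first-order reflection.
rng = np.random.default_rng(6)
artfs = {('az30', 'el0'): rng.standard_normal(64),
         ('az150', 'el0'): rng.standard_normal(64)}
paths = [(0.005, 1.0, ('az30', 'el0')),        # direct path, 5 ms
         (0.012, 0.7, ('az150', 'el0'))]       # reflection, gain = coefficient product
h = auralized_impulse_response(paths, artfs, sr=16000, length=4096)
```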