Abstract:
Systems, methods, and apparatus for pitch trajectory analysis are described. Such techniques may be used to remove vocals and/or vibrato from an audio mixture signal. For example, such a technique may be used to pre-process the signal before an operation to decompose the mixture signal into individual instrument components.
Abstract:
Systems, methods, and apparatus are described for applying, based on angles of arrival of source components relative to the axes of different microphone pairs, a spatially directive filter to a multichannel audio signal to produce an output signal.
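The per-pair idea above can be illustrated with a soft time-frequency mask for a single microphone pair: bins whose inter-microphone phase difference matches the desired direction of arrival, measured relative to the pair's axis, are passed, and others are attenuated. This is a minimal sketch under far-field assumptions, not the patented method; `pair_directional_mask`, the Gaussian gain shape, and the `width` parameter are assumptions of the illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def pair_directional_mask(stft_a, stft_b, pair_axis_angle_rad,
                          desired_doa_rad, mic_spacing, sample_rate,
                          width=0.5):
    """Soft mask passing time-frequency bins whose inter-microphone
    phase difference matches the desired direction of arrival,
    measured relative to this microphone pair's axis."""
    n_bins = stft_a.shape[0]
    freqs = np.linspace(0.0, sample_rate / 2.0, n_bins)
    # Expected phase difference for a far-field source at the desired DOA.
    angle_rel = desired_doa_rad - pair_axis_angle_rad
    expected = (2.0 * np.pi * freqs * mic_spacing
                * np.cos(angle_rel) / SPEED_OF_SOUND)
    observed = np.angle(stft_a * np.conj(stft_b))
    # Wrapped phase error mapped to a soft gain in [0, 1].
    err = np.angle(np.exp(1j * (observed - expected[:, None])))
    return np.exp(-(err / width) ** 2)
```

A multichannel filter in the spirit of the abstract would combine such masks across several pairs with differently oriented axes.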
Abstract:
A mobile platform includes a microphone array and is capable of implementing beamforming to amplify or suppress audio information from a sound source. The sound source is indicated through a user input, such as pointing the mobile platform in the direction of the sound source or through a touch screen display interface. The mobile platform further includes orientation sensors capable of detecting movement of the mobile platform. When the mobile platform moves with respect to the sound source, the beamforming is adjusted based on the data from the orientation sensors so that beamforming is continuously implemented in the direction of the sound source. The audio information from the sound source may be included in or suppressed from a telephone or video-telephony conversation. Images or video from a camera may likewise be controlled based on the data from the orientation sensors.
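The orientation-compensated beamforming described above can be sketched with a far-field delay-and-sum model: the steering angle is offset by the rotation reported by the orientation sensors, so the beam keeps pointing at the source as the device moves. The two-microphone geometry and the helper names `steering_delays` and `adjusted_azimuth` are assumptions of this sketch, not the patent's implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def steering_delays(mic_positions, azimuth_rad):
    """Per-microphone delays (seconds) steering a delay-and-sum beam
    toward a far-field source at the given azimuth."""
    direction = np.array([np.cos(azimuth_rad), np.sin(azimuth_rad)])
    return mic_positions @ direction / SPEED_OF_SOUND

def adjusted_azimuth(initial_azimuth_rad, device_rotation_rad):
    """Compensate the steering angle for device rotation reported by
    the orientation sensors, so the beam keeps tracking the source."""
    return initial_azimuth_rad - device_rotation_rad

# Hypothetical two-microphone array, 2 cm apart along the x axis.
mics = np.array([[-0.01, 0.0], [0.01, 0.0]])
source_az = np.deg2rad(30.0)   # direction indicated by the user
rotation = np.deg2rad(15.0)    # rotation reported by orientation sensors
delays = steering_delays(mics, adjusted_azimuth(source_az, rotation))
```

Applying these delays to the microphone signals before summing reinforces the source direction; negating the gain on one channel instead would suppress it.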
Abstract:
A method for encoding three dimensional audio by a wireless communication device is disclosed. The wireless communication device detects an indication of a plurality of localizable audio sources. The wireless communication device also records a plurality of audio signals associated with the plurality of localizable audio sources. The wireless communication device also encodes the plurality of audio signals.
Abstract:
A method for mapping a source location by an electronic device is described. The method includes obtaining sensor data. The method also includes mapping a source location to electronic device coordinates based on the sensor data. The method further includes mapping the source location from electronic device coordinates to physical coordinates. The method additionally includes performing an operation based on a mapping.
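The coordinate-mapping step above can be sketched as applying a sensor-derived rotation to the source location. Assuming the orientation sensors report yaw, pitch, and roll (an assumption of this sketch; the abstract does not specify the sensor model), a minimal version is:

```python
import numpy as np

def rotation_from_yaw_pitch_roll(yaw, pitch, roll):
    """Rotation matrix from electronic-device coordinates to physical
    (world) coordinates, built from sensor-reported Euler angles."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx

def device_to_physical(source_device_coords, yaw, pitch, roll):
    """Map a source location from device coordinates to physical
    coordinates using the sensor-derived rotation."""
    return rotation_from_yaw_pitch_roll(yaw, pitch, roll) @ source_device_coords
```

An operation such as steering a beam or updating a map could then be performed on the physical-coordinate result.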
Abstract:
Systems, methods, and apparatus for projecting an estimated direction of arrival of sound onto a plane that does not include the estimated direction are described.
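Projecting an estimated direction of arrival onto a plane that does not contain it can be sketched as standard vector projection followed by renormalization to a unit direction; `project_onto_plane` is a hypothetical helper name, not the patent's API.

```python
import numpy as np

def project_onto_plane(doa, plane_normal):
    """Project an estimated direction-of-arrival vector onto the plane
    with the given normal, then renormalize to a unit direction."""
    n = plane_normal / np.linalg.norm(plane_normal)
    projected = doa - np.dot(doa, n) * n
    norm = np.linalg.norm(projected)
    if norm == 0:
        raise ValueError("DOA is perpendicular to the plane")
    return projected / norm
```

For example, projecting a 3D estimate onto the horizontal plane yields the azimuth component that a planar display or a horizontal microphone array can use.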
Abstract:
Systems, methods, and apparatus for matching pair-wise differences (e.g., phase delay measurements) to an inventory of source direction candidates, and application of pair-wise source direction estimates, are described.
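Matching pair-wise delay measurements against an inventory of candidate source directions can be sketched as a least-squares search over the candidates, assuming far-field propagation and a common microphone spacing (both assumptions of this sketch; the abstract's phase-delay inventory may differ in detail):

```python
import numpy as np

def best_direction(observed_delays, pair_axes, candidates_rad,
                   mic_spacing, speed=343.0):
    """Pick the candidate direction whose predicted per-pair time delays
    best match the observed pair-wise delay measurements."""
    errors = []
    for theta in candidates_rad:
        # Far-field delay for each pair, projected onto that pair's axis.
        predicted = np.array([mic_spacing * np.cos(theta - axis) / speed
                              for axis in pair_axes])
        errors.append(np.sum((predicted - observed_delays) ** 2))
    return candidates_rad[int(np.argmin(errors))]
```

Using two pairs with orthogonal axes resolves the front-back ambiguity a single pair leaves, since one pair constrains the cosine and the other the sine of the direction.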
Abstract:
A wearable device may include a processor configured to detect a self-voice signal based on one or more transducers. The processor may be configured to separate the self-voice signal from a background signal in an external audio signal using a multi-microphone speech generative network. The processor may also be configured to apply a first filter to the external audio signal, detected by at least one external microphone on the wearable device, during a listen-through operation based on an activation of an audio zoom feature, to generate a first listen-through signal that includes the external audio signal. The processor may be configured to produce an output audio signal that is based on at least the first listen-through signal, which includes the external audio signal, and on the detected self-voice signal.