Abstract:
A system that tracks a social interaction between a plurality of participants includes a fixed beamformer that is adapted to output a first spatially filtered output and configured to receive a plurality of second spatially filtered outputs from a plurality of steerable beamformers. Each steerable beamformer outputs a respective one of the second spatially filtered outputs associated with a different one of the participants. The system also includes a processor capable of determining a similarity between the first spatially filtered output and each of the second spatially filtered outputs. The processor determines the social interaction between the participants based on the similarity between the first spatially filtered output and each of the second spatially filtered outputs.
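A minimal sketch of the comparison step follows, assuming normalized correlation as the similarity metric (the abstract does not specify one); the function names and the attribution rule are illustrative, not the patented implementation.

```python
# Minimal sketch: compare the fixed beamformer output against each
# participant's steerable beamformer output. Normalized correlation
# is an assumed similarity metric.
import numpy as np

def similarity(fixed_out: np.ndarray, steered_out: np.ndarray) -> float:
    """Normalized correlation between two spatially filtered outputs."""
    num = np.dot(fixed_out, steered_out)
    den = np.linalg.norm(fixed_out) * np.linalg.norm(steered_out) + 1e-12
    return float(num / den)

def most_similar_participant(fixed_out, steered_outputs):
    """Index of the participant whose steerable beamformer output best
    matches the fixed beamformer output, plus all similarity scores."""
    scores = [similarity(fixed_out, s) for s in steered_outputs]
    return int(np.argmax(scores)), scores
```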
Abstract:
A crosstalk cancelation technique reduces feedback in a shared acoustic space by canceling out some or all parts of sound signals that would otherwise be produced by a loudspeaker only to be captured by a microphone that, recursively, would cause these sound signals to be reproduced again on the loudspeaker as feedback. Crosstalk cancelation can be used in a multichannel acoustic system (MAS) comprising an arrangement of microphones, loudspeakers, and a processor that together enhance conversational speech in a shared acoustic space. To achieve crosstalk cancelation, a processor analyzes the input of each microphone, compares it to the output of the far loudspeaker(s) relative to each such microphone, cancels out any portion of a sound signal received by the microphone that matches signals just produced by the far loudspeaker(s), and sends only the remaining sound signal (if any) to such far loudspeakers.
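A minimal sketch of the cancelation step, assuming a single microphone/far-loudspeaker pair: a normalized-LMS adaptive filter estimates the portion of the microphone signal that matches what the far loudspeaker just played, and only the residual is forwarded. The filter structure and parameter values are illustrative assumptions, not the patented method.

```python
# Minimal sketch: NLMS adaptive cancelation of the far-loudspeaker
# contribution to one microphone. mic and far_spk are equal-length
# sample arrays; taps/mu are illustrative values.
import numpy as np

def nlms_crosstalk_cancel(mic, far_spk, taps=128, mu=0.1, eps=1e-8):
    """Subtract an adaptive estimate of the far-loudspeaker signal
    from the microphone input; return the residual to forward."""
    w = np.zeros(taps)            # adaptive filter weights
    x = np.zeros(taps)            # delay line of loudspeaker samples
    out = np.zeros_like(mic, dtype=float)
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = far_spk[n]
        echo_est = w @ x          # predicted loudspeaker contribution
        e = mic[n] - echo_est     # residual after cancelation
        w += mu * e * x / (x @ x + eps)   # NLMS weight update
        out[n] = e
    return out
```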
Abstract:
Techniques for processing directionally-encoded audio to account for spatial characteristics of a listener playback environment are disclosed. The directionally-encoded audio data includes spatial information indicative of one or more directions of sound sources in an audio scene. The audio data is modified based on input data identifying the spatial characteristics of the playback environment. The spatial characteristics may correspond to actual loudspeaker locations in the playback environment. The directionally-encoded audio may also be processed to permit focusing/defocusing on sound sources or particular directions in an audio scene. The disclosed techniques may allow a recorded audio scene to be more accurately reproduced at playback time, regardless of the output loudspeaker setup. Another advantage is that a user may dynamically configure audio data so that it better conforms to the user's particular loudspeaker layouts and/or the user's desired focus on particular subjects or areas in an audio scene.
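One way to picture the remapping step is 2-D pairwise amplitude panning (VBAP-style), sketched below under assumptions the abstract does not fix: a single directional source azimuth, loudspeakers described only by azimuth, and a layout in which the two nearest loudspeakers bracket the source.

```python
# Minimal sketch: render a directionally encoded source onto an
# arbitrary 2-D loudspeaker layout via pairwise amplitude panning.
import numpy as np

def vbap_gains(source_az_deg, speaker_az_deg):
    """Per-loudspeaker gains placing a source between the two
    loudspeakers nearest to its direction (assumes they bracket it)."""
    unit = lambda a: np.array([np.cos(np.radians(a)), np.sin(np.radians(a))])
    s = unit(source_az_deg)
    # pick the two loudspeakers closest in angle to the source
    diffs = [abs((a - source_az_deg + 180) % 360 - 180) for a in speaker_az_deg]
    i, j = np.argsort(diffs)[:2]
    base = np.column_stack([unit(speaker_az_deg[i]), unit(speaker_az_deg[j])])
    g = np.linalg.solve(base, s)       # s = g_i * l_i + g_j * l_j
    g = np.clip(g, 0.0, None)          # discard out-of-pair solutions
    g /= np.linalg.norm(g) + 1e-12     # constant-power normalization
    gains = np.zeros(len(speaker_az_deg))
    gains[i], gains[j] = g
    return gains
```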
Abstract:
A method for echo reduction by an electronic device is described. The method includes nulling at least one speaker. The method also includes mixing a set of runtime audio signals based on a set of acoustic paths to determine a reference signal. The method also includes receiving at least one composite audio signal that is based on the set of runtime audio signals. The method further includes reducing echo in the at least one composite audio signal based on the reference signal.
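The reference-mixing step can be sketched as follows, under the assumption that the acoustic paths are available as FIR impulse responses; path estimation and the speaker-nulling step are omitted, and the plain subtraction stands in for whatever echo reducer the method actually uses.

```python
# Minimal sketch: mix runtime signals through assumed FIR acoustic
# paths to form the reference, then reduce echo in the composite.
import numpy as np

def make_reference(runtime_signals, acoustic_paths):
    """Mix each runtime audio signal through its acoustic path and sum."""
    n = len(runtime_signals[0])
    ref = np.zeros(n)
    for sig, path in zip(runtime_signals, acoustic_paths):
        ref += np.convolve(sig, path)[:n]
    return ref

def reduce_echo(composite, reference):
    """Subtract the reference estimate of the echo from the composite."""
    return composite - reference
```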
Abstract:
A wireless device is described. The wireless device includes at least two microphones on the wireless device. The microphones are configured to capture sound from a target user. The wireless device also includes processing circuitry. The processing circuitry is coupled to the microphones. The processing circuitry is configured to locate the target user. The wireless device further includes a communication interface. The communication interface is coupled to the processing circuitry. The communication interface is configured to receive external device microphone audio from at least one external device microphone to assist the processing circuitry in the wireless device in locating the target user.
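One ingredient such localization could use is the time difference of arrival between an onboard microphone and the external-device microphone. Below is a minimal GCC-PHAT sketch, assuming the two sample streams are synchronized and share a sample rate; stream synchronization and geometry handling are out of scope here.

```python
# Minimal sketch: GCC-PHAT time-difference-of-arrival estimate
# between one onboard and one external-device microphone stream.
import numpy as np

def gcc_phat_delay(sig_a, sig_b, fs):
    """Estimate the relative delay (seconds) between two streams."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n)
    B = np.fft.rfft(sig_b, n)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12             # PHAT weighting
    cc = np.fft.irfft(R, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```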
Abstract:
A multi-channel sound (MCS) system features intelligent calibration (e.g., of acoustic echo cancelation (AEC)) for use in dynamic acoustic environments. A sensor subsystem is utilized to detect and identify changes in the acoustic environment and determine a “scene” corresponding to the resulting acoustic characteristics for that environment. This detected scene is compared to predetermined scenes corresponding to the acoustic environment. Each predetermined scene has a corresponding pre-tuned filter configuration for optimal AEC performance. Based on the results of the comparison, the pre-tuned filter configuration corresponding to the predetermined scene that most closely matches the detected scene is utilized by the AEC subsystem of the multi-channel sound system.
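The scene-matching step might look like the sketch below, where the scene descriptors are assumed feature vectors and the scene names and configuration fields are hypothetical; the abstract does not specify the representation or distance measure.

```python
# Minimal sketch: match a detected scene descriptor against
# predetermined scenes and select the pre-tuned AEC configuration
# of the nearest one. Scene names and fields are hypothetical.
import numpy as np

PREDETERMINED_SCENES = {
    "windows_open": {"features": np.array([0.9, 0.2, 0.1]), "aec_taps": 256},
    "cabin_closed": {"features": np.array([0.1, 0.8, 0.3]), "aec_taps": 512},
}

def select_aec_config(detected_features):
    """Return (scene name, pre-tuned config) of the closest match."""
    name, scene = min(
        PREDETERMINED_SCENES.items(),
        key=lambda kv: np.linalg.norm(kv[1]["features"] - detected_features),
    )
    return name, scene
```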
Abstract:
A method for speech restoration by an electronic device is described. The method includes obtaining a noisy speech signal. The method also includes suppressing noise in the noisy speech signal to produce a noise-suppressed speech signal. The noise-suppressed speech signal has a bandwidth that includes at least three subbands. The method further includes iteratively restoring each of the at least three subbands. Each of the at least three subbands is restored based on all previously restored subbands of the at least three subbands.
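The iterative structure, though not the restoration model itself, is easy to make concrete. In this sketch the per-subband restorer is a hypothetical placeholder passed in by the caller; the point is only that each subband is restored conditioned on all previously restored subbands.

```python
# Minimal structural sketch: restore subbands low-to-high, each one
# conditioned on all previously restored subbands. restore_fn is a
# hypothetical placeholder for the actual restoration model.
def restore_subbands(suppressed_subbands, restore_fn):
    """suppressed_subbands: list of 3+ subband signals, low to high.
    restore_fn(band, context) returns a restored band given the list
    of already-restored lower subbands as context."""
    restored = []
    for band in suppressed_subbands:
        restored.append(restore_fn(band, context=list(restored)))
    return restored
```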
Abstract:
A method for signal level matching by an electronic device is described. The method includes capturing a plurality of audio signals from a plurality of microphones. The method also includes determining a difference signal based on an inter-microphone subtraction. The difference signal includes multiple harmonics. The method also includes determining whether a harmonicity of the difference signal exceeds a harmonicity threshold. The method also includes preserving the harmonics to determine an envelope. The method further includes applying the envelope to a noise-suppressed signal.
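A minimal per-frame sketch follows, with several assumptions the abstract leaves open: harmonicity is scored as the peak of the normalized autocorrelation over a pitch-lag range, the envelope is taken as the magnitude spectrum of the difference signal, and it is applied by replacing the noise-suppressed magnitude while keeping its phase.

```python
# Minimal sketch: inter-microphone subtraction, harmonicity gating,
# and envelope application. All three inputs are equal-length frames;
# the metric and envelope choices are illustrative assumptions.
import numpy as np

def harmonicity(frame, min_lag=40, max_lag=400):
    """Peak normalized autocorrelation over an assumed pitch-lag range
    (the frame is expected to be longer than max_lag samples)."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    max_lag = min(max_lag, len(ac) - 1)
    return float(np.max(ac[min_lag:max_lag]) / (ac[0] + 1e-12))

def process_frame(mic1, mic2, noise_suppressed, thresh=0.4):
    diff = mic1 - mic2                        # inter-microphone subtraction
    if harmonicity(diff) > thresh:            # harmonic frame detected
        env = np.abs(np.fft.rfft(diff))       # harmonic envelope estimate
        phase = np.angle(np.fft.rfft(noise_suppressed))
        return np.fft.irfft(env * np.exp(1j * phase), len(noise_suppressed))
    return noise_suppressed
```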
Abstract:
A method of processing audio may include receiving, by a computing device, a plurality of real-time audio signals outputted by a plurality of microphones communicatively coupled to the computing device. The computing device may output to a display a graphical user interface (GUI) that presents audio information associated with the received audio signals. The one or more received audio signals may be processed based on a user input associated with the audio information presented via the GUI to generate one or more processed audio signals. The one or more processed audio signals may be output to, for example, one or more output devices such as speakers, headsets, and the like.
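As a minimal sketch of the processing step, the GUI-derived user input is modeled here as per-microphone gain and mute controls, which is an illustrative choice; the abstract covers audio information and user input generally.

```python
# Minimal sketch: apply per-microphone gain/mute values taken from a
# GUI to the received real-time signals to produce a processed mix.
import numpy as np

def apply_gui_controls(mic_signals, gains, mutes):
    """mic_signals: list of equal-length arrays; gains and mutes are
    per-microphone values from the GUI. Returns the processed mix."""
    out = np.zeros_like(mic_signals[0], dtype=float)
    for sig, gain, muted in zip(mic_signals, gains, mutes):
        if not muted:
            out += gain * sig
    return out
```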
Abstract:
A device includes a memory, a receiver, a processor, and a display. The memory is configured to store a speaker model. The receiver is configured to receive an input audio signal. The processor is configured to determine a first confidence level associated with a first portion of the input audio signal based on the speaker model. The processor is also configured to determine a second confidence level associated with a second portion of the input audio signal based on the speaker model. The display is configured to present a graphical user interface associated with the first confidence level or associated with the second confidence level.
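A minimal sketch of the per-portion scoring, assuming an embedding-based speaker model and cosine similarity mapped to [0, 1] as the confidence level; the abstract does not specify the model type or how confidence is computed.

```python
# Minimal sketch: score each portion of the input audio against a
# stored speaker embedding to obtain per-portion confidence levels
# that a GUI could then present. Embedding model is assumed.
import numpy as np

def confidence(portion_embedding, speaker_embedding):
    """Cosine similarity mapped to [0, 1] as a confidence level."""
    cos = np.dot(portion_embedding, speaker_embedding) / (
        np.linalg.norm(portion_embedding)
        * np.linalg.norm(speaker_embedding) + 1e-12)
    return 0.5 * (cos + 1.0)

def portion_confidences(portion_embeddings, speaker_embedding):
    """Confidence level for each portion of the input audio signal."""
    return [confidence(p, speaker_embedding) for p in portion_embeddings]
```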