Abstract:
A video conference endpoint detects a face and determines a face angle of the detected face relative to a reference direction based on images captured with a camera. The endpoint determines an angle of arrival of sound (i.e., a sound angle) received at a microphone array that transduces the sound, relative to the reference direction, based on the transduced sound and a sound speed parameter indicative of the speed of sound in air. The endpoint compares the face angle against the sound angle and, if the comparison indicates an angle difference greater than zero between the face and sound angles, adjusts the sound speed parameter so as to reduce that difference.
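The loop described above (estimate the sound angle from the array, compare it with the camera-derived face angle, and nudge the sound-speed parameter) can be illustrated with a minimal sketch. It assumes a two-element array, a far-field source, and the relation θ = arcsin(c·τ/d); the function names, step size, and tolerance are illustrative and not taken from the source.

```python
import numpy as np

def sound_angle(tdoa_s: float, mic_spacing_m: float, speed_of_sound: float) -> float:
    """Angle of arrival (radians from broadside) for a two-element array,
    given the time difference of arrival between the microphones."""
    s = np.clip(speed_of_sound * tdoa_s / mic_spacing_m, -1.0, 1.0)
    return float(np.arcsin(s))

def calibrate_speed_of_sound(face_angle: float, tdoa_s: float,
                             mic_spacing_m: float, c: float = 343.0,
                             step: float = 0.5, iters: int = 100) -> float:
    """Nudge the sound-speed parameter c until the acoustically derived
    angle matches the face angle reported by the camera."""
    for _ in range(iters):
        diff = face_angle - sound_angle(tdoa_s, mic_spacing_m, c)
        if abs(diff) < 1e-4:
            break
        # Move c in the direction that shrinks the face/sound angle difference
        # (the sign of d(theta)/dc follows the sign of the TDOA).
        c += step if diff * np.sign(tdoa_s) > 0 else -step
    return c
```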
Abstract:
A system and method for joint acoustic echo control and adaptive array processing, comprising decomposing a captured sound field into N sub-sound fields, applying linear echo cancellation to each sub-sound field, selecting L sub-sound fields from the N sub-sound fields, performing L-channel adaptive array processing using the L selected sub-sound fields, and applying non-linear acoustic echo cancellation.
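A minimal sketch of that processing chain, assuming the sound field is decomposed with a bank of fixed beamformer weights and that the linear echo canceller, L-channel adaptive array processor, and non-linear residual suppressor are supplied as callables; all names, shapes, and the energy-based selection rule are illustrative, not from the source.

```python
import numpy as np

def select_l_subfields(sub_fields: np.ndarray, L: int) -> np.ndarray:
    """Pick the L sub-sound fields with the highest short-term energy."""
    energy = np.sum(sub_fields ** 2, axis=1)
    return np.argsort(energy)[-L:]

def process_frame(mic_frames: np.ndarray, far_end: np.ndarray,
                  beam_weights: np.ndarray, linear_aec, array_proc,
                  residual_suppress, L: int = 2) -> np.ndarray:
    """Illustrative pipeline for one frame.
    mic_frames:   (num_mics, frame_len) microphone signals
    beam_weights: (N, num_mics) fixed beamformer weights -> N sub-sound fields
    linear_aec / array_proc / residual_suppress are placeholder callables."""
    sub_fields = beam_weights @ mic_frames               # decompose into N sub-fields
    cleaned = np.stack([linear_aec(sf, far_end) for sf in sub_fields])
    chosen = select_l_subfields(cleaned, L)               # keep L of the N sub-fields
    combined = array_proc(cleaned[chosen])                # L-channel adaptive processing
    return residual_suppress(combined, far_end)           # non-linear echo cancellation
```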
Abstract:
Systems, processes, devices, apparatuses, algorithms, and computer-readable media for suppressing spatial interference using a dual microphone array. Respective first and second microphone signals are received from a first microphone and a second microphone that are separated by a predefined distance and configured to receive source signals. A phase difference between the first and second microphone signals is calculated based on the predefined distance. An angular distance between directions of arrival of the source signals and a desired capture direction is calculated based on the phase difference. Directional-filter coefficients are calculated based on the angular distance. Undesired source signals are filtered from an output based on the directional-filter coefficients.
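The per-frequency-bin computation implied by the abstract (phase difference → direction of arrival → angular distance → directional-filter coefficients) might look like the following sketch. It assumes a far-field model and a Gaussian gain fall-off; the constants, function name, and beam width are assumptions made for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def directional_gains(x1: np.ndarray, x2: np.ndarray, fs: float,
                      mic_distance: float, desired_angle_rad: float,
                      n_fft: int = 512, beam_width_rad: float = 0.35) -> np.ndarray:
    """Per-bin suppression gains for one frame from a two-microphone array.
    x1, x2: time-domain frames from the first and second microphones.
    The gain falls off with the angular distance between each bin's
    estimated direction of arrival and the desired capture direction."""
    X1, X2 = np.fft.rfft(x1, n_fft), np.fft.rfft(x2, n_fft)
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)

    # Phase difference between the two microphone signals, per frequency bin.
    phase_diff = np.angle(X1 * np.conj(X2))

    # Direction of arrival implied by the phase difference and mic spacing.
    with np.errstate(divide="ignore", invalid="ignore"):
        sin_theta = phase_diff * SPEED_OF_SOUND / (2.0 * np.pi * freqs * mic_distance)
    sin_theta = np.clip(np.nan_to_num(sin_theta), -1.0, 1.0)
    doa = np.arcsin(sin_theta)

    # Angular distance to the desired direction -> directional-filter coefficients.
    angular_distance = np.abs(doa - desired_angle_rad)
    gains = np.exp(-(angular_distance / beam_width_rad) ** 2)
    return gains  # apply to the bins of a reference or combined signal, then inverse FFT
```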
Abstract:
A video conference endpoint determines a position of a best audio pick-up region for placement of a sound source, relative to a microphone whose receive pattern is configured to capture sound signals from that region. The endpoint captures an image of a scene that encompasses the best region and displays the image of the scene. The endpoint generates an image representative of the best region and displays it as an overlay on the scene image.
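One rough way to render such an overlay: map the angular span of the best pick-up region to image columns using the camera's horizontal field of view and tint those columns. The field-of-view value, the tint, and the function name below are assumptions for illustration only.

```python
import numpy as np

def overlay_pickup_region(frame: np.ndarray, region_start_deg: float,
                          region_end_deg: float, camera_hfov_deg: float = 70.0,
                          alpha: float = 0.35) -> np.ndarray:
    """Shade the image columns that correspond to the microphone's best
    pick-up region, assuming the camera and microphone share a reference
    direction and the region is given as a horizontal angular span."""
    h, w, _ = frame.shape
    half = camera_hfov_deg / 2.0
    # Map angles (relative to the camera centre line) to pixel columns.
    to_col = lambda a: int(np.clip((a + half) / camera_hfov_deg * w, 0, w - 1))
    c0, c1 = sorted((to_col(region_start_deg), to_col(region_end_deg)))
    out = frame.astype(np.float32)
    out[:, c0:c1, 1] = (1 - alpha) * out[:, c0:c1, 1] + alpha * 255  # green tint
    return out.astype(frame.dtype)
```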
Abstract:
Presented herein are techniques for controlling the level of ultrasound pairing signals generated in a teleconferencing environment. The levels of ultrasound pairing signals transmitted in a meeting room are adjusted automatically based on the ultrasound signal levels received at one or more sound receiving devices that can communicate with a teleconferencing endpoint in the meeting room.
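A toy version of that automatic adjustment, assuming the endpoint periodically collects the received ultrasound levels (in dB) reported by paired devices and steers its transmit level toward a target at the weakest receiver; every threshold and step size below is an assumed value, not from the source.

```python
def adjust_ultrasound_level(current_tx_db: float, received_levels_db: list[float],
                            target_rx_db: float = -45.0, step_db: float = 2.0,
                            min_tx_db: float = -30.0, max_tx_db: float = 0.0) -> float:
    """Illustrative control step: raise or lower the ultrasound pairing-signal
    transmit level so the weakest reporting device still hears it near the
    target level."""
    if not received_levels_db:
        return max_tx_db                      # no reports yet: transmit at full level
    weakest = min(received_levels_db)         # device receiving the least energy
    if weakest < target_rx_db - step_db:
        current_tx_db += step_db              # too quiet at some device: turn up
    elif weakest > target_rx_db + step_db:
        current_tx_db -= step_db              # comfortably above target: turn down
    return float(min(max(current_tx_db, min_tx_db), max_tx_db))
```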