Abstract:
Clock synchronization for an acoustic echo canceller (AEC) with a speaker and a microphone connected over a digital link may be provided. A clock difference may be estimated by analyzing the speaker signal and the microphone signal in the digital domain. The clock synchronization may combine both hardware and software. This synchronization may be performed in two stages: coarse synchronization in hardware first, then fine synchronization in software with, for example, a re-sampler.
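A minimal sketch of the software side of such a scheme is given below, assuming Python with NumPy. The block-based cross-correlation used to estimate drift, the function names (block_delay, estimate_drift_ppm, fine_resample), and the linear-interpolation re-sampler are illustrative assumptions, not the method described in the abstract; the coarse hardware stage (for example, adjusting a sample-clock divider) is outside the sketch.

    # Hedged sketch: estimating speaker/microphone clock skew in the digital
    # domain and compensating with a software re-sampler. All names and the
    # correlation scheme are illustrative assumptions.
    import numpy as np

    def block_delay(speaker_block, mic_block):
        # Lag (in samples) of the echo in mic_block relative to speaker_block,
        # found as the peak of the cross-correlation.
        corr = np.correlate(mic_block, speaker_block, mode="full")
        return int(np.argmax(corr)) - (len(speaker_block) - 1)

    def estimate_drift_ppm(speaker, mic, fs, block_s=1.0, span_s=30.0):
        # Estimate the clock-rate mismatch in parts per million by tracking how
        # the echo delay drifts between the start and the end of the span.
        n = int(block_s * fs)
        d0 = block_delay(speaker[:n], mic[:n])
        off = int(span_s * fs)
        d1 = block_delay(speaker[off:off + n], mic[off:off + n])
        return (d1 - d0) / (span_s * fs) * 1e6

    def fine_resample(mic, drift_ppm):
        # Fine software correction: linearly interpolate the microphone stream
        # onto a time grid stretched by the estimated rate mismatch.
        ratio = 1.0 + drift_ppm * 1e-6
        n_out = int(round(len(mic) / ratio))
        t_out = np.arange(n_out) * ratio
        return np.interp(t_out, np.arange(len(mic)), mic)

In practice a streaming fractional re-sampler would replace the one-shot interpolation here, but the division of labor (drift estimation in the digital domain, fine correction in software) is the same.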
Abstract:
In one implementation, at least two images are captured from a respective at least two cameras of a telepresence system. The at least two cameras have horizontally overlapping fields of view such that the at least two images horizontally overlap. A processor identifies, by image processing of the overlap of the at least two images, portions of each of the at least two images. The portions spatially correspond to adjacent displays and do not include the overlap. Subsequent images captured by the at least two cameras are displayed on the adjacent displays in a video conference. The displayed images are for the portions of the field of view corresponding to the adjacent displays.
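The kind of overlap analysis described can be illustrated with a small sketch, assuming Python with NumPy. The brute-force column-matching search (estimate_overlap_columns) and the even split of the overlap between the two views (crop_to_displays) are assumptions made for illustration, not the specific image processing of the implementation.

    # Hedged sketch: estimate the horizontal overlap between two side-by-side
    # camera images and crop each to the portion shown on its display.
    import numpy as np

    def estimate_overlap_columns(left_img, right_img, max_overlap=400):
        # Overlap width (in columns) that best aligns the right edge of
        # left_img with the left edge of right_img (grayscale H x W arrays).
        best_w, best_err = 0, np.inf
        for w in range(10, max_overlap):
            a = left_img[:, -w:].astype(np.float64)
            b = right_img[:, :w].astype(np.float64)
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best_w, best_err = w, err
        return best_w

    def crop_to_displays(left_img, right_img, overlap):
        # Split the shared overlap between the two views so the adjacent
        # displays tile the scene without repeating it.
        half = overlap // 2
        return left_img[:, : left_img.shape[1] - half], right_img[:, overlap - half:]

Once the overlap has been measured, the same crop can be applied to subsequent frames from each camera before they are sent to the adjacent displays.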
Abstract:
Acoustic echo cancellation is improved by receiving a speaker signal that is used to produce audio in a room, and receiving audio signals that capture audio from an array of microphones in the room, including an acoustic echo from the speakers. To cancel the acoustic echo, each adaptive filter is associated with a corresponding subspace in the room. Each of the audio signals is assigned to at least one of the adaptive filters, and a set of coefficients is iteratively determined for each of the adaptive filters. The coefficients for an adaptive filter are determined by selecting each of the audio signals assigned to that adaptive filter and adapting the filter to remove an acoustic echo from each of the selected audio signals. At each iteration, a different audio signal is selected from the audio signals assigned to the adaptive filter in order to determine the set of coefficients.
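A minimal sketch of this per-subspace, round-robin adaptation is shown below, assuming Python with NumPy and an NLMS update rule. NLMS, the class name SubspaceEchoCanceller, and the sample-wise update are illustrative assumptions rather than the adaptation rule actually used.

    # Hedged sketch: one NLMS adaptive filter per room subspace, with the
    # microphones assigned to that subspace served round-robin, so each
    # iteration adapts against a different assigned signal.
    import numpy as np

    class SubspaceEchoCanceller:
        def __init__(self, num_taps, assigned_mics, mu=0.5, eps=1e-8):
            self.w = np.zeros(num_taps)     # coefficients for this subspace
            self.assigned = assigned_mics   # mic indices assigned to this filter
            self.mu, self.eps = mu, eps
            self.turn = 0                   # round-robin pointer

        def update(self, speaker_frame, mic_frames):
            # One iteration: pick the next assigned mic, estimate its echo from
            # the speaker frame, and nudge the coefficients to cancel it.
            mic_idx = self.assigned[self.turn % len(self.assigned)]
            self.turn += 1
            x = speaker_frame[-len(self.w):][::-1]          # newest taps first
            y_hat = self.w @ x                              # predicted echo sample
            e = mic_frames[mic_idx][-1] - y_hat             # residual after cancellation
            self.w += self.mu * e * x / (x @ x + self.eps)  # NLMS coefficient update
            return e

Cycling the assigned microphones this way lets a single set of coefficients per subspace converge on the echo path shared by those microphones, rather than maintaining one filter per microphone.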