Abstract:
Techniques are presented herein for a server to automatically determine the locations of collaboration endpoints and user devices throughout a building, and then provide the user devices with directions to the endpoints. The server collects air pressure and GPS readings from the user devices over a period of time. The readings may be collected from user devices that are paired or unpaired with endpoints at the time of the readings. The accumulated readings may then be used by the server to calculate user device offsets, endpoint offsets, floor-level offsets, and endpoint relative orientations within the building. In response to a notification of an upcoming meeting, the server may then provide a user device with directions to the meeting based on the collected readings.
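The floor-level offset calculation described above can be sketched as follows. This is a minimal illustration, not the claimed method: it assumes the standard barometric altitude formula and a nominal 3.5 m floor height, and the function names are hypothetical.

```python
import math

def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Standard barometric formula: pressure reading (hPa) to altitude (m)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_offset(device_p_hpa, reference_p_hpa, floor_height_m=3.5):
    """Estimate how many floors a device sits above a reference reading
    (e.g. a lobby endpoint) from the difference in barometric altitude."""
    dh = pressure_to_altitude(device_p_hpa) - pressure_to_altitude(reference_p_hpa)
    return round(dh / floor_height_m)

# A device reading ~1.26 hPa below the lobby reading is roughly 10.5 m,
# i.e. three floors, above it.
print(floor_offset(1011.99, 1013.25))  # 3
```

In practice the server would average many such paired readings over time to cancel weather-driven drift in the ambient pressure, which is why the abstract emphasizes collection over a period of time.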
Abstract:
Techniques for adaptive noise cancellation for multiple audio endpoints in a shared space are described. According to one example, a method includes detecting, by a first audio endpoint, one or more audio endpoints co-located with the first audio endpoint at a first location. A selected audio endpoint of the one or more audio endpoints is identified as a target noise source. The method includes obtaining, from the selected audio endpoint, a loudspeaker reference signal associated with a loudspeaker of the selected audio endpoint and removing the loudspeaker reference signal from a microphone signal associated with a microphone of the first audio endpoint. The method also includes providing the microphone signal from the first audio endpoint to at least one of a voice user interface (VUI) or a second audio endpoint, wherein the second audio endpoint is located remotely from the first location.
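The reference-signal removal step can be illustrated with an adaptive filter. The sketch below uses a normalised-LMS canceller, a common choice for this kind of echo/leakage removal, though the abstract does not specify the algorithm; the function name, tap count, and step size are assumptions for the demo.

```python
import random

def nlms_cancel(mic, ref, taps=8, mu=0.5, eps=1e-8):
    """Normalised-LMS adaptive filter: estimate the co-located
    loudspeaker's contribution to the microphone signal from its
    reference signal, and subtract it sample by sample."""
    w = [0.0] * taps
    cleaned = []
    for n in range(len(mic)):
        x = [ref[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wk * xk for wk, xk in zip(w, x))   # predicted leakage
        e = mic[n] - y                              # cleaned sample
        norm = sum(xk * xk for xk in x) + eps
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]
        cleaned.append(e)
    return cleaned

# Toy demo: the microphone hears an attenuated, 2-sample-delayed copy of
# the neighbouring endpoint's loudspeaker; the filter learns to remove it.
random.seed(0)
ref = [random.uniform(-1.0, 1.0) for _ in range(2000)]
mic = [0.6 * (ref[n - 2] if n >= 2 else 0.0) for n in range(2000)]
cleaned = nlms_cancel(mic, ref)
print(max(abs(e) for e in cleaned[-100:]) < 1e-3)  # True
```

The key point the abstract makes is that the reference comes from the *selected* co-located endpoint over a local exchange, so the canceller has the exact loudspeaker signal rather than having to estimate the noise blindly.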
Abstract:
A controller of a collaboration endpoint generates a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker, a reference audio signal for the ultrasonic source audio signal, and, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal. The controller produces a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal and determines whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
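The prediction-error test can be sketched with the same adaptive-prediction idea: while the room is static, a filter driven by the reference signal predicts the primary signal well; when a person moves and the ultrasonic echo path changes, the prediction error jumps. The NLMS predictor, signal lengths, and gain values below are illustrative assumptions, not the patented design.

```python
import random

def prediction_errors(primary, reference, taps=4, mu=0.5, eps=1e-8):
    """Errors of an NLMS filter that predicts the primary (microphone)
    signal from the ultrasonic reference signal.  While the echo path is
    static the errors shrink toward zero; when the path changes, e.g.
    because someone moves near the endpoint, they jump."""
    w = [0.0] * taps
    errs = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        e = primary[n] - sum(wk * xk for wk, xk in zip(w, x))
        norm = sum(xk * xk for xk in x) + eps
        w = [wk + mu * e * xk / norm for wk, xk in zip(w, x)]
        errs.append(e)
    return errs

random.seed(1)
ref = [random.uniform(-1.0, 1.0) for _ in range(4000)]
# Static room for the first half; at n = 2000 a person moving nearby
# perturbs the echo path gain from 0.5 to 0.8.
primary = [(0.5 if n < 2000 else 0.8) * ref[n] for n in range(4000)]
errs = prediction_errors(primary, ref)
before = sum(e * e for e in errs[1900:2000]) / 100
after = sum(e * e for e in errs[2000:2100]) / 100
print(before < after)  # True: error power spikes when the path changes
```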
Abstract:
A computer-implemented method is disclosed that includes, at a server in communication with at least first and second collaboration endpoints each located within the same physical space: determining a relative positioning of the first and second collaboration endpoints; and configuring content displayed at each of the first and second endpoints based on the relative positioning of the first and second collaboration endpoints.
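One way the configuring step might look in practice is sketched below. The left-to-right ordering and the content-role policy are purely illustrative assumptions for the demo; the abstract does not specify how roles are assigned.

```python
def assign_layout(endpoints):
    """One illustrative policy: order co-located endpoints left-to-right
    by their x-position in the room, show shared content on the leftmost
    display, and remote participant video on the rest."""
    ordered = sorted(endpoints, key=lambda ep: ep["x"])
    roles = {ordered[0]["id"]: "shared-content"}
    for ep in ordered[1:]:
        roles[ep["id"]] = "remote-video"
    return roles

print(assign_layout([{"id": "ep-right", "x": 2.4}, {"id": "ep-left", "x": 0.0}]))
# {'ep-left': 'shared-content', 'ep-right': 'remote-video'}
```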
Abstract:
A video conference endpoint includes one or more cameras to capture video of different views and a microphone array to sense audio. One or more closeup views are defined. The endpoint detects faces in the captured video and active audio sources in the sensed audio. The endpoint detects any active talker whose detected face position coincides with a detected active audio source, and also uses speaker clustering to determine whether any active talker is associated with a previously stored closeup view. Based on whether an active talker is detected in any of the stored closeup views, the endpoint switches between capturing video of one of the closeup views and a best overview of the participants in the conference room.
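The face/audio coincidence test can be sketched as a bearing match: a face counts as the active talker when its direction from the camera agrees with an active audio source direction from the microphone array. The angular representation and tolerance below are assumptions for illustration.

```python
def find_active_talker(face_bearings, audio_bearings, tolerance_deg=10.0):
    """Return the index of the first detected face whose bearing
    coincides, within tolerance, with an active audio source bearing
    from the microphone array; None if no face matches."""
    for i, face in enumerate(face_bearings):
        if any(abs(face - src) <= tolerance_deg for src in audio_bearings):
            return i
    return None

faces = [-30.0, 5.0, 40.0]   # face bearings from the camera, degrees
audio = [6.5]                # active source bearing from the mic array
print(find_active_talker(faces, audio))  # 1: the middle face is talking
```

In the abstract's full pipeline, a match like this would then be checked against stored closeup views via speaker clustering before the endpoint switches away from the overview shot.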
Abstract:
A loudspeaker transmits an ultrasonic signal into a spatial region. A microphone transduces ultrasonic sound, including an echo of the transmitted ultrasonic signal, received from the spatial region into a received ultrasonic signal. A controller transforms the ultrasonic signal and the received ultrasonic signal into respective time-frequency domains that cover respective ultrasound frequency ranges. The controller computes an error signal, representative of an estimate of an echo-free received ultrasonic signal, based on the transformed ultrasonic signal and the transformed received ultrasonic signal. The controller computes power estimates of the error signal over time, and detects a change in people presence in the spatial region based on a change in the power estimates of the error signal over time.
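The final step, detecting a change in people presence from the error-signal power over time, can be sketched with two exponential smoothers: a fast one that tracks the current power and a slow one that tracks the baseline. The smoothing constants and ratio threshold are illustrative assumptions, not values from the abstract.

```python
def presence_change_flags(powers, fast=0.3, slow=0.02, ratio=2.0):
    """Smooth the per-frame error-signal power with a fast and a slow
    exponential average; flag frames where the fast estimate exceeds the
    slow one by a factor `ratio`, i.e. the power changed abruptly."""
    pf = ps = powers[0]
    flags = []
    for p in powers:
        pf = (1.0 - fast) * pf + fast * p
        ps = (1.0 - slow) * ps + slow * p
        flags.append(pf > ratio * ps)
    return flags

# Error power is low while the room is static, then jumps when a person
# enters the ultrasonic field at frame 100.
powers = [0.1] * 100 + [1.0] * 100
flags = presence_change_flags(powers)
print(flags[50], flags[110])  # False True
```

Comparing a fast estimate against a slow baseline, rather than against a fixed threshold, keeps the detector robust to slow drifts in room acoustics while still reacting to the abrupt power change that motion produces.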