Abstract:
Upon receiving a continuous presence video image, an endpoint of a videoconference may identify its self image and replace the self image with other video data, including an alternate video image from another endpoint or a background color. Embedded markers may be placed in a continuous presence video image corresponding to the endpoint. The embedded markers identify the location of the self image of the endpoint in the continuous presence video image. The embedded markers may be inserted by the endpoint or a multipoint control unit serving the endpoint.
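The marker-based self-image replacement described above can be sketched as follows. This is a minimal illustration under assumed conventions: the frame is a 2D list of pixel values, and a single sentinel value (`MARKER`) stands in for the embedded markers that delimit the self-image rectangle; all names are hypothetical.

```python
MARKER = 255  # assumed sentinel pixel value standing in for an embedded marker

def find_marked_region(frame):
    """Return (top, left, bottom, right) of the rectangle whose corners
    carry the marker value, or None if no markers are present."""
    coords = [(r, c) for r, row in enumerate(frame)
              for c, v in enumerate(row) if v == MARKER]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)

def replace_self_image(frame, fill=0):
    """Replace the marked self-image region with a background color."""
    region = find_marked_region(frame)
    if region is None:
        return frame
    top, left, bottom, right = region
    for r in range(top, bottom + 1):
        for c in range(left, right + 1):
            frame[r][c] = fill
    return frame
```

In practice the replacement data could equally be an alternate video image from another endpoint rather than a flat background color.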
Abstract:
A videoconferencing unit for enhancing direct eye-contact between participants can include a curved fully reflective mirror to reflect the image of the near end to a camera. The curved mirror can be placed in front of the display screen near a location where images of faces/eyes of far end participants are to appear. In another example, the videoconferencing unit can include a disintegrated camera configuration that provides an air gap between a front lens element and a rear lens element. The front lens element can be located behind an aperture within the display screen. The air gap can provide an unobstructed path to light from projectors and therefore avoid any undesirable shadows from appearing on the display screen. In another example, the videoconferencing unit can include a combination of the disintegrated camera configuration and mirrors for providing direct eye contact videoconferencing.
Abstract:
Content is shared by rendering the content in a shared session and providing the rendered content to the participating devices. The originating device has access to an original version of the content in a virtual session, which is accessed by logging into cloud content services and downloading the desired content into the virtual session. A rendering engine in a rendered session then renders the content and distributes the rendered content to the participants. Only rendered content is provided to the participants, so that the participants cannot see the credentials of the originating user, cannot see the document source, and do not have access to the document itself. The participants can mark up the rendered content, and those markups are shared with the other participants.
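The isolation property described above can be sketched in a few lines. This is an illustrative model only, with assumed class and method names: the session holds the credentials and document source privately and exposes nothing but rendered output and shared markups.

```python
class SharedSession:
    """Toy model: credentials and document source never leave the session."""

    def __init__(self, credentials, document):
        self._credentials = credentials   # used only inside the session
        self._document = document         # source is never distributed
        self.markups = []

    def render(self):
        # Stand-in for a real rendering engine: produce flat output with
        # no credentials or source material attached.
        return f"[rendered] {len(self._document)} chars"

    def share(self, participants):
        # Every participant receives the same rendered representation.
        rendered = self.render()
        return {p: rendered for p in participants}

    def add_markup(self, participant, note):
        # Markups are recorded once and visible to all participants.
        self.markups.append((participant, note))
        return list(self.markups)
```

The key design point is that `share` returns only the output of `render`, so no API surface exists through which a participant could reach `_credentials` or `_document`.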
Abstract:
The amount of far-field noise transmitted by a primary communication device in an open-plan office environment is reduced by defining an acoustic perimeter of reference microphones around the primary device. Reference microphones generate a reference audio input including far-field noise in the proximity of the primary device. The primary device generates a main audio input including the voice of the primary speaker as well as background noise. Reference audio input is compared to main audio input to identify the background noise portion of the main audio signal. A noise reduction algorithm suppresses the identified background noise in the main audio signal. The one or more reference microphones defining the acoustic perimeter may be included in separate microphone devices placed in proximity to the main desktop phone, microphones within other nearby desktop telephone devices, or a combination of both types of devices.
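The comparison step can be illustrated with a deliberately simplified sketch: the averaged reference-microphone signal serves as the far-field noise estimate, which is then subtracted from the main signal. A real implementation would use adaptive filtering, typically in the frequency domain; the function name and the fixed scaling factor `alpha` are assumptions for illustration.

```python
def suppress_noise(main, references, alpha=1.0):
    """Subtract the averaged reference signal (scaled by alpha) from the
    main microphone signal, sample by sample.

    main       -- list of samples from the primary device's microphone
    references -- list of equal-length sample lists from the perimeter mics
    """
    # Average the perimeter microphones to estimate the far-field noise.
    noise_estimate = [sum(samples) / len(samples)
                      for samples in zip(*references)]
    # Suppress the estimated noise component in the main signal.
    return [m - alpha * n for m, n in zip(main, noise_estimate)]
```

With ideal reference pickup (the references capture exactly the noise component present in the main signal), the output reduces to the primary speaker's voice.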
Abstract:
A novel universal bridge (UB) can handle and conduct multimedia multipoint conferences between a plurality of MREs and LEPs without using an MRM, an MCU, or a gateway. Further, a UB can be configured to allocate and release resources dynamically according to the current needs of each conferee and the session.
Abstract:
A technique for merging conference session dialogs allows presenting content and media streams from a non-Skype endpoint to a Skype multipoint control unit (MCU), so that they present as a single caller in a conference with both media and content. A signaling adapter intercepts session dialogs and merges or otherwise modifies them. When adding the non-Skype endpoint, requests from a content server are dropped, while requests from the MCU handling non-Skype media streams are forwarded to the Skype MCU. Responses to the requests from the MCU are also forwarded to the content server. When creating subscription dialogs, requests from the content server are modified to appear as if they came from the MCU, while responses go back to the proper requester. Conference notifications are forked to go to both the content server and the MCU. Because Skype uses separate media and content dialogs, merging of audio/video and content dialogs may be omitted. By merging dialogs, user experience is improved.
Abstract:
A MEMS microphone assembly is formed by the combination of front and rear single-piece boots, which are configured to mate, and a MEMS microphone. The front boot includes two ports for receiving sound waves, which are provided to ports of the MEMS microphone. The front boot also includes two collars that form the ports and are used to align the MEMS microphone assembly in a housing containing the assembly. Acoustic tubes transfer the sound waves from the ports to the MEMS microphone, and air channels can be provided with the acoustic tubes to reduce microphonics. The front and rear boots contain recesses that capture the MEMS microphone to simplify alignment and assembly.
Abstract:
A video encoding method eliminates the need for back-channel feedback by using long-term reference frames to recover from data transmission errors. A videoconferencing endpoint captures image data and designates a first frame as a long-term reference (LTR) frame. Subsequent inter frames use the LTR frame for reference. When a new LTR frame is designated, subsequent frames use the newly designated LTR frame for reference only after it is determined that the newly designated LTR frame is fully operational to serve in that role.
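The reference-selection rule above can be sketched as a small state machine: frames keep referencing the current LTR until a newly designated LTR is confirmed operational. The confirmation mechanism itself is abstracted away here, and all names are illustrative assumptions, not the patent's actual implementation.

```python
class LtrState:
    """Tracks which long-term reference frame encoding should use."""

    def __init__(self, first_frame_id):
        self.current_ltr = first_frame_id   # first frame starts as the LTR
        self.pending_ltr = None

    def designate(self, frame_id):
        # A newly designated LTR is only pending until confirmed.
        self.pending_ltr = frame_id

    def confirm_operational(self):
        # Promote the pending LTR once it is known to be usable.
        if self.pending_ltr is not None:
            self.current_ltr = self.pending_ltr
            self.pending_ltr = None

    def reference_for_next_frame(self):
        # Subsequent frames always reference the confirmed LTR.
        return self.current_ltr
```

The point of the two-step designate/confirm split is that an unconfirmed LTR is never used as a prediction reference, so a transmission error affecting the new LTR cannot propagate.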
Abstract:
An endpoint receives audio from a remote endpoint. A first signal corresponding to the audio is received at an adaptive filter, and a filtered signal is generated. First audio is emitted at a loudspeaker based on the first signal. A microphone collects second audio, which is based on the first audio, and emits a signal based on the second audio. The filtered signal is subtracted from the microphone signal to generate an adapted signal, the adapted signal having an energy level. The adapted signal is then transmitted to a double-talk detector, which is configured to allow transmission of the adapted signal to the remote endpoint when the energy level of the adapted signal exceeds an energy threshold. The degree of cross-correlation between the first signal and the adapted signal is determined iteratively. If the cross-correlation exceeds a cross-correlation threshold, the energy threshold of the double-talk detector is raised.
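The gating logic alone (not the adaptive filter) can be sketched as follows: the adapted signal passes when its energy exceeds a threshold, and the threshold is raised whenever the adapted signal still correlates strongly with the far-end signal, indicating residual echo rather than near-end speech. All names and constants are assumptions for illustration.

```python
def energy(signal):
    """Sum-of-squares energy of a signal."""
    return sum(s * s for s in signal)

def cross_correlation(a, b):
    """Normalized zero-lag cross-correlation of two equal-length signals."""
    num = sum(x * y for x, y in zip(a, b))
    den = (energy(a) * energy(b)) ** 0.5
    return num / den if den else 0.0

class DoubleTalkDetector:
    def __init__(self, energy_threshold=1.0, corr_threshold=0.9,
                 raise_factor=2.0):
        self.energy_threshold = energy_threshold
        self.corr_threshold = corr_threshold
        self.raise_factor = raise_factor

    def should_transmit(self, far_end, adapted):
        """Decide whether the adapted signal may be sent to the far end."""
        # High correlation with the far-end signal suggests residual echo,
        # so the energy threshold is raised before the gating decision.
        if cross_correlation(far_end, adapted) > self.corr_threshold:
            self.energy_threshold *= self.raise_factor
        return energy(adapted) > self.energy_threshold
```

Raising the threshold on high correlation makes the detector more conservative exactly when the adapted signal looks echo-like, reducing the chance of transmitting residual echo as if it were near-end speech.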