Abstract:
A videoconferencing system has a videoconferencing unit that uses portable devices as peripherals for the system. The portable devices obtain near-end audio and send the audio to the videoconferencing unit via a wireless connection. In turn, the videoconferencing unit sends the near-end audio from the loudest portable device along with near-end video to the far-end. The portable devices can control the videoconferencing unit and can initially establish the videoconference by connecting with the far-end and then transferring operations to the videoconferencing unit. To deal with acoustic coupling between the unit's loudspeaker and the portable device's microphone, the unit uses an echo canceller that is compensated for differences in the clocks used in the A/D and D/A converters of the loudspeaker and microphone.
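The clock compensation described above can be illustrated with a minimal resampling sketch: if the microphone's A/D clock and the loudspeaker's D/A clock differ by a known ratio, the echo reference can be resampled before cancellation. This is an assumption-laden toy (linear interpolation, a known `ratio`), not the patent's implementation.

```python
def resample_linear(signal, ratio):
    """Resample `signal` by linear interpolation to compensate a clock-rate
    mismatch of `ratio` (speaker clock / microphone clock). Hypothetical
    helper; a real echo canceller would also estimate `ratio` online."""
    n_out = int(len(signal) * ratio)
    out = []
    for i in range(n_out):
        pos = i / ratio          # position in the original sample grid
        j = int(pos)
        frac = pos - j
        if j + 1 < len(signal):  # interpolate between neighboring samples
            out.append(signal[j] * (1 - frac) + signal[j + 1] * frac)
        else:                    # past the last sample: hold the final value
            out.append(signal[j] if j < len(signal) else 0.0)
    return out
```

With the reference resampled onto the microphone's clock, a conventional adaptive echo canceller no longer drifts out of alignment over time.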
Abstract:
A system and method for manipulating images in a videoconferencing session provides users with a 3-D-like view of one or more presented sites, without the need for 3-D equipment. A plurality of cameras may record a room at a transmitting endpoint, and the receiving endpoint may select one of the received video streams based upon a point of view of a conferee at the receiving endpoint. The conferee at the receiving endpoint will thus experience a 3-D-like view of the presented site.
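The stream-selection step above can be sketched as picking, from the plurality of camera streams, the one whose capture angle best matches the conferee's current point of view. The angle model and names here are illustrative assumptions, not the patent's method.

```python
def select_stream(camera_angles, viewer_angle):
    """Return the index of the camera whose viewing angle (degrees) is
    closest to the receiving conferee's estimated point of view.
    Hypothetical model: one scalar angle per camera and per viewer."""
    return min(range(len(camera_angles)),
               key=lambda i: abs(camera_angles[i] - viewer_angle))
```

As the conferee's head position changes, repeated calls switch among the received streams, producing the 3-D-like effect without stereo equipment.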
Abstract:
The present invention is a system and a method for providing a control unit for a multipoint multimedia/audio conference that enables one or more participants to take part in more than one conference simultaneously. The control unit can operate in audio and/or video sections of an MCU and/or Management and Control System (MCS). The MCS, in an exemplary embodiment of the present invention, controls the participation of at least one participant in more than one conference simultaneously by using a Cross-Conference Database (CCDB). The MCU performs connection changes affecting which participants are associated with one or more conferences based on the information that is stored in the CCDB.
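A Cross-Conference Database of the kind described can be modeled as a mapping from each participant to the set of conferences they are joined to; the MCS consults it when changing connections. This is a toy in-memory sketch with invented method names, not the patented structure.

```python
class CrossConferenceDB:
    """Toy Cross-Conference Database (CCDB): tracks which conferences
    each participant is simultaneously joined to."""

    def __init__(self):
        self._map = {}  # participant id -> set of conference ids

    def join(self, participant, conference):
        self._map.setdefault(participant, set()).add(conference)

    def leave(self, participant, conference):
        self._map.get(participant, set()).discard(conference)

    def conferences_of(self, participant):
        """Conferences this participant is currently in (read-only view)."""
        return frozenset(self._map.get(participant, set()))
```

A control unit could then enumerate `conferences_of(p)` to decide which audio/video mixes participant `p` must receive and contribute to.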
Abstract:
Conferencing methods and systems are disclosed wherein tags are associated with conferencing endpoints. The tags provide information enabling a decision-making entity to determine the preferability of one or more processing aspects of the endpoints. In a multipoint conference a tag can allow a decision-making entity such as an MCU to determine the most appropriate mode for rendering video or other signals sent from a tagged endpoint. The tag itself can indicate the most appropriate mode or can contain information from which the decision-making entity can determine the most appropriate mode using an algorithm. A tag can be associated with an endpoint manually, for example based on a user's or controller's inputs concerning the endpoint. Alternatively, the tag can be assigned automatically based on automatically sensing one or more conditions at an endpoint or analyzing one or more parameters of a data stream transmitted from the endpoint.
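The two decision paths described — a tag that names the mode directly versus a tag whose conditions feed an algorithm — can be sketched as below. The tag fields (`mode`, `bandwidth_kbps`, `has_camera`) and mode names are illustrative assumptions, not terms from the disclosure.

```python
def choose_mode(tag):
    """Decide a rendering mode for a tagged endpoint.
    Path 1: the tag names the mode outright.
    Path 2: derive the mode from tagged endpoint conditions.
    All field names here are hypothetical."""
    if tag.get("mode"):                      # tag indicates the mode directly
        return tag["mode"]
    if tag.get("bandwidth_kbps", 0) < 256:   # too constrained for video
        return "audio-only"
    return "video" if tag.get("has_camera") else "audio-only"
```

An MCU-like entity would evaluate this per endpoint whenever a tag is assigned or updated.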
Abstract:
Audio transmitter circuitry is disclosed that is configurable into different modes by the user, and can output either a differential or single-ended audio signal on two signal wires. Depending on the mode, the transmitter deals with noise on the signal wires by adjusting the input resistance that such noise sees looking into the transmitter. If the transmitter is configured in a differential mode, the input resistance looking back into the transmitter from the perspective of the noise on both signal wires is relatively high. If the transmitter is configured in a single-ended mode, the input resistance looking back from the active signal wire into the transmitter is relatively low, effectively grounding such noise at the transmitter without significantly presenting it to the receiver.
Abstract:
Disclosed herein are methods, systems, and devices for improved audio, video, and data conferencing. The present invention provides a conferencing system comprising a plurality of endpoints communicating data including audio data and control data according to a communication protocol. A local conference endpoint may control or be controlled by a remote conference endpoint. Data comprising control signals may be exchanged between the local endpoint and remote endpoint via various communication protocols. In other embodiments, the present invention provides for improved bridge architecture for controlling functions of conference endpoints including controlling functions of the bridge.
Abstract:
Methods and devices for improving the intelligibility of audio in a teleconferencing unit. Multiple microphones and multiple audio channels are used, in which only the best microphones are selected to represent each audio channel. Multiple microphone signals may be mixed according to the microphones' positions in a room to form a single signal to represent one audio channel. The audio signal may be further processed to effectuate other features.
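One common way to realize the "best microphone" selection above is by short-term energy: the microphone with the strongest recent signal represents the channel. This is a minimal sketch under that assumption; the patent may use a different selection criterion.

```python
def best_microphone(frames):
    """Pick the microphone with the highest short-term energy to
    represent an audio channel. `frames` maps mic id -> list of
    recent samples (hypothetical framing)."""
    def energy(samples):
        return sum(s * s for s in samples)
    return max(frames, key=lambda mic: energy(frames[mic]))
```

Run once per frame, this keeps the channel on the microphone nearest the active talker while suppressing quieter, more reverberant pickups.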
Abstract:
A communication system that includes multiple conferencing devices connected in a daisy-chain configuration is communicably connected to far-end conference participants. Multiple conferencing devices provide improved sound quality to the far-end participants by reducing aural artifacts resulting from reverberation and echo. The daisy-chain communication system also reduces the processing and transmission time of the near-end audio signal by processing and transmitting the audio signal in the frequency domain. Each conferencing device in the daisy chain performs signal conditioning on its audio signal before transmitting it in the frequency domain to a mixer. The output signal of the mixer is converted back to the time domain before being transmitted to the far-end. The daisy-chain configuration also provides a distributed bridge to external communication devices that can be connected to each conferencing device.
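The mixing pipeline above — each device contributes a frequency-domain signal, the mixer sums them, and only the mixed result is converted back to the time domain — can be sketched with a naive DFT. The transform choice and function names are illustrative; real devices would use an FFT and overlapped frames.

```python
import cmath

def dft(x):
    """Naive DFT: time-domain samples -> complex frequency bins."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    """Inverse DFT: frequency bins -> real time-domain samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def mix_in_frequency_domain(device_spectra):
    """Mixer: sum the per-device spectra bin by bin, then convert the
    mixed signal back to the time domain for the far end."""
    N = len(device_spectra[0])
    mixed = [sum(spec[k] for spec in device_spectra) for k in range(N)]
    return idft(mixed)
```

Because addition in the frequency domain equals addition in time, only one inverse transform is needed at the mixer output rather than one per device.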
Abstract:
A videoconferencing system for determining alignment information for images captured by two or more cameras is disclosed. The videoconferencing system can include a plurality of endpoints and at least one control unit (CU) such as a multipoint control unit (MCU), for example. An endpoint can include a plurality of cameras and at least one projector. The projector is used to project a pattern at the near-end site, which pattern is captured by the plurality of cameras. The image frames produced by the cameras are processed to determine the identity and location coordinates of the images of the projected patterns. The location coordinates can be used as reference points to be used by applications such as telepresence, 3D videoconferencing, and morphing.
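The location-coordinate step can be sketched by locating a projected pattern's centroid in a camera frame; computing the same centroid in each camera's frame yields corresponding reference points across cameras. The binary-image model here is a simplifying assumption, not the patent's detection method.

```python
def pattern_centroid(frame):
    """Locate a projected pattern in a frame modeled as a 2-D grid of
    0/1 pixels (1 = pattern pixel). Returns (x, y) centroid to use as
    a reference coordinate for alignment. Hypothetical pixel model."""
    pts = [(x, y) for y, row in enumerate(frame)
                  for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Matching such reference points between two cameras' frames is the input a telepresence or morphing application would need for alignment.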
Abstract:
A videoconference multipoint control unit (MCU) automatically generates display layouts for videoconference endpoints. Display layouts are generated based on attributes associated with video streams received from the endpoints and display configuration information of the endpoints. An endpoint can include one or more attributes in each outgoing stream. Attributes can be assigned based on a video stream's role, content, camera source, etc. Display layouts can be regenerated if one or more attributes change. A mixer can generate video streams to be displayed at the endpoints based on the display layout.
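The attribute-driven layout generation above can be sketched as a simple rule: a stream tagged with a "content" role (e.g. a shared presentation) takes the large pane, and the remaining streams fill small panes. The attribute names and pane structure are illustrative assumptions.

```python
def generate_layout(streams):
    """Build a display layout from stream attributes. Each stream is a
    dict with at least 'id' and an optional 'role' attribute; the
    'content' role wins the big pane. Hypothetical attribute names."""
    big = [s["id"] for s in streams if s.get("role") == "content"]
    small = [s["id"] for s in streams if s.get("role") != "content"]
    return {"big_pane": big[:1], "small_panes": small}
```

Re-running this whenever an endpoint changes a stream attribute reproduces the regeneration behavior the abstract describes.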