Abstract:
Content presented in association with a transmitting endpoint is obtained by the transmitting endpoint and transmitted to a receiving endpoint of a videoconference, along with associated information regarding interaction of a presenter with the content at the transmitting side. The receiving endpoint presents the content obtained from the transmitting endpoint in a format similar to the presentation format at the transmitting side. Where the original presentation format is not suitable for transmission, a video image of the content in the original presentation format may be transmitted instead. An intermediate control unit, such as a multipoint control unit, may relay the content and associated information between transmitting and receiving endpoints.
Abstract:
An automatic process for producing professional, directed, production-crew-quality video for videoconferencing is described. Rule-based logic is integrated into the automatic process, which uses sensor data to process video streams for videoconferencing. Sensor data on room activity is automatically processed into general room analytics, to which the rule-based logic is applied to produce production-quality video. Sensory devices and equipment, for example motion, infrared, audio, sound source localization (SSL), and video sensors, are used to detect room activity or room stimulus. The room activity is analyzed (for example, to determine whether individuals are present in the subject room, to identify speakers, and to track movement within the room) and processed to produce room analytics. Speaker identification and relative placement information are used to logically depict speakers conversing with one another.
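The sensor-to-analytics step can be sketched as a small rule-based function. This is an illustrative sketch only; the names (`SensorReading`, `room_analytics`) and thresholds are hypothetical, not taken from the described process.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    motion: bool          # motion detector tripped
    audio_level: float    # normalized microphone energy, 0.0-1.0
    ssl_angle: float      # sound-source-localization angle, in degrees

def room_analytics(reading: SensorReading) -> dict:
    """Apply simple rules to raw sensor data to derive room activity."""
    occupied = reading.motion or reading.audio_level > 0.1
    speaking = reading.audio_level > 0.3
    talker_angle: Optional[float] = reading.ssl_angle if speaking else None
    return {
        "occupied": occupied,
        "active_talker": speaking,
        # If someone is talking, SSL supplies the direction to frame.
        "talker_angle": talker_angle,
    }
```

Downstream rules (camera selection, framing) would consume the resulting analytics dictionary rather than the raw sensor streams.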
Abstract:
A system and method for manipulating images in a videoconferencing session provides users with a 3-D-like view of one or more presented sites, without the need for 3-D equipment. A plurality of cameras may record a room at a transmitting endpoint, and the receiving endpoint may select one of the received video streams based upon a point of view of a conferee at the receiving endpoint. The conferee at the receiving endpoint will thus experience a 3-D-like view of the presented site.
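The stream-selection step at the receiving endpoint can be sketched as choosing, among the received camera streams, the one whose capture angle best matches the viewing conferee's point of view. The function name and the angle representation are assumptions for illustration.

```python
def select_stream(camera_angles: list, viewer_angle: float) -> int:
    """Return the index of the received stream whose camera angle
    is closest to the receiving conferee's point of view."""
    return min(range(len(camera_angles)),
               key=lambda i: abs(camera_angles[i] - viewer_angle))
```

As the conferee's point of view shifts, repeated selection across the plurality of streams yields the 3-D-like effect without 3-D equipment.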
Abstract:
A communication device includes a magnetically aligning handset. In an embodiment, an alignment element in the handset magnetically couples with a corresponding alignment element in a cradle portion when the handset is in an on-hook position. The handset engages a hookswitch on the cradle portion of the communication device while in the on-hook position. When the hookswitch is engaged, the handset is not active for communication. The magnetic alignment elements may assist in guiding the handset into the on-hook position and may secure the handset against inadvertent disturbances to the off-hook position. Each alignment element may include a number of magnetic regions and non-magnetic regions, selected to align the handset in a particular orientation within the cradle.
Abstract:
Systems for videoconferencing are designed for settings in which people are seated around a videoconferencing system. The systems include a camera so the far site can see the local participants, and displays that show the far site. The displays are properly aligned with the cameras so that when people at the far site view the displayed images of the near site, they appear to have eye contact with the near site. Obtaining the alignments of the cameras and displays that provide this apparent eye contact requires meeting a series of constraints relating to the sizes and angles of the components and the locations of the participants.
Abstract:
A videoconferencing endpoint includes at least one processor, a number of microphones, and at least one camera. The endpoint can receive audio information and visual motion information during a teleconferencing session. The audio information includes one or more angles, with respect to the microphones, from a location within the teleconferencing session. The audio information is evaluated automatically to determine at least one candidate angle corresponding to a possible location of an active talker. The candidate angle can be analyzed further with respect to the motion information to determine whether it correctly corresponds to a person who is speaking during the teleconferencing session.
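The cross-check of audio candidate angles against visual motion can be sketched as follows. The function name and the angular tolerance are hypothetical; the abstract does not specify how the two modalities are compared.

```python
def confirm_active_talkers(candidate_angles: list,
                           motion_angles: list,
                           tolerance: float = 5.0) -> list:
    """Keep only audio-derived candidate angles that lie within
    `tolerance` degrees of some detected visual motion."""
    return [a for a in candidate_angles
            if any(abs(a - m) <= tolerance for m in motion_angles)]
```

A candidate angle with no nearby motion (e.g., a reflection or a loudspeaker) is discarded rather than steering the camera.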
Abstract:
Various embodiments for implementing a multimedia conferencing session utilizing a software-defined networking (SDN) architecture are described. Various embodiments include an SDN media controller (SDNMC) that initially receives a request to establish a multimedia conferencing session between a plurality of endpoints. Based on the request, the SDNMC allocates at least one virtual media address for the multimedia conferencing session and creates a stream table based on the at least one virtual media address. After processing the request, the SDNMC transmits one or more SDN commands, which include the stream table, to the SDN controller. The SDN controller receives the SDN commands at a northbound interface and sends one or more SDN instructions to one or more SDN devices at a southbound interface. The SDN devices update their routing information in order to relay media traffic corresponding to the virtual media address directly between the endpoints.
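The stream table the SDNMC builds can be sketched as match/action entries: media that an endpoint sends to the virtual media address is relayed to each of the other endpoints. The entry format and function name here are assumptions for illustration, loosely modeled on flow-table conventions.

```python
def build_stream_table(virtual_addr: str, endpoints: list) -> list:
    """Build match/action entries relaying traffic addressed to the
    virtual media address from each endpoint to every other endpoint."""
    table = []
    for src in endpoints:
        for dst in endpoints:
            if src != dst:
                table.append({
                    "match": {"src": src, "dst": virtual_addr},
                    "action": {"forward_to": dst},
                })
    return table
```

With such entries installed on the SDN devices, media flows endpoint-to-endpoint without transiting a central media mixer.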
Abstract:
Dynamically adapting a continuous presence (CP) layout in a videoconference enhances a videoconferencing experience by providing optimum visibility to regions of interest within the CP layout and ignoring regions of no interest. Based on the CP layout, a CP video image can be built, in which a conferee at a receiving endpoint can observe, simultaneously, several other participants' sites in the conference. For example, more screen space within the CP layout is devoted to presenting the participants in the conference and little or no screen space is used to present an empty seat, an empty room, or an unused portion of a room. Aspect ratios of segments of the CP layout (e.g., landscape vs. portrait) can be adjusted to optimally present the regions of interest. The CP layout can be adjusted as regions of interest change depending on the dynamics of the video conference.
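The layout-adaptation step can be sketched as allocating segments only to regions of interest and matching each segment's orientation to its region's aspect ratio. The region representation and function name are hypothetical; the abstract does not prescribe a data model.

```python
def plan_cp_layout(regions: list) -> list:
    """regions: list of (width, height, has_person) tuples describing
    detected regions of interest in each site's video.
    Returns one layout segment per region that contains a person."""
    segments = []
    for width, height, has_person in regions:
        if not has_person:
            continue  # empty seats/rooms receive no screen space
        orientation = "landscape" if width >= height else "portrait"
        segments.append({"size": (width, height), "orientation": orientation})
    return segments
```

Re-running such a planner as regions of interest change lets the CP layout track the dynamics of the conference.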
Abstract:
Disclosed herein are methods, systems, and techniques for creating media conferencing layouts that are intelligent (i.e., based on some underlying principle to enhance user-perceived conference quality) and persistent (i.e., consistent within a call and from one call to the next).