Abstract:
Some embodiments provide a novel method for in-conference adjustment of encoded video pictures captured by a mobile device having at least first and second cameras. The method may involve real-time modifications of composite video displays that are generated by the mobile devices involved in such a conference. Specifically, in some embodiments, the mobile devices generate composite displays that simultaneously display multiple videos captured by multiple cameras of one or more devices. In some cases, the composite displays place the videos in adjacent display areas (e.g., in adjacent windows). In other cases, the composite display is a picture-in-picture (PIP) display that includes at least two display areas showing two different videos, where one display area is a background main display area and the other is a foreground inset display area that overlaps the background main display area.
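As a rough illustration of the PIP composition the abstract describes, the following is a minimal Python sketch that overlays a downscaled foreground inset onto a background main frame. The frame sizes, the inset scale, the corner placement, and the strided downscaling are all assumptions made for illustration; they are not taken from the disclosure.

```python
# Illustrative sketch only: a minimal picture-in-picture (PIP) compositor.
# Frame shapes, the inset scale factor, and the corner margin are hypothetical.
import numpy as np

def composite_pip(background: np.ndarray, inset: np.ndarray,
                  scale: float = 0.25, margin: int = 16) -> np.ndarray:
    """Overlay a downscaled foreground inset onto the background main display area."""
    bg_h, bg_w, _ = background.shape
    # Downscale the inset by simple striding (a real device would filter/resample).
    step = max(1, int(round(1.0 / scale)))
    small = inset[::step, ::step]
    in_h, in_w, _ = small.shape
    # Anchor the inset in the bottom-right corner of the main display area.
    y0, x0 = bg_h - in_h - margin, bg_w - in_w - margin
    out = background.copy()
    out[y0:y0 + in_h, x0:x0 + in_w] = small
    return out

# Example: a 720p main-camera video with a 480p second-camera video as the inset.
main_cam = np.zeros((720, 1280, 3), dtype=np.uint8)
second_cam = np.full((480, 640, 3), 255, dtype=np.uint8)
pip_frame = composite_pip(main_cam, second_cam)
```

A real device would resample and blend the inset rather than stride-sampling it, and the in-conference adjustment the abstract mentions would let the layout (e.g., the inset position) change while the conference is in progress.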
Abstract:
Some embodiments provide an architecture for establishing multi-participant video conferences. This architecture has a central distributor that receives video images from two or more participants. From the received images, the central distributor generates composite images that the central distributor transmits back to the participants. Each composite image includes a set of sub-images, where each sub-image belongs to one participant. In some embodiments, the central distributor saves network bandwidth by removing each particular participant's image from the composite image that the central distributor sends to that particular participant. In some embodiments, the images received from each participant are arranged in the composite image in a non-interleaved manner. For instance, in some embodiments, the composite image includes at most one sub-image for each participant, and no two sub-images are interleaved.
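The bandwidth-saving step described above can be sketched in a few lines: for each recipient, the distributor assembles a composite from the other participants' whole (non-interleaved) sub-images and leaves the recipient's own image out. The participant identifiers, the byte-string frame payloads, and the list-based "composite" below are hypothetical placeholders, not the patented encoding.

```python
# Illustrative sketch only, not the patented implementation: a central
# distributor builds, for each recipient, a composite that omits that
# recipient's own sub-image.
from typing import Dict, List

def build_composites(frames: Dict[str, bytes]) -> Dict[str, List[bytes]]:
    """Return, per recipient, a non-interleaved list of the other participants' sub-images."""
    composites = {}
    for recipient in frames:
        # Keep each remaining participant's sub-image whole (non-interleaved),
        # and drop the recipient's own image to save downstream bandwidth.
        composites[recipient] = [img for sender, img in sorted(frames.items())
                                 if sender != recipient]
    return composites

latest_frames = {"alice": b"A-frame", "bob": b"B-frame", "carol": b"C-frame"}
per_recipient = build_composites(latest_frames)
# per_recipient["alice"] contains only bob's and carol's sub-images.
```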
Abstract:
Some embodiments provide an architecture for establishing a multi-participant conference. This architecture has one participant's computer act as the central content distributor for the conference. The central distributor receives data (e.g., video and/or audio streams) from the computer of each other participant, and distributes the received data to the computers of all participants. In some embodiments, the central distributor receives A/V data from the computers of the other participants. From such received data, the central distributor of some embodiments generates composite data (e.g., composite image data and/or composite audio data) that the central distributor distributes back to the participants.
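As one hedged illustration of the composite-data generation mentioned above, the sketch below mixes a per-recipient composite audio buffer on the distributing computer by summing the other participants' samples. The 16-bit PCM assumption, the clamping, and the participant names are chosen for illustration only.

```python
# Minimal sketch, under assumptions not stated in the abstract: the central
# distributor mixes one composite audio buffer per recipient by summing the
# other participants' samples. The 16-bit mono sample format is hypothetical.
from typing import Dict, List

def mix_composite_audio(buffers: Dict[str, List[int]]) -> Dict[str, List[int]]:
    """For each recipient, mix every other participant's samples into one buffer."""
    length = min(len(b) for b in buffers.values())
    mixed = {}
    for recipient in buffers:
        composite = [0] * length
        for sender, samples in buffers.items():
            if sender == recipient:
                continue  # a participant does not need to hear itself
            for i in range(length):
                composite[i] += samples[i]
        # Clamp to the 16-bit signed range to avoid wrap-around on overflow.
        mixed[recipient] = [max(-32768, min(32767, s)) for s in composite]
    return mixed

buffers = {"host": [100, -50, 0], "guest1": [10, 20, 30], "guest2": [-5, 5, -5]}
per_recipient_audio = mix_composite_audio(buffers)
```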
Abstract:
Techniques for encoding data based at least in part upon an awareness of the decoding complexity of the encoded data and the ability of a target decoder to decode the encoded data are disclosed. In some embodiments, a set of data is encoded based at least in part upon a state of a target decoder to which the encoded set of data is to be provided. In some embodiments, a set of data is encoded based at least in part upon the states of multiple decoders to which the encoded set of data is to be provided.
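One way to picture decoder-aware encoding is a small parameter-selection routine that trades decoding complexity for quality based on the target decoder's reported headroom. The DecoderState fields, the thresholds, and the parameter names below are hypothetical; they are not the disclosed technique's actual interface. For the multi-decoder case mentioned in the abstract, the same routine could be driven by the most constrained of the reported states.

```python
# Illustrative sketch only: choosing encoding parameters from a reported
# decoder state. All fields, thresholds, and parameter names are assumptions.
from dataclasses import dataclass

@dataclass
class DecoderState:
    cpu_load: float        # fraction of the decoder's CPU budget in use (0.0-1.0)
    dropped_frames: int    # frames the decoder failed to decode in time

def pick_encoding_params(state: DecoderState) -> dict:
    """Trade decoding complexity for quality based on the target decoder's headroom."""
    if state.cpu_load > 0.9 or state.dropped_frames > 5:
        # Decoder is struggling: emit a cheaper-to-decode stream.
        return {"profile": "baseline", "b_frames": 0, "deblocking": False}
    if state.cpu_load > 0.6:
        return {"profile": "main", "b_frames": 1, "deblocking": True}
    # Plenty of headroom: allow higher-complexity tools for better quality.
    return {"profile": "high", "b_frames": 3, "deblocking": True}

params = pick_encoding_params(DecoderState(cpu_load=0.95, dropped_frames=8))
# -> {'profile': 'baseline', 'b_frames': 0, 'deblocking': False}
```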