Abstract:
Video coding and decoding techniques are provided in which entropy coding states are stored for regions of video frames of a sequence of video frames, upon completion of coding of those regions. Entropy coding initialization states for regions of a current video frame are derived based on entropy coding states of corresponding regions of a prior video frame in the sequence of video frames. This process may be performed at both a video encoder and a video decoder, though some signaling may be sent from the encoder to direct the decoder in certain operations.
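The per-region state inheritance described above can be sketched as follows. This is a minimal illustration, not the claimed method: the region identifiers, the contents of the saved state, and the fallback to a codec-default state are all assumptions.

```python
# Hypothetical sketch: entropy-coding state is saved per region when that
# region finishes coding, and the corresponding region of the next frame is
# initialized from it. State contents (e.g. context tables) are assumptions.

class EntropyStateStore:
    """Stores the entropy-coding state captured when each region finishes coding."""

    def __init__(self):
        self._states = {}  # (frame_index, region_id) -> saved state snapshot

    def save(self, frame_index, region_id, state):
        # Snapshot the state at completion of coding for this region.
        self._states[(frame_index, region_id)] = dict(state)

    def derive_init_state(self, frame_index, region_id, default_state):
        # Initialize a region of the current frame from the corresponding
        # region of the prior frame, falling back to the codec default.
        prior = self._states.get((frame_index - 1, region_id))
        return dict(prior) if prior is not None else dict(default_state)
```

Because the same rule runs at both encoder and decoder, both sides derive identical initialization states without transmitting the states themselves.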
Abstract:
In one embodiment, a video encoder generates an encoded bitstream representing a sequence of video frames including a keyframe. The encoder generates information for use by a decoder that receives the encoded bitstream to enable the decoder to generate display frames from a pre-keyframe video frame that is prior to the keyframe in the sequence of video frames. The encoded bitstream is sent to the decoder. In another embodiment, a video decoder receives from an encoder an encoded bitstream representing a sequence of video frames including a keyframe. The keyframe includes information to enable the decoder to generate display frames from a pre-keyframe video frame that was received prior to the keyframe in the sequence of video frames. The decoder generates display frames using the pre-keyframe video frame, information included with the keyframe and information included with an encoder-determined number of decoded frames subsequent to the keyframe.
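One way to picture the decoder behavior above is a transition over an encoder-signaled number of frames, during which display frames draw on both the pre-keyframe frame and frames decoded after the keyframe. The cross-fade weighting below is purely an illustrative assumption; the abstract does not specify how the sources are combined.

```python
# Hypothetical sketch: for an encoder-determined number n_transition of frames
# after the keyframe, display frames are produced from the last pre-keyframe
# frame plus the newly decoded frames. The linear blend is an assumption.

def display_frames_after_keyframe(pre_keyframe, decoded_frames, n_transition):
    """pre_keyframe: flat list of samples; decoded_frames: list of such lists."""
    out = []
    for i, frame in enumerate(decoded_frames):
        if i < n_transition:
            w = (i + 1) / n_transition  # weight ramps toward the new content
            out.append([(1 - w) * p + w * f for p, f in zip(pre_keyframe, frame)])
        else:
            out.append(list(frame))  # transition finished; display as decoded
    return out
```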
Abstract:
Techniques for video conferencing include receiving a stream of video slices from a participant, designating the video slices as a primary sub-picture of a frame of video, encoding, with a first encoder, a first secondary sub-picture of the frame of video to obtain an encoded first secondary sub-picture of a frame of video, encoding, with a second encoder, a second secondary sub-picture of the frame of video to obtain an encoded second secondary sub-picture of a frame of video, combining the primary sub-picture with the encoded first secondary sub-picture to obtain a first video stream, combining the primary sub-picture with the encoded second secondary sub-picture to obtain a second video stream, and transmitting the first and second video streams to respective recipients.
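The combining step above amounts to reusing one shared set of primary slices across several per-recipient streams. A minimal sketch, assuming slices are represented as byte strings and that concatenating slice lists stands in for bitstream-level combination:

```python
# Hypothetical sketch: the primary sub-picture slices are shared, and each
# recipient's stream pairs them with that recipient's separately encoded
# secondary sub-picture slices. The representation is an assumption.

def build_streams(primary_slices, secondary_encodings):
    """primary_slices: list of encoded slices shared by all recipients.
    secondary_encodings: dict of recipient -> list of encoded slices."""
    return {
        recipient: primary_slices + secondary_slices
        for recipient, secondary_slices in secondary_encodings.items()
    }
```

The design point is that the primary sub-picture is encoded once, while only the smaller secondary sub-pictures need a per-recipient encoder.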
Abstract:
Presented herein are downstream recovery (error correction) techniques for an aggregated/consolidated media stream. In one example, a consolidated media stream that includes source media packets from one or more sources is sent to one or more downstream receiving devices. Based on the source media packets, one or more self-describing recovery packets for downstream error correction of the source media packets are generated. The self-describing recovery packets include a mapping to the source media packets used to generate the self-describing recovery packets, thereby avoiding the addition of error correction information in the consolidated media stream. The one or more self-describing recovery packets are sent to each of the downstream receiving devices as a separate stream.
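A self-describing recovery packet of the kind described above can be sketched as an XOR parity computed over a window of source packets, carrying within itself the sequence numbers it protects. The packet layout and single-parity scheme are illustrative assumptions; the point is that the consolidated source stream needs no added error-correction fields.

```python
# Hypothetical sketch: a recovery packet embeds its own mapping ("protects")
# to the source packets it covers, so receivers can apply it without any
# FEC metadata in the source stream. Equal-length payloads are assumed.

def make_recovery_packet(source_packets):
    """source_packets: dict of sequence number -> payload bytes."""
    seqs = sorted(source_packets)
    parity = bytes(len(next(iter(source_packets.values()))))
    for seq in seqs:
        parity = bytes(a ^ b for a, b in zip(parity, source_packets[seq]))
    return {"protects": seqs, "parity": parity}

def recover(recovery_packet, received):
    """Recover the single missing packet in the protected window, if any."""
    missing = [s for s in recovery_packet["protects"] if s not in received]
    if len(missing) != 1:
        return None  # nothing lost, or too many losses for one parity packet
    parity = recovery_packet["parity"]
    for seq in recovery_packet["protects"]:
        if seq in received:
            parity = bytes(a ^ b for a, b in zip(parity, received[seq]))
    return missing[0], parity
```

Since the recovery packets travel as a separate stream, receivers that do not need error correction can simply ignore it.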
Abstract:
Techniques for video conferencing include receiving bandwidth and/or codec characteristics of a plurality of video conference participants, determining whether or not any of the bandwidth and/or codec characteristics are sufficiently different from others of the bandwidth and/or codec characteristics to warrant different treatment, when one or more of the bandwidth and/or codec characteristics are sufficiently different, grouping video conference participants into at least a first group and a second group according to video conference participants having same or similar bandwidth and/or codec characteristics, and establishing a video conference with at least first and second subconferences to service the first and second groups, respectively, wherein each of the video conference participants receives frames of video in which a first portion of the frames is encoded by a shared encoder, and wherein a second portion of the frames is encoded by different encoders respectively designated for each of the video conference participants.
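The grouping step above can be sketched as clustering participants whose codec matches and whose bandwidth falls within some tolerance. The tolerance value and the grouping key are illustrative assumptions, not specifics from the abstract.

```python
# Hypothetical sketch: participants sharing a codec and similar bandwidth are
# placed in the same group (subconference). Tolerance value is an assumption.

def group_participants(participants, bandwidth_tolerance_kbps=500):
    """participants: list of (name, bandwidth_kbps, codec). Returns list of groups."""
    groups = []
    for name, bw, codec in sorted(participants, key=lambda p: (p[2], p[1])):
        for group in groups:
            _, ref_bw, ref_codec = group[0]
            if codec == ref_codec and abs(bw - ref_bw) <= bandwidth_tolerance_kbps:
                group.append((name, bw, codec))
                break
        else:
            groups.append([(name, bw, codec)])  # characteristics differ: new group
    return groups
```

Each resulting group would then be served by its own subconference, while the shared-encoder portion of each frame is produced once for all groups.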
Abstract:
Presented herein are techniques for creating video for participants in a video conference. A designated primary video stream is decoded and the resulting video composed in accordance with a primary sub-picture portion of a frame. Other video streams are designated as secondary video streams output by secondary entities, and are decoded and composed in accordance with a secondary sub-picture portion of the frame structured for the secondary entities. The composed primary video stream is encoded for display at each secondary entity, to obtain encoded slices of a primary video stream. The composed secondary video stream is encoded for display at one of the secondary entities, to obtain encoded slices of a secondary video stream. The encoded primary and secondary video streams are combined at the encoded slice level into a single video stream for transmission to, and decode and display at, the one of the secondary entities.
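The composition step above places decoded primary video into the primary sub-picture region of the frame and each secondary stream into its assigned secondary region. A minimal sketch, where frame geometry, the row-based representation, and the layout mapping are all assumptions:

```python
# Hypothetical sketch: compose a frame from a primary sub-picture and
# per-entity secondary sub-pictures, each placed at a layout-assigned
# starting row. Row lists stand in for decoded pixel data.

def compose_frame(primary_rows, secondary_rows_by_entity, layout):
    """layout maps a stream name ("primary" or an entity id) to its start row."""
    sources = [("primary", primary_rows)] + list(secondary_rows_by_entity.items())
    height = max(layout[name] + len(rows) for name, rows in sources)
    frame = [None] * height
    for name, rows in sources:
        for i, row in enumerate(rows):
            frame[layout[name] + i] = row
    return frame
```

The composed regions are then re-encoded as slices, so the primary and secondary portions can be combined at the slice level into one bitstream per receiving entity.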
Abstract:
Large source data packets having large packet sizes and small source data packets having small packet sizes that are smaller than the large packet sizes are received. The small source data packets and the large source data packets are sent to a receiving device without forward error correction (FEC). The small source data packets are aggregated into a container packet having a header configured to differentiate the container packet from the large source data packets and the small source data packets. The large source data packets and the container packet are encoded with forward error correction to produce FEC-encoded packets to enable forward error correction of the large source data packets and the container packet at the receiving device. The FEC-encoded packets are sent to the receiving device.
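The aggregation and encoding steps above can be sketched as packing the small packets into a single container packet with a distinguishing header, then computing FEC over the large packets and the container together. The header byte value, length-prefixed container layout, and single XOR parity are illustrative assumptions.

```python
# Hypothetical sketch: small packets are aggregated into a container whose
# header byte marks it as a container, then the large packets and the
# container are protected together by an XOR parity. Values are assumptions.

CONTAINER_MARKER = 0xC0  # assumed header byte distinguishing container packets

def make_container(small_packets):
    """Pack small packets as length-prefixed records behind a marker byte."""
    body = b"".join(len(p).to_bytes(2, "big") + p for p in small_packets)
    return bytes([CONTAINER_MARKER]) + body

def xor_fec(packets):
    """Single XOR parity over the packets to protect (padded to max length)."""
    size = max(len(p) for p in packets)
    parity = bytearray(size)
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)
```

Aggregating the small packets first means the FEC overhead is amortized over one container rather than paid per small packet.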