Abstract:
A first endpoint device has access to common video data including common video frames and encoded common video data having the common video frames encoded therein. The encoded common video data is downloaded to a second endpoint device. After, or during, the downloading of the encoded common video data, live video frames are played in a play order. The live video frames are encoded in the play order into encoded live video frames. To encode the live video frames, each live video frame is predicted based on a previous live video frame that has been encoded and a common video frame from the common video data that has been downloaded in the encoded common video data. The encoded live video frames, including indications of the previous live video frame and the common video frame used to encode each encoded live video frame, are transmitted to the second endpoint device.
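As an illustration of the dual-reference prediction described above, the following Python sketch encodes live frames in play order, predicting each from the previously encoded live frame and one downloaded common frame. The EncodedLiveFrame structure and the helpers choose_common, predict, and encode_residual are hypothetical placeholders for this example, not interfaces defined by the abstract or any particular codec.

```python
from dataclasses import dataclass

@dataclass
class EncodedLiveFrame:
    residual: bytes          # encoded prediction residual
    prev_live_index: int     # indication of the previous live frame used
    common_frame_index: int  # indication of the common frame used

def encode_live_frames(live_frames, common_frames, choose_common, predict,
                       encode_residual):
    """Encode live frames in play order, predicting each frame from the
    previously encoded live frame and one downloaded common video frame."""
    encoded = []
    prev_frame = None
    for i, frame in enumerate(live_frames):
        c_idx = choose_common(frame, common_frames)      # pick a common frame
        prediction = predict(prev_frame, common_frames[c_idx])
        encoded.append(EncodedLiveFrame(
            residual=encode_residual(frame, prediction),
            prev_live_index=i - 1,                       # -1 means no previous live frame
            common_frame_index=c_idx))
        prev_frame = frame                               # reference for the next frame
    return encoded
```

The returned records carry the two reference indications alongside each residual, matching the transmission described in the abstract.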
Abstract:
Video coding and decoding techniques are provided in which entropy coding states are stored for regions of video frames of a sequence of video frames, upon completion of coding of those regions. Entropy coding initialization states for regions of a current video frame are derived based on entropy coding states of corresponding regions of a prior video frame in the sequence of video frames. This process may be performed at a video encoder and a video decoder, though some signaling may be sent from the encoder to the decoder to direct the decoder in certain operations.
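A minimal sketch of the per-region entropy-state carry-over follows. The coder object and its set_state/get_state methods are assumed stand-ins for a CABAC-style engine; the abstract does not name a specific entropy coder.

```python
def code_frame_regions(frame_regions, prior_states, make_coder, code_region):
    """Code each region of the current frame, initializing its entropy coder
    from the stored state of the corresponding region of the prior frame,
    then store the resulting states for use by the next frame."""
    new_states = {}
    for region_id, region in frame_regions.items():
        coder = make_coder()                            # fresh entropy coder
        if region_id in prior_states:
            coder.set_state(prior_states[region_id])    # derived initialization state
        code_region(coder, region)
        new_states[region_id] = coder.get_state()       # stored upon completion of the region
    return new_states
```

The same routine can run at both encoder and decoder, so the derived initialization states stay synchronized without retransmitting them.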
Abstract:
A system and method packetizes data by fragmenting, with processing circuitry, a data structure into a plurality of data fragments, each data fragment being included in a separate packet, and inserting, with the processing circuitry, an offset indicator within each of the packets, each offset indicator indicating an amount of fragment data encapsulated within preceding packets. A system and method decodes packetized data that includes the offset indicator.
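The packetization can be illustrated with a short sketch. The 4-byte big-endian offset header used here is an assumption made for the example only, not a wire format defined by the abstract.

```python
import struct

def packetize(data: bytes, fragment_size: int):
    """Fragment a data structure into packets, prefixing each fragment with an
    offset indicator: the amount of fragment data carried by preceding packets."""
    packets = []
    offset = 0
    for start in range(0, len(data), fragment_size):
        fragment = data[start:start + fragment_size]
        packets.append(struct.pack("!I", offset) + fragment)
        offset += len(fragment)
    return packets

def depacketize(packets):
    """Reassemble the original data using the offset indicators, so packets may
    be processed in any arrival order."""
    fragments = {}
    for pkt in packets:
        (offset,) = struct.unpack("!I", pkt[:4])
        fragments[offset] = pkt[4:]
    return b"".join(fragments[o] for o in sorted(fragments))
```

For example, depacketize(reversed(packetize(b"hello world", 4))) still returns b"hello world", since each offset indicator places its fragment independently of packet order.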
Abstract:
Presented herein are techniques for a low-complexity process of generating an artificial frame that can be used for prediction. At least a first reference frame and a second reference frame of a video signal are obtained. A synthetic reference frame is generated from the first reference frame and the second reference frame. Reference blocks from each of the first reference frame and the second reference frame are combined to derive an interpolated block of the synthetic reference frame.
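One possible reading of the block combination step is a rounded average of co-located reference blocks, sketched below for 8-bit samples. The averaging rule and 16x16 block size are assumptions for illustration; the abstract does not specify the interpolation method.

```python
import numpy as np

def synthesize_reference_frame(ref0: np.ndarray, ref1: np.ndarray,
                               block_size: int = 16) -> np.ndarray:
    """Generate a synthetic reference frame by combining co-located blocks of
    two reference frames into interpolated blocks (rounded average)."""
    assert ref0.shape == ref1.shape
    synth = np.empty_like(ref0)
    height, width = ref0.shape[:2]
    for y in range(0, height, block_size):
        for x in range(0, width, block_size):
            b0 = ref0[y:y + block_size, x:x + block_size].astype(np.uint16)
            b1 = ref1[y:y + block_size, x:x + block_size].astype(np.uint16)
            blended = (b0 + b1 + 1) >> 1                 # rounded average of the two blocks
            synth[y:y + block_size, x:x + block_size] = blended.astype(synth.dtype)
    return synth
```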
Abstract:
In one embodiment, a device de-multiplexes a stream of multimedia data into first and second media streams. The device determines that a portion of the first media stream is missing for co-presentation with a corresponding portion of the second media stream due to a present latency condition. The device associates the corresponding portion of the second media stream with a previously received portion of the second media stream. The device generates media data for the first media stream for co-presentation with the corresponding portion of the second media stream in lieu of the missing portion of the first media stream, based on a previously received portion of the first media stream associated with the previously received portion of the second media stream. The device provides the generated media data and the corresponding portion of the second media stream for co-presentation by one or more user interfaces.
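A hedged sketch of this concealment flow follows; demux, find_associated, synthesize, and render are illustrative callables, not interfaces defined by the abstract.

```python
def co_present(demux, find_associated, synthesize, render):
    """Co-present two de-multiplexed media streams, concealing late portions of
    the first stream with media generated from earlier associated portions."""
    history = []                                # (second-stream, first-stream) associations
    for first_portion, second_portion in demux:
        if first_portion is None:               # missing due to the latency condition
            prev_second, prev_first = find_associated(second_portion, history)
            first_portion = synthesize(prev_first, prev_second, second_portion)
        else:
            history.append((second_portion, first_portion))
        render(first_portion, second_portion)   # co-presentation on the user interfaces
```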
Abstract:
A method including: dividing a first video frame into a predetermined plurality of regions; assigning a quantization parameter to each of the predetermined plurality of regions in accordance with a first predetermined pattern of quantization parameters, the quantization parameters not being all the same; dividing video frames, subsequent to the first video frame, into the predetermined plurality of regions; and assigning a quantization parameter to each of the predetermined plurality of regions in the video frames subsequent to the first video frame, in accordance with another predetermined pattern, different from the first predetermined pattern.
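A sketch of region-wise QP assignment with a rotating pattern is given below. The cyclic rotation and the base pattern (22, 26, 30, 34) are assumptions for illustration; the abstract only requires that subsequent frames use a different predetermined pattern than the first frame.

```python
def rotated_qp_pattern(frame_index: int, base_pattern):
    """Return the QP pattern for a frame by cyclically rotating the pattern
    used for the first frame, so subsequent frames use a different pattern."""
    shift = frame_index % len(base_pattern)
    return base_pattern[shift:] + base_pattern[:shift]

def assign_region_qps(region_ids, frame_index, base_pattern=(22, 26, 30, 34)):
    """Assign a quantization parameter to each predetermined region of a frame."""
    pattern = rotated_qp_pattern(frame_index, list(base_pattern))
    return {region_id: pattern[i % len(pattern)]
            for i, region_id in enumerate(region_ids)}
```

For example, assign_region_qps(["R0", "R1", "R2", "R3"], frame_index=1) yields {"R0": 26, "R1": 30, "R2": 34, "R3": 22}, a different assignment of the same non-uniform parameters than the first frame receives.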