Abstract:
An encoding system may include a video source that captures video images, a video coder, and a controller to manage operation of the system. The video coder may encode the video images into encoded video data using a plurality of subgroup parameters corresponding to a plurality of subgroups of pixels within a group. The controller may set the subgroup parameters for at least one of the subgroups of pixels in the video coder based upon at least one parameter corresponding to the group. A decoding system may decode the video data based upon the motion prediction parameters.
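Below is a minimal Python sketch of the described control flow, assuming the group-level parameter is a base quantization value and each subgroup's parameter is derived from it; the class names, the offset heuristic, and the stand-in "encoding" step are illustrative assumptions, not the claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class GroupParams:
    base_qp: int  # assumed group-level quantization parameter

def set_subgroup_params(group: GroupParams, num_subgroups: int) -> list[int]:
    """Derive one parameter per subgroup from the group-level base QP.

    The heuristic (offset by subgroup index, clamped to a valid range) is an
    assumption; a real controller would use content statistics.
    """
    return [max(0, min(51, group.base_qp + i)) for i in range(num_subgroups)]

def encode_group(pixels: list[list[int]], group: GroupParams) -> list[tuple[int, list[int]]]:
    """Encode each subgroup of pixels with its own derived parameter."""
    qps = set_subgroup_params(group, len(pixels))
    encoded = []
    for qp, subgroup in zip(qps, pixels):
        # Stand-in for real quantization/entropy coding: scale samples by the QP step.
        encoded.append((qp, [p // (qp + 1) for p in subgroup]))
    return encoded

if __name__ == "__main__":
    group = GroupParams(base_qp=22)
    print(encode_group([[120, 118, 119], [60, 61, 62]], group))
```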
Abstract:
A method of managing resources on a terminal includes determining a number of downloaded video streams active at the terminal, prioritizing the active video streams, assigning a decoding quality level to each active video stream based on a priority assignment for each active video stream, and apportioning reception bandwidth to each active video stream based on an assigned quality level of each active video stream.
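A rough Python sketch of the four steps follows, under the assumption that priority is a numeric field, that the assigned quality level is simply the priority rank, and that bandwidth is split in proportion to inverse rank; none of these mappings come from the abstract itself.

```python
def apportion_bandwidth(streams, total_kbps):
    """streams: list of dicts with 'id' and 'priority' (higher = more important)."""
    # 1. Determine the number of active downloaded video streams.
    n = len(streams)
    if n == 0:
        return {}
    # 2. Prioritize the active streams (highest priority first).
    ranked = sorted(streams, key=lambda s: s["priority"], reverse=True)
    # 3. Assign a decoding quality level from the rank (0 = best).
    quality_levels = {s["id"]: rank for rank, s in enumerate(ranked)}
    # 4. Apportion reception bandwidth, weighting inversely with the assigned level.
    weights = {sid: 1.0 / (level + 1) for sid, level in quality_levels.items()}
    total_weight = sum(weights.values())
    return {sid: total_kbps * w / total_weight for sid, w in weights.items()}

if __name__ == "__main__":
    active = [{"id": "main", "priority": 10}, {"id": "thumbnail", "priority": 2}]
    print(apportion_bandwidth(active, total_kbps=6000))
```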
Abstract:
Methods for organizing media data by automatically segmenting media data into hierarchical layers of scenes are described. The media data may include metadata and content having still image, video, or audio data. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data, may be based on the metadata and/or image data. Classification information, such as the type or quality of a scene, may be determined based on the metadata and/or image data. The classification and prioritization information may themselves be metadata and may be used to organize the media data.
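The following Python sketch illustrates one way metadata could drive prioritization and classification; the specific metadata fields, scoring weights, and scene classes are assumptions made for the example only.

```python
def score_segment(meta: dict) -> float:
    """Combine content-based and non-content-based metadata into a priority score."""
    score = 0.0
    score += 2.0 * meta.get("faces", 0)        # content-based: face detections
    score += meta.get("sharpness", 0.0)        # content-based: focus quality
    score -= meta.get("exposure_error", 0.0)   # non-content-based: exposure
    return score

def classify_segment(meta: dict) -> str:
    """Assign a coarse scene type from motion and face metadata."""
    if meta.get("faces", 0) > 0:
        return "people"
    if meta.get("motion", 0.0) > 0.5:
        return "action"
    return "scenery"

def organize(segments: list[dict]) -> list[dict]:
    """Attach score/class information and order segments by priority."""
    for seg in segments:
        seg["score"] = score_segment(seg)
        seg["class"] = classify_segment(seg)
    return sorted(segments, key=lambda s: s["score"], reverse=True)
```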
Abstract:
Chroma deblock filtering of reconstructed video samples may be performed to remove blockiness artifacts and reduce color artifacts without over-smoothing. In a first method, chroma deblocking may be performed for boundary samples of a smallest transform size, regardless of partitions and coding modes. In a second method, chroma deblocking may be performed when a boundary strength is greater than 0. In a third method, chroma deblocking may be performed regardless of boundary strengths. In a fourth method, the type of chroma deblocking to be performed may be signaled in a slice header by a flag. Furthermore, luma deblock filtering techniques may be applied to chroma deblock filtering.
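The four decision variants can be summarized as a single predicate, sketched below in Python; the transform-size constant and the flag semantics are assumptions, and this is not the normative deblocking decision of any particular codec.

```python
SMALLEST_TRANSFORM_SIZE = 4  # assumed smallest transform size in samples

def should_chroma_deblock(method: int, boundary_pos: int,
                          boundary_strength: int, slice_flag: bool) -> bool:
    if method == 1:
        # Filter boundaries aligned to the smallest transform grid,
        # regardless of partitioning or coding mode.
        return boundary_pos % SMALLEST_TRANSFORM_SIZE == 0
    if method == 2:
        # Filter whenever the boundary strength is greater than 0.
        return boundary_strength > 0
    if method == 3:
        # Filter regardless of boundary strength.
        return True
    if method == 4:
        # The type of filtering applied is signaled per slice by a flag.
        return slice_flag
    return False
```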
Abstract:
The invention is directed to an efficient way of encoding and decoding video. Embodiments include identifying different coding units that share a similar characteristic. The characteristic can be, for example: quantization values, modes, block sizes, color space, motion vectors, depth, facial and non-facial regions, and filter values. An encoder may then group the units together as a coherence group. An encoder may similarly create a table or other data structure of the coding units. An encoder may then extract the commonly repeating characteristic or attribute from the coding units. The encoder may transmit the coherence groups along with the data structure, and other coding units which were not part of a coherence group. The decoder may receive the data, utilize the shared characteristic by storing it locally in a cache for faster repeated decoding, and decode the coherence group together.
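A hedged Python sketch of forming coherence groups on the encoder side follows; grouping by quantization value and the field names are assumptions chosen to keep the example concrete.

```python
from collections import defaultdict

def build_coherence_groups(coding_units: list[dict], key: str = "qp"):
    """Group coding units that share an attribute, extract the shared value
    once into a table, and keep ungrouped units separate."""
    groups = defaultdict(list)
    for cu in coding_units:
        groups[cu[key]].append(cu)
    table = []        # shared attribute per coherence group (sent once)
    coherent = []     # units whose shared attribute is factored out
    singletons = []   # units not part of any coherence group
    for value, members in groups.items():
        if len(members) > 1:
            table.append(value)
            group_id = len(table) - 1
            for cu in members:
                stripped = {k: v for k, v in cu.items() if k != key}
                stripped["group"] = group_id
                coherent.append(stripped)
        else:
            singletons.extend(members)
    return table, coherent, singletons
```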
Abstract:
In an example method, a decoder accesses a bitstream representing video content, and parses one or more flexible coefficient position (FCP) syntax elements from the bitstream, where the one or more FCP syntax elements indicate one or more index values. The decoder further determines side information representing one or more characteristics of an encoded portion of the video content. The decoder interprets the one or more FCP syntax elements based on the side information, including determining a coefficient position with respect to the encoded portion of the video content based on the one or more index values and the side information. The decoder decodes the encoded portion of the video content according to the coefficient position.
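The sketch below assumes, for illustration only, that the side information supplies block dimensions and a scan order from which an FCP index value is mapped to a two-dimensional coefficient position; the names and mappings are not taken from the abstract.

```python
def build_scan(block_w: int, block_h: int, scan_type: str):
    """Return a list mapping index -> (x, y) coefficient position."""
    if scan_type == "horizontal":
        return [(x, y) for y in range(block_h) for x in range(block_w)]
    if scan_type == "vertical":
        return [(x, y) for x in range(block_w) for y in range(block_h)]
    raise ValueError(f"unsupported scan: {scan_type}")

def interpret_fcp(index: int, side_info: dict):
    """side_info carries block dimensions and a scan type derived elsewhere."""
    scan = build_scan(side_info["width"], side_info["height"], side_info["scan"])
    return scan[index]

if __name__ == "__main__":
    # Index 5 in a 4x4 block with a horizontal scan maps to position (1, 1).
    print(interpret_fcp(5, {"width": 4, "height": 4, "scan": "horizontal"}))
```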
Abstract:
In an example method, a decoder obtains a data stream representing video content. The video content is partitioned into one or more logical units, and each of the logical units is partitioned into one or more respective logical sub-units. The decoder determines that the data stream includes first data indicating that a first logical unit has been encoded according to a flexible skip coding scheme. In response, the decoder determines a first set of decoding parameters based on the first data, and decodes each of the logical sub-units of the first logical unit according to the first set of decoding parameters.
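A minimal Python sketch of the decoding control flow follows, assuming the "first data" amounts to a per-unit flag plus an index into a parameter table; the field names and the table are illustrative assumptions.

```python
def decode_unit(unit: dict, param_table: list[dict]):
    """Decode all sub-units of a logical unit, sharing one parameter set
    when the unit is marked as coded with the flexible skip scheme."""
    if unit.get("flexible_skip", False):
        params = param_table[unit["param_index"]]  # one set for every sub-unit
        return [decode_sub_unit(sub, params) for sub in unit["sub_units"]]
    # Otherwise each sub-unit carries its own parameters.
    return [decode_sub_unit(sub, sub["params"]) for sub in unit["sub_units"]]

def decode_sub_unit(sub: dict, params: dict):
    # Stand-in for real reconstruction: record which parameters were applied.
    return {"id": sub["id"], "decoded_with": params}
```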
Abstract:
An encoder or decoder can perform enhanced motion vector prediction by receiving an input block of data for encoding or decoding and accessing stored motion information for at least one other block of data. Based on the stored motion information, the encoder or decoder can generate a list of one or more motion vector predictor candidates for the input block in accordance with an adaptive list construction order. The encoder or decoder can predict a motion vector for the input block based on at least one of the one or more motion vector predictor candidates.
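The Python sketch below illustrates candidate-list construction with one possible adaptive order (predictors that have been selected more often come first); the ordering rule and data layout are assumptions, not the described encoder's or decoder's actual behavior.

```python
def build_mvp_list(neighbors: list[dict], usage_counts: dict, max_candidates: int = 2):
    """neighbors: blocks with stored 'mv' tuples; usage_counts: mv -> selection count."""
    # Adaptive construction order: more frequently selected predictors first.
    ordered = sorted(neighbors, key=lambda n: usage_counts.get(n["mv"], 0), reverse=True)
    candidates = []
    for n in ordered:
        if n["mv"] not in candidates:
            candidates.append(n["mv"])
        if len(candidates) == max_candidates:
            break
    return candidates

def predict_mv(candidates: list, actual_mv):
    """Pick the candidate closest to the actual motion vector (encoder side)."""
    return min(candidates, key=lambda c: abs(c[0] - actual_mv[0]) + abs(c[1] - actual_mv[1]))
```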