Abstract:
In an example, a method of decoding video data may include receiving a first block of video data. The first block of video data may be a sub-block of a prediction unit. The method may include receiving one or more blocks of video data that neighbor the first block of video data. The method may include determining motion information of at least one of the one or more blocks of video data that neighbor the first block of video data. The method may include decoding, using overlapped block motion compensation, the first block of video data based at least in part on the motion information of the at least one of the one or more blocks that neighbor the first block of video data.
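The core of overlapped block motion compensation (OBMC) as described above is blending a sub-block's own motion-compensated prediction with a prediction derived from a neighboring block's motion information. A minimal sketch, assuming illustrative names (`obmc_blend`) and a simple per-row weight that would typically decay with distance from the shared edge:

```python
def obmc_blend(own_pred, neighbor_pred, weights):
    """Blend a sub-block prediction with a prediction derived from a
    neighboring block's motion information, row by row.

    own_pred      : list of rows predicted with the sub-block's own motion
    neighbor_pred : list of rows predicted with the neighbor's motion
    weights       : per-row weight (0..1) given to the neighbor prediction,
                    typically largest at the shared block edge
    """
    blended = []
    for w, own_row, nb_row in zip(weights, own_pred, neighbor_pred):
        blended.append([round((1 - w) * o + w * n)
                        for o, n in zip(own_row, nb_row)])
    return blended
```

The weights and blending rule here are placeholders; the abstract does not specify them.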
Abstract:
An apparatus for coding video information includes a memory unit configured to store video information associated with a reference block; and a processor in communication with the memory unit, wherein the processor is configured to determine a value of a current video unit associated with the reference block based at least in part on a classification of the reference block and a scan order selected by the processor based upon the classification. The scan order indicates an order in which values within the reference block are processed to at least partially determine the value of the current video unit.
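The classification-driven scan order described above can be sketched as a lookup from classification to scan pattern, followed by traversing the reference block in that pattern. The classifications, scan names, and zigzag fallback below are all assumptions for illustration:

```python
def select_scan_order(classification):
    # Hypothetical mapping from a reference-block classification to a
    # scan order; the actual mapping is not specified in the abstract.
    scans = {"horizontal_edge": "horizontal", "vertical_edge": "vertical"}
    return scans.get(classification, "zigzag")

def scan_values(block, order):
    """Return the values of a 2-D block in the selected scan order."""
    h, w = len(block), len(block[0])
    if order == "horizontal":
        return [block[r][c] for r in range(h) for c in range(w)]
    if order == "vertical":
        return [block[r][c] for c in range(w) for r in range(h)]
    # Simple zigzag over anti-diagonals, alternating direction
    out = []
    for s in range(h + w - 1):
        diag = [(r, s - r) for r in range(h) if 0 <= s - r < w]
        if s % 2:
            diag.reverse()
        out.extend(block[r][c] for r, c in diag)
    return out
```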
Abstract:
An apparatus for decoding video information according to certain aspects includes a memory unit and a processor operationally coupled to the memory unit. The memory unit is configured to store at least one reference picture list of an enhancement layer, the at least one reference picture list comprising residual prediction reference picture information. The processor is configured to: decode signaled information about residual prediction reference picture generation; generate a residual prediction reference picture based on an enhancement layer reference picture and the decoded signaled information such that the generated residual prediction reference picture has the same motion field and the same picture order count (POC) as the enhancement layer reference picture from which it is generated; and store the generated residual prediction reference picture in the at least one reference picture list of the enhancement layer.
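The key constraint above is that the generated residual prediction reference picture inherits the motion field and picture order count (POC) of the enhancement layer reference picture it is derived from, while its samples differ. A minimal sketch, where the `Picture` container and the weighting step are assumptions standing in for the decoded signaled information:

```python
from dataclasses import dataclass

@dataclass
class Picture:
    poc: int            # picture order count
    motion_field: list  # per-block motion information
    samples: list       # sample values

def generate_rp_reference(el_ref, weighting):
    """Build a residual-prediction reference picture whose motion field
    and POC are copied from the enhancement-layer reference picture;
    only the samples are derived anew (here via a hypothetical weight)."""
    return Picture(
        poc=el_ref.poc,                          # same POC
        motion_field=list(el_ref.motion_field),  # same motion field
        samples=[weighting * s for s in el_ref.samples],
    )
```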
Abstract:
Methods and systems for video image coding are provided. Sets of filters may be selected and applied to video information at least partially based on the type of inter layer prediction implemented in coding the video information. Different filters, or filter sets, may be used for inter layer intra prediction, difference domain intra prediction, and/or difference domain inter prediction. Filter selection information may be embedded in the video bit stream.
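The selection step above amounts to a mapping from the inter layer prediction type to a filter set. A minimal sketch; the mode names mirror the abstract, but the tap values are placeholders, not the filters actually defined:

```python
def select_filter_set(prediction_mode):
    """Return the filter set for the given inter-layer prediction mode.
    Tap values below are illustrative placeholders only."""
    filter_sets = {
        "inter_layer_intra":       [1, 2, 1],
        "difference_domain_intra": [1, 6, 1],
        "difference_domain_inter": [1, 4, 1],
    }
    return filter_sets[prediction_mode]
```

In a real codec the chosen index would be embedded in the bitstream as filter selection information, as the abstract notes.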
Abstract:
As one example, techniques for decoding video data include receiving a bitstream that includes one or more pictures of a coded video sequence (CVS), decoding a first picture according to a decoding order, wherein the first picture is a random access point (RAP) picture that is not an instantaneous decoding refresh (IDR) picture, and decoding at least one other picture following the first picture according to the decoding order based on the decoded first picture. As another example, techniques for encoding video data include generating a bitstream that includes one or more pictures of a CVS, wherein a first picture according to the decoding order is a RAP picture that is not an IDR picture, and avoiding including, in the bitstream, at least one other picture that corresponds to a leading picture associated with the first picture.
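When a CVS begins at a RAP picture that is not an IDR picture (e.g., a clean random access picture), its leading pictures, those that precede the RAP picture in output order but follow it in decoding order, may reference pictures that are unavailable, which is why an encoder would omit them. A hedged sketch of the corresponding decoder-side filtering, using POC values as a stand-in for output order:

```python
def decodable_pictures(decode_order_pocs, rap_poc):
    """Given POCs in decoding order for a CVS starting at a non-IDR RAP
    picture, skip leading pictures (POC below the RAP's POC), since their
    reference pictures may be unavailable; keep the RAP and trailing
    pictures. A simplification of the behavior the abstract implies."""
    return [poc for poc in decode_order_pocs if poc >= rap_poc]
```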
Abstract:
A system and method for decoding video. A first syntax element for a block of video data is received, a value of the first syntax element indicating one of a plurality of mapping functions to be used to determine a magnitude of a scaling parameter for cross-component prediction. A second syntax element for the block of video data is received, a value of the second syntax element corresponding to the magnitude of the scaling parameter, wherein receiving the second syntax element includes decoding the value of the second syntax element with a specific binarization method. The magnitude of the scaling parameter is determined using the one of the plurality of mapping functions indicated by the first syntax element and the value of the second syntax element. Cross-component prediction is performed for at least one component of the block of video data using the determined magnitude of the scaling parameter.
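The two-syntax-element scheme above can be sketched as: the first element picks a mapping function, the second element's decoded value is fed through it, and the resulting magnitude scales the luma residual to predict a chroma residual. The two mappings below are assumptions; the `(alpha * resY) >> shift` form follows HEVC-style cross-component prediction:

```python
def scaling_magnitude(mapping_idx, coded_value):
    """Map the coded value of the second syntax element to the
    scaling-parameter magnitude via the function selected by the first
    syntax element. Both mappings here are hypothetical examples."""
    mappings = [
        lambda v: 1 << v,  # power-of-two magnitudes: 1, 2, 4, 8, ...
        lambda v: v,       # identity mapping
    ]
    return mappings[mapping_idx](coded_value)

def cross_component_predict(luma_residual, alpha, shift=3):
    """Predict chroma residual samples from luma residual samples:
    (alpha * resY) >> shift, as in HEVC-style cross-component prediction."""
    return [(alpha * r) >> shift for r in luma_residual]
```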
Abstract:
Methods for defining decoder capability for decoding multi-layer bitstreams containing video information, in which the decoder is implemented based on multiple single-layer decoder cores, are disclosed. In one aspect, the method may include identifying at least one allocation of layers of the bitstream into at least one set of layers. The method may further include detecting whether each set of layers is capable of being exclusively assigned to one of the decoder cores for the decoding of the bitstream. The method may also include determining whether the decoder is capable of decoding the bitstream based at least in part on detecting whether each set of layers is capable of being exclusively assigned to one of the decoder cores.
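The capability check above reduces to asking whether each set of layers can be exclusively assigned to one single-layer decoder core that can handle it. A hedged sketch, treating each core's capability and each layer's load as a single number (e.g., luma samples per second, purely an assumption for illustration):

```python
def decoder_capable(layer_sets, core_capacities):
    """Return True if every set of layers can be exclusively assigned to
    one core whose capacity covers the set's combined load. Assigning
    larger sets to larger free cores first is sufficient here, since any
    core that fits the largest remaining set also fits smaller ones."""
    available = sorted(core_capacities, reverse=True)
    for load in sorted((sum(s) for s in layer_sets), reverse=True):
        for i, cap in enumerate(available):
            if cap >= load:
                del available[i]  # core now exclusively assigned
                break
        else:
            return False  # no core can take this set of layers
    return True
```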
Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine whether to use color-space conversion for encoding the video data. In response to determining to use color-space conversion, the video coder may quantize data of a first color component of the video data using a first offset of a first quantization parameter (QP) and quantize data of a second color component of the video data using a second offset of a second QP, wherein the second color component is different than the first color component, and the second QP is different than the first QP. The video coder may further inverse quantize data of the first color component using the first offset and inverse quantize data of the second color component using the second offset.
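The per-component QP offsets above can be sketched with a toy HEVC-like quantizer whose step size doubles every 6 QP. The specific offset values below follow the -5/-5/-3 component offsets used with the adaptive colour transform in HEVC screen content coding, but treat them, and the function names, as assumptions of this sketch rather than the patent's values:

```python
def quantize(coeff, qp):
    """Toy scalar quantizer: step size doubles every 6 QP (HEVC-like)."""
    step = 2 ** (qp / 6)
    return int(round(coeff / step))

def quantize_csc_component(coeff, base_qp, component, use_csc):
    """When color-space conversion is used, apply a per-component QP
    offset before quantizing; otherwise quantize at the base QP.
    Offsets here are assumed (-5, -5, -3), as in HEVC SCC's ACT."""
    offsets = {0: -5, 1: -5, 2: -3}
    qp = base_qp + (offsets[component] if use_csc else 0)
    return quantize(coeff, qp)
```

The matching inverse quantization would apply the same per-component offsets, as the abstract notes.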
Abstract:
Systems and methods for encoding and decoding scalable video information are disclosed. The system may have a memory unit configured to store syntax elements for a multi-layer picture. The system may further comprise one or more processors operationally coupled to the memory unit. The processors may be configured to determine at least one phase offset value between a reference layer sample position in the multi-layer picture and a corresponding enhancement layer sample position. The processors may be further configured to generate a syntax element indicating the phase offset value, the phase offset value representing a phase offset of a luma sample position and a chroma sample position of the reference layer position.
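The phase offset above enters the derivation that maps an enhancement-layer sample position back to the collocated reference-layer position during inter-layer resampling. A minimal fixed-point sketch; the function name, the 16-bit precision, and the exact formula are assumptions (SHVC-style codecs use a similar scaled-multiply-plus-offset form):

```python
def reference_sample_position(el_pos, scale, phase_offset, shift=16):
    """Map an enhancement-layer sample position to the collocated
    reference-layer position.

    scale        : refLayerSize / elSize as a fixed-point ratio,
                   pre-scaled by 2**shift
    phase_offset : signalled phase offset in the same fixed-point units
    """
    return (el_pos * scale + phase_offset) >> shift
```

For 2x spatial scalability, `scale` would be `32768` (i.e., 0.5 in 16-bit fixed point); a nonzero `phase_offset` shifts where the reference-layer samples are taken.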
Abstract:
In one example, a device for coding video data includes a memory configured to store video data, and a video coder configured to code a value for a syntax element representative of whether a high bit depth is enabled for the video data, and when the value for the syntax element indicates that the high bit depth is enabled: code a value for a syntax element representative of the high bit depth for one or more parameters of the video data, code values for the parameters such that the values for the parameters are representative of bit depths that are based on the value for the syntax element representative of the high bit depth, and code the video data based at least in part on the values for the parameters.
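The conditional syntax structure above, a gating flag followed by bit-depth parameters that are present only when the flag is set, can be sketched on the decoder side as follows. The parameter names, the difference coding of the bit depths, and the `stream` abstraction (an iterator of already-entropy-decoded values) are all assumptions for illustration:

```python
def parse_bit_depth_params(stream):
    """Parse a flag indicating whether high bit depth is enabled; only
    then parse the high-bit-depth value and the per-component bit
    depths, which are coded relative to it (an assumed coding)."""
    params = {}
    high_bit_depth_flag = next(stream)
    params["high_bit_depth_enabled"] = bool(high_bit_depth_flag)
    if high_bit_depth_flag:
        high_bit_depth = next(stream)
        params["luma_bit_depth"] = high_bit_depth - next(stream)
        params["chroma_bit_depth"] = high_bit_depth - next(stream)
    else:
        # Fall back to a conventional 8-bit depth when the flag is off.
        params["luma_bit_depth"] = params["chroma_bit_depth"] = 8
    return params
```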