Abstract:
An example method of coding video data includes decoding, from a coded video bitstream, a syntax element that indicates whether a transpose process is applied to palette indices of a palette for a current block of video data; decoding, from the coded video bitstream and at a position in the coded video bitstream that is after the syntax element that indicates whether the transpose process is applied to palette indices of the palette for the current block of video data, one or more syntax elements related to delta quantization parameter (QP) and/or chroma QP offsets for the current block of video data; and decoding the current block of video data based on the palette for the current block of video data and the one or more syntax elements related to delta QP and/or chroma QP offsets for the current block of video data.
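The transpose decision above can be sketched as a reinterpretation of the palette index map. The row-major layout and the function name below are illustrative assumptions, not the normative scan process.

```python
def apply_transpose(index_map, width, height, transpose_flag):
    """Return the palette index map for the block; if the transpose flag
    decoded from the bitstream is set, rows and columns are swapped.
    (Illustrative sketch; not the normative palette scan.)"""
    if not transpose_flag:
        return list(index_map)
    # Row-major (width x height) map read column-by-column.
    return [index_map[y * width + x] for x in range(width) for y in range(height)]
```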
Abstract:
A device includes one or more processors configured to derive, from among a plurality of intra prediction modes, M most probable modes (MPMs) for intra prediction of a block of video data. A syntax element that indicates whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode of the plurality of intra prediction modes for intra prediction of the block of video data is decoded. The one or more processors are configured such that, based on the non-MPM index indicating the selected intra prediction mode, the one or more processors decode the non-MPM index. The non-MPM index is encoded in the bitstream as a code word shorter than ⌈log2 N⌉ bits if the non-MPM index satisfies a criterion and is encoded in the bitstream as a fixed length code with ⌈log2 N⌉ bits otherwise. The one or more processors reconstruct the block based on the selected intra prediction mode.
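One well-known code with exactly this length profile is the truncated binary code: when N is not a power of two, a few codewords use ⌈log2 N⌉ − 1 bits and the rest use the full ⌈log2 N⌉ bits. The sketch below assumes that scheme; the abstract does not name the actual criterion.

```python
import math

def truncated_binary(index, n):
    """Truncated binary code for index in [0, n): the first u = 2**k - n
    codewords (k = ceil(log2 n)) use k - 1 bits; the rest use k bits."""
    k = math.ceil(math.log2(n))
    u = 2**k - n
    if index < u:
        return format(index, "b").zfill(k - 1)  # shorter than ceil(log2 n) bits
    return format(index + u, "b").zfill(k)      # full-length codeword
```

For example, with N = 61 non-MPM modes, ⌈log2 61⌉ = 6, so the first three indices get 5-bit codewords and the remaining 58 get 6-bit codewords.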
Abstract:
A device includes one or more processors configured to derive, from among a plurality of intra prediction modes, M most probable modes (MPMs) for intra prediction of a block of video data. A syntax element indicating whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode of the plurality of intra prediction modes for intra prediction of the block of video data is decoded. Based on the indicated one of the MPM index or the non-MPM index being the MPM index, the one or more processors select, for each of one or more context-modeled bins of the MPM index, based on intra prediction modes used to decode one or more neighboring blocks, a context index for the context-modeled bin. The one or more processors reconstruct the block of video data based on the selected intra prediction mode.
Abstract:
Techniques and systems are provided for coding video data. For example, a method of coding video data includes determining motion information for a current block and determining an illumination compensation status for the current block. The method further includes coding the current block based on the motion information and the illumination compensation status for the current block. In some examples, the method further includes determining the motion information for the current block based on motion information of a candidate block. In such examples, the method further includes determining an illumination compensation status of the candidate block and deriving the illumination compensation status for the current block based on the illumination compensation status of the candidate block.
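The derivation step in the final sentence can be sketched as simple inheritance: when motion information is copied from a candidate block (as in merge-style prediction), the illumination compensation status travels with it. The dataclass and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class MotionInfo:
    mv: tuple        # motion vector (dx, dy)
    ic_flag: bool    # illumination compensation status

def derive_for_current(candidate: MotionInfo) -> MotionInfo:
    # The current block inherits both the motion vector and the
    # illumination compensation status of the chosen candidate.
    return MotionInfo(mv=candidate.mv, ic_flag=candidate.ic_flag)
```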
Abstract:
This disclosure relates to processing video data, including processing video data to conform to a high dynamic range (HDR)/wide color gamut (WCG) color container. The techniques apply, on an encoding side, pre-processing of color values prior to application of a static transfer function and/or apply post-processing on the output from the application of the static transfer function. By applying pre-processing, the examples may generate color values that when compacted into a different dynamic range by application of the static transfer function linearize the output codewords. By applying post-processing, the examples may increase signal to quantization noise ratio. The examples may apply the inverse of the operations on the encoding side on the decoding side to reconstruct the color values.
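A minimal sketch of the encoding-side idea, with a simple power law standing in for the static transfer function and a linear scale/offset as the pre-processing step; both choices are assumptions, not the disclosure's actual functions.

```python
def encode_sample(linear_value, scale, offset, exponent=1.0 / 2.4):
    """Pre-process the linear color value (scale/offset), then apply a
    static transfer function (a simple power law here) to compact it
    into the output codeword range."""
    pre = scale * linear_value + offset
    return pre ** exponent

def decode_sample(codeword, scale, offset, exponent=1.0 / 2.4):
    """Decoding side: invert the transfer function, then undo the
    pre-processing (post-processing mirrors the encoder)."""
    pre = codeword ** (1.0 / exponent)
    return (pre - offset) / scale
```

Applying the inverse operations on the decoding side recovers the original color value, mirroring the last sentence of the abstract.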
Abstract:
A device may determine, based on data in a bitstream, a luma sample (Y) of a pixel, a Cb sample of the pixel, and a Cr sample of the pixel. Furthermore, the device may obtain, from the bitstream, a first scaling factor and a second scaling factor. Additionally, the device may determine, based on the first scaling factor, the Cb sample for the pixel, and Y, a converted B sample (B′) for the pixel. The device may determine, based on the second scaling factor, the Cr sample for the pixel, and Y, a converted R sample (R′) for the pixel. The device may apply an electro-optical transfer function (EOTF) to convert Y, R′, and B′ to a luminance sample for the pixel, a R sample for the pixel, and a B sample for the pixel, respectively.
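The scaling steps can be sketched as below; the exact functional form (luma plus scaled chroma, a constant-luminance-style reconstruction) and the power-law EOTF are assumptions for illustration, not the disclosure's actual equations.

```python
def reconstruct_rgb_components(y, cb, cr, scale_b, scale_r, gamma=2.4):
    """Derive converted samples B' and R' from the decoded luma and chroma
    samples and the two signalled scaling factors, then apply an EOTF
    (a simple power law stands in here) to obtain linear-light values."""
    b_prime = y + scale_b * cb     # converted B sample (assumed form)
    r_prime = y + scale_r * cr     # converted R sample (assumed form)
    eotf = lambda v: v ** gamma    # placeholder EOTF
    return eotf(y), eotf(r_prime), eotf(b_prime)
```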
Abstract:
An example method of decoding video data includes obtaining, from a video bitstream, a representation of a difference between a motion vector (MV) predictor and a MV that identifies a predictor block for a current block of video data in a current picture; obtaining, from the video bitstream, a syntax element indicating whether adaptive motion vector resolution (AMVR) is used for the current block; determining, based on the representation of the difference between the MV predictor and the MV that identifies the predictor block, a value of the MV; storing the value of the MV at fractional-pixel resolution regardless of whether AMVR is used for the current block and regardless of whether the predictor block is included in the current picture; determining, based on the value of the stored MV, pixel values of the predictor block; and reconstructing the current block based on the pixel values of the predictor block.
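The storage rule can be sketched in quarter-pel units: even when AMVR signals integer-pel resolution for the decoded difference, the value is scaled up before storage, so stored MVs are always at fractional-pixel resolution. The left-shift by 2 and the quarter-pel unit are assumptions consistent with common practice, not quoted from the abstract.

```python
def reconstruct_and_store_mv(mvd, mv_pred, amvr_integer_pel):
    """All stored motion vectors use quarter-pel units. If AMVR signalled
    integer-pel resolution, the decoded difference is in whole pels and is
    converted to quarter-pel units (<< 2) before being added to the
    predictor; the stored value is at fractional-pel resolution either way."""
    if amvr_integer_pel:
        mvd <<= 2  # whole-pel difference -> quarter-pel units
    return mv_pred + mvd
```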
Abstract:
In one example, a device for coding video data includes a memory configured to store video data, and a video coder configured to code a value for a syntax element representative of whether a high bit depth is enabled for the video data, and when the value for the syntax element indicates that the high bit depth is enabled: code a value for a syntax element representative of the high bit depth for one or more parameters of the video data, code values for the parameters such that the values for the parameters are representative of bit depths that are based on the value for the syntax element representative of the high bit depth, and code the video data based at least in part on the values for the parameters.
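The gating described above can be sketched as conditional parsing: the extended bit-depth value is present only when the enabling flag is set. The reader class and the default of 8 bits are illustrative assumptions, not the disclosure's syntax.

```python
class ValueReader:
    """Minimal stand-in for a bitstream reader over pre-parsed values."""
    def __init__(self, values):
        self._it = iter(values)
    def read(self):
        return next(self._it)

def parse_bit_depth_params(reader):
    """The high-bit-depth flag gates whether the extended bit-depth
    syntax element is present in the bitstream at all."""
    flag = reader.read()
    bit_depth = reader.read() if flag else 8  # assumed default when absent
    return {"high_bit_depth_flag": flag, "bit_depth": bit_depth}
```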
Abstract:
This disclosure relates to processing video data, including processing video data to conform to a high dynamic range/wide color gamut (HDR/WCG) color container. As will be explained in more detail below, the techniques of this disclosure include determining dynamic range adjustment (DRA) parameters and applying the DRA parameters to video data in order to make better use of an HDR/WCG color container. The techniques of this disclosure may also include signaling syntax elements that allow a video decoder or video post-processing device to reverse the DRA techniques of this disclosure to reconstruct the original or native color container of the video data.
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information associated with a reference layer and a corresponding enhancement layer. The processor determines a value of a video unit positioned at a position within the enhancement layer based at least in part on an intra prediction value weighted by a first weighting factor, wherein the intra prediction value is determined based on at least one additional video unit in the enhancement layer, and a value of a co-located video unit in the reference layer weighted by a second weighting factor, wherein the co-located video unit is located at a position in the reference layer corresponding to the position of the video unit in the enhancement layer. In some embodiments, at least one of the first and second weighting factors is between 0 and 1.
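The combination described above reduces to a weighted sum; whether the two weights must sum to one is not stated in the abstract, so this sketch treats them independently.

```python
def predict_enhancement_sample(intra_value, colocated_value, w1, w2):
    """Enhancement-layer prediction: the intra prediction value weighted
    by w1 plus the co-located reference-layer value weighted by w2
    (each weight assumed to lie in [0, 1])."""
    assert 0.0 <= w1 <= 1.0 and 0.0 <= w2 <= 1.0
    return w1 * intra_value + w2 * colocated_value
```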