Abstract:
In one example, a device for processing decoded video data includes a video decoder implemented by one or more hardware-based processing units comprising digital logic circuitry, and a postprocessing unit implemented by one or more hardware-based processing units comprising digital logic circuitry. The video decoder is configured to decode video data of a video bitstream according to a video coding standard, extract HDR postprocessing data from an SEI message of the video bitstream, and provide the decoded video data and the HDR postprocessing data to the postprocessing unit. The postprocessing unit is configured to process the decoded video data using the HDR postprocessing data according to the video coding standard. The device may additionally determine whether the video decoder is compliant with the video coding standard by comparing the processed video data with reference processed video data.
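The decode / extract-SEI / postprocess / compare flow described above can be sketched roughly as follows. This is a minimal Python illustration, not the claimed device: the Bitstream, decode, postprocess, and is_compliant names, the "hdr_postprocessing" SEI key, and the gain-based postprocessing are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Bitstream:
    coded_frames: list                       # placeholder for coded picture data
    sei_messages: dict = field(default_factory=dict)

def decode(bitstream):
    """Decode the coded frames and extract HDR postprocessing data
    from a (hypothetical) 'hdr_postprocessing' SEI message."""
    decoded = [list(frame) for frame in bitstream.coded_frames]  # stand-in for real decoding
    hdr_params = bitstream.sei_messages.get("hdr_postprocessing", {})
    return decoded, hdr_params

def postprocess(decoded, hdr_params):
    """Apply HDR postprocessing to each decoded sample using the SEI parameters."""
    gain = hdr_params.get("gain", 1.0)
    return [[sample * gain for sample in frame] for frame in decoded]

def is_compliant(processed, reference, tolerance=0):
    """Compare processed output with reference output to judge decoder conformance."""
    return all(
        abs(a - b) <= tolerance
        for frame, ref in zip(processed, reference)
        for a, b in zip(frame, ref)
    )

if __name__ == "__main__":
    bs = Bitstream(coded_frames=[[10, 20, 30]],
                   sei_messages={"hdr_postprocessing": {"gain": 2.0}})
    decoded, hdr = decode(bs)
    out = postprocess(decoded, hdr)
    print(out, is_compliant(out, [[20, 40, 60]]))
```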
Abstract:
In an example, a method of processing video may include receiving a bitstream including encoded video data and a colour remapping information (CRI) supplemental enhancement information (SEI) message. The CRI SEI message may include information corresponding to one or more colour remapping processes. The method may include decoding the encoded video data to generate decoded video data. The method may include applying a process that does not correspond to the CRI SEI message to the decoded video data before applying at least one of the one or more colour remapping processes to the decoded video data to produce processed decoded video data.
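A minimal sketch of the ordering described above, in which a process that does not correspond to the CRI SEI message runs before a colour remapping process does. The function names and the look-up-table model of the remapping are illustrative assumptions, not the actual CRI SEI semantics.

```python
def apply_non_cri_process(frame):
    """Stand-in for a process not described by the CRI SEI message
    (e.g. some intermediate conversion step); illustrative only."""
    return [sample + 1 for sample in frame]

def apply_colour_remapping(frame, lut):
    """Apply one colour remapping process signalled in the CRI SEI message,
    modelled here as a simple look-up table."""
    return [lut.get(sample, sample) for sample in frame]

def process_decoded_frame(frame, cri_lut):
    # The non-CRI process is applied first, then the CRI remapping.
    intermediate = apply_non_cri_process(frame)
    return apply_colour_remapping(intermediate, cri_lut)

if __name__ == "__main__":
    lut = {11: 110, 21: 210}
    print(process_decoded_frame([10, 20], lut))   # -> [110, 210]
```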
Abstract:
In general, techniques are described for processing high dynamic range (HDR) and wide color gamut (WCG) video data for video coding. A device comprising a memory and a processor may perform the techniques. The memory may store compacted fractional chromaticity coordinate (FCC) formatted video data. The processor may inverse compact the compacted FCC formatted video data using one or more inverse adaptive transfer functions (TFs) to obtain decompacted FCC formatted video data. The processor may next inverse adjust a chromaticity component of the decompacted FCC formatted video data based on a corresponding luminance component of the decompacted FCC formatted video data to obtain inverse adjusted FCC formatted video data. The processor may convert the chromaticity component of the inverse adjusted FCC formatted video data from the FCC format to a color representation format to obtain HDR and WCG video data.
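The inverse chain described above might be sketched as follows. The specific inverse transfer function, the luminance-dependent scale factor, and the conversion matrix below are placeholders chosen only to illustrate the order of operations, not values from the techniques themselves.

```python
def inverse_transfer_function(value):
    """Illustrative inverse of an adaptive transfer function
    (a square, undoing a square-root style compaction)."""
    return value * value

def inverse_adjust_chroma(chroma, luma):
    """Undo a luminance-dependent scaling of a chromaticity component
    (the scale factor here is purely illustrative)."""
    return chroma * (1.0 + luma)

def fcc_to_color_representation(luma, cb, cr):
    """Convert the FCC-format components to a target colour representation.
    The 3x3 matrix below is a placeholder, not a standardized conversion."""
    matrix = [
        [1.0,  0.0,  1.4],
        [1.0, -0.3, -0.7],
        [1.0,  1.8,  0.0],
    ]
    vec = (luma, cb, cr)
    return tuple(sum(m * v for m, v in zip(row, vec)) for row in matrix)

def reconstruct_hdr_wcg(compacted_sample):
    """Run the inverse chain on one compacted (luma, cb, cr) sample."""
    luma, cb, cr = (inverse_transfer_function(c) for c in compacted_sample)
    cb = inverse_adjust_chroma(cb, luma)
    cr = inverse_adjust_chroma(cr, luma)
    return fcc_to_color_representation(luma, cb, cr)

if __name__ == "__main__":
    print(reconstruct_hdr_wcg((0.5, 0.2, 0.1)))
```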
Abstract:
A video decoder may adaptively disable, based on a syntax element, one or more filters used for intra-prediction. In addition, the video decoder may perform intra-prediction to generate prediction data for a current block of a current video slice. Furthermore, a video encoder may adaptively disable one or more filters used for intra-prediction. Furthermore, the video encoder may signal a syntax element that controls the one or more filters. In addition, the video encoder may perform intra-prediction to generate prediction data for a current video block of the video data.
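A rough sketch of the decoder-side behavior follows, assuming a hypothetical filter_disable_flag syntax element, a simple [1 2 1]/4 reference-sample smoothing filter, and DC prediction; none of these specifics come from the abstract itself.

```python
def smooth(samples):
    """Stand-in for an intra reference-sample smoothing filter ([1 2 1] / 4)."""
    out = list(samples)
    for i in range(1, len(samples) - 1):
        out[i] = (samples[i - 1] + 2 * samples[i] + samples[i + 1] + 2) >> 2
    return out

def intra_predict(ref_samples, filters_enabled):
    """Illustrative intra prediction: optionally filter the reference samples,
    then form a simple DC prediction from them."""
    samples = smooth(ref_samples) if filters_enabled else ref_samples
    dc = sum(samples) // len(samples)
    return [dc] * len(samples)

def decode_block(ref_samples, filter_disable_flag):
    """The decoder reads a syntax element (filter_disable_flag) and adaptively
    disables the intra-prediction filters when it is set."""
    return intra_predict(ref_samples, filters_enabled=not filter_disable_flag)

if __name__ == "__main__":
    refs = [10, 50, 10, 50, 10, 50]
    print(decode_block(refs, filter_disable_flag=0))  # filtered references
    print(decode_block(refs, filter_disable_flag=1))  # unfiltered references
```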
Abstract:
In an example, a method of decoding video data using palette mode may include receiving, from an encoded video bitstream, a first syntax element defining a value indicative of a scan order. The method may include receiving, from the encoded video bitstream, a second syntax element defining a value indicative of a swap operation. The method may include reconstructing a palette block from a plurality of palette index values based on the value of the second syntax element indicative of the swap operation.
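The reconstruction described above might look roughly like the following. The "horizontal" and "vertical" scan names and the interpretation of the swap operation as a transpose of the block are assumptions made for the sake of the example.

```python
def reconstruct_palette_block(indices, palette, width, height,
                              scan_order, swap_flag):
    """Map a 1-D list of palette indices back to a 2-D block of colour values.
    scan_order selects a (hypothetical) horizontal vs. vertical scan, and
    swap_flag transposes the block, swapping the roles of rows and columns."""
    block = [[None] * width for _ in range(height)]
    if scan_order == "horizontal":
        coords = [(y, x) for y in range(height) for x in range(width)]
    else:  # "vertical"
        coords = [(y, x) for x in range(width) for y in range(height)]
    for pos, (y, x) in enumerate(coords):
        block[y][x] = palette[indices[pos]]
    if swap_flag:
        block = [list(row) for row in zip(*block)]   # transpose the block
    return block

if __name__ == "__main__":
    palette = [0, 128, 255]
    idx = [0, 1, 2, 2, 1, 0]
    print(reconstruct_palette_block(idx, palette, width=3, height=2,
                                    scan_order="horizontal", swap_flag=1))
```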
Abstract:
Techniques are described to extend palette-mode coding techniques to cases where chroma components are at a different resolution than luma components. The entries of the palette table include three color values, and either all three color values or a single one of the three color values is selected based on whether a pixel includes both a luma component and chroma components or only a luma component.
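A small sketch of the selection described above, assuming 4:2:0 subsampling so that only pixels at even positions carry chroma; the position test and the entry layout are illustrative assumptions.

```python
def reconstruct_pixel(palette, index, x, y):
    """Each palette entry holds three colour values (Y, Cb, Cr). With 4:2:0
    subsampling (assumed here) only pixels at even (x, y) positions carry
    chroma, so the full triple is used there and only the luma value elsewhere."""
    y_val, cb_val, cr_val = palette[index]
    has_chroma = (x % 2 == 0) and (y % 2 == 0)
    return (y_val, cb_val, cr_val) if has_chroma else (y_val,)

if __name__ == "__main__":
    palette = [(50, 120, 130), (200, 110, 140)]
    print(reconstruct_pixel(palette, 0, x=0, y=0))  # luma + chroma
    print(reconstruct_pixel(palette, 1, x=1, y=0))  # luma only
```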
Abstract:
An example method of decoding video data includes determining a palette for decoding a block, the palette including entries each having a respective palette index, determining a reference run of palette indices for first pixels of the block, and determining a current run of palette indices for second pixels of the block, based on the reference run. Determining the current run of palette indices includes locating a reference index of the reference run, the reference index being spaced at least one line from an initial index of the current run, determining a run length of the reference run, a final index of the reference run being separated from the initial index of the current run by at least one index, copying the palette indices of the reference run as the current run of palette indices, and decoding pixels of the copied current run using the palette.
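A minimal sketch of the copy operation described above, assuming a raster-ordered index map and a hypothetical line_offset parameter giving the (at least one line) spacing between the reference run and the current run.

```python
def copy_reference_run(index_map, width, current_start, run_length, line_offset):
    """Copy a run of palette indices from a reference run that begins
    line_offset lines (>= 1) above the start of the current run.
    index_map is the 1-D array of already-decoded palette indices in scan order."""
    assert line_offset >= 1
    ref_start = current_start - line_offset * width
    for i in range(run_length):
        index_map[current_start + i] = index_map[ref_start + i]
    return index_map

def decode_run_pixels(index_map, palette, start, run_length):
    """Turn the copied palette indices back into pixel (colour) values."""
    return [palette[index_map[start + i]] for i in range(run_length)]

if __name__ == "__main__":
    width = 4
    index_map = [0, 1, 2, 1,                 # first line, already decoded
                 None, None, None, None]     # current line, to be filled
    copy_reference_run(index_map, width, current_start=4, run_length=4, line_offset=1)
    print(index_map)
    print(decode_run_pixels(index_map, palette=[10, 20, 30], start=4, run_length=4))
```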
Abstract:
An example method for decoding video data includes receiving syntax elements (SEs) for a component of a block vector that represents a displacement between a current block of video data and a predictor block of video data; decoding the SEs to determine a value of the component by at least: decoding a first SE to determine whether or not an absolute value of the component (AbsValcomp) is greater than zero; where AbsValcomp is greater than zero, decoding a second SE to determine whether AbsValcomp is greater than a threshold based on an order of a set of codes; where AbsValcomp is greater than the threshold, decoding, using the set of codes, a third SE to determine AbsValcomp minus an offset based on the order of the set of codes; and where AbsValcomp is greater than zero, decoding a fourth SE to determine a sign of the value of the component.
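The syntax-element structure described above can be sketched with a simple bit reader and exponential-Golomb decoding. The choice of a 1st-order code and the threshold and offset values derived from it (threshold = 1, offset = 2) are assumptions made for the example, not the claimed derivation.

```python
class BitReader:
    """Minimal MSB-first reader over a list of bits (0 / 1)."""
    def __init__(self, bits):
        self.bits, self.pos = bits, 0

    def read_bit(self):
        bit = self.bits[self.pos]
        self.pos += 1
        return bit

    def read_egk(self, k):
        """Decode one k-th order exponential-Golomb codeword."""
        leading_zeros = 0
        while self.read_bit() == 0:
            leading_zeros += 1
        value = 1                              # the terminating '1' of the prefix
        for _ in range(leading_zeros + k):
            value = (value << 1) | self.read_bit()
        return value - (1 << k)

def decode_bv_component(reader, eg_order=1):
    """Decode one block-vector component from its four syntax elements."""
    threshold, offset = 1, 2                   # assumed to follow from eg_order
    if reader.read_bit() == 0:                 # SE1: is |component| > 0 ?
        return 0
    if reader.read_bit() == 0:                 # SE2: is |component| > threshold ?
        abs_val = threshold
    else:                                      # SE3: codes |component| - offset
        abs_val = reader.read_egk(eg_order) + offset
    return -abs_val if reader.read_bit() else abs_val   # SE4: sign

if __name__ == "__main__":
    print(decode_bv_component(BitReader([1, 0, 0])))              # -> 1
    print(decode_bv_component(BitReader([1, 1, 0, 1, 0, 1, 1])))  # -> -5
```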
Abstract:
In an example, a method of processing video data includes determining a run value that indicates a run-length of a run of a palette index of a block of video data, wherein the palette index is associated with a color value in a palette of color values for coding the block of video data. The method also includes determining a context for context-adaptive coding of data that represents the run value based on the palette index, and coding the data that represents the run value from a bitstream using the determined context.
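A minimal sketch of the context selection described above; the grouping of palette indices into three contexts is an illustrative assumption, and the actual arithmetic coding of the run-value bins with those contexts is omitted.

```python
def run_value_context(palette_index):
    """Select a context for coding the run value based on the palette index
    the run belongs to (the three-way grouping here is an assumption)."""
    if palette_index == 0:
        return 0
    if palette_index < 3:
        return 1
    return 2

if __name__ == "__main__":
    for idx in (0, 1, 2, 3, 7):
        print("palette index", idx, "-> context", run_value_context(idx))
```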
Abstract:
A reduction in the number of binarizations and/or contexts used in context adaptive binary arithmetic coding (CABAC) for video coding is proposed. In particular, this disclosure proposes techniques that may lower the number of contexts used in CABAC by up to 56.