Abstract:
This disclosure describes techniques that may enable a video coder to simultaneously implement multiple parallel processing mechanisms, including two or more of wavefront parallel processing (WPP), tiles, and entropy slices. This disclosure describes signaling techniques that are compatible both with coding standards that allow only one parallel processing mechanism to be implemented at a time and with potential future coding standards that may allow more than one parallel processing mechanism to be implemented simultaneously. This disclosure also describes restrictions that may enable WPP and tiles to be implemented simultaneously.
Abstract:
A video coding device generates a motion vector (MV) candidate list for a prediction unit (PU) of a coding unit (CU) that is partitioned into four equally-sized PUs. The video coding device converts a bi-directional MV candidate in the MV candidate list into a uni-directional MV candidate. In addition, the video coding device determines a selected MV candidate in the MV candidate list and generates a predictive video block for the PU based at least in part on one or more reference blocks indicated by motion information specified by the selected MV candidate.
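The conversion step above can be sketched as follows. The candidate representation (a dictionary with a prediction direction plus per-list motion vectors and reference indices) and the choice of keeping only the list-0 motion information are illustrative assumptions, not part of any standard.

```python
def convert_to_uni_directional(candidate):
    # Hypothetical conversion rule: if the merge candidate is
    # bi-directional, keep only its list-0 motion information so the
    # resulting candidate is uni-directional.
    if candidate["pred_dir"] != "BI":
        return candidate
    return {
        "pred_dir": "L0",
        "mv_l0": candidate["mv_l0"],
        "ref_idx_l0": candidate["ref_idx_l0"],
        "mv_l1": None,
        "ref_idx_l1": None,
    }

bi = {"pred_dir": "BI", "mv_l0": (4, -2), "ref_idx_l0": 0,
      "mv_l1": (1, 3), "ref_idx_l1": 1}
uni = convert_to_uni_directional(bi)  # uni-directional (list-0 only)
```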
Abstract:
A video encoder generates a bitstream that includes a syntax element that indicates whether a picture is encoded according to either a first coding mode or a second coding mode. In the first coding mode, the picture is entirely encoded using wavefront parallel processing (WPP). In the second coding mode, each tile of the picture is encoded without using WPP and the picture may have one or more tiles. A video decoder may parse the syntax element from the bitstream. In response to determining that the syntax element has a particular value, the video decoder decodes the picture entirely using WPP. In response to determining that the syntax element does not have the particular value, the video decoder decodes each tile of the picture without using WPP.
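The decoder-side branch described above can be sketched as follows. The flag name, the convention that the value 1 selects the WPP mode, and the picture representation are illustrative assumptions.

```python
def decode_picture(coding_mode_flag, picture):
    # Hypothetical decoder branch on the parsed syntax element.
    if coding_mode_flag == 1:
        # First coding mode: decode the entire picture using WPP.
        return ("wpp", picture["name"])
    # Second coding mode: decode each tile of the picture without WPP.
    return [("tile", t) for t in picture["tiles"]]

whole = decode_picture(1, {"name": "pic0", "tiles": ["t0"]})
per_tile = decode_picture(0, {"name": "pic0", "tiles": ["t0", "t1"]})
```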
Abstract:
A video encoder determines reference blocks for each inter-predicted prediction unit (PU) of a tree block group such that each of the reference blocks is in a reference picture that is in a reference picture subset for the tree block group. The reference picture subset for the tree block group includes less than all reference pictures in a reference picture set of the current picture. The tree block group comprises a plurality of concurrently-coded tree blocks in the current picture. For each inter-predicted PU of the tree block group, the video encoder indicates, in a bitstream that includes a coded representation of video data, a reference picture that includes the reference block for the inter-predicted PU. A video decoder receives the bitstream, determines the reference pictures of the inter-predicted PUs of the tree block group, and generates decoded video blocks using the reference blocks of the inter-predicted PUs.
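The encoder-side restriction above can be sketched as a conformance check. The PU representation and the use of simple picture labels are illustrative assumptions.

```python
def reference_pictures_in_subset(tree_block_group_pus, ref_pic_subset):
    # Hypothetical check: every inter-predicted PU of the
    # concurrently-coded tree block group must take its reference block
    # from a picture in the restricted reference picture subset.
    return all(pu["ref_pic"] in ref_pic_subset for pu in tree_block_group_pus)

pus = [{"ref_pic": "pic3"}, {"ref_pic": "pic5"}]
ok = reference_pictures_in_subset(pus, {"pic3", "pic5"})
bad = reference_pictures_in_subset(pus, {"pic3"})
```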
Abstract:
A video coder can control in-picture prediction across slice boundaries within a picture. In one example, a first syntax element can control whether in-picture prediction across slice boundaries is enabled for slices of a picture. If in-picture prediction across slice boundaries is enabled for the picture, then a second syntax element can control, for an individual slice, whether in-picture prediction across slice boundaries is enabled for that slice.
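The two-level gating described above can be sketched as follows. The flag names are illustrative assumptions; the key point is that the slice-level flag only takes effect when the picture-level flag enables cross-slice prediction.

```python
def cross_slice_prediction_enabled(picture_flag, slice_flag):
    # Hypothetical two-level control: the first (picture-level) syntax
    # element gates the second (slice-level) syntax element.
    if not picture_flag:
        return False  # disabled for all slices of the picture
    return bool(slice_flag)  # per-slice control applies
```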
Abstract:
A video decoder determines, based on a block size of a current block and a low-frequency non-separable transform (LFNST) syntax element, a zero-out pattern of normatively defined zero-coefficients. The LFNST syntax element is signaled at a transform unit (TU) level. Additionally, the video decoder determines transform coefficients of the current block. The transform coefficients of the current block include transform coefficients in an LFNST region of the current block and transform coefficients outside the LFNST region of the current block. As part of determining the transform coefficients of the current block, the video decoder applies an inverse LFNST to determine values of one or more transform coefficients in the LFNST region of the current block. The video decoder also determines that transform coefficients of the current block in a region of the current block defined by the zero-out pattern are equal to 0.
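The zero-out determination above can be sketched as deriving a mask from the block size and the LFNST syntax element. The fixed 4x4 LFNST region and the rule that every coefficient outside it is zeroed whenever LFNST is applied are simplifying assumptions for illustration; the actual zero-out pattern depends on the block size in a more detailed, normatively defined way.

```python
def lfnst_zero_out_mask(block_w, block_h, lfnst_idx):
    # True marks positions whose transform coefficients are determined
    # to be equal to 0 by the zero-out pattern (hypothetical sketch).
    mask = [[False] * block_w for _ in range(block_h)]
    if lfnst_idx > 0:
        for y in range(block_h):
            for x in range(block_w):
                if x >= 4 or y >= 4:  # outside the assumed LFNST region
                    mask[y][x] = True
    return mask

mask = lfnst_zero_out_mask(8, 8, lfnst_idx=1)
```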
Abstract:
A video coder may determine a quantization parameter (QP) value for a block of video data, determine a residual coding method from a plurality of residual coding methods based on the QP value, wherein the plurality of residual coding methods include transform skip (TS) residual coding and regular residual coding, and code a residual of the block of video data using the determined residual coding method.
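The selection step above can be sketched as follows. The threshold value and the direction of the rule are illustrative assumptions; the source states only that the residual coding method is chosen from the two methods based on the QP value.

```python
def select_residual_coding(qp, ts_min_qp=4):
    # Hypothetical rule: fall back to regular residual coding below a
    # minimum transform-skip QP, and use TS residual coding otherwise.
    if qp < ts_min_qp:
        return "regular_residual_coding"
    return "ts_residual_coding"
```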
Abstract:
An example device includes memory and one or more processors implemented in circuitry and communicatively coupled to the memory. The one or more processors are configured to receive a first slice header syntax element for a slice of the video data and determine a first value for the first slice header syntax element, the first value being indicative of whether dependent quantization is enabled. The one or more processors are configured to receive a second slice header syntax element for the slice of the video data and determine a second value for the second slice header syntax element, the second value being indicative of whether sign data hiding is enabled. The one or more processors are configured to determine whether transform skip residual coding is disabled for the slice based on the first value and the second value and decode the slice based on the determinations.
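The slice-level determination above can be sketched as a function of the two parsed flag values. The specific combination rule (disable transform skip residual coding when either tool is enabled) is an illustrative assumption; the source says only that the determination is based on both values.

```python
def ts_residual_coding_disabled(dep_quant_enabled, sign_data_hiding_enabled):
    # Hypothetical combination of the two slice header syntax elements:
    # treat transform skip residual coding as disabled for the slice
    # when dependent quantization or sign data hiding is enabled.
    return dep_quant_enabled or sign_data_hiding_enabled
```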