Abstract:
A coding device configured to code video data includes a buffer memory configured to store pictures of the video data and at least one processor implemented in circuitry and in communication with the buffer memory. The at least one processor is configured to code at least two pictures of a single coded video sequence (CVS) of the video data, where each picture of the at least two pictures is associated with an identical picture order count (POC) value and where the at least two pictures are different from one another; associate respective data with each of the at least two pictures of the single CVS; and identify, for inclusion in a reference picture set, at least one picture among the at least two pictures based on the identical POC value associated with the at least two pictures and the respective data associated with the at least one picture.
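As a rough illustration of the idea (not code from the disclosure; all type and function names here are hypothetical), a C++ sketch of selecting a reference picture when multiple pictures in one CVS share a POC value, using additional per-picture data to disambiguate, might look like:

    #include <cstdint>
    #include <optional>
    #include <vector>

    // Hypothetical picture record: the same POC can occur for different pictures
    // in a single CVS, so extra per-picture data (here "layerId") disambiguates them.
    struct Picture {
        int32_t poc;      // picture order count
        int32_t layerId;  // illustrative "respective data" associated with the picture
    };

    // Pick a picture for the reference picture set: match on POC, then use the
    // associated data to select among pictures sharing that identical POC value.
    std::optional<Picture> selectReferencePicture(const std::vector<Picture>& decodedPictures,
                                                  int32_t targetPoc,
                                                  int32_t targetLayerId) {
        for (const Picture& pic : decodedPictures) {
            if (pic.poc == targetPoc && pic.layerId == targetLayerId)
                return pic;  // POC alone would be ambiguous; the respective data resolves it
        }
        return std::nullopt;
    }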
Abstract:
Provided are systems, methods, and computer-readable media for encoding and decoding video data. In various examples, a coding device can include multiple luma QP and chroma QP relationship tables. In performing quantization or inverse quantization on video data being encoded or decoded, respectively, the coding device can select a table. The table can be selected based on, for example, a slice type, a prediction mode, and/or a luminance value, among other factors. The coding device can then use the luma QP value to look up a chroma QP value from the table. The luma QP and chroma QP values can then be used in quantization or inverse quantization.
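A minimal C++ sketch of the described table selection and lookup, assuming for illustration that the table is chosen by slice type and indexed by the luma QP (table contents, names, and the QP range are placeholders, not values from the disclosure):

    #include <array>

    // Hypothetical luma-to-chroma QP mapping tables; values are placeholders,
    // not tables defined in any standard or in the disclosure.
    constexpr int kMaxQp = 63;
    using QpTable = std::array<int, kMaxQp + 1>;

    enum class SliceType { I, P, B };

    // Select one of several tables based on, e.g., slice type (other factors such
    // as prediction mode or a luminance value could be used in the same way).
    const QpTable& selectChromaQpTable(SliceType sliceType,
                                       const QpTable& intraTable,
                                       const QpTable& interTable) {
        return (sliceType == SliceType::I) ? intraTable : interTable;
    }

    // Look up the chroma QP from the selected table using the luma QP as the index.
    int chromaQpFromLumaQp(int lumaQp, const QpTable& table) {
        if (lumaQp < 0) lumaQp = 0;
        if (lumaQp > kMaxQp) lumaQp = kMaxQp;
        return table[lumaQp];
    }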
Abstract:
The present disclosure provides various techniques related to adaptive loop filtering (ALF), and in particular to geometry transformation-based ALF (GALF). In an aspect, a method for decoding video data includes receiving an encoded bitstream having coded video data from which reconstructed video units are generated, identifying multiple filter supports for the reconstructed video units, and filtering the reconstructed video units using the respective multiple filter supports to produce a decoded video output. Another method includes enabling block-level control of ALF of chroma components for the reconstructed video units; performing, for the reconstructed video units, the block-level ALF for the chroma components when ALF is enabled for one video block and skipping the block-level ALF for the chroma components when ALF is disabled for another video block; and generating, based on the enabled block-level control of ALF, a decoded video output. Related devices, means, and computer-readable media are also described.
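A simplified C++ sketch of the block-level on/off control for chroma ALF described above (structure and names are hypothetical; the actual GALF filter supports and classification are omitted):

    #include <cstdint>
    #include <vector>

    // Hypothetical per-block ALF control for a chroma component: when the
    // block-level flag is set the filter is applied, otherwise filtering is skipped.
    struct ChromaBlock {
        bool alfEnabled;               // block-level on/off control
        std::vector<int16_t> samples;  // reconstructed chroma samples
    };

    // Placeholder for an actual GALF filter, which would apply the taps of a
    // selected filter support (e.g., a diamond-shaped support) to each sample.
    void applyAlf(std::vector<int16_t>& samples) {
        (void)samples;  // no-op in this sketch
    }

    void filterChromaBlocks(std::vector<ChromaBlock>& blocks) {
        for (ChromaBlock& blk : blocks) {
            if (blk.alfEnabled)
                applyAlf(blk.samples);  // ALF enabled for this block
            // else: skip filtering for this block
        }
    }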
Abstract:
The techniques of this disclosure allow for wavefront parallel processing of video data with limited synchronization points. In one example, a method of decoding video data comprises synchronizing decoding of a first plurality of video block rows at a beginning of each video block row in the first plurality of video block rows, decoding the first plurality of video block rows in parallel, wherein the decoding does not include any synchronization between subsequent video blocks in the first plurality of video block rows, and synchronizing decoding of a second plurality of video block rows at a beginning of each video block row in the second plurality of video block rows.
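A toy C++ sketch of the limited-synchronization idea, assuming one thread per block row and a single wait at the beginning of each row (thread counts, block counts, and names are illustrative only):

    #include <atomic>
    #include <thread>
    #include <vector>

    constexpr int kRows = 4;
    constexpr int kBlocksPerRow = 16;
    constexpr int kLeadBlocks = 2;         // blocks decoded before releasing the next row

    std::atomic<bool> rowReleased[kRows];  // static storage: zero-initialized to false

    void decodeBlock(int /*row*/, int /*col*/) { /* entropy-decode one block */ }

    void decodeRow(int row) {
        // The only synchronization point: wait at the beginning of the row until
        // the row above has been released (i.e., has decoded its leading blocks).
        if (row > 0)
            while (!rowReleased[row - 1].load(std::memory_order_acquire))
                std::this_thread::yield();

        for (int col = 0; col < kBlocksPerRow; ++col) {
            decodeBlock(row, col);
            if (col + 1 == kLeadBlocks)
                rowReleased[row].store(true, std::memory_order_release);
            // No synchronization between subsequent blocks within the row.
        }
    }

    int main() {
        std::vector<std::thread> workers;
        for (int r = 0; r < kRows; ++r) workers.emplace_back(decodeRow, r);
        for (std::thread& t : workers) t.join();
        return 0;
    }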
Abstract:
This disclosure proposes techniques for encoding and decoding transform coefficients in a video coding process. In particular, this disclosure proposes techniques for determining whether or not to apply a sign data hiding process for a group of transform coefficients, and techniques for applying the sign data hiding process. In one example, this disclosure describes a method for decoding video data comprising determining a block of transform coefficients, determining whether to perform a sign data hiding process for at least one transform coefficient in the block of transform coefficients based on a single variable compared to a threshold, and decoding sign information for the block based on the determination of whether to perform the sign data hiding process.
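An illustrative C++ sketch of a sign data hiding decision driven by a single variable compared to a threshold; the choice of variable (the scan distance between the first and last nonzero coefficients) and the parity-based sign inference shown here are common conventions used for illustration, not necessarily the specific method of the disclosure:

    #include <cstdlib>
    #include <vector>

    // Decide whether sign data hiding (SDH) applies to a coefficient group using
    // a single variable compared against a threshold.
    bool useSignDataHiding(const std::vector<int>& coeffGroupInScanOrder,
                           int threshold) {
        int first = -1, last = -1;
        for (int i = 0; i < static_cast<int>(coeffGroupInScanOrder.size()); ++i) {
            if (coeffGroupInScanOrder[i] != 0) {
                if (first < 0) first = i;
                last = i;
            }
        }
        if (first < 0) return false;         // no nonzero coefficients in the group
        return (last - first) >= threshold;  // single variable vs. threshold
    }

    // When SDH is used, the sign of one coefficient (e.g., the first nonzero one)
    // is not signalled; it is inferred from the parity of the sum of levels.
    int inferHiddenSign(const std::vector<int>& coeffGroupInScanOrder) {
        long sum = 0;
        for (int c : coeffGroupInScanOrder) sum += std::abs(c);
        return (sum % 2 == 0) ? +1 : -1;     // even parity -> positive sign
    }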
Abstract:
A video coder can be configured to code video data by determining a first block size threshold for a block of video data; determining a second block size threshold, wherein the second block size threshold is smaller than the first block size threshold; partitioning the block of video data into smaller sub-blocks; in response to determining that a first partition of the partitioned block is smaller than or equal to the first block size threshold, determining that blocks within the first partition belong to a parallel estimation area; and in response to determining that a second partition of the partitioned block is smaller than or equal to the second block size threshold, determining that blocks within the second partition belong to an area for a shared candidate list.
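A short C++ sketch of the two-threshold classification, with partition sizes measured in samples purely for illustration (the disclosure does not fix how size is compared to the thresholds; names are hypothetical):

    // Illustrative partition record.
    struct Partition {
        int width;
        int height;
    };

    inline int partitionSize(const Partition& p) { return p.width * p.height; }

    // A partition at or below the first (larger) threshold places its blocks in a
    // parallel estimation area.
    bool inParallelEstimationArea(const Partition& p, int firstThreshold) {
        return partitionSize(p) <= firstThreshold;
    }

    // A partition at or below the second (smaller) threshold additionally shares a
    // single candidate list among its blocks.
    bool usesSharedCandidateList(const Partition& p, int secondThreshold) {
        return partitionSize(p) <= secondThreshold;
    }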
Abstract:
Methods and devices for decoding video data include a processor configured to determine which picture is a collocated picture, and to determine a location of an associated block of the video data in the collocated picture that corresponds to the current block of video data in the current coding picture, based on using previously decoded blocks in the current coding picture to find an initial motion vector between the associated block in the collocated picture and the current block in the current coding picture, where the associated block of the video data includes at least one first derived motion vector. The processor is further configured to determine at least one second derived motion vector associated with the current block in the current coding picture, when the initial motion vector points to the collocated picture, based on the at least one first derived motion vector associated with the associated block in the collocated picture.
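A heavily simplified C++ sketch of the conditional derivation described above; the data structures and the way the associated block is located are hypothetical, and only the condition that the initial motion vector points to the collocated picture is modeled:

    // Hypothetical motion data types.
    struct MotionVector { int x; int y; int refPictureIdx; };

    struct Block {
        MotionVector mv;  // the "first derived motion vector" stored with the associated block
    };

    // Derive a second motion vector for the current block only when the initial
    // motion vector (found from previously decoded neighbouring blocks) points to
    // the collocated picture; the associated block located by that vector supplies
    // the motion information.
    bool deriveSecondMv(const MotionVector& initialMv,
                        int collocatedPictureIdx,
                        const Block& associatedBlockInCollocated,
                        MotionVector& secondDerivedMv) {
        if (initialMv.refPictureIdx != collocatedPictureIdx)
            return false;                                  // condition not met: no derivation
        secondDerivedMv = associatedBlockInCollocated.mv;  // based on the first derived MV
        return true;
    }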
Abstract:
A method of decoding video data includes parsing a sub-prediction unit motion flag from received encoded video data, deriving a list of sub-prediction unit level motion prediction candidates if the sub-prediction unit motion flag is active, deriving a list of prediction unit level motion prediction candidates if the sub-prediction unit motion flag is not active, and decoding the encoded video data using a selected motion vector predictor.
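A brief C++ sketch of the flag-driven candidate list selection (candidate derivation is stubbed out; names are illustrative only):

    #include <vector>

    // Hypothetical candidate type and stubbed derivation routines.
    struct MotionCandidate { int mvx; int mvy; int refIdx; };

    std::vector<MotionCandidate> deriveSubPuCandidates() { return {{0, 0, 0}}; }  // sub-PU level list
    std::vector<MotionCandidate> derivePuCandidates()    { return {{0, 0, 0}}; }  // PU level list

    // The parsed flag selects which candidate list is derived; a candidate from
    // that list is then used as the motion vector predictor for decoding.
    MotionCandidate selectPredictor(bool subPuMotionFlag, int candidateIndex) {
        std::vector<MotionCandidate> candidates =
            subPuMotionFlag ? deriveSubPuCandidates() : derivePuCandidates();
        if (candidateIndex < 0 || candidateIndex >= static_cast<int>(candidates.size()))
            candidateIndex = 0;  // guard for this sketch
        return candidates[candidateIndex];
    }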
Abstract:
Techniques are described for determining a scan order for transform coefficients of a block. The techniques may determine contexts for encoding or decoding significance syntax elements for the transform coefficients based on the determined scan order. A video encoder may encode the significance syntax elements and a video decoder may decode the significance syntax elements based on the determined contexts.
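An illustrative C++ sketch pairing a scan order (a simple diagonal scan, as one possible determined order) with a scan-order-dependent context for significance flags; the context rule shown is a simplification for illustration, not the rule from the disclosure:

    #include <utility>
    #include <vector>

    // Produce a scan order for a width x height block of transform coefficients;
    // here a plain diagonal scan is used as an example of a determined order.
    std::vector<std::pair<int, int>> diagonalScan(int width, int height) {
        std::vector<std::pair<int, int>> order;
        for (int d = 0; d < width + height - 1; ++d) {
            for (int y = 0; y <= d; ++y) {
                int x = d - y;
                if (x < width && y < height) order.emplace_back(x, y);
            }
        }
        return order;
    }

    // Context depends on the determined scan order: earlier positions in the scan
    // use one context, later positions another (a simplification for illustration).
    int significanceContext(int scanPos, int totalPositions) {
        return (scanPos < totalPositions / 2) ? 0 : 1;
    }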