Abstract:
Techniques for coding video data include coding a plurality of blocks of video data, wherein at least one block of the plurality of blocks of video data is coded using a coding mode that is one of an intra pulse code modulation (IPCM) coding mode and a lossless coding mode. In some examples, the lossless coding mode may use prediction. The techniques further include assigning a non-zero quantization parameter (QP) value for the at least one block coded using the coding mode. The techniques also include performing deblocking filtering on one or more of the plurality of blocks of video data based on the coding mode used to code the at least one block and the assigned non-zero QP value for the at least one block.
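A minimal sketch of the idea in Python, assuming a simplified block representation and a toy threshold formula rather than the standard deblocking tables: the IPCM/lossless block receives a non-zero assigned QP (here taken from a hypothetical signaled value), and the edge parameters are then derived from the assigned QPs of the two neighboring blocks.

```python
# Minimal sketch (assumptions: block objects with mode/qp fields and a toy
# threshold formula; this is not the HEVC-conformant deblocking derivation).

from dataclasses import dataclass

IPCM = "ipcm"
LOSSLESS = "lossless"

@dataclass
class Block:
    mode: str             # "ipcm", "lossless", or "regular"
    qp: int               # QP used for quantization (unused for IPCM)
    assigned_qp: int = 0  # non-zero QP assigned for deblocking purposes

def assign_deblocking_qp(block: Block, signaled_qp: int) -> None:
    """Assign a non-zero QP to IPCM / lossless blocks so deblocking
    parameters can still be derived for their edges."""
    if block.mode in (IPCM, LOSSLESS):
        block.assigned_qp = signaled_qp   # hypothetical: taken from the bitstream
    else:
        block.assigned_qp = block.qp

def deblock_edge(p_block: Block, q_block: Block) -> dict:
    """Derive simplified deblocking parameters for the edge between two blocks."""
    avg_qp = (p_block.assigned_qp + q_block.assigned_qp + 1) // 2
    beta = max(0, 2 * avg_qp - 20)   # toy threshold, not the standard table
    tc = max(0, avg_qp - 18)
    return {"avg_qp": avg_qp, "beta": beta, "tc": tc, "filter": beta > 0}

if __name__ == "__main__":
    pcm = Block(mode=IPCM, qp=0)
    reg = Block(mode="regular", qp=30)
    assign_deblocking_qp(pcm, signaled_qp=28)
    assign_deblocking_qp(reg, signaled_qp=28)
    print(deblock_edge(pcm, reg))
```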
Abstract:
An example device for coding point cloud data includes a memory configured to store point cloud data; and one or more processors implemented in circuitry and configured to: determine whether inter prediction data is coded for a current node of an octree of the point cloud data; determine whether planar mask data is coded for the current node; when at least one of the inter prediction data or the planar mask data is coded for the current node, avoid coding a single occupancy value for the current node, the single occupancy value indicating whether only a single sub-node of the current node includes a point; and code the current node. The processors may also be configured to determine a context for entropy coding the planar mask data according to planar mask data for a collocated node in a reference frame when the planar mask data is coded.
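A hedged sketch of the conditional parsing, assuming a toy entropy decoder, dict-based nodes, and an illustrative planar-mask context space (this is not the G-PCC/TMC13 syntax): the single-occupancy flag is read only when neither inter prediction data nor planar mask data was coded for the node, and the planar-mask context is taken from the collocated node in the reference frame.

```python
# Minimal sketch (assumptions: a toy entropy decoder reading pre-parsed symbols,
# dict-based nodes, and a hypothetical planar-mask context space; not TMC13).

class ToyDecoder:
    def __init__(self, symbols):
        self.symbols = list(symbols)
    def read(self, name, ctx=None):
        # A context would steer an arithmetic coder here; the toy just pops symbols.
        return self.symbols.pop(0)

def planar_context(node, ref_frame):
    """Context index from the collocated node's planar mask in the reference frame."""
    collocated = (ref_frame or {}).get(node["pos"])
    if collocated and collocated.get("planar_mask") is not None:
        return collocated["planar_mask"]
    return 0

def decode_node(dec, node, inter_enabled, planar_enabled, ref_frame):
    node["inter_pred"] = dec.read("inter") if inter_enabled else None
    node["planar_mask"] = None
    if planar_enabled:
        ctx = planar_context(node, ref_frame)
        node["planar_mask"] = dec.read("planar_mask", ctx=ctx)

    # Skip the single-occupancy flag when inter prediction data or planar mask
    # data was coded for the node; infer it instead (simplified here as False).
    if node["inter_pred"] is None and node["planar_mask"] is None:
        node["single_occupancy"] = dec.read("single_occupancy")
    else:
        node["single_occupancy"] = False

    node["occupancy"] = dec.read("occupancy")
    return node

if __name__ == "__main__":
    ref = {(0, 0, 0): {"planar_mask": 0b101}}
    dec = ToyDecoder([1, 0b011, 0b10010001])  # inter flag, planar mask, occupancy byte
    print(decode_node(dec, {"pos": (0, 0, 0)}, True, True, ref))
```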
Abstract:
An example device for decoding point cloud data includes: a memory configured to store point cloud data; and one or more processors implemented in circuitry and configured to: decode encoded point cloud geometry data for a point cloud to form reconstructed point cloud geometry data for the point cloud; downscale the point cloud geometry data to form downscaled point cloud geometry data; decode attribute data for the point cloud using the downscaled point cloud geometry data; apply the attribute data to the reconstructed point cloud geometry data to form intermediate point cloud data; and apply a residual learning network to the intermediate point cloud data to form a reconstructed point cloud.
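One way this pipeline could be arranged, sketched with numpy under heavy simplification: the attribute transfer is a nearest-neighbour lookup and the residual network is a stand-in that adds a (here zero) correction back to its input; a real system would decode an attribute bitstream and run a trained model.

```python
# Minimal sketch (assumptions: numpy arrays of XYZ points and per-point attributes,
# nearest-neighbour attribute transfer, and a stand-in residual network).

import numpy as np

def downscale(points, factor=2):
    """Quantize geometry to a coarser grid (duplicates removed)."""
    return np.unique(points // factor, axis=0)

def transfer_attributes(coarse_pts, coarse_attrs, fine_pts):
    """Apply attributes decoded on the downscaled geometry to the full-resolution
    reconstructed geometry via nearest-neighbour lookup."""
    out = np.empty((len(fine_pts), coarse_attrs.shape[1]))
    for i, p in enumerate(fine_pts):
        j = np.argmin(np.sum((coarse_pts * 2 - p) ** 2, axis=1))
        out[i] = coarse_attrs[j]
    return out

def residual_network(points, attrs):
    """Stand-in for a learned residual network: predicts a correction that is
    added back to its input (the residual-learning structure)."""
    correction = np.zeros_like(attrs)  # a real model would infer this
    return attrs + correction

def decode_point_cloud(reco_geometry, coarse_attrs_decoder):
    coarse_geometry = downscale(reco_geometry)
    coarse_attrs = coarse_attrs_decoder(coarse_geometry)   # attribute decoding step
    intermediate = transfer_attributes(coarse_geometry, coarse_attrs, reco_geometry)
    refined = residual_network(reco_geometry, intermediate)
    return reco_geometry, refined

if __name__ == "__main__":
    geom = np.array([[0, 0, 0], [1, 0, 0], [2, 2, 2], [3, 3, 3]])
    fake_attr_dec = lambda pts: np.full((len(pts), 3), 128.0)  # constant attributes
    print(decode_point_cloud(geom, fake_attr_dec))
```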
Abstract:
Example devices, systems, and techniques are described. An example technique includes determining that resampling is to be applied to a first reference frame for a slice of point cloud data or a frame of the point cloud data. The technique includes applying resampling to the first reference frame to generate a resampled reference frame. The technique includes determining one or more inter prediction candidates based on the resampled reference frame. The technique includes processing the slice of the point cloud data or the frame of the point cloud data based on the one or more inter prediction candidates.
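A small illustration, assuming the resampling trigger is a mismatch between a current and reference scale factor and that the inter prediction candidates are simply the nearest points of the resampled reference; function and parameter names are placeholders.

```python
# Minimal sketch (assumptions: numpy point arrays, a global scale factor as the
# resampling trigger, and nearest-point candidates; names are illustrative).

import numpy as np

def needs_resampling(cur_scale, ref_scale):
    """Decide whether the reference frame must be resampled for this slice/frame."""
    return cur_scale != ref_scale

def resample_reference(ref_points, cur_scale, ref_scale):
    """Rescale the reference geometry into the current frame's coordinate grid."""
    return np.round(ref_points * (cur_scale / ref_scale)).astype(ref_points.dtype)

def inter_candidates(point, ref_points, k=2):
    """Return the k nearest reference points as inter prediction candidates."""
    d = np.sum((ref_points - point) ** 2, axis=1)
    return ref_points[np.argsort(d)[:k]]

def process_slice(cur_points, ref_points, cur_scale, ref_scale):
    if needs_resampling(cur_scale, ref_scale):
        ref_points = resample_reference(ref_points, cur_scale, ref_scale)
    return [inter_candidates(p, ref_points) for p in cur_points]

if __name__ == "__main__":
    cur = np.array([[4, 4, 4], [8, 0, 0]])
    ref = np.array([[2, 2, 2], [4, 0, 0], [1, 1, 1]])
    print(process_slice(cur, ref, cur_scale=2.0, ref_scale=1.0))
```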
Abstract:
An example device for decoding video data includes a memory configured to store video data; and one or more processors implemented in circuitry and configured to: generate an intra-prediction block for a current block of video data using an angular intra-prediction mode, the angular intra-prediction mode being an upper-right angular intra-prediction mode or a lower-left angular intra-prediction mode; determine a prediction direction of the angular intra-prediction mode; for at least one sample of the intra-prediction block for the current block: calculate a gradient term for the at least one sample along the prediction direction; and combine a value of an intra-predicted sample of the intra-prediction block at a position of the at least one sample of the intra-prediction block with the gradient term to produce a value of the at least one sample of the intra-prediction block; and decode the current block using the intra-prediction block.
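A rough sketch of the gradient refinement, assuming a square block, an integer per-column displacement along the prediction direction, and a simple distance-decaying weight; it mirrors the gradient/PDPC-style idea described above rather than reproducing an exact standard derivation.

```python
# Minimal sketch (assumptions: square block, integer-slope angle, toy weights;
# not the exact VVC gradient derivation).

import numpy as np

def angular_gradient_refine(pred, left_ref, angle_dy_per_dx):
    """For each sample, fetch the left-column reference sample that the prediction
    direction points back to, form a gradient term against the predicted sample,
    and combine it with the predicted sample."""
    n = pred.shape[0]
    out = pred.astype(float).copy()
    for y in range(n):
        for x in range(n):
            # Reference sample on the left column along the (lower-left) direction.
            ref_y = y + (x + 1) * angle_dy_per_dx
            if ref_y >= len(left_ref):
                continue
            gradient = left_ref[ref_y] - pred[y, x]     # gradient term
            weight = max(0.0, 0.5 - 0.125 * x)          # decays with distance
            out[y, x] = pred[y, x] + weight * gradient  # combine with prediction
    return np.round(out).astype(pred.dtype)

if __name__ == "__main__":
    pred = np.full((4, 4), 100, dtype=np.int32)  # intra-predicted block
    left = np.array([90, 92, 94, 96, 98, 100, 102, 104], dtype=np.int32)
    print(angular_gradient_refine(pred, left, angle_dy_per_dx=1))
```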
Abstract:
A method of encoding point cloud data includes determining that a node of the point cloud data is to be encoded in inferred direct coding mode (IDCM); determining that a number of points in the node is less than a threshold; and in a condition where the number of points in the node is less than the threshold, encoding the node with IDCM and with planar mode being disabled.
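A compact encoder-side sketch, assuming dict-based nodes, a list as a stand-in bit-writer, and an illustrative threshold of 2 points; the actual IDCM syntax is more involved.

```python
# Minimal sketch (assumptions: dict-based nodes, a list as bit-writer, and an
# illustrative point-count threshold; not the G-PCC IDCM syntax).

def encode_node(bits, node, idcm_threshold=2):
    if node["eligible_idcm"] and len(node["points"]) < idcm_threshold:
        # Few points: encode with IDCM and with planar mode disabled, so no
        # planar flags or plane positions are written for this node.
        bits.append(("idcm_flag", 1))
        for p in node["points"]:
            bits.append(("point_xyz", p))
    else:
        bits.append(("idcm_flag", 0))
        encode_planar_and_occupancy(bits, node)  # normal octree path (planar allowed)

def encode_planar_and_occupancy(bits, node):
    bits.append(("planar_mask", node.get("planar_mask", 0)))
    bits.append(("occupancy", node.get("occupancy", 0)))

if __name__ == "__main__":
    out = []
    encode_node(out, {"eligible_idcm": True, "points": [(1, 2, 3)]})
    print(out)
```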
Abstract:
A device for decoding encoded point cloud data can be configured to, for a point of a point cloud, determine a first color value for a first color component based on a first predicted value and a first residual value; apply a scaling factor to the first residual value to determine a predicted second residual value, wherein the scaling factor has one or both of a non-integer value or an absolute value greater than one; for the point of the point cloud, receive a second residual value in the encoded point cloud data; determine a final second residual value based on the predicted second residual value and the received second residual value; and for the point of the point cloud, determine a second color value for a second color component based on a second predicted value and the final second residual value.
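The residual scaling chain can be illustrated with a fixed-point scale factor (3/2 here, i.e. non-integer and greater than one in magnitude); the names and values are illustrative, not from any particular codec.

```python
# Minimal sketch (assumptions: two colour components per point and a fixed-point
# representation of the scaling factor; names and values are illustrative).

def decode_second_component(pred1, res1, pred2, recv_res2, scale_num=3, scale_den=2):
    """Cross-component prediction: the first component's residual, scaled by a
    factor that may be non-integer and/or larger than one in magnitude, predicts
    the second component's residual."""
    c1 = pred1 + res1                                 # first colour value
    predicted_res2 = (res1 * scale_num) // scale_den  # scaled residual (3/2 here)
    final_res2 = predicted_res2 + recv_res2           # add the coded correction
    c2 = pred2 + final_res2                           # second colour value
    return c1, c2

if __name__ == "__main__":
    # First component: predicted 100, residual 8. Second: predicted 50, coded residual -2.
    print(decode_second_component(pred1=100, res1=8, pred2=50, recv_res2=-2))
```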
Abstract:
A G-PCC coder is configured to receive the point cloud data, determine a final quantization parameter (QP) value for the point cloud data as a function of a node QP offset multiplied by a geometry QP multiplier, and code the point cloud data using the final QP value to create a coded point cloud.
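A minimal sketch of the QP derivation, assuming a base geometry QP taken from a parameter set and an illustrative clipping range; both are assumptions, not part of the description above.

```python
# Minimal sketch (assumptions: a base geometry QP from a parameter set and an
# illustrative clipping range).

def final_geometry_qp(base_qp, node_qp_offset, geom_qp_multiplier, qp_min=0, qp_max=51):
    """Final QP = base QP plus (node QP offset * geometry QP multiplier), clipped."""
    qp = base_qp + node_qp_offset * geom_qp_multiplier
    return max(qp_min, min(qp_max, qp))

if __name__ == "__main__":
    print(final_geometry_qp(base_qp=4, node_qp_offset=3, geom_qp_multiplier=2))  # -> 10
```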
Abstract:
In some examples, a method of decoding a point cloud includes decoding an initial QP value from an attribute parameter set. The method also includes determining a first QP value for a first component of an attribute of point cloud data from the initial QP value. The method further includes determining a QP offset value for a second component of the attribute of the point cloud data and determining a second QP value for the second component of the attribute from the first QP value and from the QP offset value. The method includes decoding the point cloud data based on the first QP value and further based on the second QP value.
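A small sketch of the two-component QP derivation; the slice-level delta and the variable names are placeholders, the essential step being that the second component's QP is derived from the first component's QP plus a signalled offset.

```python
# Minimal sketch (assumptions: a slice-level delta and placeholder names such as
# aps_initial_qp; not the exact G-PCC attribute QP derivation).

def derive_component_qps(aps_initial_qp, slice_qp_delta, second_comp_qp_offset):
    # First component QP is derived from the initial QP decoded from the
    # attribute parameter set (plus an assumed slice-level delta).
    qp_first = aps_initial_qp + slice_qp_delta
    # Second component QP is the first component QP plus the signalled offset.
    qp_second = qp_first + second_comp_qp_offset
    return qp_first, qp_second

if __name__ == "__main__":
    print(derive_component_qps(aps_initial_qp=34, slice_qp_delta=-2, second_comp_qp_offset=6))
```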
Abstract:
A video coder may be configured to code video data by splitting a coding unit (CU) of video data using intra sub-partition (ISP) mode to form a set of prediction blocks. The video coder may group a plurality of the prediction blocks from the set of prediction blocks into a first prediction block group (PBG). The video coder may reconstruct samples of prediction blocks included in the first PBG independently of samples of other prediction blocks included in the first PBG.
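A loose sketch of the grouping idea, assuming a horizontal ISP split, groups of two sub-partitions, and DC-style prediction from a reference row shared by the whole group so that sub-partitions within a group do not depend on each other's reconstructions; this is not the VVC ISP process.

```python
# Minimal sketch (assumptions: horizontal split of a square CU, PBGs of two
# sub-partitions, DC-style prediction from a shared reference row).

import numpy as np

def split_isp_horizontal(cu_height, cu_width, num_parts=4):
    h = cu_height // num_parts
    return [(i * h, h, cu_width) for i in range(num_parts)]  # (y0, height, width)

def group_into_pbgs(partitions, group_size=2):
    return [partitions[i:i + group_size] for i in range(0, len(partitions), group_size)]

def reconstruct_pbg(pbg, ref_row_above):
    """All sub-partitions in a PBG predict from the same reference row above the
    group, so each is reconstructed independently of the others in the group."""
    blocks = []
    for (y0, h, w) in pbg:
        pred = np.full((h, w), int(ref_row_above.mean()))  # shared reference for the PBG
        blocks.append((y0, pred))
    return blocks

if __name__ == "__main__":
    parts = split_isp_horizontal(16, 16)
    pbgs = group_into_pbgs(parts)
    ref = np.array([100, 102, 104, 106] * 4)
    for g in pbgs:
        print(reconstruct_pbg(g, ref))
```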