Abstract:
Techniques for advanced residual prediction (ARP) for coding video data may include inter-view ARP. Inter-view ARP may include identifying a disparity motion vector (DMV) for a current video block. The DMV is used for inter-view prediction of the current video block based on an inter-view reference video block. The techniques for inter-view ARP may also include identifying temporal reference video blocks in the current and reference views based on a temporal motion vector (TMV) of the inter-view reference video block, and determining a residual predictor block based on a difference between the temporal reference video blocks.
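The inter-view ARP derivation described above can be sketched as follows: the residual predictor is the difference between the temporal reference blocks in the current and reference views, and it is added to the inter-view reference block located by the DMV. This is a minimal NumPy sketch; the function name, block shapes, and the optional weighting factor are illustrative assumptions, not part of the abstract.

```python
import numpy as np

def interview_arp_prediction(inter_view_ref_block,
                             temporal_ref_cur_view,
                             temporal_ref_ref_view,
                             weight=1.0):
    """Inter-view ARP sketch: the residual predictor is the difference
    between the temporal reference blocks in the current and reference
    views; it is added (optionally weighted) to the inter-view reference
    block identified by the DMV."""
    residual_predictor = (temporal_ref_cur_view.astype(np.int32)
                          - temporal_ref_ref_view.astype(np.int32))
    return (inter_view_ref_block.astype(np.int32)
            + np.round(weight * residual_predictor).astype(np.int32))
```

In practice a codec would also clip the result to the sample range and signal the weighting factor; both are omitted here for brevity.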
Abstract:
In an example, a process for coding video data includes determining a partitioning pattern for a block of depth values comprising assigning one or more samples of the block to a first partition and assigning one or more other samples of the block to a second partition. The process also includes determining a predicted value for at least one of the first partition and the second partition based on the determined partitioning pattern. The process also includes coding the at least one of the first partition and the second partition based on the predicted value.
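The per-partition prediction can be sketched as follows, with the partitioning pattern expressed as a boolean mask. Using the mean of each partition's samples as its predicted value is an assumption for illustration; the abstract does not fix how the predicted value is derived.

```python
import numpy as np

def predict_partitions(depth_block, pattern):
    """Assign samples to two partitions via a boolean pattern (False ->
    first partition, True -> second) and predict each partition with the
    mean of its own samples (mean prediction is assumed here)."""
    pred = np.empty_like(depth_block)
    for part in (False, True):
        mask = (pattern == part)
        if mask.any():
            pred[mask] = int(round(depth_block[mask].mean()))
    return pred
```

A coder could then signal only the pattern and, if needed, a small correction to each partition's predicted value instead of coding every depth sample.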
Abstract:
A video coder is configured to apply a separable bilinear interpolation filter when determining reference blocks as part of advanced residual prediction. In particular, the video coder may determine, based on a motion vector of a current block in a current picture of video data, a location of a first reference block in a first reference picture. The video coder may also determine a location of a second reference block in a second reference picture. The video coder may apply a separable bilinear interpolation filter to samples of the second reference picture to determine samples of the second reference block. The video coder may apply the separable bilinear interpolation filter to samples of a third reference picture to determine samples of a third reference block. Each respective sample of a predictive block may be equal to a respective sample of the first reference block plus a respective residual predictor sample.
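A separable bilinear filter interpolates a fractional-position block with a horizontal pass followed by a vertical pass. The sketch below uses floating point for clarity; real coders use fixed-point arithmetic at defined fractional precisions, and the function name and interface are assumptions.

```python
import numpy as np

def bilinear_separable(ref, y, x, h, w):
    """Fetch an h-by-w block from `ref` at fractional position (y, x)
    with a separable bilinear filter: horizontal pass, then vertical.
    The caller must keep the (h+1)-by-(w+1) footprint inside `ref`."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    window = ref[y0:y0 + h + 1, x0:x0 + w + 1].astype(np.float64)
    # Horizontal pass: blend each sample with its right neighbor.
    horiz = (1 - fx) * window[:, :w] + fx * window[:, 1:w + 1]
    # Vertical pass: blend each row with the row below it.
    return (1 - fy) * horiz[:h, :] + fy * horiz[1:h + 1, :]
```

Using the shorter bilinear filter for the second and third reference blocks, rather than the codec's full-length interpolation filter, reduces the memory bandwidth and arithmetic cost of the extra block fetches that ARP requires.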
Abstract:
A device for decoding video data includes a memory configured to store video data and one or more processors configured to: receive a first block of the video data; determine a quantization parameter for the first block; in response to determining that the first block is coded using a color-space transform mode for residual data of the first block, modify the quantization parameter for the first block; perform a dequantization process for the first block based on the modified quantization parameter for the first block; receive a second block of the video data; receive a difference value indicating a difference between a quantization parameter for the second block and the quantization parameter for the first block; determine the quantization parameter for the second block based on the received difference value and the quantization parameter for the first block; and decode the second block based on the determined quantization parameter.
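The two QP derivations described above can be sketched as follows. The size of the color-space-transform offset is an assumption for illustration (a value of −5 is used for some components in HEVC screen content coding's adaptive color transform), and the sketch assumes the signaled difference is relative to the first block's QP as determined before the transform-mode modification.

```python
def first_block_qp(signaled_qp, uses_color_space_transform, cst_offset=-5):
    """If the block is coded in color-space transform mode, modify its
    QP by an offset before dequantization (offset value assumed)."""
    if uses_color_space_transform:
        return signaled_qp + cst_offset
    return signaled_qp

def second_block_qp(first_qp, delta_qp):
    """The second block's QP is derived from the received difference
    value plus the first block's QP."""
    return first_qp + delta_qp
```

Signaling only a difference for the second block keeps the bitstream compact while still letting the decoder recover an exact QP for dequantization.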
Abstract:
A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list.
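List construction from a reference picture set that includes an inter-view subset can be sketched as below. Appending the inter-view reference pictures after the temporal ones is one possible initial ordering, assumed here for illustration; the abstract does not mandate it.

```python
def build_ref_pic_list(temporal_refs, inter_view_refs, list_size):
    """Sketch: build an initial reference picture list from a reference
    picture set whose inter-view subset is appended after the temporal
    references (ordering assumed), truncated to the signaled list size."""
    return (list(temporal_refs) + list(inter_view_refs))[:list_size]
```

Both encoder and decoder run the same construction from the signaled reference picture set, so the lists stay synchronized without transmitting the list itself.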
Abstract:
A video coder decodes a coding unit (CU) of video data. In decoding the video data, the video coder determines that the CU was encoded using color-space conversion. The video coder determines an initial quantization parameter (QP), determines a final QP equal to a sum of the initial QP and a QP offset, inverse quantizes, based on the final QP, a coefficient block, and then reconstructs the CU based on the inverse-quantized coefficient block.
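The final-QP derivation and scaling can be sketched as follows. The levelScale table {40, 45, 51, 57, 64, 72} is the one HEVC uses, but the shift handling here is deliberately simplified (no transform-size- or bit-depth-dependent shift, no clipping), so treat it as an illustration rather than the normative process.

```python
def inverse_quantize(coeffs, initial_qp, qp_offset):
    """Derive the final QP as initial QP + offset, then scale each
    coefficient by an HEVC-style step size (simplified: fixed shift,
    no clipping)."""
    final_qp = initial_qp + qp_offset
    level_scale = [40, 45, 51, 57, 64, 72]
    scale = level_scale[final_qp % 6] << (final_qp // 6)
    return [(c * scale) >> 6 for c in coeffs], final_qp
```

Because the step size doubles every 6 QP units, adding a color-space-conversion offset to the QP compensates for the change in residual energy that the conversion introduces.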
Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine a bit depth of a luma component of the video data and a bit depth of a chroma component of the video data. In response to the bit depth of the luma component being different than the bit depth of the chroma component, the video coder may modify one or both of the bit depth of the luma component and the bit depth of the chroma component such that the bit depths are equal. The video coder may further apply the color-space conversion process in encoding the video data.
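One way to equalize the bit depths is to left-shift the samples of the lower-depth component up to the greater bit depth before the conversion; that choice is an assumption for this sketch, since the abstract allows modifying either or both components.

```python
def align_bit_depths(luma, chroma, luma_bd, chroma_bd):
    """If luma and chroma bit depths differ, left-shift samples of the
    lower-depth component so both share the greater bit depth before
    applying the color-space conversion (one possible alignment)."""
    if luma_bd == chroma_bd:
        return luma, chroma, luma_bd
    target = max(luma_bd, chroma_bd)
    luma = [s << (target - luma_bd) for s in luma]
    chroma = [s << (target - chroma_bd) for s in chroma]
    return luma, chroma, target
```

Equal bit depths matter because a color-space transform mixes luma and chroma samples in the same arithmetic; mismatched scales would otherwise bias the transformed components.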
Abstract:
A method for motion estimation for screen and non-natural content coding is disclosed. In one aspect, the method may include selecting a candidate block of a first frame of the video data for matching with a current block of a second frame of the video data, calculating a first partial matching cost for matching a first subset of samples of the candidate block to the current block, and determining whether the candidate block has a lowest matching cost with the current block based at least in part on the first partial matching cost.
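The partial-cost idea can be sketched with a sum-of-absolute-differences (SAD) search: compute the cost over a subset of rows first, and skip the rest of the candidate when that partial cost already reaches the best full cost seen. SAD as the matching cost and the row-subset choice are assumptions for illustration.

```python
def partial_sad_search(current, candidates, subset_rows=2):
    """Block-matching sketch with partial-cost early termination:
    evaluate SAD over the first `subset_rows` rows; if that partial cost
    is already no better than the best full cost so far, the candidate
    cannot win, so its remaining rows are never evaluated."""
    best_cost, best_idx = float('inf'), -1
    for i, cand in enumerate(candidates):
        cost = sum(abs(a - b)
                   for ra, rb in zip(current[:subset_rows], cand[:subset_rows])
                   for a, b in zip(ra, rb))
        if cost >= best_cost:
            continue  # partial cost already too high
        cost += sum(abs(a - b)
                    for ra, rb in zip(current[subset_rows:], cand[subset_rows:])
                    for a, b in zip(ra, rb))
        if cost < best_cost:
            best_cost, best_idx = cost, i
    return best_idx, best_cost
```

For screen content, where exact matches are common, a strong early candidate drives the partial-cost threshold to zero quickly and most remaining candidates are rejected after the subset pass.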
Abstract:
A video encoder generates a bitstream that includes a reference picture list modification (RPLM) command. The RPLM command belongs to a type of RPLM commands for inserting short-term reference pictures into reference picture lists. The RPLM command instructs a video decoder to insert a synthetic reference picture into the reference picture list. The video decoder decodes, based at least in part on syntax elements parsed from the bitstream, one or more view components and generates, based at least in part on the one or more view components, the synthetic reference picture. The video decoder modifies, in response to the RPLM command, a reference picture list to include the synthetic reference picture. The video decoder may use one or more pictures in the reference picture list as reference pictures to perform inter prediction on one or more video blocks of a picture.
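The list modification itself reduces to a positional insert driven by the command, sketched below. The command representation (an index into the list) is an assumption; the abstract only states that the command causes the synthetic picture to be inserted.

```python
def apply_rplm_insert(ref_pic_list, synthetic_pic, insert_idx):
    """Apply an RPLM-style command: insert a (view-synthesized)
    reference picture at the commanded position, leaving the original
    list unmodified and returning the modified copy."""
    modified = list(ref_pic_list)
    modified.insert(insert_idx, synthetic_pic)
    return modified
```

Reusing the short-term-insertion command type means the decoder's existing list-modification machinery handles synthetic pictures without a new command syntax.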
Abstract:
A prediction unit (PU) of a coding unit (CU) is split into two or more sub-PUs including a first sub-PU and a second sub-PU. A first motion vector of a first type is obtained for the first sub-PU and a second motion vector of the first type is obtained for the second sub-PU. A third motion vector of a second type is obtained for the first sub-PU and a fourth motion vector of the second type is obtained for the second sub-PU, such that the second type is different than the first type. A first portion of the CU corresponding to the first sub-PU is coded according to advanced residual prediction (ARP) using the first and third motion vectors. A second portion of the CU corresponding to the second sub-PU is coded according to ARP using the second and fourth motion vectors.
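The sub-PU split and per-sub-PU ARP coding can be sketched as follows. Treating the two motion vector types as a temporal motion vector and a disparity motion vector is an assumption consistent with ARP, and the derivation callbacks are placeholders for the coder's actual motion derivation.

```python
def split_and_code_arp(pu_rect, sub_pu_size, derive_tmv, derive_dmv, arp_code):
    """Split a PU (x, y, width, height) into square sub-PUs; for each
    sub-PU, obtain one motion vector of each of two types (assumed:
    temporal and disparity) and code that portion of the CU with ARP
    using the pair."""
    x0, y0, w, h = pu_rect
    results = []
    for y in range(y0, y0 + h, sub_pu_size):
        for x in range(x0, x0 + w, sub_pu_size):
            sub = (x, y, sub_pu_size, sub_pu_size)
            results.append(arp_code(sub, derive_tmv(sub), derive_dmv(sub)))
    return results
```

Deriving both motion vector types per sub-PU, rather than once per PU, lets each portion of the CU use motion that better matches its own content.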