Abstract:
A prediction unit (PU) of a coding unit (CU) is split into two or more sub-PUs including a first sub-PU and a second sub-PU. A first motion vector of a first type is obtained for the first sub-PU and a second motion vector of the first type is obtained for the second sub-PU. A third motion vector of a second type is obtained for the first sub-PU and a fourth motion vector of the second type is obtained for the second sub-PU, such that the second type is different than the first type. A first portion of the CU corresponding to the first sub-PU is coded according to advanced residual prediction (ARP) using the first and third motion vectors. A second portion of the CU corresponding to the second sub-PU is coded according to ARP using the second and fourth motion vectors.
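The splitting and per-sub-PU motion vector assignment described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the sub-PU size, the two motion vector "types" (taken here to be a temporal and a disparity motion vector, consistent with advanced residual prediction in 3D-HEVC), and the placeholder derivation of each vector are all assumptions.

```python
# Hypothetical sketch: split a PU into sub-PUs and give each sub-PU one
# motion vector of each of two different types. The derivation rules below
# are stand-ins, not the actual ARP derivation.
from dataclasses import dataclass

@dataclass
class SubPU:
    index: int
    temporal_mv: tuple   # first type: temporal motion vector (dx, dy)
    disparity_mv: tuple  # second type: disparity motion vector (dx, dy)

def split_pu_into_sub_pus(pu_width, pu_height, sub_size=8):
    """Split a PU into sub_size x sub_size sub-PUs (assumed split rule)."""
    count = (pu_width // sub_size) * (pu_height // sub_size)
    sub_pus = []
    for i in range(count):
        # Placeholder derivation: each sub-PU gets its own pair of vectors,
        # one per type, as the abstract requires.
        temporal_mv = (i, -i)       # stand-in for a derived temporal MV
        disparity_mv = (2 * i, 0)   # stand-in for a derived disparity MV
        sub_pus.append(SubPU(i, temporal_mv, disparity_mv))
    return sub_pus

sub_pus = split_pu_into_sub_pus(16, 8)  # a 16x8 PU yields two 8x8 sub-PUs
```

Each portion of the CU would then be coded with ARP using that sub-PU's pair of vectors; the residual prediction step itself is omitted here.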
Abstract:
Techniques are described for encoding and decoding depth data for three-dimensional (3D) video data represented in a multiview plus depth format using depth coding modes that are different than high-efficiency video coding (HEVC) coding modes. Examples of additional depth intra coding modes available in a 3D-HEVC process include at least two of a Depth Modeling Mode (DMM), a Simplified Depth Coding (SDC) mode, and a Chain Coding Mode (CCM). In addition, an example of an additional depth inter coding mode includes an Inter SDC mode. In one example, the techniques include signaling depth intra coding modes used to code depth data for 3D video data in a depth modeling table that is separate from the HEVC syntax. In another example, the techniques of this disclosure include unifying signaling of residual information of depth data for 3D video data across two or more of the depth coding modes.
Abstract:
A device performs a disparity vector derivation process to determine a disparity vector for a current block. As part of performing the disparity vector derivation process, when either a first or a second spatial neighboring block has a disparity motion vector or an implicit disparity vector, the device converts the disparity motion vector or the implicit disparity vector to the disparity vector for the current block. The number of neighboring blocks checked in the disparity vector derivation process is reduced, potentially resulting in decreased complexity and memory bandwidth requirements.
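The reduced neighbor check can be sketched as below. This is an illustrative assumption of the control flow only: which two spatial neighbors are used, the priority of a disparity motion vector over an implicit disparity vector, and the zero-vector fallback are all hypothetical details, not taken from the abstract.

```python
# Hedged sketch: examine at most two spatial neighboring blocks; the first
# disparity motion vector (DMV) or implicit disparity vector (IDV) found is
# converted to the current block's disparity vector.
def derive_disparity_vector(neighbors):
    """neighbors: list of dicts with optional 'dmv' or 'idv' entries."""
    for block in neighbors[:2]:  # reduced check: at most two neighbors
        if block.get('dmv') is not None:
            return block['dmv']  # convert the DMV directly
        if block.get('idv') is not None:
            return block['idv']  # otherwise use the implicit disparity vector
    return (0, 0)  # assumed default when neither neighbor has a vector

dv = derive_disparity_vector([{'idv': (4, 0)}, {'dmv': (7, 0)}])
```

Checking only two neighbors, rather than a larger candidate set, is what yields the reduced complexity and memory bandwidth the abstract refers to.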
Abstract:
A device for decoding video data is configured to determine, based on a chroma sampling format for the video data, that adaptive color transform is enabled for one or more blocks of the video data; determine a quantization parameter for the one or more blocks based on determining that the adaptive color transform is enabled; and dequantize transform coefficients based on the determined quantization parameter. A device for decoding video data is configured to determine for one or more blocks of the video data that adaptive color transform is enabled; receive, in a picture parameter set, one or more offset values in response to adaptive color transform being enabled; determine a quantization parameter for a first color component of a first color space based on a first offset value of the one or more offset values; and dequantize transform coefficients based on the quantization parameter.
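The offset-based quantization parameter derivation can be sketched as below. The function name, the dictionary of per-component offsets, and the particular offset values are illustrative assumptions; the offsets -5, -5, -3 are the component QP offsets commonly associated with the adaptive color transform in HEVC screen content coding, used here only as plausible sample data.

```python
# Hedged sketch: when adaptive color transform (ACT) is enabled, an offset
# signaled in the picture parameter set is applied to the base QP for the
# corresponding color component before dequantization.
def component_qp(base_qp, act_enabled, pps_offsets, component):
    """Return the QP used to dequantize one color component."""
    qp = base_qp
    if act_enabled:
        # The first offset value applies to the first color component, etc.
        qp += pps_offsets[component]
    return qp

offsets = {'Y': -5, 'Cb': -5, 'Cr': -3}   # sample PPS offsets (assumed)
qp_y = component_qp(30, True, offsets, 'Y')
```

When ACT is disabled, the base QP is used unchanged, matching the conditional signaling described in the abstract.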
Abstract:
A device for decoding video data is configured to determine for one or more blocks of the video data that adaptive color transform is enabled; determine a quantization parameter for the one or more blocks; in response to a value of the quantization parameter being below a threshold, modify the quantization parameter to determine a modified quantization parameter; and dequantize transform coefficients based on the modified quantization parameter.
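The threshold rule above can be sketched as follows. The threshold value and the choice of clamping as the modification are assumptions for illustration; the abstract specifies only that a QP below the threshold is modified before dequantization.

```python
# Hedged sketch: if the derived QP falls below a threshold (for instance
# because an ACT-related offset pushed it negative), modify it so the
# quantization step stays valid. Clamping is one assumed modification.
def modified_qp(qp, threshold=0):
    """Return the (possibly modified) QP used for dequantization."""
    return threshold if qp < threshold else qp

qp = modified_qp(-4)   # below the threshold, so it is modified
```

A QP at or above the threshold passes through unmodified.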
Abstract:
In an example, a process for coding video data includes coding, with a variable length code, a syntax element indicating depth modeling mode (DMM) information for coding a depth block of video data. The process also includes coding the depth block based on the DMM information.
Abstract:
A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list.
Abstract:
A device for encoding video data includes a memory configured to store video data and a video encoder comprising one or more processors configured to, for a current layer being encoded, determine that the current layer has no direct reference layers and, based on determining that the current layer has no direct reference layers, set at least one of a first syntax element, a second syntax element, a third syntax element, or a fourth syntax element to a disabling value indicating that a coding tool corresponding to the syntax element is disabled for the current layer.
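The disabling logic can be sketched as below. The flag names are hypothetical stand-ins for the four syntax elements, and the disabling value 0 is an assumption; the abstract does not name the syntax elements or the coding tools they control.

```python
# Illustrative sketch: when a layer has no direct reference layers, the
# inter-layer coding tools controlled by four (hypothetically named) syntax
# elements cannot be used, so each is set to its disabling value.
SYNTAX_ELEMENTS = ('tool_a_enabled_flag', 'tool_b_enabled_flag',
                   'tool_c_enabled_flag', 'tool_d_enabled_flag')

def disable_tools_if_no_reference_layers(layer):
    """layer: dict with 'direct_reference_layers' (list) and syntax flags."""
    if not layer['direct_reference_layers']:
        for flag in SYNTAX_ELEMENTS:
            layer[flag] = 0  # disabling value: the tool is off for this layer
    return layer

layer = disable_tools_if_no_reference_layers({
    'direct_reference_layers': [],   # no direct reference layers
    'tool_a_enabled_flag': 1, 'tool_b_enabled_flag': 1,
    'tool_c_enabled_flag': 1, 'tool_d_enabled_flag': 1,
})
```

A layer that does have direct reference layers would keep whatever flag values the encoder chose.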
Abstract:
Techniques are described for determining a partition pattern for intra-prediction encoding or decoding a depth block from a partition pattern of one or more partition patterns associated with smaller sized blocks. A video encoder may intra-prediction encode the depth block based on the determined partition pattern, and a video decoder may intra-prediction decode the depth block based on the determined partition pattern.
Abstract:
A video coder generates a list of merging candidates for coding a video block of 3D video data. A maximum number of merging candidates in the list of merging candidates may be equal to 6. As part of generating the list of merging candidates, the video coder determines whether a number of merging candidates in the list of merging candidates is less than 5. If so, the video coder derives one or more combined bi-predictive merging candidates and includes them in the list of merging candidates.
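The list construction above can be sketched as follows. Representing a combined bi-predictive candidate as a simple pair of existing candidates is an illustrative assumption; the actual derivation combines the prediction directions of two candidates' motion information.

```python
# Hedged sketch: gather merging candidates up to a maximum of 6; if fewer
# than 5 are present after the initial derivation, append combined
# bi-predictive candidates built from pairs of the existing candidates.
def build_merge_list(initial_candidates, max_candidates=6):
    merge_list = list(initial_candidates[:max_candidates])
    if len(merge_list) < 5:  # threshold from the abstract
        base = list(merge_list)  # combine only the original candidates
        for a in base:
            for b in base:
                if a is not b and len(merge_list) < max_candidates:
                    # Stand-in for a combined bi-predictive candidate
                    merge_list.append(('combined', a, b))
    return merge_list

lst = build_merge_list(['A', 'B'])  # 2 < 5, so combined candidates are added
```

When the initial list already holds 5 or more candidates, no combined candidates are derived and the list is simply truncated to the maximum of 6.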