Abstract:
An example video coding device is configured to compare an inter-view predicted motion vector candidate (IPMVC) to a motion vector inheritance (MVI) candidate, where the IPMVC and the MVI candidate are each associated with a block of video data in a dependent depth view, and where the IPMVC is generated from a corresponding block of video data in a base depth view. The video coding device may be further configured to perform one of adding the IPMVC to a merge candidate list based on the IPMVC being different from the MVI candidate, or omitting the IPMVC from the merge candidate list based on the IPMVC being identical to the MVI candidate.
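A minimal C++ sketch of the pruning step described above, assuming simple bi-directional motion fields; the MotionCandidate structure, its field names, and the helper are illustrative assumptions, not the disclosed implementation:

```cpp
#include <vector>

struct MotionVector { int x; int y; };

struct MotionCandidate {
    MotionVector mv[2];   // motion vectors for reference lists L0/L1
    int refIdx[2];        // reference picture indices for L0/L1
    int predDir;          // 1 = L0 only, 2 = L1 only, 3 = bi-predictive
};

static bool sameMotion(const MotionCandidate& a, const MotionCandidate& b) {
    if (a.predDir != b.predDir) return false;
    for (int l = 0; l < 2; ++l) {
        if (!(a.predDir & (1 << l))) continue;   // list not used
        if (a.refIdx[l] != b.refIdx[l]) return false;
        if (a.mv[l].x != b.mv[l].x || a.mv[l].y != b.mv[l].y) return false;
    }
    return true;
}

// Add the IPMVC only when it differs from the MVI candidate;
// otherwise the IPMVC is omitted from the merge candidate list.
void addIpmvcIfDistinct(std::vector<MotionCandidate>& mergeList,
                        const MotionCandidate& ipmvc,
                        const MotionCandidate& mviCandidate) {
    if (!sameMotion(ipmvc, mviCandidate))
        mergeList.push_back(ipmvc);
}
```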
Abstract:
In some examples, a method of decoding depth data in a video coding process includes defining a depth prediction unit (PU) of a size greater than 32×32 within a depth coding unit (CU) and generating one or more partitions of the depth PU. The method also includes obtaining residual data for each of the partitions; obtaining prediction data for each of the partitions; and reconstructing each of the partitions based on the residual data and the prediction data for the respective partitions.
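A minimal sketch of the per-partition reconstruction step, assuming prediction and residual samples are stored per partition and reconstruction is prediction plus residual clipped to the sample range; the structure and function names are illustrative:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Partition {
    std::vector<int16_t> residual;   // residual data for the partition
    std::vector<uint8_t> prediction; // prediction data for the partition
};

std::vector<uint8_t> reconstructPartition(const Partition& p, int bitDepth = 8) {
    const int maxVal = (1 << bitDepth) - 1;
    std::vector<uint8_t> recon(p.prediction.size());
    for (std::size_t i = 0; i < recon.size(); ++i) {
        int v = p.prediction[i] + p.residual[i];
        recon[i] = static_cast<uint8_t>(std::clamp(v, 0, maxVal));
    }
    return recon;
}
```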
Abstract:
An example device for filtering a decoded block of video data includes one or more processors implemented in circuitry. The one or more processors are configured to decode a current block of a current picture of the video data, select a filter (such as an adaptive loop filter) to be used to filter pixels of the current block, and calculate a gradient of at least one pixel of the current block. The one or more processors then select a geometric transform, corresponding to an orientation of the gradient of the at least one pixel, to be performed on either a filter support region or the coefficients of the selected filter; perform the geometric transform on the filter support region or the coefficients; and filter the at least one pixel of the current block using the selected filter after the geometric transform has been performed.
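The sketch below shows one way such a transform selection and application could look, in the spirit of geometric adaptive loop filtering; the gradient-based selection rule and the coefficient indexing are simplifying assumptions:

```cpp
#include <vector>

enum class Transform { None, DiagonalFlip, VerticalFlip, Rotate90 };

// Pick a transform from per-block gradient sums (illustrative rule:
// horizontal, vertical, and the two diagonal gradient measures).
Transform selectTransform(int gradH, int gradV, int gradD0, int gradD1) {
    if (gradD1 > gradD0 && gradV > gradH) return Transform::DiagonalFlip;
    if (gradV > gradH)                    return Transform::VerticalFlip;
    if (gradD1 > gradD0)                  return Transform::Rotate90;
    return Transform::None;
}

// Apply the transform to an n-by-n coefficient array (row-major).
std::vector<int> transformCoeffs(const std::vector<int>& c, int n, Transform t) {
    std::vector<int> out(c.size());
    for (int r = 0; r < n; ++r) {
        for (int k = 0; k < n; ++k) {
            switch (t) {
            case Transform::None:         out[r * n + k] = c[r * n + k]; break;
            case Transform::DiagonalFlip: out[r * n + k] = c[k * n + r]; break;
            case Transform::VerticalFlip: out[r * n + k] = c[(n - 1 - r) * n + k]; break;
            case Transform::Rotate90:     out[r * n + k] = c[k * n + (n - 1 - r)]; break;
            }
        }
    }
    return out;
}
```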
Abstract:
For each respective coding unit (CU) of a slice of a picture of the video data, a video coder may set a derived disparity vector (DDV) to an initial value in response to determining that the respective CU is the first CU of a coding tree block (CTB) row of the picture or the first CU of the slice. Furthermore, the video coder may perform a neighbor-based disparity vector derivation (NBDV) process that attempts to determine a disparity vector for the respective CU. When the NBDV process does not identify an available disparity vector for the respective CU, the video coder may determine that the disparity vector for the respective CU is equal to the DDV.
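A sketch of the per-CU DDV bookkeeping, assuming the initial value is the zero vector and using a placeholder stub nbdvProcess in place of the actual neighbor-based derivation:

```cpp
#include <optional>

struct DisparityVector { int x = 0; int y = 0; };

// Placeholder for the NBDV process: a real coder would scan spatial
// and temporal neighboring blocks for a disparity motion vector.
std::optional<DisparityVector> nbdvProcess(int /*cuAddr*/) {
    return std::nullopt;
}

DisparityVector deriveDisparityVector(int cuAddr,
                                      bool firstCuOfCtbRow,
                                      bool firstCuOfSlice,
                                      DisparityVector& ddv) {
    // Reset the DDV at the start of each CTB row and of each slice.
    if (firstCuOfCtbRow || firstCuOfSlice)
        ddv = DisparityVector{};   // initial value: zero vector (assumed)

    if (auto dv = nbdvProcess(cuAddr)) {
        ddv = *dv;                 // remember the last derived vector
        return *dv;
    }
    // NBDV found no available disparity vector: fall back to the DDV.
    return ddv;
}
```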
Abstract:
A video coder reconstructs a set of chroma reference samples and a set of luma reference samples of a non-square prediction block. Additionally, the video coder sub-samples the set of luma reference samples such that the total number of luma reference samples that neighbor the longer side of the non-square prediction block is the same as the total number of luma reference samples that neighbor its shorter side. The video coder determines a Linear Model (LM) parameter based on β = (Σyᵢ − α·Σxᵢ) / I, where I is the total number of reference samples in the set of luma reference samples, xᵢ is a luma reference sample in the set of luma reference samples, and yᵢ is a chroma reference sample in the set of chroma reference samples. The video coder uses the LM parameter in a process to determine values of chroma samples of the non-square prediction block.
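A sketch of the long-side sub-sampling and the β computation, assuming α has already been derived and ignoring the fixed-point arithmetic a real LM implementation would use; the function names are illustrative:

```cpp
#include <cstddef>
#include <vector>

// Keep one of every (long/short) samples on the longer side so both
// sides contribute the same number of luma reference samples.
std::vector<int> subsampleLongSide(const std::vector<int>& longSide,
                                   std::size_t shortSideCount) {
    std::vector<int> out;
    const std::size_t step = longSide.size() / shortSideCount;
    for (std::size_t i = 0; i < shortSideCount; ++i)
        out.push_back(longSide[i * step]);
    return out;
}

// beta = (sum(y_i) - alpha * sum(x_i)) / I, over I reference samples.
double computeBeta(const std::vector<int>& lumaRef,   // x_i
                   const std::vector<int>& chromaRef, // y_i
                   double alpha) {
    long long sumX = 0, sumY = 0;
    for (std::size_t i = 0; i < lumaRef.size(); ++i) {
        sumX += lumaRef[i];
        sumY += chromaRef[i];
    }
    const double I = static_cast<double>(lumaRef.size());
    return (sumY - alpha * sumX) / I;
}
```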
Abstract:
Techniques are described for determining a block in a reference picture in a reference view based on a disparity vector for a current block. The techniques start the disparity vector from the bottom-right pixel of the center 2×2 sub-block within the current block and determine the location within the reference picture to which the disparity vector refers. The determined block is the block in the reference picture that covers this location.
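A sketch of the position arithmetic, assuming the disparity vector is stored in quarter-pel units and the reference view is partitioned into fixed-size blocks; the helper names are illustrative:

```cpp
struct Pos { int x; int y; };

// Bottom-right pixel of the center 2x2 sub-block of a WxH block:
// the center sub-block spans (W/2 - 1, H/2 - 1) to (W/2, H/2).
Pos centerBottomRight(Pos blockTopLeft, int width, int height) {
    return { blockTopLeft.x + width / 2, blockTopLeft.y + height / 2 };
}

// Location in the reference picture referred to by the disparity
// vector (dvX, dvY), assumed here to be in quarter-pel units.
Pos referenceLocation(Pos start, int dvX, int dvY) {
    return { start.x + (dvX >> 2), start.y + (dvY >> 2) };
}

// Top-left corner of the (blockSize x blockSize) reference-view block
// that covers the target location.
Pos coveringBlock(Pos target, int blockSize) {
    return { (target.x / blockSize) * blockSize,
             (target.y / blockSize) * blockSize };
}
```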
Abstract:
A video coder, such as a video encoder or a video decoder, uses a first Rice parameter derivation method and a second Rice parameter derivation method for coding coefficient levels of a transform unit (TU). The first Rice parameter derivation method is a statistics-based derivation method. The second Rice parameter derivation method is a template-based derivation method.
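A simplified sketch of a template-based derivation: sum the absolute levels of already-coded neighboring coefficients in a local template and map the sum to a Rice parameter. The template shape and the mapping thresholds are assumptions for illustration, not the disclosed method:

```cpp
#include <vector>

int templateRiceParameter(const std::vector<std::vector<int>>& absLevels,
                          int x, int y) {
    // Template: right, below, right-below, two-right, two-below.
    const int dx[] = {1, 0, 1, 2, 0};
    const int dy[] = {0, 1, 1, 0, 2};
    const int h = static_cast<int>(absLevels.size());
    const int w = h ? static_cast<int>(absLevels[0].size()) : 0;

    int sum = 0;
    for (int i = 0; i < 5; ++i) {
        const int nx = x + dx[i], ny = y + dy[i];
        if (nx < w && ny < h) sum += absLevels[ny][nx];
    }
    // Map the template sum to a Rice parameter (illustrative thresholds).
    if (sum < 3)  return 0;
    if (sum < 10) return 1;
    if (sum < 27) return 2;
    return 3;
}
```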
Abstract:
A video coder generates a list of merging candidates for coding a video block of three-dimensional (3D) video data. A maximum number of merging candidates in the list of merging candidates may be equal to 6. As part of generating the list of merging candidates, the video coder determines whether the number of merging candidates in the list is less than 5. If so, the video coder derives one or more combined bi-predictive merging candidates and includes them in the list of merging candidates.
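A sketch of the combined bi-predictive derivation, pairing the list-0 motion of one existing candidate with the list-1 motion of another; the structures and the pairing order are simplified assumptions:

```cpp
#include <cstddef>
#include <vector>

struct Motion { int mvx; int mvy; int refIdx; bool valid; };
struct MergeCand { Motion l0; Motion l1; };

void addCombinedBiPredCandidates(std::vector<MergeCand>& list,
                                 std::size_t maxCands = 6) {
    if (list.size() >= 5) return;   // derivation only below five candidates
    const std::size_t n = list.size();
    for (std::size_t i = 0; i < n && list.size() < maxCands; ++i) {
        for (std::size_t j = 0; j < n && list.size() < maxCands; ++j) {
            if (i == j) continue;
            if (!list[i].l0.valid || !list[j].l1.valid) continue;
            list.push_back({ list[i].l0, list[j].l1 });  // combined candidate
        }
    }
}
```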
Abstract:
In one example, a video coder (e.g., a video encoder or a video decoder) is configured to determine that a current block of video data is coded using a disparity motion vector, where the current block is within a containing block. Based on a determination that a neighboring block of the current block is also within the containing block, the video coder substitutes, for that neighboring block in a candidate list, a block that is outside the containing block and that neighbors the containing block. The video coder then selects a disparity motion vector predictor from one of a plurality of blocks in the candidate list and codes the disparity motion vector based on the disparity motion vector predictor. In this manner, the techniques of this disclosure may allow blocks within the containing block to be coded in parallel.
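A sketch of the substitution rule, assuming a left neighbor is replaced by a position in the column just left of the containing block and an above neighbor by the row just above it; the geometry helpers are illustrative assumptions:

```cpp
struct Pos  { int x; int y; };
struct Rect { int x; int y; int w; int h; };

static bool contains(const Rect& r, Pos p) {
    return p.x >= r.x && p.x < r.x + r.w &&
           p.y >= r.y && p.y < r.y + r.h;
}

enum class Side { Left, Above };

// Position used for the candidate: the neighbor itself if it lies
// outside the containing block, else a substitute position just
// outside the containing block on the same side (assumed rule).
Pos candidatePosition(const Rect& containing, Pos neighbor, Side side) {
    if (!contains(containing, neighbor))
        return neighbor;                       // neighbor usable as-is
    if (side == Side::Left)
        return { containing.x - 1, neighbor.y };
    return { neighbor.x, containing.y - 1 };   // Side::Above
}
```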
Abstract:
A device includes one or more processors configured to derive M most probable modes (MPMs) for intra prediction of a block of video data. A syntax element is decoded that indicates whether an MPM index or a non-MPM index is used to indicate the selected intra prediction mode, of the plurality of intra prediction modes, for intra prediction of the block of video data. Based on the syntax element indicating that the non-MPM index is used to indicate the selected intra prediction mode, the one or more processors decode the non-MPM index. The non-MPM index is encoded in the bitstream as a codeword shorter than ⌈log₂ N⌉ bits if the non-MPM index satisfies a criterion, and is otherwise encoded as a fixed-length code of ⌈log₂ N⌉ bits. The one or more processors reconstruct the block based on the selected intra prediction mode.
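Truncated binary coding is one standard scheme with exactly the stated property (the first few indices get codewords one bit shorter than ⌈log₂ N⌉, the rest a full-length fixed code); the sketch below uses it for illustration, without asserting it is the disclosed criterion:

```cpp
#include <string>

static int ceilLog2(unsigned n) {
    int b = 0;
    while ((1u << b) < n) ++b;
    return b;
}

// Encode index in [0, n) as a bit string using truncated binary code.
std::string encodeTruncatedBinary(unsigned index, unsigned n) {
    const int k = ceilLog2(n);          // ceil(log2 n)
    const unsigned u = (1u << k) - n;   // number of shorter codewords
    unsigned value;
    int bits;
    if (index < u) {                    // criterion met: k - 1 bits
        value = index;
        bits = k - 1;
    } else {                            // fixed length of k bits
        value = index + u;
        bits = k;
    }
    std::string out;
    for (int i = bits - 1; i >= 0; --i)
        out += ((value >> i) & 1u) ? '1' : '0';
    return out;
}
```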