Abstract:
Techniques are described for sub-prediction unit (PU) based motion prediction for video coding in HEVC and 3D-HEVC. In one example, the techniques include an advanced temporal motion vector prediction (TMVP) mode to predict sub-PUs of a PU in single layer coding for which motion vector refinement may be allowed. The advanced TMVP mode includes determining motion vectors for the PU in at least two stages to derive motion information for the PU that includes different motion vectors and reference indices for each of the sub-PUs of the PU. In another example, the techniques include storing separate motion information derived for each sub-PU of a current PU predicted using a sub-PU backward view synthesis prediction (BVSP) mode even after motion compensation is performed. The additional motion information stored for the current PU may be used to predict subsequent PUs for which the current PU is a neighboring block.
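As a rough illustration of the second stage, the C++ sketch below shows how, once a stage-one vector has identified a region of a motion source picture, each sub-PU could copy the motion information stored for its own co-located sub-block, yielding per-sub-PU motion vectors and reference indices. The MotionInfo layout, the flat motionField representation, and the center-sample addressing are assumptions made purely for illustration, not the patented procedure itself.

    #include <cstdint>
    #include <vector>

    struct MotionInfo {
        int16_t mvX = 0, mvY = 0;  // motion vector components
        int8_t  refIdx = -1;       // reference picture index
    };

    // Stage two of the sketch: for each sub-PU, fetch the motion info stored
    // for the sub-block that the stage-one vector points at in the motion
    // source picture. fieldStride is measured in sub-blocks.
    std::vector<MotionInfo> advancedTmvp(
            int puX, int puY, int puW, int puH, int subSize,
            MotionInfo stage1Vector,
            const MotionInfo* motionField, int fieldStride) {
        std::vector<MotionInfo> subPuMotion;
        for (int y = 0; y < puH; y += subSize) {
            for (int x = 0; x < puW; x += subSize) {
                // Center sample of this sub-PU, displaced by the stage-one
                // vector, then scaled down to sub-block units.
                int srcX = (puX + x + subSize / 2 + stage1Vector.mvX) / subSize;
                int srcY = (puY + y + subSize / 2 + stage1Vector.mvY) / subSize;
                subPuMotion.push_back(motionField[srcY * fieldStride + srcX]);
            }
        }
        return subPuMotion;
    }

Keeping the result as a vector of per-sub-PU entries mirrors the abstract's point that the derived motion information contains different motion vectors and reference indices for each sub-PU, rather than one vector for the whole PU.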
Abstract:
Techniques for decoding video data include receiving residual data corresponding to a block of video data, where the block is encoded using asymmetric motion partitioning, is uni-directionally predicted using backward view synthesis prediction (BVSP), and has a size of 16×12, 12×16, 16×4, or 4×16. The block of video data is partitioned into sub-blocks, each having a size of 8×4 or 4×8. A disparity motion vector is derived for each of the sub-blocks from a corresponding depth block in a depth picture corresponding to a reference picture, and a respective reference block is synthesized for each sub-block using the derived disparity motion vector. The block of video data is then decoded by performing motion compensation on each of the sub-blocks using the residual data and the synthesized reference blocks.
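The partitioning step can be sketched in C++ as follows. The rule used to choose between 8×4 and 4×8 sub-blocks (wider-than-tall blocks get 8×4, taller-than-wide blocks get 4×8) is an assumption that is merely consistent with the four asymmetric sizes listed in the abstract; the actual selection criterion is not reproduced here.

    #include <vector>

    struct SubBlock { int x, y, w, h; };

    // Split an asymmetric-motion-partitioned block (16x12, 12x16, 16x4, or
    // 4x16) into BVSP sub-blocks. Assumed rule: width > height -> 8x4,
    // otherwise 4x8, which tiles all four listed sizes exactly.
    std::vector<SubBlock> partitionForBvsp(int bx, int by, int bw, int bh) {
        const int sw = (bw > bh) ? 8 : 4;
        const int sh = (bw > bh) ? 4 : 8;
        std::vector<SubBlock> subs;
        for (int y = 0; y < bh; y += sh)
            for (int x = 0; x < bw; x += sw)
                subs.push_back({bx + x, by + y, sw, sh});
        return subs;
    }

Each returned sub-block would then receive its own disparity motion vector from the corresponding depth block and its own synthesized reference block, as the abstract describes.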
Abstract:
This disclosure describes techniques for in-loop depth map filtering for 3D video coding processes. In one example, a method of decoding video data comprises decoding a depth block corresponding to a texture block, receiving a respective indication of one or more offset values for the decoded depth block, and performing a filtering process on edge pixels of the depth block using at least one of the one or more offset values to create a filtered depth block.
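A minimal sketch of the filtering step appears below. The edge test (comparing each depth sample against its right neighbor with a fixed threshold) and the use of a single offset value are simplifying assumptions for illustration; the disclosure's actual edge classification and per-offset signaling are not reproduced.

    #include <cstdint>
    #include <cstdlib>

    // Add a signaled offset to depth samples classified as edge pixels,
    // clipping to the 8-bit sample range. Edge detection here is an assumed
    // horizontal-gradient test, purely for illustration.
    void filterDepthEdges(uint8_t* depth, int w, int h, int stride,
                          int offset, int edgeThreshold) {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x + 1 < w; ++x) {
                uint8_t* p = depth + y * stride + x;
                if (std::abs(int(p[0]) - int(p[1])) > edgeThreshold) {
                    int v = int(p[0]) + offset;                      // apply offset
                    p[0] = uint8_t(v < 0 ? 0 : v > 255 ? 255 : v);   // clip to [0,255]
                }
            }
        }
    }

Because the filter runs in-loop, the filtered depth block, not the raw decoded one, would serve as the reference for subsequently coded blocks.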
Abstract:
Techniques are described for determining a block in a reference picture in a reference view based on a disparity vector for a current block. The techniques start the disparity vector from the bottom-right pixel in the center 2×2 sub-block within the current block and determine the location within the reference picture to which the disparity vector refers. The determined block is the block that covers that location.
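For a W×H block whose top-left sample is at (x, y), the bottom-right pixel of the center 2×2 sub-block sits at (x + W/2, y + H/2). The sketch below computes the referred-to location and the top-left corner of a covering block; aligning the covering block to a 4×4 grid is an assumption for illustration, and the disparity vector is taken in integer-pel units for simplicity.

    struct Pos { int x, y; };

    // Location in the reference picture referred to by a disparity vector
    // that starts at the bottom-right pixel of the center 2x2 sub-block.
    Pos referenceLocation(int blkX, int blkY, int blkW, int blkH,
                          int dvX, int dvY) {
        int startX = blkX + blkW / 2;  // bottom-right pixel of center 2x2
        int startY = blkY + blkH / 2;
        return { startX + dvX, startY + dvY };
    }

    // Top-left corner of the block covering that location, assuming a 4x4
    // storage grid (grid size is an illustrative assumption).
    Pos coveringBlockTopLeft(Pos loc) {
        return { (loc.x >> 2) << 2, (loc.y >> 2) << 2 };
    }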
Abstract:
Techniques are described for encoding and decoding depth data for three-dimensional (3D) video data represented in a multiview plus depth format using depth coding modes that are different than high-efficiency video coding (HEVC) coding modes. Examples of additional depth intra coding modes available in a 3D-HEVC process include at least two of a Depth Modeling Mode (DMM), a Simplified Depth Coding (SDC) mode, and a Chain Coding Mode (CCM). In addition, an example of an additional depth inter coding mode includes an Inter SDC mode. In one example, the techniques include signaling depth intra coding modes used to code depth data for 3D video data in a depth modeling table that is separate from the HEVC syntax. In another example, the techniques of this disclosure include unifying signaling of residual information of depth data for 3D video data across two or more of the depth coding modes.
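The idea of keeping the extra depth intra modes in a table separate from the HEVC syntax can be sketched as a small parsing routine. The flag/index structure below and the mode ordering are illustrative assumptions, not the disclosed syntax; only the BitReader's Exp-Golomb decoding follows standard practice.

    #include <cstddef>
    #include <cstdint>

    struct BitReader {
        const uint8_t* data;
        size_t bitPos = 0;
        uint32_t readBits(int n) {
            uint32_t v = 0;
            while (n--) {
                v = (v << 1) | ((data[bitPos >> 3] >> (7 - (bitPos & 7))) & 1);
                ++bitPos;
            }
            return v;
        }
        uint32_t readUE() {  // unsigned Exp-Golomb
            int z = 0;
            while (readBits(1) == 0) ++z;
            return (1u << z) - 1 + readBits(z);
        }
    };

    enum class DepthIntraMode { HevcIntra, DMM, SDC, CCM };

    // Illustrative entry of a "depth modeling table": a flag selects between
    // base HEVC intra coding and the extra depth modes, which are then
    // indexed outside the base HEVC intra-mode syntax.
    DepthIntraMode parseDepthModelingTableEntry(BitReader& br) {
        if (br.readBits(1) == 0)        // assumed flag: fall back to HEVC intra
            return DepthIntraMode::HevcIntra;
        switch (br.readUE()) {          // assumed index into the depth modes
            case 0:  return DepthIntraMode::DMM;
            case 1:  return DepthIntraMode::SDC;
            default: return DepthIntraMode::CCM;
        }
    }

Keeping these modes in their own table means the base HEVC intra-mode syntax is untouched, which is the separation the abstract emphasizes.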
Abstract:
A video coder stores only one derived disparity vector (DDV) for a slice of a current picture of the video data. The video coder uses the DDV for the slice in a Neighboring Block Based Disparity Vector (NBDV) derivation process to determine a disparity vector for a particular block. Furthermore, the video coder stores, as the DDV for the slice, the disparity vector for the particular block.
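The storage behavior can be sketched in a few lines of C++: a single DDV persists per slice, serves as a fallback in the NBDV derivation, and is overwritten with each block's result, so the memory cost is constant regardless of slice size. The fallback ordering (neighbors first, then the stored DDV) is an assumption for illustration.

    struct DispVec { int x = 0, y = 0; };

    struct SliceDdvState {
        DispVec ddv;  // the single DDV stored for the slice

        // NBDV step for one block: use a neighbor's disparity vector if one
        // was found; otherwise fall back to the stored DDV. Either way, the
        // result becomes the new slice DDV.
        DispVec nbdvForBlock(bool neighborHasDv, DispVec neighborDv) {
            DispVec result = neighborHasDv ? neighborDv : ddv;
            ddv = result;  // store this block's DV as the slice DDV
            return result;
        }
    };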
Abstract:
When a current view is a dependent texture view, a current coding unit (CU) is not intra coded, and the partitioning mode of the current CU is equal to PART_2N×2N, a video coder obtains, from a bitstream that comprises an encoded representation of the video data, a weighting factor index for the current CU, where the current CU is in a picture belonging to the current view. When the current view is not a dependent texture view, the current CU is intra coded, or the partitioning mode of the current CU is not equal to PART_2N×2N, the video coder infers that the weighting factor index is equal to a particular value indicating that residual prediction is not applied to the current CU.
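The parse-or-infer logic reduces to a single conditional, sketched below. The entropy-decoding call is passed in as a hypothetical function pointer, and the inferred value 0 is an assumption standing in for the abstract's "particular value" that disables residual prediction.

    enum PartMode { PART_2Nx2N, PART_2NxN /* ... */ };

    // Obtain the weighting factor index from the bitstream only when all
    // three conditions from the abstract hold; otherwise infer the value
    // that means residual prediction is not applied (assumed to be 0 here).
    int weightingFactorIndex(bool isDependentTextureView, bool cuIsIntra,
                             PartMode partMode,
                             int (*readWeightingFactorIndex)()) {
        if (isDependentTextureView && !cuIsIntra && partMode == PART_2Nx2N)
            return readWeightingFactorIndex();  // signaled in the bitstream
        return 0;  // inferred: residual prediction not applied
    }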
Abstract:
A video coder searches a set of neighbor blocks to generate a plurality of disparity vector candidates. Each of the neighbor blocks is a spatial or temporal neighbor of a current block. The video coder determines, based at least in part on the plurality of disparity vector candidates, a final disparity vector for the current block.
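A sketch of the candidate search follows. The neighbors are represented as optional disparity vectors (present only where a neighbor actually supplied one), and the final selection rule used here, picking the candidate with the largest horizontal magnitude, is purely an illustrative assumption; the abstract does not specify the criterion.

    #include <cstdlib>
    #include <optional>
    #include <vector>

    struct DispVec { int x, y; };

    // Collect disparity vector candidates from spatial/temporal neighbors,
    // then pick a final vector (selection rule is an assumption).
    std::optional<DispVec> finalDisparityVector(
            const std::vector<std::optional<DispVec>>& neighborDvs) {
        std::vector<DispVec> candidates;
        for (const auto& dv : neighborDvs)
            if (dv) candidates.push_back(*dv);
        if (candidates.empty()) return std::nullopt;
        DispVec best = candidates.front();
        for (const auto& c : candidates)
            if (std::abs(c.x) > std::abs(best.x)) best = c;
        return best;
    }

Building a candidate list before selecting contrasts with derivations that simply stop at the first neighbor holding a disparity vector, which is the distinction the abstract draws.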
Abstract:
A parent block is partitioned into a plurality of blocks, and a disparity vector derivation process is performed to derive a disparity vector for a representative block of the plurality of blocks. A video encoder generates a bitstream that includes a coded representation of the video data in part by performing, based on the derived disparity vector and without separately deriving disparity vectors for any block in the plurality of blocks other than the representative block, inter-view prediction for two or more blocks in the plurality of blocks. A video decoder reconstructs sample blocks for two or more blocks in the plurality of blocks in part by performing, based on the derived disparity vector and without separately deriving disparity vectors for any block in the plurality of blocks other than the representative block, inter-view prediction for the two or more blocks in the plurality of blocks.
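The sharing scheme amounts to running the derivation once and reusing the result, as the sketch below shows. Treating the first block as the representative and passing the full derivation in as a function pointer are illustrative assumptions; the point is that deriveNbdv runs exactly once per parent block.

    #include <vector>

    struct DispVec { int x, y; };
    struct Block { int x, y, w, h; };

    // Run the (comparatively expensive) disparity vector derivation once,
    // for an assumed representative block (the first one), and reuse the
    // result for every other block partitioned from the same parent.
    std::vector<DispVec> sharedDisparityVectors(
            const std::vector<Block>& blocks,
            DispVec (*deriveNbdv)(const Block&)) {
        DispVec shared = deriveNbdv(blocks.front());
        return std::vector<DispVec>(blocks.size(), shared);
    }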
Abstract:
In an example, a method of coding video data includes determining a location of a temporal reference block indicated by a temporal motion vector of a current block of video data, where the current block and the temporal reference block are located in a first layer of video data. The method also includes interpolating, with a first type of interpolation, a location of a disparity reference block indicated by a disparity vector of the current block, where the disparity reference block is located in a second, different layer, and where the first type of interpolation comprises a bi-linear filter. The method also includes determining a temporal-disparity reference block of the disparity reference block, indicated by a combination of the temporal motion vector and the disparity vector, and coding the current block based on the temporal reference block, the disparity reference block, and the temporal-disparity reference block.
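The three reference-block locations compose from the two vectors, as this sketch shows. Quarter-pel units for positions and vectors are an assumption for illustration; the fractional parts of the disparity-displaced locations are where the bi-linear filter mentioned in the abstract would be applied when fetching samples.

    struct Vec { int x, y; };  // motion/disparity vector, quarter-pel units
    struct Pos { int x, y; };  // picture position, quarter-pel units

    struct ArpLocations {
        Pos temporalRef;          // same layer, indicated by the temporal MV
        Pos disparityRef;         // other layer, indicated by the DV
        Pos temporalDisparityRef; // other layer, MV + DV combined
    };

    // cur is the current block's position already scaled to quarter-pel
    // units (an assumption). Each reference location is a simple offset.
    ArpLocations arpLocations(Pos cur, Vec mv, Vec dv) {
        return {
            { cur.x + mv.x,        cur.y + mv.y },
            { cur.x + dv.x,        cur.y + dv.y },
            { cur.x + mv.x + dv.x, cur.y + mv.y + dv.y },
        };
    }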