Abstract:
During a coding process, systems, methods, and apparatus may code data representative of the positions of elements of a chain that partitions a prediction unit of video data. Some examples may include generating that data. Each of the positions of the elements except for the last element may be within the prediction unit, while the position of the last element may be outside the prediction unit; an out-of-bounds position can indicate that the penultimate element is the final element of the chain. Some examples may code the partitions of the prediction unit based on the chain.
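As a rough decoder-side illustration of this termination rule, the following sketch collects chain elements until a signaled position falls outside the prediction unit; the function name and the (x, y) position list are assumptions for illustration, not the claimed syntax.

    def decode_chain(positions, pu_width, pu_height):
        """Collect chain elements until a position falls outside the PU.

        An out-of-bounds position acts as a terminator: it indicates that
        the previously decoded element was the last element of the chain.
        (Illustrative sketch, not the claimed bitstream syntax.)
        """
        chain = []
        for x, y in positions:
            if not (0 <= x < pu_width and 0 <= y < pu_height):
                break  # terminator: the previous element ends the chain
            chain.append((x, y))
        return chain

For an 8×8 prediction unit, decode_chain([(0, 3), (1, 3), (2, 3), (8, 3)], 8, 8) returns the first three elements, with (8, 3) serving only as the terminator.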
Abstract:
Techniques for advanced residual prediction (ARP) in video coding may include receiving a first encoded block of video data in a first access unit, wherein the first encoded block was encoded using advanced residual prediction and bi-directional prediction; determining temporal motion information for a first prediction direction of the first encoded block; and identifying reference blocks for a second prediction direction, different from the first prediction direction, using the temporal motion information determined for the first prediction direction, wherein the reference blocks are in a second access unit.
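A minimal sketch of the reuse described above, assuming integer-pel motion and a hypothetical data layout; the function and variable names are illustrative:

    def identify_arp_reference_blocks(cur_x, cur_y, mv_dir0):
        """Locate reference-block positions for both prediction directions
        by reusing the temporal motion vector already determined for the
        first direction, so the second direction's reference blocks also
        lie in the second access unit. Simplified, integer-pel sketch."""
        mvx, mvy = mv_dir0
        ref_pos = (cur_x + mvx, cur_y + mvy)
        # The second prediction direction reuses the first direction's
        # temporal motion information rather than deriving its own:
        return {"dir0": ref_pos, "dir1": ref_pos}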
Abstract:
In some examples, a method of decoding depth data in a video coding process includes defining a depth prediction unit (PU) of a size greater than 32×32 within a depth coding unit (CU) and generating one or more partitions of the depth PU. The method also includes obtaining residual data for each of the partitions; obtaining prediction data for each of the partitions; and reconstructing each of the partitions based on the residual data and the prediction data for the respective partitions.
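The per-partition reconstruction step can be illustrated with a small sketch; the flat per-partition sample lists and the clipping to the sample range are assumptions for illustration:

    def reconstruct_depth_pu(partitions, residuals, predictions, bit_depth=8):
        """Reconstruct each partition of a depth PU by adding its residual
        to its prediction and clipping to the valid sample range.
        (Illustrative data layout; not the normative process.)"""
        max_val = (1 << bit_depth) - 1
        recon = {}
        for part_id in partitions:
            recon[part_id] = [
                min(max(pred + res, 0), max_val)
                for pred, res in zip(predictions[part_id], residuals[part_id])
            ]
        return recon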
Abstract:
For each respective coding unit (CU) of a slice of a picture of the video data, a video coder may, in response to determining that the respective CU is the first CU of a coding tree block (CTB) row of the picture or the first CU of the slice, set a derived disparity vector (DDV) to an initial value. Furthermore, the video coder may perform a neighbor-based disparity vector derivation (NBDV) process that attempts to determine a disparity vector for the respective CU. When the NBDV process does not identify an available disparity vector for the respective CU, the video coder may determine that the disparity vector for the respective CU is equal to the DDV.
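The control flow can be sketched as follows; nbdv is an assumed callable returning a disparity vector or None, and updating the DDV with the result is an assumption about the carry-forward behavior rather than something stated above:

    ZERO_DV = (0, 0)  # assumed initial value

    def disparity_vector_for_cu(cu, state, nbdv):
        """Reset the DDV at the start of a CTB row or slice, run the
        NBDV process, and fall back to the DDV when NBDV finds no
        available disparity vector. Sketch only."""
        if cu.is_first_in_ctb_row or cu.is_first_in_slice:
            state["ddv"] = ZERO_DV  # set DDV to an initial value
        dv = nbdv(cu)  # neighbor-based disparity vector derivation
        if dv is None:
            dv = state["ddv"]  # NBDV unavailable: use the DDV
        state["ddv"] = dv  # assumed carry-forward for later CUs
        return dv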
Abstract:
In one example, the disclosure is directed to techniques that include, for each prediction unit (PU) of a respective coding unit (CU) of a slice of a picture of the video data, determining at least one disparity value based at least in part on at least one depth value of at least one reconstructed depth sample of at least one neighboring sample. The techniques further include determining at least one disparity vector based at least in part on the at least one disparity value, wherein the at least one disparity vector is for the respective CU for each PU. The techniques further include reconstructing, based at least in part on the at least one disparity vector, a coding block for the respective CU for each PU.
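One plausible shape for the depth-to-disparity step is sketched below; choosing the maximum neighboring depth and using a precomputed lookup table are illustrative assumptions, not the claimed derivation:

    def disparity_vector_from_neighbors(neighbor_depths, depth_to_disparity):
        """Derive a horizontal disparity vector from reconstructed depth
        samples of neighboring samples. depth_to_disparity stands in for
        a camera-parameter-based conversion table."""
        depth = max(neighbor_depths)           # representative depth (assumed rule)
        disparity = depth_to_disparity[depth]  # convert depth to disparity
        return (disparity, 0)                  # disparity is horizontal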
Abstract:
In an example, a method of coding video data includes determining a first depth value of a depth look up table (DLT), where the first depth value is associated with a first pixel of the video data. The method also includes determining a second depth value of the DLT, where the second depth value is associated with a second pixel of the video data. The method also includes coding the DLT, including coding the second depth value relative to the first depth value.
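Coding one DLT entry relative to another lends itself to simple delta coding; the sketch below assumes a DLT sorted in increasing order and ignores the actual binarization:

    def encode_dlt(depth_values):
        """Code the first DLT value directly and each later value as a
        delta from its predecessor. (Sketch; entropy coding omitted.)"""
        return [depth_values[0]] + [
            b - a for a, b in zip(depth_values, depth_values[1:])
        ]

    def decode_dlt(coded):
        """Invert the delta coding above."""
        values = [coded[0]]
        for delta in coded[1:]:
            values.append(values[-1] + delta)
        return values

For instance, encode_dlt([20, 35, 50, 90]) yields [20, 15, 15, 40], and decode_dlt recovers the original table.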
Abstract:
Techniques are described for deriving a disparity vector for a current block based on a disparity motion vector of a neighboring block in a 3D-AVC video coding process. The disparity vector derivation allows for texture-first coding, where the depth view component of a dependent view is coded after the corresponding texture view component of that dependent view.
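A sketch of the neighbor scan, assuming a fixed candidate order and a per-block flag marking disparity motion vectors; both are illustrative, not the 3D-AVC normative order:

    def derive_disparity_vector(neighbors):
        """Scan neighboring blocks for a disparity motion vector (one that
        points to an inter-view reference) and adopt the first one found."""
        for nb in neighbors:
            if nb is not None and nb.is_disparity_mv:
                return nb.mv  # reuse the neighbor's disparity motion vector
        return None  # no disparity vector available from the neighbors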
Abstract:
A video encoder signals, in a bitstream, a syntax element that indicates whether inter-view/layer reference pictures are ever included in a reference picture list for a current view component/layer representation. A video decoder obtains, from the bitstream, the syntax element that indicates whether inter-view/layer reference pictures are ever included in a reference picture list for a current view component/layer representation. The video decoder decodes the current view component/layer representation.
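The decoder-side consequence of the syntax element can be sketched as a reference-list construction gate; the names here are assumptions:

    def build_reference_picture_list(interview_refs_flag, temporal_refs, interview_refs):
        """When the signaled syntax element indicates that inter-view/layer
        reference pictures are never included, build the list from temporal
        references only; otherwise append the inter-view/layer references.
        (Illustrative; not the normative list-construction process.)"""
        refs = list(temporal_refs)
        if interview_refs_flag:
            refs.extend(interview_refs)
        return refs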
Abstract:
In an example, a method of decoding video data includes determining whether a reference index for a current block corresponds to an inter-view reference picture, and when the reference index for the current block corresponds to the inter-view reference picture, obtaining, from an encoded bitstream, data indicating a view synthesis prediction (VSP) mode of the current block, where the VSP mode for the reference index indicates whether the current block is predicted with view synthesis prediction from the inter-view reference picture.
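The conditional signaling can be sketched as follows; read_flag is an assumed bitstream accessor and the inference-to-zero default is an illustrative choice:

    def parse_vsp_mode(bitstream, ref_idx, is_inter_view_ref):
        """Parse the VSP mode only when the current block's reference index
        corresponds to an inter-view reference picture; otherwise no VSP
        syntax is present for that index."""
        if is_inter_view_ref[ref_idx]:
            return bitstream.read_flag()  # 1: predict via view synthesis
        return 0  # VSP mode not signaled for non-inter-view references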
Abstract:
In one example, a device includes a video coder configured to: determine a first co-located reference picture for generating a first temporal motion vector predictor candidate for predicting a motion vector of a current block; determine a second co-located reference picture for generating a second temporal motion vector predictor candidate for predicting the motion vector of the current block; determine a motion vector predictor candidate list that includes at least one of the first and second temporal motion vector predictor candidates; select a motion vector predictor from the candidate list; and code the motion vector of the current block relative to the selected motion vector predictor.
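A sketch of the candidate-list construction and differential coding, where tmvp_from stands in for the derivation (and any scaling) of a temporal candidate from a co-located block; all names are assumptions:

    def build_mvp_candidate_list(cur_block, colpic_a, colpic_b, tmvp_from):
        """Build a predictor list with up to two temporal candidates, one
        per co-located reference picture. (Sketch; pruning omitted.)"""
        candidates = []
        for colpic in (colpic_a, colpic_b):
            cand = tmvp_from(cur_block, colpic)
            if cand is not None:
                candidates.append(cand)
        return candidates

    def code_mv(mv, predictor):
        """Code the motion vector as a difference from the selected predictor."""
        return (mv[0] - predictor[0], mv[1] - predictor[1])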