Abstract:
In some examples, a method of decoding depth data in a video coding process includes defining a depth prediction unit (PU) of a size greater than 32×32 within a depth coding unit (CU) and generating one or more partitions of the depth PU. The method also includes obtaining residual data for each of the partitions; obtaining prediction data for each of the partitions; and reconstructing each of the partitions based on the residual data and the prediction data for the respective partitions.
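The final reconstruction step described above — combining residual and prediction data for each partition — can be sketched as follows. This is a hypothetical simplification (plain 2-D lists standing in for sample arrays; partition geometry and entropy decoding are omitted), not the actual decoder logic:

```python
def reconstruct_partition(pred, resid, bit_depth=8):
    """Reconstruct one partition of a depth PU by adding the residual
    to the prediction and clipping to the valid sample range."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]

def reconstruct_pu(partitions):
    """Apply the per-partition reconstruction to every (pred, resid)
    pair generated for the depth PU."""
    return [reconstruct_partition(pred, resid) for pred, resid in partitions]
```

Clipping to `[0, 2^bit_depth - 1]` reflects the usual sample-range constraint; the abstract itself does not specify the clipping behavior.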
Abstract:
In an example, a method of decoding video data includes decoding data that indicates a picture order count (POC) reset for a POC value of a first picture of a first layer of multi-layer video data, wherein the first picture is included in an access unit. The example method also includes, based on the data that indicates the POC reset for the POC value of the first picture and prior to decoding the first picture, decrementing POC values of all pictures stored to a decoded picture buffer (DPB) that precede the first picture in coding order, including at least one picture of a second layer of the multi-layer video data.
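A minimal sketch of the cross-layer DPB update described above, assuming the decrement applied to stored pictures equals the resetting picture's pre-reset POC value (a hypothetical simplification; the standard's derivation distinguishes MSB and LSB resets):

```python
from dataclasses import dataclass

@dataclass
class DecodedPicture:
    layer_id: int
    poc: int

def apply_poc_reset(dpb, reset_delta):
    """On a signaled POC reset, decrement the POC value of every picture
    already in the DPB -- across all layers -- so that relative POC
    distances to the resetting picture are preserved."""
    for pic in dpb:
        pic.poc -= reset_delta

# Pictures of two layers precede the resetting picture in coding order.
dpb = [DecodedPicture(layer_id=0, poc=16),
       DecodedPicture(layer_id=1, poc=16),
       DecodedPicture(layer_id=0, poc=17)]
apply_poc_reset(dpb, reset_delta=18)
```

After the update the stored POC values become negative relative to the resetting picture, which then takes POC 0.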
Abstract:
A computing device obtains a Network Abstraction Layer (NAL) unit header of a NAL unit of the multi-layer video data. The NAL unit header comprises a layer identifier syntax element having a value that specifies an identifier of a layer of the NAL unit. The layer identifier syntax element comprises a plurality of bits that represent the value within a defined range of values. For the bitstream to conform to a video coding standard, the value of the layer identifier syntax element must be less than the maximum value of the range of values.
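For concreteness, the HEVC-style two-byte NAL unit header carries a 6-bit layer identifier (`nuh_layer_id`, range 0–63), and the conformance constraint above reserves the maximum value. A sketch of parsing the header and checking that constraint:

```python
def parse_nal_unit_header(two_bytes):
    """Parse a 2-byte HEVC-style NAL unit header.
    Bit layout: forbidden_zero_bit(1) | nal_unit_type(6) |
                nuh_layer_id(6) | nuh_temporal_id_plus1(3)."""
    b0, b1 = two_bytes
    nal_unit_type = (b0 >> 1) & 0x3F
    nuh_layer_id = ((b0 & 0x01) << 5) | (b1 >> 3)
    temporal_id_plus1 = b1 & 0x07
    # Conformance requirement from the abstract: the layer identifier
    # must be less than the maximum of its 6-bit range (63 is reserved).
    if nuh_layer_id >= 63:
        raise ValueError("reserved nuh_layer_id value")
    return nal_unit_type, nuh_layer_id, temporal_id_plus1
```

For example, the bytes `0x40 0x01` decode to NAL unit type 32 (a VPS in HEVC), layer identifier 0, and `nuh_temporal_id_plus1` of 1.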
Abstract:
For each respective coding unit (CU) of a slice of a picture of the video data, a video coder may set, in response to determining that the respective CU is the first CU of a coding tree block (CTB) row of the picture or the respective CU is the first CU of the slice, a derived disparity vector (DDV) to an initial value. Furthermore, the video coder may perform a neighbor-based disparity vector derivation (NBDV) process that attempts to determine a disparity vector for the respective CU. When performing the NBDV process does not identify an available disparity vector for the respective CU, the video coder may determine that the disparity vector for the respective CU is equal to the DDV.
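The DDV maintenance described above can be sketched as follows. The initial value and the update of the DDV with each derived vector are assumptions for illustration (the abstract fixes neither); `nbdv` stands in for the neighbor-based derivation and returns `None` when no disparity vector is available:

```python
from dataclasses import dataclass

@dataclass
class CU:
    is_first_in_ctb_row: bool = False
    is_first_in_slice: bool = False

INITIAL_DV = (0, 0)  # assumed initial value for the DDV

def disparity_vector_for_cu(cu, ddv_state, nbdv):
    """Reset the derived disparity vector (DDV) at the start of each
    CTB row and each slice; run the NBDV process; fall back to the
    stored DDV when NBDV finds no available disparity vector."""
    if cu.is_first_in_ctb_row or cu.is_first_in_slice:
        ddv_state["dv"] = INITIAL_DV
    dv = nbdv(cu)
    if dv is None:
        dv = ddv_state["dv"]
    ddv_state["dv"] = dv  # carry the result forward (assumed update rule)
    return dv
```

Processing CUs in coding order with a shared `ddv_state` dictionary lets a later CU whose NBDV fails inherit the most recent successful derivation.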
Abstract:
In one example, the disclosure is directed to techniques that include, for each prediction unit (PU) of a respective coding unit (CU) of a slice of a picture of the video data, determining at least one disparity value based at least in part on at least one depth value of at least one reconstructed depth sample of at least one neighboring sample. The techniques further include determining at least one disparity vector based at least in part on the at least one disparity value, wherein the at least one disparity vector is for the respective CU for each PU. The techniques further include reconstructing, based at least in part on the at least one disparity vector, a coding block for the respective CU for each PU.
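The depth-to-disparity step above can be sketched with the linear camera-parameter model common in 3D video coding; the choice of the maximum neighboring depth value and the scale/offset/shift parameters are illustrative assumptions, not taken from the abstract:

```python
def depth_to_disparity(depth, scale, offset, shift=8):
    """Convert a reconstructed depth sample to a horizontal disparity
    using the linear model disparity = (depth * scale + offset) >> shift."""
    return (depth * scale + offset) >> shift

def disparity_vector_from_neighbors(neighbor_depths, scale, offset):
    """Derive a disparity vector from at least one neighboring
    reconstructed depth sample (here: the maximum, an assumed choice);
    the vertical component is set to zero."""
    d = max(neighbor_depths)
    return (depth_to_disparity(d, scale, offset), 0)
```

With `scale=2`, `offset=0`, a maximum neighboring depth of 128 yields the disparity vector `(1, 0)`.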
Abstract:
In one example, a video coder is configured to code a value for a syntax element indicating whether at least a portion of a picture order count (POC) value of a picture is to be reset to a value of zero; when the value for the syntax element indicates that the portion of the POC value is to be reset to zero, to reset at least the portion of the POC value such that the portion of the POC value is equal to zero; and to code video data using the reset POC value. Coding video data using the reset POC value may include inter-predicting a block of a subsequent picture relative to the picture, where the block may include a motion parameter that identifies the picture using the reset POC value. The block may be coded using temporal inter-prediction or inter-layer prediction.
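One way to reset only "a portion" of a POC value is to zero its least-significant bits while keeping the most-significant bits, or to zero the full value. A hedged sketch of both options (the bit-width split is an illustrative assumption):

```python
def reset_poc_lsb(poc, lsb_bits):
    """Reset the least-significant portion of a POC value to zero,
    keeping the MSB portion intact."""
    mask = (1 << lsb_bits) - 1
    return poc & ~mask

def reset_poc_full(poc):
    """Reset the entire POC value to zero."""
    return 0
```

For example, a POC of 45 (`0b101101`) with a 4-bit LSB portion resets to 32 (`0b100000`); a full reset yields 0. A subsequent picture's motion parameters would then identify this picture by the reset value.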
Abstract:
In an example, a method of coding video data includes determining a first depth value of a depth look up table (DLT), where the first depth value is associated with a first pixel of the video data. The method also includes determining a second depth value of the DLT, where the second depth value is associated with a second pixel of the video data. The method also includes coding the DLT, including coding the second depth value relative to the first depth value.
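Coding one DLT entry relative to another amounts to differential coding. A minimal sketch, with entropy coding of the deltas omitted:

```python
def encode_dlt(depth_values):
    """Differentially code a depth look-up table: the first entry is
    coded as-is, each subsequent entry as a delta from its predecessor."""
    deltas = [depth_values[0]]
    for prev, cur in zip(depth_values, depth_values[1:]):
        deltas.append(cur - prev)
    return deltas

def decode_dlt(deltas):
    """Invert the differential coding to recover the depth values."""
    values = [deltas[0]]
    for d in deltas[1:]:
        values.append(values[-1] + d)
    return values
```

Because DLT entries are sorted, the deltas are small non-negative numbers, which is what makes relative coding cheaper than coding each depth value directly.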
Abstract:
Techniques are described for deriving a disparity vector for a current block based on a disparity motion vector of a neighboring block in a 3D-AVC video coding process. The disparity vector derivation allows for texture-first coding where a depth view component of a dependent view is coded subsequent to the coding of the corresponding texture component of the dependent view.
Abstract:
A video coder signals, in a bitstream, a syntax element that indicates whether inter-view/layer reference pictures are ever included in a reference picture list for a current view component/layer representation. A video decoder obtains, from the bitstream, the syntax element that indicates whether inter-view/layer reference pictures are ever included in a reference picture list for a current view component/layer representation. The video decoder decodes the current view component/layer representation.
Abstract:
Techniques for encapsulating video streams containing multiple coded views in a media file are described herein. In one example, a method includes parsing a track of multiview video data, wherein the track includes one or more views, including only one of a texture view of a particular view and a depth view of the particular view. The method further includes parsing a track reference to determine a dependency of the track on a referenced track indicated in the track reference. Track reference types include ‘deps’, which indicates that the track includes the depth view of the particular view and the referenced track includes the texture view; ‘tref’, which indicates that the track depends on the texture view stored in the referenced track; and ‘dref’, which indicates that the track depends on the depth view stored in the referenced track.
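The three track reference types can be summarized as a lookup from four-character code to dependency semantics, as a sketch of what a file parser would resolve after reading a track reference box:

```python
# Semantics of the track reference types named in the abstract.
TRACK_REF_SEMANTICS = {
    'deps': "track carries the depth view; referenced track carries the texture view",
    'tref': "track depends on the texture view stored in the referenced track",
    'dref': "track depends on the depth view stored in the referenced track",
}

def describe_track_dependency(ref_type):
    """Map a parsed track-reference four-character code to its meaning."""
    return TRACK_REF_SEMANTICS.get(ref_type, "unknown track reference type")
```

A parser would call this after extracting the reference type from the track's reference box to decide which referenced track must be processed first.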