Abstract:
The techniques of this disclosure generally relate to coding a block in a depth view component using motion information for the corresponding block in a texture view component. In some examples, the techniques may use this motion information for coding even when the spatial resolution of the texture view component differs from the spatial resolution of the depth view component. Among the various IVMP techniques described in this disclosure are IVMP techniques for use in coding scenarios where a partition of a depth view macroblock (MB) corresponds to a texture view MB that is either intra coded or partitioned into four partitions.
Abstract:
A video encoder generates a first network abstraction layer (NAL) unit. The first NAL unit contains a first fragment of a parameter set associated with video data. The video encoder also generates a second NAL unit. The second NAL unit contains a second fragment of the parameter set. A video decoder may receive a bitstream that includes the first and second NAL units. The video decoder decodes, based at least in part on the parameter set, one or more coded pictures of the video data.
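The fragmentation described above can be sketched as follows; the tuple-based NAL unit representation and the `FRAGMENT_NAL_TYPE` code are illustrative assumptions, not the actual bitstream syntax.

```python
# Sketch: splitting a parameter set across NAL units and reassembling it.
# The (nal_type, payload) tuple and the type code below are hypothetical.

FRAGMENT_NAL_TYPE = 48  # assumed NAL unit type for a parameter-set fragment

def fragment_parameter_set(payload: bytes, max_fragment_size: int):
    """Split a parameter-set payload into NAL units of bounded size."""
    return [
        (FRAGMENT_NAL_TYPE, payload[i:i + max_fragment_size])
        for i in range(0, len(payload), max_fragment_size)
    ]

def reassemble_parameter_set(nal_units):
    """Concatenate fragment payloads, in bitstream order, into one parameter set."""
    return b"".join(data for nal_type, data in nal_units
                    if nal_type == FRAGMENT_NAL_TYPE)
```

On the decoder side, the fragments are concatenated in the order they appear in the bitstream before the parameter set is parsed and used to decode coded pictures.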
Abstract:
According to aspects of this disclosure, a device for decoding video data includes a memory configured to store the video data and a video decoder comprising one or more processors configured to determine that a current block of the video data is to be decoded using a 1D dictionary mode; receive, for a current pixel of the current block, a first syntax element indicating a starting location of reference pixels and a second syntax element identifying a number of reference pixels; based on the first syntax element and the second syntax element, locate a plurality of luma samples corresponding to the reference pixels; based on the first syntax element and the second syntax element, locate a plurality of chroma samples corresponding to the reference pixels; and copy the plurality of luma samples and the plurality of chroma samples to decode the current block.
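The copy operation described above can be sketched as a run copy over 1-D sample buffers. This is a simplified illustration: the plane layout (here flat lists in scan order, with 4:4:4 sampling so luma and chroma share indexing) and the function name are assumptions.

```python
def decode_1d_run(luma, cb, cr, dest, start, length):
    """Copy `length` previously reconstructed samples, beginning at the
    signalled starting location `start`, to position `dest` in each plane.
    Buffers are 1-D scan-order lists; 4:4:4 sampling is assumed so the same
    indices locate both the luma and the chroma samples of the reference pixels."""
    for i in range(length):
        luma[dest + i] = luma[start + i]   # luma samples of the reference pixels
        cb[dest + i] = cb[start + i]       # corresponding chroma samples
        cr[dest + i] = cr[start + i]
```

Because samples are copied one at a time in order, an overlapping run (where `start` lies close behind `dest`) naturally produces a repeating pattern, as in LZ-style copies.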
Abstract:
In one example, a device for coding video data includes a memory comprising a decoded picture buffer (DPB) configured to store video data, and a video coder configured to code data representative of a value for a picture order count (POC) resetting period identifier, wherein the data is included in a slice segment header for a slice associated with a coded picture of a layer of video data, and wherein the value of the POC resetting period identifier indicates a POC resetting period including the coded picture, and to reset at least part of a POC value for the coded picture in the POC resetting period in the layer and POC values for one or more pictures in the layer that are currently stored in the DPB.
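The reset operation can be sketched as follows. This is a deliberately simplified illustration: it rebases whole POC values by a single delta, whereas an actual coder resets the most-significant part of the POC (and optionally the least-significant part) as signalled; the function and parameter names are assumptions.

```python
def apply_poc_reset(current_poc, dpb_pocs):
    """On entering a new POC resetting period, rebase the current picture's
    POC and shift the POC values of pictures already stored in the DPB for
    the same layer by the same delta, so relative output order is preserved.
    Simplified: a real codec resets the MSB (and optionally LSB) portion of
    the POC rather than the whole value."""
    delta = current_poc
    new_dpb_pocs = [poc - delta for poc in dpb_pocs]
    return current_poc - delta, new_dpb_pocs
```

Shifting the stored pictures' POC values by the same delta keeps output (display) order consistent across the reset boundary.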
Abstract:
An example method of decoding video data includes receiving encoded video data representing a parameter set, and receiving, in the encoded video data, a syntax element indicating whether the parameter set includes two or more extension syntax structures. The method may further include, in the case that the syntax element indicates that the parameter set includes the two or more extension syntax structures, receiving a corresponding syntax element for each of two or more corresponding coding modes, where the corresponding syntax element indicates whether or not the parameter set includes a respective extension syntax structure for the corresponding coding mode, and decoding the encoded video data corresponding to the parameter set.
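The two-level signalling described above can be sketched as flag parsing. The syntax-element names and the bit-iterator interface are illustrative assumptions, not the normative syntax.

```python
def parse_extension_flags(bit_source, coding_modes):
    """Parse the extension-presence signalling sketched in the abstract:
    one syntax element indicates whether two or more extension syntax
    structures are present; if so, one flag per coding mode indicates
    whether that mode's extension syntax structure is included.
    `bit_source` is any iterable yielding bits (0/1); names are illustrative."""
    bits = iter(bit_source)
    extensions_present_flag = next(bits)      # parameter set has >= 2 extensions?
    present = {}
    if extensions_present_flag:
        for mode in coding_modes:
            present[mode] = bool(next(bits))  # one presence flag per coding mode
    return present
```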
Abstract:
The example techniques of this disclosure are directed to default construction techniques for the construction of a combined reference picture list, and default mapping techniques for the combined reference picture list. In some examples, a video coder may construct first and second reference picture lists from frame number values, and construct the combined reference picture list from the frame number values of the first and second reference picture lists. In some examples, a video coder may construct first and second reference picture lists from picture order count (POC) values, and construct the combined reference picture list from the POC values of the first and second reference picture lists. In some examples, a video coder may construct a combined reference picture list from received information for the construction, and map the pictures of the combined reference picture list to one of a first or second reference picture list.
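One common default construction alternates entries from the first and second reference picture lists while skipping duplicates; a minimal sketch of that idea, with pictures identified here by their POC values (the same sketch applies to frame number values), is shown below. The function name and list representation are assumptions.

```python
from itertools import zip_longest

def default_combined_list(list0, list1):
    """Default combined-list construction: alternate entries from the first
    (list0) and second (list1) reference picture lists, identifying each
    picture by its POC value and skipping any picture already added."""
    combined = []
    for poc0, poc1 in zip_longest(list0, list1):
        for poc in (poc0, poc1):
            if poc is not None and poc not in combined:
                combined.append(poc)
    return combined
```

Because duplicates are skipped, a picture that appears in both input lists occupies only one entry in the combined list.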
Abstract:
In general, techniques are described for lookup table coding. A device comprising one or more processors and a memory may be configured to perform the techniques. The processors are configured to receive at least one difference table including a set of values, each value of which may or may not be included in a reference lookup table, and generate a current lookup table based on the reference lookup table and the difference table. The current lookup table may include at least one of a value from the difference table that is not included in the reference table or a value from the reference table that is not included in the difference table. The one or more processors may then decode the video data based on a set of values of the current lookup table. The memory may be configured to store the current lookup table.
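The update rule described above (keep reference values not named in the difference table, add difference values not already in the reference table) behaves like an order-preserving exclusive-or of the two tables; a minimal sketch, with assumed names and plain Python lists standing in for the coded tables:

```python
def build_current_table(reference, difference):
    """Generate the current lookup table from a reference table and a
    difference table: values of the reference table not listed in the
    difference table are kept, and values of the difference table not
    already in the reference table are added (order preserved)."""
    kept = [v for v in reference if v not in difference]
    added = [v for v in difference if v not in reference]
    return kept + added
```

A value listed in both tables is thus removed, so the difference table can both insert and delete entries relative to the reference table.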
Abstract:
In accordance with one or more techniques of this disclosure, a video coder may divide a current prediction unit (PU) into a plurality of sub-PUs. Each of the sub-PUs may have a size smaller than a size of the PU. Furthermore, the current PU may be in a depth view of the multi-view video data. For each respective sub-PU from the plurality of sub-PUs, the video coder may identify a reference block for the respective sub-PU. The reference block may be co-located with the respective sub-PU in a texture view corresponding to the depth view. The video coder may use motion parameters of the identified reference block to determine motion parameters for the respective sub-PU.
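The per-sub-PU inheritance can be sketched as a loop over the sub-PU grid; the `texture_mv(x, y)` helper, which returns the motion parameters of the texture block covering sample (x, y), is an assumed stand-in for the co-located lookup.

```python
def inherit_sub_pu_motion(pu_x, pu_y, pu_w, pu_h, sub_size, texture_mv):
    """Divide a depth-view PU into sub-PUs of size `sub_size` and, for each
    sub-PU, determine its motion parameters from the co-located reference
    block in the corresponding texture view.  `texture_mv(x, y)` is an
    assumed helper returning the motion parameters covering sample (x, y)."""
    motion = {}
    for sy in range(pu_y, pu_y + pu_h, sub_size):
        for sx in range(pu_x, pu_x + pu_w, sub_size):
            motion[(sx, sy)] = texture_mv(sx, sy)  # co-located texture block
    return motion
```

Each sub-PU can thereby carry different motion parameters even though they all belong to the same depth PU.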
Abstract:
In one example, a device for coding video data includes a video coder configured to code motion information for a block of multiview video data, wherein the motion information includes a reference index that identifies a reference picture comprising a source for backward-warping view synthesis prediction (BVSP), perform BVSP on a portion of the reference picture to produce a BVSP reference block, and predict the block using the BVSP reference block.
Abstract:
Systems, methods, and devices for coding multilayer video data are disclosed that may include encoding, decoding, transmitting, or receiving a non-entropy encoded set of profile, tier, and level syntax structures, potentially at a position within a video parameter set (VPS) extension. The systems, methods, and devices may refer to one of the profile, tier, and level syntax structures for each of a plurality of output layer sets. The systems, methods, and devices may encode or decode video data of one of the output layer sets based on information from the profile, tier, and level syntax structure referred to for the output layer set.
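The referencing scheme can be sketched as index resolution against the shared list of structures signalled once in the VPS extension; the tuple representation of a profile/tier/level structure below is an illustrative assumption.

```python
def ptl_for_output_layer_sets(ptl_structures, ptl_index_per_ols):
    """Resolve, for each output layer set, the profile/tier/level syntax
    structure it refers to.  `ptl_structures` is the shared list signalled
    in the VPS extension; `ptl_index_per_ols` gives one index per output
    layer set.  The (profile, tier, level) tuple form is illustrative."""
    return [ptl_structures[i] for i in ptl_index_per_ols]
```

Signalling the structures once and referring to them by index avoids repeating identical profile/tier/level information for every output layer set.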