Abstract:
A device for processing video data includes a memory configured to store at least a portion of a bitstream of multi-layer video data and one or more processors configured to generate a first video coding layer (VCL) network abstraction layer (NAL) unit for a first picture of an access unit, the first VCL NAL unit comprising a first slice type; generate a second VCL NAL unit for a second picture of the access unit, the second VCL NAL unit comprising a second slice type; and generate an access unit delimiter (AUD) NAL unit based on the first and second slice types.
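The derivation described above can be sketched in Python. This is a minimal illustration assuming HEVC-style AUD `pic_type` semantics (0 = only I slices in the access unit, 1 = I and P slices, 2 = I, P, and B slices); the function name is invented for this sketch.

```python
def aud_pic_type(slice_types):
    """Derive an AUD pic_type from the slice types of all VCL NAL units
    in an access unit (HEVC-style semantics, used here for illustration):
    0 -> only I slices, 1 -> I and P slices, 2 -> I, P, and B slices."""
    if "B" in slice_types:
        return 2
    if "P" in slice_types:
        return 1
    return 0

# First picture contributes an I slice, second picture a P slice:
print(aud_pic_type(["I", "P"]))  # 1
```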
Abstract:
Techniques and systems are provided for encoding video data. For example, a method of encoding video data includes generating an encoded video bitstream comprising multiple layers. The encoded video bitstream includes a parameter set defining parameters of the encoded video bitstream. The method further includes determining one or more parameters of the parameter set that include information describing a first sub-bitstream of the encoded video bitstream that includes one or more layers with video data and information describing a second sub-bitstream of the encoded video bitstream that includes one or more layers with no video data. The method further includes performing a bitstream conformance check on the first sub-bitstream or the second sub-bitstream based on whether at least one layer of the first sub-bitstream or the second sub-bitstream includes video data.
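The selection logic above can be sketched as follows. This is an illustrative Python fragment, not the claimed method itself; the function and data-structure names are assumptions, and each sub-bitstream is modeled simply as a list of per-layer flags indicating whether the layer carries video data.

```python
def select_conformance_targets(sub_bitstreams):
    """Return the sub-bitstreams on which a conformance check should be
    performed: those with at least one layer that contains video data.
    sub_bitstreams maps a name to a list of per-layer has-video flags."""
    return [name for name, layers in sub_bitstreams.items() if any(layers)]

# 'base' has layers with video data; 'aux' has only layers with no video data.
streams = {"base": [True, True], "aux": [False, False]}
print(select_conformance_targets(streams))  # ['base']
```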
Abstract:
A multi-layer video decoder is configured to determine, based on a list of triplet entries, whether the multi-layer video decoder is capable of decoding a bitstream that comprises an encoded representation of the multi-layer video data. The number of triplet entries in the list is equal to a number of single-layer decoders in the multi-layer video decoder. Each respective triplet entry in the list of triplet entries indicates a profile, a tier, and a level for a respective single-layer decoder in the multi-layer video decoder. The multi-layer video decoder is configured such that, based on the multi-layer video decoder being capable of decoding the bitstream, the multi-layer video decoder decodes the bitstream.
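A capability check of this kind can be sketched in Python. This is a simplified greedy illustration under stated assumptions, not the normative decoding-capability test: it assumes each layer of the bitstream must be matched to a distinct single-layer decoder whose profile and tier match and whose level is at least the required level. All names are invented for the sketch.

```python
from typing import NamedTuple

class Triplet(NamedTuple):
    """One entry of the list: profile, tier, and level of a single-layer
    decoder (or, symmetrically, a requirement for one layer)."""
    profile: str
    tier: str
    level: float

def can_decode(decoder_triplets, layer_requirements):
    """Greedy sketch: capable if every required layer can be assigned a
    distinct single-layer decoder with matching profile/tier and a
    sufficient level."""
    available = list(decoder_triplets)
    # Match the most demanding layers first so the greedy pass is fair.
    for req in sorted(layer_requirements, key=lambda r: -r.level):
        match = next((d for d in available
                      if d.profile == req.profile
                      and d.tier == req.tier
                      and d.level >= req.level), None)
        if match is None:
            return False
        available.remove(match)
    return True
```

The number of triplet entries equals the number of single-layer decoders, so a two-layer bitstream cannot be matched against a list with a single entry.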
Abstract:
In one example, a device for coding video data includes a memory comprising a decoded picture buffer (DPB) configured to store video data, and a video coder configured to code data representative of a value for a picture order count (POC) resetting period identifier, wherein the data is included in a slice segment header for a slice associated with a coded picture of a layer of video data, and wherein the value of the POC resetting period identifier indicates a POC resetting period including the coded picture, and reset at least part of a POC value for the coded picture in the POC resetting period in the layer and POC values for one or more pictures in the layer that are currently stored in the DPB.
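The effect of a full POC reset can be sketched as follows. This is an illustrative Python fragment under a simplifying assumption (a full reset of both MSB and LSB, so the current picture's POC becomes 0); the real scheme also supports partial (MSB-only) resets, and the function name is invented.

```python
def reset_poc(current_poc, dpb_pocs):
    """Full POC reset, sketched: the current picture's POC becomes 0 and
    every picture of the same layer already in the DPB is shifted by the
    same delta, so relative output order within the layer is preserved."""
    delta = current_poc
    return 0, [poc - delta for poc in dpb_pocs]

new_poc, new_dpb = reset_poc(260, [256, 257, 258])
print(new_poc, new_dpb)  # 0 [-4, -3, -2]
```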
Abstract:
Systems, methods, and devices for coding multilayer video data are disclosed that may include, encoding, decoding, transmitting, or receiving a non-entropy encoded set of profile, tier, and level syntax structures, potentially at a position within a video parameter set (VPS) extension. The systems, methods, and devices may refer to one of the profile, tier, and level syntax structures for each of a plurality of output layer sets. The systems, methods, and devices may encode or decode video data of one of the output layer sets based on information from the profile, tier, and level syntax structure referred to for the output layer set.
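The reference mechanism above can be sketched in Python: a list of profile/tier/level (PTL) structures signaled once, plus one index per output layer set pointing into that list. The structure contents and names here are illustrative placeholders, not values from any actual VPS.

```python
# Non-entropy-coded PTL syntax structures, as a VPS extension might carry them.
ptl_structures = [
    {"profile": "Main", "tier": "Main", "level": 4.0},
    {"profile": "Scalable Main", "tier": "Main", "level": 5.0},
]

# One PTL index per output layer set (OLS), referring into the list above.
ols_ptl_idx = [0, 1, 1]

def ptl_for_output_layer_set(ols_index):
    """Resolve the profile/tier/level structure referred to by an OLS."""
    return ptl_structures[ols_ptl_idx[ols_index]]

print(ptl_for_output_layer_set(2)["profile"])  # Scalable Main
```

Coding the video data of an output layer set then proceeds using the constraints of the resolved structure.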
Abstract:
A method of coding video data includes upsampling at least a portion of a reference layer picture to an upsampled picture having an upsampled picture size. The upsampled picture size has a horizontal upsampled picture size and a vertical upsampled picture size. At least one of the horizontal or vertical upsampled picture sizes may be different than a horizontal picture size or vertical picture size, respectively, of an enhancement layer picture. In addition, position information associated with the upsampled picture may be signaled. An inter-layer reference picture may be generated based on the upsampled picture and the position information.
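The two stages described above can be sketched in Python: upsampling to a signaled size that need not match the enhancement-layer picture, then placement at a signaled position offset to form the inter-layer reference. Nearest-neighbour sampling stands in for the real interpolation filter, and all names are invented for this sketch.

```python
def upsample_nearest(ref, up_w, up_h):
    """Nearest-neighbour upsampling of a reference-layer picture (a list
    of rows) to an upsampled picture of size up_w x up_h."""
    h, w = len(ref), len(ref[0])
    return [[ref[(y * h) // up_h][(x * w) // up_w] for x in range(up_w)]
            for y in range(up_h)]

def place_in_reference(up, el_w, el_h, off_x, off_y, pad=0):
    """Build an inter-layer reference picture of enhancement-layer size
    el_w x el_h, placing the upsampled picture at the signalled offset."""
    out = [[pad] * el_w for _ in range(el_h)]
    for y, row in enumerate(up):
        for x, sample in enumerate(row):
            if 0 <= y + off_y < el_h and 0 <= x + off_x < el_w:
                out[y + off_y][x + off_x] = sample
    return out

up = upsample_nearest([[1, 2], [3, 4]], 4, 4)   # 2x2 -> 4x4
ilr = place_in_reference(up, 6, 6, off_x=1, off_y=1)
```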
Abstract:
A device for decoding encoded mesh data is configured to receive, in a bitstream of the encoded mesh data, one or more syntax elements; determine an offset value based on the one or more syntax elements; determine a set of transform coefficients; apply the offset value to the set of transform coefficients to determine a set of updated transform coefficients; inverse transform the set of updated transform coefficients to determine a set of displacement vectors; and determine a decoded mesh based on the set of displacement vectors.
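The decode path above can be sketched in Python. A trivial scaling stands in for the actual inverse transform used for mesh displacements; the function names and the sample transform are assumptions made for illustration only.

```python
def decode_displacements(coeffs, offset, inverse_transform):
    """Sketch of the decode path: add the signalled offset value to each
    transform coefficient, then inverse-transform the updated
    coefficients to obtain displacement values."""
    updated = [c + offset for c in coeffs]
    return inverse_transform(updated)

# Hypothetical inverse transform (scaling by 1/2), purely for illustration.
disp = decode_displacements([4, -2, 6], offset=2,
                            inverse_transform=lambda cs: [c / 2 for c in cs])
print(disp)  # [3.0, 0.0, 4.0]
```

The decoded mesh is then reconstructed by applying the resulting displacement vectors to a base mesh.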
Abstract:
A device to code point cloud data includes a memory configured to store data representing points of a point cloud, and one or more processors implemented in circuitry and configured to: determine height values of points in a point cloud; code a data structure including data that represents a top threshold and a bottom threshold; classify points having height values between the top threshold and the bottom threshold into a set of ground points; and classify points having height values above the top threshold or below the bottom threshold into a set of object points. The one or more processors code the ground points and the object points according to the classifications. The one or more processors code a geometry data unit header that includes data that overrides or refines the data of the data structure for at least one of the top threshold or the bottom threshold.
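The classification rule can be sketched directly in Python. This is an illustrative fragment: the header override is modeled with optional arguments, and the names are invented for the sketch.

```python
def classify_points(points, top, bottom, header_top=None, header_bottom=None):
    """Split points into ground and object sets by height value. The
    thresholds from the coded data structure may be overridden per
    geometry data unit header, modeled here by the header_* arguments."""
    if header_top is not None:
        top = header_top
    if header_bottom is not None:
        bottom = header_bottom
    ground = [p for p in points if bottom <= p["z"] <= top]
    objects = [p for p in points if p["z"] > top or p["z"] < bottom]
    return ground, objects

pts = [{"z": 0}, {"z": 5}, {"z": -3}]
ground, objects = classify_points(pts, top=2, bottom=-1)
print(len(ground), len(objects))  # 1 2
```

The two sets can then be coded separately, each with parameters suited to its geometry.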
Abstract:
Techniques are described for decoding video data. A video decoder may determine chroma blocks in a chroma quantization group (QG) of the video data, determine a quantization parameter predictor that is the same for each of the chroma blocks of the chroma QG, determine an offset value that is the same for two or more of the chroma blocks of the chroma QG, determine a quantization parameter value for each of the two or more of the chroma blocks in the chroma QG based on the quantization parameter predictor and the offset value, inverse quantize coefficients of one or more residual blocks for the chroma blocks based on the determined quantization parameter value, generate the one or more residual blocks based on the inverse quantized coefficients, and reconstruct the chroma blocks based on the one or more residual blocks.
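The QP derivation and inverse quantization can be sketched as follows. The step-size formula is a simplified stand-in (step size doubling every 6 QP, as in common codec designs) rather than the exact normative table, and the function names are invented.

```python
def chroma_block_qp(qp_predictor, offset):
    """QP for a chroma block in the quantization group: one predictor
    shared by every block of the QG, plus an offset value shared by two
    or more of the blocks."""
    return qp_predictor + offset

def inverse_quantize(levels, qp):
    """Toy inverse quantization: scale each coefficient level by a step
    size derived from QP (step doubles every 6 QP; real codecs use a
    table of scale factors plus a shift)."""
    step = 2 ** (qp / 6)
    return [lvl * step for lvl in levels]

qp = chroma_block_qp(qp_predictor=26, offset=2)
print(qp)  # 28
```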
Abstract:
A method of processing a point cloud includes determining that angular mode is enabled for encoding or decoding a current point of points in the point cloud, and parsing or signaling information for an azimuthal angle residual value for the current point independent of a radius value of the current point or a radius value of a previous point of the point cloud that is previous to the current point in decoding order. The azimuthal angle residual value is based on a difference between an azimuthal angle value of the current point and a predictor azimuthal angle value of the current point, and the azimuthal angle value of the current point is indicative of an azimuthal angle of the current point based on a laser used to capture the points of the point cloud.
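The residual computation can be sketched in Python. Note that the residual here is a plain angle difference, wrapped to (-pi, pi], and takes no radius as input, matching the radius-independent parsing described above; the function name is invented for the sketch.

```python
import math

def azimuth_residual(phi, phi_pred):
    """Azimuthal angle residual for the current point: the difference
    between its azimuthal angle and the predictor azimuthal angle,
    wrapped to (-pi, pi]. No radius value is involved."""
    residual = phi - phi_pred
    while residual <= -math.pi:
        residual += 2 * math.pi
    while residual > math.pi:
        residual -= 2 * math.pi
    return residual

print(azimuth_residual(1.0, 0.5))  # 0.5
```

The decoder reconstructs the azimuthal angle by adding the parsed residual back to the predictor.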