Abstract:
A video processing device can be configured to process one or more initial syntax elements for a parameter set associated with a video bitstream; receive in the parameter set an offset syntax element that identifies syntax elements to be skipped within the parameter set; and, based on the offset syntax element, skip those syntax elements and process one or more additional syntax elements that follow the skipped syntax elements in the parameter set.
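As a rough illustration of this mechanism, the sketch below parses a toy parameter set. The one-byte fields and byte-count offset are assumptions made purely for illustration; the actual HEVC parameter-set syntax uses bit-level descriptors.

```python
# Illustrative sketch only: the one-byte fields and byte-count offset are
# assumptions, not the actual parameter-set syntax.
def parse_parameter_set(data: bytes) -> dict:
    pos = 0
    ps_id = data[pos]; pos += 1        # initial syntax element
    profile_idc = data[pos]; pos += 1  # initial syntax element
    skip_len = data[pos]; pos += 1     # offset syntax element: bytes to skip
    pos += skip_len                    # skip elements without parsing them
    extension_flag = data[pos]         # additional element after the skip
    return {"ps_id": ps_id, "profile_idc": profile_idc,
            "extension_flag": extension_flag}
```

A device that does not understand (or does not need) the skipped syntax elements can still reach and process the elements that follow them.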
Abstract:
In one example, a device for coding video data includes a video coder configured to code an intra random access point (IRAP) picture of a partially aligned IRAP access unit of video data, and code data that indicates, when performing random access from the partially aligned IRAP access unit, at least one picture of a video coding layer that is not correctly decodable. When the video coder comprises a video decoder, the video decoder may skip decoding of the pictures that are not correctly decodable, assuming random access has been performed starting from the partially aligned IRAP access unit.
Abstract:
Techniques described herein for coding video data include techniques for coding pictures partitioned into tiles, in which each of a plurality of tiles in a picture is assigned to one of a plurality of tile groups. One example method for coding video data comprising a picture that is partitioned into a plurality of tiles comprises coding video data in a bitstream, and coding, in the bitstream, information that indicates one of a plurality of tile groups to which each of the plurality of tiles is assigned. The techniques for grouping tiles described herein may facilitate improved parallel processing for both encoding and decoding of video bitstreams, improved error resilience, and more flexible region of interest (ROI) coding.
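Since the bitstream codes an explicit group index per tile, any mapping of tiles to groups can be signaled. One plausible assignment policy (an assumption here, not something the text mandates) is round-robin, which spreads each group's tiles across the picture so that parallel workers each touch distributed regions:

```python
def assign_tiles_round_robin(num_tiles: int, num_groups: int) -> list[int]:
    # Hypothetical assignment policy: tile t goes to group t mod num_groups.
    # The bitstream would simply code the resulting group index per tile.
    return [t % num_groups for t in range(num_tiles)]
```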
Abstract:
In an example, a method of coding video data includes determining a location of a reference sample associated with a reference picture of video data based on one or more scaled offset values, where the reference picture is included in a first layer of a multi-layer bitstream and the one or more scaled offset values indicate a difference in scale between the first layer and a second, different layer. The method also includes determining a location of a collocated reference block of video data in the first layer based on the location of the reference sample, and coding a current block of video data in the second layer relative to the collocated reference block.
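The sample-location mapping can be sketched as below. The 1.16 fixed-point scale factors and the exact formula are illustrative assumptions, not the normative derivation; they show only the general shape of mapping a position in the second (e.g., enhancement) layer to the collocated position in the first (reference) layer using scaled offsets.

```python
def reference_sample_location(x_cur: int, y_cur: int,
                              scale_x: int, scale_y: int,
                              off_x: int, off_y: int) -> tuple[int, int]:
    # scale_x/scale_y are assumed 1.16 fixed-point ratios between the two
    # layers (65536 == 1.0); off_x/off_y are the scaled offset values.
    x_ref = ((x_cur - off_x) * scale_x) >> 16
    y_ref = ((y_cur - off_y) * scale_y) >> 16
    return x_ref, y_ref
```

For 2x spatial scalability the enhancement-to-reference ratio is 0.5, i.e., a scale factor of 32768 in this fixed-point convention.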
Abstract:
Techniques are described for signaling decoding unit identifiers for decoding units of an access unit. A video decoder determines which network abstraction layer (NAL) units are associated with which decoding units based on the decoding unit identifiers. Techniques are also described for including one or more copies of supplemental enhancement information (SEI) messages in an access unit.
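The association step amounts to grouping NAL units by their signaled identifier. The sketch below assumes a simplified `(du_id, payload)` tuple representation of a NAL unit, which is an illustrative simplification rather than the actual NAL unit structure:

```python
def group_nal_units_by_du(nal_units):
    # Each entry is assumed to be (du_id, nal_payload); the decoder uses
    # the signaled decoding unit identifier to associate NAL units with
    # decoding units, preserving bitstream order within each unit.
    dus = {}
    for du_id, payload in nal_units:
        dus.setdefault(du_id, []).append(payload)
    return dus
```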
Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice.
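The selection logic described above can be sketched as follows. The function and parameter names, and the dict representation of a parameter set, are assumptions for illustration; `present_in_both` models the first syntax element and `override_flag` the second, which is only meaningful (coded) when both sets are present:

```python
def select_deblocking_params(pps_params, slice_params,
                             present_in_both, override_flag=None):
    # First syntax element: are deblocking parameters present in both the
    # picture layer parameter set and the slice header?
    if present_in_both:
        # Second syntax element picks which set defines the filter.
        return slice_params if override_flag else pps_params
    # Exactly one set is present: no second flag is coded in the bitstream.
    return slice_params if slice_params is not None else pps_params
```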
Abstract:
Provided are techniques and systems for generating an output file for multi-layer video data, where the output file is generated according to a file format. Techniques and systems for processing an output file generated according to the file format are also provided. The multi-layer video data may be, for example, video data encoded using an L-HEVC video encoding algorithm. The file format may be based on the ISO base media file format (ISOBMFF). The output file may include a plurality of tracks. Generating the output file may include generating the output file in accordance with a restriction. The restriction may be that each track of the plurality of tracks comprises at most one layer from the multi-layer video data. The output file may also be generated according to a restriction that each of the plurality of tracks includes neither an aggregator nor an extractor.
Abstract:
A video coding device, such as a video decoder, may be configured to derive at least one of a coded picture buffer (CPB) arrival time and a CPB nominal removal time for an access unit (AU) at both an access unit level and a sub-picture level, regardless of a value of a syntax element that defines whether a decoding unit (DU) is the entire AU. The video coding device may further be configured to determine a removal time of the AU based at least in part on at least one of the CPB arrival time and the CPB nominal removal time, and decode video data of the AU based at least in part on the removal time.
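As a simplified, HRD-style timing sketch (an assumption, not the normative timing equations): the first AU is removed after an initial CPB delay, and each AU's nominal removal time is offset from it by a signaled removal delay, in units of a clock tick.

```python
def au_removal_times(initial_delay_ticks, cpb_removal_delays, clock_tick):
    # Simplified model: nominal removal time of AU n is the initial CPB
    # removal delay plus that AU's signaled cpb_removal_delay, both
    # expressed in clock ticks. Names are illustrative assumptions.
    t0 = initial_delay_ticks * clock_tick
    return [t0 + d * clock_tick for d in cpb_removal_delays]
```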
Abstract:
In one example, a device includes a video coder (e.g., a video encoder or a video decoder) configured to determine that a block of video data is to be coded in accordance with a three-dimensional extension of High Efficiency Video Coding (HEVC), and, based on the determination that the block is to be coded in accordance with the three-dimensional extension of HEVC, disable temporal motion vector prediction for coding the block. The video coder may be further configured to, when the block comprises a bi-predicted block (B-block), determine that the B-block refers to a predetermined pair of pictures in a first reference picture list and a second reference picture list, and, based on the determination that the B-block refers to the predetermined pair, equally weight contributions from the pair of pictures when calculating a predictive block for the block.
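Equal weighting of the two reference contributions amounts to a per-sample rounded average. The sketch below uses flat lists of samples as an illustrative simplification of a 2-D prediction block:

```python
def equal_weight_bipred(block0, block1):
    # Equal contribution from both reference pictures: per-sample average
    # with rounding, using the common (a + b + 1) >> 1 integer form.
    return [(a + b + 1) >> 1 for a, b in zip(block0, block1)]
```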