Abstract:
In an example, a method of decoding video data includes decoding data that indicates a picture order count (POC) reset for a POC value of a first picture of a first layer of multi-layer video data, wherein the first picture is included in an access unit. The example method also includes, based on the data that indicates the POC reset for the POC value of the first picture and prior to decoding the first picture, decrementing the POC values of all pictures stored in a decoded picture buffer (DPB) that precede the first picture in coding order, including at least one picture of a second layer of the multi-layer video data.
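The decrement step lends itself to a short illustration. The C++ sketch below assumes a hypothetical DpbPicture record and a pre-computed pocDecrement value; the names are illustrative and not taken from the abstract.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical DPB entry: layer id, current POC value, and coding-order index.
struct DpbPicture {
    int layerId;
    int32_t poc;
    uint64_t codingOrder;
};

// On a POC reset signaled for resetPic, decrement the POC value of every picture
// already in the DPB that precedes it in coding order, across all layers.
void applyPocReset(std::vector<DpbPicture>& dpb,
                   const DpbPicture& resetPic,
                   int32_t pocDecrement) {
    for (DpbPicture& pic : dpb) {
        if (pic.codingOrder < resetPic.codingOrder) {
            pic.poc -= pocDecrement;  // also applies to pictures of other layers
        }
    }
}
```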
Abstract:
A video coder can be configured to receive, in a video parameter set, one or more syntax elements that include information related to hypothetical reference decoder (HRD) parameters; receive in the video data a first sequence parameter set that includes a first syntax element identifying the video parameter set; receive in the video data a second sequence parameter set that includes a second syntax element identifying the video parameter set; and code, based on the one or more syntax elements, a first set of video blocks associated with the first sequence parameter set and a second set of video blocks associated with the second sequence parameter set.
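One way to picture the parameter-set referencing is sketched below: two sequence parameter sets name the same video parameter set, so blocks coded against either SPS resolve to the same HRD information. The struct fields and function name are hypothetical simplifications, not the actual syntax.

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>

// Hypothetical, heavily reduced parameter-set containers.
struct HrdParams { uint32_t initialCpbDelay; uint32_t bitRate; };
struct Vps { int vpsId; HrdParams hrd; };        // carries the HRD-related syntax elements
struct Sps { int spsId; int referencedVpsId; };  // syntax element identifying the VPS

// Blocks associated with either SPS use the HRD parameters of the VPS both refer to.
const HrdParams& hrdForSps(const std::map<int, Vps>& vpsTable, const Sps& sps) {
    auto it = vpsTable.find(sps.referencedVpsId);
    if (it == vpsTable.end()) throw std::runtime_error("referenced VPS not found");
    return it->second.hrd;
}
```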
Abstract:
An apparatus obtains an operation point reference track in the file and one or more additional tracks in the file. No operation point information sample group is signaled in any of the additional tracks. For each respective sample of each respective additional track of the one or more additional tracks, the apparatus determines whether the respective sample is to be considered part of an operation point information sample group. When the operation point reference track does not contain a sample that is temporally collocated with the respective sample in the respective additional track, the respective sample in the respective additional track is considered part of the operation point information sample group of the last sample in the operation point reference track that precedes the respective sample of the respective additional track.
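The association rule can be read as a simple lookup: take the temporally collocated sample in the operation point reference track if one exists, otherwise the last reference-track sample that precedes it. A minimal sketch follows, with hypothetical RefTrackSample and oinfGroupFor names and the assumption that the reference track is sorted by decode time.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical reference-track sample: decode time plus the sample group it maps to.
struct RefTrackSample {
    uint64_t decodeTime;
    int oinfGroupIndex;
};

// Returns the operation point information sample group an additional-track sample is
// considered to belong to: the collocated sample's group when present, otherwise the
// group of the last reference-track sample before it. Assumes refTrack is sorted.
std::optional<int> oinfGroupFor(const std::vector<RefTrackSample>& refTrack,
                                uint64_t sampleDecodeTime) {
    std::optional<int> group;
    for (const RefTrackSample& s : refTrack) {
        if (s.decodeTime > sampleDecodeTime) break;
        group = s.oinfGroupIndex;  // collocated (==) or last preceding (<) sample
    }
    return group;  // empty if no reference-track sample exists at or before this time
}
```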
Abstract:
This disclosure describes techniques that may enable a video coder to simultaneously implement multiple parallel processing mechanisms, including two or more of wavefront parallel processing (WPP), tiles, and entropy slices. This disclosure describes signaling techniques that are compatible both with coding standards that allow only one parallel processing mechanism to be implemented at a time and with potential future coding standards that may allow more than one parallel processing mechanism to be implemented simultaneously. This disclosure also describes restrictions that may enable WPP and tiles to be implemented simultaneously.
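As a rough illustration of the compatibility point, a decoder written against a one-mechanism-at-a-time standard could simply reject combined signaling, while an extended profile would relax that check. The flags below stand in for the actual syntax elements (HEVC, for instance, signals tiles and WPP with separate PPS flags) and are not taken from the disclosure.

```cpp
// Hypothetical per-picture parallelism signaling.
struct ParallelismFlags {
    bool wpp = false;            // wavefront parallel processing
    bool tiles = false;
    bool entropySlices = false;
};

// Legacy-style restriction: at most one mechanism enabled at a time.
// A future profile could relax this and permit combinations such as WPP plus tiles.
bool legacyConformant(const ParallelismFlags& f) {
    int enabled = int(f.wpp) + int(f.tiles) + int(f.entropySlices);
    return enabled <= 1;
}
```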
Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice.
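The selection logic can be summarized in a few lines. The sketch below uses hypothetical container and flag names (the real syntax elements differ) and simply mirrors the rule that the second syntax element is consulted only when both the picture layer parameter set and the slice header carry deblocking parameters.

```cpp
#include <cassert>

// Hypothetical containers for the two possible sources of deblocking parameters.
struct DeblockParams { int betaOffsetDiv2 = 0; int tcOffsetDiv2 = 0; };

struct PictureLayerParamSet {
    bool deblockParamsPresent = false;
    DeblockParams params;
};

struct SliceHeader {
    bool deblockParamsPresent = false;
    bool useSliceHeaderParams = false;   // second syntax element: coded only when both
                                         // sources carry deblocking parameters
    DeblockParams params;
};

// Choose the set of parameters that defines the deblocking filter for the slice.
const DeblockParams& selectDeblockParams(const PictureLayerParamSet& pps,
                                         const SliceHeader& sh) {
    if (pps.deblockParamsPresent && sh.deblockParamsPresent) {
        return sh.useSliceHeaderParams ? sh.params : pps.params;  // second element decides
    }
    assert(pps.deblockParamsPresent || sh.deblockParamsPresent);
    return sh.deblockParamsPresent ? sh.params : pps.params;      // only one source present
}
```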
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a reference layer (RL) and an enhancement layer, the RL comprising an RL picture having an output region that includes a portion of the RL picture. The processor is configured to determine whether a condition indicates that information outside of the output region is available to predict a current block in the enhancement layer. The processor may encode or decode the video information.
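One plausible form of such a condition is a purely geometric check: inter-layer prediction of the enhancement-layer block may use only reference-layer samples that fall inside the output region. The types and the specific test below are illustrative assumptions, not the condition defined in the disclosure.

```cpp
// Hypothetical geometry types: half-open region [x0, x1) x [y0, y1).
struct Rect  { int x0, y0, x1, y1; };
struct Block { int x, y, width, height; };

// True when the collocated reference-layer area of the enhancement-layer block lies
// entirely inside the RL picture's output region, i.e. no information outside the
// output region would be needed for inter-layer prediction.
bool insideOutputRegion(const Rect& outputRegion, const Block& rlCollocated) {
    return rlCollocated.x >= outputRegion.x0 &&
           rlCollocated.y >= outputRegion.y0 &&
           rlCollocated.x + rlCollocated.width  <= outputRegion.x1 &&
           rlCollocated.y + rlCollocated.height <= outputRegion.y1;
}
```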
Abstract:
As one example, a method of coding video data includes storing one or more decoding units of video data in a coded picture buffer (CPB). The method further includes obtaining a respective buffer removal time for the one or more decoding units. The method further includes removing the decoding units from the CPB in accordance with the obtained buffer removal time for each of the decoding units. The method further includes determining whether the CPB operates at the access unit level or the sub-picture level. The method further includes coding video data corresponding to the removed decoding units. If the CPB operates at the access unit level, coding the video data comprises coding access units contained in the decoding units. If the CPB operates at the sub-picture level, coding the video data comprises coding subsets of access units contained in the decoding units.
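The removal loop itself is simple once the removal times are known. The sketch below is a simplified model rather than the HRD equations: each decoding unit carries its obtained removal time and is handed to the decoder as soon as that time arrives, whether the unit is a whole access unit (AU-level operation) or a subset of one (sub-picture level).

```cpp
#include <cstdint>
#include <queue>
#include <vector>

// Hypothetical decoding-unit record.
struct DecodingUnit {
    double cpbRemovalTime;          // obtained buffer removal time, in seconds
    bool   isCompleteAccessUnit;    // true for AU-level operation, false for a subset of an AU
    std::vector<uint8_t> payload;   // coded data carried by this decoding unit
};

// Remove decoding units whose removal time has arrived and pass them to the decoder.
void removeAndDecode(std::queue<DecodingUnit>& cpb, double now,
                     void (*decode)(const DecodingUnit&)) {
    while (!cpb.empty() && cpb.front().cpbRemovalTime <= now) {
        decode(cpb.front());   // decodes an AU or an AU subset, depending on CPB mode
        cpb.pop();
    }
}
```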
Abstract:
In some examples, a video encoder includes multiple sequence parameter set (SPS) IDs in an SEI message, such that multiple active SPSs can be indicated to a video decoder. In some examples, a video decoder activates a video parameter set (VPS) and/or one or more SPSs that are referenced by an SEI message, e.g., based on the inclusion of the VPS ID and one or more SPS IDs in the SEI message. The SEI message may be, as examples, an active parameter sets SEI message or a buffering period SEI message.
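A decoder-side view of that activation might look like the sketch below: the parsed SEI message lists one VPS ID and one or more SPS IDs, and all of the referenced parameter sets become active. The struct layout is a hypothetical simplification of the SEI payload.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical parsed form of an active parameter sets SEI message.
struct ActiveParameterSetsSei {
    uint8_t activeVpsId;
    std::vector<uint8_t> activeSpsIds;   // more than one SPS ID may be listed
};

// Activation state a decoder might keep after processing the SEI message.
struct ActiveSets {
    uint8_t vpsId = 0;
    std::vector<uint8_t> spsIds;
};

// Activate the VPS and every SPS referenced by the SEI message,
// e.g. one SPS per layer of a multi-layer bitstream.
ActiveSets activateFromSei(const ActiveParameterSetsSei& sei) {
    return ActiveSets{sei.activeVpsId, sei.activeSpsIds};
}
```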
Abstract:
An apparatus for coding video information according to certain aspects includes computing hardware. The computing hardware is configured to: identify a current picture to be predicted using at least one type of inter-layer prediction (ILP), the type of ILP comprising one or more of inter-layer motion prediction (ILMP) and inter-layer sample prediction (ILSP); and control (1) a number of pictures that may be resampled and used to predict the current picture using ILMP and (2) a number of pictures that may be resampled and used to predict the current picture using ILSP, wherein the computing hardware is configured to control the number of pictures that may be resampled and used to predict the current picture using ILMP independently of the number of pictures that may be resampled and used to predict the current picture using ILSP.
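The independent control can be modeled as two separate counters with separate limits, as in the illustrative sketch below; the names and the counter-based formulation are assumptions, since the abstract only requires that the two numbers be controlled independently.

```cpp
// Hypothetical per-picture budget: the ILMP and ILSP limits are set and tracked
// independently of each other.
struct ResampleBudget {
    int maxIlmpResampled;   // limit on resampled pictures used for inter-layer motion prediction
    int maxIlspResampled;   // limit on resampled pictures used for inter-layer sample prediction
    int usedIlmp = 0;
    int usedIlsp = 0;
};

// Returns true if one more resampled reference picture may be used for the requested
// prediction type, and records the use; the two counts never affect each other.
bool mayUseResampledPicture(ResampleBudget& b, bool forIlmp) {
    int& used = forIlmp ? b.usedIlmp : b.usedIlsp;
    const int limit = forIlmp ? b.maxIlmpResampled : b.maxIlspResampled;
    if (used >= limit) return false;
    ++used;
    return true;
}
```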
Abstract:
A video coding device, such as a video encoder or a video decoder, may be configured to code a sub-picture timing supplemental enhancement information (SEI) message associated with a first decoding unit (DU) of an access unit (AU). The video coding device may further code, in the sub-picture timing SEI message, a duration between the coded picture buffer (CPB) removal time of a second DU of the AU in decoding order and the CPB removal time of the first DU. The coding device may also derive a CPB removal time of the first DU based at least in part on the sub-picture timing SEI message.
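Under the assumption that the duration is coded as a count of clock ticks and that the second DU's removal time is already known, the derivation reduces to a single subtraction, as in the hypothetical sketch below (field and function names are illustrative).

```cpp
#include <cstdint>

// Hypothetical parsed timing field from a sub-picture timing SEI message.
struct SubPicTimingSei {
    uint32_t duCpbRemovalDelayTicks;  // duration, in clock ticks, between the CPB removal
                                      // time of the second DU and that of the first DU
};

// Derive the first DU's CPB removal time from the second DU's removal time,
// the coded duration, and the length of one clock tick in seconds.
double deriveFirstDuRemovalTime(double secondDuRemovalTime,
                                const SubPicTimingSei& sei,
                                double clockTickSeconds) {
    return secondDuRemovalTime - sei.duCpbRemovalDelayTicks * clockTickSeconds;
}
```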