Abstract:
A method of coding video data includes receiving video information associated with a first layer and a second layer and determining whether at least one of the first layer and the second layer is a default layer. The method can include at least partially restricting inter-layer prediction when neither the first layer nor the second layer is the default layer. A default layer can be a base layer or an enhancement layer. A flag can be received that indicates that inter-layer prediction is to be restricted. In addition, the method can include determining whether inter-layer prediction is allowed for the video information associated with the first layer, and determining whether inter-layer prediction is partially allowed for the video information associated with the second layer such that motion compensation is not used with the second layer video information.
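The restriction decision described above can be sketched as a small decision function. This is an illustrative sketch only; the function and flag names (`first_is_default`, `restrict_flag`) are assumptions, not syntax from any codec specification.

```python
def inter_layer_prediction_mode(first_is_default, second_is_default,
                                restrict_flag=True):
    """Return the inter-layer prediction mode for a pair of layers.

    When neither layer is the default layer and the restriction flag is
    set, inter-layer prediction is at least partially restricted: e.g.,
    sample prediction may remain allowed while motion compensation is not.
    """
    if not restrict_flag:
        return "full"
    if first_is_default or second_is_default:
        # At least one layer is the default layer (a base layer or a
        # designated enhancement layer): no restriction applies.
        return "full"
    # Neither layer is the default layer: restrict inter-layer prediction.
    return "partial"
```

In this sketch, "partial" corresponds to the abstract's case in which inter-layer prediction is allowed for the first layer's video information but motion compensation is not used with the second layer's video information.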
Abstract:
In an example, a video coder may determine a first layer component of a first layer of video data, wherein the first layer of video data is associated with a layer identifier. The video coder may generate at least one filtered layer component by filtering the first layer component, and assign the layer identifier of the first layer and a filtered layer component index to the at least one filtered layer component, where the filtered layer component index is different than a layer component index of the first layer component. The video coder may also add the at least one filtered layer component to a reference picture set for performing inter-layer prediction of a layer other than the first layer of video data.
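The bookkeeping described above can be illustrated as follows. Every filtered copy keeps the source layer identifier but receives a filtered layer component index different from the original component's index (index 0 here) before being added to the reference picture set. All names and the tuple layout are illustrative, not taken from any specification.

```python
def add_filtered_components(samples, layer_id, filters, reference_picture_set):
    """Apply each filter to a layer component's samples and register the
    results in the reference picture set for inter-layer prediction of
    layers other than layer_id."""
    for index, filt in enumerate(filters, start=1):
        # Index 0 is reserved for the original (unfiltered) component,
        # so each filtered copy is assigned a different component index.
        reference_picture_set.append((layer_id, index, filt(samples)))
    return reference_picture_set
```

For example, with a simple 2x sample-repetition upsampler as the filter, `add_filtered_components([1, 2], 0, [lambda s: [x for x in s for _ in range(2)]], [])` yields a reference picture set containing one filtered component tagged with layer identifier 0 and component index 1.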
Abstract:
A device generates a file that comprises a plurality of samples that contain coded pictures. In addition, the file contains a box that identifies a sample group that contains one or more samples from among the plurality of samples, wherein the box further indicates that each sample in the sample group is a step-wise temporal sub-layer access (STSA) sample. The same or different device identifies, based on data in the box that identifies the sample group, STSA samples from among the samples in the file that contains the box.
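The sample-grouping idea can be sketched as follows: a box maps a grouping type to sample indices, and a reader recovers the STSA samples from that box alone, without inspecting the coded pictures. The dict layout and the `"stsa"` key are stand-ins for illustration, not real ISOBMFF box syntax.

```python
def stsa_samples(file_boxes):
    """Return the indices of samples that the sample-group box marks as
    step-wise temporal sub-layer access (STSA) samples."""
    for box in file_boxes:
        if box.get("grouping_type") == "stsa":
            # Every sample in this group is an STSA sample.
            return sorted(box["sample_indices"])
    return []
```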
Abstract:
A device generates a file that stores coded samples that contain coded pictures of the video data. The file also includes a sample entry that includes an element that indicates whether all sequence parameter sets (SPSs) that are activated when a stream to which the sample entry applies is decoded have syntax elements that indicate that temporal sub-layer up-switching to any higher temporal sub-layer can be performed at any sample associated with the SPSs. The same or different device determines, based on the element in the sample entry, that all SPSs that are activated when the stream to which the sample entry applies is decoded have syntax elements that indicate that temporal sub-layer up-switching to any higher temporal sub-layer can be performed at any sample associated with the SPSs.
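The point of the sample-entry element is that a reader can learn the up-switching property without parsing every SPS. A minimal sketch, with field names that are assumptions for illustration only:

```python
def can_up_switch_anywhere(sample_entry, sps_list):
    """Return True if temporal sub-layer up-switching to any higher
    temporal sub-layer can be performed at any sample.

    The sample-entry element asserts the property for all SPSs activated
    when the stream is decoded, so the reader can trust it directly;
    otherwise each SPS's own syntax element must be checked.
    """
    if sample_entry.get("all_sps_allow_up_switching"):
        return True
    return all(sps.get("temporal_id_nesting_flag") for sps in sps_list)
```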
Abstract:
Techniques are described for modal sub-bitstream extraction. For example, a network entity may select a sub-bitstream extraction mode from a plurality of sub-bitstream extraction modes. Each sub-bitstream extraction mode may define a particular manner in which to extract coded pictures from views or layers to allow a video decoder to decode target output views or layers for display. In this manner, the network entity may adaptively select an appropriate sub-bitstream extraction technique, rather than relying on a single rigid, fixed technique.
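As a rough sketch of how two such modes might differ, the example below keeps a different subset of (layer, temporal sub-layer) NAL units per mode. The mode names and the tuple representation are assumptions made for this sketch, not modes defined by the disclosure.

```python
def extract_sub_bitstream(nal_units, target_layers, mode):
    """Extract NAL units according to the selected extraction mode.

    nal_units is a list of (layer_id, temporal_id) tuples; target_layers
    is the set of target output layers.
    """
    if mode == "target_only":
        # Keep only NAL units belonging to the target output layers.
        keep = lambda layer, _tid: layer in target_layers
    elif mode == "fully_decodable":
        # Keep the target layers plus every lower layer they may
        # reference, so the result decodes without further information.
        max_layer = max(target_layers)
        keep = lambda layer, _tid: layer <= max_layer
    else:
        raise ValueError(f"unknown extraction mode: {mode}")
    return [(layer, tid) for layer, tid in nal_units if keep(layer, tid)]
```

The design point the abstract makes is precisely that the network entity chooses among such modes per situation, instead of always applying one of them.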
Abstract:
A video coder can be configured to: receive, in a video parameter set, one or more syntax elements that include information related to hypothetical reference decoder (HRD) parameters; receive, in the video data, a first sequence parameter set comprising a first syntax element identifying the video parameter set; receive, in the video data, a second sequence parameter set comprising a second syntax element identifying the video parameter set; and code, based on the one or more syntax elements, a first set of video blocks associated with the first sequence parameter set and a second set of video blocks associated with the second sequence parameter set.
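The referencing scheme can be illustrated with a minimal sketch: two sequence parameter sets identify the same video parameter set, so blocks coded under either SPS resolve to the HRD parameters carried in that one VPS. The table structures and field names are simplified stand-ins, not real HEVC syntax.

```python
# One VPS carrying HRD-related information (illustrative values).
vps_table = {0: {"hrd": {"initial_cpb_removal_delay": 90000}}}

# Two SPSs, each with a syntax element identifying the same VPS.
sps_table = {
    1: {"vps_id": 0},  # first SPS
    2: {"vps_id": 0},  # second SPS
}

def hrd_params_for(sps_id):
    """Resolve the HRD parameters used when coding blocks under sps_id,
    by following the SPS's syntax element to its video parameter set."""
    return vps_table[sps_table[sps_id]["vps_id"]]["hrd"]
```

Because both SPSs point at the same VPS, `hrd_params_for(1)` and `hrd_params_for(2)` resolve to the same HRD parameter structure, which is the sharing the abstract describes.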
Abstract:
A device for processing video data can be configured to: receive, in a video parameter set, one or more syntax elements that include information related to session negotiation; receive, in the video data, a first sequence parameter set comprising a first syntax element identifying the video parameter set; receive, in the video data, a second sequence parameter set comprising a second syntax element identifying the video parameter set; and process, based on the one or more syntax elements, a first set of video blocks associated with the first sequence parameter set and a second set of video blocks associated with the second sequence parameter set.
Abstract:
Systems, devices, and methods for capturing and displaying picture data including picture orientation information are described. In one innovative aspect, a method for transmitting media information is provided. The method includes obtaining picture or video information, said picture or video information including image data and orientation information of a media capture unit when the picture or video information is obtained. The method further includes encoding said picture or video information, wherein the orientation information is included in a first portion and the image data is included in a second portion, the second portion being encoded and the first portion being distinct from the second portion. The method also includes transmitting the first portion and the second portion.
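The two-portion packaging can be sketched as below. The key property is that the orientation metadata travels in a portion distinct from the encoded image portion, so a receiver can, for example, rotate the display without first decoding. `zlib` here is merely a stand-in for a real image/video encoder, and the dict layout is an assumption for illustration.

```python
import zlib

def package_capture(image_data: bytes, orientation_degrees: int):
    """Split captured media into a first portion (orientation metadata of
    the media capture unit at capture time) and a second, encoded portion
    (the image data); both portions are then transmitted."""
    first_portion = {"orientation": orientation_degrees}   # not encoded
    second_portion = zlib.compress(image_data)             # encoded payload
    return first_portion, second_portion
```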
Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice.
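The bit-saving rule above can be expressed compactly: the second syntax element is coded only when the first element indicates that both sets of deblocking filter parameters are present. This sketch counts signaling bits and applies the second element's semantics; the function names are illustrative, not codec syntax.

```python
def bits_for_deblocking_signaling(in_parameter_set, in_slice_header):
    """Count the syntax-element bits spent signaling which deblocking
    parameters apply, following the rule described in the abstract."""
    both_present = in_parameter_set and in_slice_header  # first element
    bits = 1  # the first syntax element is always coded
    if both_present:
        bits += 1  # second element: chooses which set defines the filter
    # When parameters are present in only one place, the second element
    # is eliminated: the single available set is used implicitly.
    return bits

def select_deblocking_params(parameter_set_params, slice_params, use_slice):
    """Apply the second syntax element: pick the parameter set that
    defines the deblocking filter for the current slice."""
    return slice_params if use_slice else parameter_set_params
```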
Abstract:
This disclosure proposes techniques for motion vector scaling. In particular, this disclosure proposes that both an implicit motion vector scaling process (e.g., the POC-based motion vector scaling process described above) and an explicit motion vector scaling process (e.g., a motion vector scaling process using scaling weights) may be used to perform motion vector scaling. This disclosure also describes example signaling methods for indicating the type of motion vector scaling used.
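For reference, the implicit POC-based process can be sketched with an HEVC-style fixed-point formulation, scaling a motion vector by the ratio of picture order count distances. This is a sketch of the well-known POC-distance scaling, not the disclosure's specific method; in the explicit alternative, a signaled scaling weight would replace the derived distance scale factor.

```python
def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def scale_mv_poc(mv, poc_cur, poc_cur_ref, poc_col, poc_col_ref):
    """Scale a motion vector component by the ratio of the current
    POC distance (tb) to the collocated POC distance (td), using
    HEVC-style fixed-point arithmetic."""
    td = clip3(-128, 127, poc_col - poc_col_ref)
    tb = clip3(-128, 127, poc_cur - poc_cur_ref)
    if td == 0:
        return mv
    # Integer division truncating toward zero (Python's // floors, which
    # differs for negative td, so handle the sign explicitly).
    num = 16384 + (abs(td) >> 1)
    tx = num // td if td > 0 else -(num // -td)
    dist_scale_factor = clip3(-4096, 4095, (tb * tx + 32) >> 6)
    prod = dist_scale_factor * mv
    sign = -1 if prod < 0 else 1
    return clip3(-32768, 32767, sign * ((abs(prod) + 127) >> 8))
```

With equal POC distances the vector is returned unchanged, and with half the distance it is halved, which matches the intended proportional behavior.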