Abstract:
A video coding device, such as a video encoder or a video decoder, may be configured to code a duration between a coded picture buffer (CPB) removal time of a first decoding unit (DU) in an access unit (AU) and a CPB removal time of a second DU, wherein the second DU is subsequent to the first DU in decoding order and in the same AU as the first DU. The video coding device may further determine a removal time of the second DU based at least on the coded duration. The coding device may also code a sub-picture timing supplemental enhancement information (SEI) message associated with the first DU. The video coding device may further determine a removal time of the second DU based at least in part on the sub-picture timing SEI message.
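A minimal sketch of this derivation follows, assuming a hypothetical SEI structure and field name (du_cpb_removal_delay) and a caller-supplied clock tick; it is an illustration, not the normative HEVC timing derivation.

    /* Sketch: derive the CPB removal time of a second DU from the removal
     * time of the first DU of the AU plus the coded duration signaled in a
     * sub-picture timing SEI message. Field names are hypothetical. */
    #include <stdio.h>

    typedef struct {
        unsigned du_cpb_removal_delay; /* coded duration, in clock ticks */
    } SubPicTimingSei;

    static double second_du_removal_time(double first_du_removal_time,
                                         const SubPicTimingSei *sei,
                                         double clock_tick_seconds)
    {
        /* removal time = first-DU removal time + coded duration in ticks */
        return first_du_removal_time +
               (double)sei->du_cpb_removal_delay * clock_tick_seconds;
    }

    int main(void)
    {
        SubPicTimingSei sei = { 4 };                     /* 4 clock ticks */
        double t = second_du_removal_time(0.10, &sei, 1.0 / 90000.0);
        printf("second-DU CPB removal time: %.9f s\n", t);
        return 0;
    }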
Abstract:
A video processing device can receive in an encoded bitstream of video data a network abstraction layer (NAL) unit and parse a first syntax element in a header of the NAL unit to determine a temporal identification (ID) for the NAL unit, wherein a value of the first syntax element is one greater than the temporal ID.
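As an illustration, the sketch below extracts a temporal ID from a two-byte HEVC-style NAL unit header, where the coded syntax element equals the temporal ID plus one; the exact bit positions are stated as an assumption for illustration rather than quoted from the specification.

    /* Sketch: parse a temporal ID from a two-byte NAL unit header in which
     * the coded syntax element is one greater than the temporal ID.
     * The bit layout (low 3 bits of the second header byte) is an
     * assumption for illustration. */
    #include <stdint.h>
    #include <stdio.h>

    static unsigned parse_temporal_id(const uint8_t header[2])
    {
        unsigned temporal_id_plus1 = header[1] & 0x07; /* 3-bit field */
        return temporal_id_plus1 - 1;                  /* coded value = ID + 1 */
    }

    int main(void)
    {
        const uint8_t nal_header[2] = { 0x40, 0x01 };  /* example header bytes */
        printf("temporal ID = %u\n", parse_temporal_id(nal_header));
        return 0;
    }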
Abstract:
A video encoder generates a syntax element that indicates whether a video unit of a current picture is predicted from an external picture. The external picture is in a different layer than the current picture. Furthermore, the video encoder outputs a video data bitstream that includes a representation of the syntax element. The video data bitstream may or may not include a coded representation of the external picture. A video decoder obtains the syntax element from the video data bitstream. The video decoder uses the syntax element in a process to reconstruct video data of a portion of the video unit.
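A simplified sketch of how a decoder might branch on such a syntax element is shown below; the flag name, picture structures, and print statements are hypothetical placeholders for the reconstruction process.

    /* Sketch: a decoder branches on a flag indicating whether a video unit
     * is predicted from an external picture in a different layer. All names
     * and structures are hypothetical. */
    #include <stdio.h>

    typedef struct { int poc; int layer_id; } Picture;

    static void reconstruct_unit(int pred_from_external_flag,
                                 const Picture *external_pic,
                                 const Picture *current_pic)
    {
        if (pred_from_external_flag && external_pic != NULL) {
            /* The external picture may be supplied out-of-band; the bitstream
             * need not contain its coded representation. */
            printf("predict unit of POC %d from external layer %d\n",
                   current_pic->poc, external_pic->layer_id);
        } else {
            printf("predict unit of POC %d from in-layer references\n",
                   current_pic->poc);
        }
    }

    int main(void)
    {
        Picture base = { 7, 0 }, enh = { 7, 1 };
        reconstruct_unit(1, &base, &enh);   /* flag set: use external picture */
        reconstruct_unit(0, NULL, &enh);    /* flag clear: in-layer prediction */
        return 0;
    }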
Abstract:
An apparatus is configured to store coded video data including a plurality of sequences of coded video pictures in an electronic file. The apparatus includes at least one processor configured to determine whether a sample description associated with at least one sample includes all parameter sets of a particular type associated with the at least one sample. The at least one sample comprises at least a portion of the plurality of sequences of coded video pictures. The particular type is one of a plurality of different types of parameter sets. The at least one processor is also configured to provide, in the electronic file, an indication of whether the sample description includes all parameter sets of the particular type, based on the determination.
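The sketch below illustrates the idea of recording, per parameter-set type, whether a sample description carries all parameter sets of that type; the enumeration and structure names are assumptions and do not reproduce the ISO base media file format syntax.

    /* Sketch: record, per parameter-set type, whether the sample description
     * holds all parameter sets of that type for the associated samples.
     * Type names and the indication structure are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    enum ParamSetType { PS_VPS = 0, PS_SPS, PS_PPS, PS_TYPE_COUNT };

    typedef struct {
        /* true: all parameter sets of this type are in the sample description;
         * false: some may instead appear in-band within the samples. */
        bool all_in_sample_description[PS_TYPE_COUNT];
    } ParamSetIndication;

    int main(void)
    {
        ParamSetIndication ind = { { true, true, false } };
        const char *names[PS_TYPE_COUNT] = { "VPS", "SPS", "PPS" };
        for (int t = 0; t < PS_TYPE_COUNT; t++)
            printf("%s: %s\n", names[t],
                   ind.all_in_sample_description[t]
                       ? "all in sample description"
                       : "may also appear in-band");
        return 0;
    }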
Abstract:
A device comprising a video file creation module is configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video file creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include stream properties associated with the video stream.
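A sketch of this placement decision follows, with hypothetical types and functions; the point is only that slices are encapsulated into access units, one parameter-set type travels in-band within those access units, and another is stored in the sample description.

    /* Sketch: encapsulate slices within access units of a video stream, and
     * route one type of parameter set in-band (within access units) while
     * another type goes into the sample description. All names are
     * hypothetical; this is not a file-format API. */
    #include <stdio.h>

    enum ParamSetType { PS_TYPE_A, PS_TYPE_B };

    static void encapsulate_slice(int slice_idx, int au_idx)
    {
        printf("slice %d -> access unit %d\n", slice_idx, au_idx);
    }

    static void place_parameter_set(enum ParamSetType type)
    {
        if (type == PS_TYPE_A)
            printf("parameter set -> access unit (in-band)\n");
        else
            printf("parameter set -> sample description (stream properties)\n");
    }

    int main(void)
    {
        for (int s = 0; s < 3; s++)
            encapsulate_slice(s, 0);       /* three slices in one access unit */
        place_parameter_set(PS_TYPE_A);
        place_parameter_set(PS_TYPE_B);
        return 0;
    }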
Abstract:
A video encoder generates a bitstream that includes a syntax element that indicates whether a picture is encoded according to either a first coding mode or a second coding mode. In the first coding mode, the picture is entirely encoded using wavefront parallel processing (WPP). In the second coding mode, each tile of the picture is encoded without using WPP and the picture may have one or more tiles. A video decoder may parse the syntax element from the bitstream. In response to determining that the syntax element has a particular value, the video decoder decodes the picture entirely using WPP. In response to determining that the syntax element does not have the particular value, the video decoder decodes each tile of the picture without using WPP.
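A minimal decoder-side sketch of the two coding modes gated by such a syntax element appears below; the flag name and the decode helpers are hypothetical stand-ins for the actual decoding routines.

    /* Sketch: branch on a syntax element that selects between decoding a
     * picture entirely with wavefront parallel processing (WPP) and decoding
     * each tile without WPP. Names are hypothetical. */
    #include <stdio.h>

    static void decode_with_wpp(void)     { printf("decode picture with WPP\n"); }
    static void decode_tile(int tile_idx) { printf("decode tile %d without WPP\n", tile_idx); }

    static void decode_picture(int wpp_mode_flag, int num_tiles)
    {
        if (wpp_mode_flag) {
            decode_with_wpp();             /* first mode: whole picture, WPP */
        } else {
            for (int t = 0; t < num_tiles; t++)
                decode_tile(t);            /* second mode: per tile, no WPP */
        }
    }

    int main(void)
    {
        decode_picture(1, 1);  /* syntax element has the particular value */
        decode_picture(0, 4);  /* picture with four tiles, decoded without WPP */
        return 0;
    }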
Abstract:
In one example, a video coder, such as a video encoder or video decoder, is configured to code a video parameter set (VPS) for one or more layers of video data, wherein each of the one or more layers of video data refers to the VPS, and code the one or more layers of video data based at least in part on the VPS. The video coder may code the VPS for video data conforming to High-Efficiency Video Coding, Multiview Video Coding, Scalable Video Coding, or other video coding standards or extensions of video coding standards. The VPS may include data specifying parameters for corresponding sequences of video data within various different layers (e.g., views, quality layers, or the like). The parameters of the VPS may provide indications of how the corresponding video data is coded.
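Below is a simplified sketch of layers referring to a common VPS by identifier; the structure fields are illustrative and do not reproduce the HEVC VPS syntax.

    /* Sketch: several layers of video data refer to one video parameter set
     * (VPS) through an identifier, and are coded using parameters carried in
     * that VPS. Field names are illustrative only. */
    #include <stdio.h>

    typedef struct { int vps_id; int max_sub_layers; } Vps;
    typedef struct { int layer_id; int vps_id; } Layer;

    int main(void)
    {
        Vps vps = { 0, 3 };
        Layer layers[] = { { 0, 0 }, { 1, 0 } };  /* e.g., base view and a second view */

        for (unsigned i = 0; i < sizeof layers / sizeof layers[0]; i++)
            if (layers[i].vps_id == vps.vps_id)
                printf("code layer %d using VPS %d (max sub-layers %d)\n",
                       layers[i].layer_id, vps.vps_id, vps.max_sub_layers);
        return 0;
    }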
Abstract:
In general, techniques are described for coding picture order count values identifying long-term reference pictures. A video decoding device comprising a processor may perform the techniques. The processor may determine least significant bits (LSBs) of a picture order count (POC) value that identifies a long-term reference picture (LTRP). The LSBs do not uniquely identify the POC value with respect to the LSBs of any other POC value identifying any other picture in a decoded picture buffer (DPB). The processor may determine most significant bits (MSBs) of the POC value. The MSBs combined with the LSBs are sufficient to distinguish the POC value from any other POC value that identifies any other picture in the DPB. The processor may retrieve the LTRP from the decoded picture buffer based on the LSBs and MSBs of the POC value, and decode a current picture of the video data using the retrieved LTRP.
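A sketch of combining signaled MSBs and LSBs into a full POC value and using it to find a long-term reference picture in the DPB follows; the 8-bit LSB width and the DPB layout are assumptions for illustration.

    /* Sketch: combine most significant bits (MSBs) and least significant bits
     * (LSBs) of a POC value and use the full value to locate a long-term
     * reference picture in the DPB. The 8-bit LSB width and the DPB layout
     * are assumptions for illustration. */
    #include <stdio.h>

    #define POC_LSB_BITS 8

    typedef struct { int poc; } Picture;

    static int full_poc(int poc_msb_cycle, int poc_lsb)
    {
        return (poc_msb_cycle << POC_LSB_BITS) | poc_lsb;
    }

    static const Picture *find_ltrp(const Picture *dpb, int n, int poc)
    {
        for (int i = 0; i < n; i++)
            if (dpb[i].poc == poc)
                return &dpb[i];
        return NULL;
    }

    int main(void)
    {
        Picture dpb[] = { { 16 }, { 272 } };   /* both pictures share LSBs of 16 */
        int poc = full_poc(1, 16);             /* MSB cycle 1, LSBs 16 -> POC 272 */
        const Picture *ltrp = find_ltrp(dpb, 2, poc);
        printf("retrieved LTRP with POC %d\n", ltrp ? ltrp->poc : -1);
        return 0;
    }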
Abstract:
In general, techniques are described for coding picture order count values identifying long-term reference pictures. A video decoding device comprising a processor may perform the techniques. The processor may be configured to determine a number of bits used to represent least significant bits of the picture order count value that identifies a long-term reference picture to be used when decoding at least a portion of a current picture and parse the determined number of bits from a bitstream representative of the encoded video data. The parsed bits represent the least significant bits of the picture order count value. The processor retrieves the long-term reference picture from a decoded picture buffer based on the least significant bits, and decodes at least the portion of the current picture using the retrieved long-term reference picture.
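The sketch below first fixes the number of bits carrying the LSBs and then parses exactly that many bits from a bitstream; the bit reader and the way the bit count is obtained are minimal assumptions, not the decoder's actual parsing API.

    /* Sketch: determine a number of bits for the POC LSBs of a long-term
     * reference picture, then parse exactly that many bits from a bitstream.
     * The bit reader and the source of the bit count are assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct { const uint8_t *data; unsigned bit_pos; } BitReader;

    static unsigned read_bits(BitReader *br, unsigned n)
    {
        unsigned value = 0;
        for (unsigned i = 0; i < n; i++) {
            unsigned byte = br->bit_pos >> 3, bit = 7 - (br->bit_pos & 7);
            value = (value << 1) | ((br->data[byte] >> bit) & 1u);
            br->bit_pos++;
        }
        return value;
    }

    int main(void)
    {
        const uint8_t bitstream[] = { 0xA5, 0x00 };
        BitReader br = { bitstream, 0 };
        unsigned num_lsb_bits = 6;               /* determined, e.g., from a coded field */
        unsigned poc_lsbs = read_bits(&br, num_lsb_bits);
        printf("parsed %u-bit POC LSBs = %u\n", num_lsb_bits, poc_lsbs);
        return 0;
    }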
Abstract:
A video coder can control in-picture prediction across slice boundaries within a picture. In one example, a first syntax element can control whether in-picture prediction across slice boundaries is enabled for slices of a picture. If in-picture prediction across slice boundaries is enabled for the picture, then a second syntax element can control, for an individual slice, whether in-picture prediction across slice boundaries is enabled for that slice.
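A small sketch of this two-level control follows, using hypothetical flag names: a picture-level enable flag gates the effect of the per-slice flag.

    /* Sketch: a picture-level syntax element enables in-picture prediction
     * across slice boundaries; only when it is enabled does a per-slice
     * syntax element further control the behavior for each slice.
     * Flag names are hypothetical. */
    #include <stdbool.h>
    #include <stdio.h>

    static bool cross_slice_prediction_allowed(bool pic_level_enabled_flag,
                                               bool slice_level_enabled_flag)
    {
        /* The slice-level flag only matters when the picture-level flag is set. */
        return pic_level_enabled_flag && slice_level_enabled_flag;
    }

    int main(void)
    {
        printf("%d\n", cross_slice_prediction_allowed(true, true));   /* 1 */
        printf("%d\n", cross_slice_prediction_allowed(true, false));  /* 0 */
        printf("%d\n", cross_slice_prediction_allowed(false, true));  /* 0 */
        return 0;
    }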