Abstract:
An example method of decoding video data includes receiving, in a message associated with a picture, information indicating a refreshed region of the picture, determining whether the picture comprises a last picture in a gradual decoder refresh (GDR) set, determining whether the picture comprises a recovery point picture, and responsive to determining that the picture comprises the last picture in the GDR set and the recovery point picture, determining that the message indicates that the entire picture belongs to the refreshed region of the picture.
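The inference described above reduces to a conjunction of two per-picture properties. A minimal sketch, assuming hypothetical names (`Picture`, `is_last_in_gdr_set`, `is_recovery_point`) not taken from the abstract:

```python
from dataclasses import dataclass

@dataclass
class Picture:
    is_last_in_gdr_set: bool  # last picture of a gradual decoder refresh (GDR) set
    is_recovery_point: bool   # picture at which the refresh is complete

def entire_picture_refreshed(pic: Picture) -> bool:
    """Return True when the message is inferred to indicate that the
    entire picture belongs to the refreshed region."""
    return pic.is_last_in_gdr_set and pic.is_recovery_point
```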
Abstract:
In an example, the present disclosure provides for receiving in a video bitstream an access unit having a first random access point (RAP) picture and receiving in the video bitstream, after the access unit, a subsequent access unit having a second RAP picture. In a case in which one or more random access skipped leading (RASL) pictures for the subsequent access unit are not present in the received video bitstream, the method includes shifting a picture buffer removal time earlier based on a picture buffer removal delay offset. Another example provides for receiving an access unit after an earlier initialization of the hypothetical reference decoder (HRD), the access unit having a RAP picture, wherein associated access units containing RASL pictures are not received, and initializing a picture buffer removal time and a picture buffer removal delay offset in response to receiving the access unit and not receiving the associated access units containing RASL pictures.
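The timing adjustment described above can be sketched as simple arithmetic on the nominal removal time. This is a hedged illustration only; the function and parameter names are hypothetical and the real HRD derivation involves additional clock-tick scaling:

```python
def adjusted_removal_time(nominal_removal_time: float,
                          rasl_present: bool,
                          removal_delay_offset: float) -> float:
    """When the RASL pictures associated with the subsequent RAP picture
    are absent from the bitstream, shift the coded picture buffer
    removal time earlier by the signalled delay offset."""
    if not rasl_present:
        return nominal_removal_time - removal_delay_offset
    return nominal_removal_time
```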
Abstract:
In an example, a method of decoding video data includes determining whether a reference index for a current block corresponds to an inter-view reference picture, and when the reference index for the current block corresponds to the inter-view reference picture, obtaining, from an encoded bitstream, data indicating a view synthesis prediction (VSP) mode of the current block, where the VSP mode for the reference index indicates whether the current block is predicted with view synthesis prediction from the inter-view reference picture.
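The conditional parse described above can be sketched as follows; `read_flag` stands in for a hypothetical bitstream reader and is not part of the original disclosure:

```python
from typing import Callable, Optional

def maybe_parse_vsp_mode(ref_is_inter_view: bool,
                         read_flag: Callable[[], bool]) -> Optional[bool]:
    # The VSP-mode flag is present in the bitstream only when the
    # reference index points at an inter-view reference picture.
    if ref_is_inter_view:
        return read_flag()  # True: block predicted with view synthesis
    return None             # no VSP flag coded for this reference index
```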
Abstract:
A device obtains, from a bitstream that includes an encoded representation of the video data, a non-nested Supplemental Enhancement Information (SEI) message that is not nested within another SEI message in the bitstream. Furthermore, the device determines a layer of the bitstream to which the non-nested SEI message is applicable. The non-nested SEI message is applicable to layers for which video coding layer (VCL) network abstraction layer (NAL) units of the bitstream have layer identifiers equal to a layer identifier of a SEI NAL unit that encapsulates the non-nested SEI message. A temporal identifier of the SEI NAL unit is equal to a temporal identifier of an access unit containing the SEI NAL unit. Furthermore, the device processes, based in part on one or more syntax elements in the non-nested SEI message, video data of the layer of the bitstream to which the non-nested SEI message is applicable.
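The applicability rule above combines a layer-identifier match with a temporal-identifier constraint. A minimal sketch under assumed names (the function and parameters are illustrative, not from the source):

```python
def sei_applies(vcl_layer_id: int, sei_nal_layer_id: int,
                sei_temporal_id: int, au_temporal_id: int) -> bool:
    # A non-nested SEI message applies to a layer when the layer's VCL
    # NAL units carry the same layer identifier as the SEI NAL unit;
    # the SEI NAL unit's temporal identifier must equal that of the
    # access unit containing it.
    return (vcl_layer_id == sei_nal_layer_id
            and sei_temporal_id == au_temporal_id)
```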
Abstract:
A video coding device, such as a video decoder, may be configured to derive at least one of a coded picture buffer (CPB) arrival time and a CPB nominal removal time for an access unit (AU) at both an access unit level and a sub-picture level, regardless of a value of a syntax element that defines whether a decoding unit (DU) is the entire AU. The video coding device may further be configured to determine a removal time of the AU based at least in part on one of the CPB arrival time and the CPB nominal removal time, and decode video data of the AU based at least in part on the removal time.
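The key point above is that both timing granularities are derived unconditionally, and one is then selected to drive removal and decoding. A heavily simplified sketch with hypothetical names (the actual HRD derivation of each time is much more involved):

```python
def removal_time(au_level_time: float, sub_pic_level_time: float,
                 use_sub_pic_timing: bool) -> float:
    # Both AU-level and sub-picture-level times are assumed to have been
    # derived already, regardless of whether a DU spans the entire AU;
    # the operating mode selects which one governs removal.
    return sub_pic_level_time if use_sub_pic_timing else au_level_time
```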
Abstract:
Information for a video stream indicating whether the video stream includes stereoscopic three-dimensional video data can be provided to a display device. This information allows the device to determine whether to accept the video data and to properly decode and display the video data. This information can be made available for video data regardless of the codec used to encode the video. Systems, devices, and methods for transmission and reception of compatible video communications including stereoscopic three-dimensional picture information are described.
Abstract:
A video coder may, in some cases, signal whether one or more initial reference picture lists are to be modified. When an initial list is to be modified, the video coder can signal information indicating a starting position in the initial reference picture list. When the starting position signaled by the video coder is less than the number of pictures included in the initial reference picture list, the video coder signals the number of pictures to be inserted into the initial reference picture list, and a reference picture source from which a picture can be retrieved and inserted into the initial reference picture list to construct a modified reference picture list.
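The list construction described above can be sketched as an insertion at a signalled starting position. This is an illustrative simplification (names hypothetical); the actual syntax signals a source from which each inserted picture is retrieved:

```python
from typing import List

def modify_ref_pic_list(initial_list: List[int], start_pos: int,
                        inserted_pics: List[int]) -> List[int]:
    """Insert the signalled pictures into the initial reference picture
    list beginning at start_pos, producing the modified list. Assumes
    start_pos < len(initial_list), the condition under which the
    insertion is signalled."""
    modified = list(initial_list)
    for i, pic in enumerate(inserted_pics):
        modified.insert(start_pos + i, pic)
    return modified
```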
Abstract:
A device comprises a video file creation module configured to obtain a plurality of slices of coded video content. Parameter sets are associated with the coded video content. The video file creation module encapsulates the plurality of slices of coded video content within one or more access units of a video stream. A first type of parameter set may be encapsulated within one or more access units of the video stream. A second type of parameter set may be encapsulated within a sample description. The sample description may include an indicator identifying a number of temporal layers of the video stream.
Abstract:
In general, techniques are described for performing motion vector prediction for video coding. A video coding device comprising a processor may perform the techniques. The processor may be configured to determine a plurality of candidate motion vectors for a current block of the video data so as to perform the motion vector prediction process and scale one or more of the plurality of candidate motion vectors determined for the current block of the video data to generate one or more scaled candidate motion vectors. The processor may then be configured to modify the scaled candidate motion vectors to be within a specified range.
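The scale-then-clamp step above can be sketched directly. The scaling factor and the 16-bit range below are assumptions for illustration; the source specifies only that scaled candidates are modified to lie within a specified range:

```python
from typing import Tuple

def clamp(v: int, lo: int, hi: int) -> int:
    return max(lo, min(hi, v))

def scale_and_clamp_mv(mv: Tuple[int, int], scale_num: int, scale_den: int,
                       mv_min: int = -32768, mv_max: int = 32767) -> Tuple[int, int]:
    # Scale a candidate motion vector (e.g. by a picture-order-count
    # distance ratio), then clip each component into the specified
    # range (a 16-bit range is assumed here).
    sx = (mv[0] * scale_num) // scale_den
    sy = (mv[1] * scale_num) // scale_den
    return (clamp(sx, mv_min, mv_max), clamp(sy, mv_min, mv_max))
```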
Abstract:
In general, techniques are described for separately coding depth and texture components of video data. A video coding device for coding video data that includes a view component comprised of a depth component and a texture component may perform the techniques. The video coding device may comprise, as one example, a processor configured to activate a parameter set as a texture parameter set for the texture component of the view component, and code the texture component of the view component based on the activated texture parameter set.