Abstract:
A video coder may, in some cases, signal whether one or more initial reference picture lists are to be modified. When an initial list is to be modified, the video coder can signal information indicating a starting position in the initial reference picture list. When the signaled starting position is less than the number of pictures included in the initial reference picture list, the video coder signals the number of pictures to be inserted into the initial reference picture list, and a reference picture source from which a picture can be retrieved for insertion into the initial reference picture list, to construct a modified reference picture list.
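The conditional signaling flow described above can be sketched as follows. This is a minimal illustration, not any codec's actual syntax: `BitstreamWriter`, `write_flag`, `write_uvlc`, and the integer "source" codes are all invented stand-ins for real entropy coding.

```python
class BitstreamWriter:
    """Toy symbol recorder standing in for real entropy coding."""

    def __init__(self):
        self.symbols = []

    def write_flag(self, b):
        self.symbols.append(int(b))

    def write_uvlc(self, v):
        self.symbols.append(v)


def signal_rpl_modification(writer, init_list, start_pos, insertions):
    """Signal whether/how an initial reference picture list is modified.

    insertions: list of (source_code, picture) pairs to insert; empty
    means no modification is signaled.
    """
    modify = bool(insertions)
    writer.write_flag(modify)              # is the initial list modified?
    if not modify:
        return list(init_list)
    writer.write_uvlc(start_pos)           # starting position in the list
    modified = list(init_list)
    if start_pos < len(init_list):         # only then signal count + sources
        writer.write_uvlc(len(insertions))
        for source_code, picture in insertions:
            writer.write_uvlc(source_code)  # where to retrieve picture from
            modified.insert(start_pos, picture)
            start_pos += 1
    return modified
```

For example, inserting picture `'X'` from source `0` at position 1 of `['A', 'B', 'C']` yields the modified list `['A', 'X', 'B', 'C']`.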
Abstract:
When coding multiview video data, a video encoder and video decoder may select a candidate picture from one of one or more random access point view component (RAPVC) pictures and one or more pictures having a lowest temporal identification value. The video encoder and video decoder may determine whether a block in the selected candidate picture is inter-predicted with a disparity motion vector and determine a disparity vector for a current block of a current picture based on the disparity motion vector. The video encoder and video decoder may inter-prediction encode or decode, respectively, the current block based on the determined disparity vector.
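The candidate-selection order described above (RAPVC pictures, then pictures with the lowest temporal identification value) can be sketched as below. `Picture` and `Block` are hypothetical stand-ins; a real 3D codec would work on decoded picture buffers and coded motion fields.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class Block:
    # Set only if the block is inter-predicted with a disparity motion vector.
    disparity_mv: Optional[Tuple[int, int]] = None


@dataclass
class Picture:
    is_rapvc: bool
    temporal_id: int
    blocks: List[Block] = field(default_factory=list)


def derive_disparity_vector(candidates):
    """Check RAPVC candidate pictures first, then lowest-temporal-id ones."""
    rapvc = [p for p in candidates if p.is_rapvc]
    lowest_tid = min((p.temporal_id for p in candidates), default=0)
    low = [p for p in candidates
           if not p.is_rapvc and p.temporal_id == lowest_tid]
    for pic in rapvc + low:
        for blk in pic.blocks:
            if blk.disparity_mv is not None:
                return blk.disparity_mv   # adopt as the disparity vector
    return (0, 0)                         # illustrative fallback: zero vector
```

The zero-vector fallback is an assumption for this sketch; the abstract does not specify what happens when no candidate block carries a disparity motion vector.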
Abstract:
A video coder signals, in a bitstream, a syntax element that indicates whether inter-view/layer reference pictures are ever included in a reference picture list for a current view component/layer representation. A video decoder obtains, from the bitstream, the syntax element that indicates whether inter-view/layer reference pictures are ever included in a reference picture list for a current view component/layer representation. The video decoder decodes the current view component/layer representation.
Abstract:
As one example, techniques for decoding video data include receiving a bitstream that includes one or more pictures of a coded video sequence (CVS), decoding a first picture according to a decoding order, wherein the first picture is a random access point (RAP) picture that is not an instantaneous decoding refresh (IDR) picture, and decoding at least one other picture following the first picture according to the decoding order based on the decoded first picture. As another example, techniques for encoding video data include generating a bitstream that includes one or more pictures of a CVS, wherein a first picture according to the decoding order is a RAP picture that is not an IDR picture, and avoiding including, in the bitstream, at least one other picture, other than the first picture, that corresponds to a leading picture associated with the first picture.
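The encoder-side rule above (a CVS that starts at a non-IDR RAP picture omits that picture's leading pictures) can be sketched as follows. The dictionary fields `is_rap`, `is_idr`, and `is_leading` are invented for illustration; real CRA/BLA handling is considerably richer.

```python
def pictures_for_bitstream(pictures):
    """Given pictures of a CVS in decoding order, where the first picture is
    a RAP picture that is not an IDR picture, drop the leading pictures
    associated with that first picture and keep the rest."""
    first = pictures[0]
    assert first['is_rap'] and not first['is_idr']
    kept = [first]
    for pic in pictures[1:]:
        if pic.get('is_leading'):
            continue  # leading pictures of the first RAP are not included
        kept.append(pic)
    return kept
```

Leading pictures are skipped because they may reference pictures that precede the random access point and so would not be decodable when decoding starts at the first picture.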
Abstract:
A device may encapsulate video data such that Supplemental Enhancement Information (SEI) messages are stored separately from a sequence of coded video pictures described by the SEI messages. An example device includes a control unit configured to generate one or more SEI messages separate from the coded video pictures, wherein the SEI messages describe respective ones of the sequence of coded video pictures and include elements common to more than one of the coded video pictures, and an output interface configured to output the SEI messages separately from the sequence of coded video pictures. An example destination device may receive the SEI messages separately from the coded video pictures and render the coded video pictures using the SEI messages.
Abstract:
A video processing device may obtain, from a descriptor for a program comprising one or more elementary streams, a plurality of profile, tier, level (PTL) syntax element sets. Each respective PTL syntax element set of the plurality of PTL syntax element sets comprises syntax elements that may specify respective PTL information. The video processing device obtains, from the descriptor for the program, a plurality of operation point syntax element sets. Each respective operation point syntax element set of the plurality of operation point syntax element sets may specify a respective operation point of a plurality of operation points. The video processing device may determine, for each respective layer of each respective operation point specified by the respective operation point syntax element sets, based on a respective syntax element in the respective operation point syntax element set, which of the PTL syntax element sets specifies the PTL information assigned to the respective layer.
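The indirection described above (each layer of each operation point carries an index into a shared list of PTL syntax element sets) can be sketched as below. The field names `ptl_sets`, `op_points`, and `layers` are invented for illustration and do not follow any transport-stream descriptor syntax byte-for-byte.

```python
def assign_ptl_to_layers(descriptor):
    """Resolve, per operation point, the PTL information of each layer.

    descriptor: {'ptl_sets': [ptl0, ptl1, ...],
                 'op_points': [{'layers': [ptl_idx, ...]}, ...]}
    where each entry of 'layers' is an index into 'ptl_sets'.
    """
    ptl_sets = descriptor['ptl_sets']
    result = []
    for op in descriptor['op_points']:
        # Each layer's syntax element selects one of the shared PTL sets.
        result.append([ptl_sets[idx] for idx in op['layers']])
    return result
```

Sharing one PTL set list across all operation points avoids repeating identical PTL information for every layer of every operation point.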
Abstract:
Techniques are described for deriving a disparity vector for a current block based on a disparity motion vector of a neighboring block in a 3D-AVC video coding process. The disparity vector derivation allows for texture-first coding where a depth view component of a dependent view is coded subsequent to the coding of the corresponding texture component of the dependent view.
Abstract:
A video coding device may encode and/or decode video data. The video coding device encodes a first video block in a first picture by predicting values of the first video block based on a previously encoded video block in a second picture different from the first picture. The video coding device filters the first video block according to a deblocking filtering process. The video coding device encodes a second video block in the first picture by predicting values of the second video block based on a previously encoded video block in the first picture. The video coding device filters the second video block according to the deblocking filtering process. The video coding device decodes the first video block, filters the first video block according to the deblocking filtering process, decodes the second video block, and filters the second video block according to the deblocking filtering process.
Abstract:
During a coding process, systems, methods, and apparatus may code data representative of the positions of elements of a chain that partitions a prediction unit of video data. Some examples may include generating the data representative of the positions of elements of a chain that partitions a prediction unit of video data. Each of the positions of the elements except for a last element may be within the prediction unit. The position of the last element may be outside the prediction unit. This can indicate that the penultimate signaled element is the final element of the chain. Some examples may code the partitions of the prediction unit based on the chain.
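The termination rule above can be sketched as follows: positions are consumed one by one, and a position falling outside the prediction unit acts as a terminator, so the previously signaled element is the final element of the chain. The function name and the (x, y) tuple representation are illustrative.

```python
def decode_chain(signaled_positions, pu_width, pu_height):
    """Consume signaled element positions until one falls outside the PU.

    Returns the chain elements, all of which lie inside the prediction unit;
    the out-of-PU position itself is a terminator, not a chain element.
    """
    chain = []
    for x, y in signaled_positions:
        if not (0 <= x < pu_width and 0 <= y < pu_height):
            break  # out-of-PU position signals the end of the chain
        chain.append((x, y))
    return chain
```

Using an out-of-range position as the terminator avoids signaling an explicit element count for the chain.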
Abstract:
Techniques for advanced residual prediction (ARP) in video coding may include receiving a first encoded block of video data in a first access unit, wherein the first encoded block of video data was encoded using advanced residual prediction and bi-directional prediction, determining temporal motion information for a first prediction direction of the first encoded block of video data, and identifying reference blocks for a second prediction direction, different than the first prediction direction, using the temporal motion information determined for the first prediction direction, wherein the reference blocks are in a second access unit.
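The motion-reuse step described above can be sketched as below: the temporal motion information determined for the first prediction direction also locates the reference blocks for the second direction, rather than deriving separate temporal motion per direction. All names and the flat (x, y) position model are assumptions of this sketch.

```python
def arp_reference_positions(block_pos, temporal_mv_dir0):
    """Locate ARP reference blocks for both prediction directions.

    block_pos: (x, y) of the current block in the first access unit.
    temporal_mv_dir0: temporal motion vector determined for direction 0.
    Both directions reuse the direction-0 temporal motion, so both
    reference blocks lie at the same position in the second access unit.
    """
    x, y = block_pos
    dx, dy = temporal_mv_dir0
    ref = (x + dx, y + dy)        # position in the second access unit
    return {0: ref, 1: ref}       # direction 1 reuses direction-0 motion
```

Reusing the first direction's motion avoids a second motion-derivation pass when the block is bi-directionally predicted.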