Abstract:
In one example, a device for coding video data includes a video coder configured to code, for a bitstream, information representative of which of a plurality of video coding dimensions are enabled for the bitstream, and code values for each of the enabled video coding dimensions, without coding values for the video coding dimensions that are not enabled, in a network abstraction layer (NAL) unit header of a NAL unit comprising video data coded according to the values for each of the enabled video coding dimensions. In this manner, NAL unit headers may have variable lengths, while still providing information for scalable dimensions to which the NAL units correspond.
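As an illustration only, the following Python sketch shows how a header could carry values only for the dimensions flagged as enabled, so its length varies with the number of enabled scalable dimensions. The dimension names, bit widths, and helper names (DIMENSIONS, write_nal_header) are assumptions for the sketch, not taken from the abstract or any standard.

```python
# Minimal sketch: signal which scalable dimensions are enabled once for the
# bitstream, then write per-NAL-unit values only for the enabled dimensions.
# Dimension names and bit widths are illustrative assumptions.

DIMENSIONS = [            # (name, bits used for its value when enabled)
    ("dependency_id", 3),
    ("quality_id",    4),
    ("view_id",       8),
    ("temporal_id",   3),
]

def write_bits(bits, value, width):
    bits.extend(int(b) for b in format(value, f"0{width}b"))

def write_enabled_map(bits, enabled):
    # One flag per known dimension, coded once for the bitstream.
    for name, _ in DIMENSIONS:
        write_bits(bits, 1 if name in enabled else 0, 1)

def write_nal_header(bits, enabled, values):
    # Values are written only for enabled dimensions, so the header length
    # varies with the set of enabled dimensions.
    for name, width in DIMENSIONS:
        if name in enabled:
            write_bits(bits, values[name], width)

bitstream = []
enabled = {"view_id", "temporal_id"}
write_enabled_map(bitstream, enabled)
write_nal_header(bitstream, enabled, {"view_id": 5, "temporal_id": 2})
print(len(bitstream), "bits written")   # 4 flag bits + 8 + 3 value bits
```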
Abstract:
An improved system and method for implementing efficient decoding of scalable video bitstreams is provided. A virtual decoded picture buffer is provided for each lower layer of the scalable video bitstream. The virtual decoded picture buffer stores decoded lower layer pictures for reference. The decoded lower layer pictures used for reference are compiled to create a reference picture list for each layer. The reference picture list generated by the virtual decoded picture buffer is used during a direct prediction process instead of a target reference list to correctly decode a current macroblock.
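A minimal sketch of the per-layer buffering described above is given below. The class and method names (VirtualDPB, store, reference_list) and the ordering of the compiled list by picture order count are assumptions made for illustration.

```python
# Minimal sketch of one virtual decoded picture buffer (DPB) per lower layer.
# Names and the POC-based ordering are illustrative assumptions.

class VirtualDPB:
    def __init__(self):
        self.pictures = []          # decoded lower-layer pictures kept for reference

    def store(self, picture):
        self.pictures.append(picture)

    def reference_list(self):
        # Compile the stored pictures into a reference picture list; here the
        # list is simply ordered by picture order count (POC), most recent first.
        return sorted(self.pictures, key=lambda p: p["poc"], reverse=True)

# One virtual DPB per lower layer of the scalable bitstream.
virtual_dpbs = {layer: VirtualDPB() for layer in (0, 1)}
virtual_dpbs[0].store({"poc": 0, "layer": 0})
virtual_dpbs[0].store({"poc": 2, "layer": 0})

# During direct prediction for a block in a higher layer, the per-layer list
# built by the virtual DPB would be consulted instead of the target reference list.
print(virtual_dpbs[0].reference_list())
```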
Abstract:
Techniques are described related to deriving a reference picture set. A reference picture set may identify reference pictures that can potentially be used to inter-predict a current picture and pictures following the current picture in decoding order. In some examples, deriving the reference picture set may include constructing a plurality of reference picture subsets that together form the reference picture set.
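The sketch below illustrates one possible derivation into subsets. The particular subsets (short-term pictures before and after the current picture, plus a long-term subset) are assumptions loosely modeled on HEVC-style derivation, not details quoted from the abstract.

```python
# Minimal sketch: derive a reference picture set as a union of subsets.
# The chosen subsets and ordering are illustrative assumptions.

def derive_reference_picture_set(current_poc, candidates):
    st_before = sorted((p for p in candidates
                        if not p["long_term"] and p["poc"] < current_poc),
                       key=lambda p: p["poc"], reverse=True)
    st_after = sorted((p for p in candidates
                       if not p["long_term"] and p["poc"] > current_poc),
                      key=lambda p: p["poc"])
    lt = [p for p in candidates if p["long_term"]]
    # The subsets together form the reference picture set: pictures that may be
    # used to inter-predict the current picture and pictures that follow it in
    # decoding order.
    return {"st_before": st_before, "st_after": st_after, "long_term": lt}

candidates = [
    {"poc": 0,  "long_term": False},
    {"poc": 4,  "long_term": False},
    {"poc": 8,  "long_term": False},
    {"poc": 16, "long_term": True},
]
print(derive_reference_picture_set(current_poc=6, candidates=candidates))
```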
Abstract:
Embodiments of the present invention relate to video coding for multi-view video content. They provide a coding system enabling scalability for the multi-view video content. In one embodiment, a method is provided for encoding at least two views representative of a video scene, each of the at least two views being encoded in at least two scalable layers, wherein one of the at least two scalable layers representative of one view of the at least two views is encoded with respect to a scalable layer representative of the other view of the at least two views.
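The layer dependency structure described above can be pictured with a small sketch. The field names and the specific dependencies chosen are assumptions for illustration only; the abstract requires only that a scalable layer of one view be encoded with respect to a scalable layer of the other view.

```python
# Minimal sketch: two views, each coded in two scalable layers, with one layer
# of view 1 predicted from a layer of view 0. Names are illustrative assumptions.

layers = [
    {"id": "v0_base", "view": 0, "level": "base",        "ref_layers": []},
    {"id": "v0_enh",  "view": 0, "level": "enhancement", "ref_layers": ["v0_base"]},
    # The base layer of view 1 is encoded with respect to a layer of view 0
    # (inter-view dependency).
    {"id": "v1_base", "view": 1, "level": "base",        "ref_layers": ["v0_base"]},
    {"id": "v1_enh",  "view": 1, "level": "enhancement", "ref_layers": ["v1_base", "v0_enh"]},
]

def decode_order(layers):
    # A layer can only be decoded after the layers it depends on.
    done, order = set(), []
    while len(order) < len(layers):
        for layer in layers:
            if layer["id"] not in done and all(r in done for r in layer["ref_layers"]):
                order.append(layer["id"])
                done.add(layer["id"])
    return order

print(decode_order(layers))
```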
Abstract:
Techniques are described related to constructing reference picture lists. The reference picture lists may be constructed from reference picture subsets of a reference picture set. In some examples, the techniques may repeatedly list reference pictures identified in the reference picture subsets until the number of entries in the reference picture list is equal to the maximum number of allowable entries in the reference picture list.
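The repetition described above can be sketched as follows. The subset names and picture labels are illustrative assumptions; the point of the sketch is that entries are cycled until the list reaches its maximum allowed length.

```python
# Minimal sketch: build a reference picture list by cycling through the
# reference picture subsets, repeating their entries until the list reaches
# the maximum number of allowable entries.

def build_reference_list(subsets, max_entries):
    ordered = [pic for subset in subsets for pic in subset]
    if not ordered:
        return []
    # Repeatedly list the identified reference pictures until the list is full,
    # even if that means the same picture appears more than once.
    return [ordered[i % len(ordered)] for i in range(max_entries)]

st_curr_before = ["poc4", "poc2"]
st_curr_after  = ["poc8"]
long_term_curr = ["poc0"]
print(build_reference_list([st_curr_before, st_curr_after, long_term_curr],
                           max_entries=6))
# -> ['poc4', 'poc2', 'poc8', 'poc0', 'poc4', 'poc2']
```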
Abstract:
Techniques are described related to constructing reference picture lists. The reference picture lists may be constructed from reference picture subsets of a reference picture set. In some examples, the reference picture subsets may be ordered in a particular manner to form the reference picture lists.
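As an illustration, the sketch below forms two reference picture lists by ordering the same subsets differently. The particular orders shown are assumptions loosely modeled on HEVC-style list construction, not details taken from the abstract.

```python
# Minimal sketch: form two reference picture lists by ordering the same
# reference picture subsets in different ways.

st_curr_before = ["poc4", "poc2"]   # short-term, earlier than the current picture
st_curr_after  = ["poc8"]           # short-term, later than the current picture
long_term_curr = ["poc0"]           # long-term, usable by the current picture

def concat(*subsets):
    return [pic for subset in subsets for pic in subset]

# List 0 favors pictures before the current picture; list 1 favors pictures after.
ref_pic_list0 = concat(st_curr_before, st_curr_after, long_term_curr)
ref_pic_list1 = concat(st_curr_after, st_curr_before, long_term_curr)
print(ref_pic_list0)   # ['poc4', 'poc2', 'poc8', 'poc0']
print(ref_pic_list1)   # ['poc8', 'poc4', 'poc2', 'poc0']
```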
Abstract:
A method comprises encoding a first view component of a first view of a multiview bitstream; and encoding a second view component of a second view; wherein the encoding of the second view component enables generation of a reference picture list for the second view component that includes at least one of the following: (a) a first field view component based on the first view component or (b) a first complementary field view component pair including the first view component.
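A minimal sketch of the two listing options is shown below. The data structures and the selection flag are assumptions for illustration; the abstract only requires that the reference picture list can include either alternative.

```python
# Minimal sketch of the two options for the second view component's reference
# picture list. Field names and the selection flag are illustrative assumptions.

first_view_component = {"view": 0, "parity": "top", "poc": 0}
complementary_field  = {"view": 0, "parity": "bottom", "poc": 0}

def reference_list_for_second_view(use_field_pair):
    if use_field_pair:
        # (b) a complementary field view component pair that includes the
        # first view component
        return [("pair", first_view_component, complementary_field)]
    # (a) a field view component based on the first view component
    return [("field", first_view_component)]

print(reference_list_for_second_view(use_field_pair=False))
print(reference_list_for_second_view(use_field_pair=True))
```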
Abstract:
A system, method and computer program tangibly embodied in a memory medium for implementing motion skip and single-loop decoding for multi-view video coding. In various embodiments, a more efficient motion skip is used for the current JMVM arrangement by using 8×8 or 4×4 pel disparity motion vector accuracy, while maintaining a motion compensation process that is compliant with the H.264/AVC design regarding hierarchical macroblock partitioning. Adaptive reference merging may be used in order to achieve a more accurate motion skip from one inter-view reference picture. In order to indicate whether a picture is to be used for motion skip, a new syntax element or syntax modification in the NAL unit header may be used.
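The disparity-accuracy idea can be sketched as follows. The helper names, the quarter-pel input units, and the motion-field lookup are assumptions for illustration and are not taken from JMVM.

```python
# Minimal sketch of disparity-based motion skip at block-aligned accuracy.

def quantize_disparity(dv_quarter_pel, grid=8):
    # Round a quarter-pel disparity vector to an integer multiple of the
    # block grid (8x8 or 4x4 pel), so it points at a whole block in the
    # inter-view reference picture.
    def snap(v):
        return int(round(v / 4.0 / grid)) * grid
    return snap(dv_quarter_pel[0]), snap(dv_quarter_pel[1])

def motion_skip(current_block_xy, dv_quarter_pel, interview_motion, grid=8):
    # Inherit motion parameters from the block in the inter-view reference
    # picture that the quantized disparity vector points at.
    dx, dy = quantize_disparity(dv_quarter_pel, grid)
    src = (current_block_xy[0] + dx, current_block_xy[1] + dy)
    return interview_motion.get(src)   # None if no motion is available there

interview_motion = {(16, 0): {"mv": (3, -1), "ref_idx": 0}}
print(motion_skip((8, 0), dv_quarter_pel=(34, 2), interview_motion=interview_motion))
```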
Abstract:
In one example, a video coder is configured to code information indicative of whether view synthesis prediction is enabled for video data. When the information indicates that view synthesis prediction is enabled for the video data, the video coder may generate a view synthesis picture using the video data and code at least a portion of a current picture relative to the view synthesis picture. That portion of the current picture may comprise, for example, a block (e.g., a PU, a CU, a macroblock, or a partition of a macroblock), a slice, a tile, a wavefront, or the entirety of the current picture. On the other hand, when the information indicates that view synthesis prediction is not enabled for the video data, the video coder may code the current picture using at least one of intra-prediction, temporal inter-prediction, and inter-view prediction without reference to any view synthesis pictures.
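The coder-side decision described above can be sketched as follows. The function names (synthesize_view_picture, code_with_reference, code_without_vsp) are placeholders standing in for the actual coding process and are assumptions made for the sketch.

```python
# Minimal sketch of gating coding on a view synthesis prediction (VSP) flag.

def code_picture(current_picture, video_data, vsp_enabled):
    if vsp_enabled:
        # Generate a view synthesis picture from previously coded data and
        # code at least a portion (block, slice, tile, wavefront, or the whole
        # picture) of the current picture relative to it.
        vsp_picture = synthesize_view_picture(video_data)
        return code_with_reference(current_picture, vsp_picture)
    # Otherwise code the picture with intra-prediction, temporal inter-prediction,
    # and/or inter-view prediction, never referencing a view synthesis picture.
    return code_without_vsp(current_picture)

# Placeholder implementations so the sketch runs end to end.
def synthesize_view_picture(video_data):
    return {"type": "vsp"}

def code_with_reference(pic, ref):
    return {"picture": pic, "ref": ref}

def code_without_vsp(pic):
    return {"picture": pic, "ref": None}

print(code_picture("pic0", video_data={}, vsp_enabled=True))
```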