Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce the number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and coding a second syntax element in the slice header only when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to the current slice.
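The following Python sketch illustrates the signaling decision described above. The syntax element names (deblocking_override_flag, beta_offset_div2, tc_offset_div2) and the data layout are hypothetical placeholders loosely modeled on HEVC-style syntax, not the syntax of this disclosure, and the encoder's choice to override is hard-coded for illustration.

    # Hedged sketch: the selection flag is coded only when deblocking
    # parameters can be present in both the picture layer parameter set
    # and the slice header.
    def slice_deblocking_syntax(pps_has_params, slice_has_params,
                                pps_params, slice_params):
        """Return (syntax elements coded in the slice header, parameters used)."""
        elements = []
        if pps_has_params and slice_has_params:
            # Second syntax element: selects which set defines the filter.
            use_slice = True  # encoder's choice for this slice (illustrative)
            elements.append(("deblocking_override_flag", int(use_slice)))
        else:
            # Only one set is present, so the selection flag is not coded.
            use_slice = slice_has_params
        params = slice_params if use_slice else pps_params
        if use_slice:
            elements.append(("beta_offset_div2", params["beta_offset_div2"]))
            elements.append(("tc_offset_div2", params["tc_offset_div2"]))
        return elements, params

    print(slice_deblocking_syntax(True, True,
                                  {"beta_offset_div2": 0, "tc_offset_div2": 0},
                                  {"beta_offset_div2": 2, "tc_offset_div2": -1}))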
Abstract:
In one example, a device for coding video data includes a video coder configured to: code information representative of whether an absolute value of an x-component of a motion vector difference value for a current block of video data is greater than zero; code information representative of whether an absolute value of a y-component of the motion vector difference value is greater than zero; when the absolute value of the x-component is greater than zero, code information representative of the absolute value of the x-component; when the absolute value of the y-component is greater than zero, code information representative of the absolute value of the y-component; when the absolute value of the x-component is greater than zero, code a sign of the x-component; and, when the absolute value of the y-component is greater than zero, code a sign of the y-component.
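A minimal Python sketch of the coding order described above; the symbol names and the writer callback are illustrative, not an actual codec API.

    def code_mvd(mvd_x, mvd_y, write):
        abs_x, abs_y = abs(mvd_x), abs(mvd_y)
        # 1) greater-than-zero flags for both components
        write("abs_mvd_greater0_flag_x", int(abs_x > 0))
        write("abs_mvd_greater0_flag_y", int(abs_y > 0))
        # 2) magnitudes, only for components whose flag was 1
        if abs_x > 0:
            write("abs_mvd_x", abs_x)
        if abs_y > 0:
            write("abs_mvd_y", abs_y)
        # 3) signs, only for non-zero components
        if abs_x > 0:
            write("mvd_sign_flag_x", int(mvd_x < 0))
        if abs_y > 0:
            write("mvd_sign_flag_y", int(mvd_y < 0))

    coded = []
    code_mvd(-3, 0, lambda name, value: coded.append((name, value)))
    print(coded)  # y-component is zero, so neither its magnitude nor sign is coded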
Abstract:
A video encoder is configured to determine a picture size for one or more pictures included in a video sequence. The picture size associated with the video sequence may be a multiple of an aligned coding unit size for the video sequence. In one example, the aligned coding unit size for the video sequence may comprise a minimum coding unit size where the minimum coding unit size is selected from a plurality of smallest coding unit sizes corresponding to different pictures in the video sequence. A video decoder is configured to obtain syntax elements to determine the picture size and the aligned coding unit size for the video sequence. The video decoder decodes the pictures included in the video sequence with the picture size, and stores the decoded pictures in a decoded picture buffer.
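The sketch below, in Python, shows one way the aligned coding unit size and the picture size could relate, under the assumption that the coded picture size is the source size rounded up to a multiple of the aligned coding unit size; the function names and example values are illustrative only.

    def aligned_cu_size(smallest_cu_sizes):
        # One example from the text: select the minimum of the per-picture
        # smallest coding unit sizes in the video sequence.
        return min(smallest_cu_sizes)

    def aligned_picture_size(width, height, cu_size):
        def round_up(v):
            return ((v + cu_size - 1) // cu_size) * cu_size
        return round_up(width), round_up(height)

    cu = aligned_cu_size([8, 16, 8])                  # per-picture smallest CU sizes
    print(cu, aligned_picture_size(1920, 1082, cu))   # -> 8 (1920, 1088)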
Abstract:
A device for coding three-dimensional video data includes a video coder configured to determine that a first block of a first texture view is to be coded using a block-based view synthesis mode; locate, in a depth view, a first depth block that corresponds to the first block of the first texture view; determine depth values of two or more corner positions of the first depth block; based on the depth values, derive a disparity vector for the first block; using the disparity vector, locate a first block of a second texture view; and inter-predict the first block of the first texture view using the first block of the second texture view.
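A minimal Python sketch of deriving a disparity vector from corner depth samples. Taking the maximum corner depth and converting it through a precomputed depth-to-disparity table are assumptions made for illustration; the actual conversion depends on camera parameters, and the disclosed derivation may combine the corner values differently.

    def derive_disparity(depth_block, depth_to_disparity):
        h, w = len(depth_block), len(depth_block[0])
        corners = [depth_block[0][0], depth_block[0][w - 1],
                   depth_block[h - 1][0], depth_block[h - 1][w - 1]]
        # Assumption: use the maximum corner depth (closest object) to
        # select the disparity; horizontal component only.
        representative_depth = max(corners)
        return (depth_to_disparity[representative_depth], 0)

    table = [d >> 2 for d in range(256)]   # toy conversion table
    block = [[10, 20], [30, 200]]          # 2x2 corresponding depth block
    print(derive_disparity(block, table))  # -> (50, 0)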
Abstract:
This disclosure describes techniques for coding video data. In particular, this disclosure describes techniques for loop filtering for video coding. The techniques of this disclosure may apply to loop filtering and/or partial loop filtering across block boundaries in scalable video coding processes. Loop filtering may include, for example, one or more of adaptive loop filtering (ALF), sample adaptive offset (SAO) filtering, and deblocking filtering.
Abstract:
In general, techniques are described for implementing an 8-point discrete cosine transform (DCT). An apparatus comprising an 8-point DCT hardware unit may implement these techniques to transform media data from a spatial domain to a frequency domain. The 8-point DCT hardware unit includes an even portion comprising first and second internal factors (A, B) that are related to a first scaled factor (μ) in accordance with a first relationship. The 8-point DCT hardware unit also includes an odd portion comprising third, fourth, fifth and sixth internal factors (G, D, E, Z) that are related to a second scaled factor (η) in accordance with a second relationship. The first relationship relates the first scaled factor to the first and second internal factors. The second relationship relates the second scaled factor to the third and fourth internal factors, as well as to the fifth and sixth internal factors.
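For concreteness, the following Python reference computes an 8-point DCT-II directly from its definition. It does not reproduce the particular even/odd factorization with internal factors (A, B) and (G, D, E, Z) described above; it only shows the transform that such a hardware unit approximates.

    import math

    def dct8(x):
        """Orthonormal 8-point DCT-II computed from its closed-form definition."""
        N = 8
        out = []
        for k in range(N):
            s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                    for n in range(N))
            scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
            out.append(scale * s)
        return out

    print([round(c, 3) for c in dct8([1, 2, 3, 4, 5, 6, 7, 8])])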
Abstract:
This disclosure describes techniques for transforming residual blocks of video data. In particular, this disclosure describes techniques in which a plurality of different transforms are selectively applied to the residual blocks based on the prediction mode of the video blocks. At least a portion of the plurality of transforms are separable directional transforms specifically trained for a corresponding prediction mode to provide better energy compaction for the residual blocks of the given prediction mode. Using separable directional transforms offers lower computational complexity and storage requirements than the use of non-separable directional transforms. Additionally, a scan order used to scan the coefficients of the residual block may be adjusted when applying separable directional transforms. In particular, the scan order may be adjusted based on statistics associated with one or more previously coded blocks to better ensure that non-zero coefficients are grouped near the front of the one-dimensional coefficient vector, improving the effectiveness of entropy coding.
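A minimal Python sketch of the two ideas in this abstract: applying a separable transform as a row pass followed by a column pass, and adapting the scan order from statistics of previously coded blocks. The matrices, statistics, and ordering rule are illustrative assumptions, not the trained directional transforms or the exact adaptation rule of the disclosure.

    def apply_separable(block, row_t, col_t):
        """Return col_t * block * row_t^T, i.e. two 1-D transform passes."""
        n = len(block)

        def matmul(a, b):
            return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
                    for i in range(n)]

        row_t_transposed = [list(r) for r in zip(*row_t)]
        return matmul(col_t, matmul(block, row_t_transposed))

    def adaptive_scan(nonzero_counts):
        n = len(nonzero_counts)
        positions = [(r, c) for r in range(n) for c in range(n)]
        # Positions that were most often non-zero in previously coded blocks
        # are scanned first, grouping significant coefficients near the front
        # of the one-dimensional coefficient vector.
        return sorted(positions, key=lambda p: -nonzero_counts[p[0]][p[1]])

    identity = [[1, 0], [0, 1]]
    print(apply_separable([[1, 2], [3, 4]], identity, identity))  # block unchanged
    counts = [[9, 7, 2, 0], [8, 3, 1, 0], [2, 1, 0, 0], [0, 0, 0, 0]]
    print(adaptive_scan(counts)[:5])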
Abstract:
A device for coding video data includes a video coder configured to code first significance information for transform coefficients associated with residual data, wherein the first significance information indicates if a first sub-block comprises at least one non-zero coefficient, wherein the first sub-block is a sub-block of an entire transform block; and, code second significance information, wherein the second significance information indicates if a second sub-block comprises at least one non-zero coefficient, wherein the second sub-block is a sub-block of the first sub-block, wherein coding the second significance information comprises performing an arithmetic coding operation on the second significance information, wherein a context for the arithmetic coding operation is determined based on one or more neighboring sub-blocks of a same size as the first sub-block.
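The Python sketch below illustrates the flavor of the context derivation: the significance flag for a sub-block is arithmetically coded with a context selected from the significance of neighboring sub-blocks at the larger, first-level sub-block size. The choice of right and lower neighbors and the two context indices are assumptions for illustration, not the disclosed context model.

    def sub_block_context(sub_x, sub_y, coded_sub_block_flags):
        """Pick an arithmetic-coding context from neighboring sub-block flags."""
        num = len(coded_sub_block_flags)
        right = coded_sub_block_flags[sub_y][sub_x + 1] if sub_x + 1 < num else 0
        below = coded_sub_block_flags[sub_y + 1][sub_x] if sub_y + 1 < num else 0
        # Context 0: neither neighbor contains a non-zero coefficient;
        # context 1: at least one neighbor does.
        return 1 if (right or below) else 0

    flags = [[0, 1],   # significance of the larger, first-level sub-blocks
             [0, 0]]
    print(sub_block_context(0, 0, flags))   # -> 1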
Abstract:
This disclosure describes techniques for improving coding efficiency of motion prediction in multiview and 3D video coding. In one example, a method of decoding video data comprises deriving one or more disparity vectors for a current block, the disparity vectors being derived from neighboring blocks relative to the current block, converting a disparity vector to one or more of inter-view predicted motion vector candidates and inter-view disparity motion vector candidates, adding the one or more inter-view predicted motion vector candidates and the one or more inter-view disparity motion vector candidates to a candidate list for a motion vector prediction mode, and decoding the current block using the candidate list.
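A minimal Python sketch of assembling the candidate list described above. The neighbor-based disparity derivation and the data structures are simplified placeholders rather than the actual multiview/3D motion vector prediction process.

    def derive_disparity_vector(neighbor_blocks):
        # First available disparity motion vector among neighboring blocks.
        for blk in neighbor_blocks:
            if blk.get("disparity_mv") is not None:
                return blk["disparity_mv"]
        return None

    def build_candidate_list(neighbor_blocks, reference_view_mv, max_candidates=5):
        candidates = []
        dv = derive_disparity_vector(neighbor_blocks)
        if dv is not None:
            if reference_view_mv is not None:
                # Inter-view predicted motion vector candidate: motion of the
                # block that the disparity vector points to in the other view.
                candidates.append(("inter_view_predicted_mv", reference_view_mv))
            # Inter-view disparity motion vector candidate: the disparity
            # vector itself, used as a motion vector into the other view.
            candidates.append(("inter_view_disparity_mv", dv))
        # ... spatial and temporal candidates of the base process would follow ...
        return candidates[:max_candidates]

    neighbors = [{"disparity_mv": None}, {"disparity_mv": (12, 0)}]
    print(build_candidate_list(neighbors, reference_view_mv=(3, -1)))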
Abstract:
An apparatus for coding video information includes a memory unit configured to store video information associated with a reference block; and a processor in communication with the memory unit, wherein the processor is configured to determine a value of a current video unit associated with the reference block based, at least in part, on a classification of the reference block and a scan order selected by the processor based upon the classification. The scan order indicates an order in which values within the reference block are processed to at least partially determine the value of the current video unit.
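A minimal Python sketch of the idea: classify the reference block, select a scan order from the classification, and process the reference values in that order to determine a predicted value. The gradient-based classification and the averaging of the first few scanned samples are illustrative assumptions, not the disclosed classification or prediction rule.

    def classify(block):
        # Toy classification: compare horizontal vs. vertical sample gradients.
        h = sum(abs(r[c + 1] - r[c]) for r in block for c in range(len(r) - 1))
        v = sum(abs(block[r + 1][c] - block[r][c])
                for r in range(len(block) - 1) for c in range(len(block[0])))
        return "horizontal" if h <= v else "vertical"

    def scan_order(block, classification):
        n_rows, n_cols = len(block), len(block[0])
        if classification == "horizontal":   # row-major scan
            return [(r, c) for r in range(n_rows) for c in range(n_cols)]
        return [(r, c) for c in range(n_cols) for r in range(n_rows)]  # column-major

    def predict_value(block):
        order = scan_order(block, classify(block))
        first = [block[r][c] for r, c in order[:4]]
        return sum(first) // len(first)

    ref = [[10, 10, 10, 10], [40, 40, 40, 40],
           [70, 70, 70, 70], [100, 100, 100, 100]]
    print(classify(ref), predict_value(ref))  # -> horizontal 10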