Abstract:
Techniques described herein are related to harmonizing the signaling of coding modes and filtering in video coding. In one example, a method of decoding video data is provided that includes decoding a first syntax element to determine whether PCM coding mode is used for one or more video blocks, wherein the PCM coding mode refers to a mode that codes pixel values as PCM samples. The method further includes decoding a second syntax element to determine whether in-loop filtering is applied to the one or more video blocks. Responsive to the first syntax element indicating that the PCM coding mode is used, the method further includes applying in-loop filtering to the one or more video blocks based at least in part on the second syntax element and decoding the one or more video blocks based at least in part on the first and second syntax elements.
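The decoding flow described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the flag names, the block representation, and the stand-in smoothing filter are all assumptions for the sketch.

```python
# Hypothetical sketch: a PCM-mode flag and an in-loop-filter flag are
# decoded for a block, and filtering of PCM samples is gated by the
# second flag. Names are illustrative, not from any codec specification.

def in_loop_filter(samples):
    # Stand-in smoothing filter; a real codec would apply deblocking/SAO.
    return [(a + b) // 2 for a, b in zip(samples, samples[1:] + samples[-1:])]

def decode_block(pcm_flag, pcm_loop_filter_flag, samples):
    """Return (reconstructed_samples, filtered) for one video block."""
    if pcm_flag:
        # PCM mode: pixel values were coded directly as PCM samples.
        recon = list(samples)
        if pcm_loop_filter_flag:
            return in_loop_filter(recon), True   # filtering still applies
        return recon, False                       # filtering bypassed
    # Non-PCM path (prediction + residual) would go here.
    return in_loop_filter(list(samples)), True
```

The key point the abstract makes is visible in the two PCM branches: the second syntax element alone decides whether in-loop filtering touches the PCM samples.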
Abstract:
In general, techniques are described for performing motion vector prediction in 3D video coding and, more particularly, for managing a candidate list of motion vector predictors (MVPs) for a block of video data. In some examples, a video coder, such as a video encoder or a video decoder, includes at least three MVPs in a candidate list of MVPs for a current block in a first view of a current access unit of the video data, wherein the at least three MVPs comprise an inter-view motion vector predictor (IVMP), which is either a temporal motion vector derived from a block in a second view of the current access unit or a disparity motion vector derived from a disparity vector.
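A simplified sketch of assembling such a candidate list is shown below. The IVMP derivation rule (temporal MV when the corresponding inter-view block is temporally predicted, else a disparity MV from the disparity vector) follows the abstract; the list ordering and data shapes are illustrative assumptions.

```python
# Illustrative sketch of a merge-style candidate list containing an
# inter-view motion vector predictor (IVMP). Structures are simplified.

def derive_ivmp(ref_block_mv, disparity_vector):
    if ref_block_mv is not None:
        return ("temporal", ref_block_mv)    # reuse inter-view temporal MV
    return ("disparity", disparity_vector)   # fall back to disparity MV

def build_candidate_list(spatial_mvps, ref_block_mv, disparity_vector):
    candidates = [("spatial", mv) for mv in spatial_mvps]
    # Placing the IVMP first is an assumption for this sketch.
    candidates.insert(0, derive_ivmp(ref_block_mv, disparity_vector))
    return candidates
```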
Abstract:
The techniques are generally related to the coding of weighted prediction parameters. A video coder may determine the weighted prediction parameters for a reference picture list based on coded weighted prediction parameters for another reference picture list. Examples of the reference picture list include reference picture lists constructed for coding purposes, including a combined reference picture list.
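One way to picture the derivation is the sketch below, where weights coded for one reference picture list are reused for the other list. Matching entries by picture order count (POC) and the identity default `(1, 0)` are assumptions for illustration only.

```python
# Hypothetical sketch: weights for reference picture list 1 are inferred
# from the coded weights of list 0 wherever the same reference picture
# (identified here by POC) appears in both lists, so they need not be
# re-signalled.

def infer_list1_weights(list0_weights, list1_pocs):
    """list0_weights: {poc: (weight, offset)} coded explicitly.
    Returns weights for list 1, reusing list 0 entries on a POC match."""
    default = (1, 0)  # identity weight, zero offset (assumed default)
    return {poc: list0_weights.get(poc, default) for poc in list1_pocs}
```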
Abstract:
This disclosure describes techniques for signaling deblocking filter parameters for a current slice of video data with reduced bitstream overhead. Deblocking filter parameters may be coded in one or more of a picture layer parameter set and a slice header. The techniques reduce a number of bits used to signal the deblocking filter parameters by coding a first syntax element that indicates whether deblocking filter parameters are present in both the picture layer parameter set and the slice header, and only coding a second syntax element in the slice header when both sets of deblocking filter parameters are present. Coding the second syntax element is eliminated when deblocking filter parameters are present in only one of the picture layer parameter set or the slice header. The second syntax element indicates which set of deblocking filter parameters to use to define a deblocking filter applied to a current slice.
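The selection logic can be sketched as below. The function and flag names are illustrative (modelled loosely on deblocking override signaling of the HEVC style), not the claimed syntax.

```python
# Sketch of the signalling decision: the selection flag (second syntax
# element) is only coded when deblocking parameters are present in both
# the picture layer parameter set and the slice header.

def slice_deblock_params(params_in_both, pps_params, slice_params,
                         override_flag=None):
    """Pick the deblocking filter parameters for the current slice."""
    if params_in_both:
        # Both sources present: a second syntax element selects one set.
        return slice_params if override_flag else pps_params
    # Parameters present in only one place: no selection flag is coded.
    return pps_params if pps_params is not None else slice_params
```

This mirrors the bit saving the abstract describes: in the single-source case the decoder resolves the parameters without reading a second syntax element.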
Abstract:
In one example, a device includes a video coder configured to code a first set of syntax elements for the coefficients of a residual block of video data, and code, using at least a portion of the first set of syntax elements as context data, a second set of syntax elements for the coefficients, wherein the first set of syntax elements each correspond to a first type of syntax element for the coefficients, and wherein the second set of syntax elements each correspond to a second, different type of syntax element for the coefficients. For example, the first set of syntax elements may comprise values indicating whether the coefficients are significant (that is, have non-zero level values), and the second set of syntax elements may comprise values indicating whether level values for the coefficients have absolute values greater than one.
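A toy version of this cross-type context selection is sketched below: the already-known significance flags (first set of syntax elements) pick the context index for the greater-than-one flags (second set). The specific context rule, a capped count of significant coefficients seen so far, is an illustrative assumption, not a standard's derivation.

```python
# Toy sketch of using one type of syntax element as context data for
# coding a second, different type.

def significance_flags(levels):
    # First set: 1 where the coefficient level is non-zero.
    return [1 if lv != 0 else 0 for lv in levels]

def gt1_contexts(sig_flags):
    # Context for position i = number of significant coefficients seen
    # before i, capped at 2 (assumed rule for illustration).
    ctx, seen = [], 0
    for f in sig_flags:
        ctx.append(min(seen, 2))
        seen += f
    return ctx

def gt1_flags(levels):
    # Second set: coded only for significant coefficients.
    return [1 if abs(lv) > 1 else 0 for lv in levels if lv != 0]
```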
Abstract:
A video encoder generates a first and a second motion vector (MV) candidate list. The first MV candidate list includes a plurality of MV candidates. The video encoder selects, from the first MV candidate list, an MV candidate for a first prediction unit (PU) of a coding unit (CU). The second MV candidate list includes each of the MV candidates of the first MV candidate list except the MV candidate selected for the first PU. The video encoder selects, from the second MV candidate list, an MV candidate for a second PU of the CU. A video decoder generates the first and second MV candidate lists in the same way and generates predictive sample blocks for the first and second PUs based on motion information of the selected MV candidates.
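The list construction is simple to state in code. The sketch below assumes candidates are plain tuples and that the first PU's choice is known by index; both are simplifications for illustration.

```python
# Minimal sketch: the second PU's candidate list is the first list with
# the first PU's chosen candidate removed, so the two PUs of a CU cannot
# select identical motion information.

def second_candidate_list(first_list, chosen_index):
    return first_list[:chosen_index] + first_list[chosen_index + 1:]
```

Because the decoder applies the same exclusion rule after parsing the first PU's choice, encoder and decoder stay in sync without extra signaling.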
Abstract:
The techniques described in this disclosure may be generally related to identifying when signaling of a motion vector difference (MVD) is skipped for one or both reference picture lists. The techniques may further relate to contexts for signaling MVD values. The techniques may also be related to syntax that indicates when at least one of the MVD values is zero.
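The effect of a skipped MVD on motion vector reconstruction can be sketched as follows; the flag name and tuple representation are assumptions for illustration.

```python
# Illustrative sketch: when MVD signalling is skipped for a reference
# picture list, the MVD is treated as (0, 0) and the motion vector
# equals the predictor.

def reconstruct_mv(mvp, mvd_skip_flag, mvd=(0, 0)):
    if mvd_skip_flag:
        return mvp                                 # MVD not signalled
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])      # MV = predictor + MVD
```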
Abstract:
In an example, aspects of this disclosure relate to a method for coding video data that includes predicting a first non-square partition of a current block of video data using a first intra-prediction mode, where the first non-square partition has a first size. The method also includes predicting a second non-square partition of the current block of video data using a second intra-prediction mode, where the second non-square partition has a second size different than the first size. The method also includes coding the current block based on the predicted first and second non-square partitions.
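A minimal sketch of the partitioning and per-partition mode assignment is given below; the horizontal split, the split position, and the mode labels are assumptions for illustration only.

```python
# Sketch: split an n x n block into two non-square partitions of
# different sizes, each carrying its own intra-prediction mode.

def split_nonsquare(n, split_row):
    """Split an n x n block into split_row x n and (n - split_row) x n."""
    assert 0 < split_row < n and split_row != n - split_row  # sizes differ
    return (split_row, n), (n - split_row, n)

def assign_modes(n, split_row, mode_top, mode_bottom):
    top, bottom = split_nonsquare(n, split_row)
    return [(top, mode_top), (bottom, mode_bottom)]
```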
Abstract:
In general, techniques are described for implementing an 8-point inverse discrete cosine transform (IDCT). An apparatus comprising an 8-point inverse discrete cosine transform (IDCT) hardware unit may implement these techniques to transform media data from a frequency domain to a spatial domain. The 8-point IDCT hardware unit includes an even portion comprising first and second internal factors (A, B) that are related to a first scaled factor (μ) in accordance with a first relationship. The 8-point IDCT hardware unit also includes an odd portion comprising third, fourth, fifth and sixth internal factors (G, D, E, Z) that are related to a second scaled factor (η) in accordance with a second relationship. The first relationship relates the first scaled factor to the first and second internal factors. The second relationship relates the second scaled factor to the third, fourth, fifth and sixth internal factors.
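The even/odd structure the abstract refers to exploits a basis symmetry: even-frequency basis vectors are symmetric about the block centre, odd-frequency ones are antisymmetric, so outputs n and 7−n share the even portion and differ only in the sign of the odd portion. The sketch below shows that decomposition with the plain orthonormal DCT basis; the patented unit's specific internal factors (A, B, G, D, E, Z) and scaled factors (μ, η) are not reproduced here.

```python
import math

def _a(k):
    # Orthonormal DCT scaling factors.
    return math.sqrt(1 / 8) if k == 0 else math.sqrt(2 / 8)

def dct8(x):
    """Forward orthonormal 8-point DCT-II (used here for round-trip checks)."""
    return [_a(k) * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / 16)
                        for n in range(8)) for k in range(8)]

def idct8(X):
    """8-point IDCT split into an even portion (symmetric about the block
    centre) and an odd portion (antisymmetric), mirroring the even/odd
    structure described in the abstract."""
    x = [0.0] * 8
    for n in range(4):
        even = sum(_a(k) * X[k] * math.cos(math.pi * (2 * n + 1) * k / 16)
                   for k in (0, 2, 4, 6))
        odd = sum(_a(k) * X[k] * math.cos(math.pi * (2 * n + 1) * k / 16)
                  for k in (1, 3, 5, 7))
        x[n] = even + odd        # top half of the output
        x[7 - n] = even - odd    # mirrored half: odd terms flip sign
    return x
```

A fixed-point hardware unit would replace the cosine evaluations with the precomputed internal factors; the butterfly structure is unchanged.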
Abstract:
A video encoding device is configured to obtain an N by N array of residual values for a luma component and a corresponding N/2 by N array of residual values for a chroma component. The video encoding device may partition the N/2 by N array of residual values for the chroma component into two N/2 by N/2 sub-arrays of chroma residual values. The video encoding device may further partition the sub-arrays of chroma residual values based on the partitioning of the array of residual values for the luma component. The video encoding device may perform a transform on each of the sub-arrays of chroma residual values to generate transform coefficients. A video decoding device may use data defining sub-arrays of transform coefficients to perform a reciprocal process to generate residual values.
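The first partitioning step can be sketched as below. The sketch assumes the N/2 by N chroma array is N rows of N/2 residual values (consistent with 4:2:2 sampling, where chroma has half the luma width) and splits it along its longer dimension into two square halves.

```python
# Minimal sketch: split an N-row by N/2-column chroma residual array
# into two N/2 x N/2 sub-arrays, each of which would then be transformed
# separately. The row/column orientation is an assumption.

def split_chroma_residual(chroma, n):
    """chroma: n rows x n//2 columns of residual values."""
    half = n // 2
    top = [row[:] for row in chroma[:half]]
    bottom = [row[:] for row in chroma[half:]]
    return top, bottom
```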