Abstract:
In one example, an apparatus for coding video data comprises a video coder configured to generate first and second lists of motion information candidates, respectively, for first and second video blocks using a common list construction process, wherein the common list construction process is common to at least a first motion information prediction mode and a second motion information prediction mode. The video coder is further configured to code the first video block using the first motion information prediction mode based on a first motion information candidate selected from the first list, and code the second video block using the second motion information prediction mode based on a second motion information candidate selected from the second list.
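The shared list construction could be sketched as follows; all names here (`Candidate`, `build_candidate_list`, the `"merge"`/`"amvp"` mode strings) are hypothetical illustrations, not the disclosed implementation. The point is that one construction routine serves both prediction modes, which then differ only in how they use the selected candidate:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    mv: tuple      # motion vector (x, y); representation is illustrative
    ref_idx: int   # reference picture index


def build_candidate_list(spatial, temporal, max_size=5):
    """Common list construction shared by both prediction modes.

    `spatial` and `temporal` are candidates gathered from neighboring
    blocks; exact duplicates are pruned so each entry is unique.
    """
    out = []
    for cand in list(spatial) + list(temporal):
        if cand not in out:        # prune exact duplicates
            out.append(cand)
        if len(out) == max_size:
            break
    return out


def code_block(mode, candidates, index):
    # Both modes pick a candidate from the same list; only the way the
    # final motion information is derived differs (sketched here).
    cand = candidates[index]
    if mode == "merge":            # inherit motion info directly
        return cand.mv, cand.ref_idx
    else:                          # "amvp": candidate is a predictor;
        return cand.mv, None       # an MVD would be signaled separately
```

Because the list is built once by one routine, the coder avoids maintaining two separate construction processes for the two modes.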
Abstract:
In one example, an apparatus for processing video data comprises a video coder configured to, for each of one or more chrominance components, calculate a chrominance quantization parameter for a common edge between two blocks of video data based on a first luminance quantization parameter for a first of the two blocks, a second luminance quantization parameter for a second of the two blocks, and a chrominance quantization parameter offset value for the chrominance component. The video coder is further configured to determine a strength for a deblocking filter for the common edge based on the chrominance quantization parameter for the chrominance component, and apply the deblocking filter according to the determined strength to deblock the common edge.
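A minimal sketch of the chrominance QP derivation, assuming an HEVC-style rounded average of the two luminance QPs plus a per-component offset. The function name and the 0..51 clip range are illustrative assumptions; a real codec may additionally pass the result through a luma-to-chroma QP mapping table:

```python
def chroma_qp_for_edge(luma_qp_p, luma_qp_q, chroma_qp_offset):
    """Chrominance QP for a common edge between blocks P and Q.

    Averages the two luminance QPs (rounding up), then applies the
    per-component chrominance QP offset.  The 0..51 clip matches the
    usual 8-bit QP range; this is a sketch, not the claimed derivation.
    """
    qp_avg = (luma_qp_p + luma_qp_q + 1) >> 1
    return max(0, min(51, qp_avg + chroma_qp_offset))
```

The deblocking filter strength for the edge would then be looked up from a QP-indexed threshold table using the returned value.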
Abstract:
A video coding device is configured to obtain an array of sample values. The sample values may be formatted according to a 4:2:0, 4:2:2, or 4:4:4 chroma format. The video coding device determines whether to apply a first filter to rows of chroma sample values associated with defined horizontal edges within the array. The video coding device determines whether to apply a second filter to columns of chroma sample values associated with defined vertical edges. The horizontal and vertical edges may be separated by a number of chroma samples according to a deblocking grid.
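One way the edge spacing could follow from the chroma format, assuming a deblocking grid defined on 8-sample luma intervals; the helper names, the subsampling table, and the 8-sample default are assumptions of this sketch:

```python
# Chroma subsampling factors (horizontal, vertical) per chroma format.
SUBSAMPLING = {"4:2:0": (2, 2), "4:2:2": (2, 1), "4:4:4": (1, 1)}


def chroma_deblock_grid(chroma_format, luma_grid=8):
    """Spacing of the chroma deblocking grid, in chroma samples.

    With an 8-sample luma grid, the chroma spacing is simply the luma
    spacing divided by the subsampling factor of the chroma format.
    """
    sub_x, sub_y = SUBSAMPLING[chroma_format]
    return luma_grid // sub_x, luma_grid // sub_y


def edges_on_grid(width, height, chroma_format):
    """Columns with vertical edges and rows with horizontal edges
    eligible for filtering, separated by the grid spacing."""
    gx, gy = chroma_deblock_grid(chroma_format)
    vertical = list(range(gx, width, gx))
    horizontal = list(range(gy, height, gy))
    return vertical, horizontal
```

Note how 4:2:2 yields asymmetric spacing: vertical edges every 4 chroma samples but horizontal edges every 8, because only the horizontal direction is subsampled.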
Abstract:
The techniques of this disclosure are generally related to parallel coding of video units that reside along rows or columns of blocks in largest coding units. For example, the techniques include removing intra-prediction dependencies between two video units in different rows or columns to allow for parallel coding of rows or columns of the video units.
Abstract:
Techniques for coding data, such as video data, include coding a first syntax element, conforming to a particular type of syntax element, of a first slice of video data, conforming to a first slice type, using an initialization value set. The techniques further include coding a second syntax element, conforming to the same type of syntax element, of a second slice of video data, conforming to a second slice type, using the same initialization value set. In this example, the first slice type may be different from the second slice type. Also in this example, at least one of the first slice type and the second slice type may be a temporally predicted slice type. For example, the at least one of the first and second slice types may be a unidirectional inter-prediction (P) slice type, or a bi-directional inter-prediction (B) slice type.
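A hedged sketch of the sharing: one initialization value set per syntax element type is reused across slice types, so the temporally predicted slice types (P and B) draw from the same values and fewer sets need to be stored. The table contents, dictionary layout, and names below are invented for illustration:

```python
def init_set_for(syntax_element, slice_type, init_sets):
    """Look up the initialization value set for a syntax element.

    `init_sets` maps each syntax element type to init sets keyed by a
    slice-type category.  P and B slices deliberately map to the same
    "inter" category, so they share one initialization value set.
    """
    category = "inter" if slice_type in ("P", "B") else "intra"
    return init_sets[syntax_element][category]
```

Sharing one set across P and B slices halves the storage for those syntax elements relative to keeping a distinct set per slice type, at the cost of a less slice-type-specific starting point for the entropy coder.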
Abstract:
In one example, a video coding device is configured to decode four blocks of video data, wherein the four blocks are non-overlapping and share one common point such that four edge segments are formed by the four blocks. For each of the four edge segments, the device determines whether to deblock the respective edge segment based on a first analysis of at least one line of pixels that is perpendicular to the respective edge segment and that intersects the respective edge segment. For each edge segment that was determined to be deblocked, the device determines whether to apply a strong filter or a weak filter to the respective edge segment based on a second analysis of the at least one line of pixels for the respective edge, and deblocks the four edge segments based on the determinations.
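The two-stage per-segment decision could look roughly like the following, loosely modeled on HEVC-style deblocking decisions. The activity measure, the threshold expressions, and all names are assumptions of this sketch, not the claimed method:

```python
def analyze_line(p, q, beta, tc):
    """Decide filtering for one edge segment from one line of pixels.

    `p` and `q` are samples on either side of the edge along a line
    perpendicular to it, ordered outward (p[0] and q[0] are adjacent
    to the edge).  `beta` and `tc` would normally come from QP-indexed
    threshold tables.
    """
    # First analysis: second-derivative activity on each side.
    dp = abs(p[2] - 2 * p[1] + p[0])
    dq = abs(q[2] - 2 * q[1] + q[0])
    if dp + dq >= beta:
        return "no_filter"          # edge is likely a real feature
    # Second analysis: flat, low-activity lines with only a small
    # step across the edge get the strong filter; otherwise weak.
    flat = 2 * (dp + dq) < beta >> 2
    small_step = abs(p[0] - q[0]) < (5 * tc + 1) >> 1
    return "strong" if flat and small_step else "weak"
```

The same routine would be run independently for each of the four edge segments meeting at the common point, using a line of pixels perpendicular to that segment.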
Abstract:
This disclosure describes techniques for coding significant coefficient information for a video block in a transform skip mode. The transform skip mode may provide a choice of a two-dimensional transform mode, a horizontal one-dimensional transform mode, a vertical one-dimensional transform mode, or a no transform mode. In other cases, the transform skip mode may provide a choice between a two-dimensional transform mode and a no transform mode. The techniques include selecting a transform skip mode for a video block, and coding significant coefficient information for the video block using a coding procedure defined based at least in part on the selected transform skip mode. Specifically, the techniques include using different coding procedures to code one or more of a position of a last non-zero coefficient and a significance map for the video block in the transform skip mode.
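A sketch of the mode-dependent dispatch described above. The mode names, the scan-order convention, and the per-mode procedure labels are placeholders for whatever coding procedures the codec actually defines:

```python
from enum import Enum


class TransformSkipMode(Enum):
    TWO_D = "2d"          # regular two-dimensional transform
    HORIZONTAL = "1d_h"   # horizontal one-dimensional transform
    VERTICAL = "1d_v"     # vertical one-dimensional transform
    NONE = "none"         # transform skipped entirely


def last_nonzero_position(coeffs):
    """Scan-order position of the last non-zero coefficient,
    or -1 if the block is all zero."""
    last = -1
    for i, c in enumerate(coeffs):
        if c != 0:
            last = i
    return last


def coding_procedure(mode):
    # Hypothetical per-mode dispatch: the last-position and
    # significance-map coding differ depending on which transform,
    # if any, was applied to the block.
    return {
        TransformSkipMode.TWO_D: "code_last_xy_then_sig_map",
        TransformSkipMode.HORIZONTAL: "code_last_x_per_row",
        TransformSkipMode.VERTICAL: "code_last_y_per_column",
        TransformSkipMode.NONE: "code_sig_map_without_last",
    }[mode]
```

The dispatch reflects the idea in the abstract that a single coefficient-coding routine is not reused unchanged: each transform skip mode selects a procedure matched to the statistics its residual actually has.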
Abstract:
In generating a candidate list for inter prediction video coding, a video coder can perform pruning operations when adding spatial candidates and temporal candidates to a candidate list while not performing pruning operations when adding an artificially generated candidate to the candidate list. The artificially generated candidate can have motion information that is the same as motion information of a spatial candidate or temporal candidate already in the candidate list.
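The asymmetric pruning could be sketched as follows; the candidate representation (plain tuples) and all helper names are hypothetical:

```python
def add_with_pruning(candidate_list, candidate):
    """Add a spatial or temporal candidate, pruning duplicates."""
    if candidate not in candidate_list:
        candidate_list.append(candidate)


def add_artificial(candidate_list, candidate):
    """Add an artificially generated candidate WITHOUT pruning.

    Skipping the duplicate check trades a possibly redundant entry
    for lower complexity, since artificial (e.g. combined or
    zero-motion) candidates are generated late in list construction.
    """
    candidate_list.append(candidate)


def build_merge_list(spatial, temporal, artificial, max_size=5):
    out = []
    for c in spatial + temporal:
        if len(out) < max_size:
            add_with_pruning(out, c)
    for c in artificial:
        if len(out) < max_size:
            add_artificial(out, c)
    return out
```

In the example below, the duplicate spatial candidate is pruned, while an artificial candidate identical to the temporal one is kept, matching the behavior the abstract describes.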
Abstract:
In one example, an apparatus for context adaptive entropy coding a video unit comprises a coder configured to code a syntax element, wherein a first value of the syntax element indicates that one or more of a plurality of context states are initialized using an adaptive initialization mode for the video unit, and a second value of the syntax element indicates that each of the plurality of context states is initialized using a default initialization mode for the video unit. In some examples, when the syntax element has the first value, the coder is further configured to code a map that indicates which of the context states are initialized using the adaptive initialization mode, and to further code either an initial state value for those contexts, or information from which the initial state values of those adaptively initialized contexts may be derived.
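A small sketch of the two initialization paths selected by the syntax element. The flag encoding (0/1), the one-bit-per-context map representation, and the `default_state` stand-in are assumptions of this sketch:

```python
def initialize_contexts(num_contexts, flag, adaptive_map=None,
                        adaptive_states=None, default_state=63):
    """Initialize context states per the signaled syntax element.

    flag == 0: every context takes the default initialization.
    flag == 1: contexts marked in `adaptive_map` (one entry per
    context) take their states, in order, from `adaptive_states`;
    the remaining contexts stay at the default.
    """
    states = [default_state] * num_contexts
    if flag == 1:
        it = iter(adaptive_states)
        for idx, use_adaptive in enumerate(adaptive_map):
            if use_adaptive:
                states[idx] = next(it)
    return states
```

Signaling the map plus only the adaptively initialized states keeps the overhead proportional to how many contexts actually deviate from the default.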
Abstract:
In one example, an apparatus for context adaptive entropy coding may include a coder configured to determine one or more initialization parameters for a context adaptive entropy coding process based on one or more initialization parameter index values. The coder may be further configured to determine one or more initial context states for initializing one or more contexts of the context adaptive entropy coding process based on the initialization parameters. The coder may be still further configured to initialize the contexts based on the initial context states. In some examples, the initialization parameters may be included in one or more tables, wherein, to determine the initialization parameters, the coder may be configured to map the initialization parameter index values to the initialization parameters in the tables. Alternatively, the coder may be configured to calculate the initialization parameters using the initialization parameter index values and one or more formulas.
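The formula-based alternative could look like the following, using a derivation of the kind HEVC applies to its 8-bit CABAC `initValue` (slope and offset packed into the upper and lower four bits). The constants follow that style but are not asserted to be the disclosed parameters:

```python
def params_from_index(init_value):
    """Derive initialization parameters (slope m, offset n) from an
    8-bit index value, HEVC-style: slope index in the upper 4 bits,
    offset index in the lower 4 bits."""
    slope_idx = init_value >> 4
    offset_idx = init_value & 15
    m = slope_idx * 5 - 45
    n = (offset_idx << 3) - 16
    return m, n


def initial_context_state(init_value, qp):
    """Initial context state from the derived parameters and the
    slice QP, clipped to the valid state range 1..126."""
    m, n = params_from_index(init_value)
    state = ((m * max(0, min(51, qp))) >> 4) + n
    return max(1, min(126, state))
```

A table-based variant would instead index precomputed `(m, n)` pairs directly; the formula trades those table lookups for a little arithmetic per context.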