Abstract:
A filter unit of a video encoder or video decoder can determine a first metric for a group of pixels within a block of pixels, determine a second metric for the group of pixels, determine a filter based on the first metric and the second metric, and generate a filtered image by applying the filter to the group of pixels. The first metric and second metric can be an activity metric and a direction metric, respectively, or can be other metrics such as an edge metric, horizontal activity metric, vertical activity metric, or diagonal activity metric.
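As a rough Python sketch of the idea above (not part of the abstract), the code below derives an activity metric and a direction metric for a 4x4 group of pixels and uses the pair to pick a filter from a small bank; the gradient formulas, the five-bin activity quantization, and the placeholder filters are all assumptions made for illustration.

    import numpy as np

    def activity_and_direction(block, y0, x0, size=4):
        # First and second metrics: summed horizontal/vertical Laplacian-like
        # gradients over the group, plus a coarse dominant-direction decision.
        horiz = vert = 0
        h, w = block.shape
        for i in range(y0, y0 + size):
            for j in range(x0, x0 + size):
                c = int(block[i, j])
                horiz += abs(2 * c - int(block[i, max(j - 1, 0)]) - int(block[i, min(j + 1, w - 1)]))
                vert += abs(2 * c - int(block[max(i - 1, 0), j]) - int(block[min(i + 1, h - 1), j]))
        activity = horiz + vert                      # first metric
        if horiz > 2 * vert:
            direction = 1                            # predominantly horizontal
        elif vert > 2 * horiz:
            direction = 2                            # predominantly vertical
        else:
            direction = 0                            # no dominant direction
        return activity, direction                   # second metric

    def choose_filter(activity, direction, filter_bank):
        act_class = min(activity // 64, 4)           # quantize activity into 5 bins
        return filter_bank[direction * 5 + act_class]

    block = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
    filter_bank = [np.array([1, 2, 1]) / 4.0] * 15   # 15 placeholder 3-tap filters
    act, direction = activity_and_direction(block, 0, 0)
    kernel = choose_filter(act, direction, filter_bank)
    # Apply the chosen filter (here only to one row, for brevity).
    filtered_row = np.convolve(block[0].astype(float), kernel, mode="same")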
Abstract:
In one aspect of this disclosure, rounding adjustments to bi-directional predictive data may be purposely eliminated to provide predictive data that lacks any rounding bias. In this case, rounded and unrounded predictive data may both be considered in a rate-distortion analysis to identify the best data for prediction of a given video block. In another aspect of this disclosure, techniques are described for selecting among default weighted prediction, implicit weighted prediction, and explicit weighted prediction. In this context, techniques are also described for adding offset to prediction data, e.g., using the format of explicit weighted prediction to allow for offsets to predictive data that is otherwise determined by implicit or default weighted prediction.
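A minimal Python sketch (not from the disclosure itself) of the first aspect: form bi-directional prediction with and without the usual rounding offset and keep whichever gives lower distortion, standing in for the fuller rate-distortion analysis described above.

    import numpy as np

    def bipred(p0, p1, rounded):
        offset = 1 if rounded else 0                 # drop the offset to remove rounding bias
        return (p0.astype(np.int32) + p1.astype(np.int32) + offset) >> 1

    def select_prediction(original, p0, p1):
        rounded = bipred(p0, p1, rounded=True)
        unrounded = bipred(p0, p1, rounded=False)
        sse_r = int(np.sum((original.astype(np.int32) - rounded) ** 2))
        sse_u = int(np.sum((original.astype(np.int32) - unrounded) ** 2))
        # A real encoder would also weigh the signalling cost of the choice.
        return ("rounded", rounded) if sse_r <= sse_u else ("unrounded", unrounded)

    orig = np.random.randint(0, 256, (8, 8))
    p0 = np.random.randint(0, 256, (8, 8))
    p1 = np.random.randint(0, 256, (8, 8))
    choice, prediction = select_prediction(orig, p0, p1)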
Abstract:
Video data may comprise one or more blocks, each block being associated with a block palette comprising one or more palette entries specifying pixel values used in the block. A block is further divided into a plurality of sub-blocks. A sub-block scanning order for the block and pixel scanning orders for the sub-blocks are adaptively selected, based upon a distribution of pixel values within the block and sub-blocks. Sub-blocks may be associated with sub-block palettes, specifying pointers to palette entries of the block palette. Some sub-blocks may be encoded based upon pixel values of neighboring sub-blocks.
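A hypothetical Python sketch of the sub-block handling described above; the 4x4 sub-block size, the run-count heuristic for choosing a scan, and the simple horizontal/vertical scans are assumptions, and prediction from neighboring sub-blocks is omitted.

    import numpy as np

    def run_count(values):
        # Number of runs of equal values; fewer runs suggests a cheaper scan.
        return 1 + sum(1 for a, b in zip(values, values[1:]) if a != b)

    def encode_sub_blocks(block, sub=4):
        # Block palette: unique pixel values in stable order.
        palette = list(dict.fromkeys(block.flatten().tolist()))
        coded = []
        for y in range(0, block.shape[0], sub):
            for x in range(0, block.shape[1], sub):
                sb = block[y:y + sub, x:x + sub]
                # Sub-block palette stores pointers (indices) into the block palette.
                sub_palette = [palette.index(v) for v in dict.fromkeys(sb.flatten().tolist())]
                h_scan = sb.flatten(order="C").tolist()
                v_scan = sb.flatten(order="F").tolist()
                scan = "horizontal" if run_count(h_scan) <= run_count(v_scan) else "vertical"
                coded.append({"pos": (y, x), "sub_palette": sub_palette, "scan": scan})
        return palette, coded

    block = np.random.randint(0, 4, (8, 8))
    block_palette, sub_block_info = encode_sub_blocks(block)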
Abstract:
This disclosure describes techniques for determining transform partitions in video encoding processes that allow for non-square transform partitions in intra-coded blocks. According to one example of the disclosure, a video coding method comprises partitioning a coding unit into multiple prediction units, and determining a transform partition for each of the prediction units, wherein at least one transform partition is a non-square partition.
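As an illustrative Python sketch (the partition modes and shapes are assumptions, not taken from the disclosure), a transform partition can simply follow the shape of each prediction unit, yielding non-square transforms for non-square prediction units:

    def transform_partitions(cu_size, pu_mode):
        # Map a prediction-unit partitioning to per-PU transform partitions.
        n = cu_size // 2
        if pu_mode == "2Nx2N":
            return [(cu_size, cu_size)]              # one square transform
        if pu_mode == "2NxN":
            return [(cu_size, n), (cu_size, n)]      # two wide non-square transforms
        if pu_mode == "Nx2N":
            return [(n, cu_size), (n, cu_size)]      # two tall non-square transforms
        if pu_mode == "NxN":
            return [(n, n)] * 4                      # four square transforms
        raise ValueError("unknown partition mode")

    print(transform_partitions(16, "2NxN"))          # [(16, 8), (16, 8)]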
Abstract:
A method for decoding video data provided in a bitstream, where the bitstream includes a coding unit (CU) coded in palette mode, includes: parsing a palette associated with the CU provided in the bitstream; parsing one or more run lengths provided in the bitstream that are associated with the CU; parsing one or more index values provided in the bitstream that are associated with the CU; and parsing one or more escape pixel values provided in the bitstream that are associated with the CU. The escape pixel values may be parsed from consecutive positions in the bitstream, the consecutive positions being in the bitstream after all of the run lengths and the index values associated with the CU. The method may further include decoding the CU based on the parsed palette, parsed run lengths, parsed index values, and parsed escape pixel values.
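A toy Python sketch of the parse order described above (the symbol layout, run semantics, and escape-index convention are simplifications, not the actual bitstream syntax): the palette comes first, then all run lengths and index values, and the escape pixel values sit together at consecutive positions at the end.

    def parse_palette_cu(symbols):
        # 'symbols' models a CU's already-entropy-decoded values as a flat list.
        pos = 0
        palette_size = symbols[pos]; pos += 1
        palette = symbols[pos:pos + palette_size]; pos += palette_size

        num_runs = symbols[pos]; pos += 1
        indices, runs = [], []
        for _ in range(num_runs):
            indices.append(symbols[pos]); pos += 1
            runs.append(symbols[pos]); pos += 1

        # Escape pixel values occupy consecutive positions after all runs/indices.
        escape_index = len(palette)
        num_escapes = sum(run for idx, run in zip(indices, runs) if idx == escape_index)
        escapes = symbols[pos:pos + num_escapes]
        return palette, runs, indices, escapes

    # Toy CU: two palette entries; index 2 (== palette size) marks escape pixels.
    symbols = [2, 10, 20,            # palette
               3, 0, 4, 1, 3, 2, 1,  # three (index, run) pairs
               99]                   # the single escape pixel value, at the end
    print(parse_palette_cu(symbols))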
Abstract:
A device for decoding video data includes a memory configured to store video data and one or more processors configured to: receive a first block of the video data; determine a quantization parameter for the first block; in response to determining that the first block is coded using a color-space transform mode for residual data of the first block, modify the quantization parameter for the first block; perform a dequantization process for the first block based on the modified quantization parameter for the first block; receive a second block of the video data; receive a difference value indicating a difference between a quantization parameter for the second block and the quantization parameter for the first block; determine the quantization parameter for the second block based on the received difference value and the quantization parameter for the first block; and decode the second block based on the determined quantization parameter.
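A hypothetical Python sketch of the QP handling above; the per-component offsets applied when the color-space transform mode is used are illustrative values, and applying the signalled difference to the modified first-block QP is an assumption made here for simplicity.

    ACT_QP_OFFSET = {"Y": -5, "Cg": -5, "Co": -3}    # assumed per-component offsets

    def qp_for_first_block(base_qp, uses_colour_space_transform, component="Y"):
        qp = base_qp
        if uses_colour_space_transform:
            qp += ACT_QP_OFFSET[component]           # modify QP before dequantization
        return qp

    def qp_for_second_block(first_block_qp, signalled_delta):
        # The second block's QP derives from the first block's QP plus the difference.
        return first_block_qp + signalled_delta

    qp1 = qp_for_first_block(base_qp=32, uses_colour_space_transform=True)   # 27
    qp2 = qp_for_second_block(qp1, signalled_delta=3)                        # 30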
Abstract:
Techniques are described for palette-based video coding. In palette-based coding, a video coder may form a so-called “palette” as a table of colors for representing video data of a given block of video data. Rather than coding actual pixel values or their residuals for the given block, the video coder may code index values for one or more of the pixels. The index values map the pixels to entries in the palette representing the colors of the pixels. Techniques are described for determining the application of deblocking filtering for pixels of palette coded blocks at a video encoder or a video decoder. In addition, techniques are described for determining quantization parameter (QP) values and delta QP values used to quantize escape pixel values of palette coded blocks at the video encoder or the video decoder.
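A minimal Python sketch of the basic palette idea above (not the disclosed deblocking or QP techniques themselves): pixels whose color is in the block's palette are coded as index values, and any other pixel would be treated as an escape pixel; the escape-index convention is an assumption.

    def palette_indices(block_pixels, palette):
        lookup = {colour: i for i, colour in enumerate(palette)}
        escape_index = len(palette)                  # assumed convention for escapes
        return [lookup.get(p, escape_index) for p in block_pixels]

    palette = [(255, 0, 0), (0, 0, 255)]
    pixels = [(255, 0, 0), (255, 0, 0), (0, 0, 255), (17, 99, 3)]
    print(palette_indices(pixels, palette))          # [0, 0, 1, 2] -> last pixel escapes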
Abstract:
A video coder decodes a coding unit (CU) of video data. In decoding the video data, the video coder determines that the CU was encoded using color space conversion. The video coder determines an initial quantization parameter (QP), determines a final QP equal to the sum of the initial QP and a QP offset, inverse quantizes a coefficient block based on the final QP, and then reconstructs the CU based on the inverse quantized coefficient block.
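A worked Python sketch of the QP derivation above; the example offset value and the simplified QP-to-step mapping are assumptions used only to make the arithmetic concrete.

    def final_qp(initial_qp, qp_offset):
        # The final QP is the sum of the initial QP and the QP offset.
        return initial_qp + qp_offset

    def dequantize(coeffs, qp):
        step = 2 ** ((qp - 4) / 6.0)                 # illustrative QP-to-step mapping
        return [c * step for c in coeffs]

    qp = final_qp(initial_qp=30, qp_offset=-5)       # e.g. an offset of -5 -> final QP 25
    residual = dequantize([4, -2, 0, 1], qp)         # inverse quantized coefficient block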
Abstract:
In an example, a method of processing video data includes determining a value of a block-level syntax element that indicates, for all samples of a block of video data, whether at least one sample of the block is coded based on its color value not being included in a palette of colors for coding the block of video data. The method also includes coding the block of video data based on the value.
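An illustrative Python sketch of such a block-level syntax element (the flag name and derivation are assumptions): a single value answering whether any sample of the block is coded with a color not present in the block's palette.

    def escape_present_flag(block_pixels, palette):
        palette_set = set(palette)
        return int(any(p not in palette_set for p in block_pixels))

    print(escape_present_flag([1, 2, 2, 7], palette=[1, 2]))   # 1: the value 7 escapes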
Abstract:
This disclosure describes devices and methods for coding transform coefficients associated with a block of residual video data in a video coding process. Aspects of this disclosure include the selection of a scan order for both significance map coding and level coding, as well as the selection of contexts for entropy coding consistent with the selected scan order. This disclosure proposes a harmonization of the scan order used to code both the significance map of the transform coefficients and the levels of the transform coefficients. It is proposed that the scan order for the significance map should be in the inverse direction (i.e., from the higher frequencies to the lower frequencies). This disclosure also proposes that transform coefficients be scanned in sub-sets as opposed to fixed sub-blocks. In particular, transform coefficients are scanned in a sub-set consisting of a number of consecutive coefficients according to the scan order.
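A compact Python sketch of the harmonized scanning described above; the up-right diagonal scan and the sub-set size of 16 consecutive coefficients are assumptions chosen to mirror common practice, not a reproduction of the proposed syntax.

    def diagonal_scan(size):
        # Up-right diagonal scan positions for a size x size block (low to high frequency).
        order = []
        for s in range(2 * size - 1):
            for y in range(size - 1, -1, -1):
                x = s - y
                if 0 <= x < size:
                    order.append((y, x))
        return order

    def code_in_subsets(coeffs, subset_size=16):
        size = len(coeffs)
        inverse_scan = list(reversed(diagonal_scan(size)))   # high -> low frequencies
        subsets = []
        for start in range(0, len(inverse_scan), subset_size):
            positions = inverse_scan[start:start + subset_size]
            # Significance map and levels are coded from the same (inverse) scan.
            sig_map = [1 if coeffs[y][x] != 0 else 0 for (y, x) in positions]
            levels = [coeffs[y][x] for (y, x) in positions if coeffs[y][x] != 0]
            subsets.append({"significance": sig_map, "levels": levels})
        return subsets

    coeffs = [[9, 3, 0, 0],
              [2, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0]]
    print(code_in_subsets(coeffs))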