Abstract:
An example device for accessing image data includes a memory configured to store image data, the memory comprising a first region and a second region, and one or more processing units implemented in circuitry. The one or more processing units are configured to code most significant bits (MSBs) of a plurality of residuals of samples of a block of an image, each of the residuals representing a respective difference value between a respective raw sample value and a respective predicted value for that raw sample value; access the coded MSBs in the first region of the memory; determine whether to represent the residuals using both the MSBs and least significant bits (LSBs) of the plurality of residuals of the samples; and, in response to determining not to represent the residuals using the LSBs, prevent access of the LSBs in the second region of the memory.
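A minimal C sketch of the residual split described above, assuming 8-bit samples, a 9-bit biased residual, a 4-bit MSB portion, and the buffer names shown (none of these details come from the abstract; they are illustrative):

    #include <stdint.h>

    #define MSB_BITS   4   /* assumed split point */
    #define TOTAL_BITS 9   /* a signed residual of 8-bit samples fits in 9 biased bits */

    /* Split each residual (raw sample minus predicted sample) into MSBs and LSBs.
       The MSBs are always written to the first memory region; the LSBs are written
       to the second region only when the block is flagged to represent them, so a
       reader can skip the second region entirely otherwise. */
    static void code_residuals(const uint8_t *raw, const uint8_t *pred, int n,
                               uint8_t *region1_msb, uint8_t *region2_lsb,
                               int represent_lsbs)
    {
        for (int i = 0; i < n; i++) {
            int residual = (int)raw[i] - (int)pred[i];        /* range -255..255 */
            unsigned biased = (unsigned)(residual + 256);     /* non-negative, 9 bits */
            region1_msb[i] = (uint8_t)(biased >> (TOTAL_BITS - MSB_BITS));
            if (represent_lsbs)
                region2_lsb[i] = (uint8_t)(biased & ((1u << (TOTAL_BITS - MSB_BITS)) - 1));
        }
    }

Keeping the MSBs and LSBs in separate regions is what allows the LSB region to go untouched when the coarse representation suffices.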
Abstract:
Provided are techniques for low complexity video coding. For example, a video coder may be configured to calculate, for a largest coding unit (LCU), a first sum of absolute differences (SAD) value between a CU block and a corresponding block in a reference frame. The video coder may define conditions (e.g., background and/or homogeneous conditions) for branching based at least in part on the first SAD value. The video coder may also determine the branching based on detecting the background or homogeneous condition, the branching including a first branch corresponding to both a first CU size of the CU block and a second CU size of a sub-block of the CU block. The video coder may then set the first branch to correspond to the first CU size if the first CU size or the second CU size satisfies the background condition.
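The SAD computation and the background-driven branch choice could look roughly like the following C sketch; the block_sad and choose_cu_size names, the stride handling, and the background threshold are assumptions, not taken from the abstract:

    #include <stdint.h>
    #include <stdlib.h>

    /* SAD between a CU-sized block in the current frame and the co-located block
       in the reference frame. */
    static uint32_t block_sad(const uint8_t *cur, const uint8_t *ref, int stride, int size)
    {
        uint32_t sad = 0;
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                sad += (uint32_t)abs((int)cur[y * stride + x] - (int)ref[y * stride + x]);
        return sad;
    }

    /* If the SAD is small enough to indicate a background (nearly static) region,
       keep the larger CU size and skip evaluating the smaller sub-CU size. */
    static int choose_cu_size(uint32_t sad, int first_cu_size, int second_cu_size,
                              uint32_t background_threshold)
    {
        return (sad < background_threshold) ? first_cu_size : second_cu_size;
    }

Collapsing both CU sizes of the branch onto the larger size when the background condition holds is what removes the extra mode evaluations and lowers complexity.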
Abstract:
At least one processor is configured to encode samples of a largest coding unit (LCU) of a picture using a sample adaptive offset (SAO) mode. To encode the samples of the LCU using SAO, the at least one processor is configured to: calculate differences between corresponding reconstructed samples of the LCU and original samples of the LCU, clip a number of bits from each of the differences to form clipped differences, sum the clipped differences to form a sum of differences, clip the sum of differences to form a clipped sum of differences, calculate a number of the reconstructed samples, clip a number of bits from the number of reconstructed samples to form a clipped number of samples, and divide the clipped sum of differences by the clipped number of samples to produce an offset for the LCU.
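A rough C sketch of that offset derivation, assuming "clipping a number of bits" means clamping a value to a given two's-complement bit width, and assuming the 8/16/12-bit widths shown (the abstract does not specify them):

    #include <stdint.h>

    /* Clamp a signed value to the range of a two's-complement word of the given width. */
    static int32_t clip_to_bits(int32_t v, int bits)
    {
        int32_t lo = -(1 << (bits - 1));
        int32_t hi =  (1 << (bits - 1)) - 1;
        return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Derive one SAO offset for an LCU: clip each reconstructed-minus-original
       difference, sum them, clip the sum, clip the sample count, then divide. */
    static int32_t sao_offset(const uint8_t *recon, const uint8_t *orig, int n)
    {
        int32_t sum = 0;
        for (int i = 0; i < n; i++)
            sum += clip_to_bits((int32_t)recon[i] - (int32_t)orig[i], 8);  /* clipped difference */
        sum = clip_to_bits(sum, 16);                  /* clipped sum of differences */
        int32_t count = clip_to_bits(n, 12);          /* clipped number of samples */
        return count > 0 ? sum / count : 0;
    }

Clipping each intermediate value bounds the word widths needed by the accumulator and divider, which is the usual motivation for this kind of low-complexity SAO derivation.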
Abstract:
This disclosure describes techniques for coding video data. In particular, this disclosure describes techniques for loop filtering for video coding. The techniques of this disclosure may apply to loop filtering and/or partial loop filtering across block boundaries in scalable video coding processes. Loop filtering may include, for example, one or more of adaptive loop filtering (ALF), sample adaptive offset (SAO) filtering, and deblocking filtering.
Abstract:
A filter unit of a video encoder or video decoder can determine a first metric for a group of pixels within a block of pixels, determine a second metric for the group of pixels, determine a filter based on the first metric and the second metric, and generate a filtered image by applying the filter to the group of pixels. The first metric and second metric can be an activity metric and a direction metric, respectively, or can be other metrics such as an edge metric, horizontal activity metric, vertical activity metric, or diagonal activity metric.
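One way such a two-metric classification might be sketched in C, using Laplacian-style horizontal and vertical activity sums and mapping the (activity, direction) pair to a filter index; the metric formulas, thresholds, and filter-bank size are illustrative assumptions:

    #include <stdint.h>
    #include <stdlib.h>

    /* Classify a group of pixels by an activity metric and a direction metric,
       then map the pair to an index into a bank of filters. */
    static int classify_filter(const uint8_t *pix, int stride, int w, int h)
    {
        if (w < 3 || h < 3)
            return 0;
        int horiz = 0, vert = 0;
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int c = pix[y * stride + x];
                horiz += abs(2 * c - pix[y * stride + x - 1] - pix[y * stride + x + 1]);
                vert  += abs(2 * c - pix[(y - 1) * stride + x] - pix[(y + 1) * stride + x]);
            }
        }
        int activity  = (horiz + vert) / ((w - 2) * (h - 2));                 /* first metric: activity */
        int direction = (horiz > 2 * vert) ? 1 : (vert > 2 * horiz) ? 2 : 0;  /* second metric: direction */
        int activity_class = activity > 64 ? 3 : activity > 16 ? 2 : activity > 4 ? 1 : 0;
        return direction * 4 + activity_class;   /* index into an assumed bank of 12 filters */
    }

The returned index would then select the filter coefficients applied to that group of pixels to produce the filtered image.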
Abstract:
A video coding device configured according to some aspects of this disclosure includes a memory configured to store a plurality of motion vector candidates. Each motion vector candidate can correspond to at least one of a plurality of prediction units (PUs) partitioned in a parallel motion estimation region (MER). The video coding device also includes a processor in communication with the memory. The processor is configured to select a subset of the plurality of motion vector candidates to include in a merge candidate list. The selection can be based on a priority level of each motion vector candidate. The processor can be further configured to generate the merge candidate list to include the selected motion vector candidates.
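A simple C sketch of priority-based candidate selection, assuming a lower priority value means higher priority and a merge list of five entries (both assumptions; the abstract does not define the priority ordering or the list size):

    #include <stdlib.h>

    #define MERGE_LIST_SIZE 5   /* assumed maximum merge list length */

    typedef struct {
        int mv_x, mv_y;   /* motion vector */
        int ref_idx;      /* reference picture index */
        int priority;     /* lower value = higher priority (assumed convention) */
    } MvCandidate;

    static int by_priority(const void *a, const void *b)
    {
        return ((const MvCandidate *)a)->priority - ((const MvCandidate *)b)->priority;
    }

    /* Select the highest-priority candidates from the stored set to build the merge list. */
    static int build_merge_list(MvCandidate *stored, int num_stored, MvCandidate *merge_list)
    {
        qsort(stored, (size_t)num_stored, sizeof(MvCandidate), by_priority);
        int n = num_stored < MERGE_LIST_SIZE ? num_stored : MERGE_LIST_SIZE;
        for (int i = 0; i < n; i++)
            merge_list[i] = stored[i];
        return n;
    }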
Abstract:
A video coder configured to perform sample adaptive offset filtering can determine a center value for a set of pixels based on values of pixels in the set, divide bands of pixel values into groups based on the center value, and determine offset values for the bands based on the groups.
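A small C sketch of the band-grouping step, assuming 8-bit samples split into 32 bands, the mean used as the center value, and a fixed window of bands around the center band forming one group (all of these are assumptions for illustration):

    #include <stdint.h>

    #define NUM_BANDS 32   /* 8-bit sample range split into 32 bands of 8 values each */

    /* Determine a center value for the set of pixels (here, their mean) and assign
       each band to a group depending on whether it lies inside or outside a window
       around the band containing the center value. */
    static void group_bands(const uint8_t *pix, int n, int group_of_band[NUM_BANDS])
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += pix[i];
        int center_band = (int)((n > 0 ? sum / n : 128) >> 3);   /* band containing the center value */

        for (int b = 0; b < NUM_BANDS; b++) {
            int dist = b > center_band ? b - center_band : center_band - b;
            group_of_band[b] = (dist <= 4) ? 0 : 1;   /* group 0: bands near the center, group 1: the rest */
        }
    }

Offset values would then be determined per band according to its group, so that bands near the center value can be treated differently from the remaining bands.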
Abstract:
A video coding device may be configured to estimate, based on a combination of a first parameter and a number of non-zero coefficients in a frame, a number of bits for the non-zero coefficients of the frame; to encode the frame based on the estimated number of bits for the non-zero coefficients; to collect an actual number of bits used to encode the non-zero coefficients of the frame and an actual number of the non-zero coefficients in the frame; to update, based on the actual number of bits and the actual number of non-zero coefficients, only the first parameter to form an updated first parameter; to form a rate estimation model using the updated first parameter and a second parameter; and to select, based on the rate estimation model, a coding mode for each block in the frame.
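One plausible reading of this in C is a linear rate model whose slope is refreshed after each frame; the linear form estimated_bits = p1 * nnz + p2 and the update rule below are assumptions for illustration:

    /* Linear rate model: estimated_bits = p1 * nnz + p2. Only p1 is refreshed
       from the measured statistics; p2 stays fixed. */
    typedef struct {
        double p1;   /* first parameter: bits per non-zero coefficient (updated) */
        double p2;   /* second parameter: constant term (not updated) */
    } RateModel;

    static double estimate_bits(const RateModel *m, int nnz)
    {
        return m->p1 * (double)nnz + m->p2;
    }

    /* After encoding a frame, refresh only the first parameter from the actual
       bit count and the actual number of non-zero coefficients. */
    static void update_model(RateModel *m, double actual_bits, int actual_nnz)
    {
        if (actual_nnz > 0)
            m->p1 = (actual_bits - m->p2) / (double)actual_nnz;
    }

Keeping the second parameter fixed while refreshing only the first mirrors the abstract's description of updating only the first parameter before forming the rate estimation model used for mode selection.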