Abstract:
This disclosure describes techniques for coding significant coefficient information for a video block in a transform skip mode. The transform skip mode may provide a choice of a two-dimensional transform mode, a horizontal one-dimensional transform mode, a vertical one-dimensional transform mode, or a no transform mode. In other cases, the transform skip mode may provide a choice between a two-dimensional transform mode and a no transform mode. The techniques include selecting a transform skip mode for a video block, and coding significant coefficient information for the video block using a coding procedure defined based at least in part on the selected transform skip mode. Specifically, the techniques include using different coding procedures to code one or more of a position of a last non-zero coefficient and a significance map for the video block in the transform skip mode.
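The mode-dependent coding procedure can be pictured with a small sketch. The scan choices and function names below are illustrative assumptions, not the disclosed procedure: a horizontal one-dimensional transform leaves structure along rows, so a row-wise scan is used; a vertical one-dimensional transform gets a column-wise scan; the two-dimensional and no-transform cases fall back to a diagonal scan.

```python
# Transform skip modes (labels are hypothetical, mirroring the abstract).
TWO_D, HORIZ_1D, VERT_1D, NO_TRANSFORM = range(4)

def significance_scan(width, height, mode):
    """Pick a coefficient scan per transform skip mode (sketch only)."""
    if mode == HORIZ_1D:
        # Row-wise scan for a horizontal one-dimensional transform.
        return [(y, x) for y in range(height) for x in range(width)]
    if mode == VERT_1D:
        # Column-wise scan for a vertical one-dimensional transform.
        return [(y, x) for x in range(width) for y in range(height)]
    # Diagonal scan for the 2-D transform and no-transform modes.
    scan = []
    for d in range(width + height - 1):
        for y in range(height):
            x = d - y
            if 0 <= x < width:
                scan.append((y, x))
    return scan

def last_significant_index(coeffs, scan):
    """Position of the last non-zero coefficient along the chosen scan."""
    last = -1
    for i, (y, x) in enumerate(scan):
        if coeffs[y][x] != 0:
            last = i
    return last
```

With the scan fixed by the mode, the last-significant-coefficient position and the significance map are then coded relative to that scan order.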
Abstract:
In general, techniques are described for performing transform dependent de-blocking filtering, which may be implemented by a video encoding device. The video encoding device may apply a transform to a video data block to generate a block of transform coefficients, apply a quantization parameter to quantize the transform coefficients and reconstruct the block of video data from the quantized transform coefficients. The video encoding device may further determine at least one offset used in controlling de-blocking filtering based on the size of the applied transform, and perform de-blocking filtering on the reconstructed block of video data based on the determined offset. Additionally, the video encoder may specify a flag in a picture parameter set (PPS) that indicates whether the offset is specified in one or both of the PPS and a header of an independently decodable unit.
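The transform-size dependence can be sketched as a lookup. The sizes, offset values, and function names below are assumptions for illustration; the actual offsets would be chosen by the encoder and signaled as described.

```python
def deblocking_offsets(transform_size):
    """Hypothetical mapping from transform size to (beta_offset, tc_offset).
    Larger transforms tend to leave more visible block edges, so this
    sketch raises the offsets (stronger filtering) with transform size."""
    table = {4: (0, 0), 8: (1, 1), 16: (2, 2), 32: (3, 3)}
    return table.get(transform_size, (0, 0))

def pps_offset_flag(offsets_in_pps, offsets_in_unit_header):
    """One-bit flag, as in the abstract: indicates whether the offsets are
    specified in one or both of the PPS and an independently decodable
    unit's header."""
    return 1 if (offsets_in_pps or offsets_in_unit_header) else 0
```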
Abstract:
In one example, a video coder, such as a video encoder or video decoder, is configured to determine a number of least significant bits of picture identifying information for a picture of video data, determine a value of the picture identifying information for the picture, and code information indicative of the determined number of least significant bits of the value of the picture identifying information for the picture.
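This follows the familiar pattern of coding only the least significant bits of a picture identifier (such as a picture order count) and letting the decoder recover the full value. The sketch below assumes consecutive full values stay within half the LSB range, in the spirit of HEVC-style POC derivation; the names are illustrative.

```python
def poc_lsbs(poc, num_lsb_bits):
    """Keep only the coded least significant bits of the identifier."""
    return poc & ((1 << num_lsb_bits) - 1)

def recover_poc(prev_poc, lsbs, num_lsb_bits):
    """Reconstruct the full value from its LSBs and the previous full value."""
    max_lsb = 1 << num_lsb_bits
    prev_lsbs = prev_poc & (max_lsb - 1)
    msb = prev_poc - prev_lsbs
    # Wrap-around handling in the spirit of HEVC's POC MSB derivation.
    if lsbs < prev_lsbs and (prev_lsbs - lsbs) >= max_lsb // 2:
        msb += max_lsb
    elif lsbs > prev_lsbs and (lsbs - prev_lsbs) > max_lsb // 2:
        msb -= max_lsb
    return msb + lsbs
```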
Abstract:
A video encoder may transform residual data by using a transform selected from a group of transforms. The transform is applied to the residual data to create a two-dimensional array of transform coefficients. A scanning mode is selected to scan the transform coefficients in the two-dimensional array into a one-dimensional array of transform coefficients. The combination of transform and scanning mode may be selected from a subset of combinations that is based on an intra-prediction mode. The scanning mode may also be selected based on the transform used to create the two-dimensional array. The transforms and/or scanning modes used may be signaled to a video decoder.
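A minimal sketch of such a mode-dependent selection table follows; the transform and scan identifiers are invented placeholders, and a real table would cover the full set of intra-prediction directions rather than three.

```python
# Illustrative intra-prediction modes.
INTRA_HORIZONTAL, INTRA_VERTICAL, INTRA_DC = range(3)

# Subset of allowed (transform, scan) combinations per intra mode.
# All identifiers here are invented for the sketch.
MODE_TABLE = {
    INTRA_HORIZONTAL: ("1d_transform_rows", "vertical_scan"),
    INTRA_VERTICAL:   ("1d_transform_cols", "horizontal_scan"),
    INTRA_DC:         ("2d_transform",      "diagonal_scan"),
}

def select_transform_and_scan(intra_mode):
    """Look up the (transform, scan) combination for this intra mode."""
    return MODE_TABLE[intra_mode]
```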
Abstract:
In an example, a method of processing video data may include inferring a pixel scan order for a first palette mode encoded block of video data without receiving a block-level syntax element having a value representative of the pixel scan order for the first palette mode encoded block. The method may include decoding the first palette mode encoded block of video data using the inferred pixel scan order. The method may include receiving a block-level syntax element having a value representative of a pixel scan order for a second palette mode encoded block of video data. The method may include determining the pixel scan order for the second palette mode encoded block of video data based on the received block-level syntax element. The method may include decoding the second palette mode encoded block of video data using the determined pixel scan order.
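The inference rule can be sketched simply: when no block-level syntax element arrives, the decoder falls back to a default scan; otherwise it honors the signaled one. The horizontal-raster default and the traversal helper are assumptions of this sketch.

```python
HORIZONTAL, VERTICAL = 0, 1

def scan_order_for_block(signaled_scan=None):
    """Infer the scan order when nothing was signaled; this sketch assumes
    a horizontal raster as the inferred default."""
    return HORIZONTAL if signaled_scan is None else signaled_scan

def scan_positions(width, height, order):
    """Visit pixel positions of a palette block in the chosen scan order."""
    if order == HORIZONTAL:
        return [(y, x) for y in range(height) for x in range(width)]
    return [(y, x) for x in range(width) for y in range(height)]
```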
Abstract:
A method of content compression including receiving a first block of samples including at least a first sample and a second sample, calculating a predictor value for the first block of samples, calculating a residual between the predictor value and the first sample, quantizing the residual to generate a quantized residual, de-quantizing the quantized residual to generate a de-quantized residual, reconstructing the first sample using the de-quantized residual and the predictor value to generate a first reconstructed sample, calculating an error value based on the first sample and the first reconstructed sample, and modifying the second sample by the error value.
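The claimed steps line up with classic quantization-error feedback: quantize the first sample's residual, reconstruct the sample, measure the reconstruction error, and fold that error into the next sample so it is not lost. A sketch with a simple uniform quantizer (the step size and rounding rule are assumptions):

```python
def compress_pair(first, second, predictor, step):
    """Quantize/reconstruct the first sample and carry its quantization
    error into the second sample (error-feedback sketch)."""
    residual = first - predictor              # predict
    quantized = round(residual / step)        # quantize
    dequantized = quantized * step            # de-quantize
    reconstructed = predictor + dequantized   # reconstruct first sample
    error = first - reconstructed             # reconstruction error
    modified_second = second + error          # carry error forward
    return reconstructed, modified_second
```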
Abstract:
Provided are systems and methods for using fixed-point rather than floating-point techniques to calculate various parameters for coding video data, including a target rate, a quantization parameter (QP) adjustment, buffer fullness, a Lagrangian parameter for a bitrate, and/or a Lagrangian parameter for the fullness of the buffer. By determining one or more of the parameters using fixed-point arithmetic, hardware implementation costs may be decreased.
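A typical realization is Qm.n fixed-point: scale values by 2^n, keep them as integers, and replace floating-point multiply and divide with integer operations plus shifts. The 16 fractional bits below are an arbitrary choice for the sketch, and the buffer-fullness helper is an invented example.

```python
FRACTIONAL_BITS = 16                 # assumed precision for this sketch
ONE = 1 << FRACTIONAL_BITS

def to_fixed(x):
    """Convert a real number to fixed-point."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    """Multiply two fixed-point numbers (integer multiply plus shift)."""
    return (a * b) >> FRACTIONAL_BITS

def fixed_div(a, b):
    """Divide two fixed-point numbers."""
    return (a << FRACTIONAL_BITS) // b

def buffer_fullness(bits_in_buffer, buffer_size_bits):
    """Example parameter: fullness as a fixed-point fraction of capacity."""
    return fixed_div(to_fixed(bits_in_buffer), to_fixed(buffer_size_bits))
```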
Abstract:
In one example, a device includes a memory configured to store video data and a video decoder configured to decode an exponential Golomb codeword representative of at least a portion of a value for an escape pixel of a palette-mode coded block of video data, wherein the video decoder is configured to decode the exponential Golomb codeword using exponential Golomb with parameter 3 decoding, and to decode the block using the value for the escape pixel.
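Order-k exponential Golomb codes are standard; with parameter k = 3 the shortest codeword is four bits, which suits escape-pixel magnitudes. A self-contained sketch of encoding and decoding a single codeword, using a bit string as a simplified stand-in for the bitstream:

```python
def eg_encode(value, k=3):
    """Encode a non-negative integer with order-k exponential Golomb:
    write value + 2**k in binary, prefixed by (len - k - 1) zeros."""
    v = value + (1 << k)
    return "0" * (v.bit_length() - k - 1) + format(v, "b")

def eg_decode(bits, k=3):
    """Decode one order-k exponential Golomb codeword."""
    return int(bits, 2) - (1 << k)
```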
Abstract:
Techniques and systems are provided for encoding video data. For example, restrictions on certain prediction modes can be applied for video coding. A restriction can be imposed that prevents bi-directional inter-prediction (bi-prediction) from being performed on video data when certain conditions are met. For example, the bi-prediction restriction can be based on whether intra-block copy prediction is enabled for one or more coding units or blocks of the video data, whether a value of a syntax element indicates that one or more motion vectors have non-integer accuracy, whether both motion vectors of a bi-prediction block have non-integer accuracy, whether the motion vectors of a bi-prediction block are not identical and/or are not from the same reference index, or any combination thereof. If one or more of these conditions are met, the restriction on bi-prediction can be applied, preventing bi-prediction from being performed on certain coding units or blocks.
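The combination of conditions can be sketched as a predicate. Which conditions are active and how they combine is configurable in the disclosure, so the particular policy below is just one illustrative combination with invented parameter names.

```python
def bi_prediction_allowed(ibc_enabled, mv0_fractional, mv1_fractional,
                          mvs_identical, same_reference_index):
    """Return False when this sketch's restriction blocks bi-prediction:
    intra-block copy is enabled, both motion vectors have non-integer
    accuracy, and the two hypotheses are not the same prediction
    (different motion vectors and/or different reference index)."""
    if (ibc_enabled
            and mv0_fractional and mv1_fractional
            and not (mvs_identical and same_reference_index)):
        return False
    return True
```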