Abstract:
An image compression method has at least the following steps: receiving source pixel data of a plurality of blocks of a frame; when a lossless compression mode is enabled for the frame, bypassing a source quantization operation and applying a lossless compression kernel to source pixel data of each of the blocks; and when a lossy compression mode is enabled for the frame, applying the source quantization operation to the source pixel data of each of the blocks to generate input pixel data of each of the blocks, and applying the lossless compression kernel to the input pixel data of each of the blocks. For example, the source quantization operation employs an adaptive quantization parameter for each of the blocks such that a size of compressed data of the frame generated under the lossy compression mode does not exceed a bit budget.
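A minimal sketch of the two compression modes described above, assuming a simple right-shift quantizer as the source quantization operation and Python's zlib as a stand-in for the lossless compression kernel; the per-block split of the bit budget and the QP adaptation loop are illustrative assumptions, not the method defined by the abstract.

import zlib

def quantize_block(block, qp):
    # Source quantization: drop qp least significant bits of each sample.
    return bytes(sample >> qp for sample in block)

def compress_frame(blocks, lossless=True, bit_budget=None, max_qp=7):
    if lossless:
        # Lossless mode: bypass source quantization, apply the kernel directly.
        return [zlib.compress(bytes(block)) for block in blocks]
    compressed = []
    budget_per_block = bit_budget // (8 * len(blocks))   # bytes per block
    for block in blocks:
        # Lossy mode: raise the adaptive QP for this block until its
        # compressed size fits its share of the frame bit budget.
        for qp in range(max_qp + 1):
            data = zlib.compress(quantize_block(block, qp))
            if len(data) <= budget_per_block:
                break
        compressed.append(data)
    return compressed

# Two 16-sample blocks of 8-bit pixels.
frame = [list(range(16)), [200] * 16]
print([len(c) for c in compress_frame(frame, lossless=True)])
print([len(c) for c in compress_frame(frame, lossless=False, bit_budget=512)])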
Abstract:
A method and apparatus for processing transform coefficients for a video encoder or decoder are disclosed. Embodiments according to the present invention reduce the storage requirement for sign bit hiding (SBH), improve the parallelism of SBH processing, or simplify parity checking. Partial quantized transform coefficients (QTCs) of a transform block may be processed before all QTCs of the transform block are received. Zero and non-zero QTCs of a scan block may be processed concurrently, and the QTCs of multiple scan blocks in a transform block may also be processed concurrently when computing the cost function for SBH compensation. The range searched for a value-modification QTC may be smaller than the scan block being processed. Parity checking on QTCs may be based on the least significant bits (LSBs) of all QTCs, or of all non-zero QTCs, of a scan block.
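A minimal sketch of SBH compensation for one scan block, assuming the hidden sign is carried by the parity of the LSBs of the non-zero QTCs and that the cost of a candidate +/-1 level change is the resulting reconstruction error; the actual cost function, parity convention, and restricted search range are only outlined in the abstract, so the names and pruning rule below are illustrative.

def lsb_parity(levels):
    # Parity check based on the LSBs of the non-zero QTCs of the scan block
    # (including zero QTCs gives the same result, since their LSB is 0).
    parity = 0
    for level in levels:
        if level != 0:
            parity ^= abs(level) & 1
    return parity

def hide_sign(levels, coeffs, step, hidden_sign_bit, search_range=None):
    # Adjust one QTC by +/-1 so the block parity encodes hidden_sign_bit,
    # picking the candidate with the lowest distortion-like cost.
    levels = list(levels)
    if lsb_parity(levels) == hidden_sign_bit:
        return levels                      # parity already matches
    positions = search_range if search_range is not None else range(len(levels))
    best = None                            # (cost, position, new_level)
    for i in positions:
        for delta in (1, -1):
            candidate = levels[i] + delta
            cost = abs(coeffs[i] - candidate * step)   # reconstruction error
            if best is None or cost < best[0]:
                best = (cost, i, candidate)
    _, i, candidate = best
    levels[i] = candidate
    return levels

# Quantized levels, original transform coefficients, quantization step; the
# search range covers only part of the scan block, as the abstract allows.
print(hide_sign([3, 0, -1, 2], coeffs=[31, 2, -12, 19], step=10,
                hidden_sign_bit=1, search_range=range(0, 2)))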
Abstract:
A method and apparatus for image data compression and decompression are disclosed. According to an embodiment of the present invention, the compression method partitions the image data into access units and encodes each access unit into a bitstream according to a target bit budget. Each access unit is encoded using a first data compression to generate a first bitstream; if the first bitstream is smaller than the target bit budget, the residual data is further encoded using a second data compression to generate a second bitstream. In one example, the second data compression comprises bit plane coding applied to bit plane-ordered data, wherein the bit plane-ordered data is generated by scanning the residual data from the most significant bit to the least significant bit in a bit plane-wise order. The decompression method comprises steps to recover reconstructed data from the first and second bitstreams.
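A minimal sketch of the two-stage coding described above, assuming the first data compression simply keeps the high bits of each sample and the second data compression is bit plane coding of the residual, scanned MSB to LSB across the access unit and truncated to the remaining bit budget; the real first and second compression schemes are not specified by the abstract, so the shift amount and bitstream representation here are illustrative.

def bit_plane_order(residuals, bits):
    # Scan the residual from the most significant bit plane to the least
    # significant one, emitting one plane across all samples at a time.
    planes = []
    for plane in range(bits - 1, -1, -1):
        planes.extend((r >> plane) & 1 for r in residuals)
    return planes

def encode_access_unit(samples, target_bits, keep_shift=4, sample_bits=8):
    # First data compression: keep only the high (sample_bits - keep_shift)
    # bits of every sample in the access unit.
    first_bitstream = [s >> keep_shift for s in samples]
    first_cost = len(samples) * (sample_bits - keep_shift)
    second_bitstream = []
    if first_cost < target_bits:
        # Residual between the source and the first-stage reconstruction,
        # coded bit plane by bit plane into the remaining budget.
        residuals = [s - (c << keep_shift) for s, c in zip(samples, first_bitstream)]
        second_bitstream = bit_plane_order(residuals, bits=keep_shift)
        second_bitstream = second_bitstream[:target_bits - first_cost]
    return first_bitstream, second_bitstream

first, second = encode_access_unit([17, 250, 33, 96], target_bits=28)
print(first, second)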
Abstract:
A video encoding apparatus includes a content activity analyzer circuit and a video encoder circuit. The content activity analyzer circuit applies a content activity analysis process to consecutive frames, to generate content activity analysis results. The consecutive frames are derived from input frames of the video encoding apparatus. The content activity analysis process includes: deriving a first content activity analysis result according to a first frame and a second frame in the consecutive frames, wherein the first content activity analysis result includes a processed frame distinct from the second frame; and deriving a second content activity analysis result according to a third frame included in the consecutive frames and the processed frame. The video encoder circuit performs a video encoding process to generate a bitstream output of the video encoding apparatus, wherein information derived from the content activity analysis results is referenced by the video encoding process.
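A minimal sketch of the recursive analysis described above, assuming frames are flat lists of luma samples, "content activity" is a mean absolute difference, and the processed frame carried to the next step is an exponentially weighted temporal blend (hence distinct from the incoming frame); how the encoder's rate control or mode decisions reference these results is not modelled here.

def analyze_pair(processed, frame, alpha=0.5):
    # Content activity between the incoming frame and the running processed frame.
    activity = sum(abs(a - b) for a, b in zip(frame, processed)) / len(frame)
    # The processed frame passed on is a blend of the two, so it is distinct
    # from the incoming frame itself.
    new_processed = [alpha * a + (1 - alpha) * b for a, b in zip(frame, processed)]
    return activity, new_processed

def content_activity_analyzer(frames):
    results = []
    processed = frames[0]                 # the first frame seeds the analysis
    for frame in frames[1:]:
        activity, processed = analyze_pair(processed, frame)
        results.append(activity)          # later referenced by the video encoder
    return results

print(content_activity_analyzer([[10, 10, 10], [20, 10, 0], [30, 10, 0]]))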
Abstract:
A video encoding method includes: during a first period, performing an encoding process upon a first block group of a current frame to generate a first block group bitstream; and during a second period, transmitting a second block group bitstream derived from encoding a second block group of the current frame, wherein the second period overlaps the first period. The encoding process includes: during a first time segment of the first period, performing a first in-loop filtering process upon a first group of pixels; and during a second time segment of the first period, performing a second in-loop filtering process upon a second group of pixels, wherein the second time segment overlaps the first time segment, and a non-zero pixel distance exists between a first edge pixel of the first group of pixels and a second edge pixel of the second group of pixels in a filter direction.
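A minimal sketch of the overlapped filtering schedule inside one encoding period, assuming the two in-loop filtering processes (placeholders below, e.g. deblocking followed by a second filter such as SAO) operate on pixel rows and the second process trails the first by a fixed number of rows, which models the non-zero pixel distance in the filter direction; the block-group-level overlap of encoding and bitstream transmission is not modelled.

def first_filter(row):      # placeholder for the first in-loop filtering process
    return list(row)

def second_filter(row):     # placeholder for the second in-loop filtering process
    return list(row)

def filter_block_group(rows, lag_rows=2):
    # Each loop iteration models one time segment in which both filtering
    # processes are active on different pixel groups, lag_rows apart, so the
    # two time segments overlap instead of running back to back.
    stage1, stage2 = {}, {}
    n = len(rows)
    for t in range(n + lag_rows):
        if t < n:
            stage1[t] = first_filter(rows[t])
        if 0 <= t - lag_rows < n:
            stage2[t - lag_rows] = second_filter(stage1[t - lag_rows])
    return [stage2[i] for i in range(n)]

print(filter_block_group([[1, 2, 3], [4, 5, 6], [7, 8, 9]], lag_rows=1))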
Abstract:
A video encoding apparatus includes a data buffer and a video encoding circuit. Encoding of a first frame includes: deriving reference pixels of a reference frame from reconstructed pixels of the first frame, respectively, and storing reference pixel data into the data buffer for inter prediction, wherein the reference pixel data include information of pixel values of the reference pixels. Encoding of a second frame includes performing prediction upon a coding unit in the second frame to determine a target predictor for the coding unit. The prediction performed upon the coding unit includes: determining the target predictor for the coding unit according to whether a search range on the reference frame for finding a predictor of the coding unit under an inter prediction mode includes at least one reference pixel of the reference frame that is not accessible to the video encoding circuit.
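A minimal sketch of the predictor decision described above (and of the closely related variant in the next abstract, which states the accessibility check explicitly), assuming the data buffer holds only a band of reference-frame rows and that a coding unit whose inter-prediction search range reaches outside that band falls back to an intra predictor; the buffered-row model and the fallback policy are illustrative assumptions.

def search_range_accessible(search_range, buffered_rows):
    # search_range and buffered_rows are (top_row, bottom_row) bands on the
    # reference frame; the range is accessible only if every reference pixel
    # it covers is held in the data buffer.
    top, bottom = search_range
    buf_top, buf_bottom = buffered_rows
    return top >= buf_top and bottom <= buf_bottom

def choose_predictor(cu, search_range, buffered_rows, inter_search, intra_predict):
    if search_range_accessible(search_range, buffered_rows):
        # All reference pixels in the search range are accessible:
        # run the usual inter-prediction search.
        return inter_search(cu, search_range)
    # At least one reference pixel is not accessible to the encoding circuit:
    # determine the target predictor another way (here, intra prediction).
    return intra_predict(cu)

predictor = choose_predictor(
    cu="cu_0",
    search_range=(96, 160),
    buffered_rows=(64, 128),      # only these reference rows are buffered
    inter_search=lambda cu, sr: "inter predictor for {} over rows {}".format(cu, sr),
    intra_predict=lambda cu: "intra predictor for {}".format(cu),
)
print(predictor)                   # rows 129..160 are not buffered -> intra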
Abstract:
A video encoding apparatus includes a data buffer and a video encoding circuit. Encoding of a first frame includes: deriving reference pixels of a reference frame from reconstructed pixels of the first frame, respectively, and storing reference pixel data into the data buffer for inter prediction, wherein the reference pixel data include information of pixel values of the reference pixels. Encoding of a second frame includes performing prediction upon a coding unit in the second frame to determine a target predictor for the coding unit. The prediction performed upon the coding unit includes: checking if a search range on the reference frame for finding a predictor of the coding unit under an inter prediction mode includes at least one reference pixel of the reference frame that is not accessible to the video encoding circuit, and determining the target predictor for the coding unit according to a checking result.
Abstract:
A block prediction search method includes at least the following steps: utilizing a data buffer to store bit-depth reduced sample values of a plurality of samples in a first pixel line; detecting occurrence of an edge in the first pixel line according to restored sample values derived from the stored sample values in the data buffer; and determining a block prediction vector for a pixel group in a second pixel line different from the first pixel line, wherein the block prediction vector is determined based at least partly on a last edge count value indicative of the number of samples in the first pixel line that have gone by since the occurrence of the edge.
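A minimal sketch of the block prediction search described above, assuming 2-bit bit-depth reduction for the buffered line, a simple threshold edge detector on the restored samples, and a SAD-based search whose maximum vector is bounded by the last edge count; the thresholds, vector range, and the exact way the edge count steers the search are illustrative assumptions.

DROP_BITS = 2        # bit-depth reduction applied before storing the first line
EDGE_THRESHOLD = 32  # an edge is a restored-sample jump larger than this

def store_line(samples):
    return [s >> DROP_BITS for s in samples]     # bit-depth reduced buffer

def restore_line(stored):
    return [s << DROP_BITS for s in stored]      # restored sample values

def last_edge_count(restored):
    # Number of samples that have gone by since the most recent edge; if no
    # edge is found, treat the start of the line as the last edge.
    count = len(restored)
    for i in range(1, len(restored)):
        if abs(restored[i] - restored[i - 1]) > EDGE_THRESHOLD:
            count = len(restored) - i
    return count

def block_prediction_vector(first_line, group, pos, max_vector=8):
    restored = restore_line(store_line(first_line))
    lec = last_edge_count(restored)
    best_vec, best_sad = 0, None
    # The last edge count bounds how far back the SAD search looks, one
    # simple way the edge information can steer the vector choice.
    for vec in range(0, min(max_vector, lec) + 1):
        start = pos - vec
        if start < 0 or start + len(group) > len(restored):
            continue
        ref = restored[start:start + len(group)]
        sad = sum(abs(a - b) for a, b in zip(group, ref))
        if best_sad is None or sad < best_sad:
            best_vec, best_sad = vec, sad
    return best_vec

first_line = [16, 16, 16, 200, 200, 200, 200, 200]
print(block_prediction_vector(first_line, group=[16, 200], pos=6))  # prints 4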
Abstract:
A video encoder has a processing circuit and a universal binary entropy (UBE) syntax encoder. The processing circuit processes pixel data of a video frame to generate encoding-related data, wherein the encoding-related data comprise at least quantized transform coefficients. The UBE syntax encoder processes a plurality of syntax elements to generate UBE syntax data. The encoding-related data are represented by the syntax elements. The processing circuit operates according to a video coding standard. The video coding standard supports arithmetic encoding. The UBE syntax data contain no arithmetic-encoded syntax data.
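A minimal sketch of a syntax encoder that binarizes syntax elements but never feeds the bins to an arithmetic (e.g. CABAC) engine, so the output contains no arithmetic-encoded data; the Exp-Golomb and fixed-length binarizations and the element names below are stand-ins, since the abstract does not define the UBE syntax format.

def exp_golomb(value):
    # Unsigned order-0 Exp-Golomb binarization.
    code = value + 1
    return "0" * (code.bit_length() - 1) + format(code, "b")

def fixed_length(value, bits):
    return format(value, "0{}b".format(bits))

def ube_encode(syntax_elements):
    # syntax_elements: (name, value, binarization) tuples.  The bins are
    # concatenated directly into the UBE syntax data instead of being
    # compressed further by an arithmetic-coding stage.
    out = []
    for name, value, binarization in syntax_elements:
        if binarization == "ue":
            out.append(exp_golomb(value))
        else:                                  # ("fl", number_of_bits)
            out.append(fixed_length(value, binarization[1]))
    return "".join(out)

# A few coefficient-related syntax elements as produced by the processing circuit.
print(ube_encode([
    ("last_sig_coeff_x", 3, "ue"),
    ("coeff_abs_level", 7, "ue"),
    ("coeff_sign_flag", 1, ("fl", 1)),
]))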