Abstract:
A video processing apparatus includes a first processing circuit, a second processing circuit, and a control circuit. The first processing circuit performs a first processing operation. The second processing circuit performs a second processing operation different from the first processing operation. The control circuit generates at least one output coding unit for the second processing circuit according to an input coding unit generated by the first processing circuit, wherein the control circuit checks a size of the input coding unit to selectively split the input coding unit into a plurality of output coding units.
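For illustration only, the size check and selective split described above can be sketched in C++ as follows; the CodingUnit structure, the kMaxOutputSize threshold and the simple tiling rule are assumptions of this sketch rather than features recited by the abstract.

#include <algorithm>
#include <cstdio>
#include <vector>

// Hypothetical coding unit: just a width/height pair for illustration.
struct CodingUnit { int width; int height; };

// Assumed maximum size the second processing circuit can accept (illustrative).
constexpr int kMaxOutputSize = 64;

// Check the size of the input coding unit and selectively split it into
// several output coding units, as the abstract describes at a high level.
std::vector<CodingUnit> splitIfNeeded(const CodingUnit& in) {
    std::vector<CodingUnit> out;
    if (in.width <= kMaxOutputSize && in.height <= kMaxOutputSize) {
        out.push_back(in);                  // small enough: pass through unchanged
        return out;
    }
    // Otherwise tile the input into kMaxOutputSize-aligned pieces.
    for (int y = 0; y < in.height; y += kMaxOutputSize) {
        for (int x = 0; x < in.width; x += kMaxOutputSize) {
            CodingUnit cu;
            cu.width  = std::min(kMaxOutputSize, in.width  - x);
            cu.height = std::min(kMaxOutputSize, in.height - y);
            out.push_back(cu);
        }
    }
    return out;
}

int main() {
    CodingUnit big{128, 64};
    auto pieces = splitIfNeeded(big);
    std::printf("split into %zu output coding units\n", pieces.size());
    return 0;
}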
Abstract:
A method and apparatus for affine CPMV or ALF refinement are disclosed. According to this method, statistical data associated with the affine CPMV or ALF refinement are collected over a picture area. Updated parameters for the affine CPMV refinement or the ALF refinement are then derived based on the statistical data, where the process to derive the updated parameters includes performing multiplication using a reduced-precision multiplier for the statistical data. The reduced-precision multiplier truncates at least one bit of the mantissa part. In another embodiment, the process to derive the updated parameters includes performing a reciprocal operation on the statistical data using a lookup table with an (m−k)-bit input by truncating k bits from the m-bit mantissa part, and the contents of the lookup table include m-bit outputs, where m and k are positive integers.
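A C++ sketch of the two operations named above: a reduced-precision multiplier that truncates mantissa bits, and a reciprocal taken from a lookup table indexed by a truncated (m−k)-bit mantissa. The fixed-point representation, the concrete values m = 10 and k = 3, and the 2^(2m) scaling of the reciprocal table are assumptions of this sketch, not elements of the claimed method.

#include <cstdint>
#include <cstdio>
#include <vector>

// m-bit mantissa and k truncated bits as in the abstract; the concrete values
// below are chosen only for this sketch.
constexpr int m = 10;
constexpr int k = 3;

// Reduced-precision multiplier: truncate k mantissa bits of each operand
// before multiplying, then rescale so the magnitude stays comparable.
uint32_t reducedPrecisionMul(uint32_t a, uint32_t b) {
    uint32_t ta = a >> k;
    uint32_t tb = b >> k;
    return (ta * tb) << (2 * k);
}

// Reciprocal via a lookup table with an (m-k)-bit input and m-bit outputs.
// Entry i approximates 2^(2m) / (i << k), clipped to m bits; this scaling is
// an assumption of the sketch, not dictated by the abstract.
std::vector<uint32_t> buildReciprocalLut() {
    std::vector<uint32_t> lut(1u << (m - k), (1u << m) - 1);
    for (uint32_t i = 1; i < lut.size(); ++i) {
        uint64_t denom = static_cast<uint64_t>(i) << k;
        uint64_t r = (1ull << (2 * m)) / denom;
        lut[i] = static_cast<uint32_t>(r > ((1u << m) - 1) ? ((1u << m) - 1) : r);
    }
    return lut;
}

uint32_t lutReciprocal(const std::vector<uint32_t>& lut, uint32_t x) {
    return lut[x >> k];   // index with the truncated (m-k)-bit mantissa
}

int main() {
    auto lut = buildReciprocalLut();
    uint32_t x = 200;     // a statistic accumulated over a picture area
    std::printf("reduced-precision product : %u\n", reducedPrecisionMul(x, 150));
    std::printf("table-based reciprocal    : %u\n", lutReciprocal(lut, x));
    return 0;
}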
Abstract:
Low-latency video coding methods and apparatuses include receiving input data associated with a current Intra slice composed of Coding Tree Units (CTUs), where each CTU includes luma and chroma Coding Tree Blocks (CTBs), partitioning each CTB into non-overlapping pipeline units, and encoding or decoding the CTUs in the current Intra slice by performing processing of chroma pipeline units after beginning processing of luma pipeline units in at least one pipeline stage. Each of the pipeline units is processed by one pipeline stage after another pipeline stage, and different pipeline stages process different pipeline units simultaneously. The pipeline stage in the low-latency video coding methods and apparatuses simultaneously processes one luma pipeline unit and at least one previous chroma pipeline unit within one pipeline-unit time interval.
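A toy C++ schedule of the luma/chroma interleaving described above; the number of pipeline units and the fixed one-unit chroma lag are assumptions of this sketch, since the abstract only requires that chroma processing begins after the corresponding luma processing has begun.

#include <cstdio>

int main() {
    // One pipeline stage over a sequence of pipeline-unit time intervals: in
    // each interval it works on luma pipeline unit t and, in parallel, on the
    // chroma pipeline unit of an earlier luma unit.
    constexpr int kNumUnits = 4;   // pipeline units per CTB (illustrative)
    for (int t = 0; t <= kNumUnits; ++t) {
        std::printf("interval %d:", t);
        if (t < kNumUnits) std::printf(" luma unit %d", t);
        if (t >= 1)        std::printf(" + chroma unit %d (previous)", t - 1);
        std::printf("\n");
    }
    return 0;
}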
Abstract:
Various schemes for realizing JCCR mode decision in the frequency domain are described. An apparatus receives first and second pixel data of a current block of a picture and transforms the pixel data into first and second transformed data in the frequency domain. The apparatus generates joint pixel data comprising a pixelwise linear combination of the first and second transformed data. The apparatus generates reconstructed joint pixel data based on the joint pixel data by quantization and inverse quantization operations. The apparatus derives first and second reconstructed pixel data based on the reconstructed joint pixel data. The apparatus accordingly calculates first and second distortion values in the frequency domain, based on which a preferred mode may be determined to code the current block.
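A toy frequency-domain check in the spirit of the abstract, written in C++; the 2x2 Hadamard transform standing in for the actual transform, the (Cb − Cr)/2 joint combination, the reconstruction rule and the scalar quantizer are all assumptions of this sketch.

#include <cmath>
#include <cstdio>

constexpr int N = 2;

// 2x2 Hadamard: rows then columns of the 2-point butterfly (transform stand-in).
void hadamard2x2(const double in[N][N], double out[N][N]) {
    double tmp[N][N];
    for (int r = 0; r < N; ++r) {
        tmp[r][0] = in[r][0] + in[r][1];
        tmp[r][1] = in[r][0] - in[r][1];
    }
    for (int c = 0; c < N; ++c) {
        out[0][c] = tmp[0][c] + tmp[1][c];
        out[1][c] = tmp[0][c] - tmp[1][c];
    }
}

int main() {
    double cb[N][N] = {{10, 12}, {9, 11}};
    double cr[N][N] = {{-9, -13}, {-10, -11}};
    double Cb[N][N], Cr[N][N];
    hadamard2x2(cb, Cb);                        // first and second transformed data
    hadamard2x2(cr, Cr);

    const double qstep = 4.0;                   // illustrative quantization step
    double dCb = 0.0, dCr = 0.0;
    for (int r = 0; r < N; ++r)
        for (int c = 0; c < N; ++c) {
            double joint = 0.5 * (Cb[r][c] - Cr[r][c]);          // linear combination
            double recJoint = qstep * std::round(joint / qstep); // quant + inverse quant
            double recCb =  recJoint;                            // derived reconstructions
            double recCr = -recJoint;                            // (one assumed joint mode)
            dCb += (Cb[r][c] - recCb) * (Cb[r][c] - recCb);      // first distortion value
            dCr += (Cr[r][c] - recCr) * (Cr[r][c] - recCr);      // second distortion value
        }
    std::printf("joint-mode distortion: Cb %.1f, Cr %.1f\n", dCb, dCr);
    return 0;
}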
Abstract:
Video encoding methods and apparatuses for Sum of Absolute Transformed Difference (SATD) computation by folded Hadamard transform circuits include splitting a current block into SATD blocks, receiving input data associated with a first block of a first SATD block in a first cycle and receiving input data associated with a second block of the first SATD block in a second cycle, and performing calculations for the first block by shared Hadamard transform circuits in the first cycle and performing calculations for the second block by the shared Hadamard transform circuits in the second cycle. Each shared Hadamard transform circuit is the first part of a corresponding folded Hadamard transform circuit. The video encoding methods and apparatuses further perform calculations for the entire first SATD block by the final part of each folded Hadamard transform circuit to generate a final SATD result of the first SATD block for encoding.
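The folding idea can be sketched on a 1-D 8-sample block in C++: a shared 4-point Hadamard acts as the reused first part over two cycles, and a final butterfly-plus-absolute-sum stage produces the SATD. The 1-D block size and the sample values are assumptions of this sketch; a real SATD block would be 2-D, but the circuit-sharing structure is the same.

#include <array>
#include <cmath>
#include <cstdio>

using Vec4 = std::array<double, 4>;

// Shared first-part circuit: a 4-point Hadamard butterfly network.
Vec4 hadamard4(const Vec4& x) {
    Vec4 y;
    double a0 = x[0] + x[1], a1 = x[0] - x[1];
    double a2 = x[2] + x[3], a3 = x[2] - x[3];
    y[0] = a0 + a2; y[1] = a1 + a3; y[2] = a0 - a2; y[3] = a1 - a3;
    return y;
}

int main() {
    std::array<double, 8> diff = {3, -1, 4, 1, -5, 9, -2, 6};  // residual samples

    // Cycle 1: first half of the SATD block through the shared circuit.
    Vec4 lo = hadamard4({diff[0], diff[1], diff[2], diff[3]});
    // Cycle 2: second half through the same shared circuit.
    Vec4 hi = hadamard4({diff[4], diff[5], diff[6], diff[7]});

    // Final part: last butterfly stage over both partial results, then sum of
    // absolute transformed differences for the whole block.
    double satd = 0.0;
    for (int i = 0; i < 4; ++i) {
        satd += std::fabs(lo[i] + hi[i]);
        satd += std::fabs(lo[i] - hi[i]);
    }
    std::printf("SATD of the 8-sample block: %.1f\n", satd);
    return 0;
}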
Abstract:
For each prediction candidate of a set of one or more prediction candidates of a current block in a current picture, a video coder computes a matching cost between a set of reference pixels of the prediction candidate in a reference picture and a set of neighboring pixels of the current block. The video coder identifies a subset of the reference pictures as major reference pictures based on a distribution of the prediction candidates among the reference pictures of the current picture. A bounding block is defined for each major reference picture, the bounding block encompassing at least portions of multiple sets of reference pixels for multiple prediction candidates. The video coder assigns an index to each prediction candidate based on the computed matching costs of the set of prediction candidates. A selection of a prediction candidate is signaled by using the assigned index of the selected prediction candidate.
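A C++ sketch of the cost-based re-indexing and the selection of a major reference picture; the Candidate structure, the precomputed matching costs and the single-major-picture rule are assumptions of this sketch, and bounding-block construction is omitted.

#include <algorithm>
#include <cstdio>
#include <map>
#include <numeric>
#include <vector>

// Each candidate carries its reference-picture id and a matching cost, e.g. a
// SAD between its reference template pixels and the current block's
// neighbouring pixels, assumed to have been computed already.
struct Candidate { int refPicId; double cost; };

int main() {
    std::vector<Candidate> cands = {
        {0, 145.0}, {1, 80.0}, {0, 60.0}, {2, 300.0}, {0, 95.0}};

    // Assign indices by sorting candidates on their computed matching cost.
    std::vector<int> order(cands.size());
    std::iota(order.begin(), order.end(), 0);
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return cands[a].cost < cands[b].cost; });
    for (size_t idx = 0; idx < order.size(); ++idx)
        std::printf("index %zu -> candidate %d (cost %.0f)\n",
                    idx, order[idx], cands[order[idx]].cost);

    // Pick a major reference picture from the candidate distribution.
    std::map<int, int> histo;
    for (const auto& c : cands) ++histo[c.refPicId];
    auto major = std::max_element(histo.begin(), histo.end(),
        [](const auto& a, const auto& b) { return a.second < b.second; });
    std::printf("major reference picture: %d (%d candidates)\n",
                major->first, major->second);
    return 0;
}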
Abstract:
A video coder that implicitly signals a transform setting for coding a block of pixels is provided. The video coder derives a transform setting for a block of pixels based on a block processing setting. The video coder processes the block of pixels according to the block processing setting. For encoding, the video coder transforms a set of residual pixels to generate a set of transform coefficients according to the transform setting. For decoding, the video coder inverse transforms the transform coefficients to generate a set of residual pixels according to the transform setting.
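A minimal C++ sketch of an implicit derivation: encoder and decoder apply the same rule mapping a block-processing setting to a transform setting, so no transform syntax needs to be signalled. The particular settings and the mapping below are purely illustrative assumptions, not the claimed rule.

#include <cstdio>

enum class BlockProcessing { kIntraSubPartition, kMergeWithMvd, kRegular };
enum class TransformSetting { kDstOnly, kDctOnly, kSelectable };

// Both encoder and decoder run this same deterministic rule, so the transform
// setting never appears in the bitstream.
TransformSetting deriveTransformSetting(BlockProcessing bp) {
    switch (bp) {
        case BlockProcessing::kIntraSubPartition: return TransformSetting::kDstOnly;
        case BlockProcessing::kMergeWithMvd:      return TransformSetting::kDctOnly;
        default:                                  return TransformSetting::kSelectable;
    }
}

int main() {
    TransformSetting ts = deriveTransformSetting(BlockProcessing::kIntraSubPartition);
    std::printf("derived transform setting: %d\n", static_cast<int>(ts));
    return 0;
}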
Abstract:
A video decoder determines whether the current block is coded by using intra block copy mode. The video decoder identifies a list of one or more prediction candidates for the current block. When the current block is not coded by using intra block copy mode, one or more spatial neighbors of the current block that are positioned in the same merge estimation region (MER) as the current block are excluded from the list of prediction candidates. When the current block is coded by using intra block copy mode and the list of prediction candidates belongs to a predefined subset of multiple different candidate lists, at least one of the identified prediction candidates is a spatial neighbor of the current block that is positioned in the same MER. The video decoder reconstructs the current block by using a prediction candidate selected from the list of prediction candidates to generate a prediction of the current block.
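A C++ sketch of the candidate-list rule above; the grid-aligned MER geometry, the 64-sample MER size and the listInPredefinedSubset flag are simplified assumptions of this sketch.

#include <cstdio>
#include <vector>

struct Block { int x, y; };                    // top-left position in pixels
constexpr int kMerSize = 64;                   // illustrative MER size

bool sameMer(const Block& a, const Block& b) {
    return a.x / kMerSize == b.x / kMerSize && a.y / kMerSize == b.y / kMerSize;
}

// Spatial neighbours inside the same MER as the current block are dropped for
// non-IBC blocks, but may be kept when the block uses intra block copy and the
// list belongs to the predefined subset.
std::vector<Block> buildCandidateList(const Block& cur,
                                      const std::vector<Block>& spatialNeighbours,
                                      bool isIbc, bool listInPredefinedSubset) {
    std::vector<Block> list;
    for (const Block& nb : spatialNeighbours) {
        bool inSameMer = sameMer(cur, nb);
        bool keep = !inSameMer || (isIbc && listInPredefinedSubset);
        if (keep) list.push_back(nb);
    }
    return list;
}

int main() {
    Block cur{70, 10};
    std::vector<Block> nbs = {{66, 8}, {20, 10}};   // one inside, one outside the MER
    auto ibcList  = buildCandidateList(cur, nbs, true,  true);
    auto normList = buildCandidateList(cur, nbs, false, false);
    std::printf("IBC list size %zu, regular list size %zu\n",
                ibcList.size(), normList.size());
    return 0;
}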
Abstract:
The present invention provides a control method of a receiver. The control method includes the steps of: when the receiver enters a sleep/standby mode, continually detecting an auxiliary signal from an auxiliary channel to generate a detection result; and if the detection result indicates that the auxiliary signal has a preamble or a specific pattern, generating a wake-up control signal to wake up the receiver before successfully receiving the auxiliary signal having a wake-up command.
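A C++ sketch of the early wake-up behaviour: while in sleep/standby, each sample taken from the auxiliary channel is checked against a preamble pattern, and the wake-up control signal is asserted as soon as the preamble is seen, before the complete wake-up command has been received. The sampled byte stream and the preamble value are assumptions of this sketch.

#include <cstdint>
#include <cstdio>
#include <vector>

constexpr uint8_t kPreamble = 0xAA;   // illustrative preamble pattern

int main() {
    // Samples arriving on the auxiliary channel; the bytes after the preamble
    // stand for the remainder of the wake-up command.
    std::vector<uint8_t> auxChannel = {0x00, 0x00, 0xAA, 0x5C, 0x01};
    bool wakeUp = false;
    for (uint8_t sample : auxChannel) {
        if (!wakeUp && sample == kPreamble) {
            wakeUp = true;             // generate the wake-up control signal early
            std::printf("preamble detected: waking receiver\n");
        }
    }
    std::printf("wake-up asserted: %s\n", wakeUp ? "yes" : "no");
    return 0;
}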
Abstract:
Methods and apparatus for block partitioning in video encoding and decoding are disclosed. According to one method, a current data unit is partitioned into initial blocks using inferred splitting without split-syntax signalling. The initial blocks comprise multiple initial luma blocks and multiple initial chroma blocks, and the size of each initial luma block is M×N, where M and N are positive integers and the current data unit is larger than M×N for the luma component. A partition structure is determined for partitioning each initial luma block and each initial chroma block into one or more luma CUs (coding units) and one or more chroma CUs respectively. The luma syntaxes and the chroma syntaxes associated with one initial block in the current data unit are signalled or parsed, and then the luma syntaxes and the chroma syntaxes associated with the next initial block in the current data unit are signalled or parsed.
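A C++ sketch of the inferred split and the per-initial-block signalling order; the data-unit and initial-block sizes are illustrative, and the recursive CU partitioning inside each initial block is abstracted into placeholder calls.

#include <cstdio>

constexpr int kUnitW = 128, kUnitH = 128;   // current data unit (luma samples)
constexpr int M = 64, N = 64;               // inferred initial luma block size

// Placeholders for coding the luma and chroma syntaxes of one initial block.
void codeLumaSyntax(int x, int y)   { std::printf("  luma   CUs of block (%d,%d)\n", x, y); }
void codeChromaSyntax(int x, int y) { std::printf("  chroma CUs of block (%d,%d)\n", x, y); }

int main() {
    // The split into M x N initial blocks is inferred, so no split syntax is
    // coded here; for each initial block the luma syntaxes come first, then the
    // chroma syntaxes, before moving on to the next initial block.
    for (int y = 0; y < kUnitH; y += N) {
        for (int x = 0; x < kUnitW; x += M) {
            std::printf("initial block at (%d,%d):\n", x, y);
            codeLumaSyntax(x, y);
            codeChromaSyntax(x, y);
        }
    }
    return 0;
}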