Abstract:
A video processing method includes: receiving a motion vector of a prediction block in a current frame; performing a first motion vector scaling operation upon the motion vector to generate a first scaled motion vector; after the first scaled motion vector is generated, utilizing a motion vector clamping circuit for performing a first motion vector clamping operation upon the first scaled motion vector to generate a first clamped motion vector; and determining a position of a reference block of a reference frame according to at least the first clamped motion vector.
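The scale-then-clamp ordering described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the temporal-distance scaling formula, and the 18-bit clamp range are all assumptions.

```python
def scale_mv(mv, td, tb):
    """Scale a motion vector by the ratio of temporal distances tb/td (assumed model)."""
    x, y = mv
    return (x * tb // td, y * tb // td)

def clamp_mv(mv, lo=-(1 << 17), hi=(1 << 17) - 1):
    """Clamp each component to a representable range (18-bit range is an assumption)."""
    x, y = mv
    return (max(lo, min(hi, x)), max(lo, min(hi, y)))

def reference_position(block_pos, mv, td, tb):
    """Scale first, then clamp, then locate the reference block."""
    scaled = scale_mv(mv, td, tb)    # first motion vector scaling operation
    clamped = clamp_mv(scaled)       # first motion vector clamping operation
    bx, by = block_pos
    mx, my = clamped
    return (bx + mx, by + my)
```

Clamping after scaling guarantees the derived reference position stays within the representable coordinate range even when scaling overflows it.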
Abstract:
An exemplary data arrangement method for a picture includes at least the following steps: obtaining pixel data of a plurality of first N-bit pixels of the picture; and storing the obtained pixel data of the first N-bit pixels in a plurality of M-bit storage units of a first buffer based on a raster-scan order of the picture, wherein M and N are positive integers, and M is not divisible by N. In addition, at least one of the M-bit storage units is filled with part of the obtained pixel data of the first N-bit pixels, and the first N-bit pixels include at least one pixel divided into a first part stored in one of the M-bit storage units in the first buffer and a second part stored in another of the M-bit storage units in the first buffer.
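Because M is not divisible by N, some pixel inevitably straddles a storage-unit boundary. The packing can be sketched with a bit accumulator; the concrete values N=10 and M=32 below are illustrative assumptions, not limits from the abstract.

```python
def pack_pixels(pixels, n_bits=10, m_bits=32):
    """Pack n-bit pixels contiguously into m-bit storage units in raster order.
    A pixel that straddles a unit boundary is split across two units."""
    units = []
    acc = 0       # bit accumulator
    acc_len = 0   # number of valid bits currently in the accumulator
    for p in pixels:
        acc |= (p & ((1 << n_bits) - 1)) << acc_len
        acc_len += n_bits
        while acc_len >= m_bits:
            units.append(acc & ((1 << m_bits) - 1))  # emit one full storage unit
            acc >>= m_bits
            acc_len -= m_bits
    if acc_len:
        units.append(acc)  # last unit is only partially filled
    return units
```

With four 10-bit pixels (40 bits) and 32-bit units, the fourth pixel is split: its low bits complete the first unit and its high bits start the second, matching the divided-pixel arrangement described above.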
Abstract:
Video encoding or decoding methods and apparatuses include receiving input data associated with a current block in a current picture, determining a preload region in a reference picture shared by two or more coding configurations of affine prediction or motion compensation or by two or more affine refinement iterations, loading reference samples in the preload region, generating predictors for the current block, and encoding or decoding the current block according to the predictors. The predictors associated with the affine refinement iterations or coding configurations are generated based on some of the reference samples in the preload region.
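A shared preload region can be modeled as the bounding box covering the reference samples needed by every coding configuration or refinement iteration, loaded once instead of per iteration. The sketch below is an assumed simplification: the margin for interpolation filter taps and the candidate-MV representation are illustrative.

```python
def preload_region(block_x, block_y, block_w, block_h, mv_candidates, margin=4):
    """Union bounding box (x0, y0, x1, y1) in the reference picture covering the
    samples needed by every candidate MV / refinement iteration.
    margin accounts for interpolation filter taps (e.g. 4 for an 8-tap filter;
    an assumption here)."""
    xs = [block_x + mx for mx, _ in mv_candidates]
    ys = [block_y + my for _, my in mv_candidates]
    x0 = min(xs) - margin
    y0 = min(ys) - margin
    x1 = max(xs) + block_w + margin
    y1 = max(ys) + block_h + margin
    return (x0, y0, x1, y1)
```

All predictors for the shared configurations are then generated from samples inside this one region, which bounds memory bandwidth regardless of how many refinement iterations run.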
Abstract:
A method for specifying layout of subpictures in video pictures is provided. A video decoder receives data from a bitstream to be decoded as a current picture of a video. For a current subpicture of a set of subpictures of the current picture, the video decoder determines a position of the current subpicture based on a width and a height of the current picture and a previously determined width and height of a particular subpicture in the set of subpictures. The video decoder reconstructs the current picture and the current subpicture based on the determined position.
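Deriving a subpicture's position from the picture dimensions and a previously determined subpicture's width and height can be sketched as a raster-order placement rule. This is a simplified model of such a layout, not the signalled syntax.

```python
def next_subpic_position(pic_w, pic_h, prev_x, prev_y, prev_w, prev_h):
    """Derive the top-left of the current subpicture from the picture size and a
    previously placed subpicture (raster-order layout; a simplified assumption)."""
    x = prev_x + prev_w   # place to the right of the previous subpicture
    y = prev_y
    if x >= pic_w:        # would exceed the picture width: wrap to the next row
        x = 0
        y = prev_y + prev_h
    return (x, y)
```

Because the position is derivable, the decoder needs no explicit per-subpicture coordinates for such layouts.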
Abstract:
Various schemes pertaining to pre-encoding processing of a video stream with motion compensated temporal filtering (MCTF) are described. An apparatus determines a filtering interval for a received raw video stream having pictures in a temporal sequence. The apparatus selects from the pictures a plurality of target pictures based on the filtering interval, as well as a group of reference pictures for each target picture to perform pixel-based MCTF, which generates a corresponding filtered picture for each target picture. The apparatus subsequently transmits the filtered pictures as well as non-target pictures to an encoder for encoding the video stream. Subpictures of natural images and screen content images are separately processed by the apparatus.
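The interval-based selection of target pictures, the per-target reference group, and a pixel-based temporal blend can be sketched as below. The reference radius and blend weight are illustrative assumptions, not values from the abstract.

```python
def select_targets(num_pictures, interval):
    """Pick target picture indices at a fixed filtering interval."""
    return list(range(interval - 1, num_pictures, interval))

def reference_group(target, num_pictures, radius=2):
    """Neighbouring pictures used to filter one target (radius is an assumption)."""
    return [i for i in range(target - radius, target + radius + 1)
            if 0 <= i < num_pictures and i != target]

def mctf_pixel(target_val, ref_vals, weight=0.4):
    """Pixel-based temporal filter: blend the target sample with the mean of its
    motion-compensated reference samples (weights are illustrative)."""
    if not ref_vals:
        return target_val
    return (1 - weight) * target_val + weight * sum(ref_vals) / len(ref_vals)
```

Non-target pictures bypass the filter and are forwarded to the encoder unchanged, as the abstract describes.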
Abstract:
The present invention provides a control method of a receiver. The control method includes the steps of: when the receiver enters a sleep/standby mode, continually detecting an auxiliary signal from an auxiliary channel to generate a detection result; and if the detection result indicates that the auxiliary signal has a preamble or a specific pattern, generating a wake-up control signal to wake up the receiver before the auxiliary signal carrying a wake-up command is fully received.
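The early wake-up can be sketched as matching a preamble pattern in the sampled auxiliary channel; the concrete pattern below is an assumption, chosen only for illustration.

```python
PREAMBLE = [1, 0, 1, 0, 1, 0, 1, 0]  # assumed preamble pattern

def detect_preamble(samples, preamble=PREAMBLE):
    """Return the index just past the first preamble match, or -1 if absent.
    Asserting the wake-up control signal on the preamble lets the receiver be
    awake before the full wake-up command has been received."""
    n = len(preamble)
    for i in range(len(samples) - n + 1):
        if samples[i:i + n] == preamble:
            return i + n
    return -1
```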
Abstract:
Video encoding methods and apparatuses include receiving reconstructed video samples, determining an initial clipping setting for ALF coefficients, and deriving clipping setting candidates from the initial clipping setting. ALF coefficients for the initial clipping setting and the clipping setting candidates are derived by solving inverse matrices, where partial intermediate results of solving ALF coefficients are shared by two or more clipping settings. A distortion value corresponding to the derived ALF coefficients for each clipping setting is computed, and final clipping indices for final ALF coefficients are determined according to the distortion values. ALF filtering is applied to the reconstructed video samples based on the final ALF coefficients and the final clipping indices.
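The candidate derivation and distortion-based selection can be sketched as below. The one-step perturbation neighbourhood and the four clipping levels are assumptions for illustration; the matrix-solving and shared-intermediate-result machinery of the actual method is not modeled here.

```python
def clipping_candidates(initial, num_levels=4):
    """Derive neighbouring clipping settings by perturbing one clipping index at a
    time (a simplified search neighbourhood; an assumption, not the patented rule)."""
    cands = []
    for i, c in enumerate(initial):
        for d in (-1, 1):
            v = c + d
            if 0 <= v < num_levels:
                cand = list(initial)
                cand[i] = v
                cands.append(tuple(cand))
    return cands

def best_clipping(settings, distortion):
    """Select the clipping setting whose derived coefficients give minimal distortion."""
    return min(settings, key=distortion)
```

Evaluating the initial setting together with its candidates and keeping the minimum-distortion one yields the final clipping indices.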
Abstract:
A video encoder receives raw pixel data to be encoded as a current block of a current picture of a video into a bitstream. The video encoder identifies multiple candidate bi-prediction positions for the current block, including a center position, a first set of offset positions, and a second set of offset positions. The first set of offset positions and the second set of offset positions are interleaved with each other. The encoder computes distortion values for each of the candidate bi-prediction positions based on several possible weighting parameter values. The distortion values for the center position are based on each of the several possible weighting parameter values. The distortion values for the first set of offset positions are based on a first subset of the possible weighting parameter values. The distortion values for the second set of offset positions are based on a second subset of the possible weighting parameter values.
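One way to form two interleaved offset sets is a checkerboard split of the search window; the checkerboard rule and the search range below are assumptions for illustration.

```python
def candidate_positions(search_range=2):
    """Return the center plus two interleaved offset sets.
    The checkerboard split by (dx + dy) parity is an assumed interleaving rule."""
    center = (0, 0)
    set_a, set_b = [], []
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            if (dx, dy) == center:
                continue
            (set_a if (dx + dy) % 2 == 0 else set_b).append((dx, dy))
    return center, set_a, set_b
```

Evaluating the center against every weighting parameter value but each offset set against only a subset halves the per-offset distortion computations while still covering all weights across the interleaved sets.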
Abstract:
A video coder that implements illumination compensation is provided. The video coder receives a first block of pixels in a first video picture to be coded as a current block, wherein the current block is associated with a motion vector that references a second block of pixels in a second video picture as a reference block. The video coder performs inter-prediction for the current block by using the motion vector to generate a set of motion-compensated pixels for the current block. The video coder modifies the set of motion-compensated pixels of the current block by applying a linear model that is computed based on neighboring samples of the reference block and of the current block. The neighboring samples are identified based on a position of the current block within a larger block.
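A linear model computed from neighbouring samples is commonly a least-squares fit of the current block's neighbours against the reference block's neighbours. The sketch below uses that formulation as an assumption; the codec's exact (typically integer) derivation may differ.

```python
def lic_model(ref_neighbors, cur_neighbors):
    """Least-squares fit cur ≈ alpha * ref + beta over neighbouring samples
    (an assumed formulation of the illumination-compensation model)."""
    n = len(ref_neighbors)
    sx = sum(ref_neighbors)
    sy = sum(cur_neighbors)
    sxx = sum(x * x for x in ref_neighbors)
    sxy = sum(x * y for x, y in zip(ref_neighbors, cur_neighbors))
    denom = n * sxx - sx * sx
    if denom == 0:
        return 1.0, (sy - sx) / n   # degenerate neighbours: offset-only model
    alpha = (n * sxy - sx * sy) / denom
    beta = (sy - alpha * sx) / n
    return alpha, beta

def apply_lic(pred, alpha, beta):
    """Modify motion-compensated samples with the linear model."""
    return [alpha * p + beta for p in pred]
```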
Abstract:
Exemplary video processing methods and apparatuses for coding a current block are provided. One implementation operates by receiving input video data associated with a current block in a current picture; determining one or more Motion Vectors (MVs) for generating an OBMC region; generating one or more converted MVs by changing said one or more MVs to one or more integer MVs or changing an MV component of said one or more MVs to an integer component; deriving the OBMC region by motion compensation using said one or more converted MVs; applying OBMC by blending an OBMC predictor in the OBMC region with an original predictor; and encoding or decoding the current block.
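The MV conversion and blending steps can be sketched as follows. The 1/16-pel MV precision and the 0.25 blending weight are illustrative assumptions, not values stated in the abstract.

```python
def to_integer_mv(mv, precision_bits=4):
    """Round a sub-pel MV (1/16-pel units when precision_bits=4, an assumption)
    to the nearest integer-pel MV, avoiding fractional interpolation for OBMC."""
    half = 1 << (precision_bits - 1)  # rounding offset
    return tuple(((c + half) >> precision_bits) << precision_bits for c in mv)

def blend_obmc(original, obmc, weight=0.25):
    """Blend the OBMC predictor into the original predictor (weight assumed)."""
    return [(1 - weight) * o + weight * b for o, b in zip(original, obmc)]
```

Converting the MVs to integer precision before deriving the OBMC region removes the sub-pel interpolation that motion compensation would otherwise need for that region.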