BLOCK-BASED OPTICAL FLOW ESTIMATION FOR MOTION COMPENSATED PREDICTION IN VIDEO CODING

    Publication number: US20190158843A1

    Publication date: 2019-05-23

    Application number: US15817369

    Filing date: 2017-11-20

    Applicant: GOOGLE LLC

    Abstract: An optical flow reference frame portion (e.g., a block or an entire frame) is generated that can be used for inter prediction of blocks of a current frame in a video sequence. A forward reference frame and a backward reference frame are used in an optical flow estimation that produces a respective motion field for pixels of a current frame. The motion fields are used to warp some or all pixels of the reference frames to the pixels of the current frame. The warped reference frame pixels are blended to form the optical flow reference frame portion. The inter prediction may be performed as part of encoding or decoding portions of the current frame.
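
    A minimal Python/NumPy sketch of the warp-and-blend step described in this abstract. The nearest-neighbor warp, the equal-weight blend, and the names warp and interpolated_reference are illustrative assumptions, not taken from the patent; a real codec would use sub-pixel interpolation filters and a derived blending weight.

```python
import numpy as np

def warp(ref, motion_field):
    """Sample `ref` at positions displaced by the per-pixel motion field
    (nearest-neighbor; real codecs use sub-pixel interpolation filters)."""
    h, w = ref.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + motion_field[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + motion_field[..., 1]).astype(int), 0, w - 1)
    return ref[src_y, src_x]

def interpolated_reference(ref_fwd, ref_bwd, mf_fwd, mf_bwd, w_fwd=0.5):
    """Warp both references toward the current frame and blend the results."""
    blended = w_fwd * warp(ref_fwd, mf_fwd) + (1.0 - w_fwd) * warp(ref_bwd, mf_bwd)
    return np.clip(np.round(blended), 0, 255).astype(np.uint8)

h, w = 16, 16
ref_fwd = np.random.randint(0, 256, (h, w)).astype(np.float64)
ref_bwd = np.random.randint(0, 256, (h, w)).astype(np.float64)
zero_field = np.zeros((h, w, 2))      # stand-in for the estimated optical flow
print(interpolated_reference(ref_fwd, ref_bwd, zero_field, zero_field).shape)
```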

    CODING VIDEO SYNTAX ELEMENTS USING A CONTEXT TREE

    Publication number: US20190020900A1

    Publication date: 2019-01-17

    Application number: US15648500

    Filing date: 2017-07-13

    Applicant: GOOGLE LLC

    Abstract: Video syntax elements are coded using a context tree. Context information used for coding previously-coded syntax elements is identified. A context tree is produced by separating the previously-coded syntax elements into data groups based on the context information. The context tree includes nodes representing the data groups. Separating the previously-coded syntax elements can include applying separation criteria against values of the context information to produce at least some of the nodes. Context information is then identified for another set of syntax elements to be coded. One of the nodes of the context tree is identified based on values of the context information associated with one of the other set of syntax elements. That syntax element is then coded according to a probability model associated with the identified node. The context tree can be used to encode or decode syntax elements.
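
    A rough Python illustration of the grouping-and-lookup flow described above: previously coded syntax elements are separated into data groups (tree nodes) using their context values, each leaf carries an empirical probability model, and a new element's context walks the tree to select a model. The median-split criterion, tree depth, and all names are hypothetical.

```python
from collections import Counter

class Node:
    def __init__(self, samples):
        self.samples = samples           # list of (context_tuple, symbol)
        self.split = None                # (feature_index, threshold) or None
        self.left = self.right = None

    def probability_model(self):
        counts = Counter(symbol for _, symbol in self.samples)
        total = sum(counts.values())
        return {s: c / total for s, c in counts.items()}

def build(samples, depth=0, max_depth=2, min_samples=4):
    """Separate previously coded elements into data groups (tree nodes)."""
    node = Node(samples)
    if depth >= max_depth or len(samples) < min_samples:
        return node
    feature = depth % len(samples[0][0])
    values = sorted(ctx[feature] for ctx, _ in samples)
    threshold = values[len(values) // 2]          # toy separation criterion
    left = [s for s in samples if s[0][feature] <= threshold]
    right = [s for s in samples if s[0][feature] > threshold]
    if left and right:
        node.split = (feature, threshold)
        node.left = build(left, depth + 1, max_depth, min_samples)
        node.right = build(right, depth + 1, max_depth, min_samples)
    return node

def lookup(tree, context):
    """Walk the tree with a new element's context to pick its probability model."""
    node = tree
    while node.split is not None:
        feature, threshold = node.split
        node = node.left if context[feature] <= threshold else node.right
    return node.probability_model()

# Previously coded elements: (context values, coded symbol).
history = [((1, 0), 0), ((2, 1), 0), ((3, 1), 1), ((4, 0), 1), ((5, 1), 1)]
tree = build(history)
print(lookup(tree, (2, 1)))    # probability model for a new element's context
```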

    Overlapped Filtering For Temporally Interpolated Prediction Blocks

    Publication number: US20250142050A1

    Publication date: 2025-05-01

    Application number: US18927278

    Filing date: 2024-10-25

    Applicant: Google LLC

    Abstract: Filtering an interpolated reference frame is described. The interpolated reference frame is generated by determining, from a motion field, a motion vector pointing towards a forward reference frame and a motion vector pointing towards a backward reference frame. Prediction blocks that are expanded relative to the size of the blocks of the interpolated reference frame are determined using the motion vectors and the reference frames. The expanded prediction blocks form overlapping areas with adjacent blocks of the interpolated reference frame. The overlapping areas are filtered to mitigate discontinuities.
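
    An illustrative Python sketch of filtering the overlapping area between two horizontally adjacent expanded prediction blocks of an interpolated reference frame. The linear cross-fade weights and the name blend_seam are assumptions standing in for whatever filter the codec actually applies.

```python
import numpy as np

def blend_seam(pred_left, pred_right, W, V):
    """pred_left covers strip columns [0, W+V); pred_right covers [W-V, 2W).
    Returns an (h, 2W) strip in which the 2V-wide overlap is cross-faded."""
    h = pred_left.shape[0]
    out = np.empty((h, 2 * W), dtype=np.float64)
    out[:, :W - V] = pred_left[:, :W - V]        # covered only by the left block
    out[:, W + V:] = pred_right[:, 2 * V:]       # covered only by the right block
    # Overlapping area: strip columns W-V .. W+V-1.
    weight_left = np.linspace(1.0, 0.0, 2 * V)   # fades from left to right block
    left_ov = pred_left[:, W - V:W + V]
    right_ov = pred_right[:, :2 * V]
    out[:, W - V:W + V] = weight_left * left_ov + (1.0 - weight_left) * right_ov
    return out

# Two flat 16-pixel-wide predictions, each extended V=4 pixels past the block
# boundary; the seam values ramp smoothly between the two levels.
h, W, V = 8, 16, 4
strip = blend_seam(np.full((h, W + V), 100.0), np.full((h, W + V), 140.0), W, V)
print(strip[0, W - V:W + V])
```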

    ADAPTIVE CODING OF PREDICTION MODES USING PROBABILITY DISTRIBUTIONS

    Publication number: US20230232001A1

    Publication date: 2023-07-20

    Application number: US18188364

    Filing date: 2023-03-22

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/122 H04N19/13 H04N19/61

    Abstract: A system, apparatus, and method for encoding and decoding a video image having a plurality of frames are disclosed. Encoding and decoding the video image can include selecting, for a current block, a prediction mode from a plurality of prediction modes; identifying, for the current block, a quantization value; selecting, for the current block and using a processor, a probability distribution from a plurality of probability distributions based on the identified quantization value; and entropy encoding the selected prediction mode using the selected probability distribution.
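
    A toy Python sketch of the selection step: the block's quantization value indexes into a set of prediction-mode probability distributions, and the chosen distribution prices the selected mode. The mode names, quantizer ranges, and probabilities below are made up, and -log2(p) stands in for driving an actual entropy coder.

```python
import math

# Hypothetical distributions: coarser quantizers favor smoother modes.
DISTRIBUTIONS = [
    (range(0, 32),   {"DC": 0.25, "HORIZ": 0.25, "VERT": 0.25, "TM": 0.25}),
    (range(32, 96),  {"DC": 0.40, "HORIZ": 0.20, "VERT": 0.20, "TM": 0.20}),
    (range(96, 256), {"DC": 0.55, "HORIZ": 0.15, "VERT": 0.15, "TM": 0.15}),
]

def select_distribution(quantization_value):
    for q_range, dist in DISTRIBUTIONS:
        if quantization_value in q_range:
            return dist
    raise ValueError("quantization value out of range")

def mode_bits(prediction_mode, quantization_value):
    """Approximate cost, in bits, of entropy coding the chosen mode."""
    dist = select_distribution(quantization_value)
    return -math.log2(dist[prediction_mode])

print(mode_bits("DC", 120))   # common mode under a coarse quantizer: cheap
print(mode_bits("TM", 120))   # rarer mode: more bits
```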

    Dynamic motion vector referencing for video coding

    Publication number: US11647223B2

    Publication date: 2023-05-09

    Application number: US17132065

    Filing date: 2020-12-23

    Applicant: GOOGLE LLC

    CPC classification number: H04N19/573 H04N19/52 H04N19/567 H04N19/70

    Abstract: Dynamic motion vector referencing is used to predict motion within video blocks. A motion trajectory is determined for a current frame including a video block to encode or decode based on a reference motion vector used for encoding or decoding one or more reference frames of the current frame. One or more temporal motion vector candidates are then determined for predicting motion within the video block based on the motion trajectory. A motion vector is selected from a motion vector candidate list including the one or more temporal motion vector candidates and used to generate a prediction block. The prediction block is then used to encode or decode the video block. The motion trajectory is based on an order of video frames indicated by frame offset values encoded to a bitstream. The motion vector candidate list may include one or more spatial motion vector candidates.
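
    A small Python sketch of deriving a temporal motion vector candidate by linearly projecting a reference frame's motion vector along the trajectory implied by display-order frame offsets. The name project_mv, the example offsets, and the straight-line motion model are illustrative assumptions; real codecs also use fixed-point rounding rather than truncation.

```python
from fractions import Fraction

def project_mv(ref_mv, ref_frame_offset, ref_target_offset,
               cur_frame_offset, cur_target_offset):
    """Scale a motion vector used at `ref_frame_offset` (pointing to
    `ref_target_offset`) so it describes motion from the current frame to its
    reference, assuming motion continues along the same straight trajectory."""
    scale = Fraction(cur_target_offset - cur_frame_offset,
                     ref_target_offset - ref_frame_offset)
    return (int(ref_mv[0] * scale), int(ref_mv[1] * scale))

# A frame at display offset 4 used MV (8, -4) to reach offset 0; the current
# frame at offset 6 predicts from offset 4, so the candidate is scaled by 1/2.
candidate = project_mv((8, -4), ref_frame_offset=4, ref_target_offset=0,
                       cur_frame_offset=6, cur_target_offset=4)
print(candidate)   # (4, -2): one temporal candidate for the MV candidate list
```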

    Efficient context model computation design in transform coefficient coding

    Publication number: US11405646B2

    Publication date: 2022-08-02

    Application number: US17106898

    Filing date: 2020-11-30

    Applicant: GOOGLE LLC

    Abstract: A processor is configured to maintain, for encoding current values related to the transform coefficients, a first line buffer and a second line buffer. The current values are arranged along a current scan-order anti-diagonal line. The first line buffer includes first values of a first scan-order anti-diagonal line. The second line buffer includes second values of a second scan-order anti-diagonal line. The processor is further configured to interleave the first values and the second values in a destination buffer; select, using the destination buffer, a probability distribution for coding a current value of the current values; entropy encode, in a compressed bitstream, the current value using the probability distribution; and replace, for coding values of the scan-order anti-diagonal line immediately subsequent to the current scan-order anti-diagonal line, one of the second line buffer or the first line buffer with the current scan-order anti-diagonal line.
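
    A Python sketch, under simplifying assumptions, of the two-line-buffer bookkeeping: the previous two anti-diagonal lines are interleaved into a destination buffer, a context is derived from it for each current value, and the older buffer is then replaced by the line just coded. The toy context (a capped count of nonzero interleaved neighbors) and all names are illustrative, not the patent's context model.

```python
import numpy as np
from itertools import zip_longest

def antidiagonal(block, k):
    """Values on anti-diagonal k (row + col == k) of a square block."""
    n = block.shape[0]
    return [block[r, k - r] for r in range(max(0, k - n + 1), min(k, n - 1) + 1)]

def code_block(block):
    n = block.shape[0]
    line_a, line_b = [], []              # the two previous anti-diagonal lines
    coded = []
    for k in range(2 * n - 1):
        current = antidiagonal(block, k)
        # Interleave the two previous lines into one destination buffer.
        dest = [v for pair in zip_longest(line_a, line_b, fillvalue=0) for v in pair]
        for value in current:
            context = min(sum(1 for v in dest if v != 0), 5)   # toy context index
            coded.append((int(value), context))   # stand-in for entropy coding
        # Replace the older line buffer with the line just coded.
        line_a, line_b = line_b, current
    return coded

block = np.arange(16).reshape(4, 4) % 3    # toy 4x4 "transform coefficients"
print(code_block(block)[:6])
```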

    ADAPTIVE CODING OF PREDICTION MODES USING PROBABILITY DISTRIBUTIONS

    Publication number: US20220014744A1

    Publication date: 2022-01-13

    Application number: US17340293

    Filing date: 2021-06-07

    Applicant: GOOGLE LLC

    Abstract: Generating, by a processor in response to instructions stored on a non-transitory computer readable medium, a reconstructed frame may include generating a reconstructed block of the reconstructed frame by decoding from an encoded bitstream. Decoding may include decoding a value from the encoded bitstream; identifying, in accordance with the value, a probability distribution for generating the reconstructed block, wherein the value indicates the probability distribution among a plurality of probability distributions determined independently of generating the reconstructed frame; entropy decoding an encoded prediction mode from the encoded bitstream using the probability distribution to identify a prediction mode for generating the reconstructed block; generating a prediction block in accordance with the prediction mode; combining the prediction block and a reconstructed residual block to obtain the reconstructed block; and including the reconstructed block in the reconstructed frame.
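
    A decoder-side counterpart sketch in Python: a value parsed from the bitstream selects one of several predetermined distributions, and the prediction mode is recovered against that distribution. The single-symbol CDF lookup is a toy stand-in for a real entropy decoder, and the mode names and probabilities are assumptions.

```python
MODES = ["DC", "HORIZ", "VERT", "TM"]

DISTRIBUTIONS = [                 # determined independently, e.g. ahead of time
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.20, 0.20, 0.20],
    [0.55, 0.15, 0.15, 0.15],
]

def decode_mode(dist_index, coded_fraction):
    """dist_index: value read from the bitstream that selects a distribution.
    coded_fraction: toy stand-in for the entropy-coded mode, in [0, 1)."""
    probs = DISTRIBUTIONS[dist_index]
    cumulative = 0.0
    for mode, p in zip(MODES, probs):
        cumulative += p
        if coded_fraction < cumulative:
            return mode
    return MODES[-1]

# The decoded mode then drives prediction; the prediction block is combined
# with the reconstructed residual block to obtain the reconstructed block.
print(decode_mode(dist_index=2, coded_fraction=0.60))   # -> "HORIZ"
```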

    Deblocking filtering

    Publication number: US10750171B2

    Publication date: 2020-08-18

    Application number: US16016768

    Filing date: 2018-06-25

    Applicant: GOOGLE LLC

    Abstract: Systems and methods are disclosed for encoding and decoding video. For example, methods may include: accessing an encoded bitstream; reconstructing an image including multiple color planes based on data from the encoded bitstream; decoding a first filter level from the encoded bitstream, wherein the first filter level specifies one or more thresholds that are used to select a length for a deblocking filter; decoding a second filter level from the encoded bitstream, wherein the second filter level specifies one or more thresholds that are used to select a length for a deblocking filter; after reconstruction of the image, applying a deblocking filter to a first color plane of the image using the first filter level; and, after reconstruction of the image, applying a deblocking filter to a second color plane of the image using the second filter level.
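
    A simplified Python sketch of per-plane deblocking driven by separately decoded filter levels: each level is mapped to thresholds that choose a filter length at a block edge. The level-to-threshold mapping and the averaging filter are illustrative placeholders for the codec's actual tap sets.

```python
import numpy as np

def filter_length_for_edge(level, edge_delta):
    """Pick a deblocking filter length from thresholds derived from the level."""
    strong_thresh, weak_thresh = 2 * level, 4 * level
    if abs(edge_delta) < strong_thresh:
        return 4             # smooth edge: apply the longer filter
    if abs(edge_delta) < weak_thresh:
        return 2             # moderate step: apply the short filter
    return 0                 # likely a real image edge: leave untouched

def deblock_vertical_edge(plane, col, level):
    """Smooth pixels on both sides of the vertical block edge at column `col`."""
    out = plane.astype(np.float64)
    for row in range(plane.shape[0]):
        delta = float(plane[row, col]) - float(plane[row, col - 1])
        taps = filter_length_for_edge(level, delta)
        if taps:
            lo, hi = col - taps // 2, col + taps // 2
            out[row, lo:hi] = out[row, lo:hi].mean()
    return np.clip(np.round(out), 0, 255).astype(plane.dtype)

# Each color plane is filtered with its own decoded filter level.
luma = np.random.randint(0, 256, (8, 16)).astype(np.uint8)
chroma = np.random.randint(0, 256, (8, 16)).astype(np.uint8)
luma_level, chroma_level = 20, 12        # stand-ins for the two decoded levels
print(deblock_vertical_edge(luma, col=8, level=luma_level).shape,
      deblock_vertical_edge(chroma, col=8, level=chroma_level).shape)
```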
