Abstract:
Techniques and systems are provided for deriving one or more sets of affine motion parameters at a decoder. For example, the decoder can obtain video data from an encoded video bitstream. The video data includes at least a current picture and a reference picture. The decoder can determine a set of affine motion parameters for a current block of the current picture. The set of affine motion parameters can be used for performing motion compensation prediction for the current block. The set of affine motion parameters can be determined using a current affine template of the current block and a reference affine template of the reference picture. In some cases, an encoder can determine a set of affine motion parameters for a current block using a current affine template of the current block and a reference affine template of the reference picture, and can generate an encoded video bitstream that includes a syntax item indicating that a template matching based affine motion derivation mode is to be used by a decoder for the current block. The encoded video bitstream may not include any affine motion parameters for determining the set of affine motion parameters.
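A minimal sketch of the decoder-side derivation described above, assuming a simplified 4-parameter affine model, an L-shaped template of reconstructed samples above and to the left of the current block, and a toy local search over the translation terms; all function and variable names (affine_mv, template_cost, derive_affine_params) are hypothetical illustrations, not taken from the source:

    import numpy as np

    def affine_mv(a, b, c, d, x, y):
        # Simplified 4-parameter affine model: per-pixel motion vector relative
        # to the block origin (illustrative, not the normative model).
        return a * x - b * y + c, b * x + a * y + d

    def template_cost(cur_pic, ref_pic, bx, by, bw, bh, params, t=4):
        # SAD between the current L-shaped template (t rows above and t columns
        # left of the block; assumes the block is not at the picture border)
        # and the affine-warped samples in the reference picture.
        a, b, c, d = params
        h, w = ref_pic.shape
        coords = [(x, y) for y in range(by - t, by) for x in range(bx - t, bx + bw)]
        coords += [(x, y) for y in range(by, by + bh) for x in range(bx - t, bx)]
        cost = 0
        for x, y in coords:
            mvx, mvy = affine_mv(a, b, c, d, x - bx, y - by)
            rx = int(round(float(np.clip(x + mvx, 0, w - 1))))
            ry = int(round(float(np.clip(y + mvy, 0, h - 1))))
            cost += abs(int(cur_pic[y, x]) - int(ref_pic[ry, rx]))
        return cost

    def derive_affine_params(cur_pic, ref_pic, bx, by, bw, bh, init=(0.0, 0.0, 0.0, 0.0)):
        # Decoder-side derivation: a toy local search for the parameter set that
        # minimizes the template-matching cost, so no affine parameters need to
        # be read from the bitstream.
        best = init
        best_cost = template_cost(cur_pic, ref_pic, bx, by, bw, bh, init)
        for dc in (-1.0, 0.0, 1.0):
            for dd in (-1.0, 0.0, 1.0):
                cand = (init[0], init[1], init[2] + dc, init[3] + dd)
                cost = template_cost(cur_pic, ref_pic, bx, by, bw, bh, cand)
                if cost < best_cost:
                    best, best_cost = cand, cost
        return best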
Abstract:
A method of coding video data can include receiving video information associated with a reference layer, an enhancement layer, or both, and generating a plurality of inter-layer reference pictures using a plurality of inter-layer filters and one or more reference layer pictures. The generated plurality of inter-layer reference pictures may be inserted into a reference picture list. A current picture in the enhancement layer may be coded using the reference picture list. The inter-layer filters may comprise default inter-layer filters or alternative inter-layer filters signaled in a sequence parameter set, video parameter set, or slice header.
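The sketch below illustrates the idea of generating several inter-layer reference pictures with different filters and appending them to the reference picture list. It assumes NumPy/SciPy for the scaling and filtering steps; the filter kernels, the 2x scale, and all function names are illustrative assumptions rather than the normative default or signaled filters:

    import numpy as np
    from scipy.ndimage import convolve, zoom  # zoom stands in for codec up-sampling

    # Illustrative inter-layer filter kernels (not the normative default filters).
    DEFAULT_FILTER = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], dtype=float) / 8.0
    ALT_FILTER = np.ones((3, 3), dtype=float) / 9.0  # e.g. a filter signaled in an SPS, VPS, or slice header

    def generate_inter_layer_refs(ref_layer_pic, scale, filters):
        # One inter-layer reference picture per filter: up-sample the reference
        # layer picture, then apply each inter-layer filter to it.
        upsampled = zoom(ref_layer_pic.astype(float), scale, order=1)
        return [convolve(upsampled, f, mode='nearest') for f in filters]

    def build_ref_pic_list(temporal_refs, ref_layer_pic, scale=2.0):
        # Insert the generated inter-layer reference pictures into the reference
        # picture list used for coding the current enhancement-layer picture.
        inter_layer_refs = generate_inter_layer_refs(
            ref_layer_pic, scale, [DEFAULT_FILTER, ALT_FILTER])
        return list(temporal_refs) + inter_layer_refs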
Abstract:
An apparatus configured to code (e.g., encode or decode) video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer. The processor is configured to up-sample a base layer reference block by using an up-sampling filter when the base and enhancement layers have different resolutions; perform motion compensation interpolation by filtering the up-sampled base layer reference block; determine base layer residual information based on the filtered up-sampled base layer reference block; determine weighted base layer residual information by applying a weighting factor to the base layer residual information; and determine an enhancement layer block based on the weighted base layer residual information. The processor may encode or decode the video information.
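A hedged sketch of the processing chain described above (up-sampling, motion-compensation interpolation, residual formation, weighting). The kernels, the 0.5 weighting factor, the way the base layer residual is formed, and all names are placeholder assumptions, not the actual coding process:

    import numpy as np
    from scipy.ndimage import convolve, zoom

    # Placeholder 3x3 kernel standing in for a motion-compensation interpolation filter.
    MC_INTERP_FILTER = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0

    def up_sample(block, scale):
        # Up-sample only when the base and enhancement layers differ in resolution.
        block = block.astype(float)
        return zoom(block, scale, order=1) if scale != 1 else block

    def predict_el_block(bl_ref_block, bl_recon_block, el_prediction, scale, weight=0.5):
        # 1. Up-sample the base layer reference block.
        up = up_sample(bl_ref_block, scale)
        # 2. Motion-compensation interpolation by filtering the up-sampled block.
        interp = convolve(up, MC_INTERP_FILTER, mode='nearest')
        # 3. Base layer residual based on the filtered up-sampled reference block
        #    (shown here as up-sampled base layer reconstruction minus the filtered
        #    block; this step is an assumption for illustration).
        bl_residual = up_sample(bl_recon_block, scale) - interp
        # 4. Weighted base layer residual, then 5. the enhancement layer block.
        return el_prediction + weight * bl_residual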
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information of a reference layer. The processor determines a value of a video unit based at least in part on a prediction value and an adjusted residual prediction value associated with the reference layer. The adjusted residual prediction value is equal to a residual prediction from the reference layer multiplied by a weighting factor that is different from 1.
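In essence, the video unit's value is its prediction plus the reference layer residual prediction scaled by a weighting factor other than 1. A minimal sketch, with 0.5 used only as an example weight and the function name chosen for illustration:

    import numpy as np

    def reconstruct_video_unit(prediction, ref_layer_residual, weight=0.5):
        # Value of the video unit = prediction + adjusted residual prediction,
        # where the adjustment is a weighting factor different from 1.
        assert weight != 1
        return np.asarray(prediction, dtype=float) + weight * np.asarray(ref_layer_residual, dtype=float)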
Abstract:
In general, this disclosure describes techniques for improved inter-view residual prediction (IVRP) in three-dimensional video coding. These techniques include determining IVRP availability based on coded block flags and coding modes of residual reference blocks, disallowing IVRP coding when a block is inter-view predicted, using picture order count (POC) values to determine whether IVRP is permitted, applying IVRP to prediction units (PUs) rather than coding units (CUs), inferring values of IVRP flags when a block is coded in skip or merge mode, using the IVRP flag of a neighboring block to determine the context for coding the IVRP flag of a current block, and avoiding the resetting of samples of a residual reference block to zero during its generation.
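A hypothetical sketch of two of the listed ideas: gating IVRP availability on the coded block flag, coding mode, inter-view prediction status, and POC values, and selecting a context for the IVRP flag from neighboring blocks' flags. The Block structure and every condition shown are illustrative assumptions, not the normative rules:

    from dataclasses import dataclass

    @dataclass
    class Block:
        # Minimal hypothetical block descriptor for illustration.
        cbf: bool = False                      # coded block flag (non-zero residual present)
        mode: str = 'INTER'                    # coding mode of the block
        is_inter_view_predicted: bool = False
        ivrp_flag: bool = False

    def ivrp_available(residual_ref_block, current_block, cur_poc, ref_poc):
        # Availability check reflecting the listed conditions.
        if not residual_ref_block.cbf:              # no residual to predict from
            return False
        if residual_ref_block.mode != 'INTER':      # residual of a non-inter block is not used
            return False
        if current_block.is_inter_view_predicted:   # disallow IVRP for inter-view predicted blocks
            return False
        if cur_poc == ref_poc:                      # POC-based check on whether IVRP is permitted
            return False
        return True

    def ivrp_flag_context(left_nb, above_nb):
        # Context index for coding the current block's IVRP flag, derived from the
        # IVRP flags of neighboring blocks (a common CABAC context pattern).
        ctx = 0
        if left_nb is not None and left_nb.ivrp_flag:
            ctx += 1
        if above_nb is not None and above_nb.ivrp_flag:
            ctx += 1
        return ctx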