Abstract:
This disclosure describes techniques for managing filter information for use with an adaptive loop filter or another in-loop filter in a video encoder or decoder. In particular, a temporal buffer is managed to store filter coefficients for pictures of a group of pictures based on whether a picture is the starting point of the group of pictures.
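A minimal sketch of how such a temporal buffer could be managed; the class and method names (FilterCoeffBuffer, on_picture_coded, lookup) are hypothetical and not taken from any codec specification.

```python
# Illustrative sketch only: a temporal buffer of filter coefficients that is
# reset whenever a picture starts a new group of pictures (GOP).

class FilterCoeffBuffer:
    """Temporal buffer of in-loop filter coefficients, scoped to one GOP."""

    def __init__(self):
        self._stored = {}  # picture order count -> list of filter coefficients

    def on_picture_coded(self, poc, starts_gop, coefficients):
        if starts_gop:
            self._stored.clear()          # drop filters from the previous GOP
        self._stored[poc] = coefficients  # keep filters reusable within the GOP

    def lookup(self, poc):
        return self._stored.get(poc)


# Example: the buffer is cleared at the picture that starts a new GOP.
buf = FilterCoeffBuffer()
buf.on_picture_coded(poc=0, starts_gop=True, coefficients=[1, 2, 3])
buf.on_picture_coded(poc=1, starts_gop=False, coefficients=[4, 5, 6])
buf.on_picture_coded(poc=8, starts_gop=True, coefficients=[7, 8, 9])
assert buf.lookup(1) is None and buf.lookup(8) == [7, 8, 9]
```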
Abstract:
A method of decoding video data comprising parsing a sub-prediction unit motion flag from received encoded video data, deriving a list of sub-prediction unit level motion prediction candidates if the sub-prediction unit motion flag is active, deriving a list of prediction unit level motion prediction candidates if the sub-prediction unit motion flag is not active, and decoding the encoded video data using a selected motion vector predictor.
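The decoding flow can be pictured with a short sketch; the helper names (derive_sub_pu_candidates, derive_pu_candidates) and the dictionary-based prediction unit are assumptions made only for illustration.

```python
# Hypothetical sketch: the parsed sub-PU motion flag selects which candidate
# list is derived before the motion vector predictor is picked.

def derive_sub_pu_candidates(pu):
    """Placeholder: sub-prediction-unit level motion prediction candidates."""
    return pu["sub_pu_candidates"]

def derive_pu_candidates(pu):
    """Placeholder: prediction-unit level motion prediction candidates."""
    return pu["pu_candidates"]

def select_motion_predictor(pu, sub_pu_motion_flag, candidate_index):
    if sub_pu_motion_flag:
        candidates = derive_sub_pu_candidates(pu)   # sub-PU level list
    else:
        candidates = derive_pu_candidates(pu)       # PU level list
    return candidates[candidate_index]              # selected MV predictor

# Example usage with dummy candidate lists.
pu = {"sub_pu_candidates": [("mv_a", 0), ("mv_b", 1)],
      "pu_candidates": [("mv_c", 0)]}
assert select_motion_predictor(pu, True, 1) == ("mv_b", 1)
```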
Abstract:
Techniques are described in which a decoder is configured to receive an input data block and apply an inverse non-separable transform to at least part of the input data block to generate an inverse non-separable transform output coefficient block. Applying the inverse non-separable transform comprises assigning a window, assigning a weight for each position inside the assigned window, and determining the inverse non-separable transform output coefficient block based on the assigned weights. The decoder is further configured to form a decoded video block based on the determined inverse non-separable transform output coefficient block, wherein forming the decoded video block comprises summing a residual video block with one or more predictive blocks.
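A simplified sketch of the windowed, weighted inverse non-separable transform; modeling it as a single dense matrix acting on the flattened top-left window, together with the window size and weights used below, is an assumption for illustration rather than any standardized transform.

```python
import numpy as np

def inverse_nsst(coeff_block, inv_transform, window=4, weights=None):
    """Apply a non-separable inverse transform to the assigned window,
    weighting each position inside the window before the transform."""
    out = coeff_block.astype(float)
    sub = out[:window, :window].reshape(-1)      # positions inside the window
    if weights is None:
        weights = np.ones_like(sub)              # default: uniform weighting
    sub = inv_transform @ (weights * sub)        # non-separable: one matrix
                                                 # acts on the whole window
    out[:window, :window] = sub.reshape(window, window)
    return out

# Example with an identity "transform" so only the weights have an effect.
block = np.arange(64).reshape(8, 8)
weighted = inverse_nsst(block, np.eye(16), window=4, weights=np.full(16, 0.5))
```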
Abstract:
According to certain aspects, an apparatus for coding video information includes a memory and a processor configured to determine whether a first syntax element is present in a bitstream, the first syntax element associated with a sequence parameter set (SPS) and a first flag indicative of whether a temporal identifier (ID) of a reference picture for pictures that refer to the SPS can be nested; and in response to determining that the first syntax element is not present in the bitstream: obtain a second syntax element indicative of a maximum number of temporal sub-layers in a particular layer of the plurality of layers; and determine whether to set the first flag equal to a second flag indicative of whether a temporal ID of a reference picture for any pictures can be nested based at least in part on a value of the second syntax element.
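A hedged sketch of this inference step; the syntax element and flag names and the exact fallback rule shown (inheriting the general flag only when more than one temporal sub-layer is allowed) are illustrative assumptions, not the normative derivation.

```python
# Sketch: when the SPS-level nesting flag is absent from the bitstream, use
# the maximum number of temporal sub-layers to decide how to set it.

def infer_sps_temporal_nesting(sps, general_temporal_id_nesting_flag):
    if "sps_temporal_id_nesting_flag" in sps:
        return sps["sps_temporal_id_nesting_flag"]     # explicitly signaled
    max_sub_layers = sps["max_sub_layers"]             # second syntax element
    if max_sub_layers > 1:
        # Multiple sub-layers: set the flag equal to the general flag.
        return general_temporal_id_nesting_flag
    # Single sub-layer: temporal ID nesting trivially holds.
    return True
```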
Abstract:
An apparatus for coding video information according to certain aspects includes a memory and a processor. The memory is configured to store video information associated with one or more layers. The processor is configured to code a current access unit (AU) in a bitstream including a plurality of layers, the plurality of layers including a reference layer and at least one corresponding enhancement layer. The processor is further configured to code a first end of sequence (EOS) network abstraction layer (NAL) unit associated with the reference layer in the current AU, the first EOS NAL unit having the same layer identifier (ID) as the reference layer. The processor is also configured to code a second EOS NAL unit associated with the enhancement layer in the current AU, the second EOS NAL unit having the same layer ID as the enhancement layer.
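A sketch of emitting one EOS NAL unit per layer in the current access unit; the dictionary representation of a NAL unit is an assumption, though the EOS NAL unit type value shown follows HEVC numbering.

```python
EOS_NUT = 36  # end-of-sequence NAL unit type in HEVC numbering

def append_eos_nal_units(access_unit, layer_ids):
    """Append one EOS NAL unit per layer, each carrying that layer's ID."""
    for layer_id in layer_ids:
        access_unit.append({"nal_unit_type": EOS_NUT,
                            "nuh_layer_id": layer_id})
    return access_unit

# Reference layer (ID 0) and one enhancement layer (ID 1), as in the abstract.
au = append_eos_nal_units([], layer_ids=[0, 1])
assert [nal["nuh_layer_id"] for nal in au] == [0, 1]
```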
Abstract:
An apparatus configured to code video information includes a memory and a processor in communication with the memory. The memory is configured to store video information associated with a reference layer and an enhancement layer, the reference layer comprising a reference layer (RL) picture having a first slice and a second slice, and the enhancement layer comprising an enhancement layer (EL) picture corresponding to the RL picture. The processor is configured to generate an inter-layer reference picture (ILRP) by upsampling the RL picture, the ILRP having a single slice associated therewith, set slice information of the single slice of the ILRP equal to slice information of the first slice, and use the ILRP to code at least a portion of the EL picture. The processor may encode or decode the video information.
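A sketch of the ILRP construction under stated assumptions: nearest-neighbour upsampling stands in for the real resampling filter, and pictures and slices are plain dictionaries.

```python
def upsample(samples, scale):
    """Placeholder nearest-neighbour upsampling of a 2-D list of samples."""
    return [[row[x // scale] for x in range(len(row) * scale)]
            for row in samples for _ in range(scale)]

def generate_ilrp(rl_picture, scale):
    """Build an inter-layer reference picture with a single slice whose
    slice information is copied from the first slice of the RL picture."""
    ilrp_samples = upsample(rl_picture["samples"], scale)
    ilrp_slice = dict(rl_picture["slices"][0])   # copy first slice's info
    return {"samples": ilrp_samples, "slices": [ilrp_slice]}

# Example: a 2x2 RL picture with two slices becomes a 4x4 ILRP with one slice.
rl = {"samples": [[1, 2], [3, 4]],
      "slices": [{"qp": 30}, {"qp": 32}]}
ilrp = generate_ilrp(rl, scale=2)
assert len(ilrp["slices"]) == 1 and ilrp["slices"][0]["qp"] == 30
```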
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a reference layer (RL) and an enhancement layer, the RL comprising an RL picture having an output region that includes a portion of the RL picture. The processor is configured to determine whether a condition indicates that information outside of the output region is available to predict a current block in the enhancement layer. The processor may encode or decode the video information.
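A minimal sketch of the availability check; reducing the "condition" to a single constraint flag is an assumption made to keep the example small.

```python
def outside_output_region_available(rl_picture, constraint_flag):
    """Return True if samples outside the RL output (cropping) region may be
    used to predict a current block in the enhancement layer."""
    covers_whole_picture = (rl_picture["output_region"]
                            == rl_picture["full_region"])
    # If the output region already covers the whole picture there is nothing
    # outside it; otherwise the constraint flag decides availability.
    return covers_whole_picture or not constraint_flag
```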
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a first layer having a first spatial resolution and a corresponding second layer having a second spatial resolution, wherein the first spatial resolution is less than the second spatial resolution. The video information includes at least motion field information associated with the first layer. The processor upsamples the motion field information associated with the first layer. The processor further adds an inter-layer reference picture including the upsampled motion field information in association with an upsampled texture picture of the first layer to a reference picture list to be used for inter prediction. The processor may encode or decode the video information.
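A sketch of motion field upsampling and reference list insertion; scaling each motion vector and repeating it on a finer grid is a simplification of the block-granularity resampling a real coder would use.

```python
def upsample_motion_field(motion_field, scale):
    """Scale each motion vector and replicate it to the finer grid."""
    up = []
    for row in motion_field:
        up_row = [(mvx * scale, mvy * scale)
                  for (mvx, mvy) in row
                  for _ in range(scale)]
        up.extend([up_row] * scale)
    return up

def add_inter_layer_reference(ref_pic_list, upsampled_texture,
                              motion_field, scale):
    """Append an ILRP carrying both the upsampled texture and the upsampled
    motion field, so it can be used for inter prediction."""
    ilrp = {"texture": upsampled_texture,
            "motion_field": upsample_motion_field(motion_field, scale)}
    ref_pic_list.append(ilrp)
    return ref_pic_list
```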
Abstract:
Systems and methods for determining information about an enhancement layer of digital video based on information included in a base layer of digital video are described. In one innovative aspect, an apparatus for coding digital video is provided. The apparatus includes a memory for storing a base layer of digital video information and an enhancement layer of digital video information. The apparatus determines a syntax element value for a portion of the enhancement layer based on a syntax element value for a corresponding portion of the base layer. Decoding devices and methods as well as corresponding encoding devices and methods are described.
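A hypothetical sketch of deriving an enhancement-layer syntax element from the co-located base-layer block; the element shown (intra_mode) and the co-location mapping are assumptions for illustration only.

```python
def colocated_base_block(bl_syntax, el_x, el_y, scale):
    """Map enhancement-layer block coordinates to the base-layer grid."""
    return bl_syntax[el_y // scale][el_x // scale]

def derive_el_syntax_value(bl_syntax, el_x, el_y, scale, element="intra_mode"):
    """Reuse the base-layer value instead of signaling the element again."""
    base_block = colocated_base_block(bl_syntax, el_x, el_y, scale)
    return base_block[element]

# Example: a 1x1 base-layer grid predicts the value for any EL block.
bl = [[{"intra_mode": 26}]]
assert derive_el_syntax_value(bl, el_x=1, el_y=1, scale=2) == 26
```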
Abstract:
A first reference index value indicates a position, within a reference picture list associated with a current prediction unit (PU) of a current picture, of a first reference picture. A reference index of a co-located PU of a co-located picture indicates a position, within a reference picture list associated with the co-located PU of the co-located picture, of a second reference picture. When the first reference picture and the second reference picture belong to different reference picture types, a video coder sets a reference index of a temporal merging candidate to a second reference index value. The second reference index value is different than the first reference index value.
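A sketch of the reference index adjustment; treating the reference picture "type" as short-term vs. long-term and searching the list for a matching type are assumptions about how the second index value could be chosen.

```python
def temporal_merge_ref_idx(ref_pic_list, colocated_ref_type, default_idx=0):
    """Return the reference index for the temporal merging candidate."""
    if ref_pic_list[default_idx]["type"] == colocated_ref_type:
        return default_idx                       # first reference index value
    # Types differ: pick a different index whose picture type matches the
    # co-located reference, so the derived motion vector stays meaningful.
    for idx, pic in enumerate(ref_pic_list):
        if pic["type"] == colocated_ref_type:
            return idx                           # second reference index value
    return default_idx

# Example: index 0 is short-term, but the co-located PU used a long-term
# reference, so index 1 is chosen instead.
ref_list = [{"type": "short_term"}, {"type": "long_term"}]
assert temporal_merge_ref_idx(ref_list, "long_term") == 1
```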