Abstract:
Techniques are described for adaptation parameter sets (APS) for adaptive loop filter (ALF) parameters. One example involves obtaining an APS identifier (ID) value and an APS type value associated with a NAL unit from a bitstream. A first APS associated with at least a portion of at least one picture is identified, the first APS being uniquely identified by the combination of the APS type value and the APS ID value, where the APS ID value of the first APS falls within a range determined by the APS type value. The portion of the at least one picture is then reconstructed using an adaptive loop filter with parameters defined by the first APS uniquely identified by the APS type value and the APS ID value.
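A minimal sketch of the lookup structure this abstract implies: an APS store keyed by the (type, ID) pair, with the valid ID range depending on the APS type. The type names and ranges below are illustrative (loosely modeled on VVC, where ALF APS IDs span 0..7 and LMCS APS IDs span 0..3), not taken from the abstract itself.

```python
# Illustrative APS store: the (type, id) pair uniquely identifies an APS,
# and the valid ID range is a function of the APS type.
APS_ID_RANGES = {
    "ALF": range(0, 8),    # e.g., ALF APS IDs 0..7 (VVC-like, illustrative)
    "LMCS": range(0, 4),   # e.g., LMCS APS IDs 0..3 (illustrative)
}

class ApsStore:
    def __init__(self):
        self._store = {}

    def add(self, aps_type, aps_id, params):
        if aps_id not in APS_ID_RANGES[aps_type]:
            raise ValueError(f"APS ID {aps_id} out of range for type {aps_type}")
        # A new APS with the same (type, id) pair replaces the old one,
        # since the pair is the unique identifier.
        self._store[(aps_type, aps_id)] = params

    def get(self, aps_type, aps_id):
        return self._store[(aps_type, aps_id)]
```

Because the type is part of the key, an ALF APS and an LMCS APS may share the same numeric ID without colliding.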
Abstract:
Techniques are described herein for processing video data. For instance, a current block of a picture of the video data can be obtained, and it can be determined that the current block includes more than one virtual pipeline data unit (VPDU). Current neighbor samples for the current block, reference neighbor samples for the current block, and additional neighbor samples for the current block can be obtained for illumination compensation. One or more illumination compensation parameters can be determined for the current block using the current neighbor samples, the reference neighbor samples, and the additional neighbor samples. The additional neighbor samples are used for determining the one or more illumination compensation parameters based on the current block covering more than one VPDU. Illumination compensation can be performed for the current block using the one or more illumination compensation parameters.
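The parameter derivation described above is commonly a linear model fit between reference-neighbor and current-neighbor samples. The sketch below uses a floating-point least-squares fit as a simplified stand-in for the integer derivation a codec would use; the function names are hypothetical, and the "additional" neighbor samples for a multi-VPDU block would simply be concatenated into the input sample lists.

```python
def lic_params(cur_neigh, ref_neigh):
    """Fit cur ~ a * ref + b by least squares over neighbor samples.

    A simplified, floating-point stand-in for the integer parameter
    derivation used in illumination compensation.
    """
    n = len(cur_neigh)
    sx = sum(ref_neigh)
    sy = sum(cur_neigh)
    sxx = sum(x * x for x in ref_neigh)
    sxy = sum(x * y for x, y in zip(ref_neigh, cur_neigh))
    denom = n * sxx - sx * sx
    if denom == 0:
        # Flat neighborhood: fall back to an offset-only model.
        return 1.0, (sy - sx) / n
    a = (n * sxy - sx * sy) / denom
    b = (sy - a * sx) / n
    return a, b

def apply_lic(pred_samples, a, b):
    # Apply the illumination-compensation model to prediction samples.
    return [a * p + b for p in pred_samples]
```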
Abstract:
An apparatus for coding video information according to certain aspects includes computing hardware. The computing hardware is configured to: identify a current picture to be predicted using at least one type of inter layer prediction (ILP), the type of ILP comprising one or more of inter layer motion prediction (ILMP) or inter layer sample prediction (ILSP); and control: (1) a number of pictures that may be resampled and used to predict the current picture using ILMP and (2) a number of pictures that may be resampled and used to predict the current picture using ILSP, wherein the computing hardware is configured to control the number of pictures that may be resampled and used to predict the current picture using ILMP independent of the number of pictures that may be resampled and used to predict the current picture using ILSP.
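The independent control described above can be pictured as two separately enforced limits, one counting pictures resampled for ILMP and one counting pictures resampled for ILSP. This is only a schematic sketch; the tag strings and function name are illustrative, not from the abstract.

```python
def resampling_within_limits(resampled_uses, max_ilmp, max_ilsp):
    """Check two independently controlled resampling limits.

    resampled_uses: one "ILMP" or "ILSP" tag per resampled reference
    picture used to predict the current picture. Each limit is enforced
    without regard to the other count.
    """
    return (resampled_uses.count("ILMP") <= max_ilmp
            and resampled_uses.count("ILSP") <= max_ilsp)
```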
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor. The memory unit is configured to store video information associated with a reference layer picture and an enhancement layer picture. The processor is configured to: store video information associated with a reference layer picture and an enhancement layer picture; receive a scale factor that indicates a proportion of scaling between the reference layer picture and the enhancement layer picture in a first direction; determine, without performing a division operation, a rounding offset value using the scale factor; and determine a coordinate in the first direction of a first sample located in the reference layer picture that corresponds to a second sample located in the enhancement layer picture using the scale factor and the rounding offset value.
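A fixed-point sketch of the division-free coordinate mapping the abstract describes: the scale factor is computed once per picture, and each per-sample mapping then uses only a multiply, an add of a rounding offset derived from the shift (no division), and a right shift. The 16-bit shift and function names are illustrative assumptions.

```python
SHIFT = 16  # illustrative fixed-point precision

def make_scale_factor(ref_size, enh_size):
    # One-time setup per picture; the per-sample mapping below is
    # division-free.
    return ((ref_size << SHIFT) + (enh_size >> 1)) // enh_size

def map_coordinate(x_enh, scale_factor):
    # Rounding offset derived purely from the shift (no division),
    # giving round-to-nearest behavior in the fixed-point product.
    rounding_offset = 1 << (SHIFT - 1)
    return (x_enh * scale_factor + rounding_offset) >> SHIFT
```

For 2x spatial scalability (e.g., a 960-wide reference layer under a 1920-wide enhancement layer), enhancement-layer coordinate 100 maps to reference-layer coordinate 50.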
Abstract:
An apparatus configured to code (e.g., encode or decode) video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a reference layer and an enhancement layer, the enhancement layer comprising an enhancement layer (EL) picture and the reference layer comprising a reference layer (RL) picture. The processor is configured to generate an inter-layer reference picture (ILRP) by resampling the RL picture; and determine whether, at a predetermined time, the ILRP was treated as a short-term or long-term reference picture. The processor may encode or decode the video information.
Abstract:
In one implementation, an apparatus is provided for encoding or decoding video information. The apparatus comprises a memory unit configured to store video information associated with a base layer and/or an enhancement layer. The apparatus further comprises a processor operationally coupled to the memory unit. In one embodiment, the processor is configured to determine a scaling factor based on spatial dimension values associated with the base and enhancement layers such that the scaling factor is constrained within a predetermined range. The processor is also configured to spatially scale an element associated with the base layer or enhancement layer using the scaling factor and a temporal motion vector scaling process.
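A sketch of a scaling factor constrained to a predetermined range and applied through a temporal-motion-vector-style scaling process, modeled on HEVC's temporal MV scaling (factor clipped to [-4096, 4095], inputs clipped to [-128, 127]). The specific constants are HEVC's; whether this patent uses the same range is an assumption.

```python
def clip3(lo, hi, v):
    return min(hi, max(lo, v))

def _div_trunc(a, b):
    # Integer division truncating toward zero, as specs typically define it
    # (Python's // floors for negative operands, so we handle sign here).
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b > 0) else -q

def mv_scale_factor(tb, td):
    # Derive a scale factor constrained to the predetermined range
    # [-4096, 4095], HEVC-style.
    tb = clip3(-128, 127, tb)
    td = clip3(-128, 127, td)
    tx = _div_trunc(16384 + (abs(td) >> 1), td)
    return clip3(-4096, 4095, (tb * tx + 32) >> 6)

def scale_element(value, factor):
    # Apply the constrained factor to an element (e.g., an MV component),
    # with round-to-nearest and clipping to 16-bit signed range.
    prod = factor * value
    sign = 1 if prod >= 0 else -1
    return clip3(-32768, 32767, sign * ((abs(prod) + 127) >> 8))
```

A factor of 256 corresponds to unity scaling, so equal distances (tb == td) leave the element unchanged.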
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer. The processor is configured to, in response to determining that the video information associated with the enhancement layer is to be determined based upon the video information associated with the base layer, select between a first transform and a second transform based at least in part on at least one of a transform unit (TU) size and a color component type of the enhancement layer video information.
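A toy version of such a selection, using HEVC's well-known rule (DST-VII for 4x4 luma TUs, DCT-II otherwise) as a stand-in; the abstract does not say which transforms or thresholds the apparatus actually selects between, so the rule below is purely illustrative.

```python
def select_transform(tu_size, component, predicted_from_base_layer=True):
    # Illustrative selection between two transforms based on TU size and
    # color component, applied when the enhancement-layer block is
    # determined from base-layer information. Modeled on HEVC's
    # DST-VII / DCT-II rule, not on the patent's actual criteria.
    if predicted_from_base_layer and tu_size == 4 and component == "luma":
        return "DST-VII"
    return "DCT-II"
```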
Abstract:
In one embodiment, an apparatus configured to code video data includes a processor and a memory unit. The memory unit stores video data associated with a first layer having a first spatial resolution and a second layer having a second spatial resolution. The video data associated with the first layer includes at least a first layer block and first layer prediction mode information associated with the first layer block, and the first layer block includes a plurality of sub-blocks where each sub-block is associated with respective prediction mode data of the first layer prediction mode information. The processor derives the prediction mode data associated with one of the plurality of sub-blocks based at least on a selection rule, upsamples the derived prediction mode data and the first layer block, and associates the upsampled prediction mode data with each upsampled sub-block of the upsampled first layer block.
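The derive-then-replicate step can be sketched as follows: pick one sub-block's mode according to a selection rule (the "top_left" and "center" rules below are illustrative guesses, not taken from the abstract), then associate that mode with every sub-block of the upsampled block.

```python
def derive_and_upsample_modes(mode_grid, scale, rule="top_left"):
    """Derive one sub-block's prediction mode via a selection rule and
    associate it with each sub-block of the upsampled block.

    mode_grid: 2-D grid of per-sub-block prediction modes (first layer).
    scale: spatial upsampling ratio between the two layers.
    """
    h, w = len(mode_grid), len(mode_grid[0])
    if rule == "top_left":
        mode = mode_grid[0][0]
    elif rule == "center":
        mode = mode_grid[h // 2][w // 2]
    else:
        raise ValueError(f"unknown selection rule: {rule}")
    # Every upsampled sub-block gets the derived mode.
    return [[mode] * (w * scale) for _ in range(h * scale)]
```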
Abstract:
A method of decoding video data includes receiving syntax elements extracted from an encoded video bitstream, determining a candidate list for an enhancement layer block, and selectively pruning the candidate list. The syntax elements include information associated with a base layer block of a base layer of the video data. The candidate list is determined based at least in part on motion information associated with the base layer block. The enhancement layer block is in an enhancement layer of the video data. The candidate list includes at least one motion information candidate that includes the motion information associated with the base layer block. The candidate list includes a merge list or an AMVP list. Pruning includes comparing one or more motion information candidates in the candidate list with the at least one motion information candidate associated with the base layer block.
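A minimal sketch of such pruning, assuming each candidate is a (mv_x, mv_y, ref_idx) tuple: the base-layer candidate is kept, and other candidates whose motion information duplicates it (or an earlier candidate) are dropped before the list is truncated. The representation and function name are illustrative.

```python
def build_pruned_candidate_list(spatial_temporal_cands, base_layer_cand, max_size):
    """Build a merge/AMVP-style candidate list containing the base-layer
    motion candidate, pruning candidates with identical motion info.

    Candidates are (mv_x, mv_y, ref_idx) tuples; comparison is plain
    equality of the motion information.
    """
    result = [base_layer_cand]
    for cand in spatial_temporal_cands:
        if len(result) == max_size:
            break
        # Prune: skip any candidate equal to the base-layer candidate
        # (or to one already in the list).
        if cand not in result:
            result.append(cand)
    return result
```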
Abstract:
A method of decoding data indicative of a subset of transform coefficients is described. The coefficients are indicative of a block of video data. The method may include determining that no transform coefficient in the subset of transform coefficients has an absolute value greater than one, and, based on the determining, skipping one or more decoding passes on the subset of transform coefficients, the decoding passes relating to decoding level information associated with the subset of transform coefficients.
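The pass-skipping idea can be sketched as below: the significance pass yields provisional levels of 0 or 1, and when it is determined that no coefficient in the subset exceeds 1 in absolute value, the subsequent level passes (greater-than-1 flags and remaining-level values) are skipped entirely. The flag and pass structure here is a simplification of real residual coding, and the names are illustrative.

```python
def decode_subset_levels(sig_flags, skip_level_passes, gt1_flags=(), remainders=()):
    """Reconstruct absolute coefficient levels for one subset.

    sig_flags: significance flags (pass 1), one per coefficient.
    skip_level_passes: True when no level in the subset exceeds 1,
        so the remaining decoding passes carry no information.
    """
    # Pass 1: significance gives a provisional level of 0 or 1.
    levels = [1 if s else 0 for s in sig_flags]
    if skip_level_passes:
        # Skip the greater-than-1 / remaining-level passes entirely.
        return levels
    gt1 = iter(gt1_flags)
    rem = iter(remainders)
    for i, s in enumerate(sig_flags):
        if s and next(gt1):
            # Simplified: level = 2 + coded remainder.
            levels[i] = 2 + next(rem)
    return levels
```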