Abstract:
Systems and techniques are described for processing video data. For example, an apparatus can determine, for a sample of a first block of video data, histogram of gradient (HoG) information based on at least one sample from a second block neighboring the first block. The apparatus can determine, based on the HoG information, an angle associated with a direction of a gradient for the sample and the at least one sample from the second block neighboring the first block. The apparatus can further compare the angle to one or more predefined values and determine an index associated with the angle based on the comparison of the angle to the one or more predefined values. The apparatus can then determine, based on the index, an intra-prediction mode for coding the first block of video data.
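The derivation this abstract describes can be sketched as follows. This is a minimal illustration, not the claimed implementation: the predefined angle thresholds, the bin layout, and the function names are all assumptions, and the gradient values are taken as given rather than computed with the filter taps a real codec would use.

```python
import math

# Hypothetical predefined values (degrees) splitting [0, 180) into
# coarse direction bins; a real codec uses a finer table tied to its
# directional intra-prediction modes.
PREDEFINED_ANGLES = [22.5, 67.5, 112.5, 157.5]

def gradient_angle(gx, gy):
    """Angle of the gradient direction for a sample, folded into [0, 180)."""
    return math.degrees(math.atan2(gy, gx)) % 180.0

def angle_index(angle):
    """Compare the angle to the predefined values; the index of the first
    threshold the angle falls below identifies its direction bin."""
    for idx, threshold in enumerate(PREDEFINED_ANGLES):
        if angle < threshold:
            return idx
    return 0  # angles near 180 degrees wrap into the same bin as near 0

def derive_intra_mode(neighbour_gradients):
    """Accumulate a histogram of gradient (HoG) over (gx, gy, magnitude)
    triples taken from samples of the neighbouring block, then return the
    dominant bin's index as the derived intra-prediction mode index."""
    hist = {}
    for gx, gy, mag in neighbour_gradients:
        idx = angle_index(gradient_angle(gx, gy))
        hist[idx] = hist.get(idx, 0.0) + mag
    return max(hist, key=hist.get)
```

Weighting each bin by gradient magnitude (rather than a plain count) lets strong edges in the neighbouring block dominate the derived mode.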
Abstract:
Techniques are described herein for processing video data using enhanced interpolation filters for intra-prediction. For instance, a device can determine an intra-prediction mode for predicting a block of video data. The device can determine a type of smoothing filter to use for the block of video data, wherein the type of the smoothing filter is determined based at least in part on comparing at least one of a width of the block of video data and a height of the block of video data to a first threshold. The device can further perform intra-prediction for the block of video data using the determined type of smoothing filter and the intra-prediction mode.
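One plausible reading of the filter-type selection can be sketched in Python. The filter names and the threshold value below are illustrative assumptions; the abstract only specifies that the choice compares the block's width and/or height against a first threshold.

```python
# Hypothetical first threshold on block dimensions.
SIZE_THRESHOLD = 8

def choose_smoothing_filter(width, height):
    """Pick a smoothing-filter type by comparing the block's width and
    height to a threshold; the three type names are illustrative."""
    if width <= SIZE_THRESHOLD and height <= SIZE_THRESHOLD:
        return "no_smoothing"          # small blocks: skip smoothing
    if width <= SIZE_THRESHOLD or height <= SIZE_THRESHOLD:
        return "reference_smoothing"   # mixed: filter the reference samples
    return "interpolation_smoothing"   # large: fold smoothing into interpolation
```

Intra-prediction for the block would then proceed with the returned filter type and the already-determined intra-prediction mode.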
Abstract:
A device capable of compressing video data includes a memory configured to store a luma new filter value, a chroma new filter value, a cross component Cb new filter value, and a cross component Cr new filter value. The device may also include one or more processors, coupled to the memory, configured to set a joint constraint on the luma new filter value, the chroma new filter value, the cross component Cb new filter value, and the cross component Cr new filter value, such that each of the luma new filter value, the chroma new filter value, the cross component Cb new filter value, and the cross component Cr new filter value are not disabled in a unit associated with an adaptation parameter set having a first adaptation parameter set identification (APS ID).
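The joint constraint can be sketched as a conformance check over the four new-filter values carried in the APS. The dictionary field names are illustrative assumptions, not the bitstream syntax element names.

```python
def joint_constraint_satisfied(aps):
    """Joint constraint for an APS with a given APS ID: none of the four
    new-filter values (luma, chroma, cross-component Cb, cross-component
    Cr) may be disabled in a unit that refers to this APS."""
    flags = (
        aps["luma_new_filter"],
        aps["chroma_new_filter"],
        aps["cc_cb_new_filter"],
        aps["cc_cr_new_filter"],
    )
    return all(flags)
```

An encoder would enforce this when writing the APS; a decoder could use the same check to reject a non-conforming bitstream.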
Abstract:
A rectangular block of video data is obtained, and the lengths of first and second sides of the block are determined. Intra-coded samples may be excluded from the first and/or second sides, or replaced with samples from a reference block. Lengths of the first and second sides are then determined based on the non-excluded samples, and based on these lengths either the shorter or the longer side is selected. In some cases, additional samples may be excluded so that the total number of samples is a power of two. Illumination compensation parameters are determined based on the remaining (non-excluded) samples neighboring the current block.
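The derivation above can be sketched in Python. The least-squares scale/offset model (cur ≈ a·ref + b) is a common form of illumination-compensation parameter, but here it is an assumption, as are all the names; the abstract does not specify the fitting method.

```python
def largest_power_of_two_at_most(n):
    """Largest power of two not exceeding n (n >= 1)."""
    p = 1
    while p * 2 <= n:
        p *= 2
    return p

def ic_parameters(samples):
    """Derive illumination-compensation parameters (a, b) from neighbouring
    sample pairs.  `samples` is a list of (cur, ref, is_intra) triples:
    intra-coded neighbours are excluded, the remainder is trimmed to a
    power-of-two count, and a least-squares fit gives cur ~ a * ref + b."""
    pairs = [(c, r) for c, r, is_intra in samples if not is_intra]
    n = largest_power_of_two_at_most(len(pairs))
    pairs = pairs[:n]
    sum_c = sum(c for c, _ in pairs)
    sum_r = sum(r for _, r in pairs)
    sum_rr = sum(r * r for _, r in pairs)
    sum_cr = sum(c * r for c, r in pairs)
    denom = n * sum_rr - sum_r * sum_r
    if denom == 0:
        return 1.0, (sum_c - sum_r) / n  # degenerate: offset-only model
    a = (n * sum_cr - sum_c * sum_r) / denom
    b = (sum_c - a * sum_r) / n
    return a, b
```

Trimming to a power of two lets a hardware implementation replace the divisions by shifts.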
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a reference layer (RL) and an enhancement layer (EL), the RL having an RL picture in a first access unit, and the EL having a first EL picture in the first access unit, wherein the first EL picture is associated with a first set of parameters. The processor is configured to determine whether the first EL picture is an intra random access point (IRAP) picture, determine whether the first access unit immediately follows a splice point where first video information is joined with second video information including the first EL picture, and perform, based on those two determinations, one of (1) refraining from associating the first EL picture with a second set of parameters that is different from the first set of parameters, or (2) associating the first EL picture with the second set of parameters. The processor may encode or decode the video information.
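The two-way decision described above can be sketched as follows. The policy shown (switch parameter sets only for an IRAP picture immediately following a splice point) is one plausible reading of the abstract, which itself leaves the policy unspecified.

```python
def parameters_for_el_picture(first_set, second_set, is_irap, follows_splice):
    """Choose which parameter set to associate with the first EL picture,
    based on whether it is an IRAP picture and whether its access unit
    immediately follows a splice point.  The policy is illustrative."""
    if is_irap and follows_splice:
        return second_set   # associate with a different set of parameters
    return first_set        # refrain from switching parameter sets
```

Keeping the original parameter set except at an IRAP after a splice matches the intuition that only a random-access point can safely re-initialise decoding state.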
Abstract:
An apparatus for coding video information may include computing hardware configured to: when a current picture is to be predicted using at least inter layer motion prediction (ILMP): process a collocated reference index value associated with the current picture, wherein the collocated reference index value indicates a first reference picture that is used in predicting the current picture using inter layer prediction (ILP); and determine whether the first reference picture indicated by the collocated reference index value is enabled for ILMP; when the current picture is to be predicted using at least inter layer sample prediction (ILSP): process a reference index value associated with a block in the current picture, wherein the reference index value indicates a second reference picture that is used in predicting the block in the current picture using ILP; and determine whether the second reference picture indicated by the reference index value is enabled for ILSP.
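The two checks described above can be sketched as a single conformance routine. The dictionary structure and field names are illustrative assumptions; the abstract specifies only which reference index is checked against which enable condition.

```python
def check_ilp_constraints(current, reference_pictures):
    """When the current picture uses inter-layer motion prediction (ILMP),
    the picture named by its collocated reference index must be enabled
    for ILMP; when it uses inter-layer sample prediction (ILSP), each
    block's reference index must name an ILSP-enabled picture."""
    if current["uses_ilmp"]:
        ref = reference_pictures[current["collocated_ref_idx"]]
        if not ref["ilmp_enabled"]:
            return False
    if current["uses_ilsp"]:
        for block in current["blocks"]:
            ref = reference_pictures[block["ref_idx"]]
            if not ref["ilsp_enabled"]:
                return False
    return True
```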
Abstract:
An apparatus for coding video information according to certain aspects includes a memory and a processor. The memory unit is configured to store video information associated with an interlayer reference picture for a current picture to be coded. The processor is configured to: receive information relating to a plurality of interlayer reference offsets that are configured to define a region of a resampled version of the interlayer reference picture, wherein the region is used to generate a prediction of the current picture, and wherein the plurality of interlayer reference offsets include a left offset, a top offset, a right offset, and a bottom offset that are each specified relative to the current picture; determine, based at least in part on the plurality of interlayer reference offsets, whether to resample the interlayer reference picture; and in response to determining to resample the interlayer reference picture, resample the interlayer reference picture.
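The resampling decision can be sketched as follows. This is a simplified reading: it assumes resampling is needed exactly when the region carved out of the current picture by the four offsets does not match the reference picture's dimensions, which the abstract does not state explicitly.

```python
def should_resample(cur_w, cur_h, ref_w, ref_h, offsets):
    """Decide whether the interlayer reference picture must be resampled.
    `offsets` is (left, top, right, bottom), each specified relative to
    the current picture, and together they define the prediction region."""
    left, top, right, bottom = offsets
    region_w = cur_w - left - right
    region_h = cur_h - top - bottom
    # Resample when the region's size differs from the reference picture's.
    return (region_w, region_h) != (ref_w, ref_h)
```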
Abstract:
An apparatus configured to code (e.g., encode or decode) video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer, the enhancement layer comprising an enhancement layer (EL) block and the base layer comprising a base layer (BL) block that is co-located with the enhancement layer block. The processor is configured to determine predicted pixel information of the EL block by applying a prediction function to pixel information of the BL block, and to determine the EL block using the predicted pixel information. The processor may encode or decode the video information.
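The prediction step can be sketched as below. The abstract does not say what the prediction function is; a simple affine mapping with 8-bit clipping stands in here, and its parameters are illustrative.

```python
def predict_el_block(bl_block, a=1.0, b=0):
    """Determine predicted pixel information for the EL block by applying
    a prediction function to the co-located BL block's pixels.  The
    affine form p(x) = a*x + b, clipped to [0, 255], is an assumption."""
    def clip(v):
        return max(0, min(255, int(round(v))))
    return [[clip(a * px + b) for px in row] for row in bl_block]
```

The EL block would then be reconstructed from this prediction plus a coded residual.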
Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store a candidate list generated for coding the video information. The candidate list comprises at least one base layer motion vector candidate. The processor is configured to determine a behavior for generating said at least one base layer motion vector candidate, generate said at least one base layer motion vector candidate for a current prediction unit (PU) in a particular coding unit (CU) according to the determined behavior, wherein the particular CU has one or more PUs, and add said at least one base layer motion vector candidate to the candidate list. The processor may encode or decode the video information.
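The generate-then-add flow can be sketched as follows. The two behaviours shown (scale the base-layer vector for an assumed 2x spatial ratio, or take it as-is) and the list-size cap are illustrative assumptions; the abstract only says the candidate is generated according to a determined behavior and added to the list.

```python
def add_bl_mv_candidate(candidate_list, bl_mv, behavior, max_candidates=5):
    """Generate the base-layer motion-vector candidate for the current PU
    according to the determined behaviour, then append it to the
    candidate list (skipping duplicates and respecting a size cap)."""
    if behavior == "scale":
        # Assumed 2x spatial ratio between base and enhancement layers.
        cand = (bl_mv[0] * 2, bl_mv[1] * 2)
    else:
        cand = bl_mv
    if cand not in candidate_list and len(candidate_list) < max_candidates:
        candidate_list.append(cand)
    return candidate_list
```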
Abstract:
An apparatus for coding video data according to certain aspects includes a memory and a processor in communication with the memory. The memory is configured to store video information, such as base layer video information and enhancement layer video information. The processor is configured to determine a value of a current video unit of enhancement layer video information based at least on a weighted inter-layer predictor and a weighted intra-layer predictor of at least one color component of the current video unit.
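The weighted combination can be sketched directly. The single blending weight and the 8-bit clipping are illustrative assumptions; the abstract says only that the value is based on a weighted inter-layer predictor and a weighted intra-layer predictor of at least one color component.

```python
def weighted_prediction(inter_layer_pred, intra_layer_pred, w):
    """Value of the current EL video unit for one colour component:
    blend the inter-layer predictor (from the base layer) with the
    intra-layer predictor (from the EL itself) using weight w in [0, 1]."""
    def clip(v):
        return max(0, min(255, int(round(v))))
    return [clip(w * p + (1.0 - w) * q)
            for p, q in zip(inter_layer_pred, intra_layer_pred)]
```

With w = 1 this degenerates to pure inter-layer prediction, and with w = 0 to pure intra-layer prediction, so the weight lets the coder trade the two sources off per unit.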