Abstract:
An apparatus configured to code video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer. The video information comprises at least one enhancement layer (EL) block and at least one co-located base layer (BL) block. The co-located BL block has motion information associated therewith. The processor is configured to, in response to determining that the size of the EL block is smaller than a threshold size, either (1) use less than all of the motion information associated with the co-located BL block to code the EL block, or (2) refrain from using any motion information associated with the co-located BL block to code the EL block. The processor may encode or decode the video information.
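The size-threshold rule described above can be sketched as a small decision function. This is a minimal illustration, not an implementation of any codec: the names (`motion_info_for_el_block`, `THRESHOLD_SIZE`) and the choice of threshold and of which motion fields to keep are all assumptions.

```python
THRESHOLD_SIZE = 8  # assumed threshold (e.g., 8x8 luma samples); the claim does not fix a value

def motion_info_for_el_block(el_block_size, bl_motion_info):
    """Return the subset of co-located BL motion information used to code the EL block.

    bl_motion_info is a dict such as
    {"mv": (mvx, mvy), "ref_idx": 0, "pred_dir": "bi"}.
    """
    if el_block_size < THRESHOLD_SIZE:
        # Option (1): use less than all of the BL motion information,
        # e.g., keep only the motion vector and drop the rest.
        return {"mv": bl_motion_info["mv"]}
        # Option (2) would instead return {} (no BL motion info at all).
    return dict(bl_motion_info)  # full motion information for larger blocks
```

For a 4x4 EL block only the motion vector survives; for a 16x16 block the full BL motion information is reused.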
Abstract:
In one example, the disclosure is directed to techniques that include receiving a bitstream comprising at least a syntax element, a first network abstraction layer unit type, and a coded access unit comprising a plurality of pictures. The techniques further include determining a value of the syntax element which indicates whether the access unit was coded using cross-layer alignment. The techniques further include determining the first network abstraction layer unit type for a picture in the access unit and determining whether the first network abstraction layer unit type equals a value in a range of type values. The techniques further include setting a network abstraction layer unit type for all other pictures in the coded access unit to equal the value of the first network abstraction layer unit type if the first network abstraction layer unit type is equal to a value in the range of type values.
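The alignment rule above reduces to a single conditional. The sketch below is illustrative only: the function name, the representation of NAL unit types as plain integers, and the example range bounds are assumptions, not taken from any specification table.

```python
def align_nal_unit_types(first_nal_type, other_nal_types,
                         cross_layer_aligned, range_low, range_high):
    """If the access unit is cross-layer aligned and the first picture's
    NAL unit type falls in [range_low, range_high], set every other
    picture's NAL unit type to that same value; otherwise leave them alone.
    """
    if cross_layer_aligned and range_low <= first_nal_type <= range_high:
        return [first_nal_type] * len(other_nal_types)
    return list(other_nal_types)
```

With an assumed range of 16..23, a first type of 19 forces all other pictures in the access unit to type 19, while a first type outside the range leaves the other pictures' types unchanged.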
Abstract:
A video coder uses illumination compensation (IC) to generate a non-square predictive block of a current prediction unit (PU) of a current coding unit (CU) of a current picture of the video data. In doing so, the video coder sub-samples a first set of reference samples such that a total number of reference samples in the first sub-sampled set of reference samples is equal to 2^m. Additionally, the video coder sub-samples a second set of reference samples such that a total number of reference samples in the second sub-sampled set of reference samples is equal to 2^m. The video coder determines a first IC parameter based on the first sub-sampled set of reference samples and the second sub-sampled set of reference samples. The video coder uses the first IC parameter to determine a sample of the non-square predictive block.
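The point of forcing each sub-sampled set to a power-of-two size (2^m samples) is that the averaging division in the IC parameter derivation becomes a right shift. The sketch below assumes a uniform-stride decimation and an offset-only IC model; both are simplifications for illustration, not the claimed derivation.

```python
def subsample_to_power_of_two(samples, m):
    """Pick exactly 2**m samples at a uniform stride (illustrative decimation)."""
    target = 1 << m
    stride = max(1, len(samples) // target)
    return samples[::stride][:target]

def ic_offset(cur_refs, ref_refs, m):
    """Offset-only IC parameter from two sub-sampled reference sets.

    Because each set holds exactly 2**m samples, dividing by the sample
    count is a right shift by m rather than an integer division.
    """
    assert len(cur_refs) == len(ref_refs) == (1 << m)
    return (sum(cur_refs) - sum(ref_refs)) >> m
```

For a non-square PU whose reference row and column together yield, say, 20 samples, sub-sampling to 2^3 = 8 samples keeps the shift-based derivation valid for any block shape.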
Abstract:
Techniques for coding data, such as, e.g., video data, include coding a first syntax element, conforming to a particular type of syntax element, of a first slice of video data, conforming to a first slice type, using an initialization value set. The techniques further include coding a second syntax element, conforming to the same type of syntax element, of a second slice of video data, conforming to a second slice type, using the same initialization value set. In this example, the first slice type may be different from the second slice type. Also in this example, at least one of the first slice type and the second slice type may be a temporally predicted slice type. For example, the at least one of the first and second slice types may be a unidirectional inter-prediction (P) slice type, or a bi-directional inter-prediction (B) slice type.
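The sharing described above amounts to keying the initialization value set by syntax-element type rather than by slice type. A minimal sketch, with entirely assumed initialization values and element names (not from any standard's tables):

```python
# One initialization value set shared across temporally predicted slice types.
INIT_VALUES = {"split_cu_flag": 107, "skip_flag": 197}  # assumed values

def context_init_value(syntax_element, slice_type):
    """Return the CABAC-style initialization value for a syntax element.

    P and B slices draw from the same set, so coding the same type of
    syntax element in either slice type uses the same initialization value.
    """
    if slice_type in ("P", "B"):  # temporally predicted slice types
        return INIT_VALUES[syntax_element]
    raise ValueError("intra slices use a separate table in this sketch")
```

The observable consequence is that a P slice and a B slice initialize the same syntax element's context identically.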
Abstract:
An apparatus for coding video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information of a reference layer and an enhancement layer. The processor determines a value of a current video unit of the enhancement layer based on, at least in part, explicit hypotheses and implicit hypotheses calculated from motion information from the reference layer.
Abstract:
Techniques are described for a video coder (e.g., video encoder or video decoder) that is configured to select a context pattern from a plurality of context patterns that are the same for a plurality of scan types. Techniques are also described for a video coder that is configured to select a context pattern that is stored as a one-dimensional context pattern and identifies contexts for two or more scan types.
Abstract:
Systems and methods are provided for video encoding and decoding using intra-block copy mode when constrained intra-prediction is enabled. In various implementations, a video encoding device can determine a current coding unit for a picture from a plurality of pictures. The video encoding device can further determine that constrained intra-prediction mode is enabled. The video encoding device can further encode the current coding unit using one or more reference samples. The one or more reference samples are determined based on whether a reference sample has been predicted using intra-block copy mode prediction without using any inter-predicted samples. When the reference sample is predicted using intra-block copy mode without using any inter-predicted samples, the reference sample is available for predicting the current coding unit. When the reference sample is predicted using intra-block copy mode with at least one inter-predicted sample, the reference sample is not available for predicting the current coding unit.
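The availability rule above can be condensed into one predicate. This is an illustrative sketch with assumed mode labels ("intra", "ibc", "inter"), not a decoder's actual reference-marking logic:

```python
def reference_sample_available(pred_mode, used_inter_samples, constrained_intra):
    """Decide whether a reference sample may predict the current coding unit.

    pred_mode: how the reference sample was itself predicted
               ("intra", "ibc", or "inter" -- illustrative labels).
    used_inter_samples: for IBC, whether its prediction chain touched
               any inter-predicted samples.
    """
    if not constrained_intra:
        return True  # without CIP, no restriction applies
    if pred_mode == "intra":
        return True
    if pred_mode == "ibc":
        # Available only if the IBC prediction used no inter-predicted samples.
        return not used_inter_samples
    return False  # inter-predicted neighbors are unavailable under CIP
```

So an IBC-predicted sample behaves like an intra sample under constrained intra-prediction exactly when its prediction chain is free of inter-predicted samples.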
Abstract:
Techniques are described to improve video intra prediction using position-dependent prediction combination in video coding. In High Efficiency Video Coding (HEVC), a set of 35 linear predictors is used for intra coding, and prediction can be computed from either a nonfiltered or a filtered set of neighboring “reference” pixels, depending on the selected predictor mode and block size. Techniques of this disclosure may use a weighted combination of both the nonfiltered and filtered sets of reference pixels to achieve better compression via improved prediction and therefore a smaller residual, enable effective parallel computation of all sets of prediction values, and maintain low complexity by applying filtering only to the set of reference pixels and not to the predicted values themselves.
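The weighted combination described above can be sketched per sample. The decay formula for the position-dependent weight is an illustrative assumption, not the disclosed or standardized formula; only the blending structure reflects the text.

```python
def position_weight(x, y):
    """Illustrative position-dependent weight on the unfiltered prediction.

    Samples nearer the reference row/column lean more on the unfiltered
    references; the weight decays with distance (assumed formula).
    """
    return 1.0 / (1 << (min(x, y) + 1))

def pdpc_sample(p_unfiltered, p_filtered, weight):
    """Blend predictions computed from unfiltered and filtered references.

    weight in [0, 1] is the share given to the unfiltered prediction.
    """
    return weight * p_unfiltered + (1.0 - weight) * p_filtered
```

Note that both component predictions can be computed independently for every sample position, which is what enables the parallel computation the text mentions; the filtering touches only the reference pixels, never the blended outputs.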
Abstract:
A device for decoding video data includes a memory configured to store the video data; and one or more processors configured to receive, in a picture parameter set (PPS), a first syntax element indicating that a palette predictor is to be generated using PPS-level palette predictor initializers; receive, in the PPS, a second syntax element indicating that a number of the PPS-level palette predictor initializers included in the PPS is equal to zero; and decode a block of video data based on the first syntax element and the second syntax element.
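The interaction of the two syntax elements can be sketched as follows. The function name and the `None` fallback are illustrative assumptions; the point of the sketch is that a zero initializer count is a meaningful signal (an intentionally empty predictor), distinct from the initializers being absent.

```python
def build_palette_predictor(pps_initializers_present, num_initializers,
                            initializer_entries):
    """Construct the initial palette predictor from the two PPS syntax
    elements described above (names are illustrative).

    initializer_entries: list of color triples carried in the PPS.
    """
    if pps_initializers_present:
        # num_initializers may legitimately be zero: the predictor is then
        # initialized to be empty rather than inherited from elsewhere.
        return list(initializer_entries[:num_initializers])
    return None  # fall back to some other initialization (not shown)
```

Decoding a block against an explicitly empty predictor (`[]`) differs from decoding with no PPS-level initialization at all (`None`).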
Abstract:
A device for coding video data includes a video coder configured to code first significance information for transform coefficients associated with residual data, wherein the first significance information indicates if a first sub-block comprises at least one non-zero coefficient, wherein the first sub-block is a sub-block of an entire transform block; and, code second significance information, wherein the second significance information indicates if a second sub-block comprises at least one non-zero coefficient, wherein the second sub-block is a sub-block of the first sub-block, wherein coding the second significance information comprises performing an arithmetic coding operation on the second significance information, wherein a context for the arithmetic coding operation is determined based on one or more neighboring sub-blocks of a same size as the first sub-block.
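The context derivation at the end of the abstract can be sketched as a tiny mapping from neighbor flags to a context index. The two-context scheme below (selected by whether either same-size neighbor is coded) is an assumed, HEVC-like simplification, not the claimed derivation:

```python
def coded_sub_block_flag_context(right_coded, below_coded):
    """Context index for arithmetic-coding a sub-block's significance flag.

    right_coded / below_coded: whether the neighboring sub-block of the
    same size (to the right / below) contains at least one non-zero
    coefficient. Two contexts: 0 if neither neighbor is coded, else 1.
    """
    return 1 if (right_coded or below_coded) else 0
```

The key structural point from the text is preserved: the context for the second (inner) sub-block's flag depends only on neighboring sub-blocks of the same size as the first (outer) sub-block.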