Abstract:
In one example, a method of coding video data includes coding, from an encoded video bitstream, a syntax element that indicates a number of lines of video data that are in one or more of a plurality of sub-PUs of a current prediction unit (PU) of a current coding unit (CU) of video data. In this example, the method further includes determining, for each respective sub-PU of the plurality of sub-PUs, a respective vector that represents a displacement between the respective sub-PU and a respective predictor block from a plurality of previously decoded blocks of video data. In this example, the method further includes reconstructing each sub-PU of the plurality of sub-PUs based on the respective predictor blocks of video data.
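The following sketch illustrates the per-sub-PU reconstruction described above: the PU is split into sub-PUs of a signalled number of lines, and each sub-PU is copied from a predictor block displaced within the same picture. All names, the buffer layout, and the assumption that the vectors have already been parsed are illustrative, not taken from any reference implementation.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: reconstruct a PU from per-sub-PU displacement vectors.
// 'frame' is a reconstructed-sample buffer; layout and names are assumptions.
struct Vec { int dx, dy; };

void ReconstructSubPus(std::vector<uint8_t>& frame, int stride,
                       int puX, int puY, int puW, int linesPerSubPu,
                       const std::vector<Vec>& subPuVectors) {
    // Each sub-PU spans the full PU width and 'linesPerSubPu' rows
    // (the signalled number of lines per sub-PU).
    for (std::size_t i = 0; i < subPuVectors.size(); ++i) {
        const int subY = puY + static_cast<int>(i) * linesPerSubPu;
        const Vec v = subPuVectors[i];
        for (int y = 0; y < linesPerSubPu; ++y) {
            for (int x = 0; x < puW; ++x) {
                // Copy from the previously decoded predictor block located at
                // displacement (v.dx, v.dy) inside the same picture.
                frame[(subY + y) * stride + (puX + x)] =
                    frame[(subY + y + v.dy) * stride + (puX + x + v.dx)];
            }
        }
    }
}
```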
Abstract:
An apparatus configured to code (e.g., encode or decode) video information includes a memory unit and a processor in communication with the memory unit. The memory unit is configured to store video information associated with a base layer and an enhancement layer. The processor is configured to up-sample a base layer reference block by using an up-sampling filter when the base and enhancement layers have different resolutions; perform motion compensation interpolation by filtering the up-sampled base layer reference block; determine base layer residual information based on the filtered up-sampled base layer reference block; determine weighted base layer residual information by applying a weighting factor to the base layer residual information; and determine an enhancement layer block based on the weighted base layer residual information. The processor may encode or decode the video information.
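A minimal sketch of the weighted base-layer residual path follows. The up-sampling and interpolation filters are stand-ins (nearest-neighbour and pass-through); actual codecs use normative multi-tap filters, and the integer scaling factor, function name, and weighting representation are assumptions made for illustration.

```cpp
#include <cstdint>
#include <vector>

// Illustrative sketch: up-sample the base-layer reference, form the base-layer
// residual, and apply a weighting factor before it is added to the
// enhancement-layer prediction by the caller.
std::vector<int16_t> WeightedBaseLayerResidual(
    const std::vector<int16_t>& blReference,      // base-layer reference block
    const std::vector<int16_t>& blReconstruction, // collocated base-layer block
    int blW, int blH, int scale,                  // spatial scale (assumed integer)
    int weightNum, int weightDen) {               // weighting factor, e.g. 1/2
    const int elW = blW * scale, elH = blH * scale;
    std::vector<int16_t> residual(elW * elH);
    for (int y = 0; y < elH; ++y) {
        for (int x = 0; x < elW; ++x) {
            // Up-sample (nearest neighbour here, in place of a real filter),
            // form the base-layer residual, then apply the weight.
            const int16_t ref = blReference[(y / scale) * blW + (x / scale)];
            const int16_t rec = blReconstruction[(y / scale) * blW + (x / scale)];
            residual[y * elW + x] =
                static_cast<int16_t>((rec - ref) * weightNum / weightDen);
        }
    }
    return residual;
}
```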
Abstract:
In an example, a method of processing video data includes determining a value of a block-level syntax element that indicates, for all samples of a block of video data, whether at least one respective sample of the block is coded based on a color value of the at least one respective sample not being included in a palette of colors for coding the block of video data. The method also includes coding the block of video data based on the value.
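The sketch below shows one way a decoder could use such a block-level flag: when the flag is set, at least one sample of the block is an escape sample whose colour is read directly rather than looked up in the palette. The structure and names are illustrative, not from any standard text.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical block-level "escape present" flag and per-sample decoding.
struct PaletteBlock {
    bool escapePresentFlag;             // the block-level syntax element
    std::vector<int> indices;           // palette index per sample
    std::vector<uint16_t> escapeValues; // quantised colours for escape samples
};

int DecodeSample(const PaletteBlock& blk, const std::vector<uint16_t>& palette,
                 std::size_t samplePos, std::size_t& escapePos) {
    const int escapeIndex = static_cast<int>(palette.size());
    if (blk.escapePresentFlag && blk.indices[samplePos] == escapeIndex) {
        // Escape sample: its colour value is not in the palette and is
        // signalled explicitly.
        return blk.escapeValues[escapePos++];
    }
    // Regular sample: look the colour up in the palette.
    return palette[blk.indices[samplePos]];
}
```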
Abstract:
A method for motion estimation for screen and non-natural content coding is disclosed. In one aspect, the method may include selecting a candidate block of a first frame of the video data for matching with a current block of a second frame of the video data, calculating a first partial matching cost for matching a first subset of samples of the candidate block to the current block, and determining whether the candidate block has a lowest matching cost with the current block based at least in part on the first partial matching cost.
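A common way to exploit a partial matching cost is early termination: evaluate a subset of samples first and abandon the candidate as soon as its partial cost already exceeds the best cost found so far. The sketch below uses SAD over alternating rows; the buffer layout and names are assumptions for illustration only.

```cpp
#include <cstdint>
#include <cstdlib>

// Sketch of partial-cost early termination for block matching.
int PartialSadSearch(const uint8_t* cur, const uint8_t* cand, int stride,
                     int width, int height, int bestCostSoFar) {
    int cost = 0;
    // First pass: every other row (a first subset of samples).
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; ++x)
            cost += std::abs(cur[y * stride + x] - cand[y * stride + x]);
        if (cost >= bestCostSoFar)
            return bestCostSoFar; // candidate cannot win; stop early
    }
    // Second pass: remaining rows, only for surviving candidates.
    for (int y = 1; y < height; y += 2) {
        for (int x = 0; x < width; ++x)
            cost += std::abs(cur[y * stride + x] - cand[y * stride + x]);
        if (cost >= bestCostSoFar)
            return bestCostSoFar;
    }
    return cost;
}
```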
Abstract:
Bitstream restrictions or constraints on the partitioning of pictures across layers of video data are described. In some examples, the number of tiles per picture for each layer of a plurality of layers is constrained based on a maximum number of tiles per picture for the layer. In some examples, the number of tiles per picture for each layer of the plurality of layers is no greater than the maximum number of tiles per picture for the layer. In some examples, a sum of the numbers of tiles per picture for the plurality of layers is no greater than a sum of the maximum numbers of tiles per picture for the plurality of layers. In some examples, a second largest coding unit (LCU) or coding tree block (CTB) size for a second layer is constrained based on (e.g., constrained to be equal to) a first LCU size for a first layer.
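The per-layer and aggregate tile constraints can be expressed as a simple conformance check, sketched below. The container names are illustrative; a real checker would derive the maxima from the level limits of each layer.

```cpp
#include <cstddef>
#include <numeric>
#include <vector>

// Sketch: verify the tile-count constraints described above.
// numTiles[i] is the signalled number of tiles per picture for layer i;
// maxTiles[i] is the maximum allowed for that layer.
bool TileConstraintsSatisfied(const std::vector<int>& numTiles,
                              const std::vector<int>& maxTiles) {
    // Per-layer constraint: no layer exceeds its own maximum.
    for (std::size_t i = 0; i < numTiles.size(); ++i)
        if (numTiles[i] > maxTiles[i])
            return false;
    // Aggregate constraint: the sum across layers does not exceed the sum of
    // the per-layer maxima.
    const long sumNum = std::accumulate(numTiles.begin(), numTiles.end(), 0L);
    const long sumMax = std::accumulate(maxTiles.begin(), maxTiles.end(), 0L);
    return sumNum <= sumMax;
}
```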
Abstract:
Techniques and systems are provided for encoding video data. For example, restrictions on certain prediction modes can be applied for video coding. A restriction can be imposed that prevents bi-prediction (a form of inter-prediction) from being performed on video data when certain conditions are met. For example, the bi-prediction restriction can be based on whether intra-block copy prediction is enabled for one or more coding units or blocks of the video data, whether the value of a syntax element indicates that one or more motion vectors have non-integer accuracy, whether both motion vectors of a bi-predicted block have non-integer accuracy, whether the motion vectors of a bi-predicted block are not identical and/or do not use the same reference index, or any combination thereof. If one or more of these conditions are met, the restriction on bi-prediction can be applied, preventing bi-prediction from being performed on certain coding units or blocks.
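The conditions above can be combined in different ways; the sketch below shows one plausible combination only, with illustrative field names, to make the shape of such a restriction check concrete.

```cpp
// Hypothetical per-block inputs mirroring the listed conditions.
struct BiPredBlockInfo {
    bool intraBlockCopyEnabled; // IBC enabled for the CU/block
    bool mv0FractionalPel;      // motion vector 0 has non-integer accuracy
    bool mv1FractionalPel;      // motion vector 1 has non-integer accuracy
    bool mvsIdentical;          // the two motion vectors are identical
    bool sameReferenceIndex;    // both hypotheses use the same reference index
};

bool BiPredictionAllowed(const BiPredBlockInfo& b) {
    // One plausible reading of the conditions: disallow bi-prediction when
    // IBC is enabled and both motion vectors are fractional-pel, unless the
    // two hypotheses are identical (same MV and same reference index).
    // The actual rule set is configurable in the described techniques.
    if (b.intraBlockCopyEnabled && b.mv0FractionalPel && b.mv1FractionalPel &&
        !(b.mvsIdentical && b.sameReferenceIndex))
        return false;
    return true;
}
```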
Abstract:
In an example, a method of coding video data includes determining, for a pixel associated with a palette index that relates a value of the pixel to a color value in a palette of colors used for coding the pixel, a run length of a run of palette indices being coded with the palette index of the pixel. The method also includes determining a maximum run length for a maximum run of palette indices that can be coded with the palette index of the pixel, and coding data that indicates the run length based on the determined maximum run length.
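One simple bound of this kind is that a run starting at the current scan position can never be longer than the number of samples remaining in the block, so the run length can be coded with a binarisation truncated at that maximum. The sketch below illustrates this bound; the names and the choice of bound are assumptions for illustration.

```cpp
#include <algorithm>

// Sketch: bound the coded run length by the maximum possible run.
int MaxRunLength(int blockWidth, int blockHeight, int scanPos) {
    const int totalSamples = blockWidth * blockHeight;
    return totalSamples - scanPos - 1; // samples remaining after the current one
}

int ClampRunForCoding(int runLength, int blockWidth, int blockHeight, int scanPos) {
    // Because the coded value never exceeds the maximum run length, a
    // truncated binarisation over [0, maxRun] can be used instead of an
    // unbounded code.
    return std::min(runLength, MaxRunLength(blockWidth, blockHeight, scanPos));
}
```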
Abstract:
A device for encoding or decoding video data may clip first residual data based on a bit depth of the first residual data. The device may generate second residual data at least in part by applying an inverse Adaptive Color Transform (IACT) to the first residual data. Furthermore, the device may reconstruct, based on the second residual data, a coding block of a coding unit (CU) of the video data.
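The sketch below shows the shape of this pipeline for one sample: clip each first-stage residual component to a bit-depth-dependent range, then apply an inverse colour transform to obtain the second-stage residuals. The clip range shown and the lifting-based YCgCo-R style inverse are illustrative choices, not the normative text.

```cpp
#include <algorithm>
#include <cstdint>

// Sketch: clip first residuals, then apply an inverse Adaptive Color
// Transform (IACT) to produce second residuals.
inline int16_t ClipResidual(int value, int bitDepth) {
    const int lo = -(1 << (bitDepth + 1));     // assumed clip range
    const int hi =  (1 << (bitDepth + 1)) - 1;
    return static_cast<int16_t>(std::max(lo, std::min(hi, value)));
}

void InverseAct(int16_t y, int16_t cg, int16_t co, int bitDepth,
                int16_t& r, int16_t& g, int16_t& b) {
    // Clip each first-stage residual component based on the bit depth.
    y  = ClipResidual(y,  bitDepth);
    cg = ClipResidual(cg, bitDepth);
    co = ClipResidual(co, bitDepth);
    // Lifting-based inverse (YCgCo-R style) yielding second-stage residuals.
    const int t = y - (cg >> 1);
    g = static_cast<int16_t>(cg + t);
    b = static_cast<int16_t>(t - (co >> 1));
    r = static_cast<int16_t>(co + b);
}
```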
Abstract:
An apparatus for encoding video information according to certain aspects includes a memory and computing hardware. The memory is configured to store video information. The computing hardware is configured to determine a bit depth of one or more view identifiers to signal, wherein each of the one or more view identifiers is associated with a layer to be encoded. The computing hardware is further configured to signal the bit depth of the one or more view identifiers in a bitstream.
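The sketch below illustrates the idea of choosing and signalling a bit depth for the view identifiers and then writing each identifier with exactly that many bits. The BitWriter, the 4-bit length field, and all names are stand-ins for whatever bitstream writer and syntax an encoder actually uses.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Minimal stand-in bitstream writer for illustration.
struct BitWriter {
    std::vector<bool> bits;
    void Write(uint32_t value, int numBits) {
        for (int i = numBits - 1; i >= 0; --i)
            bits.push_back((value >> i) & 1u);
    }
};

void SignalViewIds(BitWriter& bw, const std::vector<uint32_t>& viewIds) {
    // Choose the smallest bit depth that can represent every view identifier.
    uint32_t maxId = 0;
    for (uint32_t id : viewIds) maxId = std::max(maxId, id);
    int bitDepth = 1;
    while ((1u << bitDepth) <= maxId) ++bitDepth;

    bw.Write(static_cast<uint32_t>(bitDepth), 4); // assumed 4-bit length field
    for (uint32_t id : viewIds)
        bw.Write(id, bitDepth);                   // each view ID at that depth
}
```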