Abstract:
Techniques and systems are provided for coding video data. For example, a method of coding video data includes determining motion information for a current block and determining an illumination compensation status for the current block. The method further includes coding the current block based on the motion information and the illumination compensation status for the current block. In some examples, the method further includes determining the motion information for the current block based on motion information of a candidate block. In such examples, the method further includes determining an illumination compensation status of the candidate block and deriving the illumination compensation status for the current block based on the illumination compensation status of the candidate block.
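A minimal sketch of the inheritance idea described above, assuming a hypothetical merge-style candidate structure; the names Candidate and derive_current_block are illustrative and not taken from any codec specification. The current block simply copies both the motion information and the illumination compensation (IC) status of the selected candidate block.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Candidate:
    """Hypothetical merge candidate: motion information plus its IC status."""
    mv: Tuple[int, int]   # motion vector in quarter-pel units
    ref_idx: int          # reference picture index
    ic_flag: bool         # illumination compensation status of the candidate block

def derive_current_block(candidate: Candidate) -> Tuple[Tuple[int, int], int, bool]:
    """Derive motion information and IC status for the current block from the
    selected candidate: both are inherited from the candidate block."""
    return candidate.mv, candidate.ref_idx, candidate.ic_flag

# Usage: the current block inherits MV, reference index, and IC flag.
mv, ref_idx, ic = derive_current_block(Candidate(mv=(4, -2), ref_idx=0, ic_flag=True))
```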
Abstract:
A video coder determines a candidate for inclusion in a candidate list for a current prediction unit (PU). The candidate is based on motion parameters of a plurality of sub-PUs of the current PU. If a reference block corresponding to a sub-PU is not coded using motion-compensated prediction, the video coder sets the motion parameters of the sub-PU to default motion parameters. For each respective sub-PU whose reference block is not coded using motion-compensated prediction, the motion parameters of that sub-PU are not set in response to a subsequent determination that a reference block for any later sub-PU in the order is coded using motion-compensated prediction.
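A minimal sketch of the per-sub-PU fallback behavior, assuming each sub-PU's reference block either carries motion parameters or does not; the helper derive_sub_pu_motion is hypothetical. The point illustrated is that a sub-PU without a motion-compensated reference block takes the default parameters immediately, independently of what any later sub-PU turns out to contain.

```python
from typing import List, Optional, Tuple

MotionParams = Tuple[Tuple[int, int], int]   # (motion vector, reference index)

def derive_sub_pu_motion(ref_blocks: List[Optional[MotionParams]],
                         default_params: MotionParams) -> List[MotionParams]:
    """For each sub-PU, take the motion parameters of its reference block when
    that block is motion-compensated; otherwise fall back to the default
    parameters right away, without reacting to any later sub-PU in the order."""
    sub_pu_motion = []
    for ref in ref_blocks:
        if ref is not None:        # reference block uses motion-compensated prediction
            sub_pu_motion.append(ref)
        else:                      # not motion-compensated: use defaults immediately
            sub_pu_motion.append(default_params)
    return sub_pu_motion
```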
Abstract:
An example method of entropy coding video data includes determining a window size, from a plurality of window sizes, for a context of a plurality of contexts used in a context-adaptive coding process to entropy code a value for a syntax element of the video data; entropy coding, based on a probability state of the context, a bin of the value for the syntax element; and updating the probability state of the context based on the window size and the coded bin. The example method also includes entropy coding a next bin with the same context based on the updated probability state of the context.
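A minimal sketch of a window-size-controlled probability update, assuming the window size is a power of two and using one common counter-based update form; it is not the exact update rule of any particular standard. A larger window makes the probability state adapt more slowly to the coded bins.

```python
def update_probability(prob: int, bin_val: int, window_size: int, prob_bits: int = 15) -> int:
    """One counter-based probability update controlled by the window size.
    `prob` is the estimated probability of '1', scaled to [0, 2**prob_bits)."""
    shift = window_size.bit_length() - 1     # shift corresponding to the window size
    target = bin_val << prob_bits
    return prob + ((target - prob) >> shift)

# Coding two bins with the same context: the second uses the updated state.
state = 1 << 14                              # start near p(1) = 0.5
state = update_probability(state, bin_val=1, window_size=16)
state = update_probability(state, bin_val=0, window_size=16)
```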
Abstract:
This disclosure describes techniques for signaling and processing information indicating simplified depth coding (SDC) for depth intra-prediction and depth inter-prediction modes in a 3D video coding process, such as a process defined by the 3D-HEVC extension to HEVC. In some examples, the disclosure describes techniques for unifying the signaling of SDC for depth intra-prediction and depth inter-prediction modes in 3D video coding. The signaling of SDC can be unified so that a video encoder or video decoder uses the same syntax element for signaling SDC for both the depth intra-prediction mode and the depth inter-prediction mode. Also, in some examples, a video coder may signal and/or process a residual value generated in the SDC mode using the same syntax structure, or same type of syntax structure, for both the depth intra-prediction mode and depth inter-prediction mode.
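A minimal sketch of the unification idea, assuming hypothetical parsing helpers (read_flag, read_residual) and a hypothetical SdcResidual structure; none of these names come from the 3D-HEVC syntax. The sketch shows a single SDC flag parsed the same way for depth intra- and inter-coded units, followed by a shared residual structure when SDC is on.

```python
from dataclasses import dataclass

@dataclass
class SdcResidual:
    """Hypothetical shared residual structure used for both depth
    intra-prediction and depth inter-prediction when SDC is enabled."""
    resi_abs_minus1: int
    resi_sign_flag: int

def parse_depth_cu(read_flag, read_residual, is_intra: bool):
    """Unified SDC signaling sketch: the same sdc_flag syntax element is parsed
    whether the depth CU is intra- or inter-coded, and the same residual
    structure is parsed when SDC is enabled."""
    # is_intra intentionally does not change which syntax element is parsed
    sdc_flag = read_flag()
    residual = read_residual() if sdc_flag else None
    return sdc_flag, residual

# Usage with stub parsing callables standing in for bitstream reads.
sdc, resi = parse_depth_cu(lambda: True,
                           lambda: SdcResidual(resi_abs_minus1=3, resi_sign_flag=0),
                           is_intra=False)
```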
Abstract:
In an example, a method of processing video data includes determining a candidate motion vector for deriving motion information of a current block of video data, where the motion information indicates motion of the current block relative to reference video data. The method also includes determining a derived motion vector for the current block based on the determined candidate motion vector, where determining the derived motion vector comprises performing a motion search for a first set of reference data that corresponds to a second set of reference data outside of the current block.
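A minimal sketch of a decoder-side motion search, assuming a template-matching style search with a SAD cost; the abstract does not fix the cost metric or search pattern, so both are assumptions here. Starting from the candidate motion vector, nearby displacements are tested and the one whose reference samples best match a first set of reference data (a template of already-reconstructed samples outside the current block) is kept.

```python
import numpy as np

def refine_motion_vector(template: np.ndarray, ref_pic: np.ndarray,
                         start: tuple, search_range: int = 2) -> tuple:
    """Search a small window around the starting position in the reference
    picture and return the position minimizing the SAD against the template."""
    h, w = template.shape
    best_pos, best_cost = start, float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = start[0] + dy, start[1] + dx
            if y < 0 or x < 0 or y + h > ref_pic.shape[0] or x + w > ref_pic.shape[1]:
                continue
            cost = np.abs(ref_pic[y:y + h, x:x + w].astype(int) - template.astype(int)).sum()
            if cost < best_cost:
                best_cost, best_pos = cost, (y, x)
    return best_pos
```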
Abstract:
Example techniques are described to determine transforms to be used during video encoding and video decoding. A video encoder and a video decoder may select transform subsets that each identify one or more candidate transforms. The video encoder and the video decoder may determine transforms from the selected transform subsets.
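A minimal sketch of the two-step selection the abstract describes: first a subset of candidate transforms is selected per direction, then one transform is chosen from each selected subset. The subset contents and the flag-based choice below are illustrative assumptions, not the disclosed subset definitions.

```python
# Hypothetical transform subsets; each entry names a candidate transform type.
TRANSFORM_SUBSETS = [
    ["DST-VII", "DCT-VIII"],
    ["DST-VII", "DST-I"],
    ["DST-VII", "DCT-V"],
]

def select_transforms(horizontal_subset_idx: int, vertical_subset_idx: int,
                      horizontal_flag: int, vertical_flag: int):
    """Pick a subset for each direction, then one candidate transform from
    each selected subset (here indexed by a signaled flag)."""
    h_subset = TRANSFORM_SUBSETS[horizontal_subset_idx]
    v_subset = TRANSFORM_SUBSETS[vertical_subset_idx]
    return h_subset[horizontal_flag], v_subset[vertical_flag]

# Usage: subsets chosen first, then one transform per direction.
h_tr, v_tr = select_transforms(0, 2, horizontal_flag=1, vertical_flag=0)
```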
Abstract:
In one example, a device for coding video data includes a memory configured to store video data and a video coder configured to: form, for a current block of the video data, a merge candidate list including a plurality of merge candidates, the plurality of merge candidates including four spatial neighboring candidates from four neighboring blocks to the current block and, immediately following the four spatial neighboring candidates, an advanced temporal motion vector prediction (ATMVP) candidate; code an index into the merge candidate list that identifies a merge candidate of the plurality of merge candidates; and code the current block of video data using motion information of the identified merge candidate.
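A minimal sketch of the candidate ordering described above, assuming a hypothetical build_merge_list helper: the four spatial neighboring candidates come first, the ATMVP candidate immediately follows them, and any remaining candidates are appended before truncation.

```python
def build_merge_list(spatial_candidates, atmvp_candidate, other_candidates, max_size=7):
    """Build a merge list with four spatial candidates, then the ATMVP
    candidate, then any remaining candidates, truncated to max_size."""
    merge_list = list(spatial_candidates[:4])   # four spatial neighboring candidates
    merge_list.append(atmvp_candidate)          # ATMVP immediately follows the spatial candidates
    merge_list.extend(other_candidates)         # e.g., temporal, combined, zero candidates
    return merge_list[:max_size]

# A coded index then identifies the candidate whose motion information is used:
# motion_info = merge_list[merge_idx]
```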
Abstract:
A device for decoding video data includes a memory configured to store video data and a video decoder comprising one or more processors configured to adaptively select motion vector precision for motion vectors used to encode blocks of video data.
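A minimal sketch of applying an adaptively selected motion vector precision, assuming motion vector components are stored in quarter-pel units and that the available precisions are quarter-pel, integer-pel, and four-pel; the precision names and rounding rule are illustrative assumptions.

```python
def quantize_mv(mv_quarter_pel: int, precision: str) -> int:
    """Round a motion vector component (in quarter-pel units) to the
    adaptively selected precision before it is used."""
    step = {"quarter": 1, "integer": 4, "four": 16}[precision]
    # round to the nearest multiple of `step` quarter-pel units
    return ((mv_quarter_pel + (step >> 1)) // step) * step

# Usage: the same component under three precisions.
print(quantize_mv(13, "quarter"), quantize_mv(13, "integer"), quantize_mv(13, "four"))
```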
Abstract:
An example method of entropy coding video data includes obtaining a pre-defined initialization value for a context of a plurality of contexts used in a context-adaptive entropy coding process to entropy code a value for a syntax element in a slice of the video data, wherein the pre-defined initialization value is stored with N-bit precision; determining, using a look-up table and based on the pre-defined initialization value, an initial probability state of the context for the slice of the video data, wherein a number of possible probability states for the context is greater than two raised to the power of N; and entropy coding, based on the initial probability state of the context, a bin of the value for the syntax element.
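A minimal sketch of deriving an initial probability state from a pre-defined N-bit initialization value, where the number of possible states exceeds 2^N. The QP-dependent linear model and the scaling step below are assumptions for illustration; only the idea of mapping a small stored value into a larger probability-state space comes from the abstract.

```python
def init_context_state(init_value: int, slice_qp: int,
                       n_bits: int = 8, state_bits: int = 15) -> int:
    """Map an N-bit pre-defined initialization value to an initial probability
    state in a range with more than 2**N possible states."""
    slope = (init_value >> 4) * 5 - 45        # upper half of the init value
    offset = ((init_value & 15) << 3) - 16    # lower half of the init value
    # QP-dependent intermediate value, clipped to the N-bit range
    small = min(max(1, ((slope * slice_qp) >> 4) + offset), (1 << n_bits) - 1)
    # look-up/scaling step into the larger probability-state range
    return small << (state_bits - n_bits)

state = init_context_state(init_value=154, slice_qp=32)
```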