Abstract:
A method of coding video data includes deriving prediction weights for illumination compensation of luma samples of a video block partition once for the video block partition, such that the video block partition has a common set of prediction weights for performing illumination compensation of the luma samples regardless of a transform size for the video block partition, calculating a predicted block for the video block partition using the prediction weights to perform illumination compensation, and coding the video block partition using the predicted block.
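As an illustration of the idea (not the claimed derivation), the following sketch fits a single (scale, offset) pair for a partition from neighboring samples and applies it to every luma sample of the partition, independent of how the partition is split into transform blocks; the least-squares fit and the function names are assumptions for this example.

    # A minimal sketch: illumination-compensation weights are derived once from
    # neighboring samples of the whole partition and reused for every transform
    # block inside that partition. The fitting rule here is an assumption.

    def derive_ic_weights(cur_neighbors, ref_neighbors):
        """Fit predicted = scale * ref + offset by simple least squares."""
        n = len(cur_neighbors)
        mean_c = sum(cur_neighbors) / n
        mean_r = sum(ref_neighbors) / n
        cov = sum((r - mean_r) * (c - mean_c)
                  for r, c in zip(ref_neighbors, cur_neighbors))
        var = sum((r - mean_r) ** 2 for r in ref_neighbors)
        scale = cov / var if var else 1.0
        offset = mean_c - scale * mean_r
        return scale, offset

    def predict_partition(ref_partition, cur_neighbors, ref_neighbors, bit_depth=8):
        scale, offset = derive_ic_weights(cur_neighbors, ref_neighbors)  # once per partition
        max_val = (1 << bit_depth) - 1
        # The same (scale, offset) pair is applied to all luma samples,
        # regardless of the transform size used for the partition.
        return [[min(max_val, max(0, round(scale * s + offset))) for s in row]
                for row in ref_partition]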
Abstract:
When a current view is a dependent texture view, a current coding unit (CU) is not intra coded, and a partitioning mode of the current CU is equal to PART_2N×2N, a video coder obtains, from a bitstream that comprises an encoded representation of the video data, a weighting factor index for the current CU, wherein the current CU is in a picture belonging to the current view. When the current view is not a dependent texture view, or the current CU is intra coded, or the partitioning mode of the current CU is not equal to PART_2N×2N, the video coder assumes that the weighting factor index is equal to a particular value that indicates that residual prediction is not applied with regard to the current CU.
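A minimal sketch of this parsing rule; the CodingUnit fields, the read_index_from_bitstream callback, and the value 0 for "residual prediction not applied" are illustrative placeholders rather than actual 3D-HEVC syntax or reference-software APIs.

    from dataclasses import dataclass

    PART_2Nx2N = 0  # illustrative value for the 2Nx2N partitioning mode

    @dataclass
    class CodingUnit:
        is_intra_coded: bool
        partition_mode: int

    def weighting_factor_index(cu, view_is_dependent_texture, read_index_from_bitstream):
        # Parse the index only under the conditions described above ...
        if (view_is_dependent_texture
                and not cu.is_intra_coded
                and cu.partition_mode == PART_2Nx2N):
            return read_index_from_bitstream()
        # ... otherwise infer the value that means "residual prediction not
        # applied" (0 is used here purely for illustration).
        return 0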
Abstract:
An example device includes a memory device configured to store encoded video data, and processing circuitry coupled to the memory device. The processing circuitry is configured to determine that a rectangular transform unit (TU) of the stored video data includes a number of pixel rows denoted by a first integer value ‘K’ and a number of pixel columns denoted by a second integer value ‘L’, where K has a value equal to one left shifted by an integer value ‘m’ and L has a value equal to one left shifted by an integer value ‘n’, to determine that a sum of n and m is an odd number, and, based on the sum of n and m being the odd number, to add a delta quantization parameter value to a quantization parameter (QP) value for the rectangular TU to obtain a modified QP value for the rectangular TU.
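A minimal sketch of this QP adjustment, assuming power-of-two TU dimensions (K = 1 << m rows, L = 1 << n columns). The delta value of 3 is an assumption for illustration, chosen because six QP steps double the quantization step size, so three steps approximate the sqrt(2) normalization factor that arises when the TU sample count is an odd power of two.

    def adjusted_qp(base_qp, num_rows, num_cols, delta_qp=3):
        m = num_rows.bit_length() - 1   # K = 1 << m  (pixel rows)
        n = num_cols.bit_length() - 1   # L = 1 << n  (pixel columns)
        if (m + n) % 2 == 1:            # K * L is an odd power of two
            return base_qp + delta_qp   # compensate the sqrt(2) normalization factor
        return base_qp

    # e.g. a 4x8 TU: m + n = 2 + 3 = 5 (odd), so the QP value is modified.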
Abstract:
Techniques are described for determining a partition pattern for intra-prediction encoding or decoding a depth block from a partition pattern of one or more partition patterns associated with smaller sized blocks. A video encoder may intra-prediction encode the depth block based on the determined partition pattern, and a video decoder may intra-prediction decode the depth block based on the determined partition pattern.
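As a hedged illustration of reusing a smaller block's pattern, the sketch below maps a binary partition pattern defined for an 8x8 block onto a larger depth block by nearest-neighbor upsampling; the 8x8 source size and the upsampling rule are assumptions, not the specific mapping of the described techniques.

    def upsample_partition_pattern(pattern, target_size):
        # pattern: square binary pattern (list of lists of 0/1) for a smaller block
        src = len(pattern)
        scale = target_size // src
        # Each sample of the larger depth block inherits the partition label of
        # the co-located sample in the smaller-block pattern.
        return [[pattern[y // scale][x // scale] for x in range(target_size)]
                for y in range(target_size)]

    # Example: a diagonal 8x8 pattern reused for a 32x32 depth block.
    pattern_8x8 = [[1 if x > y else 0 for x in range(8)] for y in range(8)]
    pattern_32x32 = upsample_partition_pattern(pattern_8x8, 32)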
Abstract:
During a process to derive an inter-view predicted motion vector candidate (IPMVC) for an Advanced Motion Vector Prediction (AMVP) candidate list, a video coder determines, based on a disparity vector of a current prediction unit (PU), a reference PU for the current PU. Furthermore, when a first reference picture of the reference PU has the same picture order count (POC) value as a target reference picture of the current PU, the video coder determines an IPMVC based on a first motion vector of the reference PU. Otherwise, when a second reference picture of the reference PU has the same POC value as the target reference picture of the current PU, the video coder determines the IPMVC based on a second motion vector of the reference PU.
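A minimal sketch of the selection order described above; the ReferencePU fields and the None return for an unavailable candidate are illustrative, not 3D-HEVC reference-software structures.

    from dataclasses import dataclass

    @dataclass
    class ReferencePU:
        ref_poc: tuple  # POC values of the reference PU's first and second reference pictures
        mv: tuple       # the reference PU's first and second motion vectors

    def derive_ipmvc(reference_pu, target_ref_poc):
        # reference_pu is the PU located via the current PU's disparity vector.
        if reference_pu.ref_poc[0] == target_ref_poc:
            return reference_pu.mv[0]   # IPMVC from the first motion vector
        if reference_pu.ref_poc[1] == target_ref_poc:
            return reference_pu.mv[1]   # otherwise from the second motion vector
        return None                     # no IPMVC for the AMVP candidate list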
Abstract:
A device for decoding video data includes one or more processors configured to derive, from among a plurality of intra prediction modes, M most probable modes (MPMs) for intra prediction of a block of video data, wherein M is greater than 3. The one or more processors decode a syntax element that indicates whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode of the plurality of intra prediction modes for intra prediction of the block of video data. The one or more processors decode the indicated one of the MPM index or the non-MPM index. Furthermore, the one or more processors reconstruct the block of video data based on the selected intra prediction mode.
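A hedged sketch of this signaling structure (a flag, then either an MPM index or a non-MPM index); the decoder methods are hypothetical placeholders for the entropy-decoding calls, and the mapping of the non-MPM index to the remaining modes is an assumption for illustration.

    def decode_intra_mode(decoder, mpm_list, num_modes):
        # mpm_list holds the M most probable modes, with M > 3.
        if decoder.decode_mpm_flag():                  # syntax element: is an MPM used?
            return mpm_list[decoder.decode_mpm_index()]
        rem = decoder.decode_non_mpm_index()           # index among the remaining modes
        remaining = [mode for mode in range(num_modes) if mode not in mpm_list]
        return remaining[rem]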
Abstract:
A device for decoding video data includes one or more processors configured to decode syntax information that indicates a selected intra prediction mode for a block of video data from among a plurality of intra prediction modes. The plurality of intra prediction modes includes more than 33 angular intra prediction modes. The angular intra prediction modes are defined such that interpolation is performed with 1/32-pel accuracy. The one or more processors reconstruct the block of video data based on the selected intra prediction mode.
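A minimal sketch of angular prediction with 1/32-pel interpolation in the style of an HEVC-like vertical angular mode: the per-row displacement keeps five fractional bits and two reference samples are blended; the function and parameter names, and the exact reference-array offsets, are illustrative.

    def predict_angular_column(ref, x, block_height, angle):
        # ref: 1-D array of reconstructed above-reference samples
        # angle: per-row displacement in 1/32-pel units for the selected angular mode
        col = []
        for y in range(block_height):
            delta = (y + 1) * angle
            idx = delta >> 5          # integer part of the displacement
            frac = delta & 31         # 1/32-pel fractional part
            p = ((32 - frac) * ref[x + idx] + frac * ref[x + idx + 1] + 16) >> 5
            col.append(p)
        return col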
Abstract:
A device includes one or more processors configured to derive M most probable modes (MPMs) for intra prediction of a block of video data. As part of deriving the M most probable modes, the one or more processors define a representative intra prediction mode for a left neighboring column and use the representative intra prediction mode for the left neighboring column as an MPM for the left neighboring column, and/or define a representative intra prediction mode for an above neighboring row and use the representative intra prediction mode for the above neighboring row as an MPM for the above neighboring row. The one or more processors decode a syntax element that indicates whether an MPM index or a non-MPM index is used to indicate a selected intra prediction mode for intra prediction of the block. The one or more processors reconstruct the block based on the selected intra prediction mode.
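As a hedged illustration, one way to form such representative modes is to take the most frequently occurring intra mode along the left neighboring column and along the above neighboring row and place each into the MPM list; the frequency-based choice and the helper names below are assumptions, not the claimed derivation.

    from collections import Counter

    def representative_mode(neighbor_modes, default_mode=0):
        # neighbor_modes: intra modes of the blocks covering the neighboring column/row
        if not neighbor_modes:
            return default_mode
        return Counter(neighbor_modes).most_common(1)[0][0]

    def build_mpm_list(left_column_modes, above_row_modes, other_candidates, m):
        mpms = []
        for mode in [representative_mode(left_column_modes),
                     representative_mode(above_row_modes), *other_candidates]:
            if mode not in mpms:
                mpms.append(mode)     # representative modes enter the list first
            if len(mpms) == m:
                break
        return mpms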
Abstract:
A video coding device includes processor(s) configured to determine, for each of a plurality of bins of a value for a syntax element of a current transform coefficient, contexts using respective corresponding bins of values for the syntax element of previously coded transform coefficients. The processor(s) are configured to determine a context for an ith bin of the value for the syntax element of the current transform coefficient using a corresponding ith bin of a value for the syntax element of a previously coded transform coefficient. To use the corresponding ith bin of the value for the syntax element of the previously coded transform coefficient, the processor(s) are configured to use only the ith bin, and no other bins, of the value for the syntax element of the previously coded transform coefficient. ‘i’ represents a non-negative integer.
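A hedged sketch of this bin-wise context selection: the context for the i-th bin of the current coefficient's syntax-element value is derived only from the i-th bins of previously coded coefficients (here by counting them over a template); the context counts and index layout are assumptions for illustration.

    def context_for_bin(i, previous_bin_strings, contexts_per_bin=3):
        # previous_bin_strings: binarized syntax-element values (lists of 0/1) of
        # previously coded transform coefficients, e.g. from a local template.
        count = sum(bins[i] for bins in previous_bin_strings if len(bins) > i)
        ctx_offset = min(count, contexts_per_bin - 1)  # only the i-th bins are used
        return i * contexts_per_bin + ctx_offset       # separate context set per bin index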
Abstract:
In one example, a device includes a video coder (e.g., a video encoder or a video decoder) configured to determine that a block of video data is to be coded in accordance with a three-dimensional extension of High Efficiency Video Coding (HEVC), and, based on the determination that the block is to be coded in accordance with the three-dimensional extension of HEVC, disable temporal motion vector prediction for coding the block. The video coder may be further configured to, when the block comprises a bi-predicted block (B-block), determine that the B-block refers to a predetermined pair of pictures in a first reference picture list and a second reference picture list, and, based on the determination that the B-block refers to the predetermined pair, equally weight contributions from the pair of pictures when calculating a predictive block for the block.
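A minimal sketch combining the two behaviors described above; the BlockState fields, the "predetermined pair" flag, and the rounding average are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class BlockState:
        coded_with_3d_hevc_extension: bool
        is_bi_predicted: bool
        refs_are_predetermined_pair: bool
        temporal_mv_prediction_enabled: bool = True

    def predict_block(block, pred_list0, pred_list1):
        if block.coded_with_3d_hevc_extension:
            block.temporal_mv_prediction_enabled = False  # disable TMVP for this block
        if block.is_bi_predicted and block.refs_are_predetermined_pair:
            # Equal weighting: average the two reference-list predictions with rounding.
            return [[(a + b + 1) >> 1 for a, b in zip(r0, r1)]
                    for r0, r1 in zip(pred_list0, pred_list1)]
        return pred_list0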