Abstract:
In an example, a method of decoding video data using palette mode may include receiving a palette mode encoded block of video data of a picture. The method may include receiving encoded palette mode information for the palette mode encoded block of video data. The encoded palette mode information may be encoded according to a kth order non-uniform truncated exponential-Golomb (TEGk) coding scheme and includes a unary prefix code word and a suffix code word. The method may include entropy decoding the encoded palette mode information using the kth order non-uniform truncated exponential-Golomb (TEGk) coding scheme. The kth order non-uniform TEGk coding scheme is different from a kth order exponential-Golomb (EGk) coding scheme and a kth order truncated exponential-Golomb (TEGk) coding scheme. The method may include decoding the palette mode encoded block of video data using the decoded palette mode information.
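The abstract does not spell out the non-uniform truncation rule, but the prefix/suffix structure it names builds on the baseline kth order exponential-Golomb (EGk) scheme. Below is a minimal sketch of that baseline only, assuming a unary prefix of q zeros terminated by a one followed by a (q+k)-bit suffix; the non-uniform truncated variant described in the abstract would modify these code-word lengths.

```python
def egk_encode(value, k):
    # Unary prefix: q zeros then a 1, where q is chosen so that
    # value falls in the q-th EGk interval.
    q = 0
    while value >= ((1 << (q + 1)) - 1) << k:
        q += 1
    suffix = value - (((1 << q) - 1) << k)
    suffix_bits = format(suffix, f"0{q + k}b") if q + k else ""
    return "0" * q + "1" + suffix_bits

def egk_decode(bits, k):
    # Count the unary prefix of zeros, skip the terminating 1,
    # then read q + k suffix bits.
    q = bits.index("1")
    n = q + k
    start = q + 1
    suffix = int(bits[start:start + n], 2) if n else 0
    return (((1 << q) - 1) << k) + suffix
```

For example, `egk_encode(5, 1)` yields the prefix "01" and the suffix "11".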
Abstract:
A device for decoding video data is configured to determine, based on a chroma sampling format for the video data, that adaptive color transform is enabled for one or more blocks of the video data; determine a quantization parameter for the one or more blocks based on determining that the adaptive color transform is enabled; and dequantize transform coefficients based on the determined quantization parameter. A device for decoding video data is configured to determine for one or more blocks of the video data that adaptive color transform is enabled; receive, in a picture parameter set, one or more offset values in response to adaptive color transform being enabled; determine a quantization parameter for a first color component of a first color space based on a first of the one or more offset values; and dequantize transform coefficients based on the quantization parameter.
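A hedged sketch of the signalling flow the second device follows; the names `act_enabled` and `pps_act_qp_offsets`, and the simplified dequantization rule, are illustrative stand-ins rather than actual syntax elements.

```python
def component_qp(base_qp, act_enabled, pps_act_qp_offsets, comp_idx):
    # Per-component offsets are only present in the picture
    # parameter set when adaptive color transform is enabled
    # (parameter names are illustrative).
    if not act_enabled:
        return base_qp
    return base_qp + pps_act_qp_offsets[comp_idx]

def dequantize(levels, qp):
    # Simplified scalar dequantization: the step size roughly
    # doubles every 6 QP units, a common video-codec convention.
    step = 2.0 ** (qp / 6.0)
    return [lv * step for lv in levels]
```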
Abstract:
A device for decoding video data is configured to determine for one or more blocks of the video data that adaptive color transform is enabled; determine a quantization parameter for the one or more blocks; in response to a value of the quantization parameter being below a threshold, modify the quantization parameter to determine a modified quantization parameter; and dequantize transform coefficients based on the modified quantization parameter.
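The abstract leaves both the threshold and the modification unspecified; one plausible reading, which the sketch below assumes, is a clamp that keeps a quantization parameter from falling below a minimum after adaptive-color-transform offsets are applied.

```python
def modify_qp(qp, qp_min=0):
    # Assumed modification: when the derived QP is below the
    # threshold, clamp it up to the minimum before it is used
    # for dequantization.
    return max(qp, qp_min)
```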
Abstract:
A reduction in the number of binarizations and/or contexts used in context adaptive binary arithmetic coding (CABAC) for video coding is proposed. In particular, this disclosure proposes techniques that may lower the number of contexts used in CABAC by up to 56.
Abstract:
A video coder may include a current picture and a reference picture in a reference picture list. The video coder may determine a co-located block of the reference picture. The co-located block is co-located with a current block of the current picture. Furthermore, the video coder may derive a temporal motion vector predictor from the co-located block and may determine that the temporal motion vector predictor has sub-pixel precision. The video coder may right-shift the temporal motion vector predictor determined to have sub-pixel precision. In addition, the video coder may determine, based on the right-shifted temporal motion vector predictor, a predictive block within the current picture.
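Assuming quarter-sample motion-vector precision (as in HEVC-style coders; the abstract does not fix the precision), the right shift that discards the sub-pixel fraction can be sketched as:

```python
def right_shift_mv(mv, shift=2):
    # mv is an (x, y) pair in quarter-sample units (assumption);
    # an arithmetic right shift by 2 drops the fractional part,
    # rounding toward negative infinity for negative components.
    x, y = mv
    return (x >> shift, y >> shift)
```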
Abstract:
In an example, a process for coding video data includes coding, with a variable length code, a syntax element indicating depth modeling mode (DMM) information for coding a depth block of video data. The process also includes coding the depth block based on the DMM information.
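The abstract names no specific code table; the sketch below uses a hypothetical prefix-free variable length code mapping code words to DMM mode indices, just to show the shape of the decode loop.

```python
# Hypothetical prefix-free VLC table (the real table is not
# given in the abstract).
DMM_VLC = {"0": 1, "10": 2, "110": 3, "111": 4}

def decode_dmm_mode(bits):
    # Consume bits one at a time until a complete code word
    # from the table is matched.
    word = ""
    for b in bits:
        word += b
        if word in DMM_VLC:
            return DMM_VLC[word], len(word)
    raise ValueError("incomplete DMM code word")
```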
Abstract:
In an example, aspects of this disclosure relate to a method for decoding a reference index syntax element in a video decoding process that includes decoding at least one bin of a reference index value with a context coding mode of a context-adaptive binary arithmetic coding (CABAC) process. The method also includes decoding, when the reference index value comprises more bins than the at least one bin coded with the context coding mode, at least another bin of the reference index value with a bypass coding mode of the CABAC process, and binarizing the reference index value.
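One way to picture the split: binarize the reference index with truncated unary (an assumption; the abstract does not fix the binarization), context-code the leading bin, and bypass-code the remainder.

```python
def truncated_unary(value, c_max):
    # value ones, terminated by a zero unless value == c_max.
    bins = [1] * value
    if value < c_max:
        bins.append(0)
    return bins

def split_for_cabac(bins, num_context_bins=1):
    # The first bin(s) use context coding; any remaining bins
    # use the bypass coding mode.
    return bins[:num_context_bins], bins[num_context_bins:]
```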
Abstract:
In an example, a method of processing video data may include inferring a pixel scan order for a first palette mode encoded block of video data without receiving a block-level syntax element having a value representative of the pixel scan order for the first palette mode encoded block. The method may include decoding the first palette mode encoded block of video data using the inferred pixel scan order. The method may include receiving a block-level syntax element having a value representative of a pixel scan order for a second palette mode encoded block of video data. The method may include determining the pixel scan order for the second palette mode encoded block of video data based on the received block-level syntax element. The method may include decoding the second palette mode encoded block of video data using the determined pixel scan order.
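A sketch of the inference, with `palette_transpose_flag` as an assumed name for the block-level scan-order syntax element: when the element is absent from the parsed block, the decoder falls back to a default scan rather than parsing one.

```python
def pixel_scan_order(block_syntax):
    # block_syntax maps parsed syntax-element names to values;
    # the element name below is illustrative, not normative.
    flag = block_syntax.get("palette_transpose_flag")
    if flag is None:
        return "horizontal"  # inferred default when not signalled
    return "vertical" if flag else "horizontal"
```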
Abstract:
A video coder can control in-picture prediction across slice boundaries within a picture. In one example, a first syntax element can control whether in-picture prediction across slice boundaries is enabled for slices of a picture. If in-picture prediction across slice boundaries is enabled for the picture, then a second syntax element can control, for an individual slice, whether in-picture prediction across slice boundaries is enabled for that slice.
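The two-level gating can be sketched as follows: the slice-level flag is only consulted when the picture-level flag enables cross-boundary prediction at all (flag names invented for illustration).

```python
def cross_slice_prediction_allowed(pic_level_flag, slice_level_flag):
    # The slice-level control is meaningful only when the
    # picture-level control enables it first.
    return bool(pic_level_flag) and bool(slice_level_flag)
```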