Abstract:
In an example, a method of decoding video data using palette mode may include receiving a palette mode encoded block of video data of a picture. The method may include receiving encoded palette mode information for the palette mode encoded block of video data. The encoded palette mode information may be encoded according to a kth order non-uniform truncated exponential-Golomb (TEGk) coding scheme and may include a unary prefix code word and a suffix code word. The method may include entropy decoding the encoded palette mode information using the kth order non-uniform TEGk coding scheme. The kth order non-uniform TEGk coding scheme is different from both a kth order exponential-Golomb (EGk) coding scheme and a conventional kth order truncated exponential-Golomb coding scheme. The method may include decoding the palette mode encoded block of video data using the decoded palette mode information.
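For context, the C++ sketch below shows how a conventional kth order exponential-Golomb (EGk) codeword, a unary-style prefix followed by a suffix, is decoded. The non-uniform TEGk scheme of the abstract modifies this baseline in ways the abstract does not spell out, so this is only a point of comparison, not the claimed scheme; the BitReader type and the bit-string representation are assumptions made for illustration.

#include <cstdint>
#include <iostream>
#include <string>

// Minimal bit reader over a string of '0'/'1' characters (illustration only).
struct BitReader {
    const std::string& bits;
    size_t pos = 0;
    explicit BitReader(const std::string& b) : bits(b) {}
    int readBit() { return bits.at(pos++) - '0'; }
    uint32_t readBits(int n) {
        uint32_t v = 0;
        for (int i = 0; i < n; ++i) v = (v << 1) | readBit();
        return v;
    }
};

// Conventional kth order exponential-Golomb (EGk) decoding: a unary-coded
// prefix (run of leading zeros terminated by a one) followed by a
// (leadingZeros + k)-bit suffix.
uint32_t decodeEGk(BitReader& br, int k) {
    int leadingZeros = 0;
    while (br.readBit() == 0) ++leadingZeros;
    uint32_t value = (1u << (leadingZeros + k)) - (1u << k);
    return value + br.readBits(leadingZeros + k);
}

int main() {
    std::string bitstream = "0100";               // EG1 codeword for the value 2
    BitReader br(bitstream);
    std::cout << decodeEGk(br, /*k=*/1) << '\n';  // prints 2
}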
Abstract:
A reduction in the number of binarizations and/or contexts used in context adaptive binary arithmetic coding (CABAC) for video coding is proposed. In particular, this disclosure proposes techniques that may lower the number of contexts used in CABAC by up to 56.
Abstract:
A video coder may include a current picture and a reference picture in a reference picture list. The video coder may determine a co-located block of the reference picture. The co-located block is co-located with a current block of the current picture. Furthermore, the video coder may derive a temporal motion vector predictor from the co-located block and may determine that the temporal motion vector predictor has sub-pixel precision. The video coder may right-shift the temporal motion vector predictor determined to have sub-pixel precision. In addition, the video coder may determine, based on the right-shifted temporal motion vector predictor, a predictive block within the current picture.
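As a rough illustration, the C++ sketch below right-shifts a temporal motion vector predictor that is found to have sub-pixel precision. The quarter-sample storage, the fractional-bit test, and the shift amount of 2 are assumptions, not details taken from the abstract.

#include <iostream>

// Motion vector stored in quarter-sample units (an assumption for this sketch).
struct MotionVector {
    int x;
    int y;
};

// Illustrative check: any non-zero fractional part means sub-pixel precision.
bool hasSubPixelPrecision(const MotionVector& mv) {
    return (mv.x & 0x3) != 0 || (mv.y & 0x3) != 0;
}

// Right-shift the temporal motion vector predictor toward integer-sample
// precision; the shift amount of 2 (for quarter-sample storage) is an assumption.
MotionVector rightShiftTmvp(MotionVector tmvp) {
    if (hasSubPixelPrecision(tmvp)) {
        tmvp.x >>= 2;
        tmvp.y >>= 2;
    }
    return tmvp;
}

int main() {
    MotionVector tmvp{13, -6};                 // 3.25 and -1.5 samples
    MotionVector shifted = rightShiftTmvp(tmvp);
    std::cout << shifted.x << ", " << shifted.y << '\n';  // 3, -2 (arithmetic shift assumed)
}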
Abstract:
In an example, a method of processing video data may include inferring a pixel scan order for a first palette mode encoded block of video data without receiving a block-level syntax element having a value representative of the pixel scan order for the first palette mode encoded block. The method may include decoding the first palette mode encoded block of video data using the inferred pixel scan order. The method may include receiving a block-level syntax element having a value representative of a pixel scan order for a second palette mode encoded block of video data. The method may include determining the pixel scan order for the second palette mode encoded block of video data based on the received block-level syntax element. The method may include decoding the second palette mode encoded block of video data using the determined pixel scan order.
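A minimal decoder-side sketch of this behavior is given below in C++: when no block-level syntax element is parsed, the scan order is inferred (a horizontal traverse default is assumed here), and when the element is received its value selects the order. The syntax-element name and the inferred default are illustrative assumptions.

#include <iostream>
#include <optional>

enum class ScanOrder { HorizontalTraverse, VerticalTraverse };

// Decoder-side selection of the palette scan order. With no block-level
// syntax element present, the order is inferred (assumed default below);
// otherwise the signalled value determines the order.
ScanOrder determineScanOrder(std::optional<int> paletteTransposeFlag) {
    if (!paletteTransposeFlag.has_value()) {
        return ScanOrder::HorizontalTraverse;   // inferred, nothing parsed
    }
    return *paletteTransposeFlag ? ScanOrder::VerticalTraverse
                                 : ScanOrder::HorizontalTraverse;
}

int main() {
    // First block: no syntax element received, scan order is inferred.
    ScanOrder first = determineScanOrder(std::nullopt);
    // Second block: block-level syntax element received with value 1.
    ScanOrder second = determineScanOrder(1);
    std::cout << (first == ScanOrder::HorizontalTraverse) << ' '
              << (second == ScanOrder::VerticalTraverse) << '\n';  // prints: 1 1
}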
Abstract:
A method for motion vector difference (MVD) coding of screen content video data is disclosed. In one aspect, the method includes determining an MVD between a predicted motion vector and a current motion vector and generating a binary string comprising n bins by binarizing the MVD. The method further includes determining whether an absolute value of the MVD is greater than a threshold value and, in response to the absolute value of the MVD being greater than the threshold value, encoding a subset of the n bins via an exponential-Golomb code having an order that is greater than one.
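The C++ sketch below is one possible instance of this binarization: a truncated-unary prefix covers magnitudes up to the threshold, and when the absolute MVD exceeds the threshold the remainder is coded with an exponential-Golomb code of order greater than one (order 2 is assumed), followed by a sign bin. The exact bin split, threshold value, and order are assumptions.

#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <iostream>
#include <string>

// Append the kth order exponential-Golomb codeword for v to 'out'.
void appendEGk(uint32_t v, int k, std::string& out) {
    uint32_t m = (v >> k) + 1;                  // quotient, biased by one
    int numBits = 0;
    for (uint32_t t = m; t > 0; t >>= 1) ++numBits;
    out.append(numBits - 1, '0');               // zero prefix
    for (int i = numBits - 1; i >= 0; --i)      // binary of the quotient
        out.push_back(((m >> i) & 1) ? '1' : '0');
    for (int i = k - 1; i >= 0; --i)            // k least significant bits of v
        out.push_back(((v >> i) & 1) ? '1' : '0');
}

// Binarize one MVD component: a truncated-unary prefix signals whether the
// magnitude exceeds the threshold; if it does, the remainder is coded with an
// exponential-Golomb code whose order is greater than one, then a sign bin.
std::string binarizeMvd(int predictedMv, int currentMv, uint32_t threshold, int order) {
    int mvd = currentMv - predictedMv;
    uint32_t absMvd = static_cast<uint32_t>(std::abs(mvd));
    std::string bins;
    uint32_t cMax = threshold + 1;
    uint32_t prefix = std::min(absMvd, cMax);
    bins.append(prefix, '1');
    if (prefix < cMax) bins.push_back('0');
    if (absMvd > threshold)                     // exp-Golomb suffix bins
        appendEGk(absMvd - cMax, order, bins);
    if (absMvd != 0) bins.push_back(mvd < 0 ? '1' : '0');  // sign bin
    return bins;
}

int main() {
    // |MVD| = 7 exceeds the threshold of 1, so an order-2 exponential-Golomb
    // suffix follows the prefix bins.
    std::cout << binarizeMvd(/*predicted=*/3, /*current=*/10,
                             /*threshold=*/1, /*order=*/2) << '\n';
}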
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data including a plurality of pictures is described. The method includes performing intra-picture prediction on a block of one of the pictures to generate a prediction unit. Performing the intra-picture prediction includes selecting a reference block for intra-block copy prediction of a coding tree unit (CTU). The reference block is selected from a plurality of encoded blocks, and blocks within the CTU encoded with bi-prediction are excluded from selection as the reference block. Performing the intra-picture prediction further includes performing intra-block copy prediction with the selected reference block to generate the prediction unit. The method also includes generating syntax elements encoding the prediction unit based on the performed intra-picture prediction.
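As a sketch of the selection rule, the C++ fragment below filters candidate reference blocks for intra-block copy, excluding candidates inside the current CTU that were encoded with bi-prediction. The candidate structure and the cost metric used to choose among the remaining blocks are assumptions.

#include <iostream>
#include <optional>
#include <vector>

enum class PredMode { Intra, IntraBlockCopy, InterUni, InterBi };

// Minimal description of an already-encoded candidate block (illustrative).
struct EncodedBlock {
    PredMode mode;
    bool insideCurrentCtu;   // whether the block lies within the current CTU
    double matchCost;        // e.g. SAD against the current block (assumed metric)
};

// Select a reference block for intra-block copy prediction: candidates inside
// the current CTU that were encoded with bi-prediction are excluded from
// selection, and the cheapest remaining candidate is chosen.
std::optional<EncodedBlock> selectIbcReference(const std::vector<EncodedBlock>& candidates) {
    std::optional<EncodedBlock> best;
    for (const EncodedBlock& b : candidates) {
        if (b.insideCurrentCtu && b.mode == PredMode::InterBi)
            continue;                             // excluded from selection
        if (!best || b.matchCost < best->matchCost)
            best = b;
    }
    return best;
}

int main() {
    std::vector<EncodedBlock> candidates = {
        { PredMode::InterBi,        true,  1.0 },  // skipped despite lowest cost
        { PredMode::IntraBlockCopy, true,  4.0 },
        { PredMode::InterUni,       false, 2.5 },
    };
    std::optional<EncodedBlock> ref = selectIbcReference(candidates);
    if (ref) std::cout << ref->matchCost << '\n';  // prints 2.5
}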
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data includes obtaining video data at an encoder, and determining to perform intra-picture prediction on the video data, using intra-block copy prediction, to generate a plurality of encoded video pictures. The method also includes performing the intra-picture prediction on the video data using the intra-block copy prediction, and, in response to determining to perform the intra-picture prediction on the video data using the intra-block copy prediction, disabling at least one of inter-picture bi-prediction or inter-picture uni-prediction for the plurality of encoded video pictures. The method also includes generating the plurality of encoded video pictures based on the obtained video data according to the performed intra-block copy prediction.
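A minimal C++ sketch of the resulting encoder configuration follows. Disabling bi-prediction (rather than uni-prediction) is only one of the options the abstract allows and is chosen here for illustration; the configuration structure is an assumption.

#include <iostream>

// Encoder prediction-tool configuration (illustrative structure).
struct PredictionConfig {
    bool intraBlockCopyEnabled = false;
    bool interBiPredictionEnabled = true;
    bool interUniPredictionEnabled = true;
};

// When the encoder decides to use intra-block copy prediction, at least one of
// inter-picture bi-prediction or uni-prediction is disabled for the encoded
// pictures; this sketch disables bi-prediction (an assumed choice).
PredictionConfig configureForIntraBlockCopy(PredictionConfig cfg) {
    cfg.intraBlockCopyEnabled = true;
    cfg.interBiPredictionEnabled = false;   // disable inter-picture bi-prediction
    return cfg;
}

int main() {
    PredictionConfig cfg = configureForIntraBlockCopy(PredictionConfig{});
    std::cout << cfg.intraBlockCopyEnabled << ' '
              << cfg.interBiPredictionEnabled << ' '
              << cfg.interUniPredictionEnabled << '\n';   // prints: 1 0 1
}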
Abstract:
An example method of decoding video data includes determining a palette for decoding a block of video data, where the palette includes one or more palette entries each having a respective palette index, and determining a first plurality of palette indices for first pixels of the block of video data. The method further includes enabling a palette coding mode when a run length of a run of a second plurality of palette indices, for second pixels of the block of video data being decoded relative to the first plurality of palette indices, meets a run length threshold, and decoding the run of the second plurality of palette indices relative to the first plurality of palette indices using the palette coding mode.
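The C++ sketch below illustrates the gating step: the relative palette coding mode is used only when the run length meets the threshold, and the run of second palette indices is then decoded by copying from the first indices. The copy-from-above interpretation and the threshold value are assumptions.

#include <cstddef>
#include <iostream>
#include <vector>

// The relative ("copy previous indices") palette coding mode is enabled for a
// run only when the run length meets the threshold (assumed rule).
bool paletteCopyModeEnabled(std::size_t runLength, std::size_t runLengthThreshold) {
    return runLength >= runLengthThreshold;
}

// Decode a run of palette indices for the second pixels relative to the first
// pixels (here: copy each index from the pixel directly above).
std::vector<int> decodeRunRelativeToAbove(const std::vector<int>& aboveRowIndices,
                                          std::size_t startPos, std::size_t runLength) {
    std::vector<int> decoded;
    for (std::size_t i = 0; i < runLength; ++i)
        decoded.push_back(aboveRowIndices[startPos + i]);
    return decoded;
}

int main() {
    std::vector<int> aboveRow = {0, 2, 2, 1, 3, 3, 3, 0};   // first palette indices
    std::size_t runLength = 4, threshold = 4;
    if (paletteCopyModeEnabled(runLength, threshold)) {
        std::vector<int> run = decodeRunRelativeToAbove(aboveRow, 2, runLength);
        for (int idx : run) std::cout << idx << ' ';        // prints: 2 1 3 3
    }
    std::cout << '\n';
}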
Abstract:
A video encoder is configured to determine a first and a second binary string for a value indicating the position of the last significant coefficient within a video block of size T. A video decoder is configured to determine a value indicating the position of a last significant coefficient within a video block of size T based on a first and a second binary string. In one example, the first binary string is based on a truncated unary coding scheme with a maximum bit length of 2 log2(T)−1, and the second binary string is based on a fixed length coding scheme with a maximum bit length of log2(T)−2.
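The C++ sketch below uses the HEVC-style last-position grouping as one concrete binarization that satisfies these bounds: the truncated-unary prefix value never exceeds 2 log2(T)−1 and the fixed-length suffix never exceeds log2(T)−2 bits. Treating the HEVC grouping as the intended scheme is an assumption.

#include <cstdint>
#include <iostream>
#include <string>

// Split a last-significant-coefficient position L (0 <= L < T) into a
// truncated-unary prefix and a fixed-length suffix.
void binarizeLastPosition(uint32_t L, uint32_t T, std::string& prefix, std::string& suffix) {
    uint32_t log2T = 0;
    while ((1u << log2T) < T) ++log2T;

    // Group index: identity for L < 4, otherwise derived from the MSB of L.
    uint32_t group;
    if (L < 4) {
        group = L;
    } else {
        uint32_t msb = 0;
        for (uint32_t t = L; t > 1; t >>= 1) ++msb;
        group = 2 * msb + ((L >> (msb - 1)) & 1);
    }

    // Truncated-unary prefix: 'group' ones, then a terminating zero unless the
    // maximum value 2*log2(T)-1 is reached.
    uint32_t cMax = 2 * log2T - 1;
    prefix.assign(group, '1');
    if (group < cMax) prefix.push_back('0');

    // Fixed-length suffix for groups above 3: (group/2 - 1) bits giving the
    // offset of L within its group, at most log2(T)-2 bits.
    suffix.clear();
    if (group > 3) {
        int suffixLen = static_cast<int>((group >> 1) - 1);
        uint32_t minInGroup = (2 + (group & 1)) << suffixLen;
        uint32_t offset = L - minInGroup;
        for (int i = suffixLen - 1; i >= 0; --i)
            suffix.push_back(((offset >> i) & 1) ? '1' : '0');
    }
}

int main() {
    std::string prefix, suffix;
    binarizeLastPosition(/*L=*/27, /*T=*/32, prefix, suffix);
    // Prefix has at most 2*log2(32)-1 = 9 bins, suffix at most log2(32)-2 = 3 bits.
    std::cout << "prefix=" << prefix << " suffix=" << suffix << '\n';
}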