Abstract:
A method for motion vector difference (MVD) coding of screen content video data is disclosed. In one aspect, the method includes determining an MVD between a predicted motion vector and a current motion vector, and generating a binary string comprising n bins by binarizing the MVD. The method further includes determining whether an absolute value of the MVD is greater than a threshold value and, in response to the absolute value of the MVD being greater than the threshold value, encoding a subset of the n bins with an exponential Golomb code having an order greater than one.
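As a rough illustration of the kind of binarization described above, the Python sketch below codes small MVD magnitudes with a truncated-unary prefix and switches to an order-k exponential Golomb code (k = 2 here) once the magnitude exceeds a threshold. The threshold value, the order k, the prefix structure, and the sign-bin placement are all assumptions for illustration; the abstract only states that some bins are coded with an exponential Golomb code of order greater than one when the absolute MVD exceeds a threshold.

```python
def exp_golomb_k(value: int, k: int) -> str:
    """Order-k exponential Golomb codeword for a non-negative integer."""
    assert value >= 0 and k >= 0
    shifted = value + (1 << k)               # map value into the range starting at 2^k
    num_bits = shifted.bit_length()
    return "0" * (num_bits - k - 1) + format(shifted, "b")


def binarize_mvd(mvd: int, threshold: int = 16, k: int = 2) -> str:
    """Hypothetical MVD binarization: truncated-unary prefix up to `threshold`,
    then an escape prefix followed by an order-k exp-Golomb code of the excess."""
    magnitude = abs(mvd)
    sign_bin = "" if magnitude == 0 else ("1" if mvd < 0 else "0")
    if magnitude <= threshold:
        return "1" * magnitude + "0" + sign_bin
    escape = "1" * (threshold + 1)           # signals |MVD| > threshold
    return escape + exp_golomb_k(magnitude - threshold - 1, k) + sign_bin
```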
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data including a plurality of pictures is described. The method includes performing intra-picture prediction on a block of one of the pictures to generate a prediction unit. Performing the intra-picture prediction includes selecting a reference block for intra-block copy prediction of a coding tree unit (CTU). The reference block is selected from a plurality of encoded blocks, and blocks within the CTU encoded with bi-prediction are excluded from selection as the reference block. Performing the intra-picture prediction further includes performing intra-block copy prediction with the selected reference block to generate the prediction unit. The method also includes generating syntax elements encoding the prediction unit based on the performed intra-picture prediction.
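A minimal sketch of the exclusion rule described above, assuming a simple list of previously encoded candidate blocks tagged with their prediction mode; the mode labels and the rule for choosing among the remaining candidates (first eligible here) are illustrative assumptions, not the disclosed selection criterion.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CodedBlock:
    x: int
    y: int
    prediction_mode: str   # e.g. "intra", "ibc", "uni", or "bi" (placeholder labels)


def select_ibc_reference(candidates: List[CodedBlock]) -> Optional[CodedBlock]:
    """Exclude blocks coded with bi-prediction from the intra-block copy
    reference candidates; return the first remaining candidate (an assumed
    tie-break, since the abstract does not fix a selection criterion)."""
    eligible = [b for b in candidates if b.prediction_mode != "bi"]
    return eligible[0] if eligible else None
```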
Abstract:
Techniques and systems are provided for encoding and decoding video data. For example, a method of encoding video data includes obtaining video data at an encoder and determining to perform intra-picture prediction on the video data, using intra-block copy prediction, to generate a plurality of encoded video pictures. The method also includes performing the intra-picture prediction on the video data using the intra-block copy prediction and, in response to determining to perform the intra-picture prediction on the video data using the intra-block copy prediction, disabling at least one of inter-picture bi-prediction or inter-picture uni-prediction for the plurality of encoded video pictures. The method also includes generating the plurality of encoded video pictures from the obtained video data according to the performed intra-block copy prediction.
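The encoder-side decision described above can be pictured as a small configuration step. In this sketch only bi-prediction is disabled, which is one of the options the abstract allows (disabling uni-prediction, or both, would be equally consistent); the configuration keys are placeholders.

```python
def configure_prediction_tools(use_intra_block_copy: bool) -> dict:
    """Hypothetical encoder configuration: when intra-block copy is selected,
    disable at least one inter-picture prediction mode (bi-prediction here)."""
    config = {
        "intra_block_copy": use_intra_block_copy,
        "inter_bi_prediction": True,
        "inter_uni_prediction": True,
    }
    if use_intra_block_copy:
        config["inter_bi_prediction"] = False   # assumed choice of which mode to disable
    return config
```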
Abstract:
An example method of decoding video data includes determining a palette for decoding a block of video data, where the palette includes one or more palette entries each having a respective palette index. The method further includes determining a first plurality of palette indices for first pixels of the block of video data, enabling a palette coding mode when a run length of a run of a second plurality of palette indices, decoded for second pixels of the block relative to the first plurality of palette indices, meets a run length threshold, and decoding the run of the second plurality of palette indices relative to the first plurality of palette indices using the palette coding mode.
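One way to picture the run-based mode described above is the sketch below, which copies a run of palette indices from already-decoded indices only when the run length meets the threshold, and then maps the indices to sample values through the palette. Treating "relative to the first plurality of palette indices" as a direct copy is an assumption for illustration.

```python
from typing import List


def decode_index_run(first_indices: List[int], run_length: int,
                     run_length_threshold: int) -> List[int]:
    """Decode a run of palette indices relative to previously decoded indices,
    but only if the run is long enough for the mode to be enabled."""
    if run_length < run_length_threshold:
        raise ValueError("run length below threshold; palette coding mode not enabled")
    return first_indices[:run_length]          # assumed: copy from the earlier indices


def reconstruct_samples(palette: List[int], indices: List[int]) -> List[int]:
    """Map each decoded palette index to its palette entry (sample value)."""
    return [palette[i] for i in indices]
```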
Abstract:
This disclosure describes techniques relevant to HTTP streaming of media data. According to these techniques, a server device may signal an open decoding refresh (ODR) random access point (RAP) for a media segmentation of a movie representation. At least one frame of the media segmentation following the ODR RAP frame in decoding order may not be correctly decoded, while each frame of the media segmentation following the ODR RAP frame in display order can be correctly decoded without relying on content of frames prior to the ODR RAP in display order. According to the techniques of this disclosure, a client device may communicate a request to a server device for the streaming of media data based on signaling of the ODR RAP. Also according to the techniques of this disclosure, a client device may commence decoding and/or playback of the movie representation based on signaling of the ODR RAP.
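The decoding and playback behavior implied above can be sketched as follows: after random access at a signaled ODR RAP, the client decodes from the RAP onward but presents only frames at or after the RAP in display order, since frames that follow the RAP in decoding order yet precede it in display order may not decode correctly. The Frame fields are placeholders, not a real streaming API.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    decode_order: int
    display_order: int


def frames_to_present(segment_frames: List[Frame], odr_rap: Frame) -> List[Frame]:
    """Keep only frames at or after the ODR RAP in display order and return
    them in presentation order; earlier-displaying frames are dropped because
    they may not be decodable after random access."""
    return sorted(
        (f for f in segment_frames if f.display_order >= odr_rap.display_order),
        key=lambda f: f.display_order,
    )
```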
Abstract:
This disclosure proposes various techniques for limiting the number of bins that are coded using an adaptive context model with context adaptive binary arithmetic coding (CABAC). In particular, this disclosure proposes to limit the number of context-coded bins used when coding level information of transform coefficients in a video coding process.
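A minimal sketch of the limit described above: only the first N bins of the coefficient-level information are coded with adaptive context models, and any remaining bins fall back to bypass coding. The two callbacks stand in for a CABAC engine's regular and bypass modes and are assumptions, not a real library API.

```python
from typing import Callable, Sequence


def code_level_bins(bins: Sequence[int],
                    max_context_coded_bins: int,
                    context_encode: Callable[[int], None],
                    bypass_encode: Callable[[int], None]) -> None:
    """Code at most `max_context_coded_bins` bins with the adaptive context
    model; code the rest in bypass mode (no context adaptation)."""
    for i, b in enumerate(bins):
        if i < max_context_coded_bins:
            context_encode(b)    # regular (context-coded) CABAC bin
        else:
            bypass_encode(b)     # bypass bin
```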
Abstract:
In general, techniques are described for performing transform dependent de-blocking filtering, which may be implemented by a video encoding device. The video encoding device may apply a transform to a video data block to generate a block of transform coefficients, apply a quantization parameter to quantize the transform coefficients, and reconstruct the block of video data from the quantized transform coefficients. The video encoding device may further determine at least one offset used in controlling de-blocking filtering based on the size of the applied transform, and perform de-blocking filtering on the reconstructed block of video data based on the determined offset. Additionally, the video encoding device may specify a flag in a picture parameter set (PPS) that indicates whether the offset is specified in one or both of the PPS and a header of an independently decodable unit.
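The transform-dependent offset and its signalling can be pictured with the sketch below. The specific size-to-offset mapping, and the simplification that the PPS flag selects between a PPS-level and a slice-header-level offset, are assumptions (the abstract allows the offset to appear in one or both places).

```python
def offset_for_transform_size(transform_size: int) -> int:
    """Hypothetical mapping from transform size to a de-blocking offset."""
    return {4: 0, 8: 1, 16: 2, 32: 3}.get(transform_size, 0)


def resolve_deblocking_offset(offset_in_pps_flag: bool,
                              pps_offset: int,
                              slice_header_offset: int) -> int:
    """Pick the offset from the PPS when the flag says it is carried there,
    otherwise from the header of the independently decodable unit."""
    return pps_offset if offset_in_pps_flag else slice_header_offset
```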
Abstract:
A video encoder is configured to determine first and second binary strings for a value indicating the position of the last significant coefficient within a video block of size T. A video decoder is configured to determine a value indicating the position of a last significant coefficient within a video block of size T based on the first and second binary strings. In one example, the first binary string is based on a truncated unary coding scheme with a maximum bit length of 2 log2(T)−1, and the second binary string is based on a fixed length coding scheme with a maximum bit length of log2(T)−2.
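The two binary strings described above match an HEVC-style group-index binarization, sketched below for one coordinate of the last significant coefficient in a T x T block: a truncated-unary prefix of at most 2 log2(T)−1 bins and a fixed-length suffix of at most log2(T)−2 bits. Using the HEVC group-index mapping is an assumption that is consistent with those maximum lengths, not a statement of the claimed scheme.

```python
from math import log2
from typing import Tuple


def binarize_last_position(pos: int, block_size: int) -> Tuple[str, str]:
    """Return (prefix, suffix) for a last-significant-coefficient coordinate
    pos in [0, block_size - 1]; the prefix is truncated unary over a group
    index, the suffix is a fixed-length offset within the group."""
    log2_t = int(log2(block_size))
    c_max = 2 * log2_t - 1                        # maximum truncated-unary length
    if pos < 4:
        group, suffix_bits, suffix_val = pos, 0, 0
    else:
        msb = pos.bit_length() - 1
        group = 2 * msb + ((pos >> (msb - 1)) & 1)
        suffix_bits = msb - 1                     # at most log2(T) - 2 bits
        suffix_val = pos & ((1 << suffix_bits) - 1)
    prefix = "1" * group + ("0" if group < c_max else "")
    suffix = format(suffix_val, f"0{suffix_bits}b") if suffix_bits else ""
    return prefix, suffix
```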
Abstract:
The techniques of this disclosure are generally related to parallel coding of video units that reside along rows or columns of blocks in largest coding units. For example, the techniques include removing intra-prediction dependencies between two video units in different rows or columns to allow for parallel coding of rows or columns of the video units.
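The effect of removing the cross-row dependency can be sketched as an availability rule plus a row-parallel loop. Both the availability rule and the thread-pool parallelization below are illustrative assumptions, not the disclosed technique itself.

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List, Sequence


def neighbor_available_for_intra(current_row: int, neighbor_row: int,
                                 remove_cross_row_dependency: bool) -> bool:
    """With the cross-row intra-prediction dependency removed, a neighbor in a
    different row is treated as unavailable, so rows no longer wait on each other."""
    return not (remove_cross_row_dependency and neighbor_row != current_row)


def code_rows_in_parallel(rows: Sequence[object],
                          code_row: Callable[[object], object]) -> List[object]:
    """Once rows are independent, each one can be coded by a separate worker."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(code_row, rows))
```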
Abstract:
In various aspects, this disclosure is directed to an example method for decoding video data. The example method includes determining candidate blocks for a block vector prediction process from a subset of the candidate blocks used for an advanced motion vector prediction mode or a merge mode of a motion vector prediction process; performing the block vector prediction process for a block of video data using the determined candidate blocks; and decoding the block of video data using intra block copy based on the block vector prediction process.
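A sketch of the candidate-derivation step described above: block vector predictor candidates are drawn from a subset of the spatial candidates that AMVP or merge mode would use. Restricting the subset to the left and above neighbours, and picking the first available candidate as the predictor, are assumptions for illustration.

```python
from typing import Dict, List, Sequence, Tuple

BlockVector = Tuple[int, int]


def block_vector_candidates(amvp_or_merge_candidates: Dict[str, BlockVector],
                            subset: Sequence[str] = ("left", "above")) -> List[BlockVector]:
    """Keep only the candidates at the assumed subset of neighbour positions."""
    return [amvp_or_merge_candidates[p] for p in subset
            if p in amvp_or_merge_candidates]


def predict_block_vector(candidates: List[BlockVector],
                         fallback: BlockVector = (0, 0)) -> BlockVector:
    """Use the first available candidate as the block vector predictor
    (the selection rule is an assumption)."""
    return candidates[0] if candidates else fallback
```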