Abstract:
An apparatus for coding video data according to certain aspects includes a memory and a processor in communication with the memory. The memory stores video block information. The video block information includes reference layer block information. The processor determines, based on a parameter of the video block information, a transform function that may be used to code the video block information. The processor may encode or decode the video block information. The transform function may be an alternative transform when the parameter is a predetermined value and a primary transform when the parameter is not the predetermined value. The alternative transform includes one of: a discrete sine transform (DST), a Type-I DST, a Type-III DST, a Type-IV DST, a Type-VII DST, a discrete cosine transform (DCT), a DCT of a different type, and a Karhunen-Loeve transform (KLT).
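The selection logic can be illustrated with a short sketch. The parameter name, the predetermined value, and the concrete pairing of DST-VII as the alternative transform and DCT-II as the primary transform below are assumptions for illustration, not the claimed method itself.

```python
import math

def dct_ii_matrix(n):
    """Orthonormal Type-II DCT basis; row k holds the k-th basis vector."""
    return [[(math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
             * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
             for i in range(n)] for k in range(n)]

def dst_vii_matrix(n):
    """Orthonormal Type-VII DST basis (the alternative transform in this sketch)."""
    return [[math.sqrt(4.0 / (2 * n + 1))
             * math.sin(math.pi * (2 * k + 1) * (i + 1) / (2 * n + 1))
             for i in range(n)] for k in range(n)]

def select_transform(block_size, parameter, predetermined_value=1):
    """Return the alternative transform when the parameter equals the
    predetermined value, and the primary transform otherwise."""
    if parameter == predetermined_value:
        return dst_vii_matrix(block_size)   # alternative transform (DST-VII here)
    return dct_ii_matrix(block_size)        # primary transform (DCT-II here)
```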
Abstract:
In palette-based coding, a video coder may form a so-called “palette” as a table of colors representing the video data of a given block. The video coder may code index values for one or more pixel values of a current block of video data, where the index values indicate entries in the palette that represent the pixel values of the current block. A method includes determining a number of entries in a palette, and determining whether a block of video data includes any escape pixels not associated with any entry in the palette. The method includes, responsive to determining that the number of entries is one and that the block does not include any escape pixels, bypassing decoding of index values for the pixel values of the block and determining the pixel values of the block to be equal to the one entry in the palette.
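A minimal decoder-side sketch of the described shortcut follows. The structure names (palette as a list of color values, escape_present as an already-decoded flag, decode_index as a stand-in for the entropy-decoding call) are assumptions for illustration.

```python
def decode_palette_block(palette, escape_present, block_w, block_h, decode_index):
    """Return the block's pixel values for a palette-coded block.

    palette        -- list of color values (the palette entries)
    escape_present -- already-decoded flag: does the block contain escape pixels?
    decode_index   -- stand-in callable for entropy-decoding one index value
    """
    if len(palette) == 1 and not escape_present:
        # Single palette entry and no escape pixels: index decoding is
        # bypassed and every pixel equals the one palette entry.
        return [[palette[0]] * block_w for _ in range(block_h)]
    # Otherwise decode an index per pixel (escape-pixel handling omitted).
    return [[palette[decode_index()] for _ in range(block_w)]
            for _ in range(block_h)]
```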
Abstract:
A method of decoding video data includes decoding a first block of video data to produce a block of reconstructed luma residual values and a block of predicted chroma residual values, wherein the first block of video data has a 4:2:0 or a 4:2:2 chroma sub-sampling format. The method further includes performing a color residual prediction process to reconstruct a block of chroma residual values for the first block of video data using a subset of the reconstructed luma residual values as luma predictors for the block of predicted chroma residual values.
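A hedged sketch of the reconstruction step, assuming a 4:2:0 block, the co-located (top-left of each 2x2 group) luma residual sample as the luma predictor, and a signaled scale factor applied as alpha/8; those particulars are illustrative choices, not taken from the abstract.

```python
def reconstruct_chroma_residual(luma_residual, predicted_chroma_residual, alpha):
    """luma_residual: N x N samples; predicted_chroma_residual: N/2 x N/2 samples."""
    n = len(predicted_chroma_residual)
    chroma_residual = []
    for y in range(n):
        row = []
        for x in range(n):
            # Subset of the luma residual: one co-located sample per 2x2 group.
            luma_pred = luma_residual[2 * y][2 * x]
            # Reconstruct the chroma residual from its predicted value plus a
            # scaled luma predictor (scale assumed to be alpha/8).
            row.append(predicted_chroma_residual[y][x] + (alpha * luma_pred) // 8)
        chroma_residual.append(row)
    return chroma_residual
```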
Abstract:
In some examples, a video coder employs a two-level technique to code information that identifies the position, within a block of transform coefficients, of the coefficient that is the last significant coefficient (LSC) for the block according to a scanning order associated with the block. For example, the video coder may code a sub-block position that identifies the position, within the block, of the sub-block that includes the LSC, and code a coefficient position that identifies the position of the LSC within that sub-block.
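The two-level split can be shown with a small sketch; the 4x4 sub-block size is an assumption.

```python
def split_lsc_position(lsc_x, lsc_y, sub_block_size=4):
    """Split an LSC position into (sub-block position, position within sub-block)."""
    sub_block_pos = (lsc_x // sub_block_size, lsc_y // sub_block_size)
    pos_in_sub_block = (lsc_x % sub_block_size, lsc_y % sub_block_size)
    return sub_block_pos, pos_in_sub_block

# Example: an LSC at (5, 9) lies in sub-block (1, 2), at offset (1, 1)
# within that sub-block.
assert split_lsc_position(5, 9) == ((1, 2), (1, 1))
```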
Abstract:
In general, techniques are described for coding a current video block within a current picture based on a predictor block within the current picture, the predictor block identified by a block vector. The techniques include identifying an unavailable pixel of the predictor block, obtaining a value for the unavailable pixel based on at least one neighboring reconstructed pixel of the unavailable pixel, and coding the current video block based on a version of the predictor block that includes the obtained value for the unavailable pixel. The unavailable pixel may be located outside of a reconstructed region of the current picture.
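A sketch of obtaining a value for an unavailable predictor pixel follows; choosing the nearest reconstructed pixel to the left, with a mid-gray fallback, is an assumption for illustration.

```python
def pad_unavailable_pixel(picture, reconstructed, x, y, default=128):
    """picture: 2D samples; reconstructed: 2D booleans marking the already
    reconstructed region of the current picture. Returns a value to use at
    (x, y) when building the predictor block."""
    if reconstructed[y][x]:
        return picture[y][x]          # pixel is available, use it directly
    # Unavailable pixel: copy the nearest reconstructed neighbor to the left.
    for nx in range(x - 1, -1, -1):
        if reconstructed[y][nx]:
            return picture[y][nx]
    return default                    # no reconstructed neighbor on this row
```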
Abstract:
In an example, a method of decoding video data includes generating a residual block of a picture based on a predicted residual block, including reconstructing one or more residual values of the residual block based on one or more predicted residual values of the residual block. The method also includes generating a current block of the picture based on a combination of the residual block and a prediction block of the picture.
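A minimal sketch of the two-stage reconstruction, assuming the bitstream carries a per-sample difference (residual_difference) that is added to the predicted residual values; that input and the simple sample-wise addition are assumptions.

```python
def reconstruct_block(predicted_residual, residual_difference, prediction_block):
    h, w = len(prediction_block), len(prediction_block[0])
    # Stage 1: reconstruct the residual block from its predicted residual
    # values (plus an assumed decoded per-sample difference).
    residual = [[predicted_residual[y][x] + residual_difference[y][x]
                 for x in range(w)] for y in range(h)]
    # Stage 2: generate the current block as the combination of the
    # residual block and the prediction block.
    return [[prediction_block[y][x] + residual[y][x]
             for x in range(w)] for y in range(h)]
```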
Abstract:
In one example, a device includes a video coder configured to code a first set of syntax elements for the coefficients of a residual block of video data, and code, using at least a portion of the first set of syntax elements as context data, a second set of syntax elements for the coefficients, wherein the first set of syntax elements each correspond to a first type of syntax element for the coefficients, and wherein the second set of syntax elements each correspond to a second, different type of syntax element for the coefficients. For example, the first set of syntax elements may comprise values indicating whether the coefficients are significant (that is, have non-zero level values), and the second set of syntax elements may comprise values indicating whether level values for the coefficients have absolute values greater than one.
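A hedged sketch of the cross-type context selection: already-coded significance flags (the first set of syntax elements) choose a context for a coefficient's greater-than-one flag (the second set). The neighborhood (the two previous coefficients in scan order) and the context indexing are assumptions.

```python
def gt1_context_index(sig_flags, scan_pos):
    """sig_flags: 0/1 significance flags (first syntax element type) in scan
    order; returns a context index for the greater-than-one flag (second
    syntax element type) of the coefficient at scan_pos."""
    prev = sig_flags[max(0, scan_pos - 2):scan_pos]
    return sum(prev)   # 0, 1 or 2 significant neighbors -> context 0, 1 or 2
```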
Abstract:
In an example, aspects of this disclosure relate to a method for coding video data that includes predicting a first non-square partition of a current block of video data using a first intra-prediction mode, where the first non-square partition has a first size. The method also includes predicting a second non-square partition of the current block of video data using a second intra-prediction mode, where the second non-square partition has a second size different than the first size. The method also includes coding the current block based on the predicted first and second non-square partitions.
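A sketch of predicting two differently sized non-square partitions of one block with different intra modes; the partition geometry (a 16x4 strip and a 16x12 strip of a 16x16 block) and the simplified "modes" (a DC-style average and a horizontal fill) are assumptions.

```python
def predict_partitions(top_refs, left_refs, n=16):
    """top_refs: n reference samples above the block; left_refs: n reference
    samples to its left. Returns n rows of n predicted samples."""
    # First non-square partition (n x 4): DC-style average of the top row.
    dc = sum(top_refs) // len(top_refs)
    part1 = [[dc] * n for _ in range(4)]
    # Second non-square partition (n x (n - 4)): a different mode, here a
    # horizontal fill from the left reference column.
    part2 = [[left_refs[y + 4]] * n for y in range(n - 4)]
    return part1 + part2
```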
Abstract:
A video encoding device is configured to obtain an N by N array of residual values for a luma component and a corresponding N/2 by N array of residual values for a chroma component. The video encoding device may partition the N/2 by N array of residual values for the chroma component into two N/2 by N/2 sub-arrays of chroma residual values. The video encoding device may further partition the sub-arrays of chroma residual values based on the partitioning of the array of residual values for the luma component. The video encoding device may perform a transform on each of the sub-arrays of chroma residual values to generate transform coefficients. A video decoding device may use data defining sub-arrays of transform coefficients to perform a reciprocal process to generate residual values.
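A sketch of the chroma split and per-sub-array transform, assuming the N/2 by N chroma array is stored as N rows of N/2 samples and split into top and bottom halves; the transform itself is left as a caller-supplied stand-in.

```python
def split_chroma_residual(chroma_residual):
    """chroma_residual: N rows of N/2 residual samples (4:2:2 chroma)."""
    n = len(chroma_residual)            # N rows
    top = chroma_residual[:n // 2]      # upper N/2 x N/2 sub-array
    bottom = chroma_residual[n // 2:]   # lower N/2 x N/2 sub-array
    return top, bottom

def transform_chroma_residual(chroma_residual, transform):
    # Apply the caller-supplied N/2-point 2D transform to each sub-array.
    return [transform(sub) for sub in split_chroma_residual(chroma_residual)]
```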
Abstract:
During a video encoding or decoding process, a prediction block is generated for a coding unit (CU). The CU may have two or more prediction units (PUs). A computing device selects a neighbor region size. After the computing device selects the neighbor region size, samples in a transition zone of the prediction block are identified. Samples associated with a first PU are in the transition zone if neighbor regions that contain the samples also contain samples associated with a second PU. Samples associated with the second PU may be in the transition zone if neighbor regions that contain the samples also contain samples associated with the first PU. The neighbor regions have the selected neighbor region size. A smoothing operation is then performed on the samples in the transition zone.
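A sketch of identifying transition-zone samples, assuming a per-sample PU assignment map and a square neighbor region window of the selected size; the window shape and the subsequent smoothing filter are assumptions not specified here.

```python
def transition_zone(pu_map, region_size):
    """pu_map: 2D array assigning each prediction-block sample to PU 0 or 1.
    Returns (x, y) positions of samples whose neighbor region of the
    selected size contains samples of both PUs."""
    h, w = len(pu_map), len(pu_map[0])
    r = region_size // 2
    zone = []
    for y in range(h):
        for x in range(w):
            neighbors = {pu_map[ny][nx]
                         for ny in range(max(0, y - r), min(h, y + r + 1))
                         for nx in range(max(0, x - r), min(w, x + r + 1))}
            if len(neighbors) > 1:      # region spans both PUs
                zone.append((x, y))
    return zone
```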