Abstract:
In an example, a method of coding video data includes determining, for a first block of video data in a first layer of video data, a temporal motion vector and an associated temporal reference picture for predicting the first block, where the temporal reference picture has a picture order count value. The method also includes determining a disparity reference block in a disparity reference picture of a second view, indicated by a disparity vector associated with the first block, and determining whether a decoded picture buffer contains a temporal-disparity reference picture that is in the second view and has the picture order count value of the temporal reference picture. When the decoded picture buffer does not contain such a temporal-disparity reference picture, the method includes modifying an inter-view residual prediction process for predicting residual data of the first block.
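The decoded-picture-buffer check described above can be sketched as a search for a picture that is both in the second view and has the required picture order count. This is a minimal illustration; the names (`DecodedPicture`, `find_temporal_disparity_ref`) are assumed for the sketch, not taken from any codec implementation.

```python
from dataclasses import dataclass

@dataclass
class DecodedPicture:
    view_id: int
    poc: int  # picture order count

def find_temporal_disparity_ref(dpb, second_view_id, target_poc):
    """Return a picture in the second view whose POC matches the temporal
    reference picture's POC, or None if the buffer contains no such picture
    (in which case the inter-view residual prediction process is modified)."""
    for pic in dpb:
        if pic.view_id == second_view_id and pic.poc == target_poc:
            return pic
    return None

dpb = [DecodedPicture(0, 4), DecodedPicture(1, 2), DecodedPicture(1, 4)]
hit = find_temporal_disparity_ref(dpb, 1, 4)    # present: normal prediction
miss = find_temporal_disparity_ref(dpb, 1, 8)   # absent: modify prediction
```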
Abstract:
A video encoder generates, based on a reference picture set of a current view component, a reference picture list for the current view component. The reference picture set includes an inter-view reference picture set. The video encoder encodes the current view component based at least in part on one or more reference pictures in the reference picture list. In addition, the video encoder generates a bitstream that includes syntax elements indicating the reference picture set of the current view component. A video decoder parses, from the bitstream, syntax elements indicating the reference picture set of the current view component. The video decoder generates, based on the reference picture set, the reference picture list for the current view component. In addition, the video decoder decodes at least a portion of the current view component based on one or more reference pictures in the reference picture list.
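The list construction described above can be sketched as merging the temporal and inter-view subsets of the reference picture set. The ordering rule here (temporal references first, then inter-view references) is one plausible default assumed for illustration, not the only ordering the abstract allows.

```python
def build_reference_picture_list(temporal_refs, inter_view_refs, list_size):
    """Concatenate the temporal and inter-view reference picture sets and
    truncate to the signaled list size."""
    return (list(temporal_refs) + list(inter_view_refs))[:list_size]

# Pictures identified by (view_id, poc) pairs for illustration only.
ref_list = build_reference_picture_list(
    temporal_refs=[(1, 0), (1, 4)],
    inter_view_refs=[(0, 2)],
    list_size=3,
)
```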
Abstract:
The techniques of this disclosure generally relate to using, when coding a block in a depth view component, motion information for the corresponding block in the texture view component that corresponds with the depth view component. In some examples, for coding purposes, the techniques may use the motion information when the spatial resolution of the texture view component is different from the spatial resolution of the depth view component.
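When the texture and depth view components have different spatial resolutions, reusing a texture motion vector in the depth view requires scaling it proportionally. The sketch below shows one plausible scaling; real codecs typically use fixed-point shifts rather than floating-point division, and this function is an assumption for illustration.

```python
def scale_texture_mv_for_depth(mv, texture_res, depth_res):
    """Scale a texture-view motion vector (mv_x, mv_y) for reuse in a depth
    view whose spatial resolution differs from the texture view's."""
    sx = depth_res[0] / texture_res[0]
    sy = depth_res[1] / texture_res[1]
    return (round(mv[0] * sx), round(mv[1] * sy))

# Depth view at half the texture resolution: the vector is halved.
scaled = scale_texture_mv_for_depth((8, -6), (1920, 1080), (960, 540))
```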
Abstract:
A video coder may determine a motion vector of a non-adjacent block of a current picture of the video data. The non-adjacent block is non-adjacent to a current block of the current picture. Furthermore, the video coder determines, based on the motion vector of the non-adjacent block, a motion vector predictor (MVP) for the current block. The video coder may determine a motion vector of the current block. The video coder may also determine a predictive block based on the motion vector of the current block.
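A minimal sketch of selecting a motion vector predictor from a non-adjacent block follows. The adjacency test and the "first non-adjacent block wins" selection rule are assumptions for illustration; an actual coder scans candidate positions in a defined order.

```python
def blocks_touch(a, b):
    """True if rectangles a and b, each given as (x, y, w, h), share an
    edge or a corner (i.e., are adjacent)."""
    return not (a[0] > b[0] + b[2] or b[0] > a[0] + a[2] or
                a[1] > b[1] + b[3] or b[1] > a[1] + a[3])

def nonadjacent_mvp(current, coded_blocks):
    """Return the motion vector of the first previously coded block that is
    non-adjacent to the current block, as a candidate MVP."""
    for blk, mv in coded_blocks:
        if not blocks_touch(current, blk):
            return mv
    return None

current = (16, 16, 8, 8)
coded = [((8, 16, 8, 8), (1, 0)),   # left neighbor: adjacent, skipped
         ((0, 0, 8, 8), (3, -2))]   # non-adjacent: its MV becomes the MVP
mvp = nonadjacent_mvp(current, coded)
```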
Abstract:
A video decoder includes one or more processors configured to receive one or more bits, in a bitstream, indicating that the encoded current block of video data was encoded based on a unified candidate list that includes motion vector candidates based on one or more translational motion vectors and motion vector candidates based on one or more affine motion vectors. A merge index represented in the bitstream may indicate which candidate in the unified candidate list is associated with the motion vector of the encoded current block of video data. Based on the merge index, the one or more processors are configured to select one or more motion vectors of a candidate from the unified candidate list, where the candidate has one or more motion vectors corresponding to the translational motion vectors or affine motion vectors within the unified candidate list.
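The key point above is that a single merge index addresses one list containing both candidate types. A minimal sketch, with the candidate structure assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class MergeCandidate:
    kind: str    # "translational" (one MV) or "affine" (control-point MVs)
    mvs: tuple

def select_from_unified_list(candidates, merge_index):
    """The merge index parsed from the bitstream directly indexes the
    unified list, regardless of whether the winner is translational or
    affine."""
    return candidates[merge_index]

unified = [MergeCandidate("translational", ((2, 1),)),
           MergeCandidate("affine", ((0, 0), (4, 0)))]
chosen = select_from_unified_list(unified, 1)
```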
Abstract:
A method of decoding video data includes constructing, by a video decoder implemented in processing circuitry, a candidate list of motion vector information for a portion of a current frame. The method includes receiving, by the video decoder, signaling information indicating starting motion vector information of the candidate list of motion vector information, the starting motion vector information indicating an initial position in a reference frame. The method includes refining, by the video decoder, based on one or more of bilateral matching or template matching, the starting motion vector information to determine refined motion vector information indicating a refined position in the reference frame that is within a search range from the initial position. The method includes generating, by the video decoder, a predictive block based on the refined motion vector information and decoding, by the video decoder, the current frame based on the predictive block.
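The refinement step above can be sketched as a bounded search around the starting position that minimizes a matching cost. The exhaustive integer search and the abstract `cost_fn` are stand-ins for the bilateral- or template-matching cost the abstract describes, assumed here for illustration.

```python
def refine_mv(start, cost_fn, search_range):
    """Search integer offsets within +/- search_range of the starting
    position and keep the position with the lowest matching cost."""
    best, best_cost = start, cost_fn(start)
    for dx in range(-search_range, search_range + 1):
        for dy in range(-search_range, search_range + 1):
            cand = (start[0] + dx, start[1] + dy)
            c = cost_fn(cand)
            if c < best_cost:
                best, best_cost = cand, c
    return best

# Toy cost with a unique minimum at (2, -1), inside the search range.
refined = refine_mv((0, 0), lambda p: (p[0] - 2) ** 2 + (p[1] + 1) ** 2, 3)
```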
Abstract:
An example device for coding video data is configured to determine that a block of the video data includes a plurality of sub-blocks, each of the sub-blocks having respective motion information referring to respective reference blocks in a reference picture in a memory; determine a single reference block of the reference picture, the single reference block including each of the respective reference blocks, wherein determining the single reference block comprises determining four corner sub-blocks of the block included in the plurality of sub-blocks and determining the single reference block according to the respective motion information for the four corner sub-blocks, such that corners of the single reference block correspond to corners of the respective reference blocks of the four corner sub-blocks; retrieve data of the single reference block from the reference picture; and predict the sub-blocks from the respective reference blocks using the data of the single reference block.
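Deriving the single reference block from the four corner sub-blocks can be sketched as taking the bounding region of their displaced reference sub-blocks, so one memory fetch covers all sub-block predictions. The data layout below is assumed for illustration.

```python
def single_reference_block(corner_subblocks):
    """corner_subblocks: the four corner sub-blocks of the current block,
    each a dict with position (x, y), size (w, h), and motion vector
    (mv_x, mv_y). The region to fetch spans the outermost corners of the
    four displaced reference sub-blocks."""
    xs, ys = [], []
    for sb in corner_subblocks:
        xs += [sb["x"] + sb["mv_x"], sb["x"] + sb["w"] + sb["mv_x"]]
        ys += [sb["y"] + sb["mv_y"], sb["y"] + sb["h"] + sb["mv_y"]]
    return (min(xs), min(ys), max(xs), max(ys))  # (left, top, right, bottom)

# A 16x16 block whose four 4x4 corner sub-blocks all move by (1, 1):
corners = [{"x": x, "y": y, "w": 4, "h": 4, "mv_x": 1, "mv_y": 1}
           for x in (0, 12) for y in (0, 12)]
region = single_reference_block(corners)
```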
Abstract:
Methods, systems, and devices for wireless communications are described. In some systems, a first device may transmit a signal to a second device including a number of error detection bits interleaved with a number of information bits. The second device may use the error detection bits to determine if the signal was received correctly, where each error detection bit may be associated with a set of information bits. The second device may progressively decode the signal and continuously perform an error detection calculation based on a first set of information bits associated with a first error detection bit. Based on the error detection calculation, the second device may calculate an expected error detection bit corresponding to the first error detection bit. The second device may compare the first error detection bit to the expected error detection bit. Other aspects and features are also claimed and described.
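The progressive decode-and-check flow above can be sketched with a single even-parity bit standing in for each error-detection bit: the decoder computes the expected check bit over each group of information bits as it arrives and compares it to the received one, stopping early on a mismatch. The grouping and parity choice are assumptions for illustration.

```python
def progressive_parity_check(bits, group):
    """Decode a stream in which one even-parity bit follows every `group`
    information bits. Verify each check bit as it is reached; return False
    at the first mismatch (early termination), True if all checks pass."""
    i = 0
    while i + group < len(bits):
        info = bits[i:i + group]
        expected = sum(info) % 2  # calculated expected error-detection bit
        if bits[i + group] != expected:
            return False
        i += group + 1
    return True

# Two groups of 3 information bits, each followed by its parity bit.
good = [1, 0, 1, 0, 1, 1, 0, 0]
bad = [1, 0, 1, 1, 1, 1, 0, 0]  # first parity bit corrupted
```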
Abstract:
A video decoder selects a source affine block. The source affine block is an affine-coded block that spatially neighbors a current block. Additionally, the video decoder extrapolates motion vectors of control points of the source affine block to determine motion vector predictors for control points of the current block. The video decoder inserts, into an affine motion vector predictor (MVP) set candidate list, an affine MVP set that includes the motion vector predictors for the control points of the current block. The video decoder also determines, based on an index signaled in a bitstream, a selected affine MVP set in the affine MVP set candidate list. The video decoder obtains, from the bitstream, motion vector differences (MVDs) that indicate differences between motion vectors of the control points of the current block and motion vector predictors in the selected affine MVP set.
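The extrapolation step above can be illustrated with a four-parameter affine motion model: from the source block's two control-point motion vectors, the motion vector at any position, including a control point of the current block, follows by linear extrapolation. The parameter count is an assumption for the sketch; the abstract does not fix the model.

```python
def extrapolate_affine_mv(v0, v1, x0, y0, w, x, y):
    """Four-parameter affine model: v0 and v1 are the motion vectors at
    the source block's top-left (x0, y0) and top-right (x0 + w, y0)
    control points. Returns the extrapolated MV at position (x, y)."""
    a = (v1[0] - v0[0]) / w  # horizontal gradient of mv_x
    b = (v1[1] - v0[1]) / w  # horizontal gradient of mv_y
    return (v0[0] + a * (x - x0) - b * (y - y0),
            v0[1] + b * (x - x0) + a * (y - y0))

# Sanity check: the model reproduces the source control-point MVs.
mv_tl = extrapolate_affine_mv((2, 0), (2, 2), 0, 0, 16, 0, 0)
mv_tr = extrapolate_affine_mv((2, 0), (2, 2), 0, 0, 16, 16, 0)
```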
Abstract:
In general, this disclosure describes techniques for coding video blocks using a color-space conversion process. A video coder, such as a video encoder or a video decoder, may determine a bit depth of a luma component of the video data and a bit depth of a chroma component of the video data. In response to the bit depth of the luma component being different from the bit depth of the chroma component, the video coder may modify one or both of the bit depth of the luma component and the bit depth of the chroma component such that the bit depths are equal. The video coder may further apply the color-space conversion process in encoding the video data.
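One plausible way to equalize the bit depths is to left-shift the samples of the lower-bit-depth component up to the higher depth before the color-space transform. This is a sketch of that single choice; the abstract also allows modifying either or both components.

```python
def align_bit_depths(luma, luma_bd, chroma, chroma_bd):
    """Left-shift the lower-bit-depth component's samples so both
    components share the higher bit depth (one possible alignment)."""
    if luma_bd < chroma_bd:
        luma = [s << (chroma_bd - luma_bd) for s in luma]
        luma_bd = chroma_bd
    elif chroma_bd < luma_bd:
        chroma = [s << (luma_bd - chroma_bd) for s in chroma]
        chroma_bd = luma_bd
    return luma, luma_bd, chroma, chroma_bd

# 8-bit luma aligned to 10-bit chroma: luma samples shifted left by 2.
l, lbd, c, cbd = align_bit_depths([100, 200], 8, [512], 10)
```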