Abstract:
A video coder searches a set of neighbor blocks to generate a plurality of disparity vector candidates. Each of the neighbor blocks is a spatial or temporal neighbor of a current block. The video coder determines, based at least in part on the plurality of disparity vector candidates, a final disparity vector for the current block.
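A minimal C++ sketch of this kind of neighbor scan, assuming hypothetical types (NeighborBlock, DisparityVector) and a deliberately simple "first available" selection rule; the actual scan order and final-vector rule are not specified by the abstract:

    #include <optional>
    #include <vector>

    // Hypothetical disparity vector: horizontal/vertical components.
    struct DisparityVector { int x = 0; int y = 0; };

    // A spatial or temporal neighbor may carry a disparity motion vector.
    struct NeighborBlock {
        std::optional<DisparityVector> dispMv;  // set if inter-view predicted
    };

    // Scan the neighbors in a fixed order, collect every available
    // disparity vector as a candidate, then pick the final one
    // (here: the first candidate found, a common simplification).
    std::optional<DisparityVector>
    deriveFinalDisparityVector(const std::vector<NeighborBlock>& neighbors) {
        std::vector<DisparityVector> candidates;
        for (const NeighborBlock& nb : neighbors) {
            if (nb.dispMv)
                candidates.push_back(*nb.dispMv);
        }
        if (candidates.empty())
            return std::nullopt;       // caller may fall back to a zero vector
        return candidates.front();     // "final" selection rule is a stub
    }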
Abstract:
A parent block is partitioned into a plurality of blocks, and a disparity vector derivation process is performed to derive a disparity vector for a representative block in the plurality of blocks. A video encoder generates a bitstream that includes a coded representation of the video data in part by performing, based on the derived disparity vector and without separately deriving disparity vectors for any block in the plurality of blocks other than the representative block, inter-view prediction for two or more blocks in the plurality of blocks. A video decoder reconstructs sample blocks for two or more blocks in the plurality of blocks in part by performing, based on the derived disparity vector and without separately deriving disparity vectors for any block in the plurality of blocks other than the representative block, inter-view prediction for the two or more blocks in the plurality of blocks.
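The key point is that derivation runs once per parent block. A hedged sketch, with placeholder types and stubbed derivation/prediction steps standing in for the real processes:

    #include <vector>

    struct DisparityVector { int x = 0; int y = 0; };
    struct Block { /* sample and motion data elided */ };

    // Stubs standing in for the real derivation and prediction steps.
    DisparityVector deriveDisparityVector(const Block&) { return {4, 0}; }
    void interViewPredict(Block&, const DisparityVector&) {}

    // Run the derivation once, for the representative block only, then
    // reuse the result for every block produced by the partitioning.
    void predictPartition(std::vector<Block>& blocks, size_t representativeIdx) {
        const DisparityVector dv = deriveDisparityVector(blocks[representativeIdx]);
        for (Block& b : blocks)
            interViewPredict(b, dv);  // no per-block derivation
    }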
Abstract:
In an example, a method of coding video data includes determining a location of a temporal reference block indicated by a temporal motion vector of a current block of video data, where the current block and the temporal reference block are located in a first layer of video data. The method also includes interpolating, with a first type of interpolation, a location of a disparity reference block indicated by a disparity vector of the current block, where the disparity reference block is located in a second, different layer, and where the first type of interpolation comprises a bi-linear filter. The method also includes determining a temporal-disparity reference block of the disparity reference block indicated by a combination of the temporal motion vector and the disparity vector, and coding the current block based on the temporal reference block, the disparity reference block, and the temporal-disparity reference block.
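The abstract singles out the bi-linear filter used when locating the disparity reference block. A self-contained sketch of quarter-pel bilinear interpolation and the three reference-block positions (the quarter-pel assumption and all names are mine):

    #include <cstdint>

    struct Vec { int x = 0; int y = 0; };  // quarter-pel units (assumption)

    // Bilinear interpolation of one sample at fractional phase (fx, fy),
    // fx, fy in 0..3 quarter-pel; the four weights sum to 16, hence >> 4.
    int bilinear(const uint8_t* p, int stride, int fx, int fy) {
        int a = p[0], b = p[1], c = p[stride], d = p[stride + 1];
        return ((4 - fx) * (4 - fy) * a + fx * (4 - fy) * b +
                (4 - fx) * fy * c + fx * fy * d + 8) >> 4;
    }

    // The three reference-block positions used to code the current block:
    // temporal reference (mv), disparity reference (dv), and the
    // temporal-disparity reference addressed by the combination mv + dv.
    Vec temporalRefPos(Vec cur, Vec mv) { return {cur.x + mv.x, cur.y + mv.y}; }
    Vec disparityRefPos(Vec cur, Vec dv) { return {cur.x + dv.x, cur.y + dv.y}; }
    Vec temporalDisparityRefPos(Vec cur, Vec mv, Vec dv) {
        return {cur.x + mv.x + dv.x, cur.y + mv.y + dv.y};
    }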
Abstract:
When coding multiview video data, a video encoder and video decoder may select a candidate picture from one of one or more random access point view component (RAPVC) pictures and one or more pictures having a lowest temporal identification value. The video encoder and video decoder may determine whether a block in the selected candidate picture is inter-predicted with a disparity motion vector and determine a disparity vector for a current block of a current picture based on the disparity motion vector. The video encoder and video decoder may inter-prediction encode or decode, respectively, the current block based on the determined disparity vector.
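A hedged sketch of the candidate-picture selection, assuming hypothetical picture metadata; treating a RAPVC picture as preferred over a lowest-temporal-ID picture is a simplification of my own, not a rule stated in the abstract:

    #include <limits>
    #include <vector>

    struct Picture {
        bool isRAPVC = false;   // random access point view component
        int temporalId = 0;
    };

    // Prefer a RAPVC picture; otherwise take a picture with the lowest
    // temporal identification value. Returns an index, or -1 if empty.
    int selectCandidatePicture(const std::vector<Picture>& pics) {
        int best = -1;
        int bestTid = std::numeric_limits<int>::max();
        for (int i = 0; i < static_cast<int>(pics.size()); ++i) {
            if (pics[i].isRAPVC)
                return i;                       // RAPVC wins immediately
            if (pics[i].temporalId < bestTid) { // track lowest temporal ID
                bestTid = pics[i].temporalId;
                best = i;
            }
        }
        return best;
    }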
Abstract:
A video coder determines a first disparity vector using a first disparity vector derivation process. In addition, the video coder determines a second disparity vector using a second disparity vector derivation process. The first disparity vector derivation process is different than the second disparity vector derivation process. The video coder uses the first disparity vector to determine a motion vector prediction (MVP) candidate in a set of MVP candidates for a current prediction unit (PU). The video coder uses the second disparity vector to determine residual data.
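The point is that two distinct derivation processes feed two distinct coding tools. A sketch with both derivations and both consumers stubbed out (all names are placeholders):

    struct DisparityVector { int x = 0; int y = 0; };

    // Two distinct derivation processes (stubs); in practice one might,
    // for example, scan more neighbor positions than the other.
    DisparityVector deriveDvProcessA() { return {2, 0}; }
    DisparityVector deriveDvProcessB() { return {3, 0}; }

    void addMvpCandidate(const DisparityVector&) { /* build MVP list */ }
    void predictResidual(const DisparityVector&) { /* residual data */ }

    void codePredictionUnit() {
        DisparityVector dv1 = deriveDvProcessA();  // feeds the MVP candidate
        DisparityVector dv2 = deriveDvProcessB();  // feeds residual data
        addMvpCandidate(dv1);
        predictResidual(dv2);
    }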
Abstract:
An example device for filtering a decoded block of video data includes one or more processing units configured to construct a plurality of filters for classes of blocks of a current picture of video data. To construct the plurality of filters for each of the classes, the processing units are configured to determine a value of a flag that indicates whether a fixed filter is used to predict a set of filter coefficients of the class, and in response to the fixed filter being used to predict the set of filter coefficients, determine an index value into a set of fixed filters and predict the set of filter coefficients of the class using a fixed filter of the set of fixed filters identified by the index value.
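A sketch of the per-class construction logic, assuming hypothetical bitstream-reading stubs and a fixed-filter table; adding signaled coefficient deltas on top of the fixed-filter predictor is my reading of "predict the set of filter coefficients":

    #include <array>
    #include <vector>

    constexpr int kNumTaps = 13;  // tap count is an assumption
    using Filter = std::array<int, kNumTaps>;

    // Hypothetical bitstream-reading stubs.
    bool readFlag() { return true; }
    int  readFixedFilterIndex() { return 0; }
    int  readCoeffDelta() { return 0; }

    std::vector<Filter> constructFilters(int numClasses,
                                         const std::vector<Filter>& fixedFilters) {
        std::vector<Filter> filters(numClasses);
        for (int cls = 0; cls < numClasses; ++cls) {
            Filter coeff{};                        // zero predictor by default
            if (readFlag()) {                      // fixed filter used?
                int idx = readFixedFilterIndex();  // index into the fixed set
                coeff = fixedFilters[idx];         // predictor = fixed filter
            }
            for (int t = 0; t < kNumTaps; ++t)
                coeff[t] += readCoeffDelta();      // add signaled residuals
            filters[cls] = coeff;
        }
        return filters;
    }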
Abstract:
A video coder may determine a motion vector of a non-adjacent block of a current picture of the video data. The non-adjacent block is non-adjacent to a current block of the current picture. Furthermore, the video coder may determine, based on the motion vector of the non-adjacent block, a motion vector predictor (MVP) for the current block. The video coder may determine a motion vector of the current block. The video coder may also determine a predictive block based on the motion vector of the current block.
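A sketch of pulling an MVP from a non-adjacent position, assuming a hypothetical motion-field lookup; the particular offset is illustrative only, not a position defined by the abstract:

    #include <optional>

    struct MotionVector { int x = 0; int y = 0; };

    // Stubbed lookup into the current picture's motion field.
    std::optional<MotionVector> motionAt(int x, int y) {
        if (x < 0 || y < 0) return std::nullopt;  // outside the picture
        return MotionVector{1, -1};               // stubbed motion data
    }

    // Probe a position deliberately beyond the immediate neighbors of
    // the current block; a vector found there becomes an MVP candidate.
    std::optional<MotionVector> nonAdjacentMvp(int blkX, int blkY) {
        const int gap = 16;  // distance past the adjacent ring (assumption)
        return motionAt(blkX - gap, blkY - gap);  // a non-adjacent block
    }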
Abstract:
Methods, systems, and devices for wireless communications are described. A shared channel may include a set of tones carrying multiple types of control information multiplexed together. A wireless device may perform a tone classification method to determine, for each type of control information, an individual subset of tones from the set of tones, and extract each type of control information from its subset. The wireless device may determine that multiple subsets of tones correspond to multiple types of control information simultaneously, based on different extraction parameters. For example, the wireless device may determine a total number of tones in a set of tones, a distance between any two given tones in the set of tones, an offset value, or any combination thereof for the extraction parameters. The wireless device may then use these extraction parameters to determine or extract two or more subsets of tones for two or more corresponding types of control information.
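A sketch of deriving one subset of tone indices from the extraction parameters (total tone count, inter-tone spacing, offset); the parameter names and the uniform-spacing assumption are mine:

    #include <vector>

    // One control-information type maps to tones at indices
    // offset, offset + spacing, offset + 2 * spacing, ...
    // within a set of totalTones tones.
    std::vector<int> extractToneSubset(int totalTones, int spacing, int offset) {
        std::vector<int> subset;
        for (int tone = offset; tone < totalTones; tone += spacing)
            subset.push_back(tone);
        return subset;
    }

With spacing 2, for example, offsets 0 and 1 partition the set into two disjoint subsets, one per control-information type, so both types can be extracted from the same set of tones.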
Abstract:
In one example, a device for coding video data includes a memory configured to store video data and a video coder configured to form, for a current block of the video data, a merge candidate list including a plurality of merge candidates, the plurality of merge candidates including four spatial neighboring candidates from four neighboring blocks to the current block and, immediately following the four spatial neighboring candidates, an advanced temporal motion vector prediction (ATMVP) candidate, code an index into the merge candidate list that identifies a merge candidate of the plurality of merge candidates in the merge candidate list, and code the current block of video data using motion information of the identified merge candidate.
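A sketch of the candidate ordering the abstract describes (four spatial candidates, then the ATMVP candidate immediately after); the candidate fields and availability checks are stubbed assumptions:

    #include <optional>
    #include <vector>

    struct MergeCandidate { int mvX = 0, mvY = 0, refIdx = 0; };

    // Stubs; real derivations consult the four neighboring blocks and
    // the temporal motion field, respectively.
    std::optional<MergeCandidate> spatialCandidate(int which) {
        return MergeCandidate{which, 0, 0};  // stub: always available
    }
    std::optional<MergeCandidate> atmvpCandidate() {
        return MergeCandidate{0, 1, 0};      // stub
    }

    std::vector<MergeCandidate> buildMergeList() {
        std::vector<MergeCandidate> list;
        for (int i = 0; i < 4; ++i)          // four spatial candidates first
            if (auto c = spatialCandidate(i))
                list.push_back(*c);
        if (auto c = atmvpCandidate())       // ATMVP immediately after them
            list.push_back(*c);
        // ... any further candidates would follow here
        return list;
    }

A coded index into this list then identifies the merge candidate whose motion information codes the current block.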
Abstract:
Techniques are described for using an inter-intra-prediction block. A video coder may generate a first prediction block according to an intra-prediction mode and generate a second prediction block according to an inter-prediction mode. The video coder may apply a weighted combination, such as one based on the intra-prediction mode, to the two prediction blocks to generate an inter-intra-prediction block (e.g., a final prediction block). In some examples, an inter-intra candidate is identified in a list of candidate motion vector predictors, and an inter-intra-prediction block is used based on identification of the inter-intra candidate in the list of candidate motion vector predictors.
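A per-sample sketch of the weighted combination; the 6-bit weight range and the idea that w is derived from the intra-prediction mode are assumptions for illustration, not values from the abstract:

    #include <cstdint>

    // Combine intra and inter predictions per sample with weight w out
    // of 64 (6-bit fixed point, an assumption); w may depend on the
    // intra-prediction mode. The +32 term rounds before the shift.
    uint8_t combineInterIntra(uint8_t intraPred, uint8_t interPred, int w) {
        return static_cast<uint8_t>(
            (w * intraPred + (64 - w) * interPred + 32) >> 6);
    }

With w = 64 the result is purely the intra prediction, with w = 0 purely the inter prediction, and intermediate weights blend the two into the final inter-intra-prediction block.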