Abstract:
A video decoder including one or more processors configured to receive one or more bits, in a bitstream, that indicate that an encoded current block of video data was encoded based on a unified candidate list that includes motion vector candidates based on one or more translational motion vectors and motion vector candidates based on one or more affine motion vectors. A merge index represented in the bitstream may indicate which candidate in the unified candidate list is associated with the motion vector of the encoded current block of video data. Based on the merge index, the one or more processors are configured to select one or more motion vectors of a candidate from the unified candidate list, where the motion vectors of that candidate correspond to translational motion vectors or affine motion vectors within the unified candidate list.
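The selection step can be pictured with a minimal sketch, assuming a unified list in which translational and affine candidates sit side by side and the signaled merge index simply points into that list; the class and function names below are illustrative, not from the source.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MergeCandidate:
    """Illustrative candidate entry: either one translational MV or a set of
    affine control-point MVs (names are hypothetical, not from the source)."""
    is_affine: bool
    motion_vectors: List[Tuple[int, int]]  # 1 MV if translational, 2-3 CPMVs if affine

def select_candidate(unified_list: List[MergeCandidate],
                     merge_index: int) -> MergeCandidate:
    """Pick the candidate the signaled merge index points to, whether it is a
    translational or an affine candidate in the same (unified) list."""
    return unified_list[merge_index]

# Usage: a unified list mixing both candidate types.
unified = [
    MergeCandidate(is_affine=False, motion_vectors=[(4, -2)]),
    MergeCandidate(is_affine=True,  motion_vectors=[(4, -2), (6, -1)]),
]
chosen = select_candidate(unified, merge_index=1)
print(chosen.is_affine, chosen.motion_vectors)
```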
Abstract:
An example device includes a memory and processing circuitry in communication with the memory. The processing circuitry is configured to form a most probable mode (MPM) candidate list for a chroma block of the video data stored to the memory, such that the MPM candidate list includes one or more derived modes (DMs) associated with a luma block of the video data that is associated with the chroma block, and a plurality of luma prediction modes that can be used for coding luminance components of the video data. The processing circuitry is further configured to select a mode from the MPM candidate list and to code the chroma block according to the selected mode.
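As a rough sketch of how such a chroma MPM-style list could be assembled, the snippet below places the derived mode(s) of the co-located luma block first and then fills with default luma prediction modes, pruning duplicates; the mode numbering and list size are assumptions for illustration only.

```python
def build_chroma_mpm_list(luma_modes_of_colocated_block, list_size=6):
    """Form a chroma candidate list: derived modes (DMs) taken from the
    co-located luma block first, then default luma prediction modes.
    Mode numbering (0=planar, 1=DC, 50=vertical, 18=horizontal, 66=diagonal)
    and the list size are illustrative assumptions."""
    default_luma_modes = [0, 1, 50, 18, 66]
    candidates = []
    for mode in list(luma_modes_of_colocated_block) + default_luma_modes:
        if mode not in candidates:          # prune duplicates
            candidates.append(mode)
        if len(candidates) == list_size:
            break
    return candidates

# Usage: the co-located luma block uses modes 50 (vertical) and 2.
print(build_chroma_mpm_list([50, 2]))   # [50, 2, 0, 1, 18, 66]
```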
Abstract:
Certain aspects of the present disclosure generally relate to techniques for combining a plurality of decision metrics of a scrambled payload in a 5G wireless communications system. For example, in some cases, combining decision metrics of a scrambled payload may generally involve receiving, at a receiver, a first payload that was scrambled both before and after encoding, generating a second payload at the receiver with selectively set payload mask bits, and using the selectively set payload mask bits in the second payload to descramble the first payload.
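A highly simplified sketch of the descrambling idea, assuming scrambling is a bitwise XOR, is shown below: the receiver builds a second payload that is zero everywhere except at the selectively set mask positions and XORs it against the received payload. The function names and the XOR assumption are illustrative only and not taken from the source.

```python
def xor_bits(a, b):
    """Bitwise XOR of two equal-length bit lists (scrambling/descrambling)."""
    return [x ^ y for x, y in zip(a, b)]

def descramble_with_mask(received_payload, mask_bits, mask_positions):
    """Build a second payload that is zero except at the selectively set mask
    positions, then XOR it against the received payload. Assumes scrambling
    was a simple XOR with those mask bits, purely for illustration."""
    second_payload = [0] * len(received_payload)
    for pos, bit in zip(mask_positions, mask_bits):
        second_payload[pos] = bit
    return xor_bits(received_payload, second_payload)

# Usage: undo a 3-bit mask applied at positions 0, 2, and 5 of an 8-bit payload.
rx = [1, 0, 1, 1, 0, 0, 1, 0]
print(descramble_with_mask(rx, mask_bits=[1, 1, 1], mask_positions=[0, 2, 5]))
```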
Abstract:
A device for video decoding a current block of video data, the device including one or more processors configured to compute a horizontal component and a vertical component of a motion vector in an affine model. The affine model may be a four-parameter affine model, which includes two control point motion vectors, or a six-parameter affine model, which includes three control point motion vectors. The horizontal and vertical components may include differences between control point motion vectors based on first bit-shift operations and second bit-shift operations.
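One common formulation of the four-parameter case is sketched below: the horizontal and vertical components at a sample position are built from control-point MV differences that are up-scaled by a first bit-shift and brought back to working precision by a second bit-shift. The precision constant, rounding, and power-of-two width are assumptions for illustration, not the claimed implementation.

```python
def affine_mv_4param(cpmv0, cpmv1, width, x, y, shift=7):
    """Derive the (horizontal, vertical) MV at sample position (x, y) inside a
    block of the given width from two control-point MVs, using differences of
    the control-point MVs and bit-shift operations. The precision (shift=7)
    and rounding are illustrative assumptions."""
    mv0x, mv0y = cpmv0
    mv1x, mv1y = cpmv1
    log2_w = width.bit_length() - 1          # width assumed to be a power of two

    # Control-point MV differences, up-scaled by a first bit-shift (left shift).
    d_hor_x = (mv1x - mv0x) << (shift - log2_w)
    d_hor_y = (mv1y - mv0y) << (shift - log2_w)
    # Four-parameter model: the vertical gradient is derived from the horizontal one.
    d_ver_x = -d_hor_y
    d_ver_y = d_hor_x

    # Accumulate at high precision, then scale down with a second bit-shift.
    mv_x = (mv0x << shift) + d_hor_x * x + d_ver_x * y
    mv_y = (mv0y << shift) + d_hor_y * x + d_ver_y * y
    round_offset = 1 << (shift - 1)
    return (mv_x + round_offset) >> shift, (mv_y + round_offset) >> shift

# Usage: MV at sample (8, 4) of a 16-sample-wide block.
print(affine_mv_4param(cpmv0=(16, -8), cpmv1=(24, -4), width=16, x=8, y=4))
```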
Abstract:
Improved systems and methods related to decoder-side motion vector derivation (DMVD) are described, for example, applying one or more constraints to motion information, such as an MV derived by DMVD and/or an MV difference between an initial MV and an MV derived by DMVD. These techniques may be applied to any of the existing video codecs, such as HEVC (High Efficiency Video Coding), and/or may be an efficient coding tool in any future video coding standard. In one example, the block size used for DMVD can be restricted. In another example, FRUC bilateral matching can be simplified by not searching outside the reference blocks indicated by the original motion vector.
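A minimal sketch of the block-size restriction mentioned above might look as follows; the specific minimum width, height, and area thresholds are assumptions chosen only to make the check concrete.

```python
def dmvd_allowed(block_width, block_height, min_size=8, min_area=64):
    """Apply DMVD (e.g., FRUC bilateral matching refinement) only for blocks
    at or above an assumed minimum size/area; otherwise keep the signaled MV."""
    return (block_width >= min_size and
            block_height >= min_size and
            block_width * block_height >= min_area)

# Usage: a 4x8 block would skip decoder-side refinement under these assumptions.
print(dmvd_allowed(4, 8), dmvd_allowed(16, 16))   # False True
```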
Abstract:
Techniques are directed to a device for decoding a current block of video data in a current picture. The device may include a memory configured to store video data. The device may also include a processor configured to generate a first prediction block for the current block of the video data in the current picture according to an intra-prediction mode and to generate a second prediction block for the current block of the video data in the current picture according to an inter-prediction mode. The processor may be configured to generate motion information propagated from the second prediction block to the first prediction block, use the motion information to obtain a final prediction block, and then generate a reconstructed block based on a combination of the final prediction block and a residual block.
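The final combination and reconstruction step can be illustrated with a small sketch that blends the intra and inter prediction blocks, adds the residual, and clips to the sample range; the equal 1:1 weighting, rounding, and bit depth are assumptions, not the claimed combination rule.

```python
def combine_and_reconstruct(intra_pred, inter_pred, residual, bit_depth=8):
    """Blend the first (intra) and second (inter) prediction blocks into a
    final prediction, then add the residual and clip to the sample range.
    The equal (1:1) weighting is an illustrative assumption."""
    max_val = (1 << bit_depth) - 1
    recon = []
    for p_intra_row, p_inter_row, r_row in zip(intra_pred, inter_pred, residual):
        row = []
        for p_intra, p_inter, r in zip(p_intra_row, p_inter_row, r_row):
            final_pred = (p_intra + p_inter + 1) >> 1   # rounded average
            row.append(min(max(final_pred + r, 0), max_val))
        recon.append(row)
    return recon

# Usage on a 2x2 block.
print(combine_and_reconstruct([[100, 102], [98, 97]],
                              [[110, 108], [96, 95]],
                              [[-3, 2], [0, 1]]))
```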
Abstract:
A video coder may perform a simplified depth coding (SDC) mode, including simplified residual coding, to code a depth block according to any of a variety of depth intra prediction modes, e.g., at least three such modes. For example, the video coder may perform the SDC mode for coding a depth block according to depth modeling mode (DMM) 3, DMM 4, or a region boundary chain coding mode. In such examples, the video coder may partition the depth block and code respective DC residual values for each partition. In some examples, the video coder may perform the SDC mode for coding a depth block according to an intra prediction mode, e.g., an HEVC base specification intra prediction mode, such as a DC intra prediction mode or one of the directional intra prediction modes. In such examples, the video coder may code a single DC residual value for the depth block.
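A sketch of the per-partition DC residual idea is given below, assuming each partition's residual is the difference between the average original depth sample and the average predicted sample of that partition; the mask-based partition representation and the averaging are illustrative assumptions.

```python
def dc_residuals_per_partition(original, predicted, partition_mask):
    """Compute one DC residual per partition of a depth block: the difference
    between the mean original sample and the mean predicted sample of each
    partition. partition_mask holds a partition index (0 or 1) per sample."""
    sums = {}
    for orig_row, pred_row, mask_row in zip(original, predicted, partition_mask):
        for orig, pred, part in zip(orig_row, pred_row, mask_row):
            s = sums.setdefault(part, [0, 0, 0])   # [sum_orig, sum_pred, count]
            s[0] += orig
            s[1] += pred
            s[2] += 1
    return {part: round(s[0] / s[2]) - round(s[1] / s[2])
            for part, s in sums.items()}

# Usage: a 2x2 depth block split into two partitions by a region boundary.
orig = [[60, 60], [90, 92]]
pred = [[58, 58], [88, 88]]
mask = [[0, 0], [1, 1]]
print(dc_residuals_per_partition(orig, pred, mask))   # {0: 2, 1: 3}
```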
Abstract:
In one example, a video coder (e.g., a video encoder or a video decoder) is configured to determine that a current block of video data is coded using a disparity motion vector, where the current block is within a containing block; based on a determination that a neighboring block to the current block is also within the containing block, substitute, in a candidate list, a block that is outside the containing block and neighbors the containing block for that neighboring block; select a disparity motion vector predictor from one of a plurality of blocks in the candidate list; and code the disparity motion vector based on the disparity motion vector predictor. In this manner, the techniques of this disclosure may allow blocks within the containing block to be coded in parallel.
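The substitution step can be pictured with a small sketch: when a candidate neighbor position falls inside the containing block (and so may not be decodable in parallel), a position just outside the containing block is used instead. The "just outside the left edge, same row" choice below is one simplifying assumption about what the substituted block is.

```python
def candidate_position(neighbor, containing_block):
    """Return the neighbor position itself if it lies outside the containing
    block; otherwise substitute a position immediately left of the containing
    block in the same row (one simplifying choice of a block outside the
    containing block that neighbors it)."""
    x0, y0, width, height = containing_block
    nx, ny = neighbor
    inside = x0 <= nx < x0 + width and y0 <= ny < y0 + height
    return (x0 - 1, ny) if inside else (nx, ny)

# Usage: the second neighbor falls inside the 32x32 containing block at (32, 32),
# so a position just outside its left edge is used in the candidate list instead.
print([candidate_position(p, (32, 32, 32, 32)) for p in [(31, 40), (40, 40)]])
```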
Abstract:
Methods, systems, and devices for wireless communications are described. A first wireless device may communicate an allocation of a set of time-frequency resources of a carrier for transmission of a first codeword. The first wireless device may map a first portion of the first codeword to resource elements of a first subband of the set of time-frequency resources in a frequency-first, time-second manner and a second portion of the first codeword to resource elements of a second subband of the set of time-frequency resources in the frequency-first, time-second manner. The first portion of the first codeword may include a contiguous portion of the first codeword preceding the second portion of the first codeword. The first wireless device may transmit, within the set of time-frequency resources, the first codeword based on the mapping.
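A compact sketch of the mapping rule follows: symbols of the codeword fill each subband frequency-first, time-second, with the first contiguous portion going to the first subband and the remainder to the second. The grid dimensions and the per-subband split are illustrative assumptions.

```python
def map_codeword(symbols, subband_sizes, num_ofdm_symbols):
    """Map a codeword to per-subband (time, frequency) grids frequency-first,
    time-second: fill all subcarriers of one OFDM symbol before moving to the
    next. The first contiguous portion of the codeword goes to the first
    subband, the remainder to the second. Grid sizes are illustrative."""
    grids, idx = [], 0
    for num_subcarriers in subband_sizes:
        grid = [[None] * num_subcarriers for _ in range(num_ofdm_symbols)]
        for t in range(num_ofdm_symbols):           # time, second
            for f in range(num_subcarriers):        # frequency, first
                grid[t][f] = symbols[idx]
                idx += 1
        grids.append(grid)
    return grids

# Usage: 12 symbols split across two 3-subcarrier subbands, 2 OFDM symbols each.
first, second = map_codeword(list(range(12)), subband_sizes=[3, 3], num_ofdm_symbols=2)
print(first)    # [[0, 1, 2], [3, 4, 5]]
print(second)   # [[6, 7, 8], [9, 10, 11]]
```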
Abstract:
The present disclosure provides various techniques related to adaptive loop filtering (ALF), and in particular to geometry transformation-based ALF (GALF). In an aspect, a method for decoding video data includes receiving an encoded bitstream having coded video data from which reconstructed video units are generated, identifying multiple filter supports for the reconstructed video units, and filtering the reconstructed video units using the respective multiple filter supports to produce a decoded video output. Another method includes enabling block-level control of ALF of chroma components for the reconstructed video units; performing, for the reconstructed video units, the block-level ALF for the chroma components when ALF is enabled for one video block and skipping the block-level ALF for the chroma components when ALF is disabled for another video block; and generating, based on the enabled block-level control of ALF, a decoded video output. Related devices, means, and computer-readable media are also described.
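The block-level control in the second method can be sketched as a simple per-block switch: the chroma filter is applied only to blocks whose flag enables it. The callable filter and the stand-in filter in the usage line are illustrative placeholders, not a GALF filter support.

```python
def alf_chroma_block_level(blocks, alf_enabled_flags, alf_filter):
    """Apply ALF to each reconstructed chroma block only when its block-level
    flag enables it; otherwise pass the block through unfiltered. The filter
    itself is abstracted as a callable (an illustrative simplification)."""
    return [alf_filter(block) if enabled else block
            for block, enabled in zip(blocks, alf_enabled_flags)]

# Usage: a trivial stand-in filter (adds +1 to every sample) replaces a real
# GALF filter support purely for illustration.
stand_in_filter = lambda block: [[v + 1 for v in row] for row in block]
blocks = [[[100, 104], [96, 100]], [[50, 54], [52, 50]]]
print(alf_chroma_block_level(blocks, [True, False], stand_in_filter))
```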