Abstract:
Methods, systems, and devices for motion analysis are described. Generally, the described techniques provide for computationally efficient and accurate motion analysis. A device may identify frames of a video frame sequence having a defined resolution. The device may downscale the frames to generate a plurality of downsampled images, each having a resolution lower than the defined resolution. The device may generate a respective histogram vector for each pixel of each downsampled image and each pixel of the original frames. The device may determine motion vector candidates based at least in part on the histogram vectors. The device may apply a filter to the motion vector candidates to determine a final motion vector for each pixel of a second frame of the sequence, and output an indication of motion between the frames of the video frame sequence based at least in part on the final motion vectors.
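The histogram-matching idea in this abstract might be sketched as follows. This is a minimal illustration, not the patented method: the window radius, bin count, search range, and the componentwise median used as the "filter" are all assumptions for the sketch.

```python
import numpy as np

def pixel_histogram(frame, y, x, radius=1, bins=8):
    """Histogram vector of intensities in a window around (y, x)."""
    patch = frame[max(0, y - radius):y + radius + 1,
                  max(0, x - radius):x + radius + 1]
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist

def motion_candidate(prev, curr, y, x, search=2):
    """Offset into `prev` whose histogram best matches pixel (y, x) of
    `curr`, by L1 distance between histogram vectors."""
    target = pixel_histogram(curr, y, x)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if not (0 <= py < prev.shape[0] and 0 <= px < prev.shape[1]):
                continue
            cost = int(np.abs(pixel_histogram(prev, py, px) - target).sum())
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def median_filter_vectors(candidates):
    """Componentwise median over candidate vectors (the 'filter' step)."""
    ys = sorted(c[0] for c in candidates)
    xs = sorted(c[1] for c in candidates)
    m = len(candidates) // 2
    return (ys[m], xs[m])
```

For example, a bright block shifted by one pixel between frames yields a candidate of (-1, -1) at the block's center, and the median filter suppresses outlier candidates from noisy pixels.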
Abstract:
Video pixel line buffers are widely used for data processing in video codecs. Video data may be packed into buffers configured to store a plurality of words, each word comprising a series of bits. The video data may be associated with two or more channels. In order to reduce realization costs, data blocks from two different channels may be packed from opposite sides of a word in the buffer in opposite directions. In some embodiments, data blocks from two or more physical channels may be mapped to two or more virtual channels, the virtual channels having balanced data block sizes. The data blocks associated with the virtual channels may then be packed to one or more buffers.
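The opposite-direction packing described here might be sketched as follows; a minimal illustration under assumed names and sizes (`WORD_BITS`, `pack_word`, and the bit widths are not from the patent). One channel's block fills the word from the low end while the other channel's block fills it from the high end, so any unused bits meet in the middle.

```python
WORD_BITS = 32  # assumed buffer word size for this sketch

def pack_word(chan_a_bits, chan_b_bits, a_width, b_width):
    """Pack channel A's block from the low side of the word and channel
    B's block from the high side, in opposite directions."""
    assert a_width + b_width <= WORD_BITS
    word = chan_a_bits & ((1 << a_width) - 1)                 # low side
    word |= (chan_b_bits & ((1 << b_width) - 1)) << (WORD_BITS - b_width)
    return word

def unpack_word(word, a_width, b_width):
    """Recover both channels' blocks from a packed word."""
    a = word & ((1 << a_width) - 1)
    b = (word >> (WORD_BITS - b_width)) & ((1 << b_width) - 1)
    return a, b
```

Because the two blocks grow toward each other, a word can hold variable-size blocks from both channels without a fixed partition point, which is the cost saving the abstract alludes to.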
Abstract:
Implementations include video image processing systems, methods, and apparatus for integrated video downscale in a video core. The downscaler computes and writes a display frame to an external memory. This frame may have the same resolution as a target display device (e.g., mobile device). The target display device then reads this display frame, rather than the original higher resolution frame. By enabling downscale during encoding/decoding, the device can conserve resources such as memory bandwidth, memory access, bus bandwidth, and power consumption associated with separately downscaling a frame of video data.
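The integrated-downscale flow might be emulated as below; a hedged sketch in which a dict stands in for external memory and a box filter stands in for the downscaler (the real filtering and memory interface are not specified by the abstract).

```python
import numpy as np

def box_downscale(frame, out_h, out_w):
    """Box-filter downscale of a decoded frame to the display
    resolution; assumes integer scale factors for simplicity."""
    h, w = frame.shape[:2]
    fy, fx = h // out_h, w // out_w
    return frame[:out_h * fy, :out_w * fx].reshape(
        out_h, fy, out_w, fx).mean(axis=(1, 3))

def decode_and_store(decoded_frame, display_res, memory):
    """Emulate the integrated path: the video core writes only the
    display-resolution frame to external memory, so the display side
    never has to fetch or downscale the full-resolution frame."""
    memory["display_frame"] = box_downscale(decoded_frame, *display_res)
```

The bandwidth saving follows directly: the display reads `out_h * out_w` samples instead of `h * w`, and no separate downscale pass touches memory again.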
Abstract:
The techniques of this disclosure are generally related to parallel coding of video units that reside along rows or columns of blocks in largest coding units. For example, the techniques include removing intra-prediction dependencies between two video units in different rows or columns to allow for parallel coding of rows or columns of the video units.
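The effect of removing a cross-row prediction dependency might be illustrated as follows; a toy sketch, not the disclosed coding scheme: each "block" predicts only from its left neighbor, so rows carry no dependency on one another and can be coded in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def code_row(row_blocks):
    """Code one row of blocks: each block is predicted from its left
    neighbor only, so no above-row dependency remains."""
    residuals, pred = [], 0
    for b in row_blocks:
        residuals.append(b - pred)  # residual vs. left-neighbor prediction
        pred = b
    return residuals

def code_rows_parallel(rows):
    """With cross-row dependencies removed, every row can be coded
    concurrently."""
    with ThreadPoolExecutor() as ex:
        return list(ex.map(code_row, rows))
```

Had each block also predicted from the row above, `code_row` for row N could not start until row N-1 finished; dropping that dependency is what unlocks the parallel map.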
Abstract:
In one example, a method of encoding video data includes allocating, based on a complexity of a reference frame and a quantity of bits allocated to a current frame, a quantity of bits to a current largest coding unit (LCU) included in the current frame. In this example, the method also includes determining, based on the quantity of bits allocated to the current LCU, a quantization parameter (QP) for the current LCU, and encoding the current LCU with the determined QP.
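The two steps in this abstract (allocate bits, then derive a QP) might be sketched as below. The proportional allocation and the logarithmic bits-to-QP model, including the constants `alpha` and `beta`, are illustrative assumptions, not the claimed method.

```python
import math

def allocate_lcu_bits(frame_bits, lcu_complexities, idx):
    """Split the current frame's bit budget across LCUs in proportion
    to a per-LCU complexity measure (illustrative)."""
    total = sum(lcu_complexities)
    return frame_bits * lcu_complexities[idx] / total

def qp_from_bits(lcu_bits, lcu_pixels, alpha=6.0, beta=2.0):
    """Map the LCU's bits-per-pixel to a QP with a simple log model:
    fewer bits -> higher QP. alpha/beta are assumed model constants."""
    bpp = lcu_bits / lcu_pixels
    qp = round(alpha - beta * math.log2(bpp))
    return max(0, min(51, qp))  # clamp to the H.264/HEVC QP range 0..51
```

For instance, an LCU holding 3/4 of the frame's complexity receives 3/4 of the budget, and its comparatively high bits-per-pixel maps to a lower (finer) QP.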
Abstract:
Methods and systems for efficient searching of candidate blocks for inter-coding and/or intra-coding are provided. In one innovative aspect, an apparatus for performing motion estimation is provided. The apparatus includes a processor configured to identify a number of candidate blocks of a frame of video data to be searched, at least one candidate block corresponding to a block of another frame of the video data. The processor is further configured to select one or more of the candidate blocks to search based on a distance between the candidate blocks. The processor is also configured to select a method for searching the selected candidate blocks based on a format of the video data. The processor is also configured to estimate the motion for the block of the other frame based on the selected method and the selected candidate blocks.
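The two selection steps (prune candidates by distance, choose a search method by format) might look like the sketch below. The Manhattan-distance criterion and the format-to-method mapping are assumptions for illustration; the abstract does not specify either.

```python
def select_candidates(candidates, center, max_dist):
    """Keep only candidate block positions within `max_dist`
    (Manhattan distance) of the search center."""
    cy, cx = center
    return [(y, x) for (y, x) in candidates
            if abs(y - cy) + abs(x - cx) <= max_dist]

def pick_search_method(video_format):
    """Choose a search strategy based on the video format
    (illustrative mapping only)."""
    return "full_search" if video_format == "low_res" else "diamond_search"
```

Pruning distant candidates bounds the search cost, and an exhaustive search is reserved for formats cheap enough to afford it.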
Abstract:
A device includes a first bitstream engine and a second bitstream engine. The first bitstream engine is configured to decode a first portion of a first video frame of a plurality of video frames to generate first decoded portion data. The first bitstream engine is also configured to generate synchronization information based on completion of decoding the first portion. The second bitstream engine is configured to, based on the synchronization information, initiate decoding of a second portion of a particular video frame to generate second decoded portion data. The second bitstream engine uses the first decoded portion data during decoding of the second portion of the particular video frame. The particular video frame includes the first video frame or a second video frame of the plurality of video frames.
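The synchronization handoff between the two engines might be emulated with threads as below; a toy sketch in which XOR stands in for decoding and a `threading.Event` carries the synchronization information (the class and function names are assumptions).

```python
import threading

class SyncState:
    """Synchronization info published by the first engine: a completion
    signal plus the first decoded portion."""
    def __init__(self):
        self.done = threading.Event()
        self.first_decoded = None

def engine_one(bitstream, state):
    """First engine: decode the first portion, then signal completion."""
    state.first_decoded = [b ^ 0xFF for b in bitstream]  # toy "decode"
    state.done.set()

def engine_two(bitstream, state, out):
    """Second engine: wait on the sync signal, then decode its portion
    using the first engine's output as reference data."""
    state.done.wait()
    ref = state.first_decoded
    out.extend((b ^ 0xFF) + ref[i % len(ref)] for i, b in enumerate(bitstream))
```

The event guarantees the second engine never reads `first_decoded` before it exists, regardless of which thread the scheduler runs first.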
Abstract:
An apparatus configured to filter video information according to certain aspects includes a memory unit and a processor in communication with the memory unit. The memory unit stores video information comprising at least two adjacent video blocks, each video block comprising a plurality of video samples, and each video sample having a bit depth. The processor determines a filtered video sample based at least in part on a video sample and an adjustment value. The processor determines the adjustment value at least in part from an input with a limited bit depth. The input is determined from a set of one or more video samples, and its bit depth is limited such that it is less than the bit depth of the one or more video samples.
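A bit-depth-limited adjustment of this kind might be sketched as follows. The neighbor-difference input, the 4-bit limit, and the shift used to map the limited input to an adjustment are all illustrative choices, not the claimed filter.

```python
def clip_to_bits(value, bits):
    """Clip a signed value into a `bits`-bit two's-complement range."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

def filter_sample(sample, neighbors, input_bits=4, bit_depth=8):
    """Filter one video sample: derive the adjustment from an input
    (a neighbor difference) clipped to fewer bits than the samples
    themselves, then add it and clamp to the sample bit depth."""
    diff = sum(n - sample for n in neighbors) // max(1, len(neighbors))
    limited = clip_to_bits(diff, input_bits)  # limited-bit-depth input
    adjustment = limited >> 1                 # illustrative mapping
    return max(0, min((1 << bit_depth) - 1, sample + adjustment))
```

Limiting the input to 4 bits caps the adjustment logic at a fraction of the full 8-bit datapath, which is the hardware saving such a constraint targets.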