Abstract:
Implementations of the teachings herein include coding video data with an alternate reference frame generated using a temporal filter. The alternate reference frame is generated by determining a first weighting factor, for each corresponding block of a respective frame of a filter set, that represents a temporal correlation of the block with the corresponding block, determining a second weighting factor, for each pixel of each corresponding block of the respective frame of the filter set, that represents a temporal correlation of the pixel to a spatially-correspondent pixel in the block, determining a filter weight for each pixel in the block and for each spatially-correspondent pixel in each corresponding block based on the first weighting factor and the second weighting factor, and generating a weighted average pixel value for each pixel position in the block to form a block of the alternate reference frame based on the filter weights.
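
As an illustration of this style of per-block temporal filtering, the following Python sketch forms one output block as a weighted average over the corresponding blocks of the filter set. It assumes the corresponding blocks are already motion compensated, models both weighting factors as exponential decays of squared differences, and gives the center block unit weight; the function name, decay constants, and weighting functions are illustrative assumptions rather than details taken from the abstract.

    import numpy as np

    def temporal_filter_block(center_block, corresponding_blocks):
        # Form one block of the alternate reference frame as a weighted average
        # of the center block and its (motion-compensated) corresponding blocks.
        center = center_block.astype(np.float64)
        accum = center.copy()                  # center block contributes with weight 1.0
        weight_sum = np.ones_like(center)

        for corr in corresponding_blocks:
            corr = corr.astype(np.float64)

            # First weighting factor: block-level temporal correlation, modeled
            # here as a decay of the mean squared difference between the blocks.
            block_mse = np.mean((corr - center) ** 2)
            w_block = np.exp(-block_mse / 64.0)

            # Second weighting factor: pixel-level temporal correlation, modeled
            # as a decay of the squared difference at each pixel position.
            w_pixel = np.exp(-((corr - center) ** 2) / 32.0)

            # Filter weight per pixel combines the two factors.
            w = w_block * w_pixel
            accum += w * corr
            weight_sum += w

        # Weighted average pixel value for each pixel position of the block.
        return accum / weight_sum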
Abstract:
Encoding and decoding using advance coded reference prediction may include identifying a sequence of temporally adjacent frames from the plurality of frames, wherein each frame in the sequence of temporally adjacent frames is associated with a respective frame position indicating a temporal location in the sequence, encoding a first frame from the sequence as an intra-coded frame, generating an alternate reference frame by reconstructing the first encoded frame, encoding a second frame from the sequence with reference to a reference frame, the second frame associated with a second frame position, including the first encoded frame in a compressed bitstream at a first bitstream position, and including the second encoded frame in the compressed bitstream at a second bitstream position, wherein the second bitstream position is later than the first bitstream position and wherein the first frame position is later than the second frame position.
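
The following sketch illustrates only the ordering relationship described above: the frame used as the alternate reference is intra-coded and placed earliest in the bitstream even though its frame position is the latest in display order. The assumption that the alternate reference frame is the last frame of the sequence, along with the function name and return layout, is illustrative and not taken from the abstract.

    def write_sequence(sequence):
        # sequence: frames in display order (frame positions 0, 1, ..., N-1).
        # Returns (bitstream_position, frame_position, coding_mode) tuples.
        last = len(sequence) - 1
        order = [(0, last, "intra")]           # alternate reference frame written first
        # Earlier-displayed frames follow in the bitstream, coded with reference
        # to the reconstructed alternate reference frame, so their bitstream
        # positions are later while their frame positions are earlier.
        for pos in range(last):
            order.append((pos + 1, pos, "inter"))
        return order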
Abstract:
A method and apparatus for performing implicit video location augmentation are provided. Implicit video location augmentation may include identifying a first geolocation for a first frame from a plurality of video frames based on a first image captured in the first frame, identifying a second geolocation for a second frame from the plurality of video frames based on a second image captured in the second frame, determining, by a processor, a third geolocation for a third frame from the plurality of video frames based on the first geolocation and the second geolocation, and storing an updated plurality of video frames such that the first frame is associated with the first geolocation, the second frame is associated with the second geolocation, and the third frame is associated with the third geolocation.
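
One plausible reading of the determination step is interpolation between the known geolocations; the sketch below fills missing per-frame geolocations by linear interpolation over frame index. The function name, the dictionary layout, and the choice of linear interpolation are illustrative assumptions, not details from the abstract.

    def augment_geolocations(known):
        # known: dict mapping frame index -> (latitude, longitude) for frames
        # whose geolocation was identified from their captured images.
        # Returns a dict that also covers frames lying between two known frames.
        result = dict(known)
        anchors = sorted(known)
        for lo, hi in zip(anchors, anchors[1:]):
            (lat0, lon0), (lat1, lon1) = known[lo], known[hi]
            for i in range(lo + 1, hi):
                t = (i - lo) / (hi - lo)       # fractional position between anchors
                # Linear interpolation in latitude/longitude (ignores wrap-around).
                result[i] = (lat0 + t * (lat1 - lat0), lon0 + t * (lon1 - lon0))
        return result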