Abstract:
Techniques and systems are provided for performing predictive random access using a background picture. For example, a method of decoding video data includes obtaining an encoded video bitstream comprising a plurality of pictures. The plurality of pictures includes a plurality of predictive random access pictures. A predictive random access picture is at least partially encoded using inter-prediction based on at least one background picture. The method further includes determining, for a time instance of the video bitstream, a predictive random access picture of the plurality of predictive random access pictures with a time stamp closest in time to the time instance. The method further includes determining a background picture associated with the predictive random access picture, and decoding at least a portion of the predictive random access picture using inter-prediction based on the background picture.
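The following is a minimal sketch of the decoding flow described above, not an implementation of any particular codec. It assumes a hypothetical bitstream representation in which each predictive random access (PRA) picture carries a timestamp, a reference to its background picture, and inter-coded samples; the names PraPicture, select_pra_picture, and decode_pra_picture are illustrative.

```python
from dataclasses import dataclass

@dataclass
class PraPicture:
    timestamp: float    # presentation time of the PRA picture
    background_id: int  # identifier of the associated background picture
    coded_blocks: list  # samples inter-coded against the background picture

def select_pra_picture(pra_pictures, seek_time):
    """Return the PRA picture whose timestamp is closest to the requested time instance."""
    return min(pra_pictures, key=lambda p: abs(p.timestamp - seek_time))

def decode_pra_picture(pra, background_pictures):
    """Decode the PRA picture using inter-prediction from its associated background picture."""
    background = background_pictures[pra.background_id]
    decoded = []
    for block in pra.coded_blocks:
        # Blocks are reduced to single samples here for brevity: the prediction
        # is taken from the background picture and the coded residual is added.
        predicted = background[block["y"]][block["x"]]
        decoded.append(predicted + block["residual"])
    return decoded
```

In this sketch, random access at an arbitrary time first calls select_pra_picture to find the nearest PRA picture and then decodes it against the stored background picture rather than against earlier pictures in decoding order.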
Abstract:
Embodiments include methods and systems for context-adaptive pixel processing based, in part, on a respective weighting value for each pixel or group of pixels. The weighting values indicate which pixels are more pertinent to pixel processing computations. Computational resources and effort can then be focused on pixels with higher weights, which are generally more pertinent for certain pixel processing determinations.
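A minimal sketch of this weighting-driven idea is shown below. It assumes per-pixel weights are already available and that effort is saved simply by skipping pixels below a threshold; the weight source, the threshold, and the stand-in smoothing step are assumptions, not part of the described embodiments.

```python
import numpy as np

def weighted_pixel_process(pixels, weights, threshold=0.5):
    """Apply a more expensive per-pixel operation only where the weight is high.

    pixels, weights: 2-D arrays of the same shape.
    """
    out = pixels.astype(np.float32).copy()
    high = weights >= threshold  # pixels deemed pertinent to the computation
    # The expensive step (here a stand-in 3x3 smoothing) runs only on high-weight pixels.
    padded = np.pad(out, 1, mode="edge")
    for y, x in zip(*np.nonzero(high)):
        out[y, x] = padded[y:y + 3, x:x + 3].mean()
    return out
```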
Abstract:
The example techniques of this disclosure are directed to generating a stereoscopic view from an application designed to generate a mono view. For example, the techniques may modify instructions for a vertex shader based on a viewing angle. When the modified vertex shader is executed, the modified vertex shader may generate coordinates for vertices for a stereoscopic view based on the viewing angle.
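Below is a minimal sketch of the per-eye coordinate adjustment implied by the abstract, assuming a simple horizontal clip-space shift derived from a viewing angle. A real implementation would modify the vertex shader instructions themselves; the function name, eye separation constant, and disparity formula here are illustrative assumptions.

```python
import math

def stereo_positions(mono_position, viewing_angle_deg, eye_separation=0.06):
    """Return (left, right) clip-space positions for a mono clip-space position.

    mono_position: (x, y, z, w) as produced by the original, unmodified vertex shader.
    viewing_angle_deg: assumed angle controlling the horizontal disparity.
    """
    x, y, z, w = mono_position
    # Horizontal shift grows with the viewing angle and is applied in
    # opposite directions for the left-eye and right-eye images.
    shift = (eye_separation / 2.0) * math.tan(math.radians(viewing_angle_deg)) * w
    left = (x + shift, y, z, w)
    right = (x - shift, y, z, w)
    return left, right
```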
Abstract:
Techniques and systems are provided for generating a background picture. The background picture can be used for coding one or more pictures. For example, a method of generating a background picture includes generating a long-term background model for one or more pixels of a background picture. The long-term background model includes a statistical model for detecting long-term motion of the one or more pixels in a sequence of pictures. The method further includes generating a short-term background model for the one or more pixels of the background picture. The short-term background model detects short-term motion of the one or more pixels between two or more pictures. The method further includes determining a value for the one or more pixels of the background picture using the long-term background model and the short-term background model.
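The sketch below illustrates one way the two models could work together, assuming an exponentially weighted per-pixel mean and variance as the long-term statistical model and frame differencing between consecutive pictures as the short-term model. The thresholds, the update rate, and the rule for combining the models are assumptions for illustration only.

```python
import numpy as np

class BackgroundPictureModel:
    """Combines a long-term statistical model and a short-term motion check per pixel."""

    def __init__(self, shape, alpha=0.01, short_term_threshold=10.0):
        self.mean = np.zeros(shape, dtype=np.float32)      # long-term per-pixel mean
        self.var = np.full(shape, 15.0, dtype=np.float32)  # long-term per-pixel variance
        self.background = np.zeros(shape, dtype=np.float32)
        self.prev = None
        self.alpha = alpha
        self.short_term_threshold = short_term_threshold

    def update(self, picture):
        picture = picture.astype(np.float32)
        diff = picture - self.mean
        # Long-term model: exponentially weighted mean and variance over the sequence.
        self.mean += self.alpha * diff
        self.var += self.alpha * (diff * diff - self.var)
        # Short-term model: motion between this picture and the previous picture.
        if self.prev is None:
            short_term_motion = np.zeros(picture.shape, dtype=bool)
        else:
            short_term_motion = np.abs(picture - self.prev) > self.short_term_threshold
        self.prev = picture
        # A pixel deviating strongly from its long-term statistics is treated as moving.
        long_term_motion = diff * diff > 9.0 * self.var
        stable = ~(long_term_motion | short_term_motion)
        # The background picture takes the long-term mean where both models agree the
        # pixel is static; other pixels keep their previous background value.
        self.background[stable] = self.mean[stable]
        return self.background
```

Feeding each picture of a sequence through update() yields a background picture whose pixel values come from regions both models consider free of motion.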