Abstract:
Video coders and decoders perform transform coding and decoding on blocks of video content according to an adaptively selected transform type. The transform types are organized into a hierarchy of transform sets, where each transform set includes a respective number of transforms and each higher-level transform set includes the transforms of each lower-level transform set within the hierarchy. The video coders and video decoders may exchange signaling that establishes a transform set context from which a transform set that was selected for coding given block(s) may be identified. The video coders and video decoders may also exchange signaling that establishes a transform decoding context from which a transform that was selected from the identified transform set may be identified for decoding the transform unit. The block(s) may be coded and decoded using the selected transform.
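The nested transform-set hierarchy described above can be illustrated with a minimal Python sketch. The transform names and the three-level structure here are illustrative assumptions, not the patented implementation; the point is only the superset property: every higher-level set contains all transforms of the levels below it.

```python
# Hypothetical nested transform-set hierarchy (names are illustrative).
# Each higher-level set is a superset of every lower-level set.
TRANSFORM_SETS = [
    ["DCT"],                          # level 0: smallest set
    ["DCT", "ADST"],                  # level 1: adds one transform
    ["DCT", "ADST", "IDTX"],          # level 2: adds another
]

def transforms_for_level(level):
    """Return the transforms available at a given hierarchy level."""
    return TRANSFORM_SETS[level]

def is_nested(sets):
    """Verify the superset property the hierarchy relies on."""
    return all(set(sets[i]) <= set(sets[i + 1]) for i in range(len(sets) - 1))
```

Because the sets are nested, signaling a transform-set level plus an index within that set is enough for a decoder to identify the selected transform.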
Abstract:
The present disclosure describes techniques for efficient coding of motion vectors developed for multi-hypothesis coding applications. According to these techniques, when coding hypotheses are developed, each having a motion vector identifying a source of prediction for a current pixel block, a motion vector for a first one of the coding hypotheses may be predicted from the motion vector of a second coding hypothesis. The first motion vector may be represented by coding a motion vector residual, which represents a difference between the developed motion vector for the first coding hypothesis and the predicted motion vector for the first coding hypothesis, and outputting the coded residual to a channel. In another embodiment, a motion vector residual may be generated for a motion vector of a first coding hypothesis, and the first motion vector and the motion vector residual may be used to predict a second motion vector and a predicted motion vector residual. The second hypothesis's motion vector may be coded as a difference between the motion vector, the predicted second motion vector, and the predicted motion vector residual. In a further embodiment, a single motion vector residual may be output for the motion vectors of two coding hypotheses representing a difference between the motion vector of one of the hypotheses and a predicted motion vector for that hypothesis.
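The first embodiment above, predicting one hypothesis's motion vector from another and coding only the residual, can be sketched as follows. This is a hedged illustration with hypothetical function names; the actual coding of residuals to the channel is not shown.

```python
# Sketch of inter-hypothesis motion vector prediction: only the residual
# between the developed and predicted motion vectors is coded.

def mv_residual(mv, predicted_mv):
    """Encoder side: residual = developed MV minus predicted MV."""
    return (mv[0] - predicted_mv[0], mv[1] - predicted_mv[1])

def reconstruct_mv(predicted_mv, residual):
    """Decoder side: predicted MV plus the coded residual."""
    return (predicted_mv[0] + residual[0], predicted_mv[1] + residual[1])
```

When the two hypotheses' motion vectors are similar, the residual components are small and cheaper to entropy-code than the full vector.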
Abstract:
Methods and systems provide efficient sample adaptive offset (SAO) signaling by reducing the number of bits consumed for signaling SAO compared with conventional methods. In an embodiment, a single flag is used if SAO is off for a coding unit in a first scanning direction with respect to a given coding unit. In an embodiment, further bits may be saved if some neighboring coding units are not present, i.e., the given coding unit is at an edge. For example, a flag may be skipped, e.g., not signaled, if the given coding unit does not have a neighbor. In an embodiment, a syntax element, e.g., one or more flags, may signal whether SAO filtering is performed in a coding unit. Based on the syntax element, a merge flag may be skipped to save bits. In an embodiment, SAO syntax may be signaled at a slice level.
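The bit-saving rules above can be sketched in Python. The flag names and the single-neighbor structure are illustrative assumptions, not the claimed syntax; the sketch only shows how flags are conditionally skipped.

```python
# Hypothetical SAO signaling sketch: flags are skipped when they carry
# no information (SAO off, or no usable neighbor to merge with).

def sao_flags_to_signal(has_left_neighbor, left_sao_on, sao_on):
    """Return the list of flags an encoder would emit for one coding unit."""
    if not sao_on:
        return ["sao_off"]            # a single flag; nothing else signaled
    flags = []
    if has_left_neighbor and left_sao_on:
        flags.append("merge_left")    # merge flag only when merging is possible
    flags.append("sao_params")
    return flags
```

When the coding unit sits at a picture edge (no left neighbor) or the neighbor's SAO is off, the merge flag is simply not signaled, saving bits.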
Abstract:
Techniques are disclosed for estimating quality of images in an automated fashion. According to these techniques, a source image may be downsampled to generate at least two downsampled images at different levels of downsampling. Blurriness of the images may be estimated starting with a most-heavily downsampled image. Blocks of a given image may be evaluated for blurriness and, when a block of a given image is estimated to be blurry, the block of the image and co-located blocks of higher resolution image(s) may be designated as blurry. Thereafter, a blurriness score may be calculated for the source image from the number of blocks of the source image designated as blurry.
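The coarse-to-fine blur designation above can be sketched as follows. This assumes, for illustration, that block indices line up across resolution levels; the actual co-location mapping would depend on the downsampling factors.

```python
# Sketch of propagating blur designations from the most-heavily
# downsampled image up to higher resolutions, then scoring the source.

def propagate_blurry(levels):
    """levels[0] holds the blurry block indices of the most-heavily
    downsampled image; later entries are progressively higher resolutions.
    Co-located blocks at higher resolutions inherit the designation."""
    for lo, hi in zip(levels, levels[1:]):
        hi |= lo                      # mark co-located blocks as blurry
    return levels

def blurriness_score(blurry_blocks, total_blocks):
    """Score = fraction of source-image blocks designated blurry."""
    return len(blurry_blocks) / total_blocks
```

Evaluating the smallest image first is cheap, and its blur designations spare the evaluator from re-examining the corresponding blocks at full resolution.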
Abstract:
Methods for organizing media data by automatically segmenting media data into hierarchical layers of scenes are described. The media data may include metadata and content having still image, video or audio data. The metadata may be content-based (e.g., differences between neighboring frames, exposure data, key frame identification data, motion data, or face detection data) or non-content-based (e.g., exposure, focus, location, time) and used to prioritize and/or classify portions of video. The metadata may be generated at the time of image capture or during post-processing. Prioritization information, such as a score for various portions of the image data may be based on the metadata and/or image data. Classification information such as the type or quality of a scene may be determined based on the metadata and/or image data. The classification and prioritization information may be metadata and may be used to organize the media data.
Abstract:
A system for processing media on a resource-restricted device includes a memory to store data representing media assets and associated descriptors, along with program instructions representing an application and a media processing system, and a processor to execute the program instructions. The program instructions cause the media processing system, in response to a call from an application defining a plurality of services to be performed on an asset, to determine a tiered schedule of processing operations to be performed upon the asset based on a processing budget associated therewith, and to iteratively execute the processing operations on a tier-by-tier basis, unless interrupted.
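The tiered, budget-bounded execution described above can be sketched in a few lines of Python. The operation names, costs, and budget semantics here are illustrative assumptions, not the patented scheduler.

```python
# Sketch of tier-by-tier execution under a processing budget: operations
# run in tier order until the budget would be exceeded (an interruption).

def run_tiers(tiers, budget):
    """Each tier is a list of (name, cost) pairs.
    Returns the names of operations completed before the budget ran out."""
    done, spent = [], 0
    for tier in tiers:
        for name, cost in tier:
            if spent + cost > budget:
                return done           # interrupted: stop mid-schedule
            spent += cost
            done.append(name)
    return done
```

Ordering cheap, high-value operations into earlier tiers means a constrained device still produces useful partial results when the budget is exhausted.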
Abstract:
An encoding system may include a video source that captures a video image, a video coder, and a controller to manage operation of the system. The video coder may encode the video image into encoded video data using a plurality of subgroup parameters corresponding to a plurality of subgroups of pixels within a group. The controller may set the subgroup parameters for at least one of the subgroups of pixels in the video coder based upon at least one parameter corresponding to the group. A decoding system may decode the encoded video data based upon these parameters.
Abstract:
Systems and methods are configured for accessing data representing video content, the data comprising a set of one or more symbols, each associated with a syntax element; performing a probability estimation for encoding the data, comprising: for each symbol, obtaining, based on the syntax element for that symbol, an adaptivity rate parameter value, the adaptivity rate parameter value being a function of a number of symbols in the set of one or more symbols; updating the adaptivity rate parameter value as a function of an adjustment parameter value; and generating, based on the updated adaptivity rate parameter value, a probability value; generating a probability estimation comprising a cumulative distribution function (CDF); and encoding, based on the CDF of the probability estimation, the data comprising the set of one or more symbols for transmission.
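An adaptive CDF update of the general kind described above can be sketched as follows. This is similar in spirit to the CDF adaptation used by AV1's entropy coder, but the rate schedule and scaling here are illustrative assumptions, not the claimed method.

```python
# Sketch of an adaptive CDF update: the distribution is nudged toward
# each observed symbol, with an adaptivity rate that slows as the
# symbol count grows.

def update_cdf(cdf, symbol, count):
    """cdf[i] approximates P(sym <= i), scaled to 1 << 15 = 32768.
    The last entry is the fixed terminator (always 32768)."""
    rate = 3 + (count >> 4)           # adapt faster for early symbols
    for i in range(len(cdf) - 1):
        if i < symbol:
            cdf[i] -= cdf[i] >> rate                  # shrink mass below symbol
        else:
            cdf[i] += (32768 - cdf[i]) >> rate        # grow mass at/above symbol
    return cdf
```

A larger `rate` means a smaller step, so the estimate stabilizes once many symbols have been seen, while early symbols move the distribution quickly.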
Abstract:
A method for processing media assets includes, given a first media asset, deriving characteristics from the first media asset, searching for other media assets having characteristics that correlate to the characteristics of the first media asset, when a match is found, deriving content corrections for the first media asset or a matching media asset from the other of the first media asset or the matching media asset, and correcting content of the first media asset or the matching media asset based on the content corrections.
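The matching step above can be illustrated with a small Python sketch. The trait-overlap correlation measure and the threshold are hypothetical stand-ins for whatever characteristics the method actually derives.

```python
# Sketch of finding a media asset whose derived characteristics
# correlate with a given asset's (Jaccard overlap as a stand-in metric).

def find_match(asset, library, threshold=0.9):
    """Return the first library asset whose characteristics correlate
    with the given asset's at or above the threshold, else None."""
    def correlation(a, b):
        shared = set(a) & set(b)
        return len(shared) / max(len(set(a) | set(b)), 1)
    for other in library:
        if correlation(asset["traits"], other["traits"]) >= threshold:
            return other
    return None
```

Once a match is found, corrections derived from either asset (e.g., color or exposure adjustments) could be applied to the other, per the method above.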