Abstract:
Techniques are disclosed for coding video data predictively based on predictions made from spherical-domain projections of input pictures to be coded and reference pictures that are prediction candidates. Spherical projections of an input picture and the candidate reference pictures may be generated. Thereafter, a search may be conducted for a match between the spherical-domain representation of a pixel block to be coded and a spherical-domain representation of the reference picture. On a match, an offset may be determined between the spherical-domain representation of the pixel block and a matching portion of the reference picture in the spherical-domain representation. The spherical-domain offset may be transformed to a motion vector in a source-domain representation of the input picture, and the pixel block may be coded predictively with reference to a source-domain representation of the matching portion of the reference picture.
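A minimal sketch of the final step described above, converting a spherical-domain offset found by the search into a source-domain motion vector, assuming the source domain is an equirectangular projection. The function names and the example coordinates are illustrative assumptions, not taken from the disclosure.

```python
import math

def to_sphere(x, y, width, height):
    """Map an equirectangular pixel location to (longitude, latitude) in radians."""
    lon = (x / width) * 2.0 * math.pi - math.pi        # -pi .. +pi
    lat = math.pi / 2.0 - (y / height) * math.pi       # +pi/2 .. -pi/2
    return lon, lat

def to_plane(lon, lat, width, height):
    """Map spherical coordinates back to an equirectangular pixel location."""
    x = (lon + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - lat) / math.pi * height
    return x, y

def spherical_offset_to_motion_vector(block_xy, sphere_offset, width, height):
    """Apply a spherical-domain offset to a block location and express the result
    as a conventional (dx, dy) motion vector in the source-domain picture."""
    bx, by = block_xy
    lon, lat = to_sphere(bx, by, width, height)
    lon += sphere_offset[0]                            # offset found by the search
    lat += sphere_offset[1]
    mx, my = to_plane(lon, lat, width, height)
    return mx - bx, my - by

# Example: block at (960, 270) in a 1920x960 equirectangular frame,
# matched 0.05 rad east and 0.02 rad north in the spherical domain.
print(spherical_offset_to_motion_vector((960, 270), (0.05, 0.02), 1920, 960))
```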
Abstract:
Frame packing techniques are disclosed for multi-directional images and video. According to an embodiment, a multi-directional source image is reformatted into a format in which image data from opposing fields of view are represented in respective regions of the packed image as flat image content. Image data from a multi-directional field of view of the source image between the opposing fields of view are represented in another region of the packed image as equirectangular image content. It is expected that use of the formatted frame will lead to coding efficiencies when the formatted image is processed by predictive video coding techniques and the like.
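A minimal NumPy sketch of one possible packing of the kind described above, assuming the opposing fields of view are rendered as two flat regions of equal size and the remaining field of view is kept as an equirectangular band; the layout and region sizes are assumptions for illustration, not the disclosed format.

```python
import numpy as np

def pack_frame(flat_view_a, flat_view_b, equirect_band):
    """Pack two flat renderings of opposing fields of view plus an equirectangular
    band covering the field of view between them into one frame for coding.

    flat_view_a, flat_view_b: H x (W // 2) x C arrays (opposing views).
    equirect_band:            Hb x W x C array (multi-directional middle band).
    Returns an (H + Hb) x W x C packed frame."""
    flat_row = np.concatenate([flat_view_a, flat_view_b], axis=1)
    assert flat_row.shape[1] == equirect_band.shape[1], "widths must match"
    return np.concatenate([flat_row, equirect_band], axis=0)

# Example with dummy data: two 256x512 flat views and a 512x1024 equirectangular band.
a = np.zeros((256, 512, 3), dtype=np.uint8)
b = np.zeros((256, 512, 3), dtype=np.uint8)
band = np.zeros((512, 1024, 3), dtype=np.uint8)
print(pack_frame(a, b, band).shape)   # (768, 1024, 3)
```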
Abstract:
Techniques are disclosed for correcting artifacts in multi-view images that include a plurality of planar views. Image content of the planar views may be projected from the planar representation to a spherical projection. Thereafter, a portion of the image content may be projected from the spherical projection to a planar representation. The image content of the planar representation may be used for display. Extensions are disclosed that correct artifacts that may arise during deblocking filtering of the multi-view images.
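A minimal sketch of the second step above, re-projecting a portion of the spherical projection to a planar view for display, assuming the spherical content is stored as an equirectangular image and using nearest-neighbor sampling. The pinhole viewport model and coordinate conventions are assumptions of this sketch.

```python
import numpy as np

def render_viewport(equirect, out_w, out_h, fov_deg, yaw=0.0, pitch=0.0):
    """Re-project a viewport of the spherical content to a flat image for display."""
    H, W = equirect.shape[:2]
    f = 0.5 * out_w / np.tan(np.radians(fov_deg) / 2.0)     # pinhole focal length
    xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                         np.arange(out_h) - out_h / 2.0)
    # Ray directions through the viewport plane, then rotate by yaw/pitch.
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    dirs = dirs @ (Ry @ Rx).T
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
    # Sample the equirectangular image (nearest neighbor for brevity).
    u = ((lon + np.pi) / (2 * np.pi) * W).astype(int) % W
    v = np.clip(((lat + np.pi / 2) / np.pi * H).astype(int), 0, H - 1)
    return equirect[v, u]
```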
Abstract:
Techniques are described for implementing format configurations for multi-directional video and for switching between them. Source images may be assigned to formats that may change during a coding session. When a change occurs between formats, video coders and decoders may transform decoded reference frames from the first format to the second format. Thereafter, new frames in the second format may be coded or decoded predictively using the transformed reference frame(s) as source(s) of prediction. In this manner, video coders and decoders may continue to use inter-coding techniques across format changes and achieve high coding efficiency.
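A minimal sketch of the reference-frame transform at a format switch, written as inverse-mapping resampling: each sample of the new-format reference is filled from the corresponding location of the old-format decoded frame. The mapping callables `new_to_sphere` and `sphere_to_old` are placeholders for whatever pair of formats is in use; they are assumptions of this sketch, not part of the disclosure.

```python
import numpy as np

def transform_reference(ref_old, new_to_sphere, sphere_to_old, new_shape):
    """Resample a decoded reference frame from the first format into the second
    so that later frames in the second format can be predicted from it.

    new_to_sphere(x, y, W, H)      -> (lon, lat) : new-format coords to the sphere
    sphere_to_old(lon, lat, Wo, Ho) -> (x, y)    : sphere to old-format coords"""
    out = np.zeros(new_shape, dtype=ref_old.dtype)
    H, W = new_shape[:2]
    Ho, Wo = ref_old.shape[:2]
    for y in range(H):
        for x in range(W):
            lon, lat = new_to_sphere(x, y, W, H)
            xo, yo = sphere_to_old(lon, lat, Wo, Ho)
            out[y, x] = ref_old[int(yo) % Ho, int(xo) % Wo]   # nearest neighbor
    return out
```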
Abstract:
Embodiments of the present disclosure provide systems and methods for perspective shifting in a video conferencing session. In one exemplary method, a video stream may be generated. A foreground element may be identified in a frame of the video stream and distinguished from a background element of the frame. Data may be received representing a viewing condition at a terminal that will display the generated video stream. The frame of the video stream may be modified based on the received data to shift the foreground element relative to the background element. The modified video stream may be displayed at the displaying terminal.
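A minimal NumPy sketch of the frame modification described above, assuming the foreground element is available as a binary mask and the viewing condition has already been reduced to a pixel offset; hole filling behind the shifted foreground is omitted. The mask-based compositing is an assumption of this sketch.

```python
import numpy as np

def shift_foreground(frame, fg_mask, viewer_offset_px):
    """frame: H x W x C image; fg_mask: H x W boolean foreground mask;
    viewer_offset_px: (dx, dy) shift derived from the reported viewing condition."""
    dx, dy = viewer_offset_px
    shifted_fg = np.roll(frame, shift=(dy, dx), axis=(0, 1))
    shifted_mask = np.roll(fg_mask, shift=(dy, dx), axis=(0, 1))
    out = frame.copy()                 # background stays put (parallax effect)
    out[shifted_mask] = shifted_fg[shifted_mask]
    return out
```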
Abstract:
Multi-directional image data often contains distortions of image content that cause problems when processed by video coders that are designed to process traditional, “flat” image content. Embodiments of the present disclosure provide techniques for coding multi-directional image data using such coders. For each pixel block in a frame to be coded, an encoder may transform reference picture data within a search window about a location of the input pixel block based on the respective displacements between the location of the input pixel block and portions of the reference picture within the search window. The encoder may perform a prediction search among the transformed reference picture data to identify a match between the input pixel block and a portion of the transformed reference picture and, when a match is identified, the encoder may code the input pixel block differentially with respect to the matching portion of the transformed reference picture. The transform may counteract distortions imposed on image content of the reference picture data by the multi-directional format, thereby aligning the reference content with image content of the input picture. The techniques apply to both intra-coding and inter-coding.
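A minimal sketch of the prediction search described above. The SAD block search itself is standard; `transform_window` stands in for the distortion-countering transform applied to reference samples within the search window and is an assumption of this sketch (any resampling that aligns reference content with the input block's location).

```python
import numpy as np

def best_match(block, ref, block_xy, search_range, transform_window):
    """Search a window of the reference picture for the best match to `block`,
    transforming each candidate region before comparison."""
    bx, by = block_xy
    bh, bw = block.shape[:2]
    best = (None, float("inf"))
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue
            # Counter the multi-directional distortion before comparing.
            cand = transform_window(ref[y:y + bh, x:x + bw], (x, y), block_xy)
            sad = np.abs(block.astype(int) - cand.astype(int)).sum()
            if sad < best[1]:
                best = ((dx, dy), sad)
    return best   # motion vector and its SAD; the residual is then coded differentially
```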
Abstract:
Embodiments of the present disclosure provide systems and methods for background concealment in a video conferencing session. In one exemplary method, a video stream may be captured and provided to a first terminal participating in a video chat session. A background element and a foreground element may be determined in the video stream. A border region may additionally be determined in the video stream. The border region may define a boundary between the foreground element and the background element. The background element may be modified based, at least in part, on video content of the border region. The modified video stream may be transmitted to a second terminal participating in the video conferencing session.
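One possible reading of the modification step, sketched in NumPy: the background is replaced by a statistic of the border region (here its mean color) so the concealed area blends with the retained foreground edge. The mask-based data model and the choice of statistic are assumptions of this sketch.

```python
import numpy as np

def conceal_background(frame, fg_mask, border_mask):
    """frame: H x W x 3 image; fg_mask, border_mask: H x W boolean masks."""
    out = frame.copy()
    bg_mask = ~(fg_mask | border_mask)
    border_color = frame[border_mask].mean(axis=0)    # per-channel mean of the border
    out[bg_mask] = border_color.astype(frame.dtype)   # conceal the background region
    return out
```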
Abstract:
Techniques are disclosed for selecting deblocking filter parameters in a video decoding system. According to these techniques, a boundary strength parameter may be determined based, at least in part, on a bit depth of decoded video data. Activity of a pair of decoded pixel blocks may be classified based, at least in part, on the determined boundary strength parameter, and when a level of activity indicates that deblocking filtering is to be applied to the pair of pixel blocks, pixel block content at a boundary between the pair of pixel blocks may be filtered using filtering parameters derived at least in part from the bit depth of the decoded video data. The filtering parameters may decrease in strength as the bit depth of the decoded video data increases, which improves quality of the decoded video data.
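A minimal sketch of bit-depth-dependent deblocking along the lines described above: a base clipping threshold is reduced as bit depth grows, and the clipped correction is applied across the boundary. The specific scaling and filter form are illustrative assumptions, not the disclosed parameter derivation.

```python
def filter_strength(base_tc, bit_depth):
    """Clipping threshold that is reduced as bit depth increases, reflecting the
    weaker filtering applied to higher-bit-depth decoded video."""
    return base_tc >> max(0, bit_depth - 8)

def deblock_boundary(p, q, base_tc, bit_depth):
    """p, q: integer sample values on either side of the block boundary."""
    tc = filter_strength(base_tc, bit_depth)
    delta = (q - p) // 2
    delta = max(-tc, min(tc, delta))     # clip the correction to the filter strength
    return p + delta, q - delta

# Example: the same boundary samples are filtered less aggressively at 10 bits.
print(deblock_boundary(40, 60, base_tc=8, bit_depth=8))    # (48, 52)
print(deblock_boundary(40, 60, base_tc=8, bit_depth=10))   # (42, 58)
```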
Abstract:
Coding techniques for image data may cause a still image to be converted to a “phantom” video sequence, which is coded by motion compensated prediction techniques. Thus, coded video data obtained from the coding operation may include temporal prediction references between frames of the video sequence. Metadata may be generated that identifies allocations of content from the still image to the frames of the video sequence. The coded data and the metadata may be transmitted to another device, whereupon they may be decoded by motion compensated prediction techniques and converted back to still image data. Other techniques may involve coding an image in both a base layer representation and at least one coded enhancement layer representation. The enhancement layer representation may be coded predictively with reference to the base layer representation. The coded base layer representation may be partitioned into a plurality of individually-transmittable segments and stored. Prediction references of elements of the enhancement layer representation may be confined to segments of the base layer representation that correspond to a location of those elements. That is, when a pixel block of an enhancement layer maps to a given segment of the base layer representation, prediction references are confined to that segment and do not reference portions of the base layer representation that may be found in other segment(s).
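A minimal sketch of the first technique, assuming a simple tiling allocation: each tile of the still image becomes a frame of the “phantom” sequence, and the metadata records where each frame's content belongs. Tile size and the metadata layout are illustrative assumptions; the frames would then be passed to an ordinary motion-compensated video coder.

```python
import numpy as np

def image_to_phantom_sequence(image, tile_h, tile_w):
    """Split a still image into frames of a phantom video sequence plus metadata
    identifying the allocation of still-image content to each frame."""
    frames, metadata = [], []
    H, W = image.shape[:2]
    for y in range(0, H, tile_h):
        for x in range(0, W, tile_w):
            frames.append(image[y:y + tile_h, x:x + tile_w])
            metadata.append({"frame": len(frames) - 1, "x": x, "y": y})
    return frames, metadata            # frames go to a motion-compensated video coder

def phantom_sequence_to_image(frames, metadata, shape):
    """Reassemble the decoded phantom sequence into still image data."""
    image = np.zeros(shape, dtype=frames[0].dtype)
    for m in metadata:
        tile = frames[m["frame"]]
        image[m["y"]:m["y"] + tile.shape[0], m["x"]:m["x"] + tile.shape[1]] = tile
    return image
```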
Abstract:
Methods and systems provide efficient sample adaptive offset (SAO) signaling by reducing the number of bits consumed for signaling SAO compared with conventional methods. In an embodiment, a single flag is used if SAO for a coding unit in a first scanning direction with respect to a given coding unit is off. In an embodiment, further bits may be saved if some neighboring coding units are not present, i.e., the given coding unit is at an edge. For example, a flag may be skipped, e.g., not signaled, if the given coding unit does not have a neighbor. In an embodiment, a syntax element, e.g., one or more flags, may signal whether SAO filtering is performed in a coding unit. Based on the syntax element, a merge flag may be skipped to save bits. In an embodiment, SAO syntax may be signaled at a slice level.
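A minimal sketch of the conditional signaling, assuming a simple per-CU flag model: merge flags are written only when the corresponding neighbor exists, and all further SAO syntax is skipped when the enabling syntax element indicates SAO is off. The data model and the `write_bit` callable are assumptions of this sketch, not the actual bitstream syntax.

```python
def signal_sao(cu, left_cu, above_cu, write_bit):
    """Emit SAO syntax for one coding unit, skipping flags that carry no information."""
    write_bit(cu["sao_enabled"])          # syntax element: is SAO filtering performed?
    if not cu["sao_enabled"]:
        return                            # merge flags skipped entirely, saving bits
    if left_cu is not None:               # skip the flag at a left picture/slice edge
        write_bit(cu["merge_left"])
        if cu["merge_left"]:
            return                        # parameters copied from the left neighbor
    if above_cu is not None:              # skip the flag at a top picture/slice edge
        write_bit(cu["merge_above"])
        if cu["merge_above"]:
            return                        # parameters copied from the above neighbor
    # Otherwise, explicit SAO offset parameters would be signaled here.
```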