Abstract:
Techniques for selecting a luminance value for color space conversion are disclosed. Techniques include determining values for Cb and Cr from values for R′, G′, and B′; producing a reconstructed Cb* value and a reconstructed Cr* value by processing the Cb and Cr values; and determining a plurality of Y′ value options from the values for Cb* and Cr*. A Y′ output value may be selected based on the plurality of Y′ value options.
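As a rough illustration of the steps in this abstract, the sketch below derives Cb/Cr from R′G′B′, models the Cb*/Cr* reconstruction, and selects a Y′ value from a set of options. The BT.709 coefficients, the 8-bit quantization round trip standing in for the "processing" step, and the direct comparison against a target luma are all assumptions made for the sketch, not details taken from the abstract:

```python
def rgb_to_cbcr(rp, gp, bp):
    # Derive Cb/Cr from gamma-encoded R'G'B' (BT.709 coefficients assumed).
    yp = 0.2126 * rp + 0.7152 * gp + 0.0722 * bp
    cb = (bp - yp) / 1.8556
    cr = (rp - yp) / 1.5748
    return cb, cr

def reconstruct(c, bits=8):
    # Model the processing that yields Cb*/Cr* as an 8-bit
    # quantize/dequantize round trip (an illustrative stand-in).
    step = 1.0 / (2 ** bits - 1)
    return round(c / step) * step

def select_y(rp, gp, bp, candidates):
    # Choose, from the plurality of Y' options, the one whose value best
    # matches a target luma (a simplified selection criterion).
    target = 0.2126 * rp + 0.7152 * gp + 0.0722 * bp
    return min(candidates, key=lambda y: abs(y - target))

cb, cr = rgb_to_cbcr(0.8, 0.5, 0.2)
cb_star, cr_star = reconstruct(cb), reconstruct(cr)
best = select_y(0.8, 0.5, 0.2, candidates=[0.50, 0.54, 0.58])
```

In a full implementation the candidate Y′ values would themselves be derived from Cb* and Cr*, and the comparison would typically be made in linear luminance rather than against the gamma-domain luma used here.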
Abstract:
In a communication system, parallel encoding and decoding of serially-coded data occurs in a manner that supports low latency communication. A plurality of data items may be coded as serially-coded data sequences and a transmission sequence may be built from them. An index table may be built having a plurality of entries representing respective start points of the serially-coded data sequences within the transmission sequence. The transmission sequence may be transmitted to a channel and, thereafter, the index table may be transmitted. Latencies otherwise involved in inserting an index table into the beginning of a transmission sequence may be avoided.
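The structure described above can be sketched in a few lines: concatenate the serially-coded sequences while recording each start point, then send the payload before the index table. The `encode` stand-in and the list-of-offsets table format are assumptions for illustration, not the claimed encoding:

```python
def build_transmission(items, encode):
    # Concatenate serially-coded sequences into one transmission sequence,
    # recording the start point of each sequence as it is appended.
    payload = bytearray()
    index = []
    for item in items:
        index.append(len(payload))      # entry: start point of this sequence
        payload += encode(item)
    return bytes(payload), index

encode = lambda s: s.encode("utf-8")    # stand-in serial coder
payload, index = build_transmission(["ab", "cde", "f"], encode)
# Transmit `payload` first, then `index`: transmission can begin before
# all start points are known, and a receiver with the index can hand
# each sequence to a separate decoder for parallel decoding.
```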
Abstract:
A new file format for coded video data is provided. A decoder may identify patterns in the coded video data in order to make the decoding process and/or display of data more efficient. Such patterns may be predefined and stored at the decoder, may be defined by each encoder and exchanged during terminal initialization, or may be transmitted and/or stored with the associated video data. Initialization information associated with the fragments of video data may also provide for carouseling initialization updates such that the initialization fragments may indicate either that the initialization information should be updated or that the decoder should be re-initialized. Additionally, media files or segments may be broken into fragments and each segment may have an index to provide for random access to the media data of the segment.
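The fragment-plus-index arrangement in the last sentence can be sketched as follows; the fixed fragment size and byte-offset index format are illustrative assumptions, not the file format defined by the abstract:

```python
def fragment_segment(media: bytes, fragment_size: int):
    # Break a media segment into fragments and build a byte-offset index
    # so a decoder can seek directly to any fragment (random access).
    fragments = [media[i:i + fragment_size]
                 for i in range(0, len(media), fragment_size)]
    index = [i * fragment_size for i in range(len(fragments))]
    return fragments, index

fragments, index = fragment_segment(b"0123456789", 4)
# A decoder wanting fragment 2 reads from byte offset index[2]
# without parsing the preceding fragments.
```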
Abstract:
Predictive coding techniques may include resampling of reference pictures, where various coding parameters are determined based on the resolution(s) or pixel format(s) of the prediction references. In a first aspect, lists of weights for use in weighted prediction are based on the resolution(s) of prediction references. In a second aspect, resampling filter parameters are selected based on the resolutions of prediction references. In a third aspect, deblocking filter parameters are based on the resolution(s) of prediction references.
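As a minimal sketch of the second aspect, a coder might select a resampling filter by comparing the reference resolution against the current picture's resolution. The filter names and the width-ratio rule are hypothetical, chosen only to illustrate resolution-dependent parameter selection:

```python
def pick_resampling_filter(ref_res, cur_res):
    # Select a resampling filter from the reference-to-current resolution
    # ratio (thresholds and names are illustrative, not from any codec).
    ratio = ref_res[0] / cur_res[0]
    if ratio > 1.0:
        return "low-pass-downsample"   # reference larger: band-limit first
    if ratio < 1.0:
        return "interp-upsample"       # reference smaller: interpolate
    return "identity"                  # same resolution: no resampling

filt = pick_resampling_filter((3840, 2160), (1920, 1080))
```

The first and third aspects follow the same pattern: the resolution comparison drives which weight list or deblocking parameters apply.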
Abstract:
Techniques for coding and decoding video may include predicting picture regions defined by a time-varying tessellation and/or by a tessellation that varies spatially within a picture. These techniques improve decoded video quality, for example, by reducing block-based visual artifacts. Tessellation patterns may be irregular spatially to prevent alignment of some prediction region boundaries within a picture. Tessellation patterns may vary over time based on a spatial offset value, and the spatial offset value may be determined via a modulo function. Tessellation patterns may include overlapped shapes, for example when used in conjunction with overlapped block motion compensation.
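The time-varying offset described above can be sketched directly; the stride value and tile size here are arbitrary assumptions used only to show the modulo behavior:

```python
def tile_offset(frame_index, tile_size, stride=5):
    # Time-varying spatial offset for the tessellation: a modulo keeps the
    # offset within one tile while shifting it from frame to frame, so
    # prediction-region boundaries do not stay aligned over time.
    return (frame_index * stride) % tile_size

offsets = [tile_offset(n, tile_size=16) for n in range(5)]
```

Because the stride and tile size are coprime in this example, the boundary position cycles through every phase before repeating, which spreads block-boundary artifacts rather than reinforcing them at fixed positions.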
Abstract:
Support for additional components may be specified in a coding scheme for image data. A layer of a coding scheme that specifies color components may also specify additional components. Characteristics of the components may be specified in the same layer or a different layer of the coding scheme. An encoder or decoder may identify the specified components and determine the respective characteristics to perform encoding and decoding of image data.
Abstract:
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image-based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image-based representation of a point cloud. The encoder is configured to project the point cloud onto patch planes to compress the point cloud, and supports multiple layered patch planes. For example, some point clouds may have a depth, and points at different depths may be assigned to different layered patch planes.
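The depth-based layer assignment in the last sentence can be sketched as below. The uniform depth binning is an illustrative choice, not the codec's actual rule:

```python
def assign_layers(points, num_layers, depth_range):
    # Assign each (x, y, depth) point to a layered patch plane by depth;
    # uniform bins over depth_range stand in for the real assignment rule.
    lo, hi = depth_range
    layers = [[] for _ in range(num_layers)]
    for p in points:
        t = (p[2] - lo) / (hi - lo)
        layer = min(int(t * num_layers), num_layers - 1)
        layers[layer].append(p)
    return layers

layers = assign_layers([(0, 0, 0.1), (1, 1, 0.9)],
                       num_layers=2, depth_range=(0.0, 1.0))
# Near points land in layer 0, far points in layer 1; each layer is then
# packed into its own patch-plane image for 2D compression.
```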
Abstract:
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image-based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image-based representation of a point cloud. Additionally, an encoder is configured to signal and/or a decoder is configured to receive a supplementary message comprising volumetric tiling information that maps portions of 2D image representations to objects in the point cloud. In some embodiments, characteristics of the objects may additionally be signaled using the supplementary message or additional supplementary messages.
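A minimal sketch of the mapping such a supplementary message could carry is shown below; the rectangle-per-object layout, field names, and lookup helper are all hypothetical, chosen only to illustrate mapping 2D image regions to object identifiers:

```python
from dataclasses import dataclass

@dataclass
class VolumetricTile:
    # One entry of a hypothetical volumetric-tiling supplementary message:
    # a rectangle in the 2D image representation mapped to an object id.
    x: int
    y: int
    width: int
    height: int
    object_id: int

def objects_at(tiles, x, y):
    # A decoder could use the mapping to find which object(s) a given
    # position in the 2D image representation belongs to.
    return [t.object_id for t in tiles
            if t.x <= x < t.x + t.width and t.y <= y < t.y + t.height]

tiles = [VolumetricTile(0, 0, 8, 8, object_id=3),
         VolumetricTile(8, 0, 8, 8, object_id=7)]
hits = objects_at(tiles, 9, 1)
```

Object characteristics (the last sentence of the abstract) would be additional fields on such entries or on follow-up messages keyed by `object_id`.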