Abstract:
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud. A block/sub-block organization scheme is used to encode blocks and sub-blocks of an occupancy map used in compressing the point cloud. Binary values are assigned to blocks/sub-blocks based on whether they contain patches projected from the point cloud. A traversal path is chosen that takes advantage of run-length encoding strategies to reduce the size of the encoded occupancy map. Also, auxiliary information is used to further improve occupancy map compression.
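As a rough illustration of the block/sub-block scheme, the Python sketch below assigns a binary value to each sub-block of an occupancy map, run-length encodes those values along several candidate traversal paths, and keeps the traversal that yields the shortest encoding. The 4x4 sub-block size, the candidate traversal orders, and the plain (value, run) output are illustrative assumptions, not the particular scheme described in the abstract above.

from itertools import groupby

def subblock_values(occupancy, block_size=4):
    """Assign 1 to each sub-block that contains any occupied (patch) pixel."""
    h, w = len(occupancy), len(occupancy[0])
    values = []
    for by in range(0, h, block_size):
        row = []
        for bx in range(0, w, block_size):
            occupied = any(occupancy[y][x]
                           for y in range(by, min(by + block_size, h))
                           for x in range(bx, min(bx + block_size, w)))
            row.append(1 if occupied else 0)
        values.append(row)
    return values

def traversals(grid):
    """Candidate traversal paths over the sub-block grid (assumed orders)."""
    rows, cols = len(grid), len(grid[0])
    yield "raster", [v for row in grid for v in row]
    yield "vertical", [grid[y][x] for x in range(cols) for y in range(rows)]
    # Snake order tends to lengthen runs when occupancy is spatially coherent.
    yield "snake", [v for y, row in enumerate(grid)
                    for v in (row if y % 2 == 0 else reversed(row))]

def run_length(seq):
    """(value, run-length) pairs; a shorter list approximates a smaller bitstream."""
    return [(v, len(list(g))) for v, g in groupby(seq)]

def encode_occupancy(occupancy):
    """Return the traversal name and run-length coded sub-block values."""
    grid = subblock_values(occupancy)
    return min(((name, run_length(seq)) for name, seq in traversals(grid)),
               key=lambda item: len(item[1]))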
Abstract:
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud. A closed-loop color conversion process is used to improve compression while taking into consideration distortion introduced throughout the point cloud compression process.
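A minimal sketch of the closed-loop idea under broad assumptions: each candidate color conversion is pushed through a stand-in for the remainder of the compression chain and scored by the distortion measured back in the original color space. The candidate conversions and the chain placeholder are hypothetical; the abstract does not specify them.

def closed_loop_convert(rgb_block, candidates, chain):
    """Return the (convert, invert) pair whose end-to-end reconstruction is
    closest to the original RGB samples, i.e. the conversion chosen while
    accounting for distortion introduced downstream.

    candidates: iterable of (convert, invert) function pairs, e.g. alternative
                RGB -> YCbCr conversions and their inverses (assumed inputs).
    chain:      placeholder for the rest of the compression pipeline
                (subsampling, video coding, decoding, upsampling)."""
    best, best_err = None, float("inf")
    for convert, invert in candidates:
        reconstructed = [invert(chain(convert(px))) for px in rgb_block]
        err = sum((o - r) ** 2
                  for orig_px, rec_px in zip(rgb_block, reconstructed)
                  for o, r in zip(orig_px, rec_px))
        if err < best_err:
            best, best_err = (convert, invert), err
    return best

# Example usage with an identity chain and a single trivial candidate:
# best = closed_loop_convert([(0.5, 0.2, 0.1)],
#                            [(lambda p: p, lambda p: p)],
#                            chain=lambda p: p)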
Abstract:
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. To compress the attribute and/or spatial information, the encoder is configured to convert a point cloud into an image based representation. Also, the decoder is configured to generate a decompressed point cloud based on an image based representation of a point cloud. In some embodiments, an encoder performs downscaling of an image frame prior to video encoding and a decoder performs upscaling of an image frame subsequent to video decoding.
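A minimal sketch of that arrangement, assuming a 2x box filter for downscaling, nearest-neighbor upscaling, and placeholder callables standing in for the video encoder and decoder:

def downscale_2x(frame):
    """Average each 2x2 block (box filter); frame is a list of pixel rows."""
    return [[(frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]) / 4
             for x in range(0, len(frame[0]) - 1, 2)]
            for y in range(0, len(frame) - 1, 2)]

def upscale_2x(frame):
    """Nearest-neighbor upscaling back to the original resolution."""
    return [[frame[y // 2][x // 2]
             for x in range(2 * len(frame[0]))]
            for y in range(2 * len(frame))]

def encode_frame(frame, video_encode):
    return video_encode(downscale_2x(frame))    # downscale prior to video encoding

def decode_frame(bitstream, video_decode):
    return upscale_2x(video_decode(bitstream))  # upscale subsequent to video decoding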
Abstract:
A new file format for coded video data is provided. A decoder may identify patterns in the coded video data in order to make the decoding process and/or display of data more efficient. Such patterns may be predefined and stored at the decoder, may be defined by each encoder and exchanged during terminal initialization, or may be transmitted and/or stored with the associated video data. Initialization information associated with the fragments of video data may also provide for carouseling initialization updates such that the initialization fragments may indicate either that the initialization information should be updated or that the decoder should be re-initialized. Additionally, media files or segments may be broken into fragments and each segment may have an index to provide for random access to the media data of the segment.
Abstract:
Techniques for selecting a luminance value for color space conversion are disclosed. Techniques include determining values for Cb and Cr from values for R′, G′, and B′; producing a reconstructed Cb* value and a reconstructed Cr* value by processing the Cb and Cr values; and determining a plurality of Y′ value options from the values for Cb* and Cr*. A Y′ output value may be selected based on the plurality of Y′ value options.
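For illustration, the sketch below follows the sequence in the abstract using BT.709 conversion coefficients and a plain uniform quantizer as a stand-in for whatever processing yields the reconstructed Cb* and Cr* values; both choices are assumptions. Three Y' candidates are derived, one preserving each of R', G', and B' against the reconstructed chroma, and the candidate with the smallest reconstruction error is selected.

KR, KG, KB = 0.2126, 0.7152, 0.0722  # BT.709 luma weights (assumption)

def to_cbcr(r, g, b):
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2.0 * (1.0 - KB))
    cr = (r - y) / (2.0 * (1.0 - KR))
    return cb, cr

def reconstruct_rgb(y, cb, cr):
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b

def quantize(x, bits=8):
    """Placeholder for the processing that yields Cb*/Cr*; here a uniform
    quantizer on [-0.5, 0.5]."""
    step = 1.0 / ((1 << bits) - 1)
    return round((x + 0.5) / step) * step - 0.5

def select_y(r, g, b):
    cb, cr = to_cbcr(r, g, b)
    cb_s, cr_s = quantize(cb), quantize(cr)              # reconstructed Cb*, Cr*
    candidates = (
        r - 2.0 * (1.0 - KR) * cr_s,                                    # preserves R'
        g + (2 * KB * (1 - KB) / KG) * cb_s + (2 * KR * (1 - KR) / KG) * cr_s,  # preserves G'
        b - 2.0 * (1.0 - KB) * cb_s,                                    # preserves B'
    )
    def err(y):
        rr, gg, bb = reconstruct_rgb(y, cb_s, cr_s)
        return (rr - r) ** 2 + (gg - g) ** 2 + (bb - b) ** 2
    return min(candidates, key=err)                      # selected Y' output value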
Abstract:
In a coding system, an encoder codes video data according to a predetermined protocol, which, when decoded, causes an associated decoder to perform a predetermined sequence of decoding operations. The encoder may perform local decodes of the coded video data, both in the manner dictated by the governing coding protocol and according to one or more alternative decoding operations. The encoder may estimate the relative performance of the alternative decoding operations as compared to the decoding operation mandated by the coding protocol. The encoder may provide identifiers in metadata associated with the coded video data to identify the levels of distortion introduced and/or the resources conserved by the alternative decoding operations. A decoder may refer to such identifiers when determining whether to engage alternative decoding operations as may be warranted under resource conservation policies.
Abstract:
Improved video coding and decoding techniques are described, including techniques to derive quantization step sizes adaptively with quantization step size table templates. The quantization techniques described provide finer-grained control over quantization with more flexible quantization step sizes, especially at higher degrees of quantization. This may result in improved overall compression quality. Other coding parameters, such as in-loop filtering parameters, may be derived based on the more flexible quantization parameters.
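One way to picture a step size table template, sketched here with made-up values loosely in the style of HEVC's six-entry level scale table: each template spans one doubling of the step size, so a template with more entries gives the finer-grained control at higher QP mentioned above. The template contents and the selection rule are illustrative assumptions.

# Illustrative step size table templates (values are assumptions). Every
# len(template) QP increments double the step size, so a longer template
# yields a finer quantization granularity.
TEMPLATES = {
    "coarse": [26, 29, 33, 36, 41, 45],                          # 6 steps per doubling
    "fine":   [26, 28, 29, 31, 33, 35, 37, 39, 41, 44, 46, 49],  # 12 steps per doubling
}

def qstep(qp, template_id="coarse"):
    """Derive the quantization step size for a given QP from a template."""
    template = TEMPLATES[template_id]
    n = len(template)
    return template[qp % n] << (qp // n)

# e.g. qstep(0) == 26, qstep(6) == 52, qstep(1, "fine") == 28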
Abstract:
Techniques for encoding video with motion compensation include a compressed bitstream syntax that includes a list of all motion prediction reference frames without distinguishing between short-term and long-term reference frames. The list of reference frames may be provided in a slice header and may apply to encoded video data within the corresponding slice. The list may be prefaced with a single number indicating the total number of reference frames. In an aspect, delta POC reference numbers may be encoded with a flag indicating the sign of the delta POC when the absolute value of the delta POC is not equal to zero. In another aspect, a flag may be encoded for every reference frame indicating whether POC information should be used when scaling prediction references, and a weighting parameter may be included when POC information should be used.
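As a rough sketch of that syntax, the code below serializes such a reference list into a toy symbol stream: a single count, then, per reference, an absolute delta POC, a sign flag only when the delta POC is nonzero, a flag for POC-based scaling, and a weighting parameter only when that flag is set. The field names and the symbol-level representation (plain values rather than real entropy coding) are assumptions.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RefFrame:
    delta_poc: int                  # signed POC difference to this reference
    use_poc_for_scaling: bool       # whether POC info scales prediction references
    weight: Optional[float] = None  # weighting parameter when POC info is used

def write_ref_list(refs: List[RefFrame]) -> list:
    symbols = [len(refs)]           # single count; no short/long-term distinction
    for ref in refs:
        symbols.append(abs(ref.delta_poc))
        if ref.delta_poc != 0:      # sign flag only for nonzero delta POC
            symbols.append(1 if ref.delta_poc < 0 else 0)
        symbols.append(1 if ref.use_poc_for_scaling else 0)
        if ref.use_poc_for_scaling:
            symbols.append(ref.weight)
    return symbols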
Abstract:
A method of adaptive chroma downsampling is presented. The method comprises converting a source image to a converted image in an output color format, applying a plurality of downsample filters to the converted image, estimating a distortion for each filter, and choosing the filter that produces the minimum distortion. The distortion estimation includes applying an upsample filter, and a pixel is output based on the chosen filter. Methods for closed-loop conversions are also presented.
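A minimal one-dimensional sketch of the selection loop described above, assuming a handful of made-up downsampling filter taps and a fixed linear-interpolation upsampler for the distortion estimate:

# Candidate downsampling filters (illustrative taps, not normative).
DOWN_FILTERS = {
    "average": [0.5, 0.5],
    "skip":    [1.0, 0.0],           # keep every other sample
    "smooth":  [0.25, 0.5, 0.25],
}

def downsample(samples, taps):
    """Filter and decimate by 2 along one dimension."""
    return [sum(t * samples[i + k] for k, t in enumerate(taps))
            for i in range(0, len(samples) - len(taps) + 1, 2)]

def upsample(samples):
    """Fixed linear-interpolation upsampling used for distortion estimation."""
    out = []
    for i, s in enumerate(samples):
        nxt = samples[i + 1] if i + 1 < len(samples) else s
        out.extend([s, 0.5 * (s + nxt)])
    return out

def pick_filter(chroma):
    """Return the filter name and downsampled output with minimum distortion."""
    def distortion(taps):
        rec = upsample(downsample(chroma, taps))
        return sum((a - b) ** 2 for a, b in zip(chroma, rec))
    name = min(DOWN_FILTERS, key=lambda n: distortion(DOWN_FILTERS[n]))
    return name, downsample(chroma, DOWN_FILTERS[name])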