Abstract:
HDR images are coded and distributed. An initial HDR image is received. Processing the received HDR image creates a JPEG-2000 DCI-compliant coded baseline image and an HDR-enhancement image. The coded baseline image has one or more color components. The HDR-enhancement image provides enhancement information that allows reconstruction of an instance of the initial HDR image from the baseline image and the HDR-enhancement image. A data packet is computed, which has a first and a second data set. The first data set relates to the baseline image color components, each of which has an application marker that relates to the HDR-enhancement image. The second data set relates to the HDR-enhancement image. The data packets are sent in a DCI-compliant bit stream.
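The two-set packet layout described above can be sketched as a minimal container. This is an illustrative structure only; the names `ComponentEntry`, `HDRPacket`, and `build_packet`, and the use of a simple index as the application marker, are assumptions, not the patent's actual codestream syntax.

```python
from dataclasses import dataclass

@dataclass
class ComponentEntry:
    """One baseline color component plus an application marker
    that relates it to the HDR-enhancement data (hypothetical layout)."""
    component: bytes   # coded baseline component data
    app_marker: int    # marker value pointing at the enhancement set

@dataclass
class HDRPacket:
    """Data packet with a first and a second data set, per the abstract."""
    baseline: list     # first data set: components, each with a marker
    enhancement: bytes # second data set: HDR-enhancement image data

def build_packet(components, enhancement):
    # Each baseline component carries an application marker; here the
    # marker is simply the component's index into the enhancement set.
    entries = [ComponentEntry(c, i) for i, c in enumerate(components)]
    return HDRPacket(entries, enhancement)
```

A packet built this way keeps the baseline components and the enhancement payload separable, so a legacy decoder could consume only the first data set.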
Abstract:
Novel methods and systems for color grading are disclosed. The color grading process for a visual dynamic range image can be guided by information relating to the color grading of other images such as the standard dynamic range image.
Abstract:
Representation and coding of multi-view images using tapestry encoding are described, with compatibility for both standard and enhanced dynamic ranges. A tapestry comprises information on a tapestry image, a left-shift displacement map, and a right-shift displacement map. Perspective images of a scene can be generated from the tapestry and the displacement maps. Different methods for achieving compatibility are described.
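Generating a perspective image from a tapestry and its displacement maps can be sketched as a per-row forward warp. This is a simplified sketch under assumptions: a view parameter `t` in [0, 1] interpolating between the left-shift and right-shift maps, nearest-integer shifts, and no hole filling or occlusion handling.

```python
import numpy as np

def synthesize_view(tapestry, left_disp, right_disp, t):
    """Warp tapestry pixels toward a viewpoint t (0 = leftmost view,
    1 = rightmost view); unfilled positions remain zero."""
    h, w = tapestry.shape[:2]
    # Interpolate between the two shift maps for this viewpoint.
    disp = (1.0 - t) * (-left_disp) + t * right_disp
    out = np.zeros_like(tapestry)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + np.round(disp[y]).astype(int), 0, w - 1)
        out[y, tx] = tapestry[y]   # forward-warp each row
    return out
```

With zero displacement the synthesized view reproduces the tapestry itself; nonzero maps slide content left or right per pixel.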
Abstract:
A display management processor receives an input image with enhanced dynamic range to be displayed on a target display which has a different dynamic range than a reference display. The input image is first transformed into a perceptually-quantized (PQ) color space. A non-linear mapping function generates a tone-mapped intensity image in response to the characteristics of the source and target displays and a measure of the intensity of the PQ image. After a detail-preservation step, which may generate a filtered tone-mapped intensity image, an image-adaptive intensity and saturation adjustment step generates an intensity adjustment factor and a saturation adjustment factor as functions of the measure of intensity and saturation of the PQ image, which, together with the filtered tone-mapped intensity image, are used to generate the output image. Examples of the functions to compute the intensity and saturation adjustment factors are provided.
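The PQ transform the pipeline above starts from is standardized as the SMPTE ST 2084 inverse EOTF, which maps absolute luminance to a perceptually uniform code value. A direct implementation of that published formula:

```python
def pq_encode(L):
    """SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance L in cd/m^2
    (0..10000) -> nonlinear code value in [0, 1]."""
    m1 = 2610 / 16384
    m2 = 2523 / 4096 * 128
    c1 = 3424 / 4096
    c2 = 2413 / 4096 * 32
    c3 = 2392 / 4096 * 32
    Y = L / 10000.0          # normalize to peak luminance
    Yp = Y ** m1
    return ((c1 + c2 * Yp) / (1 + c3 * Yp)) ** m2
```

Tone mapping in the abstract then operates on intensities expressed in this PQ domain, where equal code-value steps are approximately equally visible across the luminance range.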
Abstract:
A method for merging graphics and high dynamic range video data is disclosed. In a video receiver, a display management process uses metadata to map input video data from a first dynamic range into the dynamic range of available graphics data. The remapped video signal is blended with the graphics data to generate a video composite signal. An inverse display management process uses the metadata to map the video composite signal to an output video signal with the first dynamic range. To alleviate perceptual tone-mapping jumps during video scene changes, a metadata transformer transforms the metadata so that on a television (TV) receiver metadata values transition smoothly between consecutive scenes. The TV receiver receives the output video signal and the transformed metadata to generate video data mapped to the dynamic range of the TV's display.
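The map/blend/inverse-map chain can be sketched as three composed steps. The helpers `dm` and `inv_dm` here are stand-ins for the metadata-driven display management and its inverse (assumptions for illustration), and the blend is a plain alpha composite.

```python
def compose_with_graphics(video, graphics, alpha, dm, inv_dm):
    """Merge graphics into HDR video in the graphics' dynamic range,
    then return the composite to the video's original range."""
    remapped = dm(video)                           # display management
    composite = alpha * graphics + (1 - alpha) * remapped
    return inv_dm(composite)                       # inverse display management
```

With `alpha = 0` (no graphics) and an invertible `dm`, the chain returns the video unchanged, which is the property that lets the downstream TV apply its own mapping.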
Abstract:
One or more derived versions of image content may be obtained by interpolating two or more source versions of the same image content. A derived version may be targeted for a class of displays that differs from classes of displays targeted by the source versions. Source images in a source version may have been color graded in a creative process by a content creator/colorist. Interpolation of the source versions may be performed with interpolation parameters having two or more different values in two or more different clusters in at least one of the source images. A normalized version may be used to allow efficient distribution of multiple versions of the same content to a variety of downstream media processing devices, and to preserve or restore image details otherwise lost in one or more of the source versions.
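The interpolation of two source versions can be sketched as a per-pixel convex combination whose weight map varies across clusters, as the abstract allows. The function name and the weight-map representation are assumptions for illustration.

```python
import numpy as np

def interpolate_versions(src_a, src_b, weights):
    """Derive a new grade between two source grades; 'weights' is a
    per-pixel (or per-cluster) map in [0, 1], so different image
    regions can use different interpolation parameters."""
    return (1.0 - weights) * src_a + weights * src_b
```

A weight map that is constant per cluster reproduces the two-cluster case described above; a constant scalar weight recovers ordinary global interpolation between the grades.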
Abstract:
An encoder receives an input enhanced dynamic range (EDR) image to be stored or transmitted using multiple coding formats in a layered representation. A layer decomposer generates a lower dynamic range (LDR) image from the EDR image. One or more base layer (BL) encoders encode the LDR image to generate a main coded BL stream and one or more secondary coded BL streams, where each secondary BL stream is coded in a different coding format than the main coded BL stream. A single enhancement layer (EL) coded stream and related metadata are generated using the main coded BL stream, the LDR image, and the input EDR image. An output coded stream includes the coded EL stream, the metadata, and either the main coded BL stream or one of the secondary coded BL streams. Computation-scalable decoding and display management processes for EDR images are also described.
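The base/enhancement split above can be sketched as residual coding: the enhancement layer carries the difference between the EDR source and a prediction from the base layer. The `predict` callable stands in for the inter-layer predictor (an assumption; actual systems predict from the decoded base layer and add quantization).

```python
def encode_layers(edr, ldr, predict):
    """BL = the LDR image; EL = residual between the EDR source and a
    prediction computed from the base layer."""
    residual = [e - predict(l) for e, l in zip(edr, ldr)]
    return ldr, residual          # (base layer, enhancement layer)

def decode_layers(bl, el, predict):
    """Reconstruct the EDR image from both layers."""
    return [predict(l) + r for l, r in zip(bl, el)]
```

Because the residual is computed against the same prediction the decoder forms, reconstruction is exact in this lossless sketch regardless of which coded BL variant is chosen.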
Abstract:
Methods and systems for controlling judder are disclosed. Judder can be introduced locally within a picture to restore the judder feeling that is normally expected in films. Judder metadata can be generated based on the input frames. The judder metadata includes the base frame rate, judder control rate, and display parameters, and can be used to control judder for different applications.
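One simple way to realize a judder control rate, offered here only as an illustrative sketch (the patent's actual mechanism may differ), is to blend a judder-free interpolated frame with a base-frame-rate repeat of the previous frame:

```python
def judder_blend(frame_smooth, frame_judder, control_rate):
    """Reintroduce a controllable amount of film-like judder.
    control_rate in [0, 1]: 0 -> fully smooth (interpolated) motion,
    1 -> full base-frame-rate judder (repeated frame)."""
    return (1.0 - control_rate) * frame_smooth + control_rate * frame_judder
```

Applying different `control_rate` values to different picture regions gives the local judder control the abstract describes.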
Abstract:
A processor for video coding receives a full-frame-rate (FFR) HDR video signal and a corresponding FFR SDR video signal. An encoder generates a scalable bitstream that allows decoders to generate half-frame-rate (HFR) SDR, FFR SDR, HFR HDR, or FFR HDR signals. Given odd and even frames of the input FFR SDR signal, the scalable bitstream combines a base layer of coded even SDR frames with an enhancement layer of coded packed frames, where each packed frame includes a downscaled odd SDR frame, a downscaled even HDR residual frame, and a downscaled odd HDR residual frame. In an alternative implementation, the scalable bitstream combines four signal layers: a base layer of even SDR frames, an enhancement layer of odd SDR frames, a base layer of even HDR residual frames, and an enhancement layer of odd HDR residual frames. Corresponding decoder architectures are also presented.
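The packed enhancement-layer frame can be sketched as three downscaled planes assembled into one frame. The side-by-side layout and the naive 2:1 decimation used here are assumptions for illustration; a real encoder would use a proper downscaling filter and whatever packing geometry the format specifies.

```python
import numpy as np

def pack_frame(odd_sdr, even_hdr_res, odd_hdr_res):
    """Assemble one enhancement-layer packed frame from a downscaled
    odd SDR frame and the two downscaled HDR residual frames."""
    half = lambda im: im[::2, ::2]   # stand-in for a real downscaler
    return np.hstack([half(odd_sdr), half(even_hdr_res), half(odd_hdr_res)])
```

An HFR SDR decoder ignores the packed frames entirely, while an FFR HDR decoder unpacks and upscales all three planes, which is what makes the single bitstream serve all four output modes.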