Abstract:
Stereoscopic video data and corresponding depth map data for stereoscopic and auto-stereoscopic displays are coded using a coded base layer and one or more coded enhancement layers. Given a 3D input picture and corresponding input depth map data, a side-by-side and a top-and-bottom picture are generated based on the input picture. Using an encoder, the side-by-side picture is coded to generate a coded base layer. Using the encoder and a texture reference processing unit (RPU), the top-and-bottom picture is encoded to generate a first enhancement layer, wherein the first enhancement layer is coded based on the base layer stream. Using the encoder and a depth-map RPU, depth data for the side-by-side picture are encoded to generate a second enhancement layer, wherein the second enhancement layer is coded based on the base layer. Alternative single-, dual-, and multi-layer depth map delivery systems are also presented.
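
For illustration only, the sketch below shows one way the frame-compatible pictures named in this abstract could be formed from a left/right stereo pair. The numpy representation, the simple decimation by dropping columns/rows, and the function names are assumptions; the actual base/enhancement-layer encoding and RPU processing are not modeled.

import numpy as np

def side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Horizontally decimate each view by 2 and pack them side by side."""
    return np.concatenate([left[:, ::2], right[:, ::2]], axis=1)

def top_and_bottom(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Vertically decimate each view by 2 and stack them top and bottom."""
    return np.concatenate([left[::2, :], right[::2, :]], axis=0)

# Example: 1080p left/right views packed into two 1080p frame-compatible pictures.
left = np.zeros((1080, 1920), dtype=np.uint8)
right = np.zeros((1080, 1920), dtype=np.uint8)
sbs = side_by_side(left, right)        # base-layer input
tab = top_and_bottom(left, right)      # first enhancement-layer input
assert sbs.shape == (1080, 1920) and tab.shape == (1080, 1920)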
Abstract:
A 3D display is characterized by a quality of viewing experience (QVE) mapping, which represents a display-specific input-output relationship between input depth values and output QVE values. Examples of QVE mappings based on a metric of “viewing blur” are presented. Given reference depth data generated for a reference display and a representation of an artist's mapping function, which represents an input-output relationship between original input depth data and QVE data generated using the QVE mapping of the reference display, a decoder may reconstruct the reference depth data and apply an inverse QVE mapping for a target display to generate output depth data optimized for the target display.
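
As a rough sketch of the mapping chain described above, the code below assumes hypothetical monotonic QVE lookup tables for a reference and a target display and uses simple linear interpolation; the table values, depth range, and function names are illustrative assumptions, not taken from the source.

import numpy as np

# Hypothetical monotonic QVE mappings (depth codeword -> QVE value).
depth_codes = np.array([0.0, 64.0, 128.0, 192.0, 255.0])
qve_reference = np.array([0.0, 0.8, 1.8, 3.2, 5.0])   # reference display
qve_target = np.array([0.0, 0.5, 1.2, 2.4, 4.0])      # target display

def qve_map(depth, codes, qve):
    """Forward QVE mapping: interpolate depth values into QVE values."""
    return np.interp(depth, codes, qve)

def inverse_qve_map(qve_values, codes, qve):
    """Inverse QVE mapping: interpolate QVE values back into depth values
    (values outside the display's QVE range are clamped to its endpoints)."""
    return np.interp(qve_values, qve, codes)

# Reference depth data is expressed as QVE values via the reference display's
# mapping, then converted to depth suited to the target display via its inverse mapping.
reference_depth = np.array([10.0, 100.0, 200.0, 250.0])
qve_values = qve_map(reference_depth, depth_codes, qve_reference)
target_depth = inverse_qve_map(qve_values, depth_codes, qve_target)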
Abstract:
Novel methods and systems for decoding and displaying enhanced dynamic range (EDR) video signals are disclosed. To accommodate legacy digital media players with constrained computational resources, compositing and display management (DM) operations are moved from a digital media player to its attached EDR display. On a video receiver, base and enhancement video layers are decoded and multiplexed together with overlay graphics into an interleaved stream. The video and graphics signals are all converted to a common format which allows metadata to be embedded in the interleaved signal as part of the least significant bits in the chroma channels. On the display, the video and the graphics are de-interleaved. After compositing and display management operations guided by the received metadata, the received graphics data are blended with the output of the DM process and the final video output is displayed on the display's panel.
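
The snippet below is a minimal sketch of how metadata might be carried in the least significant bits of the chroma channels, as the abstract describes. The 8-bit chroma plane, the byte-to-bit packing, and the function names are assumptions for illustration; the real interleaving and common-format conversion are not modeled.

import numpy as np

def embed_metadata_lsb(chroma: np.ndarray, payload: bytes) -> np.ndarray:
    """Overwrite the LSB of successive chroma samples with the payload bits."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    out = chroma.copy().reshape(-1)
    if bits.size > out.size:
        raise ValueError("payload does not fit in the chroma plane")
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(chroma.shape)

def extract_metadata_lsb(chroma: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of metadata from the chroma LSBs."""
    bits = chroma.reshape(-1)[: n_bytes * 8] & 1
    return np.packbits(bits.astype(np.uint8)).tobytes()

# Round trip over a dummy 8-bit chroma plane.
cb = np.full((540, 960), 128, dtype=np.uint8)
marked = embed_metadata_lsb(cb, b"DM metadata")
assert extract_metadata_lsb(marked, len(b"DM metadata")) == b"DM metadata"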
Abstract:
Coding syntaxes in compliance with the same or different VDR specifications may be signaled by upstream coding devices, such as VDR encoders, to downstream coding devices, such as VDR decoders, in a common vehicle in the form of RPU data units. VDR coding operations and operational parameters may be specified as sequence-level, frame-level, or partition-level syntax elements in a coding syntax. Syntax elements in a coding syntax may be coded directly in one or more current RPU data units under a current RPU ID, predicted from other partitions/segments/ranges previously sent with the same current RPU ID, or predicted from other frame-level or sequence-level syntax elements previously sent with a previous RPU ID. A downstream device may perform decoding operations on multi-layered input image data based on received coding syntaxes to construct VDR images.
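
A toy model of this signaling structure, not the actual VDR bit-stream syntax, is sketched below: syntax elements are grouped by RPU ID at sequence, frame, and partition level, and a lookup falls back to elements previously sent under an earlier RPU ID (standing in for prediction). The class, field, and element names are assumptions.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class RpuDataUnit:
    rpu_id: int
    sequence_level: Dict[str, int] = field(default_factory=dict)
    frame_level: Dict[str, int] = field(default_factory=dict)
    partition_level: Dict[int, Dict[str, int]] = field(default_factory=dict)

def resolve(units: Dict[int, RpuDataUnit], rpu_id: int,
            partition: int, name: str) -> Optional[int]:
    """Find a syntax element: current partition, then frame, then sequence level,
    falling back to units previously sent under earlier RPU IDs."""
    for uid in sorted((u for u in units if u <= rpu_id), reverse=True):
        unit = units[uid]
        for scope in (unit.partition_level.get(partition, {}),
                      unit.frame_level, unit.sequence_level):
            if name in scope:
                return scope[name]
    return None

units = {
    0: RpuDataUnit(0, sequence_level={"num_pivots": 8}),
    1: RpuDataUnit(1, partition_level={0: {"poly_order": 2}}),
}
assert resolve(units, 1, 0, "poly_order") == 2   # coded directly under RPU ID 1
assert resolve(units, 1, 0, "num_pivots") == 8   # predicted from RPU ID 0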
Abstract:
HDR images are coded and distributed. An initial HDR image is received. Processing the received HDR image creates a JPEG-2000 DCI-compliant coded baseline image and an HDR-enhancement image. The coded baseline image has one or more color components, each of which provides enhancement information that allows reconstruction of an instance of the initial HDR image using the baseline image and the HDR-enhancement image. A data packet is computed, which has a first and a second data set. The first data set relates to the baseline image color components, each of which has an application marker that relates to the HDR-enhancement image. The second data set relates to the HDR-enhancement image. The data packets are sent in a DCI-compliant bit stream.
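
Purely as a schematic, the sketch below arranges the two data sets from this abstract into a packet: baseline color components, each tagged with an application marker referencing the HDR-enhancement data, followed by the enhancement data itself. The class names, marker bytes, and serialization are invented for illustration and do not reflect the actual JPEG-2000/DCI syntax.

from dataclasses import dataclass
from typing import List

@dataclass
class BaselineComponent:
    name: str             # e.g. "X", "Y", "Z"
    codestream: bytes     # coded samples for this baseline color component
    app_marker: bytes     # application marker referencing the HDR enhancement

@dataclass
class HdrPacket:
    baseline: List[BaselineComponent]   # first data set
    enhancement: bytes                  # second data set (HDR-enhancement image)

    def serialize(self) -> bytes:
        """Concatenate marker + component data, then append the enhancement data."""
        out = bytearray()
        for comp in self.baseline:
            out += comp.app_marker + comp.codestream
        out += self.enhancement
        return bytes(out)

packet = HdrPacket(
    baseline=[BaselineComponent(c, b"\x00" * 16, b"HDRM") for c in ("X", "Y", "Z")],
    enhancement=b"\x01" * 32,
)
bitstream = packet.serialize()   # would be carried in the DCI-compliant bit stream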