Abstract:
This invention relates to an authentication method for authenticating a first party to a second party, where an operation is performed on condition that the authentication succeeds. If the first party is not authenticated but qualifies for a sub-authorization, the operation is still performed. Further, a device is provided that comprises a first memory area holding a comparison measure, which is associated with time and which is also used in the authentication procedure; a second memory area holding a limited list of other parties that have been involved in an authentication procedure with the device; and a third memory area holding compliance certificates concerning the parties on said list.
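The conditional flow of the claimed method (authenticate, else fall back to a sub-authorization) can be sketched as follows; the function and parameter names are illustrative, not taken from the patent:

```python
def may_perform_operation(authenticated: bool, qualifies_for_sub_auth: bool) -> bool:
    """Allow the operation if full authentication succeeded, or, failing
    that, if the first party qualifies for a sub-authorization."""
    if authenticated:
        return True
    return qualifies_for_sub_auth
```

The sub-authorization path is only reached when the primary authentication has failed, matching the order of the conditions in the abstract.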
Abstract:
A three dimensional [3D] video signal is processed in a video device (50). The device has generating means (52) for generating an output signal for transferring the video data via a high-speed digital interface such as HDMI to a 3D display. The generating means selectively generate a 3D display signal for displaying the 3D video data on the 3D display operative in a 3D mode, a 2D display signal for displaying 2D video data on the 3D display operative in a 2D mode, or a pseudo 2D display signal, which includes 2D video data in the output signal, for displaying the 2D video data on the 3D display operative in the 3D mode. Processing means (53) detect a request to display 2D video data while the 3D display is operative in the 3D mode and, in response to the detection, set the generating means to generate the pseudo 2D display signal, thereby maintaining the 3D mode of the 3D display.
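The three-way selection among display signals can be modeled as below. A minimal sketch under assumed conventions: frames are opaque values, the pseudo 2D signal is modeled as the same 2D frame placed in both views of a 3D output, and all names are illustrative:

```python
def generate_display_signal(frame_l, frame_r, frame_2d, display_mode, request_2d):
    """Select one of the three output signals described in the abstract."""
    if display_mode == "2D":
        # ordinary 2D display signal for a display operating in 2D mode
        return ("2D", (frame_2d,))
    if request_2d:
        # pseudo 2D: carry the 2D picture in both views of the 3D output,
        # so the display can stay in its 3D mode (no mode switch)
        return ("3D", (frame_2d, frame_2d))
    # regular stereo 3D display signal
    return ("3D", (frame_l, frame_r))
```

The pseudo 2D branch illustrates the point of the invention: the 2D content is displayed without forcing the display out of its 3D mode.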
Abstract:
To allow pragmatic insertion of secondary images by an apparatus connected to the final display, an image processing apparatus (301, 501) is provided with an output image connection (506) for connection to a display (550), and an input (510) for receiving an input image (IM) and metadata specifying at least one luminance mapping function (F_Lt). The luminance mapping function specifies the relationship between luminances in the input image and a second image with an at least 6 times higher or lower maximum luminance. The apparatus comprises a graphics generation unit (502) arranged to determine a secondary image (IMG), and an image composition unit (504) arranged to compose an output image (IMC) on the basis of the pixel colors of the input image and of the secondary image. It is characterized in that the image processing apparatus comprises a luminance function selection unit (505), which is arranged to output to a metadata output (507) a copy of the at least one luminance mapping function (F_Lt) in case no secondary image colors are mixed with the input image, and which is arranged to output a predetermined mapping function (F3) in case the output image is not identical to the input image because some pixel colors of the secondary image have been used to change the input image colors.
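The behavior of the luminance function selection unit reduces to one decision: pass the incoming F_Lt through unchanged when no graphics were mixed in, otherwise substitute the predetermined F3. A hedged sketch, with mapping functions represented as plain labels and images as comparable pixel lists (all names illustrative):

```python
def select_output_metadata(f_lt, f3, input_img, output_img):
    """Return F_Lt when the composed output equals the input image
    (no secondary-image colors were mixed in); otherwise return the
    predetermined mapping function F3."""
    return f_lt if output_img == input_img else f3
```

Substituting F3 whenever graphics altered any pixel avoids applying a mapping function that was derived for the original picture to graphics-modified pixel colors.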
Abstract:
Three dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display (30) by detecting depth values occurring in the 3D image data, and setting auxiliary depth values for the auxiliary graphical data (31) adaptively in dependence on the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First an area of attention (32) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence on the depth pattern.
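One plausible reading of "setting auxiliary depth values in dependence on the depth pattern" is to place the graphics just in front of the nearest depth found in the area of attention. The sketch below assumes a convention where a larger depth value means nearer to the viewer; the margin and all names are illustrative assumptions, not taken from the patent:

```python
def auxiliary_depth(depth_map, attention_region, margin=5):
    """Return a depth for the auxiliary graphics that lies just in front
    of the nearest (maximum-valued) depth inside the area of attention.

    depth_map: 2D list of depth values; attention_region: (y, x) pairs.
    """
    nearest = max(depth_map[y][x] for (y, x) in attention_region)
    return nearest + margin
```

Restricting the search to the area of attention, rather than the whole frame, keeps an extreme depth elsewhere in the image from pushing the graphics unnecessarily far forward.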
Abstract:
A hybrid transmission/auto-conversion 3D format and scheme for transmission of 3D data towards various types of 3D displays is described. In the decoder (20) a stereo-to-depth convertor (24) generates a depth map. In the 3D video signal, additional depth information called depth helper data (DH-bitstr) is sparsely transmitted in time (partial depths in time) and/or spatially (partial depths within the frames). A depth switcher (25) selects the partial depths based on an explicit or implicit mechanism indicating when these are to be used and when the depths must be automatically generated locally. Advantageously, disturbing depth errors due to said stereo-to-depth convertor are reduced by the depth helper data.
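The depth switcher's per-pixel choice can be sketched as follows, with the implicit mechanism modeled simply as the presence or absence of a helper value (None marks "not transmitted"); this is an illustrative model, not the patent's signaling scheme:

```python
def switch_depth(auto_depth, helper_depth):
    """Per pixel, prefer the sparsely transmitted depth-helper value where
    one is present; fall back to the locally generated (stereo-to-depth)
    value elsewhere."""
    return [h if h is not None else a for a, h in zip(auto_depth, helper_depth)]
```

Because the helper data is sparse, most pixels still come from the local convertor; the helper values correct only the regions (or moments) where the automatic conversion would produce disturbing errors.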
Abstract:
A 3D video system for transmission of 3D data towards various types of 3D displays is described. A 3D source device (40) provides a three dimensional [3D] video signal (41) to a 3D destination device (50). The 3D destination device receives the 3D video signal, and has a destination depth processor (52) for providing a destination depth map for enabling warping of views for the 3D display. The 3D source device generates depth signaling data, which represents depth processing conditions for adapting, to the 3D display, the destination depth map or the warping of views. The 3D video signal contains the depth signaling data. The destination depth processor adapts, to the 3D display, the destination depth map or the warping of views in dependence on the depth signaling data. The depth signaling data enables the rendering process to get better results out of the depth data for the actual 3D display.
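The abstract leaves the form of the depth signaling data open; one simple illustrative model is a gain and offset that the destination applies to its depth map before view warping. Everything below (the gain/offset model, the 0–255 depth range, the names) is an assumption for the sketch:

```python
def adapt_depth_map(depth_map, signaling):
    """Adapt a destination depth map to the 3D display using
    source-provided depth signaling, modeled here as a linear
    gain/offset followed by clipping to an assumed 0-255 range."""
    g, o = signaling["gain"], signaling["offset"]
    return [min(max(g * d + o, 0), 255) for d in depth_map]
```

The point the abstract makes is the division of labor: the source knows the depth processing conditions, the destination knows the actual display, and the signaling lets the destination's rendering get better results from the same depth data.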
Abstract:
Because more different HDR video coding methods are currently appearing than are probably necessary, it is expected that practically communicated HDR videos may in several future scenarios consist of a complicated mix of differently encoded HDR video segments, which may be difficult to decode without the presently presented video decoder (341). The decoder is arranged to decode a high dynamic range video consisting of temporally successive images, in which the video is composed of successive time segments (S1, S2), each consisting of a number of temporally successive images (I1, I2) having pixel colors whose lumas correspond to pixel luminances according to different electro-optical transfer functions (EOTF) in different time segments. The images in some of the segments are defined according to dynamically changeable electro-optical transfer functions, which are transmitted as a separate function for each temporally successive image, whereas the images in other segments have lumas defined by a fixed electro-optical transfer function, the information of which is co-communicated in data packages (DRAM) transmitted less frequently than the image repetition rate. At least one of said data packages (DRAM), characterizing the electro-optical transfer function of the image pixel lumas after a moment of change (t1) between a first and a second segment, is transmitted prior to the moment of change (t1). Similarly, a corresponding encoder composes the segmented video stream, assuring that at least one correct package (DRAM) describing the EOTF according to which the lumas of a later video segment are coded is received by receivers before the change to a segment using a different HDR encoding method.
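The decoder-side handling of EOTF metadata described above can be sketched as a small state machine: a package received before the change moment t1 is buffered as pending and activated at the segment boundary, while per-image dynamic EOTFs override only their own frame. The class and method names are illustrative, and EOTFs are modeled as plain callables:

```python
class SegmentedEotfDecoder:
    """Decode lumas of a segmented HDR stream, where each segment's EOTF
    is announced ahead of the segment change (a sketch of the scheme in
    the abstract, not the patent's actual bitstream syntax)."""

    def __init__(self, default_eotf):
        self.current = default_eotf   # EOTF in force for the current segment
        self.pending = None           # EOTF announced for after the change

    def on_package(self, eotf):
        # A DRAM-style package arrives before the change moment t1:
        # buffer it; it does not affect images of the current segment.
        self.pending = eotf

    def on_segment_change(self):
        # At t1, the pre-announced EOTF takes effect.
        if self.pending is not None:
            self.current = self.pending
            self.pending = None

    def decode(self, luma, per_image_eotf=None):
        # A dynamically changeable EOTF transmitted with the image itself
        # overrides the segment's fixed EOTF for that image only.
        eotf = per_image_eotf if per_image_eotf is not None else self.current
        return eotf(luma)
```

Buffering the package ahead of t1 is exactly why the encoder must send it early: the receiver then never decodes a frame of the new segment with the old segment's transfer function.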