Abstract:
A three-dimensional source device provides a three-dimensional display signal for a display via a high-speed digital interface, such as HDMI. The three-dimensional display signal comprises a sequence of frames. The sequence of frames comprises units, each unit corresponding to frames comprising video information intended to be composited and displayed as a three-dimensional image. The three-dimensional source device includes three-dimensional transfer information comprising at least information about the video frames in the unit. The display detects the three-dimensional transfer information and generates the display control signals in dependence on the three-dimensional transfer information. The three-dimensional transfer information in an additional info frame packet comprises information about the multiplexing scheme for multiplexing frames into the three-dimensional display signal, the multiplexing scheme being selected from a group of multiplexing schemes including frame-alternating multiplexing, and the three-dimensional transfer information indicating the number of frames sequentially arranged within the video data period.
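As a minimal sketch of the idea (the field names and packet layout here are assumptions for illustration, not the actual HDMI info frame format), the transfer information can be modeled as a small record announcing the multiplexing scheme and the number of frames per unit:

```python
from dataclasses import dataclass
from enum import Enum

class MuxScheme(Enum):
    FRAME_ALTERNATING = 0   # frames of one 3D image follow each other in time
    SIDE_BY_SIDE = 1
    TOP_BOTTOM = 2

@dataclass
class TransferInfoFrame:
    # Hypothetical payload of the additional info frame packet
    mux_scheme: MuxScheme
    frames_per_unit: int    # frames sequentially arranged within the video data period

def compose_unit(frames, info):
    # The sink reads the info frame to learn how many of the following frames
    # form one 3D image and how they are multiplexed.
    assert len(frames) == info.frames_per_unit
    return [info] + list(frames)
```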
Abstract:
A recorder creates an encoded data stream comprising an encoded video stream and an encoded graphics stream, the video stream comprising an encoded 3D (three-dimensional) video object, and the graphics stream comprising at least a first encoded segment and a second encoded segment, the first segment comprising 2D (two-dimensional) graphics data and the second segment comprising a depth map for the 2D graphics data. A graphics decoder decodes the first and second encoded segments to form respective first and second decoded sequences, which are output separately to a 3D display unit. The 3D display unit combines the first and second decoded sequences and renders the combination as a 3D graphics image overlaying a 3D video image simultaneously rendered from a 3D video object decoded from the encoded 3D video object.
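A rough sketch of the decoding side, assuming hypothetical segment tags and stand-in decoders (the real segment syntax is defined by the encoded stream format), could separate the two segment types like this:

```python
from dataclasses import dataclass

@dataclass
class GraphicsSegment:
    kind: str       # "graphics2d" or "depthmap" (hypothetical tags)
    payload: bytes

def decode_2d(payload):
    return payload      # stand-in for a real 2D graphics decoder

def decode_depth(payload):
    return payload      # stand-in for a real depth-map decoder

def decode_graphics_stream(segments):
    # Decode the two segment types into two separate sequences, which are
    # output separately so the 3D display unit can combine them.
    graphics2d, depth = [], []
    for seg in segments:
        if seg.kind == "graphics2d":
            graphics2d.append(decode_2d(seg.payload))
        elif seg.kind == "depthmap":
            depth.append(decode_depth(seg.payload))
    return graphics2d, depth
```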
Abstract:
A device and method process graphics to be overlaid over video for three-dimensional display. The video comprises a series of video frames updated at a video rate, including main video frames and additional video frames. A first buffer buffers a first part of the overlay information to be overlaid over the main video frames. A second buffer buffers a second part of the overlay information to be overlaid over the additional video frames. For each video frame, a frame-accurate area copier copies and outputs either the first part or the second part of the overlay information, according to whether the current video frame is a main video frame or an additional video frame. The first part and the second part of the overlay information are updated at an overlay rate, which is different from the video rate.
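A minimal sketch of the double-buffering scheme, assuming simple placeholder classes (not the claimed implementation): the two overlay buffers are updated at their own rate, while the per-frame copy only selects between them.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: object
    is_main: bool   # True for a main video frame, False for an additional frame

class OverlayBuffer:
    # Holds one part of the overlay; update() may run at the overlay rate,
    # independently of the video rate.
    def __init__(self):
        self._plane = None
    def update(self, plane):
        self._plane = plane
    def current(self):
        return self._plane

def output_frames(video_frames, buf_main, buf_additional):
    # Frame-accurate copy: pick the overlay part matching the frame type.
    for frame in video_frames:
        buf = buf_main if frame.is_main else buf_additional
        yield (frame.pixels, buf.current())
```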
Abstract:
A method of decoding and outputting video information suitable for three-dimensional [3D] display, the video information comprising encoded main video information suitable for display on a 2D display and encoded additional video information for enabling three-dimensional [3D] display, the method comprising: receiving or generating three-dimensional [3D] overlay information to be overlaid over the video information; buffering a first part of the overlay information, to be overlaid over the main video information, in a first buffer; buffering a second part of the overlay information, to be overlaid over the additional video information, in a second buffer; decoding the main video information and the additional video information and generating a series of time-interleaved video frames, each outputted video frame being either a main video frame or an additional video frame; determining the type of a video frame to be outputted as either a main video frame or an additional video frame; overlaying either the first or the second part of the overlay information on a video frame to be outputted in agreement with the determined frame type; and outputting the video frames and the overlaid information.
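The time-interleaving and per-frame-type overlay step can be sketched as follows (an illustrative outline only; the function and parameter names are assumptions):

```python
def interleave_and_overlay(main_frames, additional_frames,
                           overlay_first, overlay_second):
    # Output a time-interleaved series of frames; the overlay part is chosen
    # in agreement with the determined frame type.
    for main, extra in zip(main_frames, additional_frames):
        yield ("main", main, overlay_first)          # first part over a main frame
        yield ("additional", extra, overlay_second)  # second part over an additional frame
```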
Abstract:
The invention relates to a signal comprising video information and associated playback information, the video information and associated playback information being organized according to a playback format, the video information comprising a primary video stream for two-dimensional (2D) display and an additional information stream for enabling three-dimensional (3D) display, wherein the associated playback information comprises display information indicating the types of display possible. The invention also relates to a method and device for playback of such a signal, the method comprising: receiving the video information and the associated playback information; processing the display information to determine that both two-dimensional (2D) and three-dimensional (3D) display are possible for the received video information; determining a playback setting of a playback device indicating whether the video information should be displayed in two dimensions (2D) or three dimensions (3D); and processing for display either the primary video stream alone, or the primary video stream and the additional information stream, in accordance with the playback setting of the playback device.
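The stream-selection decision reduces to a small lookup, sketched here with assumed labels for the display types and the device setting (not the signalled syntax of the playback format):

```python
def select_streams(display_info, playback_setting):
    # display_info: display types signalled in the playback information,
    # e.g. {"2D", "3D"}; playback_setting: "2D" or "3D" (names are assumptions).
    if "3D" in display_info and playback_setting == "3D":
        return ["primary_video", "additional_info"]  # both streams for 3D
    return ["primary_video"]                         # primary stream only for 2D
```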
Abstract:
Methods and apparatus related to an LED-based lighting unit (10; 110; 210; 310; 410) having a radar for presence detection. A radar circuit (140; 240; 340A; 340B; 440) may be electrically coupled to conductive wiring (25; 125; 225; 325; 425) of the LED-based lighting unit that at least selectively powers the radar circuit and at least selectively powers the LEDs. In some implementations, an antenna coupled to the radar circuit may be formed from the conductive wiring and optionally at least partially isolated from any current flowing through the LEDs.
Abstract:
A 3D video system transfers video data from a video source device (40) to a destination device (50). The destination device has a destination depth processor (52) for providing destination depth data. The source device provides depth filtering data including filter location data, the depth filtering data representing a processing condition for processing the destination depth data in a filter area of the video indicated by the filter location data. The destination depth processor (52) is arranged for processing, in dependence on the depth filtering data, the destination depth data in an area of the video indicated by the filter location data. The depth filtering data enables the rendering process to improve the quality of the depth data.
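A minimal sketch of the area-restricted depth processing, assuming the filter location data reduces to a rectangle and the processing condition to a callable (both assumptions for illustration):

```python
def apply_depth_filter(depth, filter_rect, condition):
    # depth: 2D list of destination depth values; filter_rect: (x, y, w, h)
    # taken from the filter location data; condition: a callable standing in
    # for the processing condition carried by the depth filtering data.
    x, y, w, h = filter_rect
    for row in range(y, y + h):
        for col in range(x, x + w):
            depth[row][col] = condition(depth[row][col])
    return depth

# e.g. clamp depth in the indicated area:
# apply_depth_filter(depth_map, (16, 16, 64, 48), lambda v: min(v, 200))
```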
Abstract:
To allow better quality rendering of video on any display, a method is proposed of encoding, in addition to video data (VID), additional data (DD) comprising at least one change time instant (TMA_1) indicating a change in time of a characteristic luminance (CHRLUM) of the video data, which characteristic luminance summarizes the set of luminances of pixels in an image of the video data, the method comprising: generating on the basis of the video data (VID) descriptive data (DED) regarding the characteristic luminance variation of the video, the descriptive data comprising at least one change time instant (TMA_1), and encoding and outputting the descriptive data (DED) as additional data (DD).
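As an illustrative sketch (the mean as the characteristic luminance and the change threshold are assumptions, not the claimed definition), the change time instants can be extracted like this:

```python
def change_instants(frames, threshold=0.1):
    # frames: per-image lists of pixel luminances; the mean and the threshold
    # are illustrative choices for the characteristic luminance and its change.
    instants = []
    prev = None
    for t, frame in enumerate(frames):
        char_lum = sum(frame) / len(frame)  # summarize the pixel luminances
        if prev is not None and abs(char_lum - prev) > threshold:
            instants.append(t)              # a change time instant (TMA_1, ...)
        prev = char_lum
    return instants
```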
Abstract:
A method is proposed of encoding a change time. The change time indicates a change in time of a characteristic luminance of the video data. The characteristic luminance summarizes the set of luminances of pixels in the video data. The encoding of the time variation of the characteristic luminance provides more compact and accurate storage and transmission of content.
Abstract:
Three-dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display (30) by detecting depth values occurring in the 3D image data and setting auxiliary depth values for the auxiliary graphical data (31) adaptively in dependence on the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First, an area of attention (32) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence on the depth pattern.
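A rough sketch of one way to derive the auxiliary depth from the area of attention (reducing the depth pattern to its nearest value, with an illustrative margin; both choices are assumptions, not the claimed method):

```python
def auxiliary_depth(depth_map, attention_rect, margin=1):
    # depth_map: 2D list of detected depth values; attention_rect: (x, y, w, h)
    # for the detected area of attention; smaller values are assumed closer to
    # the viewer, and the margin is an illustrative offset.
    x, y, w, h = attention_rect
    region = [depth_map[r][c] for r in range(y, y + h) for c in range(x, x + w)]
    nearest = min(region)       # the depth pattern reduced to its nearest value
    return nearest - margin     # place the auxiliary graphics just in front
```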