Abstract:
A display device has a light blocking arrangement for selectively blocking light that is, or would otherwise be, emitted at large lateral angles. The display can be configured so that light reaching these elements is either allowed to reach the viewer or is blocked from reaching the viewer. In this way, either a public viewing mode or a private viewing mode can be selected. The light blocking elements are controlled optically in order to simplify the construction and control.
Abstract:
A stacked display has the different color layers (20r), (20g), (20b) ordered with respect to the wavelength dependency of the lens focus, so that each color is better focused on the display layer that modulates it. The optical system (30), (32) can be designed to have a wavelength-dependent focus that matches the position of each of the light modulating layers.
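As an illustration of the ordering principle, the sketch below uses an assumed Cauchy dispersion model and assumed lens values (none taken from the abstract) to show that shorter wavelengths see a higher refractive index and hence a shorter focal length, which would place the blue-modulating layer nearest the lens and the red-modulating layer farthest:

```python
# Minimal sketch: Cauchy dispersion n(lambda) = A + B/lambda^2 plus the
# thin-lens relation 1/f = (n - 1) * K (K fixed by geometry). All numeric
# values are illustrative assumptions.
A, B = 1.5046, 0.00420  # Cauchy coefficients for a typical crown glass; B in um^2

def refractive_index(wavelength_um: float) -> float:
    return A + B / wavelength_um**2

def focal_length_mm(wavelength_um: float, f_design_mm: float = 50.0,
                    n_design: float = 1.52) -> float:
    # f(lambda) = f_design * (n_design - 1) / (n(lambda) - 1)
    return f_design * (n_design - 1) / (refractive_index(wavelength_um) - 1)

for name, wl in [("blue", 0.45), ("green", 0.55), ("red", 0.65)]:
    print(f"{name}: n = {refractive_index(wl):.4f}, f = {focal_length_mm(wl):.2f} mm")
# Blue has the largest n, hence the shortest focal length, so the layer
# ordering blue -> green -> red with increasing distance matches the focus.
```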
Abstract:
Autostereoscopic display device (1) comprising rows and columns of colour sub-pixels and a lenticular array (9) in registration with the display, the lenses of which are slanted with respect to the general column pixel direction in order to enable square, or near-square, 3D pixels.
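For illustration, the sketch below works through the classic slanted-lenticular geometry for an RGB-stripe panel; the arctan(1/6) slant, the 9-view count and the 3x3 resolution split are assumed example values from the well-known 9-view design, not taken from the abstract:

```python
# Worked geometry example (assumed values): a 9-view slanted lenticular
# over an RGB-stripe panel trades resolution 3x horizontally and 3x
# vertically, so the 3D pixel comes out square.
import math

p = 0.1                      # sub-pixel width in mm (assumed)
pixel_h = 3 * p              # pixel height for vertical RGB stripes
slant = math.atan(1 / 6)     # lens slant relative to the column direction
views = 9

# The 9x resolution loss splits as 3x horizontal and 3x vertical, so one
# 3D pixel spans 3 full pixels in each direction:
pixel3d_w = 3 * (3 * p)      # 3 pixel widths  = 9p
pixel3d_h = 3 * pixel_h      # 3 pixel heights = 9p
print(f"slant = {math.degrees(slant):.2f} degrees from vertical")
print(f"3D pixel: {pixel3d_w:.1f} x {pixel3d_h:.1f} mm (square)")
```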
Abstract:
A display device has a first, see-through mode of operation (30), (34) in which the display panel does not emit light and the display device blocks light of a first polarization (state A) but allows light of a second polarization (state B) to pass through in both directions. In a second, 3D display mode, the emissive pixels output light of the first polarization (state A) from the display output face and a view forming arrangement forms multiple views (36) in one output direction.
Abstract:
A 3D video system transfers video data from a video source device (40) to a destination device (50). The destination device has a destination depth processor (52) for providing destination depth data. The source device provides depth filtering data including filter location data, the depth filtering data representing a processing condition for processing the destination depth data in a filter area of the video indicated by the filter location data. The destination depth processor (52) is arranged for processing, in dependence on the depth filtering data, the destination depth data in an area of the video indicated by the filter location data. The depth filtering data enables the rendering process to improve the quality of the depth data.
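A minimal sketch of the destination-side processing, assuming a rectangular filter area as the filter location data and a median filter as the processing condition; the names FilterSpec and apply_depth_filtering are illustrative, not from the source:

```python
# Apply a processing condition to the destination depth data only inside
# the area indicated by the filter location data (assumed rectangular).
import numpy as np
from dataclasses import dataclass
from scipy.ndimage import median_filter

@dataclass
class FilterSpec:
    x0: int
    y0: int
    x1: int
    y1: int          # filter location data: rectangle in pixel coordinates
    kernel: int = 5  # processing condition: median filter size (assumed)

def apply_depth_filtering(depth: np.ndarray, spec: FilterSpec) -> np.ndarray:
    out = depth.copy()
    region = depth[spec.y0:spec.y1, spec.x0:spec.x1]
    out[spec.y0:spec.y1, spec.x0:spec.x1] = median_filter(region, size=spec.kernel)
    return out

depth = np.random.rand(480, 640).astype(np.float32)  # destination depth data
filtered = apply_depth_filtering(depth, FilterSpec(100, 80, 300, 240))
```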
Abstract:
An apparatus comprises a receiver (301) receiving an image signal representing a scene. The image signal includes image data comprising a number of images, where each image comprises pixels that represent an image property of the scene along a ray having a ray direction from a ray origin. The ray origins are at different positions for at least some of the pixels. The image signal further comprises a plurality of parameters describing a variation of the ray origins and/or the ray directions for the pixels as a function of pixel image position. A renderer (303) renders images from the number of images based on the plurality of parameters.
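The sketch below illustrates how such parameters might be evaluated into per-pixel rays; the linear variation over the image plane is one assumed parameterization for illustration, not the signal format defined by the source:

```python
# Evaluate per-pixel ray origins and directions from transmitted
# parameters (assumed linear model over normalized image coordinates).
import numpy as np

def pixel_rays(width: int, height: int, params: dict):
    """Return (origins, directions), each of shape (H, W, 3)."""
    u, v = np.meshgrid(np.linspace(0, 1, width), np.linspace(0, 1, height))
    # Ray origin varies across the image plane, e.g. a viewpoint that
    # shifts per column as in some multi-perspective captures.
    origins = (params["o0"][None, None, :]
               + u[..., None] * params["du"][None, None, :]
               + v[..., None] * params["dv"][None, None, :])
    directions = (params["d0"][None, None, :]
                  + u[..., None] * params["dd"][None, None, :])
    directions /= np.linalg.norm(directions, axis=-1, keepdims=True)
    return origins, directions

params = {"o0": np.array([0.0, 0.0, 0.0]), "du": np.array([0.2, 0.0, 0.0]),
          "dv": np.array([0.0, 0.0, 0.0]), "d0": np.array([0.0, 0.0, 1.0]),
          "dd": np.array([0.1, 0.0, 0.0])}
origins, dirs = pixel_rays(640, 480, params)  # renderer input per pixel
```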
Abstract:
A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capture, the method comprising a step of initial calibration before the scene capture. This step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capture the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to its position within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation, if needed, in order to obtain improved scene capture after the further calibration.
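A hedged sketch of the further-calibration step, assuming OpenCV template matching to locate the reference object in the live frame and an assumed pixels-per-degree factor to turn the pixel drift into a pan/tilt correction (two of the six degrees of freedom):

```python
# Compare the reference object's position in a captured frame with its
# position in the reference frame; the matching method and the conversion
# factor are assumptions for illustration.
import cv2
import numpy as np

def calibration_offset(reference_patch: np.ndarray,
                       live_frame: np.ndarray,
                       expected_xy: tuple[int, int]) -> tuple[int, int]:
    """Pixel drift of the stationary reference object since calibration."""
    res = cv2.matchTemplate(live_frame, reference_patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)          # best-match location
    return x - expected_xy[0], y - expected_xy[1]

def pan_tilt_correction(dx_px: float, dy_px: float,
                        px_per_degree: float = 35.0) -> tuple[float, float]:
    # Adapt pan and tilt if drift was found (assumed camera property).
    return -dx_px / px_per_degree, -dy_px / px_per_degree
```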
Abstract:
A method for transitioning from a first set of video tracks, VT1, to a second set of video tracks, VT2, when rendering a multi-track video, wherein each video track has a corresponding rendering priority. The method comprises receiving an instruction to transition from the first set of video tracks VT1 to the second set of video tracks VT2, obtaining the video tracks VT2 and, if the video tracks VT2 differ from the video tracks VT1, applying a lowering function to the rendering priority of one or more of the video tracks in the first set VT1 and/or an increase function to the rendering priority of one or more video tracks in the second set VT2. The lowering function and the increase function respectively decrease and increase the rendering priority over time. The rendering priority is used in determining the weighting of a video track and/or elements of a video track used to render the multi-track video.
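As a sketch of the transition mechanism, the code below ramps the VT1 priorities down and the VT2 priorities up over time; the linear ramp is one assumed shape for the lowering and increase functions, which the abstract leaves open:

```python
# Time-based priority crossfade between two track sets (assumed linear).
import time

def transition(priorities: dict[str, float],
               vt1: set[str], vt2: set[str],
               duration_s: float = 1.0, steps: int = 20) -> None:
    """Ramp VT1 priorities down and VT2 priorities up in `steps` steps."""
    for i in range(steps + 1):
        t = i / steps
        for track in vt1 - vt2:
            priorities[track] = 1.0 - t     # lowering function
        for track in vt2:
            priorities[track] = t           # increase function
        # A renderer would read `priorities` here to weight each track's
        # contribution to the rendered multi-track video.
        time.sleep(duration_s / steps)

priorities = {"trackA": 1.0, "trackB": 0.0}
transition(priorities, vt1={"trackA"}, vt2={"trackB"})
```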
Abstract:
A distribution system comprises an audio server (101) for receiving incoming audio from remote clients (103) and for transmitting audio derived from the incoming audio to the remote clients (103). An audio apparatus comprises an audio receiver (401) which receives data comprising: audio data for a plurality of audio components representing audio from a remote client of the plurality of remote clients; and proximity data for at least one of the audio components. The proximity data is indicative of proximity between remote clients. A generator (403) of the apparatus generates an audio mix from the audio components in response to the proximity data. For example, an audio component indicated to be proximal to a remote client may be excluded from the audio mix for that remote client.
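A minimal sketch of the mix-generation rule given as the example above: components flagged as proximal to the destination client are left out of that client's mix (they are already audible acoustically). The data layout and function name are assumptions for illustration:

```python
# Build a per-client mix, excluding the client's own audio and any
# component whose proximity data marks it as proximal to that client.
import numpy as np

def generate_mix(components: dict[str, np.ndarray],
                 proximal_to: dict[str, set[str]],
                 client_id: str) -> np.ndarray:
    """Sum all audio components except those proximal to client_id."""
    included = [sig for cid, sig in components.items()
                if cid != client_id
                and client_id not in proximal_to.get(cid, set())]
    if not included:
        return np.zeros(0, dtype=np.float32)
    return np.sum(included, axis=0)

components = {"client1": np.zeros(48000, np.float32),
              "client2": np.ones(48000, np.float32)}
proximal_to = {"client2": {"client3"}}           # client2 is near client3
mix_for_client3 = generate_mix(components, proximal_to, "client3")
```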
Abstract:
An apparatus for generating an image comprises a receiver (101) which receives 3D image data providing an incomplete representation of a scene. A receiver (107) receives a target view vector indicative of a target viewpoint in the scene for the image and a reference source (109) provides a reference view vector indicative of a reference viewpoint for the scene. A modifier (111) generates a rendering view vector indicative of a rendering viewpoint as a function of the target viewpoint and the reference viewpoint for the scene. An image generator (105) generates the image in response to the rendering view vector and the 3D image data.
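One plausible form of the modifier is sketched below: the rendering viewpoint follows the target while it stays near the reference, and is pulled back toward the reference as the target strays, so views stay inside the region the incomplete 3D data represents well. The specific pull function is an assumption; the abstract only requires the rendering view vector to be a function of target and reference:

```python
# Map a target viewpoint to a rendering viewpoint relative to a reference
# viewpoint (assumed soft-limiting function for illustration).
import numpy as np

def rendering_view_vector(target: np.ndarray, reference: np.ndarray,
                          max_offset: float = 0.5) -> np.ndarray:
    offset = target - reference
    dist = np.linalg.norm(offset)
    if dist <= max_offset:
        return target                     # near the reference: render as asked
    # Beyond the limit, compress the excess so the rendering viewpoint
    # never leaves the well-covered neighbourhood of the reference.
    scaled = max_offset + 0.5 * (dist - max_offset)
    return reference + offset / dist * min(scaled, 2 * max_offset)

view = rendering_view_vector(np.array([1.0, 0.0, 0.0]), np.zeros(3))
```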