Abstract:
A client and a server are provided with the same digital image of a slice of biological material to which a staining substance has been applied. The server pre-processes the digital image and provides the results of the pre-processing to the client in response to a request. The server is configured to classify each pixel of the digital image as stained or not. The client is configured to determine a region of interest in the digital image and to request from the server data related to the classification of the pixels in the region of interest. The server can then provide to the client the classification results for the pixels in the region of interest in the digital image.
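As an illustrative sketch only (the abstract does not specify the classifier), the following Python models the split of work: the server classifies every pixel once during pre-processing, and the client later queries only the region of interest. The threshold-on-intensity classifier and the class names are hypothetical stand-ins.

```python
import numpy as np

class Server:
    """Pre-processes the shared digital image once, then answers ROI queries."""
    def __init__(self, image: np.ndarray, stain_threshold: float = 0.5):
        # Hypothetical stand-in classifier: a pixel counts as "stained" when
        # its normalized staining-channel intensity exceeds a threshold.
        self.stained = image > stain_threshold  # one boolean flag per pixel

    def classify_roi(self, x0: int, y0: int, x1: int, y1: int) -> np.ndarray:
        # Return only the classification results for the requested region.
        return self.stained[y0:y1, x0:x1]

class Client:
    """Holds the same image, picks a region of interest, queries the server."""
    def __init__(self, server: Server):
        self.server = server

    def request_roi(self, x0, y0, x1, y1):
        return self.server.classify_roi(x0, y0, x1, y1)

image = np.random.rand(512, 512)       # stand-in for the shared slice image
client = Client(Server(image))
roi_mask = client.request_roi(100, 100, 200, 200)
print(roi_mask.shape, roi_mask.dtype)  # (100, 100) bool
```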
Abstract:
An apparatus is arranged to generate a triangle mesh for a three-dimensional image. The apparatus includes a depth map source (101) which provides a depth map, and a tree generator (105) which generates a k-D tree from the depth map. The k-D tree represents a hierarchical arrangement of regions of the depth map satisfying a requirement that a depth variation measure for undivided regions is below a threshold. A triangle mesh generator (107) positions an internal vertex within each region of the k-D tree. The triangle mesh is then generated by forming sides of triangles of the triangle mesh as lines between internal vertices of neighboring regions. The approach may generate an improved triangle mesh that is suitable for many 3D video processing algorithms.
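A minimal sketch of the subdivision and vertex-placement steps, assuming max-minus-min as the depth variation measure and the region center as the internal vertex (the abstract only requires some variation measure and a vertex inside each region; the final triangulation between neighboring regions is omitted):

```python
import numpy as np

def build_kd_regions(depth, x0, y0, x1, y1, axis=0, max_var=0.05):
    """Recursively split [x0:x1) x [y0:y1) until each region's depth
    variation (here: max - min) falls below max_var."""
    block = depth[y0:y1, x0:x1]
    if block.max() - block.min() < max_var or (x1 - x0 <= 1 and y1 - y0 <= 1):
        return [(x0, y0, x1, y1)]          # undivided leaf region
    if axis == 0 and x1 - x0 > 1:          # split vertically, alternate axis
        xm = (x0 + x1) // 2
        return (build_kd_regions(depth, x0, y0, xm, y1, 1, max_var) +
                build_kd_regions(depth, xm, y0, x1, y1, 1, max_var))
    elif y1 - y0 > 1:                      # split horizontally
        ym = (y0 + y1) // 2
        return (build_kd_regions(depth, x0, y0, x1, ym, 0, max_var) +
                build_kd_regions(depth, x0, ym, x1, y1, 0, max_var))
    return build_kd_regions(depth, x0, y0, x1, y1, 1 - axis, max_var)

def internal_vertices(regions):
    # One internal vertex per leaf region; using the center is an assumption.
    return [((x0 + x1) / 2.0, (y0 + y1) / 2.0) for x0, y0, x1, y1 in regions]

depth = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) / 2
regions = build_kd_regions(depth, 0, 0, 64, 64)
print(len(regions), "leaf regions ->", len(internal_vertices(regions)), "vertices")
```

A smooth depth gradient like the one above yields large regions where depth varies slowly and finer regions elsewhere, which is the behavior the hierarchical subdivision is meant to produce.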
Abstract:
There is provided a method and apparatus for modifying a contour comprising a sequence of points positioned on an image. A position of a movable indicator on the image relative to one or more points of the sequence is detected (202). The movable indicator is movable by a user. Based on a distance of the detected position of the movable indicator on the image from the one or more points, at least one point is removed from the contour, at least one point is added to the contour, or both (204).
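A minimal sketch of one distance-based decision rule, assuming two hypothetical thresholds (the abstract only states that the add/remove decision depends on the indicator's distance from the points):

```python
import math

def modify_contour(points, cursor, remove_radius=5.0, add_radius=15.0):
    """points: list of (x, y); cursor: detected position of the movable
    indicator. Both radii are illustrative assumptions."""
    dists = [math.dist(p, cursor) for p in points]
    nearest = min(range(len(points)), key=dists.__getitem__)
    if dists[nearest] < remove_radius:
        # Indicator sits on a point: remove it and add the indicator position,
        # effectively dragging the contour.
        return points[:nearest] + [cursor] + points[nearest + 1:]
    if dists[nearest] < add_radius:
        # Indicator is near but not on a point: add a new point after the
        # nearest one to refine the contour locally.
        return points[:nearest + 1] + [cursor] + points[nearest + 1:]
    return points  # indicator too far away: leave the contour unchanged

contour = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(modify_contour(contour, (10.0, 4.0)))  # drags (10, 0) to (10.0, 4.0)
```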
Abstract:
An autostereoscopic 3D display comprises a first unit (503) for generating an intermediate 3D image. The intermediate 3D image comprises a plurality of regions, and the first unit (503) is arranged to generate a first number of image blocks of pixel values corresponding to different view directions for the regions. The number of image blocks is different for some regions of the plurality of regions. A second unit (505) generates an output 3D image comprising a number of view images from the intermediate 3D image, where each of the view images corresponds to a view direction. The display further comprises a display arrangement (301) and a driver (507) for driving the display arrangement (301) to display the output 3D image. An adaptor (509) is arranged to adapt the number of image blocks for a first region in response to a property of the intermediate 3D image or of a representation of a three-dimensional scene from which the first unit (503) is arranged to generate the intermediate image.
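As a hypothetical illustration of the adaptor's role, the sketch below allocates more view-direction blocks to regions where an image property is high; using per-region depth variance as that property, and a linear mapping onto a block count, are both assumptions not stated in the abstract:

```python
import numpy as np

def allocate_view_blocks(depth, grid=4, min_views=2, max_views=9):
    """Assign more view-direction image blocks to regions where the chosen
    property (here, hypothetically, depth variance) is high."""
    h, w = depth.shape
    counts = np.empty((grid, grid), dtype=int)
    for i in range(grid):
        for j in range(grid):
            region = depth[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            # Map region depth variance linearly onto [min_views, max_views].
            t = min(region.var() / 0.1, 1.0)
            counts[i, j] = round(min_views + t * (max_views - min_views))
    return counts

depth = np.random.rand(256, 256)
print(allocate_view_blocks(depth))  # per-region block counts
```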
Abstract:
Three dimensional [3D] image data and auxiliary graphical data are combined for rendering on a 3D display (30) by detecting depth values occurring in the 3D image data, and setting auxiliary depth values for the auxiliary graphical data (31) adaptively in dependence on the detected depth values. The 3D image data and the auxiliary graphical data at the auxiliary depth value are combined based on the depth values of the 3D image data. First, an area of attention (32) in the 3D image data is detected. A depth pattern for the area of attention is determined, and the auxiliary depth values are set in dependence on the depth pattern.
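A minimal sketch of setting the auxiliary depth, assuming the "depth pattern" reduces to the minimum depth in the area of attention and that smaller depth values are closer to the viewer (both are simplifying assumptions):

```python
import numpy as np

def auxiliary_depth(depth_map, attention_box):
    """Place auxiliary graphics (e.g., subtitles) just in front of the depths
    occurring in the area of attention."""
    x0, y0, x1, y1 = attention_box
    area = depth_map[y0:y1, x0:x1]
    margin = 0.02                      # hypothetical safety margin
    return float(area.min()) - margin  # just in front of the nearest content

depth_map = np.random.rand(1080, 1920)
print(auxiliary_depth(depth_map, (800, 400, 1120, 680)))
```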
Abstract:
The invention provides a directional backlight arrangement for an autostereoscopic display in which different parts of the backlight arrangement point in different directions. Different parts of the backlight arrangement are thereby suited to directing images in different directions, while reducing the effect of optical aberrations resulting from large exit angles.
Abstract:
A display device (120) has a 3D display (160) for emitting at least two views of 3D image data to enable autostereoscopic viewing at multiple viewing positions (182, 184). A processor (140) processes the 3D image data (122) to generate the views for display on the 3D display, and a viewer detector (130) detects the position of a viewer in front of the 3D display. The processor has a viewer conflict detector (141) for detecting and resolving viewer position conflicts. The detector obtains at least a first viewer position of a first viewer via the viewer detector, and detects a viewer position conflict at the first viewer position where a first view and a second view of said at least two views do not provide the 3D effect for the first viewer. If so, the detector controls generating the views in dependence on the detected viewer position conflict. At least one of said at least two views as received by the first viewer is dynamically modified to signal or resolve the conflict.
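One hypothetical model of such a conflict is a viewer whose eyes straddle the boundary between repetitions of the display's view cone, so the left and right views are swapped. The sketch below uses a deliberately simplified periodic cone geometry; the cone width, eye separation, and the test itself are illustrative assumptions:

```python
def viewer_conflict(viewer_x, cone_width, eye_sep=0.065):
    """Detect whether a viewer's eyes fall in adjacent repetitions of the
    view cone (a pseudoscopic zone, where first/second views are swapped).
    All units are meters; the geometry is a simplified model."""
    left = (viewer_x - eye_sep / 2) % cone_width
    right = (viewer_x + eye_sep / 2) % cone_width
    # Conflict when the right eye wrapped past the cone edge before the left.
    return right < left

for x in (0.10, 0.28):
    print(x, viewer_conflict(x, cone_width=0.30))  # False, then True
```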
Abstract:
An apparatus comprises a receiver (301) receiving an image signal representing a scene. The image signal includes image data comprising a number of images, where each image comprises pixels that represent an image property of the scene along a ray having a ray direction from a ray origin. The ray origins are at different positions for at least some pixels. The image signal further comprises a plurality of parameters describing a variation of the ray origins and/or the ray directions for pixels as a function of pixel image positions. A renderer (303) renders images from the number of images based on the plurality of parameters.
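A minimal sketch of evaluating per-pixel ray origins and directions from signalled parameters; the linear-in-pixel-coordinates model and the specific coefficient layout are assumptions, since the abstract only says the parameters describe origins and directions as a function of pixel position:

```python
import numpy as np

def pixel_rays(width, height, origin_coeffs, dir_coeffs):
    """Evaluate a ray origin and direction per pixel from parameters that are
    linear in the pixel coordinates [1, x, y] (a hypothetical model)."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    pos = np.stack([np.ones_like(xs), xs, ys], axis=-1).astype(float)
    origins = pos @ origin_coeffs      # (h, w, 3): per-pixel ray origin
    dirs = pos @ dir_coeffs            # (h, w, 3): per-pixel ray direction
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return origins, dirs

origin_coeffs = np.array([[0.0, 0.0, 0.0],    # constant term
                          [1e-3, 0.0, 0.0],   # origin shifts with pixel x
                          [0.0, 0.0, 0.0]])
dir_coeffs = np.array([[0.0, 0.0, 1.0],       # base direction along z
                       [1e-4, 0.0, 0.0],      # direction tilts with x
                       [0.0, 1e-4, 0.0]])     # and with y
origins, dirs = pixel_rays(8, 4, origin_coeffs, dir_coeffs)
print(origins.shape, dirs.shape)  # (4, 8, 3) (4, 8, 3)
```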
Abstract:
A method for calibrating at least one of the six degrees of freedom of all or some of the cameras in a formation positioned for scene capturing, the method comprising a step of initial calibration before the scene capturing. The step comprises creating a reference video frame which comprises a reference image of a stationary reference object. During scene capturing the method further comprises a step of further calibration, wherein the position of the reference image of the stationary reference object within a captured scene video frame is compared to its position within the reference video frame, and a step of adapting the at least one of the six degrees of freedom of multiple cameras of the formation if needed, in order to obtain improved scene capturing after the further calibration.
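A minimal sketch of the further-calibration step for two of the six degrees of freedom (pan and tilt), assuming a small-angle pixel-to-degree mapping; a real system would derive the correction from the camera intrinsics, and the numbers below are illustrative:

```python
def calibration_correction(ref_pos, current_pos, deg_per_pixel=(0.01, 0.01)):
    """Compare the reference object's image position in the current frame
    with its position in the reference frame and derive a pan/tilt
    correction (hypothetical small-angle approximation)."""
    dx = current_pos[0] - ref_pos[0]
    dy = current_pos[1] - ref_pos[1]
    pan = -dx * deg_per_pixel[0]   # object drifted right -> pan left
    tilt = dy * deg_per_pixel[1]   # image y grows downward
    return pan, tilt

# Reference frame put the marker at (960, 540); it now appears at (972, 533).
print(calibration_correction((960, 540), (972, 533)))  # (-0.12, -0.07)
```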
Abstract:
An image synthesis apparatus comprises a receiver (301) for receiving, from an image source, image parts and associated depth data of images representing a scene from different view poses. A store (311) stores a depth transition metric for each image part of a set of image parts, where the depth transition metric for an image part is indicative of a direction of a depth transition in the image part. A determiner (305) determines a rendering view pose, and an image synthesizer (303) synthesizes at least one image from received image parts. A selector is arranged to select a first image part of the set of image parts in response to the depth transition metric, and a retriever (309) retrieves the first image part from the image source. The synthesis of an image part for the rendering view pose is based on the first image part.
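A minimal sketch of one possible depth transition metric and the selection it drives; the sign-of-strongest-horizontal-step metric and the matching rule are hypothetical instances of the abstract's idea:

```python
import numpy as np

def depth_transition_metric(depth_part):
    """Direction of the dominant horizontal depth transition in an image
    part: +1 if depth steps up left-to-right at the strongest edge, else -1."""
    grad = np.diff(depth_part, axis=1)
    iy, ix = np.unravel_index(np.abs(grad).argmax(), grad.shape)
    return 1 if grad[iy, ix] > 0 else -1

def select_part(parts, view_offset_x):
    """Prefer parts whose transition direction faces the rendering view pose,
    so the de-occluded side of the transition was seen by the source camera."""
    wanted = 1 if view_offset_x > 0 else -1
    candidates = [p for p in parts if depth_transition_metric(p) == wanted]
    return candidates[0] if candidates else parts[0]

a = np.hstack([np.zeros((4, 4)), np.ones((4, 4))])   # depth steps up: metric +1
b = np.hstack([np.ones((4, 4)), np.zeros((4, 4))])   # depth steps down: metric -1
print(select_part([a, b], view_offset_x=-0.1) is b)  # True
```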