Abstract:
A method and apparatus for determining an interpupillary distance (IPD) are provided. To determine an IPD of a user, three-dimensional (3D) images for candidate IPDs may be generated, and user feedback on the 3D images may be received. A final IPD may be determined based on the user feedback.
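As a rough illustration of the flow described above, the following Python sketch sweeps a set of candidate IPDs, renders a 3D image pair for each, and keeps the candidate with the best user feedback. The helpers render_stereo_pair and ask_user_rating, and the candidate range, are hypothetical assumptions; this is not the claimed implementation.

# Minimal illustrative sketch (not the patented implementation): sweep candidate
# IPDs, render a stereo pair for each, and keep the value the user rates best.
# render_stereo_pair() and ask_user_rating() are hypothetical placeholders.

def determine_ipd(candidate_ipds_mm, render_stereo_pair, ask_user_rating):
    """Return the candidate IPD with the highest user rating."""
    best_ipd, best_score = None, float("-inf")
    for ipd in candidate_ipds_mm:
        left, right = render_stereo_pair(ipd)   # 3D image pair for this candidate IPD
        score = ask_user_rating(left, right)    # user feedback, e.g. a comfort score
        if score > best_score:
            best_ipd, best_score = ipd, score
    return best_ipd

# Example: candidates spanning the typical adult range (hypothetical values).
# final_ipd = determine_ipd(range(55, 75), render_stereo_pair, ask_user_rating)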
Abstract:
A method and apparatus for matching stereo images are provided, the method including calculating a data cost of at least one image, calculating a smoothness cost of the image, and matching pixels among pixels of a plurality of images based on the data cost and the smoothness cost.
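The sketch below illustrates one common way such costs are combined, assuming an energy of the form data cost plus a weighted smoothness term, with an absolute-difference data cost and a greedy scanline smoothness penalty; the costs and optimization used by the claimed method may differ.

# Illustrative sketch only, assuming a simple per-pixel energy
# E(d) = data_cost(d) + lam * smoothness(d) for a rectified grayscale pair.
import numpy as np

def match_stereo(left, right, max_disp=16, lam=0.1):
    """Return an integer disparity map for a rectified grayscale pair."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):
        prev_d = 0
        for x in range(w):
            best_d, best_e = 0, np.inf
            for d in range(min(max_disp, x + 1)):
                data = abs(float(left[y, x]) - float(right[y, x - d]))   # data cost
                smooth = lam * abs(d - prev_d)                           # smoothness cost
                if data + smooth < best_e:
                    best_d, best_e = d, data + smooth
            disp[y, x] = best_d
            prev_d = best_d
    return disp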
Abstract:
Disclosed is a method and apparatus for calibrating parameters of a three-dimensional (3D) display apparatus, the method including acquiring a first captured image of a 3D display apparatus displaying a first pattern image, adjusting a first parameter set of the 3D display apparatus based on the first captured image, acquiring a second captured image of the 3D display apparatus displaying a second pattern image based on the adjusted first parameter set, and adjusting a second parameter set of the 3D display apparatus based on the second captured image.
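A minimal sketch of the two-stage flow is given below, assuming hypothetical helpers display.show, capture, estimate_params, and apply_params; the contents of the parameter sets and the estimation itself are not specified here and are not taken from the disclosure.

# High-level sketch of the two-stage calibration flow described above; the
# helper functions are hypothetical placeholders, not the disclosed algorithm.

def calibrate_display(display, capture, estimate_params, apply_params):
    # Stage 1: show the first pattern, capture it, and adjust the first
    # parameter set based on the first captured image.
    display.show("first_pattern")
    first_shot = capture()
    first_params = estimate_params(first_shot, stage=1)
    apply_params(display, first_params)

    # Stage 2: with the adjusted first parameters in effect, show the second
    # pattern and adjust the second parameter set from the second captured image.
    display.show("second_pattern")
    second_shot = capture()
    second_params = estimate_params(second_shot, stage=2)
    apply_params(display, second_params)
    return first_params, second_params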
Abstract:
A multi-lens based capturing apparatus and method are provided. The capturing apparatus includes a lens array including a plurality of lenses and a sensor including a plurality of sensing pixels, wherein at least a portion of the sensing pixels in the sensor may generate sensing information based on light entering through different lenses in the lens array, and light incident on each sensing pixel among the portion of the sensing pixels may correspond to a different combination of viewpoints.
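As a toy illustration of the viewpoint mixing described above, the snippet below assumes a lens pitch that is a non-integer multiple of the pixel pitch, so consecutive sensing pixels sit at different offsets under their lenses; all numbers are arbitrary assumptions, not the disclosed design.

# Toy illustration: with 5 lenses over 12 pixels, the lens pitch is 2.4 pixels,
# so each sensing pixel falls at a different sub-lens offset and therefore
# receives a different combination of viewpoints.
num_pixels = 12
num_lenses = 5
pixels_per_lens = num_pixels / num_lenses

for p in range(num_pixels):
    lens = int(p // pixels_per_lens)          # lens this pixel mainly sits under
    offset = p - lens * pixels_per_lens       # sub-lens position -> viewpoint mix
    print(f"pixel {p:2d} -> lens {lens}, offset {offset:.1f}")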
Abstract:
Disclosed is a method and apparatus for correcting an image error in a naked-eye three-dimensional (3D) display, the method including controlling a flat-panel display to display a stripe image, calculating a raster parameter of the naked-eye 3D display based on a captured image of the stripe image, and correcting a stereoscopic image displayed on the naked-eye 3D display based on the calculated raster parameter, wherein the naked-eye 3D display includes the flat-panel display and a raster.
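The following sketch shows one way a raster parameter could be estimated from a captured stripe image, assuming the parameter of interest is the stripe pitch and estimating it from the dominant spatial frequency; slant estimation and the correction step are omitted, and this is not the disclosed algorithm.

# Minimal sketch: estimate the stripe period (pitch) from a captured stripe
# image via the dominant spatial frequency of its column-mean profile.
import numpy as np

def estimate_stripe_pitch(captured):
    """Estimate the dominant horizontal stripe period, in pixels."""
    profile = captured.mean(axis=0)                  # average rows -> 1D profile
    profile = profile - profile.mean()               # remove DC component
    spectrum = np.abs(np.fft.rfft(profile))
    freq = np.argmax(spectrum[1:]) + 1               # dominant spatial frequency
    return len(profile) / freq                       # period in pixels

# Example with a synthetic stripe image of known period 8 pixels.
x = np.arange(256)
img = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
print(estimate_stripe_pitch(img))                    # ~8.0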
Abstract:
An image processing method includes generating a target color image of a second viewpoint by warping a color image of a first viewpoint to the second viewpoint using a depth image corresponding to the color image of the first viewpoint; determining a conversion relationship of temporally neighboring color images among a plurality of color images of the second viewpoint, the plurality of color images including the target color image; and restoring a first hole of the target color image based on the conversion relationship.
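A minimal sketch of the warping and hole-restoration steps is given below, assuming rectified views so that warping reduces to a horizontal shift by a depth-derived disparity, and reducing the conversion relationship to copying hole pixels from a temporally neighboring color image of the second viewpoint; the actual conversion relationship in the method may be more general.

# Illustrative sketch: forward-warp by integer disparity, mark holes, then
# restore holes from a temporally neighboring target-view frame.
import numpy as np

def warp_to_second_view(color, disparity):
    """Forward-warp a color image by integer disparity; return image and hole mask."""
    h, w = disparity.shape
    target = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            nx = x - int(disparity[y, x])
            if 0 <= nx < w:
                target[y, nx] = color[y, x]
                filled[y, nx] = True
    return target, ~filled                    # hole mask = pixels never written

def restore_holes(target, holes, prev_target):
    """Fill holes from a temporally neighboring color image of the same viewpoint."""
    out = target.copy()
    out[holes] = prev_target[holes]
    return out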
Abstract:
A method for processing a three-dimensional (3D) object may include acquiring, based on an interaction of a user with at least one 3D object displayed on a 3D display, location information and depth information of pixels corresponding to the interaction. The method may include processing the at least one 3D object based on whether the location information and the depth information satisfy a depth continuity condition.
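As an illustration, the sketch below assumes the depth continuity condition means that consecutive depth samples along the interaction differ by less than a threshold; both the threshold and the subsequent processing are assumptions.

# Illustrative sketch of a depth continuity check along the interaction path.

def satisfies_depth_continuity(depths, max_step=0.05):
    """True if no consecutive depth samples along the interaction jump more than max_step."""
    return all(abs(b - a) <= max_step for a, b in zip(depths, depths[1:]))

# Example: a stroke across one object vs. a stroke that crosses a depth edge.
print(satisfies_depth_continuity([0.50, 0.51, 0.52, 0.53]))   # True  -> process as one object
print(satisfies_depth_continuity([0.50, 0.51, 0.90, 0.91]))   # False -> depth discontinuity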
Abstract:
An apparatus and method for extracting a feature region from a point cloud are provided. The apparatus may divide the point cloud into a plurality of regions, and may extract at least one feature region from among the regions.
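A rough sketch of the region division and extraction is shown below, assuming a feature region is a grid cell whose points deviate strongly from a plane (large smallest eigenvalue of the cell covariance); the cell size, criterion, and threshold are assumptions rather than the claimed method.

# Illustrative sketch: divide the cloud into grid cells, keep non-planar cells.
import numpy as np

def extract_feature_regions(points, cell=0.5, thresh=0.01):
    """Split a point cloud into grid cells and return cells with high non-planarity."""
    points = np.asarray(points, dtype=float)
    cells = {}
    for p in points:
        key = tuple(np.floor(p / cell).astype(int))    # divide the cloud into regions
        cells.setdefault(key, []).append(p)
    feature_regions = []
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 3:
            continue
        cov = np.cov(pts.T)
        smallest = np.linalg.eigvalsh(cov)[0]          # residual off a best-fit plane
        if smallest > thresh:                          # non-planar -> feature region
            feature_regions.append((key, pts))
    return feature_regions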
Abstract:
An image processing apparatus and a method implemented by the image processing apparatus generate a vertical disparity map through regression analysis, based on differences between vertical coordinate values from feature correspondence information of left and right source images. A geometric difference is calibrated through image warping before depth information is restored through depth estimation. Thus, a process of optimizing a camera model may not have to be performed, and the occurrence of black areas that may be caused by image rotation may be reduced.
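The sketch below illustrates the regression step, assuming the vertical disparity is modeled as an affine function of pixel position fit by least squares to the vertical coordinate differences of matched features; the actual regression model used by the apparatus may differ.

# Illustrative sketch: fit dy = a*x + b*y + c from feature correspondences,
# then evaluate it densely to obtain a vertical disparity map.
import numpy as np

def fit_vertical_disparity(left_pts, right_pts, height, width):
    """Fit dy = a*x + b*y + c by least squares and evaluate it on the image grid."""
    left_pts = np.asarray(left_pts, dtype=float)       # (N, 2) as (x, y)
    right_pts = np.asarray(right_pts, dtype=float)
    dy = right_pts[:, 1] - left_pts[:, 1]              # vertical coordinate differences
    A = np.column_stack([left_pts[:, 0], left_pts[:, 1], np.ones(len(dy))])
    coeffs, *_ = np.linalg.lstsq(A, dy, rcond=None)    # regression analysis
    ys, xs = np.mgrid[0:height, 0:width]
    return coeffs[0] * xs + coeffs[1] * ys + coeffs[2] # dense vertical disparity map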