Abstract:
The 3D information generating device includes a light source unit configured to irradiate light onto a target object, a coordinate mechanism unit disposed between the target object and the light source unit and including a plurality of inclined projections that reflect the light, a camera unit configured to output an image obtained by simultaneously photographing the coordinate mechanism unit and the target object, and an information processing unit configured to identify a projection plane formed by the light of the light source unit and, by using the projection plane, generate 3D information about the target object while calibrating an error. Accordingly, an unskilled user may easily perform a 3D scan operation by using the coordinate mechanism unit, and moreover, since the coordinate mechanism is not installed externally, the 3D information generating device may be easily moved, maintained, and managed.
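As an illustrative sketch only (not the disclosed apparatus), the snippet below shows one conventional way such a light plane can be used once it has been identified: fit a plane to the reflection points observed on the coordinate mechanism and intersect camera rays with it to recover 3D points on the target object. The reflection-point values and camera intrinsics are hypothetical.

```python
# Hedged sketch: light-plane triangulation, assuming the plane is identified from the
# reflection points on the coordinate mechanism's inclined projections (camera frame).
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points; returns (normal, d) with n.x + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                      # direction of least variance
    return normal, -normal.dot(centroid)

def triangulate_on_plane(pixel, K, normal, d):
    """Intersect the camera ray through `pixel` with the light plane."""
    ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    t = -d / normal.dot(ray)             # ray starts at the camera center
    return t * ray

# Hypothetical reflection points measured on the inclined projections
reflections = np.array([[0.10, 0.00, 0.50], [0.20, 0.10, 0.60],
                        [0.00, 0.20, 0.55], [0.15, 0.25, 0.62]])
n, d = fit_plane(reflections)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
print(triangulate_on_plane((350, 260), K, n, d))              # a pixel on the target
```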
Abstract:
An apparatus for generating a 3-dimensional (3D) face model includes a multi-view image capturer configured to sense a motion of a mobile device and automatically capture still images from two or more directions; and a 3D model generator configured to generate a 3D face model using the two or more still images obtained by the multi-view image capturer.
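A minimal sketch of how such motion-triggered capture is often implemented, assuming the mobile device reports a yaw angle from its inertial sensor; auto_capture, capture, and the 15-degree step are hypothetical names and values, not the disclosed method.

```python
# Capture a still image each time the device has rotated a further fixed angle.
CAPTURE_STEP_DEG = 15.0          # assumed angular spacing between still images

def auto_capture(yaw_stream, capture):
    """Call capture(yaw) whenever the yaw has changed by CAPTURE_STEP_DEG or more."""
    last_yaw = None
    for yaw in yaw_stream:
        if last_yaw is None or abs(yaw - last_yaw) >= CAPTURE_STEP_DEG:
            capture(yaw)
            last_yaw = yaw

# Usage with simulated sensor readings (0 to 90 degrees in 5-degree steps)
auto_capture(range(0, 91, 5), lambda y: print(f"captured still at yaw {y} deg"))
```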
Abstract:
The present disclosure relates to a method for expressing physical phenomena in mixed reality and a mixed reality apparatus for performing the method. More specifically, to provide a mixed reality service, the physical phenomena expressing method configures a scene that enhances context or realism to the level of a face-to-face situation, using a scene configuration of an input image collected from a sensor and a physics/physical-property analysis of a target object included in the input image.
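Purely as an illustration of the idea (not the disclosed method), the sketch below attaches assumed physical properties to objects found in an input scene and steps a trivial simulation so that virtual content reacts to them; the property table, labels, and values are all hypothetical.

```python
# Toy scene configuration with per-object physical properties driving a simple bounce.
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str
    height_m: float          # top surface height from the reconstructed scene
    elasticity: float        # estimated physical property (0 = no bounce)

PROPERTY_TABLE = {"table": 0.1, "ball": 0.8, "cushion": 0.5}   # assumed priors

def configure_scene(detections):
    return [SceneObject(lbl, h, PROPERTY_TABLE.get(lbl, 0.0)) for lbl, h in detections]

def drop_virtual_ball(obj, v=0.0, g=9.81, dt=0.1, steps=5):
    """Bounce a virtual object on a real one using its estimated elasticity."""
    h = obj.height_m + 1.0
    for _ in range(steps):
        v += g * dt
        h -= v * dt
        if h <= obj.height_m:            # contact with the real object
            h, v = obj.height_m, -v * obj.elasticity
    return h

scene = configure_scene([("table", 0.75), ("cushion", 0.4)])
print([(o.label, round(drop_virtual_ball(o), 2)) for o in scene])
```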
Abstract:
Provided are a multi-primitive fitting method including acquiring point cloud data by collecting data of each input point, obtaining a segment for the points by using the point cloud data, and performing primitive fitting by using the point cloud data and the data of the points included in the segment, and a multi-primitive fitting device that performs the method.
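A hedged sketch of the fitting step under simplified assumptions: the segmentation is taken as given, only two candidate primitives (plane and sphere) are considered, and the candidate with the smaller mean residual wins. This is a generic least-squares approach, not necessarily the disclosed one.

```python
import numpy as np

def fit_plane(pts):
    c = pts.mean(axis=0)
    n = np.linalg.svd(pts - c)[2][-1]                 # direction of least variance
    return ("plane", (n, c)), np.abs((pts - c) @ n).mean()

def fit_sphere(pts):
    # Algebraic least squares: |p|^2 = 2 c.p + (r^2 - |c|^2)
    A = np.hstack([2 * pts, np.ones((len(pts), 1))])
    sol, *_ = np.linalg.lstsq(A, (pts ** 2).sum(axis=1), rcond=None)
    c, k = sol[:3], sol[3]
    r = np.sqrt(k + c @ c)
    return ("sphere", (c, r)), np.abs(np.linalg.norm(pts - c, axis=1) - r).mean()

def fit_segment(pts):
    """Fit both candidate primitives and keep the one with the smaller residual."""
    return min((fit_plane(pts), fit_sphere(pts)), key=lambda f: f[1])

# Example segment: noisy points on a unit sphere (hypothetical data)
rng = np.random.default_rng(0)
d = rng.normal(size=(500, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
(best, params), err = fit_segment(d + rng.normal(scale=0.01, size=d.shape))
print(best, err)
```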
Abstract:
Disclosed is a method of calibrating a depth image based on a relationship between a depth sensor and a color camera. An apparatus for calibrating a depth image may include a three-dimensional (3D) point determiner to determine a 3D point of a camera image and a 3D point of a depth image captured simultaneously with the camera image, a calibration information determiner to determine, using the 3D point of the camera image and the 3D point of the depth image, calibration information for calibrating an error of the depth image captured by the depth sensor and geometric information between the depth sensor and the color camera, and a depth image calibrator to calibrate the depth image based on the calibration information and the 3D point of the depth image.
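One standard way to obtain such geometric information, shown here only as a sketch under the assumption that corresponding 3D points have already been extracted from the color image and the depth image, is the Kabsch/Procrustes estimate of the rigid transform between the two point sets.

```python
import numpy as np

def rigid_transform(depth_pts, cam_pts):
    """Return R, t so that R @ p + t (p from the depth sensor) best matches cam_pts."""
    cd, cc = depth_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (depth_pts - cd).T @ (cam_pts - cc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, cc - R @ cd

def to_camera_frame(depth_pts, R, t):
    """Map depth-sensor points into the color-camera frame."""
    return depth_pts @ R.T + t

# Synthetic check with a known rotation and translation (hypothetical data)
rng = np.random.default_rng(1)
P = rng.normal(size=(20, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
Q = P @ R_true.T + np.array([0.05, 0.0, 0.2])
R_est, t_est = rigid_transform(P, Q)
print(np.allclose(to_camera_frame(P, R_est, t_est), Q, atol=1e-8))
```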
Abstract:
A method for generating three-dimensional (3D) content for a performance of a performer in an apparatus for generating 3D content is provided. The apparatus for generating 3D content obtains a 3D appearance model and texture information of the performer using images of the performer located in the space, sets a plurality of nodes in the 3D appearance model of the performer, and generates a 3D elastic model of the performer using the texture information. The apparatus obtains a plurality of first images of the performance scene of the performer photographed by a plurality of first cameras installed in a performance hall, and renders, using the texture information, a plurality of virtual images obtained by photographing the 3D appearance model according to the position change of each node in the 3D elastic model through a plurality of first virtual cameras having the same intrinsic and extrinsic parameters as the plurality of first cameras. The apparatus then determines an optimal position of each node by using color differences between the plurality of first images and the plurality of virtual images rendered through the plurality of first virtual cameras, and generates a mesh model describing the performance scene by applying 3D elastic model parameter values corresponding to the optimal position of each node to the 3D elastic model.
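As a rough illustration only, the toy code below shows the shape of such a photometric node search: each node position is chosen to minimize the color difference between captured images and images rendered from candidate node positions. The render function here is a one-pixel stand-in, not the disclosed rendering of the textured appearance model.

```python
import numpy as np

def photometric_cost(captured, rendered):
    """Mean absolute color difference accumulated over all camera/virtual-camera pairs."""
    return np.mean([np.abs(c - r).mean() for c, r in zip(captured, rendered)])

def optimise_nodes(nodes, captured, render, offsets, sweeps=3):
    """Greedy per-node search over candidate 3D offsets (coordinate descent)."""
    nodes = nodes.copy()
    for _ in range(sweeps):
        for i in range(len(nodes)):
            best = min(offsets, key=lambda o: photometric_cost(
                captured, render(np.vstack([nodes[:i], nodes[i] + o, nodes[i + 1:]]))))
            nodes[i] += best
    return nodes

# Toy example: the "rendered image" is one pixel whose value is the mean node height.
target_nodes = np.array([[0, 0, 1.0], [0, 1, 1.5]])
render = lambda n: [np.array([n[:, 2].mean()])]
captured = render(target_nodes)
start = np.array([[0, 0, 0.0], [0, 1, 0.0]])
offsets = [np.array([0, 0, dz]) for dz in (-0.25, 0.0, 0.25)]
print(optimise_nodes(start, captured, render, offsets, sweeps=6))
```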
Abstract:
The present disclosure relates to an apparatus and a method for generating three-dimensional information. The apparatus for generating three-dimensional information may include a light source providing light to an object whose three-dimensional information is to be reconstructed; a coordinate reference mechanism unit provided between the light source and the object and having a plurality of protrusions that reflect the light; a camera unit outputting an image capturing the coordinate reference mechanism unit and the object simultaneously; and a three-dimensional information processing unit generating the three-dimensional information of the object by identifying a projection plane formed by the light and using the projection plane, in consideration of a relationship between a plurality of actual protrusion reflection points at which the light is respectively reflected by the plurality of protrusions and a plurality of protrusion reflection points displayed in the image.
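A hedged sketch of one way the correspondence between actual protrusion reflection points and their image positions can be exploited: estimate a homography from image pixels to coordinates on the light plane and map any pixel on the reflected stripe through it. The correspondences below are hypothetical, and this is a generic technique rather than the disclosed processing.

```python
import numpy as np

def estimate_homography(img_pts, plane_pts):
    """DLT estimate of H with plane_pt ~ H @ [u, v, 1] (needs >= 4 correspondences)."""
    A = []
    for (u, v), (x, y) in zip(img_pts, plane_pts):
        A.append([-u, -v, -1, 0, 0, 0, u * x, v * x, x])
        A.append([0, 0, 0, -u, -v, -1, u * y, v * y, y])
    return np.linalg.svd(np.asarray(A, float))[2][-1].reshape(3, 3)

def image_to_plane(H, pixel):
    p = H @ np.array([pixel[0], pixel[1], 1.0])
    return p[:2] / p[2]

# Hypothetical correspondences: four protrusion reflection points
img_pts = [(100, 100), (300, 110), (310, 260), (95, 250)]      # pixels in the image
plane_pts = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (0.0, 0.1)]   # positions on the plane
H = estimate_homography(img_pts, plane_pts)
print(image_to_plane(H, (200, 180)))   # a pixel on the light stripe over the object
```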
Abstract:
Provided are an apparatus and a method for extracting a movement path, the movement path extracting apparatus including an image receiver to receive an image from a camera group in which the mutual positional relationship among cameras is fixed, a geographic coordinates receiver to receive geographic coordinates of a moving object on which the camera group is fixed, and a movement path extractor to extract a movement path of the camera group, based on a direction and a position of a reference camera of the camera group, using the image and the geographic coordinates.
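A minimal sketch under stated assumptions: the geographic coordinates are already converted to local metric units, the heading is approximated from consecutive positions, and the reference camera sits at a fixed 2D offset in the rig frame. The function and values are hypothetical, not the disclosed extractor.

```python
import numpy as np

def movement_path(object_xy, ref_cam_offset):
    """object_xy: (N, 2) positions of the moving object; ref_cam_offset: (2,) in rig frame."""
    object_xy = np.asarray(object_xy, float)
    headings = np.arctan2(*np.diff(object_xy, axis=0)[:, ::-1].T)   # heading per segment
    headings = np.append(headings, headings[-1])                    # hold the last heading
    path = []
    for (x, y), h in zip(object_xy, headings):
        c, s = np.cos(h), np.sin(h)
        R = np.array([[c, -s], [s, c]])                             # rig-to-world rotation
        path.append((np.array([x, y]) + R @ ref_cam_offset, h))
    return path

# Hypothetical example: vehicle driving east then north, camera mounted 0.5 m to the left
for pos, heading in movement_path([(0, 0), (5, 0), (10, 0), (10, 5)], np.array([0.0, 0.5])):
    print(np.round(pos, 2), round(float(heading), 2))
```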
Abstract:
A primitive fitting apparatus is provided. The primitive fitting apparatus may include a selecting unit to receive, from a user, a selection of points from a point cloud to be used to fit a primitive the user desires to fit, an identifying unit to receive a selection of the primitive from the user and to identify the selected primitive, and a fitting unit to fit the primitive to correspond to the points, using the points and the identified primitive.
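The sketch below illustrates the selecting/identifying/fitting flow under simplifying assumptions: the user's point indices and primitive name are already known, and only a plane fitter and a line fitter (both least squares via PCA) are shown; the dictionary dispatch is a hypothetical stand-in for the identifying unit.

```python
import numpy as np

def fit_plane(pts):
    c = pts.mean(axis=0)
    normal = np.linalg.svd(pts - c)[2][-1]         # direction of least variance
    return {"type": "plane", "point": c, "normal": normal}

def fit_line(pts):
    c = pts.mean(axis=0)
    direction = np.linalg.svd(pts - c)[2][0]       # direction of greatest variance
    return {"type": "line", "point": c, "direction": direction}

FITTERS = {"plane": fit_plane, "line": fit_line}   # identified primitive -> fitter

def fit_selected(point_cloud, selected_indices, primitive_name):
    """Fit the user-identified primitive to the user-selected points of the cloud."""
    return FITTERS[primitive_name](point_cloud[selected_indices])

# Hypothetical selection: the user picked the first 200 points, which lie near z = 0
cloud = np.random.default_rng(3).normal(size=(1000, 3))
cloud[:200, 2] *= 0.01
print(fit_selected(cloud, np.arange(200), "plane"))
```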
Abstract:
A view image providing device and method are provided. The view image providing device may include a panorama image generation unit to generate a panorama image using a cube map including a margin area by obtaining an omnidirectional image, a mesh information generation unit to generate 3-dimensional (3D) mesh information that uses the panorama image as a texture by obtaining 3D data, and a user data rendering unit to render the panorama image and the mesh information into user data according to a position and direction input by a user.
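As a small sketch of the final lookup step only, assuming an equirectangular panorama is already available as the texture (the cube-map margin handling is omitted): map a user-supplied viewing direction to panorama pixel coordinates. The resolution and names are hypothetical.

```python
import numpy as np

def direction_to_pixel(direction, width, height):
    """Map a unit viewing direction (x, y, z) to equirectangular pixel coordinates."""
    x, y, z = direction / np.linalg.norm(direction)
    lon = np.arctan2(y, x)                 # [-pi, pi]
    lat = np.arcsin(z)                     # [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / np.pi) * (height - 1)
    return int(round(u)), int(round(v))

# Hypothetical 2048x1024 panorama texture; sample the color seen along the +x direction
pano = np.zeros((1024, 2048, 3), np.uint8)
u, v = direction_to_pixel(np.array([1.0, 0.0, 0.0]), pano.shape[1], pano.shape[0])
print(pano[v, u])
```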