Abstract:
Provided are a three-dimensional (3D) registration method and apparatus. The method and apparatus may select a key point from among plural points included in 3D target data, based on a geometric feature or a color feature of each of the plural points; adjust a position of the selected key point based on features of, or a distance between, a key point of 3D source data and the selected key point; calculate reliabilities of plural key points of the 3D source data based on respective features of at least one key point of the 3D target data determined to correspond to those key points; and generate 3D registration data by performing 3D registration between the 3D source data and the 3D target data based on the calculated reliabilities.
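As a rough illustration of the reliability-weighted alignment described above, the sketch below selects salient target key points, weights source-target correspondences by feature similarity, and solves a weighted rigid alignment. The salience score, the Gaussian reliability kernel, and all function names are illustrative assumptions; the abstract does not specify the actual scoring or solver.

```python
import numpy as np

def select_key_points(points, features, k=100):
    # Hypothetical salience test: keep the k points whose (geometric or
    # color) feature vectors deviate most from the mean feature.
    scores = np.linalg.norm(features - features.mean(axis=0), axis=1)
    return np.argsort(scores)[-k:]

def correspondence_reliabilities(src_features, tgt_features, sigma=0.5):
    # Assumed reliability measure: a Gaussian kernel on the feature
    # distance between corresponding source and target key points.
    d = np.linalg.norm(src_features - tgt_features, axis=1)
    return np.exp(-(d / sigma) ** 2)

def weighted_rigid_registration(src, tgt, w):
    # Weighted Kabsch: the rigid (R, t) minimizing
    # sum_i w_i * ||R @ src_i + t - tgt_i||^2.
    w = w / w.sum()
    mu_s, mu_t = w @ src, w @ tgt
    H = (src - mu_s).T @ ((tgt - mu_t) * w[:, None])
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Usage with synthetic, already-corresponding key points:
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
tgt = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = weighted_rigid_registration(src, tgt, np.ones(50))
print(np.allclose(src @ R.T + t, tgt, atol=1e-8))  # True
```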
Abstract:
Disclosed are a method and an apparatus for detecting a point of interest (POI) in a three-dimensional (3D) point cloud. The apparatus includes a 3D point cloud data acquirer to acquire 3D point cloud data, a shape descriptor to generate a shape description vector describing the shape of the surface on which a pixel point of the 3D point cloud and its neighboring points are located, and a POI extractor to extract a POI based on the shape description vector.
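One plausible realization of the shape description vector is sketched below using a PCA neighborhood descriptor: the ascending, normalized eigenvalues of each point's local covariance describe the surface, and points with high surface variation are extracted as POIs. The neighborhood size, the eigenvalue descriptor, the scipy k-d tree, and the selection fraction are assumptions, not the patent's actual definitions.

```python
import numpy as np
from scipy.spatial import cKDTree

def shape_description_vectors(points, k=16):
    # Assumed descriptor: for each point, the ascending, sum-normalized
    # eigenvalues of the covariance of its k nearest neighbors.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    descs = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        nbhd = points[nb] - points[nb].mean(axis=0)
        evals = np.linalg.eigvalsh(nbhd.T @ nbhd / k)  # ascending order
        descs[i] = evals / max(evals.sum(), 1e-12)     # scale-invariant
    return descs

def extract_pois(points, descs, frac=0.05):
    # Assumed POI criterion: keep the fraction of points with the highest
    # surface variation (smallest-eigenvalue share, near zero on planes).
    n = max(1, int(frac * len(points)))
    return np.argsort(descs[:, 0])[-n:]

# Usage: on a thin noisy plane with one isolated bump, the bump's
# neighborhood varies most, so it should rank among the extracted POIs.
pts = np.random.default_rng(1).uniform(size=(500, 3)) * [1.0, 1.0, 0.01]
pts[0] = [0.5, 0.5, 0.3]
pois = extract_pois(pts, shape_description_vectors(pts))
print(0 in pois)
```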
Abstract:
A method and an apparatus for adjusting a pose in a face image are provided. The method involves detecting two-dimensional (2D) landmarks in a 2D face image, positioning three-dimensional (3D) landmarks in a 3D face model by determining an initial pose of the 3D face model based on the 2D landmarks, updating the 3D landmarks by iteratively adjusting the pose and shape of the 3D face model, and adjusting the pose in the 2D face image based on the updated 3D landmarks.
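The initial-pose step can be sketched as a perspective-n-point fit between the 3D landmarks of a face model and the detected 2D landmarks. The model landmark coordinates, the focal-length guess, and the use of OpenCV's solvePnP are assumptions for illustration; landmark detection and the iterative pose-and-shape update of the method are not shown.

```python
import numpy as np
import cv2

# Hypothetical 3D landmarks of a neutral face model (eye corners, nose
# tip, mouth corners, chin) in model coordinates; not the patent's model.
MODEL_3D = np.array([[-30.0,  35.0, -10.0],
                     [ 30.0,  35.0, -10.0],
                     [  0.0,   0.0,  20.0],
                     [-20.0, -30.0,  -5.0],
                     [ 20.0, -30.0,  -5.0],
                     [  0.0, -60.0,  -5.0]])

def camera_matrix(image_size):
    # Crude pinhole intrinsics: focal length guessed as the image width.
    h, w = image_size
    f = float(w)
    return np.array([[f,   0.0, w / 2.0],
                     [0.0, f,   h / 2.0],
                     [0.0, 0.0, 1.0]])

def initial_pose(landmarks_2d, image_size):
    # Perspective-n-point fit of the model landmarks to the detected 2D
    # landmarks; returns a Rodrigues rotation vector and a translation.
    ok, rvec, tvec = cv2.solvePnP(MODEL_3D, landmarks_2d,
                                  camera_matrix(image_size), None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    return rvec, tvec

# Usage with synthetic detections: project the model at a known pose,
# then recover that pose from the 2D landmark positions alone.
rvec_true = np.array([0.1, 0.3, 0.05])
tvec_true = np.array([0.0, 0.0, 600.0])
pts2d, _ = cv2.projectPoints(MODEL_3D, rvec_true, tvec_true,
                             camera_matrix((480, 640)), None)
rvec, tvec = initial_pose(pts2d.reshape(-1, 2), (480, 640))
print(np.allclose(rvec.ravel(), rvec_true, atol=1e-4))  # True
```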
Abstract:
An apparatus for detecting a body part from a user image may include an image acquirer to acquire a depth image, an extractor to extract the user image from a foreground of the acquired depth image, and a body part detector to detect the body part from the user image using a classifier trained on at least one of a single-user image sample and a multi-user image sample. A single-user image sample may represent non-overlapping users, and a multi-user image sample may represent overlapping users.
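A compressed sketch of the per-pixel classification idea follows, assuming depth-difference features and a random forest in the spirit of common depth-based body-part classifiers. The depth threshold, pixel offsets, and scikit-learn classifier are illustrative stand-ins, and the labeled single-user and multi-user training images are placeholders the caller must supply.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_user(depth, max_depth_mm=2500):
    # Hypothetical foreground extractor: keep valid pixels nearer than a
    # fixed depth threshold (the abstract does not specify the criterion).
    return (depth > 0) & (depth < max_depth_mm)

OFFSETS = [(-8, 0), (8, 0), (0, -8), (0, 8), (-16, 16), (16, -16)]

def depth_features(depth, ys, xs, offsets):
    # Per-pixel depth-difference features: depth at a fixed offset minus
    # depth at the pixel itself, for each (dy, dx) in `offsets`.
    h, w = depth.shape
    cols = []
    for dy, dx in offsets:
        yy = np.clip(ys + dy, 0, h - 1)
        xx = np.clip(xs + dx, 0, w - 1)
        cols.append(depth[yy, xx].astype(np.float64) - depth[ys, xs])
    return np.stack(cols, axis=1)

def train_body_part_classifier(depth_images, label_images):
    # `label_images` carry a per-pixel body-part id (0 = background);
    # single-user and multi-user samples are handled in the same way.
    X, y = [], []
    for depth, labels in zip(depth_images, label_images):
        ys, xs = np.nonzero(extract_user(depth))
        X.append(depth_features(depth, ys, xs, OFFSETS))
        y.append(labels[ys, xs])
    clf = RandomForestClassifier(n_estimators=50)
    clf.fit(np.concatenate(X), np.concatenate(y))
    return clf

def detect_body_parts(clf, depth):
    # Classify every foreground pixel of a new depth image.
    ys, xs = np.nonzero(extract_user(depth))
    parts = clf.predict(depth_features(depth, ys, xs, OFFSETS))
    return ys, xs, parts

# Usage with a toy two-part "user": train and predict on the same image.
depth = np.full((64, 64), 4000, dtype=np.uint16)
depth[10:40, 20:44] = 1500   # torso region
depth[40:60, 28:36] = 1600   # leg region
labels = np.zeros_like(depth)
labels[10:40, 20:44] = 1
labels[40:60, 28:36] = 2
clf = train_body_part_classifier([depth], [labels])
ys, xs, parts = detect_body_parts(clf, depth)
```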