Abstract:
Disclosed herein is a method for supporting an attention test based on an attention map and an attention movement map. The method includes generating a score distribution for each segment area of frames satisfying preset conditions, among the frames of video content produced in advance to be suitable for the purpose of the test, generating an attention map corresponding to the frames based on the distribution of the gaze point of a subject, generating an attention movement map corresponding to the frames based on information about movement of the gaze point of the subject, and calculating the attention of the subject using the score distribution for each segment area, the attention map, and the attention movement map.
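As an illustration only, and not the claimed method, the following Python sketch shows one way the three ingredients named above could be combined: a per-segment score distribution for a frame, an attention map built as a histogram of gaze points over the segment grid, and an attention movement map built from gaze displacements. The grid size, the weighting scheme, and all function names are hypothetical assumptions.

```python
# Illustrative sketch only: combining a per-segment score distribution with
# gaze-derived attention and attention-movement maps. Not the patented method.
import numpy as np

GRID = (4, 4)  # hypothetical segmentation of each frame into 4x4 areas

def attention_map(gaze_points, frame_size, grid=GRID):
    """2D histogram of gaze points over the segment grid, normalized to sum to 1."""
    h, w = frame_size
    hist, _, _ = np.histogram2d(
        [y for _, y in gaze_points], [x for x, _ in gaze_points],
        bins=grid, range=[[0, h], [0, w]])
    total = hist.sum()
    return hist / total if total else hist

def attention_movement_map(gaze_points, frame_size, grid=GRID):
    """Accumulates gaze displacement magnitudes per segment, normalized."""
    h, w = frame_size
    amap = np.zeros(grid)
    for (x0, y0), (x1, y1) in zip(gaze_points, gaze_points[1:]):
        r = min(int(y1 / h * grid[0]), grid[0] - 1)
        c = min(int(x1 / w * grid[1]), grid[1] - 1)
        amap[r, c] += np.hypot(x1 - x0, y1 - y0)
    total = amap.sum()
    return amap / total if total else amap

def attention_score(score_dist, att_map, move_map, w_att=0.7, w_move=0.3):
    """Weighted agreement between the content's score distribution and the subject's maps."""
    return w_att * float((score_dist * att_map).sum()) + \
           w_move * float((score_dist * move_map).sum())

if __name__ == "__main__":
    frame_size = (480, 640)
    score_dist = np.full(GRID, 1.0 / (GRID[0] * GRID[1]))     # uniform example distribution
    gaze = [(320, 240), (330, 250), (500, 100), (510, 110)]   # synthetic gaze track
    a = attention_map(gaze, frame_size)
    m = attention_movement_map(gaze, frame_size)
    print("attention score:", attention_score(score_dist, a, m))
```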
Abstract:
Disclosed herein are an apparatus and method for monitoring a user based on multi-view face images. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program may include a face detection unit for extracting face area images from respective user images captured from two or more different viewpoints, a down-conversion unit for generating at least one attribute-specific 2D image by mapping information about at least one attribute in the 3D space of the face area images onto a 2D UV space, and an analysis unit for generating user monitoring information by analyzing the at least one attribute-specific 2D image.
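For illustration, and assuming per-vertex UV coordinates are already available (for example, from a fitted 3D face template, which the abstract does not specify), the sketch below shows the core down-conversion idea: splatting a per-vertex attribute from the 3D face geometry onto a 2D UV image that can then be analyzed. All names and array shapes are hypothetical.

```python
# Illustrative sketch: mapping a per-vertex attribute of a 3D face onto a 2D UV image.
import numpy as np

def attribute_to_uv_image(uv_coords, attribute, size=128):
    """uv_coords: (N, 2) in [0, 1]; attribute: (N,) scalar per vertex (e.g., normalized depth)."""
    img = np.zeros((size, size), dtype=np.float32)
    counts = np.zeros((size, size), dtype=np.float32)
    px = np.clip((uv_coords * (size - 1)).astype(int), 0, size - 1)
    for (u, v), a in zip(px, attribute):
        img[v, u] += a
        counts[v, u] += 1
    np.divide(img, counts, out=img, where=counts > 0)  # average overlapping splats
    return img

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    uv = rng.random((500, 2))    # synthetic per-vertex UV coordinates
    attr = rng.random(500)       # synthetic per-vertex attribute values
    uv_image = attribute_to_uv_image(uv, attr)
    print(uv_image.shape, float(uv_image.max()))
```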
Abstract:
Disclosed herein are a virtual content-mixing method for augmented reality and an apparatus for the same. The virtual content-mixing method includes generating lighting physical-modeling data based on actual lighting information for outputting virtual content, generating camera physical-modeling data by acquiring a plurality of parameters corresponding to a camera, and mixing the virtual content with an image that is input through an RGB camera, based on the lighting physical-modeling data and the camera physical-modeling data.
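The following Python sketch is not the disclosed pipeline but a minimal illustration of the mixing step: the virtual content is shaded with lighting-model parameters (direction and color), passed through a reduced camera model (exposure and gamma only), and alpha-composited over the RGB camera frame. Every parameter and function name is an assumption.

```python
# Illustrative sketch only: shade virtual content with lighting parameters,
# apply a simplified camera model, then composite over the camera frame.
import numpy as np

def shade_virtual(albedo, normals, light_dir, light_color):
    """Lambertian shading driven by a lighting model reduced to direction + color."""
    l = np.asarray(light_dir, dtype=np.float32)
    l /= np.linalg.norm(l)
    ndotl = np.clip(normals @ l, 0.0, 1.0)[..., None]
    return albedo * ndotl * np.asarray(light_color, dtype=np.float32)

def apply_camera_model(linear_rgb, exposure=1.0, gamma=2.2):
    """Camera physical model reduced to exposure and gamma for this sketch."""
    return np.clip(linear_rgb * exposure, 0.0, 1.0) ** (1.0 / gamma)

def mix(camera_frame, virtual_rgb, alpha_mask):
    """Alpha-composite the camera-matched virtual content onto the input frame."""
    a = alpha_mask[..., None]
    return camera_frame * (1.0 - a) + virtual_rgb * a

if __name__ == "__main__":
    h, w = 120, 160
    camera_frame = np.full((h, w, 3), 0.4, dtype=np.float32)                   # synthetic RGB frame
    albedo = np.tile(np.array([0.8, 0.2, 0.2], dtype=np.float32), (h, w, 1))   # virtual object color
    normals = np.tile(np.array([0.0, 0.0, 1.0], dtype=np.float32), (h, w, 1))
    alpha = np.zeros((h, w), dtype=np.float32)
    alpha[30:90, 40:120] = 1.0                                                 # region covered by virtual content
    shaded = shade_virtual(albedo, normals, [0.3, 0.5, 1.0], [1.0, 0.95, 0.9])
    composite = mix(camera_frame, apply_camera_model(shaded), alpha)
    print(composite.shape, float(composite.min()), float(composite.max()))
```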
Abstract:
Disclosed herein are a method and apparatus for active identification based on gaze path analysis. The method may include extracting the face image of a user, extracting the gaze path of the user based on the face image, verifying the identity of the user based on the gaze path, and determining whether the face image is authentic.
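As a hypothetical illustration of gaze-path-based verification, and not the claimed procedure, the sketch below compares an observed gaze path against the path of an on-screen challenge and accepts the user only if the mean deviation stays under a threshold; a static spoof image that cannot track the challenge fails the check. The threshold and sampling scheme are assumptions.

```python
# Illustrative sketch: accept a user only if the gaze path tracks a moving challenge.
import math

def path_deviation(expected, observed):
    """Mean Euclidean distance between corresponding samples of two equal-length paths."""
    assert len(expected) == len(observed) and expected
    return sum(math.dist(e, o) for e, o in zip(expected, observed)) / len(expected)

def verify_identity(expected_path, observed_path, threshold=25.0):
    """True if the gaze followed the challenge closely enough (threshold in pixels)."""
    return path_deviation(expected_path, observed_path) <= threshold

if __name__ == "__main__":
    challenge = [(100 + 10 * t, 200) for t in range(20)]   # on-screen moving target
    live_gaze = [(102 + 10 * t, 198) for t in range(20)]   # real user tracking it
    spoof_gaze = [(100, 200)] * 20                         # static photo: no tracking
    print(verify_identity(challenge, live_gaze))    # True
    print(verify_identity(challenge, spoof_gaze))   # False
```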
Abstract:
Disclosed herein are an apparatus and method for generating a 3D avatar. The method, performed by the apparatus, includes performing a 3D scan of the body of a user using an image sensor and generating a 3D scan model from the result of the scan, matching the 3D scan model with a previously stored template avatar, and generating a 3D avatar based on the result of the matching.
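One common way to match a scan model to a stored template, shown here purely as a sketch and not necessarily the disclosed matching step, is rigid Kabsch alignment over corresponding points of the scan and the template.

```python
# Illustrative sketch: rigid Kabsch alignment of a scan model to a template avatar.
import numpy as np

def rigid_align(scan_pts, template_pts):
    """Return rotation R and translation t minimizing ||R @ scan + t - template||."""
    sc, tc = scan_pts.mean(axis=0), template_pts.mean(axis=0)
    H = (scan_pts - sc).T @ (template_pts - tc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tc - R @ sc
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    template = rng.random((100, 3))             # synthetic template avatar points
    angle = np.pi / 6
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    scan = (template - 0.1) @ R_true.T          # synthetic misaligned scan
    R, t = rigid_align(scan, template)
    aligned = scan @ R.T + t
    print("max residual:", float(np.abs(aligned - template).max()))
```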
Abstract:
Disclosed herein are an apparatus and method for providing an augmented reality-based realistic experience. The apparatus includes a hardware unit and a software processing unit. The hardware unit includes a mirror configured to have both reflective and transmissive characteristics, a display panel configured to present an image of an augmented reality entity, and a sensor configured to acquire information about a user. Based on the information about the user from the hardware unit, the software processing unit performs color compensation on the color of the augmented reality entity and then presents the entity via the display panel.
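The color-compensation step can be illustrated with a simple additive half-mirror model, perceived = T * displayed + R * reflected_scene. The transmittance, reflectance, and scene estimate used below are placeholder assumptions rather than values from the disclosure.

```python
# Illustrative sketch: compensate the displayed color so that, mixed with the
# mirror reflection, the perceived color approximates the intended target.
import numpy as np

def compensate_color(target_rgb, reflected_rgb, transmittance=0.6, reflectance=0.4):
    """Solve for the panel color whose mix with the reflection best matches the target."""
    displayed = (np.asarray(target_rgb) - reflectance * np.asarray(reflected_rgb)) / transmittance
    return np.clip(displayed, 0.0, 1.0)

if __name__ == "__main__":
    target = [0.9, 0.3, 0.3]       # intended appearance of the AR entity
    reflection = [0.5, 0.5, 0.5]   # estimated scene light reflected by the mirror
    print(compensate_color(target, reflection))
```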
Abstract:
Disclosed herein are an apparatus and method for estimating the joint structure of a human body. The apparatus includes a multi-view image acquisition unit for receiving multi-view images acquired by capturing a human body. A human body foreground separation unit extracts a foreground region corresponding to the human body from the acquired multi-view images. A human body shape restoration unit restores voxels indicating geometric space occupation information of the human body using the foreground region corresponding to the human body, thus generating voxel-based three-dimensional (3D) shape information of the human body. A skeleton information extraction unit generates 3D skeleton information from the generated voxel-based 3D shape information of the human body. A skeletal structure estimation unit estimates the positions of respective joints from the skeletal structure of the human body using both the generated 3D skeleton information and anthropometric information.
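As a sketch of just one stage (visual-hull-style voxel carving, rather than the full joint-estimation pipeline), the code below keeps a voxel only if its center projects inside the foreground silhouette in every view. The camera matrices and silhouettes are synthetic placeholders.

```python
# Illustrative sketch: visual-hull voxel carving from multi-view foreground silhouettes.
import numpy as np

def carve_voxels(voxel_centers, projections, silhouettes):
    """voxel_centers: (N, 3); projections: list of 3x4 matrices; silhouettes: list of bool images."""
    occupied = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
    for P, sil in zip(projections, silhouettes):
        uvw = homog @ P.T                                  # project voxel centers into the view
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = sil.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = sil[v[inside], u[inside]]
        occupied &= hit                                    # carve away voxels outside any silhouette
    return occupied

if __name__ == "__main__":
    # Synthetic setup: two orthographic-style cameras and circular silhouettes.
    grid = np.linspace(-0.5, 0.5, 16)
    centers = np.array([[x, y, z] for x in grid for y in grid for z in grid])
    P_front = np.array([[100, 0, 0, 64], [0, 100, 0, 64], [0, 0, 0, 1]], dtype=float)
    P_side = np.array([[0, 0, 100, 64], [0, 100, 0, 64], [0, 0, 0, 1]], dtype=float)
    yy, xx = np.mgrid[0:128, 0:128]
    circle = (xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2
    occ = carve_voxels(centers, [P_front, P_side], [circle, circle])
    print("occupied voxels:", int(occ.sum()), "of", len(centers))
```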