Abstract:
An image processing apparatus includes a determination unit, a search unit, a weight assignment unit and a filling unit. The determination unit determines whether a hole is surrounded by the foreground in a disparity map or a depth map. The search unit searches for multiple relative backgrounds along multiple directions when the hole is surrounded by the foreground. The weight assignment unit respectively assigns weights to the relative backgrounds. The filling unit selects an extremum from the weights, and fills the hole according to the relative background corresponding to the extremum.
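The abstract leaves the search, weighting, and extremum-selection details open. A minimal sketch of one plausible reading, in which the nearest non-hole ("relative background") value along each of four directions is weighted by inverse distance and the maximum-weight candidate fills the hole, could look like this (all specifics here are assumptions, not the patent's method):

```python
def fill_hole(disp, y, x, hole=0):
    """Sketch (assumed details): from a hole pixel surrounded by foreground,
    search along several directions for the nearest non-hole 'relative
    background' value, weight each candidate by inverse distance, and fill
    with the candidate whose weight is the extremum (here: the maximum)."""
    h, w = len(disp), len(disp[0])
    best_w, best_v = -1.0, hole
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        ny, nx, dist = y + dy, x + dx, 1
        # Walk outward until we leave the map or hit a non-hole pixel.
        while 0 <= ny < h and 0 <= nx < w and disp[ny][nx] == hole:
            ny, nx, dist = ny + dy, nx + dx, dist + 1
        if 0 <= ny < h and 0 <= nx < w:
            weight = 1.0 / dist  # assumed weighting; the abstract leaves it open
            if weight > best_w:
                best_w, best_v = weight, disp[ny][nx]
    return best_v
```

With equally distant candidates the first search direction wins; a real implementation would encode the apparatus's actual weighting rule here.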
Abstract:
A method and a system for simultaneously tracking the 6 DoF poses of a movable object and a movable camera are provided. The method includes the following steps: A series of images are captured by a movable camera, several environmental feature points are extracted from the images and matched to compute several camera matrices of the movable camera, and the 6 DoF poses of the movable camera are computed using the camera matrices. At the same time, several feature points of the movable object are inferred from the images captured by the movable camera, and the coordinates of these feature points are corrected using the camera matrices corresponding to the images as well as predefined geometric and temporal constraints. Then, the 6 DoF poses of the movable object are computed using the corrected feature point coordinates and their corresponding camera matrices.
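The abstract does not spell out how a 6 DoF pose is recovered from a camera matrix. One standard route, shown purely as a sketch, is to take the extrinsic part [R|t] of the matrix and read off the camera center and an Euler-angle orientation (the ZYX convention below is an assumption):

```python
import numpy as np

def pose_from_extrinsics(R, t):
    """Illustrative helper: recover a 6-DoF pose (position + roll/pitch/yaw)
    from the extrinsic part [R|t] of a camera matrix."""
    center = -R.T @ t                     # camera position in the world frame
    yaw = np.arctan2(R[1, 0], R[0, 0])    # ZYX Euler angles (one convention)
    pitch = np.arcsin(-R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return center, (roll, pitch, yaw)
```

An identity rotation with zero translation yields the origin pose, which is a quick sanity check for the convention chosen.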
Abstract:
A motion tracking system includes a first image-capturing module, a computing module and a database. The first image-capturing module captures the full-body motion of an object to obtain a depth image. The database provides a plurality of training samples, wherein the training samples include a plurality of pieces of depth feature information related to joint positions of the object. The computing module receives the depth image and performs an association operation and a prediction operation on the depth image to obtain a plurality of first joint positions of the object. The computing module then projects the first joint positions into a three-dimensional space to generate a three-dimensional skeleton of the object. The depth image may be an image in which none of the object's limbs are occluded, or an image in which some limbs are occluded and the rest are not.
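Projecting a joint detected in a depth image into three-dimensional space is commonly done with pinhole back-projection. The sketch below shows that standard formula; the intrinsics (fx, fy, cx, cy) are assumed known and are not specified by the abstract:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: lift a 2-D joint position (u, v) with a
    depth reading into a 3-D camera-frame point (standard formula)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Applying this to every first joint position yields the point set from which the three-dimensional skeleton can be assembled.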
Abstract:
A three-dimensional image measurement system including a first optical system and a second optical system is provided. The first optical system is adapted to output a structural light beam and a zero-order light beam, with an angle between the two beams. The first optical system performs an optical operation to project the structural light beam onto a three-dimensional object to obtain three-dimensional information of the object. The second optical system is adapted to receive the zero-order light beam and perform another optical operation by using it. The first optical system includes a plurality of optical elements, and the value of the angle between the structural light beam and the zero-order light beam is determined according to position parameters of the optical elements.
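The abstract only states that the angle between the beams is set by the optical elements' position parameters. When the structural beam is produced by a diffractive element, one plausible relation (an assumption here, not the patent's formula) is the grating equation, sketched below:

```python
import math

def beam_angle(grating_pitch, wavelength, order=1):
    """Illustrative only: for a diffraction grating, the angle between the
    zero-order beam and a diffracted beam follows sin(theta) = m*lambda/d.
    This is one plausible relation for the abstract's angle, not a claim
    about the patented optics."""
    return math.asin(order * wavelength / grating_pitch)
```

For example, a wavelength of half the grating pitch gives a first-order angle of 30 degrees.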
Abstract:
A depth sensing apparatus with self-calibration and a self-calibration method thereof are provided. The depth sensing apparatus includes a projection apparatus, an image capturing apparatus and a calibration module. The projection apparatus projects a calibration pattern and a depth computation pattern onto a reference plane based on a predefined calibration pattern and a predefined depth computation pattern. The image capturing apparatus captures an image including the calibration pattern and the depth computation pattern. The calibration module, coupled to the image capturing apparatus, adjusts apparatus parameters of the depth sensing apparatus to calibrate a depth computation deviation according to the calibration pattern in the image, the predefined calibration pattern and a predefined lookup table corresponding to the predefined calibration pattern.
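The abstract does not define how the lookup table maps a pattern deviation to a parameter correction. A heavily simplified sketch, in which the mean horizontal displacement between captured and predefined pattern points is matched to the nearest key in a correction table (all of this is assumed, not the patent's procedure), might be:

```python
def calibrate_offset(detected_pts, predefined_pts, lut):
    """Hypothetical sketch: average the x-displacement between captured and
    predefined calibration-pattern points, then map it through a lookup
    table (nearest key) to a parameter correction."""
    dx = sum(d[0] - p[0] for d, p in zip(detected_pts, predefined_pts)) / len(detected_pts)
    key = min(lut, key=lambda k: abs(k - dx))  # nearest-key lookup
    return lut[key]
```

A real calibration module would likely use a richer deviation model than a single scalar, but the table-driven correction step would look similar.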
Abstract:
A hole filling method for multi-view disparity maps is provided. Among a plurality of views for capturing an object, at least one disparity map is captured at each of a plurality of known views. For the remaining views, which serve as virtual views, disparity maps are synthesized by using the disparity maps of the known views in sequence, ordered by the distance of the virtual camera position or the transformed angle between each virtual view and each known view. Holes in the synthesized disparity map of a virtual view are then filled using hole filling information from the disparity maps of other virtual views whose distances or transformed angles are smaller than those of that virtual view.
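The ordering rule above can be sketched in isolation. Here each view is reduced to a hypothetical scalar camera position so that "distance" is a simple absolute difference; the real method would use the actual camera geometry or transformed angles:

```python
def view_fill_order(virtual_view, known_views):
    """Sketch of the abstract's ordering rule: use known (or already
    synthesized) views in increasing order of distance from the virtual
    view. Views are modeled as scalar positions purely for illustration."""
    return sorted(known_views, key=lambda v: abs(v - virtual_view))
```

Processing views in this order means that when a virtual view is synthesized, every view with a smaller distance has already been handled and can contribute hole filling information.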