Abstract:
Described are machine vision systems and methods for simultaneous kinematic and hand-eye calibration. A machine vision system includes a robot and a 3D sensor in communication with a control system. The control system is configured to move the robot to a plurality of poses and, for each pose, capture a 3D image of calibration target features and record the robot's joint angles. The control system is configured to obtain initial values for robot calibration parameters, and to determine initial values for hand-eye calibration parameters based on the initial values for the robot calibration parameters, the 3D images, and the joint angles. The control system is configured to determine final values for the hand-eye calibration parameters and the robot calibration parameters by refining both sets of parameters to minimize a cost function.
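As a rough illustration of the refinement step only, the sketch below jointly fits hand-eye and kinematic parameters of a hypothetical 2-link planar arm by least-squares minimization of a shared cost function over per-pose observations. The arm model, parameterization, and use of scipy are assumptions for the sketch, not the patent's method.

```python
# Minimal sketch: simultaneous kinematic + hand-eye refinement for a
# hypothetical 2-link planar arm (not the patent's actual algorithm).
import numpy as np
from scipy.optimize import least_squares

def fk(joints, links):
    """Forward kinematics of a 2-link planar arm: end-effector (x, y, theta)."""
    t1, t2 = joints
    l1, l2 = links
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
    return x, y, t1 + t2

def predict(params, joints):
    """Predict the fixed target's position as seen in the sensor frame."""
    l1, l2, hx, hy, ha, tx, ty = params  # kinematic, hand-eye, target params
    ex, ey, eth = fk(joints, (l1, l2))
    ca, sa = np.cos(eth), np.sin(eth)
    sx = ex + ca * hx - sa * hy          # sensor position in base frame
    sy = ey + sa * hx + ca * hy
    sth = eth + ha                       # sensor orientation in base frame
    c, s = np.cos(sth), np.sin(sth)
    dx, dy = tx - sx, ty - sy
    return np.array([c * dx + s * dy, -s * dx + c * dy])  # into sensor frame

# Simulated calibration data: joint angles and sensor observations per pose.
rng = np.random.default_rng(1)
true = np.array([0.50, 0.35, 0.04, 0.01, 0.10, 0.60, 0.20])
poses = [(0.1, 0.5), (0.8, -0.3), (1.2, 0.9), (-0.4, 1.1), (0.3, -0.8)]
obs = [predict(true, j) + rng.normal(0, 1e-4, 2) for j in poses]

def cost(params):
    """Stacked residuals over all poses; least_squares minimizes their norm."""
    return np.concatenate([predict(params, j) - o for j, o in zip(poses, obs)])

init = true + rng.normal(0, 0.02, 7)     # rough initial values, then refine
print("refined parameters:", least_squares(cost, init).x)
```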
Abstract:
This invention provides a system and method that ties the coordinate spaces at the two locations together at calibration time using features on a runtime workpiece instead of a calibration target. Three possible scenarios are contemplated: wherein the same workpiece features are imaged and identified at both locations; wherein the imaged features of the runtime workpiece differ at each location (with a CAD or measured rendition of the workpiece available); and wherein the first location, containing a motion stage, has been calibrated to the motion stage using hand-eye calibration, and the second location is hand-eye calibrated to the same motion stage by transferring the runtime workpiece back and forth between the locations. Illustratively, the quality of the first two techniques can be improved by running multiple runtime workpieces, each in a different pose, extracting and accumulating features at each location, and then using the accumulated features to tie the two coordinate spaces together.
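For the first scenario, one compact way to tie the two spaces once matching features have been accumulated is a best-fit rigid transform between the two point sets. The Kabsch/SVD solution below is a generic sketch under that assumption, not the patent's specific procedure; all data values are illustrative.

```python
# Minimal sketch: tie two coordinate spaces with a best-fit rigid transform
# computed from features accumulated at both locations (Kabsch/SVD).
import numpy as np

def tie_coordinate_spaces(pts_loc1, pts_loc2):
    """Rigid transform (R, t) mapping location-1 coordinates to location-2."""
    c1, c2 = pts_loc1.mean(axis=0), pts_loc2.mean(axis=0)
    H = (pts_loc1 - c1).T @ (pts_loc2 - c2)   # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c2 - R @ c1
    return R, t

# Features accumulated from several runtime workpieces, each in a new pose.
rng = np.random.default_rng(0)
p1 = rng.uniform(-1, 1, (12, 2))              # features in location-1 space
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle)],
                   [np.sin(angle),  np.cos(angle)]])
p2 = p1 @ R_true.T + np.array([5.0, -2.0])    # same features, location-2 space
R, t = tie_coordinate_spaces(p1, p2)
print("rotation recovered:", np.allclose(R, R_true), "translation:", t)
```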
Abstract:
Described are methods, systems, and apparatus, including computer program products, for finding correspondences of one or more parts in the camera images of two or more cameras. For a first part in a first camera image of a first camera, a first 3D ray is calculated as the back-projection of a first feature coordinate of the first part in the first camera image into 3D physical space. For a second part in a second camera image of a second camera, a second 3D ray is calculated as the back-projection of a second feature coordinate of the second part in the second camera image into the 3D physical space, wherein the first feature coordinate and the second feature coordinate correspond to a first feature as identified in a model. A first distance between the first 3D ray and the second 3D ray is calculated.
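A minimal sketch of the described computation, assuming ideal pinhole cameras with known intrinsics K and extrinsics (R, t): each feature coordinate is back-projected to a 3D ray, and the candidate correspondence is scored by the closest distance between the two rays. All numeric values are illustrative.

```python
# Minimal sketch: back-project two image features to 3D rays and score the
# candidate correspondence by the closest distance between the rays.
import numpy as np

def back_project(K, R, t, pixel):
    """Return (origin, direction) of the 3D ray through `pixel`.
    R, t map world points into the camera frame: x_cam = R @ x_world + t."""
    origin = -R.T @ t                                   # camera center in world
    d_cam = np.linalg.inv(K) @ np.array([*pixel, 1.0])  # ray in camera frame
    direction = R.T @ d_cam
    return origin, direction / np.linalg.norm(direction)

def ray_distance(o1, d1, o2, d2):
    """Closest distance between two 3D rays (treated as infinite lines)."""
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-12:                       # parallel rays
        return np.linalg.norm(np.cross(o2 - o1, d1))
    return abs((o2 - o1) @ n) / np.linalg.norm(n)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R1, t1 = np.eye(3), np.zeros(3)                         # camera 1 at origin
R2, t2 = np.eye(3), np.array([-1.0, 0, 0])              # camera 2 shifted
o1, d1 = back_project(K, R1, t1, (350, 260))
o2, d2 = back_project(K, R2, t2, (300, 260))
print("ray-ray distance:", ray_distance(o1, d1, o2, d2))
# A small distance suggests both image features observe the same part feature.
```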
Abstract:
A system and method for robustly calibrating a vision system and a robot are provided. The system and method enable a plurality of cameras to be calibrated into a robot base coordinate system, so that a machine vision/robot control system can accurately identify the locations of objects of interest in robot base coordinates.
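Once such a calibration is in hand, the practical payoff is that a detection in any camera's coordinates maps directly into robot base coordinates through that camera's calibrated pose. The sketch below illustrates that mapping; the 4x4 pose values and helper name are hypothetical, not from the patent.

```python
# Minimal usage sketch: map a camera-frame detection into robot base
# coordinates using one camera's calibrated pose (hypothetical values).
import numpy as np

def camera_to_base(T_base_cam, point_cam):
    """Map a 3D point from camera coordinates into robot base coordinates."""
    return T_base_cam[:3, :3] @ point_cam + T_base_cam[:3, 3]

# 4x4 pose of one calibrated camera expressed in the robot base frame.
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.50],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.80],
    [0.0,  0.0, 0.0, 1.00],
])
object_in_cam = np.array([0.02, -0.05, 0.40])   # detected object, camera frame
print("object in base coordinates:", camera_to_base(T_base_cam, object_in_cam))
```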
Abstract:
This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system implementation having a plurality of cameras arranged at different orientations with respect to a scene, in which microscopic and near-microscopic objects under manufacture are moved by a manipulator. The system acquires contemporaneous images of a runtime object and determines the pose of the object for the purpose of guiding manipulator motion. At least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image, corresponding to trained object features in the non-perspective camera assembly, can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D features and thereby determine a 3D pose of the object.
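One common way to turn matched 2D features from two camera assemblies into a 3D feature is ray triangulation. The midpoint-of-closest-approach sketch below assumes ideal back-projected rays with unit directions and does not model the non-perspective lens described above; the resulting 3D feature set could then be registered against trained features to obtain a pose.

```python
# Minimal sketch: triangulate a 3D feature as the midpoint of closest
# approach between two back-projected rays (hypothetical data).
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Midpoint of the closest approach between two 3D rays (unit directions)."""
    w = o1 - o2
    b, d, e = d1 @ d2, d1 @ w, d2 @ w
    denom = 1.0 - b * b                    # nonzero for non-parallel rays
    s = (b * e - d) / denom                # parameter along ray 1
    t = (e - b * d) / denom                # parameter along ray 2
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Two rays that (nearly) intersect at a feature point.
p = np.array([0.3, 0.2, 1.5])              # true 3D feature
o1, o2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
d1 = (p - o1) / np.linalg.norm(p - o1)
d2 = (p - o2) / np.linalg.norm(p - o2)
print("triangulated feature:", triangulate(o1, d1, o2, d2))
```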