Abstract:
A projection pattern creation apparatus is used in a system in which an imaging device captures an image of a projection pattern projected from a pattern projection device in order to measure a three-dimensional position and/or shape of an object. The projection pattern creation apparatus includes: a projection pattern deformation unit configured to reproduce, on the basis of characteristics of the optical systems of the pattern projection device and the imaging device and/or a positional relation between the pattern projection device and the imaging device, the deformation that a projected pattern undergoes when it appears in an image captured by the imaging device, and to generate a deformed projection pattern; and a first projection pattern improvement unit configured to generate a second projection pattern by improving a first projection pattern, on the basis of a first deformed projection pattern generated when the first projection pattern is projected toward evaluation surfaces having different positions and inclinations.
Abstract:
A three-dimensional imaging device includes a distance image acquiring unit configured to acquire distance images by switching exposure conditions; an effective pixel count calculation unit that calculates an effective pixel count; an ineffective pixel identifying unit that identifies ineffective pixels; and an exposure condition adjusting unit that sets the exposure conditions. The exposure condition adjusting unit sets a reference exposure condition, determines whether a reference effective pixel count is equal to or less than a first threshold value, and, in response to the reference effective pixel count being equal to or less than the first threshold value, sets an additional exposure condition that maximizes a total effective pixel count of a total distance image, and repeats additional setting of the additional exposure condition using the total distance image as a new reference distance image until the total effective pixel count becomes larger than the first threshold value.
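The exposure-adjustment loop this abstract describes can be sketched roughly as follows. All names, the `capture` callback, the exposure values, and the threshold are illustrative assumptions, not part of the patented device; ineffective pixels are modeled as `None`.

```python
def effective_count(img):
    """Count pixels holding a valid distance value (None = ineffective)."""
    return sum(1 for v in img if v is not None)

def merge(base, extra):
    """Fill ineffective pixels of the base distance image from the extra image."""
    return [b if b is not None else e for b, e in zip(base, extra)]

def adjust_exposure(capture, exposures, threshold):
    """Capture a reference distance image, then repeatedly add the
    exposure condition that maximizes the total effective pixel count,
    treating the merged image as the new reference, until the count
    exceeds the threshold."""
    total = capture(exposures[0])          # reference distance image
    remaining = list(exposures[1:])
    while effective_count(total) <= threshold and remaining:
        best = max(remaining,
                   key=lambda e: effective_count(merge(total, capture(e))))
        total = merge(total, capture(best))  # total image becomes new reference
        remaining.remove(best)
    return total
```

For example, three simulated exposures that each resolve a different pixel would be merged one by one until enough pixels are effective.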
Abstract:
An imaging command unit commands an imaging device to image an object onto which a shaded image, in which a predetermined shade value is set for each pixel, is projected. An image acquisition unit acquires the shaded image captured by the imaging device in accordance with the command of the imaging command unit. An image production unit produces a second shaded image in which a shade value is set for each pixel so as to have a brightness distribution opposite to the brightness distribution of the captured shaded image. The imaging command unit then commands the imaging device to image the object onto which the second shaded image is projected, and the image acquisition unit acquires the resulting shaded image. An object information acquisition unit acquires information on the object based on that shaded image.
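The core step, producing a pattern with the opposite brightness distribution, might look like the following minimal sketch. The function name and the 8-bit shade range are assumptions for illustration only.

```python
def inverse_pattern(captured, max_shade=255):
    """Return a shaded image whose brightness distribution is the
    opposite of the captured image: pixels that appeared bright get a
    dark projection shade value, and vice versa."""
    return [[max_shade - v for v in row] for row in captured]
```

Projecting this inverted pattern evens out the brightness of the second captured image, which is what makes the object information easier to extract.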
Abstract:
The parameters of image processing for detecting a workpiece are adjusted by quantifying the quality of those parameters. This image processing device automatically adjusts detection parameters that are used in image processing for detecting an imaging object. The image processing device includes a detection parameter generation unit that generates combinations of detection parameters, an imaging condition setting unit that sets imaging conditions for each combination of detection parameters, a detectability determination unit that determines whether or not the imaging object is detectable for each combination of detection parameters and imaging conditions, an imaging range calculation unit that calculates the range of imaging conditions over which the detectability determination unit determines the imaging object to be detectable, and a parameter determination unit that determines the detection parameter combination for which the calculated range of imaging conditions is the widest.
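The search for the most robust parameter combination can be sketched as a simple grid search. The `detects` callback, the numeric imaging conditions, and the use of max-minus-min as the "width" of the range are all illustrative assumptions.

```python
from itertools import product

def tune_parameters(param_grid, conditions, detects):
    """Try every detection-parameter combination and return the one
    that remains detectable over the widest range of imaging
    conditions. detects(params, cond) -> bool is an assumed test."""
    best, best_range = None, -1
    for params in product(*param_grid):
        ok = [c for c in conditions if detects(params, c)]
        width = max(ok) - min(ok) if ok else -1  # width of usable range
        if width > best_range:
            best, best_range = params, width
    return best
```

The intuition is that a parameter set which tolerates the widest spread of imaging conditions (e.g. exposure times) is the most robust choice in production.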
Abstract:
The present invention facilitates detection of an object imaged by a three-dimensional sensor. The image processing device includes: a distance calculating unit that, on the basis of three-dimensional data acquired by the three-dimensional sensor, calculates a distance between each of the points in the three-dimensional data and a reference plane; a distance image creating unit that creates a distance image whose pixel values are calculated on the basis of the distances calculated by the distance calculating unit; and an image processing unit that carries out image processing on the distance image. The reference plane may be a bearing surface bearing the object or a surface parallel with the bearing surface. The reference plane may also be a surface of the object. The image processing device may include the three-dimensional sensor.
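The point-to-plane distance computation at the heart of this abstract can be sketched as below, assuming (hypothetically) that the point cloud is already arranged as a row-major grid and the reference plane is given in implicit form a·x + b·y + c·z + d = 0.

```python
def distance_image(points, plane):
    """Signed distance from each 3-D point to the reference plane
    a*x + b*y + c*z + d = 0, used directly as the pixel value.
    'points' is a row-major grid of (x, y, z) tuples."""
    a, b, c, d = plane
    n = (a * a + b * b + c * c) ** 0.5   # normalize the plane normal
    return [[(a * x + b * y + c * z + d) / n for (x, y, z) in row]
            for row in points]
```

With the bearing surface as the plane (e.g. z = 0), pixel values become object heights, so standard 2-D image processing can find the object.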
Abstract:
An image processing apparatus includes a two-dimensional image storage unit configured to store a plurality of two-dimensional image data captured by photographing an identical imaging target object under different exposure conditions; a distance image storage unit configured to store distance image data including a pixel array of a known relationship to the pixel array of the two-dimensional image data; a pixel extraction unit configured to extract a first pixel whose brightness differs between the two-dimensional image data by less than a predetermined value (i.e., a pixel whose brightness changes little across the exposure conditions); and a distance image adjusting unit configured to specify a second pixel of the distance image data at a position corresponding to the first pixel in the pixel array, and to set the second pixel as a non-imaging pixel in the distance image data.
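The masking step can be sketched as follows for the two-exposure case, with flat pixel lists and `None` standing in for a non-imaging pixel; the function name and data layout are assumptions for illustration.

```python
def mask_unreliable(bright, dark, distance, min_diff):
    """Where the brightness difference between the two exposures is
    less than min_diff (typically saturated or underexposed pixels
    whose distance is unreliable), set the corresponding distance
    pixel to None (non-imaging)."""
    return [d if abs(hi - lo) >= min_diff else None
            for hi, lo, d in zip(bright, dark, distance)]
```

A pixel that stays near 255 (or near 0) under both exposures carries little usable signal, so its distance value is discarded rather than trusted.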
Abstract:
For calibration of a single camera or a stereo camera, a calibration range is set in advance in the image coordinate system, and calibration is performed over an arbitrary range. A visual sensor controller is a calibration device that associates the robot coordinate system of a robot with the image coordinate system of a camera by placing a target mark on the robot, moving the robot, and detecting the target mark at multiple points within the camera's field of view. The calibration device comprises: an image range setting unit that sets an image range in the image coordinate system of the camera; and a calibration range measurement unit that, before calibration is performed, measures the operation range of the robot corresponding to the image range by moving the robot and detecting the target mark.
Abstract:
A robot system is provided with a three-dimensional sensor that acquires three-dimensional information of an object, and a robot that includes a gripping device for gripping the object. The robot system acquires three-dimensional shape information of the object on the basis of first three-dimensional information relating to the state before the object is taken out and second three-dimensional information relating to the state after the object is taken out, and calculates, on the basis of the three-dimensional shape information of the object, a position and posture of the robot when the object is placed at a target site.
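Treating the two measurements as height maps, the before/after difference that yields the object's shape can be sketched as below; the function name and the noise margin are hypothetical.

```python
def object_shape(before, after, noise=0.0):
    """Subtract the 'after removal' height map from the 'before'
    height map; cells that dropped by more than the noise margin
    belong to the removed object, the rest are zeroed out."""
    return [[b - a if (b - a) > noise else 0.0
             for b, a in zip(row_b, row_a)]
            for row_b, row_a in zip(before, after)]
```

The resulting shape (here, a 3-unit-tall cell) is what lets the system plan the placement pose without a prior CAD model of the object.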
Abstract:
Provided is a robot capable of automatically confirming and correcting the accuracy of a three-dimensional sensor. The robot comprises: a three-dimensional sensor; a notification unit for notifying a determination timing for determining a deviation of the optical system of the three-dimensional sensor, on the basis of a change in a physical quantity related to the three-dimensional sensor; and a determination unit for determining whether or not there is a deviation in the optical system of the three-dimensional sensor. The change in the physical quantity includes at least one of: the acceleration applied to the three-dimensional sensor and the number of acceleration/deceleration events; a change in the temperature of the three-dimensional sensor within a certain period; a change in its temperature over the total operating period; and the number of temperature changes over the total operating period.
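The trigger for the determination timing reduces to threshold checks on the monitored quantities. A minimal sketch, in which every limit value and parameter name is an assumption:

```python
def needs_check(accel_peak, accel_events, temp_change,
                accel_limit, event_limit, temp_limit):
    """Notify a determination timing when any monitored physical
    quantity (peak acceleration, count of acceleration/deceleration
    events, or temperature change) exceeds its assumed limit."""
    return (accel_peak > accel_limit
            or accel_events > event_limit
            or abs(temp_change) > temp_limit)
```

When this returns True, the robot would move the sensor to a known target and run the deviation determination, rather than checking on a fixed schedule.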
Abstract:
A 3D image-capturing device includes at least one camera that acquires a 2D image and distance information of an object, a monitor that displays the 2D image acquired by the camera, and at least one processor including hardware. The processor acquires a first area, for which the distance information is not required, in the 2D image displayed on the monitor, and sets an image-capturing condition such that the amount of distance information acquired by the camera in the first area is less than or equal to a prescribed first threshold, while the amount of distance information acquired by the camera in a second area, which is at least part of the area other than the first area, is greater than a prescribed second threshold that is larger than the first threshold.
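The acceptance test on a candidate image-capturing condition can be sketched as below, modeling the per-pixel amount of distance information as a flat list and the first area as a set of pixel indices (all of which are illustrative assumptions).

```python
def condition_ok(info_per_pixel, first_area, first_threshold, second_threshold):
    """Check an image-capturing condition: the amount of distance
    information in the first (not-required) area must be <= the first
    threshold, while the amount in the remaining second area must
    exceed the larger second threshold."""
    first = sum(info_per_pixel[i] for i in first_area)
    second = sum(v for i, v in enumerate(info_per_pixel) if i not in first_area)
    return first <= first_threshold and second > second_threshold
```

Suppressing distance acquisition in the unneeded area (e.g. a privacy-sensitive or irrelevant region) while guaranteeing it elsewhere is the point of the two asymmetric thresholds.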