Abstract:
Disclosed is a working method using a sensor that improves recognition of a component, thereby improving component mounting and enhancing productivity. The working method includes: extracting an object to be picked from a pile of objects using the sensor; picking the extracted object and moving the picked object to a predetermined place; and estimating an angle of the moved object in its current position using the sensor. Accordingly, the working method can perform precise component recognition and posture estimation in two steps, a component picking step and a component recognition step, and can be effectively applied to a manufacturing line, thereby improving component mounting and enhancing product productivity.
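A minimal sketch of the two-step flow described above, assuming a depth-type sensor. The candidate-extraction and angle-estimation routines here (highest-point selection, image-moment orientation) are generic stand-ins for whatever the disclosure actually uses, and the robot motion between the two steps is elided.

```python
# Hypothetical sketch: step 1 extracts a rough pick candidate from the pile,
# step 2 re-images the moved object and estimates its in-plane angle.
from dataclasses import dataclass
import numpy as np

@dataclass
class PickCandidate:
    position: np.ndarray   # (x, y, z) of the candidate point in the pile
    score: float           # crude graspability score

def extract_pick_candidate(depth_image: np.ndarray) -> PickCandidate:
    """Step 1 (stand-in): take the point closest to the sensor as the least-occluded candidate."""
    iy, ix = np.unravel_index(np.argmin(depth_image), depth_image.shape)
    return PickCandidate(position=np.array([ix, iy, depth_image[iy, ix]]), score=1.0)

def estimate_angle(object_mask: np.ndarray) -> float:
    """Step 2 (stand-in): in-plane orientation of the isolated object from second-order moments."""
    ys, xs = np.nonzero(object_mask)
    xs = xs - xs.mean()
    ys = ys - ys.mean()
    # Principal-axis angle: 0.5 * atan2(2*mu11, mu20 - mu02)
    return 0.5 * np.degrees(np.arctan2(2 * (xs * ys).mean(),
                                       (xs ** 2).mean() - (ys ** 2).mean()))

if __name__ == "__main__":
    depth = np.random.rand(480, 640)                 # stand-in for a sensor depth image of the pile
    candidate = extract_pick_candidate(depth)        # component picking step
    # ... the robot picks the candidate and moves it to a predetermined place ...
    isolated = (np.random.rand(100, 100) > 0.7).astype(np.uint8)  # stand-in image of the moved object
    print("pick at", candidate.position, "estimated angle:", estimate_angle(isolated))
```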
Abstract:
Provided are a human-tracking method and a robot apparatus. The human-tracking method includes receiving an image frame including a color image and a depth image, determining whether user tracking was successful in a previous image frame, and, when user tracking was successful in the previous image frame, determining a location of a user and a goal position to which a robot apparatus is to move based on the color image and the depth image in the image frame. Accordingly, a current location of the user can be predicted from the depth image, user tracking can be performed quickly, and the user can be re-detected and tracked using user information acquired during tracking when detection of the user fails due to obstacles or the like.
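A minimal sketch of the per-frame loop implied by the abstract: if tracking succeeded in the previous frame, the current location is predicted from the depth image near the previous location; otherwise the user is re-detected over the full frame. The `HumanTracker` class and its nearest-pixel detection/tracking rules are hypothetical placeholders, not the disclosed method.

```python
import numpy as np

class HumanTracker:
    """Hypothetical per-frame tracker: track from the previous location when possible,
    otherwise fall back to full-frame re-detection."""

    def __init__(self):
        self.prev_location = None   # (row, col) of the user in the previous frame, None if lost

    def detect_user(self, color: np.ndarray, depth: np.ndarray):
        """Full-frame detection stand-in: here, simply the pixel nearest the camera."""
        return np.unravel_index(np.argmin(depth), depth.shape)

    def track_user(self, depth: np.ndarray, window: int = 40):
        """Predict the current location from the depth image around the previous location."""
        r, c = self.prev_location
        r0, r1 = max(0, r - window), min(depth.shape[0], r + window)
        c0, c1 = max(0, c - window), min(depth.shape[1], c + window)
        patch = depth[r0:r1, c0:c1]
        dr, dc = np.unravel_index(np.argmin(patch), patch.shape)
        return (r0 + dr, c0 + dc)

    def update(self, color: np.ndarray, depth: np.ndarray):
        if self.prev_location is None:       # tracking failed (or first frame): re-detect the user
            self.prev_location = self.detect_user(color, depth)
        else:                                # tracking succeeded previously: predict from depth
            self.prev_location = self.track_user(depth)
        goal = self.prev_location            # goal position the robot apparatus should move toward
        return self.prev_location, goal

if __name__ == "__main__":
    tracker = HumanTracker()
    color = np.zeros((240, 320, 3), dtype=np.uint8)          # stand-in color image
    depth = np.random.rand(240, 320).astype(np.float32)      # stand-in depth image
    print(tracker.update(color, depth))
```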
Abstract:
The present invention relates to a device and a method for detecting arbitrarily piled objects, and to a device and a method for picking a detected object. The present invention may provide a device and a method for detecting an object, which extract a unique local characteristic part of the object using a visual sensor, detect an object region, and estimate a posture from the detected object region. The present invention may also provide an object picking system that can be applied to an actual production process, such as assembling or packaging, in a cell production method.
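A minimal sketch of the detect-then-estimate flow under the assumption that the "unique local characteristic part" is realized as local image features. ORB features, brute-force matching, and a RANSAC homography are generic substitutes chosen for illustration; the actual feature and posture model of the invention may differ.

```python
import cv2
import numpy as np

def detect_and_estimate_pose(model_img: np.ndarray, scene_img: np.ndarray):
    """Locate the object region in the scene by local-feature matching and return a
    homography from which an in-plane posture can be read (or None on failure)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_m, des_m = orb.detectAndCompute(model_img, None)   # features of the known object
    kp_s, des_s = orb.detectAndCompute(scene_img, None)   # features of the piled-objects scene
    if des_m is None or des_s is None:
        return None                                        # no distinctive local parts found
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_m, des_s)
    if len(matches) < 4:
        return None                                        # too few matches to estimate a posture
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # maps the model onto the detected object region in the scene
```

In a picking system the homography (or a full 6-DOF pose computed analogously from 3-D sensor data) would then be handed to the robot to grasp the detected object.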
Abstract:
An apparatus includes an image receiving module configured to collect a depth image provided from a camera, a human body detection module configured to detect a human body from the collected depth image, and an activity recognition module configured to recognize an action of the human body on the basis of a 3-dimensional action volume extracted from the human body and a previously learned action model.
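A minimal sketch of the module chain described above: collect depth frames, detect the human body, build a 3-dimensional action volume from the detected body over time, and recognize the action against previously learned models. The depth-range segmentation and nearest-model classifier are hypothetical stand-ins for the apparatus's actual detection and learned action model.

```python
import numpy as np

def segment_body(depth: np.ndarray, near: float = 0.5, far: float = 3.0) -> np.ndarray:
    """Human body detection stand-in: keep pixels within a plausible depth range (meters)."""
    return ((depth > near) & (depth < far)).astype(np.float32)

def action_volume(depth_frames: list) -> np.ndarray:
    """Stack per-frame body masks along time into a 3-D (T, H, W) action volume."""
    return np.stack([segment_body(f) for f in depth_frames], axis=0)

def recognize(volume: np.ndarray, learned_models: dict) -> str:
    """Classify by nearest previously learned action volume (stand-in for the action model)."""
    flat = volume.ravel()
    return min(learned_models, key=lambda k: np.linalg.norm(flat - learned_models[k].ravel()))

if __name__ == "__main__":
    frames = [np.random.rand(120, 160) * 4.0 for _ in range(16)]   # stand-in depth stream from a camera
    vol = action_volume(frames)                                     # 3-D action volume of the body
    models = {"wave": np.random.rand(*vol.shape), "walk": np.random.rand(*vol.shape)}
    print("recognized action:", recognize(vol, models))
```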