Abstract:
A cleaning robot includes a data acquisition unit that acquires actual sensor data by measuring a distance from a current position to an object to be measured; a local map acquisition unit that acquires a local map by scanning the vicinity of the current position based on an environmental map stored in advance; and a processor that determines coordinates of the current position for the local map by performing matching between the local map and the actual sensor data, and determines a traveling direction based on the current position by calculating a main segment angle of a line segment existing in the local map.
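The traveling-direction step above can be sketched as a length-weighted vote over segment angles. The segment representation (endpoint pairs) and the 1-degree binning are assumptions made for illustration, not the patent's actual method:

```python
import math

def main_segment_angle(segments):
    """Dominant undirected angle (radians, in [0, pi)) of map line segments,
    chosen by a length-weighted vote over 1-degree angle bins."""
    bins = {}
    for (x1, y1), (x2, y2) in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        # Undirected lines: fold angles into [0, 180) degrees.
        angle_deg = round(math.degrees(math.atan2(y2 - y1, x2 - x1))) % 180
        bins[angle_deg] = bins.get(angle_deg, 0.0) + length
    return math.radians(max(bins, key=bins.get))

# Two long horizontal wall segments outvote one short diagonal:
segs = [((0, 0), (10, 0)), ((0, 1), (8, 1)), ((0, 0), (1, 1))]
angle = main_segment_angle(segs)  # 0.0 (horizontal direction dominates)
```

Weighting by length makes long walls, rather than short clutter, decide the traveling direction.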
Abstract:
A mobile robot configured to move on the ground. The mobile robot includes a contact angle estimation unit estimating contact angles between the wheels of the mobile robot and the ground, together with the uncertainties associated with the contact angles; a traction force estimation unit estimating traction forces applied to the wheels and their uncertainties; a normal force estimation unit estimating normal forces applied to the wheels and their uncertainties; a friction coefficient estimation unit estimating friction coefficients between the wheels and the ground; a friction coefficient uncertainty estimation unit estimating the uncertainties of those friction coefficients; and a controller that determines, from among the estimated friction coefficients, the maximum friction coefficient whose uncertainty is less than a threshold, evaluated at the point in time when the torque applied to each wheel changes from an increasing state to a decreasing state.
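The controller's selection rule can be sketched for a single wheel as follows; the time-series interface and the simple three-sample peak test are illustrative assumptions, not the patent's estimator:

```python
def peak_friction(torques, mus, mu_sigmas, threshold):
    """Friction coefficient at the sample where torque turns from rising to
    falling, accepted only if its uncertainty is below `threshold`;
    returns None if no such sample exists.

    All arguments are per-wheel time series (a hypothetical interface)."""
    for t in range(1, len(torques) - 1):
        rising = torques[t] > torques[t - 1]
        falling = torques[t + 1] < torques[t]
        if rising and falling and mu_sigmas[t] < threshold:
            return mus[t]
    return None

# One wheel whose applied torque peaks at index 2:
torque = [1.0, 2.0, 3.0, 2.5, 2.0]
mu     = [0.2, 0.4, 0.7, 0.6, 0.5]
sigma  = [0.30, 0.10, 0.05, 0.05, 0.05]
mu_max = peak_friction(torque, mu, sigma, threshold=0.1)  # 0.7
```

The controller described above would take the maximum of such per-wheel values across all wheels.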
Abstract:
An image processing apparatus for searching for feature points using a depth image, and a method thereof, are provided. The image processing apparatus includes an input unit configured to receive a three-dimensional image having depth information, and a feature point extraction unit configured to select a designated point from an object image extracted from the depth image, obtain a feature point located at a substantially farthest distance from the designated point, and obtain further feature points each located at a substantially farthest distance from the previously obtained feature points and the designated point. The apparatus also includes a control unit configured to control the input unit and the feature point extraction unit so that the time taken to estimate a structure of the object is reduced and the recognition result is enhanced.
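The farthest-distance extraction rule reads like classic farthest-point sampling, which can be sketched in 2-D as follows (the point representation and distance metric are illustrative assumptions):

```python
def farthest_point_features(points, seed, k):
    """Pick k feature points, each maximizing its distance to the nearest of
    the already-chosen points (the seed plus previous features) -- a plain
    farthest-point-sampling reading of the extraction rule above."""
    def d2(a, b):  # squared Euclidean distance
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    chosen = [seed]
    features = []
    for _ in range(k):
        # Candidate score: distance to its nearest already-chosen point.
        best = max(points, key=lambda p: min(d2(p, c) for c in chosen))
        chosen.append(best)
        features.append(best)
    return features

pts = [(0, 0), (1, 0), (4, 0), (4, 3)]
feats = farthest_point_features(pts, seed=(0, 0), k=2)  # [(4, 3), (4, 0)]
```

Each new feature point spreads maximally away from everything chosen so far, which is why few samples already cover the object's extremities.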
Abstract:
An apparatus and a method for pose recognition, the method including generating a model of a human body in a virtual space; predicting a next pose of the model based on a state vector having the angle and the angular velocity of each part of the human body as state variables; predicting a depth image of the predicted pose; and recognizing the pose of a human in an actually captured depth image based on a similarity between the predicted depth image and the captured depth image. Because the next pose is predicted from a state vector that includes angular velocity, the number of pose samples to be generated is reduced and the pose recognition speed is improved.
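The prediction step can be sketched with a constant-angular-velocity motion model per joint; the joint names, units, and dictionary interface are assumptions for illustration:

```python
def predict_pose(state, dt):
    """Advance an (angle, angular_velocity) state vector one time step under
    a constant-angular-velocity model -- the prediction step sketched above.

    `state` maps joint name -> (angle, angular_velocity); hypothetical
    units are radians and rad/s."""
    return {joint: (angle + omega * dt, omega)
            for joint, (angle, omega) in state.items()}

pose = {"elbow": (0.5, 0.2), "knee": (1.0, -0.1)}
next_pose = predict_pose(pose, dt=0.1)
# elbow angle -> ~0.52, knee angle -> ~0.99
```

Knowing each joint's angular velocity narrows where the next pose can plausibly be, so far fewer pose samples need to be rendered and compared.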
Abstract:
A server may include a communication circuit communicating with a user terminal, storage including a fingerprint DB storing fingerprints corresponding to a plurality of points and a signal fluctuation probability DB, and a processor electrically connected to the communication circuit and the storage. The processor may be configured to store, in the signal fluctuation probability DB, a similarity between a first signal strength and a second signal strength, which is determined based on the probability that the pair of the first signal strength and the second signal strength received from a first AP occurs with respect to the fingerprints corresponding to a first point; to obtain, from the user terminal, a fingerprint including a signal strength received from the first AP; and to determine a location of the user terminal based on the obtained fingerprint and the signal fluctuation probability DB.
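The lookup step can be sketched as a likelihood score per candidate point: the probability that each stored strength fluctuates into the observed one, multiplied across APs. The toy fluctuation model and all names here are illustrative assumptions, not the patent's DB contents:

```python
def locate(observed, fingerprint_db, fluctuation_prob):
    """Pick the fingerprint point whose stored signal strengths are most
    likely to fluctuate into the observed ones.

    `fluctuation_prob(stored, observed)` stands in for a lookup into the
    signal fluctuation probability DB described above."""
    best_point, best_score = None, -1.0
    for point, stored in fingerprint_db.items():
        score = 1.0
        for ap, rssi in observed.items():
            score *= fluctuation_prob(stored.get(ap, -100), rssi)
        if score > best_score:
            best_point, best_score = point, score
    return best_point

# Toy fluctuation model: probability decays with the strength difference.
def toy_prob(stored, observed):
    return 1.0 / (1.0 + abs(stored - observed))

db = {"lobby": {"AP1": -40}, "office": {"AP1": -70}}
where = locate({"AP1": -42}, db, toy_prob)  # "lobby"
```

Modeling fluctuation probabilities, rather than comparing raw strengths, makes the match robust to the normal variation of Wi-Fi signals over time.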
Abstract:
Embodiments of the present disclosure relate to a movable object and a method for controlling the same. A method for controlling a movable object may include acquiring virtual data representing distances between each of a plurality of positions within an area and surfaces in the area, in a plurality of directions, respectively, based on a map of the area. An algorithm, such as a machine learning algorithm, may be executed that outputs positions corresponding to the virtual data. Actual distance data between the movable object and a plurality of surfaces in the vicinity of the movable object may be acquired. An actual position of the movable object may then be estimated corresponding to the actual distance data by executing the algorithm using the actual distance data. The movable object may be controlled based on the estimated actual position.
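The pipeline above (simulate virtual distance data from the map, fit an algorithm mapping that data to positions, then query it with actual measurements) can be sketched with a toy one-dimensional corridor, using a nearest-neighbour lookup as a stand-in for the machine learning algorithm; the corridor geometry and every function name are illustrative assumptions:

```python
def simulate_scan(pos, walls=(0.0, 10.0)):
    """Virtual distances from `pos` to the two walls of a 1-D corridor --
    a toy stand-in for map-based distance simulation."""
    left, right = walls
    return (pos - left, right - pos)

def fit(positions):
    """'Train' by tabulating (virtual scan -> position) pairs."""
    return [(simulate_scan(p), p) for p in positions]

def estimate(model, actual_scan):
    """Nearest-neighbour lookup standing in for the learned algorithm:
    return the position whose virtual scan best matches the actual one."""
    def err(scan):
        return sum((a - b) ** 2 for a, b in zip(scan, actual_scan))
    return min(model, key=lambda pair: err(pair[0]))[1]

model = fit([1.0, 2.0, 5.0, 8.0])
pos = estimate(model, actual_scan=(4.8, 5.2))  # closest to position 5.0
```

The key idea survives the simplification: training data comes entirely from the map, so no physical survey of the area is needed before the movable object can localize itself.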
Abstract:
Disclosed herein is a control method of a robot including: calculating hardness information about the ground on which a wearer moves; and controlling the robot according to the calculated hardness information.
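One plausible reading of the two steps above, sketched for a wearable assist robot: estimate hardness as a contact stiffness, then pick a control gain from it. Both the estimation proxy and the control rule are assumptions made for illustration; the abstract does not specify either:

```python
def ground_hardness(force, deflection):
    """Hardness estimate as contact force per unit deflection (N/m) --
    a simple spring-stiffness proxy, not the patent's method."""
    return force / deflection

def assist_gain(hardness, soft=1.2, hard=0.8, threshold=5000.0):
    """More assistance on soft ground, less on hard ground -- one
    plausible control rule for the calculated hardness information."""
    return soft if hardness < threshold else hard

# Firm ground: 600 N of contact force deflects the surface only 2 mm.
gain = assist_gain(ground_hardness(force=600.0, deflection=0.002))  # 0.8
```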
Abstract:
Disclosed herein are an endoscope system and a control method thereof. The control method includes acquiring plural omnidirectional images of the surroundings of an endoscope using a stereo omnidirectional camera mounted on the endoscope, calculating distances between the endoscope and an object around the endoscope using the acquired plural omnidirectional images, and executing an operation to avoid collision between the endoscope and the object around the endoscope based on the calculated distances, thus facilitating safe operation of the endoscope.
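The distance step can be sketched with the standard depth-from-disparity relation for a rectified stereo pair, Z = f·B/d, followed by a trivial avoidance rule. The numbers and the stop/advance policy are illustrative, and the patent's omnidirectional camera geometry is more involved than this pinhole sketch:

```python
def stereo_distance(baseline, focal_px, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d.
    Generic pinhole-model parameters; the endoscope's omnidirectional
    optics would need their own projection model."""
    return focal_px * baseline / disparity_px

def avoid_collision(distances, margin):
    """Return 'stop' if any measured object is closer than `margin`."""
    return "stop" if min(distances) < margin else "advance"

# 4 mm baseline, 500 px focal length, 40 px disparity -> 0.05 m.
d = stereo_distance(baseline=0.004, focal_px=500.0, disparity_px=40.0)
action = avoid_collision([d, 0.12, 0.30], margin=0.06)  # "stop"
```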
Abstract:
A method of controlling a mobile apparatus includes acquiring a first original image and a second original image, extracting a first feature point of the first original image and a second feature point of the second original image, generating a first blurred image and a second blurred image by blurring the first original image and the second original image, respectively, calculating a similarity between at least two of the first original image, the second original image, the first blurred image, and the second blurred image, determining a change in scale of the second original image based on the calculated similarity, and controlling at least one of object recognition and position recognition by matching the second feature point of the second original image to the first feature point of the first original image based on the change in scale.
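The similarity comparison can be sketched on 1-D signals: if the second image matches a blurred first image better than the reverse, the second image is treated as the coarser (scaled-up) view. The moving-average blur, the similarity measure, and this decision rule are one plausible reading of the abstract, not the patent's definitions:

```python
def blur(signal, k=3):
    """Moving-average blur (a stand-in for Gaussian blurring)."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def similarity(a, b):
    """Negative mean squared difference: larger means more similar."""
    return -sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def scale_change(img1, img2):
    """Compare img2 against blurred img1 and img1 against blurred img2
    to decide which image is the coarser view."""
    s_up = similarity(blur(img1), img2)
    s_down = similarity(img1, blur(img2))
    if s_up > s_down:
        return "scaled up"
    if s_down > s_up:
        return "scaled down"
    return "unchanged"

sharp = [0, 0, 0, 1, 1, 1]          # a sharp edge
change = scale_change(sharp, blur(sharp))  # "scaled up"
```

Knowing the scale change up front lets the feature matcher compare the two images at compatible scales instead of searching over all of them.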
Abstract:
An electronic device to be placed on a cradle may include a housing including a part in a hemispherical shape that physically comes into contact with the cradle at an arbitrary position when the electronic device is placed on the cradle, a display arranged on another part of the housing, a camera module to obtain an image in the direction the display faces, a sensor module to sense an orientation of the electronic device, and a processor to determine a target orientation of the electronic device based on the obtained image and to create control data to change the orientation of the electronic device based on the sensed orientation and the target orientation.
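The control-data step can be sketched, for a single axis, as proportional control on the wrap-around angular error between the sensed and target orientations. The single-axis reduction and the gain are assumptions; the device's real orientation is three-dimensional:

```python
def control_step(sensed_deg, target_deg, gain=0.5):
    """Proportional command toward the target heading, taking the shorter
    way around the circle (error folded into [-180, 180) degrees)."""
    error = (target_deg - sensed_deg + 180.0) % 360.0 - 180.0
    return gain * error

# Sensed 350 deg, target 10 deg: the short way is +20 deg, not -340.
cmd = control_step(sensed_deg=350.0, target_deg=10.0)  # 10.0
```

Folding the error first matters because the hemispherical housing can come to rest at any heading, so naive subtraction could command a nearly full rotation the wrong way.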