Abstract:
An apparatus and a method for inputting a character using a user's motion are provided. When the user's hand makes a motion, a character input pattern is recognized from the motion, the character to be input is determined by comparing the recognized pattern against character input pattern information that predefines various character input patterns and their corresponding characters, and the determined character is displayed.
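The abstract does not fix a concrete representation for the pattern comparison. Below is a minimal Python sketch, assuming the motion has already been reduced to a sequence of stroke directions and the predefined pattern information is a simple lookup table; the stroke vocabulary and table contents are illustrative assumptions, not the patented method.

```python
from typing import Optional, Sequence

# Hypothetical predefined pattern information: stroke-direction
# sequences mapped to the characters they represent.
PATTERN_TABLE = {
    ("down",): "I",
    ("down", "right"): "L",
    ("right", "down", "right"): "Z",
}

def match_character(recognized: Sequence[str]) -> Optional[str]:
    """Compare the recognized character input pattern against the
    predefined patterns and return the matching character, if any."""
    return PATTERN_TABLE.get(tuple(recognized))

print(match_character(["down", "right"]))  # -> 'L'
```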
Abstract:
Provided is an apparatus for recognizing user postures. The apparatus recognizes detailed postures, such as a standing posture, a bending-forward posture, a knee-bending posture, a rightward-tilt posture, and a leftward-tilt posture, based on the variation between body information at a previous time and body information at the current time.
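The abstract specifies classification from frame-to-frame variation but names neither the body features nor the decision rules. A minimal Python sketch, assuming body information is reduced to head height, knee angle, and lateral torso tilt, with illustrative threshold values:

```python
from dataclasses import dataclass

@dataclass
class BodyInfo:
    head_y: float      # vertical head position (meters)
    knee_angle: float  # knee joint angle (degrees, 180 = straight)
    torso_tilt: float  # lateral torso tilt (degrees, positive = right)

def classify_posture(prev: BodyInfo, cur: BodyInfo) -> str:
    """Classify a detailed posture from the variation between body
    information at a previous time and at the current time.
    All thresholds below are assumed values for illustration."""
    if cur.knee_angle - prev.knee_angle < -20:
        return "bending knees"
    if cur.head_y - prev.head_y < -0.2:
        return "bending forward"
    if cur.torso_tilt - prev.torso_tilt > 10:
        return "tilt right"
    if cur.torso_tilt - prev.torso_tilt < -10:
        return "tilt left"
    return "stand"
```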
Abstract:
Provided are an apparatus and method for inputting a character based on a hand gesture. The apparatus includes a gesture recognition unit configured to recognize hand gestures of a user in horizontal, vertical, and diagonal directions; a control unit configured to control character input according to the directions of the recognized hand gestures; and a display unit configured to display a character pad for the character input and to display the characters input on the character pad according to the directions of the recognized hand gestures.
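A direction-quantization step like the one the gesture recognition unit performs can be sketched as follows; the eight-way split and the use of a planar displacement vector (dx, dy) are assumptions, since the abstract names only horizontal, vertical, and diagonal directions:

```python
import math

# Hypothetical labels covering the horizontal, vertical, and diagonal
# directions named in the abstract.
DIRECTIONS = ["right", "up-right", "up", "up-left",
              "left", "down-left", "down", "down-right"]

def gesture_direction(dx: float, dy: float) -> str:
    """Quantize a hand displacement (dx, dy) into one of eight
    directions. Assumes y grows upward; flip dy for image coordinates."""
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]

print(gesture_direction(1.0, 1.0))  # -> 'up-right'
```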
Abstract:
Provided are an apparatus and method for inputting a character. The apparatus includes a recognition unit configured to measure lengths from arbitrary points on a user's hands to the respective fingertips and to recognize a click gesture using the measured lengths; a control unit configured to control character input according to the recognized click gesture; and a display unit configured to display a character pad for the character input and to display the character input on the character pad according to the recognized click gesture.
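The click recognition can be sketched as a comparison of the current point-to-fingertip length against a reference length captured with the finger extended; the threshold ratio is an assumed value, as the abstract does not state one:

```python
def is_click(reference_length: float, current_length: float,
             ratio: float = 0.7) -> bool:
    """Recognize a click gesture when the measured point-to-fingertip
    length drops below a fraction of its reference value (finger
    extended). The 0.7 ratio is an assumed threshold."""
    return current_length < ratio * reference_length

# Usage: capture the reference length while the finger is extended,
# then test each new measurement.
print(is_click(reference_length=9.0, current_length=5.5))  # -> True
```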
Abstract:
Provided are an apparatus and method for controlling a virtual mouse based on a hand motion. The apparatus includes an image processing unit configured to measure a maximum horizontal width of a user's hand region and a width of the index finger in either a first depth image, captured with the index finger extended vertically from the fist, or a second depth image, captured with the index finger folded into the fist; a hand motion recognizing unit configured to compare the maximum horizontal width with the width of the index finger to recognize whether the index finger is folded; and a function matching unit configured to match a change in the folded state of the index finger to a pre-set first function and output a control signal for that function.
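A minimal sketch of the width comparison, assuming the hand region has already been segmented from the depth image into a binary mask with the fingers pointing up; the threshold ratio and the press/release mapping in on_fold_change are illustrative assumptions:

```python
from typing import Optional
import numpy as np

def index_finger_folded(hand_mask: np.ndarray,
                        finger_ratio: float = 0.3) -> bool:
    """Compare the width at the top of the hand region (roughly the
    index finger width when extended) with the maximum horizontal
    width (roughly the fist). hand_mask is a 2D boolean array, True
    where a depth pixel belongs to the hand; assumed non-empty."""
    row_widths = hand_mask.sum(axis=1)
    occupied = np.nonzero(row_widths)[0]
    top_width = row_widths[occupied[0]]   # width of topmost hand row
    max_width = row_widths.max()
    # Extended finger -> a strip much narrower than the fist protrudes
    # above it; no such narrow strip means the finger is folded.
    return top_width > finger_ratio * max_width

def on_fold_change(prev_folded: bool, cur_folded: bool) -> Optional[str]:
    """Match a change in the folded state to a pre-set function; the
    mouse press/release mapping here is an assumption."""
    if cur_folded and not prev_folded:
        return "mouse_down"
    if prev_folded and not cur_folded:
        return "mouse_up"
    return None
```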