Abstract:
Provided are an apparatus and method for controlling a virtual mouse based on a hand motion. The apparatus includes an image processing unit configured to measure a maximum horizontal width of a hand region of a user and a width of an index finger in either a first depth image, captured in a state in which the index finger is spread out in a vertical direction from the fist, or a second depth image, captured in a state in which the index finger is folded into the fist; a hand motion recognizing unit configured to compare the maximum horizontal width with the width of the index finger to recognize whether the index finger is folded; and a function matching unit configured to match a change in the folded state of the index finger to a pre-set first function and output a control signal for executing the function.
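The width comparison and state-change logic described above can be sketched as follows; the fold-ratio threshold and the click callback are illustrative assumptions, not values taken from the abstract:

```python
class VirtualMouseClicker:
    """Minimal sketch of the virtual-mouse logic: the index finger is judged
    folded when its measured width approaches the hand region's maximum
    horizontal width, and a change into the folded state triggers the
    pre-set first function (e.g. a mouse click)."""

    FOLD_RATIO = 0.5  # assumption: finger width / hand width above this => folded

    def __init__(self, on_click):
        self.on_click = on_click   # the pre-set first function
        self.prev_folded = False

    def update(self, max_hand_width, index_finger_width):
        folded = index_finger_width / max_hand_width > self.FOLD_RATIO
        if folded and not self.prev_folded:
            self.on_click()        # state change: spread -> folded
        self.prev_folded = folded
        return folded
```

In a real system the two widths would be produced by the image processing unit for every depth frame, so `update` would be called once per frame.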
Abstract:
The present invention relates to an image display apparatus and method for displaying a three-dimensional (3D) image capable of interacting with a user. The invention provides a 3D image display apparatus and method that may interact with the user by reproducing a 3D image, generated by a conventional volume display apparatus, at a position different from the position at which the 3D image is generated, and by modifying and playing back the reproduced 3D image based on a motion of the user. Also, the present invention enables the user to realistically interact with the 3D image by stimulating the user's tactile sense through ultrasound.
Abstract:
An autostereoscopic three-dimensional (3D) display apparatus is provided. The autostereoscopic 3D display apparatus includes an image display unit configured to display a 3D image including a 3D virtual object, or a 3D image including a 3D virtual object and text; and an optical unit configured to reflect or transmit the 3D image displayed by the image display unit toward a viewer, transmit an image of a real object facing the viewer, and present a combination of the 3D image and the image of the real object to the viewer.
Abstract:
Provided are a character input apparatus and method based on handwriting. The character input apparatus based on handwriting includes an image obtaining unit configured to obtain an image of a hand moving in space; a pattern storage unit configured to store the various types of character input pattern information required for inputting characters; a pattern recognizing unit configured to recognize a character input pattern from the image obtained by the image obtaining unit, compare the recognized character input pattern with the character input pattern information stored in the pattern storage unit, and output the character matched to the recognized pattern; and an input character display unit configured to display the character output from the pattern recognizing unit.
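The pattern recognizing unit's matching step can be sketched as below, assuming the hand trajectory has already been quantized into a sequence of stroke directions and that the stored pattern information maps such sequences to characters (both are assumptions; the abstract does not specify the pattern representation):

```python
def match_character(stroke_dirs, pattern_store):
    """Return the stored character whose direction sequence best matches
    the recognized input pattern (position-wise agreement between
    equal-length sequences); None if nothing matches."""
    best, best_score = None, 0
    for dirs, ch in pattern_store.items():
        if len(dirs) != len(stroke_dirs):
            continue  # only compare patterns with the same stroke count
        score = sum(a == b for a, b in zip(dirs, stroke_dirs))
        if score > best_score:
            best, best_score = ch, score
    return best

# Hypothetical pattern store: direction sequences mapped to characters.
patterns = {("down", "right"): "L", ("right", "down", "right"): "Z"}
```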
Abstract:
Provided is an apparatus for providing virtual reality content of a moving means, the apparatus including a plurality of transparent display units mounted on front and side surfaces of the moving means and configured to display virtual reality content, a dynamic motion sensing unit configured to detect a three-dimensional (3D) dynamic motion of the moving means, and a control unit configured to control reproduction of previously stored virtual reality content so as to display virtual reality content corresponding to the 3D dynamic motion on the plurality of transparent display units, when the 3D dynamic motion of the moving means is detected by the dynamic motion sensing unit.
Abstract:
Provided are a navigation gesture recognition system and a gesture recognition method thereof. The navigation gesture recognition system includes a depth camera configured to capture a depth image, a memory configured to store a program for recognizing a navigation gesture of an object, and a processor configured to execute the program stored in the memory, wherein the processor executes the program to generate a virtual spatial touch screen from the depth image, detect a moving direction of a gesture of the object included in the depth image on the spatial touch screen, and generate a control command corresponding to the moving direction.
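The direction-to-command step can be sketched as a classification of the gesture's displacement on the virtual spatial touch screen; the normalized coordinate convention (x rightward, y upward), the 0.05 dead-zone, and the command names are all illustrative assumptions:

```python
def classify_swipe(start, end, min_dist=0.05):
    """Map the moving direction of a gesture on the virtual spatial touch
    screen to a navigation control command, or None if the hand barely
    moved. start/end are (x, y) points on the virtual screen."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if max(abs(dx), abs(dy)) < min_dist:
        return None  # movement too small to count as a deliberate gesture
    if abs(dx) >= abs(dy):
        return "NEXT" if dx > 0 else "PREV"   # dominant horizontal motion
    return "UP" if dy > 0 else "DOWN"         # dominant vertical motion
```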
Abstract:
A three-dimensional (3D) transparent display device is provided. The 3D transparent display device includes a position obtaining unit configured to obtain information regarding 3D positions of both eyes of a user and a 3D position of a real object; a controller configured to estimate a two-dimensional (2D) position on a display screen at which an image of a virtual object is to be displayed on the basis of the 3D positions of both eyes of the user and the 3D position of the real object; and a 3D transparent display panel configured to display the image of the virtual object on the display screen on the basis of the estimated 2D position of the virtual object.
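The controller's 2D position estimate can be sketched as a ray-plane intersection. Placing the display panel in the z = 0 plane and sighting from the midpoint of the two eyes are illustrative assumptions, since the abstract does not fix a coordinate frame:

```python
def screen_position(eye_left, eye_right, real_object):
    """Intersect the line from the eyes' midpoint to the real object with
    the display plane (assumed z = 0); the returned (x, y) is the 2D point
    where the virtual object's image should be drawn so that it appears
    overlaid on the real object. All inputs are (x, y, z) tuples."""
    ex = (eye_left[0] + eye_right[0]) / 2.0
    ey = (eye_left[1] + eye_right[1]) / 2.0
    ez = (eye_left[2] + eye_right[2]) / 2.0
    ox, oy, oz = real_object
    t = -ez / (oz - ez)  # ray parameter at which z reaches the panel (z = 0)
    return (ex + t * (ox - ex), ey + t * (oy - ey))
```

With the eyes 0.5 m behind the panel and the object 0.5 m in front, the drawn point lands halfway between the eye midpoint and the object in x and y.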
Abstract:
Provided is a user interface (UI) apparatus based on a hand gesture. The UI apparatus includes an image processing unit configured to detect a position of an index finger and a center position of a hand from a depth image obtained by photographing a user's hand, and detect a position of a thumb on the basis of the detected position of the index finger and the detected center position of the hand, a hand gesture recognizing unit configured to recognize a position change of the index finger and a position change of the thumb, and a function matching unit configured to match the position change of the index finger to a predetermined first function, match the position change of the thumb to a predetermined second function, and output a control signal for executing each of the matched functions.
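The function matching unit's dispatch can be sketched as follows; the 0.02 movement threshold and the roles assigned to the two functions (cursor movement for the index finger, click for the thumb) are assumptions for illustration:

```python
import math

def dispatch_gestures(index_delta, thumb_delta, first_fn, second_fn, thresh=0.02):
    """Sketch of the function-matching unit: a position change of the index
    finger beyond a threshold invokes the predetermined first function, and
    a position change of the thumb invokes the second function. Deltas are
    (dx, dy) displacements between consecutive frames."""
    if math.hypot(index_delta[0], index_delta[1]) > thresh:
        first_fn(index_delta)   # e.g. move the cursor by the index motion
    if math.hypot(thumb_delta[0], thumb_delta[1]) > thresh:
        second_fn()             # e.g. click when the thumb moves
```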
Abstract:
The present application relates to a method of representing content of an augmented reality (AR) simulator, and more specifically, to a service method for representing AR content in a simulator for a moving apparatus (an automobile, a monorail car, a cable car, a drone, etc.) according to a background. The method of representing content of an AR simulator includes recognizing an AR code using background image content presented by a background screen controller, synchronizing AR content corresponding to the recognized AR code with the background image content, and outputting the synchronized AR content.
Abstract:
Provided is an apparatus for recognizing user postures. The apparatus recognizes detailed postures, such as a standing posture, a bending-forward posture, a bending-knees posture, a tilt-right posture, and a tilt-left posture, based on the variation between body information at a previous time and body information at a current time.
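The variation-based classification can be sketched as threshold tests on the change in a few assumed body measurements; the keys (head height, hip height, lateral head position) and the thresholds below are illustrative assumptions, since the abstract does not specify which body information is compared:

```python
def classify_posture(prev, curr):
    """Classify a detailed posture from the change between body information
    at the previous time (prev) and the current time (curr). Both are dicts
    with assumed keys 'head_y' and 'hip_y' (heights in meters) and 'head_x'
    (lateral position in meters)."""
    if curr["head_y"] < prev["head_y"] - 0.2:
        # The head dropped noticeably: bending at the knees or at the waist.
        if curr["hip_y"] < prev["hip_y"] - 0.1:
            return "bending knees"   # the hips dropped as well
        return "bending forward"     # hips stayed up: bend at the waist
    dx = curr["head_x"] - prev["head_x"]
    if dx > 0.15:
        return "tilt right"
    if dx < -0.15:
        return "tilt left"
    return "stand"
```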