Abstract:
A wind-pressure type of haptic firefighting nozzle interface that interworks with virtual reality (VR) content for virtual firefighting training is provided. The wind-pressure type of haptic firefighting nozzle interface includes: a flow controller for adjusting the spraying intensity of the water sprayed in the VR content; a stream shaper for adjusting the spray shape according to the radiation angle of the water sprayed in the VR content; and at least one first haptic device for providing, using wind pressure, haptic feedback corresponding to the spraying intensity and the spray shape determined through control of the flow controller and the stream shaper.
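As an illustration of how such an interface might drive its wind-pressure feedback, the following Python sketch maps a flow-controller setting and a stream-shaper radiation angle to a fan duty cycle. All names, value ranges, and the force model are assumptions for illustration, not the patented design.

```python
# Illustrative sketch only: maps hypothetical flow/shape settings to a
# wind-pressure (fan) command for haptic feedback. All names are assumptions.

def haptic_wind_command(flow_level: float, radiation_angle_deg: float,
                        max_pwm: int = 255) -> dict:
    """Convert nozzle settings into a fan drive command.

    flow_level: 0.0..1.0 spraying intensity from the flow controller.
    radiation_angle_deg: 0 (straight stream) .. 60 (wide fog) from the
        stream shaper; a wider spray spreads the same flow over more area,
        so the per-area pressure (and fan duty) drops.
    """
    flow_level = max(0.0, min(1.0, flow_level))
    spread = max(0.0, min(1.0, radiation_angle_deg / 60.0))
    # Assumed model: perceived reaction force scales with flow and weakens
    # as the spray cone widens; a straight stream feels strongest.
    intensity = flow_level * (1.0 - 0.5 * spread)
    return {"fan_pwm": int(intensity * max_pwm),
            "cone_angle_deg": radiation_angle_deg}

if __name__ == "__main__":
    print(haptic_wind_command(flow_level=0.8, radiation_angle_deg=30.0))
```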
Abstract:
Disclosed herein are a motion-based virtual-reality simulator device and a method for controlling the same. The motion-based virtual-reality simulator device includes a virtual-reality interface for outputting virtual-reality content in the form of multi-sense data perceptible by a user; a virtual-reality movement implementation unit including a track component that performs continuous track motion while a user stands on it, the track component being configured such that its slope is controlled while its surface is deformed into an even or uneven surface; and an interworking control unit for outputting a control signal to the virtual-reality movement implementation unit so that the slope and surface of the track component are adjusted in accordance with the virtual-reality content output through the virtual-reality interface.
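The interworking control step lends itself to a small sketch: below, a hypothetical control function clamps the terrain slope reported by the VR content to the hardware's range and converts a roughness map into per-actuator surface heights. The function and parameter names are assumptions, not the patented control method.

```python
# Illustrative sketch, not the patented method: an interworking control
# step that derives track slope and surface-deformation commands from the
# terrain currently shown by the VR content. All names are assumptions.
from dataclasses import dataclass

@dataclass
class TrackCommand:
    slope_deg: float           # pitch of the track component
    actuator_heights_mm: list  # per-cell surface deformation (0 = even)

def interworking_control(terrain_slope_deg: float,
                         roughness_map: list,
                         max_slope_deg: float = 15.0,
                         max_height_mm: float = 40.0) -> TrackCommand:
    """Clamp the VR terrain slope to the hardware range and turn a
    0..1 roughness map into per-actuator heights for an uneven surface."""
    slope = max(-max_slope_deg, min(max_slope_deg, terrain_slope_deg))
    heights = [round(max(0.0, min(1.0, r)) * max_height_mm, 1)
               for r in roughness_map]
    return TrackCommand(slope_deg=slope, actuator_heights_mm=heights)

if __name__ == "__main__":
    # Uphill gravel path: moderate slope, mildly uneven surface.
    print(interworking_control(8.0, [0.2, 0.4, 0.1, 0.3]))
```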
Abstract:
Disclosed herein are a slim immersive display device, a slim visualization device, and a user eye-tracking device. The slim immersive display device includes a display panel; a super-proximity visualization optical unit, formed of a pinhole-array film or a micro-lens array, configured to form an image output via the display panel on the retina of a user's eye located a very short distance from the optical unit; an environment information control unit configured to determine the image to be output in accordance with virtual-reality environment information; and an image generation unit configured to generate the output image determined by the environment information control unit in the form of super-proximity unit images and to output those unit images to the display panel.
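One plausible reading of the unit-image generation step, assuming a regular pinhole array, is that the output image is partitioned into per-pinhole tiles that the optics recombine on the retina. The Python sketch below shows only that assumed tiling; the actual mapping in the device may differ.

```python
# A minimal sketch of one plausible "unit image" generation step, assuming
# a regular pinhole-array film: the source image is divided into
# per-pinhole tiles. The tiling scheme and names are assumptions, not the
# patented algorithm.
import numpy as np

def to_unit_images(image: np.ndarray, pinholes_x: int, pinholes_y: int):
    """Split an (H, W) image into pinholes_y x pinholes_x unit images."""
    h, w = image.shape[:2]
    th, tw = h // pinholes_y, w // pinholes_x
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(pinholes_y) for c in range(pinholes_x)]
    return tiles

if __name__ == "__main__":
    frame = np.zeros((480, 640), dtype=np.uint8)
    units = to_unit_images(frame, pinholes_x=8, pinholes_y=6)
    print(len(units), units[0].shape)  # 48 unit images of 80x80 pixels
```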
Abstract:
Disclosed herein are a selective hybrid-type three-dimensional (3D) image viewing device and a display method using the same. The selective hybrid-type 3D image viewing device includes a first left-and-right image separation module, a second left-and-right image separation module, and a mechanical unit. The first left-and-right image separation module visualizes a 3D image output from a first display so that a user can view it. The second left-and-right image separation module visualizes a 3D image output from a second display, to which a left-and-right image separation technique different from that of the first display has been applied, so that the user can view it. The mechanical unit is configured such that the first and second left-and-right image separation modules are detachably integrated into it.
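As a minimal sketch of the selective aspect, the snippet below chooses which separation module to activate from the display's separation technique; the technique names and the module interface are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: selecting which left-and-right image separation module to
# activate based on the display's separation technique. The technique names
# and return strings are illustrative assumptions.
SEPARATION_MODULES = {
    "polarization": "first left-and-right image separation module",
    "active_shutter": "second left-and-right image separation module",
}

def select_module(display_technique: str) -> str:
    """Pick the separation module matching the connected display."""
    try:
        return SEPARATION_MODULES[display_technique]
    except KeyError:
        raise ValueError(f"unsupported separation technique: {display_technique}")

if __name__ == "__main__":
    print(select_module("polarization"))
```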
Abstract:
Disclosed herein are a slim immersive display device and a slim visualization device. The slim immersive display device includes a slim visualization module for forming an image on a retina of an eyeball of a user based on an externally input image signal, a state information acquisition unit for acquiring a state of an external device as a state image, and a content control unit for analyzing the state image, generating an image corresponding to virtual-reality environment information, and inputting the image to the slim visualization module.
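A rough sketch of the content control loop might look as follows: the state image is analyzed into a device state, which then selects the image handed to the slim visualization module. The brightness-based analysis and all names are placeholders, not the disclosed method.

```python
# Illustrative sketch (assumed names): a content control loop that takes a
# "state image" of an external device, extracts a simple state, and selects
# the image to feed the slim visualization module.
def analyze_state_image(state_image) -> str:
    """Toy analysis: classify the device state from mean brightness.
    A real system would apply vision models; this is only a placeholder."""
    mean = sum(map(sum, state_image)) / (len(state_image) * len(state_image[0]))
    return "device_on" if mean > 128 else "device_off"

def content_control(state_image, vr_environment: dict) -> str:
    """Map the analyzed state to an output image via the VR environment
    information; fall back to an idle scene for unknown states."""
    state = analyze_state_image(state_image)
    return vr_environment.get(state, "idle_scene")

if __name__ == "__main__":
    dark = [[10] * 4 for _ in range(4)]
    print(content_control(dark, {"device_on": "status_overlay",
                                 "device_off": "warning_overlay"}))
```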
Abstract:
Disclosed herein is an interactive digital art apparatus. The interactive digital art apparatus includes a display unit, an input interface unit, a posture tracking unit, and a processing unit. The display unit visualizes a virtual canvas. The input interface unit is manipulated by a user and outputs input interface information resulting from the user's action of drawing a picture on the virtual canvas in front of the display unit. The posture tracking unit tracks the direction in which the user's face is pointed and the posture of the input interface unit. The processing unit draws a picture on the virtual canvas based on the input interface information, the direction in which the user's face is pointed, and the posture of the input interface unit.
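To make the drawing step concrete, a minimal geometric sketch is given below: assuming the virtual canvas is the plane z = 0 and the tracked posture yields a position and pointing direction, the drawn point is the ray/plane intersection. (In the apparatus, the tracked face direction could additionally gate when strokes are applied.) All names are illustrative.

```python
# A minimal geometric sketch, under the assumption that the virtual canvas
# is the plane z = 0 and the input interface's posture gives a position and
# pointing direction. Names are illustrative, not from the patent.
import numpy as np

def canvas_hit(pos: np.ndarray, direction: np.ndarray):
    """Intersect the interface's pointing ray with the canvas plane z = 0."""
    d = direction / np.linalg.norm(direction)
    if abs(d[2]) < 1e-9:          # ray parallel to the canvas
        return None
    t = -pos[2] / d[2]
    if t < 0:                     # canvas is behind the interface
        return None
    hit = pos + t * d
    return hit[:2]                # 2D point on the virtual canvas

if __name__ == "__main__":
    # Interface held 0.5 m in front of the canvas, pointing slightly right.
    print(canvas_hit(np.array([0.0, 1.2, 0.5]), np.array([0.2, 0.0, -1.0])))
```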
Abstract:
Disclosed herein are a method for remotely controlling virtual content and an apparatus for the method. The method for remotely controlling virtual content includes acquiring spatial data about a virtual space; creating at least one individual space by transforming the virtual space, based on the spatial data, in accordance with a user interaction area corresponding to the user; visualizing the at least one individual space in the user interaction area and providing an interactive environment that enables interaction between the user's body and a virtual object included in the at least one individual space; and controlling the virtual object in response to a user interaction event occurring in the interactive environment.
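The space-transformation step can be sketched as fitting the virtual space's bounding box into the user interaction area by uniform scaling and translation, as in the assumed example below; the actual transformation in the method may be more elaborate.

```python
# Hedged sketch of the space-transformation step: fit a virtual space's
# bounding box into the user's interaction area by uniform scaling and
# translation, yielding one "individual space". All names are assumptions.
from dataclasses import dataclass

@dataclass
class Box:
    min_pt: tuple  # (x, y, z)
    max_pt: tuple

def fit_to_interaction_area(virtual: Box, area: Box):
    """Return (scale, offset) mapping virtual-space coords into the area."""
    v_size = [hi - lo for lo, hi in zip(virtual.min_pt, virtual.max_pt)]
    a_size = [hi - lo for lo, hi in zip(area.min_pt, area.max_pt)]
    scale = min(a / v for v, a in zip(v_size, a_size) if v > 0)
    offset = [a_lo - v_lo * scale
              for v_lo, a_lo in zip(virtual.min_pt, area.min_pt)]
    return scale, offset

if __name__ == "__main__":
    room = Box((0, 0, 0), (10, 3, 10))        # 10 m virtual room
    desk = Box((0, 0, 0), (0.8, 0.5, 0.8))    # tabletop interaction area
    print(fit_to_interaction_area(room, desk))  # uniform scale of 0.08
```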
Abstract:
Disclosed herein are a method for providing a composite image based on optical transparency and an apparatus for the same. The method includes supplying first light of a first light source for projecting a virtual image and second light of a second light source for tracking the eye gaze of a user to multiple point lights based on an optical waveguide; adjusting the degree of light concentration of either the first light or the second light based on a micro-lens array and outputting the light whose degree of light concentration has been adjusted; tracking the eye gaze of the user by collecting the second light reflected from the user's pupil based on the optical waveguide; and combining an external image with the virtual image based on the tracked eye gaze and providing the combined image to the user.
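As an assumed illustration of the last two steps, the sketch below estimates a gaze point from the reflected second light and then blends the virtual image over the external image with weight concentrated near the gaze point. The centroid-based gaze estimate and Gaussian blending are placeholders, not the disclosed optics-based method.

```python
# Illustrative sketch (assumed names and logic): estimate a gaze point from
# the reflected second-light image, then composite the virtual image over
# the external image, weighting the virtual content near the gaze point.
import numpy as np

def gaze_from_reflection(ir_image: np.ndarray):
    """Centroid of the brightest reflections as a crude gaze estimate."""
    ys, xs = np.nonzero(ir_image > ir_image.max() * 0.9)
    return (float(xs.mean()), float(ys.mean())) if xs.size else None

def composite(external: np.ndarray, virtual: np.ndarray, gaze, sigma=50.0):
    """Blend so the virtual image is most opaque around the gaze point."""
    h, w = external.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    gx, gy = gaze
    alpha = np.exp(-((xx - gx) ** 2 + (yy - gy) ** 2) / (2 * sigma ** 2))
    return (alpha * virtual + (1 - alpha) * external).astype(external.dtype)

if __name__ == "__main__":
    ir = np.zeros((120, 160)); ir[60, 80] = 255.0   # one bright reflection
    ext = np.full((120, 160), 100, np.uint8)         # external scene
    vr = np.full((120, 160), 220, np.uint8)          # virtual image
    print(gaze_from_reflection(ir), composite(ext, vr, (80, 60))[60, 80])
```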
Abstract:
An ideal wearable display apparatus, optimized based on human factors, includes: a user information tracking part for obtaining characteristic information about the user who wears the wearable display apparatus; a hardware module part, which includes a mechanism control module part for changing the spatial arrangement position and posture of a mechanism part of the wearable display apparatus; a software module part for simulating and generating virtual environment information based on static hardware parameters, input image data, and information from the user information tracking part and the mechanism control module part; and a human factor module part for correcting the difference between a simulation model in the software module part and a model perceived through actual use of the apparatus.
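A minimal sketch of the human factor correction, under the assumption that it reduces to iteratively nudging the simulation model toward the user-perceived model, is given below; the function and field names are illustrative, not taken from the patent.

```python
# A minimal sketch, assuming the human factor module reduces to estimating
# a per-user correction between the simulated image position and the
# position the user actually perceives. Names are illustrative.
def human_factor_correction(simulated_px, perceived_px, gain: float = 0.5):
    """One step of an assumed iterative calibration: nudge the simulation
    toward the user's perception by a damped fraction of the error."""
    error = tuple(p - s for s, p in zip(simulated_px, perceived_px))
    corrected = tuple(s + gain * e for s, e in zip(simulated_px, error))
    return corrected, error

if __name__ == "__main__":
    # User reports the test pattern 6 px right of and 2 px below where the
    # software module's simulation model predicted it.
    print(human_factor_correction((320, 240), (326, 242)))
```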