Abstract:
Provided are an electronic device control apparatus and method using a transparent display. The electronic device control apparatus includes a first image acquirer configured to acquire a plurality of thing-direction images respectively captured by a plurality of cameras, a virtual image generator configured to combine the plurality of thing-direction images to generate a virtual image corresponding to a thing-direction region projected on the transparent display, a user interface configured to receive a user input, and a function attribute mapper configured to recognize an object included in the virtual image, based on the user input, and map, to the virtual image, a function attribute for controlling the recognized object.
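A minimal Python sketch of how the components named above might interact; the function names, the pixel-averaging combination step, and the toy camera inputs are illustrative assumptions, not details taken from the abstract.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VirtualImage:
    pixels: List[List[int]]                                    # combined thing-direction view
    function_attributes: Dict[str, Callable[[], str]] = field(default_factory=dict)

def acquire_images(cameras: List[Callable[[], List[List[int]]]]) -> List[List[List[int]]]:
    # first image acquirer: one thing-direction image per camera
    return [camera() for camera in cameras]

def generate_virtual_image(images: List[List[List[int]]]) -> VirtualImage:
    # virtual image generator: pixel-wise averaging stands in for the real combination
    h, w = len(images[0]), len(images[0][0])
    combined = [[sum(img[y][x] for img in images) // len(images) for x in range(w)]
                for y in range(h)]
    return VirtualImage(pixels=combined)

def map_function_attribute(image: VirtualImage, user_input: str) -> None:
    # function attribute mapper: the user's touch selects an object behind the display,
    # and a control function is attached to the virtual image for that object
    recognized_object = f"object-at-{user_input}"
    image.function_attributes[recognized_object] = lambda: f"toggle {recognized_object}"

cams = [lambda: [[10, 20], [30, 40]], lambda: [[12, 22], [28, 42]]]
virtual = generate_virtual_image(acquire_images(cams))
map_function_attribute(virtual, "x1y1")
print(virtual.function_attributes["object-at-x1y1"]())
```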
Abstract:
A method of supporting self-learning and a system for performing the method are disclosed. The method includes estimating an object of interest of a user in content based on user performance information input by the user in response to content situation information and the content received from a learning module, analyzing the content situation information, determining whether to generate support information for helping the user work through the content based on the estimated object of interest and the analyzed content situation information, generating, according to that determination, the support information that allows the user to self-learn the content, and outputting the support information to the learning module.
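A minimal sketch of the decision flow described above; the threshold, the scoring scheme, and all names (ContentSituation, estimate_object_of_interest, and so on) are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentSituation:
    step: str           # where in the content the learner currently is
    difficulty: float   # analyzed difficulty of the current situation (0..1)

def estimate_object_of_interest(performance: dict, situation: ContentSituation) -> str:
    # treat the content element with the lowest performance score as the object of interest
    return min(performance, key=performance.get)

def decide_support(interest: str, situation: ContentSituation, threshold: float = 0.7) -> bool:
    # generate support only when the analyzed situation is hard enough to warrant it
    return situation.difficulty >= threshold

def generate_support(interest: str, situation: ContentSituation) -> Optional[str]:
    if not decide_support(interest, situation):
        return None
    # support information that lets the user self-learn the content
    return f"Hint for step '{situation.step}': review '{interest}' before continuing."

# toy usage: performance scores reported alongside the content situation information
situation = ContentSituation(step="assembly-3", difficulty=0.8)
performance = {"torque-setting": 0.4, "part-alignment": 0.9}
print(generate_support(estimate_object_of_interest(performance, situation), situation))
```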
Abstract:
A method and an apparatus for virtual training based on tangible interaction are provided. The apparatus acquires data for virtual training, and acquires a three-dimensional position of a real object based on a depth image and a color image of the real object and infrared (IR) data included in the acquired data. Then, the overall appearance of a user is virtualized by extracting depth from depth information on a user image included in the acquired data and matching the extracted depth with the corresponding color information, and the depth data and color data for the user obtained through this virtualization are visualized in virtual training content. In addition, the apparatus corrects joint information using the joint information and the depth information included in the acquired data, estimates a posture of the user using the corrected joint information, and estimates a posture of a training tool using the depth information and IR data included in the acquired data.
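A small sketch of two of the steps described above, assuming a pinhole camera model for the 3D position estimate and a median-filter style depth correction for the joints; the intrinsics, window size, and function names are illustrative assumptions.

```python
import numpy as np

def object_position_3d(depth_m: np.ndarray, ir_mask: np.ndarray,
                       fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project the IR-flagged object pixels into camera space and average them."""
    ys, xs = np.nonzero(ir_mask)                 # pixels flagged by the IR data
    z = depth_m[ys, xs]
    x = (xs - cx) * z / fx                       # pinhole back-projection
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=1).mean(axis=0)

def correct_joints(joints_px: np.ndarray, depth_m: np.ndarray, window: int = 2) -> np.ndarray:
    """Snap each tracked joint's depth to the median depth around its pixel location."""
    corrected = []
    for u, v, _ in joints_px:
        u, v = int(round(u)), int(round(v))
        patch = depth_m[max(v - window, 0): v + window + 1,
                        max(u - window, 0): u + window + 1]
        corrected.append([u, v, float(np.median(patch))])
    return np.array(corrected)

# toy data: a flat 1.5 m depth map and a small IR-detected region
depth = np.full((480, 640), 1.5)
mask = np.zeros((480, 640), dtype=bool)
mask[200:210, 300:310] = True
print(object_position_3d(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
print(correct_joints(np.array([[320.0, 240.0, 0.0]]), depth))
```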
Abstract:
Provided are an apparatus and method for experiencing an augmented reality (AR)-based screen sports match, which enable even a child, an elderly person, or a person with a disability to easily and safely experience a ball sports match, such as tennis, badminton, or squash, as a screen sports match without using a wearable marker or sensor, a wearable display, an actual ball, or an actual tool.
Abstract:
Provided is an alternative text generating method. The alternative text generating method includes recognizing input visual content, generating input information corresponding to a result of recognizing the visual content, generating an editing window including an input item to which the input information is automatically input, automatically generating an alternative text based on an alternative text generation rule and the input information, and displaying the generated alternative text in a text box of the editing window.
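A minimal sketch of rule-based alternative-text generation as described above; the template string and the recognition-result keys are assumptions, not the patent's actual generation rule.

```python
def generate_alt_text(input_info: dict,
                      rule: str = "{count} {label}(s) {action} in {scene}") -> str:
    """Fill an alternative-text template from the recognition result of the visual content."""
    return rule.format(**input_info)

# recognition result produced for an input image (values are illustrative)
input_info = {"count": 2, "label": "person", "action": "riding bicycles", "scene": "a park"}
print(generate_alt_text(input_info))   # -> "2 person(s) riding bicycles in a park"
```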
Abstract:
Provided is a system for technology education in augmented reality (AR). The system includes an instructor terminal configured to identify a learner's training process and generate training support information in order to guide the training of at least one learner, on site or remotely, using AR content, an AR service providing server configured to manage, based on a request of the instructor terminal, the instructor terminal and a learner terminal that participate in the training using the AR content, and at least one learner terminal configured to transmit content training information to the AR service providing server.
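A toy sketch of the message flow among the three terminals described above; the session handling, method names, and print-based relaying are assumptions standing in for the real server behavior.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ARServiceServer:
    # terminals participating in a training session, keyed by session id
    sessions: Dict[str, List[str]] = field(default_factory=dict)

    def open_session(self, instructor_id: str, learner_ids: List[str]) -> str:
        # register the instructor terminal and learner terminals for one AR training session
        session_id = f"session-{len(self.sessions) + 1}"
        self.sessions[session_id] = [instructor_id, *learner_ids]
        return session_id

    def relay_support(self, session_id: str, support_info: str) -> None:
        # forward the instructor's training support information to every learner terminal
        for terminal in self.sessions[session_id][1:]:
            print(f"{terminal} <- {support_info}")

    def report_training(self, session_id: str, learner_id: str, training_info: str) -> None:
        # a learner terminal transmits its content training information back to the server
        print(f"server stored {training_info!r} from {learner_id} in {session_id}")

server = ARServiceServer()
sid = server.open_session("instructor-1", ["learner-1", "learner-2"])
server.relay_support(sid, "align the part with the AR guide arrow")
server.report_training(sid, "learner-1", "step 3 completed")
```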
Abstract:
A virtual content providing device includes: a communication circuit capable of communicating with a wearable device worn by a user; and a processor functionally connected to the communication circuit. The processor is configured to: display virtual training content on the wearable device through the communication circuit; generate training situation information and user condition information associated with the virtual training content through the wearable device; determine an intervention time point for the virtual training content based on the training situation information and the user condition information; generate virtual intervention content associated with the intervention time point; and display the virtual intervention content in synchronization with the virtual training content through the wearable device.
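A minimal sketch of how the intervention time point might be decided from the two information streams described above; the error-rate and stress-level fields, the thresholds, and the names are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class TrainingSituation:
    step: str
    error_rate: float      # derived from the training situation information (0..1)

@dataclass
class UserCondition:
    stress_level: float    # derived from the user condition information (0..1)

def needs_intervention(situation: TrainingSituation, condition: UserCondition) -> bool:
    # intervene only when the trainee is both making errors and under stress
    return situation.error_rate > 0.5 and condition.stress_level > 0.6

def intervention_content(situation: TrainingSituation) -> str:
    # virtual intervention content to be shown in sync with the training content
    return f"Overlay guidance for step '{situation.step}'"

situation = TrainingSituation(step="valve-check", error_rate=0.7)
condition = UserCondition(stress_level=0.8)
if needs_intervention(situation, condition):
    print(intervention_content(situation))
```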