Abstract:
An interfacing apparatus may sense an input object and move at least one interface object displayed on a display toward the sensed input object.
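The abstract does not specify how the interface object approaches the input object; a minimal sketch, assuming a simple fixed-step movement toward the sensed position (function name, coordinates, and step size are illustrative assumptions, not from the patent):

```python
def step_toward(obj_pos, target_pos, step=5.0):
    """Move an interface object one step toward the sensed input object.

    obj_pos, target_pos: (x, y) screen coordinates; step: pixels per update.
    These parameters are assumptions for illustration only.
    """
    dx = target_pos[0] - obj_pos[0]
    dy = target_pos[1] - obj_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= step:
        # Close enough: snap onto the input object's position.
        return target_pos
    # Otherwise advance by `step` along the unit direction vector.
    return (obj_pos[0] + step * dx / dist, obj_pos[1] + step * dy / dist)

step_toward((0.0, 0.0), (10.0, 0.0), step=5.0)  # → (5.0, 0.0)
```

Calling this once per display refresh would animate the object gliding toward the user's hand or finger.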
Abstract:
Disclosed is an image adjustment apparatus including a receiver which is configured to receive a first input image of an object which is time-synchronously captured and a second input image in which a motion event of the object is sensed time-asynchronously, and an adjuster which is configured to adjust the first input image and the second input image.
Abstract:
Embodiments of the present invention provide a method and device for processing Dynamic Vision Sensor (DVS) events. The method comprises the operations of: acquiring a DVS event map sequence; for any DVS event map in the DVS event map sequence, extracting a DVS event feature descriptor, where the DVS event feature descriptor has a scale-invariant feature and/or a rotation-invariant feature; determining, according to the extracted DVS event feature descriptor, a three-dimensional space pose of the DVS event map at the current moment; and, generating, according to the three-dimensional space pose of each DVS event map, a DVS event map sequence with temporal consistency. The embodiments of the present invention are used for generating, from a DVS event map sequence, a DVS event map sequence with temporal consistency.
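The "DVS event map" acquired in the first operation is typically a 2D accumulation of asynchronous events over a time window. A minimal sketch of that accumulation step, assuming a hypothetical event record with position, timestamp, and polarity fields (the field names and signed-count representation are assumptions, not the patent's definition):

```python
from collections import namedtuple

# Hypothetical DVS event: pixel position, timestamp (s), polarity (+1/-1).
Event = namedtuple("Event", ["x", "y", "t", "p"])

def build_event_map(events, width, height, t_start, t_end):
    """Accumulate DVS events in [t_start, t_end) into a signed 2D count map."""
    event_map = [[0] * width for _ in range(height)]
    for e in events:
        if t_start <= e.t < t_end:
            # ON events add 1, OFF events subtract 1 at the event's pixel.
            event_map[e.y][e.x] += 1 if e.p > 0 else -1
    return event_map

events = [Event(1, 0, 0.001, 1), Event(1, 0, 0.002, 1), Event(2, 1, 0.003, -1)]
m = build_event_map(events, width=4, height=2, t_start=0.0, t_end=0.01)
# m[0][1] == 2 (two ON events), m[1][2] == -1 (one OFF event)
```

Sliding the `[t_start, t_end)` window over the event stream yields the DVS event map sequence from which the feature descriptors are then extracted.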
Abstract:
An apparatus and a method for processing an input are provided. The apparatus includes a shape identifier configured to identify a first shape corresponding to a user input among shapes, a pattern identifier configured to identify a first pattern corresponding to the user input among patterns, and a determiner configured to determine a first command corresponding to the first shape and the first pattern.
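The determiner's shape-and-pattern-to-command mapping can be sketched as a simple lookup table; the shape names, pattern names, and commands below are illustrative assumptions, not values from the patent:

```python
# Hypothetical (shape, pattern) → command table; all names are examples only.
COMMANDS = {
    ("circle", "tap"): "open_menu",
    ("circle", "swipe"): "scroll",
    ("line", "tap"): "select",
}

def determine_command(shape, pattern):
    """Return the command for an identified shape/pattern pair, or None."""
    return COMMANDS.get((shape, pattern))

determine_command("circle", "swipe")  # → "scroll"
```

Keying on the pair, rather than on shape or pattern alone, lets the same shape trigger different commands depending on how the input was performed, which is the point of combining the two identifiers.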
Abstract:
A user input processing apparatus uses a motion of an object to determine whether to track the motion of the object, and tracks the motion of the object using an input image including information associated with the motion of the object.
Abstract:
A method of displaying a menu based on at least one of depth information and a space gesture is provided. The method includes determining depth information corresponding to a distance from a screen of a user terminal to a hand of a user; identifying at least one layer among a plurality of layers based on the depth information; and applying a graphic effect to the identified layer so that a menu page corresponding to the at least one identified layer is displayed on the screen of the user terminal.
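The step of identifying a layer from depth information amounts to binning the hand-to-screen distance; a minimal sketch, where the number of layers and the distance thresholds are assumptions for illustration:

```python
def layer_for_depth(depth_cm, layer_bounds):
    """Map hand-to-screen distance to a menu layer index.

    layer_bounds: sorted upper bounds (cm), one per layer; distances beyond
    the last bound fall into the farthest layer. Values are assumptions.
    """
    for i, bound in enumerate(layer_bounds):
        if depth_cm < bound:
            return i
    return len(layer_bounds) - 1

# Three layers: nearest within 10 cm, middle within 20 cm, farthest beyond.
bounds = [10.0, 20.0, 30.0]
layer_for_depth(5.0, bounds)   # → 0
layer_for_depth(25.0, bounds)  # → 2
```

The returned index selects which menu page receives the graphic effect as the hand moves closer to or farther from the screen.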
Abstract:
Provided are an event-based image processing apparatus and method, the apparatus including a sensor which senses occurrences of a predetermined event in a plurality of image pixels and outputs an event signal in response to each sensed occurrence, a time stamp unit which generates time stamp information by mapping the pixel corresponding to each event signal to the time at which the event signal is output from the sensor, and an optical flow generator which generates an optical flow based on the time stamp information in response to the outputting of the event signals.
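Deriving optical flow from per-pixel timestamps exploits the fact that, for a moving edge, event times increase along the direction of motion, so the spatial gradient of the timestamp map approximates the inverse of velocity. A minimal central-difference sketch under that assumption (the patent's actual estimator, including its noise handling, is not specified here):

```python
def flow_from_timestamps(tmap, x, y):
    """Estimate local optical flow at (x, y) from a per-pixel timestamp map.

    tmap[y][x] holds the time (s) of the most recent event at that pixel.
    The spatial gradient of event times approximates inverse velocity.
    """
    dt_dx = (tmap[y][x + 1] - tmap[y][x - 1]) / 2.0
    dt_dy = (tmap[y + 1][x] - tmap[y - 1][x]) / 2.0
    # Zero gradient means no measurable motion along that axis.
    vx = 1.0 / dt_dx if dt_dx != 0 else 0.0
    vy = 1.0 / dt_dy if dt_dy != 0 else 0.0
    return vx, vy

# An edge sweeping right at ~100 px/s: timestamps grow by 0.01 s per pixel in x.
tmap = [[0.01 * x for x in range(5)] for _ in range(5)]
flow_from_timestamps(tmap, 2, 2)  # → (100.0, 0.0)
```

Because the timestamp map is updated per event, flow can be recomputed immediately whenever a new event signal arrives, matching the asynchronous character of the sensor.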
Abstract:
An apparatus and a method for a user interface are provided. The apparatus may include a classifier configured to classify an event as corresponding to a class among at least two classes, an updater configured to update class information related to the class corresponding to the event, and a processor configured to determine a user input corresponding to the event based on the updated class information.
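The classifier-updater-processor pipeline can be sketched end to end; here the classes (event polarity), the "class information" (accumulated event positions), and the derived user input (their centroid) are all illustrative assumptions:

```python
class EventInterface:
    """Minimal sketch of the classify → update → determine pipeline."""

    def __init__(self):
        # Class information: accumulated (x, y) positions per class.
        self.class_info = {"on": [], "off": []}

    def classify(self, event):
        # Classifier: brightness-increase ("on") vs decrease ("off") events.
        return "on" if event["polarity"] > 0 else "off"

    def update(self, cls, event):
        # Updater: record the event's position under its class.
        self.class_info[cls].append((event["x"], event["y"]))

    def determine_input(self, cls):
        # Processor: user input is taken as the centroid of the class's events.
        pts = self.class_info[cls]
        if not pts:
            return None
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        return (cx, cy)

iface = EventInterface()
for e in [{"x": 2, "y": 4, "polarity": 1}, {"x": 4, "y": 6, "polarity": 1}]:
    iface.update(iface.classify(e), e)
iface.determine_input("on")  # → (3.0, 5.0)
```

Separating classification from accumulation lets each class of events (for example, each direction of brightness change) contribute its own pointer estimate.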
Abstract:
A Dynamic Vision Sensor (DVS) pose-estimation system includes a DVS, a transformation estimator, an inertial measurement unit (IMU) and a camera-pose estimator based on sensor fusion. The DVS detects DVS events and shapes frames based on a number of accumulated DVS events. The transformation estimator estimates a 3D transformation of the DVS camera based on an estimated depth and matches confidence-level values within a camera-projection model such that at least one of a plurality of DVS events detected during a first frame corresponds to a DVS event detected during a second subsequent frame. The IMU detects inertial movements of the DVS with respect to world coordinates between the first and second frames. The camera-pose estimator combines information from a change in a pose of the camera-projection model between the first frame and the second frame based on the estimated transformation and the detected inertial movements of the DVS.
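The camera-pose estimator's combination of DVS-derived and IMU-derived information can be caricatured as a per-axis weighted blend; the patent's actual fusion is not specified here, and the fixed weight `alpha` is purely an assumption:

```python
def fuse_pose(dvs_pose, imu_pose, alpha=0.7):
    """Blend per-axis pose estimates from the DVS transformation and the IMU.

    dvs_pose, imu_pose: tuples of pose components in world coordinates.
    alpha weights the DVS estimate; a simple weighted average stands in for
    the patent's sensor fusion and is an assumption for illustration.
    """
    return tuple(alpha * d + (1 - alpha) * i for d, i in zip(dvs_pose, imu_pose))

fuse_pose((1.0, 2.0, 3.0), (2.0, 2.0, 1.0), alpha=0.5)  # → (1.5, 2.0, 2.0)
```

Running this between consecutive frames lets the inertial measurement correct drift in the vision-based transformation estimate, and vice versa.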