Abstract:
An image adjustment apparatus includes a receiver configured to receive a first input image of an object, captured time-synchronously, and a second input image in which a motion event of the object is sensed time-asynchronously, and an adjuster configured to adjust the first input image and the second input image relative to each other.
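The adjustment between a synchronously captured frame stream and asynchronously sensed motion events can be sketched as pairing each frame with the events that fall inside a small time window around its timestamp. This is a minimal illustration, not the patented adjuster; the function name and the window parameter are assumptions.

```python
def adjust(frame_ts_ms, event_ts_ms, window_ms=10):
    """Pair each time-synchronously captured frame (by timestamp, in ms)
    with the asynchronous motion events inside a window around it."""
    pairs = {}
    for ft in frame_ts_ms:
        pairs[ft] = [et for et in event_ts_ms if abs(et - ft) <= window_ms]
    return pairs
```

A 30 fps frame at t=33 ms would, for example, be paired with events at 30 ms and 40 ms but not with events near t=0.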
Abstract:
A Dynamic Vision Sensor (DVS) pose-estimation system includes a DVS, a transformation estimator, an inertial measurement unit (IMU) and a camera-pose estimator based on sensor fusion. The DVS detects DVS events and forms frames based on a number of accumulated DVS events. The transformation estimator estimates a 3D transformation of the DVS camera based on an estimated depth and matched confidence-level values within a camera-projection model, such that at least one of a plurality of DVS events detected during a first frame corresponds to a DVS event detected during a second, subsequent frame. The IMU detects inertial movements of the DVS with respect to world coordinates between the first and second frames. The camera-pose estimator estimates a change in the pose of the camera-projection model between the first frame and the second frame by combining the estimated transformation and the detected inertial movements of the DVS.
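The sensor-fusion step can be sketched as a weighted blend of the DVS-derived pose change and the IMU-derived pose change. The vector pose representation and the `alpha` weight are assumptions for illustration; the abstract does not specify the actual fusion rule.

```python
def fuse_pose(dvs_delta, imu_delta, alpha=0.7):
    """Blend two pose-change estimates component-wise.
    alpha weights the DVS transformation estimate; (1 - alpha)
    weights the inertial estimate from the IMU."""
    return tuple(alpha * d + (1 - alpha) * i
                 for d, i in zip(dvs_delta, imu_delta))
```

In practice such a blend behaves like a complementary filter: the weight can be tuned toward whichever sensor is more reliable between the two frames.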
Abstract:
Methods and apparatuses for time alignment calibration are provided. The methods include acquiring an event stream and video images of a target object shot simultaneously by a dynamic vision sensor and an assistant vision sensor; determining, from the video images, a key frame that reflects obvious movement of the target object; mapping effective pixel positions of the target object in the key frame and in its neighboring frames, according to the spatial relation between the dynamic vision sensor and the assistant vision sensor, to obtain a plurality of target object templates; determining, from the plurality of target object templates, a first target object template that covers the events in a first event-stream segment; and determining a time alignment relation between the dynamic vision sensor and the assistant vision sensor from an intermediate instant of the first event-stream segment and the timestamp of the frame corresponding to the first target object template.
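The alignment step can be sketched as follows: pick the template whose pixel set covers the most events in the segment, then take the offset between that template's frame timestamp and the segment's intermediate instant. The data shapes (event tuples, a dict of templates keyed by frame timestamp) are assumptions for illustration only.

```python
def time_offset(segment_events, templates):
    """segment_events: list of (t, x, y) DVS events.
    templates: {frame_timestamp: set of effective (x, y) pixel positions}.
    Returns the time offset between the best-covering template's frame
    timestamp and the mid-instant of the event-stream segment."""
    def coverage(pixels):
        return sum((x, y) in pixels for _, x, y in segment_events)
    best_ts = max(templates, key=lambda ts: coverage(templates[ts]))
    ts = [t for t, _, _ in segment_events]
    mid = (min(ts) + max(ts)) / 2
    return best_ts - mid
```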
Abstract:
A method of displaying a menu based on at least one of depth information and a space gesture is provided. The method includes determining depth information corresponding to a distance from a screen of a user terminal to a hand of a user; identifying at least one layer among a plurality of layers based on the depth information; and applying a graphic effect to the identified layer so that a menu page corresponding to the at least one identified layer is displayed on the screen of the user terminal.
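The depth-to-layer mapping can be sketched as bucketing the hand-to-screen distance against a list of layer boundaries. The boundary values and units here are invented for illustration; the abstract does not specify them.

```python
def select_layer(depth_cm, layer_bounds_cm):
    """Return the index of the first layer whose far boundary exceeds
    the hand-to-screen distance; the last layer catches anything beyond."""
    for i, bound in enumerate(layer_bounds_cm):
        if depth_cm < bound:
            return i
    return len(layer_bounds_cm)
```

With boundaries at 10, 20 and 30 cm, a hand 15 cm from the screen selects layer 1, and the menu page for that layer would be the one highlighted.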
Abstract:
An apparatus and a method for providing a user interface are provided. The apparatus provides a first user interface mode and can switch to a second user interface mode upon receiving a user command instructing it to do so, where the second mode has a different user command input method from the first. During the switch, the apparatus can reset a recognition pattern so as to classify fewer user input types than are classified in the first user interface mode.
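Resetting the recognition pattern to classify fewer input types can be sketched as a recognizer whose class set depends on the active mode. The gesture names and the two mode tables are assumptions for illustration.

```python
class GestureRecognizer:
    """Coarsen the recognition pattern when switching UI modes, so the
    second mode distinguishes fewer user input types than the first."""
    MODE_CLASSES = {
        1: ("tap", "double_tap", "swipe_left", "swipe_right"),
        2: ("tap", "swipe"),  # coarser pattern in the second mode
    }

    def __init__(self):
        self.mode = 1

    def switch_mode(self, mode):
        self.mode = mode  # re-sets which classes are distinguished

    def classes(self):
        return self.MODE_CLASSES[self.mode]
```

Classifying into fewer types in the second mode reduces ambiguity, which is a plausible reason to coarsen the pattern when the input method changes.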
Abstract:
An apparatus and a method for a user interface are provided. The apparatus may include a classifier configured to classify an event as corresponding to a class among at least two classes, an updater configured to update class information related to the class corresponding to the event, and a processor configured to determine a user input corresponding to the event based on the updated class information.
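The classify/update/determine pipeline can be sketched as a small stateful handler. The per-class event counter and the threshold of three events are invented details to make the flow concrete; the abstract does not define what the class information is.

```python
class EventUI:
    """classify an event -> update that class's information ->
    determine a user input from the updated information."""

    def __init__(self, classify):
        self.classify = classify   # maps an event to a class label
        self.class_info = {}       # label -> event count (assumed info)

    def handle(self, event):
        label = self.classify(event)
        self.class_info[label] = self.class_info.get(label, 0) + 1
        # a user input is reported once the class has been seen 3 times
        return label if self.class_info[label] >= 3 else None
```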
Abstract:
A mobile device configured for data transmission to a corresponding mobile device is provided. The mobile device may include a gesture input unit configured to receive a gesture, a gesture determination unit configured to determine whether the gesture corresponds to a preset gesture associated with a command to perform data transmission to the corresponding mobile device, and a data communication unit configured to transmit a data transmission request to the corresponding mobile device based on a result of the determination, configured to receive, from the corresponding mobile device, an acceptance signal indicating an input of an acceptance gesture at the corresponding mobile device, and configured to transmit data to the corresponding mobile device in response to receiving the acceptance signal.
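The request/accept handshake described above can be sketched as a small state function: no request unless the gesture matches the preset, and no transmission until the peer's acceptance signal arrives. The state names and the callback are assumptions for illustration.

```python
def request_transfer(gesture, preset_gesture, peer_accepts):
    """Send a transfer request only if the input gesture matches the
    preset; transmit only after the peer signals an acceptance gesture.
    peer_accepts is a callable standing in for the acceptance signal."""
    if gesture != preset_gesture:
        return "idle"          # gesture did not match the preset
    if not peer_accepts():
        return "requested"     # waiting for the acceptance signal
    return "transmitting"      # acceptance received; data may flow
```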
Abstract:
An event information processing apparatus and method are provided that may process an asynchronous event. The event information processing apparatus may include a grouper which groups at least one item of event information generated at an identical time, a time information identifier which identifies basic time information associated with the grouped event information, and an information transmitter which arranges the grouped event information together with the basic time information and transmits them.
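The group/identify/arrange-and-transmit flow can be sketched by bucketing events that share a timestamp and keeping that timestamp once per group as the basic time information. The `(t, payload)` event shape and the packet layout are assumptions for illustration.

```python
def package_events(events):
    """events: list of (timestamp, payload) for asynchronous events.
    Group items generated at an identical time, attach the group's
    timestamp once as basic time information, and arrange the groups
    in time order for transmission."""
    groups = {}
    for t, payload in events:
        groups.setdefault(t, []).append(payload)
    return [{"time": t, "events": groups[t]} for t in sorted(groups)]
```

Sending one shared timestamp per group, rather than one per event, is the kind of arrangement that makes asynchronous event streams cheaper to transmit.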