Abstract:
A user input processing apparatus uses a motion of an object to determine whether to track the motion of the object, and tracks the motion of the object using an input image including information associated with the motion of the object.
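A minimal sketch of the idea, assuming the apparatus compares frame-to-frame motion against a threshold before committing to tracking; the threshold value, the difference-based motion measure, and the centroid tracker below are illustrative assumptions, not the disclosed method.

```python
import numpy as np

MOTION_THRESHOLD = 8.0  # assumed tuning value, not taken from the abstract

def motion_magnitude(prev_frame, frame):
    """Mean absolute pixel difference as a crude measure of object motion."""
    return float(np.mean(np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))))

def should_track(prev_frame, frame):
    """Determine whether to track: only start tracking when enough motion is present."""
    return motion_magnitude(prev_frame, frame) > MOTION_THRESHOLD

def track(prev_frame, frame):
    """Toy tracker: return the centroid of the pixels that changed the most."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.nonzero(diff > diff.mean() + 2 * diff.std())
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Usage: feed consecutive input images; track only when the motion test passes.
prev = np.zeros((120, 160), dtype=np.uint8)
cur = prev.copy()
cur[30:70, 60:100] = 255  # a bright region that appeared between frames
if should_track(prev, cur):
    print("tracking at", track(prev, cur))
```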
Abstract:
Apparatuses and methods for sensing spatial information based on a vision sensor are disclosed. The apparatus and method recognize spatial information of an object sensed by the vision sensor, which senses a temporal change of light. The light input to the vision sensor is artificially changed by a change unit.
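As a rough illustration of the concept, the sketch below assumes the change unit is a pulsed light source that forces even a static object to produce temporal-contrast events, whose accumulated addresses reveal the object's spatial extent; the simplified absolute-difference event model (a real temporal-contrast sensor responds to log-intensity change) and the pulsing pattern are assumptions, not the disclosed implementation.

```python
import numpy as np

def sensor_events(prev_intensity, intensity, threshold=0.2):
    """Simplified event model: a pixel fires when its intensity changes by more than a
    fixed threshold (a stand-in for the temporal-contrast behavior of a vision sensor)."""
    return np.argwhere(np.abs(intensity - prev_intensity) > threshold)  # (row, col) addresses

def sense_spatial_map(scene_reflectance):
    """Pulse the illumination (the 'change unit') and accumulate the resulting events.

    A static scene normally produces no events; modulating the input light makes the
    brighter object pixels fire, so the object's footprint appears in the event map."""
    event_map = np.zeros(scene_reflectance.shape, dtype=np.int32)
    illumination_levels = [1.0, 2.0, 1.0, 2.0]  # assumed pulsing pattern of the change unit
    prev = scene_reflectance * illumination_levels[0]
    for level in illumination_levels[1:]:
        cur = scene_reflectance * level
        for r, c in sensor_events(prev, cur):
            event_map[r, c] += 1
        prev = cur
    return event_map

# Usage: a static bright object on a dark background becomes visible in the event map.
scene = np.full((64, 64), 0.05)
scene[20:40, 25:45] = 0.8
print(np.count_nonzero(sense_spatial_map(scene)))  # ~400 object pixels, background silent
```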
Abstract:
A method for a user interface based on a gesture includes: setting at least one gesture region including at least one basic region and at least one navigation region, based on a preset location or a detected location of at least one object to be tracked, the at least one navigation region including at least one item; detecting a gesture of the at least one object to be tracked using an input device; and recognizing, from the detected gesture, at least one of a select gesture for selecting any one item among the at least one item of the at least one navigation region and a confirm gesture for moving from the at least one navigation region to the at least one basic region.
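A compact sketch of how such region-based recognition could be organized, assuming rectangular regions laid out around a detected hand position and a point-in-region test on successive tracked coordinates; the geometry, the three-item layout, and the classification rules are illustrative assumptions rather than the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float
    def contains(self, px, py):
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def set_gesture_regions(hand_x, hand_y):
    """Lay out a basic region around the detected object and a navigation region above it,
    split here into three items (an assumed layout)."""
    basic = Rect(hand_x - 60, hand_y - 60, 120, 120)
    items = [Rect(hand_x - 90 + i * 60, hand_y - 180, 60, 60) for i in range(3)]
    return basic, items

def recognize(prev_pos, pos, basic, items):
    """Classify one tracked movement as a select gesture, a confirm gesture, or neither."""
    was_in_navigation = any(item.contains(*prev_pos) for item in items)
    for idx, item in enumerate(items):
        if item.contains(*pos) and not item.contains(*prev_pos):
            return "select", idx            # moved onto an item of the navigation region
    if was_in_navigation and basic.contains(*pos):
        return "confirm", None              # moved from the navigation region to the basic region
    return None, None

# Usage: the tracked object moves onto item 1, then back down into the basic region.
basic, items = set_gesture_regions(200, 300)
print(recognize((200, 300), (200, 150), basic, items))  # ('select', 1)
print(recognize((200, 150), (200, 300), basic, items))  # ('confirm', None)
```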
Abstract:
Disclosed is an electronic device which includes a dynamic vision sensor including a first pixel that senses a change in light intensity and generates an event signal based on the sensed change in light intensity, an illuminance estimator that estimates an illuminance of light, and a time delay compensator that calculates a time delay between a first time at which the change in light intensity occurs and a second time at which the first pixel senses the change in light intensity, based on the illuminance of the light, and compensates for the time delay.
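A small numeric sketch of the compensation step, assuming, as is often reported for dynamic vision sensors, that the pixel response latency shrinks roughly in inverse proportion to illuminance; the inverse model and its constants are assumptions for illustration, not values from the disclosure.

```python
def estimate_delay_us(illuminance_lux, k_us_lux=1500.0, floor_us=15.0):
    """Assumed latency model: the delay between a light change and the pixel's event
    falls roughly as 1/illuminance, down to a fixed floor (illustrative constants)."""
    return max(k_us_lux / max(illuminance_lux, 1e-3), floor_us)

def compensate(event_timestamp_us, illuminance_lux):
    """Shift the recorded event time back by the estimated delay to approximate the
    first time, at which the change in light intensity actually occurred."""
    return event_timestamp_us - estimate_delay_us(illuminance_lux)

# Usage: the same recorded timestamp maps to an earlier true time in dim light,
# because the estimated sensing delay is longer there.
for lux in (10.0, 100.0, 1000.0):
    print(f"{lux:7.1f} lux -> {compensate(1_000_000.0, lux):.1f} us")
```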
Abstract:
A semiconductor device having an interconnection structure that includes a copper pad, a pad barrier layer, and a metal redistribution layer, the interconnection structure thereof, and methods of fabricating the same are provided. The semiconductor device includes a copper pad disposed on a first layer, a pad barrier layer including titanium disposed on the copper pad, an inorganic insulating layer disposed on the pad barrier layer, a buffer layer disposed on the inorganic insulating layer, wherein the inorganic insulating layer and the buffer layer expose a portion of the pad barrier layer, a seed metal layer disposed on the exposed buffer layer, a metal redistribution layer disposed on the seed metal layer, and a first protective layer disposed on the metal redistribution layer.
Abstract:
Provided is a neuromorphic signal processing device for locating a sound source using a plurality of neuron circuits, the neuromorphic signal processing device including: a detector configured to output a detected spiking signal using a detection neuron circuit corresponding to a predetermined time difference, in response to a first signal and a second signal containing an identical input spiking signal with respect to the predetermined time difference, for each of a plurality of predetermined frequency bands; a multiplexor configured to output a multiplexed spiking signal corresponding to the predetermined time difference, based on a plurality of the detected spiking signals output from a plurality of neuron circuits corresponding to the plurality of frequency bands; and an integrator configured to output an integrated spiking signal corresponding to the predetermined time difference, based on a plurality of the multiplexed spiking signals corresponding to a plurality of predetermined time differences.
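A simplified, software-only sketch of that signal path (per-band detection for one candidate time difference, multiplexing across bands, integration across time), modeled here with plain spike-time lists and coincidence counting rather than neuron circuits; the coincidence tolerance, the delay grid, and the two-band input are illustrative assumptions.

```python
import numpy as np

def detect(left_spikes, right_spikes, delay, tol=0.2e-3):
    """Coincidence 'detection neuron' for one frequency band and one candidate time
    difference: it emits a spike whenever a left spike shifted by `delay` lines up with
    a right spike within the tolerance."""
    right = np.asarray(right_spikes)
    return [t + delay for t in left_spikes if np.any(np.abs(right - (t + delay)) < tol)]

def multiplex(per_band_spikes):
    """Merge the detected spikes of all frequency bands for one time difference."""
    return sorted(t for band in per_band_spikes for t in band)

def integrate(multiplexed_spikes):
    """Integrate over time: here simply the total spike count for that time difference."""
    return len(multiplexed_spikes)

# Usage: two frequency bands of input spikes in which the right ear lags the left by ~0.5 ms;
# the candidate time difference with the largest integrated output wins.
rng = np.random.default_rng(0)
bands_left = [np.sort(rng.uniform(0.0, 0.1, 40)) for _ in range(2)]
bands_right = [band + 0.5e-3 for band in bands_left]
candidate_delays = np.arange(-1.0e-3, 1.01e-3, 0.25e-3)
scores = [integrate(multiplex([detect(l, r, d) for l, r in zip(bands_left, bands_right)]))
          for d in candidate_delays]
print("estimated time difference:", candidate_delays[int(np.argmax(scores))] * 1e3, "ms")
```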
Abstract:
Provided are an event-based image processing apparatus and method, the apparatus including a sensor which senses occurrences of a predetermined event in a plurality of image pixels and outputs an event signal in response to each sensed occurrence, a time stamp unit which generates time stamp information by mapping a pixel corresponding to the event signal to a time at which the event signal is output from the sensor, and an optical flow generator which generates an optical flow based on the time stamp information in response to the output of the event signal.
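A brief sketch of how an optical flow can be read off a per-pixel time stamp map, assuming the common local-plane-fit view in which the spatial gradient of the latest-event times around a pixel gives the inverse of the velocity; the neighborhood size and the least-squares fit are illustrative choices, not necessarily the disclosed method.

```python
import numpy as np

def update_timestamp_map(tmap, x, y, t):
    """Time stamp unit: remember, per pixel, the time of its most recent event."""
    tmap[y, x] = t

def local_flow(tmap, x, y, half=2):
    """Fit a plane t = a*x + b*y + c to the time stamps around (x, y); each velocity
    component is taken as the inverse of the corresponding temporal gradient."""
    ys, xs = np.mgrid[y - half:y + half + 1, x - half:x + half + 1]
    ts = tmap[y - half:y + half + 1, x - half:x + half + 1]
    valid = ts > 0
    if valid.sum() < 3:
        return 0.0, 0.0
    A = np.column_stack([xs[valid], ys[valid], np.ones(valid.sum())])
    (a, b, _), *_ = np.linalg.lstsq(A, ts[valid], rcond=None)
    vx = 1.0 / a if abs(a) > 1e-9 else 0.0
    vy = 1.0 / b if abs(b) > 1e-9 else 0.0
    return vx, vy

# Usage: events from an edge sweeping to the right at 100 pixels per second.
tmap = np.zeros((32, 32))
for x in range(8, 24):
    for y in range(8, 24):
        update_timestamp_map(tmap, x, y, x / 100.0)
print(local_flow(tmap, 16, 16))  # approximately (100.0, 0.0)
```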