Abstract:
Disclosed is a stereo camera-based autonomous driving method and apparatus, the method including estimating a driving situation of a vehicle, determining a parameter to control a width of a stereo camera based on the estimated driving situation, controlling a capturer configured to control an arrangement between two cameras of the stereo camera with respect to a first direction based on the determined parameter, and measuring a depth of an object located in the first direction based on two images respectively captured by the two cameras with the controlled arrangement.
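A minimal sketch of the underlying geometry may help: with a pinhole stereo model, depth follows from the focal length, the camera-to-camera width (baseline), and the measured disparity. The baseline-selection policy, parameter names, and numeric values below are illustrative assumptions, not the disclosed implementation.

```python
# Sketch: depth from stereo disparity with an adjustable baseline.
# All names and constants are assumed for illustration only.

def choose_baseline(speed_mps: float) -> float:
    """Illustrative policy: a wider baseline at high speed to resolve
    distant objects, a narrower one at low speed for nearby objects."""
    return 0.6 if speed_mps > 20.0 else 0.3  # meters, assumed values

def depth_from_disparity(focal_length_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Pinhole stereo model: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

if __name__ == "__main__":
    baseline = choose_baseline(speed_mps=25.0)        # e.g. highway driving situation
    z = depth_from_disparity(focal_length_px=1000.0,
                             baseline_m=baseline,
                             disparity_px=8.0)
    print(f"baseline={baseline} m, depth={z:.1f} m")
```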
Abstract:
A user terminal and a method for unlocking the user terminal are provided. The method includes determining whether to generate a wakeup signal that wakes up a processor, based on a tone included in a voice signal; and determining, by the processor, whether to unlock the user terminal based on text extracted from the voice signal, in response to the wakeup signal being generated.
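The two-stage flow can be sketched as a lightweight tone check that gates the wakeup signal, followed by a text comparison once the processor is awake. The helper functions and the registered phrase below are hypothetical placeholders, not the disclosed detectors.

```python
# Toy two-stage unlock flow; all helpers and the passphrase are assumptions.

REGISTERED_PHRASE = "open sesame"  # assumed enrollment phrase

def detect_wakeup_tone(voice_signal: bytes) -> bool:
    """Stand-in for a low-power tone detector that runs before wakeup."""
    return len(voice_signal) > 0  # placeholder decision

def transcribe(voice_signal: bytes) -> str:
    """Stand-in for speech-to-text performed after the processor wakes up."""
    return "open sesame"  # placeholder transcript

def try_unlock(voice_signal: bytes) -> bool:
    if not detect_wakeup_tone(voice_signal):
        return False                     # no wakeup signal; processor stays asleep
    text = transcribe(voice_signal)      # processor woken by the wakeup signal
    return text.strip().lower() == REGISTERED_PHRASE

print(try_unlock(b"\x01\x02"))  # True in this toy example
```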
Abstract:
Exemplary embodiments provide an image vision processing method, device, and equipment, and relate to: determining parallax and depth information of event pixel points in a dual-camera frame image acquired by Dynamic Vision Sensors; determining multiple neighboring event pixel points of each non-event pixel point in the dual-camera frame image; determining, according to location information of each neighboring event pixel point of each non-event pixel point, depth information of the non-event pixel point; and performing processing according to the depth information of each pixel point in the dual-camera frame image. Since non-event pixel points are not required to participate in the matching of pixel points, the depth information of the non-event pixel points can be accurately determined from the location information of neighboring event pixel points, even if the non-event pixel points are difficult to distinguish or are occluded.
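The key step is filling in depth at non-event pixels from nearby event pixels whose depth is already known from stereo matching. The sketch below uses inverse-distance weighting over a fixed number of nearest event pixels; that weighting scheme and the neighbor count are assumptions for illustration, not the disclosed formula.

```python
# Sketch: depth at a non-event pixel interpolated from nearby event pixels.
import math

def interpolate_depth(non_event_xy, event_points, k=4):
    """event_points: list of (x, y, depth) for event pixels with known depth."""
    x0, y0 = non_event_xy
    # pick the k nearest event pixels by Euclidean distance
    nearest = sorted(event_points,
                     key=lambda p: math.hypot(p[0] - x0, p[1] - y0))[:k]
    # inverse-distance weights (assumed scheme)
    weights = [1.0 / (math.hypot(x - x0, y - y0) + 1e-6) for x, y, _ in nearest]
    total = sum(weights)
    return sum(w * d for w, (_, _, d) in zip(weights, nearest)) / total

events = [(10, 10, 2.0), (12, 11, 2.2), (30, 30, 5.0), (11, 9, 2.1)]
print(round(interpolate_depth((11, 10), events), 2))  # dominated by nearby event pixels
```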
Abstract:
A method of extracting a static pattern from an output of an event-based sensor is provided. The method may include receiving an event signal from the event-based sensor in response to a dynamic input, and extracting a static pattern associated with the dynamic input based on an identifier and a time included in the event signal. The static pattern may be extracted from a map generated based on the identifier and the time.
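One plausible reading is that the identifier addresses a pixel and the time stamps its most recent event, so a map of latest timestamps can be thresholded to recover a pattern. The event format, the row-major identifier encoding, and the time window in the sketch below are illustrative assumptions.

```python
# Toy sketch: build a latest-timestamp map keyed by pixel identifier and keep
# pixels active within a recent window as the extracted pattern.

def extract_static_pattern(events, now, window=0.05, width=4, height=3):
    """events: iterable of (pixel_id, timestamp); returns a binary 2-D pattern."""
    latest = {}
    for pixel_id, t in events:
        latest[pixel_id] = max(t, latest.get(pixel_id, float("-inf")))
    pattern = [[0] * width for _ in range(height)]
    for pixel_id, t in latest.items():
        if now - t <= window:                 # keep only recently active pixels
            y, x = divmod(pixel_id, width)    # assume identifier encodes the pixel address
            pattern[y][x] = 1
    return pattern

events = [(0, 0.96), (1, 0.99), (5, 0.40), (6, 0.98)]
for row in extract_static_pattern(events, now=1.0):
    print(row)
```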
Abstract:
A disparity determination method and apparatus are provided. The disparity determination method includes receiving first signals of an event from a first sensor disposed at a first location and second signals of the event from a second sensor disposed at a second location that is different from the first location, and extracting a movement direction of the event based on at least one of the first signals and the second signals. The disparity determination method further includes determining a disparity between the first sensor and the second sensor based on the movement direction, a difference between times at which the event is sensed by corresponding pixels within the first sensor, and a difference between times at which the event is sensed by corresponding pixels in the first sensor and the second sensor.
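A rough sketch of the timing idea: the time an edge takes to cross neighboring pixels of one sensor gives its apparent speed, and the delay between the two sensors seeing the same edge, scaled by that speed, gives a disparity in pixels. The variable names and sign convention below are illustrative assumptions, not the disclosed computation.

```python
# Sketch: disparity estimated from event timing differences.

def disparity_from_timing(intra_sensor_dt, inter_sensor_dt, pixel_pitch=1.0,
                          moving_right=True):
    """intra_sensor_dt: seconds for the event to move one pixel within sensor 1.
    inter_sensor_dt: seconds between sensor 1 and sensor 2 sensing the event."""
    if intra_sensor_dt <= 0:
        raise ValueError("need a positive per-pixel traversal time")
    speed_px_per_s = pixel_pitch / intra_sensor_dt   # apparent speed of the event
    disparity = speed_px_per_s * inter_sensor_dt     # pixels traveled during the delay
    return disparity if moving_right else -disparity

# An edge crossing one pixel every 2 ms and reaching the second sensor 10 ms
# later corresponds to a 5-pixel disparity in this toy model.
print(disparity_from_timing(intra_sensor_dt=0.002, inter_sensor_dt=0.010))
```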
Abstract:
A user input processing method is provided. The user input processing method determines a delay time and whether a predetermined function is to be performed, based on a recognition reliability of a user input for the function, and controls the function based on the delay time and the determination of whether the function is to be performed.
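A minimal sketch of such gating: high recognition reliability performs the function immediately, intermediate reliability performs it after a delay, and low reliability does not perform it. The thresholds and delay value below are assumptions for illustration.

```python
# Sketch: gate a predetermined function on recognition reliability.
import time

def handle_user_input(reliability, function, high=0.9, low=0.5, delay_s=1.0):
    """Decide whether and when to perform a predetermined function."""
    if reliability >= high:
        function()                 # confident: perform immediately
    elif reliability >= low:
        time.sleep(delay_s)        # uncertain: wait before performing
        function()
    # below `low`: the function is not performed

handle_user_input(0.7, lambda: print("function performed after delay"))
```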