Abstract:
An optical module and a method for its use are provided. The optical module comprises: an image capturing device; an illuminating device; a position measurement unit; and a processor configured to: a) identify objects included in at least one frame that has been captured by the image capturing device; b) determine an expected position of the one or more identified objects based on data received from the position measurement unit; and c) control operation of the at least one illuminating device while capturing a new image, so as to illuminate only the part of the field of view being acquired by the at least one image capturing device in which the one or more identified objects are included.
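A minimal sketch of the control loop this abstract describes: predict where previously identified objects will appear, then build an illumination mask covering only those regions of the field of view. All names, the constant-velocity motion model, and the rectangular-region margin are illustrative assumptions, not taken from the abstract.

```python
def predict_position(position, velocity, dt):
    """Constant-velocity prediction of an object's (x, y) position after dt."""
    x, y = position
    vx, vy = velocity
    return (x + vx * dt, y + vy * dt)

def illumination_mask(objects, fov_width, fov_height, dt, margin=10):
    """Return rectangular regions (x0, y0, x1, y1) to illuminate,
    clipped to the sensor's field of view."""
    regions = []
    for obj in objects:
        px, py = predict_position(obj["position"], obj["velocity"], dt)
        x0 = max(0, int(px) - margin)
        y0 = max(0, int(py) - margin)
        x1 = min(fov_width, int(px) + margin)
        y1 = min(fov_height, int(py) + margin)
        regions.append((x0, y0, x1, y1))
    return regions

objects = [{"position": (100, 80), "velocity": (5, -2)}]
regions = illumination_mask(objects, fov_width=640, fov_height=480, dt=2.0)
print(regions)  # one region centred on the predicted position (110, 76)
```

Only the returned regions would be lit during the new capture; the rest of the scene stays dark, which is the power-saving behaviour the abstract claims.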
Abstract:
A peripheral electronic device is described which is configured to communicate with a computing device comprising a display having a screen configured to display a virtual gaze cursor. The peripheral electronic device comprises at least one user interface configured to trigger at least one operational command in response to interaction with a user, wherein the at least one operational command is associated with a current location of the virtual gaze cursor at the screen, and wherein a change in the current location of the virtual gaze cursor being displayed is determined based on a shift of the user's gaze from a first location at said screen to a different location thereat, based on a tilt of the user's head, or based on any combination thereof.
Abstract:
A natural user interface system and a method for a natural user interface are provided. The system may include an integrated circuit dedicated to natural user interface processing, and the integrated circuit may include: a plurality of defined data processing dedicated areas to perform computational functions relating to a corresponding plurality of natural user interface features, so as to obtain the plurality of user interface features based on scene features detected by a plurality of sensors within a defined period of time; a central processing unit configured to carry out software instructions to support the computational functions of the dedicated areas; and at least one defined area for synchronized data management, to receive signals corresponding to detected scene features from the plurality of sensors and to route the signals to suitable dedicated areas of the plurality of dedicated areas, so as to provide real-time acquiring of user interface features.
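A minimal routing sketch for the synchronized data-management area described above: signals tagged with a sensor type are dispatched to the dedicated processing area registered for that type. The `Router` class and handler names are illustrative assumptions, not taken from the abstract.

```python
class Router:
    """Synchronized data-management sketch: dispatch sensor signals to
    the dedicated processing area registered for each sensor type."""

    def __init__(self):
        self._areas = {}

    def register(self, sensor_type, handler):
        """Associate a dedicated processing area (handler) with a sensor type."""
        self._areas[sensor_type] = handler

    def route(self, signal):
        """Forward a detected scene feature to its dedicated area."""
        handler = self._areas[signal["sensor"]]
        return handler(signal["data"])

router = Router()
router.register("depth", lambda d: ("depth_feature", d))
router.register("audio", lambda d: ("audio_feature", d))

print(router.route({"sensor": "depth", "data": 42}))  # ('depth_feature', 42)
```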
Abstract:
A computational platform and a method are provided for use in a depth calculation process based on information comprised in an image captured by one or more image capturing sensors. The computational platform enables distinguishing between areas of the captured image that comprise details usable by a matching algorithm and areas that lack such details. The computational platform comprises at least one processor configured to: select at least one matching window comprised in the captured image for matching a corresponding part included in each image captured by the one or more image capturing sensors; calculate a metric based on each respective selected matching window; and calculate a depth map based on the calculated metric associated with the at least one matching window.
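An illustrative sketch of the distinction the abstract draws: a matching window with little intensity variation carries too few details for a stereo matching algorithm, so a simple texture metric (variance here, an assumed choice; the abstract does not specify the metric) can flag such windows before depth is computed.

```python
def window_variance(window):
    """Variance of pixel intensities in a flat list; low values indicate
    a textureless region unsuitable for matching."""
    n = len(window)
    mean = sum(window) / n
    return sum((p - mean) ** 2 for p in window) / n

def has_matchable_details(window, threshold=4.0):
    """Assumed rule: a window is matchable if its variance clears a threshold."""
    return window_variance(window) >= threshold

textured = [10, 50, 30, 90, 20, 70, 40, 80, 60]
flat = [50, 50, 51, 50, 49, 50, 50, 51, 50]
print(has_matchable_details(textured))  # True
print(has_matchable_details(flat))      # False
```

Windows failing the check would be excluded from the depth map, matching the abstract's claim of distinguishing usable from unusable areas.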
Abstract:
An arrangement and a method are provided, wherein the arrangement is configured to prevent current pulses from reaching a laser diode and to receive the voltage required for its operation from a USB power source. The arrangement comprises: a capacitor configured to provide energy for the operation of the laser diode at times when high current pulses are required by a camera associated with the arrangement; a software-configurable microcontroller operative to control the functionality of the arrangement so as to prevent current pulses from reaching the laser diode; and a current source configured to provide current for the operation of the laser diode under safe conditions and to cease provision of that current when conditions are unsafe for the operation of the laser diode.
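A schematic model of the safety behaviour described above: the controller enables the current source only while operating conditions are safe and the requested current is within limits, so pulses cannot reach the laser diode otherwise. The class, the limit value, and the Boolean safety flag are assumptions for illustration only.

```python
class LaserCurrentSource:
    """Sketch of a gated current source: output is provided only under
    safe conditions and within an assumed current limit."""

    def __init__(self, max_safe_ma):
        self.max_safe_ma = max_safe_ma
        self.enabled = False

    def update(self, requested_ma, conditions_safe):
        """Provide current only under safe conditions; otherwise cease output."""
        self.enabled = conditions_safe and requested_ma <= self.max_safe_ma
        return requested_ma if self.enabled else 0

src = LaserCurrentSource(max_safe_ma=100)
print(src.update(80, conditions_safe=True))    # 80
print(src.update(250, conditions_safe=True))   # 0  (over-current pulse blocked)
print(src.update(80, conditions_safe=False))   # 0  (unsafe conditions)
```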
Abstract:
A method and a computational module are provided for carrying out a quantization process of a plurality of channels carrying data received from an image capturing sensor. The computational module comprises at least one array of processors configured to: a) retrieve data from: a1) a neural network graph, a2) a dataset associated with the data, and a3) values of parameters of a neural network model; b) carry out a dynamic range calibration process for the channels received, using the neural network graph to derive grouping constraints associated with the respective channels; c) carry out a grouping optimization based on the results obtained for each channel from its respective dynamic range calibration and its grouping constraints; d) arrange the channels so that channels having similar grouping constraints are grouped together into one output channel; and e) calculate the quantization parameters required for carrying out a quantization process of the output channels.
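A hypothetical sketch of steps (b) through (e): calibrate each channel's dynamic range, group channels whose ranges are similar, and derive one quantization parameter pair per output group. The greedy similarity rule and the 8-bit affine scale/zero-point scheme below are assumptions; the abstract does not specify either.

```python
def calibrate(channel):
    """Dynamic range (min, max) of one channel's activation samples."""
    return (min(channel), max(channel))

def group_channels(ranges, tolerance=1.0):
    """Greedily group channels whose (min, max) ranges agree within tolerance."""
    groups = []
    for idx, (lo, hi) in enumerate(ranges):
        for group in groups:
            glo, ghi = group["range"]
            if abs(lo - glo) <= tolerance and abs(hi - ghi) <= tolerance:
                group["channels"].append(idx)
                group["range"] = (min(lo, glo), max(hi, ghi))
                break
        else:
            groups.append({"channels": [idx], "range": (lo, hi)})
    return groups

def quant_params(lo, hi, bits=8):
    """Affine quantization scale and zero point for a group's merged range."""
    scale = (hi - lo) / (2 ** bits - 1)
    zero_point = round(-lo / scale)
    return scale, zero_point

channels = [[-1.0, 0.9], [-1.1, 1.0], [0.0, 6.0]]
groups = group_channels([calibrate(c) for c in channels])
print([g["channels"] for g in groups])  # [[0, 1], [2]]
print(quant_params(*groups[0]["range"]))
```

Channels 0 and 1 share one output channel because their ranges nearly coincide, while channel 2's wider range keeps it separate, as in step (d).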
Abstract:
A central platform is provided which is configured to operate in a system comprising a plurality of moveable devices, each comprising at least one optical depth sensor. The central platform is characterized in that it comprises a processor adapted to establish a time frame within which the plurality of optical depth sensors operate, wherein that time frame includes a plurality of time slots, each allocated for the operation of a respective optical depth sensor.
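A minimal sketch of the time-frame scheme: the central platform divides one frame into equal slots and allocates each slot to one depth sensor, so the sensors' active periods never overlap. The frame length, equal slot sizes, and device names are assumed values for illustration.

```python
def build_schedule(sensor_ids, frame_ms):
    """Allocate one equal time slot (start_ms, end_ms) per sensor
    within a frame of frame_ms milliseconds."""
    slot_ms = frame_ms / len(sensor_ids)
    return {
        sid: (i * slot_ms, (i + 1) * slot_ms)
        for i, sid in enumerate(sensor_ids)
    }

schedule = build_schedule(["robot_a", "robot_b", "robot_c", "robot_d"], frame_ms=100)
print(schedule["robot_b"])  # (25.0, 50.0)
```

Because each sensor projects only inside its own slot, the sensors avoid interfering with one another's depth measurements.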
Abstract:
A method and a computational module are provided for carrying out a computer vision application, the computational module comprising at least one processing means. The computational module is characterized in that it has a systolic array architecture, configured to receive information conveyed from at least one image sensor and to apply a Row Stationary dataflow for calculating convolutions in a Convolutional Neural Network (“CNN”).
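An illustrative, heavily simplified model of Row Stationary dataflow: each processing element (PE) keeps one filter row stationary and slides one input row across it, producing a row of partial sums; partial sums from vertically adjacent PEs are then accumulated into one output row. Real systolic arrays do this in parallel hardware; the sequential functions below are an assumption-laden sketch of the arithmetic only.

```python
def pe_1d_conv(filter_row, input_row):
    """One PE: 1-D convolution of a stationary filter row with an input row."""
    out_len = len(input_row) - len(filter_row) + 1
    return [
        sum(f * input_row[i + k] for k, f in enumerate(filter_row))
        for i in range(out_len)
    ]

def row_stationary_conv(kernel, image):
    """2-D convolution assembled from per-row PEs plus partial-sum accumulation."""
    kh = len(kernel)
    out = []
    for r in range(len(image) - kh + 1):
        # Each of the kh PEs produces partial sums for output row r.
        partials = [pe_1d_conv(kernel[j], image[r + j]) for j in range(kh)]
        out.append([sum(col) for col in zip(*partials)])
    return out

image = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
kernel = [[1, 0], [0, 1]]
print(row_stationary_conv(kernel, image))  # [[6, 8], [12, 14]]
```

Keeping filter rows stationary in each PE is what lets the hardware reuse weights across an entire input row before reloading, which is the efficiency argument behind this dataflow.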
Abstract:
A central unit is provided which is operative in a system that comprises a plurality of moveable devices, each comprising an optical depth sensor. The central unit comprises a processor adapted to: divide the moveable devices into a plurality of groups, wherein each of the groups is characterized by a specific wavelength range at which all projecting modules associated with the optical depth sensors of the moveable devices belonging to that group are operative; establish a time frame within which each of the optical depth sensors of the moveable devices will operate, wherein the time frame comprises a plurality of time slots; and associate at least two of the moveable devices with a single time slot, wherein each of the at least two moveable devices belongs to a different group than the other.
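A hypothetical sketch of the central unit's scheduling: devices are split into wavelength groups, and one device from each distinct group can share a single time slot, since projectors on different wavelengths do not interfere. The group labels and the round-robin pairing below are illustrative assumptions.

```python
def assign_slots(devices):
    """devices: mapping of device id -> wavelength group label.
    Returns a list of time slots, each holding devices from distinct groups."""
    by_group = {}
    for dev, group in devices.items():
        by_group.setdefault(group, []).append(dev)
    slots = []
    while any(by_group.values()):
        # Take at most one pending device from each wavelength group.
        slot = [queue.pop(0) for queue in by_group.values() if queue]
        slots.append(slot)
    return slots

devices = {"d1": "850nm", "d2": "905nm", "d3": "850nm", "d4": "905nm"}
print(assign_slots(devices))  # [['d1', 'd2'], ['d3', 'd4']]
```

With two wavelength groups, each slot carries two devices, halving the frame length compared with a purely time-multiplexed schedule.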
Abstract:
A system and a method implemented on an integrated circuit are provided herein. The system may include: a plurality of defined data processing dedicated areas to perform computational functions relating to a corresponding plurality of natural user interface features, to obtain the plurality of user interface features based on scene features detected by a plurality of sensors within a defined period of time; a central processing unit configured to carry out software instructions to support the computational functions of the dedicated areas; and at least one defined area for synchronized data management, to receive signals corresponding to detected scene features from the plurality of sensors and to route the signals to suitable dedicated areas of the plurality of dedicated areas to provide real-time acquiring of user interface features.