Abstract:
A peripheral electronic device is described which is configured to communicate with a computing device comprising a display having a screen configured to display a virtual gaze cursor; wherein the peripheral electronic device comprises at least one user interface configured to trigger at least one operational command in response to interaction with a user, wherein the at least one operational command is associated with a current location of the virtual gaze cursor at the screen, and wherein a change in the current location of the displayed virtual gaze cursor is determined based on a shift of the user's gaze from a first location at said screen to a different location thereat, based on a tilt of the user's head, or based on any combination thereof.
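A minimal Python sketch of how such a cursor update might combine the two inputs; the function name, the tilt-to-pixels gain, and the horizontal-only mapping of head tilt are assumptions for illustration, not taken from the abstract.

    # Hypothetical sketch: update a virtual gaze cursor from a gaze point,
    # a head-tilt angle, or a combination of both (names are illustrative).
    def update_cursor(cursor, gaze_point=None, head_tilt_deg=None,
                      tilt_gain_px_per_deg=5.0):
        x, y = cursor
        if gaze_point is not None:
            # A shift of the user's gaze moves the cursor to the new location.
            x, y = gaze_point
        if head_tilt_deg is not None:
            # A head tilt nudges the cursor horizontally by an assumed gain.
            x += head_tilt_deg * tilt_gain_px_per_deg
        return (x, y)

    # A press on the peripheral's user interface would then trigger the
    # command bound to whatever on-screen element lies under the cursor.
    cursor = update_cursor((640, 360), gaze_point=(200, 150), head_tilt_deg=3.0)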
Abstract:
An image processing method is disclosed. The method comprises receiving from an imaging sensor input image data arranged in a plurality of pixel data rows corresponding to a grid of sensor pixels, and sampling the input image data to provide sampled image data having fewer pixel data rows. The method further comprises correcting image distortion in at least a portion of the sampled image data.
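The abstract does not specify the sampling scheme or the distortion model; a minimal NumPy sketch, assuming every-other-row decimation and a per-row horizontal shift as the distortion, might look like this:

    import numpy as np

    def sample_rows(image, factor=2):
        # Keep every `factor`-th pixel data row, yielding fewer rows.
        return image[::factor, :]

    def correct_distortion(image, row_shifts):
        # Illustrative correction only: undo an assumed per-row horizontal
        # shift (a rolling-shutter-like skew) in the sampled data.
        out = np.empty_like(image)
        for r, shift in enumerate(row_shifts):
            out[r] = np.roll(image[r], -shift)
        return out

    raw = np.arange(8 * 8, dtype=np.uint8).reshape(8, 8)   # toy sensor grid
    sampled = sample_rows(raw, factor=2)                    # 4 rows remain
    fixed = correct_distortion(sampled, row_shifts=[0, 1, 1, 2])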
Abstract:
A system for video conferencing is disclosed. The system comprises a data processor which receives from a remote location a stream of imagery data of a remote user, and displays an image of the remote user on a display device. The data processor also receives a stream of imagery data of an individual in a local scene in front of the display device, and extracts a gaze direction and/or a head orientation of the individual. The data processor varies a view of the image responsively to the gaze direction and/or the head orientation.
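Purely as an illustration, one way to vary a view responsively to gaze is to pan a crop window across the remote frame; the normalized gaze inputs, the pan gain, and the crop size below are all assumptions, not details from the abstract.

    import numpy as np

    def view_for_gaze(frame, gaze_x, gaze_y, crop=(240, 320), gain=40):
        # Pan a crop window across the remote frame opposite to the local
        # user's gaze offset, approximating a change of viewpoint.
        h, w = frame.shape[:2]
        ch, cw = crop
        cy = int(np.clip(h / 2 - gaze_y * gain, ch / 2, h - ch / 2))
        cx = int(np.clip(w / 2 - gaze_x * gain, cw / 2, w - cw / 2))
        return frame[cy - ch // 2: cy + ch // 2, cx - cw // 2: cx + cw // 2]

    remote = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in remote frame
    view = view_for_gaze(remote, gaze_x=0.3, gaze_y=-0.1)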
Abstract:
A natural user interface system and a method for natural user interface, the system may include an integrated circuit dedicated for natural user interface processing, the integrated circuit may include: a plurality of defined data processing dedicated areas to perform computational functions relating to a corresponding plurality of natural user interface features, to obtain the plurality of natural user interface features based on scene features detected by a plurality of sensors within a defined period of time; a central processing unit configured to carry out software instructions to support the computational functions of the dedicated areas; and at least one defined area for synchronized data management, to receive signals corresponding to detected scene features from the plurality of sensors and to route the signals to suitable dedicated areas of the plurality of dedicated areas to provide real-time acquisition of user interface features.
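The abstract describes dedicated hardware areas; purely as a software analogy, the following sketch shows window-bounded routing of sensor signals to per-feature handlers. The sensor kinds, handler names, and window length are invented for illustration.

    # Hypothetical dispatcher: route each sensor signal to the dedicated
    # processing area that handles its feature type, within a time window.
    HANDLERS = {
        "depth": lambda s: ("gesture", s),   # stand-ins for dedicated areas
        "audio": lambda s: ("speech", s),
        "rgb":   lambda s: ("gaze", s),
    }

    def route(signals, window_ms=33):
        # Keep only signals from the current acquisition window, then
        # dispatch each one to its dedicated area.
        current = [s for s in signals if s["t_ms"] < window_ms]
        return [HANDLERS[s["kind"]](s["payload"]) for s in current]

    features = route([{"kind": "depth", "t_ms": 5, "payload": b"\x01"},
                      {"kind": "rgb",   "t_ms": 12, "payload": b"\x02"}])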
Abstract:
An electronic device comprising: a display; one or more gaze detection sensors for determining a portion of the display to which a user's gaze is currently directed; a timer to measure periods of time associated with the user's current gaze at the display; and one or more processors operative to: receive data relating to periods of time measured by the timer and determine therefrom a characteristic rate at which the user shifts his gaze from one portion of the display to another; determine a portion of the display towards which the user's gaze was directed for a period of time longer than the period expected in accordance with his characteristic rate; identify an object included in the determined portion of the display; retrieve information that relates to the identified object; and enable displaying information based on the retrieved information.
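A minimal sketch of the dwell-time logic, assuming the characteristic rate is summarized by the median measured dwell time and that "longer than expected" means exceeding an assumed factor k of it; neither assumption comes from the abstract.

    import statistics

    def long_dwells(dwell_times_ms, k=2.0):
        # Characteristic rate: take the user's typical dwell per display
        # portion as the median of the measured dwell times.
        typical = statistics.median(dwell_times_ms)
        # Flag portions gazed at much longer than expected (the factor k
        # is an assumed threshold, not specified in the abstract).
        return [i for i, t in enumerate(dwell_times_ms) if t > k * typical]

    # Portion 2 (1400 ms) stands out against a ~300 ms typical dwell, so
    # the object displayed there would be identified and information about
    # it retrieved and displayed.
    print(long_dwells([250, 310, 1400, 280]))   # -> [2]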
Abstract:
A system configured to operate in an unknown, possibly texture-less environment with possibly self-similar surfaces, comprising a plurality of platforms configured to operate as mobile platforms, where each of these platforms comprises an optical depth sensor, and one platform operates as a static platform and comprises at least one optical projector. Upon operating the system, the static platform projects a pattern onto the environment, wherein each of the mobile platforms detects the pattern or a part thereof by its respective optical depth sensor while moving, and wherein information obtained by the optical depth sensors is used to determine moving instructions for mobile platforms within that environment. Optionally, the system operates so that every time period another mobile platform from among the plurality of platforms takes the role of operating as the static platform, while the preceding platform returns to operate as a mobile platform.
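A sketch of the optional role rotation, assuming a simple round-robin over the platforms; the platform identifiers and data layout are illustrative only.

    from itertools import cycle

    def rotate_static_role(platform_ids, periods):
        # Each time period a different platform takes the static/projector
        # role; all others operate as mobile platforms with depth sensors.
        roles = []
        for period, static in zip(range(periods), cycle(platform_ids)):
            mobile = [p for p in platform_ids if p != static]
            roles.append({"period": period, "static": static, "mobile": mobile})
        return roles

    for r in rotate_static_role(["A", "B", "C"], periods=3):
        print(r)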
Abstract:
An apparatus is described that is configured to operate in conjunction with an autonomous vehicle under adverse weather conditions. The apparatus is configured to be installed at the bottom part of the autonomous vehicle, and comprises at least one optical depth sensor and at least one optical projecting module, wherein the at least one optical projecting module is configured to project a light beam being a flood light or a pre-defined pattern onto the road being travelled by the autonomous vehicle, and the at least one optical depth sensor is configured to detect the projection of the light beam onto the road to enable retrieving therefrom information that relates to the movements of the autonomous vehicle along the road being travelled.
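The abstract does not say how movement information is retrieved from the detected projection; one plausible sketch, offered only as an assumption, uses phase correlation between successive images of the projected light to estimate the frame-to-frame displacement over the road.

    import numpy as np

    def ground_shift(prev, curr):
        # Phase correlation between two successive images of the projected
        # light on the road; the correlation peak gives the pixel
        # displacement, from which the vehicle's motion can be inferred.
        x = np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)
        corr = np.fft.ifft2(x / (np.abs(x) + 1e-9)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = corr.shape
        return (dy - h if dy > h // 2 else dy,   # wrap to signed shifts
                dx - w if dx > w // 2 else dx)

    rng = np.random.default_rng(0)
    road = rng.random((64, 64))                  # stand-in pattern image
    print(ground_shift(road, np.roll(road, (3, -2), axis=(0, 1))))  # ~(3, -2)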
Abstract:
A portable device configured to detect vertical changes in the surroundings of a moving person, wherein the portable device comprises: a housing; an optical depth sensor; a processor operative to: receive data from the depth sensor; process the data received; identify, based on the processed data, any vertical change that is present in the field of view of the optical depth sensor, and determine whether an identified vertical change forms a potential hazard to the moving person; and a warning generator for generating an indication to that person upon determining that an identified vertical change forms a potential hazard to the moving person.
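A minimal sketch of the hazard decision, assuming a one-dimensional forward height profile derived from the depth sensor and an assumed 8 cm step threshold; the abstract itself leaves the hazard criterion open.

    import numpy as np

    def hazards(height_profile_m, step_thresh_m=0.08):
        # Scan a forward height profile for abrupt vertical changes
        # (the 8 cm threshold is an assumed hazard criterion).
        steps = np.abs(np.diff(height_profile_m))
        return np.flatnonzero(steps > step_thresh_m)

    profile = np.array([0.00, 0.01, 0.01, 0.15, 0.15, 0.02])  # meters
    idx = hazards(profile)
    if idx.size:
        # The warning generator would emit an indication at this point.
        print(f"warning: vertical change(s) at sample(s) {idx.tolist()}")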
Abstract:
An optical passive stereo assembly for generating a three-dimensional image, the optical assembly comprising: two image capturing devices, each mounted within the optical passive stereo assembly in a position skewed relative to the other with respect to the horizontal plane; a processor configured to: process data retrieved from a plurality of pixels comprised within images captured by the two image capturing devices, and generate a point cloud, being a set of data points in a 3D space retrieved from the tilted coordinate system (X′, Y′, Z′); and apply a 2D rotation in the X-Y plane to the point cloud, thereby converting the coordinate system of the resulting point cloud to a conventional Cartesian coordinate system (X, Y, Z), to enable generating the three-dimensional image.
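The described conversion is a standard in-plane rotation about the Z axis; a NumPy sketch follows, where the skew angle is assumed to be known from the mounting geometry (the abstract does not state how it is obtained).

    import numpy as np

    def level_point_cloud(points_xyz_prime, skew_deg):
        # Rotate the tilted point cloud (X', Y', Z') about the Z axis to
        # recover a conventional Cartesian frame (X, Y, Z); Z is unchanged
        # by an in-plane (X-Y) rotation.
        a = np.deg2rad(skew_deg)
        rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                        [np.sin(a),  np.cos(a), 0.0],
                        [0.0,        0.0,       1.0]])
        return points_xyz_prime @ rot.T

    cloud = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0]])   # toy (X', Y', Z')
    print(level_point_cloud(cloud, skew_deg=-30.0))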