Abstract:
A method for use in a stereoscopic image generating system, wherein the image generating system comprises a hardware module associated with at least one pair of image capturing devices, at least one memory means and at least one processor, wherein information retrieved from the capturing devices is processed by the at least one processor, which is configured to implement a hardware module stereo algorithm for identifying objects included in the captured scene at a distance that is equal to or greater than a minimal distance defined by the geometry and disparity range of said hardware module, and wherein the at least one processor is further configured to implement a software stereo algorithm adapted to identify objects included in the captured scene at a distance that is less than the minimal distance required for detecting objects by the hardware module stereo algorithm.
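The hardware/software split above can be sketched in a few lines, assuming a pinhole stereo model in which depth Z = f·B/d (f focal length in pixels, B baseline, d disparity), so the minimal hardware-resolvable distance follows from the hardware's maximum disparity. All parameter names are illustrative assumptions, not the patent's implementation:

```python
# Sketch of routing between a hardware stereo block and a software fallback.
# Assumes depth Z = f * B / d; an object closer than Z_min would need a
# disparity larger than the hardware's disparity range.

def minimal_hw_distance(focal_px: float, baseline_m: float,
                        max_hw_disparity: int) -> float:
    """Closest distance the hardware stereo algorithm can still resolve."""
    return focal_px * baseline_m / max_hw_disparity

def choose_algorithm(object_distance_m: float, focal_px: float,
                     baseline_m: float, max_hw_disparity: int) -> str:
    """Route near objects to the software stereo algorithm."""
    z_min = minimal_hw_distance(focal_px, baseline_m, max_hw_disparity)
    return "hardware" if object_distance_m >= z_min else "software"
```

For example, with f = 700 px, B = 0.1 m and a 64-pixel hardware disparity range, Z_min ≈ 1.09 m, so an object at 0.5 m would be routed to the software algorithm.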
Abstract:
A computational platform and a method for use in a process of matching pairs of pixels, wherein each member of a pixel pair belongs to a different image captured by a different image capturing sensor, and wherein the computational platform comprises at least one processor configured to carry out the matching process by selecting a pixel mask, i.e. by selecting, from among all the neighboring pixels of a given pixel, those neighboring pixels that will be used in matching said given pixel with the other member of its pixel pair.
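The mask-selection idea can be illustrated with a toy example: keep only the neighbours whose intensity is close to the centre pixel, and score candidate matches over that subset alone. The 3×3 window, the intensity threshold, and the SAD cost are illustrative assumptions, not the claimed algorithm:

```python
# Toy sketch of mask-based pixel matching on small integer images
# (lists of rows). Window size and threshold are illustrative.

def select_mask(img, y, x, threshold=10):
    """Return offsets of 3x3 neighbours similar enough to the centre pixel."""
    centre = img[y][x]
    mask = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(img) and 0 <= nx < len(img[0]):
                if abs(img[ny][nx] - centre) <= threshold:
                    mask.append((dy, dx))
    return mask

def match_cost(left, right, y, x, disparity, mask):
    """Sum of absolute differences computed over the selected mask only."""
    return sum(abs(left[y + dy][x + dx] - right[y + dy][x + dx - disparity])
               for dy, dx in mask)
```

Restricting the cost to similar neighbours keeps pixels from the far side of an object boundary from polluting the match score.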
Abstract:
A method is provided for calibrating a structured light system which comprises a projector, a camera and at least one processor, wherein the projector emits light at an unknown pattern. The method comprises projecting by the projector an unknown pattern at at least two different distances relative to the camera's location, capturing by the camera the patterns projected at the different distances, determining vertical disparity between the captured images and estimating a relative orientation between the camera and the projector, thereby enabling calibration of the structured light system.
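The intuition can be sketched simply: in a rectified camera/projector pair, pattern features should shift only horizontally as the projection distance changes, so any vertical disparity between the two captures hints at a relative rotation. The small-angle model below (theta ≈ atan(dv/f)) and all names are assumptions for illustration, not the patent's estimation procedure:

```python
import math

def mean_vertical_disparity(points_near, points_far):
    """Average vertical offset between features matched across the two
    captures taken at different distances; each point is (x, y) in pixels."""
    dvs = [(y2 - y1) for (_, y1), (_, y2) in zip(points_near, points_far)]
    return sum(dvs) / len(dvs)

def estimate_tilt_rad(vertical_disparity_px: float, focal_px: float) -> float:
    """Small-angle estimate of camera/projector relative tilt."""
    return math.atan2(vertical_disparity_px, focal_px)
```

A mean vertical disparity of zero is consistent with the camera and projector already being relatively aligned; a systematic non-zero value drives the orientation estimate.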
Abstract:
A system configured to operate in an unknown, possibly texture-less environment with possibly self-similar surfaces, comprising a plurality of platforms configured to operate as mobile platforms, wherein each of these platforms comprises an optical depth sensor, and one platform operates as a static platform comprising at least one optical projector. Upon operating the system, the static platform projects a pattern onto the environment, each of the mobile platforms detects the pattern or a part thereof by its respective optical depth sensor while moving, and the information obtained by the optical depth sensors is used to determine moving instructions for the mobile platforms within that environment. Optionally, the system operates so that every time period another mobile platform from among the plurality of platforms takes the role of the static platform, while the preceding platform returns to operate as a mobile platform.
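The optional role rotation reduces to a round-robin schedule over the fleet. A minimal sketch, with platform identifiers and the period loop purely illustrative:

```python
# Toy sketch of the rotating static/projector role: every period the next
# platform becomes static and the previous one rejoins the mobile fleet.

def rotate_static_role(platforms, current_static_idx):
    """Return the index of the platform that takes the static role next."""
    return (current_static_idx + 1) % len(platforms)

platforms = ["p0", "p1", "p2"]
idx = 0
schedule = []
for _ in range(4):
    schedule.append(platforms[idx])  # platform holding the static role
    idx = rotate_static_role(platforms, idx)
# schedule == ["p0", "p1", "p2", "p0"]
```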
Abstract:
A method and apparatus for generating a three-dimensional depth map are provided. The method comprises the steps of: (i) illuminating a target by a pattern projector having background intensity, or by a pattern projector which does not have background intensity operating together with a flood projector; (ii) capturing at least one image that comprises one or more objects present at the illuminated target; (iii) converting the at least one captured image into data; (iv) processing the data received from the conversion of the at least one captured image while filtering out the projected pattern from the processed data; (v) detecting edges of at least one of the one or more objects present at the illuminated target; and (vi) generating a three-dimensional depth map that comprises the at least one of the one or more objects whose edges have been detected.
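Steps (iv) and (v) can be illustrated on a one-dimensional toy signal: smooth away the high-frequency projected pattern, then mark object edges as large jumps in the filtered signal. Real implementations operate on 2-D images; the window size and threshold here are assumptions for the sketch only:

```python
# Toy 1-D sketch of pattern filtering followed by edge detection.

def filter_pattern(signal, window=3):
    """Moving average that suppresses the high-frequency projected pattern."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def detect_edges(signal, threshold=20):
    """Indices where the filtered signal jumps: candidate object edges."""
    return [i for i in range(1, len(signal))
            if abs(signal[i] - signal[i - 1]) > threshold]
```

On a signal such as `[10, 12, 10, 12, 10, 100, 102, 100, 102, 100]` the ±1 pattern ripple is averaged away, and only the 10→100 object boundary around index 5 survives as an edge.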
Abstract:
An optical module and a method for its use are provided. The optical module comprises: an image capturing device; an illuminating device; a position measurement unit; and a processor configured to: a) identify objects included in at least one frame that has been captured by the image capturing device; b) determine the expected position of the one or more identified objects based on data received from the position measurement unit; and c) control operation of the at least one illuminating device while capturing a new image, to illuminate only that part of the field of view of the target being acquired by the at least one image capturing device in which the one or more identified objects are included.
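Steps (b) and (c) can be sketched as shifting an object's bounding box by the motion reported by the position measurement unit, then clamping the predicted region to the sensor's field of view so only that region is illuminated. The pure-translation pixel shift is an assumed simplification:

```python
# Illustrative sketch of predicting an illumination region of interest.

def predict_roi(bbox, shift_px):
    """bbox = (x0, y0, x1, y1); shift_px = (dx, dy) in image coordinates."""
    x0, y0, x1, y1 = bbox
    dx, dy = shift_px
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

def illumination_command(roi, fov):
    """Clamp the predicted ROI to the (width, height) field of view."""
    x0, y0, x1, y1 = roi
    w, h = fov
    return (max(0, x0), max(0, y0), min(w, x1), min(h, y1))
```

Illuminating only the clamped ROI, rather than the whole field of view, is what saves illumination power while keeping the tracked objects lit.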
Abstract:
A method is provided for use in a stereoscopic image generating system comprising two cameras, a plurality of memory buffers, and at least one processor. The method comprises: identifying at least one common region of interest (ROI) at images obtained from the two cameras; generating a Look Up table (LUT), for holding displacement values of pixels that belong to the at least one common ROI; forwarding data associated with images obtained from the two cameras that relates to the at least one common ROI, to the plurality of memory buffers; processing output lines retrieved from the plurality of memory buffers and propagating data that relates to a YUV image and associated LUTs, wherein the propagating of the data is carried out at a rate associated with a respective memory buffer from which that data is retrieved; and generating a stereoscopic image based on the propagated data.
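The LUT-generation step can be illustrated with a naive one-row matcher: for each column of the common ROI, find the horizontal displacement that best matches the other view and store it, so later stages can warp one view onto the other without re-running the matcher. The SAD search and all parameters are illustrative assumptions:

```python
# Minimal sketch of building a per-pixel displacement LUT for a common ROI,
# here over single image rows (lists of intensities).

def build_displacement_lut(left_row, right_row, roi, max_disp=4):
    """roi = (start, end) column range; returns {column: displacement}."""
    lut = {}
    for x in range(*roi):
        best_d, best_cost = 0, float("inf")
        for d in range(0, max_disp + 1):
            if x - d < 0:
                break
            cost = abs(left_row[x] - right_row[x - d])
            if cost < best_cost:
                best_cost, best_d = cost, d
        lut[x] = best_d
    return lut
```

In the full method the LUT entries would accompany the YUV image data through the memory buffers, so each output line can be corrected at the rate of the buffer it is read from.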
Abstract:
A peripheral electronic device is described which is configured to communicate with a computing device comprising a display having a screen configured to display a virtual gaze cursor; wherein the peripheral electronic device comprises at least one user interface configured to trigger at least one operational command in response to interaction with a user, wherein the at least one operational command is associated with the current location of the virtual gaze cursor at the screen, and wherein a change in the current location of the displayed virtual gaze cursor is determined based on a shift of the user's gaze from a first location at said screen to a different location thereat, based on a tilt of the user's head, or based on any combination thereof.
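The cursor-update rule can be sketched as combining the two inputs: the cursor follows the gaze shift, and a head tilt nudges it further by some gain. The linear gain model, the tilt-to-horizontal mapping, and the parameter names are assumptions for illustration only:

```python
# Toy sketch of updating a gaze-cursor position from gaze and head-tilt input.

def update_cursor(cursor, gaze_shift=(0, 0), head_tilt_deg=0.0,
                  tilt_gain_px_per_deg=2.0):
    """Return the new (x, y) cursor position on the screen."""
    x, y = cursor
    dx, dy = gaze_shift
    # In this simplified model, head tilt maps to a horizontal nudge.
    return (x + dx + head_tilt_deg * tilt_gain_px_per_deg, y + dy)
```

The peripheral's user interface would then fire its operational command against whatever screen location this cursor currently occupies.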
Abstract:
An electronic device is provided which comprises a plurality of different sensors, each configured to retrieve data relating to at least one characteristic of a user; a processor configured to: receive data retrieved by the different sensors; and establish features that characterize the user based upon data received from at least two of the different sensors; a storage configured to store information that relates to the features that characterize the user; and wherein the processor is further configured to: receive new data that has been retrieved by the different sensors which relates to the features that characterize the user; retrieve information from the storage that relates to the features that characterize the user and compare the stored information with the newly received data; and based on the comparison, determine whether to generate a user related output and/or replace stored information with information derived from the newly received data.
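The compare-and-decide step can be sketched as fusing features from at least two sensors into one vector, measuring its distance from the stored profile, and either generating a user-related output or refreshing the stored profile. The concatenation fusion, L1 distance, and threshold are illustrative assumptions:

```python
# Sketch of multi-sensor feature fusion and the stored-vs-new comparison.

def fuse(sensor_a_features, sensor_b_features):
    """Naively fuse features from two sensors by concatenation."""
    return sensor_a_features + sensor_b_features

def decide(stored, new, alert_threshold=5.0):
    """Return 'output' if the new data deviates from the stored profile,
    otherwise 'update' to replace the stored information with the new data."""
    distance = sum(abs(s - n) for s, n in zip(stored, new))
    return "output" if distance > alert_threshold else "update"
```

A small distance means the readings still match the known user, so the profile is simply refreshed; a large one triggers the user-related output.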