Abstract:
A stereoscopic optical assembly comprising: a single unit mounting bar having a plurality of openings for receiving a plurality of image capturing optical elements; a plurality of image capturing devices mounted on the single unit mounting bar; wherein the plurality of openings are arranged to enable obtaining a stereoscopic image from the plurality of image capturing devices, and wherein the stereoscopic image is derived from images captured by each of the image capturing devices.
Abstract:
A method and a system for rotating content of an electronic display responsive to rotational human gestures are provided herein. The system that implements the method may include: a display; at least one capturing device configured to capture a body part in front of said display; a processor; and a rotational gesture recognition module executed by the processor and configured to detect a predefined rotational gesture and to generate an instruction to rotate the content of the display by approximately 90° only when the rotational gesture displacement exceeds a predefined threshold.
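A minimal sketch of the thresholding behavior described above, assuming the gesture tracker reports an accumulated angular displacement in degrees; the function name, the 30° threshold, and the orientation bookkeeping are illustrative assumptions, not taken from the abstract:

```python
# Hypothetical sketch: rotate display content by ~90 degrees only when the
# accumulated rotational gesture displacement exceeds a predefined threshold.

ROTATION_THRESHOLD_DEG = 30.0   # illustrative threshold, not specified in the abstract
ROTATION_STEP_DEG = 90.0        # content is rotated in approximately 90 degree steps

def handle_rotational_gesture(accumulated_displacement_deg, display_orientation_deg):
    """Return the new display orientation given the gesture displacement so far.

    The content is rotated by one ~90 degree step only once the displacement
    goes beyond the threshold; smaller displacements leave the display unchanged.
    """
    if abs(accumulated_displacement_deg) < ROTATION_THRESHOLD_DEG:
        return display_orientation_deg  # below threshold: do not rotate
    direction = 1.0 if accumulated_displacement_deg > 0 else -1.0
    return (display_orientation_deg + direction * ROTATION_STEP_DEG) % 360.0
```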
Abstract:
A system configured to operate in an unknown, possibly texture-less environment with possibly self-similar surfaces, comprising a plurality of platforms configured to operate as mobile platforms, wherein each of these platforms comprises an optical depth sensor, and one platform operates as a static platform and comprises at least one optical projector. Upon operating the system, the static platform projects a pattern onto the environment, wherein each of the mobile platforms detects the pattern or a part thereof with its respective optical depth sensor while moving, and wherein information obtained by the optical depth sensors is used to determine moving instructions for the mobile platforms within that environment. Optionally, the system operates so that every time period another mobile platform from among the plurality of platforms takes the role of the static platform, while the preceding platform returns to operating as a mobile platform.
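A hedged sketch of the optional role-rotation behavior at the end of the abstract, assuming a simple round-robin schedule over a fixed time period; the `Platform` class and its fields are illustrative and not defined by the abstract:

```python
import itertools
import time

class Platform:
    """Illustrative platform carrying an optical depth sensor; at most one platform
    at a time holds the static (projector) role."""
    def __init__(self, name):
        self.name = name
        self.is_static = False

def rotate_static_role(platforms, period_s, cycles):
    """Round-robin schedule: every `period_s` seconds another platform becomes the
    static projector platform, and the previous one returns to operating as mobile."""
    for idx in itertools.islice(itertools.cycle(range(len(platforms))), cycles):
        for i, p in enumerate(platforms):
            p.is_static = (i == idx)          # exactly one static platform per period
        print(f"{platforms[idx].name} is now static (projecting); others are mobile")
        time.sleep(period_s)
```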
Abstract:
A method is provided for calibrating a structured light system. The structured light system comprises a camera, a processor, and a projector that emits light in an unknown pattern. According to this method, the projector emits the unknown pattern at two different distances relative to the camera's location, and the camera captures the pattern projected at each of these distances. The processor then determines the vertical disparity between the two captured images and estimates the relative orientation between the camera and the projector. This estimation is in turn applied in the calibration of the structured light system.
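A simplified sketch of the disparity step, assuming the two captures are related by a mostly global image shift that phase correlation can recover; the single-shift model, the small-angle conversion, and the function name are assumptions made for illustration, not the patent's procedure:

```python
import cv2
import numpy as np

def estimate_relative_pitch(img_near, img_far, focal_length_px):
    """Estimate a relative camera-projector pitch angle from the vertical disparity
    between two captures of the same (unknown) projected pattern at two distances.

    A global shift between the captures is recovered with phase correlation, its
    vertical component is taken as the vertical disparity, and the disparity is
    converted to an angle via the camera focal length (in pixels).
    """
    a = np.float32(cv2.cvtColor(img_near, cv2.COLOR_BGR2GRAY))
    b = np.float32(cv2.cvtColor(img_far, cv2.COLOR_BGR2GRAY))
    (dx, dy), _response = cv2.phaseCorrelate(a, b)   # global shift between the captures
    vertical_disparity_px = dy
    # Small-angle approximation: angle ~ vertical disparity / focal length.
    return np.arctan2(vertical_disparity_px, focal_length_px)
```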
Abstract:
A method and apparatus for generating a three-dimensional depth map are provided. The method comprises the steps of: (i) illuminating a target by a pattern projector having background intensity, or by a pattern projector without background intensity operating together with a flood projector; (ii) capturing at least one image that comprises one or more objects present at the illuminated target; (iii) converting the at least one captured image into data; (iv) processing the data received from the conversion of the at least one captured image while filtering out the projected pattern from the processed data; (v) detecting edges of at least one of the one or more objects present at the illuminated target; and (vi) generating a three-dimensional depth map that comprises the at least one of the one or more objects whose edges have been detected.
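A minimal sketch of steps (iii) through (v), assuming the captured frame is available as an OpenCV image; the median filter as the pattern-suppression step and the Canny edge detector are illustrative choices, since the abstract does not specify which filters are used:

```python
import cv2

def filter_pattern_and_detect_edges(captured_bgr, kernel_size=7,
                                    canny_low=50, canny_high=150):
    """Sketch of steps (iii)-(v): convert the captured image to data, suppress the
    projected pattern with a median filter (an assumed choice), then detect object
    edges with a Canny detector (also an assumed choice)."""
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY)    # (iii) image -> data
    pattern_free = cv2.medianBlur(gray, kernel_size)          # (iv) filter the pattern out
    edges = cv2.Canny(pattern_free, canny_low, canny_high)    # (v) detect object edges
    return pattern_free, edges
```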
Abstract:
An autonomous mobile platform is described. The autonomous mobile platform is configured for use in road monitoring and comprises: a scanning module configured to be used for road scanning in order to enable identifying hazards associated with the road being scanned; a location finder configured to determine the locations of hazards identified at the road being scanned; and at least one transmitter comprised in a respective autonomous mobile platform and configured to transmit data derived by the scanning module from the road scanning.
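A brief sketch of one monitoring cycle pairing identified hazards with their locations before transmission; the `HazardReport` record and the `scanner`, `location_finder`, and `transmitter` interfaces are hypothetical stand-ins for the modules named in the abstract:

```python
from dataclasses import dataclass

@dataclass
class HazardReport:
    """Illustrative record pairing an identified road hazard with its location."""
    hazard_type: str
    latitude: float
    longitude: float

def monitoring_step(scanner, location_finder, transmitter):
    """One scan cycle: identify hazards on the road, attach the determined location,
    and transmit the derived data. All three arguments are assumed interfaces."""
    hazards = scanner.scan_road()                  # e.g. ["pothole", "debris"]
    lat, lon = location_finder.current_position()
    for hazard in hazards:
        transmitter.send(HazardReport(hazard, lat, lon))
```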
Abstract:
A method and a stereoscopic apparatus configured to determine an exposure time period for capturing images are provided. The apparatus comprises at least one image capturing device for capturing pairs of images, and a processor configured to: calculate a texture-signal-to-noise ratio (TSNR) metric based on information derived from a pair of captured images; calculate an image saturation metric based on that pair of captured images; calculate a value for an exposure duration that will be implemented by the at least one image capturing device when another pair of images is captured; and provide the value of the calculated exposure time period to each of the at least one image capturing devices; wherein each of the at least one image capturing devices is configured to capture at least one image of the target while implementing the respective calculated value of the exposure time period provided to it.
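A hedged sketch of an exposure update driven by the two metrics named above. The abstract does not define the TSNR or saturation formulas, so local contrast over an assumed noise floor and the fraction of near-saturated pixels are used as stand-ins, and the multiplicative update factors are illustrative:

```python
import numpy as np

def next_exposure_us(img_left, img_right, current_exposure_us,
                     target_tsnr=10.0, max_saturated_fraction=0.02):
    """Illustrative exposure update from a pair of captured 8-bit images.

    TSNR proxy: standard deviation of pixel values over an assumed noise floor.
    Saturation proxy: fraction of pixels at or above 250.
    Both metrics and the update rule are assumptions, not the patented formulas.
    """
    pair = np.concatenate([img_left.ravel(), img_right.ravel()]).astype(np.float32)
    noise_floor = 2.0                                  # assumed sensor noise (DN)
    tsnr = pair.std() / noise_floor                    # texture-signal-to-noise proxy
    saturation = np.mean(pair >= 250)                  # fraction of near-saturated pixels

    if saturation > max_saturated_fraction:
        return current_exposure_us * 0.8               # too bright: shorten the exposure
    if tsnr < target_tsnr:
        return current_exposure_us * 1.25              # weak texture signal: lengthen it
    return current_exposure_us
```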
Abstract:
An imaging system is disclosed. The system comprises a first imaging device and a second imaging device that are spaced apart and configured to provide partially overlapping fields-of-view of a scene over a spectral range from infrared to visible light. The system comprises at least one infrared light source configured to illuminate at least the overlapping region with patterned infrared light, and a computer system configured to receive image data pertaining to the infrared and visible light acquired by the imaging devices and to compute three-dimensional information of the scene based on the image data. The image data optionally and preferably comprises the patterned infrared light as acquired by both imaging devices.
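A simplified sketch of computing depth from the patterned infrared overlap, assuming the two infrared images are already rectified 8-bit grayscale frames and that the focal length and baseline are known; block matching is used here only as one possible way to exploit the projected IR texture, not as the patent's method:

```python
import cv2
import numpy as np

def depth_from_ir_pair(ir_left, ir_right, focal_px, baseline_m,
                       num_disparities=64, block_size=15):
    """Sketch: depth map from two rectified 8-bit infrared images of the overlapping
    region, using OpenCV block matching on the projected IR pattern."""
    matcher = cv2.StereoBM_create(numDisparities=num_disparities, blockSize=block_size)
    disparity = matcher.compute(ir_left, ir_right).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]   # Z = f * B / d
    return depth
```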
Abstract:
A natural user interface (NUI) computer processor is provided herein. The NUI computer processor may include: at least one computer processing module; and a plurality of sensors connected by direct, high-bandwidth connectors to the at least one computer processing module, wherein the computer processing module is configured to support the full extent of processing power required for simultaneous multi-modal handling of high-resolution information gathered by said sensors, and wherein the computer processing module and the high-bandwidth connectors are cooperatively configured to eliminate any non-vital delays, thereby reducing the latency between human user actions captured by said sensors and the response of the NUI computer processor.
Abstract:
A method and a device are provided for generating an image. The method comprises the steps of: (i) providing information that relates to an image of a single target captured by at least two image capturing devices; (ii) storing the provided information in at least two input buffers; (iii) sampling the stored information and storing the sampled information in at least two output line buffers, each corresponding to a different image resolution; (iv) processing the sampled information stored in the at least two output line buffers in accordance with pre-defined disparity related information, wherein the pre-defined disparity related information is associated with the respective one of the at least two image capturing devices that captured the information currently being processed; and (v) retrieving information from the at least two output line buffers and storing the retrieved information in a hybrid row buffer for generating the image.
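A rough sketch of the per-row data path described by steps (iii) through (v), assuming two cameras, one full-resolution and one decimated line buffer standing in for the "different image resolution" buffers, and the per-device disparity applied as a simple horizontal shift; all of these choices are illustrative assumptions:

```python
import numpy as np

def build_hybrid_row(row_cam0, row_cam1, disparity_cam0, disparity_cam1, decimate=2):
    """Illustrative data path for one image row: sample each camera's input row into
    an output line buffer at its own resolution, apply the per-device disparity as a
    horizontal shift, and store both results in a single hybrid row buffer."""
    line_full = np.roll(row_cam0, -disparity_cam0)                  # full-resolution line buffer
    line_reduced = np.roll(row_cam1, -disparity_cam1)[::decimate]   # reduced-resolution line buffer
    return np.concatenate([line_full, line_reduced])                # hybrid row buffer
```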