Abstract:
A pattern projector is disclosed. The pattern projector comprises at least one light source, at least one projection lens, at least one mask configured to enable the at least one projection lens to illuminate a target while projecting a pre-defined pattern thereat, and at least one holder. The pattern projector is characterized in that the at least one light source is a wide-area light source, and in that the area of the at least one mask, or of the at least one mask's active area, is smaller than the area of the at least one light source, thereby making it possible to refrain from applying condenser optics or focusing optics between the at least one light source and the at least one mask.
Abstract:
A method is provided for assembling a 3D sensing apparatus that comprises at least two projectors, wherein the assembly of the apparatus is carried out by ensuring that a pattern formed from a combination of images projected by each of the at least two projectors is not formed along an epipolar line, or part thereof, more than once. The method comprises the steps of: placing the at least two projectors at initial approximate physical positions within the 3D sensing apparatus; and placing one or more projectors' protectors on top of the at least two projectors, thereby changing the projectors' initial positions and automatically positioning them accurately at their pre-defined positions and orientations by using the one or more projectors' protectors.
Abstract:
An image capturing arrangement is provided which comprises a depth (3D) sensor, a processor, an illuminating device and one or more image capturing devices. The depth sensor is configured to acquire at least a partial representation of the scene whose image is about to be taken, the processor retrieves required information from the acquired representation about distances of objects included in that scene from the sensor, and the illuminating device is configured to provide a non-uniform illumination of the scene whose image is to be captured, based on the retrieved information.
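The abstract above does not specify how the retrieved distance information is mapped to a non-uniform illumination. The sketch below shows one plausible policy, assumed for illustration only: compensating the inverse-square falloff of light so that farther depth zones receive proportionally more power. The function name `illumination_profile` and the normalization scheme are hypothetical, not taken from the source.

```python
import numpy as np

def illumination_profile(zone_depths, max_power=1.0):
    """Per-zone illumination power: compensate inverse-square light falloff
    by scaling power with squared distance, normalized so that the farthest
    zone receives max_power. (Assumed policy, for illustration.)"""
    d2 = np.asarray(zone_depths, dtype=float) ** 2
    return max_power * d2 / d2.max()

# Example: three depth zones at 1 m, 2 m and 4 m.
powers = illumination_profile([1.0, 2.0, 4.0])
```

Under this assumption, the nearest zone is lit at a small fraction of full power, avoiding over-exposure of close objects while keeping distant objects visible.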
Abstract:
A method is provided for use in a stereoscopic image generating system comprising two cameras, a plurality of memory buffers, and at least one processor. The method comprises: identifying at least one common region of interest (ROI) in images obtained from the two cameras; generating a Look-Up Table (LUT) for holding displacement values of pixels that belong to the at least one common ROI; forwarding data associated with images obtained from the two cameras that relates to the at least one common ROI to the plurality of memory buffers; processing output lines retrieved from the plurality of memory buffers and propagating data that relates to a YUV image and associated LUTs, wherein the propagating of the data is carried out at a rate associated with the respective memory buffer from which that data is retrieved; and generating a stereoscopic image based on the propagated data.
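The LUT step above can be sketched minimally as follows. The sketch assumes the LUT stores a per-pixel horizontal displacement for the ROI and that applying it means re-sampling each ROI row by those displacements; the constant-shift LUT, the function names, and the clipping behavior are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def build_displacement_lut(roi, calib_shift):
    """Hypothetical LUT: one horizontal displacement per pixel of the ROI.
    Here every pixel gets the same calibration shift, for simplicity."""
    (y0, y1), (x0, x1) = roi
    return np.full((y1 - y0, x1 - x0), calib_shift, dtype=np.int32)

def apply_lut(roi_pixels, lut):
    """Re-sample each ROI row by its LUT displacements (clipped to the ROI)."""
    h, w = roi_pixels.shape
    xs = np.clip(np.arange(w)[None, :] + lut, 0, w - 1)
    return np.take_along_axis(roi_pixels, xs, axis=1)

# Example: a 4x4 ROI shifted one pixel to the left.
left_roi = np.arange(16, dtype=np.int32).reshape(4, 4)
lut = build_displacement_lut(((0, 4), (0, 4)), calib_shift=1)
shifted = apply_lut(left_roi, lut)
```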
Abstract:
A system for video conferencing is disclosed. The system comprises a data processor which receives from a remote location a stream of imagery data of a remote user, and displays an image of the remote user on a display device. The data processor also receives a stream of imagery data of an individual in a local scene in front of the display device, and extracts a gaze direction and/or a head orientation of the individual. The data processor varies a view of the image responsively to the gaze direction and/or the head orientation.
Abstract:
A laser device is disclosed comprising two lenses, each of which is configured to separately collimate radiant electromagnetic energy along a respective axis, wherein one of the two lenses is a fast-axis collimating (FAC) lens and the other of the two lenses is a slow-axis collimating (SAC) lens, and wherein the SAC lens is characterized in that it has a first surface being a negative surface and a second surface being a positive surface.
Abstract:
A method is provided for obtaining a disparity map for reconstructing a three-dimensional image. The map is based upon a large range of disparities and is obtained by using hardware provided with a buffer capable of storing data that relates to substantially fewer disparities than the data associated with the large disparity range. The method comprises the steps of: providing a pair of stereoscopic images captured by two image capturing devices; dividing the large disparity range into N disparity ranges; executing a stereo matching algorithm a plurality of times, using data retrieved from the pair of captured images, wherein the algorithm is executed each time using a different disparity range out of the N disparity ranges, thereby obtaining a plurality of individual disparity maps, each corresponding to a different disparity range; and merging the individual disparity maps to generate a map covering the large disparity range.
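The divide-and-merge procedure described above can be sketched as follows. The sum-of-absolute-differences cost, the winner-takes-all selection, and the merge-by-lowest-cost rule are illustrative assumptions standing in for whatever stereo matching algorithm the hardware actually runs; only the sub-range splitting and map merging mirror the claimed steps.

```python
import numpy as np

def match_cost(left, right, d):
    """Per-pixel absolute-difference cost of disparity d (assumed cost)."""
    w = left.shape[1]
    cost = np.full(left.shape, np.inf)
    if d < w:
        cost[:, d:] = np.abs(left[:, d:].astype(np.int64) -
                             right[:, :w - d].astype(np.int64))
    return cost

def stereo_match(left, right, d_min, d_max):
    """Winner-takes-all disparity map and its per-pixel cost for one sub-range,
    sized to fit a buffer that holds only d_max - d_min disparities."""
    costs = np.stack([match_cost(left, right, d) for d in range(d_min, d_max)])
    return costs.argmin(axis=0) + d_min, costs.min(axis=0)

def merge_ranges(left, right, d_total, n):
    """Run the matcher once per sub-range; keep the lowest-cost disparity."""
    step = d_total // n
    disp = np.zeros(left.shape, dtype=np.int64)
    best_cost = np.full(left.shape, np.inf)
    for i in range(n):
        d, c = stereo_match(left, right, i * step, (i + 1) * step)
        take = c < best_cost
        disp[take], best_cost[take] = d[take], c[take]
    return disp

# Synthetic pair: the right image is the left image shifted by 5 pixels,
# so the true disparity is 5 wherever the match is valid.
left = np.tile(np.arange(32, dtype=np.int32), (8, 1))
right = np.zeros_like(left)
right[:, :32 - 5] = left[:, 5:]
disp = merge_ranges(left, right, d_total=8, n=2)
```

Here a full range of 8 disparities is processed as two sub-ranges of 4, so a buffer sized for 4 disparities suffices; the merged map recovers disparity 5 even though it lies outside the first sub-range.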
Abstract:
A method is provided for generating a three-dimensional frame. The method comprises the steps of: retrieving information that relates to a plurality of images of a target captured by two image capturing devices; determining data that will be applied for analyzing objects of interest included in the captured images; calculating disparity between groups of corresponding frames, wherein each of said groups comprises frames taken essentially simultaneously by the two image capturing devices; determining an initial estimation of a disparity range for the frames included in the groups of the corresponding frames; evaluating a disparity range value for each succeeding group based on information retrieved on a dynamic basis from frames included therein, and changing the value of said disparity range when required; and applying a current value of the disparity range in a stereo matching algorithm, and generating a three-dimensional frame for each succeeding group of corresponding frames.
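The dynamic re-evaluation step above can be sketched as follows, under the assumption that the range for the next group of frames is derived from the previous group's disparity map padded by a safety margin; the margin value, the hardware bounds, and the function name are illustrative, not taken from the source.

```python
import numpy as np

def update_disparity_range(prev_disp, margin=8, d_floor=0, d_ceil=255):
    """Estimate the disparity search range for the next frame group from the
    previous group's disparity map, padded by a margin to absorb inter-frame
    motion. Margin and bounds are assumed values for illustration."""
    lo = max(d_floor, int(prev_disp.min()) - margin)
    hi = min(d_ceil, int(prev_disp.max()) + margin)
    return lo, hi

# Example: the previous group's disparities span 10..30, so the next
# group is searched over a modestly widened range.
prev = np.array([[10, 20], [15, 30]])
next_range = update_disparity_range(prev)
```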
Abstract:
A method and apparatus are provided for generating a three-dimensional image. The method comprises the steps of: determining deviations that exist between at least two image capturing devices, each configured to capture essentially the same image as the other(s); determining a correction function to enable correcting locations of pixels belonging to each stream of pixels to their true locations within an undistorted image; retrieving two or more streams of pixels, each associated with an image captured by a respective image capturing device; applying the correction function onto received pixels; applying a stereo matching algorithm for processing data; and generating a three-dimensional image based on the results obtained from the stereo matching algorithm.
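The correction step above can be sketched as a pixel re-mapping, assuming the correction function gives, for every pixel of the undistorted output, its source coordinates in the distorted input. A constant horizontal shift stands in here for the calibrated distortion model derived from the inter-device deviations; the function names are hypothetical.

```python
import numpy as np

def build_correction(shape, shift):
    """Hypothetical correction function: for every output pixel, the source
    coordinates in the distorted input. A constant horizontal shift stands
    in for the calibrated distortion model."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return ys, np.clip(xs + shift, 0, w - 1)

def correct_pixels(img, correction):
    """Move each incoming pixel to its true (undistorted) location."""
    ys, xs = correction
    return img[ys, xs]

# Example: undo a one-pixel horizontal distortion on a 3x4 image.
img = np.arange(12, dtype=np.int32).reshape(3, 4)
undistorted = correct_pixels(img, build_correction(img.shape, shift=1))
```

In a hardware pipeline this lookup would typically be applied on the fly to each arriving pixel stream, before the corrected streams are fed to the stereo matching stage.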
Abstract:
A natural user interface (NUI) computer processor is provided herein. The NUI computer processor may include: at least one computer processing module; and a plurality of sensors connected with direct, high-bandwidth connectors to the at least one computer processing module, wherein the computer processing module is configured to provide the full extent of processing power required for simultaneously handling multi-modal, high-resolution information gathered by said sensors, and wherein the computer processing module and the high-bandwidth connectors are cooperatively configured to eliminate non-vital delays, thereby reducing latency between human user actions captured by said sensors and the response of the NUI computer processor.