Abstract:
An autonomous mobile platform is described. The autonomous mobile platform is configured for use in road monitoring and comprises: a scanning module configured to be used for road scanning in order to enable identifying hazards associated with the road being scanned; a location finder configured to determine locations of hazards identified on the road being scanned; and at least one transmitter comprised in a respective autonomous mobile platform and configured to transmit data derived by the scanning module from the road scanning.
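The pairing of scanning-module detections with location-finder fixes before transmission can be illustrated with a minimal sketch; the record fields, labels, and function names below are hypothetical and are not taken from the abstract.

```python
# Minimal sketch (not the patented design) of pairing a detected hazard with
# the location finder's fix and serializing it for the transmitter.
from dataclasses import dataclass, asdict
import json


@dataclass
class HazardReport:
    hazard_type: str        # e.g. "pothole", "debris" (illustrative labels)
    latitude: float         # location-finder fix for the identified hazard
    longitude: float
    confidence: float       # scanning-module detection confidence, 0..1


def build_payload(reports):
    """Serialize hazard reports into a payload the transmitter could send."""
    return json.dumps([asdict(r) for r in reports]).encode("utf-8")


if __name__ == "__main__":
    reports = [HazardReport("pothole", 51.5007, -0.1246, 0.92)]
    print(build_payload(reports))
```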
Abstract:
A calibration arrangement configured to enable camera calibration, characterized in that it comprises a plurality of lighting zones, each illuminated by respective light sources operative at at least two different wavelengths, wherein the illumination of the plurality of lighting zones ensures that, when illuminated, no shadow is cast upon any of the plurality of lighting zones, even though at least some of the plurality of lighting zones are positioned orthogonally to each other, wherein currents provided to the plurality of lighting zones are controlled by a respective plurality of zone controllers, and wherein these currents are conveyed to each of the plurality of lighting zones via Ethernet cables.
Abstract:
A fish tape assembly comprising: a fish tape having first and second opposite ends, configured to be pushed/pulled via a conduit pathway extending within a wall; a location indicator coupled to the fish tape and configured to enable detecting a current location of the location indicator along the conduit pathway, through which the first end of the fish tape is being pushed; and a monitor configured to enable receiving information that relates to the current location of the location indicator, thereby allowing generation of a three-dimensional (3D) map of the conduit pathway.
Abstract:
A method and a device are provided for generating an image. The method comprises the steps of: (i) providing information that relates to an image of a single target captured by at least two image capturing devices; (ii) storing the provided information in at least two input buffers; (iii) sampling the stored information and storing the sampled information in at least two output line buffers, each corresponding to a different image resolution; (iv) processing the sampled information stored in the at least two output line buffers in accordance with pre-defined disparity related information, wherein the pre-defined disparity related information is associated with the respective one of the at least two image capturing devices that captured the information currently being processed; and (v) retrieving information from the at least two output line buffers and storing the retrieved information in a hybrid row buffer, for generating the image.
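The line-buffer flow of steps (ii) through (v) can be sketched as follows; the 2x downsampling, the horizontal-shift disparity model, and all function names are assumptions chosen for illustration and do not come from the abstract.

```python
# Illustrative sketch only: two input lines are sampled into per-resolution
# output line buffers, shifted by a per-camera pre-defined disparity, and
# gathered into a single hybrid row buffer.
import numpy as np


def fill_line_buffers(input_line, steps=(1, 2)):
    """Sample one stored input line into output line buffers, one per resolution."""
    return [input_line[::step].copy() for step in steps]


def apply_disparity(line_buffer, disparity_px):
    """Shift a line buffer by a per-camera, pre-defined disparity (in pixels)."""
    return np.roll(line_buffer, disparity_px)


def build_hybrid_row(buffers):
    """Retrieve processed lines and concatenate them into a hybrid row buffer."""
    return np.concatenate(buffers)


if __name__ == "__main__":
    # Two "cameras" observing the same target, stored in two input buffers.
    cam_a = np.arange(16, dtype=np.float32)
    cam_b = cam_a + 0.5
    rows = []
    for line, disparity in ((cam_a, 2), (cam_b, -2)):   # hypothetical disparities
        full_res, half_res = fill_line_buffers(line)
        rows.append(apply_disparity(full_res, disparity))
        rows.append(apply_disparity(half_res, disparity // 2))
    hybrid_row = build_hybrid_row(rows)
    print(hybrid_row.shape)
```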
Abstract:
A monitoring system is provided for monitoring a user's premises. The system comprises a monitoring device that comprises at least one sensor configured to enable determining the occurrence of one or more pre-defined events, wherein the monitoring device is characterized in that it is configured to automatically guide itself to move within the user's premises, and the system further comprises a docking station for the monitoring device.
Abstract:
An electronic device comprising: a display; one or more gaze detection sensors for determining a portion of the display to which a user's gaze is currently directed; a timer to measure periods of time associated with the user's current gaze at the display; and one or more processors operative to: receive data relating to periods of time measured by the timer and determine therefrom a characteristic rate at which the user shifts his gaze from one portion of the display to another; determine a portion of the display towards which the user's gaze was directed for a period of time longer than the period of time expected in accordance with his characteristic rate; identify an object included in the determined portion of the display; retrieve information that relates to the identified object; and enable displaying information which is based on the retrieved information.
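The dwell-time comparison can be sketched in a few lines; the median baseline, the threshold factor, and the sample data below are illustrative assumptions rather than the device's actual logic.

```python
# Rough sketch: estimate the user's characteristic gaze-shift interval from
# timer measurements and flag display portions whose dwell exceeds it.
from statistics import median


def characteristic_interval(dwell_times_s):
    """Median time the user spends on a display portion before shifting gaze."""
    return median(dwell_times_s)


def find_long_dwell(samples, factor=2.0):
    """Return portions whose dwell time exceeds the expected interval by `factor`.

    `samples` is a list of (portion_id, dwell_seconds) pairs; `factor` is an
    assumed threshold, not taken from the abstract.
    """
    baseline = characteristic_interval([t for _, t in samples])
    return [portion for portion, t in samples if t > factor * baseline]


if __name__ == "__main__":
    samples = [("headline", 0.4), ("photo", 0.5), ("product_ad", 2.1)]
    print(find_long_dwell(samples))   # -> ['product_ad']
```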
Abstract:
A system configured to operate in an unknown, possibly texture-less environment, with possibly self-similar surfaces, and comprising a plurality of platforms configured to operate as mobile platforms, where each of these platforms comprises an optical depth sensor, and one platform configured to operate as a static platform and comprising at least one optical projector. Upon operating the system, the static platform projects a pattern onto the environment, wherein each of the mobile platforms detects the pattern or a part thereof by its respective optical depth sensor while moving, and wherein information obtained by the optical depth sensors is used to determine moving instructions for the mobile platforms within that environment. Optionally, the system operates so that every time period another mobile platform from among the plurality of platforms takes the role of operating as the static platform, while the preceding platform returns to operate as a mobile platform.
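The optional role rotation can be illustrated with a simple round-robin sketch; the period count, platform identifiers, and function name below are chosen purely for illustration.

```python
# Sketch of a round-robin swap of the static-projector role among platforms.
from itertools import cycle


def rotate_static_role(platform_ids, periods):
    """Yield (period, static_platform, mobile_platforms) assignments."""
    order = cycle(platform_ids)
    for period in range(periods):
        static = next(order)
        mobiles = [p for p in platform_ids if p != static]
        yield period, static, mobiles


if __name__ == "__main__":
    for period, static, mobiles in rotate_static_role(["P1", "P2", "P3"], 3):
        print(f"period {period}: {static} projects the pattern, "
              f"{mobiles} move and sense depth")
```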
Abstract:
A method is provided for calibrating a structured light system. The structured light system comprises a camera, a processor and a projector that emits light in an unknown pattern. According to this method, the projector emits the unknown pattern at two different distances relative to the camera's location, and the camera captures the pattern as projected at the different distances. The processor then determines the vertical disparity between the two captured images and estimates a relative orientation between the camera and the projector. This estimation is in turn applied in the calibration of the structured light system.
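A heavily simplified sketch of using vertical disparity to estimate relative orientation is given below; it assumes matched pattern features from the two captures and models the misalignment as a single roll angle under ideal pinhole geometry, which is an illustration rather than the patented procedure.

```python
# Simplified sketch: for an ideally aligned rig, features only move
# horizontally when the projection distance changes; a consistent vertical
# component is modeled here as rotation about the optical axis.
import numpy as np


def vertical_disparity(points_near, points_far):
    """Per-feature vertical disparity between captures at two distances."""
    return points_far[:, 1] - points_near[:, 1]


def estimate_roll(points_near, points_far):
    """Estimate a relative roll angle (radians) explaining the vertical drift."""
    dx = points_far[:, 0] - points_near[:, 0]
    dy = vertical_disparity(points_near, points_far)
    return float(np.arctan2(np.mean(dy), np.mean(dx)))


if __name__ == "__main__":
    near = np.array([[100.0, 50.0], [200.0, 80.0], [300.0, 120.0]])
    far = near + np.array([12.0, 0.8])      # synthetic shift with slight tilt
    print(np.degrees(estimate_roll(near, far)))
```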
Abstract:
A method and apparatus for generating a three-dimensional depth map are provided. The method comprises the steps of: (i) illuminating a target by a pattern projector having background intensity or by a combination of a pattern projector which does not have background intensity operative together with a flood projector; (ii) capturing at least one image that comprises one or more objects present at the illuminated target; (iii) converting the at least one captured image into data; (iv) processing the data received from the conversion of the at least one captured image while filtering out the projected pattern from the processed data; (v) detecting edges of at least one of the one or more objects present at the illuminated target; and (vi) generating a three-dimensional depth map that comprises the at least one of the one or more objects whose edges have been detected.
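Steps (iv) and (v) can be sketched under assumptions the abstract does not make: the projected pattern is treated as sparse impulse-like dots removable by a median filter, and edges are detected with a Canny detector on the filtered image.

```python
# Illustrative sketch of pattern filtering followed by edge detection.
import cv2
import numpy as np


def filter_pattern(image_u8, ksize=5):
    """Suppress the projected dot pattern (modeled as impulse-like noise)."""
    return cv2.medianBlur(image_u8, ksize)


def detect_object_edges(image_u8, low=50, high=150):
    """Detect edges of objects in the filtered capture."""
    return cv2.Canny(image_u8, low, high)


if __name__ == "__main__":
    # Synthetic capture: a bright square (object) plus random "pattern" dots.
    capture = np.zeros((120, 120), dtype=np.uint8)
    capture[30:90, 30:90] = 180
    rng = np.random.default_rng(0)
    ys, xs = rng.integers(0, 120, 200), rng.integers(0, 120, 200)
    capture[ys, xs] = 255
    edges = detect_object_edges(filter_pattern(capture))
    print(int(edges.sum() > 0))   # edges found after pattern removal
```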