Generating a distance map based on captured images of a scene

    Publication No.: US10510149B2

    Publication date: 2019-12-17

    Application No.: US15743948

    Application date: 2016-07-08

    Abstract: Techniques are described for generating a distance map (e.g., a map of disparity, depth or other distance values) for image elements (e.g., pixels) of an image capture device. The distance map is generated based on an initial distance map (obtained, e.g., using a block or code matching algorithm) and a segmentation map (obtained using a segmentation algorithm). In some instances, the resulting distance map can be less sparse than the initial distance map, can contain more accurate distance values, and can be sufficiently fast for real-time or near real-time applications. The resulting distance map can be converted, for example, to a color-coded distance map of a scene that is presented on a display device.
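A minimal sketch of the general idea in the abstract is shown below: an initial sparse distance map (e.g., from block matching) is refined using a segmentation map. The per-segment median fill, the function name `refine_distance_map`, and the `invalid_value` convention are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def refine_distance_map(initial_disparity: np.ndarray,
                        segmentation: np.ndarray,
                        invalid_value: float = 0.0) -> np.ndarray:
    """Fill invalid pixels of a sparse disparity map, segment by segment.

    initial_disparity: HxW array, e.g. from a block or code matching step;
        pixels without a match hold `invalid_value`.
    segmentation: HxW integer label map from any segmentation algorithm.
    """
    refined = initial_disparity.copy()
    for label in np.unique(segmentation):
        segment = segmentation == label
        valid = segment & (initial_disparity != invalid_value)
        if valid.any():
            # Assumed heuristic: propagate a robust per-segment statistic
            # (the median) into the unmatched pixels of the same segment,
            # producing a denser map than the initial one.
            refined[segment & ~valid] = np.median(initial_disparity[valid])
    return refined
```

The resulting array could then be mapped to colors (e.g., with a perceptual colormap) to produce the color-coded distance map mentioned at the end of the abstract.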

    Generating a merged, fused three-dimensional point cloud based on captured images of a scene

    Publication No.: US10699476B2

    Publication date: 2020-06-30

    Application No.: US15749825

    Application date: 2016-08-04

    Abstract: Presenting a merged, fused three-dimensional point cloud includes acquiring multiple sets of images of a scene from different vantage points, each set of images including respective stereo-matched images and a color image. For each respective set of images, a disparity map is obtained based on the stereo images, data from the color image is fused onto the disparity map so as to generate a fused disparity map, and a three-dimensional fused point cloud is created from the fused disparity map. The respective three-dimensional fused point clouds are merged together so as to obtain a merged, fused three-dimensional point cloud. The techniques can be advantageous even under the constraints of sparseness and low depth resolution, and are suitable, in some cases, for real-time or near real-time applications in which computing time needs to be reduced.
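A minimal sketch of the pipeline outlined in the abstract, under simplifying assumptions: disparity is converted to depth with a standard pinhole stereo model (the focal length `f`, baseline `b`, and principal point `cx`, `cy` are assumed parameters), color is fused onto each valid pixel, and the per-view clouds are merged using known camera-to-world poses. The function names and the 4x4 pose convention are illustrative, not taken from the patent.

```python
import numpy as np

def disparity_to_fused_cloud(disparity, color, f, b, cx, cy):
    """Return an N x 6 array (x, y, z, r, g, b) for pixels with valid disparity."""
    v, u = np.nonzero(disparity > 0)           # valid (possibly sparse) pixels
    z = f * b / disparity[v, u]                # depth from stereo disparity
    x = (u - cx) * z / f
    y = (v - cy) * z / f
    # Fuse the color image onto the disparity-derived 3D points.
    return np.column_stack([x, y, z, color[v, u]])

def merge_fused_clouds(clouds, poses):
    """Transform each cloud by its 4x4 camera-to-world pose and concatenate."""
    merged = []
    for cloud, pose in zip(clouds, poses):
        pts = cloud[:, :3] @ pose[:3, :3].T + pose[:3, 3]
        merged.append(np.column_stack([pts, cloud[:, 3:]]))
    return np.vstack(merged)
```

In use, one fused cloud would be built per set of images and the results passed to `merge_fused_clouds` together with the corresponding vantage-point poses.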
