Abstract:
A device includes a first camera and a processor configured to detect one or more first keypoints within a first image captured by the first camera at a first time, detect one or more second keypoints within a second image captured by a second camera at the first time, and detect the one or more first keypoints within a third image captured by the first camera at a second time. The processor is configured to determine a pose estimation based on coordinates of the one or more first keypoints of the first image relative to a common coordinate system, coordinates of the one or more second keypoints of the second image relative to the common coordinate system, and coordinates of the one or more first keypoints of the third image relative to the common coordinate system. The first coordinate system is different from the common coordinate system.
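The abstract above does not specify how the pose is computed from the keypoint coordinates once they are expressed in the common coordinate system. A standard choice, shown here purely as an illustrative sketch (the function name and the use of 3D coordinates are assumptions, not part of the abstract), is the Kabsch/Procrustes method: recover the rigid rotation and translation that best maps the keypoints observed at the first time onto the same keypoints observed at the second time.

```python
import numpy as np

def estimate_pose(points_a, points_b):
    """Estimate the rigid transform (R, t) mapping points_a onto points_b.

    points_a, points_b: Nx3 arrays of matched keypoint coordinates,
    both expressed in the same (common) coordinate system, observed at
    two different times. Uses the Kabsch method via SVD.
    """
    centroid_a = points_a.mean(axis=0)
    centroid_b = points_b.mean(axis=0)
    a = points_a - centroid_a                # center both point sets
    b = points_b - centroid_b
    h = a.T @ b                              # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = centroid_b - r @ centroid_a
    return r, t
```

Given noisy correspondences, the same machinery is typically wrapped in a robust estimator (e.g. RANSAC) so that mismatched keypoints do not corrupt the pose.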
Abstract:
Systems, methods, and devices for generating a depth map of a scene are provided. The method comprises projecting, onto the scene, a codeword pattern including a plurality of light points including a first set of light points projected at a first intensity and a second set of light points projected at a second intensity greater than the first intensity. The method further comprises detecting, from the scene, a reflection of the codeword pattern. The method further comprises generating a first depth map layer at a first resolution. The method further comprises generating a second depth map layer at a second resolution lower than the first resolution. The method further comprises generating the depth map of the scene, wherein the depth map includes depth information for the scene according to each of the first depth map layer and the second depth map layer.
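One plausible way to combine the two layers described above is to treat the first (higher-resolution) layer as primary and fall back to the second (lower-resolution) layer where the fine pattern could not be decoded. The sketch below assumes exactly that fill strategy, with `np.nan` marking undecoded points and nearest-neighbour upsampling of the coarse layer; none of these details are specified by the abstract.

```python
import numpy as np

def fuse_depth_layers(high_res, low_res):
    """Fuse a fine depth-map layer with a coarser fallback layer.

    high_res: 2D array, np.nan where no codeword was decoded.
    low_res:  2D array at a lower resolution covering the same scene.
    Missing high-resolution depths are filled from the low-resolution
    layer after nearest-neighbour upsampling.
    """
    h, w = high_res.shape
    lh, lw = low_res.shape
    rows = np.arange(h) * lh // h            # map each fine row/col to
    cols = np.arange(w) * lw // w            # its coarse counterpart
    upsampled = low_res[np.ix_(rows, cols)]
    return np.where(np.isnan(high_res), upsampled, high_res)
```

The resulting map carries depth information from both layers, as the abstract describes: fine detail where the first layer is valid, coverage from the second layer elsewhere.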
Abstract:
Systems and methods for correcting errors in a depth map generated by a structured light system are disclosed. In one aspect, a method includes dividing a depth map into segments and calculating a density distribution of the depth values for each segment. The method includes detecting error (or “outlier”) values by determining the depth values that fall outside of a range of depth values, the range of depth values representative of the highest density depth values for a given segment. The method includes detecting error values in the depth map as a whole based on the density distribution values for each segment.
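The per-segment density test above can be sketched as follows. This is a minimal illustration, not the patented method: the segment size, the 80% keep fraction, and the use of the shortest interval containing that fraction as the "highest density" range are all assumptions made for the example.

```python
import numpy as np

def find_outliers(depth_map, segment_size=8, keep_fraction=0.8):
    """Flag depth values outside the densest value range of their segment.

    The map is tiled into segment_size x segment_size blocks. Within each
    block, the shortest interval containing keep_fraction of the sorted
    values is taken as the high-density range; values outside it are
    marked as outliers.
    """
    h, w = depth_map.shape
    outliers = np.zeros_like(depth_map, dtype=bool)
    for r in range(0, h, segment_size):
        for c in range(0, w, segment_size):
            block = depth_map[r:r + segment_size, c:c + segment_size]
            vals = np.sort(block.ravel())
            n = len(vals)
            k = max(1, int(np.ceil(keep_fraction * n)))
            if k >= n:
                continue                      # too few values to reject any
            # width of every interval spanning k consecutive sorted values
            widths = vals[k - 1:] - vals[:n - k + 1]
            i = int(np.argmin(widths))        # shortest = highest density
            lo, hi = vals[i], vals[i + k - 1]
            outliers[r:r + segment_size, c:c + segment_size] = \
                (block < lo) | (block > hi)
    return outliers
```

Flagged values would then be corrected (e.g. replaced by interpolation from valid neighbours) or discarded when assembling the final depth map.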
Abstract:
An interactive display, including a cover glass having a front surface that includes a viewing area, provides an input/output (I/O) interface for a user of an electronic device. An arrangement includes a processor, a light source, and a camera disposed outside the periphery of the viewing area, coplanar with or behind the cover glass. The camera receives scattered light resulting from interaction of light output by the interactive display with an object, the scattered light being received by the cover glass from the object and directed toward the camera. The processor determines, from image data output by the camera, an azimuthal angle of the object with respect to an optical axis of the camera and/or a distance of the object from the camera.
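The abstract leaves the angle computation unspecified. One simple possibility, shown here as an assumed sketch (the centroid step and the pinhole model are illustrative choices, not taken from the abstract), is to locate the scattered-light blob by its intensity-weighted centroid and convert its pixel offset from the principal point into an angle using the camera's focal length.

```python
import math

def object_centroid(image):
    """Intensity-weighted centroid (row, col) of a 2D grayscale image,
    a simple proxy for the position of the scattered-light spot."""
    total = sr = sc = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            total += v
            sr += r * v
            sc += c * v
    return sr / total, sc / total

def azimuth_deg(col, principal_col, focal_px):
    """Azimuthal angle (degrees) of the object with respect to the
    camera's optical axis, assuming a pinhole camera whose focal
    length is focal_px pixels."""
    return math.degrees(math.atan2(col - principal_col, focal_px))
```

Distance could be estimated analogously, e.g. from the spot's size or from triangulation between the light source and the camera, but that is outside this sketch.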
Abstract:
Methods and systems for surface normal estimation are disclosed. In some aspects, a plurality of images or depth maps representing a three-dimensional object from multiple viewpoints is received. Surface normals at surface points within a single image of the plurality of images are estimated based on surface points within the single image. An electronic representation of a three-dimensional surface of the object is generated based on the surface normals and a point cloud comprised of surface points derived from the plurality of images.
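A common way to estimate a normal from surface points within a single image is a local plane fit: take the eigenvector of the neighbourhood's covariance with the smallest eigenvalue. The sketch below assumes this PCA approach and a viewpoint at the origin for orienting the normal; the abstract itself does not commit to either choice.

```python
import numpy as np

def estimate_normal(points):
    """Estimate the surface normal of a local 3D neighbourhood.

    points: Nx3 array of nearby surface points (e.g. back-projected
    from a single depth image). The normal is the eigenvector of the
    local covariance with the smallest eigenvalue (local plane fit).
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    _, eigvecs = np.linalg.eigh(cov)         # eigenvalues ascending
    normal = eigvecs[:, 0]                   # smallest-variance direction
    # Orient toward the viewpoint (assumed at the origin) so normals
    # estimated from different viewpoints are mutually consistent.
    if np.dot(normal, -points.mean(axis=0)) < 0:
        normal = -normal
    return normal
```

Repeating this per surface point, across each image of the plurality, yields the oriented normals that are then combined with the merged point cloud to build the surface representation.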
Abstract:
Systems and methods for depth enhanced and content aware video stabilization are disclosed. In one aspect, the method identifies keypoints in images, each keypoint corresponding to a feature. The method then estimates the depth of each keypoint, where depth is the distance from the feature to the camera. The method selects keypoints within a depth tolerance. The method determines camera positions based on the selected keypoints, each camera position representing the position of the camera when the camera captured one of the images. The method determines a first trajectory of camera positions based on the camera positions, and generates a second trajectory of camera positions based on the first trajectory and adjusted camera positions. The method generates adjusted images by adjusting the images based on the second trajectory of camera positions.
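Two of the steps above lend themselves to a short sketch: selecting keypoints within a depth tolerance, and deriving a smoothed second trajectory from the first. The moving-average smoother and the target-depth parameter below are illustrative assumptions; the abstract does not specify how the adjusted trajectory is generated.

```python
import numpy as np

def select_keypoints(depths, target_depth, tolerance):
    """Indices of keypoints whose estimated depth lies within a
    tolerance of a target depth, so stabilization is driven by
    features at a consistent distance from the camera."""
    depths = np.asarray(depths, dtype=float)
    return np.nonzero(np.abs(depths - target_depth) <= tolerance)[0]

def smooth_trajectory(positions, window=5):
    """Generate an adjusted (second) trajectory by moving-average
    smoothing of the per-frame camera positions (first trajectory).

    positions: Nx3 array, one camera position per frame. Windows
    shrink at the sequence edges.
    """
    n = len(positions)
    half = window // 2
    smoothed = np.empty_like(positions, dtype=float)
    for i in range(n):
        lo = max(0, i - half)
        hi = min(n, i + half + 1)
        smoothed[i] = positions[lo:hi].mean(axis=0)
    return smoothed
```

Each frame would then be warped by the offset between its position on the first trajectory and its position on the smoothed second trajectory to produce the adjusted images.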