Abstract:
Described are methods and apparatus for adjusting images of a stereoscopic image pair. The methods and apparatus may capture first and second images with first and second imaging sensors. The two imaging sensors have intrinsic and extrinsic parameters. A normalized focal distance of a reference imaging sensor may also be determined based on the intrinsic and extrinsic parameters. A calibration matrix is then adjusted based on the normalized focal distance. The calibration matrix may be applied to an image captured by an image sensor.
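As a rough sketch of the last two steps, assuming a pinhole model in which the calibration matrix is the 3x3 intrinsic matrix and the normalized focal distance simply scales its focal terms (neither detail is specified in the abstract):

```python
import numpy as np

def adjust_calibration(K, f_norm):
    """Scale the focal terms of a 3x3 intrinsic matrix by the
    normalized focal distance (hypothetical convention)."""
    K_adj = K.copy()
    K_adj[0, 0] *= f_norm  # fx
    K_adj[1, 1] *= f_norm  # fy
    return K_adj

def apply_calibration(points_px, K_old, K_new):
    """Remap pixel coordinates under the homography H = K_new @ inv(K_old)."""
    H = K_new @ np.linalg.inv(K_old)
    pts_h = np.column_stack([points_px, np.ones(len(points_px))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
print(apply_calibration(np.array([[840.0, 460.0]]), K, adjust_calibration(K, 1.02)))
```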
Abstract:
Systems and methods for depth enhanced and content aware video stabilization are disclosed. In one aspect, the method identifies keypoints in images, each keypoint corresponding to a feature. The method then estimates the depth of each keypoint, where depth is the distance from the feature to the camera. The method selects keypoints within a depth tolerance. The method determines camera positions based on the selected keypoints, each camera position representing the position of the camera when the camera captured one of the images. The method determines a first trajectory of camera positions based on the camera positions, and generates a second trajectory of camera positions based on the first trajectory and adjusted camera positions. The method generates adjusted images by adjusting the images based on the second trajectory of camera positions.
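A minimal numpy sketch of the pipeline, with the mean position of in-tolerance keypoints as a 2-D proxy for camera position and a moving average standing in for whatever trajectory filter the method actually uses (both assumptions):

```python
import numpy as np

def stabilize(frame_keypoints, frame_depths, ref_depth, tol, win=5):
    """frame_keypoints[i]: (N_i, 2) pixel positions; frame_depths[i]: (N_i,)
    estimated feature depths. Assumes every frame keeps at least one
    keypoint within the depth tolerance."""
    positions = []
    for kps, depths in zip(frame_keypoints, frame_depths):
        mask = np.abs(depths - ref_depth) <= tol   # select by depth
        positions.append(kps[mask].mean(axis=0))   # camera-position proxy
    first = np.asarray(positions)                  # first trajectory
    kernel = np.ones(win) / win                    # moving-average smoother
    second = np.column_stack(
        [np.convolve(first[:, i], kernel, mode="same") for i in range(2)]
    )                                              # second (smoothed) trajectory
    return second - first                          # per-frame correction shift

kps = [np.array([[10.0, 5.0], [50.0, 40.0]]) + t for t in range(8)]
dep = [np.array([2.0, 9.0])] * 8
print(stabilize(kps, dep, ref_depth=2.0, tol=1.0).round(2))
```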
Abstract:
Systems and methods for correcting stereo yaw of a stereoscopic image sensor pair using autofocus feedback are disclosed. A stereo depth of an object in an image is estimated from the disparity of the object between the images captured by each sensor of the image sensor pair. An autofocus depth to the object is found from the autofocus lens position. If the difference between the stereo depth and the autofocus depth is nonzero, one of the images is warped and the disparity is recalculated until the stereo depth and the autofocus depth to the object are substantially the same.
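One way to read the loop, as an illustrative sketch only: assume a small-angle model in which a yaw error shifts disparity by roughly focal_px * yaw (the abstract does not give the warp model), and iterate until the triangulated depth Z = f * B / d matches the autofocus depth:

```python
def correct_yaw(disparity_px, af_depth_m, focal_px, baseline_m,
                tol_m=1e-3, max_iter=50):
    """Iterate: warp (here, shift disparity) and recompute stereo depth
    Z = f * B / d until it matches the autofocus depth."""
    yaw_rad = 0.0
    for _ in range(max_iter):
        d = disparity_px - focal_px * yaw_rad        # disparity after warp
        stereo_depth = focal_px * baseline_m / d     # Z = f * B / d
        err = stereo_depth - af_depth_m
        if abs(err) < tol_m:
            break
        # Newton step using dZ/dyaw = f^2 * B / d^2
        yaw_rad -= err / (focal_px ** 2 * baseline_m / d ** 2)
    return yaw_rad

print(correct_yaw(disparity_px=20.0, af_depth_m=4.0,
                  focal_px=1000.0, baseline_m=0.1))  # ~ -0.005 rad
```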
Abstract:
An electronic imaging device and method for image capture are described. The imaging device includes a camera configured to obtain image information of a scene and that may be focused on a region of interest in the scene. The imaging device also includes a LIDAR unit configured to obtain depth information of at least a portion of the scene at specified scan locations of the scene. The imaging device is configured to detect an object in the scene and to provide specified scan locations to the LIDAR unit. The camera is configured to capture an image with an adjusted focus based on depth information, obtained by the LIDAR unit, associated with the detected object.
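A skeletal sketch with stand-in detector, LIDAR, and focus interfaces (all hypothetical), using the thin-lens equation to turn a LIDAR depth at the detected object's location into a focus setting:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

class LidarStub:
    """Stand-in for the LIDAR unit: returns depth at one scan location."""
    def depth_at(self, x, y):
        return 2.5  # metres, fixed for the sketch

def lens_position_from_depth(depth_m, f_m=0.004):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i  =>  d_i = f*d_o / (d_o - f)."""
    return f_m * depth_m / (depth_m - f_m)

def focus_on_detection(box, lidar):
    # Scan only where the detected object is, then derive the focus setting.
    cx, cy = box.x + box.w / 2, box.y + box.h / 2
    return lens_position_from_depth(lidar.depth_at(cx, cy))

print(focus_on_detection(Box(100, 80, 50, 120), LidarStub()))
```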
Abstract:
Aspects of the present disclosure relate to systems and methods for time-of-flight ranging. An example time-of-flight system includes a transmitter including a plurality of light emitters for transmitting focused light, the plurality of light emitters including a first group of light emitters for transmitting focused light with a first field of transmission and a second group of light emitters for transmitting focused light with a second field of transmission. The first field of transmission at a depth from the transmitter is larger than the second field of transmission at the depth from the transmitter. The time-of-flight system also includes a receiver to receive reflections of the transmitted light.
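The size relation between the two fields of transmission can be made concrete with simple cone geometry; the divergence angles below are invented for illustration (the abstract gives none):

```python
import math

def field_of_transmission_width_m(divergence_deg, depth_m):
    """Width of an emitter group's field of transmission at a given depth,
    modeling the field as a cone with the stated full divergence angle
    (a geometric idealization)."""
    return 2.0 * depth_m * math.tan(math.radians(divergence_deg) / 2.0)

depth_m = 2.0
print(field_of_transmission_width_m(30.0, depth_m))  # first group: ~1.07 m
print(field_of_transmission_width_m(10.0, depth_m))  # second group: ~0.35 m
```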
Abstract:
Systems, apparatus, and methods are disclosed for generating a fused depth map from one or more individual depth maps, wherein the fused depth map is configured to provide robust depth estimation for points within the depth map. The methods, apparatus, or systems may comprise components that identify a field of view (FOV) of an imaging device configured to capture an image of the FOV and select a first depth sensing method. The system or method may sense a depth of the FOV with respect to the imaging device using the first selected depth sensing method and generate a first depth map of the FOV based on the sensed depth of the first selected depth sensing method. The system or method may also identify a region of one or more points of the first depth map having one or more inaccurate depth measurements and determine if additional depth sensing is needed.
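A minimal sketch of the fusion and the "more sensing needed" decision, assuming each depth map carries a per-point confidence (the abstract does not specify the fusion rule; the thresholds are illustrative):

```python
import numpy as np

def fuse_depth_maps(depth_a, conf_a, depth_b, conf_b):
    """Per point, keep the depth from whichever map is more confident."""
    return np.where(conf_a >= conf_b, depth_a, depth_b)

def flag_inaccurate(conf, conf_floor=0.5):
    """Region of points whose depth measurement is deemed inaccurate."""
    return conf < conf_floor

def needs_more_sensing(bad_mask, max_bad_fraction=0.05):
    """Decide whether a pass with another depth sensing method is needed."""
    return bad_mask.mean() > max_bad_fraction

rng = np.random.default_rng(0)
d1, c1 = rng.uniform(0.5, 5.0, (4, 4)), rng.uniform(0.0, 1.0, (4, 4))
d2, c2 = rng.uniform(0.5, 5.0, (4, 4)), rng.uniform(0.0, 1.0, (4, 4))
fused = fuse_depth_maps(d1, c1, d2, c2)
print(needs_more_sensing(flag_inaccurate(np.maximum(c1, c2))))
```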
Abstract:
Exemplary embodiments are directed to configurable demodulation of image data produced by an image sensor. In some aspects, a method includes receiving information indicating a configuration of the image sensor. In some aspects, the information may indicate a configuration of sensor elements and/or corresponding color filters for the sensor elements. A modulation function may then be generated based on the information. In some aspects, the method also includes demodulating the image data based on the generated modulation function to determine chrominance and luminance components of the image data, and generating a second image based on the determined chrominance and luminance components.
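As a toy sketch of one common reading: in a 2x2 color filter array, luminance sits at baseband and chrominance rides on alternating-sign carriers, so demodulation is carrier multiplication plus low-pass filtering. The carrier-to-chroma mapping and the box filter below are assumptions, not the abstract's method:

```python
import numpy as np

def modulation_carriers(h, w, pattern="RGGB"):
    """Carriers that modulate chrominance in a 2x2 CFA. The pattern
    string would select which carrier holds which chroma channel;
    it is unused in this sketch beyond documentation."""
    y, x = np.mgrid[0:h, 0:w]
    return {"cx": (-1.0) ** x, "cy": (-1.0) ** y, "cxy": (-1.0) ** (x + y)}

def demodulate(raw, carriers, k=5):
    """Luminance = low-pass of the mosaic; each chrominance component =
    low-pass of the mosaic multiplied by its carrier."""
    def lowpass(img):
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)
    luma = lowpass(raw)
    chroma = {name: lowpass(raw * c) for name, c in carriers.items()}
    return luma, chroma

raw = np.random.default_rng(0).uniform(0.0, 1.0, (8, 8))  # toy mosaic
luma, chroma = demodulate(raw, modulation_carriers(8, 8))
print(luma.shape, sorted(chroma))
```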
Abstract:
Systems and methods of triggering an event based on meeting a certain depth criterion in an image are disclosed. One innovation includes a method of identifying at least one object in a field of view of an imaging device, the imaging device configured to capture at least one image of the field of view, determining a threshold depth level, measuring a depth of the at least one object within the field of view with respect to the imaging device, comparing the measured depth of the at least one object to the threshold depth level, and capturing an image of the object when the depth of the object within the field of view exceeds the threshold depth level.
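The trigger itself reduces to a single comparison; a minimal sketch, with the receding object and threshold invented for illustration:

```python
def depth_trigger(object_depth_m, threshold_m):
    """Fire the capture event when the object's measured depth exceeds
    the threshold depth level (the abstract's criterion)."""
    return object_depth_m > threshold_m

# Hypothetical object receding from the camera across frames.
for depth_m in (0.8, 1.4, 2.1):
    if depth_trigger(depth_m, threshold_m=2.0):
        print(f"capture at {depth_m} m")
```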
Abstract:
Methods and apparatus for active depth sensing are disclosed. In some aspects, an imaging device may generate first depth information based on an active sensing technology, such as structured light. In some aspects, at least some of the first depth information may be missing or inaccurate, perhaps due to an extended range between the imaging device and a subject of the image. Additional range information may then be generated based on a zero order component of the structured light. The additional range information may then be used alone or combined with the first depth information and used to control one or more parameters of an imaging device, such as an exposure time and/or aperture.
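A minimal sketch of combining the two sources and driving exposure from range. The hole-filling rule and the inverse-square exposure law are assumptions; the zero-order range is taken as already computed:

```python
import numpy as np

def fill_with_zero_order(depth_map, zero_order_range_m):
    """Replace missing structured-light depths (NaN) with the coarse
    range inferred from the zero-order component (assumed precomputed)."""
    filled = depth_map.copy()
    filled[np.isnan(filled)] = zero_order_range_m
    return filled

def exposure_ms(range_m, base_ms=10.0, ref_m=1.0):
    """Toy control law: scale exposure with inverse-square illumination
    falloff, so farther subjects get longer exposures."""
    return base_ms * (range_m / ref_m) ** 2

depth = np.array([[1.0, np.nan], [np.nan, 1.2]])  # holes at extended range
fused = fill_with_zero_order(depth, zero_order_range_m=3.0)
print(exposure_ms(fused.mean()))
```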