Abstract:
Systems, methods, and computer readable media for calibrating two cameras (image capture units) using a non-standard, and initially unknown, calibration object are described. More particularly, an iterative approach to determining the structure and pose of a target object in an unconstrained environment is disclosed. The target object may be any of a number of predetermined objects such as a specific three-dimensional (3D) shape, a specific type of animal (e.g., dogs), or the face of an arbitrary human. Virtually any object whose structure may be expressed in terms of a relatively low-dimensional parameterized model may be used as a target object. The identified object (i.e., its pose and shape) may be used as input to a bundle adjustment operation, resulting in camera calibration.
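To make the idea concrete, below is a minimal sketch (not the patented method itself) of the final step: a bundle adjustment that jointly refines a low-dimensional parameterized object model and the relative pose of a second camera by minimizing reprojection error. The shape model (`mean_shape`, `shape_basis`), the shared intrinsics `K`, and the synthetic observations are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points, rvec, tvec, K):
    """Pinhole projection of Nx3 world points into one camera."""
    cam = points @ Rotation.from_rotvec(rvec).as_matrix().T + tvec
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, mean_shape, shape_basis, K, obs_a, obs_b):
    """Reprojection error for both cameras; camera A is held as reference."""
    k = shape_basis.shape[0]
    coeffs, rvec, tvec = params[:k], params[k:k + 3], params[k + 3:]
    shape = mean_shape + (coeffs @ shape_basis).reshape(-1, 3)  # parameterized object
    pa = project(shape, np.zeros(3), np.zeros(3), K)
    pb = project(shape, rvec, tvec, K)
    return np.concatenate([(pa - obs_a).ravel(), (pb - obs_b).ravel()])

# Toy usage with synthetic observations: recover the relative camera pose.
N, k = 12, 4
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N, 3)) + np.array([0.0, 0.0, 8.0])
shape_basis = 0.1 * rng.normal(size=(k, N * 3))
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
true = np.concatenate([0.5 * rng.normal(size=k), [0.0, 0.2, 0.0, -1.0, 0.0, 0.1]])
shape_true = mean_shape + (true[:k] @ shape_basis).reshape(-1, 3)
obs_a = project(shape_true, np.zeros(3), np.zeros(3), K)
obs_b = project(shape_true, true[k:k + 3], true[k + 3:], K)
fit = least_squares(residuals, np.zeros(k + 6),
                    args=(mean_shape, shape_basis, K, obs_a, obs_b))
print("relative pose (rvec, tvec):", fit.x[k:])
```

The iterative structure-and-pose estimation the abstract describes would supply the initial guess; here a zero vector suffices only because the toy problem is mild.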
Abstract:
Systems, methods, and computer readable media to fuse digital images are described. In general, techniques are disclosed that use multi-band noise reduction to represent input and reference images as pyramids. Once decomposed in this manner, images may be fused using novel low-level (noise-dependent) similarity measures. In some implementations, similarity measures may be based on intra-level comparisons between reference and input images. In other implementations, similarity measures may be based on inter-level comparisons. In still other implementations, mid-level semantic features such as black level may be used to inform the similarity measure. In yet other implementations, high-level semantic features such as color or a specified type of region (e.g., moving, stationary, or having a face or other specified shape) may be used to inform the similarity measure.
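The sketch below illustrates one plausible reading of the intra-level variant: both images are decomposed into band-pass pyramids, and each level is fused with a weight derived from a noise-dependent similarity between corresponding bands. The pyramid construction, the Gaussian similarity kernel, and the per-level noise figures are assumptions, not details from the text.

```python
import numpy as np

def down(img):
    """2x2 average downsample (dimensions assumed to be powers of two)."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def up(img):
    """Nearest-neighbour upsample back to the finer grid."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def band_pyramid(img, levels):
    """Band-pass pyramid: `levels` difference bands plus a low-pass base."""
    pyr, cur = [], img
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))
        cur = small
    pyr.append(cur)
    return pyr

def collapse(pyr):
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = up(cur) + band
    return cur

def fuse(ref, inp, noise_sigma):
    """noise_sigma[i]: assumed noise std of band i (finest first)."""
    levels = len(noise_sigma)
    pr, pi = band_pyramid(ref, levels), band_pyramid(inp, levels)
    out = []
    for b_ref, b_in, s in zip(pr[:-1], pi[:-1], noise_sigma):
        # Intra-level, noise-dependent similarity: ~1 where the bands agree
        # to within the noise, ~0 where they differ (likely real change).
        w = np.exp(-((b_in - b_ref) ** 2) / (2.0 * (3.0 * s) ** 2 + 1e-12))
        out.append((b_ref + w * b_in) / (1.0 + w))
    out.append((pr[-1] + pi[-1]) / 2.0)  # low-pass base: plain average
    return collapse(out)
```

Where bands agree to within the expected noise the two images are averaged, reducing noise; where they disagree, the reference is kept, avoiding ghosting.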
Abstract:
An image sensor includes a pixel array having a plurality of pixels. Pixels can be summed or binned diagonally in the pixel array along a first diagonal direction and along a second, different diagonal direction. The locations of the first and second diagonally summed pairs can be distributed across the pixel array.
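A toy model of the scheme, assuming a monochrome array with even dimensions for clarity: each 2x2 tile is summed along one diagonal, and the chosen diagonal alternates in a checkerboard so both orientations are distributed across the array.

```python
import numpy as np

def diagonal_bin(raw):
    """Sum each 2x2 tile along one diagonal, alternating the direction
    in a checkerboard pattern across the array."""
    a = raw[0::2, 0::2]  # top-left of each 2x2 tile
    b = raw[0::2, 1::2]  # top-right
    c = raw[1::2, 0::2]  # bottom-left
    d = raw[1::2, 1::2]  # bottom-right
    main = a + d         # '\' diagonal sum
    anti = b + c         # '/' diagonal sum
    rows, cols = main.shape
    use_main = (np.add.outer(np.arange(rows), np.arange(cols)) % 2) == 0
    return np.where(use_main, main, anti)
```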
Abstract:
Image enhancement is achieved by separating image signals, e.g., YCbCr image signals, into a series of frequency bands and performing locally-adaptive noise reduction on bands below a given frequency but not on bands above that frequency. The bands are then summed to produce the enhanced image signals. This YCbCr, multi-band, locally-adaptive approach to denoising is able to operate independently, and in an optimized fashion, on both the luma and chroma channels. Noise reduction is based on models developed for both luma and chroma channels from measurements taken across multiple frequency bands, in multiple patches of the ColorChecker chart, and at multiple gain levels. The result is a simple yet robust set of models that may be tuned off-line, a single time, for each camera and then applied in real time to images taken by such cameras, without excessive processing requirements and with satisfactory results across illuminant types and lighting conditions.
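A minimal per-channel sketch of the band-split-and-sum structure follows; the Gaussian band split, the soft-threshold noise reducer, and the numeric noise figures are stand-ins, since the abstract specifies the architecture but not these particulars.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_threshold(band, t):
    """Shrink band coefficients toward zero by t (a simple noise reducer)."""
    return np.sign(band) * np.maximum(np.abs(band) - t, 0.0)

def denoise_channel(chan, band_sigmas=(1.0, 2.0, 4.0, 8.0),
                    noise_std=(2.0, 1.5, 1.0, 0.8), cutoff_band=1):
    """Split one channel (Y, Cb, or Cr) into frequency bands; reduce noise
    only in bands below the cutoff frequency, leave finer bands alone, then
    sum the bands to rebuild the channel. `noise_std[i]` stands in for a
    per-band, per-gain model tuned off-line for the camera."""
    levels = [chan] + [gaussian_filter(chan, s) for s in band_sigmas]
    out = levels[-1]                          # lowest-frequency residual
    for i in range(len(band_sigmas)):
        band = levels[i] - levels[i + 1]      # band-pass; i == 0 is finest
        if i >= cutoff_band:                  # bands below the given frequency
            band = soft_threshold(band, 2.0 * noise_std[i])
        out = out + band                      # sum bands back together
    return out
```

Run once per channel (Y, Cb, and Cr), each with its own noise model, to get the independent luma/chroma treatment the abstract describes.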
Abstract:
Image enhancement is achieved by separating image signals, e.g., YCbCr image signals, into a series of frequency bands and performing noise reduction on bands below a given frequency but not on bands above that frequency. The bands are then summed to produce the enhanced image signals. This YCbCr, multi-band approach to denoising is able to operate independently, and in an optimized fashion, on both the luma and chroma channels. Noise reduction is based on models developed for both luma and chroma channels from measurements taken across multiple frequency bands, in multiple patches of the ColorChecker chart, and at multiple gain levels. The result is a simple yet robust set of models that may be tuned off-line, a single time, for each camera and then applied in real time to images taken by such cameras, without excessive processing requirements and with satisfactory results across illuminant types and lighting conditions.
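The off-line tuning pass might look like the following sketch, where `capture(gain)` and `patch_boxes` are hypothetical stand-ins for a calibration rig that photographs the ColorChecker chart: flat patches are decomposed into the same bands used at run time, and the per-band noise standard deviation is recorded at each gain level.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_noise_std(patch, band_sigmas):
    """Noise std per frequency band, measured on one flat chart patch."""
    levels = [patch] + [gaussian_filter(patch, s) for s in band_sigmas]
    return [np.std(levels[i] - levels[i + 1]) for i in range(len(band_sigmas))]

def tune_noise_model(capture, patch_boxes, gains, band_sigmas):
    """Build {gain: per-band noise std} for one channel, averaged over the
    chart patches; repeat per channel (Y, Cb, Cr) to complete the model."""
    model = {}
    for gain in gains:
        frame = capture(gain)  # hypothetical: one channel of a chart shot
        stats = [band_noise_std(frame[r0:r1, c0:c1], band_sigmas)
                 for (r0, r1, c0, c1) in patch_boxes]
        model[gain] = np.mean(stats, axis=0)
    return model
```

Because this runs once per camera, the run-time denoiser only needs a table lookup by gain, keeping per-image processing cheap.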
Abstract:
Generating an image with a selected level of background blur includes capturing, by a first image capture device, a plurality of frames of a scene (a focus stack), wherein each of the plurality of frames has a different focus depth; obtaining a depth map of the scene; determining a target object and a background in the scene based on the depth map; determining a goal blur for the background; and selecting, for each pixel in an output image, a corresponding pixel from the focus stack.
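One way to read the selection step is sketched below, under a thin-lens simplification in which a pixel's blur grows with the gap between its depth and a frame's focus depth; the array names and the linear blur model are assumptions.

```python
import numpy as np

def render_goal_blur(stack, focus_depths, depth_map, target_mask, goal_blur, k=1.0):
    """stack: (F, H, W) focus-stack frames; focus_depths: (F,) metres.
    For target-object pixels pick the sharpest frame; for background pixels
    pick the frame whose predicted blur best matches `goal_blur` (pixels)."""
    gaps = np.abs(depth_map[None] - focus_depths[:, None, None])  # (F, H, W)
    blur = k * gaps                      # predicted blur radius per frame/pixel
    sharpest = np.argmin(blur, axis=0)   # frame index giving least blur
    closest = np.argmin(np.abs(blur - goal_blur), axis=0)
    choice = np.where(target_mask, sharpest, closest)
    return np.take_along_axis(stack, choice[None], axis=0)[0]
```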
Abstract:
Some embodiments include methods and/or systems for using multiple cameras to provide optical zoom to a user. Some embodiments include a first camera unit of a multifunction device capturing a first image of a first visual field. A second camera unit of the multifunction device simultaneously captures a second image of a second visual field. In some embodiments, the first camera unit includes a first optical package with a first focal length. In some embodiments, the second camera unit includes a second optical package with a second focal length. In some embodiments, the first focal length is different from the second focal length, and the first visual field is a subset of the second visual field. In some embodiments, the first image and the second image are preserved to a storage medium as separate data structures.
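A sketch of one plausible policy built on such a pair of captures, assuming a 2.0x tele crossover (an illustrative value, not from the text): below the crossover the wide image is center-cropped, at or beyond it the tele image is used, and both captures remain stored separately so either can be re-rendered later.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CapturePair:
    """Two simultaneous captures preserved as separate data structures."""
    wide: np.ndarray  # shorter focal length: larger visual field
    tele: np.ndarray  # longer focal length: subset of the wide field

def center_crop(img, factor):
    h, w = img.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    r0, c0 = (h - ch) // 2, (w - cw) // 2
    return img[r0:r0 + ch, c0:c0 + cw]

def render_zoom(pair, zoom, tele_ratio=2.0):
    """Crop the wide image for zooms below the tele crossover; switch to
    the (already narrower) tele image at or beyond it."""
    if zoom < tele_ratio:
        return center_crop(pair.wide, zoom)
    return center_crop(pair.tele, zoom / tele_ratio)
```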
Abstract:
Pixel binning is performed by summing charge from some pixels positioned diagonally in a pixel array. Pixel signals output from pixels positioned diagonally in the pixel array may be combined on the output lines. A signal representing summed charge produces a binned 2×1 cluster; a signal representing combined voltage signals produces another binned 2×1 cluster. The signal representing summed charge and the signal representing combined pixel signals can then be combined digitally to produce a binned 2×2 pixel. Orthogonal binning may be performed on other pixels in the pixel array by summing charge on respective common sense regions and then combining the voltage signals that represent the summed charge on respective output lines.
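Numerically, the digital combination step reduces to the sketch below; on a real sensor the first pair is summed as charge on a shared sense node and the second as voltages on an output line, but both are modeled here as additions over a raw array with even dimensions (an assumption for clarity).

```python
import numpy as np

def bin_2x2_from_diagonal_pairs(raw):
    """Combine one charge-summed diagonal pair and one voltage-combined
    diagonal pair per 2x2 tile into a single binned 2x2 pixel."""
    a = raw[0::2, 0::2]; b = raw[0::2, 1::2]
    c = raw[1::2, 0::2]; d = raw[1::2, 1::2]
    charge_pair = a + d    # '\' 2x1 cluster: summed charge
    voltage_pair = b + c   # '/' 2x1 cluster: signals combined on output line
    return charge_pair + voltage_pair  # digital combination -> 2x2 binned pixel
```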
Abstract:
A method for dynamically calibrating rotational offset in a device includes obtaining an image captured by a camera of the device. Orientation information of the device at the time of image capture may be associated with the image. Pixel data of the image may be analyzed to determine an image orientation angle for the image. A device orientation angle may be determined from the orientation information. A rotational offset, based on the image orientation angle and the device orientation angle, may be determined; the offset is measured relative to the camera or the orientation sensor. A rotational bias may then be determined by statistical analysis of the rotational offsets computed from numerous images. In some embodiments, various thresholds and predetermined ranges may be used to exclude individual rotational offsets from the statistical analysis or to discontinue processing for a given image.
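The per-image computation and the statistical aggregation can be sketched directly from the description; the 5-degree plausibility gate, the minimum sample count, and the choice of median are assumptions.

```python
import numpy as np

def rotational_bias(image_angles, device_angles, gate_deg=5.0, min_samples=20):
    """Estimate the camera-to-sensor rotational bias from paired per-image
    orientation angles (degrees), excluding implausible offsets."""
    offsets = []
    for img_a, dev_a in zip(image_angles, device_angles):
        off = (img_a - dev_a + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if abs(off) <= gate_deg:                       # threshold out outliers
            offsets.append(off)
    if len(offsets) < min_samples:
        return None  # not enough evidence yet; keep collecting
    return float(np.median(offsets))  # robust estimate of the bias
```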
Abstract:
Systems, methods, and computer readable media to improve image stabilization operations are described. A novel combination of image quality and commonality metrics is used to identify a reference frame from a set of commonly captured images which, when the set's other images are combined with it, results in a quality stabilized image. The disclosed image quality and commonality metrics may also be used to optimize the use of a limited amount of image buffer memory during image capture sequences that return more images than the memory can accommodate at one time. Image quality and commonality metrics may also be used to effect the combination of multiple relatively long-exposure images which, when combined with one or more final, relatively short-exposure images, yields images exhibiting motion-induced blurring in interesting and visually pleasing ways.
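A minimal sketch of reference selection, assuming variance of a Laplacian as the quality metric and mean frame-to-frame agreement as the commonality metric (the abstract does not commit to specific measures):

```python
import numpy as np

def laplacian_var(img):
    """Sharpness proxy: variance of a 4-neighbour Laplacian."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(np.var(lap))

def pick_reference(frames, alpha=0.5):
    """Score each frame by quality plus how well it agrees with the rest,
    then return the index of the best reference for stabilization."""
    quality = np.array([laplacian_var(f) for f in frames])
    common = np.array([
        -np.mean([np.mean(np.abs(f - g)) for j, g in enumerate(frames) if j != i])
        for i, f in enumerate(frames)])
    # Normalize each metric to [0, 1] before mixing so neither dominates.
    def norm(x):
        return (x - x.min()) / (np.ptp(x) + 1e-12)
    score = alpha * norm(quality) + (1 - alpha) * norm(common)
    return int(np.argmax(score))
```

The same scores could rank frames for eviction when the capture sequence returns more images than the buffer memory can hold.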