Abstract:
Systems, devices, and methods are described for efficiently super resolving a portion of an image. One embodiment involves capturing, using a camera module of a device, at least one image of a scene, and creating a higher resolution image of a user-selected region of interest. The super resolution of the region of interest may be performed by matching a high resolution grid with a grid that is at the resolution of a device camera, populating the high resolution grid with information from an image from the camera, and then filling in the remaining, not-yet-populated grid points.
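The grid-population step can be sketched as follows. The function name, the upscale factor, and the neighbour-averaging fill are illustrative assumptions; the abstract does not specify how the remaining points are populated.

```python
import numpy as np

def super_resolve_roi(roi, factor=2):
    """Sketch: map each low-resolution pixel of the region of interest
    onto a matched high-resolution grid, then fill the remaining
    (unpopulated) grid points by averaging their populated neighbours.
    The neighbour-averaging fill is an assumption for illustration."""
    h, w = roi.shape
    hi = np.full((h * factor, w * factor), np.nan)
    # Populate the high-resolution grid with the known camera samples.
    hi[::factor, ::factor] = roi
    # Fill each unpopulated point from the mean of nearby populated samples.
    filled = hi.copy()
    for y in range(hi.shape[0]):
        for x in range(hi.shape[1]):
            if np.isnan(hi[y, x]):
                patch = hi[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
                filled[y, x] = np.nanmean(patch)
    return filled
```

With `factor=2`, every 3x3 neighbourhood of an unpopulated point contains at least one camera sample, so the fill always has data to average.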
Abstract:
Apparatuses and methods are described for reading a set of images to merge into a high dynamic range (HDR) output image. Each image has a respective HDR weight and a respective ghost-free weight. The images are merged by taking a weighted average of the set of input images using the ghost-free weights. A difference image is determined based on the difference between each pixel within the HDR output image and each respective pixel within a reference image used to create the HDR output image.
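A minimal sketch of the merge and the difference image, assuming the ghost-free weights are per-pixel weight maps (the abstract does not specify their form):

```python
import numpy as np

def merge_hdr(images, ghost_free_weights):
    """Merge the input images by a weighted average driven by the
    per-image ghost-free weights (assumed here to be per-pixel maps)."""
    imgs = np.asarray(images, dtype=float)
    w = np.asarray(ghost_free_weights, dtype=float)
    return (w * imgs).sum(axis=0) / w.sum(axis=0)

def difference_image(hdr, reference):
    """Per-pixel difference between the HDR output and the reference
    image used to create it."""
    return np.abs(hdr - np.asarray(reference, dtype=float))
```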
Abstract:
Embodiments include devices and methods for automatically calibrating a camera. In various embodiments, an image sensor may capture image frames. Locations of one or more points included in the captured image frames may be predicted and detected. Calibration parameters may be calculated based on differences between predicted locations of a selected point within an image frame and observed locations of the selected point within the captured image frame. The automatic camera calibration method may be repeated until the calibration parameters satisfy a calibration quality threshold.
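The predict-compare-update loop can be sketched as below. The simple residual-mean parameter update is an assumption for illustration; real calibrators typically solve a least-squares problem over the reprojection error.

```python
import numpy as np

def calibrate(observed, predict, params, threshold=1e-3, max_iters=100):
    """Iteratively refine calibration parameters until the mean error
    between predicted and observed point locations falls below the
    quality threshold.  `predict` maps parameters to predicted point
    locations; the update rule here is a toy stand-in."""
    error = np.inf
    for _ in range(max_iters):
        residual = observed - predict(params)      # predicted vs observed
        error = np.abs(residual).mean()
        if error < threshold:                      # calibration quality check
            break
        params = params + residual.mean(axis=0)    # toy parameter update
    return params, error
```

For a pure translational offset this converges in one step; it only illustrates the repeat-until-threshold structure described above.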
Abstract:
In general, techniques are described that facilitate processing of color image data using both monochrome image data and color image data. A device comprising a monochrome camera, a color camera, and a processor may be configured to perform the techniques. The monochrome camera may capture monochrome image data of a scene. The color camera may capture color image data of the scene. A processor may determine a parallax value indicative of a level of parallax between the monochrome image data and the color image data and determine that the parallax value is greater than a parallax threshold. The processor may further combine, in response to the determination that the parallax value is greater than the parallax threshold, a luma component of the color image data with a luma component of the monochrome image data to generate a luma component of enhanced color image data.
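The parallax-gated luma combination can be sketched as follows. The blend weight and threshold value are assumptions for illustration; the abstract only specifies that the combination happens when parallax exceeds the threshold.

```python
import numpy as np

def fuse_luma(color_y, mono_y, parallax, parallax_threshold=5.0, alpha=0.5):
    """If the measured parallax exceeds the threshold, blend the luma
    (Y) component of the color image with the monochrome luma to form
    the enhanced luma.  `alpha` is an illustrative blend weight."""
    if parallax > parallax_threshold:
        return alpha * color_y + (1 - alpha) * mono_y
    # Below the threshold, other processing paths apply; returning the
    # monochrome luma here is a placeholder assumption.
    return mono_y
```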
Abstract:
In general, techniques are described that facilitate processing of color image data using both monochrome image data and color image data. A device comprising a monochrome camera, a color camera, and a processor may be configured to perform the techniques. The monochrome camera may be configured to capture monochrome image data of a scene. The color camera may be configured to capture color image data of the scene. A processor may be configured to match features of the color image data to features of the monochrome image data, and compute a finite number of shift values based on the matched features of the color image data and the monochrome image data. The processor may further be configured to shift the color image data based on the finite number of shift values to generate enhanced color image data.
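A sketch of the shift computation and application, assuming the matched features are given as corresponding (row, col) coordinates and that a single average shift is applied (the abstract allows a finite number of shift values; one is the simplest case):

```python
import numpy as np

def compute_shift(color_feats, mono_feats):
    """Average displacement between matched feature coordinates
    (both inputs are N x 2 arrays of corresponding points)."""
    return np.round(np.mean(mono_feats - color_feats, axis=0)).astype(int)

def shift_color(color, shift):
    """Shift the color image by (dy, dx) so it lines up with the
    monochrome image; np.roll stands in for a boundary-aware warp."""
    return np.roll(color, tuple(shift), axis=(0, 1))
```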
Abstract:
In general, techniques are described that facilitate processing of color image data using both monochrome image data and color image data. A device comprising a monochrome camera, a color camera, and a processor may be configured to perform the techniques. The monochrome camera may be configured to capture monochrome image data of a scene. The color camera may be configured to capture color image data of the scene. The processor may be configured to perform intensity equalization with respect to a luma component of either the color image data or the monochrome image data to correct for differences in intensity between the color camera and the monochrome camera.
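One way to equalize intensity between the two luma channels is a global gain-and-offset model, sketched below. The abstract does not specify the equalization method, so this mean/standard-deviation matching is an assumption.

```python
import numpy as np

def equalize_intensity(src_y, ref_y):
    """Match the mean and standard deviation of one luma channel to
    the other, compensating for exposure/sensitivity differences
    between the two cameras (global gain-and-offset assumption)."""
    src = np.asarray(src_y, dtype=float)
    ref = np.asarray(ref_y, dtype=float)
    gain = ref.std() / max(src.std(), 1e-8)  # guard against flat input
    return (src - src.mean()) * gain + ref.mean()
```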
Abstract:
Techniques disclosed herein involve determining motion occurring in a scene between the capture of two successively-captured images of the scene using intensity gradients of pixels within the images. These techniques can be used alone or with other motion-detection techniques to identify where motion has occurred in the scene, which can be further used to reduce artifacts that may be generated when images are combined.
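A minimal sketch of gradient-based motion detection between two successive frames. The specific gradient comparison and the threshold value are illustrative choices, not the disclosed method.

```python
import numpy as np

def motion_map(img_a, img_b, threshold=0.5):
    """Flag pixels where the intensity gradients of two successive
    frames disagree by more than a threshold; such pixels are
    candidates for scene motion."""
    gy_a, gx_a = np.gradient(np.asarray(img_a, dtype=float))
    gy_b, gx_b = np.gradient(np.asarray(img_b, dtype=float))
    # Magnitude of the per-pixel gradient difference between frames.
    diff = np.hypot(gx_a - gx_b, gy_a - gy_b)
    return diff > threshold
```

The resulting boolean map could then gate the image-combination step, e.g. excluding flagged pixels to reduce merge artifacts.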