Abstract:
A temporal filter in an image processing pipeline may be configured to generate a high dynamic range (HDR) image. Image frames captured to generate an HDR image frame may be blended together at a temporal filter. An image frame that is part of a group of image frames captured to generate the HDR image may be received for filtering at the temporal filter module. A reference image frame, which may be a previously filtered image frame or an unfiltered image frame, may be obtained. A filtered version of the image frame may then be generated according to an HDR blending scheme that blends the reference image frame with the image frame. If the image frame is the last image frame of the group of image frames to be filtered, then the filtered version of the image frame may be provided as the HDR image frame.
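The scheme above can be illustrated with a minimal Python sketch. This is not the claimed circuit: the per-frame blend is reduced to a single fixed weight (a fixed weight is an assumption; a real HDR blending scheme would vary weights per pixel, e.g. by exposure and saturation).

```python
def temporal_hdr_blend(frames, blend_weight=0.5):
    """Blend a group of frames into one HDR frame.

    Sketch of the temporal-filter flow: the first frame serves as the
    unfiltered reference; each later frame is blended with the previously
    filtered result, and the last filtered frame is the HDR output.
    The fixed blend_weight is an illustrative assumption.
    """
    reference = frames[0]  # unfiltered reference for the first blend
    for frame in frames[1:]:
        # blend the reference (previously filtered frame) with the new frame
        reference = [blend_weight * r + (1 - blend_weight) * p
                     for r, p in zip(reference, frame)]
    return reference  # filtered version of the last frame = HDR frame
```

A usage example: blending two single-row "frames" `[0.0, 2.0]` and `[2.0, 0.0]` with equal weights yields `[1.0, 1.0]`.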
Abstract:
In-stream rolling shutter compensation may be utilized to modify image data to compensate for detected camera motion. An image processor may perform motion matching on image data received from a camera sensor to determine whether and how the camera is moving. Strips of image data are analyzed to find matching locations between the current image and a previous image by generating graphical profiles for each image strip. The graphical profiles for the current strip are compared to corresponding profiles from the previous image to determine matching locations between the two frames. A motion vector for the strip may be computed based on spatial distances between the match locations of the current image and corresponding match locations of the previous frame. Image data for the current strip may be modified based on the motion vector to compensate for perceived camera motion as it is written out to memory.
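A simplified Python sketch of the strip-matching idea follows. It reduces each strip to a 1-D column-sum "graphical profile", finds the horizontal shift that best matches the previous frame's profile by sum-of-absolute-differences, and shifts the strip to compensate. The profile definition, the SAD search, and the wrap-around compensation are all illustrative assumptions, not the patented method.

```python
def strip_profile(strip):
    # sum each column to get a 1-D horizontal profile of the strip
    return [sum(col) for col in zip(*strip)]

def match_offset(profile, prev_profile, max_shift=2):
    # find the horizontal shift minimizing mean absolute difference
    # between the current strip's profile and the previous frame's
    best_shift, best_cost = 0, float("inf")
    n = len(profile)
    for shift in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for x in range(n):
            px = x + shift
            if 0 <= px < n:
                cost += abs(profile[x] - prev_profile[px])
                count += 1
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, shift
    return best_shift  # horizontal motion-vector component for the strip

def compensate(strip, shift):
    # shift pixels by the detected offset as the strip is written out
    # (wrap-around fill is a simplification)
    w = len(strip[0])
    return [[row[(x + shift) % w] for x in range(w)] for row in strip]
```

For example, a bright feature that was at column 2 in the previous profile and is at column 3 in the current one is matched with an offset of -1.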
Abstract:
Image tone adjustment using local tone curve computation may be utilized to adjust luminance ranges for images. Image tone adjustment using local tone curve computation may reduce the overall contrast of an image, while maintaining local contrast in smaller areas, such as in images capturing brightly lit scenes where the difference in intensity between the brightest and darkest areas is large. A desired brightness representation of the image may be generated, including target luminance values for corresponding blocks of the image. For each block, one or more tone adjustment values may be computed that, when jointly applied to the respective histograms for the block and neighboring blocks, result in luminance values that match the corresponding target values. The tone adjustment values may be determined by solving an under-constrained optimization problem such that optimization constraints are minimized. The image may then be adjusted according to the computed tone adjustment values.
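A heavily simplified Python sketch of the per-block adjustment idea follows. Instead of the joint histogram optimization described above, it computes one gain per block that drives the block's mean luminance toward its target; treating the adjustment as a single gain, and omitting the neighbor-blending that preserves local contrast, are both simplifying assumptions.

```python
def block_gains(block_means, target_means):
    # one tone adjustment value per block: a gain pushing the block's
    # mean luminance toward the desired-brightness target
    return [t / m if m > 0 else 1.0
            for m, t in zip(block_means, target_means)]

def apply_gains(image_blocks, gains):
    # apply each block's gain to its pixels, clamped to [0, 1]
    # (a real pipeline would blend neighboring blocks' adjustments)
    return [[min(1.0, px * g) for px in block]
            for block, g in zip(image_blocks, gains)]
```

For example, a block with mean luminance 0.5 and target 0.25 gets a gain of 0.5, halving its pixel values.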
Abstract:
In one embodiment, a system includes a first device rendering image data, a second device storing the image data, and a display panel that displays the image data stored in a memory of the second device. The first device renders multiple frames of the image data, compresses the multiple frames into a single superframe, and transports the single superframe. The second device receives the single superframe, decompresses the single superframe into the multiple frames of image data, and stores the image data on the memory of the second device.
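A minimal Python sketch of superframe packing and unpacking follows. The framing format (a 4-byte length prefix per frame) is an illustrative assumption; the abstract does not specify the superframe layout or the compression codec, so compression itself is omitted here.

```python
def pack_superframe(frames):
    # concatenate multiple (already compressed) frames into one superframe,
    # prefixing each frame with a 4-byte big-endian length header
    out = bytearray()
    for f in frames:
        out += len(f).to_bytes(4, "big") + f
    return bytes(out)

def unpack_superframe(blob):
    # walk the superframe, splitting it back into individual frames
    frames, i = [], 0
    while i < len(blob):
        n = int.from_bytes(blob[i:i + 4], "big")
        frames.append(blob[i + 4:i + 4 + n])
        i += 4 + n
    return frames
```

Round-tripping a list of frame payloads through `pack_superframe` and `unpack_superframe` returns the original list.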
Abstract:
Embodiments relate to image signal processors (ISPs) that include binner circuits that down-sample an input image. An input image may include a plurality of pixels. The output image of the binner circuit may include a reduced number of pixels. The binner circuit may include a plurality of different operation modes. In a bin mode, the binner circuit may blend a subset of input pixel values to generate an output pixel quad. In a skip mode, the binner circuit may select one of the input pixel values as the output pixel value. The selection may be performed randomly to avoid aliasing. In a luminance mode, the binner circuit may take a weighted average of a subset of pixel values having different colors. In a color value mode, the binner circuit may select one of the colors in a subset of pixel values as an output pixel value.
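The three blending-style modes can be sketched in Python as operations on a quad of input pixel values. This is a software illustration, not the binner circuit; in particular, the luminance weights below are an assumption (roughly Bayer-style R/G/B weighting), not values from the abstract.

```python
import random

def bin_quad(quad):
    # bin mode: blend the input pixel values into one output value
    return sum(quad) / len(quad)

def skip_quad(quad, rng=random):
    # skip mode: pick one input value as the output; random selection
    # helps avoid aliasing from always picking the same position
    return rng.choice(quad)

def luminance_quad(r, gr, gb, b, weights=(0.25, 0.5, 0.25)):
    # luminance mode: weighted average across different-color pixels
    # (these weights are illustrative assumptions)
    wr, wg, wb = weights
    return wr * r + wg * (gr + gb) / 2 + wb * b
```

For example, bin mode averages the quad `[1, 2, 3, 4]` to `2.5`, and luminance mode maps a uniform quad of 1s to `1.0`.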
Abstract:
Some embodiments relate to sharpening segments of an image differently based on content in the image. Content based sharpening is performed by a content image processing circuit that receives luminance values of an image and a content map. The content map identifies categories of content in segments of the image. Based on one or more of the identified categories of content, the circuit determines a content factor associated with a pixel. The content factor may also be based on a texture and/or chroma values. A texture value indicates a likelihood of a category of content and is based on detected edges in the image. A chroma value indicates a likelihood of a category of content and is based on color information of the image. The circuit receives the content factor and applies it to a version of the luminance value of the pixel to generate a sharpened version of the luminance value.
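A small Python sketch of the content-factor idea follows. The category names, base strengths, and the way texture and chroma likelihoods are combined are all illustrative assumptions; the final step (scaling the high-frequency detail of the luminance by the content factor) is a common unsharp-masking formulation used here as a stand-in for the circuit's sharpening.

```python
# base sharpening strength per content category (illustrative values:
# sharpen skin lightly, foliage strongly)
BASE_FACTORS = {"skin": 0.2, "foliage": 0.8, "other": 0.5}

def content_factor(category, texture, chroma_likelihood):
    # combine the category's base strength with the texture value and
    # chroma-based likelihood for the pixel
    return BASE_FACTORS.get(category, 0.5) * texture * chroma_likelihood

def sharpen_pixel(luma, blurred_luma, factor):
    # add back high-frequency detail (luma minus its blurred version)
    # scaled by the content factor
    return luma + factor * (luma - blurred_luma)
```

For example, a pixel in a "skin" segment with full texture and chroma likelihood gets a factor of 0.2, so its detail is boosted only mildly compared to a "foliage" pixel.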
Abstract:
In an embodiment, an electronic device may be configured to capture still frames during video capture but may capture the still frames in the 4×3 aspect ratio and at higher resolution than the 16×9 aspect ratio video frames. The device may interleave high resolution, 4×3 frames and lower resolution 16×9 frames in the video sequence, and may capture the nearest higher resolution, 4×3 frame when the user indicates the capture of a still frame. Alternatively, the device may display 16×9 frames in the video sequence, and then expand to 4×3 frames when a shutter button is pressed. The device may capture the still frame and return to the 16×9 video frames responsive to a release of the shutter button.
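The "capture the nearest higher resolution frame" step can be sketched in one line of Python. Representing the interleaved high-resolution 4×3 frames by their capture timestamps is an assumption made for illustration.

```python
def nearest_highres_frame(frame_times_4x3, shutter_time):
    # pick the interleaved high-resolution 4x3 frame whose capture time
    # is closest to the moment the user pressed the shutter
    return min(frame_times_4x3, key=lambda t: abs(t - shutter_time))
```

For example, with high-resolution frames captured at times 0, 4, and 8, a shutter press at time 5 selects the frame captured at time 4.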
Abstract:
Embodiments relate to circuitry for performing fusion of two images captured with two different exposure times to generate a fused image having a higher dynamic range. Information about first keypoints is extracted from the first image by processing pixel values of pixels in the first image. A model describing correspondence between the first image and the second image is then built by processing at least the information about first keypoints. A processed version of the first image is warped using mapping information in the model to generate a warped version of the first image spatially more aligned to the second image than to the first image. The warped version of the first image is fused with a processed version of the second image to generate the fused image.
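The keypoint/model/warp/fuse flow can be sketched in strongly simplified Python. Here the "model" is a translation-only fit from already-paired keypoint coordinates, the warp is a 1-D row shift, and the fusion uses a fixed blend weight; real pipelines fit richer warps (e.g. a homography) and weight exposures per pixel, so every one of these choices is an illustrative assumption.

```python
def estimate_shift(keypoints_a, keypoints_b):
    # translation-only "model": mean offset between corresponding
    # keypoint positions in the two images
    return round(sum(b - a for a, b in zip(keypoints_a, keypoints_b))
                 / len(keypoints_a))

def warp_row(row, shift, fill=0):
    # warp the first image's row so it aligns spatially with the second
    w = len(row)
    return [row[x - shift] if 0 <= x - shift < w else fill
            for x in range(w)]

def fuse_rows(row_a, row_b, w_a=0.5):
    # fuse the warped first-image row with the second-image row
    # (a fixed weight stands in for per-pixel exposure weighting)
    return [w_a * a + (1 - w_a) * b for a, b in zip(row_a, row_b)]
```

For example, keypoints at columns 2 and 5 in the first image that correspond to columns 4 and 7 in the second yield a shift of 2; warping by that shift before fusing makes the blended rows line up.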
Abstract:
An image processing pipeline may dynamically determine filtering strengths for noise filtering of image data. Statistics may be collected for an image at an image processing pipeline. The statistics may be accessed and evaluated to generate a filter strength model that maps respective filtering strengths to different portions of the image. A noise filter may determine a filtering strength for image data received at the noise filter according to the filter strength model. The noise filter may then apply a filtering technique according to the determined filtering strength.
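A compact Python sketch of the filter-strength-model idea follows. It maps per-region noise statistics (here, variances) to filtering strengths, then applies a simple strength-weighted smoothing; using variance as the statistic, the linear mapping, and the smoothing formula are all illustrative assumptions.

```python
def build_strength_model(block_variances, low=0.2, high=0.9):
    # map each image region's collected statistic to a filtering
    # strength: noisier (higher-variance) regions filter more strongly
    vmax = max(block_variances) or 1.0
    return [low + (high - low) * v / vmax for v in block_variances]

def filter_pixel(value, neighborhood_mean, strength):
    # apply the filtering technique at the determined strength:
    # a simple weighted pull toward the local neighborhood mean
    return (1 - strength) * value + strength * neighborhood_mean
```

For example, the quietest region gets the minimum strength 0.2 and the noisiest gets 0.9, so a pixel in a noisy region is pulled much harder toward its neighborhood mean.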