Abstract:
Systems and methods described herein can compensate for aberrations produced by a moving object in an image captured using a flash. In some embodiments, a method includes capturing a first image at a time t−Δt1, capturing a second image at a time t using a flash, where Δt1 represents the time difference between capturing the first image and capturing the second image, and capturing a third image at a time t+Δt2, where Δt2 represents the time difference between capturing the second image and capturing the third image. The method also includes determining motion information of an object that is depicted in the first, second, and third images, and modifying at least one portion of the second image using the motion information and a portion of the first image, a portion of the third image, or portions of both the first and third images.
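A minimal Python sketch of the three-frame idea above, assuming OpenCV is available; the Farneback flow, the flow threshold, and the averaged no-flash fallback are illustrative assumptions, not the claimed method:

```python
import cv2
import numpy as np

def compensate_flash_motion(pre, flash, post, flow_thresh=2.0):
    """Patch motion-affected regions of the flash frame using the two
    no-flash frames captured at t - dt1 and t + dt2."""
    g_pre = cv2.cvtColor(pre, cv2.COLOR_BGR2GRAY)
    g_post = cv2.cvtColor(post, cv2.COLOR_BGR2GRAY)
    # Motion information: dense optical flow between the no-flash frames.
    flow = cv2.calcOpticalFlowFarneback(g_pre, g_post, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    speed = np.linalg.norm(flow, axis=2)
    # Soft mask over pixels whose apparent motion exceeds the threshold.
    mask = cv2.GaussianBlur((speed > flow_thresh).astype(np.float32),
                            (21, 21), 0)[..., None]
    # The average of the bracketing frames stands in for the disturbed area.
    fallback = (pre.astype(np.float32) + post.astype(np.float32)) / 2
    out = (1 - mask) * flash.astype(np.float32) + mask * fallback
    return np.clip(out, 0, 255).astype(np.uint8)
```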
Abstract:
Apparatuses and methods for reading a set of images to be merged into a high dynamic range (HDR) output image are described. Each image has a respective HDR weight and a respective ghost-free weight. The images are merged by computing a weighted average of the set of input images using the ghost-free weights. A difference image is determined based on the difference between each pixel within the HDR output image and each respective pixel within a reference image used to create the HDR output image.
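A rough sketch of such a merge, assuming linearized floating-point images of equal size; the triangle HDR weight and the Gaussian ghost-free weight are common choices standing in for the claimed weights:

```python
import numpy as np

def merge_hdr(images, exposures, ref_idx=0, sigma=0.1):
    """images: list of float arrays in [0, 1]; exposures: exposure times."""
    ref = images[ref_idx]
    num = np.zeros_like(ref)
    den = np.zeros_like(ref)
    for img, t in zip(images, exposures):
        hdr_w = 1.0 - 2.0 * np.abs(img - 0.5)                    # favor mid-tones
        ghost_w = np.exp(-(img - ref) ** 2 / (2 * sigma ** 2))   # penalize ghosts
        w = hdr_w * ghost_w
        num += w * (img / t)   # per-image radiance estimate
        den += w
    hdr = num / np.maximum(den, 1e-8)
    # Difference image: HDR output (re-exposed at the reference exposure)
    # minus the reference image, pixel by pixel.
    diff = hdr * exposures[ref_idx] - ref
    return hdr, diff
```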
Abstract:
Methods, devices, and computer program products for capturing images with reduced blurriness in low-light conditions are described herein. In one aspect, a method of capturing an image is disclosed. The method includes capturing a plurality of first images with a first exposure length. The method further includes aligning each of the plurality of first images with each other and combining the aligned plurality of first images into a combined first image. The method further includes capturing a second image with a second exposure length, wherein the second exposure length is longer than the first exposure length, and using the second image to adjust the brightness of the combined first image.
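A simplified sketch of the short/long exposure scheme, assuming OpenCV's ECC alignment; translation-only alignment and mean-brightness matching are illustrative simplifications:

```python
import cv2
import numpy as np

def low_light_capture(short_frames, long_frame):
    ref = short_frames[0]
    ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-4)
    acc = ref.astype(np.float32)
    for frame in short_frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        warp = np.eye(2, 3, dtype=np.float32)
        # Estimate a translation aligning this frame to the reference.
        _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                       cv2.MOTION_TRANSLATION, criteria)
        acc += cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]),
                              flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    combined = acc / len(short_frames)
    # Use the longer (brighter but blur-prone) exposure to set brightness.
    gain = long_frame.astype(np.float32).mean() / max(combined.mean(), 1e-6)
    return np.clip(combined * gain, 0, 255).astype(np.uint8)
```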
Abstract:
Techniques are described for generating an all-in-focus image with a capability to refocus. One example includes obtaining a first depth map associated with a plurality of captured images of a scene. The plurality of captured images may include images having different focal lengths. The method further includes obtaining a second depth map associated with the plurality of captured images, generating a composite image showing different portions of the scene in focus (based on the plurality of captured images and the first depth map), and generating a refocused image showing a selected portion of the scene in focus (based on the composite image and the second depth map).
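A toy sketch of the compositing and refocusing steps, assuming the depth maps are per-pixel indices into the focal stack (the abstract leaves their construction unspecified):

```python
import numpy as np

def all_in_focus(stack, depth_map):
    """stack: (N, H, W, C) focal stack; depth_map: (H, W) index of the
    sharpest slice per pixel (the role of the first depth map)."""
    idx = depth_map.astype(np.intp)[None, ..., None]
    return np.take_along_axis(stack, idx, axis=0)[0]

def refocus(stack, refocus_depth_map, composite, y, x, tol=1):
    """Keep the composite sharp only near the depth selected at (y, x);
    elsewhere fall back to the slice focused at that depth."""
    d = int(refocus_depth_map[y, x])
    near = (np.abs(refocus_depth_map.astype(int) - d) <= tol)[..., None]
    return np.where(near, composite, stack[d])
```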
Abstract:
Systems, devices, and methods are described for efficiently super-resolving a portion of an image. One embodiment involves capturing, using a camera module of a device, at least one image of a scene, and creating a higher-resolution image of a user-selected region of interest. The super resolution of the region of interest may be performed by matching a high-resolution grid with a grid at the resolution of the device camera, populating the high-resolution grid with information from an image from the camera, and then filling in the remaining points of the grid that are not yet populated.
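A minimal NumPy sketch of the grid-population step; nearest-neighbor filling stands in for whatever interpolation the embodiment uses:

```python
import numpy as np

def super_resolve_roi(roi, scale=2):
    """Map the low-resolution ROI onto a high-resolution grid, then fill
    the points the camera samples did not populate."""
    h, w = roi.shape[:2]
    hi = np.full((h * scale, w * scale) + roi.shape[2:], np.nan, np.float32)
    # Populate the high-resolution grid at the matched camera-grid sites.
    hi[::scale, ::scale] = roi
    # Fill the remaining points from the nearest populated sample.
    ys, xs = np.mgrid[0:h * scale, 0:w * scale]
    filled = hi[(ys // scale) * scale, (xs // scale) * scale]
    return np.where(np.isnan(hi), filled, hi)
```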
Abstract:
Embodiments described herein relate to connected-state radio session transfer in wireless communications. A target access network controller may create a radio session associated with an access terminal, the radio session corresponding with a source radio session at a source access network controller. The target access network controller may also establish a communication route between a data network and the access terminal via the target access network controller. The target access network controller may further receive a frozen state associated with the source radio session from the source access network controller. In an aspect, the frozen state may include a snapshot of any data being communicated through the source radio session when freezing occurred. The target access network controller may subsequently unfreeze the received state.
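A schematic Python sketch of the transfer sequence; the class and field names are hypothetical and only mirror the steps named in the abstract:

```python
from dataclasses import dataclass, field

@dataclass
class FrozenState:
    """Snapshot of data in flight through the source radio session
    at the moment freezing occurred."""
    session_id: str
    pending_data: list = field(default_factory=list)

class TargetAccessNetworkController:
    def __init__(self):
        self.sessions = {}

    def create_radio_session(self, terminal_id, source_session_id):
        # Target-side session corresponding to the source radio session.
        self.sessions[terminal_id] = {"source": source_session_id,
                                      "route": False, "state": None,
                                      "active": False}

    def establish_route(self, terminal_id):
        # Communication route between the data network and the terminal.
        self.sessions[terminal_id]["route"] = True

    def receive_frozen_state(self, terminal_id, state):
        self.sessions[terminal_id]["state"] = state

    def unfreeze(self, terminal_id):
        # Resume the session from the snapshot and replay pending data.
        session = self.sessions[terminal_id]
        session["active"] = True
        return session["state"].pending_data
```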
Abstract:
Systems, methods, and computer-readable media are provided for transferring color information from a no-flash image of a scene to a flash image of the scene using a localized tonal mapping algorithm. In some aspects, an example method can include obtaining a no-flash image and a flash image, mapping color information from the no-flash image to the flash image, and generating an output image including the flash image modified based on the mapping to include at least a portion of the color information from the no-flash image.
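A simplified sketch of the color transfer, assuming OpenCV; working in YCrCb and matching local chroma means is one plausible reading of "localized tonal mapping", not the claimed algorithm:

```python
import cv2
import numpy as np

def transfer_color(no_flash, flash, ksize=31):
    # Keep the flash luma (sharp, well-exposed detail) and map the
    # no-flash chroma onto it.
    f = cv2.cvtColor(flash, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    n = cv2.cvtColor(no_flash, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    out = f.copy()
    for c in (1, 2):  # Cr, Cb channels
        # Localized mapping: shift each chroma channel by the difference
        # of local (box-filtered) means.
        mu_f = cv2.blur(f[..., c], (ksize, ksize))
        mu_n = cv2.blur(n[..., c], (ksize, ksize))
        out[..., c] = f[..., c] + (mu_n - mu_f)
    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YCrCb2BGR)
```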
Abstract:
Techniques and systems are provided for segmenting one or more frames. For example, incremental optical flow maps can be determined between adjacent frames of a plurality of frames. Using the incremental optical flow maps, a cumulative optical flow map can be determined between a first frame of the plurality of frames and a last frame of the plurality of frames. A segmentation mask can be determined using the first frame of the plurality of frames. Foreground pixels of the segmentation mask for the last frame of the plurality of frames can then be adjusted relative to corresponding foreground pixels for the first frame. The foreground pixels can be adjusted using the cumulative optical flow map between the first frame and the last frame of the plurality of frames.
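A sketch of composing incremental flows into a cumulative flow and warping the mask, assuming grayscale frames and OpenCV's Farneback flow:

```python
import cv2
import numpy as np

def cumulative_flow(frames):
    """Compose incremental flows between adjacent (grayscale) frames into
    one cumulative flow from the first frame to the last."""
    h, w = frames[0].shape
    base = np.mgrid[0:h, 0:w].astype(np.float32)
    pos_y, pos_x = base[0].copy(), base[1].copy()
    for a, b in zip(frames[:-1], frames[1:]):
        inc = cv2.calcOpticalFlowFarneback(a, b, None,
                                           0.5, 3, 15, 3, 5, 1.2, 0)
        # Sample the incremental flow where each first-frame pixel
        # currently sits, then advance it.
        dx = cv2.remap(inc[..., 0], pos_x, pos_y, cv2.INTER_LINEAR)
        dy = cv2.remap(inc[..., 1], pos_x, pos_y, cv2.INTER_LINEAR)
        pos_x += dx
        pos_y += dy
    return pos_x - base[1], pos_y - base[0]

def propagate_mask(mask, flow_x, flow_y):
    """Move first-frame foreground pixels to their last-frame positions."""
    h, w = mask.shape
    out = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    tx = np.clip(np.round(xs + flow_x[ys, xs]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + flow_y[ys, xs]).astype(int), 0, h - 1)
    out[ty, tx] = 1
    return out
```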
Abstract:
Techniques and systems are described herein for determining dynamic lighting for objects in images. Using such techniques and systems, a lighting condition of one or more captured images can be adjusted. Techniques and systems are also described herein for determining depth values for one or more objects in an image. In some cases, the depth values (and the lighting values) can be determined using only a single camera and a single image, in which case one or more depth sensors are not needed to produce the depth values.
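As a toy illustration of adjusting a lighting condition once depth is available, a Lambertian relighting sketch; the depth map is assumed given (the single-image depth estimation itself is not reproduced here):

```python
import numpy as np

def relight(image, depth, light_dir, ambient=0.3):
    """image: (H, W, 3) floats in [0, 1]; depth: (H, W) depth values;
    light_dir: 3-vector pointing toward the light."""
    # Surface normals recovered from depth gradients.
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light_dir, dtype=np.float64)
    l /= np.linalg.norm(l)
    # Lambertian shading: clamped cosine between normal and light.
    shading = np.clip(normals @ l, 0.0, 1.0)
    return np.clip(image * (ambient + (1 - ambient) * shading)[..., None], 0, 1)
```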
Abstract:
A method for compositing images by an electronic device is described. The method includes obtaining a first composite image that is based on a first image from a first lens with a first focal length and a second image from a second lens with a different second focal length. The method also includes downsampling the first composite image to produce a downsampled first composite image. The method further includes downsampling the first image to produce a downsampled first image. The method additionally includes producing a reduced detail blended image based on the downsampled first composite image and the downsampled first image. The method also includes producing an upsampled image based on the reduced detail blended image and the downsampled first composite image. The method further includes adding detail from the first composite image to the upsampled image to produce a second composite image.
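A one-level pyramid sketch of the described pipeline, assuming OpenCV; the blend weight and the float conversion are illustrative choices:

```python
import cv2
import numpy as np

def composite_detail_transfer(first_composite, first_image, alpha=0.5):
    h, w = first_composite.shape[:2]
    comp = first_composite.astype(np.float32)
    first = first_image.astype(np.float32)
    # Downsample both the composite and the first image (one pyramid level).
    down_comp = cv2.pyrDown(comp)
    down_first = cv2.pyrDown(first)
    # Reduced-detail blended image from the two downsampled inputs.
    blended = cv2.addWeighted(down_comp, alpha, down_first, 1.0 - alpha, 0.0)
    # Upsample the blend back to full size.
    up = cv2.pyrUp(blended, dstsize=(w, h))
    # Detail lost by the down/up round trip of the composite.
    detail = comp - cv2.pyrUp(down_comp, dstsize=(w, h))
    # Second composite: upsampled blend plus restored detail.
    return np.clip(up + detail, 0, 255).astype(np.uint8)
```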