Abstract:
Local IP access is provided in a wireless network to facilitate access to one or more local services. In some implementations, different IP interfaces are used for accessing different services (e.g., local services and operator network services). A list that maps packet destinations to IP interfaces may be employed to determine which IP interface is to be used for sending a given packet. In some implementations, an access point provides a proxy function (e.g., a proxy ARP function) for an access terminal. In some implementations, an access point provides an agent function (e.g., a DHCP function) for an access terminal. NAT operations may be performed at an access point to enable the access terminal to access local services. In some aspects, an access point may determine whether to send a packet from an access terminal via a protocol tunnel based on the destination of the packet.
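As an illustrative sketch only (not the claimed implementation), the destination-to-interface lookup described above could be modeled as a prefix table consulted per packet; the prefixes and interface names below are assumptions made for the example.

```python
import ipaddress

# Hypothetical mapping of destination prefixes to IP interfaces; the networks and
# interface names are illustrative and not taken from the abstract.
DESTINATION_MAP = [
    (ipaddress.ip_network("192.168.1.0/24"), "local_if"),      # local services
    (ipaddress.ip_network("10.0.0.0/8"), "operator_tunnel"),   # operator network
]

def select_interface(dst_address, default="operator_tunnel"):
    """Return the IP interface to use for a packet with the given destination."""
    dst = ipaddress.ip_address(dst_address)
    for network, interface in DESTINATION_MAP:
        if dst in network:
            return interface
    return default

# Packets to local services bypass the protocol tunnel; everything else is tunneled.
print(select_interface("192.168.1.20"))  # -> local_if
print(select_interface("8.8.8.8"))       # -> operator_tunnel
```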
Abstract:
Apparatuses and methods for reading a set of images to be merged into a high dynamic range (HDR) output image are described. Each image has a respective HDR weight and a respective ghost-free weight. The images are merged by computing a weighted average of the set of input images using the ghost-free weights. A difference image is determined based on the difference between each pixel within an HDR output image and each respective pixel within a reference image used to create the HDR output image.
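A minimal sketch of the weighted-average merge and difference image follows, assuming per-pixel weight maps and a chosen reference frame; how the weights are computed and which image serves as the reference are not specified by the abstract and are assumptions here.

```python
import numpy as np

def merge_and_difference(images, ghost_free_weights, reference_index=0):
    """Merge an image stack with per-pixel weights and compute a difference image.

    images, ghost_free_weights: arrays of shape (N, H, W).
    """
    images = np.asarray(images, dtype=np.float64)
    weights = np.asarray(ghost_free_weights, dtype=np.float64)
    # Weighted average across the stack (small epsilon avoids division by zero).
    merged = (weights * images).sum(axis=0) / (weights.sum(axis=0) + 1e-12)
    # Per-pixel difference between the merged output and the reference image.
    difference = merged - images[reference_index]
    return merged, difference
```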
Abstract:
Exemplary methods, apparatuses, and systems for image processing are described. One or more reference images are selected based on image quality scores. At least a portion of each reference image is merged to create an output image. An output image with motion artifacts is compared to a target to correct the motion artifacts of the output image.
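As a rough illustration of the selection and correction steps (the scoring method, the number of references, and the artifact mask are assumptions, since the abstract does not define them):

```python
import numpy as np

def select_and_merge(images, quality_scores, num_references=2):
    """Select the highest-scoring images as references and average them.

    images: array of shape (N, H, W); quality_scores: length-N sequence.
    """
    order = np.argsort(quality_scores)[::-1]                    # best scores first
    references = np.asarray(images, dtype=np.float64)[order[:num_references]]
    return references.mean(axis=0)                              # simple merge of the references

def correct_motion_artifacts(output_image, target_image, artifact_mask):
    """Replace pixels flagged as motion artifacts with pixels from the target image."""
    corrected = output_image.copy()
    corrected[artifact_mask] = target_image[artifact_mask]
    return corrected
```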
Abstract:
In a wireless communication system where different frequency bands are deployed to generate various communication zones, pilot signal set management for a plurality of pilot signals generated from an additional coverage zone is based on identifying a preselected signal set from the plurality of pilot signals and determining whether a predetermined criterion is met.
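The abstract does not define the preselected set or the criterion; purely as an illustrative sketch, one might model them as a known set of pilot identifiers and a signal-strength threshold, as assumed below.

```python
# Illustrative assumptions only: the "preselected set" is a fixed collection of pilot
# ids, and the "predetermined criterion" is a minimum measured strength in dB.
PRESELECTED_PILOT_IDS = {12, 37, 54}
STRENGTH_THRESHOLD_DB = -12.0

def manage_pilot_set(measured_pilots):
    """measured_pilots: dict mapping pilot id -> measured strength (dB).

    Returns the preselected pilots that satisfy the assumed criterion.
    """
    preselected = {pid: s for pid, s in measured_pilots.items()
                   if pid in PRESELECTED_PILOT_IDS}
    return {pid: s for pid, s in preselected.items() if s >= STRENGTH_THRESHOLD_DB}
```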
Abstract:
A method for compositing images by an electronic device is described. The method includes obtaining a first composite image that is based on a first image from a first lens with a first focal length and a second image from a second lens with a different second focal length. The method also includes downsampling the first composite image to produce a downsampled first composite image. The method further includes downsampling the first image to produce a downsampled first image. The method additionally includes producing a reduced detail blended image based on the downsampled first composite image and the downsampled first image. The method also includes producing an upsampled image based on the reduced detail blended image and the downsampled first composite image. The method further includes adding detail from the first composite image to the upsampled image to produce a second composite image.
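A compact sketch of the coarse-to-fine blend described above follows; it assumes single-channel images with even dimensions, a simple 2x2 averaging downsample, nearest-neighbour upsampling, and a fixed blend factor, none of which are dictated by the abstract.

```python
import numpy as np

def downsample(image):
    """Halve resolution by averaging 2x2 blocks (single-channel, even dimensions assumed)."""
    h, w = image.shape[:2]
    return image[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

def upsample(image):
    """Double resolution by pixel replication (nearest-neighbour)."""
    return np.repeat(np.repeat(image, 2, axis=0), 2, axis=1)

def composite(first_composite, first_image, alpha=0.5):
    """Illustrative version of the downsample / blend / upsample / add-detail steps."""
    small_composite = downsample(first_composite)
    small_first = downsample(first_image)
    # Reduced-detail blend of the two downsampled images.
    blended = alpha * small_composite + (1.0 - alpha) * small_first
    upsampled = upsample(blended)
    # Re-inject high-frequency detail lost in the downsample/upsample round trip.
    detail = first_composite - upsample(small_composite)
    return upsampled + detail
```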
Abstract:
Techniques are described for generating an all-in focus image with a capability to refocus. One example includes obtaining a first depth map associated with a plurality of captured images of a scene. The plurality of captured images may include images having different focal lengths. The method further includes obtaining a second depth map associated with the plurality of captured images, generating a composite image showing different portions of the scene in focus (based on the plurality of captured images and the first depth map), and generating a refocused image showing a selected portion of the scene in focus (based on the composite image and the second depth map).
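As a simplified sketch of the two steps (composite generation and refocusing), one could treat the first depth map as a per-pixel index into the focal stack and the second depth map as the basis for selecting which region stays sharp; the integer-index depth map and the blur handling are assumptions for the example.

```python
import numpy as np

def all_in_focus(focal_stack, depth_index_map):
    """Pick, for each pixel, the value from the focal-stack image indexed by the depth map.

    focal_stack: array (N, H, W); depth_index_map: integer array (H, W) with values in [0, N).
    """
    stack = np.asarray(focal_stack)
    rows, cols = np.indices(depth_index_map.shape)
    return stack[depth_index_map, rows, cols]

def refocus(composite, blurred_composite, depth_map, selected_depth, tolerance=1):
    """Keep the composite sharp where the depth map is near the selected depth,
    and fall back to a pre-blurred version of the composite elsewhere."""
    in_focus = np.abs(depth_map - selected_depth) <= tolerance
    return np.where(in_focus, composite, blurred_composite)
```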
Abstract:
Techniques disclosed herein involve determining motion occurring in a scene between the capture of two successively-captured images of the scene using intensity gradients of pixels within the images. These techniques can be used alone or with other motion-detection techniques to identify where motion has occurred in the scene, which can be further used to reduce artifacts that may be generated when images are combined.
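A minimal sketch of gradient-based motion detection between two captures is given below; the finite-difference gradient and the threshold value are illustrative choices, not details taken from the abstract.

```python
import numpy as np

def gradient_magnitude(image):
    """Per-pixel intensity gradient magnitude using finite differences."""
    gy, gx = np.gradient(image.astype(np.float64))
    return np.hypot(gx, gy)

def detect_motion(image_a, image_b, threshold=10.0):
    """Flag pixels where the gradient magnitude changes sharply between the two captures."""
    return np.abs(gradient_magnitude(image_a) - gradient_magnitude(image_b)) > threshold
```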
Abstract:
Methods, devices, and computer program products for capturing images with reduced blurriness in low light conditions are described herein. In one aspect, a method of capturing an image is disclosed. The method includes capturing a plurality of first images with a first exposure length. The method further includes aligning each of the plurality of first images with each other and combining the aligned plurality of first images into a combined first image. The method further includes capturing a second image with a second exposure length, wherein the second exposure length is longer than the first exposure length, and using the second image to adjust the brightness of the combined first image.
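As a rough sketch of the combine-and-adjust steps (assuming the frames are already aligned, 8-bit pixel values, and a single global gain as the brightness adjustment, none of which the abstract specifies):

```python
import numpy as np

def combine_short_exposures(aligned_frames):
    """Average a stack of already-aligned short-exposure frames to reduce noise."""
    return np.mean(np.asarray(aligned_frames, dtype=np.float64), axis=0)

def match_brightness(combined, long_exposure):
    """Scale the combined image so its mean brightness matches the long-exposure image.

    A global gain and an 8-bit output range are illustrative assumptions.
    """
    gain = long_exposure.mean() / max(combined.mean(), 1e-12)
    return np.clip(combined * gain, 0, 255)
```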
Abstract:
A mobile device detects a moveable foreground object in captured images, e.g., a series of video frames without depth information. The object may be one or more of the user's fingers. The object may be detected by warping either a captured image of a scene that includes the object or a reference image of the scene without the object, so that the two images have the same view, and then comparing the captured image and the reference image after warping. A mask may be used to segment the object from the captured image. Pixels are detected in the extracted image of the object, and the pixels are used to detect the point of interest on the foreground object. The object may then be tracked in subsequent images. Augmentations may be rendered and interacted with, or temporal gestures may be detected and desired actions performed accordingly.
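A minimal sketch of the compare-and-segment step follows; it assumes single-channel images that have already been warped to the same view, and the difference threshold is an illustrative value rather than part of the described method.

```python
import numpy as np

def foreground_mask(captured, reference, threshold=25.0):
    """Flag pixels that differ strongly from the object-free reference image.

    Both images are assumed single-channel and already warped to the same view.
    """
    diff = np.abs(captured.astype(np.float64) - reference.astype(np.float64))
    return diff > threshold

def segment_object(captured, mask):
    """Keep only the pixels covered by the foreground mask (others set to zero)."""
    segmented = np.zeros_like(captured)
    segmented[mask] = captured[mask]
    return segmented
```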