Abstract:
Implementations of a color filter array comprising a plurality of tiled minimal repeating units. Each minimal repeating unit includes at least a first set of filters comprising three or more color filters, the first set including at least one color filter with a first spectral photoresponse, at least one color filter with a second spectral photoresponse, and at least one color filter with a third spectral photoresponse; and a second set of filters comprising one or more broadband filters positioned among the color filters of the first set, wherein each of the one or more broadband filters has a fourth spectral photoresponse with a broader spectrum than any of the first, second, and third spectral photoresponses, and wherein the individual filters of the second set have a smaller area than any of the individual filters in the first set. Other implementations are disclosed and claimed.
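As an illustrative sketch only (not the claimed implementation), the tiling of a minimal repeating unit can be modeled as a small label grid repeated across the array, with per-filter areas recording that the broadband filters are smaller than the color filters. All labels and area values below are hypothetical.

```python
import numpy as np

# Hypothetical minimal repeating unit (MRU): 'R', 'G', 'B' are the three
# color filters; 'W' stands in for the broadband filter.
MRU_LABELS = np.array([["R", "G"],
                       ["B", "W"]])

# Illustrative relative areas: the broadband filter ('W') has a smaller
# area than any individual color filter, per the abstract.
FILTER_AREA = {"R": 4.0, "G": 4.0, "B": 4.0, "W": 1.0}

def tile_cfa(mru, ny, nx):
    """Tile the minimal repeating unit ny-by-nx times across the array."""
    return np.tile(mru, (ny, nx))
```

For example, `tile_cfa(MRU_LABELS, 2, 2)` yields a 4x4 filter array of four tiled units.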
Abstract:
A brightness-sensitive automatic white balance method includes (a) determining the brightness of a scene captured in an electronic color image, (b) selecting a color-weighting map based upon the brightness of the scene, (c) extracting auto white balance parameters from the color-weighting map, and (d) white balancing the electronic color image according to the auto white balance parameters. An adaptive automatic white balance method includes (a) refining, based upon a first electronic color image of a scene illuminated by an illuminant of a first spectral type, a color-weighting probability distribution for the illuminant of the first spectral type, wherein the color-weighting probability distribution may be brightness-specific, (b) extracting auto white balance parameters from the refined color-weighting probability distribution, and (c) white balancing, according to the auto white balance parameters, an electronic color image of a scene illuminated by the illuminant of the first spectral type.
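A minimal sketch of steps (a)-(d), under assumed simplifications: the color-weighting maps are reduced to per-channel weight vectors, the brightness threshold is arbitrary, and gain extraction uses a weighted gray-world estimate. None of these specifics come from the abstract.

```python
import numpy as np

def scene_brightness(img):
    # (a) brightness of the scene, here just the mean of a [0,1] image
    return img.mean()

# Assumed pre-computed color-weighting maps keyed by brightness regime,
# reduced to per-channel RGB weights for this sketch.
WEIGHT_MAPS = {
    "bright": np.array([1.0, 1.0, 1.2]),  # e.g. daylight-leaning
    "dim":    np.array([1.3, 1.0, 0.9]),  # e.g. incandescent-leaning
}

def select_map(brightness, threshold=0.5):
    # (b) pick a color-weighting map based on brightness
    return WEIGHT_MAPS["bright" if brightness >= threshold else "dim"]

def awb_gains(img, weights):
    # (c) extract per-channel gains: weighted gray-world estimate
    means = (img * weights).reshape(-1, 3).mean(axis=0)
    return means.mean() / means

def white_balance(img):
    # (d) apply the gains to the image
    w = select_map(scene_brightness(img))
    return np.clip(img * awb_gains(img, w), 0.0, 1.0)
```

With uniform unit weights, `awb_gains` reduces to the plain gray-world estimate, which maps a uniformly tinted gray scene back to neutral gray.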
Abstract:
Embodiments are disclosed of a process for generating high dynamic range (HDR) images using an image sensor with a pixel array comprising a plurality of pixels to capture a first image having a first exposure time, a second image having a second exposure time, and a third image having a third exposure time, wherein, of the first, second, and third exposure times, the second exposure time is the shortest. The first, second, and third images are combined into an HDR image. Other embodiments are disclosed and claimed.
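One common way to combine multiple exposures, shown here as a sketch rather than the claimed process, is a weighted merge in linear radiance: each capture is divided by its exposure time and weighted so mid-range pixels dominate while clipped pixels contribute little. The triangle weight and the exposure times used below are illustrative assumptions.

```python
import numpy as np

def merge_hdr(images, times):
    """Merge exposures (values in [0,1]) into one HDR radiance map."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        # triangle weight: highest near mid-gray, ~0 at the clip points
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * img / t          # back-project pixel values to radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

For pixels that are unclipped in every exposure, the merge recovers the underlying radiance exactly under this linear model.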
Abstract:
An image transformation and multi-view output system and associated method generates output view data from raw image data using a coordinate mapping that reverse maps pixels of the output view data onto the raw image data. The coordinate mapping is stored in a lookup table and incorporates perspective correction and/or distortion correction for a wide angle lens used to capture the raw image data. The use of the lookup table with reverse mapping improves performance of the image transformation and multi-view output system to allow multi-view video streaming of images corrected for one or both of perspective and distortion.
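The reverse-mapping idea can be sketched as follows: the lookup table is built once per output view, storing for each output pixel the raw-image coordinate it maps back to, and rendering then becomes a single gather. The mapping function below is a placeholder for where perspective and lens-distortion correction would be folded in; nothing here reproduces the system's actual tables.

```python
import numpy as np

def build_lut(out_h, out_w, mapping):
    """Precompute, per output pixel, its source (y, x) in the raw image.

    `mapping` is a hypothetical (yo, xo) -> (ys, xs) function; in the real
    system it would encode perspective and/or distortion correction.
    """
    lut = np.empty((out_h, out_w, 2), dtype=np.int32)
    for yo in range(out_h):
        for xo in range(out_w):
            lut[yo, xo] = mapping(yo, xo)
    return lut

def render(raw, lut):
    # one vectorized gather per frame: the LUT does all the geometry work
    return raw[lut[..., 0], lut[..., 1]]
```

Because the per-pixel geometry is paid once at table-build time, each streamed frame costs only the gather, which is what makes multi-view streaming tractable.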
Abstract:
A method for communicating from a mobile platform includes arranging a plurality of regions in a communication screen on a first mobile platform. Each one of the plurality of regions in the communication screen is populated with communication data. The communication data includes one or more of text data, image data, and video data. The communication screen is sent from the first mobile platform to a second mobile platform. A display of the communication screen on the second mobile platform appears substantially identical to a display of the communication screen on the first mobile platform.
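As a hypothetical data-structure sketch (the region schema and JSON transport below are assumptions, not the claimed method), the communication screen can be modeled as an ordered list of regions serialized for transmission, so that the receiving platform reconstructs the same layout data it was sent.

```python
import json

def make_screen(regions):
    # regions: list of dicts, each holding text, image, or video data
    return {"regions": regions}

def send(screen):
    # stand-in for the platform-to-platform link
    return json.dumps(screen)

def receive(payload):
    return json.loads(payload)
```

A lossless round trip of the screen data is what allows the second platform's display to appear substantially identical to the first's.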
Abstract:
An imaging system includes a primary imager and a plurality of 3A-control sensors. The primary imager has a first field of view and includes a primary image sensor and a primary imaging lens with a first optical axis. The primary image sensor has a primary pixel array and control circuitry communicatively coupled thereto. The plurality of 3A-control sensors includes at least one of a peripheral imager and a 3A-control sensor. The peripheral imager, if included, (i) has a second field of view including at least part of the first field of view and (ii) includes a phase-difference auto-focus (PDAF) sensor and a peripheral imaging lens, the PDAF sensor being separate from the primary image sensor. The 3A-control sensor, if included, is separate from the primary pixel array and communicatively connected to the control circuitry to provide one of auto-white balance and exposure control for the primary pixel array.
Abstract:
A system and method for generating an image includes a plurality of imaging units coupled together and a system controller coupled to the plurality of imaging units for providing at least one signal to each of the plurality of imaging units. Each of the imaging units comprises: an image sensing unit for generating an in-situ image, each in-situ image being a portion of the image; an input for receiving the in-situ image; a composition unit for receiving a first composite image and producing a second composite image, the second composite image being a combination of the first composite image and the in-situ image; and an output at which the second composite image is provided.
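The daisy-chain composition described above can be sketched as follows, under the simplifying assumption that "combination" is concatenation of image portions; the real composition unit could blend or stitch instead. Function names are illustrative.

```python
def composition_unit(incoming, in_situ):
    """Combine the composite received on the input with the in-situ image.

    Here the combination is simple concatenation of image strips
    (lists of pixel rows); the abstract leaves the combination generic.
    """
    return incoming + in_situ

def chain(units):
    """Pass a growing composite through each imaging unit in turn."""
    composite = []                     # first unit receives an empty composite
    for in_situ in units:              # each unit contributes its portion
        composite = composition_unit(composite, in_situ)
    return composite                   # output of the last unit is the image
```

Each unit only ever sees its predecessor's composite and its own in-situ image, matching the input/composition/output structure of the abstract.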
Abstract:
A system for obtaining image depth information for at least one object in a scene includes (a) an imaging objective having a first portion for forming a first optical image of the scene, and a second portion for forming a second optical image of the scene, the first portion being different from the second portion, (b) an image sensor for capturing the first and second optical images and generating respective first and second electronic images therefrom, and (c) a processing module for processing the first and second electronic images to determine the depth information. A method for obtaining image depth information for at least one object in a scene includes forming first and second images of the scene, using respective first and second portions of an imaging objective, on a single image sensor, and determining the depth information from a spatial shift between the first and second images.
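The final step, determining depth from the spatial shift between the two sub-images, follows the standard triangulation relation z = f·B/d. The sketch below assumes a pinhole model with the shift measured in pixels and the baseline being the separation between the two objective portions; parameter names are illustrative.

```python
def depth_from_shift(shift_px, baseline_mm, focal_px):
    """Depth from the disparity between the two sub-images.

    shift_px:    measured spatial shift (disparity) in pixels
    baseline_mm: separation of the two objective portions, in mm
    focal_px:    focal length expressed in pixels
    Returns depth in mm under a simple pinhole/triangulation model.
    """
    if shift_px <= 0:
        raise ValueError("shift must be positive")
    return focal_px * baseline_mm / shift_px
```

Note the inverse relation: halving the measured shift doubles the estimated depth, which is why depth resolution degrades for distant objects.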
Abstract:
A method for embedding stereo imagery includes (a) transforming a foreground stereo image, extracted from a source stereo image captured by a first stereo camera, from a scale associated with the first stereo camera to a scale associated with a second stereo camera, to form a transformed foreground stereo image, and (b) embedding the transformed foreground stereo image into a target stereo image, captured by the second stereo camera, to form an embedded stereo image.
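Steps (a) and (b) can be sketched per image of the stereo pair as a rescale from the source camera's scale to the target camera's, followed by a paste into the target frame. The nearest-neighbour resize and the scalar scale factors below are simplifying assumptions; a real transform between stereo cameras would also adjust disparity.

```python
import numpy as np

def transform_scale(fg, src_scale, dst_scale):
    """(a) Rescale a foreground image between camera scales.

    Nearest-neighbour resize by the ratio dst_scale/src_scale
    (illustrative only).
    """
    r = dst_scale / src_scale
    h, w = fg.shape[:2]
    ys = (np.arange(int(h * r)) / r).astype(int)
    xs = (np.arange(int(w * r)) / r).astype(int)
    return fg[ys][:, xs]

def embed(target, fg, top, left):
    """(b) Paste the transformed foreground into the target image."""
    out = target.copy()
    h, w = fg.shape[:2]
    out[top:top + h, left:left + w] = fg
    return out
```

Applying the same transform-then-embed to both the left and right images of the pair yields the embedded stereo image of the abstract.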