Abstract:
Reducing consumption of image sensor processor bandwidth includes capturing an image containing subject matter with an image sensor and cropping the image to generate a cropped image. Cropping the image is performed by the image sensor in response to coordinates received from an image sensor processor. The cropped image is sent from the image sensor to the image sensor processor and new coordinates based on a position of the subject matter in the cropped image are determined with the image sensor processor. The new coordinates are then sent to the image sensor.
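As an illustration of the capture-crop-track loop the abstract describes, here is a minimal Python sketch. The crop(), locate_subject(), and next_coords() helpers are hypothetical stand-ins: the abstract does not specify how the image sensor processor locates the subject, so a brightest-pixel detector is used purely for demonstration.

```python
import numpy as np

def crop(frame, coords):
    """Crop to the (row, col, height, width) window. In the abstract
    this step runs on the image sensor itself, so only the cropped
    window crosses the sensor-to-ISP link."""
    r, c, h, w = coords
    return frame[r:r + h, c:c + w]

def locate_subject(cropped):
    """Hypothetical ISP-side detector: use the brightest pixel as a
    stand-in for a real subject detector."""
    return np.unravel_index(np.argmax(cropped), cropped.shape)

def next_coords(coords, centroid, frame_shape):
    """Re-center the crop window on the subject for the next frame."""
    r, c, h, w = coords
    new_r = min(max(r + centroid[0] - h // 2, 0), frame_shape[0] - h)
    new_c = min(max(c + centroid[1] - w // 2, 0), frame_shape[1] - w)
    return (new_r, new_c, h, w)

# One iteration of the sensor/ISP loop on synthetic data.
frame = np.zeros((1080, 1920)); frame[400, 900] = 255.0
coords = (300, 800, 256, 256)              # ISP-supplied crop window
cropped = crop(frame, coords)              # performed on-sensor
coords = next_coords(coords, locate_subject(cropped), frame.shape)
print(coords)                              # sent back to the sensor
```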
Abstract:
A method determines a pixel value in a high dynamic range image from two images of different brightness by obtaining corresponding input pixel intensities from the two images, determining combination weights, and calculating the pixel value in the high dynamic range image as a weighted average of the input pixel intensities. Another method determines a pixel value in a high dynamic range image from more than two images by forming pairs of corresponding input pixel intensities, determining relative combination weights for the input pixel intensities of each pair, applying a normalization condition to determine absolute combination weights, and calculating the pixel value in the high dynamic range image as a weighted average of the input pixel intensities. Systems for high dynamic range image generation from two or more input images include a processor, a memory, a combination weight module, and a pixel value calculation module.
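The weighted-average combination lends itself to a short sketch. The hat-shaped weighting function below is a common HDR heuristic, not one specified by the abstract; hdr_pixel_many() mirrors the pairwise-relative-weights-then-normalize structure described for more than two images.

```python
def hat_weight(intensity, max_val=255.0):
    """Hypothetical weighting that favors mid-range intensities (a
    common HDR heuristic; the abstract does not specify the weights)."""
    return 1.0 - abs(2.0 * intensity / max_val - 1.0) + 1e-6

def hdr_pixel_two(i1, i2):
    """Pixel value from two exposures as a weighted average."""
    w1, w2 = hat_weight(i1), hat_weight(i2)
    return (w1 * i1 + w2 * i2) / (w1 + w2)

def hdr_pixel_many(intensities):
    """Pixel value from N exposures: chain relative weights over
    adjacent pairs, then normalize to absolute weights summing to 1."""
    weights = [1.0]
    for a, b in zip(intensities, intensities[1:]):
        # Relative combination weight of b versus a for this pair.
        weights.append(weights[-1] * hat_weight(b) / hat_weight(a))
    total = sum(weights)                      # normalization condition
    absolute = [w / total for w in weights]   # absolute weights
    return sum(w * i for w, i in zip(absolute, intensities))

print(hdr_pixel_two(40.0, 180.0))
print(hdr_pixel_many([20.0, 110.0, 240.0]))
```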
Abstract:
A method for generating a control signal to control an information technology device includes the following steps: (1) capturing, using an image sensor, a current control image of a light source of a remote controller positioned within a field of view of the image sensor; (2) identifying, within the current control image, a current location of light emitted from the light source; (3) determining movement between (a) the current location of the light emitted from the light source and (b) a previous location of the light emitted from the light source determined from a previously captured image; (4) generating a movement control signal based upon the movement; and (5) sending the movement control signal to the information technology device. The method is executed, for example, by a movement control module of an information technology device input system.
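A minimal sketch of steps (2) through (4), assuming the light source appears as the brightest region of the control image; the threshold detector and the (dx, dy) signal format are illustrative assumptions, not details from the abstract.

```python
import numpy as np

def light_location(image, threshold=200):
    """Step (2): centroid of pixels bright enough to be the remote's
    light source (a simple stand-in detector)."""
    ys, xs = np.nonzero(image >= threshold)
    if len(xs) == 0:
        return None
    return (float(xs.mean()), float(ys.mean()))

def movement_control_signal(current_img, previous_loc):
    """Steps (3)-(4): movement between current and previous locations,
    expressed as a (dx, dy) control signal."""
    current_loc = light_location(current_img)
    if current_loc is None or previous_loc is None:
        return current_loc, (0.0, 0.0)
    dx = current_loc[0] - previous_loc[0]
    dy = current_loc[1] - previous_loc[1]
    return current_loc, (dx, dy)

# Two synthetic frames with the light source moving right by 5 pixels.
prev = np.zeros((480, 640)); prev[240, 320] = 255
curr = np.zeros((480, 640)); curr[240, 325] = 255
loc, signal = movement_control_signal(curr, light_location(prev))
print(signal)  # (5.0, 0.0) -> sent to the information technology device
```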
Abstract:
A mobile computing device includes a first video camera on a first side of the mobile computing device producing a first camera video stream. A second video camera is on a second side of the mobile computing device producing a second camera video stream. A video processor is coupled to the first video camera and the second video camera to receive the first camera video stream and the second camera video stream, respectively. The video processor is coupled to merge the first camera video stream and the second camera video stream to generate a merged video stream. The video processor includes a network interface coupled to upload the merged video stream to a server in real time over a wireless Internet network. The server broadcasts the merged video stream to a plurality of receivers in real time.
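The abstract leaves the merge operation unspecified; the sketch below assumes a simple side-by-side composition of corresponding frames from the two camera streams, with the upload step reduced to a comment.

```python
import numpy as np

def merge_frames(front, back):
    """Merge one frame from each camera. The merge operation is not
    specified by the abstract; this sketch composes the frames side
    by side after matching their heights."""
    h = min(front.shape[0], back.shape[0])
    return np.hstack([front[:h], back[:h]])

def merged_stream(front_stream, back_stream):
    """Yield merged frames as both camera streams arrive."""
    for f, b in zip(front_stream, back_stream):
        yield merge_frames(f, b)
        # Each merged frame would then be handed to the network
        # interface for real-time upload to the broadcast server.

front = (np.zeros((720, 1280, 3), np.uint8) for _ in range(3))
back = (np.full((720, 1280, 3), 255, np.uint8) for _ in range(3))
for frame in merged_stream(front, back):
    print(frame.shape)  # (720, 2560, 3)
```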
Abstract:
A mobile computing device includes first, second and third cameras coupled to produce first, second and third camera video streams, respectively. The first camera is on a first side of the mobile computing device, and the second and third cameras are included in a stereo camera on a second side of the mobile computing device. A video processor is coupled to generate an output video stream including a first video layer generated from the first camera video stream. The video processor is further coupled to generate the output video stream to include second and third video layers generated from the second and third camera video streams, respectively. The video processor is further coupled to overlay the first video layer between the second video layer and the third video layer in the output video stream.
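One way to realize overlaying the first video layer between the second and third layers is back-to-front alpha compositing, sketched below per frame; the blending method and the RGBA layer format are assumptions, since the abstract does not fix them.

```python
import numpy as np

def composite(layers_back_to_front):
    """Alpha-composite RGBA layers back to front (a common way to place
    one layer 'between' two others; the abstract does not prescribe the
    blending method)."""
    out = np.zeros_like(layers_back_to_front[0], dtype=np.float64)
    for layer in layers_back_to_front:
        alpha = layer[..., 3:4] / 255.0
        out[..., :3] = alpha * layer[..., :3] + (1 - alpha) * out[..., :3]
        out[..., 3:4] = np.maximum(out[..., 3:4], layer[..., 3:4])
    return out.astype(np.uint8)

h, w = 360, 640
third = np.zeros((h, w, 4), np.uint8); third[..., 3] = 255  # rear stereo layer
first = np.zeros((h, w, 4), np.uint8); first[..., 0] = 255  # front-camera layer
first[..., 3] = 128                                         # semi-transparent
second = np.zeros((h, w, 4), np.uint8)                      # front stereo layer
frame = composite([third, first, second])  # first layer sits in between
print(frame.shape)
```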
Abstract:
An imaging system includes a primary imager and a plurality of 3A-control sensors. The primary imager has a first field of view and includes a primary image sensor and a primary imaging lens with a first optical axis. The primary image sensor has a primary pixel array and control circuitry communicatively coupled thereto. The plurality of 3A-control sensors includes at least one of a peripheral imager and a 3A-control sensor. The peripheral imager, if included, has a second field of view that includes at least part of the first field of view, and includes a phase-difference auto-focus (PDAF) sensor and a peripheral imaging lens, the PDAF sensor being separate from the primary image sensor. The 3A-control sensor, if included, is separate from the primary pixel array and communicatively connected to the control circuitry to provide one of auto-white balance and exposure control for the primary pixel array.
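As a rough sketch of what the 3A-control sensor could feed back to the control circuitry, the functions below apply two textbook heuristics (mean-luma exposure targeting and gray-world white balance); neither heuristic is claimed by the abstract, and all names and constants are hypothetical.

```python
def exposure_from_sensor(mean_luma, target=0.18, current_exposure_ms=10.0):
    """Hypothetical exposure-control step: scale the primary array's
    exposure so the 3A-control sensor's mean luma approaches a target."""
    gain = target / max(mean_luma, 1e-6)
    return current_exposure_ms * gain

def awb_gains(r_mean, g_mean, b_mean):
    """Hypothetical gray-world white balance from the 3A-control
    sensor's channel means: gains that equalize channels to green."""
    return (g_mean / max(r_mean, 1e-6), 1.0, g_mean / max(b_mean, 1e-6))

print(exposure_from_sensor(0.09))   # 20.0 ms: scene too dark, expose longer
print(awb_gains(0.5, 0.4, 0.25))    # (0.8, 1.0, 1.6)
```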
Abstract:
An imaging system with on-chip phase detection includes an image sensor with symmetric multi-pixel phase-difference detectors. Each symmetric multi-pixel phase-difference detector includes (a) a plurality of pixels forming an array and each having a respective color filter thereon, each color filter having a transmission spectrum, and (b) a microlens at least partially above each of the plurality of pixels and having an optical axis intersecting the array. The array, by virtue of each transmission spectrum, has reflection symmetry with respect to both (a) a first plane that includes the optical axis and (b) a second plane that is orthogonal to the first plane. The imaging system includes a phase-detection row pair, which includes a plurality of symmetric multi-pixel phase-difference detectors in a pair of adjacent pixel rows, and an analogous phase-detection column pair.
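The symmetry condition on the color filter array can be expressed compactly in code. The sketch below models each filter's transmission spectrum as a label and checks mirror symmetry about the two planes through the array center (standing in for the microlens optical axis); the example tiles are invented for illustration.

```python
import numpy as np

def has_required_symmetry(cfa):
    """Check the reflection symmetry the abstract requires: the filter
    array under a multi-pixel detector must be mirror-symmetric about
    both a vertical and a horizontal plane through the microlens axis
    (modeled here as the array center)."""
    return (np.array_equal(cfa, cfa[:, ::-1]) and   # left-right mirror
            np.array_equal(cfa, cfa[::-1, :]))      # top-bottom mirror

# A hypothetical 2x2 detector covered by a single microlens: all four
# pixels share one transmission spectrum, so both symmetries hold.
symmetric = np.array([["G", "G"],
                      ["G", "G"]])
asymmetric = np.array([["R", "G"],
                       ["G", "B"]])   # a Bayer tile is not symmetric
print(has_required_symmetry(symmetric), has_required_symmetry(asymmetric))
```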
Abstract:
A system for obtaining image depth information for at least one object in a scene includes (a) an imaging objective having a first portion for forming a first optical image of the scene, and a second portion for forming a second optical image of the scene, the first portion being different from the second portion, (b) an image sensor for capturing the first and second optical images and generating respective first and second electronic images therefrom, and (c) a processing module for processing the first and second electronic images to determine the depth information. A method for obtaining image depth information for at least one object in a scene includes forming first and second images of the scene, using respective first and second portions of an imaging objective, on a single image sensor, and determining the depth information from a spatial shift between the first and second images.
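The final step, recovering depth from the spatial shift between the two images, can be sketched as follows; the sum-of-absolute-differences matcher and the proportionality constant k are illustrative assumptions, as the abstract does not prescribe either.

```python
import numpy as np

def spatial_shift(row_a, row_b, max_shift=32):
    """Estimate the shift between the two sub-images along one row by
    minimizing the sum of absolute differences (one simple matcher;
    the abstract does not prescribe the matching method)."""
    best, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        a = row_a[max(0, s):len(row_a) + min(0, s)]
        b = row_b[max(0, -s):len(row_b) + min(0, -s)]
        err = np.abs(a - b).mean()
        if err < best_err:
            best, best_err = s, err
    return best

def depth_from_shift(shift, k=1000.0):
    """Depth is inversely proportional to the spatial shift; k lumps
    together baseline and focal-length factors (hypothetical units)."""
    return np.inf if shift == 0 else k / abs(shift)

row = np.sin(np.linspace(0, 20, 512))
print(depth_from_shift(spatial_shift(row, np.roll(row, 4))))  # 250.0
```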
Abstract:
Implementations of a color filter array comprising a plurality of tiled minimal repeating units are disclosed. Each minimal repeating unit includes at least a first set of filters comprising three or more color filters, the first set including at least one color filter with a first spectral photoresponse, at least one color filter with a second spectral photoresponse, and at least one color filter with a third spectral photoresponse; and a second set of filters comprising one or more broadband filters positioned among the color filters of the first set, wherein each of the one or more broadband filters has a fourth spectral photoresponse with a broader spectrum than any of the first, second, and third spectral photoresponses, and wherein the individual filters of the second set have a smaller area than any of the individual filters in the first set. Other implementations are disclosed and claimed.
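A sketch of how such a minimal repeating unit might be represented and tiled; the specific 3x3 pattern below, with broadband "W" cells interleaved among R, G, and B filters, is invented for illustration and is not the claimed layout (which also gives the broadband filters a smaller physical area than the color filters).

```python
import numpy as np

# Hypothetical minimal repeating unit: R, G, B color filters (first
# set) with broadband filters "W" (second set) positioned among them.
# Each "W" cell stands for one small broadband filter.
mru = np.array([["R", "W", "G"],
                ["W", "G", "W"],
                ["G", "W", "B"]])

def tile_cfa(mru, rows, cols):
    """Tile the minimal repeating unit across a sensor of the given
    size, as a color filter array is built from repeated units."""
    r, c = mru.shape
    return np.tile(mru, (rows // r + 1, cols // c + 1))[:rows, :cols]

print(tile_cfa(mru, 6, 6))
```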
Abstract:
Embodiments of an apparatus and process are described. The process includes capturing a first image frame using an image sensor, the image sensor including a pixel array comprising a plurality of pixels arranged in rows and columns and a color filter array optically coupled to the pixel array. A region of interest within the first image frame is determined, and the exposure time of the image sensor is adjusted to eliminate a substantial fraction of the visible light captured by the image sensor. A rolling shutter procedure is used with the pixel array to capture at least one subsequent frame using the adjusted exposure time, and a source of invisible radiation is activated when the rolling shutter enters the region of interest and deactivated when the rolling shutter exits the region of interest. Finally, an image of the region of interest is output. Other embodiments are disclosed and claimed.
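A sketch of the illumination gating, assuming the shutter advances one row at a fixed readout interval; the set_ir_source hook, the timing constant, and the event log are all hypothetical.

```python
def illuminate_roi(num_rows, roi_start, roi_end, readout_us_per_row=10):
    """Gate an invisible-radiation (e.g. IR) source to a rolling
    shutter: turn the source on when the shutter reaches the region of
    interest's first row and off after its last row."""
    events = []
    def set_ir_source(on, t_us):
        # Hypothetical hardware hook; here it just logs the event.
        events.append(("IR on" if on else "IR off", t_us))
    for row in range(num_rows):
        t_us = row * readout_us_per_row  # time this row starts exposing
        if row == roi_start:
            set_ir_source(True, t_us)    # shutter enters the ROI
        if row == roi_end + 1:
            set_ir_source(False, t_us)   # shutter exits the ROI
    return events

print(illuminate_roi(1080, roi_start=400, roi_end=500))
# [('IR on', 4000), ('IR off', 5010)]
```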