Abstract:
A method operational on a receiver device for decoding a codeword is provided. At least a portion of a composite code mask, projected on the surface of a target object, is obtained via a receiver sensor. The composite code mask may be defined by a code layer and a carrier layer. The code layer may include uniquely identifiable spatially-coded codewords defined by a plurality of symbols. The carrier layer may be independently ascertainable and distinct from the code layer and may include a plurality of reference objects that are robust to distortion upon projection. At least one of the code layer and carrier layer may have been pre-shaped by a synthetic point spread function prior to projection. The code layer may be adjusted, at a processing circuit, for distortion based on the reference objects within the portion of the composite code mask.
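The decoding step can be pictured as two operations: undo the projection distortion using the carrier layer's reference objects, then read the code layer's symbols as a codeword. Below is a minimal numpy sketch of that idea, assuming an affine distortion model, a 4x4 binary symbol block, and hypothetical reference-point coordinates; none of this is the patented mask design or decoder.

```python
import numpy as np

def estimate_affine(ref_detected, ref_expected):
    """Least-squares affine transform mapping detected reference points
    (from the carrier layer) back to their undistorted positions."""
    n = ref_detected.shape[0]
    A = np.hstack([ref_detected, np.ones((n, 1))])      # n x 3
    # Solve A @ M ~= ref_expected for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, ref_expected, rcond=None)
    return M

def decode_codeword(symbols):
    """Pack a small grid of binary symbols (the code layer) into an integer codeword."""
    bits = symbols.astype(int).ravel()
    return int("".join(map(str, bits)), 2)

# Detected vs. expected reference objects (carrier layer), in pixels (toy values).
ref_detected = np.array([[10.2, 9.8], [50.5, 10.1], [10.0, 50.4], [50.3, 50.0]])
ref_expected = np.array([[10.0, 10.0], [50.0, 10.0], [10.0, 50.0], [50.0, 50.0]])
M = estimate_affine(ref_detected, ref_expected)

# A thresholded 4x4 patch of the code layer after distortion correction.
patch = np.array([[1, 0, 0, 1],
                  [0, 1, 1, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1]])
print("affine correction:\n", M)
print("decoded codeword:", decode_codeword(patch))
```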
Abstract:
An electronic imaging device and method for image capture are described. The imaging device includes a camera configured to obtain image information of a scene, where the camera may be focused on a region of interest in the scene. The imaging device also includes a LIDAR unit configured to obtain depth information of at least a portion of the scene at specified scan locations of the scene. The imaging device is configured to detect an object in the scene and to provide specified scan locations to the LIDAR unit. The camera is configured to capture an image with an adjusted focus based on depth information obtained by the LIDAR unit and associated with the detected object.
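As an illustration of how a single depth reading can drive focus, the sketch below looks up the LIDAR sample nearest a detected object and converts that distance to a lens position with the thin-lens equation. The focal length, scan-point format, and nearest-sample lookup are assumptions for the example, not the device's actual control path.

```python
# Hypothetical sketch: depth at a detected object's location -> lens focus position.
FOCAL_LENGTH_MM = 4.5  # assumed camera focal length

def lidar_depth_at(scan_points, location):
    """Return the depth (in mm) of the scan point nearest the requested location."""
    nearest = min(scan_points,
                  key=lambda p: (p[0] - location[0]) ** 2 + (p[1] - location[1]) ** 2)
    return nearest[2]

def focus_position_mm(object_distance_mm, f_mm=FOCAL_LENGTH_MM):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i  ->  lens-to-sensor distance d_i."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_distance_mm)

# (x, y, depth_mm) samples returned by the LIDAR unit at specified scan locations.
scan_points = [(100, 120, 850.0), (400, 300, 2400.0), (620, 140, 5200.0)]
detected_object = (410, 310)          # e.g. a detector's bounding-box center

depth = lidar_depth_at(scan_points, detected_object)
print("object depth: %.0f mm, focus position: %.4f mm"
      % (depth, focus_position_mm(depth)))
```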
Abstract:
Aspects of the present disclosure relate to systems and methods for determining a resampler for resampling or converting non-Bayer pattern color filter array image data to Bayer pattern image data. An example device may include a camera having an image sensor with a non-Bayer pattern color filter array configured to capture non-Bayer pattern image data for an image. The example device also may include a memory and a processor coupled to the memory. The processor may be configured to receive the non-Bayer pattern image data from the image sensor, divide the non-Bayer pattern image data into portions, determine a sampling filter corresponding to the portions, and determine, based on the determined sampling filter, a resampler for converting the non-Bayer pattern image data to Bayer pattern image data.
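The flow can be sketched as: tile the non-Bayer CFA data, pick a sampling filter per tile from a local statistic, and filter each tile before re-mosaicking onto a Bayer grid. The Python below illustrates only the tiling and per-portion filter selection; the tile size, variance threshold, kernels, and the omitted re-mosaic step are assumptions, not the disclosed resampler.

```python
import numpy as np

SMOOTH = np.full((3, 3), 1.0 / 9.0)          # low-pass kernel for flat tiles
SHARP = np.array([[0, -1, 0],                # mild sharpening for detailed tiles
                  [-1, 5, -1],
                  [0, -1, 0]], dtype=float)

def choose_filter(tile):
    """Select a sampling filter for this portion based on its variance."""
    return SHARP if tile.var() > 100.0 else SMOOTH

def convolve2d(img, kernel):
    """Tiny 'same'-size convolution with edge padding (no SciPy dependency)."""
    k = kernel.shape[0] // 2
    padded = np.pad(img, k, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(kernel.shape[0]):
        for dx in range(kernel.shape[1]):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def filter_portions(cfa, tile=16):
    """Filter each tile with its chosen kernel; re-mosaicking the filtered data
    onto a Bayer grid is omitted in this sketch."""
    out = np.empty_like(cfa, dtype=float)
    for y in range(0, cfa.shape[0], tile):
        for x in range(0, cfa.shape[1], tile):
            block = cfa[y:y + tile, x:x + tile].astype(float)
            out[y:y + tile, x:x + tile] = convolve2d(block, choose_filter(block))
    return out

cfa = np.random.randint(0, 1024, (64, 64))    # stand-in for non-Bayer CFA samples
print(filter_portions(cfa).shape)
```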
Abstract:
A device includes a first camera and a processor configured to detect one or more first keypoints within a first image captured by the first camera at a first time, detect one or more second keypoints within a second image captured by a second camera at the first time, and detect the one or more first keypoints within a third image captured by the first camera at a second time. The processor is configured to determine a pose estimation based on coordinates of the one or more first keypoints of the first image relative to a common coordinate system, coordinates of the one or more second keypoints of the second image relative to the common coordinate system, and coordinates of the one or more first keypoints of the third image relative to the common coordinate system. A first coordinate system associated with the first camera is different from the common coordinate system.
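Once keypoints from both cameras and both times are expressed in the common coordinate system, the pose change can be estimated with a standard rigid-body fit. The sketch below uses a Kabsch/Procrustes solve on synthetic 3-D keypoints as a stand-in; the solver and the toy data are illustrative assumptions, not the device's estimator.

```python
import numpy as np

def rigid_transform(p_prev, p_curr):
    """Least-squares rotation R and translation t with p_curr ~= R @ p_prev + t."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    H = (p_prev - c_prev).T @ (p_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Keypoints from the first and second cameras at time 1, already mapped into
# the common coordinate system and stacked into one set (toy values).
kp_t1 = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.5], [0.0, 1.0, 3.0], [1.0, 1.0, 2.2]])

# The same keypoints re-observed at time 2 after a small device motion.
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
kp_t2 = kp_t1 @ R_true.T + np.array([0.1, -0.05, 0.02])

R, t = rigid_transform(kp_t1, kp_t2)
print("estimated rotation:\n", np.round(R, 3))
print("estimated translation:", np.round(t, 3))
```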
Abstract:
Aspects of the present disclosure relate to systems and methods for structured light depth systems. An example active depth system may include a receiver to receive reflections of transmitted light and a transmitter including one or more light sources to transmit light in a spatial distribution. The spatial distribution of transmitted light may include a first region of a first plurality of light points and a second region of a second plurality of light points. A first density of the first plurality of light points is greater than a second density of the second plurality of light points when a first distance between a center of the spatial distribution and a center of the first region is less than a second distance between the center of the spatial distribution and the center of the second region.
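A simple way to picture such a distribution is a dense grid of points near the center and a sparser grid farther out. The sketch below builds a two-zone example and verifies that the inner region's point density exceeds the outer region's; the zone radius and spacings are arbitrary assumptions, not the disclosed pattern.

```python
import numpy as np

def point_distribution(width=100, height=100, inner_radius=30,
                       inner_spacing=2, outer_spacing=5):
    """Dense grid of light points inside inner_radius, sparser grid outside."""
    cx, cy = width / 2.0, height / 2.0
    points = []
    for spacing, keep_inner in ((inner_spacing, True), (outer_spacing, False)):
        for y in np.arange(0, height, spacing):
            for x in np.arange(0, width, spacing):
                inside = (x - cx) ** 2 + (y - cy) ** 2 <= inner_radius ** 2
                if inside == keep_inner:
                    points.append((x, y))
    return np.array(points)

pts = point_distribution()
center = np.array([50.0, 50.0])
r = np.linalg.norm(pts - center, axis=1)
inner, outer = pts[r <= 30], pts[r > 30]
inner_area = np.pi * 30 ** 2
outer_area = 100 * 100 - inner_area
print("inner density: %.3f points/unit^2" % (len(inner) / inner_area))
print("outer density: %.3f points/unit^2" % (len(outer) / outer_area))
```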
Abstract:
An electronic device for generating a corrected depth map is described. The electronic device includes a processor. The processor is configured to obtain a first depth map. The first depth map includes first depth information of a first portion of a scene sampled by a depth sensor at a first sampling. The processor is also configured to obtain a second depth map. The second depth map includes second depth information of a second portion of the scene sampled by the depth sensor at a second sampling. The processor is additionally configured to obtain displacement information indicative of a displacement of the depth sensor between the first sampling and the second sampling. The processor is also configured to generate a corrected depth map by correcting erroneous depth information based on the first depth information, the second depth information, and the displacement information.
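Conceptually, the second depth map is shifted by the sensor displacement into the first map's frame and used to replace samples flagged as erroneous. The sketch below shows that idea with an integer-pixel shift and a zero-means-invalid convention; both are simplifying assumptions, not the device's correction logic.

```python
import numpy as np

def correct_depth(first, second, displacement):
    """Replace invalid (zero) depths in `first` with values from `second`
    shifted by the sensor displacement (dx, dy) in pixels."""
    dx, dy = displacement
    h, w = second.shape
    shifted = np.zeros_like(second)
    ys_dst = slice(max(dy, 0), min(h + dy, h))
    xs_dst = slice(max(dx, 0), min(w + dx, w))
    ys_src = slice(max(-dy, 0), min(h - dy, h))
    xs_src = slice(max(-dx, 0), min(w - dx, w))
    shifted[ys_dst, xs_dst] = second[ys_src, xs_src]
    corrected = first.copy()
    invalid = (first == 0) & (shifted > 0)
    corrected[invalid] = shifted[invalid]
    return corrected

first = np.array([[1.2, 0.0, 1.3],       # zeros mark erroneous samples
                  [1.1, 1.2, 0.0],
                  [0.0, 1.4, 1.5]])
second = np.array([[1.2, 1.25, 1.3],
                   [1.15, 1.2, 1.35],
                   [1.3, 1.4, 1.5]])
# Zero displacement for this toy example; nonzero values shift `second` first.
print(correct_depth(first, second, displacement=(0, 0)))
```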
Abstract:
Methods, systems, and apparatuses are provided to compensate for a misalignment of optical devices within an imaging system. For example, the methods receive image data captured by a first optical device having a first optical axis and a second optical device having a second optical axis. The methods also receive sensor data indicative of a deflection of a substrate that supports the first and second optical devices. The deflection can result in a misalignment of the first optical axis relative to the second optical axis. The methods generate a depth value based on the captured image data and the sensor data. The depth value can reflect a compensation for the misalignment of the first optical axis relative to the second optical axis.
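One way to see the compensation is through the stereo depth equation: a deflection-induced tilt adds a disparity offset, and removing that offset before computing depth = f*B/d restores the correct value. The sketch below uses an assumed linear deflection-to-tilt calibration and a small-angle model; all constants are illustrative, not calibrated values from the disclosure.

```python
import numpy as np

FOCAL_PX = 1400.0            # focal length in pixels (assumed)
BASELINE_M = 0.012           # distance between the two optical devices (assumed)
DEFLECTION_TO_RAD = 2.0e-4   # assumed calibration: radians of tilt per sensor unit

def depth_from_disparity(disparity_px, deflection_units=0.0):
    """Depth = f*B / d, after removing the disparity offset caused by the
    deflection-induced tilt of one optical axis."""
    tilt = deflection_units * DEFLECTION_TO_RAD
    disparity_offset = FOCAL_PX * np.tan(tilt)
    corrected = disparity_px - disparity_offset
    return FOCAL_PX * BASELINE_M / corrected

measured_disparity = 24.0    # pixels, includes the misalignment error
deflection = 5.0             # reading from the deflection sensor

print("uncompensated depth: %.3f m" % depth_from_disparity(measured_disparity))
print("compensated depth:   %.3f m" % depth_from_disparity(measured_disparity, deflection))
```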
Abstract:
Devices and methods are described for providing seamless preview images for multi-camera devices having two or more asymmetric cameras. A multi-camera device may include two asymmetric cameras disposed to image a target scene. The multi-camera device further includes a processor coupled to a memory component and a display, the processor configured to retrieve a first image generated by a first camera from the memory component, retrieve a second image generated by a second camera from the memory component, receive input corresponding to a preview zoom level, retrieve spatial transform information and photometric transform information from the memory component, modify at least one of the first and second images by the spatial transform and the photometric transform, and provide on the display a preview image comprising at least a portion of the at least one modified image and a portion of either the first image or the second image based on the preview zoom level.
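The preview logic can be sketched as: align and photometrically match the second camera's frame to the first, then show or blend the frames depending on where the requested zoom level falls relative to a transition band. The example below uses a plain shift and a gain/offset as stand-ins for the stored spatial and photometric transforms; the transition range and transform values are assumptions.

```python
import numpy as np

TRANSITION = (1.8, 2.2)   # zoom levels over which the preview blends wide -> tele

def apply_transforms(tele, shift=(2, 3), gain=1.05, offset=-4.0):
    """Shift the tele frame into the wide frame's coordinates and match brightness."""
    aligned = np.roll(tele, shift, axis=(0, 1)).astype(float)
    return np.clip(gain * aligned + offset, 0, 255)

def preview_frame(wide, tele, zoom):
    lo, hi = TRANSITION
    tele_aligned = apply_transforms(tele)
    if zoom <= lo:
        return wide.astype(float)
    if zoom >= hi:
        return tele_aligned
    w = (zoom - lo) / (hi - lo)           # blend weight inside the transition band
    return (1 - w) * wide + w * tele_aligned

wide = np.full((4, 4), 100.0)             # toy wide-camera frame
tele = np.full((4, 4), 104.0)             # toy tele-camera frame
print(preview_frame(wide, tele, zoom=2.0).round(1))
```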
Abstract:
An electronic device for selecting a transform is described. The electronic device includes at least one image sensor, a memory, and a processor coupled to the memory and to the at least one image sensor. The processor is configured to obtain at least two images from the at least one image sensor. The processor is also configured to characterize structural content of each of the at least two images to produce a characterization for each image that is relevant to transform performance. The processor is further configured to select at least one transform from a set of transforms based on the characterization. The processor is additionally configured to apply the at least one transform to at least one of the images to substantially align the at least two images.
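A toy version of the selection step is shown below: each image is characterized by a simple gradient statistic, and the weakest image's score decides whether a translation, affine, or homography model is chosen. The metric, thresholds, and transform names are illustrative assumptions, not the device's actual characterization.

```python
import numpy as np

def structure_score(img):
    """Mean gradient magnitude as a crude measure of alignable structure."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def select_transform(images):
    score = min(structure_score(im) for im in images)   # weakest image limits the model
    if score > 20.0:
        return "homography"      # enough structure for 8 parameters
    if score > 5.0:
        return "affine"          # moderate structure: 6 parameters
    return "translation"         # low structure: keep it to 2 parameters

rng = np.random.default_rng(0)
flat = np.full((64, 64), 50.0) + rng.normal(0, 1, (64, 64))   # nearly textureless
textured = rng.uniform(0, 255, (64, 64))                       # high-frequency content
print(select_transform([flat, flat]))          # -> translation
print(select_transform([textured, textured]))  # -> homography (likely)
```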
Abstract:
Certain aspects relate to systems and techniques for performing local intensity equalization on images in a set of images exhibiting local intensity variations. For example, the local intensity equalization can be used to perform accurate region matching and alignment of the images. The images can be partitioned into regions of pixel blocks, for instance based on location, shape, and size of identified keypoints in the images. Regions depicting the same feature in the images can be equalized with respect to intensity. Region matching based on the keypoints in the intensity-equalized regions can be performed with accuracy even in images captured by asymmetric sensors or exhibiting spatially varying intensity.
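A minimal form of local intensity equalization is to rescale one region so its mean and standard deviation match the corresponding region in the other image before scoring the match. The sketch below does exactly that on a synthetic exposure mismatch; the mean/std rule and SSD score are stand-ins for whatever equalization and matching the system actually uses.

```python
import numpy as np

def equalize_to(region, reference):
    """Rescale `region` so its mean and std match `reference`."""
    r = region.astype(float)
    scale = reference.std() / (r.std() + 1e-6)
    return (r - r.mean()) * scale + reference.mean()

def match_score(a, b):
    """Sum of squared differences; lower means a better region match."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

rng = np.random.default_rng(1)
patch = rng.uniform(0, 100, (16, 16))            # region around a keypoint, image A
same_patch_darker = 0.6 * patch + 10.0           # same feature in image B, different exposure

print("raw score:       %.1f" % match_score(patch, same_patch_darker))
print("equalized score: %.1f"
      % match_score(equalize_to(patch, same_patch_darker), same_patch_darker))
```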