Abstract:
A video stream of a scene for a virtual reality or augmented reality experience may be captured by one or more image capture devices. Data from the video stream may be retrieved, including base vantage data with base vantage color data depicting the scene from a base vantage location, and target vantage data with target vantage color data depicting the scene from a target vantage location. The base vantage data may be reprojected to the target vantage location to obtain reprojected target vantage data. The reprojected target vantage data may be compared with the target vantage data to obtain residual data. The residual data may be compressed by removing a subset of the residual data that is likely to be less discernible to a viewer than the remainder of the residual data. A compressed video stream may be stored, including the base vantage data and the compressed residual data.
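As a rough illustration of the pipeline described above, the following Python sketch reprojects the base vantage, forms the residual against the target vantage, and zeros the residual values least likely to be discernible to a viewer. The reprojection function, the uint8 color format, and the use of a simple magnitude threshold as the discernibility criterion are all assumptions for illustration, not details taken from the abstract.

```python
import numpy as np

def compress_target_vantage(base_color, target_color, reproject_fn, threshold=4):
    """Sketch: reproject base vantage data to the target vantage location,
    take the residual, and drop its least-discernible subset."""
    reprojected = reproject_fn(base_color)  # hypothetical reprojection to target vantage
    residual = target_color.astype(np.int16) - reprojected.astype(np.int16)
    # Assumed stand-in for perceptual significance: small-magnitude residuals
    # are treated as less viewer-discernible and removed.
    residual[np.abs(residual) < threshold] = 0
    return residual  # stored alongside base_color in the compressed stream

def reconstruct_target_vantage(base_color, residual, reproject_fn):
    """Rebuild the target vantage from base vantage data plus compressed residual."""
    reprojected = reproject_fn(base_color).astype(np.int16)
    return np.clip(reprojected + residual, 0, 255).astype(np.uint8)
```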
Abstract:
An image such as a light-field image may be processed to provide depth-based blurring. The image may be received in a data store. At an input device, first and second user input may be received to designate a first focus depth and a second focus depth different from the first focus depth, respectively. A processor may identify one or more foreground portions of the image that have one or more foreground portion depths, each of which is less than the first focus depth. The processor may also identify one or more background portions of the image that have one or more background portion depths, each of which is greater than the second focus depth. The processor may also apply blurring to the one or more foreground portions and the one or more background portions to generate a processed image, which may be displayed on a display device.
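A minimal sketch of the two-threshold blur described above, assuming a per-pixel depth map aligned with an RGB image and a single Gaussian blur strength; a fuller implementation would likely scale the blur with each pixel's distance from the in-focus band between the two depths.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_based_blur(image, depth_map, first_focus_depth, second_focus_depth, sigma=3.0):
    """Blur foreground (depth < first focus depth) and background
    (depth > second focus depth); keep the band in between sharp."""
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
        axis=-1,
    )
    out = image.copy()
    out_of_focus = (depth_map < first_focus_depth) | (depth_map > second_focus_depth)
    out[out_of_focus] = blurred[out_of_focus]
    return out
```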
Abstract:
According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce size and/or cost and/or increase the quality of such optical systems.
Abstract:
Depths of one or more objects in a scene may be measured with enhanced accuracy through the use of a light-field camera and a depth sensor. The light-field camera may capture a light-field image of the scene. The depth sensor may capture depth sensor data of the scene. Light-field depth data may be extracted from the light-field image and used, in combination with the depth sensor data, to generate a depth map indicative of the distance between the light-field camera and one or more objects in the scene. The depth sensor may be an active depth sensor that transmits electromagnetic energy toward the scene; the electromagnetic energy may be reflected by the scene and detected by the active depth sensor. The active depth sensor may have a 360° field of view; accordingly, one or more mirrors may be used to direct the electromagnetic energy between the active depth sensor and the scene.
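The abstract does not say how the two depth estimates are combined; one plausible reading is a per-pixel confidence-weighted average, sketched below with hypothetical confidence maps for each source.

```python
import numpy as np

def fuse_depth(lf_depth, sensor_depth, lf_conf, sensor_conf):
    """Hypothetical fusion rule: blend light-field depth data with depth
    sensor data in proportion to per-pixel confidence. The abstract does
    not prescribe this (or any) particular combination."""
    w = lf_conf / np.maximum(lf_conf + sensor_conf, 1e-6)
    return w * lf_depth + (1.0 - w) * sensor_depth
```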
Abstract:
According to various embodiments, a light-field image may be compressed and/or decompressed to facilitate storage, transmission, or other functions related to the light-field image. A light-field image may be captured by a light-field image capture device having an image sensor and a microlens array. The light-field image may be received in a data store. A processor may generate a first refocus image pool with a plurality of refocus images based on the light-field image. The processor may further use the first refocus image pool to compress the light-field image to generate a bitstream, smaller than the light-field image, which is representative of the light-field image. The processor or a different processor may also be used to generate a second refocus image pool with a second plurality of images based on the bitstream. The second refocus image pool may be used to decompress the bitstream to generate a reconstructed light-field image.
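The encoder/decoder symmetry can be sketched as below: both sides build a refocus image pool and use it to predict the light-field image, so the bitstream need only carry the prediction residual plus whatever is required to rebuild the pool. Here refocus_fn and predict_fn are hypothetical stand-ins for the refocusing and prediction steps, and entropy coding is omitted.

```python
import numpy as np

def encode(light_field, refocus_fn, predict_fn, focus_depths):
    # First refocus image pool, generated from the light-field image.
    pool = [refocus_fn(light_field, d) for d in focus_depths]
    prediction = predict_fn(pool)  # predict the light field from the pool
    residual = light_field.astype(np.int16) - prediction.astype(np.int16)
    return pool, residual  # entropy-code both to form the (smaller) bitstream

def decode(pool, residual, predict_fn):
    # Second refocus image pool, rebuilt on the decoder side from the bitstream.
    prediction = predict_fn(pool).astype(np.int16)
    return np.clip(prediction + residual, 0, 255).astype(np.uint8)
```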
Abstract:
According to various embodiments, the system and method disclosed herein serve to at least partially compensate for departures of an actual main lens of a light-field camera from the properties of an ideal main lens. Light-field data may be captured and processed through the use of product calibration data and unit calibration data. The product calibration data may be descriptive of departure of a main lens design of the light-field camera from an ideal main lens design. The unit calibration data may be descriptive of departure of the actual main lens of the light-field camera from the main lens design. Corrected light-field data may be generated as a result of the processing, and may be used to generate a light-field image.
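One way to picture the two-stage correction is as a pair of per-ray corrections applied in sequence, as in the sketch below. Modeling each calibration as an additive offset field over ray coordinates is purely an illustrative assumption; the abstract does not state the mathematical form of either correction.

```python
import numpy as np

def correct_light_field(rays, product_cal, unit_cal):
    """Hypothetical two-stage correction: product_cal compensates the lens
    design's departure from an ideal main lens; unit_cal compensates this
    unit's departure from the design. Both are assumed to return per-ray
    coordinate offsets."""
    corrected = rays + product_cal(rays)          # design-level correction
    corrected = corrected + unit_cal(corrected)   # unit-level correction
    return corrected
```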
Abstract:
An image such as a light-field image may be captured with a light-field image capture device with a microlens array. The image may be received in a data store along with a depth map that indicates the depths at which objects in different portions of the image are disposed. A function may be applied to the depth map to generate a mask that defines a gradual transition between the different depths. A first effect may be applied to the image through the use of the mask, such that applicability of the first effect is determined by the mask. A processed image may be generated, in which the first effect, as previously applied, is present. The processed image may be displayed on a display device. If desired, multiple effects may be applied through the generation of multiple masks, depth maps, and/or intermediate images prior to generation of the processed image.
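A sketch of the mask-mediated effect application is shown below, assuming smoothstep as the function applied to the depth map; the abstract only requires some function producing a gradual transition, so any monotonic ramp would serve. Multiple effects would repeat this step with additional masks and intermediate images before the final processed image is produced.

```python
import numpy as np

def depth_mask(depth_map, depth_lo, depth_hi):
    """Smoothstep over [depth_lo, depth_hi]: a gradual 0-to-1 transition
    between depths (one possible choice of masking function)."""
    t = np.clip((depth_map - depth_lo) / (depth_hi - depth_lo), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def apply_effect(image, effect_fn, mask):
    """Blend the effect by mask weight so the mask governs where, and how
    strongly, the effect applies. Assumes a float image in [0, 1]."""
    return (1.0 - mask[..., None]) * image + mask[..., None] * effect_fn(image)
```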
Abstract:
According to various embodiments, the system and method disclosed herein process light-field image data so as to mitigate lens flare effects. A light-field image may be captured with a light-field image capture device with a microlens array and received in a data store. A plurality of flare-affected pixels may be identified in the light-field image. The flare-affected pixels may have flare-affected pixel values. Flare-corrected pixel values may be generated for the flare-affected pixels. Relative to the flare-affected pixel values, the flare-corrected pixel values may at least partially remove the lens flare effects. The flare-corrected pixel values may be used to generate a corrected light-field image in which the lens flare effects are at least partially corrected. The corrected light-field image may be displayed on a display screen.
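The abstract leaves the detection and correction steps open; the sketch below uses one simple stand-in, flagging pixels that sit well above their local median as flare-affected and substituting that median as the flare-corrected pixel value.

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_flare(image, threshold=0.15):
    """Hypothetical flare mitigation for a single-channel float image:
    pixels far above their local median are treated as flare-affected,
    and the median supplies the flare-corrected pixel value."""
    local = median_filter(image, size=5)
    flare_affected = (image - local) > threshold * image.max()
    corrected = image.copy()
    corrected[flare_affected] = local[flare_affected]
    return corrected
```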
Abstract:
A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
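To make the dimension layout concrete, here is a small sketch of a buffer for such four-dimensional light-field data, with the first angular dimension sampled more finely than the second; all sizes are hypothetical.

```python
import numpy as np

# Four-dimensional light-field samples indexed as (u, v, y, x) plus color,
# where u and v are the angular dimensions and x and y the spatial ones.
U_RES, V_RES = 12, 4        # first angular resolution > second (asymmetric)
X_RES, Y_RES = 640, 480     # spatial resolutions
light_field = np.zeros((U_RES, V_RES, Y_RES, X_RES, 3), dtype=np.uint8)
```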
Abstract:
Microlens positions for a light-field capture device may be calibrated. A calibration light-field image may be captured, with a microlens portion corresponding to each microlens of the light-field capture device. Interstitial spaces between the microlens portions may be identified and used to locate one or more center locations of the microlens portions. The center locations may be used to generate a model that indicates the microlens positions. Additionally or alternatively, the calibration light-field image may be used to select one or more contour samples from among multiple contour samples of the microlens portions. Each selected contour sample may be fitted to a circle centered at a center location of a microlens portion to identify that center location, which may then be used to generate a model that indicates the microlens positions. Multiple iterations may be used to enhance the accuracy of the models.
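The circle-fitting step can be illustrated with an algebraic least-squares (Kåsa) fit, sketched below. The abstract does not name a fitting method, so this is just one standard choice; points is assumed to be an N-by-2 array of contour sample coordinates for a single microlens portion.

```python
import numpy as np

def fit_circle(points):
    """Kasa fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2)
    in least squares; returns the estimated center (cx, cy) and radius r."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2.0 * x, 2.0 * y, np.ones(len(x))])
    b = x**2 + y**2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r
```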