Abstract:
A video stream of a scene for a virtual reality or augmented reality experience may be captured by one or more image capture devices. Data from the video stream may be retrieved, including base vantage data with base vantage color data depicting the scene from a base vantage location, and target vantage data with target vantage color data depicting the scene from a target vantage location. The base vantage data may be reprojected to the target vantage location to obtain reprojected target vantage data. The reprojected target vantage data may be compared with the target vantage data to obtain residual data. The residual data may be compressed by removing a subset of the residual data that is likely to be less viewer-discernible than a remainder of the residual data. A compressed video stream may be stored, including the base vantage data and the compressed residual data.
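The sequence described above amounts to a reprojection-plus-residual compression pipeline. The following is a minimal sketch of that idea, not the patented implementation: the depth-based warp is stubbed out as a simple pixel shift, and the visibility threshold, array shapes, and function names are illustrative assumptions.

```python
# Minimal sketch: reproject a base vantage to a target vantage, form the
# residual, and discard residual samples small enough to be unlikely to be
# noticed by a viewer. The warp and the threshold are assumptions.
import numpy as np

def reproject(base_color, base_depth, base_to_target_transform):
    """Hypothetical warp of base-vantage color data to the target vantage.
    A real system would use depth-based reprojection; here it is stubbed
    as a simple integer pixel shift (dx, dy)."""
    dx, dy = base_to_target_transform
    return np.roll(base_color, shift=(dy, dx), axis=(0, 1))

def compress_residual(target_color, reprojected, threshold=4):
    """Keep only residual samples whose magnitude exceeds a visibility threshold."""
    residual = target_color.astype(np.int16) - reprojected.astype(np.int16)
    mask = np.abs(residual) >= threshold          # likely viewer-discernible
    return np.where(mask, residual, 0)            # sparse residual, cheap to encode

# Usage: store the base vantage data plus the compressed residual for the target vantage.
base = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
reproj = reproject(base, None, (2, 0))            # depth input unused in this stub
compressed = compress_residual(target, reproj)
```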
Abstract:
According to various embodiments of the present invention, the optical systems of light field capture devices are optimized so as to improve captured light field image data. Optimizing optical systems of light field capture devices can result in captured light field image data (both still and video) that is cheaper and/or easier to process. Optical systems can be optimized to yield improved quality or resolution when using cheaper processing approaches whose computational costs fit within various processing and/or resource constraints. As such, the optical systems of light field cameras can be optimized to reduce their size and/or cost, and/or to increase their quality.
Abstract:
According to various embodiments, the system and method disclosed herein serve to at least partially compensate for departures of an actual main lens of a light-field camera from the properties of an ideal main lens. Light-field data may be captured and processed through the use of product calibration data and unit calibration data. The product calibration data may be descriptive of departure of a main lens design of the light-field camera from an ideal main lens design. The unit calibration data may be descriptive of departure of the actual main lens of the light-field camera from the main lens design. Corrected light-field data may be generated as a result of the processing, and may be used to generate a light-field image.
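As a rough illustration of the two-stage correction, the sketch below assumes that both calibration data sets can be expressed as per-sample corrections that are simply subtracted from the raw light-field samples; the names and the additive model are assumptions, and the correction actually applied by the system may be more involved.

```python
# Minimal sketch, assuming additive per-sample corrections; illustrative only.
import numpy as np

def correct_light_field(raw_lf, product_calibration, unit_calibration):
    """Apply design-level, then unit-level, corrections to raw light-field data."""
    # product_calibration: departure of the main lens *design* from an ideal lens
    corrected = raw_lf - product_calibration
    # unit_calibration: departure of this particular lens from the lens design
    corrected = corrected - unit_calibration
    return corrected

raw = np.random.rand(256, 256)
product_cal = np.zeros_like(raw)   # e.g. measured once per camera model
unit_cal = np.zeros_like(raw)      # e.g. measured once per manufactured unit
corrected_lf = correct_light_field(raw, product_cal, unit_cal)
```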
Abstract:
An image such as a light-field image may be captured with a light-field image capture device with a microlens array. The image may be received in a data store along with a depth map that indicates depths at which objects in different portions of the image are disposed. A function may be applied to the depth map to generate a mask that defines a gradual transition between the different depths. An effect may be applied to the image through the use of the mask such that applicability of the effect is determined by the mask. A processed image may be generated, in which the effect, as previously applied, is present. The processed image may be displayed on a display device. If desired, multiple effects may be applied through the generation of multiple masks, depth maps, and/or intermediate images prior to generation of the processed image.
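The mask-and-blend flow can be illustrated with a short sketch. Here the function applied to the depth map is assumed to be a smoothstep ramp between two depths, and the effect is assumed to be a Gaussian blur; both choices, and the use of scipy for the blur, are illustrative rather than prescribed by the abstract.

```python
# Minimal sketch: depth map -> gradual mask -> masked effect. Assumptions:
# smoothstep as the mask function, Gaussian blur as the effect.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_mask(depth_map, near, far):
    """Map depth to a 0..1 mask with a gradual transition between near and far."""
    t = np.clip((depth_map - near) / (far - near), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)   # smoothstep: gradual rather than hard edge

def apply_masked_effect(image, mask, sigma=3.0):
    """Blend an effect (here, a blur) into the image according to the mask."""
    blurred = gaussian_filter(image, sigma=(sigma, sigma, 0))
    return (1.0 - mask[..., None]) * image + mask[..., None] * blurred

image = np.random.rand(480, 640, 3)
depth = np.random.rand(480, 640) * 10.0       # arbitrary depth units
mask = depth_mask(depth, near=2.0, far=6.0)
processed = apply_masked_effect(image, mask)  # effect strongest on far objects
```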
Abstract:
A light-field camera may generate four-dimensional light-field data indicative of incoming light. The light-field camera may have an aperture configured to receive the incoming light, an image sensor, and a microlens array configured to redirect the incoming light at the image sensor. The image sensor may receive the incoming light and, based on the incoming light, generate the four-dimensional light-field data, which may have first and second spatial dimensions and first and second angular dimensions. The first angular dimension may have a first resolution higher than a second resolution of the second angular dimension.
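One way to picture such data is as a four-dimensional array in which the two angular dimensions are sampled at different rates. The sketch below shows only this layout; the specific resolutions are illustrative assumptions.

```python
# Minimal sketch of the data layout only: two spatial and two angular
# dimensions, with the first angular dimension sampled more finely.
import numpy as np

NUM_U = 9                  # first angular dimension  (higher resolution)
NUM_V = 3                  # second angular dimension (lower resolution)
HEIGHT, WIDTH = 480, 640   # spatial dimensions

# light_field[u, v, y, x] holds the sample seen from angular position (u, v)
# at spatial position (y, x).
light_field = np.zeros((NUM_U, NUM_V, HEIGHT, WIDTH), dtype=np.float32)
```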
Abstract:
A dual-mode light field camera, or plenoptic camera, can perform both 3D light field imaging and conventional high-resolution 2D imaging, depending on the selected mode. In particular, an active system is provided that enables the microlenses to be optically or effectively turned on or turned off, allowing the camera to selectively operate as a 2D imaging camera or a 3D light field camera.
Abstract:
According to various embodiments, the system and method disclosed herein facilitate the design of plenoptic camera lens systems to enhance camera resolution. A first configuration for the plenoptic camera may be selected, with a first plurality of variables that define attributes of the plenoptic camera. The attributes may include a main lens attribute of a main lens of the plenoptic camera and/or a phase mask attribute of a phase mask of the plenoptic camera. A merit function may be applied by simulating receipt of light through the main lens and a plurality of microlenses of the first configuration to calculate a first merit function value. The main lens attribute and/or the phase mask attribute may be iteratively perturbed, and the merit function may be re-applied. An optimal set of variables may be identified by comparing results of successive applications of the merit function.
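The iterative perturb-and-evaluate loop can be sketched as a simple coordinate search. In the sketch below, simulate_merit stands in for the optical simulation, and the variable names, perturbation scheme, and acceptance rule are illustrative assumptions rather than the optimizer described by the abstract.

```python
# Minimal sketch: perturb one attribute at a time, re-apply the merit
# function, and keep the best configuration seen so far.
import random

def simulate_merit(config):
    """Stand-in for simulating light through the main lens and microlenses."""
    return -sum(v * v for v in config.values())   # toy merit value

def optimize(config, steps=100, scale=0.01):
    best, best_merit = dict(config), simulate_merit(config)
    for _ in range(steps):
        trial = dict(best)
        # Perturb one attribute (e.g. a main lens or phase mask variable).
        key = random.choice(list(trial))
        trial[key] += random.uniform(-scale, scale)
        merit = simulate_merit(trial)
        if merit > best_merit:                     # keep only improvements
            best, best_merit = trial, merit
    return best

initial = {"main_lens_curvature": 0.5, "phase_mask_depth": 0.1}
optimal = optimize(initial)
```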
Abstract:
In various embodiments, the present invention relates to methods, systems, architectures, algorithms, designs, and user interfaces for capturing, processing, analyzing, displaying, annotating, modifying, and/or interacting with light-field data on a light-field capture device. In at least one embodiment, the light-field capture device communicates to the user information about the scene during live-view to aid the user in capturing light-field images that provide increased refocusing ability, increased parallax and perspective shifting ability, increased stereo disparity, and/or more dramatic post-capture effects. Additional embodiments present a standard 2D camera interface to software running on the light-field capture device to enable such software to function normally even though the device is actually capturing light-field data. Additional embodiments provide the ability to control camera optical elements to facilitate ease of composition and capture of light-field data, and/or to generate a plurality of 2D video streams derived from a stream of light-field data.
Abstract:
According to various embodiments, the system and method of the present invention process light-field image data in a manner that reduces artifacts and that yields 2-D images with extended depth of field and with variable placement of the center of perspective. The center of perspective can be varied based on user input or on pre-specified parameters. Various techniques are described for improving the presentation of light-field images with variable center of perspective, and for performing other effects in connection with projection of light-field images.
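As a loose illustration of variable center of perspective, the sketch below assumes a four-dimensional light field indexed as [u, v, y, x] (as in the earlier layout sketch) and approximates a shift of the center of perspective by selecting a different sub-aperture view; this is an illustrative approximation, not the projection method described here.

```python
# Minimal sketch: pick the sub-aperture view closest to the requested
# center of perspective. Interpolation between views is omitted.
import numpy as np

def image_at_center_of_perspective(light_field, u_center, v_center):
    """Extract a 2-D image whose center of perspective is (u_center, v_center)."""
    u0, v0 = int(round(u_center)), int(round(v_center))
    return light_field[u0, v0]    # one sub-aperture view per (u, v) position

lf = np.random.rand(9, 9, 480, 640).astype(np.float32)
centered = image_at_center_of_perspective(lf, 4, 4)     # default perspective
shifted = image_at_center_of_perspective(lf, 6, 2)      # user-selected shift
```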