Three-dimensional, 360-degree virtual reality exposure control

    Publication Number: US10200624B2

    Publication Date: 2019-02-05

    Application Number: US15096128

    Application Date: 2016-04-11

    Applicant: Facebook, Inc.

    Abstract: A camera system is configured to capture, via a plurality of cameras, 360 degree image information of a local area, at least a portion of which is in stereo. The camera system determines respective exposure settings for the plurality of cameras. A minimum shutter speed and a maximum shutter speed are determined from the determined exposure settings. A set of test exposure settings is determined using the determined minimum shutter speed and maximum shutter speed. A set of test images is captured using the plurality of cameras at each test exposure setting in the set of test exposure settings. Each set of test images includes images from each of the plurality of cameras that are captured using a same respective test exposure setting. A global exposure setting is selected based on the captured sets of test images. The selected global exposure setting is applied to the plurality of cameras.
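
    A minimal sketch in Python/NumPy of the selection loop the abstract describes. The geometric spacing of the test shutter speeds, the capture_test_set and score_test_set callbacks, and the "higher score is better" metric are illustrative assumptions, not details claimed by the patent.

        import numpy as np

        def select_global_exposure(per_camera_shutter_speeds, capture_test_set,
                                   score_test_set, num_steps=5):
            # Bound the search by the slowest and fastest per-camera shutter speeds.
            s_min = min(per_camera_shutter_speeds)
            s_max = max(per_camera_shutter_speeds)

            # Build the set of test exposure settings from that range (log spacing
            # is an assumption; the abstract only says the set is derived from the
            # minimum and maximum shutter speeds).
            test_speeds = np.geomspace(s_min, s_max, num_steps)

            best_speed, best_score = None, -np.inf
            for speed in test_speeds:
                # One image per camera, all captured at the same test setting.
                images = capture_test_set(speed)
                # Hypothetical global quality metric, e.g. penalising clipped pixels.
                score = score_test_set(images)
                if score > best_score:
                    best_speed, best_score = speed, score

            # The selected global exposure setting is then applied to every camera.
            return best_speed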

CAMERA CALIBRATION SYSTEM (Invention Application)

    Publication Number: US20190098287A1

    Publication Date: 2019-03-28

    Application Number: US16205109

    Application Date: 2018-11-29

    Applicant: Facebook, Inc.

    Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as position and orientation for each camera relative to other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object such as location, size, texture and detailed information of visually distinguishable points of the test object. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is evaluated in comparison with the actual test object to determine a calibration error. The calibration parameters for the cameras are updated to reduce the calibration error for the multiple cameras.
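
    A minimal sketch, assuming a pinhole camera model, of how the calibration error described in the abstract might be computed for one camera: the known 3D points of the test object are projected with the current parameter estimate and compared against the detected 2D points. The function and variable names are illustrative; the abstract does not specify the error metric.

        import numpy as np

        def reprojection_error(points_3d, observed_2d, K, R, t):
            # Transform the test object's known 3D points into camera coordinates
            # using the current rotation R and translation t estimates.
            cam = R @ points_3d.T + t.reshape(3, 1)

            # Project with the intrinsic matrix K and divide by depth.
            proj = K @ cam
            proj = proj[:2] / proj[2]

            # Mean distance between projected and detected points; summing this
            # over all cameras gives the calibration error to be reduced.
            return np.linalg.norm(proj.T - observed_2d, axis=1).mean()

    Updating the calibration parameters then amounts to minimising the summed error across all cameras, for example with a nonlinear least-squares solver.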

    Generating intermediate views using optical flow

    Publication Number: US10057562B2

    Publication Date: 2018-08-21

    Application Number: US15096165

    Application Date: 2016-04-11

    Applicant: Facebook, Inc.

    Abstract: A canvas generation system generates a canvas view of a scene based on a set of original camera views depicting the scene, for example to recreate a scene in virtual reality. Canvas views can be generated based on a set of synthetic views generated from a set of original camera views. Synthetic views can be generated, for example, by shifting and blending relevant original camera views based on an optical flow across multiple original camera views. An optical flow can be generated using an iterative method which individually optimizes the optical flow vector for each pixel of a camera view and propagates changes in the optical flow to neighboring optical flow vectors.
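
    A minimal sketch of the shift-and-blend step, assuming two adjacent camera views with precomputed optical flow in both directions. The nearest-neighbour warp and the (1 - t, t) blend weights are simplifications for illustration, not details claimed by the patent.

        import numpy as np

        def intermediate_view(left, right, flow_lr, flow_rl, t):
            # left, right: H x W x 3 original camera views.
            # flow_lr, flow_rl: H x W x 2 optical flow fields (x, y) between them.
            # t: position of the synthetic view, 0.0 = left camera, 1.0 = right camera.
            h, w = left.shape[:2]
            ys, xs = np.mgrid[0:h, 0:w]

            def shift(img, flow, frac):
                # Sample each pixel a fraction of the way along its flow vector
                # (nearest-neighbour lookup keeps the sketch short).
                x = np.clip(np.round(xs + frac * flow[..., 0]), 0, w - 1).astype(int)
                y = np.clip(np.round(ys + frac * flow[..., 1]), 0, h - 1).astype(int)
                return img[y, x]

            shifted_left = shift(left, flow_lr, t)
            shifted_right = shift(right, flow_rl, 1.0 - t)

            # Blend the two shifted views, weighting by proximity to each camera.
            return (1.0 - t) * shifted_left + t * shifted_right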

CAMERA CALIBRATION SYSTEM (Invention Application)

    Publication Number: US20170295358A1

    Publication Date: 2017-10-12

    Application Number: US15096149

    Application Date: 2016-04-11

    Applicant: Facebook, Inc.

    Abstract: A camera calibration system jointly calibrates multiple cameras in a camera rig system. The camera calibration system obtains configuration information about the multiple cameras in the camera rig system, such as position and orientation for each camera relative to other cameras. The camera calibration system estimates calibration parameters (e.g., rotation and translation) for the multiple cameras based on the obtained configuration information. The camera calibration system receives 2D images of a test object captured by the multiple cameras and obtains known information about the test object such as location, size, texture and detailed information of visually distinguishable points of the test object. The camera calibration system then generates a 3D model of the test object based on the received 2D images and the estimated calibration parameters. The generated 3D model is evaluated in comparison with the actual test object to determine a calibration error. The calibration parameters for the cameras are updated to reduce the calibration error for the multiple cameras.
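
    A minimal sketch of the initial-estimate step in the abstract, assuming the rig is an outward-facing ring of evenly spaced cameras. The ring layout, the Y-up axis convention, and the function name are illustrative assumptions; the abstract only says the starting rotation and translation are derived from the cameras' relative positions and orientations. These initial estimates would then be refined by reducing the calibration error, as sketched earlier.

        import numpy as np

        def initial_extrinsics_from_rig(num_cameras, ring_radius):
            extrinsics = []
            for i in range(num_cameras):
                # Nominal angle of this camera on the ring.
                theta = 2.0 * np.pi * i / num_cameras

                # Camera centre in rig coordinates (Y is up).
                center = np.array([ring_radius * np.cos(theta), 0.0,
                                   ring_radius * np.sin(theta)])

                # Rotation about the Y axis so the optical axis points outward.
                R = np.array([[np.cos(theta), 0.0, -np.sin(theta)],
                              [0.0,           1.0,  0.0],
                              [np.sin(theta), 0.0,  np.cos(theta)]])

                # World-to-camera translation for a camera centred at `center`.
                t = -R @ center
                extrinsics.append((R, t))
            return extrinsics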
