-
Publication number: US20200334894A1
Publication date: 2020-10-22
Application number: US16388187
Filing date: 2019-04-18
Applicant: Adobe Inc.
Inventor: Mai Long , Simon Niklaus , Jimei Yang
Abstract: Systems and methods are described for generating a three dimensional (3D) effect from a two dimensional (2D) image. The methods may include generating a depth map based on a 2D image, identifying a camera path, generating one or more extremal views based on the 2D image and the camera path, generating a global point cloud by inpainting occlusion gaps in the one or more extremal views, generating one or more intermediate views based on the global point cloud and the camera path, and combining the one or more extremal views and the one or more intermediate views to produce a 3D motion effect.
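The camera-path step described above can be pictured as interpolating viewpoints between the original image's camera and an extremal view. The following is a minimal sketch of that idea only; the function name, the linear path, and the pose format are assumptions for illustration, not details from the patent.

```python
import numpy as np

def interpolate_camera_path(extremal_pose, num_views):
    """Linearly interpolate camera translations from the origin (the
    original 2D image's viewpoint) out to an extremal pose.

    `extremal_pose` is a hypothetical 3-vector camera translation;
    the patent does not specify this representation.
    """
    extremal_pose = np.asarray(extremal_pose, dtype=float)
    ts = np.linspace(0.0, 1.0, num_views)
    # One intermediate camera position per interpolation weight.
    return np.stack([t * extremal_pose for t in ts])

# Five views from the source viewpoint to an extremal view.
path = interpolate_camera_path(extremal_pose=[0.1, 0.0, -0.05], num_views=5)
```

Each row of `path` would then drive the rendering of one intermediate view from the global point cloud.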
-
Publication number: US12169909B2
Publication date: 2024-12-17
Application number: US17714356
Filing date: 2022-04-06
Applicant: Adobe Inc.
Inventor: Simon Niklaus , Ping Hu
IPC: G06T3/4007 , G06T3/18 , G06T5/50
Abstract: Digital synthesis techniques are described to synthesize a digital image at a target time between a first digital image and a second digital image. To begin, an optical flow generation module is employed to generate optical flows. The digital images and optical flows are then received as an input by a motion refinement system. The motion refinement system is configured to generate data describing many-to-many relationships mapped for pixels in the plurality of digital images and reliability scores of the many-to-many relationships. The reliability scores are then used to resolve overlaps of pixels that are mapped to a same location by a synthesis module to generate a synthesized digital image.
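The overlap-resolution step can be sketched as a score-based splat: when several source pixels map to the same target location, the pixel with the highest reliability score is kept. The 1D setup and all names below are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def resolve_overlaps(values, targets, scores, out_len):
    """Splat `values` onto integer `targets`; where several pixels land
    on the same cell, keep the value with the highest reliability score.
    (Hypothetical simplification of the synthesis module's behavior.)"""
    out = np.zeros(out_len)
    best = np.full(out_len, -np.inf)  # best reliability seen per cell
    for v, t, s in zip(values, targets, scores):
        if s > best[t]:
            best[t] = s
            out[t] = v
    return out

# Two pixels (values 1.0 and 5.0) both map to cell 2; the one with
# reliability 0.9 wins the overlap.
result = resolve_overlaps([1.0, 5.0], [2, 2], [0.2, 0.9], out_len=4)
```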
-
Publication number: US11798180B2
Publication date: 2023-10-24
Application number: US17186436
Filing date: 2021-02-26
Applicant: Adobe Inc.
Inventor: Wei Yin , Jianming Zhang , Oliver Wang , Simon Niklaus , Mai Long , Su Chen
CPC classification number: G06T7/50 , G06T7/13 , G06T7/143 , G06T7/30 , G06T7/521 , G06T7/593 , G06T2207/10028 , G06T2207/20081 , G06T2207/20084
Abstract: This disclosure describes one or more implementations of a depth prediction system that generates accurate depth images from single input digital images. In one or more implementations, the depth prediction system enforces different sets of loss functions across mixed data sources to generate a multi-branch architecture depth prediction model. For instance, in one or more implementations, the depth prediction model utilizes different data sources having different granularities of ground truth depth data to robustly train a depth prediction model. Further, given the different ground truth depth data granularities from the different data sources, the depth prediction model enforces different combinations of loss functions including an image-level normalized regression loss function and/or a pair-wise normal loss among other loss functions.
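The key property of an image-level normalized regression loss is invariance to per-image depth scale and shift, which is what lets data sources with incompatible depth granularities be mixed. A minimal sketch, assuming simple mean/std normalization before an L1 difference; the exact normalization used by the patented model may differ.

```python
import numpy as np

def image_level_normalized_loss(pred, gt):
    """L1 loss between per-image normalized depth maps. Normalizing
    both maps (subtract mean, divide by std) removes per-image depth
    scale and shift before comparison. Illustrative, not the exact
    loss in the patent."""
    def norm(d):
        return (d - d.mean()) / (d.std() + 1e-8)
    return np.abs(norm(pred) - norm(gt)).mean()

gt = np.array([[1.0, 2.0], [3.0, 4.0]])
scaled = 2.0 * gt + 5.0  # same scene geometry up to scale and shift
loss = image_level_normalized_loss(scaled, gt)
```

Because `scaled` differs from `gt` only by an affine transform, the normalized loss is (numerically) zero, whereas an unnormalized L1 loss would be large.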
-
Publication number: US20220277514A1
Publication date: 2022-09-01
Application number: US17186522
Filing date: 2021-02-26
Applicant: Adobe Inc.
Inventor: Wei Yin , Jianming Zhang , Oliver Wang , Simon Niklaus , Mai Long , Su Chen
Abstract: This disclosure describes implementations of a three-dimensional (3D) scene recovery system that reconstructs a 3D scene representation of a scene portrayed in a single digital image. For instance, the 3D scene recovery system trains and utilizes a 3D point cloud model to recover accurate intrinsic camera parameters from a depth map of the digital image. Additionally, the 3D point cloud model may include multiple neural networks that target specific intrinsic camera parameters. For example, the 3D point cloud model may include a depth 3D point cloud neural network that recovers the depth shift as well as include a focal length 3D point cloud neural network that recovers the camera focal length. Further, the 3D scene recovery system may utilize the recovered intrinsic camera parameters to transform the single digital image into an accurate and realistic 3D scene representation, such as a 3D point cloud.
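The final transformation step, turning a depth map plus recovered intrinsics into a 3D point cloud, follows the standard pinhole unprojection. A sketch under that assumption; the depth-shift handling and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, focal_length, depth_shift=0.0):
    """Unproject an HxW depth map into an (H*W, 3) point cloud using
    the pinhole model, with the principal point at the image center.
    `depth_shift` stands in for the recovered depth shift."""
    h, w = depth.shape
    z = depth + depth_shift
    # Pixel coordinates relative to the principal point.
    u, v = np.meshgrid(np.arange(w) - w / 2.0, np.arange(h) - h / 2.0)
    x = u * z / focal_length
    y = v * z / focal_length
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# A flat wall 2 units away, seen through a 50-unit focal length.
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), focal_length=50.0)
```

An inaccurate focal length stretches or compresses `x` and `y`, which is why recovering it (as the focal length 3D point cloud neural network does) matters for a realistic reconstruction.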