-
Publication No.: US20230306637A1
Publication Date: 2023-09-28
Application No.: US17656796
Application Date: 2022-03-28
Applicant: ADOBE INC.
Inventor: Jianming ZHANG , Linyi JIN , Kevin MATZEN , Oliver WANG , Yannick HOLD-GEOFFROY
CPC classification number: G06T7/80 , G06T9/002 , G06T11/00 , G06N3/0454 , G06T2207/20081 , G06T2207/20084 , G06V10/764
Abstract: Systems and methods for image dense field based view calibration are provided. In one embodiment, an input image is applied to a dense field machine learning model that generates a vertical vector dense field (VVF) and a latitude dense field (LDF) from the input image. The VVF comprises a vertical vector of the projected vanishing point direction for each pixel of the input image. The LDF comprises a projected latitude value for each pixel of the input image. A dense field map for the input image comprising the VVF and the LDF can be used, directly or indirectly, for a variety of image processing manipulations. The VVF and LDF can optionally be used to derive traditional camera calibration parameters from uncontrolled images that have undergone undocumented or unknown manipulations.
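The per-pixel fields described above can be illustrated with a toy sketch. Nothing below comes from the patent itself: the constant vertical vector field and the roll-recovery step are hypothetical stand-ins for the dense field model's output and one possible way such a field could yield a traditional calibration parameter.

```python
import numpy as np

def vertical_vector_field(h, w, roll_deg):
    """Toy VVF: every pixel stores the same unit vector pointing toward the
    vertical vanishing point. A real learned field varies per pixel."""
    theta = np.deg2rad(roll_deg)
    v = np.array([np.sin(theta), np.cos(theta)])  # (dx, dy) unit vector
    return np.broadcast_to(v, (h, w, 2)).copy()

def estimate_roll(vvf):
    """Recover a camera roll angle by averaging the per-pixel vertical vectors."""
    mean_v = vvf.reshape(-1, 2).mean(axis=0)
    mean_v /= np.linalg.norm(mean_v)
    return np.rad2deg(np.arctan2(mean_v[0], mean_v[1]))

vvf = vertical_vector_field(4, 4, roll_deg=12.0)
roll = estimate_roll(vvf)
```

Averaging the field before taking the angle is one simple aggregation choice; the patent's actual derivation of calibration parameters from the VVF and LDF is not specified here.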
-
Publication No.: US20240290022A1
Publication Date: 2024-08-29
Application No.: US18176267
Application Date: 2023-02-28
Applicant: ADOBE INC.
Inventor: Yijun LI , Yannick HOLD-GEOFFROY , Manuel Rodriguez Ladron DE GUEVARA , Jose Ignacio Echevarria VALLESPI , Daichi ITO , Cameron Younger SMITH
IPC: G06T13/40 , G06N3/0455 , G06N3/0895
CPC classification number: G06T13/40 , G06N3/0455 , G06N3/0895
Abstract: Avatar generation from an image is performed using semi-supervised machine learning. An image space model undergoes unsupervised training from images to generate latent image vectors responsive to image inputs. An avatar parameter space model undergoes unsupervised training from avatar parameter values for avatar parameters to generate latent avatar parameter vectors responsive to avatar parameter value inputs. A cross-modal mapping model undergoes supervised training on image-avatar parameter pair inputs corresponding to the latent image vectors and the latent avatar parameter vectors. The trained image space model generates a latent image vector from an image input. The trained cross-modal mapping model translates the latent image vector to a latent avatar parameter vector. The trained avatar parameter space model generates avatar parameter values from the latent avatar parameter vector. The latent avatar parameter vector can be used to render an avatar having features corresponding to the input image.
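The three-stage inference path described above (image encoder, cross-modal mapper, avatar-parameter decoder) can be sketched with random linear maps standing in for the trained models. All weight names and dimensions here are hypothetical; the point is only the composition order of the three models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the three trained models:
W_img = rng.normal(size=(8, 16))   # image space model: image -> latent image vector
W_map = rng.normal(size=(4, 8))    # cross-modal mapping: latent image -> latent avatar vector
W_dec = rng.normal(size=(6, 4))    # avatar parameter space model: latent -> parameter values

def image_to_avatar_params(image_vec):
    """Compose the three models exactly in the order the abstract describes."""
    z_img = W_img @ image_vec      # trained image space model
    z_avatar = W_map @ z_img       # trained cross-modal mapping model
    return W_dec @ z_avatar        # trained avatar parameter space model

params = image_to_avatar_params(rng.normal(size=16))
```

In the actual system the first and third stages are trained unsupervised (as autoencoders over images and avatar parameters respectively) and only the middle mapping is trained on paired data, which is what makes the approach semi-supervised.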
-
Publication No.: US20240177399A1
Publication Date: 2024-05-30
Application No.: US18426084
Application Date: 2024-01-29
Applicant: Adobe Inc.
Inventor: Zexiang XU , Yannick HOLD-GEOFFROY , Milos HASAN , Kalyan SUNKAVALLI , Fanbo XIANG
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
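The two later stages of the pipeline can be sketched with closed-form stand-ins: a fixed spherical projection in place of the learned 3D-to-2D mapping network, and a toy function in place of the view-dependent radiance network. Neither function is from the patent; both are hypothetical placeholders showing the data flow.

```python
import numpy as np

def uv_map(points):
    """Stand-in for the 3D-to-2D mapping network: project 3D surface points
    into a [0,1]^2 texture space via spherical coordinates."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = (np.arctan2(y, x) / (2 * np.pi)) % 1.0
    v = np.arccos(np.clip(z / np.linalg.norm(points, axis=1), -1.0, 1.0)) / np.pi
    return np.stack([u, v], axis=1)

def radiance(uv, view_dir):
    """Stand-in for the radiance network: an RGB value per texture coordinate,
    weakly modulated by the viewing direction (toy closed form, not learned)."""
    base = np.stack([uv[:, 0], uv[:, 1], 0.5 * np.ones(len(uv))], axis=1)
    return np.clip(base + 0.1 * view_dir, 0.0, 1.0)

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
uv = uv_map(pts)
rgb = radiance(uv, np.array([0.0, 0.0, 1.0]))
```

Because the mapping lands every 3D point in a shared 2D texture space, appearance edits made in that space propagate consistently across viewpoints, which is the practical payoff of the 3D appearance representation.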
-
Publication No.: US20220198738A1
Publication Date: 2022-06-23
Application No.: US17559867
Application Date: 2021-12-22
Applicant: Adobe Inc.
Inventor: Zexiang XU , Yannick HOLD-GEOFFROY , Milos HASAN , Kalyan SUNKAVALLI , Fanbo XIANG
Abstract: Embodiments are disclosed for neural texture mapping. In some embodiments, a method of neural texture mapping includes obtaining a plurality of images of an object, determining a volumetric representation of a scene of the object using a first neural network, mapping 3D points of the scene to a 2D texture space using a second neural network, and determining radiance values for each 2D point in the 2D texture space from a plurality of viewpoints using a third neural network to generate a 3D appearance representation of the object.
-
Publication No.: US20240404090A1
Publication Date: 2024-12-05
Application No.: US18205413
Application Date: 2023-06-02
Applicant: ADOBE INC. , Université Laval
Abstract: In various examples, a set of camera parameters associated with an input image is determined based on a disparity map and a signed defocus map. For example, a disparity model generates the disparity map indicating disparity values associated with pixels of the input image, and a defocus model generates a signed defocus map indicating blur values associated with the pixels of the input image.
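One way the two maps could jointly constrain camera parameters can be sketched under a thin-lens-style assumption that signed defocus is affine in disparity, c = k·(d − d_focus). This model, and the two fitted parameters, are illustrative assumptions, not the patent's method; the maps here are synthetic rather than outputs of the disparity and defocus models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dense maps obeying the assumed thin-lens relation c = k*(d - d_focus).
k_true, d_focus_true = 2.5, 0.4
disparity = rng.uniform(0.1, 1.0, size=(16, 16))
defocus = k_true * (disparity - d_focus_true)  # signed: negative in front of focus

def fit_camera_params(disparity, defocus):
    """Least-squares fit of (k, d_focus) from the two dense maps,
    treating every pixel as one observation of c = k*d + b."""
    d = disparity.ravel()
    c = defocus.ravel()
    A = np.stack([d, np.ones_like(d)], axis=1)
    k, b = np.linalg.lstsq(A, c, rcond=None)[0]
    return k, -b / k                 # d_focus = -b/k

k_est, d_focus_est = fit_camera_params(disparity, defocus)
```

The sign of the defocus map is what makes this fit well-posed: an unsigned blur magnitude would be ambiguous about whether a pixel lies in front of or behind the focal plane.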
-
Publication No.: US20210158139A1
Publication Date: 2021-05-27
Application No.: US16691110
Application Date: 2019-11-21
Applicant: ADOBE INC.
Inventor: Long MAI , Yannick HOLD-GEOFFROY , Naoto INOUE , Daichi ITO , Brian Lynn PRICE
Abstract: Embodiments of the present invention provide systems, methods, and non-transitory computer storage media for generating an ambient occlusion (AO) map for a 2D image that can be combined with the 2D image to adjust the contrast of the 2D image based on the geometric information in the 2D image. In embodiments, using a trained neural network, an AO map for a 2D image is automatically generated without any predefined 3D scene information. Optimizing the neural network to generate an estimated AO map for a 2D image requires training, testing, and validating the neural network using a synthetic dataset comprised of pairs of images and ground truth AO maps rendered from 3D scenes. By using an estimated AO map to adjust the contrast of a 2D image, the contrast of the image can be adjusted to make the image appear lifelike by modifying the shadows and shading in the image based on the ambient lighting present in the image.
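The final step, combining an estimated AO map with the 2D image to adjust contrast, can be sketched as a per-pixel shading multiply. The blend formula and `strength` parameter are hypothetical; the AO values here are hand-written, not produced by the trained network.

```python
import numpy as np

def apply_ao(image, ao_map, strength=0.8):
    """Darken the image where the estimated AO map predicts occlusion.
    ao_map is in [0, 1]: 1 = fully open geometry, 0 = fully occluded."""
    shade = 1.0 - strength * (1.0 - ao_map)     # per-pixel multiplier in [1-strength, 1]
    return np.clip(image * shade[..., None], 0.0, 1.0)

img = np.full((2, 2, 3), 0.5)                   # flat mid-gray image
ao = np.array([[1.0, 0.5], [0.0, 1.0]])         # toy AO values, not model output
out = apply_ao(img, ao)
```

Because the multiplier only ever darkens, this scheme deepens shadows in occluded regions while leaving open regions untouched, which matches the abstract's goal of contrast adjustment driven by the image's geometry.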
-