-
Publication No.: US20240412444A1
Publication Date: 2024-12-12
Application No.: US18207923
Filing Date: 2023-06-09
Applicant: Adobe Inc.
Inventor: Julien Philip , Valentin Deschaintre
Abstract: Methods and systems disclosed herein relate generally to radiance field gradient scaling for unbiased near-camera training. In a method, a processing device accesses an input image of a three-dimensional environment comprising a plurality of pixels, each pixel comprising a pixel color. The processing device determines a camera location based on the input image and a ray from the camera location in a direction of a pixel. The processing device integrates sampled information from a volumetric representation along the ray from the camera location to obtain an integrated color. The processing device trains a machine learning model configured to predict a density and a color, comprising minimizing a loss function using a scaling factor that is determined based on a distance between the camera location and a point along the ray. The processing device outputs the trained machine learning model for use in rendering an output image.
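The abstract describes a loss weighted by a scaling factor that depends on the distance between the camera and a point along the ray. A minimal NumPy sketch of how such a factor might look is below; the quadratic law, the clamp, and the `near_camera_scale`/`bound` names are illustrative assumptions, not the patented formulation.

```python
import numpy as np

def near_camera_scale(sample_distances: np.ndarray, bound: float = 1.0) -> np.ndarray:
    """Distance-based scaling factor: small for samples close to the camera and
    clamped to 1 once a sample is at least `bound` away. The quadratic law and
    the `bound` parameter are assumptions, not the claimed formula."""
    return np.clip((sample_distances / bound) ** 2, 0.0, 1.0)

# Usage sketch: multiply each sample's gradient (or its contribution to the
# photometric loss) by this scale before the optimizer step.
distances = np.linspace(0.05, 4.0, 8)            # sample distances along one ray
print(near_camera_scale(distances, bound=1.0))   # ramps from near 0 up to 1
```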
-
Publication No.: US20250139883A1
Publication Date: 2025-05-01
Application No.: US18499673
Filing Date: 2023-11-01
Applicant: ADOBE INC.
Inventor: Milos Hasan , Iliyan Georgiev , Sai Bi , Julien Philip , Kalyan K. Sunkavalli , Xin Sun , Fujun Luan , Kevin James Blackburn-Matzen , Zexiang Xu , Kai Zhang
IPC: G06T17/00 , G06T7/90 , H04N13/279
Abstract: Embodiments are configured to render 3D models using an importance sampling method. First, embodiments obtain a 3D model including a plurality of density values corresponding to a plurality of locations in a 3D space, respectively. Embodiments then sample color information from within a random subset of the plurality of locations using a probability distribution based on the plurality of density values. A location within the random subset is more likely to be sampled if it has a higher density value. Embodiments then render an image depicting a view of the 3D model based on the sampling within the random subset of the plurality of locations.
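The density-proportional sampling the abstract describes can be sketched as follows; the `sample_locations_by_density` helper and the random densities in the usage lines are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def sample_locations_by_density(densities: np.ndarray,
                                num_samples: int,
                                rng: np.random.Generator) -> np.ndarray:
    """Return indices of 3D locations drawn with probability proportional to
    their density values, so denser locations are sampled more often."""
    probs = densities / densities.sum()     # normalize densities to a distribution
    return rng.choice(len(densities), size=num_samples, replace=False, p=probs)

# Usage sketch: pick 1,024 of 100,000 candidate locations, then query color
# only at the sampled subset when rendering the view.
rng = np.random.default_rng(0)
densities = rng.random(100_000)
picked = sample_locations_by_density(densities, 1_024, rng)
```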
-
Publication No.: US11972512B2
Publication Date: 2024-04-30
Application No.: US17583600
Filing Date: 2022-01-25
Applicant: Adobe Inc.
Inventor: Julien Philip , David Nicholson Griffiths
CPC classification number: G06T11/60 , G06T7/50 , G06T7/90 , G06T15/06 , G06T15/50 , G06T2207/20081 , G06T2207/20084
Abstract: Directional propagation editing techniques are described. In one example, a digital image, a depth map, and a direction are obtained by an image editing system. The image editing system then generates features from the digital image and the depth map for each pixel based on the direction, e.g., until an edge of the digital image is reached. In an implementation, instead of storing a value of the depth directly, a ratio is stored based on a depth in the depth map and a depth of a point along the direction. The image editing system then forms a feature volume using the features, e.g., as three-dimensionally stacked features. The feature volume is employed by the image editing system as part of editing the digital image to form an edited digital image.
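A rough NumPy sketch of the per-pixel feature generation the abstract outlines, walking from each pixel along a direction and storing a depth ratio rather than the raw depth; the fixed step count, the nearest-pixel rounding, and the orientation of the ratio are assumptions, not the patented construction.

```python
import numpy as np

def directional_depth_features(depth, direction, num_steps=32):
    """For every pixel, walk `num_steps` unit steps along `direction` (stopping
    at the image edge) and record the ratio between the depth at the walked-to
    point and the depth at the starting pixel. Returns an (H, W, num_steps)
    feature volume of three-dimensionally stacked features."""
    h, w = depth.shape
    dy, dx = direction
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    features = np.zeros((h, w, num_steps), dtype=np.float64)
    for s in range(1, num_steps + 1):
        py = np.clip(np.round(ys + s * dy), 0, h - 1).astype(int)
        px = np.clip(np.round(xs + s * dx), 0, w - 1).astype(int)
        inside = (ys + s * dy >= 0) & (ys + s * dy <= h - 1) & \
                 (xs + s * dx >= 0) & (xs + s * dx <= w - 1)
        ratio = depth[py, px] / np.maximum(depth, 1e-6)       # walked depth / source depth
        features[..., s - 1] = np.where(inside, ratio, 0.0)   # zero past the edge
    return features
```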
-
Publication No.: US20250078408A1
Publication Date: 2025-03-06
Application No.: US18458032
Filing Date: 2023-08-29
Applicant: Adobe Inc.
Inventor: Valentin Mathieu Deschaintre , Vladimir Kim , Thibault Groueix , Julien Philip
IPC: G06T17/20 , G06T7/40 , H04N13/279
Abstract: Implementations of systems and methods for determining viewpoints suitable for performing one or more digital operations on a three-dimensional object are disclosed. Accordingly, a set of candidate viewpoints is established. The set of candidate viewpoints provides views of an outer surface of a three-dimensional object, and those views provide overlapping surface data. A subset of activated viewpoints is determined from the set of candidate viewpoints, the subset of activated viewpoints providing less of the overlapping surface data. The subset of activated viewpoints is used to perform one or more digital operations on the three-dimensional object.
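One plausible way to pick an "activated" subset of viewpoints that still covers the surface while cutting redundant overlap is a greedy coverage heuristic, sketched below; the boolean `coverage` matrix (viewpoint sees surface element), the coverage `target`, and the greedy strategy are illustrative assumptions, not the disclosed procedure.

```python
import numpy as np

def select_activated_viewpoints(coverage: np.ndarray, target: float = 0.99):
    """Greedy selection: coverage[v, f] is True if candidate viewpoint v sees
    surface element f. Repeatedly pick the viewpoint that adds the most
    not-yet-covered elements until `target` of the surface is covered."""
    num_views, num_faces = coverage.shape
    covered = np.zeros(num_faces, dtype=bool)
    activated = []
    while covered.mean() < target:
        gains = (coverage & ~covered).sum(axis=1)   # new elements each view would add
        best = int(gains.argmax())
        if gains[best] == 0:                        # nothing left to gain
            break
        activated.append(best)
        covered |= coverage[best]
    return activated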
-
Publication No.: US20230237718A1
Publication Date: 2023-07-27
Application No.: US17583600
Filing Date: 2022-01-25
Applicant: Adobe Inc.
Inventor: Julien Philip , David Nicholson Griffiths
CPC classification number: G06T11/60 , G06T7/50 , G06T7/90 , G06T15/50 , G06T15/06 , G06T2207/20081 , G06T2207/20084
Abstract: Directional propagation editing techniques are described. In one example, a digital image, a depth map, and a direction are obtained by an image editing system. The image editing system then generates features from the digital image and the depth map for each pixel based on the direction, e.g., until an edge of the digital image is reached. In an implementation, instead of storing a value of the depth directly, a ratio is stored based on a depth in the depth map and a depth of a point along the direction. The image editing system then forms a feature volume using the features, e.g., as three-dimensionally stacked features. The feature volume is employed by the image editing system as part of editing the digital image to form an edited digital image.
-
Publication No.: US20240273813A1
Publication Date: 2024-08-15
Application No.: US18168995
Filing Date: 2023-02-14
Applicant: Adobe Inc.
Inventor: Jianming Zhang , Yichen Sheng , Julien Philip , Yannick Hold-Geoffroy , Xin Sun , He Zhang
CPC classification number: G06T15/60 , G06T7/60 , G06V10/60 , G06V10/761 , G06V10/82
Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that generate object shadows for digital images utilizing corresponding geometry-aware buffer channels. For instance, in one or more embodiments, the disclosed systems generate, utilizing a height prediction neural network, an object height map for a digital object portrayed in a digital image and a background height map for a background portrayed in the digital image. The disclosed systems also generate, from the digital image, a plurality of geometry-aware buffer channels using the object height map and the background height map. Further, the disclosed systems modify the digital image to include a soft object shadow for the digital object using the plurality of geometry-aware buffer channels.
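A toy example of one geometry-aware buffer channel: a hard shadow mask produced by projecting object pixels along a 2D light direction by a distance proportional to their height. The flat-background assumption, the light parameterization, and the suggestion to blur for softness are illustrative choices only, not the disclosed channels or network.

```python
import numpy as np

def hard_shadow_channel(object_height, object_mask, light_dir_xy):
    """Project every object pixel along the 2D light direction by a distance
    proportional to its height and mark the landing pixels as shadowed.
    Assumes a flat background plane at height zero (illustration only)."""
    h, w = object_height.shape
    shadow = np.zeros((h, w), dtype=np.float32)
    dy, dx = light_dir_xy
    ys, xs = np.nonzero(object_mask)
    ty = np.clip(np.round(ys + dy * object_height[ys, xs]), 0, h - 1).astype(int)
    tx = np.clip(np.round(xs + dx * object_height[ys, xs]), 0, w - 1).astype(int)
    shadow[ty, tx] = 1.0
    return shadow   # blurring this channel would give a simple soft shadow
```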
-
Publication No.: US20240013477A1
Publication Date: 2024-01-11
Application No.: US17861199
Filing Date: 2022-07-09
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Zhixin Shu , Sai Bi , Qiangeng Xu , Kalyan Sunkavalli , Julien Philip
CPC classification number: G06T15/205 , G06T15/80 , G06T15/06 , G06T2207/10028
Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
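The per-pixel color determination in the volume rendering step can be sketched with the standard alpha-compositing quadrature; the background blend and the assumption that per-sample densities and colors have already been aggregated from nearby neural points are simplifications for illustration, not the claimed model.

```python
import numpy as np

def composite_ray_color(densities, colors, deltas, background=(1.0, 1.0, 1.0)):
    """Standard volume-rendering quadrature: turn per-sample densities into
    opacities, accumulate transmittance front to back, and blend the sample
    colors (plus a background color) into a single pixel color."""
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance
    weights = trans * alphas
    pixel = (weights[:, None] * colors).sum(axis=0)
    pixel += (1.0 - weights.sum()) * np.asarray(background)          # remaining light
    return pixel
```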
-
Publication No.: US20240404181A1
Publication Date: 2024-12-05
Application No.: US18799247
Filing Date: 2024-08-09
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Zhixin Shu , Sai Bi , Qiangeng Xu , Kalyan Sunkavalli , Julien Philip
Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
-
Publication No.: US12073507B2
Publication Date: 2024-08-27
Application No.: US17861199
Filing Date: 2022-07-09
Applicant: Adobe Inc.
Inventor: Zexiang Xu , Zhixin Shu , Sai Bi , Qiangeng Xu , Kalyan Sunkavalli , Julien Philip
CPC classification number: G06T15/205 , G06T15/06 , G06T15/80 , G06T2207/10028
Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.