RADIANCE FIELD GRADIENT SCALING FOR UNBIASED NEAR-CAMERA TRAINING

    Publication No.: US20240412444A1

    Publication Date: 2024-12-12

    Application No.: US18207923

    Application Date: 2023-06-09

    Applicant: Adobe Inc.

    Abstract: Methods and systems disclosed herein relate generally to radiance field gradient scaling for unbiased near-camera training. In a method, a processing device accesses an input image of a three-dimensional environment comprising a plurality of pixels, each pixel comprising a pixel color. The processing device determines a camera location based on the input image and a ray from the camera location in a direction of a pixel. The processing device integrates sampled information from a volumetric representation along the ray from the camera location to obtain an integrated color. The processing device trains a machine learning model configured to predict a density and a color, comprising minimizing a loss function using a scaling factor that is determined based on a distance between the camera location and a point along the ray. The processing device outputs the trained machine learning model for use in rendering an output image.
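The scaling factor described in the abstract down-weights loss (and hence gradient) contributions from samples close to the camera. A minimal sketch of one plausible form, where the quadratic ramp and the `near` parameter are illustrative assumptions rather than details taken from the patent:

```python
import numpy as np

def gradient_scale(distances, near=1.0):
    """Hypothetical distance-based scaling factor: samples closer to the
    camera than `near` get a quadratically reduced weight, while samples
    beyond it are left unscaled."""
    return np.minimum(1.0, (distances / near) ** 2)

# Applying the factor to a per-sample squared-error loss term:
def scaled_loss(pred, target, distances, near=1.0):
    # Weighting the residuals reduces the gradient signal from
    # near-camera samples, counteracting their over-representation.
    w = gradient_scale(distances, near)
    return float(np.mean(w * (pred - target) ** 2))
```

The design intent, per the abstract, is that the weight depends only on the distance between the camera location and the sample point along the ray.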

    Directional editing of digital images

    Publication No.: US11972512B2

    Publication Date: 2024-04-30

    Application No.: US17583600

    Application Date: 2022-01-25

    Applicant: Adobe Inc.

    Abstract: Directional propagation editing techniques are described. In one example, a digital image, a depth map, and a direction are obtained by an image editing system. The image editing system then generates features. To do so, the image editing system generates features from the digital image and the depth map for each pixel based on the direction, e.g., until an edge of the digital image is reached. In an implementation, instead of storing a value of the depth directly, a ratio is stored based on a depth in the depth map and a depth of a point along the direction. The image editing system then forms a feature volume using the features, e.g., as three-dimensionally stacked features. The feature volume is employed by the image editing system as part of editing the digital image to form an edited digital image.
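The depth-ratio feature in the abstract can be pictured as a walk from a pixel along the given direction until the image edge is reached, storing a ratio at each step instead of the raw depth. A hedged sketch of that idea; the function name, step scheme, and `eps` guard are illustrative, not taken from the patent:

```python
import numpy as np

def directional_depth_ratios(depth, start, direction, eps=1e-6):
    """Collect depth-ratio features from `start` along integer-step
    `direction` until the image edge is reached (illustrative sketch)."""
    h, w = depth.shape
    y, x = start
    dy, dx = direction
    base = depth[y, x]
    feats = []
    y, x = y + dy, x + dx
    while 0 <= y < h and 0 <= x < w:
        # Store the ratio of the starting depth to the depth of the
        # point along the direction, rather than the depth itself.
        feats.append(base / (depth[y, x] + eps))
        y, x = y + dy, x + dx
    return feats
```

Per the abstract, such per-pixel feature lists are then stacked into a feature volume that drives the edit.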

    VIEWPOINTS DETERMINATION FOR THREE-DIMENSIONAL OBJECTS

    Publication No.: US20250078408A1

    Publication Date: 2025-03-06

    Application No.: US18458032

    Application Date: 2023-08-29

    Applicant: Adobe Inc.

    Abstract: Implementations of systems and methods for determining viewpoints suitable for performing one or more digital operations on a three-dimensional object are disclosed. Accordingly, a set of candidate viewpoints is established. The set of candidate viewpoints provides views of an outer surface of a three-dimensional object, and those views provide overlapping surface data. A subset of activated viewpoints is determined from the set of candidate viewpoints, the subset of activated viewpoints providing less of the overlapping surface data. The subset of activated viewpoints is used to perform one or more digital operations on the three-dimensional object.
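Choosing a subset of viewpoints that still covers the surface while reducing overlap resembles greedy set cover. A sketch under the assumption that each candidate viewpoint is summarized by the set of surface patches it sees; the data structures and greedy criterion are illustrative, not from the disclosure:

```python
def select_viewpoints(coverage, surface):
    """Greedy activation: repeatedly pick the candidate viewpoint that
    adds the most not-yet-covered surface data, which keeps overlap
    among activated viewpoints low (illustrative sketch)."""
    covered, active = set(), []
    candidates = dict(coverage)  # viewpoint -> set of visible patch ids
    while covered != surface and candidates:
        best = max(candidates, key=lambda v: len(candidates[v] - covered))
        gain = candidates.pop(best)
        if not (gain - covered):
            break  # no candidate adds new surface data
        active.append(best)
        covered |= gain
    return active
```

The activated subset then stands in for the full candidate set when performing the digital operations on the object.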

    Directional Editing of Digital Images

    Publication No.: US20230237718A1

    Publication Date: 2023-07-27

    Application No.: US17583600

    Application Date: 2022-01-25

    Applicant: Adobe Inc.

    Abstract: Directional propagation editing techniques are described. In one example, a digital image, a depth map, and a direction are obtained by an image editing system. The image editing system then generates features. To do so, the image editing system generates features from the digital image and the depth map for each pixel based on the direction, e.g., until an edge of the digital image is reached. In an implementation, instead of storing a value of the depth directly, a ratio is stored based on a depth in the depth map and a depth of a point along the direction. The image editing system then forms a feature volume using the features, e.g., as three-dimensionally stacked features. The feature volume is employed by the image editing system as part of editing the digital image to form an edited digital image.

    POINT-BASED NEURAL RADIANCE FIELD FOR THREE DIMENSIONAL SCENE REPRESENTATION

    Publication No.: US20240013477A1

    Publication Date: 2024-01-11

    Application No.: US17861199

    Application Date: 2022-07-09

    Applicant: Adobe Inc.

    CPC classification number: G06T15/205 G06T15/80 G06T15/06 G06T2207/10028

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
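The neural point volume rendering model described above accumulates per-sample colors along a ray into a single pixel color. The standard volume-rendering compositing equation, shown here in its generic form rather than as Adobe's specific point-based model, looks like:

```python
import numpy as np

def composite_color(densities, colors, deltas):
    """Front-to-back volume rendering: alpha from density * step size,
    per-sample colors blended by accumulated transmittance."""
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance before each sample: product of (1 - alpha) of all
    # samples in front of it, starting at 1 for the first sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

A nearly opaque first sample dominates the result, since later samples receive almost no transmittance; repeating this per pixel yields the output 2D image the abstract describes.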

    POINT-BASED NEURAL RADIANCE FIELD FOR THREE DIMENSIONAL SCENE REPRESENTATION

    Publication No.: US20240404181A1

    Publication Date: 2024-12-05

    Application No.: US18799247

    Application Date: 2024-08-09

    Applicant: Adobe Inc.

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.

    Point-based neural radiance field for three dimensional scene representation

    Publication No.: US12073507B2

    Publication Date: 2024-08-27

    Application No.: US17861199

    Application Date: 2022-07-09

    Applicant: Adobe Inc.

    CPC classification number: G06T15/205 G06T15/06 G06T15/80 G06T2207/10028

    Abstract: A scene modeling system receives a plurality of input two-dimensional (2D) images corresponding to a plurality of views of an object and a request to display a three-dimensional (3D) scene that includes the object. The scene modeling system generates an output 2D image for a view of the 3D scene by applying a scene representation model to the input 2D images. The scene representation model includes a point cloud generation model configured to generate, based on the input 2D images, a neural point cloud representing the 3D scene. The scene representation model includes a neural point volume rendering model configured to determine, for each pixel of the output image and using the neural point cloud and a volume rendering process, a color value. The scene modeling system transmits, responsive to the request, the output 2D image. Each pixel of the output image includes the respective determined color value.
