GENERATING ADAPTIVE THREE-DIMENSIONAL MESHES OF TWO-DIMENSIONAL IMAGES

    Publication Number: US20240161320A1

    Publication Date: 2024-05-16

    Application Number: US18055594

    Filing Date: 2022-11-15

    Applicant: Adobe Inc.

    CPC classification number: G06T7/55 H04N2013/0074

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.
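
    A minimal sketch of the density-guided sampling and tessellation step is shown below. It is illustrative only: the patent's neural networks for disparity estimation and camera parameters are replaced by a synthetic disparity map and a simple pinhole-style lift, and the helper names are hypothetical.

```python
# Hypothetical sketch: sample pixels in proportion to a density map derived from
# disparity, tessellate the samples, and lift vertices using disparity as inverse depth.
import numpy as np
from scipy.spatial import Delaunay

def sample_points_by_density(density, num_points=2000, rng=None):
    """Sample pixel locations with probability proportional to a density map."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = density.shape
    probs = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=num_points, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, (h, w))
    return np.stack([xs, ys], axis=1).astype(np.float64)

def build_mesh(density, disparity):
    """Tessellate sampled points and lift them to 3D using disparity as inverse depth."""
    pts2d = sample_points_by_density(density)
    tri = Delaunay(pts2d)                               # 2D tessellation of sampled points
    d = disparity[pts2d[:, 1].astype(int), pts2d[:, 0].astype(int)]
    depth = 1.0 / np.clip(d, 1e-3, None)                # crude stand-in for the camera model
    verts3d = np.concatenate([pts2d, depth[:, None]], axis=1)
    return verts3d, tri.simplices

# Synthetic disparity map standing in for the first network's output; sampling is
# denser where disparity varies, so the mesh adapts to image structure.
disparity = np.random.rand(120, 160) + 0.1
density = np.abs(np.gradient(disparity)[0]) + 1e-3
verts, faces = build_mesh(density, disparity)
```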

    Generating differentiable procedural materials

    Publication Number: US11688109B2

    Publication Date: 2023-06-27

    Application Number: US17513747

    Filing Date: 2021-10-28

    Applicant: Adobe Inc.

    CPC classification number: G06T11/001 G06N3/084 G06T11/40 G06T15/04

    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material and, in response, retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
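
    A hedged sketch of this optimization loop follows. The "procedural material" here is a toy sinusoidal generator rather than Adobe's node graphs, and the style loss uses Gram matrices of simple gradient features instead of deep network features; all names and parameter choices are illustrative assumptions.

```python
# Toy end-to-end differentiable fit: optimize procedural parameters so the generated
# image's Gram-matrix statistics match those of a target image.
import math
import torch

def procedural_material(params, size=64):
    """Toy differentiable generator: params = (frequency, phase, contrast)."""
    freq, phase, contrast = params
    xs = torch.linspace(0.0, 1.0, size)
    grid = xs[None, :] + xs[:, None]
    return torch.sigmoid(contrast * torch.sin(2 * math.pi * freq * grid + phase))

def features(img):
    """Stack the image with its finite-difference gradients as crude feature channels."""
    dx = img[:, 1:] - img[:, :-1]
    dy = img[1:, :] - img[:-1, :]
    return torch.stack([img[:-1, :-1], dx[:-1, :], dy[:, :-1]])

def style_loss(img_a, img_b):
    """Compare visual appearance via Gram matrices of the feature channels."""
    def gram(f):
        v = f.reshape(f.shape[0], -1)
        return v @ v.t() / v.shape[1]
    return ((gram(features(img_a)) - gram(features(img_b))) ** 2).mean()

target = procedural_material(torch.tensor([5.0, 0.3, 4.0]))    # stands in for a photo
params = torch.tensor([3.0, 0.0, 2.0], requires_grad=True)     # base material parameters
opt = torch.optim.Adam([params], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = style_loss(procedural_material(params), target)
    loss.backward()          # gradients flow through the whole differentiable pipeline
    opt.step()
```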

    DEFORMATION WITH META-HANDLES OF 3D MESHES

    Publication Number: US20220284677A1

    Publication Date: 2022-09-08

    Application Number: US17195099

    Filing Date: 2021-03-08

    Applicant: ADOBE INC.

    Abstract: This disclosure includes technologies for deformation of 3D shapes using meta-handles. The disclosed 3D conditional generative system takes control points with biharmonic coordinates as deformation handles for a shape to train a network to learn a set of meta-handles for the shape. Further, each deformation axis of the latent space of deformation is explicitly associated with a meta-handle from a set of disentangled meta-handles, and the disentangled meta-handles factorize plausible deformations of the shape. Advantageously, an intuitive deformation of the shape may be generated by manipulating coefficients of the meta-handles, e.g., via a user interface.
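
    The deformation step itself can be sketched compactly. In the sketch below the precomputed biharmonic coordinates and the learned meta-handle directions are assumed to be given as inputs (the network that learns them is out of scope), and all array shapes and names are illustrative assumptions.

```python
# Numpy sketch of meta-handle deformation: coefficients select a point in the
# disentangled deformation space, control points move accordingly, and biharmonic
# coordinates propagate that motion to every mesh vertex.
import numpy as np

def deform(vertices, W, control_points, meta_handles, coeffs):
    """
    vertices:       (V, 3) rest-pose mesh vertices
    W:              (V, C) biharmonic coordinates of vertices w.r.t. control points
    control_points: (C, 3) rest-pose control points
    meta_handles:   (K, C, 3) learned displacement direction of control points per meta-handle
    coeffs:         (K,)   user-chosen coefficient along each meta-handle axis
    """
    cp_offset = np.tensordot(coeffs, meta_handles, axes=1)   # (C, 3) combined control-point motion
    return vertices + W @ cp_offset                          # biharmonic propagation to vertices

# Tiny synthetic example.
V, C, K = 100, 4, 2
rng = np.random.default_rng(0)
vertices = rng.normal(size=(V, 3))
W = rng.dirichlet(np.ones(C), size=V)          # rows sum to 1, like partition-of-unity weights
control_points = rng.normal(size=(C, 3))
meta_handles = rng.normal(size=(K, C, 3))
deformed = deform(vertices, W, control_points, meta_handles, np.array([0.5, -0.2]))
```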

    System for automatic object mask and hotspot tracking

    Publication Number: US11367199B2

    Publication Date: 2022-06-21

    Application Number: US16900483

    Filing Date: 2020-06-12

    Applicant: ADOBE INC.

    Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
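
    The per-frame data flow can be sketched with both networks stubbed out, since the eye-gaze and gaze-to-mask models are not described here in runnable detail; the stub behaviors below are placeholders that only show how the pieces connect.

```python
# Data-flow sketch: hotspot map -> object masks -> per-frame focal-point trajectory.
import numpy as np

def eye_gaze_network(frame):
    """Stub: return a hotspot map of predicted focal points (same spatial size as the frame)."""
    h, w = frame.shape[:2]
    heat = np.zeros((h, w))
    heat[h // 2, w // 2] = 1.0                 # pretend the center is salient
    return heat

def gaze_to_mask_network(frame, hotspot_map):
    """Stub: return one binary object mask per detected focal point."""
    ys, xs = np.nonzero(hotspot_map > 0.5)
    masks = []
    for y, x in zip(ys, xs):
        m = np.zeros(frame.shape[:2], dtype=bool)
        m[max(0, y - 10): y + 10, max(0, x - 10): x + 10] = True
        masks.append(m)
    return masks

def track(frames):
    """Run both networks per frame and record each mask's centroid as the trajectory."""
    trajectory = []
    for frame in frames:
        hotspots = eye_gaze_network(frame)
        masks = gaze_to_mask_network(frame, hotspots)
        centroids = [np.argwhere(m).mean(axis=0) for m in masks]
        trajectory.append(centroids)
    return trajectory

frames = [np.random.rand(90, 160, 3) for _ in range(5)]
print(track(frames)[0])
```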

    Digital Content Editing using a Procedural Model

    Publication Number: US20220130086A1

    Publication Date: 2022-04-28

    Application Number: US17079915

    Filing Date: 2020-10-26

    Applicant: Adobe Inc.

    Abstract: Procedural model digital content editing techniques are described that overcome the limitations of conventional techniques by making procedural models available for interaction to a wide range of users without requiring specialized knowledge, and by doing so without “breaking” the underlying model. In the techniques described herein, an inverse procedural model system receives a user input that specifies an edit to digital content generated by a procedural model. The system generates candidate input parameters and selects, from these candidates, input parameters that cause the digital content generated by the procedural model to incorporate the edit.
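
    One way to picture the candidate-selection idea is the brute-force sketch below: a toy procedural model is evaluated on sampled candidate parameters, and the candidate whose output best matches the edited content is selected. The model, the scoring metric, and the random search are all assumptions used only for illustration, not the patent's inverse procedural model system.

```python
# Toy inverse-editing sketch: score candidate parameters against edited content.
import numpy as np

def procedural_model(params, size=32):
    """Toy generator: params = (frequency, amplitude) of a striped pattern."""
    freq, amp = params
    xs = np.linspace(0.0, 1.0, size)
    return amp * np.sin(2 * np.pi * freq * xs)[None, :].repeat(size, axis=0)

def select_parameters(edited_content, candidates):
    """Pick the candidate parameters whose generated content is closest to the edit."""
    errors = [np.mean((procedural_model(c) - edited_content) ** 2) for c in candidates]
    return candidates[int(np.argmin(errors))]

rng = np.random.default_rng(0)
edited = procedural_model((6.0, 0.8))                         # stands in for user-edited pixels
candidates = [(rng.uniform(1, 10), rng.uniform(0.1, 1.0)) for _ in range(200)]
best = select_parameters(edited, candidates)                  # parameters that reproduce the edit
```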

    SYSTEM FOR AUTOMATIC VIDEO REFRAMING

    Publication Number: US20210392278A1

    Publication Date: 2021-12-16

    Application Number: US16900435

    Filing Date: 2020-06-12

    Applicant: ADOBE INC.

    Abstract: Systems and methods provide reframing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. A reframing engine may process video clips using a segmentation and hotspot module to determine a salient region of an object, generate a mask of the object, and track the trajectory of the object in the video clips. The reframing engine may then receive reframing parameters from a crop suggestion module and a user interface. Based on the determined trajectory of an object in a video clip and the reframing parameters, the reframing engine may use reframing logic to produce temporally consistent reframing effects relative to the object for the video clip.
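
    A sketch of the reframing logic alone appears below: given an object trajectory (assumed to come from the segmentation and hotspot module) and a target aspect ratio, crop windows are centered on an exponentially smoothed object center so the reframing stays temporally consistent. Parameter names and the smoothing scheme are illustrative assumptions.

```python
# Reframing sketch: smooth the object's horizontal trajectory, then crop a fixed
# aspect-ratio window centered on the smoothed position in every frame.
import numpy as np

def crop_windows(trajectory, frame_size, target_aspect=9 / 16, smooth=0.8):
    """Return (x0, y0, x1, y1) crops centered on a smoothed object trajectory."""
    h, w = frame_size
    crop_h = h
    crop_w = int(round(crop_h * target_aspect))
    crops, cx = [], trajectory[0][0]
    for (x, y) in trajectory:
        cx = smooth * cx + (1 - smooth) * x            # exponential smoothing of the center
        x0 = int(np.clip(cx - crop_w / 2, 0, w - crop_w))
        crops.append((x0, 0, x0 + crop_w, crop_h))
    return crops

trajectory = [(80 + 3 * t, 45) for t in range(30)]     # object drifting to the right
print(crop_windows(trajectory, frame_size=(90, 160))[:3])
```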

    SYSTEM FOR AUTOMATIC OBJECT MASK AND HOTSPOT TRACKING

    Publication Number: US20210390710A1

    Publication Date: 2021-12-16

    Application Number: US16900483

    Filing Date: 2020-06-12

    Applicant: Adobe Inc.

    Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
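
    As a complement to the per-frame pipeline sketched earlier, the snippet below shows one simple way per-frame masks could be associated into object tracks: greedy IoU matching between consecutive frames. The patent does not prescribe this particular matcher; it is an assumption for illustration.

```python
# Greedy IoU association of masks across consecutive frames to maintain object identity.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def associate(prev_masks, curr_masks, threshold=0.3):
    """Match current masks to previous ones by best IoU above a threshold."""
    matches = {}
    for i, cm in enumerate(curr_masks):
        scores = [iou(pm, cm) for pm in prev_masks]
        if scores and max(scores) >= threshold:
            matches[i] = int(np.argmax(scores))
    return matches   # current index -> previous index; unmatched masks start new tracks
```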

    GENERATING PROCEDURAL MATERIALS FROM DIGITAL IMAGES

    Publication Number: US20210343051A1

    Publication Date: 2021-11-04

    Application Number: US16863540

    Filing Date: 2020-04-30

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material and, in response, retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
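
    As a complement to the optimization loop sketched earlier, the snippet below illustrates why end-to-end differentiability matters: automatic differentiation supplies exact parameter gradients, which can be checked against a finite-difference estimate. The toy generator is an assumption, not Adobe's material graph.

```python
# Autograd gradient of a toy procedural generator vs. a finite-difference estimate.
import math
import torch

def toy_material_loss(freq):
    xs = torch.linspace(0.0, 1.0, 64)
    return torch.sin(2 * math.pi * freq * xs).pow(2).mean()

freq = torch.tensor(3.0, requires_grad=True)
toy_material_loss(freq).backward()                 # exact gradient via autograd

eps = 1e-4
fd = (toy_material_loss(torch.tensor(3.0 + eps))
      - toy_material_loss(torch.tensor(3.0 - eps))) / (2 * eps)
print(freq.grad.item(), fd.item())                 # the two estimates should agree closely
```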

    Generative Shape Creation and Editing

    Publication Number: US20210264649A1

    Publication Date: 2021-08-26

    Application Number: US17317246

    Filing Date: 2021-05-11

    Applicant: Adobe Inc.

    Abstract: Generative shape creation and editing is leveraged in a digital medium environment. An object editor system represents a set of training shapes as sets of visual elements known as “handles,” and converts sets of handles into signed distance field (SDF) representations. A handle processor model is then trained using the SDF representations to enable the handle processor model to generate new shapes that reflect salient visual features of the training shapes. The trained handle processor model, for instance, generates new sets of handles based on salient visual features learned from the training handle set. Thus, utilizing the described techniques, accurate characterizations of a set of shapes can be learned and used to generate new shapes. Further, generated shapes can be edited and transformed in different ways.
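
    The handle-to-SDF conversion step can be illustrated in two dimensions. In the sketch below each handle is an axis-aligned box and the shape's signed distance field is the minimum over per-handle box SDFs; the 2D simplification and all names are assumptions, and the handle processor network itself is not reimplemented.

```python
# Rasterize a set of box "handles" into a signed distance field grid (2D for brevity).
import numpy as np

def box_sdf(points, center, half_size):
    """Signed distance from 2D points to an axis-aligned box."""
    q = np.abs(points - center) - half_size
    outside = np.linalg.norm(np.maximum(q, 0.0), axis=-1)
    inside = np.minimum(np.max(q, axis=-1), 0.0)
    return outside + inside

def handles_to_sdf(handles, resolution=64):
    """Combine per-handle box SDFs into one field by taking the pointwise minimum (union)."""
    xs = np.linspace(-1, 1, resolution)
    grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
    sdfs = [box_sdf(grid, np.array(c), np.array(h)) for c, h in handles]
    return np.min(sdfs, axis=0).reshape(resolution, resolution)

handles = [((0.0, 0.0), (0.4, 0.1)), ((0.0, 0.3), (0.1, 0.4))]   # two boxes forming a cross
sdf = handles_to_sdf(handles)
```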

    INTUITIVE EDITING OF THREE-DIMENSIONAL MODELS

    Publication Number: US20210256775A1

    Publication Date: 2021-08-19

    Application Number: US17208627

    Filing Date: 2021-03-22

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are identified. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
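
    A minimal sketch of the grouping-and-shared-handle idea follows. The feature attributes (a radius and a center), the similarity thresholds, and the scaling edit are all hypothetical; the point is only that one editing-handle manipulation propagates to every salient geometric feature in the set.

```python
# Group features with similar attributes, then apply a single handle edit to the whole group.
import numpy as np

def group_features(features, attr_tol=0.05, dist_tol=1.0):
    """Greedily group features whose attributes (e.g. radius) and positions are close."""
    groups = []
    for f in features:
        for g in groups:
            ref = g[0]
            if (abs(f["radius"] - ref["radius"]) < attr_tol
                    and np.linalg.norm(np.array(f["center"]) - np.array(ref["center"])) < dist_tol):
                g.append(f)
                break
        else:
            groups.append([f])
    return groups

def apply_handle_edit(group, scale):
    """One editing-handle manipulation (here: radius scaling) edits every feature in the set."""
    for f in group:
        f["radius"] *= scale

features = [
    {"center": (0.0, 0.0, 0.0), "radius": 0.20},
    {"center": (0.5, 0.0, 0.0), "radius": 0.21},
    {"center": (5.0, 0.0, 0.0), "radius": 0.80},
]
groups = group_features(features)
apply_handle_edit(groups[0], scale=1.5)    # dragging one handle resizes both similar features
```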
