Drawing curves in space guided by 3-D objects

    Publication No.: US11069099B2

    Publication Date: 2021-07-20

    Application No.: US16855328

    Filing Date: 2020-04-22

    Applicant: Adobe Inc.

    Abstract: Various embodiments enable curves to be drawn around 3-D objects by intelligently determining or inferring how the curve flows in the space around the outside of the 3-D object. The various embodiments enable such curves to be drawn without having to constantly rotate the 3-D object. In at least some embodiments, curve flow is inferred by employing a vertex position discovery process, a path discovery process, and a final curve construction process.
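    Below is a minimal sketch of the three stages the abstract names (vertex position discovery, path discovery, final curve construction), assuming the 3-D object is given as a triangle mesh with a vertex array and an edge list. The function names and the Chaikin smoothing step are illustrative assumptions, not the patented method: stroke samples are snapped to nearest mesh vertices, consecutive snapped vertices are connected by shortest paths over the mesh graph, and the resulting polyline is smoothed into the final curve.

```python
import heapq
import numpy as np

def discover_vertices(mesh_vertices, stroke_points):
    """Vertex position discovery: snap each stroke sample to its nearest mesh vertex."""
    return [int(np.argmin(np.linalg.norm(mesh_vertices - p, axis=1))) for p in stroke_points]

def discover_path(mesh_vertices, edges, v_start, v_end):
    """Path discovery: shortest path between two vertices over the mesh edge graph (Dijkstra)."""
    adj = {}
    for a, b in edges:
        w = float(np.linalg.norm(mesh_vertices[a] - mesh_vertices[b]))
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    dist, prev, heap = {v_start: 0.0}, {}, [(0.0, v_start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == v_end:
            break
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path = [v_end]
    while path[-1] != v_start:
        path.append(prev[path[-1]])
    return path[::-1]

def construct_curve(points, iterations=3):
    """Final curve construction: smooth the vertex polyline with Chaikin corner cutting."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        q = 0.75 * pts[:-1] + 0.25 * pts[1:]
        r = 0.25 * pts[:-1] + 0.75 * pts[1:]
        mid = np.empty((2 * len(q), pts.shape[1]))
        mid[0::2], mid[1::2] = q, r
        pts = np.vstack([pts[:1], mid, pts[-1:]])
    return pts
```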

    Utilizing deep learning to rate attributes of digital images

    Publication No.: US10515443B2

    Publication Date: 2019-12-24

    Application No.: US15981166

    Filing Date: 2018-05-16

    Applicant: Adobe Inc.

    Abstract: Systems and methods are disclosed for estimating the aesthetic quality of digital images using deep learning. In particular, the disclosed systems and methods describe training a neural network to generate an aesthetic quality score for digital images. The neural network includes a training structure that compares relative rankings of pairs of training images to accurately predict a relative ranking of a digital image. Additionally, in training the neural network, an image rating system can utilize content-aware and user-aware sampling techniques to identify pairs of training images that have similar content and/or that have been rated by the same or different users. Using content-aware and user-aware sampling techniques, the neural network can be trained to accurately predict aesthetic quality ratings that reflect the subjective opinions of most users and to provide aesthetic scores that represent the wide spectrum of aesthetic preferences of various users.
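    The pairwise-ranking training structure can be sketched as follows: a shared scoring network evaluates both images of a sampled pair, and a margin ranking loss pushes the score of the higher-rated image above the other. The toy backbone, margin value, and random tensors below are assumptions; the patent's content-aware and user-aware sampling would decide which pairs are fed into this step.

```python
import torch
import torch.nn as nn

class AestheticScorer(nn.Module):
    """Small CNN mapping an image to a single aesthetic score (illustrative backbone)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

def ranking_step(model, optimizer, img_hi, img_lo, margin=0.5):
    """One training step on a pair where img_hi was rated above img_lo."""
    criterion = nn.MarginRankingLoss(margin=margin)
    s_hi, s_lo = model(img_hi), model(img_lo)
    target = torch.ones_like(s_hi)          # +1 means s_hi should exceed s_lo
    loss = criterion(s_hi, s_lo, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a sampled image pair.
model = AestheticScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = ranking_step(model, opt, torch.randn(8, 3, 64, 64), torch.randn(8, 3, 64, 64))
```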

    Convolutional neural network joint training

    Publication No.: US10467529B2

    Publication Date: 2019-11-05

    Application No.: US15177121

    Filing Date: 2016-06-08

    Applicant: Adobe Inc.

    Abstract: In embodiments of convolutional neural network joint training, a computing system memory maintains different data batches of multiple digital image items, where the digital image items of the different data batches have some common features. A convolutional neural network (CNN) receives input of the digital image items of the different data batches, and classifier layers of the CNN are trained to recognize the common features in the digital image items of the different data batches. The recognized common features are input to fully-connected layers of the CNN that distinguish between the recognized common features of the digital image items of the different data batches. A scoring difference is determined between item pairs of the digital image items in a particular one of the different data batches. A piecewise ranking loss algorithm maintains the scoring difference between the item pairs, and the scoring difference is used to train CNN regression functions.
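    One way to read the "piecewise ranking loss" is as a hinge that behaves differently depending on whether the ground-truth ratings of an item pair are close or far apart: close pairs are pushed toward similar scores, while for other pairs the predicted score difference must respect the ground-truth ordering by a margin. The exact piecewise form and margin values below are assumptions, not the patent's formula.

```python
import torch

def piecewise_ranking_loss(score_a, score_b, label_diff, sim_margin=0.1, diff_margin=0.5):
    """Illustrative piecewise hinge on the predicted score difference:
    - pairs whose ground-truth ratings are close (|label_diff| < sim_margin)
      are penalized for any predicted gap beyond sim_margin;
    - otherwise the predicted difference must match the ground-truth ordering
      by at least diff_margin."""
    pred_diff = score_a - score_b
    similar = label_diff.abs() < sim_margin
    loss_similar = torch.clamp(pred_diff.abs() - sim_margin, min=0.0)
    sign = torch.sign(label_diff)
    loss_ordered = torch.clamp(diff_margin - sign * pred_diff, min=0.0)
    return torch.where(similar, loss_similar, loss_ordered).mean()

# Example: scores for item pairs from one data batch and their rating differences.
s_a = torch.tensor([0.8, 0.2, 0.6])
s_b = torch.tensor([0.5, 0.3, 0.1])
label_diff = torch.tensor([0.4, 0.02, -0.3])   # ground-truth rating differences
print(piecewise_ranking_loss(s_a, s_b, label_diff))
```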

    SMART GUIDE TO CAPTURE DIGITAL IMAGES THAT ALIGN WITH A TARGET IMAGE MODEL

    Publication No.: US20190253614A1

    Publication Date: 2019-08-15

    Application No.: US15897951

    Filing Date: 2018-02-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes systems, methods, and non-transitory computer readable media that can guide a user to align a camera feed captured by a user client device with a target digital image. In particular, the systems described herein can analyze a camera feed to determine image attributes for the camera feed. The systems can compare the image attributes of the camera feed with corresponding target image attributes of a target digital image. Additionally, the systems can generate and provide instructions to guide a user to align the image attributes of the camera feed with the target image attributes of the target digital image.
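    The compare-and-instruct loop can be sketched with two deliberately simple, assumed attributes (mean brightness and the horizontal center of intensity); the real system's image attributes and guidance are richer, and the thresholds and instruction strings here are illustrative only.

```python
import numpy as np

def image_attributes(frame):
    """Simple, illustrative attributes of an RGB frame (H, W, 3) with values in [0, 1]."""
    gray = frame.mean(axis=2)
    cols = np.arange(gray.shape[1])
    center_x = float((gray.sum(axis=0) * cols).sum() / gray.sum() / gray.shape[1])
    return {"brightness": float(gray.mean()), "center_x": center_x}

def guidance(feed_frame, target_frame, tol=0.05):
    """Compare camera-feed attributes with the target image and emit alignment instructions."""
    feed, target = image_attributes(feed_frame), image_attributes(target_frame)
    tips = []
    if feed["brightness"] < target["brightness"] - tol:
        tips.append("increase exposure")
    elif feed["brightness"] > target["brightness"] + tol:
        tips.append("decrease exposure")
    if feed["center_x"] < target["center_x"] - tol:
        tips.append("shift framing so the bright region moves right")
    elif feed["center_x"] > target["center_x"] + tol:
        tips.append("shift framing so the bright region moves left")
    return tips or ["camera feed is aligned with the target"]

print(guidance(np.random.rand(120, 160, 3), np.random.rand(120, 160, 3)))
```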

    CONTROLLING DEPTH SENSITIVITY IN CONDITIONAL TEXT-TO-IMAGE

    Publication No.: US20250166307A1

    Publication Date: 2025-05-22

    Application No.: US18948089

    Filing Date: 2024-11-14

    Applicant: ADOBE INC.

    Abstract: A method, apparatus, non-transitory computer readable medium, and system for image processing include obtaining a condition input and an adherence parameter, where the condition input indicates an image attribute and the adherence parameter indicates a level of the condition input, generating an intermediate output based on the condition input and the adherence parameter, where the intermediate output includes the image attribute, and generating a synthetic image based on the intermediate output, where the synthetic image includes the image attribute based on the level indicated by the adherence parameter.
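    The adherence parameter can be read as a scalar that controls how strongly the condition input (for example, a depth map) steers generation. The sketch below assumes a ControlNet-style arrangement in which conditioning residuals are added to generator features scaled by that parameter; the module names and architecture are stand-ins, not the patent's design.

```python
import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    """Toy generator whose features receive condition residuals scaled by an
    adherence parameter in [0, 1] (0 = ignore the condition, 1 = follow it fully)."""
    def __init__(self, channels=32):
        super().__init__()
        self.encode_noise = nn.Conv2d(3, channels, 3, padding=1)
        self.encode_condition = nn.Conv2d(1, channels, 3, padding=1)   # e.g. a depth map
        self.decode = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, noise, condition, adherence):
        features = self.encode_noise(noise)
        # intermediate output: features carrying the conditioned image attribute
        intermediate = features + adherence * self.encode_condition(condition)
        return torch.sigmoid(self.decode(intermediate))    # synthetic image

model = ConditionedGenerator()
noise = torch.randn(1, 3, 64, 64)
depth = torch.rand(1, 1, 64, 64)
weak = model(noise, depth, adherence=0.2)    # loose adherence to the depth condition
strict = model(noise, depth, adherence=0.9)  # tight adherence
```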

    Modifying two-dimensional images utilizing segmented three-dimensional object meshes of the two-dimensional images

    Publication No.: US12277652B2

    Publication Date: 2025-04-15

    Application No.: US18055585

    Filing Date: 2022-11-15

    Applicant: Adobe Inc.

    Abstract: Methods, systems, and non-transitory computer readable storage media are disclosed for generating three-dimensional meshes representing two-dimensional images for editing the two-dimensional images. The disclosed system utilizes a first neural network to determine density values of pixels of a two-dimensional image based on estimated disparity. The disclosed system samples points in the two-dimensional image according to the density values and generates a tessellation based on the sampled points. The disclosed system utilizes a second neural network to estimate camera parameters and modify the three-dimensional mesh based on the estimated camera parameters of the pixels of the two-dimensional image. In one or more additional embodiments, the disclosed system generates a three-dimensional mesh to modify a two-dimensional image according to a displacement input. Specifically, the disclosed system maps the three-dimensional mesh to the two-dimensional image, modifies the three-dimensional mesh in response to a displacement input, and updates the two-dimensional image.
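    The sampling-and-tessellation step can be sketched as follows: draw more sample points where a disparity-derived density map is high, then triangulate the samples. The synthetic density map and point count below are assumptions, and the neural estimators for disparity and camera parameters are out of scope; Delaunay triangulation stands in for the tessellation.

```python
import numpy as np
from scipy.spatial import Delaunay

def sample_by_density(density, n_points=500, rng=None):
    """Draw pixel locations with probability proportional to a density map (H, W)."""
    rng = rng or np.random.default_rng(0)
    h, w = density.shape
    probs = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=n_points, replace=False, p=probs)
    ys, xs = np.unravel_index(idx, (h, w))
    return np.column_stack([xs, ys]).astype(float)

# Synthetic "disparity-derived" density: denser sampling toward the image center.
h, w = 120, 160
yy, xx = np.mgrid[0:h, 0:w]
density = np.exp(-(((xx - w / 2) / w) ** 2 + ((yy - h / 2) / h) ** 2) * 8)

points = sample_by_density(density)
tessellation = Delaunay(points)               # triangles indexing into `points`
print(points.shape, tessellation.simplices.shape)
```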

    EXTRACTING 3D SHAPES FROM LARGE-SCALE UNANNOTATED IMAGE DATASETS

    Publication No.: US20250061660A1

    Publication Date: 2025-02-20

    Application No.: US18451961

    Filing Date: 2023-08-18

    Applicant: ADOBE INC.

    Abstract: Systems and methods for extracting 3D shapes from unstructured and unannotated datasets are described. Embodiments are configured to obtain a first image and a second image, where the first image depicts an object and the second image includes a corresponding object of a same object category as the object. Embodiments are further configured to generate, using an image encoder, image features for portions of the first image and for portions of the second image; identify a keypoint correspondence between a first keypoint in the first image and a second keypoint in the second image by clustering the image features corresponding to the portions of the first image and the portions of the second image; and generate, using an occupancy network, a 3D model of the object based on the keypoint correspondence.
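    The correspondence step can be sketched by clustering patch features from both images jointly and then matching patches that fall in the same cluster by nearest feature distance. The random features below stand in for the image encoder's outputs, k-means is an assumed clustering choice, and the occupancy network is out of scope.

```python
import numpy as np
from sklearn.cluster import KMeans

def keypoint_correspondences(feats_a, feats_b, n_clusters=8):
    """feats_a, feats_b: (N, D) patch features from images A and B.
    Returns (i, j) index pairs of patches assigned to the same cluster and
    closest to each other in feature space."""
    all_feats = np.vstack([feats_a, feats_b])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(all_feats)
    labels_a, labels_b = labels[: len(feats_a)], labels[len(feats_a):]
    pairs = []
    for i, (fa, la) in enumerate(zip(feats_a, labels_a)):
        candidates = np.where(labels_b == la)[0]
        if candidates.size:
            j = candidates[np.argmin(np.linalg.norm(feats_b[candidates] - fa, axis=1))]
            pairs.append((i, int(j)))
    return pairs

# Random features stand in for encoder outputs on patches of two same-category images.
rng = np.random.default_rng(0)
pairs = keypoint_correspondences(rng.normal(size=(64, 32)), rng.normal(size=(64, 32)))
print(len(pairs), pairs[:3])
```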
