Learned model-based image rendering

    Publication Number: US11113578B1

    Publication Date: 2021-09-07

    Application Number: US16847270

    Filing Date: 2020-04-13

    Applicant: Adobe Inc.

    Abstract: A non-photorealistic image rendering system and related techniques are described herein that train and implement machine learning models to reproduce digital images in accordance with various painting styles and constraints. The image rendering system can include a machine learning system that utilizes actor-critic based reinforcement learning techniques to train painting agents (e.g., models that include one or more neural networks) to transform images into various artistic styles with minimal loss between the original images and the transformed images. The image rendering system can generate constrained painting agents, which correspond to painting agents that are further trained to reproduce images in accordance with one or more constraints. The constraints may include limitations on the color, width, size, and/or position of brushstrokes within reproduced images. These constrained painting agents may provide users with robust, flexible, and customizable non-photorealistic painting systems.
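
    As an illustrative aside (not code from the patent), the sketch below shows the general shape of an actor-critic painting step: an actor network proposes stroke parameters, a differentiable renderer applies the stroke to a canvas, and the reward is the reduction in pixel loss against the original image. The module names and the soft-brush model are assumptions made for illustration.

```python
# Minimal sketch (assumed, not Adobe's implementation) of one actor-critic painting step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaintingActor(nn.Module):
    """Maps (canvas, target) to stroke parameters in [0, 1]: x, y, radius, r, g, b."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 6)

    def forward(self, canvas, target):
        x = torch.cat([canvas, target], dim=1)       # (B, 6, H, W)
        return torch.sigmoid(self.head(self.encoder(x)))

class Critic(nn.Module):
    """Estimates the value of a (canvas, target) state."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, canvas, target):
        return self.encoder(torch.cat([canvas, target], dim=1))

def render_stroke(canvas, stroke):
    """Differentiably blend a soft circular brushstroke onto the canvas."""
    B, _, H, W = canvas.shape
    ys = torch.linspace(0, 1, H, device=canvas.device).view(1, H, 1)
    xs = torch.linspace(0, 1, W, device=canvas.device).view(1, 1, W)
    x, y, radius, color = stroke[:, 0], stroke[:, 1], stroke[:, 2], stroke[:, 3:6]
    dist2 = (xs - x.view(B, 1, 1)) ** 2 + (ys - y.view(B, 1, 1)) ** 2
    mask = torch.exp(-dist2 / (1e-4 + (0.25 * radius.view(B, 1, 1)) ** 2)).unsqueeze(1)
    return canvas * (1 - mask) + color.view(B, 3, 1, 1) * mask

actor, critic = PaintingActor(), Critic()
canvas, target = torch.zeros(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
stroke = actor(canvas, target)
next_canvas = render_stroke(canvas, stroke)
# Reward: how much the stroke reduced the pixel loss against the target image.
reward = F.mse_loss(canvas, target) - F.mse_loss(next_canvas, target)
advantage = reward + 0.99 * critic(next_canvas, target) - critic(canvas, target)
print(float(reward), float(advantage))
```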

    Capturing digital images that align with a target image model

    Publication Number: US10958829B2

    Publication Date: 2021-03-23

    Application Number: US16743976

    Filing Date: 2020-01-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes systems, methods, and non-transitory computer readable media that can guide a user to align a camera feed captured by a user client device with a target digital image. In particular, the systems described herein can analyze a camera feed to determine image attributes for the camera feed. The systems can compare the image attributes of the camera feed with corresponding target image attributes of a target digital image. Additionally, the systems can generate and provide instructions to guide a user to align the image attributes of the camera feed with the target image attributes of the target digital image.
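
    A hedged sketch of the general idea (not the patented system): compute a few coarse attributes for the live camera frame and for the target image, compare them, and emit guidance text. The attribute set, thresholds, and function names below are illustrative assumptions.

```python
# Minimal sketch (assumed): compare simple image attributes and emit guidance strings.
import numpy as np

def attributes(gray: np.ndarray) -> dict:
    """Compute coarse attributes for a grayscale image with values in [0, 1]."""
    return {
        "brightness": float(gray.mean()),
        "contrast": float(gray.std()),
        "center_of_mass_x": float((gray * np.arange(gray.shape[1])).sum()
                                  / (gray.sum() * gray.shape[1] + 1e-8)),
    }

def guidance(frame: np.ndarray, target: np.ndarray, tol: float = 0.05) -> list[str]:
    """Return instructions that move the frame's attributes toward the target's."""
    f, t = attributes(frame), attributes(target)
    tips = []
    if f["brightness"] < t["brightness"] - tol:
        tips.append("Increase exposure or move toward the light.")
    elif f["brightness"] > t["brightness"] + tol:
        tips.append("Reduce exposure or move away from the light.")
    if f["center_of_mass_x"] < t["center_of_mass_x"] - tol:
        tips.append("Pan the camera left to shift the subject right in frame.")
    elif f["center_of_mass_x"] > t["center_of_mass_x"] + tol:
        tips.append("Pan the camera right to shift the subject left in frame.")
    return tips or ["Attributes aligned - capture the shot."]

frame = np.random.rand(240, 320)
target = np.clip(frame + 0.2, 0, 1)   # a brighter stand-in for the target image
print(guidance(frame, target))
```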

    Three-dimensional mesh deformation using deep learning neural networks

    Publication Number: US10916054B2

    Publication Date: 2021-02-09

    Application Number: US16184149

    Filing Date: 2018-11-08

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for deforming a 3D source mesh to resemble a target object representation which may be a 2D image or another 3D mesh. A methodology implementing the techniques according to an embodiment includes extracting a set of one or more source features from a source 3D mesh. The source 3D mesh includes a plurality of source points representing a source object, and the extracting of the set of source features is independent of an ordering of the source points. The method also includes extracting a set of one or more target features from the target object representation, and decoding a concatenation of the set of source features and the set of target features to predict vertex offsets for application to the source 3D mesh to generate a deformed 3D mesh based on the target object. The feature extractions and the vertex offset predictions may employ Deep Neural Networks.
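
    One way to picture the described architecture (an assumed sketch, not the patented network): a PointNet-style shared MLP with max pooling yields an order-invariant source feature, a small CNN encodes the target image, and an MLP decoder maps the concatenated features to per-vertex offsets. Module names and sizes are illustrative.

```python
# Minimal sketch (assumed architecture) of order-invariant mesh deformation toward a target image.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Per-point MLP followed by max pooling, so the output ignores point ordering."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, points):                 # points: (N, 3)
        per_point = self.mlp(points)           # (N, dim)
        return per_point, per_point.max(dim=0).values   # global feature: (dim,)

class ImageEncoder(nn.Module):
    """Small CNN that summarizes the target image into a global feature vector."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, image):                  # image: (1, 3, H, W)
        return self.net(image).squeeze(0)      # (dim,)

class OffsetDecoder(nn.Module):
    """Predicts a 3D offset per vertex from per-point, global source, and target features."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim * 3, 128), nn.ReLU(), nn.Linear(128, 3))

    def forward(self, per_point, source_global, target_global):
        n = per_point.shape[0]
        globals_ = torch.cat([source_global, target_global]).expand(n, -1)
        return self.mlp(torch.cat([per_point, globals_], dim=1))   # (N, 3)

source_vertices = torch.rand(500, 3)           # vertices of the source 3D mesh
target_image = torch.rand(1, 3, 64, 64)        # target object representation (a 2D image)
point_enc, img_enc, decoder = PointEncoder(), ImageEncoder(), OffsetDecoder()
per_point, src_global = point_enc(source_vertices)
offsets = decoder(per_point, src_global, img_enc(target_image))
deformed_vertices = source_vertices + offsets  # deformed mesh keeps the source connectivity
print(deformed_vertices.shape)                 # torch.Size([500, 3])
```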

    CAPTURING DIGITAL IMAGES THAT ALIGN WITH A TARGET IMAGE MODEL

    Publication Number: US20200154037A1

    Publication Date: 2020-05-14

    Application Number: US16743976

    Filing Date: 2020-01-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure includes systems, methods, and non-transitory computer readable media that can guide a user to align a camera feed captured by a user client device with a target digital image. In particular, the systems described herein can analyze a camera feed to determine image attributes for the camera feed. The systems can compare the image attributes of the camera feed with corresponding target image attributes of a target digital image. Additionally, the systems can generate and provide instructions to guide a user to align the image attributes of the camera feed with the target image attributes of the target digital image.

    THREE-DIMENSIONAL MESH DEFORMATION USING DEEP LEARNING NEURAL NETWORKS

    Publication Number: US20200151952A1

    Publication Date: 2020-05-14

    Application Number: US16184149

    Filing Date: 2018-11-08

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for deforming a 3D source mesh to resemble a target object representation which may be a 2D image or another 3D mesh. A methodology implementing the techniques according to an embodiment includes extracting a set of one or more source features from a source 3D mesh. The source 3D mesh includes a plurality of source points representing a source object, and the extracting of the set of source features is independent of an ordering of the source points. The method also includes extracting a set of one or more target features from the target object representation, and decoding a concatenation of the set of source features and the set of target features to predict vertex offsets for application to the source 3D mesh to generate a deformed 3D mesh based on the target object. The feature extractions and the vertex offset predictions may employ Deep Neural Networks.

    Event image curation
    Granted Patent

    Publication Number: US10565472B2

    Publication Date: 2020-02-18

    Application Number: US15935816

    Filing Date: 2018-03-26

    Applicant: Adobe Inc.

    Abstract: In embodiments of event image curation, a computing device includes memory that stores a collection of digital images associated with a type of event, such as a digital photo album of digital photos associated with the event, or a video of image frames that is associated with the event. A curation application implements a convolutional neural network, which receives the digital images and a designation of the type of event. The convolutional neural network can then determine an importance rating of each digital image within the collection of the digital images based on the type of the event. The importance rating of a digital image is representative of an importance of the digital image to a person in the context of the type of event. The convolutional neural network generates an output of representative digital images from the collection based on the importance rating of each digital image.
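
    An assumed sketch of such a scorer (not the patented network): a small CNN encodes each image, an embedding encodes the event type, and a linear head maps the concatenation to an importance score used to pick the top-rated images. The event list and layer sizes are placeholders.

```python
# Minimal sketch (assumed): importance scoring of album images conditioned on event type.
import torch
import torch.nn as nn

EVENT_TYPES = ["wedding", "birthday", "hiking", "concert"]

class ImportanceScorer(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.event_embed = nn.Embedding(len(EVENT_TYPES), feat_dim)
        self.head = nn.Linear(feat_dim * 2, 1)

    def forward(self, images, event_id):
        feats = self.cnn(images)                                       # (B, feat_dim)
        event = self.event_embed(event_id).expand(feats.shape[0], -1)  # same event for all images
        return self.head(torch.cat([feats, event], dim=1)).squeeze(1)  # (B,) importance ratings

album = torch.rand(12, 3, 64, 64)                  # a small photo album
event_id = torch.tensor([EVENT_TYPES.index("wedding")])
scores = ImportanceScorer()(album, event_id)
top = scores.topk(4).indices                       # indices of the 4 highest-rated images
print(top.tolist())
```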

    Image Cropping Suggestion Using Multiple Saliency Maps

    Publication Number: US20190244327A1

    Publication Date: 2019-08-08

    Application Number: US16384593

    Filing Date: 2019-04-15

    Applicant: Adobe Inc.

    CPC classification number: G06T3/40 G06K9/4671 G06T3/0012 G06T11/60 G06T2210/22

    Abstract: Image cropping suggestion using multiple saliency maps is described. In one or more implementations, component scores, indicative of visual characteristics established for visually-pleasing croppings, are computed for candidate image croppings using multiple different saliency maps. The visual characteristics on which a candidate image cropping is scored may be indicative of its composition quality, an extent to which it preserves content appearing in the scene, and a simplicity of its boundary. Based on the component scores, the croppings may be ranked with regard to each of the visual characteristics. The rankings may be used to cluster the candidate croppings into groups of similar croppings, such that croppings in a group are different by less than a threshold amount and croppings in different groups are different by at least the threshold amount. Based on the clustering, croppings may then be chosen, e.g., to present them to a user for selection.
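
    A rough sketch of the scoring-and-grouping flow (assumed, simplified to two component scores): candidate crops are scored for content preservation and boundary simplicity against saliency and edge maps, ranked, and then greedily grouped so that a crop overlapping an already-kept crop by more than a threshold is dropped.

```python
# Minimal sketch (assumed scoring): rank candidate crops and keep one suggestion per group.
import numpy as np

def content_score(saliency, crop):
    """Fraction of total saliency preserved inside the crop (x, y, w, h)."""
    x, y, w, h = crop
    return saliency[y:y + h, x:x + w].sum() / (saliency.sum() + 1e-8)

def boundary_score(edge_map, crop):
    """Boundary simplicity: low edge energy along the crop border is better."""
    x, y, w, h = crop
    border = np.concatenate([
        edge_map[y, x:x + w], edge_map[y + h - 1, x:x + w],
        edge_map[y:y + h, x], edge_map[y:y + h, x + w - 1],
    ])
    return 1.0 - float(border.mean())

def iou(a, b):
    """Intersection-over-union of two crops, used to detect near-duplicates."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    return inter / (aw * ah + bw * bh - inter)

h, w = 120, 160
saliency = np.random.rand(h, w)          # stand-in for a learned saliency map
edge_map = np.random.rand(h, w)          # stand-in for an edge/gradient map
crops = [(x, y, 80, 60) for x in (0, 20, 40, 60) for y in (0, 20, 40)]

ranked = sorted(crops, reverse=True,
                key=lambda c: content_score(saliency, c) + boundary_score(edge_map, c))

# Greedy clustering: keep a crop only if it differs enough from every crop already kept.
suggestions = []
for crop in ranked:
    if all(iou(crop, kept) < 0.8 for kept in suggestions):
        suggestions.append(crop)
print(suggestions[:3])
```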

    3D model generation from 2D images
    Granted Patent

    Publication Number: US10318102B2

    Publication Date: 2019-06-11

    Application Number: US15005927

    Filing Date: 2016-01-25

    Applicant: Adobe Inc.

    Abstract: Techniques and systems are described to generate a three-dimensional model from two-dimensional images. A plurality of inputs is received, formed through user interaction with a user interface. Each of the plurality of inputs defines a respective user-specified point on the object in a respective one of the plurality of images. A plurality of estimated points on the object are generated automatically and without user intervention. Each of the plurality of estimated points corresponds to a respective user-specified point for other ones of the plurality of images. The plurality of estimated points is displayed for the other ones of the plurality of images in the user interface by a computing device. A mesh of the three-dimensional model of the object is generated by the computing device by mapping respective ones of the user-specified points to respective ones of the estimated points in the plurality of images.
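
    For intuition only (assumed math, not the patented pipeline), the sketch below triangulates a single 3D point from a user-specified point in one image and its estimated correspondence in a second image, using toy camera matrices; a mesh would be built from many such mapped point pairs.

```python
# Minimal sketch (assumed): linear (DLT) triangulation of one user-specified/estimated point pair.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Recover a 3D point from two 2D observations and 3x4 projection matrices."""
    A = np.stack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                          # dehomogenize

# Two toy cameras: identity pose and a camera translated along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

ground_truth = np.array([0.3, -0.2, 4.0])        # a 3D point on the object
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
user_point = project(P1, ground_truth)           # point the user clicked in image 1
estimated_point = project(P2, ground_truth)      # correspondence estimated in image 2

recovered = triangulate(P1, P2, user_point, estimated_point)
print(np.round(recovered, 3))                    # ~ [0.3, -0.2, 4.0]
```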

    System for automatic object mask and hotspot tracking

    Publication Number: US12223661B2

    Publication Date: 2025-02-11

    Application Number: US17735728

    Filing Date: 2022-05-03

    Applicant: Adobe Inc.

    Abstract: Systems and methods provide editing operations in a smart editing system that may generate a focal point within a mask of an object for each frame of a video segment and perform editing effects on the frames of the video segment to quickly provide users with natural video editing effects. An eye-gaze network may produce a hotspot map of predicted focal points in a video frame. These predicted focal points may then be used by a gaze-to-mask network to determine objects in the image and generate an object mask for each of the detected objects. This process may then be repeated to effectively track the trajectory of objects and object focal points in videos. Based on the determined trajectory of an object in a video clip and editing parameters, the editing engine may produce editing effects relative to an object for the video clip.
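
    A stand-in sketch of the per-frame loop (the two "networks" below are random placeholders, not the patented models): take the peak of a predicted hotspot map as the focal point, grow an object mask around it, and accumulate the focal points across frames as the object's trajectory.

```python
# Minimal sketch (stand-ins): per-frame hotspot peak -> mask -> trajectory accumulation.
import numpy as np

def predict_hotspot_map(frame):
    """Stand-in for the eye-gaze network: a random attention map per frame."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.random(frame.shape[:2])

def gaze_to_mask(frame, focal_point, radius=20):
    """Stand-in for the gaze-to-mask network: a disk-shaped mask around the focal point."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    fy, fx = focal_point
    return ((ys - fy) ** 2 + (xs - fx) ** 2) <= radius ** 2

video = [np.random.rand(120, 160, 3) for _ in range(5)]   # a short clip of 5 frames
trajectory, masks = [], []
for frame in video:
    hotspots = predict_hotspot_map(frame)
    focal_point = np.unravel_index(hotspots.argmax(), hotspots.shape)  # (row, col) peak
    trajectory.append(focal_point)
    masks.append(gaze_to_mask(frame, focal_point))

# An editing effect (e.g., a zoom or highlight) could now be keyed to this trajectory.
print(trajectory)
```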

    Generating differentiable procedural materials

    Publication Number: US12198231B2

    Publication Date: 2025-01-14

    Application Number: US18341618

    Filing Date: 2023-06-26

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to using an end-to-end differentiable pipeline for optimizing parameters of a base procedural material to generate a procedural material corresponding to a target physical material. For example, the disclosed systems can receive a digital image of a target physical material. In response, the disclosed systems can retrieve a differentiable procedural material for use as a base procedural material. The disclosed systems can compare a digital image of the base procedural material with the digital image of the target physical material using a loss function, such as a style loss function that compares visual appearance. Based on the determined loss, the disclosed systems can modify the parameters of the base procedural material to determine procedural material parameters for the target physical material. The disclosed systems can generate a procedural material corresponding to the base procedural material using the determined procedural material parameters.
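
    An assumed, much-simplified sketch of the optimization loop: a tiny differentiable procedural texture is rendered, compared to the target image with a stand-in appearance loss (the patent describes a style loss), and its parameters are updated by gradient descent. The stripe "material" and loss are illustrative placeholders.

```python
# Minimal sketch (assumed): gradient-based fitting of procedural material parameters.
import math
import torch

def render_material(params, size=64):
    """Differentiable stand-in for a procedural material graph: colored stripes."""
    frequency, phase = params["frequency"], params["phase"]
    color_a, color_b = params["color_a"], params["color_b"]
    x = torch.linspace(0, 1, size).view(1, size)
    stripes = 0.5 * (1 + torch.sin(2 * math.pi * frequency * x + phase))  # (1, size)
    stripes = stripes.expand(size, size).unsqueeze(0)                     # (1, H, W)
    return color_a.view(3, 1, 1) * stripes + color_b.view(3, 1, 1) * (1 - stripes)

def appearance_loss(image, target):
    """Simplified appearance loss: match per-channel means and standard deviations."""
    i, t = image.reshape(3, -1), target.reshape(3, -1)
    return ((i.mean(dim=1) - t.mean(dim=1)) ** 2).sum() + \
           ((i.std(dim=1) - t.std(dim=1)) ** 2).sum()

target = torch.rand(3, 64, 64)                      # digital image of the target physical material
params = {
    "frequency": torch.tensor(3.0, requires_grad=True),
    "phase": torch.tensor(0.0, requires_grad=True),
    "color_a": torch.rand(3, requires_grad=True),
    "color_b": torch.rand(3, requires_grad=True),
}
optimizer = torch.optim.Adam(params.values(), lr=0.05)

for step in range(200):                             # end-to-end optimization of material parameters
    optimizer.zero_grad()
    loss = appearance_loss(render_material(params), target)
    loss.backward()
    optimizer.step()

print(float(loss))                                  # lower than at initialization
```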
