Compositing Aware Digital Image Search
    Invention Application

    Publication No.: US20200349189A1

    Publication Date: 2020-11-05

    Application No.: US16929429

    Filing Date: 2020-07-15

    Applicant: Adobe Inc.

    Abstract: Compositing aware digital image search techniques and systems are described that leverage machine learning. In one example, a compositing aware image search system employs a two-stream convolutional neural network (CNN) to jointly learn feature embeddings from foreground digital images that capture a foreground object and background digital images that capture a background scene. In order to train models of the convolutional neural networks, triplets of training digital images are used. Each triplet may include a positive foreground digital image and a positive background digital image taken from the same digital image. The triplet also contains a negative foreground or background digital image that is dissimilar to the positive foreground or background digital image that is also included as part of the triplet.
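The triplet training described above can be illustrated with a margin-based triplet loss. This is a minimal sketch, not the patented implementation; the function name, margin value, and toy embeddings are all hypothetical.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    # Margin-based triplet loss: pull the positive pair together and
    # push the negative at least `margin` farther away (illustrative form).
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

# Foreground embedding as anchor, background from the same digital image
# as positive, background from a dissimilar image as negative.
fg = np.array([1.0, 0.0])
bg_same = np.array([0.9, 0.1])
bg_other = np.array([-1.0, 0.0])
loss_good = triplet_loss(fg, bg_same, bg_other)  # compatible pair: no penalty
loss_bad = triplet_loss(fg, bg_other, bg_same)   # mismatched pair: penalized
```

Minimizing this loss over many such triplets drives the two streams toward a shared embedding space in which compatible foreground/background pairs lie close together.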

    Object Detection In Images
    Invention Application

    Publication No.: US20200272822A1

    Publication Date: 2020-08-27

    Application No.: US16874114

    Filing Date: 2020-05-14

    Applicant: Adobe Inc.

    Abstract: In implementations of object detection in images, object detectors are trained using heterogeneous training datasets. A first training dataset is used to train an image tagging network to determine an attention map of an input image for a target concept. A second training dataset is used to train a conditional detection network that accepts as conditional inputs the attention map and a word embedding of the target concept. Despite the conditional detection network being trained with a training dataset having a small number of seen classes (e.g., classes in a training dataset), it generalizes to novel, unseen classes by concept conditioning, since the target concept propagates through the conditional detection network via the conditional inputs, thus influencing classification and region proposal. Hence, classes of objects that can be detected are expanded, without the need to scale training databases to include additional classes.
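One way to picture the concept conditioning above: gate the image features with the attention map, then append the target concept's word embedding at every spatial location so the concept propagates into classification and region proposal. This is a hedged sketch of that idea only; the shapes, gating rule, and function name are assumptions, not the network's actual architecture.

```python
import numpy as np

def concept_condition(features, attention, word_embedding):
    # features: (H, W, C) image features; attention: (H, W) map for the
    # target concept; word_embedding: (D,) embedding of the concept.
    gated = features * attention[..., None]          # suppress off-concept regions
    h, w, _ = features.shape
    tiled = np.broadcast_to(word_embedding, (h, w, word_embedding.size))
    # Every location now carries both gated appearance and the concept vector.
    return np.concatenate([gated, tiled], axis=-1)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))
attn = np.zeros((4, 4)); attn[1, 2] = 1.0            # concept attended at one cell
embed = rng.normal(size=(16,))
conditioned = concept_condition(feats, attn, embed)
```

Because the concept enters only through these conditional inputs, swapping in the embedding of an unseen class changes the conditioning without retraining, which is the mechanism behind the generalization claim.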

    HIERARCHICAL SCALE MATCHING AND PATCH ESTIMATION FOR IMAGE STYLE TRANSFER WITH ARBITRARY RESOLUTION

    Publication No.: US20200258204A1

    Publication Date: 2020-08-13

    Application No.: US16271058

    Filing Date: 2019-02-08

    Applicant: Adobe Inc.

    Abstract: A style of a digital image is transferred to another digital image of arbitrary resolution. A high-resolution (HR) content image is segmented into several low-resolution (LR) patches. The resolution of a style image is matched to have the same resolution as the LR content image patches. Style transfer is then performed on a patch-by-patch basis using, for example, a pair of feature transforms—whitening and coloring. The patch-by-patch style transfer process is then repeated at several increasing resolutions, or scale levels, of both the content and style images. The results of the style transfer at each scale level are incorporated into successive scale levels up to and including the original HR scale. As a result, style transfer can be performed with images having arbitrary resolutions to produce visually pleasing results with good spatial consistency.
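The whitening and coloring feature transforms mentioned above can be sketched on flattened feature maps: whiten the content features to identity covariance, then color them with the style features' covariance and mean. This is a generic whitening-coloring transform, assumed here as an illustration of the per-patch step rather than the patent's exact procedure.

```python
import numpy as np

def whiten_color(content, style, eps=1e-5):
    # content, style: (C, N) matrices of C-channel features at N locations.
    def stats(x):
        mu = x.mean(axis=1, keepdims=True)
        xc = x - mu
        cov = xc @ xc.T / (x.shape[1] - 1)
        vals, vecs = np.linalg.eigh(cov)
        return mu, xc, vals.clip(min=eps), vecs

    mu_c, centered_c, vals_c, vecs_c = stats(content)
    mu_s, _, vals_s, vecs_s = stats(style)
    # Whitening: content features now have identity covariance.
    whitened = vecs_c @ np.diag(vals_c ** -0.5) @ vecs_c.T @ centered_c
    # Coloring: impose the style features' covariance, then its mean.
    colored = vecs_s @ np.diag(vals_s ** 0.5) @ vecs_s.T @ whitened
    return colored + mu_s

rng = np.random.default_rng(1)
content = rng.normal(size=(3, 500))
style = rng.normal(size=(3, 500)) * np.array([[2.0], [0.5], [1.0]]) + 1.0
out = whiten_color(content, style)
```

After the transform the output features carry the style's second-order statistics while retaining the content's spatial arrangement; in the hierarchical scheme this step is repeated per patch at each scale level.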

    IDENTIFYING VISUALLY SIMILAR DIGITAL IMAGES UTILIZING DEEP LEARNING

    Publication No.: US20200210763A1

    Publication Date: 2020-07-02

    Application No.: US16817234

    Filing Date: 2020-03-12

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for utilizing a deep neural network-based model to identify similar digital images for query digital images. For example, the disclosed systems utilize a deep neural network-based model to analyze query digital images to generate deep neural network-based representations of the query digital images. In addition, the disclosed systems can generate results of visually-similar digital images for the query digital images based on comparing the deep neural network-based representations with representations of candidate digital images. Furthermore, the disclosed systems can identify visually similar digital images based on user-defined attributes and image masks to emphasize specific attributes or portions of query digital images.
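The comparison step above reduces to ranking candidate representations by similarity to the query representation. A minimal sketch, assuming cosine similarity over precomputed embedding vectors (the metric and names are illustrative, not taken from the disclosure):

```python
import numpy as np

def rank_similar(query_vec, candidate_vecs):
    # Rank candidates by cosine similarity to a query embedding
    # (stand-in for the deep-network-based representations).
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    sims = c @ q
    order = np.argsort(-sims)      # most similar candidate first
    return order, sims

query = np.array([1.0, 0.0, 0.0])
candidates = np.array([
    [0.9, 0.1, 0.0],   # near-duplicate of the query
    [0.0, 1.0, 0.0],   # unrelated
    [0.5, 0.5, 0.0],   # partial match
])
order, sims = rank_similar(query, candidates)
```

The attribute- and mask-based emphasis described in the abstract would correspond to recomputing the query embedding from a weighted or masked version of the query image before this ranking step.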

    Image-blending via alignment or photometric adjustments computed by a neural network

    Publication No.: US10600171B2

    Publication Date: 2020-03-24

    Application No.: US15914659

    Filing Date: 2018-03-07

    Applicant: Adobe Inc.

    Abstract: Certain embodiments involve blending images using neural networks to automatically generate alignment or photometric adjustments that control image blending operations. For instance, a foreground image and a background image data are provided to an adjustment-prediction network that has been trained, using a reward network, to compute alignment or photometric adjustments that optimize blending reward scores. An adjustment action (e.g., an alignment or photometric adjustment) is computed by applying the adjustment-prediction network to the foreground image and the background image data. A target background region is extracted from the background image data by applying the adjustment action to the background image data. The target background region is blended with the foreground image, and the resultant blended image is outputted.
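The extract-then-blend step at the end of the pipeline can be sketched as follows, with the predicted adjustment action reduced to a simple translation offset and a scalar alpha. This is an assumption-laden toy version: the real adjustment-prediction network outputs richer alignment and photometric adjustments.

```python
import numpy as np

def blend(foreground, background, alpha, dx, dy):
    # Extract the target background region at the predicted offset
    # (standing in for the alignment adjustment), then alpha-blend the
    # foreground into it and write the result back.
    h, w = foreground.shape[:2]
    region = background[dy:dy + h, dx:dx + w].astype(float)
    blended = alpha * foreground + (1 - alpha) * region
    out = background.astype(float).copy()
    out[dy:dy + h, dx:dx + w] = blended
    return out

bg = np.zeros((6, 6))          # toy background image
fg = np.ones((2, 2))           # toy foreground patch
result = blend(fg, bg, alpha=0.75, dx=2, dy=3)
```

In the described system the offset and blending parameters would come from applying the adjustment-prediction network to the two images, with the reward network having shaped those predictions during training.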

    UTILIZING DEEP LEARNING TO RATE ATTRIBUTES OF DIGITAL IMAGES

    Publication No.: US20200065956A1

    Publication Date: 2020-02-27

    Application No.: US16670314

    Filing Date: 2019-10-31

    Applicant: Adobe Inc.

    Abstract: Systems and methods are disclosed for estimating the aesthetic quality of digital images using deep learning. In particular, the disclosed systems and methods describe training a neural network to generate an aesthetic quality score for digital images. The neural network includes a training structure that compares relative rankings of pairs of training images to accurately predict a relative ranking of a digital image. Additionally, in training the neural network, an image rating system can utilize content-aware and user-aware sampling techniques to identify pairs of training images that have similar content and/or that have been rated by the same or different users. Using these sampling techniques, the neural network can be trained to accurately predict aesthetic quality ratings that reflect the subjective opinions of most users, as well as to provide aesthetic scores that represent the wide spectrum of aesthetic preferences of various users.
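The pairwise comparison in the training structure above is typically realized as a margin ranking loss: the image rated higher in a pair should score above the other by at least a margin. A minimal sketch with hypothetical scores and margin:

```python
def pairwise_ranking_loss(score_hi, score_lo, margin=1.0):
    # Margin ranking loss over one training pair: zero when the
    # higher-rated image already scores at least `margin` above the
    # lower-rated one, positive otherwise (illustrative form).
    return max(0.0, margin - (score_hi - score_lo))

# Scores the network might assign to a (higher-rated, lower-rated) pair.
loss_ordered = pairwise_ranking_loss(2.5, 0.5)   # gap of 2.0 >= margin
loss_violated = pairwise_ranking_loss(0.2, 0.6)  # wrong order, penalized
```

Content-aware and user-aware sampling would decide which (higher, lower) pairs feed into this loss, so that the learned scores track rankings by the same user or across users as described.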

    JOINT BLUR MAP ESTIMATION AND BLUR DESIRABILITY CLASSIFICATION FROM AN IMAGE

    Publication No.: US20190362199A1

    Publication Date: 2019-11-28

    Application No.: US15989436

    Filing Date: 2018-05-25

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for blur classification. The techniques utilize an image content feature map, a blur map, and an attention map, thereby combining low-level blur estimation with a high-level understanding of important image content in order to perform blur classification. The techniques allow for programmatically determining if blur exists in an image, and determining what type of blur it is (e.g., high blur, low blur, middle or neutral blur, or no blur). According to one example embodiment, if blur is detected, an estimate of spatially-varying blur amounts is performed and blur desirability is categorized in terms of image quality.
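The combination of a low-level blur map with a high-level attention map can be pictured as attention-weighted pooling of the blur estimate followed by bucketing into the named categories. The thresholds, category names, and pooling rule below are assumptions for illustration, not the disclosed classifier.

```python
import numpy as np

def classify_blur(blur_map, attention_map):
    # Weight each pixel's estimated blur by how important the content
    # there is, pool to a single score, then bucket it (illustrative
    # thresholds; the real classifier is learned, not hand-set).
    w = attention_map / attention_map.sum()
    score = float((blur_map * w).sum())
    if score < 0.2:
        label = "no blur"
    elif score < 0.5:
        label = "low blur"
    elif score < 0.8:
        label = "middle blur"
    else:
        label = "high blur"
    return score, label

blur = np.zeros((4, 4)); blur[:2, :] = 0.9   # top half heavily blurred
attn = np.zeros((4, 4)); attn[:2, :] = 1.0   # subject sits in the top half
score, label = classify_blur(blur, attn)
```

The same spatially-varying blur amounts could then be judged desirable (e.g., intentional background bokeh) or not, depending on whether the blur falls on attended content.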

    Training a classifier algorithm used for automatically generating tags to be applied to images

    Publication No.: US10430689B2

    Publication Date: 2019-10-01

    Application No.: US15680282

    Filing Date: 2017-08-18

    Applicant: Adobe Inc.

    Abstract: This disclosure relates to training a classifier algorithm that can be used for automatically selecting tags to be applied to a received image. For example, a computing device can group training images together based on the training images having similar tags. The computing device trains a classifier algorithm to identify the training images as semantically similar to one another based on the training images being grouped together. The trained classifier algorithm is used to determine that an input image is semantically similar to an example tagged image. A tag is generated for the input image using tag content from the example tagged image based on determining that the input image is semantically similar to the tagged image.
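The grouping step above ("training images having similar tags") can be sketched with a tag-set overlap criterion. The Jaccard measure, greedy grouping rule, and threshold here are hypothetical stand-ins for whatever similarity rule the computing device actually applies.

```python
def jaccard(tags_a, tags_b):
    # Tag-set overlap in [0, 1]: 1 means identical tag sets.
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b)

def group_by_tags(tagged_images, threshold=0.5):
    # Greedily group training images whose tag sets overlap enough;
    # each group accumulates the union of its members' tags.
    groups = []
    for name, tags in tagged_images:
        for group in groups:
            if jaccard(tags, group["tags"]) >= threshold:
                group["members"].append(name)
                group["tags"] |= set(tags)
                break
        else:
            groups.append({"members": [name], "tags": set(tags)})
    return groups

images = [
    ("img1", ["beach", "sunset", "ocean"]),
    ("img2", ["beach", "ocean", "sand"]),
    ("img3", ["city", "night"]),
]
groups = group_by_tags(images)
```

A classifier trained to treat each group's members as semantically similar can then match an untagged input image to a group and copy tag content from an example tagged image in it, as the abstract describes.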

    Generating a compact video feature representation in a digital medium environment

    Publication No.: US10430661B2

    Publication Date: 2019-10-01

    Application No.: US15384831

    Filing Date: 2016-12-20

    Applicant: Adobe Inc.

    Abstract: Techniques and systems are described to generate a compact video feature representation for sequences of frames in a video. In one example, values of features are extracted from each frame of a plurality of frames of a video using machine learning, e.g., through use of a convolutional neural network. A video feature representation is generated of temporal order dynamics of the video, e.g., through use of a recurrent neural network. For example, a maximum value is maintained of each feature of the plurality of features that has been reached for the plurality of frames in the video. A timestamp is also maintained as indicative of when the maximum value is reached for each feature of the plurality of features. The video feature representation is then output as a basis to determine similarity of the video with at least one other video based on the video feature representation.
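The max-plus-timestamp representation described above is concrete enough to sketch directly: for each feature, keep the maximum value reached across the frames and the frame index at which it occurred. Only the packing of the two halves into one vector is an assumption here.

```python
import numpy as np

def compact_video_feature(frame_features):
    # frame_features: (T, C) per-frame feature values for T frames.
    # For each of the C features, keep the maximum value reached across
    # the frames and the frame index (timestamp) where it occurred.
    frame_features = np.asarray(frame_features, dtype=float)
    max_vals = frame_features.max(axis=0)
    timestamps = frame_features.argmax(axis=0)
    return np.concatenate([max_vals, timestamps.astype(float)])

# Three frames, two per-frame features (e.g., CNN activations).
frames = [
    [0.2, 0.9],
    [0.8, 0.1],
    [0.5, 0.4],
]
rep = compact_video_feature(frames)
```

Because the representation has fixed length regardless of the number of frames, two videos can be compared directly (e.g., by vector distance) without aligning their frame sequences.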
