TEXT TO COLOR PALETTE GENERATOR
    Invention Application

    Publication Number: US20220277039A1

    Publication Date: 2022-09-01

    Application Number: US17186625

    Application Date: 2021-02-26

    Applicant: ADOBE INC.

    Abstract: The present disclosure describes systems and methods for information retrieval. Embodiments of the disclosure provide a color embedding network trained using machine learning techniques to generate embedded color representations for color terms included in a text search query. For example, techniques described herein are used to represent color text in the same space as color embeddings (e.g., an embedding space created by determining a histogram of LAB-based colors in a three-dimensional (3D) space). Further, techniques are described for indexing color palettes for all the searchable images in the search space. Accordingly, color terms in a text query are directly converted into a color palette, and an image search system can return one or more search images with corresponding color palettes that are relevant to (e.g., within a threshold distance from) the color palette of the text query.
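The abstract's core idea of embedding colors as a 3D LAB histogram and matching palettes by threshold distance can be illustrated with a minimal sketch. This is not the patent's actual implementation; the bin count, LAB ranges, sample colors, and helper names are assumptions for illustration only.

```python
import math

def lab_histogram(colors, bins=4, lo=-128.0, hi=128.0):
    """Embed a list of (L, a, b) colors as a flattened, normalized 3D
    histogram. Hypothetical helper: L is binned over [0, 100] and the
    a/b channels over [lo, hi]."""
    hist = [0.0] * (bins ** 3)
    for L, a, b in colors:
        i = min(int(L / 100.0 * bins), bins - 1)
        j = min(int((a - lo) / (hi - lo) * bins), bins - 1)
        k = min(int((b - lo) / (hi - lo) * bins), bins - 1)
        hist[(i * bins + j) * bins + k] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def palette_distance(p, q):
    """Euclidean distance between two palette embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

# A text query whose color term was mapped to a LAB palette entry
query = lab_histogram([(53.0, 80.0, 67.0)])            # e.g. "red"
indexed = {
    "sunset.jpg": lab_histogram([(54.0, 78.0, 64.0)]),  # reddish image
    "ocean.jpg":  lab_histogram([(32.0, 79.0, -108.0)]),  # bluish image
}
threshold = 0.5
hits = [n for n, p in indexed.items() if palette_distance(query, p) <= threshold]
```

With these sample values, only the reddish image falls within the threshold distance of the query palette.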

    DETERMINING FINE-GRAIN VISUAL STYLE SIMILARITIES FOR DIGITAL IMAGES BY EXTRACTING STYLE EMBEDDINGS DISENTANGLED FROM IMAGE CONTENT

    Publication Number: US20220092108A1

    Publication Date: 2022-03-24

    Application Number: US17025041

    Application Date: 2020-09-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image. The disclosed systems can also learn parameters for one or more style extraction neural networks through weakly supervised training without a specifically labeled style ontology for sample digital images.
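The search step described above (combining complementary style embeddings, then ranking a repository by similarity) can be sketched as follows. The combination operator (concatenation), the cosine similarity metric, and all embedding values are assumptions; the patent does not specify them.

```python
import math

def combine(e1, e2):
    """Combine complementary style embeddings from two extraction
    networks by concatenation (an assumed combination operator)."""
    return list(e1) + list(e2)

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical embeddings from two style extraction branches
query = combine([0.9, 0.1], [0.2, 0.8])
repository = {
    "watercolor_1": combine([0.88, 0.12], [0.25, 0.75]),
    "line_art_7":   combine([0.05, 0.95], [0.90, 0.10]),
}
# Rank repository images by style similarity to the query
ranked = sorted(repository, key=lambda k: cosine(query, repository[k]),
                reverse=True)
```

The image whose combined style embedding points in nearly the same direction as the query's ranks first.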

    Generating contextual tags for digital content

    Publication Number: US11232147B2

    Publication Date: 2022-01-25

    Application Number: US16525366

    Application Date: 2019-07-29

    Applicant: Adobe Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for determining multi-term contextual tags for digital content and propagating the multi-term contextual tags to additional digital content. For instance, the disclosed systems can utilize search query supervision to determine and associate multi-term contextual tags (e.g., tags that represent a specific concept based on the order of the terms in the tag) with digital content. Furthermore, the disclosed systems can propagate the multi-term contextual tags determined for the digital content to additional digital content based on similarities between the digital content and additional digital content (e.g., utilizing clustering techniques). Additionally, the disclosed systems can provide digital content as search results based on the associated multi-term contextual tags.
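The propagation step (spreading multi-term contextual tags to similar content via clustering) can be illustrated with a simplified stand-in: each untagged item inherits the tags of every tagged item within a similarity radius. The embeddings, radius, and item names are hypothetical, and nearest-neighbor matching here stands in for the patent's unspecified clustering technique.

```python
import math

def dist(u, v):
    """Euclidean distance between two content embeddings."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def propagate_tags(tagged, untagged, radius=0.5):
    """Propagate multi-term contextual tags to similar digital content.
    Multi-term tags such as "new york" are kept whole, preserving the
    term order that carries the specific concept."""
    out = {}
    for name, emb in untagged.items():
        tags = set()
        for tagged_emb, tagged_tags in tagged:
            if dist(emb, tagged_emb) <= radius:
                tags.update(tagged_tags)
        out[name] = sorted(tags)
    return out

# One item already tagged via search query supervision (hypothetical)
tagged = [([0.10, 0.90], ["new york", "skyline"])]
untagged = {"img_204": [0.15, 0.88],   # visually similar content
            "img_377": [0.90, 0.10]}   # dissimilar content
result = propagate_tags(tagged, untagged)
```

Only the similar item inherits the contextual tags; the dissimilar one receives none.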

    Image search persona techniques and systems

    Publication Number: US10592548B2

    Publication Date: 2020-03-17

    Application Number: US14828085

    Application Date: 2015-08-17

    Applicant: Adobe Inc.

    Abstract: Image search persona techniques and systems are described. In one or more implementations, a digital medium environment is described for controlling image searches by one or more computing devices. An image search request and an indication of one or more personas of one or more respective users associated with the image search request are received by the one or more computing devices. The one or more personas specify characteristics of the one or more respective users themselves. A plurality of images is obtained by the one or more computing devices based on the image search request. The plurality of images is filtered by the one or more computing devices based on the one or more personas, and a search result is generated by the one or more computing devices from the filtered plurality of images.
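The filtering step above can be sketched as a small function that keeps only images matching a persona's characteristics. The per-image topic metadata and the persona's interest attributes are hypothetical; the patent does not define the persona schema.

```python
def filter_by_persona(images, personas):
    """Filter image search results by user personas: keep an image if
    any of its topics overlaps any persona's interests (an assumed,
    minimal matching rule)."""
    wanted = {trait for p in personas for trait in p["interests"]}
    return [img for img in images if wanted & img["topics"]]

# Hypothetical search results and persona
results = [
    {"id": "a1", "topics": {"hiking", "mountains"}},
    {"id": "b2", "topics": {"office", "meeting"}},
]
personas = [{"user": "u1", "interests": {"hiking", "travel"}}]
filtered = filter_by_persona(results, personas)
```

Only the image whose topics intersect the persona's interests survives the filter; the search result would then be generated from this filtered set.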

    High resolution conditional face generation

    Publication Number: US11887216B2

    Publication Date: 2024-01-30

    Application Number: US17455796

    Application Date: 2021-11-19

    Applicant: ADOBE INC.

    CPC classification number: G06T11/00 G06N3/08 G06V40/168 G06V40/172

    Abstract: The present disclosure describes systems and methods for image processing. Embodiments of the present disclosure include an image processing apparatus configured to generate modified images (e.g., synthetic faces) by conditionally changing attributes or landmarks of an input image. A machine learning model of the image processing apparatus encodes the input image to obtain a joint conditional vector that represents attributes and landmarks of the input image in a vector space. The joint conditional vector is then modified, according to the techniques described herein, to form a latent vector used to generate a modified image. In some cases, the machine learning model is trained using a generative adversarial network (GAN) with a normalization technique, followed by joint training of a landmark embedding and attribute embedding (e.g., to reduce inference time).
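The abstract's flow of encoding attributes and landmarks into a joint conditional vector, then modifying it to form a latent vector, can be sketched at a toy scale. Concatenation as the joint encoding, the attribute names, and the landmark coordinates are all assumptions; the real system learns these embeddings with a GAN.

```python
def joint_conditional_vector(attributes, landmarks):
    """Represent attributes and facial landmarks in one vector space by
    concatenation (a stand-in for the learned joint embedding)."""
    return list(attributes) + [coord for point in landmarks for coord in point]

def edit_attribute(vector, index, value):
    """Conditionally change one attribute slot, yielding the latent
    vector used to generate the modified image."""
    out = list(vector)
    out[index] = value
    return out

attrs = [0.0, 1.0]               # hypothetical [smiling, glasses] codes
landmarks = [(0.4, 0.5), (0.6, 0.5)]  # hypothetical eye coordinates
joint = joint_conditional_vector(attrs, landmarks)
latent = edit_attribute(joint, 0, 1.0)  # turn the "smiling" attribute on
```

The landmark coordinates are untouched while the edited attribute slot changes, mirroring how conditional edits alter one property of the generated face while preserving the rest.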
