Finding similar persons in images
    11.
    Granted Patent

    Publication Number: US11436865B1

    Publication Date: 2022-09-06

    Application Number: US17207178

    Filing Date: 2021-03-19

    Applicant: Adobe Inc.

    Abstract: Embodiments are disclosed for finding similar persons in images. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an image query, the image query including an input image that includes a representation of a person, generating a first cropped image including a representation of the person's face and a second cropped image including a representation of the person's body, generating an image embedding for the input image by combining a face embedding corresponding to the first cropped image and a body embedding corresponding to the second cropped image, and querying an image repository in embedding space by comparing the image embedding to a plurality of image embeddings associated with a plurality of images in the image repository to obtain one or more images based on similarity to the input image in the embedding space.
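
    The following is a minimal sketch of the query flow the abstract describes: a face embedding and a body embedding are combined into a single image embedding, which is then compared against precomputed embeddings for the repository. The normalization, concatenation, and cosine-similarity choices here are illustrative assumptions, not the patented method itself.

```python
import numpy as np

def combine_embeddings(face_emb: np.ndarray, body_emb: np.ndarray) -> np.ndarray:
    """Combine face and body embeddings (here: L2-normalize each, then concatenate)."""
    face_emb = face_emb / np.linalg.norm(face_emb)
    body_emb = body_emb / np.linalg.norm(body_emb)
    return np.concatenate([face_emb, body_emb])

def query_repository(query_emb: np.ndarray, repo_embs: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Rank repository images by cosine similarity to the query embedding and return the top_k indices."""
    repo = repo_embs / np.linalg.norm(repo_embs, axis=1, keepdims=True)
    scores = repo @ (query_emb / np.linalg.norm(query_emb))
    return np.argsort(-scores)[:top_k]
```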

    TEXT ADJUSTED VISUAL SEARCH
    12.
    Patent Application

    Publication Number: US20220138247A1

    Publication Date: 2022-05-05

    Application Number: US17090150

    Filing Date: 2020-11-05

    Applicant: ADOBE INC.

    Abstract: Embodiments of the technology described herein provide improved visual search results by combining a visual similarity and a textual similarity between images. In an embodiment, the visual similarity is quantified as a visual similarity score and the textual similarity is quantified as a textual similarity score. The textual similarity is determined based on text, such as a title, associated with the image. The overall similarity of two images is quantified as a weighted combination of the textual similarity score and the visual similarity score. In an embodiment, the weighting between the textual similarity score and the visual similarity score is user configurable through a control on the search interface. In one embodiment, the aggregate similarity score is the sum of a weighted visual similarity score and a weighted textual similarity score.
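
    A small sketch of the scoring the abstract describes: the aggregate similarity is a weighted sum of the visual and textual similarity scores, with the weight exposed as a user-configurable control. The [0, 1] slider semantics are an assumption for illustration.

```python
def aggregate_similarity(visual_score: float, textual_score: float, text_weight: float) -> float:
    """Blend visual and textual similarity; text_weight in [0, 1] could come from a search-interface slider."""
    return (1.0 - text_weight) * visual_score + text_weight * textual_score

# Example: weight textual and visual similarity equally.
score = aggregate_similarity(visual_score=0.8, textual_score=0.6, text_weight=0.5)  # 0.7
```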

    Finding similar persons in images
    13.
    Granted Patent

    Publication Number: US11915520B2

    Publication Date: 2024-02-27

    Application Number: US17902349

    Filing Date: 2022-09-02

    Applicant: Adobe Inc.

    CPC classification number: G06V40/172 G06F18/00 G06F18/29 G06V30/194 G06V40/10

    Abstract: Embodiments are disclosed for finding similar persons in images. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an image query, the image query including an input image that includes a representation of a person, generating a first cropped image including a representation of the person's face and a second cropped image including a representation of the person's body, generating an image embedding for the input image by combining a face embedding corresponding to the first cropped image and a body embedding corresponding to the second cropped image, and querying an image repository in embedding space by comparing the image embedding to a plurality of image embeddings associated with a plurality of images in the image repository to obtain one or more images based on similarity to the input image in the embedding space.

    TEXT TO COLOR PALETTE GENERATOR
    16.
    Patent Application

    Publication Number: US20220277039A1

    Publication Date: 2022-09-01

    Application Number: US17186625

    Filing Date: 2021-02-26

    Applicant: ADOBE INC.

    Abstract: The present disclosure describes systems and methods for information retrieval. Embodiments of the disclosure provide a color embedding network trained using machine learning techniques to generate embedded color representations for color terms included in a text search query. For example, techniques described herein are used to represent color text in a same space as color embeddings (e.g., an embedding space created by determining a histogram of LAB based colors in a three-dimensional (3D) space). Further, techniques are described for indexing color palettes for all the searchable images in the search space. Accordingly, color terms in a text query are directly converted into a color palette and an image search system can return one or more search images with corresponding color palettes that are relevant to (e.g., within a threshold distance from) the color palette of the text query.
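
    A rough sketch of the retrieval step under the stated approach: palettes are embedded as coarse histograms over LAB space, and images whose indexed palette embeddings fall within a distance threshold of the query palette are returned. The histogram binning and Euclidean distance are illustrative assumptions; the text-to-palette embedding network itself is not reproduced.

```python
import numpy as np

def lab_histogram(lab_colors: np.ndarray, bins: int = 8) -> np.ndarray:
    """Build a normalized 3D histogram over LAB color values (N x 3 array), flattened to a vector."""
    hist, _ = np.histogramdd(
        lab_colors,
        bins=(bins, bins, bins),
        range=((0, 100), (-128, 128), (-128, 128)),
    )
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-8)

def search_by_color(query_palette_emb: np.ndarray, indexed_embs: np.ndarray, threshold: float) -> np.ndarray:
    """Return indices of images whose indexed palette embedding is within threshold of the query embedding."""
    dists = np.linalg.norm(indexed_embs - query_palette_emb, axis=1)
    return np.where(dists <= threshold)[0]
```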

    DETERMINING FINE-GRAIN VISUAL STYLE SIMILARITIES FOR DIGITAL IMAGES BY EXTRACTING STYLE EMBEDDINGS DISENTANGLED FROM IMAGE CONTENT

    Publication Number: US20220092108A1

    Publication Date: 2022-03-24

    Application Number: US17025041

    Filing Date: 2020-09-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image. The disclosed systems can also learn parameters for one or more style extraction neural networks through weakly supervised training without a specifically labeled style ontology for sample digital images.
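
    A minimal sketch of the retrieval described above: complementary style embeddings from two different extraction networks are combined into one vector and compared against precomputed style embeddings for the repository. Averaging normalized embeddings is just one plausible combination choice; the two-branch autoencoder and discriminative networks themselves are not shown.

```python
import numpy as np

def combined_style_embedding(emb_autoencoder: np.ndarray, emb_discriminative: np.ndarray) -> np.ndarray:
    """Combine complementary style embeddings (here: average of the L2-normalized vectors)."""
    a = emb_autoencoder / np.linalg.norm(emb_autoencoder)
    b = emb_discriminative / np.linalg.norm(emb_discriminative)
    return (a + b) / 2.0

def similar_styles(query_emb: np.ndarray, repo_embs: np.ndarray, top_k: int = 10) -> np.ndarray:
    """Rank repository images by cosine similarity of style embeddings and return the top_k indices."""
    repo = repo_embs / np.linalg.norm(repo_embs, axis=1, keepdims=True)
    scores = repo @ (query_emb / np.linalg.norm(query_emb))
    return np.argsort(-scores)[:top_k]
```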

    Text-to-Visual Machine Learning Embedding Techniques

    Publication Number: US20210365727A1

    Publication Date: 2021-11-25

    Application Number: US17398317

    Filing Date: 2021-08-10

    Applicant: Adobe Inc.

    Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data which may expand availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
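
    A hedged sketch in the spirit of that loss: distances between the text embedding and the positive and negative image embeddings are penalized separately, rather than through a single combined term. This is an illustrative triplet-style formulation with a hypothetical margin, not the exact loss from the patent.

```python
import numpy as np

def separate_losses(text_emb: np.ndarray, pos_emb: np.ndarray, neg_emb: np.ndarray, margin: float = 0.2):
    """Compute the positive and negative losses separately against the text embedding."""
    pos_loss = np.linalg.norm(text_emb - pos_emb)                      # pull the positive image toward the text
    neg_loss = max(0.0, margin - np.linalg.norm(text_emb - neg_emb))   # push the negative image beyond the margin
    return pos_loss, neg_loss
```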

    Text-to-visual machine learning embedding techniques

    Publication Number: US11144784B2

    Publication Date: 2021-10-12

    Application Number: US16426264

    Filing Date: 2019-05-30

    Applicant: Adobe Inc.

    Abstract: Text-to-visual machine learning embedding techniques are described that overcome the challenges of conventional techniques in a variety of ways. These techniques include use of query-based training data which may expand availability and types of training data usable to train a model. Generation of negative digital image samples is also described that may increase accuracy in training the model using machine learning. A loss function is also described that supports increased accuracy and computational efficiency by computing losses separately, e.g., between positive or negative sample embeddings and a text embedding.
