GENERATING EMBEDDINGS FOR TEXT AND IMAGE QUERIES WITHIN A COMMON EMBEDDING SPACE FOR VISUAL-TEXT IMAGE SEARCHES

    Publication No.: US20230418861A1

    Publication Date: 2023-12-28

    Application No.: US17809503

    Application Date: 2022-06-28

    Applicant: Adobe Inc.

    CPC classification number: G06F16/535 G06F16/532 G06F16/3334

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that implement related image search and image modification processes using various search engines and a consolidated graphical user interface. For instance, in one or more embodiments, the disclosed systems receive an input digital image and a search input, and further modify the input digital image using the image search results retrieved in response to the search input. In some cases, the search input is a multi-modal search input having multiple queries (e.g., an image query and a text query), and the disclosed systems retrieve the image search results utilizing a weighted combination of the queries. In some implementations, the disclosed systems generate an input embedding for the search input (e.g., the multi-modal search input) and retrieve the image search results using the input embedding.
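The weighted combination of an image query and a text query in a common embedding space can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes CLIP-style encoders have already produced the embeddings, and all function names, dimensions, and weights are illustrative.

```python
import numpy as np

def normalize(v):
    """Scale vectors to unit length along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def combined_query(image_emb, text_emb, text_weight=0.5):
    """Weighted combination of image and text query embeddings
    that share a common embedding space (illustrative sketch)."""
    q = (1.0 - text_weight) * normalize(image_emb) + text_weight * normalize(text_emb)
    return normalize(q)

def search(index_embs, query, top_k=3):
    """Rank indexed image embeddings by cosine similarity to the query."""
    scores = normalize(index_embs) @ query
    return np.argsort(-scores)[:top_k]

rng = np.random.default_rng(0)
index = rng.normal(size=(100, 64))   # embeddings of 100 indexed images
img_q = rng.normal(size=64)          # embedding of the query image
txt_q = rng.normal(size=64)          # embedding of the query text
q = combined_query(img_q, txt_q, text_weight=0.3)
print(search(index, q))
```

Shifting `text_weight` toward 0 biases retrieval toward images visually similar to the image query; toward 1, toward images matching the text description.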

    GENERATING SCALABLE FONTS UTILIZING MULTI-IMPLICIT NEURAL FONT REPRESENTATIONS

    Publication No.: US20230110114A1

    Publication Date: 2023-04-13

    Application No.: US17499611

    Application Date: 2021-10-12

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for accurately and flexibly generating scalable fonts utilizing multi-implicit neural font representations. For instance, the disclosed systems combine deep learning with differentiable rasterization to generate a multi-implicit neural font representation of a glyph. For example, the disclosed systems utilize an implicit differentiable font neural network to determine a font style code for an input glyph as well as distance values for locations of the glyph to be rendered based on a glyph label and the font style code. Further, the disclosed systems rasterize the distance values utilizing a differentiable rasterization model and combine the rasterized distance values to generate a permutation-invariant version of the glyph corresponding to the glyph set.
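The core idea of rendering from distance values through a differentiable rasterization step can be illustrated with a toy example. This is not the patented method: it uses an analytic circle instead of a learned glyph network, and the sigmoid-based soft rasterizer is an assumed, illustrative choice.

```python
import numpy as np

def circle_sdf(xs, ys, cx=0.0, cy=0.0, r=0.5):
    """Signed distance to a circle: negative inside, positive outside.
    Stands in for distance values a neural network would predict."""
    return np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2) - r

def soft_rasterize(sdf, sharpness=50.0):
    """Differentiable rasterization: a sigmoid maps signed distance to
    pixel coverage, so gradients can flow back through rendering."""
    return 1.0 / (1.0 + np.exp(sharpness * sdf))

n = 64
coords = np.linspace(-1, 1, n)
xs, ys = np.meshgrid(coords, coords)
img = soft_rasterize(circle_sdf(xs, ys))
# Pixels well inside the shape approach 1; pixels well outside approach 0.
print(img[n // 2, n // 2], img[0, 0])
```

Because the coverage function is smooth rather than a hard inside/outside threshold, a loss computed on the rasterized image can be backpropagated to the parameters that produced the distance values.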

    AUTOMATIC MAKEUP TRANSFER USING SEMI-SUPERVISED LEARNING

    Publication No.: US20210295045A1

    Publication Date: 2021-09-23

    Application No.: US16822878

    Application Date: 2020-03-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, computer-implemented methods, and non-transitory computer-readable media for automatically transferring makeup from a reference face image to a target face image using a neural network trained using semi-supervised learning. For example, the disclosed systems can receive, at a neural network, a target face image and a reference face image, where the target face image is selected by a user via a graphical user interface (GUI) and the reference face image has makeup. The systems transfer, by the neural network, the makeup from the reference face image to the target face image, where the neural network is trained to transfer the makeup from the reference face image to the target face image using semi-supervised learning. The systems output for display the makeup on the target face image.
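A semi-supervised objective of the kind the abstract alludes to typically mixes a supervised term (for pairs with ground-truth transfers) with an unsupervised term on unpaired images. The sketch below is a toy stand-in under those assumptions; the loss functions and the cycle-consistency choice are illustrative, not taken from the patent.

```python
import numpy as np

def supervised_loss(pred, target):
    """Pixel-wise error against a ground-truth makeup transfer."""
    return float(np.mean((pred - target) ** 2))

def unsupervised_loss(reconstructed, original):
    """Illustrative cycle-consistency term for unpaired images:
    removing then re-applying makeup should recover the original."""
    return float(np.mean(np.abs(reconstructed - original)))

def total_loss(pred, target, recon, orig, lam=0.1):
    """Semi-supervised objective: supervised term plus a weighted
    unsupervised term (lam balances the two)."""
    return supervised_loss(pred, target) + lam * unsupervised_loss(recon, orig)

rng = np.random.default_rng(1)
pred, target = rng.random((8, 8)), rng.random((8, 8))
recon, orig = rng.random((8, 8)), rng.random((8, 8))
print(round(total_loss(pred, target, recon, orig), 4))
```

The unsupervised term lets the network learn from abundant unpaired face images, while the smaller supervised set anchors what a correct transfer looks like.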

    Super-resolution with reference images

    Publication No.: US10885608B2

    Publication Date: 2021-01-05

    Application No.: US16001656

    Application Date: 2018-06-06

    Applicant: Adobe Inc.

    Abstract: In implementations of super-resolution with reference images, a super-resolution image is generated based on reference images. Reference images are not constrained to have the same or similar content as the low-resolution image being super-resolved. Texture features indicating high-frequency content are extracted into texture feature maps, and patches of the texture feature maps of reference images are matched based on texture feature similarity. A content feature map indicating low-frequency content of an image is adaptively fused, by a neural network, with a swapped texture feature map that includes patches of reference images, based on the similarity of texture features. A user interface allows a user to select regions of multiple reference images to use for super-resolution. Hence, a super-resolution image can be generated with rich texture details incorporated from multiple reference images, even in the absence of reference images having similar content to the image being upscaled.
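The patch-swapping step described above can be sketched simply: for each patch of the input's texture feature map, find the most similar reference patch and substitute it. This toy version operates on random flat vectors rather than deep feature maps, and normalized inner product as the similarity measure is an assumption for illustration.

```python
import numpy as np

def best_match(query_patches, ref_patches):
    """Index of the most similar reference patch for each query patch,
    measured by normalized inner product (cosine similarity)."""
    q = query_patches / np.linalg.norm(query_patches, axis=1, keepdims=True)
    r = ref_patches / np.linalg.norm(ref_patches, axis=1, keepdims=True)
    return np.argmax(q @ r.T, axis=1)

def swap(query_patches, ref_patches):
    """Replace each query patch with its best-matching reference patch,
    producing a 'swapped' texture feature map."""
    return ref_patches[best_match(query_patches, ref_patches)]

rng = np.random.default_rng(2)
queries = rng.normal(size=(10, 16))   # 10 patches of the input feature map
refs = rng.normal(size=(50, 16))      # 50 patches from reference images
swapped = swap(queries, refs)
print(swapped.shape)
```

Because matching is done on texture features rather than raw content, high-frequency detail can be borrowed from references that depict entirely different subjects.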
