LEARNING COPY SPACE USING REGRESSION AND SEGMENTATION NEURAL NETWORKS

    Publication Number: US20200160111A1

    Publication Date: 2020-05-21

    Application Number: US16191724

    Filing Date: 2018-11-15

    Applicant: ADOBE INC.

    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
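
    The two-stage segmentation architecture summarized above can be pictured with a short sketch. The following PyTorch code is only an illustration under assumed layer sizes, depths, and class ordering (natural copy space, manufactured copy space, background); none of these specific values are taken from the patent itself.

        # Minimal sketch of the two-stage segmentation CNN described in the abstract.
        # Channel counts, depth, and layer names are illustrative assumptions.
        import torch
        import torch.nn as nn

        class BoundaryRefinement(nn.Module):
            """Residual block intended to sharpen mask boundaries (assumed design)."""
            def __init__(self, channels):
                super().__init__()
                self.refine = nn.Sequential(
                    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                )

            def forward(self, x):
                return x + self.refine(x)

        class SegmentationCNN(nn.Module):
            def __init__(self, num_classes=3):  # natural, manufactured, background
                super().__init__()
                # First stage: a stack of strided convolutional layers.
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(64, num_classes, 3, stride=2, padding=1),
                )
                # Second stage: pairs of boundary refinement and bilinear up-sampling layers.
                self.decoder = nn.Sequential(*[
                    nn.Sequential(
                        BoundaryRefinement(num_classes),
                        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                    )
                    for _ in range(3)  # undo the three stride-2 reductions
                ])

            def forward(self, image):
                return self.decoder(self.encoder(image))  # per-pixel class logits

        masks = SegmentationCNN()(torch.randn(1, 3, 256, 256))  # -> (1, 3, 256, 256)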

    Textual design agent
    Patent Grant

    Publication Number: US11886793B2

    Publication Date: 2024-01-30

    Application Number: US17466679

    Filing Date: 2021-09-03

    Applicant: ADOBE INC.

    CPC classification number: G06F40/109 G06F40/103 G06F40/106 G06F40/166

    Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (a combination of color, font, and size) to identify the best designs. The top designs may be output to the user.
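
    As a rough illustration of how the four recommendation models and the whole-design evaluation could fit together, the sketch below combines candidate locations, fonts, sizes, and colors into draft designs and ranks them. Every object and function name here (the models dictionary, its recommend and score methods, DraftDesign) is a hypothetical placeholder, not the patented implementation.

        # Illustrative sketch only: combine location, font, size, and color
        # recommendations into draft designs and rank them. All model objects
        # and the scoring logic are hypothetical placeholders.
        from dataclasses import dataclass
        from itertools import product

        @dataclass
        class DraftDesign:
            location: tuple  # (x, y) placement on the background image
            font: str
            size: int
            color: tuple     # (r, g, b)
            score: float = 0.0

        def generate_draft_designs(background, text, models, top_k=5):
            locations = models["location"].recommend(background)   # candidate (x, y) points
            fonts = models["font"].recommend(background, text)
            sizes = models["size"].recommend(background, text)
            colors = models["color"].recommend(background, text)

            drafts = []
            for loc, font, size, color in product(locations, fonts, sizes, colors):
                draft = DraftDesign(loc, font, size, color)
                # Evaluate the combination as a whole rather than each choice in isolation.
                draft.score = models["evaluator"].score(background, text, draft)
                drafts.append(draft)

            # Return the top-scoring draft designs for presentation to the user.
            return sorted(drafts, key=lambda d: d.score, reverse=True)[:top_k]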

    Learning copy space using regression and segmentation neural networks

    Publication Number: US10970599B2

    Publication Date: 2021-04-06

    Application Number: US16191724

    Filing Date: 2018-11-15

    Applicant: ADOBE INC.

    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
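
    To complement the segmentation sketch given earlier, the following sketch illustrates a regression CNN whose size and type outputs are conditioned on a predicted probability that a copy space is present at all. The layer sizes, head names, and the multiplicative gating scheme are assumptions for illustration only.

        # Sketch of a regression CNN that predicts copy-space presence and
        # conditions its size and type outputs on that presence estimate.
        # Architecture details are illustrative assumptions.
        import torch
        import torch.nn as nn

        class CopySpaceRegressor(nn.Module):
            def __init__(self):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.presence_head = nn.Linear(64, 1)  # is a copy space present?
                self.size_head = nn.Linear(64, 1)      # relative size of the copy space
                self.type_head = nn.Linear(64, 2)      # natural vs. manufactured

            def forward(self, image):
                features = self.backbone(image)
                presence = torch.sigmoid(self.presence_head(features))
                # Size and type predictions are gated on the presence estimate.
                size = torch.sigmoid(self.size_head(features)) * presence
                type_logits = self.type_head(features)
                return presence, size, type_logits

        presence, size, type_logits = CopySpaceRegressor()(torch.randn(1, 3, 256, 256))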

    Learning copy space using regression and segmentation neural networks

    Publication Number: US11605168B2

    Publication Date: 2023-03-14

    Application Number: US17215067

    Filing Date: 2021-03-29

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for characterizing and defining the location of a copy space in an image. A methodology implementing the techniques according to an embodiment includes applying a regression convolutional neural network (CNN) to an image. The regression CNN is configured to predict properties of the copy space such as size and type (natural or manufactured). The prediction is conditioned on a determination of the presence of the copy space in the image. The method further includes applying a segmentation CNN to the image. The segmentation CNN is configured to generate one or more pixel-level masks to define the location of copy spaces in the image, whether natural or manufactured, or to define the location of a background region of the image. The segmentation CNN may include a first stage comprising convolutional layers and a second stage comprising pairs of boundary refinement layers and bilinear up-sampling layers.
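
    The pixel-level masks mentioned in the abstract can be derived from per-pixel class scores; the short sketch below shows one plausible way to do so. The class ordering (natural, manufactured, background) and the argmax-based decision rule are assumptions, not details from the patent.

        # Sketch: turn per-pixel class logits from a segmentation CNN into one
        # boolean mask per class. Class ordering is an assumption.
        import torch

        def logits_to_masks(logits):            # logits: (1, 3, H, W)
            labels = logits.argmax(dim=1)       # (1, H, W), values in {0, 1, 2}
            return {name: (labels == idx)       # boolean pixel-level masks
                    for idx, name in enumerate(["natural", "manufactured", "background"])}

        masks = logits_to_masks(torch.randn(1, 3, 256, 256))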

    DETERMINING FINE-GRAIN VISUAL STYLE SIMILARITIES FOR DIGITAL IMAGES BY EXTRACTING STYLE EMBEDDINGS DISENTANGLED FROM IMAGE CONTENT

    Publication Number: US20220092108A1

    Publication Date: 2022-03-24

    Application Number: US17025041

    Filing Date: 2020-09-18

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image. The disclosed systems can also learn parameters for one or more style extraction neural network through weakly supervised training without a specifically labeled style ontology for sample digital images.
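
    As a sketch of the retrieval side described above, the code below combines complementary style embeddings from two stand-in extraction networks and ranks repository images by cosine similarity to the query embedding. The function names, and the assumption that repository embeddings are precomputed and unit-normalized, are illustrative only.

        # Illustrative sketch: combine style embeddings from two different style
        # extraction networks and rank repository images by style similarity.
        # The extraction networks themselves are stand-ins; only retrieval is shown.
        import torch
        import torch.nn.functional as F

        def combined_style_embedding(image, autoencoder_branch, discriminative_net):
            # Concatenate complementary embeddings from the two extractors.
            e1 = autoencoder_branch(image)      # style branch of a two-branch autoencoder
            e2 = discriminative_net(image)      # weakly supervised discriminative network
            return F.normalize(torch.cat([e1, e2], dim=-1), dim=-1)

        def most_similar(query_embedding, repository_embeddings, top_k=10):
            # query_embedding: 1-D tensor of length D; repository_embeddings: (N, D),
            # rows assumed unit-normalized, so the dot product is cosine similarity.
            scores = repository_embeddings @ query_embedding
            return scores.topk(top_k).indices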

    TEXTUAL DESIGN AGENT
    Patent Application

    Publication Number: US20230070390A1

    Publication Date: 2023-03-09

    Application Number: US17466679

    Filing Date: 2021-09-03

    Applicant: ADOBE INC.

    Abstract: Embodiments of the technology described herein are an intelligent system that aims to expedite a text design process by providing text design predictions interactively. The system works with a typical text design scenario comprising a background image and one or more text strings as input. In the design scenario, the text string is to be placed on top of the background. The textual design agent may include a location recommendation model that recommends a location on the background image to place the text. The textual design agent may also include a font recommendation model, a size recommendation model, and a color recommendation model. The output of these four models may be combined to generate draft designs that are evaluated as a whole (a combination of color, font, and size) to identify the best designs. The top designs may be output to the user.
