Generating Images for Virtual Try-On and Pose Transfer

    Publication No.: US20230267663A1

    Publication Date: 2023-08-24

    Application No.: US17678237

    Application Date: 2022-02-23

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06T7/70 G06T7/11 G06N3/0454

    Abstract: In implementations of systems for generating images for virtual try-on and pose transfer, a computing device implements a generator system to receive input data describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment. Candidate appearance flow maps are computed that warp the garment based on the pose at different pixel-block sizes using a first machine learning model. The generator system generates a warped garment image by combining the candidate appearance flow maps as an aggregate per-pixel displacement map using a convolutional gated recurrent network. A conditional segmentation mask is predicted that segments portions of a geometry of the person using a second machine learning model. The generator system outputs a digital image that depicts the person in the pose wearing the garment based on the warped garment image and the conditional segmentation mask using a third machine learning model.
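    The gated aggregation step above can be illustrated with a minimal sketch. The abstract describes a convolutional gated recurrent network combining multi-scale candidate flow maps into one per-pixel displacement map; the sketch below stands in for that with a simplified scalar update gate (the function names and gate parameterization are assumptions for illustration, not the patented architecture).

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def aggregate_flow_maps(candidate_flows, gate_weights):
        """Combine candidate appearance flow maps into one aggregate
        per-pixel displacement map via a simplified gated recurrent update.

        candidate_flows: list of (H, W, 2) arrays, one per pixel-block scale.
        gate_weights: scalars standing in for the learned ConvGRU gate
                      parameters (hypothetical simplification).
        """
        state = np.zeros_like(candidate_flows[0])  # hidden state = running flow
        for flow, w in zip(candidate_flows, gate_weights):
            z = sigmoid(w * np.ones(flow.shape))   # update gate (simplified)
            state = (1 - z) * state + z * flow     # GRU-style gated blend
        return state
    ```

    With a strongly open gate the aggregate tracks the latest candidate flow; with a neutral gate it blends the scales, which is the behavior a learned ConvGRU would interpolate between.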

    Deep learning based visual compatibility prediction for bundle recommendations

    Publication No.: US11640634B2

    Publication Date: 2023-05-02

    Application No.: US16865572

    Application Date: 2020-05-04

    Applicant: ADOBE INC.

    Abstract: Systems, methods, and computer storage media are disclosed for predicting visual compatibility between a bundle of catalog items (e.g., a partial outfit) and a candidate catalog item to add to the bundle. Visual compatibility prediction may be jointly conditioned on item type, context, and style by determining a first compatibility score jointly conditioned on type (e.g., category) and context, determining a second compatibility score conditioned on outfit style, and combining the first and second compatibility scores into a unified visual compatibility score. A unified visual compatibility score may be determined for each of a plurality of candidate items, and the candidate item with the highest unified visual compatibility score may be selected to add to the bundle (e.g., to fill in the blank for the partial outfit).
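    The two-score fusion can be sketched as follows. The abstract only states that a type/context score and a style score are combined into a unified score; the cosine similarities, mean-pooled context, and the `alpha` weighting below are illustrative assumptions, not the trained model.

    ```python
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def unified_score(bundle_embs, cand_emb, style_emb, alpha=0.5):
        """Blend a type/context score with a style score (hypothetical weighting)."""
        context = np.mean(bundle_embs, axis=0)   # context of the partial outfit
        s_context = cosine(context, cand_emb)    # first score: type + context
        s_style = cosine(style_emb, cand_emb)    # second score: outfit style
        return alpha * s_context + (1 - alpha) * s_style

    def pick_best(bundle_embs, style_emb, candidates):
        """Score each candidate and return the index of the best one."""
        scores = [unified_score(bundle_embs, c, style_emb) for c in candidates]
        return int(np.argmax(scores))
    ```

    Selecting the argmax over candidates matches the abstract's "fill in the blank" behavior for completing a partial outfit.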

    IDENTIFYING DIGITAL ATTRIBUTES FROM MULTIPLE ATTRIBUTE GROUPS WITHIN TARGET DIGITAL IMAGES UTILIZING A DEEP COGNITIVE ATTRIBUTION NEURAL NETWORK

    Publication No.: US20210073267A1

    Publication Date: 2021-03-11

    Application No.: US16564831

    Application Date: 2019-09-09

    Applicant: Adobe, Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating tags for an object portrayed in a digital image based on predicted attributes of the object. For example, the disclosed systems can utilize interleaved neural network layers of alternating inception layers and dilated convolution layers to generate a localization feature vector. Based on the localization feature vector, the disclosed systems can generate attribute localization feature embeddings, for example, using a pooling layer such as a global average pooling layer. The disclosed systems can then apply the attribute localization feature embeddings to corresponding attribute group classifiers to generate tags based on predicted attributes. In particular, attribute group classifiers can predict attributes as associated with a query image (e.g., based on a scoring comparison with other potential attributes of an attribute group). Based on the generated tags, the disclosed systems can respond to tag queries and search queries.
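    The pooling-then-group-classification pipeline can be sketched minimally. Global average pooling is named in the abstract; the linear per-attribute classifiers and the dictionary layout below are hypothetical stand-ins for the trained attribute group heads.

    ```python
    import numpy as np

    def global_average_pool(feature_map):
        """Collapse (H, W, C) localization features to a (C,) embedding."""
        return feature_map.mean(axis=(0, 1))

    def predict_attributes(feature_map, group_classifiers):
        """Per attribute group, score every attribute and keep the top one.

        group_classifiers: {group_name: {attr_name: (C,) weight vector}}
        (hypothetical linear heads standing in for the trained classifiers).
        """
        emb = global_average_pool(feature_map)
        tags = {}
        for group, attrs in group_classifiers.items():
            scores = {a: float(w @ emb) for a, w in attrs.items()}
            tags[group] = max(scores, key=scores.get)  # in-group score comparison
        return tags
    ```

    Each group emits exactly one tag, mirroring the abstract's "scoring comparison with other potential attributes of an attribute group."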

    Entropy based synthetic data generation for augmenting classification system training data

    Publication No.: US11423264B2

    Publication Date: 2022-08-23

    Application No.: US16659147

    Application Date: 2019-10-21

    Applicant: Adobe Inc.

    Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each being a training instance and corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances and a corresponding training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set and the system is then re-trained on the augmented training data set.
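    The selection-and-synthesis step can be sketched as follows. The abstract says instances are selected "based on entropies" and then "combined"; the highest-entropy selection rule and the mixup-style linear blend below are assumptions chosen for illustration, not necessarily the patented combination.

    ```python
    import numpy as np

    def entropy(probs):
        """Shannon entropy of a predicted class distribution."""
        p = np.clip(np.asarray(probs, dtype=float), 1e-12, 1.0)
        return float(-(p * np.log(p)).sum())

    def select_by_entropy(instances, predicted_probs):
        """Pick the instance whose prediction is most uncertain (assumed rule)."""
        ents = [entropy(p) for p in predicted_probs]
        return instances[int(np.argmax(ents))]

    def make_synthetic(minority, majority, min_label, maj_label, lam=0.5):
        """Blend one minority- and one majority-class instance and their labels
        (mixup-style blend; the patent only states the two are combined)."""
        x = lam * np.asarray(minority) + (1 - lam) * np.asarray(majority)
        y = lam * np.asarray(min_label) + (1 - lam) * np.asarray(maj_label)
        return x, y
    ```

    The resulting (instance, label) tuple is appended to the training set, and the classifier is re-trained on the augmented set.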

    Similarity propagation for one-shot and few-shot image segmentation

    Publication No.: US11367271B2

    Publication Date: 2022-06-21

    Application No.: US16906954

    Application Date: 2020-06-19

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for one-shot and few-shot image segmentation on classes of objects that were not represented during training. In some embodiments, a dual prediction scheme may be applied in which query and support masks are jointly predicted using a shared decoder, which aids in similarity propagation between the query and support features. Additionally or alternatively, foreground and background attentive fusion may be applied to utilize cues from foreground and background feature similarities between the query and support images. Finally, to prevent overfitting on class-conditional similarities across training classes, input channel averaging may be applied for the query image during training. Accordingly, the techniques described herein may be used to achieve state-of-the-art performance for both one-shot and few-shot segmentation tasks.
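    The input channel averaging mentioned above is simple enough to show directly: each pixel's channels are replaced by their mean, removing color cues that the model could overfit to across training classes. This minimal sketch assumes an (H, W, C) image array; applying it only to the query image during training follows the abstract.

    ```python
    import numpy as np

    def channel_average(query_img):
        """Replace each channel with the per-pixel channel mean.

        Used as a training-time regularizer on the query image to curb
        overfitting on class-conditional similarities.
        """
        mean = query_img.mean(axis=-1, keepdims=True)        # (H, W, 1)
        return np.repeat(mean, query_img.shape[-1], axis=-1)  # back to (H, W, C)
    ```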

    SIMILARITY PROPAGATION FOR ONE-SHOT AND FEW-SHOT IMAGE SEGMENTATION

    Publication No.: US20210397876A1

    Publication Date: 2021-12-23

    Application No.: US16906954

    Application Date: 2020-06-19

    Applicant: ADOBE INC.

    Abstract: Embodiments of the present invention provide systems, methods, and computer storage media for one-shot and few-shot image segmentation on classes of objects that were not represented during training. In some embodiments, a dual prediction scheme may be applied in which query and support masks are jointly predicted using a shared decoder, which aids in similarity propagation between the query and support features. Additionally or alternatively, foreground and background attentive fusion may be applied to utilize cues from foreground and background feature similarities between the query and support images. Finally, to prevent overfitting on class-conditional similarities across training classes, input channel averaging may be applied for the query image during training. Accordingly, the techniques described herein may be used to achieve state-of-the-art performance for both one-shot and few-shot segmentation tasks.

    Cloth warping using multi-scale patch adversarial loss

    Publication No.: US11080817B2

    Publication Date: 2021-08-03

    Application No.: US16673574

    Application Date: 2019-11-04

    Applicant: Adobe Inc.

    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
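    The patch-sampling half of the multi-scale patch adversarial loss can be sketched as below: patches of several sizes are cut from corresponding locations of the warped and non-warped (real) clothing images, ready to be scored by a discriminator. The function name and random sampling scheme are illustrative assumptions.

    ```python
    import numpy as np

    def sample_patch_pairs(warped, real, sizes, rng):
        """Sample co-located patches of several sizes from the warped and
        the non-warped clothing image (inputs to the patch discriminator)."""
        H, W = warped.shape[:2]
        pairs = []
        for s in sizes:
            y = rng.integers(0, H - s + 1)   # same location for both images
            x = rng.integers(0, W - s + 1)
            pairs.append((warped[y:y + s, x:x + s], real[y:y + s, x:x + s]))
        return pairs
    ```

    A discriminator scored on every (warped, real) pair at each scale yields the multi-scale patch adversarial loss used to train the geometric matching module.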

    Cloth Warping Using Multi-Scale Patch Adversarial Loss

    Publication No.: US20210133919A1

    Publication Date: 2021-05-06

    Application No.: US16673574

    Application Date: 2019-11-04

    Applicant: Adobe Inc.

    Abstract: Generating a synthesized image of a person wearing clothing is described. A two-dimensional reference image depicting a person wearing an article of clothing and a two-dimensional image of target clothing in which the person is to be depicted as wearing are received. To generate the synthesized image, a warped image of the target clothing is generated via a geometric matching module, which implements a machine learning model trained to recognize similarities between warped and non-warped clothing images using multi-scale patch adversarial loss. The multi-scale patch adversarial loss is determined by sampling patches of different sizes from corresponding locations of warped and non-warped clothing images. The synthesized image is generated on a per-person basis, such that the target clothing fits the particular body shape, pose, and unique characteristics of the person.
