Generating images for virtual try-on and pose transfer

    Publication number: US11861772B2

    Publication date: 2024-01-02

    Application number: US17678237

    Application date: 2022-02-23

    Applicant: Adobe Inc.

    CPC classification number: G06T11/60 G06N3/045 G06T7/11 G06T7/70

    Abstract: In implementations of systems for generating images for virtual try-on and pose transfer, a computing device implements a generator system to receive input data describing a first digital image that depicts a person in a pose and a second digital image that depicts a garment. Candidate appearance flow maps are computed that warp the garment based on the pose at different pixel-block sizes using a first machine learning model. The generator system generates a warped garment image by combining the candidate appearance flow maps as an aggregate per-pixel displacement map using a convolutional gated recurrent network. A conditional segmentation mask is predicted that segments portions of a geometry of the person using a second machine learning model. The generator system outputs a digital image that depicts the person in the pose wearing the garment based on the warped garment image and the conditional segmentation mask using a third machine learning model.
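
    A minimal sketch of the flow-aggregation and warping step described above follows, assuming PyTorch. The module names, tensor shapes, convolutional GRU design, and the use of bilinear grid sampling for the warp are illustrative assumptions, not the patented architecture.

```python
# Hypothetical sketch: aggregating candidate appearance flow maps with a
# convolutional GRU and warping a garment image, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell operating on 2-channel flow maps."""
    def __init__(self, channels=2, hidden=16):
        super().__init__()
        self.gates = nn.Conv2d(channels + hidden, 2 * hidden, 3, padding=1)
        self.cand = nn.Conv2d(channels + hidden, hidden, 3, padding=1)
        self.to_flow = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

def aggregate_and_warp(garment, candidate_flows):
    """Fuse multi-scale candidate flows into one per-pixel displacement map
    and warp the garment image with it (bilinear sampling)."""
    b, _, h, w = garment.shape
    cell = ConvGRUCell()
    hidden = torch.zeros(b, 16, h, w)
    for flow in candidate_flows:           # one candidate per pixel-block size
        flow = F.interpolate(flow, size=(h, w), mode="bilinear", align_corners=False)
        hidden = cell(flow, hidden)
    displacement = cell.to_flow(hidden)    # aggregate per-pixel displacement map

    # Build a sampling grid in [-1, 1] and offset it by the displacement.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    grid = base + displacement.permute(0, 2, 3, 1)
    return F.grid_sample(garment, grid, align_corners=False)

# Example: a 256x192 garment image and three candidate flows at different scales.
garment = torch.randn(1, 3, 256, 192)
flows = [torch.randn(1, 2, 256 // s, 192 // s) for s in (1, 2, 4)]
warped = aggregate_and_warp(garment, flows)   # -> (1, 3, 256, 192)
```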

    Model training with retrospective loss

    Publication number: US11797823B2

    Publication date: 2023-10-24

    Application number: US16793551

    Application date: 2020-02-18

    Applicant: Adobe Inc.

    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
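
    As a rough illustration of the training schedule described above, a hedged sketch follows, assuming PyTorch. The margin-style loss term, the warm-up step count, the snapshot refresh interval, and the scaling factor kappa are illustrative assumptions, not the exact formulation claimed in the patent.

```python
# Hypothetical sketch: warm-up training with a task-specific loss, followed by
# an added retrospective-style term that pulls the current prediction toward
# the ground truth and away from an earlier snapshot's prediction.
import copy
import torch
import torch.nn as nn

def retrospective_loss(pred, target, prev_pred, kappa=2.0):
    """Encourage the current prediction to be closer to the ground truth
    than to the prediction produced by an earlier model snapshot."""
    return (kappa + 1.0) * torch.norm(pred - target, p=1) \
           - kappa * torch.norm(pred - prev_pred, p=1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss = nn.MSELoss()

x, y = torch.randn(64, 10), torch.randn(64, 1)   # input data and ground truth
warmup_steps, total_steps = 100, 500
past_model = None

for step in range(total_steps):
    pred = model(x)
    if step < warmup_steps:
        loss = task_loss(pred, y)                 # warm-up: task-specific loss only
    else:
        with torch.no_grad():
            prev_pred = past_model(x)             # previously output prediction
        loss = task_loss(pred, y) + retrospective_loss(pred, y, prev_pred)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:                            # periodically refresh the snapshot
        past_model = copy.deepcopy(model).eval()
```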

    Generating combined feature embedding for minority class upsampling in training machine learning models with imbalanced samples

    Publication number: US11631029B2

    Publication date: 2023-04-18

    Application number: US16564531

    Application date: 2019-09-09

    Applicant: Adobe, Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for generating combined feature embeddings for minority class upsampling in training machine learning models with imbalanced training samples. For example, the disclosed systems can select training sample values from a set of training samples and a combination ratio value from a continuous probability distribution. Additionally, the disclosed systems can generate a combined synthetic training sample value by modifying the selected training sample values using the combination ratio value and combining the modified training sample values. Moreover, the disclosed systems can generate a combined synthetic ground truth label based on the combination ratio value. In addition, the disclosed systems can utilize the combined synthetic training sample value and the combined synthetic ground truth label to generate a combined synthetic training sample and utilize the combined synthetic training sample to train a machine learning model.
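
    A minimal sketch of the sample-combination step described above follows, assuming NumPy. The Beta distribution as the continuous probability distribution and the convex combination of features and labels are illustrative assumptions, not the patent's exact method.

```python
# Hypothetical sketch: generating a combined synthetic training sample and a
# combined synthetic ground-truth label from two selected samples.
import numpy as np

rng = np.random.default_rng(0)

def combine_samples(x_a, x_b, y_a, y_b):
    """Draw a combination ratio from a continuous distribution, modify the two
    selected sample values with it, and combine them (features and labels)."""
    ratio = rng.beta(0.5, 0.5)                     # combination ratio value in (0, 1)
    x_combined = ratio * x_a + (1.0 - ratio) * x_b # combined synthetic sample value
    y_combined = ratio * y_a + (1.0 - ratio) * y_b # combined synthetic (soft) label
    return x_combined, y_combined

# Example: combine a minority-class sample with a majority-class sample.
x_minority, y_minority = rng.normal(size=8), np.array([0.0, 1.0])  # one-hot labels
x_majority, y_majority = rng.normal(size=8), np.array([1.0, 0.0])
x_syn, y_syn = combine_samples(x_minority, x_majority, y_minority, y_majority)
```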

    TEXT-CONDITIONED IMAGE SEARCH BASED ON TRANSFORMATION, AGGREGATION, AND COMPOSITION OF VISIO-LINGUISTIC FEATURES

    Publication number: US20220245391A1

    Publication date: 2022-08-04

    Application number: US17160893

    Application date: 2021-01-28

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for text-conditioned image searching. A methodology implementing the techniques includes decomposing a source image into visual feature vectors associated with different levels of granularity. The method also includes decomposing a text query (defining a target image attribute) into feature vectors associated with different levels of granularity, including a global text feature vector. The method further includes generating image-text embeddings based on the visual feature vectors and the text feature vectors to encode information from visual and textual features. The method further includes composing a visio-linguistic representation based on a hierarchical aggregation of the image-text embeddings to encode visual and textual information at multiple levels of granularity. The method further includes identifying a target image based on the visio-linguistic representation and the global text feature vector, such that the target image relates to the target image attribute, and providing the target image as an image search result.
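
    A hedged sketch of the multi-level fusion and retrieval flow described above follows, assuming PyTorch. The number of granularity levels, the gated fusion layers, the way the global text feature is combined, and the cosine-similarity ranking are illustrative assumptions, not the patented architecture.

```python
# Hypothetical sketch: compose per-level image-text embeddings into one
# visio-linguistic representation and rank candidate target images by it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalComposer(nn.Module):
    def __init__(self, dim=256, levels=3):
        super().__init__()
        self.fuse = nn.ModuleList(nn.Linear(2 * dim, dim) for _ in range(levels))
        self.aggregate = nn.Linear(levels * dim, dim)

    def forward(self, visual_feats, text_feats):
        """Fuse per-level image and text features, then aggregate all levels
        into a single visio-linguistic representation."""
        fused = [torch.tanh(f(torch.cat([v, t], dim=-1)))
                 for f, v, t in zip(self.fuse, visual_feats, text_feats)]
        return self.aggregate(torch.cat(fused, dim=-1))

dim, levels = 256, 3
composer = HierarchicalComposer(dim, levels)

# Source-image and text-query features at three levels of granularity.
visual_feats = [torch.randn(1, dim) for _ in range(levels)]
text_feats = [torch.randn(1, dim) for _ in range(levels)]
global_text = text_feats[-1]                     # global text feature vector

query = composer(visual_feats, text_feats) + global_text
gallery = torch.randn(1000, dim)                 # candidate target-image embeddings
scores = F.cosine_similarity(query, gallery)     # rank candidates by similarity
best = scores.argmax()                           # index of the retrieved target image
```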

    Accurately generating virtual try-on images utilizing a unified neural network framework

    Publication number: US11030782B2

    Publication date: 2021-06-08

    Application number: US16679165

    Application date: 2019-11-09

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
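
    A minimal sketch of the final composition step described above follows, assuming PyTorch. In the disclosed systems this step is performed by a trained network; the direct masked blend shown here is only an illustrative simplification of how a warped product image, a model image, and a corrected segmentation mask combine into a try-on image.

```python
# Hypothetical sketch: blend a warped product image into a model image using a
# corrected segmentation mask.
import torch

def compose_try_on(model_image, warped_product, corrected_mask):
    """corrected_mask is 1 where the warped product should replace the model
    image (e.g., the garment region) and 0 where the model image is kept."""
    return corrected_mask * warped_product + (1.0 - corrected_mask) * model_image

model_image = torch.rand(1, 3, 256, 192)      # person wearing original clothing
warped_product = torch.rand(1, 3, 256, 192)   # product image after coarse-to-fine warp
corrected_mask = (torch.rand(1, 1, 256, 192) > 0.5).float()
try_on = compose_try_on(model_image, warped_product, corrected_mask)
```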

    Entropy Based Synthetic Data Generation For Augmenting Classification System Training Data

    Publication number: US20210117718A1

    Publication date: 2021-04-22

    Application number: US16659147

    Application date: 2019-10-21

    Applicant: Adobe Inc.

    Abstract: A data classification system is trained to classify input data into multiple classes. The system is initially trained by adjusting weights within the system based on a set of training data that includes multiple tuples, each including a training instance and a corresponding training label. Two training instances, one from a minority class and one from a majority class, are selected from the set of training data based on entropies for the training instances. A synthetic training instance is generated by combining the two selected training instances, and a corresponding synthetic training label is generated. A tuple including the synthetic training instance and the synthetic training label is added to the set of training data, resulting in an augmented training data set. One or more such synthetic training instances can be added to the augmented training data set, and the system is then re-trained on the augmented training data set.
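
    A hedged sketch of the entropy-based selection and combination described above follows, assuming NumPy. Choosing the highest-entropy instance from each class, averaging the two instances, and assigning the minority label to the result are illustrative assumptions about how selection, combination, and label generation might work, not the patent's exact method.

```python
# Hypothetical sketch: select one minority and one majority instance by
# prediction entropy, combine them, and append the result to the training set.
import numpy as np

def entropy(probs, eps=1e-12):
    """Shannon entropy of a batch of predicted class distributions."""
    return -np.sum(probs * np.log(probs + eps), axis=1)

rng = np.random.default_rng(0)
n, n_classes = 200, 2
X = rng.normal(size=(n, 5))
y = (rng.random(n) < 0.1).astype(int)          # class 1 is the minority class
probs = rng.dirichlet(np.ones(n_classes), n)   # stand-in for classifier outputs

h = entropy(probs)
minority_idx = np.where(y == 1)[0]
majority_idx = np.where(y == 0)[0]
i = minority_idx[np.argmax(h[minority_idx])]   # most uncertain minority instance
j = majority_idx[np.argmax(h[majority_idx])]   # most uncertain majority instance

x_syn = 0.5 * (X[i] + X[j])                    # synthetic training instance
y_syn = y[i]                                   # synthetic label (assumed: minority)
X_aug = np.vstack([X, x_syn])                  # augmented training data set
y_aug = np.append(y, y_syn)
```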
