IDENTIFYING DIGITAL ATTRIBUTES FROM MULTIPLE ATTRIBUTE GROUPS UTILIZING A DEEP COGNITIVE ATTRIBUTION NEURAL NETWORK

    Publication Number: US20220309093A1

    Publication Date: 2022-09-29

    Application Number: US17806922

    Application Date: 2022-06-14

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media for generating tags for an object portrayed in a digital image based on predicted attributes of the object. For example, the disclosed systems can utilize interleaved neural network layers that alternate between inception layers and dilated convolution layers to generate a localization feature vector. Based on the localization feature vector, the disclosed systems can generate attribute localization feature embeddings, for example, using a pooling layer such as a global average pooling layer. The disclosed systems can then apply the attribute localization feature embeddings to corresponding attribute group classifiers to generate tags based on predicted attributes. In particular, attribute group classifiers can predict attributes associated with a query image (e.g., based on a scoring comparison with other potential attributes of an attribute group). Based on the generated tags, the disclosed systems can respond to tag queries and search queries.
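
    A minimal PyTorch sketch of the tagging flow in this abstract is shown below; the block structure, channel counts, and the two attribute groups are illustrative assumptions rather than the patented architecture, and a single shared embedding stands in for per-group attribute localization embeddings.

        import torch
        import torch.nn as nn

        class InceptionBlock(nn.Module):
            """Toy inception-style block: parallel 1x1 and 3x3 convolutions."""
            def __init__(self, in_ch, out_ch):
                super().__init__()
                self.branch1 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=1)
                self.branch3 = nn.Conv2d(in_ch, out_ch // 2, kernel_size=3, padding=1)

            def forward(self, x):
                return torch.relu(torch.cat([self.branch1(x), self.branch3(x)], dim=1))

        class AttributeTagger(nn.Module):
            """Alternates inception-style and dilated convolution layers, pools the
            result, and applies one classifier per attribute group."""
            def __init__(self, group_sizes, channels=64):
                super().__init__()
                self.backbone = nn.Sequential(
                    InceptionBlock(3, channels),
                    nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2),
                    nn.ReLU(),
                    InceptionBlock(channels, channels),
                    nn.Conv2d(channels, channels, kernel_size=3, padding=4, dilation=4),
                    nn.ReLU(),
                )
                self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
                # One classifier head per attribute group (e.g. color, material).
                self.group_heads = nn.ModuleList(
                    nn.Linear(channels, n_attrs) for n_attrs in group_sizes
                )

            def forward(self, image):
                features = self.backbone(image)             # localization features
                embedding = self.pool(features).flatten(1)  # pooled feature embedding
                # Score every attribute within each group; the top score is the tag.
                return [head(embedding) for head in self.group_heads]

        model = AttributeTagger(group_sizes=[5, 3])         # two hypothetical groups
        scores = model(torch.randn(1, 3, 224, 224))
        tags = [s.argmax(dim=1) for s in scores]            # predicted attribute per group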

    TEXT CONDITIONED IMAGE SEARCH BASED ON DUAL-DISENTANGLED FEATURE COMPOSITION

    Publication Number: US20220237406A1

    Publication Date: 2022-07-28

    Application Number: US17160862

    Application Date: 2021-01-28

    Applicant: Adobe Inc.

    Abstract: Techniques are disclosed for text conditioned image searching. A methodology implementing the techniques according to an embodiment includes receiving a source image and a text query defining a target image attribute. The method also includes decomposing the source image into image content and style feature vectors and decomposing the text query into text content and style feature vectors, wherein image style is descriptive of image content and text style is descriptive of text content. The method further includes composing a global content feature vector based on the text content feature vector and the image content feature vector and composing a global style feature vector based on the text style feature vector and the image style feature vector. The method further includes identifying a target image that relates to the global content feature vector and the global style feature vector so that the target image relates to the target image attribute.
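
    The Python sketch below illustrates the dual-disentangled composition at a high level; the encoder dimensions, the linear disentangle and compose layers, and the cosine-similarity ranking are assumptions chosen for brevity, not the claimed implementation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        DIM = 128

        class Disentangler(nn.Module):
            """Splits an input embedding into separate content and style vectors."""
            def __init__(self, in_dim):
                super().__init__()
                self.content = nn.Linear(in_dim, DIM)
                self.style = nn.Linear(in_dim, DIM)

            def forward(self, x):
                return self.content(x), self.style(x)

        image_disentangler = Disentangler(in_dim=512)  # e.g. CNN image embedding
        text_disentangler = Disentangler(in_dim=300)   # e.g. averaged word embeddings
        compose_content = nn.Linear(2 * DIM, DIM)
        compose_style = nn.Linear(2 * DIM, DIM)

        def query_features(image_emb, text_emb):
            img_content, img_style = image_disentangler(image_emb)
            txt_content, txt_style = text_disentangler(text_emb)
            # Global content from (text content, image content); global style
            # from (text style, image style), mirroring the abstract.
            global_content = compose_content(torch.cat([txt_content, img_content], dim=-1))
            global_style = compose_style(torch.cat([txt_style, img_style], dim=-1))
            return torch.cat([global_content, global_style], dim=-1)

        def rank_candidates(query, gallery):
            """gallery: precomputed (N, 2 * DIM) features for candidate target images."""
            sims = F.cosine_similarity(query, gallery)
            return sims.argsort(descending=True)

        query = query_features(torch.randn(1, 512), torch.randn(1, 300))
        gallery = torch.randn(10, 2 * DIM)             # stand-in candidate features
        ranking = rank_candidates(query, gallery)      # best-matching target first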

    Model Training with Retrospective Loss

    Publication Number: US20210256387A1

    Publication Date: 2021-08-19

    Application Number: US16793551

    Application Date: 2020-02-18

    Applicant: Adobe Inc.

    Abstract: Generating a machine learning model that is trained using retrospective loss is described. A retrospective loss system receives an untrained machine learning model and a task for training the model. The retrospective loss system initially trains the model over warm-up iterations using task-specific loss that is determined based on a difference between predictions output by the model during training on input data and a ground truth dataset for the input data. Following the warm-up training iterations, the retrospective loss system continues to train the model using retrospective loss, which is model-agnostic and constrains the model such that a subsequently output prediction is more similar to the ground truth dataset than the previously output prediction. After determining that the model's outputs are within a threshold similarity to the ground truth dataset, the model is output with its current parameters as a trained model.
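
    Below is a hedged PyTorch sketch of the warm-up and retrospective training schedule; the particular retrospective term (pulling the current prediction toward the ground truth while pushing it away from an earlier snapshot's prediction), the weight kappa, and the once-per-epoch snapshot are illustrative assumptions.

        import copy
        import torch
        import torch.nn as nn

        def train(model, loader, epochs=10, warmup_epochs=2, kappa=4.0, lr=1e-3):
            task_loss = nn.MSELoss()
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            past_model = None  # frozen snapshot of earlier parameters

            for epoch in range(epochs):
                for x, y in loader:
                    pred = model(x)
                    loss = task_loss(pred, y)  # task-specific loss (all iterations)
                    if epoch >= warmup_epochs and past_model is not None:
                        with torch.no_grad():
                            past_pred = past_model(x)
                        # Retrospective term: keep the new prediction closer to the
                        # ground truth than the previously output prediction.
                        loss = loss + ((1 + kappa) * torch.norm(pred - y)
                                       - kappa * torch.norm(pred - past_pred))
                    opt.zero_grad()
                    loss.backward()
                    opt.step()
                # Refresh the retrospection snapshot at the end of each epoch.
                past_model = copy.deepcopy(model).eval()
            # A convergence check against a similarity threshold could stop earlier.
            return model

        model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
        data = [(torch.randn(16, 4), torch.randn(16, 1)) for _ in range(5)]
        trained = train(model, data)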

    ACCURATELY GENERATING VIRTUAL TRY-ON IMAGES UTILIZING A UNIFIED NEURAL NETWORK FRAMEWORK

    Publication Number: US20210142539A1

    Publication Date: 2021-05-13

    Application Number: US16679165

    Application Date: 2019-11-09

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a virtual try-on digital image utilizing a unified neural network framework. For example, the disclosed systems can utilize a coarse-to-fine warping process to generate a warped version of a product digital image to fit a model digital image. In addition, the disclosed systems can utilize a texture transfer process to generate a corrected segmentation mask indicating portions of a model digital image to replace with a warped product digital image. The disclosed systems can further generate a virtual try-on digital image based on a warped product digital image, a model digital image, and a corrected segmentation mask. In some embodiments, the disclosed systems can train one or more neural networks to generate accurate outputs for various stages of generating a virtual try-on digital image.
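
    The placeholder pipeline below only illustrates how the three stages (coarse-to-fine warping, segmentation-mask correction, and final synthesis) chain together; every sub-network here is a stand-in convolution, not the trained warping or texture-transfer networks described in the disclosure.

        import torch
        import torch.nn as nn

        class TryOnPipeline(nn.Module):
            def __init__(self):
                super().__init__()
                # Placeholder sub-networks; each stands in for a trained network.
                self.coarse_warp = nn.Conv2d(6, 3, kernel_size=3, padding=1)
                self.fine_warp = nn.Conv2d(6, 3, kernel_size=3, padding=1)
                self.mask_net = nn.Conv2d(6, 1, kernel_size=3, padding=1)
                self.synthesis = nn.Conv2d(7, 3, kernel_size=3, padding=1)

            def forward(self, model_img, product_img):
                # Stage 1: coarse-to-fine "warp" of the product onto the model.
                coarse = self.coarse_warp(torch.cat([model_img, product_img], dim=1))
                warped = self.fine_warp(torch.cat([model_img, coarse], dim=1))
                # Stage 2: corrected segmentation mask marking pixels to replace.
                mask = torch.sigmoid(self.mask_net(torch.cat([model_img, warped], dim=1)))
                # Stage 3: synthesize the try-on image from model, warped product, mask.
                return self.synthesis(torch.cat([model_img, warped, mask], dim=1))

        pipeline = TryOnPipeline()
        try_on = pipeline(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))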

    GENERATING COMBINED FEATURE EMBEDDING FOR MINORITY CLASS UPSAMPLING IN TRAINING MACHINE LEARNING MODELS WITH IMBALANCED SAMPLES

    Publication Number: US20210073671A1

    Publication Date: 2021-03-11

    Application Number: US16564531

    Application Date: 2019-09-09

    Applicant: Adobe, Inc.

    Abstract: Systems, methods, and non-transitory computer-readable media are disclosed for generating combined feature embeddings for minority class upsampling in training machine learning models with imbalanced training samples. For example, the disclosed systems can select training sample values from a set of training samples and a combination ratio value from a continuous probability distribution. Additionally, the disclosed systems can generate a combined synthetic training sample value by modifying the selected training sample values using the combination ratio value and combining the modified training sample values. Moreover, the disclosed systems can generate a combined synthetic ground truth label based on the combination ratio value. In addition, the disclosed systems can utilize the combined synthetic training sample value and the combined synthetic ground truth label to generate a combined synthetic training sample and utilize the combined synthetic training sample to train a machine learning model.
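
    A small NumPy sketch of the sampling idea follows; drawing the combination ratio from a Beta distribution and anchoring every pair on a minority-class sample are assumptions made for illustration.

        import numpy as np

        def combined_synthetic_samples(features, onehot_labels, minority_class,
                                       n_new, alpha=0.4, seed=None):
            """Mix a minority sample with a second random sample; one ratio drawn
            from a continuous distribution combines both values and labels."""
            rng = np.random.default_rng(seed)
            minority_idx = np.where(onehot_labels[:, minority_class] == 1)[0]
            xs, ys = [], []
            for _ in range(n_new):
                i = rng.choice(minority_idx)        # minority anchor (assumption)
                j = rng.integers(len(features))     # any second training sample
                lam = rng.beta(alpha, alpha)        # combination ratio value
                xs.append(lam * features[i] + (1 - lam) * features[j])
                ys.append(lam * onehot_labels[i] + (1 - lam) * onehot_labels[j])
            return np.stack(xs), np.stack(ys)

        # Toy usage: 95 samples of class 0, 5 of class 1, plus 20 synthetics.
        X = np.random.randn(100, 16)
        y = np.eye(2)[np.array([0] * 95 + [1] * 5)]
        syn_X, syn_y = combined_synthetic_samples(X, y, minority_class=1, n_new=20)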
