CONTEXTUAL QUERY GENERATION
    Invention Application

    Publication No.: US20240427998A1

    Publication Date: 2024-12-26

    Application No.: US18339694

    Application Date: 2023-06-22

    Applicant: Adobe Inc.

    Abstract: Contextual query generation techniques are described that enable generation of a contextual query for output to a question-answering (QA) model. A content processing system, for instance, configures a language model using in-context learning to generate queries based on semantic contexts of input documents, e.g., based on one or more linguistic cues from text of the input documents. The content processing system receives an input that includes a document having text and a reference query. The content processing system leverages the language model to generate a contextual query based on a semantic context of the text of the document and the reference query. The content processing system then outputs the contextual query and the document to a QA model. Using the QA model, the content processing system generates a response as an answer to the contextual query based on the contextual query and the document.
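
    As a rough illustration of the workflow described above, the sketch below builds an in-context-learning prompt from a document and a reference query, asks a language model for a contextual query, and passes the result together with the document to a QA model. The functions generate_text and answer_question (and the example data) are hypothetical placeholders standing in for the two models; nothing here is taken from the patent's actual implementation.

        # Minimal sketch of the contextual-query pipeline. The model calls are
        # hypothetical placeholders, not an Adobe API.

        IN_CONTEXT_EXAMPLES = [
            # (document excerpt, reference query, contextual query) -- illustrative only
            ("The return window is 30 days from delivery.",
             "how long",
             "How long is the return window after delivery?"),
        ]

        def build_prompt(document: str, reference_query: str) -> str:
            """Assemble an in-context-learning prompt from a few worked examples."""
            parts = ["Rewrite the query so it is answerable from the document."]
            for doc, ref, ctx in IN_CONTEXT_EXAMPLES:
                parts.append(f"Document: {doc}\nQuery: {ref}\nContextual query: {ctx}")
            parts.append(f"Document: {document}\nQuery: {reference_query}\nContextual query:")
            return "\n\n".join(parts)

        def generate_text(prompt: str) -> str:
            """Placeholder language-model call."""
            return "How long does the warranty on the product last?"

        def answer_question(query: str, document: str) -> str:
            """Placeholder QA-model call."""
            return "Two years."

        def contextual_qa(document: str, reference_query: str) -> str:
            contextual_query = generate_text(build_prompt(document, reference_query))
            return answer_question(contextual_query, document)

        print(contextual_qa("The warranty on the product lasts two years.", "how long"))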

    UTILIZING A GENERATIVE NEURAL NETWORK TO INTERACTIVELY CREATE AND MODIFY DIGITAL IMAGES BASED ON NATURAL LANGUAGE FEEDBACK

    Publication No.: US20230230198A1

    Publication Date: 2023-07-20

    Application No.: US17576091

    Application Date: 2022-01-14

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a neural network framework for interactive multi-round image generation from natural language inputs. Specifically, the disclosed systems provide an intelligent framework (i.e., a text-based interactive image generation model) that facilitates a multi-round image generation and editing workflow that comports with arbitrary input text and synchronous interaction. In particular embodiments, the disclosed systems utilize natural language feedback for conditioning a generative neural network that performs text-to-image generation and text-guided image modification. For example, the disclosed systems utilize a trained model to inject textual features from natural language feedback into a unified joint embedding space for generating text-informed style vectors. In turn, the disclosed systems can generate an image with semantically meaningful features that map to the natural language feedback. Moreover, the disclosed systems can persist these semantically meaningful features throughout a refinement process and across generated images.
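
    The following sketch illustrates, under heavy simplification, the multi-round loop the abstract describes: each round of natural language feedback is encoded, fused with a persisted style vector, and used to condition generation, so earlier edits carry over to later images. encode_text, to_style_vector, and generate_image are invented placeholders rather than the disclosed model.

        # Toy sketch of feedback-conditioned, multi-round image generation.
        # All functions below are illustrative stand-ins, not the patented network.

        def encode_text(feedback: str) -> list:
            """Placeholder text encoder mapping feedback into an 8-dim embedding."""
            vec = [0.0] * 8
            for i, byte in enumerate(feedback.encode("utf-8")):
                vec[i % 8] += byte / 255.0
            return vec

        def to_style_vector(text_embedding, previous_style):
            """Fuse new textual features with the persisted style so earlier edits survive."""
            return [0.5 * p + 0.5 * t for p, t in zip(previous_style, text_embedding)]

        def generate_image(style_vector):
            """Placeholder generator; returns a description instead of pixels."""
            return "<image conditioned on " + ", ".join(f"{v:.2f}" for v in style_vector) + ">"

        def interactive_session(feedback_rounds):
            style = [0.0] * 8          # persisted across rounds of feedback
            images = []
            for feedback in feedback_rounds:
                style = to_style_vector(encode_text(feedback), style)
                images.append(generate_image(style))
            return images

        for image in interactive_session(["a red barn at sunset", "add snow on the roof"]):
            print(image)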

    DIALOGUE SKELETON ASSISTED PROMPT TRANSFER FOR DIALOGUE SUMMARIZATION

    Publication No.: US20250028751A1

    Publication Date: 2025-01-23

    Application No.: US18355901

    Application Date: 2023-07-20

    Applicant: Adobe Inc.

    Abstract: Dialogue skeleton assisted prompt transfer for dialogue summarization techniques are described that support training of a language model to perform dialogue summarization in a few-shot scenario. A processing device, for instance, receives a training dataset that includes training dialogues. The processing device then generates dialogue skeletons based on the training dialogues using one or more perturbation-based probes. The processing device trains a language model using prompt transfer between a source task, e.g., dialogue state tracking, and a target task, e.g., dialogue summarization, using the dialogue skeletons as supervision. The processing device then receives an input dialogue and uses the trained language model to generate a summary of the input dialogue.
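
    A minimal sketch of the skeleton-extraction step follows, assuming a perturbation-based probe that scores a dialogue and attributes importance to individual turns by deleting them one at a time. probe_score is a hypothetical stand-in for a real probe, and the prompt-transfer training between the source and target tasks is omitted.

        # Illustrative skeleton extraction via perturbation: keep the turns whose
        # removal shifts a (placeholder) probe score the most.

        from typing import List

        def probe_score(turns: List[str]) -> float:
            """Placeholder probe; a real probe would query a frozen language model."""
            return sum(len(t) for t in turns) / 100.0

        def extract_skeleton(dialogue: List[str], keep: int = 2) -> List[str]:
            base = probe_score(dialogue)
            impact = []
            for i in range(len(dialogue)):
                perturbed = dialogue[:i] + dialogue[i + 1:]   # delete one turn
                impact.append((abs(base - probe_score(perturbed)), i))
            top = sorted(i for _, i in sorted(impact, reverse=True)[:keep])
            return [dialogue[i] for i in top]                 # skeleton in original order

        dialogue = [
            "Agent: How can I help you today?",
            "User: I need to move my flight to Friday.",
            "Agent: Sure, there is a 9am departure on Friday.",
            "User: Book that one, please.",
        ]
        print(extract_skeleton(dialogue))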

    TEACHING A MACHINE CLASSIFIER TO RECOGNIZE A NEW CLASS

    Publication No.: US20230143721A1

    Publication Date: 2023-05-11

    Application No.: US17524282

    Application Date: 2021-11-11

    Applicant: Adobe Inc.

    CPC classification number: G06F40/295 G06N20/00

    Abstract: Embodiments of the technology described herein provide a machine classifier capable of continually learning new classes through a continual few-shot learning approach. A natural language processing (NLP) machine classifier may initially be trained to identify a plurality of other classes through a conventional training process. In order to learn a new class, natural-language training data for the new class is generated. The training data for the new class may be few-shot training data. The training also uses synthetic training data that represents each of the plurality of other classes. The synthetic training data may be generated through a model inversion of the original classifier. The synthetic training data and the natural-language training data are used to retrain the NLP classifier to identify text in the plurality of other classes and the new class.
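
    The toy sketch below illustrates the rehearsal recipe in the abstract: synthesize examples of the previously learned classes by inverting the frozen classifier (approximated here by a crude random search over inputs), then combine them with the few-shot examples of the new class for retraining. old_classifier_logit and invert_class are illustrative placeholders, not the patented method.

        # Toy model-inversion rehearsal: everything below is made up for illustration.

        import random

        def old_classifier_logit(x, class_id):
            """Placeholder frozen classifier: one logit per class over a 2-D input."""
            weights = {0: (1.0, -0.5), 1: (-0.8, 1.2)}
            w = weights[class_id]
            return w[0] * x[0] + w[1] * x[1]

        def invert_class(class_id, steps=200):
            """Search for an input the old classifier scores highly for `class_id`."""
            best, best_score = (0.0, 0.0), float("-inf")
            for _ in range(steps):
                candidate = (random.uniform(-1, 1), random.uniform(-1, 1))
                score = old_classifier_logit(candidate, class_id)
                if score > best_score:
                    best, best_score = candidate, score
            return best

        def build_retraining_set(old_classes, few_shot_new):
            synthetic = [(invert_class(c), c) for c in old_classes for _ in range(5)]
            return synthetic + few_shot_new    # synthetic old classes + few-shot new class

        new_class_examples = [((0.9, 0.9), 2), ((0.8, 1.0), 2)]   # few-shot data
        data = build_retraining_set([0, 1], new_class_examples)
        print(len(data), "training examples")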

    TEACHING A MACHINE CLASSIFIER TO RECOGNIZE A NEW CLASS

    Publication No.: US20240273296A1

    Publication Date: 2024-08-15

    Application No.: US18625884

    Application Date: 2024-04-03

    Applicant: Adobe Inc.

    CPC classification number: G06F40/295 G06N20/00

    Abstract: Embodiments of the technology described herein provide a machine classifier capable of continually learning new classes through a continual few-shot learning approach. A natural language processing (NLP) machine classifier may initially be trained to identify a plurality of other classes through a conventional training process. In order to learn a new class, natural-language training data for the new class is generated. The training data for the new class may be few-shot training data. The training also uses synthetic training data that represents each of the plurality of other classes. The synthetic training data may be generated through a model inversion of the original classifier. The synthetic training data and the natural-language training data are used to retrain the NLP classifier to identify text in the plurality of other classes and the new class.

    Teaching a machine classifier to recognize a new class

    Publication No.: US11995403B2

    Publication Date: 2024-05-28

    Application No.: US17524282

    Application Date: 2021-11-11

    Applicant: Adobe Inc.

    CPC classification number: G06F40/295 G06N20/00

    Abstract: Embodiments of the technology described herein provide a machine classifier capable of continually learning new classes through a continual few-shot learning approach. A natural language processing (NLP) machine classifier may initially be trained to identify a plurality of other classes through a conventional training process. In order to learn a new class, natural-language training data for the new class is generated. The training data for the new class may be few-shot training data. The training also uses synthetic training data that represents each of the plurality of other classes. The synthetic training data may be generated through a model inversion of the original classifier. The synthetic training data and the natural-language training data are used to retrain the NLP classifier to identify text in the plurality of other classes and the new class.

    DIALOGUE STATE AWARE DIALOGUE SUMMARIZATION

    Publication No.: US20250005289A1

    Publication Date: 2025-01-02

    Application No.: US18343389

    Application Date: 2023-06-28

    Applicant: Adobe Inc.

    Abstract: Dialogue state aware dialogue summarization techniques are described that enable generation of dialogue summaries from target domains with limited training data. A content processing system, for instance, generates one or more clusters based on training dialogues from one or more source domains. The clusters represent domain-specific features of the training dialogues and are further based on dialogue states of the training dialogues. The content processing system trains a machine learning model to generate summaries of dialogues by using the one or more clusters as prefixes in a prefix-tuning approach. The content processing system receives an input that includes a dialogue from a target domain. The content processing system generates an input prompt based on the dialogue and the one or more clusters, and the model generates a summary of the dialogue based on the input prompt.
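
    To make the data flow concrete, the sketch below groups source-domain dialogues into clusters keyed by their dialogue-state domain and prepends the best-matching cluster label to the inference prompt. Real prefix tuning learns continuous prefix vectors rather than text; the word-overlap matching and all names and data here are simplifications invented for illustration.

        # Simplified sketch of the clustering-as-prefix idea; not Adobe's implementation.

        from collections import defaultdict

        def cluster_by_state(training_dialogues):
            """Group source-domain dialogues by the domain recorded in their dialogue state."""
            clusters = defaultdict(list)
            for dialogue, state in training_dialogues:
                clusters[state["domain"]].append(dialogue)
            return clusters

        def build_prompt(dialogue, clusters):
            """Pick the cluster whose dialogues share the most words with the input."""
            words = set(dialogue.lower().split())
            def overlap(domain):
                return len(words & set(" ".join(clusters[domain]).lower().split()))
            prefix = max(clusters, key=overlap)
            return f"[cluster: {prefix}] Summarize the dialogue: {dialogue}"

        source = [
            ("I'd like to book a table for two at 7pm.", {"domain": "restaurant"}),
            ("Is there a train to Cambridge on Sunday?", {"domain": "train"}),
        ]
        clusters = cluster_by_state(source)
        print(build_prompt("Can I reserve a table for four tonight?", clusters))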

    Utilizing a generative neural network to interactively create and modify digital images based on natural language feedback

    Publication No.: US12148119B2

    Publication Date: 2024-11-19

    Application No.: US17576091

    Application Date: 2022-01-14

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods that implement a neural network framework for interactive multi-round image generation from natural language inputs. Specifically, the disclosed systems provide an intelligent framework (i.e., a text-based interactive image generation model) that facilitates a multi-round image generation and editing workflow that comports with arbitrary input text and synchronous interaction. In particular embodiments, the disclosed systems utilize natural language feedback for conditioning a generative neural network that performs text-to-image generation and text-guided image modification. For example, the disclosed systems utilize a trained model to inject textual features from natural language feedback into a unified joint embedding space for generating text-informed style vectors. In turn, the disclosed systems can generate an image with semantically meaningful features that map to the natural language feedback. Moreover, the disclosed systems can persist these semantically meaningful features throughout a refinement process and across generated images.
