    DISTANCE-BASED LEARNING CONFIDENCE MODEL

    Publication Number: US20230120894A1

    Publication Date: 2023-04-20

    Application Number: US18045722

    Filing Date: 2022-10-11

    Applicant: Google LLC

    Abstract: A method for jointly training a classification model and a confidence model includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set of training examples identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query, and updating parameters of the confidence model based on the new query encoding.
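
    The abstract outlines an episodic training loop; the sketch below illustrates one way it could look in PyTorch (an assumption, since the patent names no framework). The toy encoder, the confidence head, the loss choices, and all dimensions are illustrative placeholders rather than the claimed method.

        import torch
        import torch.nn.functional as F

        embed_dim, num_classes = 16, 5
        classifier = torch.nn.Linear(8, embed_dim)            # classification model: features -> encoding
        confidence = torch.nn.Linear(embed_dim, embed_dim)    # confidence model: encoding -> std dev
        opt_cls = torch.optim.SGD(classifier.parameters(), lr=0.1)
        opt_conf = torch.optim.SGD(confidence.parameters(), lr=0.1)

        def train_episode(support_x, support_y, query_x, query_y):
            # Centroid value for each class, from the support-set encodings
            # (assumes every class appears in the support set).
            support_enc = classifier(support_x)
            centroids = torch.stack([support_enc[support_y == c].mean(0) for c in range(num_classes)])

            # Query encoding and class distance measure (distance to every centroid).
            query_enc = classifier(query_x)
            dists = torch.cdist(query_enc, centroids)            # (num_queries, num_classes)
            gt_dist = dists.gather(1, query_y.unsqueeze(1))      # ground-truth distance per query

            # Update the classification model so queries move toward their true centroid.
            loss_cls = F.cross_entropy(-dists, query_y)
            opt_cls.zero_grad()
            loss_cls.backward()
            opt_cls.step()

            # For each misclassified query: generate a std dev, sample a new query
            # encoding, and update the confidence model with it.
            wrong = dists.argmin(dim=1) != query_y
            if wrong.any():
                enc = classifier(query_x[wrong]).detach()
                std = F.softplus(confidence(enc))                # standard deviation value
                new_enc = enc + std * torch.randn_like(std)      # sampled new query encoding
                new_dists = torch.cdist(new_enc, centroids.detach())
                loss_conf = F.cross_entropy(-new_dists, query_y[wrong])
                opt_conf.zero_grad()
                loss_conf.backward()
                opt_conf.step()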

    Complementary Prompting For Rehearsal-Free Continual Learning

    Publication Number: US20230274143A1

    Publication Date: 2023-08-31

    Application Number: US18173985

    Filing Date: 2023-02-24

    Applicant: Google LLC

    CPC classification number: G06N3/08

    Abstract: A method for rehearsal-free continual learning includes obtaining a set of training samples where each training sample in the set of training samples is associated with a respective task of a plurality of different tasks. The method includes obtaining a task-invariant prompt representative of learned knowledge common to each respective task of the plurality of different tasks. The method includes, for each respective task of the plurality of different tasks, obtaining a respective task-specific prompt representative of learned knowledge specific to the respective task. The method includes, during each of one or more training iterations, for each respective training sample in the set of training samples, selecting the respective task-specific prompt representative of the respective task of the respective training sample and training a model using the task-invariant prompt and the selected respective task-specific prompt.
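
    As a rough illustration of the prompting scheme described above, the PyTorch sketch below (the framework, prompt lengths, frozen backbone, and classification head are all assumptions) prepends a shared task-invariant prompt and a selected task-specific prompt to each training sample before a frozen backbone.

        import torch
        import torch.nn.functional as F

        dim, num_tasks, prompt_len, num_classes = 32, 3, 4, 10
        backbone = torch.nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        for p in backbone.parameters():           # rehearsal-free: the shared backbone stays frozen
            p.requires_grad_(False)
        head = torch.nn.Linear(dim, num_classes)

        # One task-invariant prompt shared across tasks, plus one task-specific prompt per task.
        task_invariant = torch.nn.Parameter(0.02 * torch.randn(prompt_len, dim))
        task_specific = torch.nn.ParameterList(
            [torch.nn.Parameter(0.02 * torch.randn(prompt_len, dim)) for _ in range(num_tasks)]
        )
        opt = torch.optim.Adam([task_invariant, *task_specific, *head.parameters()], lr=1e-3)

        def training_step(x, y, task_id):
            """x: (batch, seq, dim) embedded training samples; task_id picks the task-specific prompt."""
            batch = x.shape[0]
            # Select the task-specific prompt for this sample's task and prepend both prompts.
            prompts = torch.cat([task_invariant, task_specific[task_id]], dim=0)
            inputs = torch.cat([prompts.expand(batch, -1, -1), x], dim=1)
            features = backbone(inputs).mean(dim=1)
            loss = F.cross_entropy(head(features), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()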

    GENERATING HIGH-RESOLUTION IMAGES USING SELF-ATTENTION

    Publication Number: US20240265586A1

    Publication Date: 2024-08-08

    Application Number: US18564841

    Filing Date: 2022-05-27

    Applicant: Google LLC

    CPC classification number: G06T11/00 G06T3/4046

    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating high-resolution images using self-attention based neural networks. One of the systems includes a neural network configured to generate images, the neural network comprising a sequence of one or more first network blocks followed by a sequence of one or more second network blocks, wherein: each first network block is configured to perform operations comprising: applying a self-attention mechanism over at least a subset of first elements of a first block input to generate an updated first block input; and upsampling the updated first block input to generate a first block output; and each second network block is configured to perform operations comprising: processing a second block input using one or more neural network layers to generate an updated second block input; and upsampling the updated second block input to generate a second block output.
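
    A rough PyTorch sketch of the two block types the abstract describes (the framework, channel counts, attention configuration, and upsampling factor are illustrative assumptions): attention-plus-upsampling blocks first, attention-free convolutional blocks afterward.

        import torch

        class FirstBlock(torch.nn.Module):
            """Self-attention over the block input's elements, then upsampling."""
            def __init__(self, channels, heads=4):
                super().__init__()
                self.attn = torch.nn.MultiheadAttention(channels, heads, batch_first=True)
                self.up = torch.nn.Upsample(scale_factor=2, mode="nearest")

            def forward(self, x):                      # x: (batch, channels, h, w)
                b, c, h, w = x.shape
                seq = x.flatten(2).transpose(1, 2)     # (batch, h*w, channels): one element per pixel
                attn_out, _ = self.attn(seq, seq, seq) # self-attention over the first elements
                updated = (seq + attn_out).transpose(1, 2).reshape(b, c, h, w)
                return self.up(updated)                # upsample the updated block input

        class SecondBlock(torch.nn.Module):
            """Plain convolutional processing (no attention), then upsampling."""
            def __init__(self, channels):
                super().__init__()
                self.conv = torch.nn.Conv2d(channels, channels, kernel_size=3, padding=1)
                self.up = torch.nn.Upsample(scale_factor=2, mode="nearest")

            def forward(self, x):
                return self.up(torch.relu(self.conv(x)))

        # Attention blocks at low resolution first, cheaper attention-free blocks at high resolution.
        generator = torch.nn.Sequential(FirstBlock(64), FirstBlock(64), SecondBlock(64), SecondBlock(64))
        image = generator(torch.randn(1, 64, 8, 8))    # (1, 64, 8, 8) -> (1, 64, 128, 128)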

    DISTANCE-BASED LEARNING CONFIDENCE MODEL

    Publication Number: US20210279517A1

    Publication Date: 2021-09-09

    Application Number: US17031144

    Filing Date: 2020-09-24

    Applicant: Google LLC

    Abstract: A method for jointly training a classification model and a confidence model. The method includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set of training examples identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query, and updating parameters of the confidence model based on the new query encoding.

    ROBUST TRAINING IN THE PRESENCE OF LABEL NOISE

    Publication Number: US20210089964A1

    Publication Date: 2021-03-25

    Application Number: US17026225

    Filing Date: 2020-09-19

    Applicant: Google LLC

    Abstract: A method for training a model comprises obtaining a set of labeled training samples each associated with a given label. For each labeled training sample, the method includes generating a pseudo label and estimating a weight of the labeled training sample indicative of an accuracy of the given label. The method also includes determining whether the weight of the labeled training sample satisfies a weight threshold. When the weight of the labeled training sample satisfies the weight threshold, the method includes adding the labeled training sample to a set of cleanly labeled training samples. Otherwise, the method includes adding the labeled training sample to a set of mislabeled training samples. The method includes training the model with the set of cleanly labeled training samples using corresponding given labels and the set of mislabeled training samples using corresponding pseudo labels.
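
    The sketch below illustrates the clean/mislabeled split the abstract describes, in PyTorch (an assumption); the pseudo-labeling rule, the weight estimate, and the threshold value are simple stand-ins for the estimates the patent actually claims.

        import torch
        import torch.nn.functional as F

        model = torch.nn.Linear(8, 5)
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        WEIGHT_THRESHOLD = 0.5

        def train_step(x, given_labels):
            logits = model(x)
            probs = logits.softmax(dim=1)

            # Pseudo label: the model's own prediction for each labeled training sample.
            pseudo_labels = probs.argmax(dim=1)
            # Weight: estimated accuracy of the given label, here the probability the
            # model assigns to that label (one simple stand-in for the patent's estimate).
            weights = probs.gather(1, given_labels.unsqueeze(1)).squeeze(1)

            clean = weights >= WEIGHT_THRESHOLD      # cleanly labeled set keeps its given labels
            noisy = ~clean                           # mislabeled set is trained with pseudo labels
            loss = 0.0
            if clean.any():
                loss = loss + F.cross_entropy(logits[clean], given_labels[clean])
            if noisy.any():
                loss = loss + F.cross_entropy(logits[noisy], pseudo_labels[noisy])
            opt.zero_grad()
            loss.backward()
            opt.step()
            return loss.item()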

    ACTIVE LEARNING VIA A SAMPLE CONSISTENCY ASSESSMENT

    Publication Number: US20210056417A1

    Publication Date: 2021-02-25

    Application Number: US17000094

    Filing Date: 2020-08-21

    Applicant: Google LLC

    Abstract: A method for active learning includes obtaining a set of unlabeled training samples and for each unlabeled training sample, perturbing the unlabeled training sample to generate an augmented training sample. The method includes generating, using a machine learning model, a predicted label for both samples and determining an inconsistency value for the unlabeled training sample that represents variance between the predicted labels for the unlabeled and augmented training samples. The method includes sorting the unlabeled training samples based on the inconsistency values and obtaining, for a threshold number of samples selected from the sorted unlabeled training samples, a ground truth label. The method includes selecting a current set of labeled training samples including each selected unlabeled training sample paired with the corresponding ground truth label. The method includes training, using the current set and a proper subset of unlabeled training samples, the machine learning model.
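
    A minimal sketch of the selection step described above, in PyTorch (an assumption); the Gaussian perturbation, the inconsistency measure, and the request_label callback are illustrative stand-ins, with request_label representing a hypothetical annotation step.

        import torch

        model = torch.nn.Linear(8, 5)

        def select_for_labeling(unlabeled_x, request_label, budget=10):
            # Perturb each unlabeled sample to generate an augmented counterpart.
            augmented_x = unlabeled_x + 0.1 * torch.randn_like(unlabeled_x)

            with torch.no_grad():
                p_orig = model(unlabeled_x).softmax(dim=1)   # predicted label distribution, original
                p_aug = model(augmented_x).softmax(dim=1)    # predicted label distribution, perturbed

            # Inconsistency value: squared distance between the two predicted distributions.
            inconsistency = (p_orig - p_aug).pow(2).sum(dim=1)

            # Sort by inconsistency and request ground-truth labels for the top `budget` samples.
            order = inconsistency.argsort(descending=True)
            chosen = order[:budget]
            labels = torch.tensor([request_label(i) for i in chosen.tolist()])
            return chosen, labels   # current labeled set; the remainder stays unlabeled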

    Active learning via a sample consistency assessment

    Publication Number: US12271822B2

    Publication Date: 2025-04-08

    Application Number: US17000094

    Filing Date: 2020-08-21

    Applicant: Google LLC

    Abstract: A method for active learning includes obtaining a set of unlabeled training samples and for each unlabeled training sample, perturbing the unlabeled training sample to generate an augmented training sample. The method includes generating, using a machine learning model, a predicted label for both samples and determining an inconsistency value for the unlabeled training sample that represents variance between the predicted labels for the unlabeled and augmented training samples. The method includes sorting the unlabeled training samples based on the inconsistency values and obtaining, for a threshold number of samples selected from the sorted unlabeled training samples, a ground truth label. The method includes selecting a current set of labeled training samples including each selected unlabeled training sample paired with the corresponding ground truth label. The method includes training, using the current set and a proper subset of unlabeled training samples, the machine learning model.

    Distance-based learning confidence model

    Publication Number: US12039443B2

    Publication Date: 2024-07-16

    Application Number: US18045722

    Filing Date: 2022-10-11

    Applicant: Google LLC

    Abstract: A method for jointly training a classification model and a confidence model includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set of training examples identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query, and updating parameters of the confidence model based on the new query encoding.

    ZERO-SHOT FORM ENTITY QUERY FRAMEWORK

    Publication Number: US20240153297A1

    Publication Date: 2024-05-09

    Application Number: US18501982

    Filing Date: 2023-11-03

    Applicant: Google LLC

    CPC classification number: G06V30/24 G06F16/211 G06V30/19147 G06V30/412

    Abstract: A method for extracting entities comprises obtaining a document that includes a series of textual fields that includes a plurality of entities. Each entity represents information associated with a predefined category. The method includes generating, using the document, a series of tokens representing the series of textual fields. The method includes generating an entity prompt that includes the series of tokens and one of the plurality of entities and generating a schema prompt that includes a schema associated with the document. The method includes generating a model query that includes the entity prompt and the schema prompt and determining, using an entity extraction model and the model query, a location of the one of the plurality of entities among the series of tokens. The method includes extracting, from the document, the one of the plurality of entities using the location of the one of the plurality of entities.
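
    As a plain-Python illustration of how the entity prompt, schema prompt, and model query described above might be assembled: every name, the prompt format, and the stand-in extraction model below are hypothetical, not the patent's implementation.

        def build_model_query(tokens, entity, schema):
            # Entity prompt: the series of tokens plus the entity being queried.
            entity_prompt = f"entity: {entity} | tokens: {' '.join(tokens)}"
            # Schema prompt: the schema associated with the document.
            schema_prompt = f"schema: {schema}"
            return entity_prompt + "\n" + schema_prompt

        def extract_entity(document_fields, entity, schema, extraction_model):
            # Generate a series of tokens representing the document's textual fields.
            tokens = [tok for field in document_fields for tok in field.split()]
            query = build_model_query(tokens, entity, schema)
            # The extraction model returns the entity's location among the tokens.
            start, end = extraction_model(query)
            return " ".join(tokens[start:end])

        # Example with a trivial stand-in model that "locates" the date value (token index 4).
        fields = ["Invoice Number: 1234", "Date: 2023-11-03", "Total: $42.00"]
        stand_in_model = lambda query: (4, 5)
        print(extract_entity(fields, "date", "invoice(number, date, total)", stand_in_model))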
