-
Publication Number: US20240428015A1
Publication Date: 2024-12-26
Application Number: US18386343
Filing Date: 2023-11-02
Applicant: Google LLC
Inventor: Jinsung Yoon , Jiefeng Chen , Sayna Ebrahimi , Sercan Omer Arik
IPC: G06F40/40
Abstract: Aspects of the disclosure are directed to methods, systems, and computer readable media for adaptation with self-evaluation to improve selective prediction in large language models (LLMs), generally referred to as ASPIRE. ASPIRE includes training LLMs on a portion of training data from a question answering task to learn self-evaluation, i.e., to distinguish whether a generated answer is correct or not. ASPIRE further includes a selection score that combines the likelihood that a generated answer is correct with a self-evaluation score for selective prediction. ASPIRE demonstrates improved selective prediction performance at lower computational cost.
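A minimal sketch of the kind of selection score described above is shown below. The mixing weight, the abstention threshold, and the function names are illustrative assumptions, not the patent's exact formulation.

```python
import math

def selection_score(answer_logprobs, self_eval_prob, alpha=0.25):
    """Combine the likelihood of a generated answer with a learned
    self-evaluation score. alpha is a hypothetical mixing weight."""
    # Length-normalized likelihood of the generated answer under the LLM.
    likelihood = math.exp(sum(answer_logprobs) / max(len(answer_logprobs), 1))
    return (1 - alpha) * likelihood + alpha * self_eval_prob

def selective_predict(answer, score, threshold=0.8):
    # Answer only when the selection score clears the threshold; otherwise abstain.
    return answer if score >= threshold else None
```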
-
Publication Number: US20240144005A1
Publication Date: 2024-05-02
Application Number: US18404881
Filing Date: 2024-01-04
Applicant: Google LLC
Inventor: Sercan Omer Arik , Tomas Jon Pfister
Abstract: A method of interpreting tabular data includes receiving, at a deep tabular data learning network (TabNet) executing on data processing hardware, a set of features. For each of multiple sequential processing steps, the method also includes: selecting, using a sparse mask of the TabNet, a subset of relevant features of the set of features; processing, using a feature transformer of the TabNet, the subset of relevant features to generate a decision step output and information for a next processing step in the multiple sequential processing steps; and providing the information to the next processing step. The method also includes determining a final decision output by aggregating the decision step outputs generated for the multiple sequential processing steps.
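As a rough illustration of the sequential masking-and-aggregation loop in this abstract, the sketch below uses plain linear layers and softmax in place of the patented components; layer sizes, the mask normalization, and the split of the transformer output into decision/next-step halves are simplifying assumptions.

```python
import torch
import torch.nn as nn

class TabNetSketch(nn.Module):
    """Illustrative sketch of TabNet-style sequential feature selection;
    not the patented architecture."""
    def __init__(self, num_features, hidden_dim, num_steps, output_dim):
        super().__init__()
        self.num_steps = num_steps
        self.hidden_dim = hidden_dim
        self.masks = nn.ModuleList(
            [nn.Linear(hidden_dim, num_features) for _ in range(num_steps)])
        self.transformers = nn.ModuleList(
            [nn.Linear(num_features, 2 * hidden_dim) for _ in range(num_steps)])
        self.head = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):                                  # x: (batch, num_features)
        batch = x.shape[0]
        info = x.new_zeros(batch, self.hidden_dim)         # passed between steps
        decision = x.new_zeros(batch, self.hidden_dim)
        for step in range(self.num_steps):
            # Mask conditioned on the previous step selects relevant features.
            mask = torch.softmax(self.masks[step](info), dim=-1)
            out = self.transformers[step](x * mask)
            step_decision, info = out.chunk(2, dim=-1)
            # Aggregate decision step outputs across steps.
            decision = decision + torch.relu(step_decision)
        return self.head(decision)
```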
-
Publication Number: US11823058B2
Publication Date: 2023-11-21
Application Number: US17026145
Filing Date: 2020-09-18
Applicant: Google LLC
Inventor: Sercan Omer Arik , Jinsung Yoon , Tomas Jon Pfister
Abstract: A method includes obtaining a set of training samples. During each of a plurality of training iterations, the method also includes sampling a batch of training samples from the set of training samples. The method includes, for each training sample in the batch of training samples, determining, using a data value estimator, a selection probability. The selection probability for the training sample is based on estimator parameter values of the data value estimator. The method also includes selecting, based on the selection probabilities of each training sample, a subset of training samples from the batch of training samples, and determining, using a predictor model with the subset of training samples, performance measurements. The method also includes adjusting model parameter values of the predictor model based on the performance measurements, and updating the estimator parameter values of the data value estimator based on the performance measurements.
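The iteration described above can be pictured with the short sketch below; the Bernoulli sampling of the subset and the REINFORCE-style estimator update against a moving-average baseline are common choices for this kind of setup and are assumptions here, not necessarily the claimed mechanism.

```python
import torch
import torch.nn.functional as F

def training_iteration(estimator, predictor, opt_pred, opt_est,
                       batch_x, batch_y, val_x, val_y, baseline):
    # Selection probability per training sample from the data value estimator.
    probs = estimator(batch_x).squeeze(-1)                 # (batch,)
    selection = torch.bernoulli(probs)                     # sampled subset mask
    mask = selection.bool()

    # Train the predictor model on the selected subset.
    if mask.any():
        pred_loss = F.cross_entropy(predictor(batch_x[mask]), batch_y[mask])
        opt_pred.zero_grad(); pred_loss.backward(); opt_pred.step()

    # Performance measurement on held-out data drives the estimator update.
    with torch.no_grad():
        val_loss = F.cross_entropy(predictor(val_x), val_y)
    reward = baseline - val_loss.item()                    # lower loss => positive reward
    log_prob = (selection * torch.log(probs + 1e-8)
                + (1 - selection) * torch.log(1 - probs + 1e-8)).sum()
    est_loss = -reward * log_prob
    opt_est.zero_grad(); est_loss.backward(); opt_est.step()
    return val_loss.item()
```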
-
Publication Number: US20230120894A1
Publication Date: 2023-04-20
Application Number: US18045722
Filing Date: 2022-10-11
Applicant: Google LLC
Inventor: Sercan Omer Arik , Chen Xing , Zizhao Zhang , Tomas Jon Pfister
IPC: G06N3/08 , G06N3/04 , G06F18/214 , G06F18/2413 , G06F18/2431
Abstract: A method includes receiving a training data set including a plurality of training data subsets. From two or more training data subsets in the training data set, the method includes selecting a support set of training examples and a query set of training examples. The method includes determining, using the classification model, a centroid value for each respective class. For each training example in the query set of training examples, the method includes generating, using the classification model, a query encoding, determining a class distance measure, determining a ground-truth distance, and updating parameters of the classification model. For each training example in the query set of training examples identified as being misclassified, the method further includes generating a standard deviation value, sampling a new query encoding, and updating parameters of the confidence model based on the new query encoding.
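A compact sketch of the centroid and distance computations, plus the confidence-model resampling step for misclassified queries, appears below; the Gaussian parameterization of the resampling is an assumption, not necessarily the claimed formulation.

```python
import torch

def class_centroids(encoder, support_x, support_y, num_classes):
    """Centroid (prototype) per class from encoded support examples.
    Assumes every class appears at least once in the support set."""
    enc = encoder(support_x)                               # (n_support, dim)
    return torch.stack([enc[support_y == c].mean(dim=0) for c in range(num_classes)])

def query_distances(encoder, query_x, centroids):
    """Euclidean distance from each query encoding to every class centroid."""
    enc = encoder(query_x)                                 # (n_query, dim)
    return torch.cdist(enc, centroids)                     # (n_query, num_classes)

def resample_misclassified(confidence_model, query_encoding):
    """For a misclassified query, sample a new encoding from a Gaussian whose
    standard deviation comes from the confidence model (assumed parameterization)."""
    std = torch.nn.functional.softplus(confidence_model(query_encoding))
    return query_encoding + std * torch.randn_like(query_encoding)
```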
-
Publication Number: US20210034977A1
Publication Date: 2021-02-04
Application Number: US16945898
Filing Date: 2020-08-02
Applicant: Google LLC
Inventor: Sercan Omer Arik , Tomas Jon Pfister
Abstract: A method of interpreting tabular data includes receiving, at a deep tabular data learning network (TabNet) executing on data processing hardware, a set of features. For each of multiple sequential processing steps, the method also includes: selecting, using a sparse mask of the TabNet, a subset of relevant features of the set of features; processing, using a feature transformer of the TabNet, the subset of relevant features to generate a decision step output and information for a next processing step in the multiple sequential processing steps; and providing the information to the next processing step. The method also includes determining a final decision output by aggregating the decision step outputs generated for the multiple sequential processing steps.
-
Publication Number: US20250110940A1
Publication Date: 2025-04-03
Application Number: US18905090
Filing Date: 2024-10-02
Applicant: Google LLC
Inventor: Xin Yang Yak , Sercan Omer Arik , Yihe Dong , Javier Gonzalvo Fructuoso
Abstract: Methods, systems, and apparatuses, including computer programs encoded on computer storage media, for implementing a neural network that can perform one or more machine learning tasks on an input that includes data representing a given data structure. In particular, a language model encodes the data and a foundation neural network with an attention-based architecture generates the task output. Because of how the language-model-generated embeddings are defined and cached, the described techniques require significantly fewer computational resources for training and inference while also exceeding the prediction performance of conventional approaches on a variety of prediction tasks.
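The compute savings from caching language-model embeddings can be illustrated as below. Here encode_text is a hypothetical stand-in for a frozen language-model encoder, and the hash-seeded random vectors exist only to keep the sketch self-contained; neither reflects the patent's actual components.

```python
import hashlib
from functools import lru_cache

import torch

EMBED_DIM = 8  # toy size for the sketch

def encode_text(text: str) -> torch.Tensor:
    """Stand-in for a frozen language-model encoder: any function mapping a
    string to a fixed-size embedding would do here."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big") % (2**63 - 1)
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(EMBED_DIM, generator=gen)

@lru_cache(maxsize=100_000)
def cached_embedding(text: str) -> torch.Tensor:
    # Cell and column text repeats across rows and epochs, so each embedding is
    # computed once and reused, which is where the compute savings come from.
    return encode_text(text)

def encode_row(row: dict) -> torch.Tensor:
    """Stack one embedding per 'column: value' string for a downstream
    attention-based model."""
    return torch.stack([cached_embedding(f"{k}: {v}") for k, v in row.items()])
```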
-
Publication Number: US12106223B2
Publication Date: 2024-10-01
Application Number: US18333301
Filing Date: 2023-06-12
Applicant: Google LLC
Inventor: Sercan Omer Arik , Jinsung Yoon , Tomas Pfister
Abstract: A method includes obtaining a batch of training samples. For each particular training sample in the batch of training samples, the method includes generating, using a data value estimator model and the particular training sample, a corresponding predicted value of the particular training sample when used to train a machine learning model. The method includes selecting, based on the corresponding predicted values, a subset of the batch of training samples. For each particular training sample in the subset of the batch of training samples, the method includes determining, using the machine learning model and the particular training sample, a corresponding prediction performance measurement. The method includes adjusting one or more estimator parameter values of the data value estimator model based on the corresponding prediction performance measurements.
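One way to picture this value-then-select loop is sketched below; the top-k selection and the use of per-sample loss as a regression target for the estimator are simplifying assumptions made to keep the sketch self-contained, not the claimed update rule.

```python
import torch
import torch.nn.functional as F

def valuation_iteration(estimator, model, opt_model, opt_est, batch_x, batch_y, k):
    # Predicted value of each training sample, squashed to (0, 1).
    values = torch.sigmoid(estimator(batch_x).squeeze(-1))
    top = values.topk(k).indices                           # subset with highest value

    # Train the downstream machine learning model on the selected subset.
    loss = F.cross_entropy(model(batch_x[top]), batch_y[top])
    opt_model.zero_grad(); loss.backward(); opt_model.step()

    # Per-sample prediction performance over the selected subset.
    with torch.no_grad():
        per_sample = F.cross_entropy(model(batch_x[top]), batch_y[top], reduction="none")
    target = (-per_sample).sigmoid()                       # low loss -> high value
    est_loss = F.mse_loss(values[top], target)             # assumed proxy objective
    opt_est.zero_grad(); est_loss.backward(); opt_est.step()
    return est_loss.item()
```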
-
Publication Number: US20240185043A1
Publication Date: 2024-06-06
Application Number: US18389010
Filing Date: 2023-11-13
Applicant: Google LLC
Inventor: Jinsung Yoon , Michel Jonathan Mizrahi , Nahid Farhady Ghalaty , Thomas Dunn Henry Jarvinen , Ashwin Sura Ravi , Peter Robert Brune , Fanyu Kong , David Roger Anderson , George Lee , Farhana Bandukwala , Eliezer Yosef Kanal , Sercan Omer Arik , Tomas Pfister
IPC: G06N3/0475 , G06N3/0455
CPC classification number: G06N3/0475 , G06N3/0455
Abstract: The present disclosure provides a generative modeling framework for generating highly realistic and privacy-preserving synthetic records for heterogeneous time-series data, such as electronic health record data, financial data, etc. The generative modeling framework is based on a two-stage model that includes sequential encoder-decoder networks and generative adversarial networks (GANs).
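A skeletal version of the two-stage design reads as follows; the GRU-based encoder-decoder and the latent-space generator are illustrative choices, not the framework's actual architecture, and the GAN discriminator and training loops are omitted.

```python
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    """Stage 1: sequential encoder-decoder mapping heterogeneous time-series
    records into a compact latent sequence and back."""
    def __init__(self, feat_dim, latent_dim):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, latent_dim, batch_first=True)
        self.decoder = nn.GRU(latent_dim, feat_dim, batch_first=True)

    def forward(self, x):                       # x: (batch, time, feat_dim)
        z, _ = self.encoder(x)
        recon, _ = self.decoder(z)
        return z, recon

class LatentGenerator(nn.Module):
    """Stage 2: GAN generator producing synthetic latent sequences, which the
    frozen stage-1 decoder turns into synthetic records."""
    def __init__(self, noise_dim, latent_dim):
        super().__init__()
        self.rnn = nn.GRU(noise_dim, latent_dim, batch_first=True)

    def forward(self, noise):                   # noise: (batch, time, noise_dim)
        z, _ = self.rnn(noise)
        return z
```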
-
Publication Number: US20230237260A1
Publication Date: 2023-07-27
Application Number: US18150277
Filing Date: 2023-01-05
Applicant: Google LLC
Inventor: Jinsung Yoon , Kihyuk Sohn , Chun-Liang Li , Sercan Omer Arik
IPC: G06F40/216 , G06N5/022
CPC classification number: G06F40/216 , G06N5/022
Abstract: Aspects of the disclosure are directed to a Semi-supervised Pseudo-labeler Anomaly Detection with Ensembling (SPADE) framework that is not limited by the assumption that labeled and unlabeled data come from the same distribution. SPADE utilizes an ensemble of one-class classifiers as the pseudo-labeler to improve the robustness of pseudo-labeling with distribution mismatch. Partial matching automatically selects critical hyper-parameters for pseudo-labeling without validation data, which is crucial with a limited amount of labeled data.
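A small sketch of the ensemble pseudo-labeler idea, using off-the-shelf one-class classifiers, is given below; the choice of ensemble members and the agreement threshold are assumptions, and the partial-matching hyper-parameter selection is not shown.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def pseudo_label(X_normal, X_unlabeled, agreement=1.0):
    """Fit an ensemble of one-class classifiers on labeled-normal data and
    pseudo-label unlabeled points only where the ensemble agrees strongly."""
    ensemble = [IsolationForest(random_state=0), OneClassSVM(nu=0.1)]
    votes = []
    for clf in ensemble:
        clf.fit(X_normal)
        votes.append(clf.predict(X_unlabeled) == -1)   # True = flagged anomalous
    anomaly_frac = np.stack(votes).mean(axis=0)        # fraction of members flagging

    labels = np.full(len(X_unlabeled), -1)             # -1 = leave unlabeled
    labels[anomaly_frac >= agreement] = 1              # confident anomaly
    labels[anomaly_frac <= 1 - agreement] = 0          # confident normal
    return labels
```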
-
Publication Number: US20210089870A1
Publication Date: 2021-03-25
Application Number: US17026145
Filing Date: 2020-09-18
Applicant: Google LLC
Inventor: Sercan Omer Arik , Jinsung Yoon , Tomas Jon Pfister
Abstract: A method includes obtaining a set of training samples. During each of a plurality of training iterations, the method also includes sampling a batch of training samples from the set of training samples. The method includes, for each training sample in the batch of training samples, determining, using a data value estimator, a selection probability. The selection probability for the training sample is based on estimator parameter values of the data value estimator. The method also includes selecting, based on the selection probabilities of each training sample, a subset of training samples from the batch of training samples, and determining, using a predictor model with the subset of training samples, performance measurements. The method also includes adjusting model parameter values of the predictor model based on the performance measurements, and updating the estimator parameter values of the data value estimator based on the performance measurements.