-
Publication number: US20240289926A1
Publication date: 2024-08-29
Application number: US18564915
Filing date: 2022-05-27
Applicant: Google LLC
Inventor: Carlos Riquelme Ruiz , André Susano Pinto , Basil Mustafa , Daniel M. Keysers , Joan Puigcerver i Perez , Maxim Neumann , Neil Matthew Tinmouth Houlsby , Rodolphe Jenatton
IPC: G06T5/60
CPC classification number: G06T5/60 , G06T2207/20084
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating predictions about images. One of the systems includes a neural network comprising a sequence of one or more network blocks that are each configured to perform operations comprising: obtaining a block input that represents an intermediate representation of an input image; determining a plurality of patches of the block input or of an updated representation of the block input, wherein each patch comprises a different subset of elements of the block input or of the updated representation of the block input; assigning each patch to one or more respective expert modules of a plurality of expert modules of the network block; for each patch of the plurality of patches, processing the patch using the corresponding expert modules to generate respective module outputs; and generating a block output by combining the module outputs.
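The routing this abstract walks through can be illustrated concretely. Below is a minimal sketch, not the patented implementation: each patch of a block input is scored by a router, assigned to its top-scoring expert module, and the per-patch module outputs are recombined into the block output. The layer shapes, the top-1 assignment rule, and the MLP experts are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchMoEBlock(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each patch per expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (num_patches, dim), one row per patch of the block input.
        scores = self.router(patches)              # (num_patches, num_experts)
        assignment = scores.argmax(dim=-1)         # assign each patch to one expert
        out = torch.zeros_like(patches)
        for e, expert in enumerate(self.experts):
            mask = assignment == e
            if mask.any():
                out[mask] = expert(patches[mask])  # respective module outputs
        return out                                 # combined block output

block = PatchMoEBlock(dim=64)
x = torch.randn(16, 64)                            # 16 patches of a block input
print(block(x).shape)                              # torch.Size([16, 64])
```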
-
Publication number: US20240169629A1
Publication date: 2024-05-23
Application number: US18513031
Filing date: 2023-11-17
Applicant: Google LLC
IPC: G06T11/60 , G06F40/284 , G06V10/74 , G06V10/764 , G06V10/774 , G06V10/776 , G06V20/70
CPC classification number: G06T11/60 , G06F40/284 , G06V10/761 , G06V10/764 , G06V10/774 , G06V10/776 , G06V20/70 , G06V10/945
Abstract: A first image and textual content associated with the first image are obtained. A second image that depicts the textual content associated with the first image is rendered. The first image and the second image are processed with a machine-learned encoding model to respectively obtain a first image embedding and a second image embedding in an image embedding space that includes a plurality of image embeddings. The machine-learned encoding model is trained based on a difference between the first image embedding and the second image embedding.
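As a rough illustration of the training signal described here, the sketch below embeds a first image and a second image (the rendered text) with the same encoder and computes a loss from the difference between the two embeddings. The toy encoder, input sizes, and the cosine-distance form of the loss are assumptions, not details from the patent.

```python
import torch
import torch.nn.functional as F

def embedding_difference_loss(encoder, image: torch.Tensor,
                              rendered_text_image: torch.Tensor) -> torch.Tensor:
    z_img = F.normalize(encoder(image), dim=-1)                # first image embedding
    z_txt = F.normalize(encoder(rendered_text_image), dim=-1)  # second image embedding
    # Pull each paired embedding together in the shared image embedding space.
    return (1.0 - (z_img * z_txt).sum(dim=-1)).mean()          # cosine-distance loss

# Toy stand-ins: a linear encoder and random tensors in place of real images
# and of text rendered into an image.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
imgs = torch.randn(8, 3, 32, 32)
rendered = torch.randn(8, 3, 32, 32)
print(embedding_difference_loss(encoder, imgs, rendered))
```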
-
Publication number: US12272442B2
Publication date: 2025-04-08
Application number: US17551050
Filing date: 2021-12-14
Applicant: Google LLC
Inventor: Xiaohua Zhai , Sylvain Gelly , Alexander Kolesnikov , Yin Ching Jessica Yung , Joan Puigcerver i Perez , Lucas Klaus Beyer , Neil Matthew Tinmouth Houlsby , Wen Yau Aaron Loh , Alan Prasana Karthikesalingam , Basil Mustafa , Jan Freyberg , Patricia Leigh MacWilliams , Vivek Natarajan
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network to perform a downstream computer vision task. One of the methods includes pre-training an initial neural network that shares layers with the neural network to perform an initial computer vision task and then training the neural network on the downstream computer vision task.
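A minimal sketch of this two-stage recipe, under illustrative assumptions: layers shared with the downstream network are pre-trained on an initial computer vision task, then reused when training on the downstream task. The heads, tasks, and sizes below are stand-ins, not details from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

shared = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
initial_head = nn.Linear(256, 1000)    # head for the initial (pre-training) task
downstream_head = nn.Linear(256, 10)   # head for the downstream task

def pretrain_loss(x, y):
    # Stage 1: train the shared layers on the initial task.
    return F.cross_entropy(initial_head(shared(x)), y)

def downstream_loss(x, y):
    # Stage 2: the downstream network reuses the pre-trained shared layers.
    return F.cross_entropy(downstream_head(shared(x)), y)
```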
-
Publication number: US20240153256A1
Publication date: 2024-05-09
Application number: US18051106
Filing date: 2022-10-31
Applicant: Google LLC
Inventor: Daniel Keysers , Xiaohua Zhai , Xiao Wang , Lucas Beyer , Basil Mustafa , Andreas Steiner , Alexander Kolesnikov
IPC: G06V10/778
CPC classification number: G06V10/778
Abstract: A method may include obtaining a pretrained image encoder and a training sample comprising a training image and a training text string corresponding to the training image. The method may also include initializing a text encoder in an untrained state, determining, using the pretrained image encoder and based on the training image, a first latent representation of the training image, and determining, using the text encoder and based on the training text string, a second latent representation of the training text string. The method may further include determining a loss value based on the first latent representation and the second latent representation, updating, based on the loss value, one or more parameters of the text encoder while holding fixed parameters of the pretrained image encoder, and outputting the text encoder in a trained state.
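The contrastive setup in this abstract can be sketched as follows: the image encoder is pre-trained and held fixed, and only the text encoder's parameters are updated from a loss over the two latent representations. The toy encoders, the similarity-based contrastive loss, and the fixed temperature are simplified assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
for p in image_encoder.parameters():
    p.requires_grad = False                     # hold pretrained image encoder fixed

text_encoder = nn.Sequential(                   # initialized in an untrained state
    nn.Embedding(1000, 64), nn.Flatten(), nn.Linear(64 * 16, 128))
opt = torch.optim.Adam(text_encoder.parameters())  # update only the text encoder

def train_step(images, token_ids):
    z_img = F.normalize(image_encoder(images), dim=-1)    # first latent representation
    z_txt = F.normalize(text_encoder(token_ids), dim=-1)  # second latent representation
    logits = z_img @ z_txt.T / 0.07                       # image-text similarities
    targets = torch.arange(len(images))                   # matched pairs are positives
    loss = F.cross_entropy(logits, targets)               # loss over both representations
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss

print(train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 1000, (8, 16))))
```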
-
Publication number: US20240256835A1
Publication date: 2024-08-01
Application number: US18424420
Filing date: 2024-01-26
Applicant: Google LLC
Inventor: Mostafa Dehghani , Josip Djolonga , Jonathan Heek , Basil Mustafa , Piotr Michal Padlewski , Justin Morgan Gilmer , Neil Matthew Tinmouth Houlsby
IPC: G06N3/0455 , G06N3/088
CPC classification number: G06N3/0455 , G06N3/088
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing an input through each of a plurality of layers of a neural network to generate an output using a plurality of hardware accelerators. The plurality of layers comprise a fully connected layer having a plurality of parameters arranged in a row dimension and a column dimension. One of the methods comprises: generating a plurality of parameter blocks by partitioning the plurality of parameters along the row dimension and the column dimension; determining a ratio of the number of parameters along the row dimension to the number of parameters along the column dimension; determining, based on the ratio, whether to use row sharding or column sharding with the plurality of hardware accelerators; and calculating the output for the fully connected layer using the determined sharding.
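A minimal sketch of the sharding decision this abstract describes: compare the row and column extents of a fully connected layer's parameter matrix and shard along one dimension across accelerators. The specific heuristic shown (ratio greater than one implies row sharding) is an assumption for illustration only.

```python
import numpy as np

def shard_fc_params(params: np.ndarray, num_accelerators: int):
    rows, cols = params.shape
    ratio = rows / cols                  # row extent relative to column extent
    if ratio > 1.0:                      # row sharding: split parameter rows
        blocks = np.array_split(params, num_accelerators, axis=0)
    else:                                # column sharding: split parameter columns
        blocks = np.array_split(params, num_accelerators, axis=1)
    return blocks, ("row" if ratio > 1.0 else "column")

blocks, mode = shard_fc_params(np.zeros((4096, 1024)), num_accelerators=4)
print(mode, [b.shape for b in blocks])   # row [(1024, 1024), (1024, 1024), ...]
```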
-
Publication number: US20230196211A1
Publication date: 2023-06-22
Application number: US18008293
Filing date: 2021-06-07
Applicant: Google LLC
Inventor: Carlos Riquelme Ruiz , André Susano Pinto , Joan Puigcerver , Basil Mustafa , Neil Matthew Tinmouth Houlsby , Sylvain Gelly , Cedric Benjamin Renggli , Daniel Martin Keysers
IPC: G06N20/20
CPC classification number: G06N20/20
Abstract: Generally, the present disclosure is directed to systems and methods that provide a simple, scalable, yet effective strategy to perform transfer learning with a mixture of experts (MoE). In particular, the transfer of pre-trained representations can improve sample efficiency and reduce computational requirements for new tasks. However, representations used for transfer are usually generic, and are not tailored to a particular distribution of downstream tasks. In contrast, example systems and methods of the present disclosure use expert representations for transfer with a simple, yet effective, strategy.
-
Publication number: US20220108171A1
Publication date: 2022-04-07
Application number: US17488166
Filing date: 2021-09-28
Applicant: Google LLC
Inventor: Joan Puigcerver i Perez , Basil Mustafa , André Susano Pinto , Carlos Riquelme Ruiz , Neil Matthew Tinmouth Houlsby , Daniel M. Keysers
Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training neural networks using transfer learning. One of the methods includes training a neural network to perform a first prediction task, including: obtaining trained model parameters for each of a plurality of candidate neural networks, wherein each candidate neural network has been pre-trained to perform a respective second prediction task that is different from the first prediction task; obtaining a plurality of training examples corresponding to the first prediction task; selecting a proper subset of the plurality of candidate neural networks using the plurality of training examples; generating, for each candidate neural network, one or more fine-tuned neural networks, wherein each fine-tuned neural network is generated by updating the model parameters of the candidate neural network using the plurality of training examples; and determining model parameters for the neural network using the respective fine-tuned neural networks.
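A hedged sketch of the pipeline in this abstract: obtain pre-trained candidate networks, select a proper subset of them using the new task's training examples, fine-tune each selected candidate, and determine the final model from the fine-tuned networks. Scoring candidates with an `evaluate` callable and keeping the single best fine-tuned network are assumptions; the patent leaves the selection and combination criteria open.

```python
import copy

def transfer_learn(candidates, examples, evaluate, train_step, keep=2):
    # Select a proper subset of candidates using the training examples.
    ranked = sorted(candidates, key=lambda m: evaluate(m, examples), reverse=True)
    subset = ranked[:keep]
    # Generate a fine-tuned network from each selected candidate.
    finetuned = []
    for model in subset:
        tuned = copy.deepcopy(model)       # preserve the pre-trained parameters
        for batch in examples:
            train_step(tuned, batch)       # update parameters on the first task
        finetuned.append(tuned)
    # Determine the final model parameters from the fine-tuned networks.
    return max(finetuned, key=lambda m: evaluate(m, examples))
```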
-
Publication number: US20250139432A1
Publication date: 2025-05-01
Application number: US18835613
Filing date: 2023-02-06
Applicant: Google LLC
Inventor: Cédric Benjamin Renggli , Carlos Riquelme Ruiz , André Susano Pinto , Basil Mustafa , Joan Puigcerver i Perez , Neil Matthew Tinmouth Houlsby
IPC: G06N3/08
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for performing a machine learning task on a network input to generate a network output. In one aspect, one of the systems includes a neural network configured to perform the machine learning task, the neural network including one or more merger neural network blocks that each generate a block output sequence having fewer elements than the block input sequence processed by that merger neural network block.
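A minimal sketch of a merger block with the property the abstract names: the block maps an input sequence to an output sequence with fewer elements. Merging fixed-size groups of elements with a learned projection is an assumption; the patent covers the shorter-output property generally.

```python
import torch
import torch.nn as nn

class MergerBlock(nn.Module):
    def __init__(self, dim: int, group: int = 2):
        super().__init__()
        self.group = group
        self.merge = nn.Linear(group * dim, dim)  # fuse each group into one element

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, length, dim) with length divisible by `group`.
        b, n, d = seq.shape
        grouped = seq.reshape(b, n // self.group, self.group * d)
        return self.merge(grouped)                # (batch, length / group, dim)

block = MergerBlock(dim=32)
print(block(torch.randn(4, 16, 32)).shape)        # torch.Size([4, 8, 32])
```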
-
Publication number: US20240354593A1
Publication date: 2024-10-24
Application number: US18355672
Filing date: 2023-07-20
Applicant: Google LLC
Inventor: Rodolphe René Willy Jenatton , Mark Patrick Collier , Effrosyni Kokiopoulou , Basil Mustafa , Neil Matthew Tinmouth Houlsby , Jesse Berent
IPC: G06N3/0985 , G06N3/048
CPC classification number: G06N3/0985 , G06N3/048
Abstract: HET classifiers, which learn a multivariate Gaussian distribution over prediction logits, perform well on image classification problems with hundreds to thousands of classes. However, compared to standard classifiers (e.g., deterministic (DET) classifiers), they introduce extra parameters that scale linearly with the number of classes, which makes them infeasible for larger-scale problems. In addition, HET classifiers introduce a temperature hyperparameter that ordinarily must be tuned. HET classifiers are disclosed whose parameter count, compared to a DET classifier, scales independently of the number of classes. In the large-scale settings of the disclosed embodiments, the need to tune the temperature hyperparameter is removed by learning it directly on the training data.
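One way to realize the scaling property in this abstract, sketched under assumptions: model the Gaussian noise in the fixed-size pre-logit representation rather than over the C-dimensional logits, so the extra parameters grow with the representation dimension instead of the class count, and learn the temperature as a trainable parameter. The low-rank-plus-diagonal noise form and all sizes below are illustrative, not the patented construction.

```python
import torch
import torch.nn as nn

class HetHead(nn.Module):
    def __init__(self, dim: int, num_classes: int, rank: int = 4, samples: int = 8):
        super().__init__()
        self.classifier = nn.Linear(dim, num_classes)
        # Extra parameters scale with `dim` and `rank`, not with `num_classes`.
        self.low_rank = nn.Parameter(0.05 * torch.randn(rank, dim))
        self.log_diag = nn.Parameter(torch.zeros(dim))
        self.log_temp = nn.Parameter(torch.zeros(()))  # temperature learned from data
        self.rank, self.samples = rank, samples

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Sample Gaussian perturbations of the representation, not of the logits.
        eps_r = torch.randn(self.samples, h.shape[0], self.rank)
        eps_d = torch.randn(self.samples, *h.shape)
        noisy = h + eps_r @ self.low_rank + eps_d * self.log_diag.exp()
        logits = self.classifier(noisy) / self.log_temp.exp()
        return logits.softmax(-1).mean(0).log()   # MC-averaged predictive log-probs

head = HetHead(dim=64, num_classes=100)
print(head(torch.randn(8, 64)).shape)             # torch.Size([8, 100])
```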
-
Publication number: US20230260652A1
Publication date: 2023-08-17
Application number: US18012187
Filing date: 2021-12-10
Applicant: Google LLC
Inventor: Shekoofeh Azizi , Wen Yau Aaron Loh , Zachary William Beaver , Ting Chen , Jonathan Paul Deaton , Jan Freyberg , Alan Prasana Karthikesalingam , Simon Kornblith , Basil Mustafa , Mohammad Norouzi , Vivek Natarajan , Fiona Keleher Ryan
CPC classification number: G16H50/20 , G06T7/0012 , G06V10/761 , G16H30/40 , G16H50/70 , G06T2207/20081 , G06T2207/20132
Abstract: Systems and methods can perform self-supervised machine learning for improved medical image analysis. As one example, self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabeled medical images from the target domain of interest, followed by fine-tuning on labeled medical images from the target domain, significantly improves the accuracy of medical image classifiers such as, for example, diagnostic models. Another example aspect of the present disclosure is directed to a novel Multi-Instance Contrastive Learning (MICLe) method that uses multiple different medical images that share one or more attributes (e.g., multiple images that depict the same underlying pathology and/or the same patient) to construct more informative positive pairs for self-supervised learning.
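A hedged sketch of the MICLe pairing idea from this abstract: contrastive positive pairs are built from two distinct images that share an attribute (e.g., the same patient), rather than from two augmentations of a single image. The encoder, the paired-batch layout, and the InfoNCE-style loss are simplified assumptions.

```python
import torch
import torch.nn.functional as F

def micle_loss(encoder, view_a, view_b, temperature: float = 0.1):
    # view_a[i] and view_b[i] are different images of the same patient/pathology,
    # forming the more informative positive pair described in the abstract.
    z_a = F.normalize(encoder(view_a), dim=-1)
    z_b = F.normalize(encoder(view_b), dim=-1)
    logits = z_a @ z_b.T / temperature       # cross-view similarities
    targets = torch.arange(len(view_a))      # matching rows are the positives
    return F.cross_entropy(logits, targets)

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
print(micle_loss(encoder, torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)))
```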