-
Publication Number: US12265898B2
Publication Date: 2025-04-01
Application Number: US18409520
Application Date: 2024-01-10
Applicant: Google LLC
Inventor: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
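The abstract describes training with an entropy penalty on a latent ("reparameterization") representation of the model weights, with a learned probability model supplying the code-length estimate and a hyperparameter setting the rate-error trade-off. The following is a minimal PyTorch sketch of that idea only, not the patented implementation: the class name ReparameterizedLinear, the latent dimensionality, the factorized Gaussian prior, and the lambda value are illustrative assumptions, and the quantization and post-training arithmetic-coding stage are omitted.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReparameterizedLinear(nn.Module):
    """Linear layer whose weights live in a latent (reparameterization) space.

    Latents are decoded into ordinary weights by a small affine decoder, and a
    learned factorized Gaussian model assigns them a code length (rate) that is
    added to the training loss as an entropy penalty.
    """

    def __init__(self, in_features, out_features, latent_dim=4):
        super().__init__()
        # Latent representation of the weight matrix (one latent vector per weight).
        self.latents = nn.Parameter(
            0.1 * torch.randn(out_features, in_features, latent_dim))
        # Decoder mapping latent vectors back to scalar weights.
        self.decoder = nn.Linear(latent_dim, 1)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Learned probability model over latents: per-dimension mean and log-scale.
        self.prior_mean = nn.Parameter(torch.zeros(latent_dim))
        self.prior_log_scale = nn.Parameter(torch.zeros(latent_dim))

    def rate(self):
        """Entropy penalty: negative log-likelihood of the latents (in bits)
        under the learned prior; a proxy for the post-training code length."""
        dist = torch.distributions.Normal(self.prior_mean, self.prior_log_scale.exp())
        return -dist.log_prob(self.latents).sum() / math.log(2.0)

    def forward(self, x):
        weight = self.decoder(self.latents).squeeze(-1)  # decode latents -> weights
        return F.linear(x, weight, self.bias)

# Joint rate-error objective: task loss plus lambda-weighted entropy penalty.
layer = ReparameterizedLinear(16, 8)
x, y = torch.randn(32, 16), torch.randn(32, 8)
lam = 1e-4  # rate-error trade-off hyperparameter
loss = F.mse_loss(layer(x), y) + lam * layer.rate()
loss.backward()
```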
-
Publication Number: US20240220863A1
Publication Date: 2024-07-04
Application Number: US18409520
Application Date: 2024-01-10
Applicant: Google LLC
Inventor: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
-
Publication Number: US11907818B2
Publication Date: 2024-02-20
Application Number: US18165211
Application Date: 2023-02-06
Applicant: Google LLC
Inventor: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
-
Publication Number: US20230186166A1
Publication Date: 2023-06-15
Application Number: US18165211
Application Date: 2023-02-06
Applicant: Google LLC
Inventor: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.
-
Publication Number: US11574232B2
Publication Date: 2023-02-07
Application Number: US15931016
Application Date: 2020-05-13
Applicant: Google LLC
Inventor: Deniz Oktay, Saurabh Singh, Johannes Balle, Abhinav Shrivastava
Abstract: Example aspects of the present disclosure are directed to systems and methods that learn a compressed representation of a machine-learned model (e.g., neural network) via representation of the model parameters within a reparameterization space during training of the model. In particular, the present disclosure describes an end-to-end model weight compression approach that employs a latent-variable data compression method. The model parameters (e.g., weights and biases) are represented in a “latent” or “reparameterization” space, amounting to a reparameterization. In some implementations, this space can be equipped with a learned probability model, which is used first to impose an entropy penalty on the parameter representation during training, and second to compress the representation using arithmetic coding after training. The proposed approach can thus maximize accuracy and model compressibility jointly, in an end-to-end fashion, with the rate-error trade-off specified by a hyperparameter.