User-level Privacy Preservation for Federated Machine Learning

    Publication No.: US20230047092A1

    Publication Date: 2023-02-16

    Application No.: US17663008

    Application Date: 2022-05-11

    Abstract: User-level privacy preservation is implemented within federated machine learning. An aggregation server may distribute a machine learning model to multiple users, each holding a respective private dataset. Individual users may train the model using their local, private dataset to generate one or more parameter updates. Prior to sending the generated parameter updates to the aggregation server for incorporation into the machine learning model, a user may modify the parameter updates by applying respective noise values to individual ones of the parameter updates to ensure differential privacy for the dataset private to the user. The aggregation server may then receive the respective modified parameter updates from the multiple users and aggregate them into a single set of parameter updates to update the machine learning model. The federated machine learning may further include iteratively performing said sending, training, modifying, receiving, aggregating, and updating steps.
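
    The client-side step described in this abstract can be pictured with a short NumPy sketch. Only "apply noise to the parameter update before upload, then average on the server" comes from the text; the clipping of the update, the function names privatize_update and aggregate, and the clip_norm and noise_multiplier parameters are illustrative assumptions, not details from the patent.

        import numpy as np

        def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
            """Clip a parameter update and add Gaussian noise before sending it to the server."""
            rng = rng or np.random.default_rng()
            norm = np.linalg.norm(update)
            clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound the update's norm (assumed step)
            noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
            return clipped + noise

        def aggregate(updates):
            """Server side: average the modified updates received from all users."""
            return np.mean(np.stack(updates), axis=0)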

    Hierarchical Gradient Averaging For Enforcing Subject Level Privacy

    Publication No.: US20230394374A1

    Publication Date: 2023-12-07

    Application No.: US17805674

    Application Date: 2022-06-06

    CPC classification number: G06N20/20 G06F21/6245

    Abstract: Hierarchical gradient averaging is performed as part of training a machine learning model to enforce subject level privacy. A sample of data items from a training data set is identified and respective gradients for the data items are determined. The gradients are then clipped. Each subject's clipped gradients in the sample are averaged. A noise value is added to a sum of the averaged gradients of each of the subjects in the sample. An average gradient for the entire sample is determined from the averaged gradients of the individual subjects with the added noise value. This average gradient for the entire sample is used for determining machine learning model updates.
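
    The averaging hierarchy described above (clip per item, average per subject, add noise to the sum of per-subject averages, then average over subjects) can be sketched as follows. The Gaussian noise, the clip_norm and noise_std parameters, and the helper name hierarchical_average are assumptions made for illustration.

        import numpy as np
        from collections import defaultdict

        def hierarchical_average(grads, subjects, clip_norm=1.0, noise_std=1.0, rng=None):
            """grads: list of per-item gradient vectors; subjects: matching subject ids."""
            rng = rng or np.random.default_rng()
            per_subject = defaultdict(list)
            for g, s in zip(grads, subjects):
                norm = np.linalg.norm(g)
                per_subject[s].append(g * min(1.0, clip_norm / (norm + 1e-12)))  # clip each item's gradient
            # Average each subject's clipped gradients, then sum across subjects.
            subject_avgs = [np.mean(np.stack(gs), axis=0) for gs in per_subject.values()]
            total = np.sum(np.stack(subject_avgs), axis=0)
            total += rng.normal(0.0, noise_std * clip_norm, size=total.shape)    # noise on the sum
            return total / len(subject_avgs)   # average gradient for the entire sample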

    Decentralized Group Privacy in Cross-Silo Federated Learning

    Publication No.: US20240394597A1

    Publication Date: 2024-11-28

    Application No.: US18597771

    Application Date: 2024-03-06

    Abstract: Federated training of a machine learning model with enforcement of subject level privacy is implemented. Respective samples of data items from a training data set are generated at multiple nodes of a federated machine learning system. Noise values are determined for individual ones of the sampled data items according to respective counts of data items of particular subjects and the cumulative counts of the items of the subjects. Respective gradients for the data items are then determined. The gradients are then clipped and the noise values are applied. Each subject's noisy clipped gradients in the sample are then aggregated. The aggregated gradients for the entire sample are then used for determining machine learning model updates.
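
    A rough sketch of how per-item noise might be derived from the per-subject and cumulative counts appears below. The abstract does not give the actual formula, so the counts[s] / cumulative scaling, the Gaussian noise, and the helper name noisy_subject_gradients are purely illustrative assumptions.

        import numpy as np
        from collections import Counter, defaultdict

        def noisy_subject_gradients(grads, subjects, clip_norm=1.0, base_std=1.0, rng=None):
            """Clip per-item gradients, add count-scaled noise, and aggregate per subject."""
            rng = rng or np.random.default_rng()
            counts = Counter(subjects)           # items per subject in this sample
            cumulative = sum(counts.values())    # cumulative item count across subjects
            per_subject = defaultdict(list)
            for g, s in zip(grads, subjects):
                norm = np.linalg.norm(g)
                clipped = g * min(1.0, clip_norm / (norm + 1e-12))
                scale = base_std * counts[s] / cumulative       # assumed count-based noise scale
                per_subject[s].append(clipped + rng.normal(0.0, scale, size=g.shape))
            # Aggregate each subject's noisy clipped gradients, then combine across subjects.
            subject_sums = [np.sum(np.stack(gs), axis=0) for gs in per_subject.values()]
            return np.mean(np.stack(subject_sums), axis=0)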

    Subject-Level Granular Differential Privacy in Federated Learning

    Publication No.: US20230052231A1

    Publication Date: 2023-02-16

    Application No.: US17663009

    Application Date: 2022-05-11

    Abstract: Group-level privacy preservation is implemented within federated machine learning. An aggregation server may distribute a machine learning model to multiple users, each holding a respective private dataset. The private datasets may individually include multiple items associated with a single group. Individual users may train the model using their local, private dataset to generate one or more parameter updates and to determine a count of the largest number of items associated with any single group among a number of groups in the dataset. Parameter updates generated by the individual users may be modified by applying respective noise values to individual ones of the parameter updates according to the respective counts to ensure differential privacy for the groups of the dataset. The aggregation server may aggregate the updates into a single set of parameter updates to update the machine learning model.
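
    One way to read the count-based noise calibration is sketched below. Scaling the noise by the largest per-group item count is an assumption about how the count is used, and the function name privatize_by_group and its parameters are likewise hypothetical.

        import numpy as np
        from collections import Counter

        def privatize_by_group(update, group_labels, clip_norm=1.0, noise_multiplier=1.0, rng=None):
            """Clip a parameter update and add noise scaled by the largest per-group item count."""
            rng = rng or np.random.default_rng()
            max_count = max(Counter(group_labels).values())   # most items tied to any single group
            norm = np.linalg.norm(update)
            clipped = update * min(1.0, clip_norm / (norm + 1e-12))
            # Scale the noise with the maximum per-group count so group-level sensitivity is covered (assumed).
            sigma = noise_multiplier * clip_norm * max_count
            return clipped + rng.normal(0.0, sigma, size=clipped.shape)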

    Privacy preserving collaborative learning with domain adaptation

    Publication No.: US11443240B2

    Publication Date: 2022-09-13

    Application No.: US16829433

    Application Date: 2020-03-25

    Abstract: Herein are techniques for domain adaptation of a machine learning (ML) model. These techniques impose differential privacy onto federated learning by the ML model. In an embodiment, each of many client devices receive, from a server, coefficients of a general ML model. For respective new data point(s), each client device operates as follows. Based on the new data point(s), a respective private ML model is trained. Based on the new data point(s), respective gradients are calculated for the coefficients of the general ML model. Random noise is added to the gradients to generate respective noisy gradients. A combined inference may be generated based on: the private ML model, the general ML model, and one of the new data point(s). The noisy gradients are sent to the server. The server adjusts the general ML model based on the noisy gradients from the client devices. This client/server process may be repeated indefinitely.
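
    A compact sketch of one client round, under the assumption of a simple linear model trained with squared error, is shown below. The one-step private training, the learning rate, the noise standard deviation, and the alpha mixture weight used for the combined inference are placeholders chosen for illustration, not values or structures taken from the patent.

        import numpy as np

        def client_round(general_coef, x, y, noise_std=0.1, lr=0.01, alpha=0.5, rng=None):
            """One client round: train a private model, compute noisy gradients, form a combined inference."""
            rng = rng or np.random.default_rng()
            # Gradient of squared error for a linear model, taken w.r.t. the general model's coefficients.
            grad = (general_coef @ x - y) * x
            # Train a private model on the new data point (here: a single gradient step from the general model).
            private_coef = general_coef - lr * grad
            # Add random noise to the gradients before sending them to the server.
            noisy_grad = grad + rng.normal(0.0, noise_std, size=grad.shape)
            # Combined inference mixes the private and general models on the new data point.
            prediction = alpha * (private_coef @ x) + (1 - alpha) * (general_coef @ x)
            return noisy_grad, prediction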
