ADAPTIVE ASYNCHRONOUS FEDERATED LEARNING

    Publication No.: US20210342749A1

    Publication Date: 2021-11-04

    Application No.: US16861284

    Filing Date: 2020-04-29

    IPC Classification: G06N20/20

    Abstract: Techniques for adaptive asynchronous federated learning are described herein. An aspect includes providing a first version of a global parameter to a first client and a second client. Another aspect includes receiving, from the first client, a first gradient, wherein the first gradient was computed by the first client based on the first version of the global parameter and a respective first local dataset of the first client. Another aspect includes determining whether the first version of the global parameter matches a most recent version of the global parameter. Another aspect includes, based on determining that the first version of the global parameter does not match the most recent version of the global parameter, selecting a version of the global parameter. Another aspect includes aggregating the first gradient with the selected version of the global parameter to determine an updated version of the global parameter.

    Adaptive asynchronous federated learning

    Publication No.: US11574254B2

    Publication Date: 2023-02-07

    Application No.: US16861284

    Filing Date: 2020-04-29

    IPC Classification: G06N20/20

    Abstract: Techniques for adaptive asynchronous federated learning are described herein. An aspect includes providing a first version of a global parameter to a first client and a second client. Another aspect includes receiving, from the first client, a first gradient, wherein the first gradient was computed by the first client based on the first version of the global parameter and a respective first local dataset of the first client. Another aspect includes determining whether the first version of the global parameter matches a most recent version of the global parameter. Another aspect includes, based on determining that the first version of the global parameter does not match the most recent version of the global parameter, selecting a version of the global parameter. Another aspect includes aggregating the first gradient with the selected version of the global parameter to determine an updated version of the global parameter.
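    The aggregation logic summarized in the abstract can be sketched as follows. All names here (ParameterServer, apply_gradient, the staleness_limit bound, and the gradient-step aggregation rule) are illustrative assumptions for a minimal sketch, not details taken from the patent claims.

```python
class ParameterServer:
    """Hypothetical server for the asynchronous aggregation step above."""

    def __init__(self, initial_param, staleness_limit=5):
        # history[v] stores version v of the global parameter
        self.history = [initial_param]
        self.staleness_limit = staleness_limit

    @property
    def latest_version(self):
        return len(self.history) - 1

    def apply_gradient(self, client_version, gradient, lr=0.1):
        # If the client's copy does not match the most recent version,
        # select an earlier stored version to aggregate against, bounded
        # here by a staleness limit (the bound is an assumption).
        if client_version != self.latest_version:
            base_version = max(client_version,
                               self.latest_version - self.staleness_limit)
        else:
            base_version = self.latest_version
        # aggregate the gradient with the selected version to produce
        # the updated version of the global parameter
        updated = self.history[base_version] - lr * gradient
        self.history.append(updated)
        return updated

server = ParameterServer(initial_param=1.0)
server.apply_gradient(client_version=0, gradient=2.0)  # client was in sync
server.apply_gradient(client_version=0, gradient=1.0)  # client was stale
```

    The key asynchronous property is that the second, stale update is not rejected: the server selects a compatible stored version and still folds the gradient in.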

    Distributed machine learning at edge nodes

    Publication No.: US11836576B2

    Publication Date: 2023-12-05

    Application No.: US15952625

    Filing Date: 2018-04-13

    CPC Classification: G06N20/00 H04L67/10 H04L67/12

    Abstract: A training process of a machine learning model is executed at the edge node for a number of iterations to generate a model parameter based at least in part on a local dataset and a global model parameter. A resource parameter set indicative of resources available at the edge node is estimated. The model parameter and the resource parameter set are sent to a synchronization node. Updates to the global model parameter and the number of iterations are received from the synchronization node based at least in part on the model parameter and the resource parameter set of edge nodes. The training process of the machine learning model is repeated at the edge node to determine an update to the model parameter based at least in part on the local dataset and updates to the global model parameter and the number of iterations from the synchronization node.

    DISTRIBUTED MACHINE LEARNING AT EDGE NODES
    Invention Application

    Publication No.: US20190318268A1

    Publication Date: 2019-10-17

    Application No.: US15952625

    Filing Date: 2018-04-13

    IPC Classification: G06N99/00 H04L29/08

    Abstract: A training process of a machine learning model is executed at the edge node for a number of iterations to generate a model parameter based at least in part on a local dataset and a global model parameter. A resource parameter set indicative of resources available at the edge node is estimated. The model parameter and the resource parameter set are sent to a synchronization node. Updates to the global model parameter and the number of iterations are received from the synchronization node based at least in part on the model parameter and the resource parameter set of edge nodes. The training process of the machine learning model is repeated at the edge node to determine an update to the model parameter based at least in part on the local dataset and updates to the global model parameter and the number of iterations from the synchronization node.
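    One round of the edge-node/synchronization-node exchange described in the abstract can be sketched as follows. The loss function, the resource dictionary (cpu_free), and the rule for adapting the iteration budget are all illustrative assumptions; the patent leaves the training process and adaptation policy unspecified.

```python
def local_training(dataset, global_param, iterations, lr=0.1):
    # One edge node: gradient descent on a simple squared-error loss,
    # standing in for the abstract's unspecified training process.
    p = global_param
    for _ in range(iterations):
        grad = sum(2 * (p - x) for x in dataset) / len(dataset)
        p -= lr * grad
    return p

def sync_node_update(model_params, resource_sets, base_iterations=10):
    # Synchronization node: average the edge-node model parameters and
    # shrink the next round's iteration count toward the scarcest
    # reported resource (this adaptation rule is an assumption).
    new_global = sum(model_params) / len(model_params)
    min_cpu = min(r["cpu_free"] for r in resource_sets)
    new_iterations = max(1, round(base_iterations * min_cpu))
    return new_global, new_iterations

# one round: two edge nodes train locally, then report their model
# parameter and estimated resource parameter set to the sync node
p1 = local_training([0.0, 2.0], global_param=5.0, iterations=20)
p2 = local_training([4.0, 6.0], global_param=5.0, iterations=20)
new_global, new_iters = sync_node_update(
    [p1, p2], [{"cpu_free": 0.5}, {"cpu_free": 0.9}])
```

    In the next round, each edge node would repeat local_training starting from new_global for new_iters iterations, closing the loop the abstract describes.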

    Federated learning of clients
    Invention Grant

    Publication No.: US11461593B2

    Publication Date: 2022-10-04

    Application No.: US16695268

    Filing Date: 2019-11-26

    Abstract: A method, a computer program product, and a computer system determine when to perform a federated learning process. The method includes identifying currently available contributors among contributors of a federated learning task for which the federated learning process is to be performed. The method includes determining a usefulness metric of the currently available contributors for respective datasets from each of the currently available contributors used in performing the federated learning process. The method includes, as a result of the usefulness metric of the currently available contributors being at least a usefulness threshold, generating a recommendation to perform the federated learning process with the datasets of the currently available contributors. The method includes transmitting the recommendation to a processing component configured to perform the federated learning process.

    FEDERATED LEARNING OF CLIENTS
    Invention Application

    Publication No.: US20210158099A1

    Publication Date: 2021-05-27

    Application No.: US16695268

    Filing Date: 2019-11-26

    IPC Classification: G06K9/62 G06N3/04 G06Q10/10

    Abstract: A method, a computer program product, and a computer system determine when to perform a federated learning process. The method includes identifying currently available contributors among contributors of a federated learning task for which the federated learning process is to be performed. The method includes determining a usefulness metric of the currently available contributors for respective datasets from each of the currently available contributors used in performing the federated learning process. The method includes, as a result of the usefulness metric of the currently available contributors being at least a usefulness threshold, generating a recommendation to perform the federated learning process with the datasets of the currently available contributors. The method includes transmitting the recommendation to a processing component configured to perform the federated learning process.
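    The threshold decision in this abstract can be sketched as follows. The usefulness metric here is a simple average of per-contributor scores, and every name (recommend_training, the contributor identifiers, the 0.6 threshold) is an illustrative assumption; the patent does not define how usefulness is computed.

```python
def recommend_training(available, usefulness, threshold=0.6):
    # Recommend running the federated learning process only when the
    # aggregate usefulness of the currently available contributors'
    # datasets meets the usefulness threshold.
    if not available:
        return False
    score = sum(usefulness[c] for c in available) / len(available)
    return score >= threshold

# hypothetical per-contributor dataset usefulness estimates
usefulness = {"hospital_a": 0.9, "hospital_b": 0.4, "clinic_c": 0.8}

go_round = recommend_training(["hospital_a", "clinic_c"], usefulness)
skip_round = recommend_training(["hospital_b"], usefulness)
```

    A True result corresponds to the recommendation the abstract transmits to the processing component; a False result corresponds to deferring the round until more useful contributors come online.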