USER DATA PROCESSING SYSTEM, METHOD, AND APPARATUS

    Publication number: US20240362361A1

    Publication date: 2024-10-31

    Application number: US18764330

    Filing date: 2024-07-04

    CPC classification number: G06F21/6245 H04L9/0816

    Abstract: This disclosure provides a user data processing system. A first data processing device in the system generates a first intermediate result, and sends a third intermediate result to a second data processing device. The third intermediate result is obtained from the first intermediate result based on a parameter of a first machine learning model and target historical user data obtained by the first data processing device, and an identifier of the target historical user data is the same as an identifier of historical user data of the second data processing device. The first data processing device further receives a second intermediate result, and updates the parameter of the first machine learning model based on the first intermediate result and the second intermediate result. The second data processing device further updates a parameter of a second machine learning model based on the received third intermediate result and the second intermediate result.
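The exchange described above can be sketched with a toy two-party linear model: each party holds different feature columns for the same user identifiers, exchanges partial scores (the intermediate results), and updates only its own parameters. The squared-loss objective, shared labels, and all names are illustrative assumptions, not the patented implementation.

```python
# Toy sketch of the two-party intermediate-result exchange (illustrative,
# not the patented scheme). Records are aligned by a shared identifier
# before training; labels are assumed visible to both parties for brevity.

def partial_scores(w, X):
    """This party's intermediate result: its share of the linear score."""
    return [sum(wi * xi for wi, xi in zip(w, row)) for row in X]

def update(w, X, own_scores, peer_scores, y, lr=0.1):
    """Gradient step for squared loss on the combined score."""
    n = len(X)
    residual = [sa + sb - yi for sa, sb, yi in zip(own_scores, peer_scores, y)]
    return [wj - lr * sum(residual[i] * X[i][j] for i in range(n)) / n
            for j, wj in enumerate(w)]

X_a = [[1.0], [2.0], [3.0]]        # party A's feature column
X_b = [[0.5], [1.0], [1.5]]        # party B's features, same user IDs
y = [2.0, 4.0, 6.0]
w_a, w_b = [0.0], [0.0]
for _ in range(200):
    s_a, s_b = partial_scores(w_a, X_a), partial_scores(w_b, X_b)
    w_a = update(w_a, X_a, s_a, s_b, y)   # A uses B's intermediate result
    w_b = update(w_b, X_b, s_b, s_a, y)   # B uses A's intermediate result
```

After a few hundred synchronous rounds the combined score matches the labels even though neither party ever sees the other's raw features.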

    MACHINE LEARNING MODEL UPDATE METHOD AND APPARATUS

    Publication number: US20230342669A1

    Publication date: 2023-10-26

    Application number: US18344188

    Filing date: 2023-06-29

    CPC classification number: G06N20/00 H04L9/008 H04L9/088 H04L9/30

    Abstract: Embodiments of this application provide a machine learning model update method, applied to the field of artificial intelligence. The method includes: A first apparatus generates a first intermediate result based on a first data subset. The first apparatus receives an encrypted second intermediate result sent by a second apparatus, where the second intermediate result is generated based on a second data subset corresponding to the second apparatus. The first apparatus obtains a first gradient of a first model, where the first gradient of the first model is generated based on the first intermediate result and the encrypted second intermediate result. After being decrypted by using a second private key, the first gradient of the first model is for updating the first model, where the second private key is a decryption key generated by the second apparatus for homomorphic encryption.
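The homomorphic step above relies on additively homomorphic encryption: ciphertexts can be multiplied so that the underlying plaintexts add, and only the holder of the private key can decrypt the combined result. A minimal Paillier-style sketch with toy key sizes (real deployments use keys of 2048 bits or more):

```python
# Minimal Paillier-style additively homomorphic scheme (toy primes, for
# illustration only): the second apparatus encrypts its intermediate
# result, the first apparatus adds its own contribution in ciphertext,
# and only the private-key holder can decrypt. Requires Python 3.8+
# for pow(x, -1, n) modular inverse.
from math import gcd

p, q = 17, 19                      # toy primes; never use sizes like this
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # part of the private key

def encrypt(m, r=5):               # r must satisfy gcd(r, n) == 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c2 = encrypt(7)                    # encrypted second intermediate result
c_sum = (c2 * encrypt(4)) % n2     # ciphertext product = plaintext sum 7 + 4
```

Multiplying ciphertexts modulo n² adds the plaintexts, which is exactly what lets the first apparatus form an encrypted gradient it cannot read itself.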

    FEDERATED LEARNING METHOD AND APPARATUS, AND CHIP

    Publication number: US20230116117A1

    Publication date: 2023-04-13

    Application number: US18080523

    Filing date: 2022-12-13

    Abstract: A method includes: A second node sends a prior distribution of a parameter in a federated model to at least one first node. After receiving the prior distribution of the parameter in the federated model, the at least one first node performs training based on the prior distribution of the parameter in the federated model and local training data of the first node, to obtain a posterior distribution of a parameter in a local model of the first node. After the local training ends, the at least one first node feeds back the posterior distribution of the parameter in the local model to the second node, so that the second node updates the prior distribution of the parameter in the federated model based on the posterior distribution of the parameter in the local model of the at least one first node.
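The prior/posterior round trip above can be illustrated with one-dimensional Gaussians, where each distribution is a (mean, precision) pair; the conjugate local update and the precision-weighted pooling rule below are assumptions for illustration, not necessarily the aggregation the patent describes.

```python
# Sketch of the Bayesian federated round: the second node sends a prior,
# each first node computes a local posterior from its data, and the
# second node pools the returned posteriors into a new prior.
# Distributions are 1-D Gaussians as (mean, precision) pairs.

def local_posterior(prior_mean, prior_prec, data, noise_prec=1.0):
    """Conjugate Gaussian update of the prior against local training data."""
    prec = prior_prec + noise_prec * len(data)
    mean = (prior_prec * prior_mean + noise_prec * sum(data)) / prec
    return mean, prec

def aggregate(posteriors):
    """Illustrative pooling rule: precision-weighted average of the means."""
    total_prec = sum(p for _, p in posteriors)
    mean = sum(m * p for m, p in posteriors) / total_prec
    return mean, total_prec / len(posteriors)

prior = (0.0, 1.0)
client_data = [[2.1, 1.9, 2.0], [2.2, 1.8]]           # two first nodes
posteriors = [local_posterior(*prior, d) for d in client_data]
new_prior = aggregate(posteriors)                     # updated federated prior
```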

    Federated Learning Method and Apparatus

    Publication number: US20250156726A1

    Publication date: 2025-05-15

    Application number: US19022480

    Filing date: 2025-01-15

    Abstract: In a federated learning method, a central node separately sends a first model to at least one central edge device, receives at least one second model, and aggregates the at least one second model to obtain a fourth model. The at least one central edge device is in one-to-one correspondence with at least one edge device group. Each second model is obtained by aggregating the third models obtained by the edge devices in the corresponding edge device group. Each third model is obtained by an edge device, in collaboration with at least one terminal device in its coverage area, by training the first model on local data. The edge devices are grouped into edge device groups, and the central edge device in each edge device group sends the first model to each edge device in that group.
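A toy sketch of the three-level aggregation (terminal devices → edge devices → central edge device per group → central node), with models reduced to single floats and all names illustrative:

```python
# Illustrative three-level federated aggregation. "Training" simply moves
# the model to the mean of the local data; real systems exchange full
# parameter tensors and run gradient-based training.

def train_with_terminals(model, terminal_data):
    # third model: one edge device collaborating with the terminals
    # in its coverage area (toy rule: mean of the local data)
    return sum(terminal_data) / len(terminal_data)

def fedavg(models):
    return sum(models) / len(models)

first_model = 0.0
# one central edge device per edge device group; each inner list is the
# local data of one edge device (gathered from its terminals)
groups = {"ce1": [[1.0, 3.0], [2.0]], "ce2": [[5.0], [7.0, 9.0]]}
second_models = []
for central_edge, group in groups.items():
    # the central edge device distributes the first model inside its group
    third = [train_with_terminals(first_model, d) for d in group]
    second_models.append(fedavg(third))        # second model of this group
fourth_model = fedavg(second_models)           # central node aggregates
```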

    METHOD, APPARATUS, AND SYSTEM FOR TRAINING TREE MODEL

    Publication number: US20230353347A1

    Publication date: 2023-11-02

    Application number: US18344185

    Filing date: 2023-06-29

    CPC classification number: H04L9/0836 H04L9/008

    Abstract: A first apparatus provides a second apparatus with encrypted label distribution information for a first node, so that the second apparatus can compute an intermediate parameter of a segmentation policy on the second-apparatus side based on the encrypted label distribution information, from which a gain of that segmentation policy is obtained. A preferred segmentation policy for the first node can then be selected based on the gain of the segmentation policy on the second-apparatus side and the gain of a segmentation policy on the first-apparatus side. The encrypted label distribution information includes label data and distribution information and is in a ciphertext state, so it can be used to determine the gain of a segmentation policy without leaking the distribution status of the sample set on the first node.
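The reason label distribution information suffices is that a split's gain depends only on the label counts on each side of the split, not on which individual samples produced them; the Gini criterion below is an illustrative choice of gain, not necessarily the one the patent uses.

```python
# Sketch: scoring a candidate segmentation purely from label counts.
# In the patented setting these counts would arrive encrypted; here they
# are in the clear to show why the counts alone determine the gain.

def gini(counts):
    """Gini impurity of a node given its per-class label counts."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in counts)

def split_gain(parent, left):
    """Impurity reduction of a split, from parent and left-child counts."""
    right = [p - l for p, l in zip(parent, left)]
    n, nl, nr = sum(parent), sum(left), sum(right)
    return gini(parent) - (nl / n) * gini(left) - (nr / n) * gini(right)

parent = [6, 4]                          # label distribution at the node
gain = split_gain(parent, left=[6, 0])   # a perfectly separating split
```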

    Ground environment detection method and apparatus

    Publication number: US11455511B2

    Publication date: 2022-09-27

    Application number: US16456057

    Filing date: 2019-06-28

    Abstract: A ground environment detection method and apparatus are disclosed. The method includes: scanning a ground environment with laser sounding signals having different operating wavelengths; receiving the signals reflected back by the ground environment; determining scanning spot information for each scanning spot of the ground environment based on the reflected signals; determining space coordinate information and a laser reflection feature for each scanning spot from its scanning spot information; partitioning the ground environment into sub-regions having different laser reflection features; and determining a ground environment type for each sub-region. Because lasers with different operating wavelengths are used and the ground environment type is determined from the reflection intensity at each wavelength, perception of complex ground environments is improved and passable road surfaces are determined more reliably.
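The multi-wavelength idea can be sketched as follows: each scanning spot carries one reflection intensity per operating wavelength, and a ground type is assigned by the nearest reference reflectance signature. The signature values and type names below are invented for the example.

```python
# Illustrative classification of scanning spots by multi-wavelength
# reflectance: each spot is a tuple of reflection intensities, one per
# operating wavelength, matched to the closest reference signature.

SIGNATURES = {
    "asphalt": (0.10, 0.15),   # (reflectance at wavelength 1, wavelength 2)
    "grass":   (0.40, 0.80),
    "water":   (0.02, 0.01),
}

def classify_spot(reflectances):
    """Pick the ground type whose signature is nearest in squared distance."""
    def dist(sig):
        return sum((a - b) ** 2 for a, b in zip(reflectances, sig))
    return min(SIGNATURES, key=lambda t: dist(SIGNATURES[t]))

spots = [(0.12, 0.14), (0.38, 0.82), (0.03, 0.02)]
types = [classify_spot(s) for s in spots]
```

A single wavelength could not separate materials whose reflectance happens to coincide at that wavelength, which is why the abstract stresses differing operating wavelengths.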

    Model parameter fusion method and apparatus

    Publication number: US11373116B2

    Publication date: 2022-06-28

    Application number: US15980496

    Filing date: 2018-05-15

    Abstract: Embodiments of the present invention provide a model parameter fusion method and apparatus, which relate to the field of machine learning and are intended to reduce the data transmission amount and to enable dynamic adjustment of computing resources during model parameter fusion. The method includes: dividing, by an ith node, a model parameter of the ith node into N blocks, where the ith node is any node of N nodes that participate in a fusion, and 1≤i≤N≤M; receiving, by the ith node, ith model parameter blocks respectively sent by the other nodes of the N nodes; fusing, by the ith node, the ith model parameter block of the ith node and the ith model parameter blocks respectively sent by the other nodes, to obtain an ith general model parameter block; and distributing, by the ith node, the ith general model parameter block to the other nodes of the N nodes.
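The blockwise scheme can be sketched with N = 3 nodes: node i averages only the ith block of every node's parameters and redistributes the result, so each node transmits and fuses roughly 1/N of the model instead of all of it. Averaging as the fusion rule is an illustrative assumption.

```python
# Sketch of blockwise model parameter fusion with N = 3 nodes. Each node
# holds a full parameter vector divided into N blocks; node i is
# responsible for fusing block i and distributing the general block.

N = 3
params = [
    [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]],   # node 0's blocks
    [[4.0, 4.0], [5.0, 5.0], [6.0, 6.0]],   # node 1's blocks
    [[7.0, 7.0], [8.0, 8.0], [9.0, 9.0]],   # node 2's blocks
]

def fuse_block(i):
    """Node i fuses its ith block with the ith blocks sent by the others."""
    blocks = [params[node][i] for node in range(N)]
    return [sum(vals) / N for vals in zip(*blocks)]

# node i computes the ith general block, then distributes it to the rest
general = [fuse_block(i) for i in range(N)]
```

Every node ends up with the same fused model, but no single node ever had to receive or average anyone's full parameter vector.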

    Tensor-Based Continual Learning Method and Apparatus

    Publication number: US20250094822A1

    Publication date: 2025-03-20

    Application number: US18963964

    Filing date: 2024-11-29

    Abstract: This application discloses a tensor-based continual learning method and apparatus. The method includes: obtaining input data; and inputting the input data into a first neural network to obtain a data processing result. After training of an ith task ends, the first neural network includes A tensor cores, the A tensor cores are divided into B tensor layers, and each of the B tensor layers includes data of all of the A tensor cores in a same dimension. In training of an (i+1)th task, C tensor cores and/or D tensor layers are added to the first neural network, and parameters in the C tensor cores and/or parameters at the D tensor layers are updated. According to this application, an anti-forgetting capability of a model can be effectively improved, and an increase in a scale of the model is small, to effectively reduce storage and communication overheads.
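A shallow sketch of the grow-and-freeze idea: the model is a list of "cores", and training task i+1 appends and updates only new cores, so parameters learned for earlier tasks are untouched. Reducing the tensor cores to scalars and using a plain gradient rule are heavy illustrative simplifications of the tensor decomposition.

```python
# Illustrative grow-and-freeze continual learner: each finished task
# contributes frozen cores; a new task trains only its newly added core,
# so earlier parameters cannot be forgotten. Cores are scalars here.

class GrowingModel:
    def __init__(self):
        self.cores = []            # frozen cores from finished tasks

    def predict(self, x):
        return sum(c * x for c in self.cores)

    def train_task(self, xs, ys, steps=500, lr=0.05):
        new = 0.0                  # the only trainable parameter this task
        for _ in range(steps):
            grad = sum((self.predict(x) + new * x - y) * x
                       for x, y in zip(xs, ys)) / len(xs)
            new -= lr * grad
        self.cores.append(new)     # freeze the core after the task ends

m = GrowingModel()
m.train_task([1.0, 2.0], [2.0, 4.0])   # task 1: y = 2x
frozen = list(m.cores)
m.train_task([1.0, 2.0], [3.0, 6.0])   # task 2: y = 3x, learns the residual
```

The model grows by one core per task while the task-1 core stays bit-for-bit identical, which is the anti-forgetting property the abstract claims.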

    MACHINE LEARNING MODEL TRAINING METHOD, SERVICE DATA PROCESSING METHOD, APPARATUS, AND SYSTEM

    Publication number: US20240394556A1

    Publication date: 2024-11-28

    Application number: US18795145

    Filing date: 2024-08-05

    Abstract: A machine learning model training method, a service data processing method, and an apparatus are provided, which are applied to the artificial intelligence field. In a training phase, a cloud server sends a machine learning submodel to an edge server. The edge server performs federated learning with client devices in a management domain of the edge server based on the obtained machine learning submodel, to obtain a trained machine learning submodel, and sends the trained machine learning submodel to the cloud server. The cloud server fuses obtained different trained machine learning submodels, to obtain a machine learning model. According to this application, training efficiency of the machine learning model can be improved. In an inference phase, the client device processes service data by using the trained machine learning submodel. According to this application, prediction efficiency of the machine learning model can be improved.
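One way to picture the cloud/edge division of labor above (the disjoint-shard assignment below is an assumption for illustration): the cloud hands each edge server one submodel, each edge server runs federated averaging over the client devices in its management domain, and the cloud fuses the trained submodels back into one model.

```python
# Illustrative cloud/edge training phase: the cloud assigns each edge
# server a disjoint parameter shard as its submodel, each edge server
# federated-averages its clients' locally trained shards, and the cloud
# fuses the trained shards into the full model.

def fedavg(updates):
    return [sum(v) / len(v) for v in zip(*updates)]

def edge_round(shard, client_deltas):
    # each client device returns its locally trained version of the shard
    trained = [[w + d for w, d in zip(shard, delta)] for delta in client_deltas]
    return fedavg(trained)

model = [0.0, 0.0, 0.0, 0.0]
shards = {"edge1": model[:2], "edge2": model[2:]}          # submodels
deltas = {"edge1": [[1.0, 1.0], [3.0, 3.0]],               # per-client updates
          "edge2": [[2.0, 0.0], [4.0, 2.0]]}
trained = {e: edge_round(shards[e], deltas[e]) for e in shards}
fused_model = trained["edge1"] + trained["edge2"]          # cloud fuses
```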

    Federated Learning Method and Related Device

    Publication number: US20240211816A1

    Publication date: 2024-06-27

    Application number: US18597011

    Filing date: 2024-03-06

    CPC classification number: G06N20/20

    Abstract: A method includes a server delivering a random quantization instruction to a plurality of terminals. The plurality of terminals perform random quantization on training update data based on the random quantization instruction and upload, to the server, training update data on which random quantization has been performed. After aggregating the training update data on which random quantization has been performed, the server may eliminate an additional quantization error introduced by random quantization.
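The error-cancelling property comes from making the quantizer unbiased: stochastic rounding to a grid satisfies E[quantize(x)] = x, so quantization noise averages out when the server aggregates uploads from many terminals. The grid step and the averaging check below are illustrative.

```python
# Illustrative random (stochastic) quantization: round down or up to the
# grid with probabilities chosen so the result is unbiased, then show
# that averaging many quantized uploads recovers the original value.
import random

def random_quantize(x, step=1.0):
    """Unbiased stochastic rounding of x to a grid of spacing `step`."""
    lo = (x // step) * step
    frac = (x - lo) / step         # probability of rounding up
    return lo + step if random.random() < frac else lo

random.seed(0)
x = 0.3                            # a training update value
# many terminals quantize the same update; the server averages the uploads
avg = sum(random_quantize(x) for _ in range(20000)) / 20000
```

Each individual upload is a coarse 0.0 or 1.0, yet the aggregate sits close to 0.3, which is the "eliminate the additional quantization error" effect the abstract describes.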
