1. NODE DISAMBIGUATION

    Invention Application

    Publication (Announcement) No.: US20220215260A1

    Publication (Announcement) Date: 2022-07-07

    Application No.: US17702064

    Application Date: 2022-03-23

    Abstract: A data processing system for implementing a machine learning process in dependence on a graph neural network. The system is configured to receive a plurality of input graphs, each having a plurality of nodes, at least some of the nodes having an attribute, and is configured to, for at least one of the input graphs: determine one or more sets of nodes of the plurality of nodes, the nodes of each set having identical attributes; for each set, assign a label to each node of that set so that each node of a set has a different label from the other nodes of that set; process the sets to form an aggregate value; and implement the machine learning process taking as input (i) the input graphs with the exception of the said sets and (ii) the aggregate value.
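
    The abstract walks through a disambiguation step that can be summarised in code. Below is a minimal Python sketch, assuming graphs are given as dicts mapping node ids to attribute values; the function name disambiguate, the integer labels, and the count-based aggregate are illustrative assumptions rather than the claimed implementation.

        from collections import defaultdict

        def disambiguate(graph_attrs):
            """graph_attrs: dict mapping node id -> attribute value."""
            # Group nodes whose attributes are identical.
            groups = defaultdict(list)
            for node, attr in graph_attrs.items():
                groups[attr].append(node)
            sets = [nodes for nodes in groups.values() if len(nodes) > 1]

            # Assign each node in a set a label distinct from the other
            # nodes of that set.
            labels = {}
            for node_set in sets:
                for i, node in enumerate(node_set):
                    labels[node] = i

            # Process the sets to form an aggregate value (here a simple
            # count-based summary, chosen as an assumption).
            aggregate = sum(len(s) for s in sets)

            # The machine learning process would take as input the graph
            # without the disambiguated sets, plus the aggregate value.
            flattened = {n for s in sets for n in s}
            remaining = {n: a for n, a in graph_attrs.items() if n not in flattened}
            return remaining, labels, aggregate

        # Toy example: nodes "a" and "b" share an attribute and are disambiguated.
        remaining, labels, aggregate = disambiguate({"a": 1, "b": 1, "c": 2})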

2. NORMALIZATION SCHEME FOR SELF-ATTENTION NEURAL NETWORKS

    Publication (Announcement) No.: US20230385615A1

    Publication (Announcement) Date: 2023-11-30

    Application No.: US18365047

    Application Date: 2023-08-03

    CPC classification number: G06N3/048

    Abstract: Described is a data processing device for performing an attention-based operation on a graph neural network. The device is configured to receive one or more input graphs, each having a plurality of nodes, and to, for at least one of the input graphs: form an input node representation for each node in the respective input graph, wherein a respective norm is defined for each input node representation; form a set of attention parameters; multiply each of the input node representations with each of the set of attention parameters to form a score function of the respective input graph; normalize the score function based on a maximum of the norms of the input node representations to form a normalized score function; and form a weighted node representation by weighting each node in the respective input graph using a respective element of the normalized score function. Normalizing the score function in this way enforces Lipschitz continuity, which enables deep attention-based neural networks to perform better.
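
    The normalization step described above can be illustrated with a short NumPy sketch. The dot-product score function, the division by the squared maximum norm, and the softmax weighting are assumptions made for illustration; they are not taken from the patent claims.

        import numpy as np

        def normalized_attention(node_reps, w_query, w_key):
            """node_reps: (n_nodes, d) input node representations;
            w_query, w_key: (d, d) attention parameter matrices."""
            # Score function: products of the node representations with the
            # attention parameters.
            scores = (node_reps @ w_query) @ (node_reps @ w_key).T

            # Normalize the scores based on the maximum of the norms of the
            # input node representations, bounding the score function (the
            # property the abstract ties to Lipschitz continuity).
            max_norm = np.max(np.linalg.norm(node_reps, axis=1))
            normalized = scores / (max_norm ** 2 + 1e-12)

            # Weight each node representation by the corresponding element of
            # the normalized (softmaxed) score function.
            weights = np.exp(normalized - normalized.max(axis=1, keepdims=True))
            weights /= weights.sum(axis=1, keepdims=True)
            return weights @ node_reps

        # Toy usage: five nodes with 8-dimensional representations.
        x = np.random.randn(5, 8)
        weighted = normalized_attention(x, np.eye(8), np.eye(8))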
