    Natural Graph Convolutions
    Invention Application

    Publication No.: US20210334623A1

    Publication Date: 2021-10-28

    Application No.: US17239580

    Filing Date: 2021-04-24

    IPC Classification: G06N3/04 G06F16/901

    Abstract: A method for generating a graph convolutional network includes receiving a graph network comprising nodes connected by edges. A node neighborhood is determined for each of the nodes of the graph network and an edge neighborhood is determined for each of the edges of the graph network. The node neighborhood for each of the nodes and the edge neighborhood for each of the edges are classified based on isomorphism. A mapping of a kernel from an edge neighborhood class representative to each of the edges of the graph network is determined. The graph convolutional network is generated based on the kernel mapping.
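
    A minimal Python sketch of the edge-kernel sharing idea in this abstract, assuming an undirected NetworkX graph and NumPy weight matrices; the helper names below are invented for illustration, and the node-neighborhood classification step is omitted for brevity:

        # Illustrative sketch only: NetworkX graph, NumPy kernels, invented helper names.
        import networkx as nx
        import numpy as np

        def edge_neighborhood(G, u, v):
            """Subgraph induced by an edge's endpoints and their neighbors."""
            nodes = {u, v} | set(G.neighbors(u)) | set(G.neighbors(v))
            return G.subgraph(nodes)

        def classify_by_isomorphism(subgraphs):
            """Label each subgraph by isomorphism class, keeping one representative per class."""
            representatives, labels = [], {}
            for key, sg in subgraphs.items():
                for idx, rep in enumerate(representatives):
                    if nx.is_isomorphic(rep, sg):
                        labels[key] = idx
                        break
                else:
                    labels[key] = len(representatives)
                    representatives.append(sg)
            return labels, len(representatives)

        def graph_convolution(G, features, dim=8, seed=0):
            """One convolution layer whose edge kernels are shared per neighborhood class."""
            rng = np.random.default_rng(seed)
            nbhds = {(u, v): edge_neighborhood(G, u, v) for u, v in G.edges}
            labels, n_classes = classify_by_isomorphism(nbhds)
            kernels = rng.standard_normal((n_classes, features.shape[1], dim))
            out = np.zeros((G.number_of_nodes(), dim))
            for u, v in G.edges:
                K = kernels[labels[(u, v)]]   # kernel mapped from the class representative
                out[u] += features[v] @ K     # aggregate along the edge in both directions
                out[v] += features[u] @ K
            return out

        G = nx.karate_club_graph()
        x = np.eye(G.number_of_nodes())       # one-hot node features
        print(graph_convolution(G, x).shape)  # (34, 8)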

    Multi-object positioning using mixture density networks

    Publication No.: US11696093B2

    Publication Date: 2023-07-04

    Application No.: US17182153

    Filing Date: 2021-02-22

    IPC Classification: H04W4/029

    CPC Classification: H04W4/029

    Abstract: Certain aspects of the present disclosure provide techniques for object positioning using mixture density networks, comprising: receiving radio frequency (RF) signal data collected in a physical space; generating a feature vector encoding the RF signal data by processing the RF signal data using a first neural network; processing the feature vector using a first mixture model to generate a first encoding tensor indicating a set of moving objects in the physical space, a first location tensor indicating a location of each of the moving objects in the physical space, and a first uncertainty tensor indicating uncertainty of the locations of each of the moving objects in the physical space; and outputting at least one location from the first location tensor.
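
    A minimal PyTorch sketch of the described pipeline, assuming the RF signal data arrives as a flat measurement vector and that the mixture-model head predicts, for a fixed number of candidate objects, an existence probability, a 2-D location, and a per-coordinate uncertainty; all sizes and layer choices are assumptions rather than details from the patent:

        # Illustrative sketch only: sizes, layer choices, and head names are assumptions.
        import torch
        import torch.nn as nn

        class MDNPositioner(nn.Module):
            def __init__(self, rf_dim=256, feat_dim=128, max_objects=4):
                super().__init__()
                # "First neural network": encodes the RF signal data into a feature vector.
                self.encoder = nn.Sequential(
                    nn.Linear(rf_dim, feat_dim), nn.ReLU(),
                    nn.Linear(feat_dim, feat_dim), nn.ReLU())
                self.K = max_objects
                self.encoding_head = nn.Linear(feat_dim, max_objects)         # which objects are present
                self.location_head = nn.Linear(feat_dim, max_objects * 2)     # (x, y) per object
                self.uncertainty_head = nn.Linear(feat_dim, max_objects * 2)  # per-coordinate std dev

            def forward(self, rf):
                z = self.encoder(rf)
                encoding = torch.sigmoid(self.encoding_head(z))                         # encoding tensor
                locations = self.location_head(z).view(-1, self.K, 2)                   # location tensor
                uncertainty = torch.exp(self.uncertainty_head(z)).view(-1, self.K, 2)   # uncertainty tensor
                return encoding, locations, uncertainty

        model = MDNPositioner()
        enc, loc, unc = model(torch.randn(1, 256))   # one snapshot of RF signal data
        print(loc[0, enc.argmax(dim=-1)[0]])         # output at least one location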

    Channel Gating For Conditional Computation
    Invention Application

    Publication No.: US20200372361A1

    Publication Date: 2020-11-26

    Application No.: US16419509

    Filing Date: 2019-05-22

    IPC Classification: G06N3/08 G06N20/00

    Abstract: A computing device may be equipped with a generalized framework for accomplishing conditional computation or gating in a neural network. The computing device may receive input in a neural network layer that includes two or more filters. The computing device may intelligently determine whether the two or more filters are relevant to the received input. The computing device may deactivate filters that are determined not to be relevant to the received input (or activate filters that are determined to be relevant to the received input), and apply the received input to active filters in the layer to generate an activation.
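
    A minimal PyTorch sketch of the per-filter gating described above, assuming relevance is scored by a small gating network over the globally pooled input and that filters judged irrelevant are skipped by zeroing their output channels; the gating network and the 0.5 threshold are illustrative assumptions:

        # Illustrative sketch only: the gating network and threshold are assumptions.
        import torch
        import torch.nn as nn

        class GatedConv(nn.Module):
            def __init__(self, in_ch=16, out_ch=32):
                super().__init__()
                self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
                self.gate = nn.Linear(in_ch, out_ch)   # scores per-filter relevance

            def forward(self, x):
                pooled = x.mean(dim=(2, 3))                    # summarize the received input
                relevance = torch.sigmoid(self.gate(pooled))   # per-filter relevance scores
                active = (relevance > 0.5).float()             # deactivate low-relevance filters
                y = self.conv(x)                               # apply the input to the layer
                return y * active[:, :, None, None]            # keep only active filters

        layer = GatedConv()
        print(layer(torch.randn(2, 16, 8, 8)).shape)           # torch.Size([2, 32, 8, 8])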

    Hypernetwork Kalman filter for channel estimation and tracking

    Publication No.: US11700070B2

    Publication Date: 2023-07-11

    Application No.: US17734524

    Filing Date: 2022-05-02

    IPC Classification: H04B17/373 H04B17/391

    CPC Classification: H04B17/373 H04B17/3913

    Abstract: A processor-implemented method is presented. The method includes receiving an input sequence comprising a group of channel dynamics observations for a wireless communication channel. Each channel dynamics observation may correspond to a timing of a group of timings. The method also includes determining, via a recurrent neural network (RNN), a residual at each of the group of timings based on the group of channel dynamics observations. The method further includes updating Kalman filter (KF) parameters based on the residual and estimating, via the KF, a channel state based on the updated KF parameters.
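
    A minimal PyTorch sketch of the described loop, assuming a scalar channel, a GRU that maps each observation to a residual, and a residual-driven re-scaling of the Kalman process-noise parameter before the usual predict/update step; the specific parameter-update rule is an assumption:

        # Illustrative sketch only: scalar channel, GRU residuals, and a
        # residual-driven process-noise update are assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class HyperKalman(nn.Module):
            def __init__(self, hidden=16):
                super().__init__()
                self.rnn = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
                self.residual_head = nn.Linear(hidden, 1)

            def forward(self, obs, r=0.1):
                # obs: (batch, T, 1) channel dynamics observations, one per timing.
                h, _ = self.rnn(obs)
                residuals = self.residual_head(h)          # residual at each timing
                x = torch.zeros(obs.shape[0])              # KF state estimate
                p = torch.ones(obs.shape[0])               # KF state covariance
                estimates = []
                for t in range(obs.shape[1]):
                    q = F.softplus(residuals[:, t, 0])     # updated KF process-noise parameter
                    p = p + q                              # predict
                    k = p / (p + r)                        # Kalman gain
                    x = x + k * (obs[:, t, 0] - x)         # update with the observation
                    p = (1 - k) * p
                    estimates.append(x)
                return torch.stack(estimates, dim=1)       # estimated channel state per timing

        model = HyperKalman()
        print(model(torch.randn(2, 10, 1)).shape)          # torch.Size([2, 10])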

    Performing XNOR equivalent operations by adjusting column thresholds of a compute-in-memory array

    Publication No.: US11562212B2

    Publication Date: 2023-01-24

    Application No.: US16565308

    Filing Date: 2019-09-09

    IPC Classification: G06N3/04 G06N3/08

    Abstract: A method performs XNOR-equivalent operations by adjusting column thresholds of a compute-in-memory array of an artificial neural network. The method includes adjusting an activation threshold generated for each column of the compute-in-memory array based on a function of a weight value and an activation value. The method also includes calculating a conversion bias current reference based on an input value from an input vector to the compute-in-memory array, the compute-in-memory array being programmed with a set of weights. The adjusted activation threshold and the conversion bias current reference are used as a threshold for determining the output values of the compute-in-memory array.
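
    A short NumPy sketch of why a {0, 1} compute-in-memory dot product plus adjusted thresholds can reproduce a +/-1 XNOR-popcount result; the exact split between the column threshold and the conversion bias below is an illustrative assumption, while the underlying binarized-network identity is standard:

        # Numeric check only: sum(w*x) = 4*(bw.bx) - 2*sum(bw) - 2*sum(bx) + N
        # for w, x in {-1, +1}; the threshold/bias split below is illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        N = 64                                  # rows feeding one compute-in-memory column
        w = rng.choice([-1, 1], size=N)         # binary weights of the column
        x = rng.choice([-1, 1], size=N)         # binary activations (input vector)

        bw, bx = (w + 1) // 2, (x + 1) // 2     # {0, 1} values the array actually stores and applies
        xnor_popcount = np.sum(w * x)           # reference XNOR-equivalent result

        cim_sum = bw @ bx                       # what the {0, 1} array physically accumulates
        column_threshold = 2 * np.sum(bw)       # adjustment derived from the column's weights
        conversion_bias = 2 * np.sum(bx) - N    # adjustment derived from the input vector

        assert 4 * cim_sum - column_threshold - conversion_bias == xnor_popcount
        print("binary output:", int(4 * cim_sum - column_threshold - conversion_bias >= 0))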