LOW-POWER AI PROCESSING SYSTEM AND METHOD COMBINING ARTIFICIAL NEURAL NETWORK AND SPIKING NEURAL NETWORK

    Publication No.: US20240320472A1

    Publication Date: 2024-09-26

    Application No.: US18613117

    Filing Date: 2024-03-22

    CPC classification number: G06N3/045 G06N3/049

    Abstract: A low-power artificial intelligence (AI) processing system combining an artificial neural network (ANN) and a spiking neural network (SNN) includes an ANN comprising artificial layers, an SNN configured to produce the same operation result as an artificial layer of the ANN, a main controller configured to calculate an ANN computational cost and an SNN computational cost for each artificial layer, an operation domain selector configured to select the operation domain with the lower computational cost by comparing the ANN and SNN computational costs, and an equivalent converter configured to form a combined neural network by converting an artificial layer of the ANN into a spiking layer of the SNN when the operation domain selector selects the SNN operation domain. Because the conversion is equivalent, there is no loss of accuracy compared to the ANN alone.
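The per-layer domain selection the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the patented implementation: the layer dictionary, the use of multiply-accumulate counts as the ANN cost, and spike-operation counts as the SNN cost are all assumptions made for the example.

```python
def select_domains(layers):
    """For each layer, pick the operation domain (ANN or SNN) with the lower cost."""
    combined = []
    for layer in layers:
        ann_cost = layer["macs"]    # assumed ANN cost metric: multiply-accumulate ops
        snn_cost = layer["spikes"]  # assumed SNN cost metric: accumulate-only spike ops
        # The equivalent converter would turn this layer into a spiking layer
        # whenever the SNN domain is cheaper.
        domain = "SNN" if snn_cost < ann_cost else "ANN"
        combined.append((layer["name"], domain))
    return combined

# Illustrative layer costs (hypothetical numbers):
layers = [
    {"name": "conv1", "macs": 1_000_000, "spikes": 400_000},
    {"name": "fc1",   "macs": 250_000,   "spikes": 600_000},
]
print(select_domains(layers))  # [('conv1', 'SNN'), ('fc1', 'ANN')]
```

The resulting list is a per-layer assignment; layers marked SNN would then be passed through the equivalent converter so the combined network keeps the ANN's accuracy.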

    MRAM CELL WITH PAIR OF MAGNETIC TUNNEL JUNCTIONS HAVING OPPOSITE STATES AND MEMORY DEVICE USING THE SAME

    Publication No.: US20230410864A1

    Publication Date: 2023-12-21

    Application No.: US18087844

    Filing Date: 2022-12-23

    CPC classification number: G11C11/161 G11C11/1673 G11C11/1675 G11C11/1655

    Abstract: An MRAM cell includes a switch unit, opened and closed by a word line voltage, that activates a current path between a bit line and a bit line bar when open; first and second magnetic tunnel junctions (MTJs) in opposite states, connected in series between the bit line and the bit line bar to constitute a storage node; and a sensing line activated in a reading mode of the MRAM cell to produce data reading information from the voltage between the first and second MTJs. The first and second MTJs take different ones of a low resistance state and a high resistance state, depending on the direction of the voltage drop between the bit line and the bit line bar, thereby storing data of 0 or 1.
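The read mechanism can be modeled as a simple voltage divider: because the two MTJs always hold opposite resistance states, the voltage at the node between them swings above or below the midpoint depending on the stored bit. The following toy model is an assumption-laden sketch; the resistance values, supply voltages, and midpoint comparison are illustrative, not from the patent.

```python
# Illustrative MTJ resistances (ohms); real devices differ.
R_LOW, R_HIGH = 5_000.0, 10_000.0

def write(bit):
    """Writing drives current one way or the other, setting the pair to opposite states."""
    return (R_HIGH, R_LOW) if bit else (R_LOW, R_HIGH)

def read(cell, v_bl=1.0, v_blb=0.0):
    """Read mode: sense the storage-node voltage of the divider between
    bit line (v_bl) and bit line bar (v_blb)."""
    r1, r2 = cell  # first MTJ (bit line side), second MTJ (bit line bar side)
    v_node = v_blb + (v_bl - v_blb) * r2 / (r1 + r2)
    # Node voltage below the midpoint means the first MTJ is high-resistance.
    return 1 if v_node < (v_bl + v_blb) / 2 else 0

assert read(write(1)) == 1
assert read(write(0)) == 0
```

Because the two MTJs are always complementary, the divider output is differential rather than referenced to an external resistance, which is what makes the sensed margin robust in this cell structure.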

    APPARATUS AND METHOD FOR ACCELERATING DEEP NEURAL NETWORK LEARNING FOR DEEP REINFORCEMENT LEARNING

    Publication No.: US20230072432A1

    Publication Date: 2023-03-09

    Application No.: US17898553

    Filing Date: 2022-08-30

    Abstract: Provided is a deep neural network (DNN) learning accelerating apparatus for deep reinforcement learning, the apparatus including: a DNN operation core configured to perform DNN learning for the deep reinforcement learning; and a weight training unit configured to train a weight parameter to accelerate the DNN learning and transmit it to the DNN operation core. The weight training unit includes: a neural network weight memory storing the weight parameter; a neural network pruning unit configured to store a sparse weight pattern generated as a result of performing weight pruning based on the weight parameter; and a weight prefetcher configured to select and align only the weight data whose values are not zero from the neural network weight memory, using the sparse weight pattern, and transmit those nonzero weight data to the DNN operation core.
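The pruning-then-prefetch flow in the abstract can be sketched as below. The flat weight list, the magnitude threshold, and the (index, value) output layout are assumptions for illustration; the patent does not specify these details.

```python
def prune(weights, threshold=0.1):
    """Zero out small-magnitude weights and record the sparse weight pattern
    (a boolean mask of which positions survived pruning)."""
    pattern = [abs(w) >= threshold for w in weights]
    pruned = [w if keep else 0.0 for w, keep in zip(weights, pattern)]
    return pruned, pattern

def prefetch_nonzero(weights, pattern):
    """Weight prefetcher: use the sparse pattern to select and align only the
    nonzero weights (kept with their indices) for the DNN operation core."""
    return [(i, w) for i, (w, keep) in enumerate(zip(weights, pattern)) if keep]

w = [0.5, 0.02, -0.3, 0.0, 0.07, 0.9]
pruned, pattern = prune(w)
print(prefetch_nonzero(pruned, pattern))  # [(0, 0.5), (2, -0.3), (5, 0.9)]
```

The point of the pattern is that the core never sees (or multiplies by) the zeros, so compute and memory traffic scale with the number of surviving weights rather than the full layer size.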
