-
11.
Publication No.: US20240320472A1
Publication Date: 2024-09-26
Application No.: US18613117
Filing Date: 2024-03-22
Inventor: Hoi Jun YOO , Seong Yon HONG
Abstract: A low-power artificial intelligence (AI) processing system combining an artificial neural network (ANN) and a spiking neural network (SNN) includes an ANN including an artificial layer, an SNN configured to produce the same operation result as an artificial layer of the ANN, a main controller configured to calculate an ANN computational cost and an SNN computational cost for each artificial layer, an operation domain selector configured to select the operation domain having the lower computational cost by comparing the ANN computational cost and the SNN computational cost, and an equivalent converter configured to form a combined neural network by converting the artificial layer of the ANN into a spiking layer of the SNN when the operation domain selector selects the SNN operation domain. Therefore, there is no accuracy loss compared to the ANN alone.
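The per-layer domain selection described in the abstract can be sketched as a simple cost comparison. The cost formulas, field names, and the idea that SNN cost scales with spike activity are illustrative assumptions, not the patent's actual method:

```python
# Hypothetical sketch of per-layer operation-domain selection: for each layer,
# compare an assumed ANN cost (one MAC per weight-input pair) against an
# assumed SNN cost (accumulates proportional to spike activity) and pick the
# cheaper domain. All numbers and names are illustrative.

def select_operation_domain(layers):
    """Return a per-layer plan of 'ANN' or 'SNN', whichever costs less."""
    plan = []
    for layer in layers:
        ann_cost = layer["macs"]                       # dense multiply-accumulates
        snn_cost = layer["macs"] * layer["spike_rate"] # activity-gated accumulates
        plan.append("SNN" if snn_cost < ann_cost else "ANN")
    return plan

layers = [
    {"macs": 1_000_000, "spike_rate": 0.2},  # sparse spiking: SNN cheaper
    {"macs": 500_000, "spike_rate": 1.5},    # dense spiking: ANN cheaper
]
print(select_operation_domain(layers))  # ['SNN', 'ANN']
```

Because the equivalent converter keeps the two domains functionally identical, this selection only trades compute cost, which is why the abstract claims no accuracy loss.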
-
12.
Publication No.: US20230410864A1
Publication Date: 2023-12-21
Application No.: US18087844
Filing Date: 2022-12-23
Inventor: Hoi Jun YOO , Wenao Xie
IPC: G11C11/16
CPC classification number: G11C11/161 , G11C11/1673 , G11C11/1675 , G11C11/1655
Abstract: An MRAM cell includes a switch unit configured to determine opening and closing thereof by a word line voltage and to activate a current path between a bit line and a bit line bar in an opened state thereof, first and second MTJs having opposite states, respectively, and connected in series between the bit line and the bit line bar, to constitute a storage node, and a sensing line configured to be activated in a reading mode of the MRAM cell, thereby creating data reading information based on a voltage between the first and second MTJs, wherein the first and second MTJs have different ones of a low resistance state and a high resistance state, respectively, in accordance with a voltage drop direction between the bit line and the bit line bar, thereby storing data of 0 or 1.
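The complementary-MTJ storage node can be modeled as a resistive voltage divider: the sensing-line voltage between the two MTJs sits above or below the midpoint depending on which MTJ is in the high-resistance state. This toy model, including the resistance values and the assumed mapping of bit value to MTJ states, is an illustration and not the patent circuit:

```python
# Toy model of reading a complementary-MTJ MRAM cell: two MTJs in series
# between bit line (BL) and bit line bar (BLB) form a voltage divider, and the
# sensing node between them is compared against the midpoint reference.
# Resistance values and the bit-to-state mapping are assumptions.

R_LOW, R_HIGH = 3_000.0, 6_000.0  # ohms, illustrative MTJ states

def sense(bit_stored, v_bl=1.0, v_blb=0.0):
    """Return the bit recovered from the sensing-node voltage."""
    # Assumed write convention: storing 1 leaves MTJ1 low / MTJ2 high.
    r1, r2 = (R_LOW, R_HIGH) if bit_stored == 1 else (R_HIGH, R_LOW)
    v_node = v_blb + (v_bl - v_blb) * r2 / (r1 + r2)  # divider at the sense node
    v_ref = (v_bl + v_blb) / 2                        # midpoint reference
    return 1 if v_node > v_ref else 0

print(sense(1), sense(0))  # 1 0
```

Because the two MTJs are always in opposite states, the read margin is the full difference between the divider ratios, rather than a single MTJ's resistance change against a fixed reference.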
-
13.
Publication No.: US20230072432A1
Publication Date: 2023-03-09
Application No.: US17898553
Filing Date: 2022-08-30
Inventor: Hoi Jun YOO , Juhyoung LEE
IPC: G06N3/08
Abstract: Provided is a deep neural network (DNN) learning accelerating apparatus for deep reinforcement learning, the apparatus including: a DNN operation core configured to perform DNN learning for the deep reinforcement learning; and a weight training unit configured to train a weight parameter to accelerate the DNN learning and transmit the trained weight parameter to the DNN operation core, the weight training unit including: a neural network weight memory storing the weight parameter; a neural network pruning unit configured to store a sparse weight pattern generated as a result of performing weight pruning based on the weight parameter; and a weight prefetcher configured to select and align, using the sparse weight pattern, only the weight data whose values are nonzero from the neural network weight memory and transmit that nonzero weight data to the DNN operation core.
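The weight prefetcher's role can be sketched as filtering a weight array through the sparse pattern so only surviving (nonzero) weights, with their positions, reach the compute core. The list-based layout and function name are assumptions for illustration, not the patent's data format:

```python
# Hedged sketch of the weight prefetcher: the sparse weight pattern is treated
# as a bitmask produced by pruning, and only the weights it kept are forwarded
# (as index/value pairs) to the DNN operation core. Layout is an assumption.

def prefetch_nonzero(weights, sparse_pattern):
    """Return (index, weight) pairs for entries the pruning mask kept."""
    return [(i, w)
            for i, (w, keep) in enumerate(zip(weights, sparse_pattern))
            if keep]

weights        = [0.5, 0.0, -1.2, 0.0, 0.3]
sparse_pattern = [1,   0,    1,   0,   1]   # 1 = weight survived pruning
print(prefetch_nonzero(weights, sparse_pattern))  # [(0, 0.5), (2, -1.2), (4, 0.3)]
```

Carrying the index with each weight is what lets the core apply the sparse weights to the correct activations without ever fetching the zeros.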
-
14.
Publication No.: US20220222533A1
Publication Date: 2022-07-14
Application No.: US17317900
Filing Date: 2021-05-12
Inventor: Hoi Jun YOO , Sang Yeob KIM
Abstract: A method of accelerating training of a low-power, high-performance artificial neural network (ANN) includes (a) performing fine-grained pruning and coarse-grained pruning to generate sparsity in weights by a pruning unit in a convolution core of a cluster in a low-power, high-performance ANN trainer; (b) selecting and performing dual zero skipping according to input sparsity, output sparsity, and the sparsity of weights by the convolution core; and (c) restricting access to a weight memory during training by allowing a deep neural network (DNN) computation core and a weight pruning core to share weights retrieved from the memory by the convolution core.
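The dual zero skipping of step (b) can be illustrated with a sparse dot product that performs a multiply-accumulate only when both operands are nonzero. This is a software sketch of the skipping principle, not the patent's hardware datapath:

```python
# Illustrative sketch of dual zero skipping: in a convolution-style dot
# product, a zero in either the input activation or the weight means the
# product contributes nothing, so the MAC is skipped. Counters show how much
# work the sparsity removes.

def sparse_dot(inputs, weights):
    acc, performed, skipped = 0.0, 0, 0
    for x, w in zip(inputs, weights):
        if x == 0 or w == 0:   # zero in either operand: skip the MAC
            skipped += 1
        else:
            acc += x * w
            performed += 1
    return acc, performed, skipped

result, done, skipped = sparse_dot([0, 2, 3, 0], [1, 0, 4, 5])
print(result, done, skipped)  # 12.0 1 3
```

Checking both operands is what makes the skipping "dual": pruning (weight sparsity) and activation sparsity each remove work, and their overlap compounds the savings.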
-