-
Publication No.: US12217184B2
Publication Date: 2025-02-04
Application No.: US17317900
Application Date: 2021-05-12
Inventors: Hoi Jun Yoo, Sang Yeob Kim
Abstract: A method of accelerating training of a low-power, high-performance artificial neural network (ANN) includes (a) performing fine-grained pruning and coarse-grained pruning to generate sparsity in weights by a pruning unit in a convolution core of a cluster in a low-power, high-performance ANN trainer; (b) selecting and performing dual zero skipping according to input sparsity, output sparsity, and the sparsity of weights by the convolution core; and (c) restricting access to a weight memory during training by allowing a deep neural network (DNN) computation core and a weight pruning core to share weights retrieved from a memory by the convolution core.
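The three steps above describe hardware behavior, but a minimal NumPy sketch can illustrate the underlying ideas: magnitude-based fine-grained pruning (zeroing individual small weights), coarse-grained pruning (zeroing whole blocks), and a dot product that skips a multiply whenever either operand is zero (dual zero skipping). The thresholds, block size, and function names below are illustrative assumptions, not details from the patent.

```python
import numpy as np

def prune_weights(w, fine_thresh=0.01, block=4, coarse_thresh=0.05):
    # Fine-grained: zero each weight whose magnitude is below the threshold.
    w = np.where(np.abs(w) < fine_thresh, 0.0, w)
    # Coarse-grained: zero a whole block when its mean magnitude is small.
    for i in range(0, len(w), block):
        if np.abs(w[i:i + block]).mean() < coarse_thresh:
            w[i:i + block] = 0.0
    return w

def sparse_dot(x, w):
    # Dual zero skipping: a multiply is skipped when either the input
    # or the weight is zero, which is where the energy savings come from.
    acc = 0.0
    for xi, wi in zip(x, w):
        if xi != 0.0 and wi != 0.0:
            acc += xi * wi
    return acc
```

The more zeros the pruning step creates, the more multiplies `sparse_dot` skips, which is why weight sparsity and input/output sparsity compound in step (b).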
-
Publication No.: US12205624B2
Publication Date: 2025-01-21
Application No.: US18087844
Application Date: 2022-12-23
Inventors: Hoi Jun Yoo, Wenao Xie
IPC: G11C11/16
Abstract: An MRAM cell includes a switch unit whose opening and closing are determined by a word line voltage and which, when opened, activates a current path between a bit line and a bit line bar; first and second magnetic tunnel junctions (MTJs) held in opposite states and connected in series between the bit line and the bit line bar to constitute a storage node; and a sensing line activated in a reading mode of the MRAM cell to produce data-reading information from the voltage between the first and second MTJs. The first and second MTJs take on the low-resistance state and the high-resistance state, one each, according to the direction of the voltage drop between the bit line and the bit line bar, thereby storing data of 0 or 1.
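A rough behavioral model can clarify the read/write mechanism: the two complementary MTJ resistances are set by the direction of the write voltage drop, and a read senses the divider voltage on the storage node between them. The resistance values, the 0/1 polarity mapping, and all names below are illustrative assumptions; the real cell is an analog circuit, not a Python object.

```python
R_LOW, R_HIGH = 5e3, 10e3  # assumed MTJ resistances in ohms

class ComplementaryMramCell:
    def __init__(self):
        self.r1, self.r2 = R_LOW, R_HIGH  # first and second MTJ (arbitrary start)

    def write(self, bit, word_line=True):
        if not word_line:  # the switch unit opens the BL-BLB path only when WL is asserted
            return
        # A voltage drop from BL to BLB puts the MTJs in one pair of opposite
        # states, BLB to BL in the other (assumed mapping of direction to 0/1).
        self.r1, self.r2 = (R_LOW, R_HIGH) if bit else (R_HIGH, R_LOW)

    def read(self, v_bl=1.0, word_line=True):
        if not word_line:
            return None
        # The sensing line sees the divider voltage between the two MTJs.
        v_mid = v_bl * self.r2 / (self.r1 + self.r2)
        return 1 if v_mid > v_bl / 2 else 0
```

Because the two MTJs always hold opposite states, the sensed voltage swings symmetrically around the midpoint, rather than being compared against a fixed external reference.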
-
Publication No.: US11915141B2
Publication Date: 2024-02-27
Application No.: US16988737
Application Date: 2020-08-10
Inventors: Hoi Jun Yoo, Dong Hyeon Han
Abstract: Disclosed herein are an apparatus and method for training a deep neural network. An apparatus for training a deep neural network including N layers, each having multiple neurons, includes: an error propagation processing unit configured to, when an error occurs in the N-th layer upon initiation of training, determine an error propagation value for an arbitrary layer based on that error and directly propagate the error propagation value to the arbitrary layer; a weight gradient update processing unit configured to update a forward weight for the arbitrary layer based on the error propagation value and the feed-forward value input to that layer; and a feed-forward processing unit configured to, when the update of the forward weight is completed, perform a feed-forward operation in the arbitrary layer using the updated forward weight.
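The abstract does not specify how the error propagation value for an arbitrary layer is computed from the N-th-layer error; one published way to carry an output error directly to hidden layers is a fixed projection per layer, as in direct feedback alignment. The sketch below uses that as a stand-in, so the matrices B, the layer sizes, and the tanh activation are all illustrative assumptions rather than the patent's method.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 4]  # toy layer widths; layer N is the 4-unit output
W = [rng.normal(0.0, 0.1, (m, n)) for n, m in zip(sizes, sizes[1:])]
# Fixed matrices that carry the output error straight to each hidden layer
# (a direct-feedback-alignment-style assumption, not the patent's method).
B = [rng.normal(0.0, 0.1, (m, sizes[-1])) for m in sizes[1:-1]]

def train_step(x, y, lr=0.01):
    # Feed-forward, keeping each layer's input for the gradient update.
    acts = [x]
    for w in W:
        acts.append(np.tanh(w @ acts[-1]))
    err = acts[-1] - y  # error occurring at the N-th layer
    # Directly propagate the error to every layer, then update each forward
    # weight from (layer input, propagated error), as the abstract describes.
    for i in range(len(W)):
        e = err if i == len(W) - 1 else B[i] @ err
        e = e * (1.0 - acts[i + 1] ** 2)  # tanh derivative at that layer
        W[i] -= lr * np.outer(e, acts[i])
```

Because each layer's update needs only its own input and the directly propagated error, every layer can update as soon as the output error is known, instead of waiting for errors to arrive sequentially from deeper layers.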