Publication Number: US20220222533A1
Publication Date: 2022-07-14
Application Number: US17317900
Application Date: 2021-05-12
Inventors: Hoi Jun YOO, Sang Yeob KIM
Abstract: A method of accelerating training of a low-power, high-performance artificial neural network (ANN) includes (a) performing fine-grained pruning and coarse-grained pruning to generate sparsity in weights by a pruning unit in a convolution core of a cluster in a low-power, high-performance ANN trainer; (b) selecting and performing dual zero skipping according to input sparsity, output sparsity, and the sparsity of the weights by the convolution core; and (c) restricting access to a weight memory during training by allowing a deep neural network (DNN) computation core and a weight pruning core to share the weights retrieved from a memory by the convolution core.
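The abstract names two pruning granularities and a sparsity-driven choice of zero-skipping mode. The sketch below is a minimal NumPy illustration of those ideas, not the patented hardware design: it assumes magnitude pruning for the fine-grained step, output-channel pruning for the coarse-grained step, and a selector that skips along whichever operand (input, output, or weight) is currently sparsest. All function names and the max-sparsity selection rule are assumptions made for illustration.

```python
import numpy as np

def fine_grained_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Fine-grained (unstructured) pruning: zero the smallest-magnitude
    individual weights until the requested fraction is zero (assumed method)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

def coarse_grained_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Coarse-grained (structured) pruning: zero whole output channels
    with the smallest L2 norms (assumed granularity)."""
    norms = np.linalg.norm(weights.reshape(weights.shape[0], -1), axis=1)
    k = int(sparsity * norms.size)
    pruned = weights.copy()
    if k > 0:
        pruned[np.argsort(norms)[:k]] = 0.0
    return pruned

def select_skip_mode(input_sparsity: float, output_sparsity: float,
                     weight_sparsity: float) -> str:
    """Hypothetical selector for 'dual zero skipping': skip computation
    along whichever operand currently has the most zeros."""
    modes = {"input": input_sparsity, "output": output_sparsity,
             "weight": weight_sparsity}
    return max(modes, key=modes.get)

# Example: prune a 4D conv weight tensor, then pick a skipping mode.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 8, 3, 3))          # (out_ch, in_ch, kh, kw)
w = coarse_grained_prune(fine_grained_prune(w, 0.5), 0.25)
weight_sparsity = float(np.mean(w == 0))
print(select_skip_mode(0.3, 0.1, weight_sparsity))
```

Running the example prunes roughly half of the individual weights and a quarter of the output channels, then reports which operand the (hypothetical) skip logic would exploit; in the claimed hardware this decision and the weight sharing between the DNN computation core and the weight pruning core are performed by dedicated units rather than software.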