-
Publication No.: US11775303B2
Publication Date: 2023-10-03
Application No.: US17446678
Filing Date: 2021-09-01
Inventor: Jeongmin Yang
IPC: G06F9/30 , G06F9/38 , G06F17/18 , G06F12/0875
CPC classification number: G06F9/30189 , G06F9/3001 , G06F9/3016 , G06F9/3808 , G06F12/0875 , G06F17/18 , G06F2212/452
Abstract: Disclosed is a general-purpose computing accelerator which includes a memory including an instruction cache, a first executing unit performing a first computation operation, a second executing unit performing a second computation operation, an instruction fetching unit fetching an instruction stored in the instruction cache, a decoding unit that decodes the instruction, and a state control unit controlling a path of the instruction depending on an operation state of the second executing unit. The decoding unit provides the instruction to the first executing unit when the instruction is of a first type and provides the instruction to the state control unit when the instruction is of a second type. Depending on the operation state of the second executing unit, the state control unit provides the instruction of the second type to the second executing unit or stores the instruction of the second type in a register file in the memory.
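The routing described in this abstract can be modeled roughly as follows. This is a behavioral sketch only; the class and field names (`StateControlUnit`, `pending`, `busy`) are illustrative stand-ins, not terms from the patent.

```python
# Sketch of the dispatch flow: type-1 instructions go straight to the
# first executing unit, while type-2 instructions pass through a state
# control unit that forwards them when the second unit is idle or
# buffers them (the abstract's register-file store) while it is busy.

class SecondUnit:
    def __init__(self):
        self.busy = False
        self.log = []

    def execute(self, instr):
        self.log.append(instr)

class StateControlUnit:
    def __init__(self, second_unit):
        self.second_unit = second_unit
        self.pending = []  # stands in for the register file in memory

    def route(self, instr):
        if self.second_unit.busy:
            self.pending.append(instr)   # store until the unit frees up
        else:
            self.second_unit.execute(instr)

    def drain(self):
        # replay buffered instructions once the second unit is idle
        while self.pending and not self.second_unit.busy:
            self.second_unit.execute(self.pending.pop(0))

def decode_and_dispatch(instr, first_unit_log, scu):
    if instr["type"] == 1:
        first_unit_log.append(instr)   # first executing unit
    else:
        scu.route(instr)               # second type goes via state control
```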
-
Publication No.: US11507429B2
Publication Date: 2022-11-22
Application No.: US16038243
Filing Date: 2018-07-18
Inventor: Chun-Gi Lyuh , Young-Su Kwon , Chan Kim , Hyun Mi Kim , Jeongmin Yang , Jaehoon Chung , Yong Cheol Peter Cho
Abstract: Provided is a neural network accelerator which performs a calculation of a neural network provided with layers, the neural network accelerator including a kernel memory configured to store kernel data related to a filter, a feature map memory configured to store feature map data which are outputs of the layers, and a Processing Element (PE) array including PEs arranged along first and second directions, wherein each of the PEs performs a calculation using the feature map data transmitted in the first direction from the feature map memory and the kernel data transmitted in the second direction from the kernel memory, and transmits a calculation result to the feature map memory in a third direction opposite to the first direction.
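The arithmetic performed by the PE array in this abstract reduces to a matrix multiply: feature-map data streams along one direction, kernel data along the other, and each PE accumulates products. The toy below captures that arithmetic but simplifies the streaming schedule and the write-back path; it is an illustration, not the patented implementation.

```python
# Toy model of the PE-array dataflow: feature-map rows stream in along
# the first direction, kernel columns along the second, and each PE
# holds one accumulator. One loop iteration over k is one "beat".

def pe_array_matmul(feature_map, kernel):
    rows, inner = len(feature_map), len(feature_map[0])
    cols = len(kernel[0])
    acc = [[0] * cols for _ in range(rows)]   # one accumulator per PE
    for k in range(inner):                    # streaming step
        for i in range(rows):                 # feature data, first direction
            for j in range(cols):             # kernel data, second direction
                acc[i][j] += feature_map[i][k] * kernel[k][j]
    return acc  # results travel back toward the feature map memory
```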
-
Publication No.: US11853759B2
Publication Date: 2023-12-26
Application No.: US17459935
Filing Date: 2021-08-27
Inventor: Jeongmin Yang
CPC classification number: G06F9/3004 , G06F7/5443 , G06F9/30025 , G06F15/80 , H03M7/70
Abstract: Disclosed are a neural network accelerator and an operating method thereof, which include an instruction analyzer that analyzes a first instruction instructing an operation with respect to a first layer of a neural network algorithm from an external device, a polymorphic operator array including a plurality of operators that performs the operation with respect to the first layer under a control of the instruction analyzer, an interface that communicates with the external device and an external memory under the control of the instruction analyzer, an internal memory, a type converter, a type conversion data mover that stores data received from the external memory through the interface in the internal memory under the control of the instruction analyzer, and an internal type converter that performs a conversion of data stored in the internal memory or data generated by the polymorphic operator array under the control of the instruction analyzer.
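A type-converting data move of the kind this abstract describes can be sketched as a copy that changes numeric format in flight. The float32-to-float16 pairing below is an assumption chosen for illustration; the abstract does not name specific formats.

```python
# Rough sketch of a type-converting data mover: copy a buffer from
# "external" to "internal" memory, narrowing IEEE-754 float32 values
# to float16 ('e' format) on the way. Formats are assumed, not from
# the patent text.
import struct

def move_with_conversion(external_fp32_bytes):
    n = len(external_fp32_bytes) // 4
    values = struct.unpack("<%df" % n, external_fp32_bytes)
    # re-pack as half precision into the internal buffer
    return struct.pack("<%de" % n, *values)

def read_internal_fp16(internal_bytes):
    n = len(internal_bytes) // 2
    return list(struct.unpack("<%de" % n, internal_bytes))
```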
-
Publication No.: US12210952B2
Publication Date: 2025-01-28
Application No.: US16201871
Filing Date: 2018-11-27
Inventor: Young-Su Kwon , Chan Kim , Hyun Mi Kim , Jeongmin Yang , Chun-Gi Lyuh , Jaehoon Chung , Yong Cheol Peter Cho
IPC: G06N3/04
Abstract: A reorganizable neural network computing device is provided. The computing device includes a data processing array unit including a plurality of operators disposed at locations corresponding to rows and columns. One or more chaining paths, which transfer the first input data from an operator in a first row of the data processing array unit to an operator in a second row, are selectively formed. A plurality of first data input processors of the computing device transfer the first input data for a layer of the neural network to the operators along the rows of the data processing array unit, and a plurality of second data input processors transfer the second input data to the operators along the columns of the array.
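The chaining idea can be illustrated with a few lines: each row normally takes its first-input data from its own input processor, but a chaining path can forward one row's data on to another row, letting rows share a single stream. All names here are hypothetical.

```python
# Illustrative sketch of row chaining in a reorganizable array.
# chain_from[r] is None (row r uses its own input processor) or the
# index of an earlier row whose data is chained into row r.

def feed_rows(row_inputs, chain_from):
    fed = []
    for r in range(len(row_inputs)):
        src = chain_from[r]
        fed.append(row_inputs[r] if src is None else fed[src])
    return fed
```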
-
Publication No.: US11068394B2
Publication Date: 2021-07-20
Application No.: US16567241
Filing Date: 2019-09-11
Inventor: Jeongmin Yang , Young-Su Kwon
Abstract: Provided is a neural network system for processing data transferred from an external memory. The neural network system includes an internal memory storing input data transferred from the external memory, an operator performing a multidimensional matrix operation by using the input data of the internal memory and transferring a result of the multidimensional matrix operation as output data to the internal memory, and a data moving controller controlling an exchange of the input data or the output data between the external memory and the internal memory. The data moving controller reorders a dimension order with respect to an access address of the external memory to generate an access address of the internal memory, for the multidimensional matrix operation.
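Reordering a dimension order between two address spaces can be sketched as decoding an external linear address into a multidimensional index and re-encoding it with the dimensions permuted. Row-major layout on both sides is an assumption made for this sketch; the patent does not fix a layout here.

```python
# Sketch of dimension-order reordering between external and internal
# memory addresses. Assumes row-major strides on both sides; `perm`
# gives the internal dimension order as a permutation of the external one.

def strides(shape):
    s, acc = [0] * len(shape), 1
    for d in range(len(shape) - 1, -1, -1):
        s[d] = acc
        acc *= shape[d]
    return s

def remap_address(ext_addr, ext_shape, perm):
    # decode the external linear address into a multidimensional index...
    ext_str = strides(ext_shape)
    idx = [(ext_addr // ext_str[d]) % ext_shape[d] for d in range(len(ext_shape))]
    # ...then re-encode it with the dimensions permuted
    int_shape = [ext_shape[p] for p in perm]
    int_idx = [idx[p] for p in perm]
    int_str = strides(int_shape)
    return sum(i * s for i, s in zip(int_idx, int_str))
```

With a 2x3 external array and `perm = [1, 0]` this reproduces a transpose: external element (i, j) at address `i*3 + j` lands at internal address `j*2 + i`.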
-