-
Publication No.: US20210157734A1
Publication Date: 2021-05-27
Application No.: US16953242
Filing Date: 2020-11-19
Inventor: Byung Jo KIM , Joo Hyun LEE , Seong Min KIM , Ju-Yeob KIM , Jin Kyu KIM , Mi Young LEE
IPC: G06F12/0862 , G06F12/02 , G06F13/16 , G06N3/063
Abstract: A method for controlling a memory from which data is transferred to a neural network processor, and an apparatus therefor, are provided, the method including: generating prefetch information for data by using a blob descriptor and a reference prediction table after history information is input; reading the data from the memory based on the prefetch information and temporarily archiving the read data in a prefetch buffer; and accessing the next data in the memory based on the prefetch information and temporarily archiving the next data in the prefetch buffer after the data is transferred from the prefetch buffer to the neural network processor.
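The history-driven prefetch flow described in the abstract can be illustrated with a minimal sketch. All names here (`PrefetchController`, `prediction_table`, `fetch`) are illustrative assumptions; the sketch assumes the reference prediction table simply maps the last accessed blob to the blob expected next.

```python
# Hypothetical sketch of history-based prefetching for a neural-network
# memory controller. The "reference prediction table" is modeled as a
# dict from the last accessed blob id to the predicted next blob id.

class PrefetchController:
    def __init__(self, memory):
        self.memory = memory              # blob id -> data
        self.prediction_table = {}        # last blob id -> predicted next id
        self.prefetch_buffer = {}         # blob id -> data staged for transfer
        self.last_blob = None

    def record_history(self, blob_id):
        """Update the reference prediction table from observed accesses."""
        if self.last_blob is not None:
            self.prediction_table[self.last_blob] = blob_id
        self.last_blob = blob_id

    def fetch(self, blob_id):
        """Return the blob, then stage the predicted next blob."""
        self.record_history(blob_id)
        data = self.prefetch_buffer.pop(blob_id, None)
        if data is None:                  # miss: read from memory directly
            data = self.memory[blob_id]
        nxt = self.prediction_table.get(blob_id)
        if nxt is not None:               # prefetch the predicted next blob
            self.prefetch_buffer[nxt] = self.memory[nxt]
        return data
```

After the access pattern 0, 1 has been observed once, a later access to blob 0 stages blob 1 in the prefetch buffer so the next transfer is served without a memory read.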
-
Publication No.: US20200151568A1
Publication Date: 2020-05-14
Application No.: US16541275
Filing Date: 2019-08-15
Inventor: Mi Young LEE , Byung Jo KIM , Seong Min KIM , Ju-Yeob KIM , Jin Kyu KIM , Joo Hyun LEE
Abstract: An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, thereby reducing the error that may occur when quantizing the weights of the batch normalization layers.
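The two-pass procedure above can be sketched as follows. This is a minimal illustration only: it assumes symmetric uniform quantization and takes "distribution information" to be the maximum absolute weight, which the patent does not specify.

```python
# Hedged sketch of two-stage quantization: quantize once, re-measure the
# distribution of the quantized weights, then quantize again with the
# updated statistics.

def quantize(weights, bits=8):
    """Symmetric uniform quantization using max-abs as distribution info."""
    max_abs = max(abs(w) for w in weights)     # "distribution information"
    scale = max_abs / (2 ** (bits - 1) - 1) if max_abs else 1.0
    return [round(w / scale) * scale for w in weights]

def two_stage_quantize(weights, bits=8):
    """First pass produces second weights; second pass yields final weights."""
    second_weights = quantize(weights, bits)   # first quantization
    return quantize(second_weights, bits)      # second quantization
```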
-
Publication No.: US20210357753A1
Publication Date: 2021-11-18
Application No.: US17317607
Filing Date: 2021-05-11
Inventor: Jin Kyu KIM , Byung Jo KIM , Seong Min KIM , Ju-Yeob KIM , Ki Hyuk PARK , Mi Young LEE , Joo Hyun LEE , Young-deuk JEON , Min-Hyung CHO
Abstract: A method and apparatus for multi-level stepwise quantization for a neural network are provided. The apparatus sets a reference level by selecting a value from among the values of the parameters of the neural network, in a direction from a high value equal to or greater than a predetermined value toward lower values, and performs learning based on the reference level. The setting of a reference level and the performing of learning are iterated until the result of the reference-level learning satisfies a predetermined condition and no variable parameter remains to be updated during learning among the parameters.
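A rough sketch of the stepwise idea follows. Everything here is an assumption for illustration: the tolerance band, the freezing rule, and the omission of the retraining step (a real implementation would retrain the still-variable parameters between levels).

```python
# Illustrative-only sketch of multi-level stepwise quantization: a reference
# level is picked from the largest remaining parameter magnitudes, nearby
# parameters are frozen to that level, and the process repeats until no
# variable parameter remains.

def stepwise_quantize(params, tolerance=0.25):
    """Freeze parameters level by level, from high magnitude to low."""
    frozen = {}                            # index -> assigned reference level
    remaining = dict(enumerate(params))
    while remaining:
        # choose the reference level: the largest remaining magnitude
        ref_idx = max(remaining, key=lambda i: abs(remaining[i]))
        ref = remaining[ref_idx]
        # freeze every parameter within tolerance of the reference level
        for i in [i for i, v in remaining.items()
                  if abs(v - ref) <= tolerance * abs(ref)]:
            frozen[i] = ref
            del remaining[i]
        # a real implementation would retrain the un-frozen parameters here
    return [frozen[i] for i in range(len(params))]
```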
-
Publication No.: US20180268571A1
Publication Date: 2018-09-20
Application No.: US15698499
Filing Date: 2017-09-07
Inventor: Seong Mo PARK , Sung Eun KIM , Ju-Yeob KIM , Jin Kyu KIM , Kwang Il OH , Joo Hyun LEE
CPC classification number: G06T9/002 , G06T7/11 , G06T7/194 , G06T2200/28 , G06T2207/20081 , G06T2207/20084
Abstract: Provided is an image compression device including an object extracting unit configured to perform convolution neural network (CNN) training and identify an object from an image received externally, a parameter adjusting unit configured to adjust a quantization parameter of a region in which the identified object is included in the image on the basis of the identified object, and an image compression unit configured to compress the image on the basis of the adjusted quantization parameter.
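The parameter-adjusting step can be illustrated with a small sketch. The object detector is stubbed out here, and the assumption that "adjusting the quantization parameter" means lowering the QP inside detected regions (so objects are compressed with higher fidelity) is an interpretation, not the patent's stated rule.

```python
# Minimal sketch of object-aware compression control: build a per-pixel
# quantization-parameter map with a lower QP (better quality) inside the
# bounding boxes of identified objects.

def adjust_qp_map(width, height, object_boxes, base_qp=30, object_qp=20):
    """Return a height x width QP map; object regions get the lower QP."""
    qp_map = [[base_qp] * width for _ in range(height)]
    for (x0, y0, x1, y1) in object_boxes:      # boxes from the CNN detector
        for y in range(y0, y1):
            for x in range(x0, x1):
                qp_map[y][x] = object_qp
    return qp_map
```

An image compression unit would then encode each block with the QP taken from this map.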
-
Publication No.: US20180096249A1
Publication Date: 2018-04-05
Application No.: US15718912
Filing Date: 2017-09-28
Inventor: Jin Kyu KIM , Joo Hyun LEE
IPC: G06N3/08
CPC classification number: G06N3/082 , G06N3/0454
Abstract: Provided is a method for operating a convolutional neural network. The method includes performing learning on weights between neural network nodes by using input data; removing weights whose magnitude is less than a threshold value and then removing an adaptive parameter that performs learning using the input data; and mapping the weights remaining after the removal of the adaptive parameter to a plurality of representative values.
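The prune-then-share scheme can be sketched as below. The uniform set of representative levels and the nearest-level snapping rule are assumptions for illustration; how the representative values are chosen is not specified in the abstract.

```python
# Hedged sketch of pruning plus weight sharing: weights below a threshold
# are removed (set to zero), and surviving weights are mapped to the
# nearest of a small set of representative values.

def prune_and_share(weights, threshold=0.1, levels=(0.25, 0.5, 1.0)):
    """Zero out small weights, snap the rest to a representative value."""
    out = []
    for w in weights:
        if abs(w) < threshold:
            out.append(0.0)                # pruned connection
        else:
            rep = min(levels, key=lambda r: abs(abs(w) - r))
            out.append(rep if w > 0 else -rep)
    return out
```

Sharing representative values lets the network store a short codebook plus small indices instead of full-precision weights.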
-
Publication No.: US20210303982A1
Publication Date: 2021-09-30
Application No.: US17205433
Filing Date: 2021-03-18
Inventor: Mi Young LEE , Young-deuk JEON , Byung Jo KIM , Ju-Yeob KIM , Jin Kyu KIM , Ki Hyuk PARK , Joo Hyun LEE , Min-Hyung CHO
Abstract: Disclosed is a neural network computing device. The neural network computing device includes a neural network accelerator including an analog MAC, a controller that controls the neural network accelerator in one of a first mode and a second mode, and a calibrator that calibrates a gain and a DC offset of the analog MAC. The calibrator includes a memory that stores weight data, calibration weight data, and calibration input data; a gain and offset calculator that reads the calibration weight data and the calibration input data from the memory, inputs them to the analog MAC, receives calibration output data from the analog MAC, and calculates the gain and the DC offset of the analog MAC; and an on-device quantizer that reads the weight data, receives the gain and the DC offset, and generates quantized weight data based on the gain and the DC offset.
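The calibration idea can be sketched by modeling the analog MAC as a linear transfer function, y = gain * (w . x) + offset, and recovering the two unknowns from two calibration runs with known inputs. The linear model, the two-point method, and the function names are all assumptions for illustration.

```python
# Sketch of gain/DC-offset calibration for an analog MAC, modeled as a
# Python callable: analog_mac(weights, inputs) -> output.

def calibrate(analog_mac):
    """Two-point calibration using ideal dot products of 0 and 1."""
    y0 = analog_mac([0.0], [0.0])     # ideal result 0 -> reads the offset
    y1 = analog_mac([1.0], [1.0])     # ideal result 1 -> reads gain + offset
    offset = y0
    gain = y1 - y0
    return gain, offset

def correct_output(y, gain, offset):
    """Undo the analog gain and offset in the digital domain."""
    return (y - offset) / gain

def quantize_weights(weights, gain, scale=127):
    """On-device quantizer: pre-divide by gain so the MAC output is scaled right
    (the DC offset is corrected additively at the output instead)."""
    return [round(w / gain * scale) for w in weights]
```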
-
Publication No.: US20210151091A1
Publication Date: 2021-05-20
Application No.: US16997445
Filing Date: 2020-08-19
Inventor: Young-deuk JEON , Seong Min KIM , Jin Kyu KIM , Joo Hyun LEE , Min-Hyung CHO , Jin Ho HAN
IPC: G11C11/4076 , G11C11/4099 , G11C11/4096
Abstract: Disclosed are a device and a method for calibrating a reference voltage. The reference voltage calibrating device includes a data signal communication unit that transmits/receives a data signal, a data strobe signal receiving unit that receives a first data strobe signal and a second data strobe signal, a voltage level of the second data strobe signal being opposite to a voltage level of the first data strobe signal, and a reference voltage generating unit that sets a reference voltage for determining a data value of the data signal, based on the first data strobe signal and the second data strobe signal, and the reference voltage generating unit adjusts the reference voltage based on the first data strobe signal and the second data strobe signal.
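One plausible reading of the adjustment step is that the reference voltage is driven toward the crossing point of the two complementary data strobe signals. The sketch below assumes that interpretation and a fixed-step update; both are illustrative assumptions, not the patent's stated algorithm.

```python
# Hedged sketch of reference-voltage calibration: nudge vref one step
# toward the average (crossing point) of the true and complementary data
# strobe levels on each calibration iteration.

def calibrate_vref(dqs_t_level, dqs_c_level, vref, step=0.01):
    """Move vref toward the DQS/DQS# crossing voltage by at most one step."""
    target = (dqs_t_level + dqs_c_level) / 2.0
    if vref < target:
        return min(vref + step, target)
    if vref > target:
        return max(vref - step, target)
    return vref
```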
-
Publication No.: US20200226456A1
Publication Date: 2020-07-16
Application No.: US16742808
Filing Date: 2020-01-14
Inventor: Young-deuk JEON , Byung Jo KIM , Ju-Yeob KIM , Jin Kyu KIM , Ki Hyuk PARK , Mi Young LEE , Joo Hyun LEE , Min-Hyung CHO
Abstract: The neuromorphic arithmetic device comprises an input monitoring circuit that outputs a monitoring result by detecting that the first bits of at least one first digit of a plurality of feature data and a plurality of weight data are all zeros; a partial sum data generator that, in response to the monitoring result, skips the arithmetic operation that would generate first partial sum data corresponding to the first bits while performing the operations that generate the plurality of partial sum data based on the plurality of feature data and the plurality of weight data; and a shift adder that substitutes a zero value for the first partial sum data and generates result data based on the zero-valued first partial sum data and the second partial sum data, that is, the remaining partial sum data.
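The zero-skip idea can be shown with a bit-serial multiply-accumulate sketch: when a given bit position is zero across all weight operands, the partial sum for that position is known to be zero and its computation is skipped. The bit-serial formulation is an assumption for illustration.

```python
# Rough sketch of zero-skipping in a shift-and-add MAC. Bit positions where
# every weight has a 0 bit contribute nothing, so their partial-sum
# computation is skipped and a zero is implied in the accumulation.

def shift_add_multiply(features, weights, bits=8):
    """Bit-serial MAC that skips bit positions where all weight bits are 0."""
    result = 0
    for b in range(bits):
        # input monitoring: are the b-th bits of all weights zero?
        if all(((w >> b) & 1) == 0 for w in weights):
            continue                      # skip: partial sum is known to be 0
        partial = sum(f for f, w in zip(features, weights) if (w >> b) & 1)
        result += partial << b            # shift adder
    return result
```

The skipped iterations are where the power saving comes from: no adder activity occurs for all-zero bit positions.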
-
Publication No.: US20180225563A1
Publication Date: 2018-08-09
Application No.: US15868889
Filing Date: 2018-01-11
Inventor: Ju-Yeob KIM , Byung Jo KIM , Jin Kyu KIM , Mi Young LEE , Seong Min KIM , Joo Hyun LEE
Abstract: Provided is an artificial neural network device including pre-synaptic neurons configured to generate a plurality of input spike signals, and a post-synaptic neuron configured to receive the plurality of input spike signals and to generate an output spike signal during a plurality of time periods, wherein the post-synaptic neuron applies different weights in the plurality of time periods according to their proximity to a reference time period in which the input spike signals that lead to generation of the output spike signal are received.
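The time-window weighting can be sketched as follows, assuming (purely for illustration) a linear decay of weight with distance from the reference period; the abstract does not specify the weighting function.

```python
# Hedged sketch of proximity-based spike weighting: input spikes arriving
# in periods closer to the reference period (the one that triggered the
# output spike) contribute with larger weights.

def weighted_contribution(spike_periods, reference_period, num_periods=4):
    """Sum spike contributions, weighted by temporal proximity."""
    total = 0.0
    for p in spike_periods:
        distance = abs(p - reference_period)
        weight = max(0.0, 1.0 - distance / num_periods)  # assumed linear decay
        total += weight
    return total
```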
-
Publication No.: US20180197084A1
Publication Date: 2018-07-12
Application No.: US15866351
Filing Date: 2018-01-09
Inventor: Ju-Yeob KIM , Byung Jo KIM , Jin Kyu KIM , Mi Young LEE , Seong Min KIM , Joo Hyun LEE
CPC classification number: G06N3/084 , G06N3/04 , G06N3/0454 , G06N3/063
Abstract: Provided is a convolutional neural network system. The system includes an input buffer configured to store an input feature, a parameter buffer configured to store a learning parameter, a calculation unit configured to perform a convolution layer calculation or a fully connected layer calculation by using the input feature provided from the input buffer and the learning parameter provided from the parameter buffer, and an output buffer configured to store an output feature outputted from the calculation unit and output the stored output feature to the outside. The parameter buffer provides a real learning parameter to the calculation unit at the time of the convolution layer calculation and provides a binary learning parameter to the calculation unit at the time of the fully connected layer calculation.
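The mixed-precision split can be illustrated with a minimal sketch: real-valued parameters on the convolution path, sign-binarized parameters on the fully connected path. The 1-D shapes and the sign-based binarization are illustrative assumptions.

```python
# Minimal sketch of the scheme above: the parameter buffer supplies real
# weights to the convolution layer and binary (+1/-1) weights to the fully
# connected layer.

def conv1d(feature, real_weights):
    """Convolution layer computed with real-valued learning parameters."""
    k = len(real_weights)
    return [sum(feature[i + j] * real_weights[j] for j in range(k))
            for i in range(len(feature) - k + 1)]

def binarize(weights):
    """Map each real weight to +1/-1 for the fully connected layer."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

def fully_connected(feature, real_weights):
    """Fully connected layer computed with binary learning parameters."""
    return sum(f * b for f, b in zip(feature, binarize(real_weights)))
```

Binarizing only the fully connected layer cuts parameter storage where the weight count is largest while keeping full precision where the convolution arithmetic is most sensitive.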
-