-
1.
Publication Number: US11854174B2
Publication Date: 2023-12-26
Application Number: US17851704
Application Date: 2022-06-28
Applicant: Samsung Electronics Co., Ltd.
Inventor: Dinesh Kumar Yadav , Ankur Deshwal , Saptarsi Das , Junwoo Jang , Sehwan Lee
IPC: G06T5/20 , G06N3/08 , G06F18/2111 , G06T1/00
CPC classification number: G06T5/20 , G06F18/2111 , G06N3/08 , G06T1/0007
Abstract: A method of performing convolution in a neural network with a variable dilation rate is provided. The method includes receiving a size of a first kernel and a dilation rate, determining a size of one or more disintegrated kernels based on the size of the first kernel, a baseline architecture of a memory, and the dilation rate, and determining an address of one or more blocks of an input image based on the dilation rate and one or more parameters associated with a size of the input image and the memory. Thereafter, the one or more blocks of the input image and the one or more disintegrated kernels are fetched from the memory, and an output image is obtained based on convolution of each of the one or more disintegrated kernels with the one or more blocks of the input image.
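As a point of reference for what this method computes, here is a minimal NumPy sketch of a single-channel 2-D dilated convolution (valid padding, stride 1). It is illustrative only: the kernel disintegration and block addressing are memory-layout details of the claimed method and are not modeled here.

```python
import numpy as np

def dilated_conv2d(image: np.ndarray, kernel: np.ndarray, dilation: int) -> np.ndarray:
    """Single-channel 2-D dilated convolution, valid padding, stride 1."""
    kh, kw = kernel.shape
    # Effective receptive field of the dilated kernel.
    eff_h = (kh - 1) * dilation + 1
    eff_w = (kw - 1) * dilation + 1
    out_h = image.shape[0] - eff_h + 1
    out_w = image.shape[1] - eff_w + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Sample the input on a grid spaced by the dilation rate.
            patch = image[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```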
-
2.
Publication Number: US12175208B2
Publication Date: 2024-12-24
Application Number: US16989391
Application Date: 2020-08-10
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Jinook Song , Daekyeung Kim , Junseok Park , Joonho Song , Sehwan Lee , Junwoo Jang , Yunkyo Cho
Abstract: An arithmetic apparatus includes a first operand holding circuit configured to output a first operand according to a clock signal, generate an indicator signal based on bit values of high-order bit data including a most significant bit of the first operand, and gate the clock signal based on the indicator signal, the clock signal being applied to a flip-flop latching the high-order bit data of the first operand; a second operand holding circuit configured to output a second operand according to the clock signal; and an arithmetic circuit configured to perform data gating on the high-order bit data of the first operand based on the indicator signal and output an operation result by performing an operation using a modified first operand resulting from the data gating and the second operand.
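The power saving described here comes from not clocking flip-flops that hold redundant high-order bits. Below is a purely behavioral sketch, assuming (as one plausible reading of the abstract) that the indicator flags a first operand whose high-order bits, including the most significant bit, are all zero; the 16-bit operand width and 8-bit split are illustrative choices, not details from the claims.

```python
LOW_BITS = 8  # illustrative: low-order half of a 16-bit first operand

def indicator(first_operand: int) -> bool:
    """True when every high-order bit of the first operand is zero."""
    return (first_operand >> LOW_BITS) == 0

def gated_multiply(first_operand: int, second_operand: int) -> int:
    """Multiply with data gating applied to the high-order bits."""
    if indicator(first_operand):
        # Data gating: only the low-order bits feed the multiplier, so the
        # high-order flip-flops need not toggle. The product is unchanged
        # because the gated bits were zero to begin with.
        modified = first_operand & ((1 << LOW_BITS) - 1)
    else:
        modified = first_operand
    return modified * second_operand
```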
-
3.
Publication Number: US11875255B2
Publication Date: 2024-01-16
Application Number: US16803342
Application Date: 2020-02-27
Applicant: Samsung Electronics Co., Ltd.
Inventor: Hyunsun Park , Yoojin Kim , Hyeongseok Yu , Sehwan Lee , Junwoo Jang
Abstract: A method of processing data in a neural network includes identifying a sparsity of input data based on valid information included in the input data, in which the input data includes valid values and invalid values, generating rearranged input data based on a form of the sparsity by rearranging, in the input data, a location of at least one of the valid values and the invalid values, and generating an output by performing a convolution on the rearranged input data in the neural network.
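A minimal sketch of one plausible reading of this abstract, treating zeros as the invalid values: the valid values are packed together with their original locations (the rearrangement), and the convolution then touches only the valid entries. The 1-D setting and function names are illustrative, not the patented implementation.

```python
import numpy as np

def rearrange_by_sparsity(row: np.ndarray):
    """Return the valid (non-zero) values and their original positions."""
    positions = np.nonzero(row)[0]
    return row[positions], positions

def sparse_conv1d(row: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """1-D valid convolution that skips the invalid (zero) input values."""
    values, positions = rearrange_by_sparsity(row)
    out_len = len(row) - len(kernel) + 1
    out = np.zeros(out_len)
    for v, p in zip(values, positions):
        # Each valid value contributes only to the outputs whose window covers it.
        for k in range(len(kernel)):
            o = p - k
            if 0 <= o < out_len:
                out[o] += v * kernel[k]
    return out
```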
-
4.
Publication Number: US12056595B2
Publication Date: 2024-08-06
Application Number: US16158660
Application Date: 2018-10-12
Applicant: Samsung Electronics Co., Ltd.
Inventor: Sehwan Lee , Namjoon Kim , Joonho Song , Junwoo Jang
Abstract: Provided are a method and an apparatus for processing a convolution operation in a neural network. The method includes determining operands, from input feature maps and kernels, on which a convolution operation is to be performed, dispatching operand pairs combined from the determined operands to multipliers in a convolution operator, generating outputs by performing addition and accumulation operations on the results of the multiplication operations, and obtaining pixel values of output feature maps corresponding to a result of the convolution operation based on the generated outputs.
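The flow in this abstract (pairing operands, dispatching the pairs to a bank of multipliers, then adding and accumulating the products) can be modeled in software as below. The multiplier count and the simple batching are arbitrary choices for the sketch, not details taken from the patent.

```python
import numpy as np

NUM_MULTIPLIERS = 4  # illustrative size of the multiplier bank

def conv_pixel(window: np.ndarray, kernel: np.ndarray) -> float:
    """One output-feature-map pixel computed by dispatching operand pairs."""
    pairs = list(zip(window.ravel(), kernel.ravel()))  # operand pairs
    acc = 0.0
    for start in range(0, len(pairs), NUM_MULTIPLIERS):
        batch = pairs[start:start + NUM_MULTIPLIERS]   # dispatch to multipliers
        products = [a * w for a, w in batch]           # multiplication operations
        acc += sum(products)                           # addition and accumulation
    return acc
```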
-
5.
Publication Number: US11960999B2
Publication Date: 2024-04-16
Application Number: US18304574
Application Date: 2023-04-21
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho Song , Sehwan Lee , Junwoo Jang
CPC classification number: G06N3/08 , G06F17/153 , G06N3/04 , G06N3/045
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
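The decomposition described here (rearranging the kernel, splitting it into sub-kernels, convolving, and merging) is easiest to check in one dimension. The sketch below is an illustrative 1-D, single-channel version built on the zero-insertion definition of transposed convolution; the 2-D case splits the kernel along both axes in the same way, and the kernel rearrangement mentioned in the abstract corresponds to the flip that np.convolve performs implicitly.

```python
import numpy as np

def transposed_conv1d_reference(x: np.ndarray, w: np.ndarray, stride: int) -> np.ndarray:
    """Deconvolution by its zero-insertion (scatter) definition."""
    n, k = len(x), len(w)
    y = np.zeros(stride * (n - 1) + k)
    for i, xv in enumerate(x):
        y[i * stride:i * stride + k] += xv * w
    return y

def transposed_conv1d_subkernels(x: np.ndarray, w: np.ndarray, stride: int) -> np.ndarray:
    """Same result via sub-kernels: split w by phase, convolve, then merge."""
    n, k = len(x), len(w)
    y = np.zeros(stride * (n - 1) + k)
    for phase in range(stride):
        sub = w[phase::stride]           # sub-kernel holding this phase's weights
        if len(sub) == 0:
            continue
        partial = np.convolve(x, sub)    # plain (non-strided) full convolution
        y[phase::stride] = partial       # merge: fill every stride-th output
    return y
```

For matching inputs the two functions return the same output, but the sub-kernel form never materializes a zero-inserted intermediate tensor, which is what lets an ordinary convolution operator produce the deconvolution result.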
-
6.
Publication Number: US12271809B2
Publication Date: 2025-04-08
Application Number: US17848007
Application Date: 2022-06-23
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Namjoon Kim , Sehwan Lee , Junwoo Jang
Abstract: A neural network apparatus includes a plurality of node buffers connected to a node lane and configured to store input node data by a predetermined bit size; a plurality of weight buffers connected to a weight lane and configured to store weights; and one or more processors configured to: generate first and second split data by splitting the input node data by the predetermined bit size, store the first and second split data in the node buffers, output the first split data to an operation circuit for a neural network operation on an index-by-index basis, shift the second split data, and output the second split data to the operation circuit on the index-by-index basis.
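A behavioral sketch of the split-data idea in this abstract, assuming unsigned node values and an 8-bit split width (both are illustrative assumptions rather than details from the claims): each input node value is split into a low half (first split data) and a high half (second split data), the high half is shifted back to its original significance, and both halves feed the same multiply-accumulate path index by index.

```python
SPLIT_BITS = 8  # illustrative predetermined bit size

def split_node(value: int):
    """Split an unsigned node value into (first, second) split data."""
    first = value & ((1 << SPLIT_BITS) - 1)   # low-order half
    second = value >> SPLIT_BITS              # high-order half
    return first, second

def split_dot(nodes, weights) -> int:
    """Dot product computed from split node data, index by index."""
    acc = 0
    for x, w in zip(nodes, weights):
        first, second = split_node(x)
        acc += first * w                      # first split data path
        acc += (second << SPLIT_BITS) * w     # shifted second split data path
    return acc                                # equals sum(x * w)
```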
-
7.
Publication Number: US11423251B2
Publication Date: 2022-08-23
Application Number: US16733314
Application Date: 2020-01-03
Applicant: Samsung Electronics Co., Ltd.
Inventor: Dinesh Kumar Yadav , Ankur Deshwal , Saptarsi Das , Junwoo Jang , Sehwan Lee
Abstract: A method of performing convolution in a neural network with a variable dilation rate is provided. The method includes receiving a size of a first kernel and a dilation rate, determining a size of one or more disintegrated kernels based on the size of the first kernel, a baseline architecture of a memory, and the dilation rate, and determining an address of one or more blocks of an input image based on the dilation rate and one or more parameters associated with a size of the input image and the memory. Thereafter, the one or more blocks of the input image and the one or more disintegrated kernels are fetched from the memory, and an output image is obtained based on convolution of each of the one or more disintegrated kernels with the one or more blocks of the input image.
-
8.
Publication Number: US10885433B2
Publication Date: 2021-01-05
Application Number: US16107717
Application Date: 2018-08-21
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho Song , Sehwan Lee , Junwoo Jang
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
-
9.
Publication Number: US11854241B2
Publication Date: 2023-12-26
Application Number: US17243057
Application Date: 2021-04-28
Applicant: Samsung Electronics Co., Ltd.
Inventor: Junwoo Jang
IPC: G06T7/10 , G06V10/44 , G06N3/08 , G06F18/213 , G06F18/2413 , G06V30/19 , G06V10/82
CPC classification number: G06V10/454 , G06F18/213 , G06F18/2413 , G06N3/08 , G06V10/82 , G06V30/19173
Abstract: A neural network apparatus includes one or more processors configured to acquire an input feature map and trained weights, generate a plurality of sub-feature maps by splitting the input feature map based on a dilation rate, generate a plurality of intermediate feature maps by performing a convolution operation between the plurality of sub-feature maps and the trained weights, and generate a dilated output feature map by merging the plurality of intermediate feature maps based on the dilation rate.
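This abstract spells out the split-and-merge form of dilated convolution: the input feature map is split into dilation_rate × dilation_rate interleaved sub-feature maps, each sub-feature map is convolved with the plain (non-dilated) trained weights, and the intermediate feature maps are merged back by interleaving. A single-channel NumPy sketch, assuming the input is large enough for a valid output:

```python
import numpy as np

def conv2d_valid(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Plain (non-dilated) valid convolution, single channel, stride 1."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def dilated_conv_via_sub_feature_maps(x: np.ndarray, k: np.ndarray, d: int) -> np.ndarray:
    """Dilated convolution computed by split, plain convolution, and merge."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - (kh - 1) * d, x.shape[1] - (kw - 1) * d))
    for r in range(d):
        for s in range(d):
            sub_map = x[r::d, s::d]                   # sub-feature map for phase (r, s)
            intermediate = conv2d_valid(sub_map, k)   # intermediate feature map
            out[r::d, s::d] = intermediate            # merge by interleaving
    return out
```

For matching sizes this returns the same values as a direct dilated convolution such as the one sketched after the first entry above.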
-
10.
Publication Number: US11663473B2
Publication Date: 2023-05-30
Application Number: US17112041
Application Date: 2020-12-04
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventor: Joonho Song , Sehwan Lee , Junwoo Jang
CPC classification number: G06N3/08 , G06F17/153 , G06N3/04 , G06N3/045
Abstract: A neural network apparatus configured to perform a deconvolution operation includes a memory configured to store a first kernel; and a processor configured to: obtain, from the memory, the first kernel; calculate a second kernel by adjusting an arrangement of matrix elements comprised in the first kernel; generate sub-kernels by dividing the second kernel; perform a convolution operation between an input feature map and the sub-kernels using a convolution operator; and generate an output feature map, as a deconvolution of the input feature map, by merging results of the convolution operation.
-