-
1.
Publication No.: US20180341491A1
Publication Date: 2018-11-29
Application No.: US15973962
Filing Date: 2018-05-08
Inventor: Yong-Seok CHOI , Shin-Young AHN , Eun-Ji LIM , Young-Choon WOO , Wan CHOI
Abstract: Disclosed herein are an apparatus and method for sharing memory between computers. The apparatus for sharing memory between computers includes multiple memory adapters, installed in corresponding ones of multiple computers, for receiving an address corresponding to an instruction from the computers and transforming the received address into an instruction in the form of a packet; and shared memory for transforming the instruction in the form of the packet, received from the multiple memory adapters, into an address and performing an operation corresponding to the instruction for a memory cell corresponding to the address.
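A minimal sketch of the adapter/shared-memory exchange the abstract describes, assuming a made-up packet layout (one opcode byte plus 8-byte address and value fields); the opcodes and field widths are illustrative, not the patented format:

```python
import struct

PACKET_FMT = "!BQQ"          # opcode (1 byte), address (8 bytes), value (8 bytes)
OP_READ, OP_WRITE = 0, 1

def adapter_encode(opcode, address, value=0):
    """Memory adapter: transform an instruction and its address into a packet."""
    return struct.pack(PACKET_FMT, opcode, address, value)

class SharedMemory:
    """Shared memory: transform the packet back into an address and perform the operation."""
    def __init__(self, size):
        self.cells = [0] * size

    def execute(self, packet):
        opcode, address, value = struct.unpack(PACKET_FMT, packet)
        if opcode == OP_WRITE:
            self.cells[address] = value
            return None
        return self.cells[address]          # OP_READ

# Adapters in two different computers addressing the same shared memory
shm = SharedMemory(1024)
shm.execute(adapter_encode(OP_WRITE, 42, 7))     # computer A writes
print(shm.execute(adapter_encode(OP_READ, 42)))  # computer B reads back 7
```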
-
2.
Publication No.: US20190243782A1
Publication Date: 2019-08-08
Application No.: US16165891
Filing Date: 2018-10-19
Inventor: Yong-Seok CHOI , Shin-Young AHN , Eun-Ji LIM , Young-Choon WOO , Wan CHOI
IPC: G06F12/1081 , G06F13/28
Abstract: Disclosed herein are an apparatus and method for interfacing with common memory. The apparatus for interfacing with common memory includes a computer-input/output (I/O)-interface-protocol-processing unit for receiving a packet for accessing common memory from a computer; a direct memory access unit for transforming the packet into an instruction for performing any one of reading from and writing to the common memory; and a common memory interface unit for transmitting the instruction to the common memory and receiving information about whether execution of the instruction is completed from the common memory.
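The sketch below mirrors the three units named in the abstract as plain Python classes; the class names, dictionary-based packet fields, and the completion flag are assumptions for illustration only:

```python
class ProtocolUnit:
    """Computer-I/O-interface-protocol-processing unit: accepts the access packet."""
    def receive(self, packet):
        return packet            # e.g. validated and stripped of transport headers

class DMAUnit:
    """Direct memory access unit: turns the packet into a read or write instruction."""
    def to_instruction(self, packet):
        return {"op": packet["op"], "addr": packet["addr"], "data": packet.get("data")}

class CommonMemoryInterface:
    """Common memory interface unit: issues the instruction and reports completion."""
    def __init__(self, memory):
        self.memory = memory

    def issue(self, instr):
        if instr["op"] == "write":
            self.memory[instr["addr"]] = instr["data"]
        else:
            instr["data"] = self.memory[instr["addr"]]
        return {"done": True, "data": instr["data"]}   # completion info from the memory

memory = {}
pkt = ProtocolUnit().receive({"op": "write", "addr": 0x10, "data": 99})
print(CommonMemoryInterface(memory).issue(DMAUnit().to_instruction(pkt)))
```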
-
3.
Publication No.: US20180336076A1
Publication Date: 2018-11-22
Application No.: US15979169
Filing Date: 2018-05-14
Inventor: Eun-Ji LIM , Shin-Young AHN , Yong-Seok CHOI , Young-Choon WOO , Wan CHOI
CPC classification number: G06F9/544 , G06F9/52 , G06F15/167 , G06N20/00
Abstract: Disclosed herein are a parameter-sharing apparatus and method. The parameter-sharing apparatus includes a memory allocation unit for managing allocation of a memory area, in which a parameter is to be stored, to a memory box, and updating a mapping table stored in the memory box based on allocation management of the memory area, and an operation processing unit for providing the memory allocation unit with parameter information required for the allocation management of the memory area in which the parameter is to be stored and sharing the parameter stored in the memory box.
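A minimal sketch of the allocation and mapping-table flow, assuming the memory box holds a flat data array plus a name-keyed mapping table and that allocation is a simple bump allocator; none of these details are taken from the patent itself:

```python
class MemoryBox:
    """Memory box holding both the parameter data and the mapping table."""
    def __init__(self, size):
        self.data = [0.0] * size
        self.mapping_table = {}          # parameter name -> (offset, length)

class MemoryAllocationUnit:
    """Allocates a memory area for a parameter and updates the box's mapping table."""
    def __init__(self, box):
        self.box, self.next_free = box, 0

    def allocate(self, name, length):
        offset = self.next_free
        self.next_free += length
        self.box.mapping_table[name] = (offset, length)
        return offset

class OperationProcessingUnit:
    """Supplies parameter info for allocation and shares the parameter stored in the box."""
    def __init__(self, box, allocator):
        self.box, self.allocator = box, allocator

    def put(self, name, values):
        offset = self.box.mapping_table.get(name, (None,))[0]
        if offset is None:
            offset = self.allocator.allocate(name, len(values))
        self.box.data[offset:offset + len(values)] = values

    def get(self, name):
        offset, length = self.box.mapping_table[name]
        return self.box.data[offset:offset + length]

box = MemoryBox(1024)
proc = OperationProcessingUnit(box, MemoryAllocationUnit(box))
proc.put("layer1.weights", [0.1, 0.2, 0.3])
print(proc.get("layer1.weights"))      # shared via the memory box: [0.1, 0.2, 0.3]
```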
-
4.
Publication No.: US20210216495A1
Publication Date: 2021-07-15
Application No.: US17216322
Filing Date: 2021-03-29
Inventor: Shin-Young AHN , Eun-Ji LIM , Yong-Seok CHOI , Young-Choon WOO , Wan CHOI
IPC: G06F15/173 , G06N3/08 , H04L29/08 , G06F9/50
Abstract: Disclosed herein are a parameter server and a method for sharing distributed deep-learning parameters using the parameter server. The method for sharing distributed deep-learning parameters using the parameter server includes initializing a global weight parameter in response to an initialization request by a master process; performing an update by receiving a learned local gradient parameter from the worker process, which performs deep-learning training after updating a local weight parameter using the global weight parameter; accumulating the gradient parameters in response to a request by the master process; and performing an update by receiving the global weight parameter from the master process that calculates the global weight parameter using the accumulated gradient parameters of the one or more worker processes.
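A hedged sketch of the server-side exchange described in the abstract, assuming synchronous gradient averaging and plain Python lists in place of real tensors; method names such as pull and push_gradient are illustrative, not the patented interface:

```python
class ParameterServer:
    def __init__(self):
        self.global_weights = None
        self.accumulated_grads = None
        self.num_updates = 0

    def initialize(self, weights):                  # master's initialization request
        self.global_weights = list(weights)

    def pull(self):                                 # worker fetches the global weights
        return list(self.global_weights)

    def push_gradient(self, grads):                 # worker sends learned local gradients
        if self.accumulated_grads is None:
            self.accumulated_grads = [0.0] * len(grads)
        self.accumulated_grads = [a + g for a, g in zip(self.accumulated_grads, grads)]
        self.num_updates += 1

    def apply_update(self, lr=0.01):                # master computes the new global weights
        avg = [g / self.num_updates for g in self.accumulated_grads]
        self.global_weights = [w - lr * g for w, g in zip(self.global_weights, avg)]
        self.accumulated_grads, self.num_updates = None, 0

server = ParameterServer()
server.initialize([0.0, 0.0])                       # master initializes the global weights
server.push_gradient([0.5, 1.0])                    # one worker's learned local gradients
server.apply_update(lr=0.1)
print(server.global_weights)                        # [-0.05, -0.1]
```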
-
5.
Publication No.: US20180349313A1
Publication Date: 2018-12-06
Application No.: US15984262
Filing Date: 2018-05-18
Inventor: Shin-Young AHN , Eun-Ji LIM , Yong-Seok CHOI , Young-Choon WOO , Wan CHOI
IPC: G06F15/173 , H04L29/08 , G06N3/08
Abstract: Disclosed herein are a parameter server and a method for sharing distributed deep-learning parameters using the parameter server. The method for sharing distributed deep-learning parameters using the parameter server includes initializing a global weight parameter in response to an initialization request by a master process; performing an update by receiving a learned local gradient parameter from the worker process, which performs deep-learning training after updating a local weight parameter using the global weight parameter; accumulating the gradient parameters in response to a request by the master process; and performing an update by receiving the global weight parameter from the master process that calculates the global weight parameter using the accumulated gradient parameters of the one or more worker processes.
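This record shares its abstract with the previous one, so rather than repeating the server sketch, the example below covers the complementary worker-side loop; the RemoteServerStub and the toy gradient rule are stand-ins, not the actual implementation:

```python
class RemoteServerStub:
    """Stand-in for the parameter server sketched after the previous record."""
    def __init__(self, weights):
        self.weights, self.grad_sum, self.n = list(weights), None, 0

    def pull(self):
        return list(self.weights)

    def push_gradient(self, grads):
        self.grad_sum = grads if self.grad_sum is None else \
            [a + g for a, g in zip(self.grad_sum, grads)]
        self.n += 1

def worker_process(server, local_batch):
    local_weights = server.pull()            # update local weights using the global weights
    grads = [w - x for w, x in zip(local_weights, local_batch)]  # stand-in for training
    server.push_gradient(grads)              # send the learned local gradient parameters

server = RemoteServerStub([0.0, 0.0])
worker_process(server, [1.0, 2.0])
worker_process(server, [3.0, 4.0])
print(server.grad_sum, server.n)             # gradients accumulated for the master's update
```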