-
1.
Publication No.: US20220414503A1
Publication Date: 2022-12-29
Application No.: US17569393
Filing Date: 2022-01-05
Inventors: Jongse PARK, Wonik SEO, Sanghoon CHA, Yeonjae KIM, Jaehyuk HUH
Abstract: Disclosed is an SLO-aware artificial intelligence inference scheduler technology for a heterogeneous processor-based edge system. A scheduling method for a machine learning (ML) inference task, performed by a scheduling system, may include receiving inference task requests for multiple ML models directed at an edge system composed of heterogeneous processors, and operating the heterogeneous processor resources of the edge system according to a service-level objective (SLO)-aware scheduling policy in response to the received requests.
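
To make the scheduling idea concrete, the following is a minimal sketch of an SLO-aware dispatch policy for ML inference requests over heterogeneous processors. The processor names, the per-model latency table, and the greedy "earliest SLO-meeting processor" rule are illustrative assumptions, not the specific policy claimed in the patent.

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model: str      # ML model identifier
    arrival: float  # arrival time in ms
    slo_ms: float   # service-level objective: maximum tolerated latency in ms

# Assumed per-processor inference latencies (ms) for each model.
LATENCY = {
    "gpu": {"resnet50": 8.0,  "bert": 25.0},
    "npu": {"resnet50": 12.0, "bert": 40.0},
    "cpu": {"resnet50": 45.0, "bert": 180.0},
}

class SLOAwareScheduler:
    """Greedy SLO-aware dispatcher over heterogeneous processors (illustrative)."""

    def __init__(self, processors):
        # Time at which each processor next becomes free.
        self.free_at = {p: 0.0 for p in processors}

    def dispatch(self, req: InferenceRequest) -> str:
        candidates = []
        for proc, busy_until in self.free_at.items():
            start = max(req.arrival, busy_until)
            finish = start + LATENCY[proc][req.model]
            violates_slo = (finish - req.arrival) > req.slo_ms
            # Sort key: SLO-meeting processors first, then earliest finish time.
            candidates.append((violates_slo, finish, proc))
        violates, finish, proc = min(candidates)
        self.free_at[proc] = finish
        return proc

# Example: a BERT request with a tight SLO lands on the GPU.
scheduler = SLOAwareScheduler(["gpu", "npu", "cpu"])
print(scheduler.dispatch(InferenceRequest(model="bert", arrival=0.0, slo_ms=30.0)))  # -> "gpu"
```

The sketch only captures the core trade-off implied by the abstract: each request is routed to whichever heterogeneous processor can still finish within the request's SLO, falling back to the earliest-finishing processor when no placement can meet it.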
-
2.
Publication No.: US20180176202A1
Publication Date: 2018-06-21
Application No.: US15704601
Filing Date: 2017-09-14
Inventors: Yeonju RO, Seongwook JIN, Jaehyuk HUH, John Dongjun KIM
CPC classification numbers: H04L63/067, H04L9/0863, H04L9/3228, H04L63/0428, H04W12/04
Abstract: A device transmits or receives a packet in a memory network that includes one or more processors and/or one or more memory devices. The device includes a key storage unit configured to store a one-time password (OTP) key shared with a target node, an encryption unit configured to encrypt a transmission packet with the OTP key stored in the key storage unit and to transmit the encrypted packet to the target node, and a decryption unit configured to decrypt a packet received from the target node with the OTP key stored in the key storage unit. The device is a processor or a memory device in the memory network.
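
As a concrete illustration of the described units, here is a minimal sketch of a node that stores a per-target OTP secret, derives a one-time key per packet, and XORs packets with that key. The HMAC-based key derivation, the packet counter, and the XOR keystream are assumptions for illustration only; the abstract specifies only that a shared OTP key is used for encryption and decryption.

```python
import hashlib
import hmac

class OTPKeyStore:
    """Key storage unit: holds the OTP secret shared with each target node."""

    def __init__(self):
        self._secrets = {}

    def set_shared_secret(self, node_id: str, secret: bytes):
        self._secrets[node_id] = secret

    def otp_key(self, node_id: str, counter: int, length: int) -> bytes:
        # Derive a per-packet one-time key from the shared secret and a counter
        # (an assumed HMAC-based construction, not the patent's exact scheme).
        stream, block = b"", 0
        while len(stream) < length:
            msg = counter.to_bytes(8, "big") + block.to_bytes(4, "big")
            stream += hmac.new(self._secrets[node_id], msg, hashlib.sha256).digest()
            block += 1
        return stream[:length]

class MemoryNetworkDevice:
    """A processor or memory device that encrypts/decrypts packets with the OTP key."""

    def __init__(self, keystore: OTPKeyStore):
        self.keystore = keystore

    def encrypt_packet(self, target: str, counter: int, payload: bytes) -> bytes:
        key = self.keystore.otp_key(target, counter, len(payload))
        return bytes(p ^ k for p, k in zip(payload, key))  # XOR with the one-time key

    def decrypt_packet(self, source: str, counter: int, packet: bytes) -> bytes:
        # XOR with the same one-time key recovers the original payload.
        return self.encrypt_packet(source, counter, packet)

# Example: two ends sharing the same secret can round-trip a packet.
ks = OTPKeyStore()
ks.set_shared_secret("memory-node-3", b"example shared secret")
dev = MemoryNetworkDevice(ks)
ct = dev.encrypt_packet("memory-node-3", counter=1, payload=b"read 0x1000")
assert dev.decrypt_packet("memory-node-3", counter=1, packet=ct) == b"read 0x1000"
```

Because the same derived key is applied on both sides, encryption and decryption are the same XOR operation, which mirrors the symmetric roles of the encryption and decryption units in the abstract.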
-