-
Publication No.: US20240386272A1
Publication Date: 2024-11-21
Application No.: US18785849
Filing Date: 2024-07-26
Applicant: Intel Corporation
Inventor: Yamini Nimmagadda , Susanne M. Balle , Olugbemisola Oniyinde
Abstract: An Infrastructure Processing Unit (IPU), including: a model optimization processor configured to optimize an artificial intelligence (AI) model for an accelerator managed by the IPU, and deploy the optimized AI model to the accelerator for execution of an inference; and a local memory configured to store data related to the AI model optimization.
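The abstract above describes an IPU that tailors an AI model to the accelerator it manages, deploys the result for inference, and keeps optimization data in local memory. Below is a minimal Python sketch of that flow under stated assumptions; the IPU, Accelerator, optimize, and deploy names are illustrative placeholders, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Accelerator:
    name: str
    precision: str          # e.g. "int8" or "fp16" supported by the device

@dataclass
class IPU:
    """Hypothetical stand-in for the IPU described in the abstract."""
    local_memory: dict = field(default_factory=dict)   # caches optimization artifacts

    def optimize(self, model: dict, accel: Accelerator) -> dict:
        # Pick an optimization matched to the managed accelerator.
        optimized = dict(model, precision=accel.precision, target=accel.name)
        self.local_memory[(model["name"], accel.name)] = optimized  # store optimization data
        return optimized

    def deploy(self, optimized: dict, accel: Accelerator) -> str:
        # Hand the optimized model to the accelerator for inference.
        return f"{optimized['name']} deployed to {accel.name} at {optimized['precision']}"

ipu = IPU()
accel = Accelerator(name="fpga0", precision="int8")
model = {"name": "resnet50", "precision": "fp32"}
print(ipu.deploy(ipu.optimize(model, accel), accel))
```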
-
Publication No.: US11922227B2
Publication Date: 2024-03-05
Application No.: US18069700
Filing Date: 2022-12-21
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Karthik Kumar , Susanne M. Balle , Ignacio Astilleros Diez , Timothy Verrall , Ned M. Smith
CPC classification number: G06F9/5088 , G06F9/4856 , G06F9/4881 , G06F9/5072 , G06F2209/484
Abstract: Technologies for providing efficient migration of services include a server device. The server device includes compute engine circuitry to execute a set of services on behalf of a terminal device and migration accelerator circuitry. The migration accelerator circuitry is to determine whether execution of the services is to be migrated from an edge station in which the present server device is located to a second edge station in which a second server device is located, determine a prioritization of the services executed by the server device, and send, in response to a determination that the services are to be migrated and as a function of the determined prioritization, data utilized by each service to the second server device of the second edge station to migrate the services. Other embodiments are also described and claimed.
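A minimal Python sketch of the migration flow the abstract describes: decide whether to migrate, prioritize the services, then send each service's data to the second edge station in priority order. The Service fields, the signal-strength trigger, and the prioritization key are assumptions chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    latency_sensitive: bool
    state_bytes: int

def should_migrate(signal_strength: float, threshold: float = 0.3) -> bool:
    # Placeholder trigger: migrate when the terminal device is leaving this edge station.
    return signal_strength < threshold

def migration_order(services: list[Service]) -> list[Service]:
    # Prioritize latency-sensitive services, then smaller state first.
    return sorted(services, key=lambda s: (not s.latency_sensitive, s.state_bytes))

def migrate(services: list[Service], send) -> None:
    for svc in migration_order(services):
        send(svc.name, svc.state_bytes)   # transfer per-service data to the second edge station

services = [Service("video", True, 5_000_000),
            Service("logs", False, 800_000),
            Service("control", True, 40_000)]
if should_migrate(signal_strength=0.1):
    migrate(services, send=lambda name, size: print(f"sending {name}: {size} bytes"))
```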
-
Publication No.: US11907557B2
Publication Date: 2024-02-20
Application No.: US17681025
Filing Date: 2022-02-25
Applicant: Intel Corporation
Inventor: Susanne M. Balle , Francesc Guim Bernat , Slawomir Putyrski , Joe Grecco , Henry Mitchel , Evan Custodio , Rahul Khanna , Sujoy Sen
IPC: G06F15/80 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , G06F9/455 , H05K7/14 , H04L61/5007 , H04L67/63 , H04L67/75 , H03M7/30 , H03M7/40 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L41/044 , H04L49/104 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H04L47/78 , G06F16/28 , H04Q11/00 , G06F11/14 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L9/40
CPC classification number: G06F3/0641 , G06F3/0604 , G06F3/065 , G06F3/067 , G06F3/0608 , G06F3/0611 , G06F3/0613 , G06F3/0617 , G06F3/0647 , G06F3/0653 , G06F7/06 , G06F8/65 , G06F8/654 , G06F8/656 , G06F8/658 , G06F9/3851 , G06F9/3891 , G06F9/4401 , G06F9/45533 , G06F9/4843 , G06F9/4881 , G06F9/5005 , G06F9/505 , G06F9/5038 , G06F9/5044 , G06F9/5083 , G06F9/544 , G06F11/0709 , G06F11/079 , G06F11/0751 , G06F11/3006 , G06F11/3034 , G06F11/3055 , G06F11/3079 , G06F11/3409 , G06F12/0284 , G06F12/0692 , G06F13/1652 , G06F16/1744 , G06F21/57 , G06F21/6218 , G06F21/73 , G06F21/76 , G06T1/20 , G06T1/60 , G06T9/005 , H01R13/453 , H01R13/4536 , H01R13/4538 , H01R13/631 , H03K19/1731 , H03M7/3084 , H03M7/40 , H03M7/42 , H03M7/60 , H03M7/6011 , H03M7/6017 , H03M7/6029 , H04L9/0822 , H04L12/2881 , H04L12/4633 , H04L41/044 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L43/04 , H04L43/06 , H04L43/08 , H04L43/0894 , H04L47/20 , H04L47/2441 , H04L49/104 , H04L61/5007 , H04L67/10 , H04L67/1014 , H04L67/63 , H04L67/75 , H05K7/1452 , H05K7/1487 , H05K7/1491 , G06F11/1453 , G06F12/023 , G06F15/80 , G06F16/285 , G06F2212/401 , G06F2212/402 , G06F2221/2107 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L47/78 , H04L63/1425 , H04Q11/0005 , H05K7/1447 , H05K7/1492
Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
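The sketch below illustrates the kind of job division the abstract describes: split a job into tasks as a function of each accelerator's configuration, then run the tasks in parallel. The AcceleratorConfig fields, the proportional split, and the thread-pool stand-in for accelerator execution are assumptions, not the patent's scheduler.

```python
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass
class AcceleratorConfig:
    name: str
    slots: int            # stand-in for the device's parallel capacity

def divide_job(job_items: list, accels: list[AcceleratorConfig]) -> dict:
    # Split the job into per-accelerator task lists proportional to capacity.
    total = sum(a.slots for a in accels)
    tasks, start = {}, 0
    for a in accels:
        share = round(len(job_items) * a.slots / total)
        tasks[a.name] = job_items[start:start + share]
        start += share
    tasks[accels[-1].name].extend(job_items[start:])   # remainder goes to the last device
    return tasks

def run(task_chunk):
    return [x * x for x in task_chunk]                 # stand-in for accelerated work

accels = [AcceleratorConfig("fpga0", 2), AcceleratorConfig("gpu0", 6)]
tasks = divide_job(list(range(16)), accels)
with ThreadPoolExecutor() as pool:                     # parallel execution across devices
    results = dict(zip(tasks, pool.map(run, tasks.values())))
print(results)
```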
-
Publication No.: US20230195346A1
Publication Date: 2023-06-22
Application No.: US18109774
Filing Date: 2023-02-14
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Slawomir Putyrski
IPC: G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , G06F9/455 , H05K7/14 , H04L61/5007 , H04L67/63 , H04L67/75 , H03M7/30 , H03M7/40 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L41/044 , H04L49/104 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631
CPC classification number: G06F3/0641 , G06F16/1744 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/3079 , G06F9/5005 , H01R13/4536 , H01R13/453 , G06F9/5044 , G06F9/4843 , G06F9/45533 , G06F9/5083 , H05K7/1491 , H04L61/5007 , H04L67/63 , H04L67/75 , G06F3/0608 , G06F3/065 , G06F3/067 , H03M7/6017 , H03M7/60 , H03M7/40 , H03M7/6011 , H03M7/6029 , G06F3/0611 , G06F3/0613 , G06F3/0617 , G06F3/0647 , G06F3/0653 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/0709 , G06F11/0751 , G06F11/079 , G06F11/3006 , G06F11/3409 , G06F7/06 , G06T9/005 , H03M7/3084 , H03M7/42 , H04L12/2881 , H04L12/4633 , G06F13/1652 , G06F21/6218 , G06F21/76 , H03K19/1731 , H04L9/0822 , H04L41/044 , H04L49/104 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/3851 , G06F9/4881 , G06F9/505 , G06F12/0284 , G06F12/0692 , G06T1/20 , G06T1/60 , G06F9/3891 , G06F9/5038 , G06F9/544 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , G06F3/0604 , G06F11/3034 , G06F11/3055 , H01R13/4538 , H01R13/631 , H05K7/1452 , H05K7/1487 , H04L47/78
Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
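A short Python sketch of the request-handling loop the abstract outlines: match the request parameter to a capable accelerator device, transmit the workload, and hand the work product back to the application. The AcceleratorDevice class, capability sets, and handle_request name are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorDevice:
    name: str
    capabilities: set

    def process(self, workload: dict) -> str:
        return f"result of {workload['kind']} on {self.name}"   # stand-in work product

def select_accelerator(devices, request_param):
    # Find a device whose capabilities satisfy the request parameter.
    return next((d for d in devices if request_param in d.capabilities), None)

def handle_request(devices, request, deliver):
    device = select_accelerator(devices, request["param"])
    if device is None:
        raise RuntimeError("no accelerator satisfies the request parameter")
    work_product = device.process(request["workload"])   # transmit workload, receive product
    deliver(work_product)                                 # provide the product to the application

devices = [AcceleratorDevice("fpga0", {"compression"}),
           AcceleratorDevice("gpu0", {"inference"})]
handle_request(devices,
               {"param": "inference", "workload": {"kind": "image-classify"}},
               deliver=print)
```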
-
Publication No.: US11604882B2
Publication Date: 2023-03-14
Application No.: US16433709
Filing Date: 2019-06-06
Applicant: Intel Corporation
Inventor: Yeluri Raghuram , Susanne M. Balle , Nigel Thomas Cook , Kapil Sood
Abstract: Disclosed herein are embodiments related to security in cloudlet environments. In some embodiments, for example, a computing device (e.g., a cloudlet) may include: a trusted execution environment; a Basic Input/Output System (BIOS) to request a Key Encryption Key (KEK) from the trusted execution environment; and a Self-Encrypting Storage (SES) associated with the KEK; wherein the trusted execution environment is to verify the BIOS and provide the KEK to the BIOS subsequent to verification of the BIOS, and the BIOS is to provide the KEK to the SES to unlock the SES for access by the trusted execution environment.
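A minimal sketch of the unlock sequence the abstract describes: the trusted execution environment releases the KEK only after verifying the BIOS, and the BIOS then presents the KEK to the self-encrypting storage. The SHA-256 measurement, class names, and in-memory key handling are illustrative assumptions only.

```python
import hashlib
import secrets

class TrustedExecutionEnvironment:
    """Hypothetical stand-in for the TEE holding the Key Encryption Key (KEK)."""
    def __init__(self, expected_bios_digest: bytes):
        self._kek = secrets.token_bytes(32)
        self._expected = expected_bios_digest

    def request_kek(self, bios_image: bytes) -> bytes:
        # Release the KEK only after the BIOS measurement is verified.
        if hashlib.sha256(bios_image).digest() != self._expected:
            raise PermissionError("BIOS verification failed")
        return self._kek

class SelfEncryptingStorage:
    def __init__(self, kek: bytes):
        self._kek, self.unlocked = kek, False

    def unlock(self, kek: bytes) -> None:
        self.unlocked = kek == self._kek   # SES unlocks only with its associated KEK

bios_image = b"firmware-blob"
tee = TrustedExecutionEnvironment(hashlib.sha256(bios_image).digest())
kek = tee.request_kek(bios_image)          # BIOS requests the KEK from the TEE
ses = SelfEncryptingStorage(kek)
ses.unlock(kek)                            # BIOS provides the KEK to unlock the SES
print("SES unlocked:", ses.unlocked)
```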
-
Publication No.: US20220207358A1
Publication Date: 2022-06-30
Application No.: US17480236
Filing Date: 2021-09-21
Applicant: Intel Corporation
Inventor: Yamini Nimmagadda , Susanne M. Balle , Olugbemisola Oniyinde
Abstract: An Infrastructure Processing Unit (IPU), including: a model optimization processor configured to optimize an artificial intelligence (AI) model for an accelerator managed by the IPU, and deploy the optimized AI model to the accelerator for execution of an inference; and a local memory configured to store data related to the AI model optimization.
-
Publication No.: US20220179575A1
Publication Date: 2022-06-09
Application No.: US17681025
Filing Date: 2022-02-25
Applicant: Intel Corporation
Inventor: Susanne M. Balle , Francesc Guim Bernat , Slawomir Putyrski , Joe Grecco , Henry Mitchel , Evan Custodio , Rahul Khanna , Sujoy Sen
IPC: G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , G06F9/455 , H05K7/14 , H03M7/30 , H03M7/40 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , H04L61/5007 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L41/044 , H04L49/104 , H04L67/63 , H04L67/75 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631
Abstract: Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute engine is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
-
Publication No.: US11228539B2
Publication Date: 2022-01-18
Application No.: US16540807
Filing Date: 2019-08-14
Applicant: Intel Corporation
Inventor: Mrittika Ganguli , Sugesh Chandran , Parthasarathy Sarangam , Sujoy Sen , Susanne M. Balle , Rajesh Sankaran
IPC: H04L12/931 , H04L29/12 , H04L12/06 , G06F30/34
Abstract: Technologies for network interface controllers (NICs) include a compute sled and an accelerator sled in communication over a network. The accelerator sled configures a virtual switch endpoint associated with a remote direct memory access (RDMA) server instance that is associated with a field-programmable gate array (FPGA) of the accelerator sled. The accelerator sled updates local software defined networking (SDN) tables with a virtual tunnel associated with the virtual switch endpoint and a remote compute sled. A virtual switch of the accelerator sled switches virtual tunnel traffic from the remote compute sled to the RDMA server instance, which transfers data to or from the FPGA. The compute sled also updates a local SDN table with the virtual tunnel, and a virtual switch of the compute sled switches virtual tunnel traffic to or from the accelerator sled. Other embodiments are described and claimed.
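A compact Python sketch of the data path the abstract describes: the accelerator sled registers a virtual switch endpoint backed by an RDMA server instance for its FPGA, both sleds record the virtual tunnel in their local SDN tables, and tunnel traffic from the compute sled is switched to the RDMA endpoint. The VirtualSwitch and RdmaServerInstance classes and the "vxlan-42" tunnel name are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualSwitch:
    sdn_table: dict = field(default_factory=dict)     # tunnel_id -> endpoint handler

    def add_tunnel(self, tunnel_id: str, endpoint) -> None:
        self.sdn_table[tunnel_id] = endpoint           # update the local SDN table

    def switch(self, tunnel_id: str, payload: bytes):
        return self.sdn_table[tunnel_id](payload)      # forward tunnel traffic to the endpoint

class RdmaServerInstance:
    """Hypothetical RDMA endpoint fronting an FPGA on the accelerator sled."""
    def __init__(self):
        self.fpga_buffer = bytearray()

    def __call__(self, payload: bytes) -> int:
        self.fpga_buffer.extend(payload)               # transfer data to the FPGA
        return len(payload)

# Accelerator sled: virtual switch endpoint plus a tunnel to the remote compute sled.
accel_switch = VirtualSwitch()
rdma = RdmaServerInstance()
accel_switch.add_tunnel("vxlan-42", rdma)

# Compute sled: matching tunnel entry pointing at the accelerator sled's switch.
compute_switch = VirtualSwitch()
compute_switch.add_tunnel("vxlan-42", lambda p: accel_switch.switch("vxlan-42", p))
print("bytes delivered to FPGA:", compute_switch.switch("vxlan-42", b"input-tensor"))
```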
-
Publication No.: US11137922B2
Publication Date: 2021-10-05
Application No.: US15719770
Filing Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Rahul Khanna , Slawomir Putyrski , Sujoy Sen , Paul Dormitzer
IPC: G06F9/50 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L12/24 , H04L29/08 , G06F11/30 , H01R13/453 , G06F9/48 , H03M7/30 , H03M7/40 , H04L12/26 , H04L12/813 , H04L12/851 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , H04L29/12 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L12/933 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H05K7/14 , H04L12/911 , G06F11/14 , H04L29/06 , G06F15/80
Abstract: Technologies for providing accelerated functions as a service in a disaggregated architecture include a compute device that is to receive a request for an accelerated task. The task is associated with a kernel usable by an accelerator sled communicatively coupled to the compute device to execute the task. The compute device is further to determine, in response to the request and with a database indicative of kernels and associated accelerator sleds, an accelerator sled that includes an accelerator device configured with the kernel associated with the request. Additionally, the compute device is to assign the task to the determined accelerator sled for execution. Other embodiments are also described and claimed.
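A minimal sketch of the lookup the abstract describes: a database maps kernels to the accelerator sleds configured with them, and an incoming accelerated-task request is assigned to a sled that already holds the required kernel. The kernel_db contents, sled names, and the min-based selection policy are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AcceleratedTaskRequest:
    kernel: str            # kernel the task needs, e.g. "aes-gcm" or "resize"
    payload: bytes

# Database indicative of kernels and the accelerator sleds configured with them.
kernel_db = {
    "aes-gcm": ["sled-3"],
    "resize":  ["sled-1", "sled-7"],
}

def assign_task(request: AcceleratedTaskRequest) -> str:
    sleds = kernel_db.get(request.kernel, [])
    if not sleds:
        raise LookupError(f"no accelerator sled configured with kernel {request.kernel!r}")
    sled = min(sleds)      # placeholder policy; a real scheduler would weigh load and locality
    return f"task ({len(request.payload)} bytes) assigned to {sled} for kernel {request.kernel}"

print(assign_task(AcceleratedTaskRequest(kernel="resize", payload=b"\x00" * 1024)))
```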
-
Publication No.: US10990309B2
Publication Date: 2021-04-27
Application No.: US15721833
Filing Date: 2017-09-30
Applicant: Intel Corporation
Inventor: Francesc Guim Bernat , Evan Custodio , Susanne M. Balle , Joe Grecco , Henry Mitchel , Slawomir Putyrski
IPC: G06F9/46 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L12/24 , H04L29/08 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , H03M7/30 , H03M7/40 , H04L12/26 , H04L12/813 , H04L12/851 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , H04L29/12 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L12/933 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H05K7/14 , H04L12/911 , G06F11/14 , H04L29/06 , G06F15/80
Abstract: A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit a workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
-