-
Publication No.: US20230176919A1
Publication Date: 2023-06-08
Application No.: US18103739
Filing Date: 2023-01-31
Applicant: Intel Corporation
CPC Classification: G06F9/505, G06F11/0709, G06F11/0751, G06F11/079, G06F11/3006, G06F11/3055, H05K7/1487, H05K7/1491, G06F11/3034, G06F11/3409
Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
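Illustrative note: the Python sketch below models the coherence flow this abstract describes, under the assumption that coherence data can be represented as a simple delta of locally modified entries. The names CoherenceLogicUnit, NodeConfigRequest, and send_to_peer are hypothetical and do not come from the patent.

```python
# Minimal sketch of the coherence flow described in the abstract above.
# All class and method names are illustrative assumptions, not Intel's design.
from dataclasses import dataclass, field


@dataclass
class NodeConfigRequest:
    workload_id: str
    member_sleds: list[str]          # sleds composed into the managed node


@dataclass
class CoherenceLogicUnit:
    sled_id: str
    memory: dict[str, bytes] = field(default_factory=dict)   # local working data
    dirty: set[str] = field(default_factory=set)             # keys modified locally
    peers: list[str] = field(default_factory=list)

    def configure(self, req: NodeConfigRequest) -> None:
        # Remember the other sleds in the managed node so coherence
        # data can be pushed to them later.
        self.peers = [s for s in req.member_sleds if s != self.sled_id]

    def local_write(self, key: str, value: bytes) -> None:
        # A local processor modifies a portion of the working data;
        # record the key so the modification can be summarized later.
        self.memory[key] = value
        self.dirty.add(key)

    def flush_coherence_data(self, send_to_peer) -> None:
        # Determine coherence data (the set of modified entries) and
        # send it to every other sled in the managed node.
        delta = {k: self.memory[k] for k in self.dirty}
        for peer in self.peers:
            send_to_peer(peer, delta)
        self.dirty.clear()
```

In this toy model, a second sled would apply the received delta to its own copy of the working data before executing its share of the workload.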
-
Publication No.: US11630702B2
Publication Date: 2023-04-18
Application No.: US17246388
Filing Date: 2021-04-30
Applicant: Intel Corporation
Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
-
Publication No.: US11562063B2
Publication Date: 2023-01-24
Application No.: US17114246
Filing Date: 2020-12-07
Applicant: INTEL CORPORATION
Inventors: Michael Lemay, David M. Durham, Michael E. Kounavis, Barry E. Huntley, Vedvyas Shanbhogue, Jason W. Brandt, Josh Triplett, Gilbert Neiger, Karanvir Grewal, Baiju Patel, Ye Zhuang, Jr-Shian Tsai, Vadim Sukhomlinov, Ravi Sahita, Mingwei Zhang, James C. Farwell, Amitabh Das, Krishna Bhuyan
Abstract: Disclosed embodiments relate to encoded inline capabilities. In one example, a system includes a trusted execution environment (TEE) that partitions an address space within a memory into a plurality of compartments, each associated with code to execute a function, and assigns a message object in a heap to each compartment. The TEE receives a request from a first compartment to send a message block to a specified destination compartment and responds by authenticating the request, generating a corresponding encoded capability, conveying the encoded capability to the destination compartment, and scheduling the destination compartment to respond to the request. Subsequently, the TEE responds to a check-capability request from the destination compartment by checking the encoded capability and, when the check passes, providing a memory address to access the message block; otherwise, it generates a fault. Each compartment is isolated from the other compartments.
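Illustrative note: the following Python sketch is a toy model of the capability handshake the abstract describes, with an HMAC over the destination identifier and message address standing in for the encoded capability. The token format and the ToyTEE class are assumptions made for the sketch, not the encoding defined by the patent.

```python
# Toy model of the encoded-capability handshake from the abstract above.
# The token format (HMAC over destination id + message address) is an
# illustrative assumption only.
import hashlib
import hmac
import os


class CapabilityFault(Exception):
    """Raised when a capability check fails."""


class ToyTEE:
    def __init__(self) -> None:
        self._key = os.urandom(32)               # secret known only to the TEE
        self._messages: dict[int, bytes] = {}    # heap-resident message blocks

    def _encode(self, dest: str, addr: int) -> bytes:
        # Bind the capability to the destination compartment and the address.
        mac = hmac.new(self._key, f"{dest}:{addr}".encode(), hashlib.sha256)
        return mac.digest()

    def send_message(self, src: str, dest: str, block: bytes) -> tuple[int, bytes]:
        # Authenticate the request (trivially here), place the block in the
        # heap, and return an encoded capability bound to the destination.
        addr = id(block)
        self._messages[addr] = block
        return addr, self._encode(dest, addr)

    def check_capability(self, dest: str, addr: int, capability: bytes) -> bytes:
        # The destination presents the capability; on success it receives the
        # message block, otherwise a fault is raised.
        if not hmac.compare_digest(capability, self._encode(dest, addr)):
            raise CapabilityFault("capability check failed")
        return self._messages[addr]
```

Binding the token to the destination compartment is what lets this toy TEE reject the same capability when presented by any other compartment.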
-
Publication No.: US11922220B2
Publication Date: 2024-03-05
Application No.: US17255588
Filing Date: 2019-04-16
Applicant: Intel Corporation
Inventors: Mohammad R. Haghighat, Kshitij Doshi, Andrew J. Herdrich, Anup Mohan, Ravishankar R. Iyer, Mingqiu Sun, Krishna Bhuyan, Teck Joo Goh, Mohan J. Kumar, Michael Prinke, Michael Lemay, Leeor Peled, Jr-Shian Tsai, David M. Durham, Jeffrey D. Chamberlain, Vadim A. Sukhomlinov, Eric J. Dahlen, Sara Baghsorkhi, Harshad Sane, Areg Melik-Adamyan, Ravi Sahita, Dmitry Yurievich Babokin, Ian M. Steiner, Alexander Bachmutsky, Anil Rao, Mingwei Zhang, Nilesh K. Jain, Amin Firoozshahian, Baiju V. Patel, Wenyong Huang, Yeluri Raghuram
CPC Classification: G06F9/5061, G06F9/52, G06F11/302, G06F11/3495, G06F21/53, G06F21/604, G06F21/56, G06F2209/521, G06F2221/033, G06N20/00
Abstract: Embodiments of systems, apparatuses and methods provide enhanced function as a service (FaaS) to users, e.g., computer developers and cloud service providers (CSPs). A computing system configured to provide such enhanced FaaS service includes one or more architectural subsystems, software and orchestration subsystems, network and storage subsystems, and security subsystems. The computing system executes functions in response to events triggered by the users in an execution environment provided by the architectural subsystems, which represent an abstraction of execution management and shield the users from the burden of managing the execution. The software and orchestration subsystems allocate computing resources for the function execution by intelligently spinning containers for function code up and down, achieving decreased instantiation latency and increased execution scalability while maintaining secured execution. Furthermore, the computing system enables customers to pay only when their code is executed, with billing granular down to millisecond increments.
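Illustrative note: the sketch below reduces the described event-driven execution and millisecond-granular billing to a few lines of Python; container management and orchestration are deliberately collapsed into a plain function call, and the ToyFaaS class and its methods are hypothetical names, not the patent's architecture.

```python
# Minimal sketch of event-driven function execution with per-millisecond
# billing, as described in the abstract above. Names are illustrative only.
import time
from collections import defaultdict
from typing import Callable


class ToyFaaS:
    def __init__(self) -> None:
        self._functions: dict[str, Callable[[dict], object]] = {}
        self._billed_ms: dict[str, int] = defaultdict(int)

    def register(self, name: str, fn: Callable[[dict], object]) -> None:
        # Function code is registered once and only runs when triggered.
        self._functions[name] = fn

    def trigger(self, name: str, event: dict) -> object:
        # "Spin up" the function only when an event arrives, and bill the
        # owner only for the milliseconds actually spent executing it.
        start = time.perf_counter()
        try:
            return self._functions[name](event)
        finally:
            elapsed_ms = int((time.perf_counter() - start) * 1000) + 1
            self._billed_ms[name] += elapsed_ms

    def bill(self, name: str) -> int:
        # Accumulated charge for a function, in millisecond increments.
        return self._billed_ms[name]
```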
-
Publication No.: US20210263779A1
Publication Date: 2021-08-26
Application No.: US17255588
Filing Date: 2019-04-16
Applicant: Intel Corporation
Inventors: Mohammad R. Haghighat, Kshitij Doshi, Andrew J. Herdrich, Anup Mohan, Ravishankar R. Iyer, Mingqiu Sun, Krishna Bhuyan, Teck Joo Goh, Mohan J. Kumar, Michael Prinke, Michael Lemay, Leeor Peled, Jr-Shian Tsai, David M. Durham, Jeffrey D. Chamberlain, Vadim A. Sukhomlinov, Eric J. Dahlen, Sara Baghsorkhi, Harshad Sane, Areg Melik-Adamyan, Ravi Sahita, Dmitry Yurievich Babokin, Ian M. Steiner, Alexander Bachmutsky, Anil Rao, Mingwei Zhang, Nilesh K. Jain, Amin Firoozshahian, Baiju V. Patel, Wenyong Huang, Yeluri Raghuram
Abstract: Embodiments of systems, apparatuses and methods provide enhanced function as a service (FaaS) to users, e.g., computer developers and cloud service providers (CSPs). A computing system configured to provide such enhanced FaaS service includes one or more architectural subsystems, software and orchestration subsystems, network and storage subsystems, and security subsystems. The computing system executes functions in response to events triggered by the users in an execution environment provided by the architectural subsystems, which represent an abstraction of execution management and shield the users from the burden of managing the execution. The software and orchestration subsystems allocate computing resources for the function execution by intelligently spinning containers for function code up and down, achieving decreased instantiation latency and increased execution scalability while maintaining secured execution. Furthermore, the computing system enables customers to pay only when their code is executed, with billing granular down to millisecond increments.
-
Publication No.: US20210255915A1
Publication Date: 2021-08-19
Application No.: US17246388
Filing Date: 2021-04-30
Applicant: Intel Corporation
Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
-
Publication No.: US20190042308A1
Publication Date: 2019-02-07
Application No.: US16118840
Filing Date: 2018-08-31
Applicant: Intel Corporation
Inventors: Mohan J. Kumar, Krishna Bhuyan
IPC Classification: G06F9/48
Abstract: Technologies for providing efficient scheduling of functions include a compute device. The compute device is configured to obtain a function dependency graph indicative of data dependencies between functions to be executed in a networked set of compute devices, perform a cluster analysis of the execution of the functions in the networked set of compute devices to identify additional data dependencies between the functions, and update, based on the cluster analysis, the function dependency graph.
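Illustrative note: the Python sketch below shows one way a function dependency graph could be updated from observed execution traces. The co-occurrence heuristic used here in place of the patent's cluster analysis, and the names infer_dependencies and update_graph, are assumptions for illustration.

```python
# Sketch of updating a function dependency graph from execution traces,
# per the abstract above. The co-occurrence heuristic is an assumption.
from collections import defaultdict
from itertools import combinations


def infer_dependencies(traces: list[list[str]], threshold: float = 0.8) -> set[tuple[str, str]]:
    """Treat function pairs that co-occur in most traces as inferred dependencies."""
    if not traces:
        return set()
    pair_counts: dict[tuple[str, str], int] = defaultdict(int)
    for trace in traces:
        # Count each unordered pair of functions observed together in a trace.
        for a, b in combinations(sorted(set(trace)), 2):
            pair_counts[(a, b)] += 1
    return {pair for pair, n in pair_counts.items() if n / len(traces) >= threshold}


def update_graph(graph: dict[str, set[str]], traces: list[list[str]]) -> dict[str, set[str]]:
    """Add newly inferred edges to an existing function dependency graph."""
    for a, b in infer_dependencies(traces):
        graph.setdefault(a, set()).add(b)
    return graph
```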
-
Publication No.: US20240241761A1
Publication Date: 2024-07-18
Application No.: US18618901
Filing Date: 2024-03-27
Applicant: Intel Corporation
IPC Classification: G06F9/50, G06F3/06, G06F7/06, G06F8/65, G06F8/654, G06F8/656, G06F8/658, G06F9/38, G06F9/4401, G06F9/455, G06F9/48, G06F9/54, G06F11/07, G06F11/30, G06F11/34, G06F12/02, G06F12/06, G06F13/16, G06F16/174, G06F21/57, G06F21/62, G06F21/73, G06F21/76, G06T1/20, G06T1/60, G06T9/00, H01R13/453, H01R13/631, H03K19/173, H03M7/30, H03M7/40, H03M7/42, H04L9/08, H04L12/28, H04L12/46, H04L41/044, H04L41/0816, H04L41/0853, H04L41/12, H04L43/04, H04L43/06, H04L43/08, H04L43/0894, H04L47/20, H04L47/2441, H04L49/104, H04L61/5007, H04L67/10, H04L67/1014, H04L67/63, H04L67/75, H05K7/14, G06F11/14, G06F15/80, G06F16/28, H04L9/40, H04L41/046, H04L41/0896, H04L41/142, H04L47/78, H04Q11/00
CPC Classification: G06F9/505, G06F3/0604, G06F3/0608, G06F3/0611, G06F3/0613, G06F3/0617, G06F3/0641, G06F3/0647, G06F3/065, G06F3/0653, G06F3/067, G06F7/06, G06F8/65, G06F8/654, G06F8/656, G06F8/658, G06F9/3851, G06F9/3891, G06F9/4401, G06F9/45533, G06F9/4843, G06F9/4881, G06F9/5005, G06F9/5038, G06F9/5044, G06F9/5083, G06F9/544, G06F11/0709, G06F11/0751, G06F11/079, G06F11/3006, G06F11/3034, G06F11/3055, G06F11/3079, G06F11/3409, G06F12/0284, G06F12/0692, G06F13/1652, G06F16/1744, G06F21/57, G06F21/6218, G06F21/73, G06F21/76, G06T1/20, G06T1/60, G06T9/005, H01R13/453, H01R13/4536, H01R13/4538, H01R13/631, H03K19/1731, H03M7/3084, H03M7/40, H03M7/42, H03M7/60, H03M7/6011, H03M7/6017, H03M7/6029, H04L9/0822, H04L12/2881, H04L12/4633, H04L41/044, H04L41/0816, H04L41/0853, H04L41/12, H04L43/04, H04L43/06, H04L43/08, H04L43/0894, H04L47/20, H04L47/2441, H04L49/104, H04L61/5007, H04L67/10, H04L67/1014, H04L67/63, H04L67/75, H05K7/1452, H05K7/1487, H05K7/1491, G06F11/1453, G06F12/023, G06F15/80, G06F16/285, G06F2212/401, G06F2212/402, G06F2221/2107, H04L41/046, H04L41/0896, H04L41/142, H04L47/78, H04L63/1425, H04Q11/0005, H05K7/1447, H05K7/1492
Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
-
Publication No.: US11977923B2
Publication Date: 2024-05-07
Application No.: US18103739
Filing Date: 2023-01-31
Applicant: Intel Corporation
IPC Classification: G06F9/50, G06F3/06, G06F7/06, G06F8/65, G06F8/654, G06F8/656, G06F8/658, G06F9/38, G06F9/4401, G06F9/455, G06F9/48, G06F9/54, G06F11/07, G06F11/30, G06F11/34, G06F12/02, G06F12/06, G06F13/16, G06F16/174, G06F21/57, G06F21/62, G06F21/73, G06F21/76, G06T1/20, G06T1/60, G06T9/00, H01R13/453, H01R13/631, H03K19/173, H03M7/30, H03M7/40, H03M7/42, H04L9/08, H04L12/28, H04L12/46, H04L41/044, H04L41/0816, H04L41/0853, H04L41/12, H04L43/04, H04L43/06, H04L43/08, H04L43/0894, H04L47/20, H04L47/2441, H04L49/104, H04L61/5007, H04L67/10, H04L67/1014, H04L67/63, H04L67/75, H05K7/14, G06F11/14, G06F15/80, G06F16/28, H04L9/40, H04L41/046, H04L41/0896, H04L41/142, H04L47/78, H04Q11/00
CPC Classification: G06F9/505, G06F3/0604, G06F3/0608, G06F3/0611, G06F3/0613, G06F3/0617, G06F3/0641, G06F3/0647, G06F3/065, G06F3/0653, G06F3/067, G06F7/06, G06F8/65, G06F8/654, G06F8/656, G06F8/658, G06F9/3851, G06F9/3891, G06F9/4401, G06F9/45533, G06F9/4843, G06F9/4881, G06F9/5005, G06F9/5038, G06F9/5044, G06F9/5083, G06F9/544, G06F11/0709, G06F11/0751, G06F11/079, G06F11/3006, G06F11/3034, G06F11/3055, G06F11/3079, G06F11/3409, G06F12/0284, G06F12/0692, G06F13/1652, G06F16/1744, G06F21/57, G06F21/6218, G06F21/73, G06F21/76, G06T1/20, G06T1/60, G06T9/005, H01R13/453, H01R13/4536, H01R13/4538, H01R13/631, H03K19/1731, H03M7/3084, H03M7/40, H03M7/42, H03M7/60, H03M7/6011, H03M7/6017, H03M7/6029, H04L9/0822, H04L12/2881, H04L12/4633, H04L41/044, H04L41/0816, H04L41/0853, H04L41/12, H04L43/04, H04L43/06, H04L43/08, H04L43/0894, H04L47/20, H04L47/2441, H04L49/104, H04L61/5007, H04L67/10, H04L67/1014, H04L67/63, H04L67/75, H05K7/1452, H05K7/1487, H05K7/1491, G06F11/1453, G06F12/023, G06F15/80, G06F16/285, G06F2212/401, G06F2212/402, G06F2221/2107, H04L41/046, H04L41/0896, H04L41/142, H04L47/78, H04L63/1425, H04Q11/0005, H05K7/1447, H05K7/1492
Abstract: Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
-
Publication No.: US20220158934A1
Publication Date: 2022-05-19
Application No.: US17598115
Filing Date: 2020-07-02
Inventors: Nageen Himayat, Srikathyayani Srikanteswara, Krishna Bhuyan, Daojing Guo, Rustam Pirmagomedov, Gabriel Arrobo Vidal, Yi Zhang, Dmitri Moltchanov
IPC Classification: H04L45/00, H04L45/745, H04L47/31, H04L47/28
Abstract: Systems and methods for dynamic compute orchestration include receiving, at a network node of an information centric network, a first interest packet comprising a name field indicating a named function and one or more constraints specifying compute requirements for a computing node to execute the named function, the first interest packet being received from a client node. A plurality of computing nodes that satisfy the compute requirements for executing the named function are identified. The first interest packet is forwarded to at least some of the plurality of computing nodes. Data packets are received from at least some of the plurality of computing nodes in response to the first interest packet. One of the plurality of computing nodes is selected based on the received data packets, and a second interest packet is sent to the selected computing node instructing it to execute the named function.
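Illustrative note: the following Python sketch models the two-phase interest exchange described in the abstract, using response latency as the selection signal derived from the returned data packets. The Interest and ComputeNode types and all field names are illustrative assumptions, not the ICN message formats from the patent.

```python
# Sketch of the two-phase interest exchange from the abstract above:
# discover candidate compute nodes, then dispatch the named function to
# the best responder. All types and field names are illustrative.
from dataclasses import dataclass


@dataclass
class Interest:
    name: str                    # named function, e.g. "/func/resize"
    constraints: dict            # e.g. {"min_cores": 4, "max_latency_ms": 20}
    execute: bool = False        # the second interest carries the execute flag


@dataclass
class ComputeNode:
    node_id: str
    cores: int
    latency_ms: int

    def satisfies(self, constraints: dict) -> bool:
        # A node qualifies only if it meets the stated compute requirements.
        return (self.cores >= constraints.get("min_cores", 0)
                and self.latency_ms <= constraints.get("max_latency_ms", 10**9))


def orchestrate(interest: Interest, nodes: list[ComputeNode]) -> Interest:
    # Phase 1: forward the first interest to nodes meeting the constraints
    # and treat their latency as the "data packet" used for selection.
    candidates = [n for n in nodes if n.satisfies(interest.constraints)]
    if not candidates:
        raise RuntimeError("no compute node satisfies the constraints")
    chosen = min(candidates, key=lambda n: n.latency_ms)
    # Phase 2: a second interest instructs the chosen node to execute.
    return Interest(name=f"{interest.name}/exec/{chosen.node_id}",
                    constraints=interest.constraints, execute=True)
```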