-
Publication No.: US20190138359A1
Publication Date: 2019-05-09
Application No.: US16113872
Filing Date: 2018-08-27
Applicant: Intel Corporation
Inventor: Madhusudhan Rangarajan , Nagasubramanian Gurumoorthy , Robert Cone , Rajesh Poornachandran , Kartik Ananthanarayanan , Rebecca Weekly
Abstract: Devices, systems, and methods for offloading data service operations from an application critical path are disclosed. A storage service control apparatus can include a compute resource interface configured to communicatively couple to a compute resource, a memory interface configured to communicatively couple to a memory resource, an out-of-band (OOB) channel interface configured to communicatively couple to an OOB channel, and a data service controller communicatively coupled to the OOB channel interface. The data service controller is configured to identify a data service operation to be performed by the compute resource on data stored in the memory resource, load a data service agent configured to facilitate the data service operation, and perform the data service operation on the data, via the data service agent over the OOB channel using an OOB compute resource, to generate serviced data, thus freeing the compute resource from performing the data service operation.
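A minimal sketch of the offload flow the abstract describes: a controller identifies a pending data service operation, loads an agent for it, and runs the agent on an out-of-band compute resource so the primary compute resource stays on the application critical path. All class and method names below are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the OOB data-service offload flow; names are illustrative.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class MemoryResource:
    """Backing store holding the data that a service operation acts on."""
    blocks: Dict[str, bytes] = field(default_factory=dict)


class OobComputeResource:
    """Secondary compute resource reached over the out-of-band channel."""

    def run(self, agent: Callable[[bytes], bytes], data: bytes) -> bytes:
        # The agent (e.g. compression, scrubbing) executes here,
        # not on the primary compute resource.
        return agent(data)


class DataServiceController:
    """Identifies pending data service operations and offloads them."""

    def __init__(self, memory: MemoryResource, oob: OobComputeResource):
        self.memory = memory
        self.oob = oob
        self.agents: Dict[str, Callable[[bytes], bytes]] = {}
        self.pending: List[Tuple[str, str]] = []  # (operation name, block id)

    def load_agent(self, operation: str, agent: Callable[[bytes], bytes]) -> None:
        """Load a data service agent that facilitates one operation type."""
        self.agents[operation] = agent

    def identify(self, operation: str, block_id: str) -> None:
        """Record an operation that would otherwise run on the host CPU."""
        self.pending.append((operation, block_id))

    def service(self) -> None:
        """Perform each pending operation via the OOB compute resource."""
        while self.pending:
            operation, block_id = self.pending.pop(0)
            data = self.memory.blocks[block_id]
            serviced = self.oob.run(self.agents[operation], data)
            self.memory.blocks[block_id] = serviced  # store the serviced data


if __name__ == "__main__":
    import zlib

    memory = MemoryResource(blocks={"blk0": b"A" * 4096})
    controller = DataServiceController(memory, OobComputeResource())
    controller.load_agent("compress", zlib.compress)
    controller.identify("compress", "blk0")
    controller.service()
    print(len(memory.blocks["blk0"]), "bytes after offloaded compression")
```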
-
Publication No.: US12177277B2
Publication Date: 2024-12-24
Application No.: US17313353
Filing Date: 2021-05-06
Applicant: Intel Corporation
Inventor: Lokpraveen Mosur , Ilango Ganga , Robert Cone , Kshitij Arun Doshi , John J. Browne , Mark Debbage , Stephen Doyle , Patrick Fleming , Doddaballapur Jayasimha
IPC: H04L65/61 , H04L47/50 , H04L49/9005
Abstract: In one embodiment, a system includes a device and a host. The device includes a device stream buffer. The host includes a processor to execute at least a first application and a second application, a host stream buffer, and a host scheduler. The first application is associated with a first transmit streaming channel to stream first data from the first application to the device stream buffer. The first transmit streaming channel has a first allocated amount of buffer space in the device stream buffer. The host scheduler schedules enqueue of the first data from the first application to the first transmit streaming channel based at least in part on availability of space in the first allocated amount of buffer space in the device stream buffer. Other embodiments are described and claimed.
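A rough sketch of the space-aware scheduling the abstract describes: the host scheduler enqueues data onto a transmit streaming channel only when that channel's allocated space in the device stream buffer has room. The class names, byte-count accounting, and drain-as-credit-return mechanism are assumptions for illustration, not claim language.

```python
# Illustrative sketch of space-aware enqueue scheduling on a streaming channel.
from collections import deque


class DeviceStreamBuffer:
    """Device-side buffer with a per-channel allocated amount of space."""

    def __init__(self, allocations: dict):
        self.allocations = dict(allocations)          # channel -> allocated bytes
        self.used = {ch: 0 for ch in allocations}
        self.queues = {ch: deque() for ch in allocations}

    def free_space(self, channel: str) -> int:
        return self.allocations[channel] - self.used[channel]

    def accept(self, channel: str, chunk: bytes) -> None:
        self.used[channel] += len(chunk)
        self.queues[channel].append(chunk)

    def drain(self, channel: str) -> bytes:
        """Device consumes a chunk, releasing space back to the channel."""
        chunk = self.queues[channel].popleft()
        self.used[channel] -= len(chunk)
        return chunk


class HostScheduler:
    """Enqueues application data only when the channel's allocation has room."""

    def __init__(self, device_buffer: DeviceStreamBuffer):
        self.device_buffer = device_buffer

    def try_enqueue(self, channel: str, chunk: bytes) -> bool:
        if len(chunk) <= self.device_buffer.free_space(channel):
            self.device_buffer.accept(channel, chunk)
            return True
        return False  # back-pressure: data waits in the host stream buffer


if __name__ == "__main__":
    buf = DeviceStreamBuffer({"app1_tx": 8192, "app2_tx": 4096})
    sched = HostScheduler(buf)
    print(sched.try_enqueue("app1_tx", b"x" * 6000))  # True: fits the allocation
    print(sched.try_enqueue("app1_tx", b"x" * 6000))  # False: would overflow
    buf.drain("app1_tx")                              # device consumes data
    print(sched.try_enqueue("app1_tx", b"x" * 6000))  # True again
```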
-
Publication No.: US11687264B2
Publication Date: 2023-06-27
Application No.: US15721053
Filing Date: 2017-09-29
Applicant: Intel Corporation
Inventor: Chih-Jen Chang , Brad Burres , Jose Niell , Dan Biederman , Robert Cone , Pat Wang , Kenneth Keels , Patrick Fleming
IPC: H04L67/63 , G06F3/06 , G06F16/174 , G06F21/57 , G06F21/73 , G06F8/65 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L67/10 , G06F11/30 , G06F9/50 , H01R13/453 , G06F9/48 , G06F9/455 , H05K7/14 , H04L61/5007 , H04L67/75 , H03M7/30 , H03M7/40 , H04L43/08 , H04L47/20 , H04L47/2441 , G06F11/07 , G06F11/34 , G06F7/06 , G06T9/00 , H03M7/42 , H04L12/28 , H04L12/46 , G06F13/16 , G06F21/62 , G06F21/76 , H03K19/173 , H04L9/08 , H04L41/044 , H04L49/104 , H04L43/04 , H04L43/06 , H04L43/0894 , G06F9/38 , G06F12/02 , G06F12/06 , G06T1/20 , G06T1/60 , G06F9/54 , H04L67/1014 , G06F8/656 , G06F8/658 , G06F8/654 , G06F9/4401 , H01R13/631 , H04L47/78 , G06F16/28 , H04Q11/00 , G06F11/14 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L9/40 , G06F15/80
CPC classification number: G06F3/0641 , G06F3/0604 , G06F3/065 , G06F3/067 , G06F3/0608 , G06F3/0611 , G06F3/0613 , G06F3/0617 , G06F3/0647 , G06F3/0653 , G06F7/06 , G06F8/65 , G06F8/654 , G06F8/656 , G06F8/658 , G06F9/3851 , G06F9/3891 , G06F9/4401 , G06F9/45533 , G06F9/4843 , G06F9/4881 , G06F9/5005 , G06F9/505 , G06F9/5038 , G06F9/5044 , G06F9/5083 , G06F9/544 , G06F11/0709 , G06F11/079 , G06F11/0751 , G06F11/3006 , G06F11/3034 , G06F11/3055 , G06F11/3079 , G06F11/3409 , G06F12/0284 , G06F12/0692 , G06F13/1652 , G06F16/1744 , G06F21/57 , G06F21/6218 , G06F21/73 , G06F21/76 , G06T1/20 , G06T1/60 , G06T9/005 , H01R13/453 , H01R13/4536 , H01R13/4538 , H01R13/631 , H03K19/1731 , H03M7/3084 , H03M7/40 , H03M7/42 , H03M7/60 , H03M7/6011 , H03M7/6017 , H03M7/6029 , H04L9/0822 , H04L12/2881 , H04L12/4633 , H04L41/044 , H04L41/0816 , H04L41/0853 , H04L41/12 , H04L43/04 , H04L43/06 , H04L43/08 , H04L43/0894 , H04L47/20 , H04L47/2441 , H04L49/104 , H04L61/5007 , H04L67/10 , H04L67/1014 , H04L67/63 , H04L67/75 , H05K7/1452 , H05K7/1487 , H05K7/1491 , G06F11/1453 , G06F12/023 , G06F15/80 , G06F16/285 , G06F2212/401 , G06F2212/402 , G06F2221/2107 , H04L41/046 , H04L41/0896 , H04L41/142 , H04L47/78 , H04L63/1425 , H04Q11/0005 , H05K7/1447 , H05K7/1492
Abstract: Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
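A minimal sketch of the encapsulate-and-forward decision the abstract describes: the NIC checks whether a received packet needs pre-processing, and if so encapsulates it and hands it to a remote accelerator that decrypts it and returns the cleared packet for delivery to the local processor. The toy encapsulation header, XOR "decryption", and all names below are stand-ins for real protocols and crypto, assumed purely for illustration.

```python
# Rough sketch of NIC offload of packet pre-processing to a remote accelerator.
from dataclasses import dataclass
from typing import List


@dataclass
class Packet:
    payload: bytes
    encrypted: bool = False


class RemoteAcceleratorDevice:
    """Remote accelerator that pre-processes (here: 'decrypts') a packet."""

    KEY = 0x5A  # toy key; a real device would hold offloaded crypto state

    def preprocess(self, encapsulated: bytes) -> Packet:
        inner = encapsulated[4:]                    # strip the toy encap header
        clear = bytes(b ^ self.KEY for b in inner)  # stand-in for decryption
        return Packet(payload=clear, encrypted=False)


class NetworkInterfaceController:
    """NIC that either delivers a packet locally or offloads pre-processing."""

    def __init__(self, accelerator: RemoteAcceleratorDevice):
        self.accelerator = accelerator
        self.delivered: List[Packet] = []  # packets handed to the local processor

    def receive(self, pkt: Packet) -> None:
        if pkt.encrypted:
            # Encapsulate the packet and send it to the remote accelerator
            # over the network (modeled here as a direct call).
            encapsulated = b"ENC1" + pkt.payload
            cleared = self.accelerator.preprocess(encapsulated)
            self.delivered.append(cleared)
        else:
            self.delivered.append(pkt)


if __name__ == "__main__":
    accel = RemoteAcceleratorDevice()
    nic = NetworkInterfaceController(accel)
    secret = bytes(b ^ RemoteAcceleratorDevice.KEY for b in b"hello")
    nic.receive(Packet(payload=secret, encrypted=True))
    print(nic.delivered[0].payload)  # b'hello' after offloaded pre-processing
```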