Connecting accelerator resources using a switch

    Publication No.: US11249808B2

    Publication Date: 2022-02-15

    Application No.: US15682896

    Application Date: 2017-08-22

    Abstract: The present disclosure describes a number of embodiments related to devices and techniques for implementing an interconnect switch to provide a switchable low-latency bypass between node resources, such as CPUs, and accelerator resources for caching. A resource manager may be used to receive an indication of a node of a plurality of nodes and an indication of an accelerator resource of a plurality of accelerator resources to connect to the node. If the indicated accelerator resource is connected to another node of the plurality of nodes, the resource manager may transmit, to an interconnect switch, one or more hot-remove commands. The resource manager may then transmit to the interconnect switch one or more hot-add commands to connect the node resource and the accelerator resource.
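
    The rewiring flow described in this abstract reduces to: check whether the accelerator is already attached to another node, issue a hot-remove command at the switch if so, then issue a hot-add command for the new pairing. The sketch below is a minimal, hypothetical Python illustration of that flow; the class and method names (InterconnectSwitch, ResourceManager, hot_remove, hot_add) are assumptions made for illustration and are not the patent's actual interfaces.

```python
class InterconnectSwitch:
    """Models a switch that can rewire node <-> accelerator links."""

    def __init__(self):
        self.links = {}  # accelerator_id -> node_id of the currently attached node

    def hot_remove(self, node_id: str, accelerator_id: str) -> None:
        # Tear down the existing low-latency path before re-assignment.
        if self.links.get(accelerator_id) == node_id:
            del self.links[accelerator_id]

    def hot_add(self, node_id: str, accelerator_id: str) -> None:
        # Establish a new low-latency path between the node and the accelerator.
        self.links[accelerator_id] = node_id


class ResourceManager:
    """Receives (node, accelerator) pairings and drives the switch."""

    def __init__(self, switch: InterconnectSwitch):
        self.switch = switch

    def connect(self, node_id: str, accelerator_id: str) -> None:
        # If the accelerator is connected to another node, hot-remove it first.
        current_owner = self.switch.links.get(accelerator_id)
        if current_owner is not None and current_owner != node_id:
            self.switch.hot_remove(current_owner, accelerator_id)
        # Then hot-add to connect the requested node and accelerator.
        self.switch.hot_add(node_id, accelerator_id)


if __name__ == "__main__":
    switch = InterconnectSwitch()
    manager = ResourceManager(switch)
    manager.connect("node-0", "accel-3")  # first attachment
    manager.connect("node-1", "accel-3")  # hot-remove from node-0, then hot-add to node-1
    print(switch.links)                   # {'accel-3': 'node-1'}
```

    The same abstract appears under the related publications of this family listed further below.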

    Technologies for accelerating edge device workloads

    Publication No.: US10541942B2

    Publication Date: 2020-01-21

    Application No.: US15941943

    Application Date: 2018-03-30

    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
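
    The abstract outlines a routing decision at the edge device: inspect the request, decide whether it names an AFaaS operation, derive its compute requirements, pick an accelerator platform that can satisfy them, and forward the request there. The sketch below is a hypothetical Python illustration of that flow under simplifying assumptions (a single memory figure stands in for "compute requirements", and platform selection is a best-fit heuristic); none of the names are taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FaasRequest:
    function_name: str
    accelerated: bool         # does the request ask for an AFaaS operation?
    required_mem_mb: int = 0  # stand-in for the request's compute requirements


@dataclass
class AcceleratorPlatform:
    name: str
    free_mem_mb: int

    def execute(self, request: FaasRequest) -> str:
        return f"{request.function_name} executed on accelerator {self.name}"


class EdgeComputeDevice:
    """Routes FaaS requests to a general-purpose processor or an accelerator."""

    def __init__(self, accelerators: List[AcceleratorPlatform]):
        self.accelerators = accelerators

    def handle(self, request: FaasRequest) -> str:
        # Determine whether an AFaaS operation is to be performed on the request.
        if not request.accelerated:
            return f"{request.function_name} executed on general-purpose processor"
        # Identify compute requirements, then select a suitable accelerator platform.
        platform = self._select_platform(request)
        if platform is None:
            return f"{request.function_name} deferred: no accelerator can satisfy it"
        # Forward the received request to the selected accelerator platform.
        return platform.execute(request)

    def _select_platform(self, request: FaasRequest) -> Optional[AcceleratorPlatform]:
        # Best-fit heuristic: smallest platform that still meets the requirement.
        candidates = [a for a in self.accelerators
                      if a.free_mem_mb >= request.required_mem_mb]
        return min(candidates, key=lambda a: a.free_mem_mb, default=None)


if __name__ == "__main__":
    device = EdgeComputeDevice([AcceleratorPlatform("fpga-0", 4096),
                                AcceleratorPlatform("fpga-1", 1024)])
    print(device.handle(FaasRequest("resize_image", accelerated=True, required_mem_mb=512)))
    print(device.handle(FaasRequest("log_event", accelerated=False)))
```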

    CONNECTING ACCELERATOR RESOURCES USING A SWITCH

    Publication No.: US20190065272A1

    Publication Date: 2019-02-28

    Application No.: US15682896

    Application Date: 2017-08-22

    Abstract: The present disclosure describes a number of embodiments related to devices and techniques for implementing an interconnect switch to provide a switchable low-latency bypass between node resources, such as CPUs, and accelerator resources for caching. A resource manager may be used to receive an indication of a node of a plurality of nodes and an indication of an accelerator resource of a plurality of accelerator resources to connect to the node. If the indicated accelerator resource is connected to another node of the plurality of nodes, the resource manager may transmit, to an interconnect switch, one or more hot-remove commands. The resource manager may then transmit to the interconnect switch one or more hot-add commands to connect the node resource and the accelerator resource.

    Connecting accelerator resources using a switch

    Publication No.: US12223358B2

    Publication Date: 2025-02-11

    Application No.: US17350874

    Application Date: 2021-06-17

    Abstract: The present disclosure describes a number of embodiments related to devices and techniques for implementing an interconnect switch to provide a switchable low-latency bypass between node resources, such as CPUs, and accelerator resources for caching. A resource manager may be used to receive an indication of a node of a plurality of nodes and an indication of an accelerator resource of a plurality of accelerator resources to connect to the node. If the indicated accelerator resource is connected to another node of the plurality of nodes, the resource manager may transmit, to an interconnect switch, one or more hot-remove commands. The resource manager may then transmit to the interconnect switch one or more hot-add commands to connect the node resource and the accelerator resource.

    CONNECTING ACCELERATOR RESOURCES USING A SWITCH

    Publication No.: US20210311800A1

    Publication Date: 2021-10-07

    Application No.: US17350874

    Application Date: 2021-06-17

    Abstract: The present disclosure describes a number of embodiments related to devices and techniques for implementing an interconnect switch to provide a switchable low-latency bypass between node resources, such as CPUs, and accelerator resources for caching. A resource manager may be used to receive an indication of a node of a plurality of nodes and an indication of an accelerator resource of a plurality of accelerator resources to connect to the node. If the indicated accelerator resource is connected to another node of the plurality of nodes, the resource manager may transmit, to an interconnect switch, one or more hot-remove commands. The resource manager may then transmit to the interconnect switch one or more hot-add commands to connect the node resource and the accelerator resource.

    Technologies for accelerating edge device workloads

    Publication No.: US11159454B2

    Publication Date: 2021-10-26

    Application No.: US16748232

    Application Date: 2020-01-21

    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.

    TECHNOLOGIES FOR ACCELERATING EDGE DEVICE WORKLOADS

    Publication No.: US20190044886A1

    Publication Date: 2019-02-07

    Application No.: US15941943

    Application Date: 2018-03-30

    Abstract: Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
