COMPUTE EXPRESS LINK OVER ETHERNET IN COMPOSABLE DATA CENTERS

    Publication Number: US20230224255A1

    Publication Date: 2023-07-13

    Application Number: US18122015

    Application Date: 2023-03-15

    CPC classification number: H04L 47/72; H04L 47/829

    Abstract: Techniques for sending Compute Express Link (CXL) packets over Ethernet (CXL-E) in a composable data center that may include disaggregated, composable servers. The techniques may include receiving, from a first server device, a request to bind the first server device with a multiple logical device (MLD) appliance. Based at least in part on the request, a first CXL-E connection may be established for the first server device to export a computing resource to the MLD appliance. The techniques may also include receiving, from the MLD appliance, an indication that the computing resource is available, and receiving, from a second server device, a second request for the computing resource. Based at least in part on the second request, a second CXL-E connection may be established for the second server device to consume or otherwise utilize the computing resource of the first server device via the MLD appliance.
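
    The bind-and-consume flow described above can be illustrated with a short, hypothetical sketch. The Python below is not the patented implementation; the class names (MLDAppliance, FabricOrchestrator), the resource identifiers, and the in-memory connection list are illustrative assumptions about how an orchestrator might track the two CXL-E connections.

        # Minimal sketch (not the patented implementation) of how a fabric
        # orchestrator might broker the two CXL-E bindings described above.
        # FabricOrchestrator, MLDAppliance, and the resource names are hypothetical.
        from dataclasses import dataclass, field


        @dataclass
        class MLDAppliance:
            """Multiple logical device (MLD) appliance tracking exported resources."""
            exported: dict = field(default_factory=dict)  # resource_id -> exporting server

            def publish(self, resource_id: str, server_id: str) -> None:
                # The appliance advertises that the resource is now available.
                self.exported[resource_id] = server_id


        @dataclass
        class FabricOrchestrator:
            appliance: MLDAppliance
            connections: list = field(default_factory=list)  # established CXL-E connections

            def bind_exporter(self, server_id: str, resource_id: str) -> None:
                # First request: the server exports a computing resource to the
                # MLD appliance over a first CXL-E connection.
                self.connections.append((server_id, "export", resource_id))
                self.appliance.publish(resource_id, server_id)

            def bind_consumer(self, server_id: str, resource_id: str) -> None:
                # Second request: another server consumes the exported resource
                # via the MLD appliance over a second CXL-E connection.
                if resource_id not in self.appliance.exported:
                    raise LookupError(f"{resource_id} has not been exported yet")
                self.connections.append((server_id, "consume", resource_id))


        # Example flow matching the ordering of requests in the abstract.
        orchestrator = FabricOrchestrator(appliance=MLDAppliance())
        orchestrator.bind_exporter("server-1", "memory-pool-0")
        orchestrator.bind_consumer("server-2", "memory-pool-0")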

    Compute express link over ethernet in composable data centers

    Publication Number: US11632337B1

    Publication Date: 2023-04-18

    Application Number: US17751210

    Application Date: 2022-05-23

    Abstract: Techniques for sending Compute Express Link (CXL) packets over Ethernet (CXL-E) in a composable data center that may include disaggregated, composable servers. The techniques may include receiving, from a first server device, a request to bind the first server device with a multiple logical device (MLD) appliance. Based at least in part on the request, a first CXL-E connection may be established for the first server device to export a computing resource to the MLD appliance. The techniques may also include receiving, from the MLD appliance, an indication that the computing resource is available, and receiving, from a second server device, a second request for the computing resource. Based at least in part on the second request, a second CXL-E connection may be established for the second server device to consume or otherwise utilize the computing resource of the first server device via the MLD appliance.

    Designated intermediate system (DIS) priority changing

    Publication Number: US11627062B2

    Publication Date: 2023-04-11

    Application Number: US17830176

    Application Date: 2022-06-01

    Abstract: A communication pathway between a plurality of network nodes within a network is established. A DIS election operation is executed to determine a first network node among the plurality of network nodes as the DIS for the network and to create a first pseudo node for the first network node. For each network node of the plurality of network nodes, it is determined whether the connectivity between the first network node and the other network nodes within the network is in a synchronous state with the adjacencies with those other network nodes.
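
    As a rough illustration of the election step, the sketch below follows the standard IS-IS DIS rule (highest priority wins, ties broken by highest interface MAC) and gates any priority change on all adjacencies being synchronized. The Node fields and function names are assumptions for illustration, not the patent's implementation.

        # Rough sketch of an IS-IS style DIS election and a guard for priority
        # changes. Node fields and function names are illustrative assumptions.
        from dataclasses import dataclass


        @dataclass
        class Node:
            system_id: str
            priority: int                     # configured DIS priority
            mac: str                          # interface MAC used as the tie-breaker
            adjacencies_synced: bool = False  # True once adjacency/LSDB sync completes


        def elect_dis(nodes):
            # Standard IS-IS rule: highest priority wins, ties broken by highest MAC.
            return max(nodes, key=lambda n: (n.priority, n.mac))


        def safe_to_change_priority(nodes):
            # Per the abstract, the priority change waits for a synchronous state:
            # every node's connectivity must be in sync with its adjacencies.
            return all(n.adjacencies_synced for n in nodes)


        nodes = [
            Node("n1", priority=64, mac="00:00:5e:00:53:01", adjacencies_synced=True),
            Node("n2", priority=70, mac="00:00:5e:00:53:02", adjacencies_synced=True),
        ]
        dis = elect_dis(nodes)                         # n2 wins on priority
        print(dis.system_id, safe_to_change_priority(nodes))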

    Unlocking computing resources for decomposable data centers

    Publication Number: US11601377B1

    Publication Date: 2023-03-07

    Application Number: US17751181

    Application Date: 2022-05-23

    Abstract: Techniques for sending Compute Express Link (CXL) packets over Ethernet (CXL-E) in a composable data center that may include disaggregated, composable servers. The techniques may include receiving, from a first server device, a request to bind the first server device with a multiple logical device (MLD) appliance. Based at least in part on the request, a first CXL-E connection may be established for the first server device to export a computing resource to the MLD appliance. The techniques may also include receiving, from the MLD appliance, an indication that the computing resource is available, and receiving, from a second server device, a second request for the computing resource. Based at least in part on the second request, a second CXL-E connection may be established for the second server device to consume or otherwise utilize the computing resource of the first server device via the MLD appliance.

    Utilizing network analytics for service provisioning

    Publication Number: US11588884B2

    Publication Date: 2023-02-21

    Application Number: US16565048

    Application Date: 2019-09-09

    Abstract: This disclosure describes techniques for collecting network parameter data for network switches and/or physical servers and provisioning virtual resources of a service on physical servers based on network resource availability. The network parameter data may include network resource availability data, diagnostic constraint data, traffic flow data, etc. The techniques include determining which network switches have sufficient network resources available to support a virtual resource on a connected physical server. A scheduler may deploy virtual machines to particular servers based on the network parameter data in lieu of, or in addition to, the server utilization data of the physical servers (e.g., CPU usage, memory usage, etc.). In this way, a virtual resource may be deployed to a physical server that not only has available server resources, but is also connected to a network switch with available network resources to support the virtual resource.
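
    A simplified, hypothetical placement filter along these lines is sketched below: a server is a candidate only if it has spare CPU and memory and its attached switch also has spare bandwidth. The thresholds, data classes, and the bandwidth-based tie-break are illustrative assumptions rather than the patented scheduler.

        # Hypothetical placement filter illustrating the network-aware scheduling
        # idea: a server qualifies only if both it and its attached switch have
        # headroom. Thresholds and data shapes are assumptions, not the patent's.
        from dataclasses import dataclass


        @dataclass
        class Server:
            name: str
            switch: str          # name of the network switch this server connects to
            cpu_free: float      # fraction of CPU still available
            mem_free: float      # fraction of memory still available


        @dataclass
        class Switch:
            name: str
            bandwidth_free_gbps: float


        def pick_server(servers, switches, cpu_need, mem_need, bw_need_gbps):
            switch_by_name = {sw.name: sw for sw in switches}
            candidates = [
                srv for srv in servers
                if srv.cpu_free >= cpu_need
                and srv.mem_free >= mem_need
                # Network-aware check: the connected switch must also have capacity.
                and switch_by_name[srv.switch].bandwidth_free_gbps >= bw_need_gbps
            ]
            # Prefer the candidate whose switch has the most remaining bandwidth.
            return max(candidates,
                       key=lambda srv: switch_by_name[srv.switch].bandwidth_free_gbps,
                       default=None)


        servers = [Server("s1", "sw1", 0.6, 0.5), Server("s2", "sw2", 0.7, 0.6)]
        switches = [Switch("sw1", 2.0), Switch("sw2", 25.0)]
        print(pick_server(servers, switches, cpu_need=0.3, mem_need=0.3, bw_need_gbps=10.0))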

    Deadlock avoidance in leaf-spine networks

    Publication Number: US10454839B1

    Publication Date: 2019-10-22

    Application Number: US15979865

    Application Date: 2018-05-15

    Abstract: Techniques for implementing deadlock avoidance in a leaf-spine network are described. In one embodiment, a method includes monitoring traffic of a plurality of packets at a leaf switch in a network having a leaf-spine topology. The method includes marking a packet with an identifier associated with an inbound uplink port of the leaf switch when the packet is received from one of a first spine switch and a second spine switch. The method includes detecting a valley routing condition upon determining that the packet marked with the identifier is being routed to an outbound uplink port of the leaf switch to be transmitted to the first spine switch or the second spine switch. Upon detecting the valley routing condition, the method includes dropping packets associated with a no-drop class of service when a packet buffer of the inbound uplink port reaches a predetermined threshold.
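
    The marking-and-drop logic can be sketched as follows; the LeafSwitch and Packet classes, the buffer-occupancy counters, and the threshold value are assumptions for illustration, not the patented design.

        # Sketch of the valley-routing check: a packet that arrived on an uplink
        # port and is about to leave on another uplink port has taken a "valley"
        # path. LeafSwitch, Packet, and the counters below are assumptions.
        from dataclasses import dataclass, field
        from typing import Optional


        @dataclass
        class Packet:
            no_drop: bool = False                 # belongs to a no-drop class of service
            ingress_uplink: Optional[str] = None  # marker set when received from a spine


        @dataclass
        class LeafSwitch:
            uplink_ports: set
            buffer_occupancy: dict = field(default_factory=dict)  # uplink port -> queued packets
            buffer_threshold: int = 100

            def receive(self, pkt: Packet, in_port: str) -> None:
                if in_port in self.uplink_ports:
                    # Mark the packet with the inbound uplink port identifier.
                    pkt.ingress_uplink = in_port
                    self.buffer_occupancy[in_port] = self.buffer_occupancy.get(in_port, 0) + 1

            def forward(self, pkt: Packet, out_port: str) -> bool:
                """Return True to forward the packet, False to drop it."""
                valley = pkt.ingress_uplink is not None and out_port in self.uplink_ports
                if (valley and pkt.no_drop
                        and self.buffer_occupancy.get(pkt.ingress_uplink, 0) >= self.buffer_threshold):
                    # Break the potential deadlock: drop no-drop traffic taking a
                    # valley path once the inbound uplink buffer hits the threshold.
                    return False
                return True


        sw = LeafSwitch(uplink_ports={"u1", "u2"}, buffer_threshold=1)
        pkt = Packet(no_drop=True)
        sw.receive(pkt, "u1")            # arrives from a spine, gets marked
        print(sw.forward(pkt, "u2"))     # valley route with a full buffer -> False (dropped)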
