USING NETWORK INTERFACE CARDS HAVING PROCESSING UNITS TO DETERMINE LATENCY

    Publication number: US20230006904A1

    Publication date: 2023-01-05

    Application number: US17806865

    Application date: 2022-06-14

    Abstract: A system is configured to compute a latency between a first computing device and a second computing device. The system includes a network interface card (NIC) of a first computing device. The NIC includes a set of interfaces configured to receive and send one or more packets, and a processing unit. The processing unit is configured to identify information indicative of a forward packet; compute, based on a first time corresponding to the forward packet and a second time corresponding to a reverse packet associated with the forward packet, a latency between the first computing device and a second computing device, wherein the second computing device includes a destination of the forward packet and a source of the reverse packet; and output information indicative of the latency between the first computing device and the second computing device.
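The timestamp matching the abstract describes can be sketched as follows. This is a toy illustration, not the patent's implementation: the flow key, the class name, and the assumption that the reverse packet's flow key is the forward key with source and destination swapped are all illustrative.

```python
# Hypothetical sketch: record the first time when a forward packet is seen,
# match the associated reverse packet of the same flow, and report the
# difference between the two times as the latency.

class LatencyTracker:
    """Tracks forward-packet times keyed by an (src, dst) flow key (assumed)."""

    def __init__(self):
        self._forward_times = {}

    def record_forward(self, flow_key, timestamp):
        # First time, corresponding to the forward packet.
        self._forward_times[flow_key] = timestamp

    def record_reverse(self, flow_key, timestamp):
        # Second time, corresponding to the reverse packet. The reverse
        # flow key is assumed to be the forward key with src/dst swapped.
        src, dst = flow_key
        sent = self._forward_times.pop((dst, src), None)
        if sent is None:
            return None  # no matching forward packet observed
        return timestamp - sent


tracker = LatencyTracker()
tracker.record_forward(("10.0.0.1", "10.0.0.2"), 100.000)
latency = tracker.record_reverse(("10.0.0.2", "10.0.0.1"), 100.035)
```

In practice the NIC's processing unit would take these timestamps from hardware, but the matching logic is the same shape.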

    Dynamic service rebalancing in network interface cards having processing units

    Publication number: US12289240B2

    Publication date: 2025-04-29

    Application number: US18316668

    Application date: 2023-05-12

    Abstract: An edge services controller may use a service scheduling algorithm to deploy services on Network Interface Cards (NICs) of a NIC fabric while incrementally scheduling services. The edge services controller may assign services to specific nodes depending on the resources available on those nodes. Available resources may include CPU compute, DPU compute, node bandwidth, etc. The edge services controller may also consider the distance between services that communicate with each other (i.e., the hop count between nodes if two communicating services are placed on separate nodes) and the weight of communication between the services. Two services that communicate heavily with each other consume more bandwidth, so placing them farther apart is more detrimental than keeping them close together; i.e., the hop count between them should be reduced in proportion to the bandwidth consumed by their inter-service communication.
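The placement idea above can be sketched as a greedy score: pick the node with enough free resources that minimizes the communication-weighted hop count to already-placed peers. Everything here is an illustrative assumption — the single scalar "resource," the service and node names, and the greedy strategy itself (the patent's scheduling algorithm may differ).

```python
# Hypothetical sketch of communication-aware incremental service placement.

def place_service(service, free_resources, placements, comm_weight, hops, demand):
    """Place one service on the feasible node minimizing weighted hop count."""
    best_node, best_cost = None, None
    for node, free in free_resources.items():
        if free < demand[service]:
            continue  # node lacks CPU/DPU/bandwidth headroom
        # Weighted hop count to peers this service communicates with.
        cost = sum(
            weight * hops[(node, placements[peer])]
            for peer, weight in comm_weight.get(service, {}).items()
            if peer in placements
        )
        if best_cost is None or cost < best_cost:
            best_node, best_cost = node, cost
    if best_node is not None:
        free_resources[best_node] -= demand[service]
        placements[service] = best_node
    return best_node


free = {"node1": 4, "node2": 4}
hops = {(a, b): (0 if a == b else 1) for a in free for b in free}
demand = {"svcA": 2, "svcB": 2}
comm = {"svcB": {"svcA": 10}}  # svcB talks heavily to svcA
placements = {}
place_service("svcA", free, placements, comm, hops, demand)
place_service("svcB", free, placements, comm, hops, demand)
```

With a heavy communication weight, the sketch co-locates the two services (hop count 0) rather than spreading the load.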

    NETWORKING DEVICE VISUAL INDICATOR SIMULATION

    Publication number: US20250088573A1

    Publication date: 2025-03-13

    Application number: US18825732

    Application date: 2024-09-05

    Abstract: In some examples, a computing system includes memory and one or more programmable processors in communication with the memory. The computing system is configured to obtain visual indicator status information of a network device, wherein the visual indicator status information includes information for one or more virtual visual indicators of the network device to indicate a state of the network device or a state of one or more links associated with the network device. The computing system is further configured to generate, based on the visual indicator status information of the network device, a user interface that includes a representation of the network device and one or more virtual visual indicators indicating the state of the network device or the state of one or more links associated with the network device. The computing system is further configured to output, for display on a display device, the user interface.
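The mapping from status information to virtual indicators might look like the following sketch. The status schema, interface names, and LED colors are all assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch: translate visual indicator status information into
# virtual LED states for a rendered front-panel representation.

def virtual_indicators(status):
    """status: {"device": "ok"|"fault", "links": {name: "up"|"down"}} (assumed schema)."""
    # Device-level indicator: green when healthy, amber on fault.
    leds = {"SYS": "green" if status["device"] == "ok" else "amber"}
    # One virtual indicator per link, reflecting link state.
    for link, state in status["links"].items():
        leds[link] = "green" if state == "up" else "off"
    return leds


leds = virtual_indicators(
    {"device": "ok", "links": {"et-0/0/0": "up", "et-0/0/1": "down"}}
)
```

A UI layer would then draw these LED states onto the device representation.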

    CARBON-AWARE PREDICTIVE WORKLOAD SCHEDULING AND SCALING

    Publication number: US20250086010A1

    Publication date: 2025-03-13

    Application number: US18759468

    Application date: 2024-06-28

    Abstract: Example techniques and devices are described for scheduling workloads. An example computing device is configured to predict an occurrence of a scale event for a first service. The computing device is configured to determine, based on the predicted occurrence of the scale event for the first service, a predicted level of greenness for the first service, the predicted level of greenness being based on a current level of greenness for the first service and a predicted scale-up factor. The computing device is configured to determine whether the predicted level of greenness for the first service satisfies a first threshold. The computing device is configured to perform, based on whether the predicted level of greenness for the first service satisfies the first threshold, a first action on a first workload of the first service.
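One plausible reading of the decision above, sketched in Python. The formula (greenness diluted by the scale-up factor) and the action names are assumptions for illustration, not the patent's actual method.

```python
# Hypothetical sketch of the carbon-aware scaling decision.

def carbon_aware_action(current_greenness, predicted_scale_up, threshold):
    # Assumed model: scaling up spreads the same green capacity over more
    # replicas, so predicted greenness falls with the scale-up factor.
    predicted_greenness = current_greenness / predicted_scale_up
    if predicted_greenness >= threshold:
        return "scale-in-place"          # predicted greenness still acceptable
    return "reschedule-to-greener-node"  # hypothetical remediation action
```

The point is the shape of the decision: predict greenness ahead of the scale event, compare against the threshold, and pick the action accordingly.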

    Self-Correcting Service Level Agreement Enforcer

    Publication number: US20240179076A1

    Publication date: 2024-05-30

    Application number: US18472111

    Application date: 2023-09-21

    CPC classification number: H04L41/5009 H04L43/0811 H04L43/0888

    Abstract: Example systems, methods, and storage media are described. An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which, when executed by the processing circuitry, cause the network system to obtain telemetry data. The instructions cause the network system to determine, based on the telemetry data, that an application running on server processing circuitry does not meet at least one service level agreement (SLA) requirement, the server processing circuitry not including processing circuitry resident on a network interface card (NIC). The instructions cause the network system to, based on the application not meeting the at least one SLA requirement, determine to offload at least one component of the application from the server processing circuitry to the processing circuitry resident on the NIC.
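A minimal sketch of the SLA check described above. The telemetry metric names and limits are hypothetical, and the offload itself is represented only by a returned flag.

```python
# Hypothetical sketch: compare telemetry against SLA requirements and
# decide whether to offload application components to the NIC's
# processing circuitry.

def sla_offload_decision(telemetry, sla_requirements):
    """Return (offload?, violated metrics) for an app running on server
    processing circuitry (i.e., not on the NIC)."""
    violated = [
        metric for metric, limit in sla_requirements.items()
        if telemetry.get(metric, 0.0) > limit
    ]
    # Offload at least one component whenever any SLA requirement is unmet.
    return bool(violated), violated


offload, violated = sla_offload_decision(
    {"latency_ms": 12.0, "loss_pct": 0.1},   # observed telemetry (assumed)
    {"latency_ms": 10.0, "loss_pct": 1.0},   # SLA limits (assumed)
)
```

The "self-correcting" aspect would come from re-running this check after the offload and reverting or adjusting if the SLA is still unmet.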

    DISTRIBUTED APPLICATION CALL PATH PERFORMANCE ANALYSIS

    Publication number: US20250112851A1

    Publication date: 2025-04-03

    Application number: US18478260

    Application date: 2023-09-29

    Abstract: In general, techniques are described for managing a distributed application based on call paths among the multiple services of the distributed application that traverse underlying network infrastructure. In an example, a method comprises determining, by a computing system, and for a distributed application implemented with a plurality of services, a call path from an entry endpoint service of the plurality of services to a terminating endpoint service of the plurality of services; determining, by the computing system, a corresponding network path for each pair of adjacent services from a plurality of pairs of services that communicate for the call path; and based on a performance indicator for a network device of the corresponding network path meeting a threshold, performing, by the computing system, one or more of: reconfiguring the network; or redeploying one of the plurality of services to a different compute node of the compute nodes.
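The call-path analysis above can be sketched as a walk over adjacent service pairs: look up each pair's corresponding network path and flag any device whose performance indicator meets a threshold. Service names, device names, and the utilization KPI are illustrative assumptions.

```python
# Hypothetical sketch of call-path performance analysis.

def flag_call_path_devices(call_path, network_paths, kpi, threshold):
    """Flag devices on the call path's network paths whose KPI meets the threshold."""
    flagged = []
    for src, dst in zip(call_path, call_path[1:]):
        # Corresponding network path for this pair of adjacent services.
        for device in network_paths[(src, dst)]:
            if kpi[device] >= threshold:
                # Candidate for network reconfiguration or service redeployment.
                flagged.append((src, dst, device))
    return flagged


call_path = ["entry-gw", "auth-svc", "db-svc"]  # entry -> terminating endpoint
network_paths = {("entry-gw", "auth-svc"): ["sw1"],
                 ("auth-svc", "db-svc"): ["sw2", "sw3"]}
utilization = {"sw1": 0.20, "sw2": 0.95, "sw3": 0.10}  # assumed KPI
flagged = flag_call_path_devices(call_path, network_paths, utilization, 0.80)
```

Each flagged tuple identifies both the overloaded device and the service pair whose traffic traverses it, which is what lets the system choose between reconfiguring the network and redeploying one of the services.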
