APPLICATION AND TRAFFIC AWARE MACHINE LEARNING-BASED POWER MANAGER

    Publication No.: US20250088427A1

    Publication Date: 2025-03-13

    Application No.: US18759333

    Filing Date: 2024-06-28

    Abstract: Example systems and techniques are disclosed for power management. An example system includes one or more memories and one or more processors. The one or more processors are configured to obtain workload metrics from a plurality of nodes of a cluster. The one or more processors are configured to obtain network function metrics from the plurality of nodes of the cluster. The one or more processors are configured to execute at least one machine learning model to predict a corresponding measure of criticality of traffic of each node. The one or more processors are configured to determine, based on the corresponding measure of criticality of traffic of each node, a corresponding power mode for at least one processing core of each node. The one or more processors are configured to recommend or apply the corresponding power mode to the at least one processing core of each node.
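
    Below is a minimal Python sketch of the decision loop the abstract describes: per-node workload and network function metrics feed a criticality predictor whose output is mapped to a power mode. The patent does not disclose the model or the mapping, so the heuristic score, the thresholds, and all names (NodeMetrics, PowerMode, predict_criticality) are illustrative assumptions.

        from dataclasses import dataclass
        from enum import Enum

        class PowerMode(Enum):
            LOW = "low"            # deep sleep states allowed, reduced frequency
            BALANCED = "balanced"
            HIGH = "high"          # cores held at high frequency

        @dataclass
        class NodeMetrics:
            node_id: str
            cpu_util: float        # workload metric, 0..1
            rx_pps: float          # network function metric: packets/sec received
            drop_rate: float       # network function metric: fraction dropped

        def predict_criticality(m: NodeMetrics) -> float:
            """Stand-in for the ML model: criticality score in [0, 1].
            A trained regressor or classifier would replace this heuristic."""
            score = (0.5 * m.cpu_util
                     + 0.3 * min(m.rx_pps / 1e6, 1.0)
                     + 0.2 * min(m.drop_rate * 10, 1.0))
            return min(score, 1.0)

        def select_power_mode(criticality: float) -> PowerMode:
            if criticality >= 0.7:
                return PowerMode.HIGH
            if criticality >= 0.3:
                return PowerMode.BALANCED
            return PowerMode.LOW

        def recommend(cluster: list) -> dict:
            """Recommend a power mode for at least one core of each node."""
            return {m.node_id: select_power_mode(predict_criticality(m))
                    for m in cluster}

        nodes = [NodeMetrics("node-a", cpu_util=0.9, rx_pps=2e6, drop_rate=0.01),
                 NodeMetrics("node-b", cpu_util=0.1, rx_pps=1e3, drop_rate=0.0)]
        for node, mode in recommend(nodes).items():
            print(node, "->", mode.value)   # node-a -> high, node-b -> low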

    DYNAMIC SERVICE REBALANCING IN NETWORK INTERFACE CARDS HAVING PROCESSING UNITS

    Publication No.: US20240380701A1

    Publication Date: 2024-11-14

    Application No.: US18316668

    Filing Date: 2023-05-12

    Abstract: An edge services controller may use a service scheduling algorithm to deploy services on Network Interface Cards (NICs) of a NIC fabric while incrementally scheduling services. The edge services controller may assign services to specific nodes depending on the resources available on those nodes. Available resources may include CPU compute, DPU compute, node bandwidth, etc. The edge services controller may also consider the distance between services that communicate with each other (i.e., the hop count between nodes when two communicating services are placed on separate nodes) and the weight of the communication between the services. Two services that communicate heavily with each other consume more bandwidth, so placing them farther apart is more detrimental than keeping them close together; the controller therefore reduces the hop count between them in proportion to the bandwidth consumed by their inter-service communication.
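
    A hedged sketch of such a greedy, communication-aware placement follows. The cost model (communication weight times hop count, subject to a resource fit) follows the abstract; the single-resource capacity model, the placement order, and all names are assumptions for illustration.

        def place_services(services, nodes, hops, comm):
            """Greedily place each service on the feasible node that minimizes
            weighted communication distance to already-placed peers.
            services: {svc: demand}; nodes: {node: free_capacity};
            hops: {node: {node: hop_count}}; comm: {(svc_a, svc_b): weight},
            one entry per communicating pair."""
            placement = {}
            # Assumed heuristic: place the most demanding services first.
            for svc in sorted(services, key=services.get, reverse=True):
                best_node, best_cost = None, float("inf")
                for node, free in nodes.items():
                    if free < services[svc]:
                        continue  # node lacks capacity for this service
                    cost = 0.0
                    for (a, b), weight in comm.items():
                        if a == svc and b in placement:
                            cost += weight * hops[node][placement[b]]
                        elif b == svc and a in placement:
                            cost += weight * hops[node][placement[a]]
                    if cost < best_cost:
                        best_node, best_cost = node, cost
                if best_node is None:
                    raise RuntimeError(f"no node can host {svc}")
                placement[svc] = best_node
                nodes[best_node] -= services[svc]
            return placement

        # Example: two nodes one hop apart; heavy fw<->dpi traffic keeps
        # those two services as close as capacity allows.
        hops = {"n1": {"n1": 0, "n2": 1}, "n2": {"n1": 1, "n2": 0}}
        print(place_services({"fw": 2, "dpi": 2, "lb": 1},
                             {"n1": 3, "n2": 3}, hops,
                             {("fw", "dpi"): 10.0, ("fw", "lb"): 1.0}))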

    INTELLIGENT FIREWALL FLOW CREATOR
    Invention Publication

    Publication No.: US20240179126A1

    Publication Date: 2024-05-30

    Application No.: US18472042

    Filing Date: 2023-09-21

    CPC classification numbers: H04L63/0263; H04L41/16; H04L63/0236

    Abstract: Example systems, methods, and storage media are described. An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which, when executed by the processing circuitry, cause the network system to obtain telemetry data, the telemetry data comprising indications of creations of instances of a flow. The instructions cause the network system to, based on the indications of the creations of the instances of the flow, determine a pattern of creation of the instances of the flow. The instructions cause the network system to, based on the pattern of creation of the instances of the flow, generate an action entry in a policy table for a particular instance of the flow prior to receiving a first packet of the particular instance of the flow.
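
    The sketch below illustrates the idea in Python under stated assumptions: flow-creation timestamps from telemetry are tested for periodicity, and a policy-table action entry is staged ahead of the predicted next instance, before its first packet arrives. The periodicity test (coefficient of variation of inter-creation gaps) and all names are hypothetical; the patent does not specify the pattern-detection method.

        import statistics

        def detect_period(timestamps, tolerance=0.1):
            """Return the mean inter-creation interval if creations look
            periodic (coefficient of variation below tolerance), else None."""
            if len(timestamps) < 3:
                return None
            gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
            mean = statistics.mean(gaps)
            if mean <= 0:
                return None
            cv = statistics.pstdev(gaps) / mean
            return mean if cv < tolerance else None

        def preprogram(policy_table, flow_key, timestamps, action="allow"):
            """Stage an action entry ahead of the predicted next instance."""
            period = detect_period(timestamps)
            if period is not None:
                policy_table[flow_key] = {
                    "action": action,
                    # Entry is in place before the next instance's first packet.
                    "install_at": timestamps[-1] + period,
                }
            return policy_table

        table = {}
        creations = [100.0, 160.1, 219.9, 280.0]   # roughly every 60 s
        preprogram(table, ("10.0.0.5", "10.0.1.9", 443, "tcp"), creations)
        print(table)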

    USING NETWORK INTERFACE CARDS HAVING PROCESSING UNITS TO DETERMINE LATENCY

    Publication No.: US20230006904A1

    Publication Date: 2023-01-05

    Application No.: US17806865

    Filing Date: 2022-06-14

    Abstract: A system is configured to compute a latency between a first computing device and a second computing device. The system includes a network interface card (NIC) of the first computing device. The NIC includes a set of interfaces configured to receive and send one or more packets, and a processing unit. The processing unit is configured to identify information indicative of a forward packet; compute, based on a first time corresponding to the forward packet and a second time corresponding to a reverse packet associated with the forward packet, a latency between the first computing device and the second computing device, wherein the second computing device includes a destination of the forward packet and a source of the reverse packet; and output information indicative of the latency between the first computing device and the second computing device.
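
    A minimal Python sketch of the timestamp-matching logic, assuming forward and reverse packets can be correlated by a flow key (e.g., a 5-tuple plus a sequence number). The key construction and class names are hypothetical, and a real NIC data path would timestamp packets in hardware rather than with time.monotonic().

        import time

        class LatencyMonitor:
            """Match a reverse packet to its forward packet by flow key and
            report the elapsed time as the round-trip latency."""
            def __init__(self):
                self._pending = {}  # key -> timestamp of forward packet

            def on_forward(self, key):
                self._pending[key] = time.monotonic()

            def on_reverse(self, key):
                t_fwd = self._pending.pop(key, None)
                if t_fwd is None:
                    return None  # no matching forward packet was seen
                return time.monotonic() - t_fwd

        mon = LatencyMonitor()
        key = ("10.0.0.1", "10.0.0.2", 5001, 443, "tcp", 12345)  # 5-tuple + seq
        mon.on_forward(key)
        time.sleep(0.02)  # stand-in for the reverse packet arriving
        print(f"latency: {mon.on_reverse(key) * 1e3:.1f} ms")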
