Self-Correcting Service Level Agreement Enforcer

    Publication Number: US20240179076A1

    Publication Date: 2024-05-30

    Application Number: US18472111

    Application Date: 2023-09-21

    CPC classification number: H04L41/5009 H04L43/0811 H04L43/0888

    Abstract: Example systems, methods, and storage media are described. An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which, when executed by the processing circuitry, cause the network system to obtain telemetry data. The instructions cause the network system to determine, based on the telemetry data, that an application running on server processing circuitry does not meet at least one service level agreement (SLA) requirement, the server processing circuitry not including processing circuitry resident on a network interface card (NIC). The instructions cause the network system to, based on the application not meeting the at least one SLA requirement, determine to offload at least one component of the application from the server processing circuitry to the processing circuitry resident on the NIC.
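    The decision flow described in this abstract (telemetry in, SLA check, offload decision out) can be illustrated with a short sketch. This is a minimal, hypothetical example; names such as SlaRequirement, TelemetrySample, and enforce_sla are assumptions for illustration, not APIs defined in the patent.

        # Hypothetical sketch of the SLA-enforcement decision described in the abstract.
        from dataclasses import dataclass

        @dataclass
        class SlaRequirement:
            metric: str            # e.g. "latency_ms" or "throughput_mbps"
            threshold: float
            higher_is_better: bool

        @dataclass
        class TelemetrySample:
            app: str
            metrics: dict          # metric name -> observed value

        def violates(sample: TelemetrySample, req: SlaRequirement) -> bool:
            value = sample.metrics.get(req.metric)
            if value is None:
                return False
            return value < req.threshold if req.higher_is_better else value > req.threshold

        def enforce_sla(sample: TelemetrySample, requirements, offloadable_components):
            """If any SLA requirement is not met on the server CPU, select a component
            of the application to offload to the NIC-resident processing circuitry."""
            if any(violates(sample, req) for req in requirements):
                # Pick one offloadable component (e.g. packet filtering or crypto)
                # to move from the host CPU to the processor on the NIC.
                return {"action": "offload", "app": sample.app,
                        "component": offloadable_components[0], "target": "nic"}
            return {"action": "none", "app": sample.app}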

    APPLICATION AND TRAFFIC AWARE MACHINE LEARNING-BASED POWER MANAGER

    Publication Number: US20250088427A1

    Publication Date: 2025-03-13

    Application Number: US18759333

    Application Date: 2024-06-28

    Abstract: Example systems and techniques are disclosed for power management. An example system includes one or more memories and one or more processors. The one or more processors are configured to obtain workload metrics from a plurality of nodes of a cluster. The one or more processors are configured to obtain network function metrics from the plurality of nodes of the cluster. The one or more processors are configured to execute at least one machine learning model to predict a corresponding measure of criticality of traffic of each node. The one or more processors are configured to determine, based on the corresponding measure of criticality of traffic of each node, a corresponding power mode for at least one processing core of each node. The one or more processors are configured to recommend or apply the corresponding power mode to the at least one processing core of each node.
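    The pipeline in this abstract (cluster metrics to predicted traffic criticality to a per-core power mode) can be sketched as follows. The scoring function stands in for the machine learning model, and the thresholds and mode names are assumptions made only to keep the example runnable.

        # Illustrative sketch: node metrics -> predicted criticality -> power mode.
        def predict_criticality(workload_metrics, nf_metrics):
            # Stand-in for the ML model: a weighted score over CPU utilization
            # and packets-per-second from the network functions.
            return 0.7 * workload_metrics["cpu_util"] + 0.3 * min(nf_metrics["pps"] / 1e6, 1.0)

        def power_mode_for(criticality):
            if criticality > 0.75:
                return "performance"   # keep cores at high frequency
            if criticality > 0.4:
                return "balanced"
            return "powersave"         # allow deeper sleep / lower frequency states

        def recommend_power_modes(nodes):
            """nodes: mapping node name -> (workload_metrics, nf_metrics)."""
            return {node: power_mode_for(predict_criticality(wm, nm))
                    for node, (wm, nm) in nodes.items()}

        # Example:
        # recommend_power_modes({"node-1": ({"cpu_util": 0.9}, {"pps": 2_000_000})})
        # -> {"node-1": "performance"}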

    DYNAMIC SERVICE REBALANCING IN NETWORK INTERFACE CARDS HAVING PROCESSING UNITS

    Publication Number: US20240380701A1

    Publication Date: 2024-11-14

    Application Number: US18316668

    Application Date: 2023-05-12

    Abstract: An edge services controller may use a service scheduling algorithm to deploy services on Network Interface Cards (NICs) of a NIC fabric while incrementally scheduling services. The edge services controller may assign services to specific nodes depending on the resources available on those nodes, such as CPU compute, DPU compute, and node bandwidth. The edge services controller may also consider the distance between services that communicate with each other (i.e., the hop count between nodes when two communicating services are placed on separate nodes) and the weight of the communication between them. Two services that communicate heavily consume more bandwidth, so placing them far apart is more detrimental than keeping them close; the controller therefore reduces the hop count between services in proportion to the bandwidth consumed by their inter-service communications.
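    A minimal placement sketch, under stated assumptions, of the cost model this abstract describes: candidate placements are scored by hop count weighted by inter-service communication, subject to per-node resource limits. The data structures (services, node_free, comm_weight, hop_count) are illustrative, not taken from the patent.

        # Greedy incremental scheduling sketch for services on a NIC fabric.
        def placement_cost(placement, comm_weight, hop_count):
            """placement: service -> node; comm_weight: (s1, s2) -> bandwidth;
            hop_count: (node_a, node_b) -> network distance."""
            cost = 0.0
            for (s1, s2), weight in comm_weight.items():
                n1, n2 = placement[s1], placement[s2]
                cost += weight * hop_count.get((n1, n2), 0)
            return cost

        def fits(node_free, demand):
            return all(node_free.get(r, 0) >= v for r, v in demand.items())

        def schedule(services, node_free, comm_weight, hop_count):
            """Place each service on the feasible node that adds the least
            communication cost, consuming that node's resources as it goes."""
            placement = {}
            for svc, demand in services.items():
                best_node, best_cost = None, float("inf")
                for node, free in node_free.items():
                    if not fits(free, demand):
                        continue
                    trial = dict(placement, **{svc: node})
                    known = {pair: w for pair, w in comm_weight.items()
                             if pair[0] in trial and pair[1] in trial}
                    cost = placement_cost(trial, known, hop_count)
                    if cost < best_cost:
                        best_node, best_cost = node, cost
                if best_node is None:
                    raise RuntimeError(f"no node with enough resources for {svc}")
                placement[svc] = best_node
                for r, v in demand.items():
                    node_free[best_node][r] -= v
            return placement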

    INTELLIGENT FIREWALL FLOW CREATOR
    Publication Type: Invention Publication

    Publication Number: US20240179126A1

    Publication Date: 2024-05-30

    Application Number: US18472042

    Application Date: 2023-09-21

    CPC classification number: H04L63/0263 H04L41/16 H04L63/0236

    Abstract: Example systems, methods, and storage media are described. An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which, when executed by the processing circuitry, cause the network system to obtain telemetry data, the telemetry data comprising indications of creations of instances of a flow. The instructions cause the network system to, based on the indications of the creations of the instances of the flow, determine a pattern of creation of the instances of the flow. The instructions cause the network system to, based on the pattern of creation of the instances of the flow, generate an action entry in a policy table for a particular instance of the flow prior to receiving a first packet of the particular instance of the flow.
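    The core idea in this abstract, observing when instances of a flow are created, inferring a pattern, and installing a policy-table action entry before the next instance's first packet arrives, is sketched below. The periodicity heuristic, the default "allow" action, and the function names are assumptions for illustration only.

        # Hedged sketch: learn a creation pattern, pre-create the policy entry.
        def infer_period(creation_times, tolerance=0.1):
            """Return the inter-arrival period if creations look periodic, else None."""
            if len(creation_times) < 3:
                return None
            gaps = [b - a for a, b in zip(creation_times, creation_times[1:])]
            mean = sum(gaps) / len(gaps)
            if mean == 0:
                return None
            if all(abs(g - mean) <= tolerance * mean for g in gaps):
                return mean
            return None

        def pre_create_entry(policy_table, flow_key, creation_times, action="allow"):
            """If a pattern exists, add the action entry for the predicted next instance."""
            period = infer_period(creation_times)
            if period is None:
                return None
            predicted_start = creation_times[-1] + period
            entry = {"flow": flow_key, "action": action, "valid_from": predicted_start}
            policy_table[flow_key] = entry   # installed before the first packet arrives
            return entry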

    Intelligent firewall policy processor

    Publication Number: US12267300B2

    Publication Date: 2025-04-01

    Application Number: US18472050

    Application Date: 2023-09-21

    Abstract: An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which cause the system to obtain telemetry data, the telemetry data being associated with a plurality of applications running on a plurality of hosts. The instructions cause the system to, based on the telemetry data, determine a subset of applications of the plurality of applications that run on a first host of the plurality of hosts. The instructions cause the system to determine a subset of firewall policies of a plurality of firewall policies, each of the subset of firewall policies applying to at least one respective application of the subset of applications. The instructions cause the system to generate an indication of the subset of firewall policies and send the indication to a management plane of a distributed firewall.
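    A short sketch of the per-host selection step this abstract describes: from telemetry, determine which applications run on a host, keep only the firewall policies that apply to at least one of them, and hand that subset to the distributed firewall's management plane. The record layout and send_to_management_plane callback are hypothetical stand-ins.

        # Sketch of per-host firewall policy selection from telemetry.
        def applications_on_host(telemetry, host):
            """telemetry: iterable of records like {"host": ..., "app": ...}."""
            return {rec["app"] for rec in telemetry if rec["host"] == host}

        def select_policies(policies, apps):
            """policies: list of dicts with a 'name' and an 'applies_to' set of app names."""
            return [p for p in policies if p["applies_to"] & apps]

        def push_host_policies(telemetry, policies, host, send_to_management_plane):
            subset = select_policies(policies, applications_on_host(telemetry, host))
            # Send only the policies relevant to this host to the management plane.
            send_to_management_plane({"host": host,
                                      "policies": [p["name"] for p in subset]})
            return subset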
