SELF-LEARNING GREEN APPLICATION WORKLOADS
    Invention Publication

    Publication No.: US20230342275A1

    Publication Date: 2023-10-26

    Application No.: US18305194

    Filing Date: 2023-04-21

    CPC classification number: G06F11/3058 G06F9/5088

    Abstract: Techniques are described for determining the energy usage of a data center and invoking one or more actions to improve the energy usage of the data center. For example, a computing system may obtain energy usage data of a data center deploying an application. The computing system may also determine, based on a comparison of the energy usage data of the data center deploying the application to a percentage of energy provided by one or more renewable energy sources to the data center, a green quotient of the application that specifies a value that indicates whether the data center deploying the application is energy efficient. The computing system may further invoke, based on the green quotient of the application that specifies a value that indicates the data center deploying the application is not energy efficient, an action to improve energy usage of the data center deploying the application.
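
    As a concrete illustration of the comparison described above, the sketch below computes a green quotient as the ratio of an application's share of data-center energy to the fraction of energy supplied by renewable sources, and selects an action when a threshold is crossed. The ratio-based definition, the 1.0 threshold, and the "migrate-workload" action are illustrative assumptions, not the formula or actions claimed in the patent.

```python
# Minimal sketch of a green-quotient check (ratio convention and
# threshold are hypothetical).

def green_quotient(app_energy_kwh: float,
                   total_energy_kwh: float,
                   renewable_fraction: float) -> float:
    """Compare the application's share of data-center energy usage to the
    fraction of energy provided by renewable sources."""
    app_fraction = app_energy_kwh / total_energy_kwh
    # A quotient above 1.0 means the application's energy share exceeds
    # what the renewable supply covers (assumed convention).
    return app_fraction / renewable_fraction


def recommend_action(quotient: float, threshold: float = 1.0) -> str:
    """Invoke an action when the data center deploying the application is
    not considered energy efficient."""
    if quotient > threshold:
        return "migrate-workload"  # e.g., shift load to a greener site
    return "no-action"


if __name__ == "__main__":
    q = green_quotient(app_energy_kwh=120.0,
                       total_energy_kwh=1000.0,
                       renewable_fraction=0.10)
    print(f"green quotient: {q:.2f}, action: {recommend_action(q)}")
```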

    EXTENDING SWITCH FABRIC PROCESSING TO NETWORK INTERFACE CARDS

    Publication No.: US20230017692A1

    Publication Date: 2023-01-19

    Application No.: US17809452

    Filing Date: 2022-06-28

    Abstract: An example system comprises a plurality of servers comprising respective network interface cards (NICs) connected by physical links in a physical topology, wherein each NIC of the plurality of NICs comprises an embedded switch and a processing unit coupled to the embedded switch; and an edge services controller configured to program the processing unit of a network interface card of the plurality of network interface cards to: receive, at a first network interface of the NIC, a data packet from a physical device; based on the data packet being received at the first network interface, modify the data packet to generate a modified data packet; and output the modified data packet to the physical device via a second network interface of the NIC.
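
    The sketch below illustrates the receive-modify-forward behavior the controller programs onto a NIC processing unit: a packet arrives on a first interface, is modified, and is emitted on a second interface. The Packet fields, interface names, and VLAN retagging are hypothetical; the patent does not specify the modification performed.

```python
# Minimal sketch of programmed receive-modify-forward on a NIC
# processing unit; interface names, Packet fields, and the VLAN retag
# are illustrative assumptions.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Packet:
    src_mac: str
    dst_mac: str
    vlan: int
    payload: bytes


def modify(packet: Packet) -> Packet:
    # Example modification: retag the packet for the fabric-facing port.
    return replace(packet, vlan=200)


def process(ingress_if: str, packet: Packet) -> tuple:
    """Receive a packet on a first interface of the NIC, modify it, and
    output the modified packet via a second interface."""
    if ingress_if == "eth0":             # first network interface
        return ("eth1", modify(packet))  # second network interface
    return (ingress_if, packet)


if __name__ == "__main__":
    pkt = Packet("02:00:00:00:00:01", "02:00:00:00:00:02",
                 vlan=100, payload=b"data")
    egress_if, out = process("eth0", pkt)
    print(egress_if, out.vlan)
```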

    APPLICATION-AWARE ACTIVE MEASUREMENT FOR MONITORING NETWORK HEALTH

    Publication No.: US20250150327A1

    Publication Date: 2025-05-08

    Application No.: US19018663

    Filing Date: 2025-01-13

    Abstract: In general, this disclosure describes techniques that enable a network system to perform application-aware active measurement for monitoring network health. The network system includes memory that stores a topology graph for a network. The network system includes processing circuitry that may receive an identifier associated with an application utilizing the network for communications, and determine, based on the topology graph and the identifier, a subgraph of the topology graph based on a location, in the topology graph, of a node representing a compute node that is a host of the application. The processing circuitry may next determine, based on the subgraph, a probe module to measure performance metrics associated with the application, and for the probe module, generate configuration data corresponding to the probe module. The processing circuitry may output, to the probe module, the configuration data.
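
    A rough sketch of the subgraph-and-probe-configuration flow follows, using networkx to hold the topology graph. The two-hop ego subgraph around the host node, the probe target list, and the configuration dictionary format are assumptions made for illustration.

```python
# Minimal sketch using networkx for the topology graph; the two-hop ego
# subgraph, probe targets, and configuration format are assumptions.

import networkx as nx


def subgraph_for_app(topology: nx.Graph, host_node: str,
                     radius: int = 2) -> nx.Graph:
    """Determine a subgraph of the topology around the compute node that
    hosts the application."""
    return nx.ego_graph(topology, host_node, radius=radius)


def probe_config(app_id: str, sub: nx.Graph, host_node: str) -> dict:
    """Generate configuration data for a probe module that measures
    performance metrics associated with the application."""
    targets = sorted(n for n in sub.nodes if n != host_node)
    return {"app": app_id, "source": host_node,
            "targets": targets, "metrics": ["latency", "loss", "jitter"]}


if __name__ == "__main__":
    g = nx.Graph([("server-1", "leaf-1"), ("leaf-1", "spine-1"),
                  ("spine-1", "leaf-2"), ("leaf-2", "server-2")])
    sub = subgraph_for_app(g, "server-1")
    print(probe_config("checkout-app", sub, "server-1"))
```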

    GRAPH ANALYTICS ENGINE FOR APPLICATION-TO-NETWORK TROUBLESHOOTING

    Publication No.: US20250150326A1

    Publication Date: 2025-05-08

    Application No.: US19018627

    Filing Date: 2025-01-13

    Abstract: A computing device may implement the techniques described in this disclosure. The computing device may include processing circuitry configured to execute an analysis framework system, and memory configured to store time series data. The analysis framework system may create, based on the time series data, a knowledge graph comprising a plurality of first nodes in the network system referenced in the time series data interconnected by edges. The analysis framework system may cause a graph analytics service of the analysis framework system to receive a graph analysis request comprising a request to determine a fault propagation path, a request to determine changes in the knowledge graph, a request to determine an impact of an emulated fault, or a request to determine an application-to-network path. The analysis framework system may also cause the graph analytics service to determine a response to the graph analysis request, and output the response.
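
    The sketch below shows how a graph analytics service might dispatch the four request types named above against a knowledge graph held in networkx. Modeling fault propagation and emulated-fault impact as reachability queries, and the request/response dictionary shapes, are simplifying assumptions rather than the patent's method.

```python
# Minimal sketch of a graph analytics service dispatching the four
# request types; request/response shapes and the reachability-based
# fault analysis are simplifying assumptions.

import networkx as nx


def handle_request(kg: nx.DiGraph, request: dict) -> dict:
    kind = request["kind"]
    if kind == "fault-propagation":
        # Nodes reachable from the faulty node approximate the propagation path.
        return {"affected": sorted(nx.descendants(kg, request["node"]))}
    if kind == "graph-diff":
        previous = request["previous"]
        return {"added_edges": sorted(set(kg.edges) - set(previous.edges)),
                "removed_edges": sorted(set(previous.edges) - set(kg.edges))}
    if kind == "emulated-fault":
        # Impact of a what-if fault, again modeled as reachability.
        return {"impacted": sorted(nx.descendants(kg, request["node"]))}
    if kind == "app-to-network-path":
        return {"path": nx.shortest_path(kg, request["app"], request["device"])}
    raise ValueError(f"unknown request kind: {kind}")


if __name__ == "__main__":
    kg = nx.DiGraph([("app-1", "vm-1"), ("vm-1", "server-1"),
                     ("server-1", "leaf-1")])
    print(handle_request(kg, {"kind": "app-to-network-path",
                              "app": "app-1", "device": "leaf-1"}))
```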

    APPLICATION AND TRAFFIC AWARE MACHINE LEARNING-BASED POWER MANAGER

    Publication No.: US20250088427A1

    Publication Date: 2025-03-13

    Application No.: US18759333

    Filing Date: 2024-06-28

    Abstract: Example systems and techniques are disclosed for power management. An example system includes one or more memories and one or more processors. The one or more processors are configured to obtain workload metrics from a plurality of nodes of a cluster. The one or more processors are configured to obtain network function metrics from the plurality of nodes of the cluster. The one or more processors are configured to execute at least one machine learning model to predict a corresponding measure of criticality of traffic of each node. The one or more processors are configured to determine, based on the corresponding measure of criticality of traffic of each node, a corresponding power mode for at least one processing core of each node. The one or more processors are configured to recommend or apply the corresponding power mode to the at least one processing core of each node.
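
    As a minimal illustration of the predict-then-set-power-mode loop, the sketch below trains a small scikit-learn classifier on toy node metrics, predicts a per-node traffic criticality, and maps it to a power mode for the node's cores. The feature set, the three criticality levels, and the mode names are assumptions, not the patent's model.

```python
# Minimal sketch assuming a scikit-learn classifier and an illustrative
# mapping from predicted traffic criticality to per-core power modes.

from sklearn.ensemble import RandomForestClassifier

POWER_MODES = {0: "powersave", 1: "balanced", 2: "performance"}


def recommend_power_modes(model, node_metrics: dict) -> dict:
    """Predict a measure of traffic criticality for each node and map it
    to a power mode for the node's processing cores."""
    nodes = list(node_metrics)
    features = [node_metrics[n] for n in nodes]   # workload + network function metrics
    criticality = model.predict(features)         # 0 = low ... 2 = high
    return {n: POWER_MODES[int(c)] for n, c in zip(nodes, criticality)}


if __name__ == "__main__":
    # Toy training rows: [cpu_utilization, packets_per_sec, active_sessions]
    X = [[0.1, 1e3, 5], [0.5, 5e4, 200], [0.9, 2e5, 900]]
    y = [0, 1, 2]
    model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
    print(recommend_power_modes(model, {"node-a": [0.2, 2e3, 10],
                                        "node-b": [0.8, 1.5e5, 700]}))
```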

    DYNAMIC SERVICE REBALANCING IN NETWORK INTERFACE CARDS HAVING PROCESSING UNITS

    Publication No.: US20240380701A1

    Publication Date: 2024-11-14

    Application No.: US18316668

    Filing Date: 2023-05-12

Abstract: An edge services controller may use a service scheduling algorithm to deploy services on Network Interface Cards (NICs) of a NIC fabric while incrementally scheduling services. The edge services controller may assign services to specific nodes depending on the resources available on those nodes, such as CPU compute, DPU compute, and node bandwidth. The edge services controller may also consider the distance between services that communicate with each other (i.e., the hop count between nodes when two communicating services are placed on separate nodes) and the weight of communication between the services. Two services that communicate heavily consume more bandwidth, so placing them farther apart is more detrimental than keeping them close together; the hop count between them is therefore reduced in proportion to the bandwidth consumed by their inter-service communication.
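
    The placement cost alluded to above can be sketched as communication weight multiplied by hop count, subject to per-node resource limits. The exhaustive search over a two-NIC fabric and the capacity, demand, and traffic numbers below are illustrative assumptions, not the controller's actual scheduling algorithm.

```python
# Minimal sketch of hop-count-weighted placement cost under resource
# limits; the topology, demands, and greedy exhaustive search are
# illustrative assumptions.

import itertools

HOPS = {("nic-1", "nic-1"): 0, ("nic-1", "nic-2"): 1,
        ("nic-2", "nic-1"): 1, ("nic-2", "nic-2"): 0}
CAPACITY = {"nic-1": 4.0, "nic-2": 4.0}                 # available DPU compute
DEMAND = {"svc-a": 2.0, "svc-b": 1.0, "svc-c": 2.0}     # per-service demand
TRAFFIC = {("svc-a", "svc-b"): 10.0, ("svc-b", "svc-c"): 1.0}  # comm weight


def comm_cost(placement: dict) -> float:
    """Sum of communication weight times hop count between the nodes
    hosting each pair of communicating services."""
    return sum(w * HOPS[(placement[s], placement[t])]
               for (s, t), w in TRAFFIC.items())


def feasible(placement: dict) -> bool:
    used = {n: 0.0 for n in CAPACITY}
    for svc, node in placement.items():
        used[node] += DEMAND[svc]
    return all(used[n] <= CAPACITY[n] for n in CAPACITY)


def best_placement() -> dict:
    candidates = (dict(zip(DEMAND, nodes))
                  for nodes in itertools.product(CAPACITY, repeat=len(DEMAND)))
    return min((p for p in candidates if feasible(p)), key=comm_cost)


if __name__ == "__main__":
    p = best_placement()
    print(p, comm_cost(p))   # heavily communicating svc-a/svc-b stay colocated
```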

    INTELLIGENT FIREWALL FLOW CREATOR
    Invention Publication

    Publication No.: US20240179126A1

    Publication Date: 2024-05-30

    Application No.: US18472042

    Filing Date: 2023-09-21

    CPC classification number: H04L63/0263 H04L41/16 H04L63/0236

    Abstract: Example systems, methods, and storage media are described. An example network system includes processing circuitry and one or more memories coupled to the processing circuitry. The one or more memories are configured to store instructions which, when executed by the processing circuitry, cause the network system to obtain telemetry data, the telemetry data comprising indications of creations of instances of a flow. The instructions cause the network system to, based on the indications of the creations of the instances of the flow, determine a pattern of creation of the instances of the flow. The instructions cause the network system to, based on the pattern of creation of the instances of the flow, generate an action entry in a policy table for a particular instance of the flow prior to receiving a first packet of the particular instance of the flow.
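
    To illustrate the pattern-then-preinstall idea, the sketch below checks whether observed flow-creation timestamps are roughly periodic and, if so, adds a policy-table entry ahead of the predicted next instance. The periodicity test, flow key, and table layout are assumptions for illustration, not the claimed mechanism.

```python
# Minimal sketch: detect a roughly periodic flow-creation pattern in
# telemetry timestamps and pre-install a policy-table entry before the
# next instance; the periodicity test and table layout are assumptions.

from statistics import mean, pstdev
from typing import Optional


def predict_next_creation(timestamps: list) -> Optional[float]:
    """Return the predicted creation time of the next flow instance if the
    observed creations look periodic, otherwise None."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return None
    if pstdev(gaps) > 0.1 * mean(gaps):   # not regular enough to trust
        return None
    return timestamps[-1] + mean(gaps)


def preinstall_entry(policy_table: dict, flow_key: tuple, action: str,
                     creation_times: list) -> None:
    """Generate an action entry for the flow ahead of the first packet of
    its next instance when a creation pattern is detected."""
    eta = predict_next_creation(creation_times)
    if eta is not None:
        policy_table[flow_key] = {"action": action, "valid_from": eta}


if __name__ == "__main__":
    table = {}
    creations = [0.0, 60.1, 119.9, 180.0]   # one new instance roughly every 60 s
    preinstall_entry(table, ("10.0.0.5", "10.0.0.9", 443, "tcp"),
                     "allow", creations)
    print(table)
```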
