Abstract:
In one embodiment, a primary networking device in a branch network receives a notification of an anomaly detected by a secondary networking device in the branch network. The primary networking device is located at an edge of the network. The primary networking device aggregates the anomaly detected by the secondary networking device and a second anomaly detected in the network into an aggregated anomaly. The primary networking device associates the aggregated anomaly with a location of the secondary networking device in the branch network. The primary networking device reports the aggregated anomaly and the associated location of the secondary networking device to a supervisory device.
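As an illustrative sketch only (not part of the embodiment), the aggregate-and-report flow might look like the following, where `PrimaryEdgeDevice`, `Anomaly`, and the location strings are hypothetical names chosen for the example:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Anomaly:
    kind: str           # e.g. "dns_spike"
    source_device: str  # secondary device that detected it

@dataclass
class AggregatedAnomaly:
    kind: str
    locations: List[str] = field(default_factory=list)

class PrimaryEdgeDevice:
    """Aggregates anomalies reported by secondary devices, then reports upstream."""
    def __init__(self):
        self._pending = {}  # kind -> AggregatedAnomaly

    def receive_notification(self, anomaly: Anomaly, location: str) -> None:
        # merge anomalies of the same kind into one aggregated anomaly,
        # tagged with the location of each reporting secondary device
        agg = self._pending.setdefault(anomaly.kind, AggregatedAnomaly(anomaly.kind))
        agg.locations.append(location)

    def report(self, supervisor: list) -> None:
        # push aggregated anomalies (with associated locations) to the supervisor
        supervisor.extend(self._pending.values())
        self._pending.clear()

edge = PrimaryEdgeDevice()
edge.receive_notification(Anomaly("dns_spike", "switch-1"), "branch-A/floor-1")
edge.receive_notification(Anomaly("dns_spike", "switch-2"), "branch-A/floor-2")
supervisor_inbox = []
edge.report(supervisor_inbox)
```

Here two notifications of the same anomaly kind collapse into a single aggregated report carrying both secondary-device locations.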
Abstract:
In one embodiment, network metrics are collected and analyzed in a network having nodes interconnected by communication links. Then, it is predicted whether a network element failure is relatively likely to occur based on the collected and analyzed network metrics. In response to predicting that a network element failure is relatively likely to occur, traffic in the network is rerouted in order to avoid the predicted network element failure before it occurs.
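A minimal sketch of predict-then-reroute, assuming a simple threshold predictor over link error rates and a BFS reroute; `failure_likely`, the graph, and the threshold are illustrative assumptions, not the embodiment's method:

```python
from collections import deque

def failure_likely(error_rates, threshold=0.2):
    # crude predictor: failure is "relatively likely" when the mean of
    # the three most recent error-rate samples exceeds a threshold
    recent = error_rates[-3:]
    return sum(recent) / len(recent) > threshold

def reroute(graph, src, dst, avoid_link):
    # BFS shortest path that avoids the link predicted to fail
    a, b = avoid_link
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in graph.get(node, []):
            if {node, nxt} == {a, b} or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
error_rates = [0.01, 0.15, 0.3, 0.4]  # rising errors on link B-D
path = reroute(graph, "A", "D", avoid_link=("B", "D")) if failure_likely(error_rates) else None
```

Traffic from A to D is steered onto the alternate path A-C-D before the degrading B-D link actually fails.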
Abstract:
In one embodiment, network traffic data is received regarding traffic flowing through one or more routers in a network. A future traffic profile through the one or more routers is predicted by modeling the network traffic data. Network condition data for the network is received and future network performance is predicted by modeling the network condition data. A behavior of the network is adjusted based on the predicted future traffic profile and on the predicted network performance.
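As a toy sketch, one could stand in for both models with an exponentially weighted moving average and a simple adjustment rule; the EWMA, the 80% utilization bound, and the action names are assumptions for illustration:

```python
def forecast(series, alpha=0.5):
    # exponentially weighted moving average as a stand-in for the
    # traffic-profile and network-performance models
    level = series[0]
    for sample in series[1:]:
        level = alpha * sample + (1 - alpha) * level
    return level

def adjust_behavior(predicted_load, predicted_loss, capacity):
    # adjust based on both the predicted traffic profile and the
    # predicted network performance
    if predicted_load > 0.8 * capacity or predicted_loss > 0.05:
        return "preempt-low-priority"
    return "no-change"

load = forecast([40, 55, 70, 90])    # Mb/s through the router
loss = forecast([0.01, 0.04, 0.1])   # observed loss ratio
action = adjust_behavior(load, loss, capacity=100)
```

With rising loss, the predicted performance alone is enough to trigger an adjustment even though the predicted load stays under the utilization bound.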
Abstract:
In one embodiment, a state tracking engine (STE) defines one or more classes of elements that can be tracked in a network. A set of elements to track is determined from the one or more classes, and the set of elements is tracked in the network. Access to the tracked set of elements is then provided via one or more corresponding application programming interfaces (APIs). In another embodiment, a metric computation engine (MCE) defines one or more network metrics to be tracked in the network. One or more tracked elements are received from the STE. The one or more network metrics are tracked in the network based on the received one or more tracked elements. Access to the tracked network metrics is then provided via one or more corresponding APIs.
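The STE/MCE split might be sketched as two cooperating classes, with `get()` and `compute()` playing the role of the corresponding APIs; the class names, element names, and the mean-delay metric are hypothetical:

```python
class StateTrackingEngine:
    """Tracks elements belonging to declared classes; exposes a get() API."""
    def __init__(self, classes):
        self.classes = set(classes)   # e.g. {"link", "node"}
        self._state = {}

    def track(self, element, cls, value):
        if cls not in self.classes:
            raise ValueError(f"untracked class: {cls}")
        self._state[element] = value

    def get(self, element):  # API exposed to other engines
        return self._state[element]

class MetricComputationEngine:
    """Computes metrics over elements tracked by the STE; exposes compute()."""
    def __init__(self, ste):
        self.ste = ste
        self._metrics = {}

    def define_metric(self, name, fn, elements):
        self._metrics[name] = (fn, elements)

    def compute(self, name):  # API exposed to metric consumers
        fn, elements = self._metrics[name]
        return fn([self.ste.get(e) for e in elements])

ste = StateTrackingEngine(classes={"link"})
ste.track("link-1", "link", 12.0)  # e.g. delay in ms
ste.track("link-2", "link", 18.0)
mce = MetricComputationEngine(ste)
mce.define_metric("mean_link_delay", lambda vs: sum(vs) / len(vs), ["link-1", "link-2"])
```

The MCE never stores raw element state itself; it pulls tracked elements through the STE's API, which mirrors the layering described above.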
Abstract:
In one embodiment, techniques are shown and described relating to learning machine based computation of network join times. In particular, in one embodiment, a device computes a join time of the device to join a computer network. During joining, the device sends a configuration request to a server, and receives instructions indicating whether to provide the join time. The device may then provide the join time to a collector in response to instructions to provide the join time. In another embodiment, a collector receives a plurality of join times from a respective plurality of nodes having one or more associated node properties. The collector may then estimate a mapping between the join times and the node properties and determine a confidence interval of the mapping. Accordingly, the collector may then determine a rate at which nodes having particular node properties report their join times based on the confidence interval.
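The collector side might be sketched as follows, assuming a per-property-group mean with a 95% confidence interval and a two-level reporting rate; the z-value, target half-width, and rate values are illustrative assumptions:

```python
import math
import statistics

def confidence_interval(join_times, z=1.96):
    # mean join time and 95% half-width for one group of node properties
    mean = statistics.mean(join_times)
    if len(join_times) < 2:
        return mean, float("inf")
    half = z * statistics.stdev(join_times) / math.sqrt(len(join_times))
    return mean, half

def reporting_rate(half_width, target_half_width=1.0, low=0.1, high=1.0):
    # wide (uncertain) interval -> ask nodes with these properties to
    # report join times more often; tight interval -> back off
    return high if half_width > target_half_width else low

mean, half = confidence_interval([4.0, 5.0, 6.0, 5.0])  # join times in seconds
rate = reporting_rate(half)
```

With four tightly clustered samples the interval is already narrow, so nodes in this property group are told to report at the low rate.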
Abstract:
In one embodiment, techniques are shown and described relating to dynamically determining node locations to apply learning machine based network performance improvement. In particular, a degree of significance is calculated for each node in a network based on one or more significance factors. One or more significant nodes are then determined based on the calculated degree of significance. Additionally, a nodal region in the network of deteriorated network health is determined, and the nodal region of deteriorated network health is correlated with a significant node of the one or more significant nodes.
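A minimal sketch of this flow, assuming significance is a weighted sum of factors and the correlation step walks each unhealthy node up the routing tree to its nearest significant ancestor; the factor names, weights, and topology are hypothetical:

```python
def significance(factors, weights):
    # weighted sum of significance factors for one node
    return sum(weights[f] * factors.get(f, 0.0) for f in weights)

def significant_nodes(all_factors, weights, threshold):
    return {n for n, f in all_factors.items()
            if significance(f, weights) >= threshold}

def correlate_region(unhealthy_nodes, sig_nodes, parent):
    # walk from each unhealthy node toward the root until a significant
    # node is found; the most frequent hit is correlated with the region
    hits = {}
    for node in unhealthy_nodes:
        cur = node
        while cur is not None:
            if cur in sig_nodes:
                hits[cur] = hits.get(cur, 0) + 1
                break
            cur = parent.get(cur)
    return max(hits, key=hits.get) if hits else None

weights = {"traffic": 0.7, "descendants": 0.3}
factors = {
    "R":  {"traffic": 1.0, "descendants": 1.0},
    "n1": {"traffic": 0.8, "descendants": 0.6},
    "n2": {"traffic": 0.1, "descendants": 0.1},
    "n3": {"traffic": 0.1, "descendants": 0.0},
}
parent = {"R": None, "n1": "R", "n2": "n1", "n3": "n1"}
sig = significant_nodes(factors, weights, threshold=0.7)
hotspot = correlate_region({"n2", "n3"}, sig, parent)
```

Both unhealthy leaves sit under the significant node n1, so the deteriorated region is correlated with n1 rather than the root.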
Abstract:
In one embodiment, information relating to network metrics in a computer network is collected. A packet delay for a packet to be transmitted along a particular communication path is predicted based on the network metrics. Then, an optimal packet size for optimizing a transmission experience of the packet to be transmitted along the particular communication path is calculated based on the predicted packet delay. Also, a size of the packet to be transmitted along the particular communication path is dynamically adjusted based on the calculated optimal packet size.
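One way to sketch the delay-then-size calculation, under the assumption of a per-byte loss model in which larger packets are more likely to need retransmission; the loss, overhead, and bandwidth figures are illustrative, not derived from the embodiment:

```python
def predicted_delay(size, per_byte_loss, overhead, bandwidth):
    # expected delay for one packet: one-shot delay (fixed overhead plus
    # serialization time) divided by the probability the packet arrives intact
    success = (1.0 - per_byte_loss) ** size
    return (overhead + size / bandwidth) / success

def optimal_size(per_byte_loss, overhead, bandwidth, candidates):
    # pick the candidate minimizing expected delay per delivered byte
    return min(candidates,
               key=lambda s: predicted_delay(s, per_byte_loss, overhead, bandwidth) / s)

best = optimal_size(per_byte_loss=0.001, overhead=1.0, bandwidth=100.0,
                    candidates=[64, 256, 1024])
```

Small packets waste the fixed per-packet overhead, large packets retransmit too often; under these assumed numbers the middle candidate wins, and the sender would dynamically adjust its packet size to that value.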
Abstract:
In one embodiment, a device in a network receives a switchover policy for a particular type of traffic in the network. The device determines a predicted effect of directing a traffic flow of the particular type of traffic from a first path in the network to a second path in the network. The device determines whether the predicted effect of directing the traffic flow to the second path would violate the switchover policy. The device causes the traffic flow to be routed via the second path in the network, based on a determination that the predicted effect of directing the traffic flow to the second path would not violate the switchover policy for the particular type of traffic.
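A minimal sketch of the policy check, where `SwitchoverPolicy`, the metric names, and the thresholds are hypothetical stand-ins for whatever the policy actually encodes:

```python
from dataclasses import dataclass

@dataclass
class SwitchoverPolicy:
    traffic_type: str
    max_latency_ms: float
    max_loss: float

def violates(policy, predicted):
    # would the predicted effect of the switchover break the policy?
    return (predicted["latency_ms"] > policy.max_latency_ms
            or predicted["loss"] > policy.max_loss)

def choose_path(policy, first_path, second_path, predicted_effect):
    # switch only when the predicted effect would not violate the policy
    return first_path if violates(policy, predicted_effect) else second_path

policy = SwitchoverPolicy("voice", max_latency_ms=150.0, max_loss=0.01)
chosen = choose_path(policy, "path-1", "path-2",
                     predicted_effect={"latency_ms": 120.0, "loss": 0.005})
```

The same predicted effect with 200 ms of latency would keep the voice flow pinned to the first path, since the switchover would violate the policy.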
Abstract:
In one embodiment, a device in a network monitors performance data for a first predictive model. The first predictive model is used to make proactive decisions in the network. The device maintains a supervisory model based on the monitored performance data for the first predictive model. The device identifies a time period during which the supervisory model predicts that the first predictive model will perform poorly. The device causes a switchover from the first predictive model to a second predictive model at a point in time associated with the time period, in response to identifying the time period.
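As a toy sketch, the supervisory model could be as simple as a per-hour accuracy history with a threshold; the hour-of-day bucketing, threshold, and model names are assumptions for illustration:

```python
from collections import defaultdict

class SupervisoryModel:
    """Tracks the primary model's accuracy by hour of day and predicts
    the hours during which it tends to perform poorly."""
    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self._by_hour = defaultdict(list)

    def record(self, hour, accuracy):
        self._by_hour[hour].append(accuracy)

    def poor_hours(self):
        # hours whose mean historical accuracy falls below the threshold
        return sorted(h for h, accs in self._by_hour.items()
                      if sum(accs) / len(accs) < self.threshold)

    def active_model(self, hour):
        # cause a switchover to the backup model during predicted-poor hours
        return "backup" if hour in self.poor_hours() else "primary"

sup = SupervisoryModel()
for acc in (0.9, 0.85, 0.92):
    sup.record(10, acc)
for acc in (0.5, 0.6, 0.55):
    sup.record(18, acc)  # primary model historically poor at 18:00
```

The switchover is scheduled proactively for 18:00, before the primary model's performance actually degrades again.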
Abstract:
In one embodiment, a management system determines respective capability information of machine learning systems, the capability information including at least an action the respective machine learning system is configured to perform. The management system receives, for each of the machine learning systems, respective performance scoring information associated with the respective action, and computes a degree of freedom for each machine learning system to perform the respective action based on the performance scoring information. Accordingly, the management system then specifies the respective degree of freedom to the machine learning systems. In one embodiment, the management system comprises a management device that computes a respective trust level for the machine learning systems based on receiving the respective performance scoring information, and a policy engine that computes the degree of freedom based on receiving the trust level. In further embodiments, the machine learning system performs the action based on the degree of freedom.
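The management-device/policy-engine split might be sketched as two small functions, assuming an exponentially weighted trust score and three illustrative freedom tiers; the tier names and cutoffs are hypothetical:

```python
def trust_level(scores, alpha=0.3):
    # management device: exponentially weighted trust computed from
    # successive performance scoring information
    trust = scores[0]
    for s in scores[1:]:
        trust = alpha * s + (1 - alpha) * trust
    return trust

def degree_of_freedom(trust):
    # policy engine: map the trust level to how freely the machine
    # learning system may perform its action
    if trust >= 0.9:
        return "act-autonomously"
    if trust >= 0.6:
        return "recommend-only"
    return "report-only"

trust = trust_level([0.8, 0.9, 0.95])  # scoring feedback over time
dof = degree_of_freedom(trust)
```

Even with improving scores the weighted trust has not yet crossed the top tier, so the system is granted a recommend-only degree of freedom until more evidence accumulates.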