Abstract:
A system and method are provided for updating a network including at least one optical circuit switch (OCS) to transition from an existing network topology to a new network topology. One or more intermediate topologies between the existing topology and the new topology are created. Creating the intermediate topologies includes selecting first links to be added to the existing topology without removing links, selecting additional links to be added to the existing topology upon removal of one or more existing links, and adding one or more of the selected first and additional links to the existing topology to create a first intermediate topology. It is determined whether any of the selected first and additional links are still to be added, and if no selected first and additional links are to be added, remaining links are removed. The transition from the existing topology to the first intermediate topology is then effected.
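To make the stepwise construction concrete, the following sketch models links as node pairs subject to a fixed per-node port budget; the greedy ordering, the ports_per_node parameter, and the helper names are illustrative assumptions rather than the patented procedure.

    # A minimal sketch, assuming undirected links stored as canonical (node, node)
    # tuples and a uniform per-node port budget; not the patented procedure itself.
    from collections import Counter

    def intermediate_topologies(existing, target, ports_per_node):
        """Yield successive link sets stepping from `existing` toward `target`."""
        current, goal = set(existing), set(target)
        steps = []
        while current != goal:
            used = Counter()
            for a, b in current:
                used[a] += 1
                used[b] += 1
            progressed = False
            # "First" links: target links that fit without removing anything.
            for a, b in sorted(goal - current):
                if used[a] < ports_per_node and used[b] < ports_per_node:
                    current.add((a, b))
                    used[a] += 1
                    used[b] += 1
                    progressed = True
            if not progressed:
                # Nothing fits: drop one obsolete link so that an "additional"
                # link becomes addable in the next round; once nothing is left
                # to add, this same branch removes the remaining obsolete links.
                obsolete = sorted(current - goal)
                if not obsolete:
                    break
                current.discard(obsolete[0])
            steps.append(set(current))
        return steps

Each element of steps corresponds to one intermediate topology that the network can be reconfigured into before the next transition is effected.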
Abstract:
The present disclosure describes a system and method for reducing total batch completion time using a max-min fairness process. In some implementations, the max-min fairness process described herein reduces the batch completion time by collectively routing the batches in a way that targets providing the same effective path capacity across all requests. More particularly, given a network shared by batches of flows, total throughput is increased with max-min fairness (and therefore batch completion time decreased) if the nth percentile fastest flow of a batch cannot increase its throughput without decreasing the throughput of the nth percentile fastest flow of another batch whose throughput is not greater than that of the first batch.
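The fairness criterion above follows the usual max-min pattern. As a concrete illustration only, the standard progressive-filling computation below derives max-min fair rates for flows that share links; the flow-to-link mapping and capacities are hypothetical, and this is not the collective batch-routing process itself.

    def max_min_rates(flows, capacity):
        """flows: {flow: [links it crosses]}; capacity: {link: capacity}.
        Returns max-min fair per-flow rates by progressive filling."""
        rate = {f: 0.0 for f in flows}
        frozen = set()
        remaining = dict(capacity)
        while len(frozen) < len(flows):
            # Count unfrozen flows on each link and raise all of them equally
            # until the tightest remaining link saturates.
            active = {l: sum(1 for f, ls in flows.items()
                             if f not in frozen and l in ls)
                      for l in remaining}
            step = min(remaining[l] / n for l, n in active.items() if n > 0)
            for f in flows:
                if f not in frozen:
                    rate[f] += step
                    for l in flows[f]:
                        remaining[l] -= step
            # Flows crossing a saturated link can no longer increase their rate
            # without decreasing a flow that is no faster, so they are frozen.
            for f, ls in flows.items():
                if f not in frozen and any(remaining[l] <= 1e-9 for l in ls):
                    frozen.add(f)
        return rate

    # Example: two 10-unit links shared by three flows.
    rates = max_min_rates({"f1": ["A"], "f2": ["A", "B"], "f3": ["B"]},
                          {"A": 10, "B": 10})
    # rates == {"f1": 5.0, "f2": 5.0, "f3": 5.0}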
Abstract:
The present disclosure describes a system and method for reducing total batch completion time using a per-destination max-min fairness scheme. In a distributed computer system, worker nodes often simultaneously return responses to a server node. In some distributed computer systems, multiple batches can traverse a network at any one given time. The nodes of the network are often unaware of the batches other nodes are sending through the network. Accordingly, in some implementations, the different batches encounter different effective path capacities as nodes send flows through links that are or become bottlenecked. The per-destination max-min fairness scheme described herein reduces the total batch completion time by collectively routing the batches in a way that targets providing substantially uniform transmission times without underutilizing the network.
Abstract:
Systems and methods for increasing bandwidth in a computer network are provided. A computer network can include a first lower level switch having a first port and a second port. The computer network can include a second lower level switch having a first port and a second port. The computer network can include an upper level switch having respective ports directly coupled to ports of the first and second lower level switches. A third port of the upper level switch can couple to a first port of a passive optical splitter. The passive optical splitter can have second and third ports coupled to respective ports of the first and second lower level switches. The passive optical splitter can be configured to transmit signals received at its first port as output signals on both of its second and third ports.
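As a toy illustration of the wiring just described, the dictionary below models the direct links and the splitter fan-out; all switch and port names are hypothetical.

    # Hypothetical port-to-port wiring for the described topology.
    links = {
        ("upper", 1): ("lower1", 1),    # direct upper-to-lower1 link
        ("upper", 2): ("lower2", 1),    # direct upper-to-lower2 link
        ("upper", 3): ("splitter", 1),  # upper's third port feeds the splitter
        ("splitter", 2): ("lower1", 2), # splitter output to lower1
        ("splitter", 3): ("lower2", 2), # splitter output to lower2
    }

    def deliver(src_port):
        """Return the set of ports that receive a signal sent from src_port."""
        dst = links.get(src_port)
        if dst is None:
            return set()
        if dst == ("splitter", 1):
            # The passive splitter mirrors its first-port input on ports 2 and 3.
            return {links[("splitter", 2)], links[("splitter", 3)]}
        return {dst}

    # A signal sent out the upper switch's third port reaches both lower switches.
    assert deliver(("upper", 3)) == {("lower1", 2), ("lower2", 2)}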
Abstract:
A system for configuring a network topology in a data center is disclosed. The data center includes nodes having ports capable of supporting data links that can be connected to other nodes. The system includes a memory and a processing unit coupled to the memory. The processing unit receives demand information indicative of demands between nodes. The processing unit determines a set of constraints on the network topology based on the nodes, feasible data links between the nodes, and the demand information. The processing unit determines an objective function based on a sum of data throughput across data links satisfying demands. The processing unit performs an optimization of the objective function subject to the set of constraints using a linear program. The processing unit configures the network topology by establishing data links between the nodes according to results of the optimization.
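One way to picture such a formulation is the small linear program below, written with SciPy's linprog; the node names, per-node link budgets, per-link capacity, and the one-hop flow model are illustrative assumptions, not the patented formulation.

    # A minimal LP sketch: decide how many (fractional) links to build on each
    # feasible node pair and how much demand each serves, maximizing throughput.
    from scipy.optimize import linprog

    nodes = ["A", "B", "C"]
    ports = {"A": 4, "B": 4, "C": 4}               # link budget per node
    feasible = [("A", "B"), ("A", "C"), ("B", "C")]
    demand = {("A", "B"): 3.0, ("A", "C"): 2.0, ("B", "C"): 5.0}
    cap = 1.0                                       # capacity per established link

    n_links, n_dem = len(feasible), len(demand)
    # Variables: x_l links built per feasible pair, then t_d throughput per demand.
    c = [0.0] * n_links + [-1.0] * n_dem            # minimize -(total throughput)

    A_ub, b_ub = [], []
    for n in nodes:                                 # port budget per node
        A_ub.append([1.0 if n in l else 0.0 for l in feasible] + [0.0] * n_dem)
        b_ub.append(ports[n])
    for j, d in enumerate(demand):                  # t_d <= cap * x_d (one-hop model)
        A_ub.append([-cap if l == d else 0.0 for l in feasible]
                    + [1.0 if k == j else 0.0 for k in range(n_dem)])
        b_ub.append(0.0)

    bounds = [(0, None)] * n_links + [(0, demand[d]) for d in demand]
    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    built = dict(zip(feasible, result.x[:n_links])) # links to establish per node pair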
Abstract:
The present disclosure provides for the determination of bandwidth allocation of inter-block traffic in a data center network. It employs a number of optimization objectives and a heuristic water-filling strategy to avoid producing unnecessary paths and to avoid determining paths that would be unavailable when actually needed. Allocation may be adjusted incrementally upon node and link failure, for instance to perform only the minimal allocation changes necessary. If demand between a source and a destination cannot be satisfied, a decomposition process may be used to allocate remaining demand. One aspect constructs a graph for route computation based on inter-block topology. Here, the graph starts at the highest level of abstraction, with each node representing a middle block, and the abstraction level is gradually reduced to identify paths of mixed abstraction levels that satisfy additional demand.
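One simple reading of the water-filling heuristic is sketched below: a source-to-destination demand is spread over candidate paths in preference order, touching a new path only once the earlier ones saturate, and any remainder is handed to the decomposition step. The path names, capacities, and ordering rule are assumptions for illustration.

    def water_fill(demand, paths):
        """paths: ordered list of (path_id, available_capacity).
        Returns per-path allocations plus the unsatisfied remainder."""
        alloc = {}
        remaining = demand
        for path_id, capacity in paths:
            if remaining <= 0:
                break                       # avoid producing unnecessary paths
            take = min(capacity, remaining)
            if take > 0:
                alloc[path_id] = take
                remaining -= take
        # A non-zero remainder is what the decomposition step (e.g. dropping to
        # a lower abstraction level) would then try to place.
        return alloc, remaining

    # Example: 10 units of inter-block demand over three candidate paths.
    alloc, leftover = water_fill(10, [("direct", 6), ("via-X", 3), ("via-Y", 5)])
    # alloc == {"direct": 6, "via-X": 3, "via-Y": 1}, leftover == 0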
Abstract:
As an overview, the present disclosure presents a system for improving network optimization. In particular, the disclosure discusses a unified system for control of data routing in a dynamic network. In some implementations, edge devices (i.e., hosts or exterior switches) are interconnected through a network fabric (i.e., a plurality of interior switches). The hosts and switches include forwarding engines, which determine the next destination of incoming traffic. The disclosure discusses a network controller that collects application requirements and programs the forwarding engines of the edge devices and the network fabric responsive to the application requirements.
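A toy model of that control flow, with hypothetical class and field names, might look like the following: the controller gathers per-application requirements and programs matching forwarding entries into the edge devices and the fabric.

    class ForwardingEngine:
        """Per-device forwarding state (host, exterior switch, or interior switch)."""
        def __init__(self, name):
            self.name = name
            self.table = {}                 # match (e.g. prefix) -> next hop

        def program(self, match, next_hop):
            self.table[match] = next_hop

    class NetworkController:
        def __init__(self, edges, fabric):
            self.edges = edges              # name -> ForwardingEngine at the edge
            self.fabric = fabric            # name -> ForwardingEngine in the fabric

        def apply(self, requirements):
            """requirements: iterable of (app_prefix, ingress_edge, fabric_path)."""
            for prefix, edge_name, path in requirements:
                # Steer the application's traffic into the chosen fabric path.
                self.edges[edge_name].program(prefix, path[0])
                for hop, nxt in zip(path, path[1:]):
                    self.fabric[hop].program(prefix, nxt)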
Abstract:
Systems and methods of configuring a computer network are provided. A first stage having F switches and a second stage having S switches can be provided. Each switch in the first stage of switches can form M communication links with switches in the second stage of switches. Each switch in the second stage can form N communication links with switches in the first stage of switches. Communication links between respective switch pairs can be assigned. Each switch pair can include one switch in the first stage of switches and one switch in the second stage of switches. The number of communication links assigned to at least one switch pair can differ from the number of communication links assigned to at least a second switch pair.
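The uneven striping can be pictured with the small sketch below: each of F first-stage switches spreads its M uplinks over S second-stage switches, so some switch pairs receive one more link than others whenever S does not divide M. The rotation rule is an illustrative assumption.

    def assign_links(F, S, M):
        """Return links[f][s] = number of links between first-stage switch f
        and second-stage switch s."""
        links = [[M // S] * S for _ in range(F)]
        extra = M % S
        for f in range(F):
            # Rotate which second-stage switches receive the leftover links so
            # the extra links are spread across the second stage.
            for k in range(extra):
                links[f][(f * extra + k) % S] += 1
        return links

    # Example: F=4, S=3, M=5 uplinks per first-stage switch. Every pair gets at
    # least one link, and the two extra links per first-stage switch rotate, so
    # some pairs carry 2 links while others carry 1.
    striping = assign_links(4, 3, 5)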
Abstract:
This disclosure provides systems, methods and apparatus for providing a network verification system (NVS) to analyze and detect anomalies and errors within a network. The NVS requests forwarding tables from each of the switches within the network being analyzed, and generates directed forwarding graphs for each subnet within the network. Certain graph properties of the directed forwarding graphs are analyzed to detect anomalies or errors in the subnets represented by the directed forwarding graphs. In some implementations, the NVS can generate the directed forwarding graphs in parallel. In some implementations, the NVS can be implemented on a MapReduce system.
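As a concrete (and simplified) illustration of the per-subnet check, the sketch below builds a directed forwarding graph from each switch's entry for a subnet and flags two common anomalies, loops and blackholes; it uses networkx, and the table layout and property set are assumptions, not the full NVS analysis.

    import networkx as nx

    def forwarding_graph(subnet, tables):
        """tables: {switch: {subnet: next_hop_switch or None}}."""
        g = nx.DiGraph()
        for switch, table in tables.items():
            g.add_node(switch)
            next_hop = table.get(subnet)
            if next_hop is not None:
                g.add_edge(switch, next_hop)
        return g

    def check_subnet(subnet, tables, destination):
        g = forwarding_graph(subnet, tables)
        anomalies = []
        if not nx.is_directed_acyclic_graph(g):
            anomalies.append("forwarding loop")
        for node in g.nodes:
            # A non-destination switch with no next hop blackholes the subnet.
            if node != destination and g.out_degree(node) == 0:
                anomalies.append(f"blackhole at {node}")
        return anomalies

Because each subnet's graph is independent, checks like this can be fanned out per subnet, which is what makes parallel or MapReduce execution natural.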
Abstract:
A method for weighted data traffic routing can include generating an integer hash value based on a header of a data packet and encoding the integer hash value to generate a search key for a content addressable memory included in a data switch. The method can also include performing a lookup in the content addressable memory to match the search key with one of a plurality of prefixes stored in the content addressable memory, the plurality of prefixes including an encoded set of routing weights associated with a plurality of egress ports of the data switch. The method can further include forwarding the data packet on an egress port of the plurality of egress ports associated with the one of the plurality of prefixes in the content addressable memory.
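A simplified software model of that lookup is sketched below: the header is hashed to an integer, encoded as a fixed-width binary search key, and first-matched against prefixes whose share of the key space encodes each egress port's weight. The hash function, key width, and prefix table are illustrative assumptions, not the switch's actual pipeline.

    import zlib

    KEY_BITS = 8

    def search_key(header: bytes) -> str:
        h = zlib.crc32(header) & ((1 << KEY_BITS) - 1)   # integer hash of the header
        return format(h, f"0{KEY_BITS}b")                # encoded binary search key

    # Prefix table emulating the content addressable memory: weights 2:1:1 over
    # ports 1..3 ("0" covers half the key space, "10" and "11" a quarter each).
    PREFIXES = [("0", 1), ("10", 2), ("11", 3)]

    def egress_port(header: bytes) -> int:
        key = search_key(header)
        for prefix, port in PREFIXES:                    # TCAM-style first match
            if key.startswith(prefix):
                return port
        raise LookupError("no matching prefix")

    # Headers hash roughly uniformly, so about half the packets leave on port 1.
    port = egress_port(b"10.0.0.1->10.0.0.2:443")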