Abstract:
Systems and methods for increasing bandwidth in a computer network are provided. A computer network can include a first lower level switch having a first port and a second port. The computer network can include a second lower level switch having a first port and a second port. The computer network can include an upper level switch having respective ports directly coupled to ports of the first and second lower level switches. A third port of the upper level switch can couple to a first port of a passive optical splitter. The passive optical splitter can have second and third ports coupled to respective ports of the first and second lower level switches. The passive optical splitter can be configured to transmit signals received at its first port as output signals on both of its second and third ports.
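As a purely illustrative aid (not part of the abstract), the connectivity it describes can be modeled as a small adjacency map; the device labels "upper", "lower1", "lower2", and "splitter" below are hypothetical names for the switches and splitter in the abstract.

    # Illustrative sketch: a plain adjacency map of the described port-level topology.
    links = {
        ("lower1", "port1"): ("upper", "port1"),     # direct link, first lower level switch
        ("lower2", "port1"): ("upper", "port2"),     # direct link, second lower level switch
        ("upper", "port3"):  ("splitter", "port1"),  # upper switch feeds the splitter
    }

    # The passive splitter copies whatever arrives on its first port to both
    # of its other ports, each of which attaches to a lower level switch.
    splitter_fanout = {("splitter", "port1"): [("splitter", "port2"), ("splitter", "port3")]}
    splitter_attachments = {
        ("splitter", "port2"): ("lower1", "port2"),
        ("splitter", "port3"): ("lower2", "port2"),
    }

    def downstream_receivers(signal_entry=("upper", "port3")):
        """Return the lower level switch ports that receive a signal the upper
        switch transmits into the splitter."""
        splitter_in = links[signal_entry]
        return [splitter_attachments[p] for p in splitter_fanout[splitter_in]]

    print(downstream_receivers())   # both lower level switches receive the signal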
Abstract:
A router residing in a network comprises at least one ingress port, at least one egress port, and a processor programmed to compare at least two label switch paths, determine potential conflicts between the at least two label switch paths based on the ingress ports and egress ports utilized by the label switch paths, and determine a selected identifier to be assigned to each label switch path. The processor is configured to assign a common identifier if no conflict exists. A storage medium is operatively coupled to the processor for storing the selected identifiers related to the label switch paths. The processor may be configured to determine that a conflict exists between two label switch paths if they utilize the same ingress port on the router and different egress ports on the router.
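The conflict rule can be sketched in a few lines of Python; this is a hedged illustration under assumed data structures (LabelSwitchPath, a simple identifier pool), not the patent's implementation.

    # Hypothetical sketch of the conflict rule described above.
    from dataclasses import dataclass
    from itertools import count

    @dataclass(frozen=True)
    class LabelSwitchPath:
        name: str
        ingress_port: int
        egress_port: int

    def conflicts(a, b):
        # Two LSPs conflict when they enter on the same ingress port
        # but leave on different egress ports.
        return a.ingress_port == b.ingress_port and a.egress_port != b.egress_port

    def assign_identifiers(lsps):
        """Give non-conflicting LSPs a common identifier; give each conflicting
        LSP its own identifier."""
        ids, next_id = {}, count(1)
        common_id = next(next_id)
        for lsp in lsps:
            if any(conflicts(lsp, other) for other in lsps if other is not lsp):
                ids[lsp.name] = next(next_id)   # conflict: unique identifier
            else:
                ids[lsp.name] = common_id       # no conflict: shared identifier
        return ids

    paths = [LabelSwitchPath("A", ingress_port=1, egress_port=5),
             LabelSwitchPath("B", ingress_port=1, egress_port=6),   # conflicts with A
             LabelSwitchPath("C", ingress_port=2, egress_port=5)]
    print(assign_identifiers(paths))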
Abstract:
Exemplary embodiments allocate network traffic among multiple paths in a network, which may include one or more preferred paths (e.g., shortest paths) and one or more alternative paths (e.g., non-shortest paths). In one embodiment, network traffic in the form of flows may be allocated to the preferred paths until the allocation of additional network traffic would exceed a predetermined data rate. Additional flows may then be sent over the alternative paths, which may be longer than the preferred paths. The paths to which each flow is assigned may be dynamically updated, and in some embodiments the path assignment for a particular flow may time out after a predetermined time. Accordingly, the flow traffic of each path may be balanced based on real-time traffic information.
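A minimal sketch of this allocation policy follows; the path names, rate cap, and timeout values are assumptions chosen for illustration rather than values from the abstract.

    # Hedged sketch of the described allocation: fill the preferred path up to a
    # rate cap, spill further flows to the alternative path, expire stale pins.
    import time

    RATE_CAP_BPS = 10e9        # assumed per-path data-rate threshold
    ASSIGNMENT_TTL = 60.0      # assumed timeout (seconds) for a flow's path pin

    paths = {"preferred": 0.0, "alternative": 0.0}   # allocated rate per path
    assignments = {}                                 # flow_id -> (path, rate, timestamp)

    def allocate(flow_id, flow_rate_bps, now=None):
        """Pin a flow to the preferred path unless that would exceed the cap;
        otherwise spill it to the (possibly longer) alternative path."""
        now = time.time() if now is None else now
        # Expire stale assignments and release their rate, so paths can be
        # rebalanced using fresh traffic information.
        for fid, (path, rate, ts) in list(assignments.items()):
            if now - ts > ASSIGNMENT_TTL:
                paths[path] -= rate
                del assignments[fid]
        if flow_id in assignments:
            return assignments[flow_id][0]
        path = "preferred" if paths["preferred"] + flow_rate_bps <= RATE_CAP_BPS else "alternative"
        paths[path] += flow_rate_bps
        assignments[flow_id] = (path, flow_rate_bps, now)
        return path

    print(allocate("flow-1", 6e9))   # preferred
    print(allocate("flow-2", 6e9))   # would exceed the cap, goes to alternative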
Abstract:
The present technology considers multi-stage network topologies where it is not possible to evenly stripe uplinks from a lower stage of the network topology to switching units in an upper stage of the topology. This technology proposes techniques to improve overall throughput and to deliver uniform performance to all end hosts despite uneven connectivity among the different stages. To achieve improved network performance in the case of asymmetric connectivity, more flows may be sent to some egress ports than to others, thus weighting some ports more heavily than others and resulting in Weighted Cost Multi Path (WCMP) flow distribution.
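One common way to realize WCMP-style weighting, shown below as a hedged sketch rather than the patent's exact mechanism, is to replicate each egress port in a multipath table in proportion to its weight and hash flows into that table.

    # Sketch: weighted flow distribution via replicated multipath-table entries.
    import hashlib

    def build_multipath_table(port_weights):
        """port_weights: {egress_port: integer weight}. Heavier ports get more
        table entries and therefore receive more flows."""
        table = []
        for port, weight in sorted(port_weights.items()):
            table.extend([port] * weight)
        return table

    def select_port(table, flow_tuple):
        # Hash the flow's 5-tuple so every packet of a flow picks the same entry.
        digest = hashlib.sha256(repr(flow_tuple).encode()).hexdigest()
        return table[int(digest, 16) % len(table)]

    table = build_multipath_table({"port1": 3, "port2": 1})   # assumed 3:1 weighting
    flow = ("10.0.0.1", "10.0.0.2", 6, 12345, 80)
    print(select_port(table, flow))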
Abstract:
Exemplary embodiments provide changes to routing schemes, i.e., WCMP groups or WCMP sets, installed in a network traffic distribution table, e.g., a multipath table. WCMP groups of a multipath table are updated to accommodate a new WCMP group. This can be achieved by reducing the size of the existing WCMP groups in the multipath table, shrinking them just enough to make room for the new WCMP group. The objective is to minimize the number of existing WCMP groups that must be reduced before the new WCMP group can be installed in the multipath table.
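For illustration only, one greedy way to free table entries while touching as few existing groups as possible is sketched below; the table capacity, the proportional-rounding shrink rule, and the group contents are all assumptions, not the abstract's method.

    # Hedged sketch: shrink the fewest existing WCMP groups to fit a new group.
    TABLE_CAPACITY = 32   # assumed total number of multipath table entries

    def group_size(group):
        return sum(group.values())   # group: {egress_port: replicated entry count}

    def shrink(group, target_size):
        """Scale a group's per-port entry counts down toward target_size while
        keeping at least one entry per port (the weights become coarser)."""
        scale = target_size / group_size(group)
        return {port: max(1, int(entries * scale)) for port, entries in group.items()}

    def make_room(existing_groups, new_group):
        used = sum(group_size(g) for g in existing_groups.values())
        needed = group_size(new_group) - (TABLE_CAPACITY - used)
        # Shrink the largest groups first so the fewest groups are touched.
        for name in sorted(existing_groups, key=lambda n: -group_size(existing_groups[n])):
            if needed <= 0:
                break
            g = existing_groups[name]
            reduced = shrink(g, max(len(g), group_size(g) - needed))
            needed -= group_size(g) - group_size(reduced)
            existing_groups[name] = reduced
        return existing_groups

    groups = {"g1": {"p1": 9, "p2": 3}, "g2": {"p1": 8, "p2": 8}}   # 28 entries used
    print(make_room(groups, {"p1": 5, "p2": 3}))                    # needs 8, only 4 free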
Abstract:
Systems and methods for increasing bandwidth in a computer network are provided. A computer network can include a first lower level switch, first and second upper level switches, first and second passive optical splitters, and a mirror. The first passive optical splitter can have a first port directly coupled to the first upper level switch and a second port directly coupled to the second upper level switch. The second passive optical splitter can have a port directly coupled to a port of the first passive optical splitter and a port directly coupled to the first lower level switch. The mirror can be coupled to a port of the second passive optical splitter and can reflect an optical signal received from the second passive optical splitter to the first and second upper level switches through the second passive optical splitter and the first passive optical splitter.
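The optical path the abstract describes can be traced on a toy hop map; this is purely an illustrative aid with hypothetical device labels, not part of the abstract.

    # Illustrative trace of the described optical path.
    hops = {
        "lower1":           ["splitter2"],          # lower level switch feeds splitter 2
        "splitter2":        ["mirror"],             # splitter 2 passes the signal to the mirror
        "mirror":           ["splitter2_return"],   # mirror reflects it back through splitter 2
        "splitter2_return": ["splitter1"],          # ... and on into splitter 1
        "splitter1":        ["upper1", "upper2"],   # splitter 1 fans out to both upper switches
    }

    def trace(start="lower1"):
        """Follow the signal until it reaches devices with no further hop."""
        frontier, reached = [start], []
        while frontier:
            node = frontier.pop()
            nxt = hops.get(node, [])
            if not nxt:
                reached.append(node)
            frontier.extend(nxt)
        return sorted(reached)

    print(trace())   # ['upper1', 'upper2']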
Abstract:
A multi-stage network is provided, where the network includes a first stage comprising a first plurality of network switching devices, the first plurality of network switching devices being classified into switching groups. The network further includes a second stage comprising a second plurality of network switching devices. A linking configuration, comprising a plurality of links between the first plurality of network switching devices and the second plurality of network switching devices, couples the first stage to the second stage. Each first stage network switching device in a given switching group includes the same number of links to any given second stage network switching device as each other first stage network switching device in that group.
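The group property can be checked mechanically; the sketch below is a hedged illustration with an assumed link-count matrix and switch names, not a configuration taken from the abstract.

    # Sketch: verify that every switch in a switching group has identical
    # per-target link counts toward each second stage switch.
    links = {
        # links[first_stage_switch][second_stage_switch] = number of parallel links
        "s1a": {"t1": 2, "t2": 1},
        "s1b": {"t1": 2, "t2": 1},   # same per-target counts as s1a
        "s2a": {"t1": 1, "t2": 2},
        "s2b": {"t1": 1, "t2": 2},
    }
    groups = {"group1": ["s1a", "s1b"], "group2": ["s2a", "s2b"]}

    def group_is_uniform(group_members):
        """All members of a switching group must present identical link counts
        toward every second stage switch."""
        reference = links[group_members[0]]
        return all(links[m] == reference for m in group_members[1:])

    print(all(group_is_uniform(members) for members in groups.values()))   # True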
Abstract:
Systems and methods of configuring a datacenter network are provided. A datacenter network can have a first stage of switches and a second stage of switches. First stage switch groups, each including at least one first stage switch, can be defined. Second stage switch groups, each including at least one second stage switch, can be defined. For each first stage switch group of a first set of first stage switches, a communication link can be assigned between each first stage switch and each second stage switch in a respective second stage switch group. For each first stage switch group of a second set of first stage switches, a communication link can be assigned between each first stage switch and a single second stage switch of each second stage switch group.
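The two assignment rules can be sketched as follows; the group sizes, the set membership, and the pairing of first stage switches with "respective" second stage groups are illustrative assumptions.

    # Hedged sketch of the two link-assignment rules described above.
    second_stage_groups = {"G1": ["u1", "u2"], "G2": ["u3", "u4"]}
    first_set = ["s1", "s2"]     # first stage switches using the dense rule
    second_set = ["s3", "s4"]    # first stage switches using the sparse rule

    def assign_links():
        links = []
        # Rule 1: each switch in the first set links to every switch in its
        # respective second stage group (assumed pairing: s1 -> G1, s2 -> G2).
        for s, group in zip(first_set, second_stage_groups.values()):
            links += [(s, u) for u in group]
        # Rule 2: each switch in the second set links to a single switch
        # of each second stage group.
        for s in second_set:
            links += [(s, group[0]) for group in second_stage_groups.values()]
        return links

    for link in assign_links():
        print(link)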
Abstract:
Overlapping flow rules are included in a ternary content addressable memory (TCAM) while still enabling a hardware counter to be incremented for each of the overlapping rules when a packet matching those rules is transmitted through the TCAM. In a given set of flow specifications, a first flow specification is identified that overlaps with a second flow specification. Rules are determined corresponding to the first flow specification, the second flow specification, and the intersection of the first and second flow specifications. Priorities are assigned to each of the rules, with the rule corresponding to the intersection assigned a higher priority than the rules corresponding to the first and second flow specifications. The rules are then stored in the TCAM.
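A simplified sketch of computing the intersection rule and its priority is given below; the wildcard-based field matching and the example flow specifications are assumptions for illustration and do not capture full prefix or range semantics.

    # Hedged sketch: intersect two simple flow specs and give the intersection
    # the highest priority before installing all three rules.
    WILDCARD = "*"

    def field_intersection(a, b):
        if a == WILDCARD:
            return b
        if b == WILDCARD or a == b:
            return a
        return None   # the fields are disjoint

    def intersect(spec_a, spec_b):
        """Return the flow spec matched by both specs, or None if disjoint."""
        result = {}
        for field in set(spec_a) | set(spec_b):
            merged = field_intersection(spec_a.get(field, WILDCARD), spec_b.get(field, WILDCARD))
            if merged is None:
                return None
            result[field] = merged
        return result

    spec_a = {"src_ip": "10.0.0.0/24", "dst_port": WILDCARD}
    spec_b = {"src_ip": WILDCARD, "dst_port": "80"}
    overlap = intersect(spec_a, spec_b)

    # The intersection rule gets the higher priority, so a packet matching both
    # specifications hits it first and its counter accounts for the overlap.
    rules = [(2, overlap), (1, spec_a), (1, spec_b)]   # (priority, flow spec)
    for priority, spec in rules:
        print(priority, spec)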