Abstract:
A system forwards congestion management messages to a source host, updating the source address in the management message. The system may determine that the congestion management message was triggered responsive to an initial communication that was previously forwarded by the system. The system may use header translation within a single addressing scheme and/or may translate the congestion management message into a different type to support forwarding to the source of the initial communication. The system may use portions of the payload of the congestion management message to determine the source of the initial communication and to derive a different header for the translated congestion management message.
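A minimal Python sketch of the forwarding idea described above; the dictionary-based message layout, the echoed-header field, and the address_map lookup are illustrative assumptions rather than the claimed message format.

# Sketch: forward a congestion management message back to the source of the
# initial communication. Field names and the echoed-header layout are
# assumptions for illustration only.

def forward_congestion_message(cm_msg, device_address, address_map):
    """Derive a new header and forward the congestion management message.

    cm_msg         : dict with 'header' and 'payload' entries
    device_address : address written into the updated source field
    address_map    : maps an echoed original source to the host to notify
    """
    # The payload is assumed to echo part of the header of the initial
    # communication that triggered the congestion message.
    echoed = cm_msg['payload'].get('echoed_header', {})
    original_source = address_map.get(echoed.get('src'))

    if original_source is None:
        # The triggering traffic was not previously forwarded by this system.
        return None

    # Header translation: the destination becomes the source of the initial
    # communication, and the source address is updated.
    new_header = dict(cm_msg['header'])
    new_header['dst'] = original_source
    new_header['src'] = device_address
    return {'header': new_header, 'payload': cm_msg['payload']}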
Abstract:
Network devices facilitate flow management through packet marking. The network devices may be switches, routers, bridges, hubs, or any other network device. The packet marking may include analyzing received packets to determine when the received packets meet a marking criterion, and then applying a configurable marking function to mark the packets in a particular way. The marking capability may facilitate deadline-aware end-to-end flow management, as one specific example. More generally, the marking capability may facilitate traffic management actions such as visibility actions and flow management actions.
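As a rough illustration of the configurable marking step, the Python sketch below marks a packet when a caller-supplied criterion holds; the queue-depth criterion and the congestion flag are example choices, not the only marking functions the abstract contemplates.

# Sketch: apply a configurable marking function when a marking criterion is met.

from dataclasses import dataclass, field

@dataclass
class Packet:
    header: dict = field(default_factory=dict)

def mark_if_needed(packet, queue_depth, criterion, marking_fn):
    """Mark the packet when criterion(packet, queue_depth) is satisfied."""
    if criterion(packet, queue_depth):
        marking_fn(packet)
    return packet

# Example configuration: mark when the output queue is more than 80% full.
def congestion_criterion(packet, queue_depth):
    return queue_depth > 0.8

def congestion_mark(packet):
    packet.header['congestion_experienced'] = True

pkt = mark_if_needed(Packet(), queue_depth=0.9,
                     criterion=congestion_criterion,
                     marking_fn=congestion_mark)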
Abstract:
Network devices facilitate network tracing using tracing packets that travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The network tracing may include sending tracing packets down each of multiple routed paths between a source and a destination, at each hop through the network, or down a selected subset of those paths. The network devices may add tracing information to the tracing packets, which an analysis system may review to determine characteristics of the network and of the potentially many paths between a source and a destination.
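The sketch below, assuming a dictionary-based tracing packet and a list of next hops, shows one hop adding its tracing information and replicating the packet down each selected path; the record fields are examples of what the analysis system might review.

# Sketch: one hop appends device-local tracing information and fans the
# tracing packet out over the selected paths.

import copy
import time

def trace_hop(tracing_packet, device_id, queue_depth, next_hops):
    """Append this device's trace record and replicate the packet per path."""
    record = {
        'device': device_id,
        'timestamp': time.time(),
        'queue_depth': queue_depth,
    }
    tracing_packet.setdefault('trace', []).append(record)

    # Send an independent copy down every selected path so the analysis
    # system can compare the characteristics of the different paths.
    return [(hop, copy.deepcopy(tracing_packet)) for hop in next_hops]

copies = trace_hop({'probe_id': 1}, device_id='sw-03', queue_depth=0.2,
                   next_hops=['port-1', 'port-2'])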
Abstract:
A distributed switch architecture using permutation switching. In one embodiment, the distributed switch architecture facilitates connections between a plurality of ingress nodes and a plurality of egress nodes, wherein each of the plurality of ingress nodes and plurality of egress nodes are coupled to a plurality of ports (e.g., 40 gigabit Ethernet (GbE), 100 GbE, etc.). A plurality of crossbar switch modules are provided that are configured for coupling to a single output from each of the plurality of ingress nodes, and for coupling to a single input from each of the plurality of egress nodes. Permutations of connections for a crossbar switch module are defined by a permutation connection set that is stored in a permutation engine. Each permutation connection in the permutation connection set can be designed to couple one of the outputs from the plurality of ingress nodes to one of the inputs from the plurality of egress nodes, wherein the permutation connection set can ensure that each of the plurality of ingress nodes has an opportunity to connect with each of the plurality of egress nodes.
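One way to picture a permutation connection set is as a bijection from ingress-node outputs to egress-node inputs for a given cycle. The Python sketch below uses simple rotations so that, over N cycles, every ingress node is paired with every egress node; the rotation schedule is an illustrative choice, not the claimed scheduling algorithm.

# Sketch: generate permutation connection sets for an N x N crossbar switch
# module. Each set pairs every ingress-node output with a distinct
# egress-node input.

def permutation_schedule(n):
    """Yield n permutation connection sets for an n x n crossbar."""
    for offset in range(n):
        # connections[ingress] = egress node coupled to that ingress this cycle
        yield {ingress: (ingress + offset) % n for ingress in range(n)}

for cycle, connections in enumerate(permutation_schedule(4)):
    print(f"cycle {cycle}: {connections}")
# Over the 4 cycles, ingress node 0 connects to egress nodes 0, 1, 2, and 3,
# and likewise for every other ingress node.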
Abstract:
Network devices add annotation information to network packets as they travel through the network devices. The network devices may be switches, routers, bridges, hubs, or any other network device. The annotation information may be information specific to the network devices, as opposed to simply the kinds of information available at application servers that receive the network packets. As just a few examples, the annotation information may include switch buffer levels, routing delay, routing parameters affecting the packet, switch identifiers, power consumption, and heat, moisture, or other environmental data.
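A small Python sketch of the annotation step, assuming a dictionary packet representation; the field names mirror the examples listed above (buffer levels, routing delay, identifiers, environmental data), but the layout is an assumption, not the patented format.

# Sketch: add device-specific annotation information to a packet in transit.

def annotate(packet, device_state):
    packet.setdefault('annotations', []).append({
        'switch_id': device_state['id'],
        'buffer_level': device_state['buffer_level'],
        'routing_delay_us': device_state['routing_delay_us'],
        'power_w': device_state['power_w'],
        'temperature_c': device_state['temperature_c'],
    })
    return packet

pkt = annotate({'payload': b'data'},
               {'id': 'sw-07', 'buffer_level': 0.42, 'routing_delay_us': 12,
                'power_w': 85.0, 'temperature_c': 41.5})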
Abstract:
Processing techniques in a network switch help reduce latency in the delivery of data packets to a recipient. The processing techniques include internal cut-through. The internal cut-through may bypass input port buffers by directly forwarding packet data that has been received to an output port. At the output port, the packet data is buffered for processing and communication out of the switch.
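The sketch below captures the internal cut-through decision under the assumption that output readiness can be tested per arriving chunk of packet data; the deque buffers and the readiness flag are stand-ins for the switch's actual queues and scheduling state.

# Sketch: internal cut-through. When the output port can accept data, the
# arriving packet data bypasses the input-port buffer and goes straight to
# the output port, where it is buffered for processing and transmission.

from collections import deque

input_buffer = deque()
output_buffer = deque()

def on_packet_data(data, output_ready):
    if output_ready:
        output_buffer.append(data)   # cut-through: skip the input buffer
    else:
        input_buffer.append(data)    # fall back to buffering at the input port

on_packet_data(b'chunk-1', output_ready=True)
on_packet_data(b'chunk-2', output_ready=False)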
Abstract:
A system for multicast switching for distributed devices may include an ingress node including an ingress memory and an egress node including an egress memory, where the ingress node is communicatively coupled to the egress node. The ingress node may be operable to receive a portion of a multicast frame over an ingress port, bypass the ingress memory and provide the portion to the egress node when the portion satisfies an ingress criterion, otherwise receive and store the entire frame in the ingress memory before providing the frame to the egress node. The egress node may be operable to receive the portion from the ingress node, bypass the egress memory for the portion and provide the portion to an egress port when an egress criterion is satisfied, otherwise receive and store the entire multicast frame in the egress memory before providing the multicast frame to the egress port.
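A condensed Python sketch of the two bypass decisions, assuming the ingress and egress criteria are supplied as callables; the criteria themselves, as in the abstract, are left open.

# Sketch: ingress-side and egress-side handling of a multicast frame portion.

def ingress_handle(portion, frame, ingress_memory, ingress_criterion, send_to_egress):
    if ingress_criterion(portion):
        send_to_egress(portion)       # bypass the ingress memory
    else:
        ingress_memory.append(frame)  # store the entire frame first
        send_to_egress(frame)

def egress_handle(portion, frame, egress_memory, egress_criterion, send_to_port):
    if egress_criterion(portion):
        send_to_port(portion)         # bypass the egress memory
    else:
        egress_memory.append(frame)   # store the entire multicast frame first
        send_to_port(frame)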
Abstract:
The systems and methods disclosed herein allow a switch (in a packet-switching network) to track buffer statistics and trigger an event, such as a hardware interrupt or a system snapshot, in response to the buffer statistics reaching a threshold that may indicate an impending problem. Since the switch itself triggers the event to alert the network administrator, the administrator no longer needs to sift through mountains of data to identify potential problems. Also, since the switch triggers the event before a problem arises, the administrator can take remedial action before the problem occurs. This type of event-triggering mechanism makes the administration of packet-switching networks more manageable.
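A minimal sketch of the event-triggering mechanism, assuming the switch exposes buffer occupancy as a fraction and that the triggered event is a snapshot callback; the threshold value and the callback are illustrative.

# Sketch: track buffer statistics and trigger an event when a threshold that
# may indicate an impending problem is reached.

class BufferMonitor:
    def __init__(self, threshold, on_threshold):
        self.threshold = threshold
        self.on_threshold = on_threshold

    def update(self, occupancy):
        if occupancy >= self.threshold:
            # Alert the administrator before the problem actually occurs.
            self.on_threshold(occupancy)

def take_snapshot(occupancy):
    print(f"snapshot: buffer occupancy at {occupancy:.0%}")

monitor = BufferMonitor(threshold=0.85, on_threshold=take_snapshot)
monitor.update(0.90)   # crosses the threshold and triggers the event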