Abstract:
A system for providing oversubscription of pipeline bandwidth comprises a steer module, an absorption buffer, an ingress packet processor (IPP), a memory management unit (MMU), and a main packet buffer. The steer module receives packets that include start of packet (SOP), middle of packet (MOP), and end of packet (EOP) cells, attaches a packet identifier to the cells, passes the MOP and EOP cells to the MMU, and stores the SOP cells and EOP metadata in the absorption buffer. The IPP processes the SOP cells and EOP metadata and passes the processed SOP cells and EOP metadata to the MMU. The MMU stores the MOP, EOP, and processed SOP cells in the main packet buffer; upon receiving the processed EOP metadata of a packet, combines that packet's processed SOP cell, MOP cells, and EOP cell to reconstruct the packet; and queues each reconstructed packet in an egress port queue.
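As a rough illustration of the cell-splitting and reassembly flow described above, the following Python sketch models the steer module and MMU. The class and method names (SteerModule, Mmu, CellKind, drain_to_ipp, and so on) are hypothetical, the IPP is reduced to a simple hand-off, and reassembly is triggered when the processed SOP cell and EOP metadata reach the MMU; none of this is taken from the claimed implementation.

```python
# Minimal sketch, assuming hypothetical names; illustrates only the
# split of SOP / MOP / EOP cells and the later recombination by the MMU.
from collections import defaultdict
from dataclasses import dataclass
from enum import Enum, auto


class CellKind(Enum):
    SOP = auto()  # start of packet
    MOP = auto()  # middle of packet
    EOP = auto()  # end of packet


@dataclass
class Cell:
    kind: CellKind
    payload: bytes
    packet_id: int | None = None  # attached by the steer module


class Mmu:
    """Buffers cells and reconstructs a packet once its EOP metadata is processed."""

    def __init__(self):
        self.main_packet_buffer = defaultdict(list)
        self.egress_queue = []

    def store(self, cell: Cell):
        self.main_packet_buffer[cell.packet_id].append(cell)

    def on_processed_eop_metadata(self, packet_id: int):
        # Recombine SOP, MOP (arrival order), and EOP cells, then enqueue.
        cells = sorted(self.main_packet_buffer.pop(packet_id),
                       key=lambda c: c.kind.value)
        self.egress_queue.append(b"".join(c.payload for c in cells))


class SteerModule:
    """Attaches packet IDs, absorbs SOP cells for the IPP, steers MOP/EOP to the MMU."""

    def __init__(self, mmu: Mmu):
        self.mmu = mmu
        self.absorption_buffer = []  # SOP cells (and EOP metadata) awaiting the IPP
        self.next_id = 0

    def receive(self, cells: list[Cell]):
        packet_id, self.next_id = self.next_id, self.next_id + 1
        for cell in cells:
            cell.packet_id = packet_id
            if cell.kind is CellKind.SOP:
                self.absorption_buffer.append(cell)   # held for the IPP
            else:
                self.mmu.store(cell)                  # MOP/EOP go straight to the MMU

    def drain_to_ipp(self):
        """Stand-in for the IPP: process SOP cells, hand results to the MMU."""
        while self.absorption_buffer:
            sop = self.absorption_buffer.pop(0)
            self.mmu.store(sop)                       # processed SOP cell
            self.mmu.on_processed_eop_metadata(sop.packet_id)


# Example: one packet split into SOP/MOP/EOP cells and reconstructed.
mmu = Mmu()
steer = SteerModule(mmu)
steer.receive([Cell(CellKind.SOP, b"hdr|"),
               Cell(CellKind.MOP, b"body|"),
               Cell(CellKind.EOP, b"tail")])
steer.drain_to_ipp()
assert mmu.egress_queue == [b"hdr|body|tail"]
```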
Abstract:
Disclosed are various embodiments for multi-homing in an extended bridge, including both multi-homing of port extenders and multi-homing of end stations. In various embodiments, a controlling bridge device receives a packet via an ingress virtual port and determines a destination virtual port link aggregation group based at least in part on a destination media access control (MAC) address of an end station in the packet. The controlling bridge device selects one of multiple egress virtual ports of the destination virtual port link aggregation group. The end station of the extended bridge is reachable through any of the egress virtual ports of the destination virtual port link aggregation group. The controlling bridge device forwards the packet through the selected egress virtual port, and the forwarded packet includes an identifier of a destination virtual port to which the end station is connected.
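A minimal sketch of that forwarding decision follows. The hash-based selection among the LAG members is an assumption (a common approach, not necessarily the one claimed), and the names VPortLag, select_egress_vport, and forward are hypothetical.

```python
# Sketch under stated assumptions: MAC lookup -> virtual-port LAG -> pick one
# member egress virtual port -> tag the packet with the destination vport ID.
import zlib
from dataclasses import dataclass


@dataclass(frozen=True)
class VPortLag:
    """A destination virtual-port link aggregation group (multi-homed end station)."""
    members: tuple[int, ...]   # egress virtual port IDs; any member reaches the end station
    destination_vport: int     # virtual port to which the end station is connected


def select_egress_vport(lag: VPortLag, src_mac: str, dst_mac: str) -> int:
    """Choose one egress virtual port of the LAG (flow-hash assumption)."""
    flow_hash = zlib.crc32(f"{src_mac}->{dst_mac}".encode())
    return lag.members[flow_hash % len(lag.members)]


def forward(fdb: dict[str, VPortLag], packet: dict) -> dict:
    """Look up the destination vport LAG by MAC, pick a member, tag the packet."""
    lag = fdb[packet["dst_mac"]]
    egress_vport = select_egress_vport(lag, packet["src_mac"], packet["dst_mac"])
    # The forwarded packet carries the identifier of the destination virtual port
    # so downstream port extender units can deliver it to the end station.
    return {**packet,
            "egress_vport": egress_vport,
            "dst_vport_id": lag.destination_vport}


# Example: end station multi-homed behind egress virtual ports 101 and 202.
fdb = {"aa:bb:cc:dd:ee:ff": VPortLag(members=(101, 202), destination_vport=17)}
out = forward(fdb, {"src_mac": "00:11:22:33:44:55",
                    "dst_mac": "aa:bb:cc:dd:ee:ff",
                    "payload": b"..."})
```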
Abstract:
A device with dynamically tunable heterogeneous latencies includes an input port configured to receive a packet via a network, and a processing module configured to determine multiple values corresponding to a number of qualifying parameters associated with the packet. The processing module may use the values to generate a selector value and may allocate a latency mode to the packet based on the selector value.
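The sketch below illustrates one way a selector value could be derived from the qualifying parameters and mapped to a latency mode. The weighted-sum selector, the threshold, the parameter names, and the two modes are illustrative assumptions, not the claimed selector function.

```python
# Sketch, assuming a weighted-sum selector over hypothetical qualifying parameters.
from enum import Enum


class LatencyMode(Enum):
    LOW_LATENCY = "minimal-buffering path"
    NORMAL = "standard store-and-forward path"


def selector_value(params: dict[str, int], weights: dict[str, int]) -> int:
    """Combine the values of the qualifying parameters into a single selector value."""
    return sum(weights[name] * value for name, value in params.items())


def allocate_latency_mode(params: dict[str, int],
                          weights: dict[str, int],
                          threshold: int) -> LatencyMode:
    """Allocate a latency mode to the packet based on the selector value."""
    return (LatencyMode.LOW_LATENCY
            if selector_value(params, weights) >= threshold
            else LatencyMode.NORMAL)


# Example: parameter values extracted from a received packet (hypothetical fields).
params = {"priority": 7, "ingress_port_class": 1, "traffic_class": 3}
weights = {"priority": 4, "ingress_port_class": 2, "traffic_class": 1}
mode = allocate_latency_mode(params, weights, threshold=20)  # -> LOW_LATENCY here
```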
Abstract:
A device implementing a scalable low-latency mesh may include a memory management unit, an egress processor, and an egress cell circuit that includes at least a first queue and a second queue. The memory management unit may be configured to buffer first cells for transmission. The egress cell circuit may be configured to queue the first cells from the memory management unit in the first queue, queue second cells from an off-chip memory management unit of another device in the second queue, and schedule the first cells from the first queue and the second cells from the second queue for transmission via the egress processor. The egress processor may be configured to transmit the first and second cells over at least one first port.
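A compact sketch of that two-queue egress cell circuit appears below. Round-robin scheduling between the local and remote queues is an assumption made for illustration, and the class names (EgressCellCircuit, EgressProcessor) are hypothetical.

```python
# Sketch: one queue for cells from the on-chip MMU, one for cells arriving from
# another device's off-chip MMU over the mesh; a scheduler feeds the egress processor.
from collections import deque
from itertools import cycle


class EgressCellCircuit:
    def __init__(self):
        self.local_queue = deque()    # first cells, from the on-chip MMU
        self.remote_queue = deque()   # second cells, from an off-chip MMU
        self._rr = cycle((self.local_queue, self.remote_queue))

    def enqueue_local(self, cell: bytes):
        self.local_queue.append(cell)

    def enqueue_remote(self, cell: bytes):
        self.remote_queue.append(cell)

    def schedule(self) -> bytes | None:
        """Pick the next cell for the egress processor (round-robin assumption)."""
        for _ in range(2):
            queue = next(self._rr)
            if queue:
                return queue.popleft()
        return None


class EgressProcessor:
    """Transmits scheduled cells over at least one port (stubbed as a list here)."""
    def __init__(self):
        self.port_tx = []

    def transmit(self, cell: bytes):
        self.port_tx.append(cell)


# Example: interleave a locally buffered cell with one received from another device.
circuit = EgressCellCircuit()
circuit.enqueue_local(b"cell-from-local-mmu")
circuit.enqueue_remote(b"cell-from-remote-mmu")
egress = EgressProcessor()
while (cell := circuit.schedule()) is not None:
    egress.transmit(cell)
```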
Abstract:
A network device implementing the subject system for end-to-end flow control may include at least one processor circuit that may be configured to detect that congestion is being experienced by at least one queue of a port and to identify another network device that is transmitting the downstream traffic being queued at the at least one queue of the port, the downstream traffic at least partially causing the congestion. The at least one processor circuit may be further configured to generate an end-to-end flow control message that comprises an identifier of the port, the end-to-end flow control message indicating that the downstream traffic should be flow controlled at the other network device. The at least one processor circuit may be further configured to transmit, out-of-band and through at least one intermediary network device, the end-to-end flow control message to the other network device.
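The sketch below models the congestion detection and out-of-band signaling described above. The message fields, the queue-depth threshold, and the mapping from a congested queue to its upstream source device are illustrative assumptions, as are the names FlowControlMessage and check_and_signal.

```python
# Sketch: detect a congested (port, queue) pair, identify the device sending the
# downstream traffic queued there, and send it an out-of-band flow-control message
# that names the congested port.
from dataclasses import dataclass


@dataclass
class FlowControlMessage:
    congested_device_id: str
    port_id: int                 # identifier of the congested port
    queue_id: int
    action: str = "pause"        # the downstream traffic should be flow controlled


def check_and_signal(queue_depths: dict[tuple[int, int], int],
                     threshold: int,
                     source_of: dict[tuple[int, int], str],
                     send_oob) -> None:
    """Notify the originating device for each (port, queue) whose depth exceeds the threshold."""
    for (port_id, queue_id), depth in queue_depths.items():
        if depth > threshold:                          # congestion detected
            upstream = source_of[(port_id, queue_id)]  # device transmitting this traffic
            msg = FlowControlMessage("switch-A", port_id, queue_id)
            # Sent out-of-band, possibly traversing intermediary network devices.
            send_oob(upstream, msg)


# Example wiring with a stubbed out-of-band transport.
sent = []
check_and_signal(
    queue_depths={(1, 0): 900, (2, 3): 10},
    threshold=512,
    source_of={(1, 0): "leaf-7", (2, 3): "leaf-9"},
    send_oob=lambda device, msg: sent.append((device, msg)),
)
```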