Abstract:
A first node of a packet switched network transmits at least one flow of protocol data units to at least one output context of one of a plurality of second nodes of the network. The first node includes X virtual output queues (VOQs). The first node receives, from at least one of the second nodes, at least one fair rate record. Each fair rate record corresponds to a particular second node output context and describes a recommended rate of flow to that output context. The first node allocates up to X of the VOQs among i) the flows corresponding to currently allocated VOQs and ii) the flows corresponding to the received fair rate records. The first node operates each allocated VOQ according to the corresponding recommended rate of flow until a deallocation condition obtains for that VOQ.
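A minimal Python sketch of this VOQ management, under stated assumptions: the record and class names, the rate-pacing loop, and the drain-to-empty deallocation condition are illustrative choices, not taken from the source.

```python
from dataclasses import dataclass, field

@dataclass
class FairRateRecord:
    # Hypothetical record: names a second-node output context and the
    # recommended rate of flow toward it (PDUs per second).
    output_context: str
    recommended_rate: float

@dataclass
class VOQ:
    output_context: str
    rate: float                      # recommended rate currently applied
    queue: list = field(default_factory=list)
    last_send: float = 0.0

class FirstNode:
    """Keeps at most X virtual output queues, one per active output context."""

    def __init__(self, x):
        self.x = x
        self.voqs = {}               # allocated VOQs keyed by output context

    def on_fair_rate_record(self, rec):
        # Allocate (or refresh) a VOQ for the reported output context,
        # never exceeding X allocated VOQs in total.
        if rec.output_context in self.voqs:
            self.voqs[rec.output_context].rate = rec.recommended_rate
        elif len(self.voqs) < self.x:
            self.voqs[rec.output_context] = VOQ(rec.output_context, rec.recommended_rate)

    def enqueue(self, output_context, pdu):
        voq = self.voqs.get(output_context)
        if voq is not None:
            voq.queue.append(pdu)

    def service(self, now):
        # Operate each allocated VOQ at its recommended rate (one PDU every
        # 1/rate seconds); deallocate once it drains, which stands in here
        # for "a deallocation condition obtains".
        for ctx in list(self.voqs):
            voq = self.voqs[ctx]
            if voq.queue and now - voq.last_send >= 1.0 / voq.rate:
                voq.queue.pop(0)     # "transmit" one protocol data unit
                voq.last_send = now
            if not voq.queue:
                del self.voqs[ctx]
```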
Abstract:
Techniques are described for a memory device. In various embodiments, a scheduler/controller is configured to manage data as it is read from or written to a memory. The memory is partitioned into a group of sub-blocks, a parity block is associated with the sub-blocks, and the sub-blocks are accessed to read data as needed. A pending write buffer is added to the group of memory sub-blocks. Such a buffer may be sized to equal the group of memory sub-blocks. The pending write buffer handles collisions caused by write accesses to the same block.
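The collision handling can be illustrated with a small model. The following Python sketch is illustrative only; the sub-block layout, XOR parity maintenance, and one-entry-per-location pending buffer are assumptions rather than the described device.

```python
class SubBlockMemory:
    """Hypothetical model: N sub-blocks plus one parity block, with a
    pending write buffer that absorbs a write whose target sub-block is
    busy serving a read in the same cycle."""

    def __init__(self, num_sub_blocks, words_per_block):
        self.blocks = [[0] * words_per_block for _ in range(num_sub_blocks)]
        self.parity = [0] * words_per_block   # XOR across the sub-blocks
        self.pending = {}                     # parked writes awaiting an idle cycle

    def _write(self, blk, addr, value):
        old = self.blocks[blk][addr]
        self.blocks[blk][addr] = value
        self.parity[addr] ^= old ^ value      # keep the parity block consistent

    def cycle(self, read=None, write=None):
        """One access cycle: an optional read (blk, addr) and write (blk, addr, value)."""
        result = None
        busy_blk = None
        if read is not None:
            blk, addr = read
            # Pending data for this location takes precedence over the array.
            result = self.pending.get((blk, addr), self.blocks[blk][addr])
            busy_blk = blk
        if write is not None:
            blk, addr, value = write
            if blk == busy_blk:
                # Collision with the read: park the write in the buffer.
                self.pending[(blk, addr)] = value
            else:
                self._write(blk, addr, value)
        # Opportunistically retire one pending write to an idle sub-block.
        for (blk, addr), value in list(self.pending.items()):
            if blk != busy_blk:
                self._write(blk, addr, value)
                del self.pending[(blk, addr)]
                break
        return result
```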
Abstract:
A work conserving scheduler can be implemented based on a ranking system to provide the scalability of time stamps while avoiding the fast search required by a traditional time stamp implementation. Each queue can be assigned a time stamp that is initially set to zero. The time stamp for a queue can be incremented each time a data packet from the queue is processed. To provide varying weights to the different queues, the time stamps can be incremented at varying rates. The data packets can be processed from the queues based on the tier rank order of the queues as determined from the time stamp associated with each queue. To increase the speed at which the ranking is determined, the ranking can be calculated from a subset of the bits defining the time stamp rather than the entire bit set.
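A minimal Python sketch of this ranking approach; the weight-to-increment mapping and the RANK_SHIFT used to select the subset of time stamp bits are hypothetical choices for illustration.

```python
class RankScheduler:
    """Hypothetical sketch: weighted selection using per-queue time stamps;
    the coarse rank is taken from the upper bits of the stamp so queues can
    be bucketed into tiers rather than fully sorted."""

    RANK_SHIFT = 4   # number of low-order time stamp bits ignored when ranking

    def __init__(self, weights):
        # Larger weight -> smaller increment -> queue is serviced more often.
        self.increments = {q: max(1, 16 // w) for q, w in weights.items()}
        self.stamps = {q: 0 for q in weights}          # all stamps start at zero
        self.queues = {q: [] for q in weights}

    def enqueue(self, q, packet):
        self.queues[q].append(packet)

    def rank(self, q):
        # Rank computed from a subset of the time stamp bits, not the full value.
        return self.stamps[q] >> self.RANK_SHIFT

    def dequeue(self):
        # Work conserving: only backlogged queues compete for service.
        backlogged = [q for q, pkts in self.queues.items() if pkts]
        if not backlogged:
            return None
        q = min(backlogged, key=self.rank)             # lowest rank tier wins
        self.stamps[q] += self.increments[q]           # advance stamp on service
        return self.queues[q].pop(0)
```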
Abstract:
A network switch includes a buffer to store network packets and packet descriptors (PDs) used to link the packets into queues for output ports. The buffer and PDs are shared among multiple traffic pools. The switch receives a multicast packet for queues in a given pool. The switch determines whether there is unused buffer space available for packets in the given pool based on a pool dynamic threshold, whether there is unused buffer space available for packets in each queue based on a queue dynamic threshold for the queue, whether there are unused PDs available to the given pool based on a pool dynamic threshold for PDs, and whether there are unused PDs available for each queue based on a queue dynamic threshold for PDs for the queue. The network switch admits the packet only into the queues for which all of the determining operations pass.
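A rough Python sketch of the four admission checks, assuming a simple alpha-scaled dynamic threshold; the scaling factor, counters, and class name are hypothetical.

```python
class MulticastAdmission:
    """Hypothetical sketch: a multicast packet is admitted only to queues
    whose pool-level and per-queue dynamic thresholds pass for both buffer
    space and packet descriptors (PDs)."""

    def __init__(self, total_buffer, total_pds, alpha=0.5):
        self.total_buffer = total_buffer
        self.total_pds = total_pds
        self.alpha = alpha            # assumed dynamic-threshold scaling factor
        self.pool_buf_used = {}       # buffer bytes used per pool
        self.pool_pd_used = {}        # PDs used per pool
        self.queue_buf_used = {}      # buffer bytes used per queue
        self.queue_pd_used = {}       # PDs used per queue

    def _free_buffer(self):
        return self.total_buffer - sum(self.pool_buf_used.values())

    def _free_pds(self):
        return self.total_pds - sum(self.pool_pd_used.values())

    def admit(self, pool, queues, pkt_len):
        """Return the subset of target queues for which all four checks pass."""
        admitted = []
        # Pool-level dynamic thresholds, scaled by currently unused resources.
        pool_buf_ok = self.pool_buf_used.get(pool, 0) + pkt_len <= self.alpha * self._free_buffer()
        pool_pd_ok = self.pool_pd_used.get(pool, 0) + 1 <= self.alpha * self._free_pds()
        if not (pool_buf_ok and pool_pd_ok):
            return admitted
        for q in queues:
            q_buf_ok = self.queue_buf_used.get(q, 0) + pkt_len <= self.alpha * self._free_buffer()
            q_pd_ok = self.queue_pd_used.get(q, 0) + 1 <= self.alpha * self._free_pds()
            if q_buf_ok and q_pd_ok:
                # Admit into this queue and charge the pool and queue counters.
                admitted.append(q)
                self.queue_buf_used[q] = self.queue_buf_used.get(q, 0) + pkt_len
                self.queue_pd_used[q] = self.queue_pd_used.get(q, 0) + 1
                self.pool_buf_used[pool] = self.pool_buf_used.get(pool, 0) + pkt_len
                self.pool_pd_used[pool] = self.pool_pd_used.get(pool, 0) + 1
        return admitted
```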
Abstract:
A network device is configured to transmit acknowledgement packets according to the length of its egress queue. The network device receives data packets from one or more endpoints and buffers the data packets in an egress buffer before transmitting the data packets. The network device also receives acknowledgement packets that are sent in response to data packets previously transmitted by the network device. The network device buffers the acknowledgement packets in an acknowledgement buffer. The network device transmits the acknowledgement packets at an acknowledgement rate that is based on a queue length of the egress buffer.
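A small Python sketch of ACK pacing keyed to egress queue length; the linear rate mapping and class name are assumptions for illustration, not the device's stated policy.

```python
class AckPacer:
    """Hypothetical sketch: acknowledgement packets are drained from an
    ACK buffer at a rate that shrinks as the egress (data) buffer fills."""

    def __init__(self, max_ack_rate, egress_capacity):
        self.max_ack_rate = max_ack_rate        # ACKs per second when egress is empty
        self.egress_capacity = egress_capacity  # egress buffer size in packets
        self.egress = []                        # buffered data packets awaiting transmit
        self.acks = []                          # buffered acknowledgement packets

    def on_data_packet(self, pkt):
        self.egress.append(pkt)

    def on_ack_packet(self, ack):
        self.acks.append(ack)

    def ack_rate(self):
        # Assumed mapping: rate decreases linearly with egress queue length;
        # step functions or other mappings would fit the same structure.
        fill = min(len(self.egress), self.egress_capacity) / self.egress_capacity
        return self.max_ack_rate * (1.0 - fill)

    def release_acks(self, interval):
        """Release the number of ACKs allowed within `interval` seconds."""
        allowed = int(self.ack_rate() * interval)
        released, self.acks = self.acks[:allowed], self.acks[allowed:]
        return released
```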
Abstract:
Multiple listlets function as a single master linked list to manage data packets across one or more banks of memory in first-in first-out (FIFO) order, while allowing multiple push and/or pop operations to be performed per cycle. Each listlet can be a linked list that tracks pointers and is stored in a different memory bank. The nodes can include a pointer to a data packet, a pointer to the next node in the listlet, and a next listlet identifier that identifies the listlet containing the next node in the master linked list. The head and tail of each listlet, as well as identifiers tracking the head and tail of the master linked list, can be maintained in cache. The individual listlets are updated as pointers are pushed onto and popped from the master linked list, so that the order of the master linked list is maintained.
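A compact Python sketch of listlets stitched into one master FIFO; the round-robin push policy, field names, and cache representation are hypothetical.

```python
class Node:
    __slots__ = ("data_ptr", "next_node", "next_listlet")
    def __init__(self, data_ptr):
        self.data_ptr = data_ptr      # pointer to the buffered data packet
        self.next_node = None         # next node within the same listlet
        self.next_listlet = None      # listlet holding the next node in master order

class MasterList:
    """Hypothetical sketch: several listlets, one per memory bank, stitched
    into a single FIFO. The per-listlet heads/tails and the master head/tail
    identifiers are the state a real design would keep in cache."""

    def __init__(self, num_listlets):
        self.heads = [None] * num_listlets   # per-listlet head
        self.tails = [None] * num_listlets   # per-listlet tail
        self.head_listlet = None             # listlet containing the master head
        self.tail_listlet = None             # listlet containing the master tail
        self.master_tail = None              # last node pushed onto the master list
        self.next_push = 0                   # simple rotation across banks

    def push(self, data_ptr):
        lid = self.next_push
        self.next_push = (self.next_push + 1) % len(self.heads)
        node = Node(data_ptr)
        if self.tails[lid] is None:
            self.heads[lid] = node
        else:
            self.tails[lid].next_node = node
        self.tails[lid] = node
        if self.tail_listlet is None:
            self.head_listlet = lid
        else:
            # Record which listlet holds the next node in master order.
            self.master_tail.next_listlet = lid
        self.master_tail = node
        self.tail_listlet = lid

    def pop(self):
        lid = self.head_listlet
        if lid is None:
            return None
        node = self.heads[lid]
        self.heads[lid] = node.next_node
        if self.heads[lid] is None:
            self.tails[lid] = None
        self.head_listlet = node.next_listlet
        if self.head_listlet is None:
            self.tail_listlet = None
            self.master_tail = None
        return node.data_ptr
```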