Abstract:
A method and devices for reducing the delay in end-to-end delivery of network packets may be achieved by having the transmission (TX) side of the source device tag each cell with a unique packet identifier and a byte offset parameter. The tagging allows the reception (RX) side of the destination device to perform on-the-fly assembly of cells into packets by placing them directly at the corresponding host buffer, and this may be done for multiple packets concurrently. Store-and-forward buffering is therefore not needed in either the source or the destination device, and the lowest possible end-to-end cut-through latency is achieved.
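
The sketch below is a minimal software model of the direct-placement idea described above, not the patented implementation; the class and method names (DirectPlacementReassembler, open_packet, place_cell) are hypothetical. It shows how a (packet identifier, byte offset) tag lets the RX side copy each cell payload straight into its final position in a per-packet host buffer, for many packets concurrently, without a reassembly staging buffer.

# Illustrative sketch (hypothetical names): on-the-fly assembly by direct placement.
class DirectPlacementReassembler:
    def __init__(self):
        self.host_buffers = {}   # packet_id -> bytearray sized for the packet
        self.bytes_placed = {}   # packet_id -> count of payload bytes written so far

    def open_packet(self, packet_id, packet_length):
        # A host buffer is reserved per packet before (or when) its first cell arrives.
        self.host_buffers[packet_id] = bytearray(packet_length)
        self.bytes_placed[packet_id] = 0

    def place_cell(self, packet_id, byte_offset, payload):
        # Place the cell payload at its final position; cell arrival order does not matter.
        buf = self.host_buffers[packet_id]
        buf[byte_offset:byte_offset + len(payload)] = payload
        self.bytes_placed[packet_id] += len(payload)
        if self.bytes_placed[packet_id] == len(buf):
            return bytes(buf)    # packet complete, deliverable without store-and-forward
        return None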
Abstract:
The disclosure relates to a traffic scheduling device for scheduling a transmission sequence of data packets stored in a plurality of traffic flow queues, wherein an eligibility state of each of the traffic flow queues for the scheduling is maintained in a hierarchical scheduling database describing a relationship among the plurality of traffic flow queues. The traffic scheduling device includes a plurality of interconnected memory cluster units. Each memory cluster unit is associated with one or more levels of the hierarchical scheduling database and is coupled to at least one co-processor. At least one co-processor is software-programmable to implement a scheduling algorithm. The traffic scheduling device also includes an interface to the plurality of traffic flow queues.
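
As a rough illustration of a hierarchical scheduling database with a software-programmable scheduling policy, the sketch below models each level as nodes that cache the eligibility of their children and lets a pluggable policy pick one eligible child per level; the memory cluster units and co-processors of the disclosed hardware are abstracted away, and all names (SchedNode, round_robin, schedule) are hypothetical.

# Illustrative sketch (hypothetical names, not the disclosed hardware partitioning).
class SchedNode:
    def __init__(self, children=None):
        self.children = children or []   # child nodes, or [] for a leaf (traffic flow queue)
        self.eligible = False            # maintained as queues become non-empty or empty

    def update_eligibility(self):
        # A node is eligible for scheduling if any of its children is eligible.
        if self.children:
            self.eligible = any(c.eligible for c in self.children)

def round_robin(node, state):
    # Example of a software-defined scheduling algorithm: rotating priority per level.
    eligible = [i for i, c in enumerate(node.children) if c.eligible]
    if not eligible:
        return None
    start = state.get(id(node), 0)
    pick = min(eligible, key=lambda i: (i - start) % len(node.children))
    state[id(node)] = (pick + 1) % len(node.children)
    return node.children[pick]

def schedule(root, policy, state):
    # Walk the hierarchy level by level until a leaf flow queue is selected.
    node = root
    while node and node.children:
        node = policy(node, state)
    return node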
Abstract:
The invention relates to a scheduling device for receiving a set of requests and providing a set of grants to the set of requests, the scheduling device comprising: a lookup vector prepare unit configured to provide a lookup vector prepared set of requests depending on the set of requests and a selection mask and to provide a set of acknowledgements to the set of requests; and a prefix forest unit coupled to the lookup vector prepare unit, wherein the prefix forest unit is configured to provide the set of grants as a function of the lookup vector prepared set of requests and to provide the selection mask based on the set of grants.
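
The following sketch is a serial software stand-in for the request/grant flow described above; it is an assumption about how such a scheme can behave, not the claimed prefix forest hardware, and all function names and the max_grants parameter are hypothetical. Requests are first masked (lookup vector prepare), a prefix count then grants at most a fixed number of them, and the grants are fed back as the next selection mask.

# Illustrative sketch (hypothetical, serial stand-in for the parallel prefix forest).
def prepare_lookup_vector(requests, selection_mask):
    # Lookup vector prepare: acknowledge the incoming requests and apply the mask.
    acks = list(requests)
    prepared = [r & m for r, m in zip(requests, selection_mask)]
    return prepared, acks

def prefix_grant(prepared, max_grants):
    # Grant a request only if fewer than max_grants requests precede it in the vector.
    grants, count = [], 0
    for bit in prepared:
        if bit and count < max_grants:
            grants.append(1)
            count += 1
        else:
            grants.append(0)
    return grants

def next_selection_mask(grants, size):
    # Feed the grants back as the next selection mask (here: mask out granted inputs,
    # or reopen all inputs once nothing was granted).
    return [0 if g else 1 for g in grants] if any(grants) else [1] * size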
Abstract:
The invention relates to a memory aggregation device for storing a set of input data streams and retrieving data to a set of output data streams, the memory aggregation device comprising: a set of first-in first-out (FIFO) memories each comprising an input and an output; an input interconnector configured to interconnect each one of the set of input data streams to each input of the set of FIFO memories according to an input interconnection matrix; an output interconnector configured to interconnect each output of the set of FIFO memories to each one of the set of output data streams according to an output interconnection matrix; an input selector; an output selector; and a memory controller.
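
A minimal behavioral model of such a memory aggregation device is sketched below under simplifying assumptions: the bank of FIFOs sits between an input crossbar and an output crossbar, each driven by a binary interconnection matrix, while the input selector, output selector and memory controller that choose those matrices each cycle are not modelled. The class name MemoryAggregator and the matrix conventions are hypothetical.

# Illustrative sketch (hypothetical model, not the claimed hardware).
from collections import deque

class MemoryAggregator:
    def __init__(self, num_fifos):
        self.fifos = [deque() for _ in range(num_fifos)]

    def write_cycle(self, input_words, input_matrix):
        # input_matrix[i][f] == 1 routes input stream i into FIFO f in this cycle.
        for i, word in enumerate(input_words):
            for f, connect in enumerate(input_matrix[i]):
                if connect and word is not None:
                    self.fifos[f].append(word)

    def read_cycle(self, output_matrix, num_outputs):
        # output_matrix[f][o] == 1 routes the head of FIFO f to output stream o.
        outputs = [None] * num_outputs
        for f, fifo in enumerate(self.fifos):
            for o, connect in enumerate(output_matrix[f]):
                if connect and fifo:
                    outputs[o] = fifo.popleft()
        return outputs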
Abstract:
A method for packet reassembly and reordering, comprising: receiving a cell sent by a source port, wherein the cell carries a Source Identification (SID), a packet sequence number and a cell sequence number; preprocessing the received cell according to the SID to determine whether the cell shall be inserted into a packet reassembly database; ordering cells in the packet reassembly database according to the packet sequence number to obtain a correctly ordered packet; if the correctly ordered packet is a complete packet, ordering the cells of the correctly ordered packet according to the cell sequence number to obtain correctly ordered cells; and performing a packet reassembly for the correctly ordered cells. Correspondingly, a network device and a communication system are provided.
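
The sketch below illustrates one possible data layout for the reassembly database described above; it is an assumption for clarity, not the claimed method, and the names (ReassemblyDatabase, insert, pop_complete, accept, expected_cells) are hypothetical. Cells are filed per source (SID), packets are tracked by packet sequence number, and a packet's cells are concatenated in cell-sequence-number order once the packet is complete.

# Illustrative sketch (hypothetical data layout for SID/PSN/CSN reassembly).
class ReassemblyDatabase:
    def __init__(self):
        self.per_source = {}   # sid -> {packet_seq: {cell_seq: payload}}

    def insert(self, sid, packet_seq, cell_seq, payload, accept=lambda sid: True):
        # Preprocessing step: drop the cell if the SID check rejects it.
        if not accept(sid):
            return
        self.per_source.setdefault(sid, {}).setdefault(packet_seq, {})[cell_seq] = payload

    def pop_complete(self, sid, packet_seq, expected_cells):
        # Reassemble one packet once all of its cells have arrived, in cell-sequence order.
        cells = self.per_source.get(sid, {}).get(packet_seq, {})
        if len(cells) != expected_cells:
            return None
        ordered = b"".join(cells[c] for c in sorted(cells))
        del self.per_source[sid][packet_seq]
        return ordered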