Abstract:
A system and method of scheduling packets or cells for a switch device that includes a plurality of input ports each having at least one input queue, a plurality of switch units, and a plurality of output ports. There is generated, by each input port having a packet or cell in its at least one queue, a request to output the corresponding packet or cell to each of the output ports to which a corresponding packet or cell is to be sent, wherein the request includes a specific one of the plurality of switch units to be used in a transfer of the packet or cell from the corresponding input port to the corresponding output port, the specific one of the plurality of switch units being selected according to a first priority scheme. Access is granted, per output port per switch unit, to the requests made, the granting being based on a second priority scheme. Grants are accepted per input port per switch unit, the accepting being based on a third priority scheme. Packets and/or cells are outputted from the respective input ports to the respective output ports, based on the accepted grants, utilizing the corresponding switch units identified in the accepted grants.
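As a rough illustration of the three-phase request/grant/accept exchange described in this abstract, the Python sketch below models a single scheduling iteration. The priority schemes (lowest-index choices, input-index-modulo switch-unit selection) and all names are assumptions made for illustration, not the patented implementation.

```python
from collections import defaultdict

def schedule_iteration(voq_occupancy, num_switch_units):
    """One request/grant/accept pass.

    voq_occupancy: dict mapping (input_port, output_port) -> queued cell count.
    The three priority schemes are modeled as simple lowest-index choices
    purely for illustration.
    """
    # Phase 1: each input with a queued cell requests each destination output,
    # naming a specific switch unit chosen by a first priority scheme
    # (here: input index modulo the number of switch units).
    requests = defaultdict(list)          # (output, unit) -> [requesting inputs]
    for (inp, outp), count in voq_occupancy.items():
        if count > 0:
            unit = inp % num_switch_units
            requests[(outp, unit)].append(inp)

    # Phase 2: each (output port, switch unit) pair grants one request,
    # chosen by a second priority scheme (here: lowest input index).
    grants = defaultdict(list)            # (input, unit) -> [granting outputs]
    for (outp, unit), inputs in requests.items():
        grants[(min(inputs), unit)].append(outp)

    # Phase 3: each (input port, switch unit) pair accepts one grant,
    # chosen by a third priority scheme (here: lowest output index).
    accepted = []                         # (input, output, unit) transfers this cycle
    for (inp, unit), outputs in grants.items():
        accepted.append((inp, min(outputs), unit))
    return accepted

# Example: 4x4 switch, 2 switch units, a few occupied virtual output queues.
occupancy = {(0, 1): 3, (1, 1): 2, (2, 3): 1}
print(schedule_iteration(occupancy, num_switch_units=2))
```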
Abstract:
A data-packet processing method is used in a network system. The network system includes a buffer for optionally storing a data packet to be transferred, and the method includes steps of: determining a type of the data packet to be transferred; determining a storage state of a buffer where the data packet is to be temporarily stored before transferring; and storing the data packet into the buffer if the storage state of the buffer is a packet-accepting storage state; wherein the packet-accepting storage state of the buffer varies with the type of the data packet.
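One way to read the type-dependent admission rule is as a per-type occupancy threshold: the packet is stored only while the buffer is in a "packet-accepting" state for that packet's type. The thresholds, type names, and class structure below are illustrative assumptions only, not the method claimed.

```python
# Hypothetical per-type fill thresholds: a packet type is accepted only while
# current occupancy is below its threshold, so lower-priority traffic is refused first.
ACCEPT_THRESHOLDS = {"control": 1.0, "realtime": 0.9, "best_effort": 0.6}

class Buffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = []

    def accepting(self, pkt_type):
        # The "packet-accepting storage state" varies with the packet type.
        fill = len(self.packets) / self.capacity
        return fill < ACCEPT_THRESHOLDS.get(pkt_type, 0.5)

    def enqueue(self, packet, pkt_type):
        if self.accepting(pkt_type):
            self.packets.append(packet)
            return True
        return False          # caller may drop or back-pressure the packet

buf = Buffer(capacity=10)
print(buf.enqueue(b"payload", "best_effort"))   # True while buffer is under 60% full
```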
Abstract:
A transaction switch and integrated circuit incorporating the same, for switching data through a shared memory between a plurality of data interfaces that support different data protocols, namely packetized interfaces like InfiniBand and addressed data interfaces like PCI. The transaction switch also switches transactions commanding data transfers between the disparate protocol data interfaces and between those of the data interfaces having like protocols. For example, the transaction switch enables a hybrid InfiniBand channel adapter/switch to perform both InfiniBand packet to local bus protocol data transfers through the shared memory as well as InfiniBand packet switching between the multiple InfiniBand interfaces. The transactions are tailored for each interface type to include information needed by the particular interface type to perform a data transfer. The shared buffer memory, dynamically allocated by the transaction switch on a first-come-first-served basis, results in more efficient use of precious buffering resources than in a statically allocated scheme.
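The abstract contrasts dynamic, first-come-first-served allocation of the shared buffer memory with a static partition per interface. A minimal sketch of such an allocator follows; the class name, block granularity, and interface identifiers are assumptions, not the device's actual interface.

```python
class SharedBufferPool:
    """First-come-first-served allocator over a shared pool of fixed-size blocks."""
    def __init__(self, num_blocks):
        self.free_blocks = list(range(num_blocks))
        self.owner = {}                      # block index -> requesting interface

    def allocate(self, interface_id):
        # Any interface (InfiniBand port, PCI bus, ...) draws from the same pool,
        # so idle interfaces do not strand statically reserved buffers.
        if not self.free_blocks:
            return None                      # pool exhausted; caller must retry later
        block = self.free_blocks.pop()
        self.owner[block] = interface_id
        return block

    def release(self, block):
        del self.owner[block]
        self.free_blocks.append(block)

pool = SharedBufferPool(num_blocks=4)
b = pool.allocate("ib_port_0")
pool.release(b)
```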
Abstract:
The present invention provides a system and a method for filtering a plurality of frames sent between devices coupled to a fabric by Fibre Channel connections. Frames are reviewed against a set of individual frame filters. Each frame filter is associated with an action, and actions selected by filter matches are prioritized. Groups of devices are “zoned” together and frame filtering ensures that restrictions placed upon communications between devices within the same zone are enforced. Zone group filtering is also used to prevent devices not within the same zone from communicating. Zoning may also be used to create LUN-level zones, protocol zones, and access control zones. In addition, individual frame filters may be created that reference selected portions of frame header or frame payload fields.
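The per-frame filtering step can be pictured as matching each frame against a set of filters, each bound to an action, and taking the action of the highest-priority match. The field names, priorities, and actions in the sketch below are illustrative assumptions, not the claimed filter format.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FrameFilter:
    match: Callable[[dict], bool]   # predicate over selected header/payload fields
    action: str                     # e.g. "forward" or "drop"
    priority: int                   # higher value wins among matching filters

def filter_frame(frame, filters):
    """Return the action of the highest-priority matching filter (default: drop)."""
    matches = [f for f in filters if f.match(frame)]
    if not matches:
        return "drop"
    return max(matches, key=lambda f: f.priority).action

# Hypothetical zone rule: source and destination must share a zone; a LUN-level
# filter further restricts which logical units a zoned initiator may address.
filters = [
    FrameFilter(lambda fr: fr["s_id"] in fr["zone"] and fr["d_id"] in fr["zone"],
                action="forward", priority=1),
    FrameFilter(lambda fr: fr.get("lun") == 5, action="drop", priority=2),
]
frame = {"s_id": 0x010203, "d_id": 0x040506,
         "zone": {0x010203, 0x040506}, "lun": 5}
print(filter_frame(frame, filters))   # "drop": the LUN filter outranks the zone match
```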
Abstract:
An MIQ packet switch device and packet switching method are provided. The MIQ packet switch device performs cell-based switching of packet data, and includes one or more input queue arrays for buffering cells input through one or more input ports. Each of the one or more input queue arrays includes an input interface for outputting the cells to one or more output ports. The MIQ packet switch device further includes a switch matrix for switching and outputting each of the cells transferred by the input interface to a corresponding output port of the one or more output ports, and a scheduler for receiving descriptor information for cell scheduling from each of the one or more input queue arrays and creating control information for controlling each of the one or more input queue arrays to selectively output the cells, based on the descriptor information.
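A compact way to picture the scheduler's role: it collects a descriptor from every input queue array (here reduced to the destination of each array's head cell) and returns control information telling each array whether to release its head cell this cycle, so that no output port is driven by two arrays at once. The structure and names below are assumptions for illustration only.

```python
def schedule_cycle(head_descriptors):
    """head_descriptors: dict input_array -> destination output port of its head cell,
    or None if the array is empty. Returns dict input_array -> bool (send this cycle?).
    """
    granted_outputs = set()
    control = {}
    # Lowest-numbered array wins a contended output port in this simplified sketch.
    for array in sorted(head_descriptors):
        dest = head_descriptors[array]
        if dest is not None and dest not in granted_outputs:
            granted_outputs.add(dest)
            control[array] = True
        else:
            control[array] = False
    return control

print(schedule_cycle({0: 2, 1: 2, 2: 0, 3: None}))
# {0: True, 1: False, 2: True, 3: False}: only one array drives output 2 this cycle.
```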
Abstract:
A deficit round-robin scheduler including a round-robin table configured to store a plurality of cycle link lists, wherein each cycle link list includes a head flow identification (FLID) value identifying a first flow of the cycle link list, and a tail FLID value identifying a last flow of the cycle link list. A flow table is provided having a plurality of flow table entries, wherein each of the flow table entries is associated with a corresponding flow, and therefore has a corresponding FLID value. A packet queue is associated with each flow table entry, wherein each packet queue is capable of storing a plurality of packets. The deficit round-robin scheduler also includes an idle cycle register having an idle cycle entry corresponding with each of the cycle link lists, wherein each idle cycle entry identifies the corresponding cycle link list as active or idle.
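For reference, textbook deficit round-robin works roughly as sketched below: each flow carries a deficit counter that is topped up by a quantum when its turn arrives, and packets are dequeued while they fit within the deficit. The cycle-link-list and idle-cycle-register bookkeeping of the abstract is omitted; the names and quantum value are illustrative.

```python
from collections import deque

def drr_round(flows, quantum):
    """flows: dict FLID -> {'deficit': int, 'queue': deque of packet lengths}.
    Performs one round-robin pass and returns the list of (FLID, length) sent."""
    sent = []
    for flid, flow in flows.items():
        if not flow["queue"]:
            continue                     # an idle flow earns no deficit
        flow["deficit"] += quantum
        while flow["queue"] and flow["queue"][0] <= flow["deficit"]:
            length = flow["queue"].popleft()
            flow["deficit"] -= length
            sent.append((flid, length))
        if not flow["queue"]:
            flow["deficit"] = 0          # reset so credit does not accrue while idle
    return sent

flows = {1: {"deficit": 0, "queue": deque([300, 300])},
         2: {"deficit": 0, "queue": deque([1500])}}
print(drr_round(flows, quantum=500))     # [(1, 300)]: flow 2's packet must wait a round
```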
Abstract:
A system and method are described for synchronizing store-and-forward networks and for scheduling and transmitting continuous, periodic, predictable, time-sensitive, or urgent information such as real-time and high-priority messages over those networks. This enables packet-, cell-, and/or frame-based networks to efficiently switch voice, video, streaming, and other real-time or high-priority data at the layer one or physical level, thus ensuring that the delivery of selected information can be fast, on-time, immediate, non-blocked, non-congested, loss-less, and jitter-free, with guaranteed delivery and guaranteed quality of service.
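One concrete way to picture layer-one scheduling of periodic, time-sensitive traffic on a synchronized network is a repeating frame of time slots in which reserved flows always own the same slot and best-effort packets fill whatever remains. The frame length, slot assignments, and names below are purely illustrative assumptions.

```python
FRAME_SLOTS = 8                                    # repeating schedule length (assumed)
reserved = {0: "voice_flow_A", 4: "video_flow_B"}  # periodic slots owned by real-time flows

def transmit(slot_index, best_effort_queue):
    """Return what is sent in this slot of the common, synchronized schedule."""
    slot = slot_index % FRAME_SLOTS
    if slot in reserved:
        return reserved[slot]                      # reserved traffic is never blocked
    if best_effort_queue:
        return best_effort_queue.pop(0)            # leftover capacity for other traffic
    return None                                    # idle slot

queue = ["bulk_pkt_1", "bulk_pkt_2"]
print([transmit(i, queue) for i in range(6)])
# ['voice_flow_A', 'bulk_pkt_1', 'bulk_pkt_2', None, 'video_flow_B', None]
```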
Abstract:
A system and method are provided for resynchronizing backplane link management credit counters in a packet communications switch fabric. The method comprises: at an input port card ingress port, accepting information packets including cells and cell headers with destination information; modifying the destination information in the received cell headers; routing information packets between the input port card and output port cards on backplane data links through an intervening crossbar; at the input port card, maintaining a credit counter for each output port card channel; decrementing the counter in response to transmitting cells from the input port card; generating credits in response to transmitting cells from an output port card channel; sending the generated credits to increment the counter, using the modified destination information; and, using the generated credit flow to resynchronize the credit counter.
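The credit mechanism can be sketched as a per-channel counter at the ingress that is decremented for each cell sent and incremented for each credit returned from the egress channel; a resynchronization message carrying the egress view of outstanding cells corrects any drift caused by lost credits. The class, method names, and resync message format below are assumptions, not the claimed protocol.

```python
class ChannelCreditCounter:
    """Ingress-side credit counter for one output port card channel."""
    def __init__(self, initial_credits):
        self.initial = initial_credits
        self.credits = initial_credits

    def can_send(self):
        return self.credits > 0

    def on_cell_sent(self):
        self.credits -= 1                   # decremented when a cell leaves the input card

    def on_credit_received(self, n=1):
        self.credits += n                   # egress channel returns credits as it drains cells

    def resynchronize(self, cells_in_flight_reported):
        # A lost credit leaves the counter too low forever; a resync message carrying
        # the egress view of in-flight cells restores the correct value.
        self.credits = self.initial - cells_in_flight_reported

ctr = ChannelCreditCounter(initial_credits=8)
ctr.on_cell_sent(); ctr.on_cell_sent()      # two cells outstanding
ctr.on_credit_received()                    # only one credit comes back (one was lost)
ctr.resynchronize(cells_in_flight_reported=1)
print(ctr.credits)                          # 7, matching the one truly outstanding cell
```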
Abstract:
A fast pattern processor having an internal function bus and an external function bus. In one embodiment, a fast pattern processor includes: (1) an internal function bus, (2) an external function bus, (3) a context memory having a block buffer and an argument signature register wherein the block buffer includes processing blocks associated with a protocol data unit (PDU), (4) a pattern processing engine, associated with the context memory, that performs pattern matching and (5) a function interface system having (5A) a controller arbitration subsystem and (5B) a dispatch subsystem.
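Purely as an illustration of how such components might interact, the sketch below runs a pattern-matching pass over the processing blocks of a PDU held in a context memory and dispatches matched work either to an "internal" or an "external" function by way of a function-interface layer. Every pattern, handler name, and bus label here is an assumption, not the device's actual interface.

```python
import re

PATTERNS = {
    re.compile(rb"^\x45"): ("internal", "classify_ipv4"),   # hypothetical pattern -> handler
    re.compile(rb"HTTP/1\.1"): ("external", "count_http"),
}

def process_pdu(blocks):
    """blocks: list of byte strings (the processing blocks of one PDU)."""
    dispatched = []
    for block in blocks:
        for pattern, (bus, func) in PATTERNS.items():
            if pattern.search(block):
                # The function interface arbitrates and forwards the call on the
                # internal or external function bus; modeled here as a record.
                dispatched.append((bus, func))
    return dispatched

pdu_blocks = [b"\x45\x00\x00\x54", b"GET / HTTP/1.1\r\n"]
print(process_pdu(pdu_blocks))
# [('internal', 'classify_ipv4'), ('external', 'count_http')]
```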
Abstract:
The method for scheduling interconnections in an interconnecting fabric comprises the following steps. In a determined time slot, input selectors generate requests using a request pointer set, which is related to the determined time slot. Then, the requests are transmitted to output selectors, and the output selectors issue grants using a grant pointer set, which is also related to the determined time slot. In a further step, the grants are transmitted to the input selectors, and the input selectors update the request pointer set. These steps are repeated, wherein, for a further time slot, a further request pointer set and a further grant pointer set are used, which are related to the further time slot.
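The slot-indexed pointer sets can be sketched as per-slot round-robin pointers: for a given time slot, inputs request starting from that slot's request pointer, outputs grant starting from that slot's grant pointer, and only that slot's pointers are then advanced, while other slots keep their own independent pointers. Everything concrete below (port counts, update rule, names) is an assumption for illustration.

```python
N_PORTS, N_SLOTS = 4, 3
req_ptr = [[0] * N_PORTS for _ in range(N_SLOTS)]   # request pointer set per time slot
gnt_ptr = [[0] * N_PORTS for _ in range(N_SLOTS)]   # grant pointer set per time slot

def schedule_slot(slot, demand):
    """demand[i][o] is True if input i has traffic for output o in this slot."""
    # Requests: each input asks for the first desired output at or after its pointer.
    requests = {}
    for i in range(N_PORTS):
        for k in range(N_PORTS):
            o = (req_ptr[slot][i] + k) % N_PORTS
            if demand[i][o]:
                requests.setdefault(o, []).append(i)
                break
    # Grants: each requested output picks the first requester at or after its pointer,
    # then advances this slot's grant pointer past the chosen input.
    grants = {}
    for o, inputs in requests.items():
        for k in range(N_PORTS):
            i = (gnt_ptr[slot][o] + k) % N_PORTS
            if i in inputs:
                grants[i] = o
                gnt_ptr[slot][o] = (i + 1) % N_PORTS
                break
    # Inputs that received a grant update this slot's request pointer past that output.
    for i, o in grants.items():
        req_ptr[slot][i] = (o + 1) % N_PORTS
    return grants

demand = [[True, False, False, False]] * N_PORTS     # every input wants output 0
print(schedule_slot(0, demand))                      # {0: 0}: one input wins output 0
```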