Abstract:
A system includes a network interface configured to receive a message comprising a routing address and to forward the message in accordance with a route. The system further includes logic operatively connected to the network interface. The logic is configured to apply a mask to the routing address to determine a masked address, and to perform an exact match on the masked address.
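The mask-then-exact-match lookup above can be sketched as follows. This is a minimal illustration, not the patented implementation; the table contents, mask value, and function names are assumptions for the example.

```python
def masked_exact_match(routing_address: int, mask: int, route_table: dict):
    """Apply a mask to the routing address, then perform an exact match
    on the masked result against a table of known routes."""
    masked = routing_address & mask
    return route_table.get(masked)  # None when no route matches

# Usage: a /24-style mask over a 32-bit address (values are illustrative).
table = {0xC0A80100: "port-7"}   # 192.168.1.0/24 -> port-7
addr = 0xC0A80142                # 192.168.1.66
route = masked_exact_match(addr, 0xFFFFFF00, table)  # -> "port-7"
```

Masking before the match lets a single exact-match table (e.g. a hash table) answer prefix-style queries without a longest-prefix-match search.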
Abstract:
A device for instrumentation and traffic disposition of a network using one or more time-stamps may include a receiving port to receive a data packet. A device configuration module may be configured to determine whether the device is a boundary device located on a boundary of an instrumented sub-network of the network. If the determination is made that the device is a boundary device, a frame processing module may insert a first time-stamp at a first offset from a frame check sequence (FCS) field in a data frame associated with the data packet. Otherwise, a corresponding time-stamp may be inserted at a second offset from the FCS field. The one or more time-stamps may enable a receiving endpoint device of the network to determine timeliness information associated with the data packet.
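Inserting a time-stamp at an offset measured back from the FCS field might look like the sketch below. It assumes a frame represented as bytes with a 4-byte FCS at the end; the actual offsets chosen for boundary versus interior devices are not specified here and would be device configuration.

```python
FCS_LEN = 4  # assumption: 4-byte frame check sequence at the end of the frame

def insert_timestamp(frame: bytes, timestamp: bytes, offset_from_fcs: int) -> bytes:
    """Insert `timestamp` at `offset_from_fcs` bytes before the FCS field.

    A boundary device would use one offset value and interior devices
    another, so a chain of stamps accumulates ahead of the FCS.
    """
    insert_at = len(frame) - FCS_LEN - offset_from_fcs
    return frame[:insert_at] + timestamp + frame[insert_at:]

# Usage: stamp immediately before the FCS (offset 0).
frame = b"payload" + b"\x00" * FCS_LEN
stamped = insert_timestamp(frame, b"TS", 0)  # -> b"payloadTS" + FCS
```

Keeping the insertion point relative to the FCS means the stamp location is well defined regardless of payload length, though the FCS would need recomputation after insertion in a real device.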
Abstract:
A system for providing oversubscription of pipeline bandwidth comprises a steer module, an absorption buffer, an ingress packet processor (IPP), a memory management unit (MMU), and a main packet buffer. The steer module receives packets that include start of packet (SOP), middle of packet (MOP), and end of packet (EOP) cells, attaches a packet identifier to the cells, passes the MOP and EOP cells to the MMU, and stores the SOP cells and EOP metadata in the absorption buffer. The IPP processes the SOP cells and EOP metadata and passes the same to the MMU. The MMU stores the MOP, EOP, and processed SOP cells in the main packet buffer, combines, upon receiving the processed EOP metadata of each packet, the processed SOP cell, the MOP cells and the EOP cell of each packet to reconstruct each packet, and queues each reconstructed packet in an egress port queue.
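The MMU's combine step, reassembling each packet from its SOP, MOP, and EOP cells keyed by the attached packet identifier, can be sketched as below. This is an illustrative model only; cell ordering per packet and the class name are assumptions.

```python
from collections import defaultdict

class Reassembler:
    """Collect SOP/MOP/EOP cells per packet identifier and reconstruct
    the packet when its EOP cell arrives (cells assumed in order)."""

    def __init__(self):
        self.cells = defaultdict(list)

    def accept(self, packet_id, kind, payload):
        """kind is 'SOP', 'MOP', or 'EOP'. Returns the reconstructed
        packet bytes on EOP, otherwise None."""
        self.cells[packet_id].append(payload)
        if kind == "EOP":
            parts = self.cells.pop(packet_id)
            return b"".join(parts)
        return None
```

Keying by packet identifier is what allows cells of different packets to interleave through the pipeline (as oversubscription requires) yet still be reconstructed and queued per packet.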
Abstract:
Aspects of port empty transition scheduling are described herein. In one embodiment, when one or more cells are added to a queue in a network communications device, an enqueue indicator is generated. The enqueue indicator identifies a number of cells added to the queue. With reference to the enqueue indicator, a queue scheduler maintains a count of cells enqueued for communication and issues a port pick credit for a port of the network communications device. A port scheduler schedules a pick for communicating over the port with reference to the port pick credit and forwards the pick to the queue scheduler. In turn, the queue scheduler forwards a queue pick to the queue, and at least one of the cells is forwarded to dequeue logic. According to aspects of the embodiments described herein, empty port scheduling inefficiencies may be avoided and network throughput increased.
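The credit flow described above can be modeled minimally as follows. The one-credit-per-cell policy and the class names are assumptions for illustration; the point is that the port scheduler only picks when a credit exists, so an empty port is never scheduled.

```python
class QueueScheduler:
    """Tracks enqueued cells and issues port pick credits."""

    def __init__(self):
        self.cell_count = 0   # cells enqueued for communication
        self.credits = 0      # port pick credits issued

    def enqueue(self, num_cells):
        # The enqueue indicator identifies how many cells were added.
        self.cell_count += num_cells
        self.credits += num_cells

class PortScheduler:
    """Schedules a pick for the port only when a credit is available."""

    def schedule_pick(self, qs: QueueScheduler) -> bool:
        if qs.credits > 0:
            qs.credits -= 1      # consume the port pick credit
            qs.cell_count -= 1   # queue pick forwards a cell to dequeue
            return True
        return False             # no credit: port is empty, never picked
```

Because picks are gated on credits that only exist when cells were enqueued, the scheduler never wastes a slot on an empty port, which is the inefficiency the abstract says is avoided.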
Abstract:
Continuing to integrate more aggregate bandwidth and higher radix into switch devices is an economic imperative, because it creates value for both the supplier and the customer in large data center environments, which are an increasingly important part of the marketplace. While new silicon processes continue to shrink transistor and other chip feature dimensions, process technology cannot be relied upon as a key driver of power reduction. Transitioning from 28 nm to 16 nm is a special case in which FinFET provides additional power scaling, but subsequent FinFET nodes are not expected to deliver power reductions substantial enough to meet the desired increases in integration. The disclosed switch architecture attacks the power consumption problem by controlling the rate at which power-consuming activities occur.
Abstract:
A system includes a pre-trigger buffer and a post-trigger buffer for recording entries related to a specific network element. Buffer management monitoring circuitry captures entries leading up to a trigger criterion being met in the pre-trigger buffer and entries following the trigger criterion being met in the post-trigger buffer. The trigger criterion may include network element status, such as a threshold queue level, or an event, such as a dropped packet. The pre-trigger buffer may include a circular buffer in which older entries are overwritten by newer entries. Once the trigger criterion is met, the pre-trigger buffer contents are held while the post-trigger buffer fills. Once the post-trigger buffer fills, the contents of the buffers may be read.
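A software sketch of this capture scheme follows, with the pre-trigger buffer as a fixed-size circular buffer that freezes at the trigger while the post-trigger buffer fills. Class and method names are illustrative.

```python
from collections import deque

class TriggerCapture:
    """Circular pre-trigger buffer plus bounded post-trigger buffer."""

    def __init__(self, pre_size, post_size):
        self.pre = deque(maxlen=pre_size)  # older entries overwritten
        self.post = []
        self.post_size = post_size
        self.triggered = False

    def record(self, entry):
        if not self.triggered:
            self.pre.append(entry)               # rolling history
        elif len(self.post) < self.post_size:
            self.post.append(entry)              # fill after trigger

    def trigger(self):
        self.triggered = True                    # hold pre-trigger contents

    def read(self):
        """Contents are readable once the post-trigger buffer is full."""
        if self.triggered and len(self.post) >= self.post_size:
            return list(self.pre), list(self.post)
        return None
```

This mirrors a logic-analyzer-style capture: the circular buffer preserves the lead-up to the event (e.g. the queue levels before a drop) that a purely post-event log would miss.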
Abstract:
A network switch including a set of communication ports is provided. The communication ports may have an allocated prebuffer to store data during packet switching operations. The network switch may further include a calendar associated with the set of communication ports that provides a bandwidth configuration for the set of communication ports. The network switch may further include a secondary calendar that may be dynamically set up. The secondary calendar may provide an alternative bandwidth configuration strategy for the set of communication ports. The switch includes circuitry that may increase the prebuffer size and, upon the successful increase of the prebuffer size, reconfigure the set of communication ports from the original calendar to the secondary calendar, without a reboot. The circuitry may reset the prebuffer size after reconfiguration is complete, and the switch may continue operation according to the reconfigured settings.
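The grow-swap-reset sequence can be illustrated as below. This is a sketch of the ordering only; the calendar representation (a TDM slot list of port ids), sizes, and names are assumptions, and in hardware the enlarged prebuffer would absorb in-flight traffic for the duration of the swap rather than instantaneously.

```python
class SwitchConfig:
    """Models reconfiguring ports to a secondary calendar without a reboot."""

    def __init__(self, calendar, prebuffer_size):
        self.calendar = calendar                  # e.g. TDM slot list of ports
        self.prebuffer_size = prebuffer_size
        self._default_prebuffer = prebuffer_size

    def reconfigure(self, secondary_calendar, enlarged_prebuffer):
        # 1. Increase the prebuffer so in-flight data is absorbed.
        self.prebuffer_size = enlarged_prebuffer
        # 2. On success, swap to the secondary calendar (no reboot).
        self.calendar = secondary_calendar
        # 3. Reset the prebuffer once reconfiguration is complete.
        self.prebuffer_size = self._default_prebuffer
```

The enlarged prebuffer is what makes the swap hitless: ports keep receiving while the slot assignments change underneath them.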
Abstract:
A system and method for a flexible number of lookups in pipeline-based packet processors. Pipeline-based packet processors can be configured to allow multiple lookups per physical table. In one embodiment, a start of packet (SOP) cell is assigned to a first slot of a packet processing pipeline, and a non-SOP cell is assigned to a second slot of the packet processing pipeline. Access to a table by the second slot can be usurped by the SOP cell in the first slot.
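The usurping behavior can be sketched as a per-cycle arbiter: the second slot normally gets the table access, but an SOP cell in the first slot takes it over. Cell representation and function name are assumptions.

```python
def arbitrate_table_access(slot0_cell, slot1_cell):
    """Decide which pipeline slot accesses the physical table this cycle.

    Cells are dicts like {'sop': True}; a slot may be empty (None).
    Returns 'slot0', 'slot1', or None.
    """
    if slot0_cell and slot0_cell.get("sop"):
        return "slot0"                      # SOP cell usurps the lookup
    if slot1_cell:
        return "slot1"                      # second slot's normal access
    if slot0_cell:
        return "slot0"                      # slot1 empty; slot0 may use it
    return None
```

Letting SOP cells usurp accesses keeps header lookups (which only SOP cells need) on schedule while non-SOP cells, which carry payload rather than headers, yield the table.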