Abstract:
A multi-stage scheduler that provides improved bandwidth utilization in the presence of processor-intensive traffic is disclosed. Incoming traffic is separated into multiple traffic flows. Data blocks of the traffic flows are scheduled for access to a processor resource using a first scheduling algorithm, and processed by the processor resource as scheduled by the first scheduling algorithm. The processed data blocks of the traffic flows are scheduled for access to a bandwidth resource using a second scheduling algorithm, and provided to the bandwidth resource as scheduled by the second scheduling algorithm. The multi-stage scheduler in an illustrative embodiment may be implemented in a network processor integrated circuit or other processing device of a communication system.
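A minimal sketch of the two-stage idea, assuming per-flow queues and plain round-robin for both scheduling stages (the abstract does not fix the algorithms); the class and method names are illustrative:

```python
from collections import deque

# Two-stage scheduler sketch: stage 1 grants data blocks access to a processor
# resource, stage 2 grants processed blocks access to a bandwidth resource.
# Round-robin is assumed for both stages purely for illustration.

class TwoStageScheduler:
    def __init__(self, num_flows):
        self.ingress = [deque() for _ in range(num_flows)]  # per-flow input queues
        self.egress = [deque() for _ in range(num_flows)]   # per-flow post-processing queues
        self.rr1 = 0                                         # round-robin pointer, stage 1
        self.rr2 = 0                                         # round-robin pointer, stage 2

    def enqueue(self, flow_id, block):
        self.ingress[flow_id].append(block)

    def schedule_processor(self, process):
        """Stage 1: pick the next non-empty flow and hand one block to the processor."""
        n = len(self.ingress)
        for i in range(n):
            f = (self.rr1 + i) % n
            if self.ingress[f]:
                block = self.ingress[f].popleft()
                self.egress[f].append(process(block))        # processed block awaits stage 2
                self.rr1 = (f + 1) % n
                return f
        return None

    def schedule_bandwidth(self, transmit):
        """Stage 2: pick the next non-empty flow and hand one processed block to the link."""
        n = len(self.egress)
        for i in range(n):
            f = (self.rr2 + i) % n
            if self.egress[f]:
                transmit(self.egress[f].popleft())
                self.rr2 = (f + 1) % n
                return f
        return None
```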
Abstract:
Methods and communication networks having a plurality of connection-oriented switches in which administrative weights are assigned based on link call blocking probabilities. Three assignment schemes are described. The first is a solution that can be built into the switch software of sufficiently similar switches to enable the switches to determine administrative weights for their links. The second and third schemes employ administrative weight management stations. The second scheme uses an administrative weight management station that operates to enhance total network revenue or throughput in a communication network in which all of the switches implement a certain type of accounting management information base (MIB). The third scheme uses an administrative weight management station that computes administrative weights for switches that do not have the capability for the second scheme but that employ appropriate MIBs.
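As a rough illustration of the first scheme, the sketch below lets a switch derive administrative weights for its own links from measured call blocking probabilities; the -log(1 - p) mapping and the scale factor are assumptions for illustration, not the formula disclosed here:

```python
import math

# Hypothetical per-switch assignment of administrative weights (AWs) from
# measured link call blocking probabilities. Links that block more calls
# receive a larger (less attractive) weight.

SCALE = 1000  # assumed scale so AWs fit an integer routing metric

def administrative_weight(blocking_probability):
    p = min(max(blocking_probability, 0.0), 0.999999)       # clamp to a valid probability
    return max(1, int(round(-math.log(1.0 - p) * SCALE)))

# Example: AWs for three links with different measured blocking probabilities.
for p in (0.001, 0.05, 0.20):
    print(p, administrative_weight(p))
```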
Abstract:
Methods and apparatus for synchronizing a first clock of a transmit node and a second clock of a receive node in a packet network are provided. Receive time stamps are generated for transferred packets at the receive node in accordance with the second clock. Propagation delay variation is filtered from receive time stamp intervals through a filter in accordance with a frequency of the second clock. The filtered receive time stamp intervals and transmit time stamp intervals of the transferred packets are input into a phase-locked loop to generate a new frequency for the second clock. The filter and the second clock are updated in accordance with the new frequency for synchronization with the first clock of the transmit node.
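A minimal sketch of the receive-side loop, assuming an exponential moving average as the delay-variation filter and a proportional-integral controller as the phase-locked loop; the gain constants are illustrative:

```python
# Receive-side synchronization loop sketch. The EMA filter, PI controller, and
# all constants are assumptions standing in for the disclosed filter and PLL.

class ReceiveClockSync:
    def __init__(self, nominal_hz, alpha=0.05, kp=1e-3, ki=1e-5):
        self.freq = nominal_hz          # current frequency of the second (receive) clock
        self.alpha = alpha              # filter coefficient
        self.kp, self.ki = kp, ki
        self.filtered_rx_interval = None
        self.integral = 0.0

    def update(self, tx_interval, rx_interval):
        # Filter propagation delay variation out of the receive time stamp interval.
        if self.filtered_rx_interval is None:
            self.filtered_rx_interval = rx_interval
        else:
            self.filtered_rx_interval += self.alpha * (rx_interval - self.filtered_rx_interval)

        # PLL: steer the receive clock so filtered receive intervals track the
        # transmit time stamp intervals reported by the transmit node.
        error = self.filtered_rx_interval - tx_interval
        self.integral += error
        self.freq *= 1.0 + self.kp * error + self.ki * self.integral
        # In a fuller design the filter coefficient would also be re-derived
        # from the new frequency; that step is omitted in this sketch.
        return self.freq
```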
Abstract:
In one embodiment, an apparatus coordinates merging of packets for one or more virtual circuits (VCs). Each packet of a VC comprises a sequence of cells terminating with an end of packet (EOP) cell. The apparatus comprises one or more buffers, a buffer controller, and a merge processor. Each buffer is configured to receive cells of an associated VC and a dynamic threshold value based on traffic of the VC. When the number of cells of a packet in a buffer exceeds the corresponding dynamic threshold value, a corresponding flag of the buffer is set. The buffer controller is configured to drop all cells of the current packet in response to a set flag of the corresponding buffer. The merge processor services each buffer in accordance with a scheduling method to transfer one or more packets from each buffer to an output packet stream.
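A sketch of the per-VC buffering and merge behavior, assuming a free-space-based rule for the dynamic threshold and round-robin service by the merge processor; cells are modeled as simple tokens with an explicit EOP marker:

```python
from collections import deque

EOP = "EOP"  # marker cell that terminates a packet

class VcBuffer:
    def __init__(self, capacity):
        self.cells = deque()
        self.capacity = capacity
        self.drop_flag = False
        self.current_packet_len = 0

    def enqueue(self, cell):
        if self.drop_flag:
            if cell == EOP:                         # discard the rest of the dropped packet
                self.drop_flag = False
                self.current_packet_len = 0
            return
        # Assumed dynamic-threshold rule: a packet may occupy at most half the free space.
        threshold = max(1, (self.capacity - len(self.cells)) // 2)
        self.cells.append(cell)
        self.current_packet_len += 1
        if cell == EOP:
            self.current_packet_len = 0
        elif self.current_packet_len > threshold:
            # Packet exceeded the dynamic threshold: drop its cells received so far
            # and flag the buffer so the remainder of the packet is dropped too.
            for _ in range(self.current_packet_len):
                self.cells.pop()
            self.current_packet_len = 0
            self.drop_flag = True

def merge(buffers, output):
    # Merge processor: service each buffer in turn, moving one whole packet
    # (cells up to and including EOP) to the output packet stream.
    for buf in buffers:
        packet = []
        while buf.cells:
            cell = buf.cells.popleft()
            packet.append(cell)
            if cell == EOP:
                output.extend(packet)
                packet = []
                break
        buf.cells.extendleft(reversed(packet))      # put back a partial packet, in order
```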
Abstract:
A method and apparatus for reorganizing cells received over data communication lines at a receive node are provided. The cells have an initial order identified by monotonically increasing sequence identifiers. The receive node has buffers associated with respective ones of the communication lines. Each of the buffers has an output position. A cell having the smallest sequence identifier is detected from the one or more cells at the output positions of the buffers. It is determined whether the smallest sequence identifier is sequentially consecutive to a specified sequence identifier. If the smallest sequence identifier is sequentially consecutive to the specified sequence identifier, the cell having the smallest sequence identifier is dequeued from the output position of one of the buffers and the specified sequence identifier is redefined as the smallest sequence identifier.
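A sketch of the reorder step, assuming cells are (SID, payload) tuples and each buffer is a FIFO whose head is its output position:

```python
# One iteration of the receive-side reorder: find the smallest sequence
# identifier (SID) among the cells at the buffers' output positions and
# dequeue it only if it is sequentially consecutive to the specified SID.

def reorder_step(buffers, specified_sid):
    """buffers: list of deques, one per communication line; heads are output positions.
    specified_sid: the last in-order SID delivered.
    Returns (cell, new_specified_sid), with cell None if nothing is ready."""
    candidates = [(buf[0][0], i) for i, buf in enumerate(buffers) if buf]
    if not candidates:
        return None, specified_sid
    smallest_sid, index = min(candidates)
    if smallest_sid == specified_sid + 1:           # sequentially consecutive?
        cell = buffers[index].popleft()             # dequeue from that output position
        return cell, smallest_sid                   # redefine the specified SID
    return None, specified_sid
```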
Abstract:
Methods and apparatus are provided for use with an integrated circuit device of a processing device of a network node of a digital networking system. The integrated circuit device is configured to monitor one or more control messages received at the processing device from each of a plurality of CPE devices and to limit the one or more control messages to one or more specified rates for a specified duration. The integrated circuit device is further configured to provide one or more data channels to the plurality of CPE devices from the processing device in response to the one or more control messages processed at the processing device.
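A hypothetical per-CPE limiter, sketched as a token bucket enforced over the specified duration; the bucket parameters and the use of a monotonic clock are assumptions for illustration:

```python
import time

# Per-CPE rate limiting of control messages, modeled as one token bucket per
# CPE device that is reset once the specified limiting duration has elapsed.

class ControlMessageLimiter:
    def __init__(self, rate_per_sec, burst, duration_sec):
        self.rate = rate_per_sec
        self.burst = burst
        self.duration = duration_sec
        self.state = {}                              # cpe_id -> (tokens, last_ts, start_ts)

    def allow(self, cpe_id, now=None):
        now = time.monotonic() if now is None else now
        tokens, last, start = self.state.get(cpe_id, (self.burst, now, now))
        if now - start > self.duration:              # limiting window expired: reset
            tokens, last, start = self.burst, now, now
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[cpe_id] = (tokens - 1.0, now, start)
            return True                              # message accepted; data channel may follow
        self.state[cpe_id] = (tokens, now, start)
        return False                                 # message exceeds the specified rate
```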
Abstract:
A method of minimizing sequence identifier (SID) difference of simultaneously transmitted cells in two or more data communication lines is provided. A data transmission speed of each of the two or more data communication lines is identified. A fullness threshold of at least one buffer of two or more buffers in a transmit node is configured in relation to a size of a data cell for transmission. The two or more buffers correspond to respective ones of the two or more data communication lines. The at least one buffer communicates with a given one of the two or more data communication lines having a data transmission speed slower than another of the two or more data communication lines. One or more data cells for transmission are assigned to the two or more buffers of the two or more data communication lines at the transmit node. The one or more data cells are transmitted from the transmit node to a receive node in accordance with the data transmission speeds of the two or more data communication lines. The fullness threshold of the at least one buffer controls assignment of data cells to the at least one buffer during data cell transmission on the given data communication line and minimizes the SID difference of simultaneously transmitted cells in the two or more data communication lines.
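A sketch of the assignment step for a two-line case, where the slower line's buffer has a fullness threshold (in cells) that blocks further assignment once reached; the threshold rule and the two-line setup are illustrative assumptions:

```python
from collections import deque

# Cells (identified by their SIDs) are assigned to per-line buffers at the
# transmit node. Capping how many cells may wait in the slow line's buffer
# keeps cells sent at the same time on the two lines close together in SID.

def assign_cells(sids, fast_capacity, slow_threshold):
    fast_buf, slow_buf = deque(), deque()
    for sid in sids:
        # Use the slower line only while its buffer is below the fullness threshold.
        if len(slow_buf) < slow_threshold and len(slow_buf) <= len(fast_buf):
            slow_buf.append(sid)
        elif len(fast_buf) < fast_capacity:
            fast_buf.append(sid)
        else:
            break                                    # both buffers full; remaining cells wait
    return fast_buf, slow_buf

# Example: with a small threshold on the slow line, most cells go to the fast
# line, so simultaneously transmitted cells carry nearby SIDs.
fast, slow = assign_cells(range(1, 13), fast_capacity=8, slow_threshold=2)
```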
Abstract:
A method and apparatus are disclosed for per-service flow protection and restoration of data in one or more packet networks. The disclosed protection and restoration techniques allow traffic to be prioritized and protected from the aggregate level down to a micro-flow level. Thus, protection can be limited to those services that are fault sensitive. Protected data is duplicated over a primary path and one or more backup data paths. Following a link failure, protected data can be quickly and efficiently restored without significant service interruption. A received packet is classified at each end point based on information in a header portion of the packet, using one or more rules that determine whether the received packet should be protected. At an ingress node, if the packet classification determines that the received packet should be protected, then the received packet is transmitted on at least two paths. At an egress node, if the packet classification determines that the received packet is protected, then multiple versions of the received packet are expected and only one version of the received packet is transmitted.
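A sketch of the end-point behavior, assuming header-rule classification, duplication at the ingress onto a primary and backup path, and first-copy-wins de-duplication at the egress keyed on an assumed flow identifier and sequence number:

```python
# Per-service flow protection sketch. Paths are modeled as plain lists of
# in-flight packets; packets are dicts with a "header" dict. The rule format
# and the (flow_id, seq) de-duplication key are assumptions for illustration.

PROTECT_RULES = [
    lambda hdr: hdr.get("dscp") == 46,               # e.g. protect expedited-forwarding voice
]

def is_protected(header):
    return any(rule(header) for rule in PROTECT_RULES)

def ingress_forward(packet, primary_path, backup_paths):
    primary_path.append(packet)
    if is_protected(packet["header"]):
        for path in backup_paths:                    # duplicate the protected packet
            path.append(packet)

class EgressDedup:
    def __init__(self):
        self.seen = set()                            # (flow_id, seq) of delivered packets

    def egress_forward(self, packet, deliver):
        hdr = packet["header"]
        if not is_protected(hdr):
            deliver(packet)
            return
        key = (hdr["flow_id"], hdr["seq"])
        if key not in self.seen:                     # first version wins; later copies dropped
            self.seen.add(key)
            deliver(packet)
```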
Abstract:
Arrangements and methods for selecting, with enhanced efficiency, an optimum connection path that meets user-specified delay requirements. In a basic aspect, a method is implemented by one of a plurality of algorithms to meet user QoS specifications. The user not only specifies a delay threshold T for the incoming request but also specifies a delay threshold tolerance ε for the path delay that will satisfy the user. Two implementations are disclosed. The first is termed non-iterative and sets scaling factor τ = min(T, (n−1)/ε), where n is a number of links in a shortest path, scales all the relevant delay parameters by τ/T, truncates all the scaled values to integers, and uses a dynamic programming algorithm to accumulate the total of the resulting link delay parameter values for each possible shortest path. The second method, termed iterative, is similar, except that it sets τ
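A sketch of the non-iterative implementation, assuming a nonnegative administrative cost is the metric being minimized over the scaled, truncated delay budget:

```python
import math

# Non-iterative scheme sketch: set tau = min(T, (n - 1)/eps), scale each link
# delay by tau/T and truncate to an integer, then run a dynamic program over
# (node, accumulated scaled delay) to find the best path whose scaled delay
# stays within tau. edges maps (u, v) -> (cost, delay); the cost metric is an
# illustrative assumption.

def constrained_path_cost(nodes, edges, src, dst, T, eps, n_hops):
    tau = min(T, (n_hops - 1) / eps)
    budget = int(tau)
    scaled = {e: (cost, int(delay * tau / T)) for e, (cost, delay) in edges.items()}

    # best[v][t] = least cost of reaching v with accumulated scaled delay t.
    best = {v: [math.inf] * (budget + 1) for v in nodes}
    best[src][0] = 0.0
    for _ in range(len(nodes)):                      # Bellman-Ford-style relaxation
        for (u, v), (cost, d) in scaled.items():
            for t in range(d, budget + 1):
                if best[u][t - d] + cost < best[v][t]:
                    best[v][t] = best[u][t - d] + cost

    answer = min(best[dst])
    return None if math.isinf(answer) else answer
```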
Abstract:
The present disclosure includes a network element comprising a first card configured to receive a duplicatively split first signal comprising a data component and a management component, and further configured to receive and transmit a duplicatively split second signal, and a similar second card. The network element also includes a selector configured to select either the first card or the second card to receive the management component of the first signal and to detect a change in designation between the first card and the second card to transmit the second signal. The selector is also configured to modify the selection to select the card designated to transmit the second signal to also receive the management component of the first signal. The disclosure also includes associated methods and systems.
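A sketch of the selector behavior, assuming the management component simply follows whichever card is designated to transmit the second signal; class and attribute names are illustrative:

```python
# Selector sketch: when the designation of the card transmitting the second
# signal changes, the selection of the card receiving the management
# component of the first signal is modified to match.

class CardSelector:
    def __init__(self, first_card, second_card):
        self.cards = {"first": first_card, "second": second_card}
        self.mgmt_receiver = "first"      # card selected to receive the management component
        self.tx_designation = "first"     # card designated to transmit the second signal

    def observe_tx_designation(self, designated):
        # Detect a change in the transmit designation and realign the
        # management-component selection with the newly designated card.
        if designated != self.tx_designation:
            self.tx_designation = designated
            self.mgmt_receiver = designated
        return self.cards[self.mgmt_receiver]
```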