Abstract:
A switch fabric includes input links, output links, and at least one switching element. The input links are configured to receive data items that include destination addresses. At least some of the data items have different priority levels. The output links are configured to output the data items. Each of the output links is assigned multiple ones of the destination addresses. Each of the destination addresses corresponds to one of the priority levels. The switching element(s) is/are configured to receive the data items from the input links and send the data items to ones of the output links without regard to the priority levels of the data items.
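The addressing scheme above can be sketched in a few lines: each output link is assigned one destination address per priority level, so the switching element routes purely by address lookup and needs no separate priority logic. This is a minimal illustration; the function names and dict-based map are assumptions, not taken from the abstract.

```python
# Hypothetical sketch: one destination address per (output link, priority).
# Routing consults only the address, so priority handling is implicit.

def build_address_map(num_outputs, num_priorities):
    """Assign a unique destination address to each (output link, priority) pair."""
    addr_map = {}
    addr = 0
    for link in range(num_outputs):
        for prio in range(num_priorities):
            addr_map[addr] = (link, prio)
            addr += 1
    return addr_map

def route(addr_map, data_item):
    """Route solely on the destination address; the priority falls out for free."""
    link, prio = addr_map[data_item["dest_addr"]]
    return link, prio
```

Because every address already encodes both the output link and the priority, the switching element can stay oblivious to priority, as the abstract describes.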
Abstract:
A method is disclosed in which a first data transmission is received at a first network node from a second network node in a coordinated network. The first data transmission is received in a first bandwidth of a coordinated network and includes a first plurality of subcarriers. A second data transmission is received at the first network node from a third network node. The second data transmission is received in a second bandwidth of the coordinated network and includes a second plurality of subcarriers. A first transmission schedule is transmitted from the first network node to the second and third network nodes. The first transmission schedule includes modified first and second bandwidths in which the modified first bandwidth includes a subcarrier reallocated from the second bandwidth. The first network node receives data in accordance with the first transmission schedule.
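The reallocation step in the schedule can be sketched as follows: the first network node (acting as coordinator) moves one subcarrier from the second bandwidth to the first and issues the modified schedule. The node names and the dict-of-lists schedule format are illustrative assumptions.

```python
# Illustrative sketch: the coordinator takes one subcarrier away from one
# node's bandwidth and grants it to another, producing a new schedule.

def reallocate_subcarrier(schedule, from_node, to_node):
    """Move one subcarrier between two nodes' bandwidth allocations."""
    new_schedule = {n: list(s) for n, s in schedule.items()}  # copy, don't mutate
    moved = new_schedule[from_node].pop()   # reclaim a subcarrier
    new_schedule[to_node].append(moved)     # reassign it to the other node
    return new_schedule
```

After transmitting the new schedule, the coordinator would receive data on the modified bandwidths, with the moved subcarrier now carrying the second node's traffic.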
Abstract:
A traffic manager includes an execution unit that is responsive to instructions related to queuing of data in memory. The instructions may be provided by a network processor that is programmed to generate such instructions, depending on the data. Examples of such instructions include (1) writing of data units (of fixed size or variable size) without linking to a queue, (2) re-sequencing of the data units relative to one another without moving the data units in memory, and (3) linking the previously-written data units to a queue. The network processor and traffic manager may be implemented in a single chip.
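A toy model can make the three instruction types concrete: data units are written to memory without being linked to any queue, re-sequenced by reordering handles rather than moving the data, and only later linked onto a queue. The class and method names are invented for illustration.

```python
# Toy model of the three queuing instructions named above. "Memory" is a
# dict keyed by handle; re-sequencing touches only the handle order.

class TrafficManager:
    def __init__(self):
        self.memory = {}    # handle -> data unit (stays put once written)
        self.pending = []   # handles written but not yet linked to a queue
        self.queue = []     # linked handles, in service order

    def write_unlinked(self, handle, data):
        """(1) Write a data unit without linking it to a queue."""
        self.memory[handle] = data
        self.pending.append(handle)

    def resequence(self, new_order):
        """(2) Reorder data units relative to one another without moving them."""
        assert sorted(new_order) == sorted(self.pending)
        self.pending = list(new_order)

    def link_to_queue(self):
        """(3) Link the previously-written data units to the queue."""
        self.queue.extend(self.pending)
        self.pending = []
```

Deferring the linking step is what lets a network processor reorder units cheaply: only the handle list changes, never the data in memory.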
Abstract:
A packet scheduling apparatus and method to fairly share network bandwidth between multiple subscribers, and to fairly share the bandwidth allocated to each subscriber between multiple flows, are provided. The packet scheduling method includes calculating a first bandwidth for each subscriber to fairly share the total bandwidth set for the transmission of packets between the multiple subscribers; calculating a second bandwidth for each flow to fairly share the first bandwidth between one or more flows that belong to each of the multiple subscribers; and scheduling a packet of each of the one or more flows based on the second bandwidth.
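The two-level computation can be sketched directly; here an equal split stands in for whatever fairness policy the scheduler actually applies, and the function name is an assumption.

```python
# Minimal sketch of the two-level fair share: total bandwidth is divided
# among subscribers (first bandwidth), then each subscriber's share is
# divided among that subscriber's flows (second bandwidth).

def fair_shares(total_bw, flows_per_subscriber):
    """Return the per-flow (second) bandwidth for each subscriber."""
    first_bw = total_bw / len(flows_per_subscriber)   # per-subscriber share
    return {
        sub: first_bw / n_flows                       # per-flow share
        for sub, n_flows in flows_per_subscriber.items()
    }
```

Note the key property the abstract claims: a subscriber with many flows does not get more total bandwidth, since the flow count only subdivides that subscriber's own first bandwidth.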
Abstract:
The present invention relates to a transmission network comprising a Passive Optical Network (PON) and units connected to it, e.g. Optical Network Units. It is an object of the present invention to provide a solution to the upstream data packet traffic congestion problem in transmission networks that comprise a PON system. Said problem is solved by providing adapted node devices and methods for scheduling control that eliminate the congestion problem while remaining within the prescribed standard requirements, e.g. QoS, for passive optical network systems.
Abstract:
A switching device comprising a plurality of ingress ports and a plurality of egress ports. The switching device is arranged to receive data packets through the ingress ports and to forward received data packets to respective ones of the egress ports. The switching device further comprises an ingress module for each of the ingress ports, each ingress module being arranged to receive data packets from a respective single one of the ingress ports and to store the received data packets in one of a plurality of data structures provided by the ingress module, each ingress module being further configured to select a data packet from one of the plurality of data structures, and to request permission to transmit the selected data packet to an egress port. The switching device also comprises at least one egress module arranged to receive a plurality of requests for permission to transmit data packets through a particular egress port, the requests being generated by the plurality of ingress modules, and to select one of the plurality of requests.
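The request/grant exchange can be sketched as two steps: each ingress module selects a head-of-line packet from one of its data structures and requests the target egress, and the egress module collects the requests aimed at it and grants exactly one. The selection policies here (head-of-line, lowest ingress wins) are placeholders, not the patent's arbitration scheme.

```python
# Sketch of the request/grant cycle: ingress modules issue one request
# each; an egress module picks one winner among the requests it receives.

def collect_requests(ingress_queues):
    """Each ingress selects its first non-empty data structure's head packet."""
    requests = []
    for ingress, queues in ingress_queues.items():
        for egress, packets in queues.items():
            if packets:
                requests.append((ingress, egress, packets[0]))
                break                        # one request per ingress module
    return requests

def egress_grant(requests, egress_port):
    """Egress module selects one request aimed at it (lowest ingress wins)."""
    candidates = [r for r in requests if r[1] == egress_port]
    return min(candidates) if candidates else None
```

Keeping per-ingress data structures and arbitrating at the egress is what prevents two ingress modules from transmitting to the same egress port simultaneously.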
Abstract:
In an example embodiment, there is disclosed herein logic encoded in at least one tangible media for execution and when executed operable to receive a packet. The logic determines a client associated with the packet. The client is associated with a service set, and the service set is associated with a transmitter. The logic determines a drop probability for the selected client, determines a current packet arrival rate for the selected client, and determines whether to enqueue or drop the packet based on the drop probability for the selected client and the current packet arrival rate associated with the selected client. The drop probability is based on a packet arrival rate and virtual queue length for the client, which are in turn based on a packet arrival rate and virtual queue length for the service set, which are in turn based on a packet arrival rate and virtual queue length for the transmitter.
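The hierarchical dependency (client → service set → transmitter) can be sketched with virtual queues at each level; the drain model, the blending formula, and the scale constant below are assumptions for illustration only.

```python
import random

# Hedged sketch: each level keeps a virtual queue fed by its arrival
# rate, and the client's drop probability rises with the queue lengths
# at all three levels of the hierarchy.

def virtual_queue_len(arrival_rate, service_rate, current_len):
    """A virtual queue fills at arrival_rate and drains at service_rate."""
    return max(0.0, current_len + arrival_rate - service_rate)

def drop_probability(client_vq, service_set_vq, transmitter_vq, scale=100.0):
    """Drop probability grows with congestion at client, service set, and transmitter."""
    return min(1.0, (client_vq + service_set_vq + transmitter_vq) / scale)

def should_drop(prob, rng=random.random):
    """Enqueue/drop decision: drop with the computed probability."""
    return rng() < prob
```

The point of the hierarchy is that a client sharing a congested transmitter sees a higher drop probability even when its own virtual queue is short.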
Abstract:
A method for selecting a queue for service across a shared link. The method includes classifying each queue from a group of queues within a plurality of ingresses into one tier of a number “N” of tiers. The number “N” is greater than or equal to 2. Information about allocated bandwidth is used to classify at least some of the queues into the tiers. Each tier is assigned a different priority. The method also includes matching queues to available egresses by matching queues classified within tiers with higher priorities before matching queues classified within tiers with lower priorities.
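The tiered matching can be sketched in two steps: classify each queue into one of N tiers using its allocated bandwidth, then hand out available egresses tier by tier, higher-priority tiers first. The bandwidth thresholds and tie-breaking order below are invented for the example.

```python
# Illustrative sketch: bandwidth-based tier classification followed by
# priority-ordered matching of queues to available egresses.

def classify(queues, thresholds):
    """Map each queue to a tier index; tier 0 is the highest priority.

    thresholds is a descending list of allocated-bandwidth cutoffs, so a
    queue falls one tier for each cutoff its allocation is below.
    """
    tiers = {}
    for name, alloc_bw in queues.items():
        tier = sum(alloc_bw < t for t in thresholds)
        tiers.setdefault(tier, []).append(name)
    return tiers

def match(tiers, free_egresses):
    """Match queues in higher-priority tiers before lower-priority ones."""
    matching = {}
    for tier in sorted(tiers):
        for q in tiers[tier]:
            if not free_egresses:
                return matching
            matching[q] = free_egresses.pop(0)
    return matching
```

With two thresholds this yields N = 3 tiers, satisfying the method's N ≥ 2 requirement; when egresses run out, lower tiers simply go unmatched that round.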
Abstract:
A supervisory communications device, such as a headend device within a cable communications network, monitors and controls communications with a plurality of remote communications devices, such as cable modems, throughout a widely distributed network. The supervisory device allocates bandwidth on the upstream channels by sending MAP messages over its downstream channel. A highly integrated media access controller integrated circuit (MAC IC) operates within the headend to provide lower level DOCSIS processing on signals exchanged with the remote devices. The enhanced functionality of the MAC IC relieves the processing burden on the headend CPU and increases packet throughput. The enhanced functionality includes header suppression and expansion, DES encryption and decryption, fragment reassembly, concatenation, and DMA operations.
Abstract:
In one embodiment, a processor-readable medium can store code representing instructions that when executed by a processor cause the processor to receive a value representing a congestion level of a receive queue and a value representing a state of a transmit queue. At least a portion of the transmit queue can be defined by a plurality of packets addressed to the receive queue. A rate value for the transmit queue can be defined based on the value representing the congestion level of the receive queue and the value representing the state of the transmit queue. The processor-readable medium can store code representing instructions that when executed by the processor cause the processor to define a suspension time value for the transmit queue based on the value representing the congestion level of the receive queue and the value representing the state of the transmit queue.
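The two derived quantities can be sketched as simple functions of the congestion level and the transmit-queue state; the linear formulas and constants below are illustrative assumptions, not the embodiment's actual control law.

```python
# Sketch: from the receive queue's congestion level and the transmit
# queue's state, derive a send-rate value and a suspension (pause) time
# for the transmit queue.

def rate_value(congestion_level, tx_queue_depth, max_rate=1000.0):
    """Send rate shrinks toward zero as downstream congestion grows."""
    factor = max(0.0, 1.0 - congestion_level)   # congestion_level in [0, 1]
    return max_rate * factor

def suspension_time(congestion_level, tx_queue_depth, base_ms=10.0):
    """Pause longer when congestion is high and the transmit queue is deep."""
    return base_ms * congestion_level * (1.0 + tx_queue_depth / 100.0)
```

Pairing a rate value with a suspension time lets the sender either throttle smoothly or pause outright, depending on how congested the receive queue reports itself to be.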