Abstract:
In general, techniques are described for steering data traffic for a subscriber session from a network interface of a wireless access gateway to an anchoring one of a plurality of forwarding units of the wireless access gateway using a layer 2 (L2) address of the data traffic. For example, a wireless access gateway for a wireless local area network (WLAN) access network is described as having a decentralized data plane that includes multiple forwarding units for implementing subscriber sessions. Each forwarding unit may present a network interface for sending and receiving network packets and includes packet processing capabilities to enable subscriber data packet processing to perform the functionality of the wireless access gateway. The techniques enable steering data traffic for a given subscriber session to a particular one of the forwarding units of the wireless access gateway using an L2 address of the data traffic.
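The steering described above can be sketched as a stable mapping from an L2 (MAC) address to one forwarding unit, so every packet of a subscriber session lands on the same anchoring unit. The hash choice, unit count, and function names below are illustrative assumptions, not the patented design:

```python
# Sketch: anchor each subscriber session to one forwarding unit by hashing
# the session's L2 (MAC) address. Assumed parameters, not the patent's.
import hashlib

NUM_FORWARDING_UNITS = 4  # hypothetical gateway with four forwarding units

def anchor_unit(mac: str) -> int:
    """Map an L2 address to a stable forwarding-unit index."""
    digest = hashlib.sha256(mac.encode()).digest()
    return digest[0] % NUM_FORWARDING_UNITS

# All frames carrying the same subscriber MAC steer to the same unit.
assert anchor_unit("aa:bb:cc:dd:ee:01") == anchor_unit("aa:bb:cc:dd:ee:01")
```

Because the mapping depends only on the address, no per-packet session lookup is needed to pick the unit.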
Abstract:
A motor vehicle has a master bus device that is designed to use a vehicle communication bus of the motor vehicle to exchange messages with slave bus devices. A transmission device of the master bus device cyclically exchanges the messages with the slave bus devices on the basis of a schedule. To make efficient use of the vehicle communication bus by the schedule-controlled master bus device, the transmission device is designed to receive a diagnosis request signal via a data input that is different from the bus port of the master bus device, to take the diagnosis request signal as a basis for interrupting the cyclic processing of the schedule, to exchange at least one special message that is different from the messages stipulated in the schedule with at least one of the slave bus devices, and then to continue the processing of the schedule.
Abstract:
An assembly and a method in which a number of receiving units receive packets and store them in a number of queues that are de-queued by a plurality of processors/processes. If the queue selected for one processor has a fill level exceeding a limit, an incoming packet is forwarded to a queue of another processor, which is instructed not to de-queue that queue until the queue whose fill level was exceeded has been emptied. Thus, load balancing between processes/processors may be obtained while maintaining an ordering between packets.
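The spill-and-block mechanism above can be sketched with two processors: packets spill from A's full queue to B's queue, and B is blocked from de-queuing until A's queue drains, which preserves packet order. The two-processor setup, limit, and class name are assumptions:

```python
# Sketch (assumed two processors and fill limit): spill packets to a second
# processor's queue on overflow, blocking it until the first queue empties.
from collections import deque

class LoadBalancer:
    def __init__(self, limit=3):
        self.limit = limit
        self.q_a, self.q_b = deque(), deque()
        self.b_blocked = False       # B may not de-queue while A drains

    def enqueue(self, pkt):
        if len(self.q_a) < self.limit and not self.b_blocked:
            self.q_a.append(pkt)
        else:                        # fill level exceeded: spill to B, but
            self.q_b.append(pkt)     # hold B back to preserve ordering
            self.b_blocked = True

    def dequeue_a(self):
        pkt = self.q_a.popleft()
        if not self.q_a:             # A emptied: B's queue may now drain
            self.b_blocked = False
        return pkt

    def dequeue_b(self):
        if self.b_blocked or not self.q_b:
            return None              # ordering guarantee: wait for A
        return self.q_b.popleft()
```

No packet in B's queue can overtake an earlier packet still sitting in A's queue, which is the ordering property the abstract claims.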
Abstract:
A packet scheduling method and apparatus which allow multiple flows that require data transmission to the same output port of a network device such as a router to fairly share bandwidth. The packet scheduling method includes calculating an expected time of arrival of a (k+1)-th packet subsequent to a currently input k-th packet of individual flows by use of bandwidth allocated fairly to each of the flows and a length of the k-th packet; in response to the arrival of the (k+1)-th packet, comparing the expected time of arrival of the (k+1)-th packet to an actual time of arrival of the (k+1)-th packet; and scheduling the (k+1)-th packet of each flow according to the comparison result.
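The expected-arrival calculation described above reduces to a simple formula: the (k+1)-th packet is expected no earlier than the k-th packet's arrival time plus the time the flow's fair-share bandwidth needs to carry the k-th packet. The sketch below assumes bytes-per-second units and illustrative function names; the actual comparison-based scheduling policy is not specified here:

```python
# Sketch of the expected-arrival test, with assumed units (bytes, seconds,
# bytes/second). The scheduling decision taken on the result is simplified.
def expected_arrival(t_k: float, len_k: int, bandwidth: float) -> float:
    """Expected arrival time of packet k+1: arrival time of packet k plus
    the transmission time of len_k bytes at the flow's fair share."""
    return t_k + len_k / bandwidth

def conforming(t_k: float, len_k: int, bandwidth: float, t_k1: float) -> bool:
    """A flow staying within its fair share has its (k+1)-th packet arrive
    no earlier than expected; an early arrival flags the flow as exceeding
    its allocation."""
    return t_k1 >= expected_arrival(t_k, len_k, bandwidth)
```

For example, a 1500-byte packet on a 1500 B/s fair share pushes the next expected arrival one second later.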
Abstract:
A packet scheduling apparatus and method to fairly share network bandwidth between multiple subscribers and to fairly share the bandwidth allocated to each subscriber between multiple flows are provided. The packet scheduling method includes calculating first bandwidth for each subscriber to fairly share total bandwidth set for the transmission of packets between multiple subscribers; calculating second bandwidth for each flow to fairly share the first bandwidth between one or more flows that belong to each of the multiple subscribers; and scheduling a packet of each of the one or more flows based on the second bandwidth.
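The two-level fair share above can be sketched directly: the first bandwidth is an equal split of the total between subscribers, and the second bandwidth is an equal split of each subscriber's share between that subscriber's flows. Equal splits and the function name are illustrative assumptions (a deployment might weight them):

```python
# Sketch (assumed equal splits): hierarchical fair sharing of bandwidth,
# first between subscribers, then between each subscriber's flows.
def per_flow_bandwidth(total_bw: float, flows_per_subscriber: dict) -> dict:
    """Return {subscriber: {flow: bandwidth}} where each subscriber gets an
    equal share of total_bw (first bandwidth) and each of that subscriber's
    flows gets an equal share of it (second bandwidth)."""
    first = total_bw / len(flows_per_subscriber)
    return {
        sub: {flow: first / len(flows) for flow in flows}
        for sub, flows in flows_per_subscriber.items()
    }
```

Note that a subscriber with many flows does not gain extra bandwidth: the flows merely subdivide that subscriber's fixed first-level share.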
Abstract:
A packet scheduler has input connections, and data packets received on the input connections can be placed in queues. The packet scheduler includes a first scheduler, for identifying a first of said queues according to a first queue scheduling algorithm, and a second scheduler, for identifying a second of said queues according to a second queue scheduling algorithm. The packet scheduler determines whether the first of said queues contains a packet of data to be sent and, if so, it selects the first of said queues as a selected queue. If the first of said queues does not contain a packet of data to be sent, it selects the second of said queues as a selected queue. The packet scheduler then determines whether the respective packet of data can be sent from the selected queue, by maintaining a deficit counter indicating a current data quota for the respective queue, and also by maintaining a global deficit counter.
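The selection-and-send logic above can be sketched in two steps: pick the first scheduler's queue if it holds a packet, otherwise the second's; then send only if both the selected queue's deficit counter and the global deficit counter cover the packet length. Queues holding packet lengths, and the specific counter updates, are assumptions of this sketch:

```python
# Sketch (assumed representation: each queue holds packet lengths in bytes).
from collections import deque

def select_queue(queues, first_pick: int, second_pick: int) -> int:
    """The first scheduler's pick wins if its queue has a packet;
    otherwise fall back to the second scheduler's pick."""
    return first_pick if queues[first_pick] else second_pick

def try_send(queues, sel: int, deficits, state):
    """Send the head packet only if the queue's own deficit counter and
    the global deficit counter both cover its length; charge both."""
    if not queues[sel]:
        return None
    length = queues[sel][0]
    if deficits[sel] >= length and state["global_deficit"] >= length:
        deficits[sel] -= length
        state["global_deficit"] -= length
        return queues[sel].popleft()
    return None
```

The global counter caps aggregate output across all queues, while per-queue counters enforce each queue's individual quota.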
Abstract:
A system schedules traffic flows on an output port using a circular memory structure. The circular memory structure may be a rate wheel that includes a group of sequentially arranged slots. The rate wheel schedules the traffic flows in select ones of the slots based on traffic shaping parameters assigned to the flows. The rate wheel compensates for collisions between multiple flows that occur in the slots by subsequently skipping empty slots.
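One plausible reading of the collision compensation above can be sketched as follows: each slot of the wheel is one transmission tick, colliding flows in a slot queue up, and the delayed flows are sent in later slots that would otherwise be empty. The slot representation and function name are assumptions:

```python
# Sketch (assumed representation: slots[i] lists the flows shaped into
# tick i). Flows delayed by a collision occupy subsequent empty slots.
def drain_wheel(slots):
    """Service a rate wheel one flow per tick; a slot with colliding
    flows pushes the extras into a backlog that drains through later
    empty slots, compensating for the collision."""
    sent = {}          # tick -> flow actually sent at that tick
    backlog = []
    for tick, slot in enumerate(slots):
        backlog.extend(slot)
        if backlog:
            sent[tick] = backlog.pop(0)
    return sent
```

In the test below, two flows collide in slot 0; the second flow is sent in the empty slot 1, so the wheel is back on schedule by slot 2.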
Abstract:
A communications method is provided. The method includes processing multiple packet queues for a high speed packet data network and associating one or more arrays with the multiple packet queues. The method also includes generating an index for the arrays, where the index is associated with a time stamp in order to determine a burst size for the high speed packet data network.
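One way the timestamp-to-burst-size relationship could work is that the burst allowed for a queue grows with the time elapsed since its stamped index was last serviced. This is a hedged interpretation; the units and function name are assumptions:

```python
# Sketch (assumed semantics): burst size derived from the elapsed time
# since the array index's time stamp, at the queue's configured rate.
def burst_size(last_stamp_s: float, now_s: float, rate_bps: float) -> int:
    """Bytes a queue may burst: its rate times the time elapsed since the
    time stamp recorded at its array index."""
    return int(rate_bps * (now_s - last_stamp_s) / 8)
```

A queue idle for longer therefore earns a proportionally larger burst, bounded by how the stamp is refreshed on each service.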
Abstract:
A method for bandwidth control on a network interface card (NIC), the method including initiating a current time period, receiving a plurality of incoming packets for a receive ring, populating, by a NIC, the receive ring with the plurality of incoming packets according to a size of the receive ring during the current time period, wherein the size of the receive ring is based on an allocated bandwidth for the receive ring, and sending, by the NIC, the plurality of incoming packets to a host when a duration of the current time period elapses, wherein the duration is based on the allocated bandwidth for the receive ring.
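The ring-based cap above can be sketched by deriving the ring size from the allocated bandwidth and the period length, filling the ring during the period, and handing the batch to the host when the period elapses. The period length, packet size, and class name are illustrative assumptions:

```python
# Sketch (assumed period and packet size): a receive ring whose capacity
# per period enforces the bandwidth allocated to it.
class ReceiveRing:
    def __init__(self, allocated_bw_bps: float, period_s: float = 0.01,
                 pkt_bytes: int = 1500):
        # Ring size = packets the allocation permits within one period.
        self.size = max(1, int(allocated_bw_bps * period_s / (8 * pkt_bytes)))
        self.period_s = period_s
        self.ring = []

    def receive(self, pkt):
        if len(self.ring) < self.size:   # excess packets are not ringed
            self.ring.append(pkt)

    def deliver(self):
        """Called when the current time period elapses: hand the batch to
        the host and start a fresh period."""
        batch, self.ring = self.ring, []
        return batch
```

Capping the ring at the allocation's per-period packet budget means the host can never be handed more than the allocated bandwidth's worth of traffic per period.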
Abstract:
A system schedules traffic flows on an output port using circular memory structures. The circular memory structures may include rate wheels that include a group of sequentially arranged slots. The traffic flows may be assigned to different rate wheels on a per-priority basis.
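The per-priority assignment above can be sketched as grouping flows onto separate rate wheels keyed by priority; the wheels would then typically be serviced in priority order. The flow representation is an assumption of this sketch:

```python
# Sketch (assumed representation: flows as (name, priority) pairs):
# assign traffic flows to different rate wheels on a per-priority basis.
from collections import defaultdict

def assign_to_wheels(flows):
    """Return {priority: [flow names]}, one rate wheel per priority."""
    wheels = defaultdict(list)
    for name, priority in flows:
        wheels[priority].append(name)
    return dict(wheels)
```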