Abstract:
A network node, for example a router, is configured for assigning network parameters for an identified flow of data packets associated with an application service, based on detecting quality of service parameters specified by XML tags within a message between an application server configured for providing the application service and a destination device configured for receiving the application service. The router includes an XML parser configured for parsing XML tags specifying prescribed user-selectable quality of service attributes for a corresponding application service, and an application resource configured for interpreting the prescribed user-selectable quality of service attributes for the application service. The application resource is also configured for assigning selected network parameters, for transfer of the identified flow of data packets, based on the interpretation of the prescribed user-selectable quality of service attributes for the specified application service.
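The mechanism above can be sketched as follows. This is a minimal illustration, not the patented implementation: the XML tag names, attribute values, and the mapping to DSCP code points are all assumptions chosen for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical message fragment: tag and attribute names are illustrative,
# not taken from the abstract.
MESSAGE = """\
<service name="video-stream">
  <qos latency="low" bandwidth="high"/>
</service>"""

# Illustrative mapping from user-selectable QoS attributes to a DSCP
# value assigned to the identified flow of data packets.
DSCP_MAP = {
    ("low", "high"): 46,   # EF: low latency, high bandwidth
    ("low", "low"):  26,   # AF31
    ("high", "low"): 0,    # best effort
}

def assign_network_parameters(xml_message: str) -> dict:
    """Parse the QoS tags and assign network parameters for the flow."""
    root = ET.fromstring(xml_message)
    qos = root.find("qos")
    key = (qos.get("latency"), qos.get("bandwidth"))
    return {
        "service": root.get("name"),
        "dscp": DSCP_MAP.get(key, 0),   # default to best effort
    }

params = assign_network_parameters(MESSAGE)
```

In this sketch the router inspects the message in-band and marks the flow's DSCP accordingly; a real implementation would also install the marking into its forwarding path.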
Abstract:
A data processing apparatus in a network receives packet flows that are communicated between a first network node and a second network node, and comprises a clock and latency analysis logic configured for receiving a first data segment that has been communicated from the first node and forwarding the first data segment to the second node; storing a first time value of the clock in association with a first timestamp value obtained from the first data segment; receiving a second data segment that has been communicated from the second node and forwarding the second data segment to the first node; retrieving the first time value based on the first timestamp value; determining a second time value of the clock; and determining a first latency value by computing a difference of the second time value and the first time value. Thus end-to-end packet latency is determined by passively observing timestamp values.
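The latency analysis logic described above can be sketched as a small state machine. This is an illustrative sketch, assuming the second node echoes the first segment's timestamp value back (as the TCP timestamp option does); the class and method names are invented for the example.

```python
import time

class LatencyAnalyzer:
    """Passive latency measurement from observed timestamp values."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock
        self.pending = {}   # first-segment timestamp value -> clock time

    def observe_forward(self, tsval: int) -> None:
        # First data segment, forwarded from the first node to the second:
        # store the local clock time keyed by the segment's timestamp value.
        self.pending[tsval] = self.clock()

    def observe_reverse(self, tsecr: int):
        # Second data segment, forwarded back to the first node, echoing
        # the earlier timestamp.  Latency = second time value - first.
        first = self.pending.pop(tsecr, None)
        if first is None:
            return None      # no matching first segment was observed
        return self.clock() - first

# Deterministic demo with a fake clock instead of real packet captures.
ticks = iter([100.00, 100.25])
analyzer = LatencyAnalyzer(clock=lambda: next(ticks))
analyzer.observe_forward(5551)
latency = analyzer.observe_reverse(5551)   # 100.25 - 100.00
```

Because the apparatus only observes and forwards segments, neither endpoint needs modification, which is the point of the passive scheme.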
Abstract:
A data compression method and system are disclosed. In one embodiment, the data compression method includes receiving a data packet. Also, the method includes compressing the data packet using a confirmed compression history, wherein the confirmed compression history includes previously acknowledged data packets. Further, the method includes sending a compressed data packet to a downstream device. Moreover, the method includes detecting a delivery acknowledgement associated with the compressed data packet. Finally, the method includes updating the confirmed compression history by incorporating the data packet information into the confirmed compression history upon receipt of the delivery acknowledgement.
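A minimal sketch of this confirmed-history scheme is shown below, using zlib's preset-dictionary support to stand in for the compression history. The class names, the sequence-number acknowledgement protocol, and the transmitted history-length field are assumptions for the example; it also assumes in-order delivery and omits the 32 KB window cap a real implementation would enforce.

```python
import os
import zlib

class Compressor:
    """Sender side: compress only against acknowledged history, so a lost
    compressed packet can never desynchronize later decompression."""

    def __init__(self):
        self.confirmed = b""    # acknowledged packets, in order
        self.unacked = {}       # sequence number -> raw packet

    def compress(self, seq: int, packet: bytes):
        c = (zlib.compressobj(zdict=self.confirmed) if self.confirmed
             else zlib.compressobj())
        self.unacked[seq] = packet
        # Send the history length used, so the receiver can select the
        # matching prefix of its own (possibly longer) history.
        return len(self.confirmed), c.compress(packet) + c.flush()

    def on_ack(self, seq: int) -> None:
        # Delivery acknowledgement detected: incorporate the packet
        # into the confirmed compression history.
        packet = self.unacked.pop(seq, None)
        if packet is not None:
            self.confirmed += packet

class Decompressor:
    """Downstream device: mirrors the confirmed history on receipt."""

    def __init__(self):
        self.confirmed = b""

    def decompress(self, hist_len: int, payload: bytes) -> bytes:
        d = (zlib.decompressobj(zdict=self.confirmed[:hist_len]) if hist_len
             else zlib.decompressobj())
        packet = d.decompress(payload) + d.flush()
        self.confirmed += packet   # acknowledged back to the sender
        return packet

# Demo: retransmitting similar data compresses far better once the first
# packet has been acknowledged into the confirmed history.
sender, receiver = Compressor(), Decompressor()
pkt = os.urandom(500)                    # incompressible on its own
h1, c1 = sender.compress(1, pkt)
assert receiver.decompress(h1, c1) == pkt
sender.on_ack(1)                         # delivery acknowledgement arrives
h2, c2 = sender.compress(2, pkt)         # now compressed against history
```

The design choice the abstract emphasizes is that the sender's dictionary only ever contains data the peer has confirmed, trading some compression opportunity for loss tolerance.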
Abstract:
An integrated network switch having multiple network switch ports for outputting data frames also includes a dequeuing system for selectively supplying a data frame for output according to a specified priority by an output switch port. The dequeuing system includes, for each network switch port, a plurality of priority queues configured for holding assigned data frames based on respective priorities assigned by switching logic. A weighted round robin scheduler supplies the assigned data frames held in the priority queues to the output switch port according to a prescribed weighted round robin scheduling. In addition, the dequeuing system uses token bucket filters for selectively passing the assigned data frames to the respective priority queues in a manner that ensures that a given data frame having a large size does not interfere with bandwidth reserved for high-priority packets requiring guaranteed quality of service. Each token bucket filter selectively passes the corresponding assigned data frame to the corresponding priority queue based on a determined availability of at least a required number of tokens corresponding to a determined size of the corresponding assigned data frame. If the token bucket filter determines an insufficient number of tokens are available relative to the required number of tokens, the token bucket filter either drops the frame or shifts the frame to a lower priority queue. Hence, weighted fair queuing can be approximated using weighted round robin scheduling without interference by large-sized data packets.
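The interaction between the token bucket filters and the weighted round robin scheduler can be sketched as below. This is an illustrative model, not the switch hardware: tokens are counted one per byte, the replenishment rates, capacities, and weights are invented for the demo, and frames are represented by their sizes alone.

```python
from collections import deque

class TokenBucketFilter:
    """One token per byte; a frame passes only if enough tokens remain."""

    def __init__(self, rate: int, capacity: int):
        self.rate = rate            # tokens replenished per tick
        self.capacity = capacity
        self.tokens = capacity

    def tick(self) -> None:
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_pass(self, size: int) -> bool:
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def enqueue(size, priority, queues, filters):
    """Pass the frame to its assigned priority queue if tokens allow;
    otherwise shift it to successively lower priorities, else drop it."""
    for p in range(priority, len(queues)):
        if filters[p].try_pass(size):
            queues[p].append(size)
            return p
    return None   # insufficient tokens everywhere: frame is dropped

def wrr_round(queues, weights):
    """One weighted round robin pass: serve up to `weight` frames from
    each priority queue, highest priority first."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                served.append(q.popleft())
    return served

# Demo: priority 0 reserves little burst capacity, so a large frame is
# shifted down to priority 1 while a small frame still passes at 0.
filters = [TokenBucketFilter(rate=100, capacity=200),
           TokenBucketFilter(rate=1500, capacity=4000)]
queues = [deque(), deque()]
large = enqueue(1500, 0, queues, filters)   # shifted to queue 1
small = enqueue(64, 0, queues, filters)     # fits in queue 0
```

The shifting (or dropping) step is what keeps an oversized frame from consuming bandwidth reserved for the guaranteed-QoS queue, so the plain round robin weights approximate weighted fair queuing.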