Abstract:
A high performance network interface is provided for receiving a packet from a network and transferring it to a host computer system. A header portion of a received packet is parsed by a parser module to determine the packet's compatibility with, or conformance to, one or more pre-selected protocols. If compatible, a number of processing functions may be performed to increase the efficiency with which the packet is handled. In one function, a re-assembly engine re-assembles, in a re-assembly buffer, data portions of multiple packets in a single communication flow or connection. Header portions of such packets are stored in a header buffer. An incompatible packet may be stored in another buffer. In another function, a packet batching module determines when multiple packets in one flow are transferred to the host computer system, so that their header portions are processed collectively rather than being interspersed with headers of other flows' packets. In yet another function, the processing of packets through their protocol stacks is distributed among multiple processors by a load distributor, based on their communication flows. A flow database is maintained by a flow database manager to reflect the creation, termination and activity of flows. A packet queue stores packets to await transfer to the host computer system, and a control queue stores information concerning the waiting packets. If the packet queue becomes saturated with packets, a random packet may be discarded. An interrupt modulator may modulate the rate at which interrupts associated with packet arrival events are issued to the host computer system.
Abstract:
A system and method are provided for transferring a packet received from a network to a host computer according to an operation code associated with the packet. A packet received at a network interface is parsed to retrieve information from a header portion of the packet. A flow key is generated for a received packet that was formatted with one of a set of predetermined protocols. A packet's flow key identifies a communication flow that comprises the packet. Based on some of the retrieved information, a code is associated with the packet to inform a transfer engine how the packet should be transferred to host memory. Based on a packet's code, the transfer engine stores the packet in one or more host memory buffers. If the packet was formatted with one of the set of predetermined protocols, its data is re-assembled in a re-assembly buffer with data from other packets in the same communication flow. Re-assembled data may be provided to a destination application or user through page flipping. If the packet is being re-assembled, a header portion of the packet is stored in a separate header buffer. If the packet is not being re-assembled, it is stored in its entirety in the header buffer if it is smaller than a predetermined threshold. If a non-re-assembled packet is larger than the threshold for the header buffer, it is stored in another type of buffer for larger non-re-assembled packets. After a packet is stored in a buffer, the transfer engine informs the host computer of the packet by configuring a descriptor with information on the packet and releasing the descriptor to the host computer.
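The buffer-selection rule above can be summarized in a short sketch. This is an illustration only: the function name `choose_buffer`, the threshold value, and the buffer labels are assumptions for the example, not terms from the patent.

```python
# Hypothetical sketch of the opcode-driven buffer selection described above.
HEADER_BUF_THRESHOLD = 256  # assumed size cutoff, in bytes

def choose_buffer(being_reassembled: bool, packet_len: int) -> str:
    """Return which host memory buffer type should receive a packet."""
    if being_reassembled:
        # Data joins the flow's re-assembly buffer; the header portion
        # is split off into a separate header buffer.
        return "reassembly+header"
    if packet_len < HEADER_BUF_THRESHOLD:
        # Small non-re-assembled packets are stored whole in the header buffer.
        return "header"
    # Larger non-re-assembled packets go to a dedicated buffer type.
    return "non-reassembly"

print(choose_buffer(True, 1500))   # reassembly+header
print(choose_buffer(False, 64))    # header
print(choose_buffer(False, 1500))  # non-reassembly
```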
Abstract:
A high performance network interface receives network traffic in the form of packets. Prior to being transferred to a host computer, a packet is stored in a packet queue. A system and method are provided for randomly discarding a packet if the rate of packet transfers cannot keep pace with the rate of packet arrivals at the queue. When a packet must be dropped, a selected packet may be discarded as it arrives at the queue, or a packet already in the queue may be dropped. A packet queue is apportioned into multiple regions, any of which may overlap or share a common boundary. A probability indicator is associated with a region to specify the probability of a packet being discarded when the level of traffic stored in the queue is within the region. A counter may be employed in conjunction with a probability indicator to identify individual packets. Probability indicators may differ from region to region, so that the probability of discarding a packet fluctuates as the level of traffic stored in the queue changes. In addition to selecting packets to be dropped on a random basis, information gleaned from a packet may be applied to prevent certain types of packets from being dropped. The information derived from a packet may be obtained during a procedure in which one or more of the packet's headers are parsed. By parsing a packet, it may be determined whether the packet conforms to a pre-selected protocol.
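The region-based discard policy resembles early-discard queue management and can be sketched as follows. The region boundaries and probabilities here are invented for the example; the patent does not specify particular values.

```python
import random

# Illustrative regions: (upper bound on queue occupancy, discard probability).
# Values are assumptions for the example, not taken from the abstract.
REGIONS = [
    (0.50, 0.0),   # queue less than half full: never discard
    (0.75, 0.25),  # 50-75% full: discard with probability 0.25
    (1.00, 1.0),   # above 75% full: always discard
]

def should_discard(queue_level: float, rng=random.random) -> bool:
    """Decide whether an arriving packet is dropped, given the fraction
    of the packet queue currently occupied (0.0 - 1.0)."""
    for upper_bound, drop_probability in REGIONS:
        if queue_level <= upper_bound:
            return rng() < drop_probability
    return True  # queue saturated
```

Because the probability indicator differs per region, the drop probability rises as the queue fills, which is the fluctuation the abstract describes.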
Abstract:
A system and method are provided for identifying related packets in a communication flow for the purpose of collectively processing them through a protocol stack comprising one or more protocols under which the packets were transmitted. A packet received at a network interface is parsed to retrieve information from one or more protocol headers. A flow key is generated to identify a communication flow that includes the packet, and is stored in a database of flow keys. When the packet is placed in a queue to be transferred to a host computer, the flow key and/or its flow number (e.g., its index into the database) is stored in a separate queue. Near the time at which the packet is transferred to the host computer, a dynamic packet batching module searches for a packet that is related to the packet being transferred (i.e., is in the same flow) but that will be transferred later in time. If a related packet is located, the host computer is alerted and, as a result, delays processing the transferred packet until the related packet is also received. By collectively processing the related packets, processor time is more efficiently utilized.
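The lookahead step can be sketched in a few lines. The queue representation (a sequence of flow numbers for packets awaiting transfer) and the function name are assumptions for illustration.

```python
from collections import deque

def related_packet_follows(flow_number: int, pending_queue) -> bool:
    """Scan the queue of packets awaiting transfer for another packet in
    the same flow; if one is found, the host can defer protocol processing
    of the current packet until the related packet also arrives."""
    return any(entry == flow_number for entry in pending_queue)

pending = deque([7, 3, 7, 12])  # flow numbers of packets awaiting transfer
print(related_packet_follows(7, pending))   # True: batch with a later packet
print(related_packet_follows(5, pending))   # False: process immediately
```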
Abstract:
A system and method are provided for managing information concerning a network flow comprising packets sent from a source entity to a destination entity served by a network interface. A network flow is established for each datagram sent from the source entity to the destination entity. A flow key, identifying the source and destination entities, is stored in a data structure along with information concerning validity of the flow, sequence of data in the flow datagram and how recently the flow was active. Once a flow is established, it is updated each time a packet containing data from the flow's datagram is received. When such a packet is received, an operation code is generated for identifying whether the packet is suitable for a particular network interface function. An operation code may, for example, indicate that a packet contains data to be re-assembled with other data from the same flow. Another operation code may indicate that a packet is not suitable for data re-assembly. Another operation code may specify that the packet is simply a control packet with no data, or that the packet was received out of order.
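A minimal sketch of such a flow-table entry and op-code decision follows. The field names, the op-code labels, and the classification rule are hypothetical stand-ins that mirror the fields the abstract names (flow key, validity, sequence, recency); they are not the patent's actual codes.

```python
from dataclasses import dataclass
import time

@dataclass
class FlowEntry:
    flow_key: tuple       # (source id, destination id)
    valid: bool           # whether the flow entry is currently valid
    next_seq: int         # next expected sequence number in the datagram
    last_active: float    # most recent activity, e.g. for replacement policy

def classify(entry: FlowEntry, seq: int, payload_len: int) -> str:
    """Generate an illustrative op code for a packet in this flow."""
    if payload_len == 0:
        return "CONTROL_NO_DATA"     # control packet carrying no data
    if not entry.valid or seq != entry.next_seq:
        return "NO_REASSEMBLY"       # e.g. out of order: unsuitable
    entry.next_seq = seq + payload_len
    entry.last_active = time.time()  # flow was just active
    return "REASSEMBLE"              # data can join the re-assembly buffer
```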
Abstract:
A system and method are provided for distributing or sharing the processing of network traffic (e.g., through a protocol stack on a host computer system) received at a multiprocessor computer system. A packet formatted according to one or more communication protocols is received from a network entity at a network interface circuit of a multiprocessor computer. A header portion of the packet is parsed to retrieve information stored in one or more protocol headers, such as source and destination identifiers or a virtual communication connection identifier. In one embodiment, a source identifier and a destination identifier are combined to form a flow key that is subjected to a hash function. The modulus of the result of the hash function over the number of processors in the multiprocessor computer is then calculated. In another embodiment, a modulus operation is performed on the packet's virtual communication connection identifier. The result of the modulus operation identifies a processor to which the packet is submitted for processing.
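The first embodiment reduces to hashing the flow key and taking the result modulo the processor count. In this sketch, CRC-32 stands in for whatever hash function the hardware actually uses, and the processor count is assumed for the example.

```python
import zlib

NUM_PROCESSORS = 4  # assumed for the example

def select_processor(src_id: bytes, dst_id: bytes) -> int:
    """Combine source and destination identifiers into a flow key,
    hash it, and take the modulus over the processor count."""
    flow_key = src_id + dst_id
    return zlib.crc32(flow_key) % NUM_PROCESSORS

cpu = select_processor(b"10.0.0.1:80", b"10.0.0.2:5500")
assert 0 <= cpu < NUM_PROCESSORS
# Every packet of the same flow maps to the same processor:
assert cpu == select_processor(b"10.0.0.1:80", b"10.0.0.2:5500")
```

Keeping all packets of a flow on one processor is what lets the protocol stack process each flow without cross-processor synchronization.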
Abstract:
A method for sharing a network interface among multiple hosts is disclosed. The method includes providing a network interface that includes a plurality of memory access channels, associating a first set of the memory access channels with a first host, and associating a second set of the memory access channels with a second host.
Abstract:
A method of resolving mutex contention within a network interface unit is disclosed. The method includes providing a plurality of memory access channels and moving a thread via at least one of the memory access channels; the plurality of memory access channels allows a thread to be moved while avoiding mutex contention.
Abstract:
A method for addressing system latency within a network system is disclosed. The method includes providing a network interface that includes a plurality of memory access channels, and moving data within each of the memory access channels independently and in parallel to and from a memory system, so that one or more of the channels operate efficiently in the presence of arbitrary memory latencies across multiple requests.
Abstract:
Disclosed are systems and methods for reclaiming posted buffers during a direct memory access (DMA) operation executed by an input/output device (I/O device) in connection with data transfer across a network. During the data transfer, the I/O device may cancel a buffer provided by a device driver thereby relinquishing ownership of the buffer. A condition for the I/O device relinquishing ownership of a buffer may be provided by a distance vector that may be associated with the buffer. The distance vector may specify a maximum allowable distance between the buffer and a buffer that is currently fetched by the I/O device. Alternatively, a condition for the I/O device relinquishing ownership of a buffer may be provided by a timer. The timer may specify a maximum time that the I/O device may maintain ownership of a particular buffer. In other implementations, a mechanism is provided to force the I/O device to relinquish some or all of the buffers that it controls.
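The distance-vector condition can be sketched as a simple check over posted buffer indices. The ring-index arithmetic, function name, and data shapes below are illustrative assumptions; the patent describes the condition, not a specific data structure.

```python
def buffers_to_reclaim(posted, current_index, distance_vectors):
    """Return indices of posted buffers whose distance from the buffer
    currently fetched by the I/O device exceeds the maximum allowed by
    their associated distance vector."""
    reclaim = []
    for index in posted:
        distance = current_index - index          # how far the buffer trails
        max_distance = distance_vectors.get(index)
        if max_distance is not None and distance > max_distance:
            reclaim.append(index)  # device relinquishes ownership
    return reclaim

# Buffers 0-3 are posted; the device is fetching buffer 5. Buffer 0 may
# trail by at most 3 positions, so it is the one reclaimed.
print(buffers_to_reclaim([0, 1, 2, 3], 5, {0: 3, 1: 8, 2: 8, 3: 8}))  # [0]
```

A timer-based condition would work the same way, with elapsed ownership time in place of positional distance.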