Abstract:
A high performance network interface is provided for receiving a packet from a network and transferring it to a host computer system. A header portion of a received packet is parsed by a parser module to determine the packet's compatibility with, or conformance to, one or more pre-selected protocols. If compatible, a number of processing functions may be performed to increase the efficiency with which the packet is handled. In one function, a re-assembly engine re-assembles, in a re-assembly buffer, data portions of multiple packets in a single communication flow or connection. Header portions of such packets are stored in a header buffer. An incompatible packet may be stored in another buffer. In another function, a packet batching module determines when multiple packets in one flow are transferred to the host computer system, so that their header portions are processed collectively rather than being interspersed with headers of other flows' packets. In yet another function, the processing of packets through their protocol stacks is distributed among multiple processors by a load distributor, based on their communication flows. A flow database is maintained by a flow database manager to reflect the creation, termination and activity of flows. A packet queue stores packets to await transfer to the host computer system, and a control queue stores information concerning the waiting packets. If the packet queue becomes saturated with packets, a random packet may be discarded. An interrupt modulator may modulate the rate at which interrupts associated with packet arrival events are issued to the host computer system.
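For illustration only (not the patented design), the following minimal C sketch shows one way a load distributor might assign packets to processors by communication flow: a flow key built from the packet's addresses and ports is hashed, and all packets of the same flow land on the same CPU. The names flow_key, flow_hash, and pick_cpu, and the choice of an FNV-1a hash, are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical flow key: the fields that identify one TCP connection. */
struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

/* Simple FNV-1a hash over the flow key; any stable hash would do. */
static uint32_t flow_hash(const struct flow_key *k)
{
    const uint8_t *p = (const uint8_t *)k;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*k); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Distribute protocol-stack processing by flow: every packet of one
 * connection lands on the same CPU, so its headers are processed in order. */
static unsigned pick_cpu(const struct flow_key *k, unsigned num_cpus)
{
    return flow_hash(k) % num_cpus;
}

int main(void)
{
    struct flow_key k = { 0x0a000001, 0x0a000002, 49152, 80 };
    printf("flow -> CPU %u of 4\n", pick_cpu(&k, 4));
    return 0;
}
```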
Abstract:
A prefetch apparatus optimizes bandwidth in a computer network by prefetching data blocks in an ATM network before they are demanded, thereby effectively reducing memory read latency. The method of the preferred embodiment includes the steps of: 1) computing a prefetch address of a next sequential data block given an address of a requested data block; 2) comparing a current request address against a previously computed prefetch address; and 3) generating a hit/miss indication corresponding to whether the current request address matches the previously computed prefetch address.
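The three steps enumerated above can be sketched directly; the following C fragment is a hypothetical illustration (the constant BLOCK_SIZE and the name prefetch_check are assumptions, not taken from the patent). It reports a hit or miss against the previously computed prefetch address and then computes the address of the next sequential data block.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define BLOCK_SIZE 64u  /* hypothetical data-block size in bytes */

/* Last prefetch address computed for the previous request (step 1). */
static uint32_t prev_prefetch_addr;
static bool prefetch_valid;

/* For each demand request: report hit/miss against the previously
 * computed prefetch address (steps 2 and 3), then compute the next
 * sequential block's address so it can be fetched ahead of demand. */
static bool prefetch_check(uint32_t request_addr)
{
    bool hit = prefetch_valid && (request_addr == prev_prefetch_addr);
    prev_prefetch_addr = request_addr + BLOCK_SIZE;  /* next block */
    prefetch_valid = true;
    return hit;
}

int main(void)
{
    printf("%d\n", prefetch_check(0x1000));  /* miss: nothing prefetched yet */
    printf("%d\n", prefetch_check(0x1040));  /* hit: sequential access */
    printf("%d\n", prefetch_check(0x2000));  /* miss: non-sequential access */
    return 0;
}
```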
Abstract:
A lawnmower has an improved clutch-brake mechanism interposed between the engine drive shaft and the rotating blade to stop the blade except when an operator tensions a control cable. The clutch-brake includes cylindrical input and output members selectively drivingly coupled by a clutch spring wound around both members. A control sleeve positioned around the clutch spring connects with the input end of the clutch spring. When the control sleeve is braked, the clutch spring releases its grip on the input member and the output is no longer driven. A coiled brake band extends around the control sleeve to selectively effect this braking action. An improved floating mount is provided for the brake band. An improved lost-motion connection is provided between the control sleeve and the output member to limit the twisting of the clutch spring. An optional slip clutch is provided for connecting the output member to the blade.
Abstract:
A method for reducing address space in a shared virtualized I/O device includes allocating hardware resources, including variable resources and permanent resources, to one or more functions. The method also includes allocating address space for an I/O mapping of the resources in a system memory, and assigning a respective portion of that address space for each function. The method further includes assigning space within each respective portion for variable resources available for allocation to the function to which the respective portion is assigned, and further assigning space within each respective portion for a set of permanent resources that have been allocated to the function to which the respective portion is assigned. The method further includes providing a translation table having a plurality of entries, and storing, within each entry of the translation table, a different internal address of a permanent resource that has been allocated to a particular function.
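As a rough illustration of the translation table described above (hypothetical sizes and names, not the patented layout), each entry below holds the internal address of a permanent resource allocated to one function, and a lookup translates a function-relative resource index to that internal address.

```c
#include <stdint.h>
#include <stdio.h>

#define RES_PER_FUNC 4   /* hypothetical: permanent resources per function */
#define NUM_FUNCS    2

/* One translation-table entry: the internal address of a permanent
 * resource that has been allocated to a particular function. */
struct xlate_entry {
    uint64_t internal_addr;
};

static struct xlate_entry xlate[NUM_FUNCS][RES_PER_FUNC];

/* Record that resource slot 'idx' of function 'fn' maps to 'addr'. */
static void xlate_set(unsigned fn, unsigned idx, uint64_t addr)
{
    xlate[fn][idx].internal_addr = addr;
}

/* Translate a function-relative resource index to its internal address. */
static uint64_t xlate_lookup(unsigned fn, unsigned idx)
{
    return xlate[fn][idx].internal_addr;
}

int main(void)
{
    xlate_set(0, 0, 0x4000);   /* function 0, slot 0 -> internal 0x4000 */
    xlate_set(1, 0, 0x9000);   /* function 1 owns a different resource  */
    printf("fn0/slot0 -> 0x%llx\n", (unsigned long long)xlate_lookup(0, 0));
    return 0;
}
```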
Abstract:
An I/O device includes a host interface that may receive and process transaction packets sent by a number of processing units, with each processing unit corresponding to a respective root complex. The host interface includes an error handling unit having error logic implemented in hardware that may determine, as each packet is received, whether each transaction packet has an error and store information corresponding to any detected errors. The error handling unit may include an error processor that may be configured to execute error processing instructions to determine any error processing operations based upon the information. The error processor may also generate and send one or more instruction operations, each corresponding to a particular error processing operation. The error handling unit may also include an error processing unit that may execute the one or more instruction operations to perform the particular error processing operations.
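A loose sketch of the split between hardware error logic and a separate error processor is shown below; the record format, log depth, and function names are hypothetical. The fast path logs minimal information as each packet is checked, and a later pass walks the log to decide on error processing operations.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical record written by hardware error logic as packets arrive. */
struct err_record {
    uint16_t func_id;    /* which function the packet targeted */
    uint8_t  err_code;   /* what kind of error was detected    */
};

#define ERR_LOG_DEPTH 16
static struct err_record err_log[ERR_LOG_DEPTH];
static unsigned err_count;

/* Hardware path: cheap check per received packet, store info on error. */
static void check_packet(uint16_t func_id, int malformed)
{
    if (malformed && err_count < ERR_LOG_DEPTH) {
        err_log[err_count].func_id = func_id;
        err_log[err_count].err_code = 1;  /* e.g. a malformed packet */
        err_count++;
    }
}

/* Error-processor path: walk the log later and decide what to do. */
static void process_errors(void)
{
    for (unsigned i = 0; i < err_count; i++)
        printf("error %u on function %u -> schedule handling op\n",
               (unsigned)err_log[i].err_code, (unsigned)err_log[i].func_id);
    err_count = 0;
}

int main(void)
{
    check_packet(3, 1);   /* a bad packet for function 3   */
    check_packet(5, 0);   /* a good packet, nothing logged */
    process_errors();
    return 0;
}
```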
Abstract:
The described embodiments provide a system for accessing values for configuration space registers (CSRs). This system includes a CSR data storage mechanism with an address input and a CSR data output. The CSR data storage mechanism includes a memory containing a number of memory locations for storing the true or actual values of the CSRs for the functions of corresponding devices. In these embodiments, the memory locations are divided into at least one shared region and at least one unique region. In response to receiving an address for a memory location on the address input, the CSR data storage mechanism accesses the value for the CSR in the memory location in a corresponding shared region or unique region.
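The shared/unique split might be pictured with a sketch like the following (the sizes, the decode rule, and names such as csr_read are assumptions): register indices below a threshold resolve to a single shared copy of the value, while the rest resolve to the requesting function's own region.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_FUNCS   4
#define SHARED_REGS 8   /* CSRs whose value is common to all functions */
#define UNIQUE_REGS 8   /* CSRs with a per-function value              */

static uint32_t shared_region[SHARED_REGS];
static uint32_t unique_region[NUM_FUNCS][UNIQUE_REGS];

/* Decode a (function, register-index) pair: indices below SHARED_REGS
 * resolve to the single shared copy, the rest to that function's region. */
static uint32_t csr_read(unsigned fn, unsigned reg)
{
    if (reg < SHARED_REGS)
        return shared_region[reg];
    return unique_region[fn][reg - SHARED_REGS];
}

int main(void)
{
    shared_region[0]    = 0x0000beef;  /* e.g. an ID value shared by all     */
    unique_region[2][0] = 0x00000001;  /* e.g. a per-function control bit    */
    printf("fn2 reg0 = 0x%08x (shared)\n", csr_read(2, 0));
    printf("fn2 reg8 = 0x%08x (unique)\n", csr_read(2, SHARED_REGS));
    return 0;
}
```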
Abstract:
An I/O device includes a host interface coupled to a plurality of hardware resources. The host interface includes a transaction layer packet (TLP) processing unit that may receive and process a plurality of transaction layer packets sent by a plurality of processing units. Each processing unit may correspond to a respective root complex. The TLP processing unit may identify a transaction type and a processing unit corresponding to each transaction layer packet and store each transaction layer packet within a storage according to the transaction type and the processing unit. The TLP processing unit may select one or more transaction layer packets from the storage for process scheduling based upon a set of fairness criteria using an arbitration scheme. The TLP processing unit may further select and dispatch transaction layer packets for processing by downstream application hardware based upon additional criteria.
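One simple way to realize fairness over per-type, per-unit storage is round-robin arbitration, sketched below with hypothetical queue sizes and names; it illustrates the general idea rather than the patented scheduler. Each (transaction type, processing unit) pair has its own queue, and the scheduler gives every non-empty queue a turn.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_UNITS 2   /* processing units / root complexes          */
#define NUM_TYPES 2   /* e.g. posted vs. non-posted transactions    */
#define QUEUE_LEN 4

/* One queue per (transaction type, processing unit) pair. */
struct tlp_queue {
    uint32_t pkt[QUEUE_LEN];
    unsigned head, tail;
};

static struct tlp_queue q[NUM_TYPES][NUM_UNITS];
static unsigned rr_next;  /* round-robin pointer over all queues */

static void enqueue(unsigned type, unsigned unit, uint32_t pkt)
{
    struct tlp_queue *tq = &q[type][unit];
    if (tq->tail - tq->head < QUEUE_LEN)
        tq->pkt[tq->tail++ % QUEUE_LEN] = pkt;
}

/* Round-robin arbitration across all queues: each non-empty queue
 * gets a turn, so no unit or transaction type can starve the others. */
static int schedule_next(uint32_t *out)
{
    for (unsigned i = 0; i < NUM_TYPES * NUM_UNITS; i++) {
        unsigned idx = (rr_next + i) % (NUM_TYPES * NUM_UNITS);
        struct tlp_queue *tq = &q[idx / NUM_UNITS][idx % NUM_UNITS];
        if (tq->head != tq->tail) {
            *out = tq->pkt[tq->head++ % QUEUE_LEN];
            rr_next = (idx + 1) % (NUM_TYPES * NUM_UNITS);
            return 1;
        }
    }
    return 0;  /* nothing queued */
}

int main(void)
{
    uint32_t pkt;
    enqueue(0, 0, 0xa1);
    enqueue(1, 1, 0xb2);
    while (schedule_next(&pkt))
        printf("dispatch TLP 0x%x\n", (unsigned)pkt);
    return 0;
}
```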
Abstract:
An I/O device includes a host interface configured to process function level reset (FLR) requests in a specified amount of time. The host interface includes a control unit and groups of configuration space registers, each group corresponding to a function. The host interface also includes application availability registers, each associated with a respective function, and which may indicate whether application hardware within the respective function is available for access by a corresponding application device driver. The I/O device also includes application hardware resources associated with a respective function. In response to receiving an FLR request of a particular function, the control unit may cause the associated application availability register to indicate that the application hardware within the particular function is not available to the driver. The control unit may reset the corresponding configuration space registers within a predetermined amount of time and reset the associated application hardware resources.
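The FLR sequence described above might look roughly like the following C sketch (the structure fields, the reset-to-zero behavior, and the re-advertising step at the end are assumptions): the availability flag is cleared before anything else so the driver stops touching the application hardware, the function's configuration space registers are reset, and the application hardware state is reset.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define NUM_FUNCS     2
#define CSRS_PER_FUNC 16

/* Per-function state: its configuration space registers, an availability
 * flag the driver checks, and a stand-in for its application hardware. */
struct func_state {
    uint32_t csr[CSRS_PER_FUNC];
    bool     app_available;
    uint32_t app_hw_state;
};

static struct func_state funcs[NUM_FUNCS];

/* Handle a function level reset: first tell the driver the application
 * hardware is unavailable, then reset the configuration space registers
 * (which must complete within a bounded time), then reset the application
 * hardware resources and re-advertise the function as usable. */
static void handle_flr(unsigned fn)
{
    struct func_state *f = &funcs[fn];
    f->app_available = false;           /* block driver access          */
    memset(f->csr, 0, sizeof(f->csr));  /* reset configuration space    */
    f->app_hw_state = 0;                /* reset application hardware   */
    f->app_available = true;            /* assumed: function usable again */
}

int main(void)
{
    funcs[1].csr[0] = 0xdead;
    funcs[1].app_hw_state = 7;
    handle_flr(1);
    printf("fn1: csr0=%u hw=%u avail=%d\n",
           (unsigned)funcs[1].csr[0], (unsigned)funcs[1].app_hw_state,
           funcs[1].app_available);
    return 0;
}
```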