Abstract:
Mechanisms for processing communications between data processing devices are provided. The mechanisms of the illustrative embodiments provide a set of techniques that sustains media speed by distributing transmit- and receive-side processing over multiple processing cores. These techniques also enable the design of multi-threaded network interface controller (NIC) hardware that efficiently hides the latency of direct memory access (DMA) operations associated with data packet transfers over an input/output (I/O) bus. Multiple processing cores may operate concurrently, each using a separate instance of a communication protocol stack and device driver to prepare data packets for transmission, while separate hardware-implemented send queue managers in a network adapter process those packets. Multiple hardware receive packet processors in the network adapter may be used, along with a flow classification engine, to route received data packets to the appropriate receive queues and processing cores for processing.
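The routing of received packets to per-core receive queues can be pictured as a hash-based flow classification step, similar in spirit to receive-side scaling. The sketch below is illustrative only; the names (flow_key, classify_flow, NUM_RX_QUEUES), the FNV-1a hash, and the queue count are assumptions, not details taken from the abstract.

#include <stdint.h>
#include <stdio.h>

#define NUM_RX_QUEUES 4   /* hypothetical: one receive queue per processing core */

/* Hypothetical flow key: the fields a flow classification engine might hash. */
struct flow_key {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint16_t src_port;
    uint16_t dst_port;
};

/* Simple FNV-1a hash over the flow key; a real engine could use a different hash. */
static uint32_t hash_flow(const struct flow_key *k)
{
    const uint8_t *p = (const uint8_t *)k;
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < sizeof(*k); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    return h;
}

/* Map a received packet's flow to one of the per-core receive queues. */
static unsigned classify_flow(const struct flow_key *k)
{
    return hash_flow(k) % NUM_RX_QUEUES;
}

int main(void)
{
    struct flow_key k = { 0x0a000001u, 0x0a000002u, 49152, 80 };
    printf("packet routed to receive queue %u\n", classify_flow(&k));
    return 0;
}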
Abstract:
A method for exchanging message data in a distributed computer system between a sending and a receiving hardware system. The sending hardware system includes a first memory system, and the receiving hardware system includes a second memory system with a second data buffer and a second memory region. The sending hardware system and the receiving hardware system are coupled via a non-transparent bridge unit. The method includes allocating empty memory, writing information about the empty memory, copying payload data directly from the sending hardware system to the empty memory locations, and writing information about the copied payload data to the second data buffer of the second memory system inside the receiving hardware system. A computer program product for carrying out the method is also provided.
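The sequence can be sketched as receive descriptors that the receiver pre-publishes and the sender consumes. The code below simulates the two memory systems in a single process; the structure names (rx_desc, completion), buffer sizes, and the use of ordinary pointers are assumptions made for illustration, whereas a real implementation would write across the non-transparent bridge's address window.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAYLOAD_MAX 256

/* Receiver-side memory region: empty buffers the receiver allocates in advance. */
static uint8_t rx_region[4][PAYLOAD_MAX];

/* Descriptor the receiver publishes so the sender knows where empty memory lives.
 * In hardware this information would be written through the bridge window. */
struct rx_desc {
    uint64_t addr;   /* location of the empty buffer (simulated with a pointer) */
    uint32_t len;    /* size of the empty buffer */
};

/* Completion the sender writes to the receiver's second data buffer after copying. */
struct completion {
    uint64_t addr;   /* where the payload was placed */
    uint32_t len;    /* number of payload bytes copied */
};

static struct rx_desc    desc_buffer[4];  /* sender-visible view of empty memory */
static struct completion data_buffer[4];  /* receiver's second data buffer */

int main(void)
{
    /* 1. Receiver allocates empty memory and writes information about it. */
    for (int i = 0; i < 4; i++) {
        desc_buffer[i].addr = (uint64_t)(uintptr_t)rx_region[i];
        desc_buffer[i].len  = PAYLOAD_MAX;
    }

    /* 2. Sender copies payload data directly into the advertised empty memory. */
    const char msg[] = "hello over the NTB";
    memcpy((void *)(uintptr_t)desc_buffer[0].addr, msg, sizeof msg);

    /* 3. Sender writes information about the copied payload to the data buffer. */
    data_buffer[0].addr = desc_buffer[0].addr;
    data_buffer[0].len  = sizeof msg;

    /* 4. Receiver consumes the completion and reads the payload in place. */
    printf("received %u bytes: %s\n", data_buffer[0].len,
           (const char *)(uintptr_t)data_buffer[0].addr);
    return 0;
}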
Abstract:
A method for exchanging message data in a distributed computer system between a sending and a receiving hardware system. The sending hardware system includes a first memory system, and the receiving hardware system includes a second memory system with a second data buffer and a second memory region. The sending hardware system and the receiving hardware system are coupled via a non-transparent bridge unit. The method includes allocating empty memory, writing information about the empty memory, copying payload data directly from the sending hardware system to the empty memory locations, and writing information about the copied payload data to the second data buffer of the second memory system inside the receiving hardware system. A system and computer program product for carrying out the method are also provided.
Abstract:
Translating between an Ethernet protocol used by a first network component and a Converged Enhanced Ethernet (CEE) protocol used by a second network component, the first and second components coupled through a CEE converter that translates by: for data flow from the first network component to the second network component: receiving, by the CEE converter, traffic flow definition parameters for a single CEE protocol data flow; calculating, by a credit manager, available buffer space in an outbound frame buffer of the CEE converter for the data flow; communicating, by the credit manager to a CEE credit driver of the first component, the calculated size of the buffer space together with a start sequence number and a flow identifier; and responding, by the CEE credit driver to the CEE converter, with Ethernet frames comprising a private header that includes the flow identifier and a sequence number.
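The credit exchange can be pictured as two small messages: a credit grant from the credit manager and a per-frame private header from the CEE credit driver. The field layout, structure names, and byte counts below are a hypothetical sketch made only to show the sequencing; the abstract does not publish a wire format.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical credit grant sent by the credit manager to the CEE credit driver. */
struct credit_grant {
    uint32_t flow_id;        /* identifies the single CEE protocol data flow */
    uint32_t start_seq;      /* first sequence number the driver may use */
    uint32_t buffer_bytes;   /* available space in the converter's outbound frame buffer */
};

/* Hypothetical private header the driver prepends to each Ethernet frame. */
struct private_header {
    uint32_t flow_id;
    uint32_t seq;
};

/* Driver-side check: may a frame of 'len' bytes be sent against the current grant? */
static int may_send(const struct credit_grant *g, uint32_t bytes_used, uint32_t len)
{
    return bytes_used + len <= g->buffer_bytes;
}

int main(void)
{
    struct credit_grant grant = { .flow_id = 7, .start_seq = 100, .buffer_bytes = 4096 };
    uint32_t used = 0, seq = grant.start_seq;
    uint32_t frame_len = 1500;

    /* Send frames tagged with flow identifier and sequence number until the
     * granted buffer space is exhausted. */
    for (; may_send(&grant, used, frame_len); seq++) {
        struct private_header hdr = { .flow_id = grant.flow_id, .seq = seq };
        printf("send frame: flow %u seq %u (%u bytes)\n", hdr.flow_id, hdr.seq, frame_len);
        used += frame_len;
    }
    printf("credit exhausted after %u bytes\n", used);
    return 0;
}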
Abstract:
A data processing system includes a main storage, an input/output memory management unit (IOMMU) coupled to the main storage, a peripheral component interconnect (PCI) device coupled to the IOMMU, and a mapper. The system is configured to allocate an amount of physical memory in the main storage, and the IOMMU is configured to provide access to the main storage and to map a PCI address from the PCI device to a physical memory address within the main storage. The mapper is configured to perform a mapping between the allocated amount of physical memory of the main storage and a contiguous PCI address space. The IOMMU is further configured to translate PCI addresses of the contiguous PCI address space to physical memory addresses within the main storage.
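The mapper's role can be sketched as a per-page translation table: scattered physical pages are presented to the PCI device as one contiguous PCI address range, and the IOMMU walks the table to resolve each PCI address. The page size, table layout, base address, and function name below are assumptions made for illustration.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 4u

/* Translation table built by the mapper: index = PCI page number within the
 * contiguous PCI address space, value = physical page address in main storage. */
static uint64_t iommu_table[NUM_PAGES];

#define PCI_BASE 0x80000000ull   /* hypothetical start of the contiguous PCI range */

/* IOMMU lookup: translate a PCI address into a physical memory address. */
static uint64_t iommu_translate(uint64_t pci_addr)
{
    uint64_t offset = pci_addr - PCI_BASE;
    uint64_t page   = offset / PAGE_SIZE;
    return iommu_table[page] + (offset % PAGE_SIZE);
}

int main(void)
{
    /* Mapper: scattered physical pages backing one contiguous PCI range. */
    uint64_t scattered[NUM_PAGES] = { 0x12345000, 0x0ABCD000, 0x7777A000, 0x00010000 };
    for (unsigned i = 0; i < NUM_PAGES; i++)
        iommu_table[i] = scattered[i];

    uint64_t pci_addr = PCI_BASE + 2 * PAGE_SIZE + 0x10;  /* falls in the third page */
    printf("PCI 0x%llx -> physical 0x%llx\n",
           (unsigned long long)pci_addr,
           (unsigned long long)iommu_translate(pci_addr));
    return 0;
}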
Abstract:
Translating between a first communication protocol used by a first network component and a second communication protocol used by a second network component, where translating includes: receiving, by a network engine adapter operating independently from the first and second network components, data packets from the first and second network components; and performing, by the network engine adapter, a combined communication protocol based on the first communication protocol and the second communication protocol, including manipulating data packets of at least one of the first communication protocol or the second communication protocol, thereby offloading performance requirements for the combined communication protocol from the first and second network components.
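One way to picture the packet manipulation is as header rewriting in the adapter: a frame arriving in the first protocol's framing is re-encapsulated into the second protocol's framing before forwarding, so neither endpoint has to implement the combined protocol itself. The header layouts and function name below are hypothetical; this is a sketch of one possible manipulation, not the patented method.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical framings for the two protocols the adapter sits between. */
struct proto_a_hdr { uint16_t type; uint16_t len; };
struct proto_b_hdr { uint32_t stream_id; uint16_t len; uint16_t flags; };

#define MAX_FRAME 1600

/* Re-encapsulate: strip the first protocol's header, prepend the second's,
 * and return the length of the rewritten frame. */
static size_t translate_a_to_b(const uint8_t *in, uint8_t *out, uint32_t stream_id)
{
    struct proto_a_hdr a;
    memcpy(&a, in, sizeof a);                 /* read the incoming header */

    struct proto_b_hdr b = { .stream_id = stream_id, .len = a.len, .flags = 0 };
    memcpy(out, &b, sizeof b);                /* write the outgoing header */
    memcpy(out + sizeof b, in + sizeof a, a.len);
    return sizeof b + a.len;
}

int main(void)
{
    uint8_t in[MAX_FRAME] = { 0 }, out[MAX_FRAME];
    struct proto_a_hdr a = { .type = 1, .len = 5 };
    memcpy(in, &a, sizeof a);
    memcpy(in + sizeof a, "hello", 5);

    size_t n = translate_a_to_b(in, out, 42);
    printf("rewritten frame is %zu bytes\n", n);
    return 0;
}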
Abstract:
A synchronization-optimized queuing method and device to minimize software/hardware interaction in network interface hardware during an end-of-initiative process, including network adapter queue implementations for network interface hardware for optimized communication in a computer system. With the present invention, an end-of-initiative procedure that ensures the network interface hardware has received an interrupt enable and then rechecks the interrupt queue is unnecessary.
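For context, a conventional end-of-initiative sequence looks roughly like the loop below: software drains the queue, re-enables interrupts, and then must recheck the queue to close the race with entries that arrived in between. The queue layout and names are hypothetical; the abstract's point is that its queue design makes the final recheck, and the hardware interaction it implies, unnecessary.

#include <stdbool.h>
#include <stdio.h>

#define QUEUE_SLOTS 8

/* Hypothetical completion queue shared between software and the adapter. */
static int  queue[QUEUE_SLOTS];
static int  head, tail;               /* software consumes at head, hardware produces at tail */
static bool interrupts_enabled;

static bool queue_empty(void) { return head == tail; }

static void process_entry(int entry) { printf("processed entry %d\n", entry); }

/* Conventional end-of-initiative processing. */
static void poll_queue(void)
{
    for (;;) {
        while (!queue_empty())
            process_entry(queue[head++ % QUEUE_SLOTS]);

        interrupts_enabled = true;    /* hand initiative back to the hardware */

        /* Conventional designs must recheck here: an entry may have arrived
         * after the drain but before the interrupt enable reached the hardware.
         * The abstract's queue design is intended to make this step unnecessary. */
        if (queue_empty())
            break;
        interrupts_enabled = false;   /* race lost: take initiative back and drain again */
    }
}

int main(void)
{
    queue[tail++ % QUEUE_SLOTS] = 1;  /* simulate two completions from the adapter */
    queue[tail++ % QUEUE_SLOTS] = 2;
    poll_queue();
    printf("interrupts enabled: %s\n", interrupts_enabled ? "yes" : "no");
    return 0;
}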
Abstract:
A computing system including a communication network architecture with a transport layer mechanism. The computing system is capable of supporting a multitude of different application protocols involving information and/or data exchange between an operating system instance and various firmware services. The computing system may include an operating system instance with a Generic Transport Driver supporting the application protocols in the operating system instance, a firmware service connected to a Generic Transport Facility via a Generic Firmware Service Interface, and a virtual machine with a Generic Transport Passthrough. The Generic Transport Driver of the operating system instance exchanges communication protocol data with the Generic Transport Facility of the firmware component via the Generic Transport Passthrough.
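The layering can be sketched as a small set of interfaces: the Generic Transport Driver hands protocol data to the passthrough, which forwards it unchanged to the Generic Transport Facility, which dispatches through the Generic Firmware Service Interface to a firmware service. The message shape, function names, and callback registration below are assumptions made to show the layering, not the patented interfaces.

#include <stdio.h>

/* Hypothetical message carrying application-protocol data between layers. */
struct transport_msg {
    int         protocol_id;
    const char *payload;
};

/* Generic Firmware Service Interface: how the facility hands data to a service. */
typedef void (*firmware_service_fn)(const struct transport_msg *msg);

static void example_firmware_service(const struct transport_msg *msg)
{
    printf("firmware service got protocol %d: %s\n", msg->protocol_id, msg->payload);
}

/* Generic Transport Facility: dispatches by protocol id to registered services. */
#define MAX_PROTOCOLS 8
static firmware_service_fn services[MAX_PROTOCOLS];

static void facility_deliver(const struct transport_msg *msg)
{
    if (msg->protocol_id >= 0 && msg->protocol_id < MAX_PROTOCOLS &&
        services[msg->protocol_id])
        services[msg->protocol_id](msg);
}

/* Generic Transport Passthrough: the virtual machine forwards data unchanged. */
static void passthrough_forward(const struct transport_msg *msg)
{
    facility_deliver(msg);
}

/* Generic Transport Driver: entry point used by the operating system instance. */
static void driver_send(int protocol_id, const char *payload)
{
    struct transport_msg msg = { protocol_id, payload };
    passthrough_forward(&msg);
}

int main(void)
{
    services[3] = example_firmware_service;   /* register one firmware service */
    driver_send(3, "hello firmware");
    return 0;
}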