Abstract:
An uplink signal scheduling method, a processing device, and a system. The method includes: when uplink signals sent by at least one transmit device are received, preprocessing the uplink signals to generate a Data Over Cable Service Interface Specification (DOCSIS) frame, where the DOCSIS frame includes at least two uplink signals and each of the at least two uplink signals corresponds to one uplink wavelength; when it is detected that a signal conflict exists in the DOCSIS frame, creating at least two signal groups according to the uplink signals, and allocating, to the at least two signal groups, the uplink signals that have a same uplink wavelength and cause the signal conflict; and performing scheduling on the uplink signals according to the signal groups that have undergone allocation.
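A minimal sketch of the conflict-handling step, assuming hypothetical names (schedule_uplink_signals, a frame modeled as a list of (signal_id, wavelength) pairs); it illustrates spreading same-wavelength signals across separate signal groups, not the patented implementation:

```python
from collections import defaultdict

def schedule_uplink_signals(frame_signals, num_groups=2):
    """frame_signals: list of (signal_id, wavelength) pairs in one DOCSIS frame."""
    by_wavelength = defaultdict(list)
    for signal_id, wavelength in frame_signals:
        by_wavelength[wavelength].append(signal_id)

    groups = [[] for _ in range(num_groups)]
    for wavelength, ids in by_wavelength.items():
        if len(ids) > 1:
            # conflict: the same uplink wavelength carries several signals,
            # so spread them round-robin across the signal groups
            for i, signal_id in enumerate(ids):
                groups[i % num_groups].append((signal_id, wavelength))
        else:
            groups[0].append((ids[0], wavelength))
    # each group can then be scheduled separately, e.g. in successive time slots
    return groups

if __name__ == "__main__":
    frame = [("s1", 1550), ("s2", 1550), ("s3", 1310)]  # s1 and s2 conflict
    for slot, group in enumerate(schedule_uplink_signals(frame)):
        print(f"slot {slot}: {group}")
```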
Abstract:
A method and an apparatus for managing one or more physical network interface cards, and a physical host, are provided. One or more virtual network interface cards are created, where each virtual network interface card has a standard network interface card feature and an operation interface; the one or more virtual network interface cards are separately associated with one or more function modules of the physical network interface cards; and the physical network interface cards are managed by managing the one or more virtual network interface cards. In this way, differences in the underlying hardware are shielded from the upper layer, and convenient and efficient centralized management is provided, thereby further improving network resource utilization.
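A minimal sketch of this abstraction, under the assumption that each physical-NIC function module exposes enable/disable/read_counters calls; all names (VirtualNIC, NICManager, DummyFunctionModule) are hypothetical:

```python
class DummyFunctionModule:
    """Stand-in for one function module of a physical NIC (assumed interface)."""
    def enable(self):
        print("physical function enabled")
    def disable(self):
        print("physical function disabled")
    def read_counters(self):
        return {"rx_packets": 0, "tx_packets": 0}

class VirtualNIC:
    """Standard NIC feature set plus a uniform operation interface."""
    def __init__(self, name, function_module):
        self.name = name
        self._fn = function_module  # associated physical-NIC function module
    def up(self):
        self._fn.enable()
    def down(self):
        self._fn.disable()
    def stats(self):
        return self._fn.read_counters()

class NICManager:
    """Manages physical NICs only through their virtual NICs."""
    def __init__(self):
        self._vnics = {}
    def register(self, vnic):
        self._vnics[vnic.name] = vnic
    def bring_all_up(self):
        for vnic in self._vnics.values():
            vnic.up()  # same call regardless of the underlying hardware

if __name__ == "__main__":
    manager = NICManager()
    manager.register(VirtualNIC("vnic0", DummyFunctionModule()))
    manager.bring_all_up()
```

The manager never touches vendor-specific hardware directly, which is the sense in which hardware differences are shielded from the upper layer.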
Abstract:
A data cache method, device, and system in a multi-node system are provided. The method includes: dividing a cache area of a cache medium into multiple sub-areas, where each sub-area corresponds to a node in the system; dividing each of the sub-areas into a thread cache area and a global cache area; when a process reads a file, detecting the read frequency of the file; when the read frequency of the file is greater than a first threshold and the size of the file does not exceed a second threshold, caching the file in the thread cache area; or, when the read frequency of the file is greater than the first threshold and the size of the file exceeds the second threshold, caching the file in the global cache area. In this way, the overhead of remote access in the system is reduced, and the I/O performance of the system is improved.
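A minimal sketch of the placement rule, with the two thresholds as illustrative parameters (the source gives no concrete values); place_in_cache and the area names are hypothetical:

```python
def place_in_cache(read_freq, file_size, freq_threshold=10, size_threshold=4096):
    """Decide where a file belongs; thresholds are illustrative, not from the source."""
    if read_freq <= freq_threshold:
        return None               # not read often enough: leave it uncached
    if file_size <= size_threshold:
        return "thread_cache"     # hot and small: node-local thread cache area
    return "global_cache"         # hot but large: shared global cache area

# e.g. a 2 KiB file read 50 times lands in the node-local thread cache area
assert place_in_cache(read_freq=50, file_size=2048) == "thread_cache"
assert place_in_cache(read_freq=50, file_size=1 << 20) == "global_cache"
```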
Abstract:
An embodiment of the present invention discloses a co-processing acceleration method, including: receiving a co-processing request message that is sent by a compute node in a computer system and that carries address information of to-be-processed data; obtaining the to-be-processed data according to the co-processing request message and storing the to-be-processed data in a public buffer card; and allocating the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing. The added public buffer card serves as a public data buffer channel between a hard disk and each co-processor card of the computer system, so the to-be-processed data does not need to be transferred through the memory of the compute node. This avoids the overhead of transmitting the data through the memory of the compute node, thereby breaking through the bottleneck of memory latency and bandwidth and increasing the co-processing speed.
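A minimal sketch of the dispatch path, with the public buffer card modeled as a queue, the co-processor cards as objects with a busy flag, and the hard disk as a plain dict; all names (Coprocessor, dispatch) are hypothetical illustrations:

```python
from queue import Queue

class Coprocessor:
    def __init__(self, cid):
        self.cid = cid
        self.busy = False
    def process(self, data):
        self.busy = True
        try:
            return data.upper()  # placeholder for the real computation
        finally:
            self.busy = False

def dispatch(request, disk, buffer_card, coprocessors):
    # 1. read the data addressed by the request from disk into the buffer card,
    #    without staging it in the compute node's memory
    buffer_card.put(disk[request["addr"]])
    # 2. allocate the buffered data to the first idle co-processor card
    data = buffer_card.get()
    for cp in coprocessors:
        if not cp.busy:
            return cp.process(data)
    raise RuntimeError("no idle co-processor card available")

if __name__ == "__main__":
    disk = {"0x10": "payload"}
    cards = [Coprocessor(0), Coprocessor(1)]
    print(dispatch({"addr": "0x10"}, disk, Queue(), cards))
```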