Uplink signal transmission method, processing device, and system

    Publication No.: US10009104B2

    Publication Date: 2018-06-26

    Application No.: US15054718

    Filing Date: 2016-02-26

    CPC Classification: H04B10/2575 H04Q11/0066

    Abstract: An uplink signal scheduling method, a processing device, and a system. The method includes: when uplink signals sent by at least one transmit device are received, preprocessing the uplink signals to generate a Data Over Cable Service Interface Specification (DOCSIS) frame, where the DOCSIS frame includes at least two uplink signals and each of the at least two uplink signals corresponds to one uplink wavelength; when it is detected that a signal conflict exists in the DOCSIS frame, creating at least two signal groups according to the uplink signals and allocating, to the at least two signal groups, the uplink signals that have a same uplink wavelength and cause the signal conflict; and performing scheduling on the uplink signals according to the signal groups that have undergone allocation.
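The conflict-resolution step described in the abstract (splitting signals that share an uplink wavelength into separate groups, then scheduling group by group) could be sketched roughly as follows. The signal representation and function name are illustrative assumptions, not taken from the patent:

```python
from collections import defaultdict

def schedule_uplink(signals):
    """Split uplink signals into groups so that no two signals sharing
    an uplink wavelength land in the same group (hypothetical sketch)."""
    by_wavelength = defaultdict(list)
    for sig in signals:
        by_wavelength[sig["wavelength"]].append(sig)

    # A conflict exists when two or more signals share a wavelength;
    # the largest such set fixes how many groups are needed.
    max_conflict = max(len(v) for v in by_wavelength.values())
    groups = [[] for _ in range(max_conflict)]
    for sigs in by_wavelength.values():
        for i, sig in enumerate(sigs):
            groups[i].append(sig)  # conflicting signals go to different groups
    return groups
```

Scheduling the resulting groups one after another guarantees that no two same-wavelength signals are transmitted in the same scheduling round.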

    Method and Apparatus for Managing Physical Network Interface Card, and Physical Host
    32.
    Patent Application (In Force)

    Publication No.: US20150207678A1

    Publication Date: 2015-07-23

    Application No.: US14676191

    Filing Date: 2015-04-01

    Abstract: A method and an apparatus for managing one or more physical network interface cards, and a physical host, are provided. One or more virtual network interface cards are created, where each virtual network interface card has a standard network interface card feature and an operation interface; the virtual network interface cards are separately associated with one or more function modules of the physical network interface cards; and the physical network interface cards are managed by managing the virtual network interface cards. In this way, differences in the underlying hardware are shielded from the upper layer, and convenient, efficient centralized management is provided, thereby further improving network resource utilization.

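A minimal in-memory sketch of the virtualization layer the abstract describes: virtual NICs expose a uniform operation interface, each is associated with a physical NIC function module, and a manager controls the physical hardware only through the virtual cards. All class and method names here are hypothetical:

```python
class VirtualNIC:
    """Hypothetical virtual NIC exposing a standard operation interface,
    regardless of the underlying physical hardware."""

    def __init__(self, name, phys_module):
        self.name = name
        self.phys_module = phys_module  # associated physical NIC function module
        self.up = False

    def enable(self):
        self.up = True

    def disable(self):
        self.up = False


class NICManager:
    """Centralized management of physical NICs via their virtual NICs."""

    def __init__(self):
        self.vnics = {}

    def create_vnic(self, name, phys_module):
        vnic = VirtualNIC(name, phys_module)
        self.vnics[name] = vnic
        return vnic

    def enable_all(self):
        # The upper layer never touches the physical cards directly.
        for vnic in self.vnics.values():
            vnic.enable()
```

Because the upper layer only sees `VirtualNIC` objects, swapping the physical hardware behind a function module does not change the management code, which is the hardware-shielding property the abstract claims.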

    Data Cache Method, Device, and System in a Multi-Node System
    33.
    Patent Application (In Force)

    Publication No.: US20130346693A1

    Publication Date: 2013-12-26

    Application No.: US13968714

    Filing Date: 2013-08-16

    Inventor: Xiaofeng Zhang

    IPC Classification: G06F12/08

    Abstract: A data cache method, device, and system in a multi-node system are provided. The method includes: dividing a cache area of a cache medium into multiple sub-areas, where each sub-area corresponds to a node in the system; dividing each sub-area into a thread cache area and a global cache area; when a process reads a file, detecting the read frequency of the file; when the read frequency is greater than a first threshold and the file size does not exceed a second threshold, caching the file in the thread cache area; or when the read frequency is greater than the first threshold and the file size exceeds the second threshold, caching the file in the global cache area. Remote-access overheads of the system are thus reduced, and the I/O performance of the system is improved.

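The two-threshold placement rule in the abstract reduces to a small decision function. The function name and the idea of returning an area label are illustrative; the thresholds themselves would be tuning parameters of a real system:

```python
def choose_cache_area(read_freq, file_size, freq_threshold, size_threshold):
    """Decide where to cache a file under the two-threshold rule:
    hot + small -> per-node thread cache; hot + large -> global cache."""
    if read_freq <= freq_threshold:
        return None              # not read often enough to cache at all
    if file_size <= size_threshold:
        return "thread_cache"    # small hot file: node-local thread cache
    return "global_cache"        # large hot file: shared global cache
```

Keeping small hot files node-local is what cuts the remote accesses; large hot files go to the global area so a single copy can serve every node.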

    Co-processing acceleration method, apparatus, and system
    34.
    Granted Patent (In Force)

    Publication No.: US08478926B1

    Publication Date: 2013-07-02

    Application No.: US13622422

    Filing Date: 2012-09-19

    IPC Classification: G06F13/36

    Abstract: An embodiment of the present invention discloses a co-processing acceleration method, including: receiving a co-processing request message that is sent by a compute node in a computer system and carries address information of to-be-processed data; obtaining the to-be-processed data according to the co-processing request message and storing it in a public buffer card; and allocating the to-be-processed data stored in the public buffer card to an idle co-processor card in the computer system for processing. The added public buffer card serves as a public data buffer channel between the hard disk and each co-processor card of the computer system, so the to-be-processed data does not need to be transferred through the memory of the compute node. This avoids the overhead of transmitting the data through the compute node's memory, thereby breaking through the memory delay and bandwidth bottleneck and increasing the co-processing speed.

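The stage-then-dispatch flow above can be sketched as a simple queue model: data is fetched by address into the public buffer, then handed to whichever co-processor card is idle. The class, the dict-based disk and card representations, and the method names are all assumptions made for illustration:

```python
from collections import deque

class PublicBuffer:
    """Hypothetical public buffer card: a staging area between the hard
    disk and the co-processor cards, bypassing compute-node memory."""

    def __init__(self):
        self.queue = deque()

    def stage(self, data_addr, disk):
        # Fetch the to-be-processed data directly from disk using the
        # address carried in the co-processing request message.
        self.queue.append(disk[data_addr])

    def dispatch(self, coprocessors):
        # Hand each buffered item to the first idle co-processor card.
        assignments = []
        while self.queue:
            idle = next((c for c in coprocessors if c["idle"]), None)
            if idle is None:
                break  # no idle card: data stays buffered
            idle["idle"] = False
            assignments.append((idle["id"], self.queue.popleft()))
        return assignments
```

Because `stage` reads from the disk model straight into the buffer, the compute node's memory never holds the payload, which is the bottleneck the patent claims to avoid.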