Optimal interconnect utilization in a data processing network
    2.
    Granted Patent (Expired)

    Publication No.: US07821944B2

    Publication Date: 2010-10-26

    Application No.: US12059762

    Filing Date: 2008-03-31

    IPC Class: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each of the links in the network's interconnect. The collected data includes source and destination information indicative of the source and destination of corresponding packets. Heavily used links are then identified from the collected data. Packet data associated with a heavily used link is then analyzed to identify a packet source and packet destination combination that is a significant contributor to the packet traffic on the heavily used link. In response, a process associated with the identified packet source and packet destination combination is migrated, such as to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for interconnect links connected to the switch.

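    The collect-identify-analyze loop in the abstract can be sketched in a few lines: aggregate per-link byte counts from agent samples, pick the heaviest link, then find the source/destination pair contributing most to it. This is a minimal illustration, not the patented implementation; the sample tuple format and function name are invented for the example:

```python
from collections import Counter, defaultdict

def heaviest_link_and_top_flow(link_samples):
    """link_samples: iterable of (link_id, src, dst, bytes) tuples,
    as might be reported by per-switch monitoring agents."""
    link_totals = Counter()                 # total bytes seen per link
    flows_per_link = defaultdict(Counter)   # per-link (src, dst) byte counts
    for link, src, dst, nbytes in link_samples:
        link_totals[link] += nbytes
        flows_per_link[link][(src, dst)] += nbytes
    hot_link, _ = link_totals.most_common(1)[0]          # heaviest link
    (src, dst), _ = flows_per_link[hot_link].most_common(1)[0]
    return hot_link, (src, dst)             # candidate flow to migrate

samples = [
    ("L1", "nodeA", "nodeB", 500),
    ("L1", "nodeC", "nodeB", 100),
    ("L2", "nodeA", "nodeC", 200),
]
print(heaviest_link_and_top_flow(samples))  # ('L1', ('nodeA', 'nodeB'))
```

    The returned pair identifies the process pair whose migration to another node would most reduce load on the hot link.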

Method and system for memory address translation and pinning
    3.
    Granted Patent (Expired)

    Publication No.: US07636800B2

    Publication Date: 2009-12-22

    Application No.: US11426588

    Filing Date: 2006-06-27

    Abstract: A method and system for memory address translation and pinning are provided. The method includes attaching a memory address space identifier to a direct memory access (DMA) request, the DMA request being sent by a consumer and using a virtual address in a given address space. The method further includes looking up the memory address space identifier to find a translation of the virtual address in the given address space used in the DMA request to a physical page frame. Provided that the physical page frame is found, the method includes pinning the physical page frame as long as the DMA request is in progress to prevent an unmapping operation of said virtual address in said given address space, and completing the DMA request, wherein the steps of attaching, looking up and pinning are centrally controlled by a host gateway.

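    The translate-then-pin discipline can be modeled as a small state machine: a per-address-space page table, a pin count incremented while a DMA is in flight, and an unmap operation that is refused while the count is nonzero. This is a toy model of the centrally controlled steps; the class and method names are invented, and real pinning happens at the OS/gateway level:

```python
class HostGateway:
    """Toy model of translate-and-pin for DMA requests.
    Names and 4 KiB granularity are illustrative assumptions."""
    PAGE = 4096

    def __init__(self):
        self.tables = {}   # address space id -> {virtual page no.: physical frame}
        self.pins = {}     # (asid, vpn) -> count of in-flight DMA requests

    def map(self, asid, vpn, frame):
        self.tables.setdefault(asid, {})[vpn] = frame

    def dma_begin(self, asid, vaddr):
        """Translate the virtual address and pin its page frame."""
        vpn = vaddr // self.PAGE
        frame = self.tables.get(asid, {}).get(vpn)
        if frame is None:
            raise LookupError("no translation for this address space")
        key = (asid, vpn)
        self.pins[key] = self.pins.get(key, 0) + 1   # pin while DMA in flight
        return frame * self.PAGE + vaddr % self.PAGE

    def dma_complete(self, asid, vaddr):
        self.pins[(asid, vaddr // self.PAGE)] -= 1   # unpin on completion

    def unmap(self, asid, vpn):
        if self.pins.get((asid, vpn), 0):
            raise RuntimeError("page pinned by in-flight DMA")
        del self.tables[asid][vpn]
```

    A caller would `map` a page, issue `dma_begin`, and only after `dma_complete` would `unmap` succeed, mirroring the abstract's guarantee that no unmapping occurs mid-DMA.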

Device, method and computer program product for multi-level address translation
    4.
    Granted Patent (In Force)

    Publication No.: US07600093B2

    Publication Date: 2009-10-06

    Application No.: US11623468

    Filing Date: 2007-01-16

    IPC Class: G06F12/00

    CPC Class: G06F12/1081

    Abstract: A method for retrieving information from a storage unit, the method includes: receiving, by an input output memory management unit, second-level translation information representative of a partition of a storage unit address space; receiving, by the input output memory management unit, a direct memory access request that comprises a consumer identifier and a second memory address that was first-level translated by a communication circuit translation entity; performing, by the input output memory management unit, a second-level translation of the second memory address so as to provide a third memory address, in response to the identity of the consumer; and accessing the storage unit using the third memory address.

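    The second-level step can be illustrated as a lookup keyed by consumer identity: the device has already produced a first-level-translated ("second") address, and the IOMMU maps it through that consumer's table to the final ("third") physical address. A minimal sketch with an invented flat page-table layout, not the patented device:

```python
PAGE = 4096

def second_level_translate(second_tables, consumer_id, second_addr):
    """IOMMU-side second-level lookup. `second_addr` is assumed to be
    already first-level translated by the device-side translation entity;
    the flat per-consumer page table here is illustrative only."""
    table = second_tables[consumer_id]        # table selected by consumer identity
    frame = table[second_addr // PAGE]        # per-page second-level mapping
    return frame * PAGE + second_addr % PAGE  # the third (physical) address

# consumer 7's partition maps intermediate pages 0 and 1 to frames 10 and 11
tables = {7: {0: 10, 1: 11}}
print(second_level_translate(tables, 7, 4100))  # 11*4096 + 4 = 45060
```

    Keying the table by consumer identifier is what lets one IOMMU enforce a separate storage-space partition per consumer.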

Cache architecture to enable accurate cache sensitivity
    5.
    Granted Patent (Expired)

    Publication No.: US06243788B1

    Publication Date: 2001-06-05

    Application No.: US09098988

    Filing Date: 1998-06-17

    IPC Class: G06F12/00

    CPC Class: G06F9/5033

    Abstract: A technique is provided for monitoring the cache footprint of relevant threads on a given processor and its associated cache, thus enabling operating systems to perform better cache-sensitive scheduling. A function of the footprint of a thread in a cache can be used as an indication of the affinity of that thread to that cache's processor. For instance, the larger the number of cachelines already existing in a cache, the smaller the number of cache misses the thread will experience when scheduled on that processor, and hence the greater the affinity of the thread to that processor. Besides a thread's priority and other system-defined parameters, scheduling algorithms can take cache affinity into account when assigning threads for execution on particular processors. This invention describes an apparatus that accurately measures the cache footprint of a thread on a given processor and its associated cache by keeping a state and ownership count of cachelines based on ownership registration and cache usage as determined by a cache monitoring unit.

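    The ownership-count idea can be sketched in software: each cacheline records its owning thread, and a per-thread counter goes up when the thread fills a line and down when another thread takes it over. The counter then serves directly as the affinity estimate. This is a toy software model; the patent describes a hardware cache monitoring unit, and the class below is an invented illustration:

```python
class FootprintMonitor:
    """Toy cache-footprint counter: tracks how many cachelines each
    thread currently owns, in the spirit of the ownership counts
    kept by the patent's cache monitoring unit."""

    def __init__(self, num_lines):
        self.owner = [None] * num_lines     # ownership state per cacheline
        self.footprint = {}                 # thread id -> owned-line count

    def fill(self, line, thread):
        """Register `thread` as the new owner of `line` after a cache fill."""
        prev = self.owner[line]
        if prev is not None:
            self.footprint[prev] -= 1       # displaced owner loses a line
        self.owner[line] = thread
        self.footprint[thread] = self.footprint.get(thread, 0) + 1

    def affinity(self, thread):
        """Footprint as an affinity estimate: more owned lines,
        fewer expected misses on this processor."""
        return self.footprint.get(thread, 0)
```

    A scheduler could compare `affinity(t)` across processors (one monitor per cache) and prefer the processor where the thread's footprint is largest.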

Assist thread for injecting cache memory in a microprocessor
    6.
    Granted Patent (In Force)

    Publication No.: US08949837B2

    Publication Date: 2015-02-03

    Application No.: US13434423

    Filing Date: 2012-03-29

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes the memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler configured to schedule the assist thread in conjunction with the corresponding execution thread is configured to run the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements.

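    The slicing idea, keeping the memory references plus only the arithmetic they depend on, can be sketched as a backward dependence walk over the main thread's instructions. This is a toy illustration under an invented instruction format (`(dest, op, sources)`), not the patented compiler pass:

```python
def assist_slice(instructions):
    """Extract the assist thread: every load, plus only those arithmetic
    instructions a load (transitively) depends on for its address."""
    needed = set()   # registers still needed by instructions already kept
    kept = []
    for dest, op, srcs in reversed(instructions):
        if op == "load" or dest in needed:
            kept.append((dest, op, srcs))
            needed.discard(dest)   # this instruction produces the value
            needed.update(srcs)    # its inputs are now needed in turn
    return list(reversed(kept))

main = [
    ("r1", "arith", []),          # computes an address base
    ("r2", "arith", ["r1"]),      # address arithmetic feeding the load
    ("r3", "load",  ["r2"]),      # memory reference: always kept
    ("r4", "arith", ["r3"]),      # pure compute: dropped from assist thread
]
print([d for d, _, _ in assist_slice(main)])  # ['r1', 'r2', 'r3']
```

    Running this reduced thread ahead of the main thread by the scheduler's threshold is what warms the lower-level caches before the main thread needs the data.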

Optimal interconnect utilization in a data processing network
    7.
    Granted Patent (Expired)

    Publication No.: US07400585B2

    Publication Date: 2008-07-15

    Application No.: US10948414

    Filing Date: 2004-09-23

    IPC Class: H04L12/56

    Abstract: A method for managing packet traffic in a data processing network includes collecting data indicative of the amount of packet traffic traversing each of the links in the network's interconnect. The collected data includes source and destination information indicative of the source and destination of corresponding packets. Heavily used links are then identified from the collected data. Packet data associated with a heavily used link is then analyzed to identify a packet source and packet destination combination that is a significant contributor to the packet traffic on the heavily used link. In response, a process associated with the identified packet source and packet destination combination is migrated, such as to another node of the network, to reduce the traffic on the heavily used link. In one embodiment, an agent installed on each interconnect switch collects the packet data for interconnect links connected to the switch.


ASSIST THREAD FOR INJECTING CACHE MEMORY IN A MICROPROCESSOR
    8.
    Published Application (In Force)

    Publication No.: US20120198459A1

    Publication Date: 2012-08-02

    Application No.: US13434423

    Filing Date: 2012-03-29

    IPC Class: G06F9/46 G06F12/08

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes the memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler configured to schedule the assist thread in conjunction with the corresponding execution thread is configured to run the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements.


Assist thread for injecting cache memory in a microprocessor
    9.
    Granted Patent (In Force)

    Publication No.: US08230422B2

    Publication Date: 2012-07-24

    Application No.: US11034546

    Filing Date: 2005-01-13

    IPC Class: G06F9/46 G06F9/40 G06F13/28

    Abstract: A data processing system includes a microprocessor having access to multiple levels of cache memories. The microprocessor executes a main thread compiled from a source code object. The system includes a processor for executing an assist thread also derived from the source code object. The assist thread includes the memory reference instructions of the main thread and only those arithmetic instructions required to resolve the memory reference instructions. A scheduler configured to schedule the assist thread in conjunction with the corresponding execution thread is configured to run the assist thread ahead of the execution thread by a determinable threshold, such as a number of main processor cycles or code instructions. The assist thread may execute in the main processor or in a dedicated assist processor that makes direct memory accesses to one of the lower level cache memory elements.


METHOD AND SYSTEM FOR MEMORY ADDRESS TRANSLATION AND PINNING
    10.
    Published Application (Pending)

    Publication No.: US20100049883A1

    Publication Date: 2010-02-25

    Application No.: US12568712

    Filing Date: 2009-09-29

    IPC Class: G06F13/28 G06F12/10

    Abstract: A method and system for memory address translation and pinning are provided. The method includes attaching a memory address space identifier to a direct memory access (DMA) request, the DMA request being sent by a consumer and using a virtual address in a given address space. The method further includes looking up the memory address space identifier to find a translation of the virtual address in the given address space used in the DMA request to a physical page frame. Provided that the physical page frame is found, the method includes pinning the physical page frame as long as the DMA request is in progress to prevent an unmapping operation of said virtual address in said given address space, and completing the DMA request, wherein the steps of attaching, looking up and pinning are centrally controlled by a host gateway.
