ADJUSTING STORE GATHER WINDOW DURATION IN A DATA PROCESSING SYSTEM SUPPORTING SIMULTANEOUS MULTITHREADING

    Publication No.: US20220405125A1

    Publication Date: 2022-12-22

    Application No.: US17351478

    Application Date: 2021-06-18

    Abstract: In at least some embodiments, a store-type operation is received and buffered within a store queue entry of a store queue associated with a cache memory of a processor core capable of executing multiple simultaneous hardware threads. A thread identifier indicating a particular hardware thread among the multiple hardware threads that issued the store-type operation is recorded. An indication of whether the store queue entry is a most recently allocated store queue entry for buffering store-type operations of the hardware thread is also maintained. While the indication indicates the store queue entry is a most recently allocated store queue entry for buffering store-type operations of the particular hardware thread, the store queue extends a duration of a store gathering window applicable to the store queue entry. For example, the duration may be extended by decreasing a rate at which the store gathering window applicable to the store queue entry ends.
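
    Below is a minimal C++ sketch of the throttling behavior the abstract describes, assuming a per-entry countdown that closes the gather window and a halved decrement rate while the entry remains its thread's most recently allocated entry; the field names, the countdown mechanism, and the specific rate are illustrative assumptions rather than details taken from the application.

        #include <iostream>

        // Hypothetical model of one store-queue entry's gather window (all names
        // and the specific throttling policy are assumptions for illustration).
        struct StoreQueueEntry {
            unsigned thread_id;     // hardware thread that issued the buffered store
            bool     most_recent;   // entry is the youngest allocation for that thread
            int      gather_ticks;  // remaining cycles in the store-gathering window
            unsigned cycle = 0;     // local cycle count, used to throttle decrements

            // Advance one cycle. While the entry is its thread's most recently
            // allocated entry, the counter is decremented only every other cycle,
            // i.e. the rate at which the gather window ends is halved.
            void tick() {
                ++cycle;
                if (gather_ticks <= 0) return;
                if (most_recent && (cycle & 1u)) return;  // skip decrement this cycle
                --gather_ticks;
            }

            bool can_gather() const { return gather_ticks > 0; }
        };

        int main() {
            StoreQueueEntry young{/*thread_id=*/2, /*most_recent=*/true, /*gather_ticks=*/4};
            StoreQueueEntry older{/*thread_id=*/2, /*most_recent=*/false, /*gather_ticks=*/4};
            int young_cycles = 0, older_cycles = 0;
            while (young.can_gather()) { young.tick(); ++young_cycles; }
            while (older.can_gather()) { older.tick(); ++older_cycles; }
            std::cout << "most-recent entry stayed open " << young_cycles
                      << " cycles, older entry " << older_cycles << " cycles\n";  // 8 vs 4
            return 0;
        }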

    INITIATING INTERCONNECT OPERATION WITHOUT WAITING ON LOWER LEVEL CACHE DIRECTORY LOOKUP

    Publication No.: US20210342275A1

    Publication Date: 2021-11-04

    Application No.: US16862785

    Application Date: 2020-04-30

    IPC Classification: G06F12/128

    Abstract: An upper level cache receives from an associated processor core a plurality of memory access requests including at least first and second memory access requests of differing first and second classes. Based on class histories associated with the first and second classes of memory access requests, the upper level cache initiates, on a system interconnect fabric, a first interconnect transaction corresponding to the first memory access request without first issuing the first memory access request to a lower level cache via a private communication channel between the upper level cache and the lower level cache. The upper level cache initiates, on the system interconnect fabric, a second interconnect transaction corresponding to the second memory access request only after first issuing the second memory access request to the lower level cache via the private communication channel between the upper level cache and the lower level cache and receiving a response to the second memory access request from the lower level cache.
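
    To picture the class-history mechanism, the following hedged C++ sketch keeps a saturating two-bit counter per request class and bypasses the lower level cache lookup when recent requests of that class were not serviced there; the two classes, the counter width, and the threshold are assumptions made for the example, not details from the application.

        #include <array>
        #include <cstdint>
        #include <iostream>

        // Request classes are illustrative; the application only requires that the
        // memory access requests fall into differing classes with separate histories.
        enum class ReqClass : unsigned { LoadMiss = 0, StoreMiss = 1 };

        class UpperLevelCache {
            // One saturating 2-bit counter per class: high values mean the lower-level
            // cache has recently been unable to service requests of that class.
            std::array<uint8_t, 2> class_history_{};

        public:
            // Decide whether to launch the interconnect transaction immediately,
            // without first querying the lower-level cache over the private channel.
            bool bypass_lower_level(ReqClass c) const {
                return class_history_[static_cast<unsigned>(c)] >= 2;
            }

            // Update the history once the request resolves: serviced_below is true
            // when the lower-level cache was able to satisfy the request.
            void record_outcome(ReqClass c, bool serviced_below) {
                uint8_t& h = class_history_[static_cast<unsigned>(c)];
                if (serviced_below) { if (h > 0) --h; }
                else                { if (h < 3) ++h; }
            }
        };

        int main() {
            UpperLevelCache l2;
            // Two consecutive load misses that the lower-level cache could not serve
            // push the class history past the threshold, so the next one would go
            // straight to the interconnect fabric.
            l2.record_outcome(ReqClass::LoadMiss, /*serviced_below=*/false);
            l2.record_outcome(ReqClass::LoadMiss, /*serviced_below=*/false);
            std::cout << std::boolalpha
                      << l2.bypass_lower_level(ReqClass::LoadMiss) << ' '     // true
                      << l2.bypass_lower_level(ReqClass::StoreMiss) << '\n';  // false
            return 0;
        }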

    TRANSLATION ENTRY INVALIDATION IN A MULTITHREADED DATA PROCESSING SYSTEM

    Publication No.: US20200183843A1

    Publication Date: 2020-06-11

    Application No.: US16216624

    Application Date: 2018-12-11

    IPC Classification: G06F12/0842, G06F12/1027

    Abstract: A multiprocessor data processing system includes a processor core having a translation structure for buffering a plurality of translation entries. In response to receipt of a translation invalidation request, the processor core determines from the translation invalidation request that the translation invalidation request does not require draining of memory referent instructions for which address translation has been performed by reference to a translation entry to be invalidated. Based on the determination, the processor core invalidates the translation entry in the translation structure and confirms completion of invalidation of the translation entry without regard to draining from the processor core of memory access requests for which address translation was performed by reference to the translation entry.
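
    The decision the abstract turns on can be sketched in a few lines of C++, assuming the invalidation request itself carries a bit telling the core whether draining is required; the request layout, the flat TLB container, and the callbacks are hypothetical stand-ins.

        #include <algorithm>
        #include <cstdint>
        #include <functional>
        #include <vector>

        // Hypothetical request format: the core decodes from the request itself
        // whether in-flight memory referent instructions must be drained first.
        struct TlbiRequest {
            uint64_t effective_page;  // page whose translation entry is targeted
            bool     requires_drain;  // decoded from the request, not looked up
        };

        // Stand-in for the core's translation structure (e.g., a TLB).
        struct TranslationStructure {
            std::vector<uint64_t> cached_pages;

            void invalidate(uint64_t page) {
                cached_pages.erase(
                    std::remove(cached_pages.begin(), cached_pages.end(), page),
                    cached_pages.end());
            }
        };

        // Invalidate the targeted entry and confirm completion. When no drain is
        // required, completion is confirmed without waiting for memory accesses
        // that translated through the old entry to drain from the core.
        void handle_tlbi(TranslationStructure& tlb, const TlbiRequest& req,
                         const std::function<void()>& drain_memory_referents,
                         const std::function<void()>& confirm_completion) {
            tlb.invalidate(req.effective_page);
            if (req.requires_drain) {
                drain_memory_referents();
            }
            confirm_completion();
        }

        int main() {
            TranslationStructure tlb{{0x1000, 0x2000}};
            handle_tlbi(tlb, {0x1000, /*requires_drain=*/false},
                        [] { /* drain in-flight memory referents */ },
                        [] { /* confirm completion to the requester */ });
            return tlb.cached_pages.size() == 1 ? 0 : 1;
        }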

    REMOTE NODE BROADCAST OF REQUESTS IN A MULTINODE DATA PROCESSING SYSTEM

    Publication No.: US20190220409A1

    Publication Date: 2019-07-18

    Application No.: US15873366

    Application Date: 2018-01-17

    Abstract: A cache coherent data processing system includes at least non-overlapping first, second, and third coherency domains. A master in the first coherency domain of the cache coherent data processing system selects a scope of an initial broadcast of an interconnect operation from among a set of scopes including (1) a remote scope including both the first coherency domain and the second coherency domain, but excluding the third coherency domain, which is a peer of the first coherency domain, and (2) a local scope including only the first coherency domain. The master then performs the initial broadcast of the interconnect operation within the cache coherent data processing system utilizing the selected scope, where performing the initial broadcast includes the master initiating broadcast of the interconnect operation within the first coherency domain.
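
    To make the scope choice concrete, the hedged C++ sketch below models a scope as the set of coherency domains the initial broadcast reaches; the hint used to choose between the remote and local scopes is purely an assumption for the example, since the abstract does not specify the selection heuristic.

        #include <cstdint>
        #include <set>
        #include <unordered_set>

        // A scope is modeled here as the set of coherency domain identifiers that
        // the initial broadcast of an interconnect operation will reach.
        using DomainSet = std::set<unsigned>;

        struct Master {
            unsigned home_domain;                                 // the first coherency domain
            std::unordered_set<uint64_t> lines_cached_remotely;   // hypothetical hint state

            // Local scope: only this master's own domain. Remote scope: this domain
            // plus one targeted peer domain, excluding every other peer domain.
            DomainSet select_scope(uint64_t line_addr, unsigned target_domain) const {
                if (lines_cached_remotely.count(line_addr)) {
                    return {home_domain, target_domain};          // remote scope
                }
                return {home_domain};                             // local scope
            }
        };

        int main() {
            Master m{/*home_domain=*/0, /*lines_cached_remotely=*/{0xA000}};
            DomainSet remote = m.select_scope(0xA000, /*target_domain=*/1);  // {0, 1}
            DomainSet local  = m.select_scope(0xB000, /*target_domain=*/1);  // {0}
            // The initial broadcast is then issued only to the domains in the set;
            // a third, peer coherency domain (e.g., domain 2) is excluded either way.
            return (remote.size() == 2 && local.size() == 1) ? 0 : 1;
        }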

    EXPEDITED SERVICING OF STORE OPERATIONS IN A DATA PROCESSING SYSTEM

    Status: Granted

    Publication No.: US20170060757A1

    Publication Date: 2017-03-02

    Application No.: US14839264

    Application Date: 2015-08-28

    IPC Classification: G06F12/08, G06F9/30

    Abstract: In at least some embodiments, a processor core generates a store operation by executing a store instruction in an instruction sequence. The store operation is marked as a high priority store operation in response to detecting a barrier instruction in the instruction sequence immediately preceding the store instruction in program order, and is not so marked otherwise. The store operation is buffered in a store queue associated with a cache memory of the processor core. Handling of the store operation in the store queue is expedited in response to the store operation being marked as a high priority store operation, and is not expedited otherwise.
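
    A minimal C++ sketch of both steps follows, assuming the expedited handling takes the form of skipping the entry's normal store-gathering delay; the gather-window field and its default length are illustrative assumptions, not details from the application.

        #include <cstdint>

        // A store operation plus the single marking bit described in the abstract.
        struct StoreOp {
            uint64_t addr;
            uint64_t data;
            bool     high_priority;  // set iff a barrier immediately precedes the store
        };

        // Marking at instruction-processing time: only a store whose immediately
        // preceding instruction in program order was a barrier gets the mark.
        StoreOp generate_store(uint64_t addr, uint64_t data, bool prev_was_barrier) {
            return {addr, data, /*high_priority=*/prev_was_barrier};
        }

        // One illustrative form of "expedited handling": a marked store gets no
        // gathering delay in its store-queue entry and is dispatch-eligible at once.
        struct StoreQueueEntry {
            StoreOp op;
            int     gather_ticks;  // cycles the entry would normally gather for

            bool dispatch_eligible() const { return gather_ticks == 0; }
        };

        StoreQueueEntry buffer_store(const StoreOp& op) {
            return {op, op.high_priority ? 0 : 8};
        }

        int main() {
            StoreQueueEntry fenced = buffer_store(generate_store(0x100, 1, /*prev_was_barrier=*/true));
            StoreQueueEntry plain  = buffer_store(generate_store(0x200, 2, /*prev_was_barrier=*/false));
            return (fenced.dispatch_eligible() && !plain.dispatch_eligible()) ? 0 : 1;
        }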
