Variable hit latency cache
    2.
    Granted Patent

    Publication No.: US11893241B1

    Publication Date: 2024-02-06

    Application No.: US17823695

    Filing Date: 2022-08-31

    Applicant: Apple Inc.

    Abstract: A variable latency cache memory is disclosed. A cache subsystem includes a pipeline control circuit configured to initiate cache memory accesses for data. The cache subsystem further includes a cache memory circuit having a data array arranged into a plurality of groups, wherein different ones of the plurality of groups have different minimum access latencies due to different distances from the pipeline control circuit. A plurality of latency control circuits is configured to ensure that the latency is bounded to a maximum value for a given access to the data array, wherein a given latency control circuit is associated with a corresponding group of the plurality of groups. The latency for a given access may thus vary from a minimum access latency for the group closest to the pipeline control circuit to a maximum latency for an access to the group furthest from the pipeline control circuit.
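    The abstract's core idea can be sketched in a short behavioral model (this is an illustrative simplification, not the patented circuitry; the class names, group count, and cycle figures are assumptions): each group carries its own distance-based minimum latency, and a per-group latency control clamps every observed latency to a shared maximum bound.

```python
class LatencyControl:
    """Bounds the observed latency of one data-array group to a fixed maximum."""
    def __init__(self, min_latency: int, max_latency: int):
        assert min_latency <= max_latency
        self.min_latency = min_latency
        self.max_latency = max_latency

    def access_latency(self, extra_delay: int = 0) -> int:
        # Raw latency is the group's distance-based minimum plus any
        # transient delay; the control circuit clamps it to the bound.
        raw = self.min_latency + extra_delay
        return min(raw, self.max_latency)


class VariableLatencyCache:
    def __init__(self, group_min_latencies):
        # The maximum bound is set by the farthest group from the
        # pipeline control circuit.
        max_latency = max(group_min_latencies)
        self.groups = [LatencyControl(m, max_latency)
                       for m in group_min_latencies]

    def access(self, group_index: int, extra_delay: int = 0) -> int:
        return self.groups[group_index].access_latency(extra_delay)


cache = VariableLatencyCache([2, 3, 4, 5])  # cycles, nearest to farthest
print(cache.access(0))                   # nearest group: minimum latency, 2
print(cache.access(3))                   # farthest group: 5 (also the bound)
print(cache.access(0, extra_delay=10))   # delayed access clamped to 5
```

    The point of the clamp is the one the abstract makes: latency varies per group, but software and the pipeline only ever see values between the nearest group's minimum and the farthest group's maximum.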

    Mechanism for allowing speculative execution of loads beyond a wait for event instruction
    5.
    Granted Patent (In force)

    Publication No.: US09501284B2

    Publication Date: 2016-11-22

    Application No.: US14502901

    Filing Date: 2014-09-30

    Applicant: Apple Inc.

    CPC classification number: G06F9/3842 G06F9/30087 G06F9/3834 G06F9/3857

    Abstract: A processor includes a mechanism that checks for and flushes only speculative loads and any respective dependent instructions that are younger than an executed wait for event (WEV) instruction, and which also match an address of a store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor. The mechanism may allow speculative loads that do not match the address of any store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor.
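    The selective-flush rule in the abstract can be illustrated with a small sketch (a behavioral model only, not the patented hardware; the tuple layout and function name are assumptions): among speculative loads younger than the executed WEV instruction, only those whose address matches a store observed from another processor before that processor's paired SEV, plus their dependent instructions, are flushed.

```python
def loads_to_flush(speculative_loads, remote_store_addrs):
    """Select which speculative loads (and dependents) must be flushed.

    speculative_loads: list of (tag, addr, dependent_tags) tuples for
        loads younger than the executed WEV instruction.
    remote_store_addrs: addresses of stores determined to have been
        executed by a different processor before its paired SEV.
    """
    flush = set()
    for tag, addr, dependents in speculative_loads:
        if addr in remote_store_addrs:
            flush.add(tag)            # flush the matching load...
            flush.update(dependents)  # ...and its dependent instructions
    return flush


loads = [(10, 0x100, {11, 12}),   # load at 0x100 with two dependents
         (20, 0x200, {21})]       # load at 0x200, no address match
print(loads_to_flush(loads, {0x100}))  # only the 0x100 load and its deps
```

    Loads with no address match survive, which is the mechanism's benefit over flushing everything younger than the WEV.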


    MECHANISM FOR ALLOWING SPECULATIVE EXECUTION OF LOADS BEYOND A WAIT FOR EVENT INSTRUCTION
    6.
    Patent Application (In force)

    Publication No.: US20160092236A1

    Publication Date: 2016-03-31

    Application No.: US14502901

    Filing Date: 2014-09-30

    Applicant: Apple Inc.

    CPC classification number: G06F9/3842 G06F9/30087 G06F9/3834 G06F9/3857

    Abstract: A processor includes a mechanism that checks for and flushes only speculative loads and any respective dependent instructions that are younger than an executed wait for event (WEV) instruction, and which also match an address of a store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor. The mechanism may allow speculative loads that do not match the address of any store instruction that has been determined to have been executed by a different processor prior to execution of the paired SEV instruction by the different processor.


    Request ordering in a cache
    8.
    Granted Patent

    Publication No.: US12216578B2

    Publication Date: 2025-02-04

    Application No.: US18353830

    Filing Date: 2023-07-17

    Applicant: Apple Inc.

    Abstract: A cache may include multiple request handling pipes, each of which may further include multiple request buffers, for storing device requests from one or more processors to one or more devices. Some of the device requests may need to be sent to the devices in a particular order. For a given one of such device requests, the cache may select a request handling pipe, based on an address indicated by the device request, and select a request buffer, based on the available entries of the request buffers of the selected request handling pipe, to store the device request. The cache may further use first-level and second-level token stores to track and maintain the device requests in order when transmitting the device requests to the devices.
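    A simplified sketch of the ordering scheme described above (an interpretation under stated assumptions, not the patented design: the two token stores are modeled here as a single global arrival counter and a drain counter, and pipe selection is a plain address modulo): requests are steered to a pipe by address, buffered in arrival order, and drained to the device strictly by token.

```python
from collections import deque


class OrderedRequestCache:
    def __init__(self, num_pipes: int):
        self.pipes = [deque() for _ in range(num_pipes)]
        self.next_token = 0    # tokens issued at arrival (global order)
        self.drain_token = 0   # next token allowed to go to the device

    def enqueue(self, addr: int, payload):
        pipe = addr % len(self.pipes)  # pipe selected by request address
        self.pipes[pipe].append((self.next_token, payload))
        self.next_token += 1

    def drain(self):
        """Transmit buffered requests to the device strictly in token order."""
        issued = []
        progress = True
        while progress:
            progress = False
            for pipe in self.pipes:
                if pipe and pipe[0][0] == self.drain_token:
                    issued.append(pipe.popleft()[1])
                    self.drain_token += 1
                    progress = True
        return issued


c = OrderedRequestCache(num_pipes=2)
for addr in (0, 1, 3, 2):          # addresses land in different pipes
    c.enqueue(addr, f"req@{addr}")
order = c.drain()
print(order)  # arrival order preserved across pipes
```

    Even though requests sit in different pipes, the drain counter forces transmission in arrival order, which is the property the token stores maintain.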

    Victim allocations in shared system cache

    Publication No.: US10963392B1

    Publication Date: 2021-03-30

    Application No.: US16048645

    Filing Date: 2018-07-30

    Applicant: Apple Inc.

    Abstract: A system and method for efficiently handling data selected for eviction in a computing system. In various embodiments, a computing system includes one or more processors, a system memory, and a victim cache. The cache controller of a particular cache in a cache memory subsystem includes an allocator for determining whether to allocate data evicted from the particular cache into the victim cache. The data fetched into the particular cache includes data fetched to service miss requests, which include demand requests and prefetch requests. To determine whether to allocate, the allocator determines whether the usefulness of data fetched into the particular cache exceeds a threshold. If so, the evicted data is stored in the victim cache. If not, the evicted data bypasses the victim cache. Data determined to have been accessed by a processor is deemed to be of higher usefulness.
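    The allocator's decision reduces to a threshold test, sketched below (a minimal model, assuming a per-line usefulness counter accumulated while the line was resident; the function names and threshold value are illustrative, not from the patent):

```python
def should_allocate_to_victim(usefulness: int, threshold: int = 2) -> bool:
    """Allocate an evicted line into the victim cache only if its
    usefulness (e.g. demand accesses while resident) exceeds the threshold."""
    return usefulness > threshold


def evict(line_usefulness: int, victim_cache: list, line):
    if should_allocate_to_victim(line_usefulness):
        victim_cache.append(line)  # useful data: keep it one level longer
    # otherwise the evicted line bypasses the victim cache entirely


victim = []
evict(3, victim, "hot-line")   # accessed while resident -> allocated
evict(0, victim, "cold-line")  # prefetched but never used -> bypassed
print(victim)  # ['hot-line']
```

    Bypassing low-usefulness lines keeps victim-cache capacity for data that a processor has actually demonstrated it re-references.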
