Apparatus and method for managing a cache

    Publication Number: US11036279B2

    Publication Date: 2021-06-15

    Application Number: US16397025

    Application Date: 2019-04-29

    Applicant: Arm Limited

    Inventor: Alex James Waugh

    Abstract: An apparatus and method are provided for managing a cache. The cache is arranged to comprise a plurality of cache sections, where each cache section is powered independently of the other cache sections, and the apparatus has power control circuitry to control power to each of the cache sections. The power control circuitry is responsive to a trigger condition, indicative of an ability to operate the cache in a power saving mode, to perform a latency evaluation process that determines a latency indication for each of the cache sections, and to select which subset of the cache sections to power off in dependence on those latency indications. This allows the power consumption savings realised by turning off one or more cache sections to be optimised to take account of the current system state.
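
    To make the selection mechanism concrete, the following minimal C sketch models one plausible policy: on the trigger condition, evaluate a latency indication for every section and power off the sections whose indication is highest. The section count, the latency_indication() placeholder and the sorting policy are illustrative assumptions, not details taken from the patent.

        #include <stdbool.h>
        #include <stdio.h>
        #include <stdlib.h>

        #define NUM_SECTIONS 8

        struct cache_section {
            int      id;
            bool     powered;
            unsigned latency;   /* latency indication from the evaluation pass */
        };

        /* Placeholder for the latency evaluation process: hardware might
         * sample round-trip times; here we just fabricate a value. */
        static unsigned latency_indication(int id)
        {
            return (unsigned)(id * 3 + 1);   /* illustrative only */
        }

        static int cmp_latency_desc(const void *a, const void *b)
        {
            const struct cache_section *x = a, *y = b;
            return (int)y->latency - (int)x->latency;
        }

        /* On the trigger condition, evaluate each section and power off the
         * 'n_off' sections with the highest latency indication. */
        static void enter_power_saving(struct cache_section *s, int n, int n_off)
        {
            for (int i = 0; i < n; i++)
                s[i].latency = latency_indication(s[i].id);

            qsort(s, (size_t)n, sizeof *s, cmp_latency_desc);

            for (int i = 0; i < n && i < n_off; i++)
                s[i].powered = false;   /* cleaning/flushing would precede this */
        }

        int main(void)
        {
            struct cache_section s[NUM_SECTIONS];
            for (int i = 0; i < NUM_SECTIONS; i++)
                s[i] = (struct cache_section){ .id = i, .powered = true };

            enter_power_saving(s, NUM_SECTIONS, 2);

            for (int i = 0; i < NUM_SECTIONS; i++)
                printf("section %d: %s\n", s[i].id, s[i].powered ? "on" : "off");
            return 0;
        }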

    Apparatus and method for operating a virtually indexed physically tagged cache

    Publication Number: US10248572B2

    Publication Date: 2019-04-02

    Application Number: US15271611

    Application Date: 2016-09-21

    Applicant: ARM Limited

    Abstract: An apparatus and method are provided for operating a virtually indexed, physically tagged cache. The apparatus has processing circuitry for performing data processing operations on data, and a virtually indexed, physically tagged cache for storing data for access by the processing circuitry. The cache is accessed using a virtual address portion of a virtual address in order to identify a number of cache entries, and then physical address portions stored in those cache entries are compared with the physical address derived from the virtual address in order to detect whether a hit condition exists. Further, snoop request processing circuitry is provided that is responsive to a snoop request specifying a physical address, to determine a plurality of possible virtual address portions for the physical address, and to perform a snoop processing operation in order to determine whether the hit condition is detected for a cache entry when accessing the cache using the plurality of possible virtual address portions. On detection of the hit condition, a coherency action is performed in respect of the cache entry that caused the hit condition. This allows effective detection and removal of aliasing conditions that can arise when different virtual addresses associated with the same physical address cause cache entries in different sets of the cache to be accessed.
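
    The aliasing problem and the snoop-side fix can be illustrated with a small C model. All of the geometry below (64-byte lines, 256 sets, 4 KiB pages, so two index bits come from the virtual page number and a physical line may sit in any of four sets) is assumed for illustration; the patent does not commit to these numbers.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Assumed geometry: 64-byte lines, 256 sets, 4-way, 4 KiB pages.
         * Index = addr bits [13:6]; page offset = bits [11:0], so index
         * bits [13:12] come from the virtual page number and can alias. */
        #define LINE_SHIFT 6
        #define NUM_SETS   256
        #define NUM_WAYS   4
        #define PAGE_SHIFT 12

        struct line { bool valid; uint64_t ptag; };   /* tag holds physical bits above the page offset */
        static struct line cache[NUM_SETS][NUM_WAYS];

        /* A snoop carries only a physical address.  Enumerate every set the
         * line could occupy by varying the index bits above the page offset
         * (the possible virtual address portions), then compare physical
         * tags in each candidate set. */
        static void snoop(uint64_t paddr)
        {
            uint64_t ptag      = paddr >> PAGE_SHIFT;
            unsigned low_index = (paddr >> LINE_SHIFT) & ((1u << (PAGE_SHIFT - LINE_SHIFT)) - 1);
            unsigned alias_cnt = NUM_SETS >> (PAGE_SHIFT - LINE_SHIFT);   /* 4 here */

            for (unsigned a = 0; a < alias_cnt; a++) {
                unsigned set = (a << (PAGE_SHIFT - LINE_SHIFT)) | low_index;
                for (int w = 0; w < NUM_WAYS; w++) {
                    if (cache[set][w].valid && cache[set][w].ptag == ptag) {
                        /* hit: perform the coherency action, e.g. invalidate */
                        cache[set][w].valid = false;
                        printf("snoop hit: set %u way %d invalidated\n", set, w);
                    }
                }
            }
        }

        int main(void)
        {
            uint64_t pa = 0x40000ULL | (5u << LINE_SHIFT);
            /* Pretend a virtual alias placed the line in set (2 << 6) | 5. */
            cache[(2u << 6) | 5][0] = (struct line){ true, pa >> PAGE_SHIFT };
            snoop(pa);   /* probes sets 5, 69, 133, 197 and hits in set 133 */
            return 0;
        }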

    Arbitration and hazard detection for a data processing apparatus

    Publication Number: US10061728B2

    Publication Date: 2018-08-28

    Application Number: US14801990

    Application Date: 2015-07-17

    Applicant: ARM LIMITED

    Inventor: Alex James Waugh

    Abstract: A device for selecting requests to be serviced in a data processing apparatus has an arbitration stage for selecting an arbitrated request from a plurality of candidate requests, and a hazard detection stage for performing hazard detection to predict whether the arbitrated request selected by the arbitration stage meets a hazard condition. If the arbitrated request meets the hazard condition, the hazard detection stage returns it to the arbitration stage for later arbitration and sets a hazard indication for the returned request. The hazard detection stage also causes at least one other arbitrated request to be returned if it conflicts with a candidate request having the hazard indication set. This approach prevents denial of service to requests that were hazarded.
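
    A toy C model of the two stages is sketched below. The arbitration policy, the line-granular conflict rule and all structure names are invented for illustration; the point is the two bounce rules: a hazarded winner is returned with its hazard flag set, and a later winner that conflicts with a flagged candidate is also returned, so the flagged request cannot be starved.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        #define NUM_REQS 4

        struct request {
            uint64_t addr;
            bool     valid;
            bool     hazard_flag;   /* set when bounced by hazard detection */
        };

        static struct request pool[NUM_REQS];
        static uint64_t inflight_addr;
        static bool     inflight_valid;

        /* Arbitration stage: pick the first valid candidate (toy policy). */
        static int arbitrate(void)
        {
            for (int i = 0; i < NUM_REQS; i++)
                if (pool[i].valid) return i;
            return -1;
        }

        /* True if two addresses touch the same cache line (toy conflict rule). */
        static bool conflicts(uint64_t a, uint64_t b) { return (a >> 6) == (b >> 6); }

        /* Hazard stage: bounce the winner if it hazards against an in-flight
         * operation, or if it conflicts with another candidate whose hazard
         * flag is already set (so the flagged request is not starved). */
        static bool hazard_stage(int win)
        {
            if (inflight_valid && conflicts(pool[win].addr, inflight_addr)) {
                pool[win].hazard_flag = true;   /* return for later arbitration */
                return false;
            }
            for (int i = 0; i < NUM_REQS; i++)
                if (i != win && pool[i].valid && pool[i].hazard_flag &&
                    conflicts(pool[win].addr, pool[i].addr))
                    return false;               /* bounced in favour of flagged req */
            return true;                        /* request proceeds */
        }

        int main(void)
        {
            inflight_valid = true; inflight_addr = 0x1000;
            pool[0] = (struct request){ .addr = 0x1000, .valid = true };
            pool[1] = (struct request){ .addr = 0x1008, .valid = true };

            int w = arbitrate();                              /* picks request 0 */
            printf("req0 proceeds? %d\n", hazard_stage(w));   /* 0: hazarded, flag set */
            inflight_valid = false;                           /* the hazard drains */
            printf("req1 proceeds? %d\n", hazard_stage(1));   /* 0: defers to flagged req0 */
            printf("req0 proceeds? %d\n", hazard_stage(0));   /* 1: flagged req0 now wins */
            return 0;
        }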

    Detection circuitry

    Publication Number: US11334486B2

    Publication Date: 2022-05-17

    Application Number: US16305165

    Application Date: 2017-04-27

    Applicant: ARM LIMITED

    Abstract: An apparatus (300) for processing data comprises a plurality of memory access request sources (102, 104) which generate memory access requests. Each of the memory access request sources has a local memory (106, 108), and the apparatus also includes a shared memory (110). When the memory access requests are atomic memory access requests, contention may arise over common data. When this occurs, the present technique triggers a switch from processing the data in the local memory of a memory access request source to processing it in the shared memory.
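
    A minimal sketch of the detection idea in C, assuming a simple counter-and-threshold rule for deciding when contention on a data item warrants the switch; the threshold and all names are placeholders rather than details from the patent.

        #include <stdint.h>
        #include <stdio.h>

        #define THRESHOLD 4   /* contention events before switching (assumed) */

        enum location { LOCAL_MEMORY, SHARED_MEMORY };

        struct line_state {
            uint64_t      addr;
            unsigned      contention;   /* detected atomic collisions */
            enum location where;
        };

        /* Called when an atomic from another source hits data currently
         * being processed in one source's local memory. */
        static void on_contended_atomic(struct line_state *st)
        {
            if (st->where == LOCAL_MEMORY && ++st->contention >= THRESHOLD) {
                /* write the line back and continue the atomics in shared
                 * memory, equally reachable by all contending sources */
                st->where = SHARED_MEMORY;
                st->contention = 0;
            }
        }

        int main(void)
        {
            struct line_state st = { .addr = 0x80, .where = LOCAL_MEMORY };
            for (int i = 0; i < 6; i++) {
                on_contended_atomic(&st);
                printf("event %d: processed in %s\n", i + 1,
                       st.where == LOCAL_MEMORY ? "local memory" : "shared memory");
            }
            return 0;
        }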

    Cache content management

    Publication Number: US11256623B2

    Publication Date: 2022-02-22

    Application Number: US15427459

    Application Date: 2017-02-08

    Applicant: ARM Limited

    Abstract: An apparatus, and a corresponding method, of operating a hub device and a target device in a coherent interconnect system are presented. A cache pre-population request of a set of coherency protocol transactions in the system is received from a requesting master device, specifying at least one data item, and the hub device responds by causing a cache pre-population trigger of the set of coherency protocol transactions, specifying the at least one data item, to be transmitted to a target device. This trigger can cause the target device to request that the specified at least one data item be retrieved and brought into its cache. Since the target device can decide whether or not to respond to the trigger, it does not receive cached data unsolicited, simplifying its configuration, whilst still allowing some data to be pre-cached.
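
    The hub/target handshake can be sketched in C as a small message-passing model. The message types and the target's acceptance policy below are invented stand-ins for the coherency protocol transactions; the point is that the hub forwards only a trigger naming the data item, and the target itself decides whether to issue the read that fills its cache.

        #include <stdbool.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Invented message types standing in for coherency protocol
         * transactions. */
        struct prepop_request { uint64_t addr; int target_id; };
        struct prepop_trigger { uint64_t addr; };

        /* Target-side policy: free to ignore the trigger (here: accept only
         * page-aligned addresses -- an arbitrary placeholder rule). */
        static bool target_accepts(const struct prepop_trigger *t)
        {
            return (t->addr & 0xFFF) == 0;
        }

        static void target_receive_trigger(struct prepop_trigger t)
        {
            if (target_accepts(&t))
                printf("target: issuing read for %#llx to pre-fill its cache\n",
                       (unsigned long long)t.addr);
            else
                printf("target: ignoring trigger for %#llx\n",
                       (unsigned long long)t.addr);
        }

        /* Hub: never pushes data; it only forwards a trigger naming the item. */
        static void hub_receive_request(struct prepop_request r)
        {
            struct prepop_trigger t = { .addr = r.addr };
            target_receive_trigger(t);   /* would route to r.target_id in reality */
        }

        int main(void)
        {
            hub_receive_request((struct prepop_request){ .addr = 0x1000, .target_id = 1 });
            hub_receive_request((struct prepop_request){ .addr = 0x1234, .target_id = 1 });
            return 0;
        }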

    Apparatus and method for supporting multiple cache features

    Publication Number: US10691606B2

    Publication Date: 2020-06-23

    Application Number: US15392190

    Application Date: 2016-12-28

    Applicant: ARM Limited

    Abstract: An apparatus and method are provided for supporting multiple cache features. The apparatus provides cache storage comprising a plurality of cache ways, organised as a plurality of way groups, where each way group comprises multiple cache ways from the plurality of cache ways. First cache feature circuitry is provided to implement a first cache feature that is applied to the way groups, and second cache feature circuitry is provided to implement a second cache feature that is applied to the way groups. Way group control circuitry then provides a first mapping, defining which cache ways belong to each way group when the first cache feature is applied, and a second mapping, defining which cache ways belong to each way group when the second cache feature is applied. The two mappings are selected so that application of a cache feature to the way groups by one of the cache feature circuits cannot interfere with the ability of the other cache feature circuit to access at least one cache way in each of the way groups. Such an approach alleviates the risk that actions taken by one of the cache features interfere with the ability of the other cache feature to operate as intended.
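
    A worked example in C of why the two mappings matter, under assumed numbers (8 ways, 4 way groups): if the first feature's mapping groups adjacent ways and the second feature's mapping groups strided ways, then disabling any single group under the first mapping still leaves every group under the second mapping with at least one usable way.

        #include <stdbool.h>
        #include <stdio.h>

        #define NUM_WAYS   8
        #define NUM_GROUPS 4

        /* Mapping when feature 1 (e.g. per-group power gating) is applied:
         * adjacent ways.  Mapping for feature 2 (e.g. partitioning): strided. */
        static int group_f1(int way) { return way / (NUM_WAYS / NUM_GROUPS); }
        static int group_f2(int way) { return way % NUM_GROUPS; }

        int main(void)
        {
            /* Disable each feature-1 group in turn and verify every feature-2
             * group still contains at least one usable way. */
            for (int off = 0; off < NUM_GROUPS; off++) {
                bool ok = true;
                for (int g2 = 0; g2 < NUM_GROUPS; g2++) {
                    bool alive = false;
                    for (int w = 0; w < NUM_WAYS; w++)
                        if (group_f1(w) != off && group_f2(w) == g2)
                            alive = true;
                    ok = ok && alive;
                }
                printf("f1 group %d off: all f2 groups still usable? %s\n",
                       off, ok ? "yes" : "no");
            }
            return 0;
        }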
