Cache stash relay
    Invention grant

    Publication No.: US11314645B1

    Publication Date: 2022-04-26

    Application No.: US17123527

    Filing Date: 2020-12-16

    Applicant: Arm Limited

    Abstract: In a cache stash relay, first data from a producer device is stashed in a shared cache of a data processing system. The first data is associated with first data addresses in a shared memory of the data processing system, and an address pattern of the first data addresses is identified. When a request for second data, associated with a second data address, is received from a processing unit of the data processing system and the second data address lies within the identified address pattern, the data associated with addresses in that pattern is relayed from the shared cache to a local cache of the processing unit. The relaying may involve pushing the data from the shared cache to the local cache, or a pre-fetcher of the processing unit pulling the data from the shared cache to the local cache in response to a message.
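
    The abstract describes a hardware relay decision; the following C sketch models it purely in software under simplifying assumptions (a fixed-stride address pattern, a "relay" that is only a printout). Names such as stash_pattern_t, detect_pattern and relay_on_request are illustrative, not taken from the patent.

```c
/* Hypothetical software model: the producer's stashed addresses are tracked
 * as a fixed-stride pattern, and a consumer demand address that falls inside
 * the pattern triggers relaying of every pattern address to that consumer's
 * local cache (here, just a printout). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t base;    /* first stashed line address             */
    uint64_t stride;  /* detected spacing between stashed lines */
    unsigned count;   /* number of lines in the pattern         */
} stash_pattern_t;

/* Identify a simple strided pattern from the producer's stash addresses. */
static bool detect_pattern(const uint64_t *addr, unsigned n, stash_pattern_t *p) {
    if (n < 2) return false;
    uint64_t stride = addr[1] - addr[0];
    for (unsigned i = 2; i < n; i++)
        if (addr[i] - addr[i - 1] != stride) return false;  /* not a fixed stride */
    p->base = addr[0];
    p->stride = stride;
    p->count = n;
    return stride != 0;
}

/* Does a demand address fall on the identified pattern? */
static bool in_pattern(const stash_pattern_t *p, uint64_t a) {
    if (a < p->base) return false;
    uint64_t off = a - p->base;
    return (off % p->stride == 0) && (off / p->stride < p->count);
}

/* On a consumer request that hits the pattern, relay every pattern address
 * from the shared cache to the requesting unit's local cache. */
static void relay_on_request(const stash_pattern_t *p, uint64_t demand_addr) {
    if (!in_pattern(p, demand_addr)) return;
    for (unsigned i = 0; i < p->count; i++)
        printf("relay line 0x%llx to local cache\n",
               (unsigned long long)(p->base + i * p->stride));
}

int main(void) {
    /* Producer stashes four cache lines at a 64-byte stride. */
    uint64_t stashed[] = {0x1000, 0x1040, 0x1080, 0x10c0};
    stash_pattern_t pat;
    if (detect_pattern(stashed, 4, &pat))
        relay_on_request(&pat, 0x1080);  /* consumer demand address in pattern */
    return 0;
}
```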

    System, method and apparatus for inter-process communication

    Publication No.: US10901691B2

    Publication Date: 2021-01-26

    Application No.: US16261071

    Filing Date: 2019-01-29

    Applicant: Arm Limited

    Abstract: A system, apparatus and method for enabling FIFO-like (first-in, first-out) communication between a plurality of executing processes distributed throughout a computing system. Embodiments exploit locality in the cache memory hierarchy and the communication buses within the computing system to pass messages or streams of bytes with low latency and high throughput. In addition, participating components may be very simple or very sophisticated and still benefit from the improved communication patterns.
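
    As a rough software analogue of the FIFO-style message passing described above, the following C11 sketch implements a single-producer/single-consumer ring buffer. The cache- and bus-locality optimisations that the patent targets are not modelled, and the names (spsc_fifo_t, fifo_push, fifo_pop) are illustrative.

```c
/* Minimal single-producer/single-consumer ring buffer using C11 atomics. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SLOTS 8u   /* must be a power of two */

typedef struct {
    _Atomic uint32_t head;            /* next slot the consumer reads  */
    _Atomic uint32_t tail;            /* next slot the producer writes */
    uint32_t         data[RING_SLOTS];
} spsc_fifo_t;

static bool fifo_push(spsc_fifo_t *f, uint32_t v) {
    uint32_t t = atomic_load_explicit(&f->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&f->head, memory_order_acquire);
    if (t - h == RING_SLOTS) return false;          /* full */
    f->data[t & (RING_SLOTS - 1)] = v;
    atomic_store_explicit(&f->tail, t + 1, memory_order_release);
    return true;
}

static bool fifo_pop(spsc_fifo_t *f, uint32_t *out) {
    uint32_t h = atomic_load_explicit(&f->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&f->tail, memory_order_acquire);
    if (t == h) return false;                       /* empty */
    *out = f->data[h & (RING_SLOTS - 1)];
    atomic_store_explicit(&f->head, h + 1, memory_order_release);
    return true;
}

int main(void) {
    static spsc_fifo_t f;                           /* zero-initialised */
    for (uint32_t i = 0; i < 4; i++) fifo_push(&f, i);
    uint32_t v;
    while (fifo_pop(&f, &v)) printf("received %u\n", v);
    return 0;
}
```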

    Message passing circuitry and method

    Publication No.: US11960945B2

    Publication Date: 2024-04-16

    Application No.: US17225674

    Filing Date: 2021-04-08

    Applicant: Arm Limited

    Abstract: Message passing circuitry comprises lookup circuitry responsive to a producer request indicating message data provided on a target message channel by a producer node of a system-on-chip, to obtain, from a channel consumer information structure, selected channel consumer information associated with a given consumer node subscribing to the target message channel. Control circuitry writes the message data to a location associated with an address in a consumer-defined region of address space determined based on the selected channel consumer information. When an event notification condition is satisfied for the target message channel and the given consumer node, and an event notification channel is to be used, event notification data is written to a location associated with an address in a consumer-defined region of address space determined based on event notification channel consumer information associated with the event notification channel.
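
    The flow in the abstract can be approximated in plain C as a table of per-(channel, consumer) records that direct each message into a consumer-defined buffer, with a second channel used for event notification. This is only a software sketch under assumed simplifications; the structure layout and all names (channel_consumer_info_t, deliver) are hypothetical, not the circuitry's interface.

```c
/* Software stand-in for the lookup-and-write flow: per-(channel, consumer)
 * records describe a consumer-defined buffer, message data is written into
 * it, and a notification record is written through a second channel when an
 * event condition (here, a message count) is satisfied. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_CONSUMERS 4

typedef struct {
    uint8_t *region;              /* consumer-defined region of address space */
    size_t   size;                /* region size in bytes                     */
    size_t   write_index;         /* next write offset into the region        */
    int      notify_chan;         /* event notification channel, or -1        */
    unsigned msgs_per_event;      /* notify after this many messages          */
    unsigned msgs_since_event;
} channel_consumer_info_t;

/* Plays the role of the channel consumer information structure. */
static channel_consumer_info_t chan_info[2][MAX_CONSUMERS];

static void deliver(int channel, int consumer, const void *msg, size_t len) {
    channel_consumer_info_t *c = &chan_info[channel][consumer];
    if (c->region == NULL || c->write_index + len > c->size) return;
    memcpy(c->region + c->write_index, msg, len);   /* write message data */
    c->write_index += len;
    if (c->notify_chan >= 0 && ++c->msgs_since_event >= c->msgs_per_event) {
        const char note[] = "EVT";                  /* event notification data */
        deliver(c->notify_chan, consumer, note, sizeof note);
        c->msgs_since_event = 0;
    }
}

int main(void) {
    static uint8_t data_buf[64], evt_buf[16];
    chan_info[0][0] = (channel_consumer_info_t){data_buf, sizeof data_buf, 0, 1, 2, 0};
    chan_info[1][0] = (channel_consumer_info_t){evt_buf,  sizeof evt_buf,  0, -1, 0, 0};
    deliver(0, 0, "hello", 6);
    deliver(0, 0, "world", 6);      /* second message triggers a notification */
    printf("event buffer: %s\n", (char *)evt_buf);
    return 0;
}
```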

    Method and apparatus for architectural cache transaction logging

    Publication No.: US11176042B2

    Publication Date: 2021-11-16

    Application No.: US16418380

    Filing Date: 2019-05-21

    Applicant: Arm Limited

    Abstract: A method and apparatus for monitoring cache transactions in a cache of a data processing system is provided. Responsive to a cache transaction associated with a transaction address, when a cache controller determines that the cache transaction is selected for monitoring, the cache controller retrieves a pointer stored in a register, determines a location in a log memory from the pointer, and writes a transaction identifier to the determined location in the log memory. The transaction identifier is associated with the transaction address and may be a virtual address, for example. The pointer is then updated and stored back to the register. The architecture of the apparatus may include a mechanism for atomically combining data access instructions with an instruction to commence monitoring.
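
    The logging step is simple enough to show as a short C sketch: a pointer register selects the next slot in a log memory, the transaction identifier (here, a virtual address) is written there, and the pointer is advanced and stored back. The selection policy and all names (log_pointer_reg, on_cache_transaction) are illustrative assumptions, not the patented mechanism.

```c
/* Software model of architectural cache transaction logging. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LOG_ENTRIES 16u

static uint64_t log_memory[LOG_ENTRIES];  /* log region in memory               */
static uint64_t log_pointer_reg;          /* pointer register (slot index)      */
static bool     monitoring_enabled;       /* set by a "commence monitoring" op  */

/* Selection policy: here, only writes are monitored (a stand-in for whatever
 * filter the cache controller applies). */
static bool selected_for_monitoring(bool is_write) {
    return monitoring_enabled && is_write;
}

static void on_cache_transaction(uint64_t virt_addr, bool is_write) {
    if (!selected_for_monitoring(is_write)) return;
    uint64_t slot = log_pointer_reg % LOG_ENTRIES;
    log_memory[slot] = virt_addr;        /* write the transaction identifier */
    log_pointer_reg = slot + 1;          /* update and store the pointer     */
}

int main(void) {
    monitoring_enabled = true;                      /* "commence monitoring" */
    on_cache_transaction(0x7f00001000, true);
    on_cache_transaction(0x7f00002000, false);      /* read: not selected    */
    on_cache_transaction(0x7f00003000, true);
    for (uint64_t i = 0; i < log_pointer_reg; i++)
        printf("log[%llu] = 0x%llx\n", (unsigned long long)i,
               (unsigned long long)log_memory[i]);
    return 0;
}
```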

    Range-based memory system
    Invention grant

    Publication No.: US10592424B2

    Publication Date: 2020-03-17

    Application No.: US15819378

    Filing Date: 2017-11-21

    Applicant: Arm Limited

    Abstract: A mechanism is provided for efficient coherence state modification of cached data stored in a range of addresses in a coherent data processing system in which data coherency is maintained across multiple caches. A tag search structure is maintained that identifies the address tags and coherence states of cached data, indexed by address tag. In response to a request from a device internal or external to the coherence network, the tag search structure is searched to identify the address tags of cached data for which the coherence state is to be modified, and requests are issued in the data processing system to modify the coherence state of cached lines with the identified address tags. The request from the external device may specify a range of addresses for which a coherence state change is sought. The tag search structure may be implemented as a search tree, for example.
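
    A sorted array can stand in for the tag search structure to illustrate the range operation: walk only the tags that fall inside the requested address range and issue a state change for each. The sketch below makes that substitution explicit (the patent suggests a search tree); the names and three-state encoding are illustrative.

```c
/* Simplified range-based coherence-state change over a sorted tag array. */
#include <stdint.h>
#include <stdio.h>

typedef enum { STATE_INVALID, STATE_SHARED, STATE_MODIFIED } coherence_state_t;

typedef struct {
    uint64_t          tag;    /* cache-line address tag */
    coherence_state_t state;
} tag_entry_t;

/* Kept sorted by tag so a range query can stop early. */
static tag_entry_t tag_search[] = {
    {0x1000, STATE_SHARED},
    {0x1040, STATE_MODIFIED},
    {0x2000, STATE_SHARED},
    {0x2040, STATE_SHARED},
    {0x3000, STATE_MODIFIED},
};
#define NUM_TAGS (sizeof tag_search / sizeof tag_search[0])

/* Change the coherence state of every cached line whose tag lies in [lo, hi). */
static void change_state_in_range(uint64_t lo, uint64_t hi, coherence_state_t new_state) {
    for (size_t i = 0; i < NUM_TAGS; i++) {
        if (tag_search[i].tag >= hi) break;      /* past the range: done */
        if (tag_search[i].tag < lo) continue;    /* before the range     */
        printf("issue state change for tag 0x%llx: %d -> %d\n",
               (unsigned long long)tag_search[i].tag,
               tag_search[i].state, new_state);
        tag_search[i].state = new_state;         /* e.g. invalidate / clean */
    }
}

int main(void) {
    change_state_in_range(0x1000, 0x2041, STATE_INVALID);
    return 0;
}
```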

    Data processing
    Invention grant

    Publication No.: US10423446B2

    Publication Date: 2019-09-24

    Application No.: US15361871

    Filing Date: 2016-11-28

    Applicant: ARM Limited

    Abstract: Data processing apparatus comprises one or more interconnected processing elements each configured to execute processing instructions of a program task; coherent memory circuitry storing one or more copies of data accessible by each of the processing elements, so that data written to a memory address in the coherent memory circuitry by one processing element is consistent with data read from that memory address in the coherent memory circuitry by another of the processing elements; the coherent memory circuitry comprising a memory region to store data, accessible by the processing elements, defining one or more attributes of a program task and context data associated with a most recent instance of execution of that program task; the apparatus comprising scheduling circuitry to schedule execution of a task by a processing element in response to the one or more attributes defined by data stored in the memory region corresponding to that task; and each processing element which executes a program task is configured to modify one or more of the attributes corresponding to that program task in response to execution of that program task.
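
    The scheduling idea above, with task attributes and last-execution context held in a shared memory region, consulted by the scheduler and updated by the executing processing element, can be caricatured in ordinary C as follows. Coherent memory circuitry is not modelled, and every name (task_region_t, schedule, execute_on_pe) is a hypothetical stand-in.

```c
/* Software caricature of attribute-driven task scheduling. */
#include <stdint.h>
#include <stdio.h>

#define NUM_PE 2

typedef struct {
    int      preferred_pe;     /* attribute used by the scheduler       */
    int      runnable;         /* attribute: task is ready to run       */
    uint64_t saved_pc;         /* context from most recent execution    */
    uint64_t run_count;        /* attribute updated by the executing PE */
} task_region_t;

/* The "memory region" holding per-task attributes and context. */
static task_region_t task_region[2] = {
    {.preferred_pe = 0, .runnable = 1},
    {.preferred_pe = 1, .runnable = 1},
};

/* Schedule: choose a processing element from the task's stored attributes. */
static int schedule(int task) {
    const task_region_t *t = &task_region[task];
    return t->runnable ? t->preferred_pe % NUM_PE : -1;
}

/* Execute: the PE runs the task and modifies its attributes and context. */
static void execute_on_pe(int pe, int task) {
    task_region_t *t = &task_region[task];
    printf("PE%d runs task %d (resume pc=0x%llx)\n",
           pe, task, (unsigned long long)t->saved_pc);
    t->saved_pc += 0x40;       /* pretend some progress was made          */
    t->run_count++;
    t->preferred_pe = pe;      /* attribute modified by the executing PE  */
}

int main(void) {
    for (int task = 0; task < 2; task++) {
        int pe = schedule(task);
        if (pe >= 0) execute_on_pe(pe, task);
    }
    return 0;
}
```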

    Data movement engine
    Invention grant

    Publication No.: US10353601B2

    Publication Date: 2019-07-16

    Application No.: US15361843

    Filing Date: 2016-11-28

    Applicant: ARM Limited

    Abstract: A memory system of a data processing system includes one or more storage devices and a data rearrangement engine for moving data between memory regions of the memory system. The data rearrangement engine is configured to rearrange data stored at non-contiguous addresses in a source memory region into contiguous addresses in a destination memory region, responsive to a rearrangement specified by a host processing unit of the data processing system. A description of the rearranged data is maintained in a metadata memory region, and the rearranged data may be accessed by one or more host processing units. Write-back of data from the destination region to the source region may be reduced by use of a Bloom filter or the like.
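
    A host-side C sketch of the rearrangement: gather elements from non-contiguous source offsets into a contiguous destination buffer, keep the source offsets as metadata, and write back only elements flagged as possibly modified. A one-bit-per-element mask stands in for the Bloom filter mentioned in the abstract; all names are illustrative.

```c
/* Gather into contiguous storage, with metadata and selective write-back. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N_ELEMS 4

typedef struct {
    size_t  src_offset[N_ELEMS];  /* metadata: where each element came from */
    uint8_t maybe_modified;       /* tiny membership filter over indices    */
} rearrange_meta_t;

static void gather(const int *src, const size_t *offsets, int *dst,
                   rearrange_meta_t *meta) {
    for (size_t i = 0; i < N_ELEMS; i++) {
        dst[i] = src[offsets[i]];         /* contiguous copy of sparse data */
        meta->src_offset[i] = offsets[i]; /* record provenance as metadata  */
    }
    meta->maybe_modified = 0;
}

/* Mark element i as possibly modified (a real implementation would hash
 * addresses into a Bloom filter rather than keep one bit per element). */
static void mark_modified(rearrange_meta_t *meta, size_t i) {
    meta->maybe_modified |= (uint8_t)(1u << i);
}

static void write_back(const int *dst, int *src, const rearrange_meta_t *meta) {
    for (size_t i = 0; i < N_ELEMS; i++)
        if (meta->maybe_modified & (1u << i))           /* skip clean data */
            src[meta->src_offset[i]] = dst[i];
}

int main(void) {
    int src[16];
    for (int i = 0; i < 16; i++) src[i] = i;
    size_t offsets[N_ELEMS] = {1, 5, 9, 13};            /* non-contiguous   */
    int dst[N_ELEMS];
    rearrange_meta_t meta;

    gather(src, offsets, dst, &meta);
    dst[2] = 99;                                        /* modify one element */
    mark_modified(&meta, 2);
    write_back(dst, src, &meta);
    printf("src[9] = %d\n", src[9]);                    /* prints 99        */
    return 0;
}
```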

    Memory synchronization filter
    Invention grant

    Publication No.: US10067708B2

    Publication Date: 2018-09-04

    Application No.: US14978001

    Filing Date: 2015-12-22

    Applicant: ARM Limited

    Abstract: Data synchronization between memories of a data processing system is achieved by transferring data blocks from a first memory to a second memory and forming a hash list from the addresses of data blocks that are written to or modified in the second memory. The hash list may be used to identify a set of data blocks that have possibly been written to or modified. Data blocks that are possibly modified may be written back from the second memory to the first memory in response to a synchronization event. The hash list may be updated by computing, in hardware or software, hash functions of the address of the transferred or modified data block to determine the bit positions to be set. The hash list may be queried by computing hash functions of an address to determine bit positions and checking the bits in the hash list at those positions.
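
    The hash list behaves like a small Bloom filter over block addresses; the sketch below shows the update and query operations in plain C with two arbitrary hash functions. The filter size, hash choices and names (hash_list_update, hash_list_query) are assumptions for illustration only.

```c
/* Bloom-filter-style "hash list": two hash functions choose bit positions to
 * set when a block is written or modified; a query checks both bits to decide
 * whether the block is possibly modified and so must be written back on a
 * synchronization event (false positives possible, false negatives not). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define FILTER_BITS 256u

static uint8_t hash_list[FILTER_BITS / 8];

/* Two simple, independent hash functions over the block address. */
static unsigned hash1(uint64_t a) {
    return (unsigned)((a * 0x9e3779b97f4a7c15ull) >> 56) % FILTER_BITS;
}
static unsigned hash2(uint64_t a) {
    return (unsigned)(((a ^ (a >> 13)) * 0xc2b2ae3d27d4eb4full) >> 48) % FILTER_BITS;
}

static void set_bit(unsigned b) { hash_list[b / 8] |= (uint8_t)(1u << (b % 8)); }
static bool get_bit(unsigned b) { return (hash_list[b / 8] >> (b % 8)) & 1u; }

/* Update: record that the block at 'addr' was written or modified. */
static void hash_list_update(uint64_t addr) {
    set_bit(hash1(addr));
    set_bit(hash2(addr));
}

/* Query: true means "possibly modified", so the block is written back. */
static bool hash_list_query(uint64_t addr) {
    return get_bit(hash1(addr)) && get_bit(hash2(addr));
}

int main(void) {
    hash_list_update(0x1000);
    hash_list_update(0x3000);
    uint64_t blocks[] = {0x1000, 0x2000, 0x3000};
    for (int i = 0; i < 3; i++)
        printf("block 0x%llx: %s\n", (unsigned long long)blocks[i],
               hash_list_query(blocks[i]) ? "write back" : "skip");
    return 0;
}
```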
