-
Publication No.: US11016899B2
Publication Date: 2021-05-25
Application No.: US16403701
Filing Date: 2019-05-06
Applicant: QUALCOMM Incorporated
Inventor: Nikhil Narendradev Sharma , Eric Francis Robinson , Garrett Michael Drapala , Perry Willmann Remaklus, Jr. , Joseph Gerald McDonald , Thomas Philip Speier
IPC: G06F13/00 , G06F12/0862
Abstract: Selective honoring of speculative memory-prefetch requests based on a bandwidth constraint of one or more memory access path components in a processor-based system is disclosed. To reduce memory access latency, a CPU includes in a memory read request a request size for the data to be read from memory and a request mode marking the requested data as required or preferred. A memory access path component includes a memory read honor circuit configured to receive the memory read request and consult its request size and request mode. If the memory read honor circuit determines that the bandwidth of the memory system is less than a defined bandwidth constraint threshold, the memory read request is forwarded to be fulfilled in full; otherwise, the memory read request is downgraded to include only the requested required data.
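The bandwidth-gated decision described in the abstract can be sketched as follows. This is an illustrative model only; the function name, the threshold value, and the byte-count parameters are assumptions, not details from the patent.

```python
# Assumed fraction of peak bandwidth above which prefetches are dropped
# (illustrative value; the patent only describes "a defined threshold").
BANDWIDTH_THRESHOLD = 0.75

def honor_read_request(required_bytes, preferred_bytes, bandwidth_utilization):
    """Return the number of bytes to fetch for a memory read request.

    If the memory access path is below the bandwidth constraint
    threshold, the full request (required plus preferred prefetch data)
    is forwarded; otherwise the request is downgraded to only the
    required data.
    """
    if bandwidth_utilization < BANDWIDTH_THRESHOLD:
        return required_bytes + preferred_bytes
    return required_bytes
```

For example, a 64-byte demand line with a 64-byte preferred prefetch yields a 128-byte fetch when utilization is low, but only the 64 required bytes when the path is congested.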
-
Publication No.: US20170371783A1
Publication Date: 2017-12-28
Application No.: US15191686
Filing Date: 2016-06-24
Applicant: QUALCOMM Incorporated
Inventor: Hien Minh Le , Thuong Quang Truong , Eric Francis Robinson , Brad Herold , Robert Bell, JR.
IPC: G06F12/084 , G06F12/0842
CPC classification number: G06F12/084 , G06F12/0804 , G06F12/0811 , G06F12/0831 , G06F15/167 , G06F2212/1024 , G06F2212/272
Abstract: Self-aware, peer-to-peer cache transfers between local, shared cache memories in a multi-processor system are disclosed. A shared cache memory system is provided comprising local shared cache memories accessible by an associated central processing unit (CPU) and by other CPUs in a peer-to-peer manner. When a CPU desires to request a cache transfer (e.g., in response to a cache eviction), the CPU acting as a master CPU issues a cache transfer request. In response, target CPUs issue snoop responses indicating their willingness to accept the cache transfer. The target CPUs also use the snoop responses to be self-aware of the willingness of other target CPUs to accept the cache transfer. Each target CPU willing to accept the cache transfer uses a predefined target CPU selection scheme to determine whether it should accept the transfer. This can avoid a CPU making multiple requests to find a target CPU for a cache transfer.
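The self-selection idea above can be sketched as follows. Because every willing target sees the same snoop responses, each can apply the same deterministic scheme and exactly one accepts without further arbitration. Lowest-CPU-ID-wins is an illustrative scheme chosen here; the patent only requires that the scheme be predefined.

```python
def select_acceptor(snoop_responses):
    """Given {cpu_id: willing} snoop responses visible to every target,
    pick the acceptor with a predefined scheme (lowest-ID-wins here,
    an illustrative choice, not necessarily the patent's scheme)."""
    willing = [cpu for cpu, ok in sorted(snoop_responses.items()) if ok]
    return willing[0] if willing else None

def cpu_accepts(my_id, snoop_responses):
    """Each willing target applies the same scheme independently, so
    exactly one CPU self-selects as the transfer target."""
    return select_acceptor(snoop_responses) == my_id
```

With responses `{0: False, 2: True, 3: True}`, CPU 2 self-selects and CPU 3 declines, with no extra request/response round trips.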
-
Publication No.: US09934149B2
Publication Date: 2018-04-03
Application No.: US15087649
Filing Date: 2016-03-31
Applicant: QUALCOMM Incorporated
Inventor: Khary Jason Alexander , Eric Francis Robinson
IPC: G06F12/08 , G06F12/0862 , G06F12/0811 , G06F12/0891 , G06F12/0897
CPC classification number: G06F12/0862 , G06F12/0811 , G06F12/0891 , G06F12/0897 , G06F2212/6022 , G06F2212/62
Abstract: Systems and methods relate to servicing a demand miss for a cache line in a first cache (e.g., an L1 cache) of a processing system, for example, when none of one or more fill buffers for servicing the demand miss are available. In exemplary aspects, the demand miss is converted to a prefetch operation to prefetch the cache line into a second cache (e.g., an L2 cache), wherein the second cache is a backing storage location for the first cache. Thus, servicing the demand miss is not delayed until a fill buffer becomes available, and once a fill buffer becomes available, the prefetched cache line is returned from the second cache to the available fill buffer.
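The miss-to-prefetch conversion can be sketched as below. The data structures (a list of fill-buffer dicts, plain dicts for L2 and memory) are illustrative stand-ins for the hardware structures, not the patent's implementation.

```python
def service_demand_miss(line_addr, fill_buffers, l2_cache, memory):
    """If a fill buffer is free, fetch the line into it (normal demand
    miss handling). Otherwise convert the miss into a prefetch into the
    backing L2 cache so the fetch is not delayed; when a buffer later
    frees up, the line can be returned from L2 instead of memory."""
    free = next((b for b in fill_buffers if b["free"]), None)
    if free is not None:
        free["free"] = False
        free["line"] = memory[line_addr]
        return "filled"
    l2_cache[line_addr] = memory[line_addr]  # prefetch into backing L2
    return "prefetched_to_l2"
```

The key point the sketch shows: the memory access is launched immediately in either case; only the destination (fill buffer vs. backing L2) differs.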
-
Publication No.: US11226910B2
Publication Date: 2022-01-18
Application No.: US16808073
Filing Date: 2020-03-03
Applicant: QUALCOMM Incorporated
Inventor: Joseph Gerald McDonald , Garrett Michael Drapala , Eric Francis Robinson , Thomas Philip Speier , Kevin Neal Magill , Richard Gerard Hofmann
Abstract: Disclosed are ticketed flow control mechanisms in a processing system with one or more masters and one or more slaves. In an aspect, a targeted slave receives a request from a requesting master. If the targeted slave is unavailable to service the request, a ticket for the request is provided to the requesting master. As resources in the targeted slave become available, messages are broadcasted for the requesting master to update the ticket value. When the ticket value has been updated to a final value, the requesting master may re-transmit the request.
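The ticket protocol can be sketched as below. Class and method names are illustrative; a ticket reaching zero as the "final value" is an assumption for the sketch.

```python
class TicketedSlave:
    """Minimal model of ticketed flow control: a busy slave hands out a
    ticket instead of dropping the request; as resources free up it
    broadcasts updates, and a master retries once its ticket reaches
    the final value (0 in this sketch)."""

    def __init__(self, capacity):
        self.in_flight = 0
        self.capacity = capacity
        self.next_ticket = 0

    def request(self):
        if self.in_flight < self.capacity:
            self.in_flight += 1
            return ("accepted", None)
        self.next_ticket += 1          # slave is full: issue a ticket
        return ("ticketed", self.next_ticket)

    def release(self):
        """A resource frees up; the slave broadcasts so every ticket
        holder decrements its ticket value."""
        self.in_flight -= 1
        return "decrement_tickets"

def on_broadcast(ticket):
    """Master-side handler: decrement the held ticket; the master may
    re-transmit its request when the ticket hits the final value."""
    ticket -= 1
    return ticket, ticket == 0
```

With capacity 1, a second request gets ticket 1, and a single resource-free broadcast brings it to 0, signaling the master to retry.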
-
Publication No.: US09817760B2
Publication Date: 2017-11-14
Application No.: US15063259
Filing Date: 2016-03-07
Applicant: QUALCOMM Incorporated
Inventor: Eric Francis Robinson , Khary Jason Alexander , Zeid Hartuon Samoail , Benjamin Charles Michelson
IPC: G06F12/08 , G06F12/0815 , G06F12/0811 , G06F12/084 , G06F12/0831
CPC classification number: G06F12/0815 , G06F12/0811 , G06F12/0831 , G06F12/084 , G06F2212/1024 , G06F2212/251 , G06F2212/621
Abstract: The disclosure relates to filtering snoops in coherent multiprocessor systems. For example, in response to a request to update a target memory location at a Level-2 (L2) cache shared among multiple local processing units each having a Level-1 (L1) cache, a lookup based on the target memory location may be performed in a snoop filter that tracks entries in the L1 caches. If the lookup misses the snoop filter and the snoop filter lacks space to store a new entry, a victim entry to evict from the snoop filter may be selected and a request to invalidate every cache line that maps to the victim entry may be sent to at least one of the processing units with one or more cache lines that map to the victim entry. The victim entry may then be replaced in the snoop filter with the new entry corresponding to the target memory location.
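The eviction path described above can be sketched as follows. A plain list models the snoop filter, and FIFO victim selection is purely illustrative; the patent does not mandate a particular replacement policy.

```python
def update_snoop_filter(snoop_filter, capacity, new_entry, send_invalidate):
    """On a snoop-filter miss with no free space, evict a victim entry,
    back-invalidate every L1 cache line that maps to it via
    send_invalidate, then install the new entry. FIFO victim choice is
    for illustration only."""
    if new_entry in snoop_filter:
        return None  # lookup hit: no new entry needed
    if len(snoop_filter) >= capacity:
        victim = snoop_filter.pop(0)   # select and evict a victim
        send_invalidate(victim)        # invalidate matching L1 lines
    snoop_filter.append(new_entry)     # track the new memory location
    return new_entry
```

The invalidation callback is what keeps the filter conservative: after eviction, no L1 line that mapped to the victim entry remains cached, so the filter never under-reports.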
-
Publication No.: US20190087333A1
Publication Date: 2019-03-21
Application No.: US16129451
Filing Date: 2018-09-12
Applicant: QUALCOMM Incorporated
Inventor: Eric Francis Robinson , Thomas Philip Speier , Joseph Gerald McDonald , Garrett Michael Drapala , Kevin Neal Magill
IPC: G06F12/0815 , G06F13/40 , G06F13/16
Abstract: Converting a stale cache memory unique request to a read unique snoop response in a multiple (multi-) central processing unit (CPU) processor is disclosed. The multi-CPU processor includes a plurality of CPUs that each have access to either private or shared cache memories in a cache memory system. When multiple CPUs issue unique requests to write data to the same coherence granule in a cache memory, one unique request from a requesting CPU is serviced, or "wins," allowing that CPU to obtain the coherence granule in a unique state, while the other unsuccessful unique requests become stale. To avoid retried unique requests being reordered behind other pending, younger requests, which would lead to a lack of forward progress due to starvation or livelock, the snooped stale unique requests are converted to read unique snoop responses so that their request order can be maintained in the cache memory system.
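The conversion can be sketched at a very high level as below. The function and response names are illustrative; the sketch only shows the ordering property (stale requests keep their slot rather than being retried and reordered).

```python
def resolve_unique_requests(requests):
    """Several CPUs request the same coherence granule in a unique
    state; the first in request order wins and the rest become stale.
    Rather than retrying (and being reordered behind younger requests),
    each stale unique request is converted into a read unique snoop
    response, preserving its position in the request order."""
    winner, stale = requests[0], requests[1:]
    responses = [("unique_granted", winner)]
    responses += [("read_unique_snoop_response", cpu) for cpu in stale]
    return responses
```

Note the output preserves the original request order, which is the forward-progress guarantee the abstract describes.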
-