-
Publication No.: US09817760B2
Publication Date: 2017-11-14
Application No.: US15063259
Filing Date: 2016-03-07
Applicant: QUALCOMM Incorporated
Inventor: Eric Francis Robinson , Khary Jason Alexander , Zeid Hartuon Samoail , Benjamin Charles Michelson
IPC: G06F12/08 , G06F12/0815 , G06F12/0811 , G06F12/084 , G06F12/0831
CPC classification number: G06F12/0815 , G06F12/0811 , G06F12/0831 , G06F12/084 , G06F2212/1024 , G06F2212/251 , G06F2212/621
Abstract: The disclosure relates to filtering snoops in coherent multiprocessor systems. For example, in response to a request to update a target memory location at a Level-2 (L2) cache shared among multiple local processing units each having a Level-1 (L1) cache, a lookup based on the target memory location may be performed in a snoop filter that tracks entries in the L1 caches. If the lookup misses the snoop filter and the snoop filter lacks space to store a new entry, a victim entry to evict from the snoop filter may be selected and a request to invalidate every cache line that maps to the victim entry may be sent to at least one of the processing units with one or more cache lines that map to the victim entry. The victim entry may then be replaced in the snoop filter with the new entry corresponding to the target memory location.
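The eviction flow described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the class and method names (`SnoopFilter`, `handle_update`), the flat capacity model, and the trivial victim-selection policy are all assumptions for clarity.

```python
# Illustrative sketch of snoop-filter eviction with back-invalidation.
# All names and the victim policy are assumptions, not from the patent.

class SnoopFilter:
    def __init__(self, capacity):
        self.capacity = capacity
        # entry -> set of L1 cache IDs that may hold lines mapping to it
        self.entries = {}

    def handle_update(self, target, requester):
        """Process an update request for `target` observed at the shared L2."""
        invalidations = []  # (cache_id, entry) back-invalidate requests to send
        if target not in self.entries:               # lookup misses the filter
            if len(self.entries) >= self.capacity:   # no space for a new entry
                victim = next(iter(self.entries))    # victim policy is arbitrary here
                for cache_id in self.entries.pop(victim):
                    # request that each L1 holding one or more lines mapping
                    # to the victim entry invalidate every such line
                    invalidations.append((cache_id, victim))
            self.entries[target] = set()             # replace victim with new entry
        self.entries[target].add(requester)
        return invalidations
```

A two-slot filter, for instance, would evict and back-invalidate only on the third distinct target, which is the behavior the abstract's "lacks space to store a new entry" condition describes.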
-
Publication No.: US09934149B2
Publication Date: 2018-04-03
Application No.: US15087649
Filing Date: 2016-03-31
Applicant: QUALCOMM Incorporated
Inventor: Khary Jason Alexander , Eric Francis Robinson
IPC: G06F12/08 , G06F12/0862 , G06F12/0811 , G06F12/0891 , G06F12/0897
CPC classification number: G06F12/0862 , G06F12/0811 , G06F12/0891 , G06F12/0897 , G06F2212/6022 , G06F2212/62
Abstract: Systems and methods relate to servicing a demand miss for a cache line in a first cache (e.g., an L1 cache) of a processing system, for example, when none of one or more fill buffers for servicing the demand miss are available. In exemplary aspects, the demand miss is converted to a prefetch operation to prefetch the cache line into a second cache (e.g., an L2 cache), wherein the second cache is a backing storage location for the first cache. Thus, servicing the demand miss is not delayed until a fill buffer becomes available, and once a fill buffer becomes available, the prefetched cache line is returned from the second cache to the available fill buffer.
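The demand-miss-to-prefetch conversion can be sketched as below. This is a hedged illustration under assumed names (`service_demand_miss`, dict-based caches and fill buffers); the actual hardware mechanism is far more involved.

```python
# Illustrative sketch: when no L1 fill buffer is free, convert the demand
# miss into a prefetch into the L2 (the L1's backing storage). All names
# and data structures here are assumptions for clarity.

def service_demand_miss(addr, fill_buffers, l1, l2, memory):
    """Handle an L1 demand miss for `addr` without stalling on fill buffers."""
    free = next((b for b in fill_buffers if b["free"]), None)
    if free is None:
        # No fill buffer available: prefetch the line into the L2 so it
        # is nearby once a buffer frees up, rather than delaying service.
        l2[addr] = memory[addr]
        return None                      # demand completes later via the L2
    # A fill buffer is available: return the line (from the L2 if it was
    # prefetched earlier, otherwise from memory) through the buffer.
    free["free"] = False
    line = l2.get(addr, memory[addr])
    free["data"], free["addr"] = line, addr
    l1[addr] = line                      # buffer drains into the L1
    free["free"] = True
    return line
```

In this sketch a miss that arrives while all buffers are busy lands the line in the L2; when the miss is retried with a free buffer, the line is returned from the L2 rather than refetched from memory, matching the abstract's two-phase service.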
-