Debugging architecture for system in package composed of multiple semiconductor chips

    Publication No.: US12204834B2

    Publication Date: 2025-01-21

    Application No.: US17132891

    Application Date: 2020-12-23

    Abstract: A method is described. The method includes maintaining a synchronized count value in each of a plurality of logic chips within a same package. The method includes comparing the count value against a same looked for count value in each of the plurality of logic chips. The method includes each of the plurality of logic chips recording in its respective local memory at least some of its state information in response to each of the plurality of logic chips recognizing within a same cycle that the count value has reached the same looked for count value.
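
    As a rough illustration of the trigger mechanism described in this abstract, the following C++ sketch models several logic chips advancing a synchronized count in lockstep and recording state into local memory when the count reaches a common looked-for value. The names (LogicChip, tick, local_mem) and the cycle model are assumptions for illustration, not details from the patent.

        // Minimal software model of the synchronized-counter debug trigger
        // described in the abstract. Names (LogicChip, looked_for, etc.) are
        // illustrative assumptions, not taken from the patent.
        #include <cstdint>
        #include <iostream>
        #include <string>
        #include <vector>

        struct LogicChip {
            std::string name;
            uint64_t count = 0;                 // synchronized count value
            std::vector<std::string> local_mem; // local trace memory

            // Advance one cycle; if the count matches the looked-for value,
            // record a snapshot of this chip's state in its local memory.
            void tick(uint64_t looked_for, const std::string& state) {
                ++count;
                if (count == looked_for) {
                    local_mem.push_back("cycle " + std::to_string(count) + ": " + state);
                }
            }
        };

        int main() {
            std::vector<LogicChip> chips = {{"chip0"}, {"chip1"}, {"chip2"}};
            const uint64_t looked_for = 5;  // same trigger value programmed into every chip

            for (uint64_t cycle = 1; cycle <= 8; ++cycle) {
                // All chips see the same cycle, so they recognize the trigger together.
                for (auto& c : chips) c.tick(looked_for, "state@" + c.name);
            }
            for (const auto& c : chips)
                for (const auto& entry : c.local_mem)
                    std::cout << c.name << " recorded " << entry << "\n";
        }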

    Method and apparatus for distributed snoop filtering

    Publication No.: US09727475B2

    Publication Date: 2017-08-08

    Application No.: US14497740

    Application Date: 2014-09-26

    CPC classification number: G06F12/0875 G06F12/0831 G06F2212/452

    Abstract: An apparatus and method are described for distributed snoop filtering. For example, one embodiment of a processor comprises: a plurality of cores to execute instructions and process data; first snoop logic to track a first plurality of cache lines stored in a mid-level cache (“MLC”) accessible by one or more of the cores, the first snoop logic to allocate entries for cache lines stored in the MLC and to deallocate entries for cache lines evicted from the MLC, wherein at least some of the cache lines evicted from the MLC are retained in a level 1 (L1) cache; and second snoop logic to track a second plurality of cache lines stored in a non-inclusive last level cache (NI LLC), the second snoop logic to allocate entries in the NI LLC for cache lines evicted from the MLC and to deallocate entries for cache lines stored in the MLC, wherein the second snoop logic is to store and maintain a first set of core valid bits to identify cores containing copies of the cache lines stored in the NI LLC.
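
    A toy model of the tracking split described above may help: one filter holds entries for lines resident in the MLC, while a second filter tracks lines in the non-inclusive LLC together with core-valid bits recording which cores may still hold copies. The data structures and names below are illustrative assumptions only.

        // Toy model of the two-level tracking split in the abstract: one filter
        // tracks lines resident in the MLC, a second tracks lines in a
        // non-inclusive LLC with per-core valid bits. Structure and names are
        // illustrative assumptions only.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>
        #include <unordered_set>

        using Line = uint64_t;

        struct MlcSnoopFilter {
            std::unordered_set<Line> entries;           // lines currently in the MLC
            void allocate(Line l)   { entries.insert(l); }
            void deallocate(Line l) { entries.erase(l); }
        };

        struct NiLlcSnoopFilter {
            // line -> core-valid bits: which cores may still hold an L1 copy
            std::unordered_map<Line, uint32_t> core_valid;
            void allocate(Line l, uint32_t cores) { core_valid[l] = cores; }
            void deallocate(Line l)               { core_valid.erase(l); }
        };

        int main() {
            MlcSnoopFilter mlc;
            NiLlcSnoopFilter llc;

            Line a = 0x1000;
            mlc.allocate(a);                 // line fills into the MLC

            // MLC eviction: the line moves to the NI LLC, so tracking moves too.
            // The L1 of core 2 may retain a copy, recorded in the core-valid bits.
            mlc.deallocate(a);
            llc.allocate(a, 1u << 2);

            std::cout << "line in MLC filter: " << mlc.entries.count(a) << "\n";
            std::cout << "LLC core-valid bits: 0x" << std::hex
                      << llc.core_valid[a] << "\n";
        }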

    INSTRUCTION AND LOGIC FOR PREFETCHER THROTTLING BASED ON DATA SOURCE

    Publication No.: US20160062768A1

    Publication Date: 2016-03-03

    Application No.: US14471261

    Application Date: 2014-08-28

    Abstract: A processor includes a core, a prefetcher, and a prefetcher control module. The prefetcher includes logic to make speculative prefetch requests through a memory subsystem for an element for execution by the core, and logic to store prefetched elements in a cache. The prefetcher control module includes logic to determine counts of memory accesses to two types of memory and, based upon the counts and the type of memory, reduce the speculative prefetch requests of the prefetcher.
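
    The throttling idea can be sketched as a pair of counters and a simple back-off policy: demand accesses are classified by the type of memory that served them, and when one type dominates, the prefetcher's aggressiveness is reduced. The two memory types, the sampling window, and the prefetch-degree knob below are assumptions for illustration, not the patent's actual policy.

        // Illustrative sketch of prefetch throttling driven by where demand
        // accesses are served from. The two memory types, thresholds, and the
        // "degree" knob are assumptions for illustration.
        #include <cstdint>
        #include <iostream>

        enum class MemType { TypeA, TypeB };  // two kinds of memory in the system

        struct PrefetcherControl {
            uint64_t count_a = 0, count_b = 0;
            int prefetch_degree = 4;          // how many speculative requests to issue

            void record_access(MemType t) { (t == MemType::TypeA) ? ++count_a : ++count_b; }

            // If most recent accesses are served by TypeB (assumed to be the memory
            // type where speculation is expensive), back off the prefetcher.
            void update_policy() {
                uint64_t total = count_a + count_b;
                if (total >= 64) {
                    if (count_b * 2 > total && prefetch_degree > 1) --prefetch_degree;
                    count_a = count_b = 0;    // start a new sampling window
                }
            }
        };

        int main() {
            PrefetcherControl pc;
            for (int i = 0; i < 64; ++i)
                pc.record_access(i % 3 ? MemType::TypeB : MemType::TypeA);
            pc.update_policy();
            std::cout << "prefetch degree after throttling: " << pc.prefetch_degree << "\n";
        }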

    Coherent accelerator fabric controller

    Publication No.: US11263143B2

    Publication Date: 2022-03-01

    Application No.: US15720231

    Application Date: 2017-09-29

    Abstract: A fabric controller is provided for a coherent accelerator fabric. The coherent accelerator fabric includes a host interconnect, a memory interconnect, and an accelerator interconnect. The host interconnect communicatively couples to a host device. The memory interconnect communicatively couples to an accelerator memory. The accelerator interconnect communicatively couples to an accelerator having a last-level cache (LLC). An LLC controller is provided that is configured to provide a bias check for memory access operations on the fabric.
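
    The abstract does not spell out what the bias check does, so the sketch below assumes a common coherent-accelerator arrangement: a per-page bias table selects between a host-coherent path and a direct path to accelerator memory. The table layout, page size, and routing names are illustrative assumptions.

        // A minimal sketch of a per-page bias check, under the assumption that
        // "bias" selects between a host-coherent path and a direct device path
        // for accelerator-attached memory. All names are illustrative, not
        // taken from the patent.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>

        enum class Bias { Host, Device };
        enum class Route { ViaHostInterconnect, DirectToAcceleratorMemory };

        struct LlcController {
            std::unordered_map<uint64_t, Bias> bias_table;  // page number -> bias

            // Bias check: decide how a memory access on the fabric is routed.
            Route check(uint64_t addr) const {
                uint64_t page = addr >> 12;                 // assume 4 KiB pages
                auto it = bias_table.find(page);
                Bias b = (it == bias_table.end()) ? Bias::Host : it->second;
                return (b == Bias::Device) ? Route::DirectToAcceleratorMemory
                                           : Route::ViaHostInterconnect;
            }
        };

        int main() {
            LlcController llc;
            llc.bias_table[0x1234] = Bias::Device;          // page flipped to device bias

            std::cout << "access 0x1234000 direct? "
                      << (llc.check(0x1234000) == Route::DirectToAcceleratorMemory) << "\n";
            std::cout << "access 0x9999000 direct? "
                      << (llc.check(0x9999000) == Route::DirectToAcceleratorMemory) << "\n";
        }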

    Optimized caching agent with integrated directory cache

    Publication No.: US10339060B2

    Publication Date: 2019-07-02

    Application No.: US15396174

    Application Date: 2016-12-30

    Abstract: A system, method, and processor for enabling early deallocation of tracker entries that track memory accesses are described herein. One embodiment of a method includes: maintaining an RSF corresponding to a first processing unit of a plurality of processing units to track cache lines, wherein a cache line is tracked by the RSF if the cache line is stored in both a memory and one or more other processing units, the memory being coupled to and shared by the plurality of processing units; receiving a request to access a target cache line from a processing core of the first processing unit; allocating a tracker entry corresponding to the request, the tracker entry used to track a status of the request; performing a lookup in the RSF for the target cache line; and deallocating the tracker entry responsive to a detection that the target cache line is not tracked by the RSF.
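
    The early-deallocation flow can be sketched as follows, treating the RSF (not expanded in the abstract) as a set of cache lines also held by other processing units: a tracker entry is allocated per request and released immediately on an RSF miss, since no remote snooping is needed. All names in the sketch are illustrative assumptions.

        // Sketch of early tracker deallocation gated on an RSF lookup. The RSF is
        // modeled as a set of lines also cached by other processing units; if the
        // requested line is not tracked there, no remote snoop is needed and the
        // tracker entry can be freed early.
        #include <cstdint>
        #include <iostream>
        #include <unordered_map>
        #include <unordered_set>

        using Line = uint64_t;

        struct CachingAgent {
            std::unordered_set<Line> rsf;                  // lines shared with other units
            std::unordered_map<int, Line> trackers;        // tracker id -> line being tracked
            int next_id = 0;

            int handle_request(Line l) {
                int id = next_id++;
                trackers[id] = l;                          // allocate a tracker entry
                if (rsf.find(l) == rsf.end()) {
                    // RSF miss: no other processing unit holds the line,
                    // so the entry can be deallocated early.
                    trackers.erase(id);
                }
                return id;
            }
        };

        int main() {
            CachingAgent agent;
            agent.rsf.insert(0x2000);                      // this line is shared remotely

            agent.handle_request(0x1000);                  // not in RSF -> freed early
            agent.handle_request(0x2000);                  // in RSF -> tracker retained

            std::cout << "live tracker entries: " << agent.trackers.size() << "\n";
        }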
