Abstract:
Systems, methods, and apparatuses relating to arbitration among a plurality of memory interface circuits in a configurable spatial accelerator are described. In one embodiment, a configurable spatial accelerator (CSA) includes a plurality of processing elements; a plurality of request address file (RAF) circuits; and a circuit-switched interconnect network between the plurality of processing elements and the RAF circuits. As a dataflow architecture, embodiments of the CSA have a unique memory architecture in which memory accesses are decoupled into explicit request and response phases, allowing pipelining through memory. Certain embodiments herein provide improved memory sub-system design via the arbitration techniques discussed herein.
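The abstract leaves the arbitration policy among the RAF circuits unspecified. As a minimal behavioral sketch, the following Python model grants one pending memory request per cycle using an assumed round-robin policy; the class and method names are illustrative, not taken from the patent.

    from collections import deque

    class RoundRobinArbiter:
        # Round-robin is an assumed policy for illustration only; the
        # patent abstract does not name a specific arbitration scheme.
        def __init__(self, num_rafs):
            self.num_rafs = num_rafs
            self.last_grant = num_rafs - 1

        def grant(self, queues):
            # Scan the requesters starting just after the last grantee.
            for offset in range(1, self.num_rafs + 1):
                idx = (self.last_grant + offset) % self.num_rafs
                if queues[idx]:
                    self.last_grant = idx
                    return idx, queues[idx].popleft()
            return None, None  # no pending requests this cycle

    queues = [deque(["load 0x10"]), deque(), deque(["store 0x20"])]
    arb = RoundRobinArbiter(3)
    print(arb.grant(queues))  # (0, 'load 0x10')
    print(arb.grant(queues))  # (2, 'store 0x20')

Decoupling requests from responses, as the abstract describes, lets several such grants be in flight through memory at once; the sketch models only the grant decision itself.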
Abstract:
Methods and apparatus for vector-matrix comparison are disclosed. In one embodiment, a processor comprises decoding and execution circuitry. The decoding circuitry decodes an instruction whose operands specify an output location to store output results, a vector of data element values, and a matrix of data element values. The execution circuitry executes the decoded instruction. The execution includes to map each of the data element values of the vector to one of consecutive rows of the matrix and, for each data element value of the vector, to compare that data element value with the data element values in the respective row of the matrix to obtain data element match results. The execution further includes to store the output results based on the data element match results, where each output result maps to a respective data element column position and indicates a vector match result.
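A compact way to see the described semantics is as an element-wise comparison of vector element i against row i of the matrix, reduced per column. The reduction across rows is not spelled out in the abstract; the NumPy sketch below assumes a logical AND, i.e., an output of true for column j means the whole vector matches column j.

    import numpy as np

    def vector_matrix_compare(vec, mat):
        # matches[i][j] is True when vec[i] equals mat[i][j]:
        # vector element i is compared against its mapped row i.
        matches = mat == vec[:, None]
        # Assumed reduction: column j matches only if every row matched.
        return matches.all(axis=0)

    vec = np.array([1, 2, 3])
    mat = np.array([[1, 9, 1],
                    [2, 2, 7],
                    [3, 0, 3]])
    print(vector_matrix_compare(vec, mat))  # [ True False False]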
Abstract:
A processor includes a processing core; an L1 cache operatively coupled to the processing core, the L1 cache comprising an L1 cache entry to store a data item; an L2 cache, inclusive with respect to the L1 cache, comprising an L2 cache entry corresponding to the L1 cache entry; an activity flag associated with the L2 cache entry, the activity flag indicating an activity status of the L1 cache entry; and a cache controller to, in response to detecting an access operation with respect to the L1 cache entry, set the activity flag to an active status.
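In behavioral terms, the controller simply marks the L2 copy's flag whenever the corresponding L1 entry is touched. The Python sketch below is a minimal model of that bookkeeping, assuming a dictionary-backed L2 and illustrative names throughout.

    class InclusiveL2:
        def __init__(self):
            # address -> {"data": ..., "active": bool}
            self.entries = {}

        def fill(self, addr, data):
            # Inclusive property: the L2 holds an entry for every
            # line that is resident in the L1.
            self.entries[addr] = {"data": data, "active": False}

        def on_l1_access(self, addr):
            # On detecting an L1 access, set the activity flag of the
            # corresponding L2 entry to an active status.
            if addr in self.entries:
                self.entries[addr]["active"] = True

    l2 = InclusiveL2()
    l2.fill(0x40, "payload")
    l2.on_l1_access(0x40)
    print(l2.entries[0x40]["active"])  # True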
Abstract:
A processor includes a core, a prefetcher, and a prefetcher control module. The prefetcher includes logic to make speculative prefetch requests through a memory subsystem for an element to be executed by the core, and logic to store prefetched elements in a cache. The prefetcher control module includes logic to determine counts of memory accesses to two types of memory and, based upon the counts and the type of memory, reduce the speculative prefetch requests of the prefetcher.
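The abstract states only that prefetches are reduced based on per-type access counts; it gives no thresholds or policy. The sketch below assumes a simple rule, cutting off speculative prefetches once accesses to the slower memory type cross an arbitrary threshold, with all names and values illustrative.

    class PrefetchThrottle:
        def __init__(self, slow_threshold=100):
            # Counts of accesses to two types of memory, e.g. a fast
            # and a slow tier; the labels are assumptions.
            self.counts = {"fast": 0, "slow": 0}
            self.slow_threshold = slow_threshold

        def record_access(self, mem_type):
            self.counts[mem_type] += 1

        def prefetch_allowed(self):
            # Assumed policy: back off speculative prefetching once
            # slow-memory traffic dominates.
            return self.counts["slow"] < self.slow_threshold

    throttle = PrefetchThrottle(slow_threshold=2)
    throttle.record_access("slow")
    throttle.record_access("slow")
    print(throttle.prefetch_allowed())  # False: prefetches are reduced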
Abstract:
Embodiments detailed herein relate to reduction operations on a plurality of data element values. In one embodiment, a processor comprises decoding circuitry to decode an instruction and execution circuitry to execute the decoded instruction. The instruction specifies a first input register containing a plurality of data element values, a first index register containing a plurality of indices, and an output register, where each index of the plurality of indices maps to one unique data element position of the first input register. The execution includes to identify data element values that are associated with one another based on the indices, perform one or more reduction operations on the associated data element values based on the identification, and store results of the one or more reduction operations in the output register.
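Read operationally, elements of the input register whose indices are equal are reduced together. The NumPy sketch below assumes a sum as the reduction operation (the abstract allows others) and assumes each group's result is stored at the output position named by its index value.

    import numpy as np

    def indexed_reduce(values, indices):
        out = np.zeros_like(values)
        # np.add.at is an unbuffered scatter-add: elements that share
        # an index value are summed together at that output position.
        np.add.at(out, indices, values)
        return out

    values = np.array([10, 20, 30, 40])
    indices = np.array([0, 2, 0, 2])
    print(indexed_reduce(values, indices))  # [40  0 60  0]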
Abstract:
A processor includes a first processing core and a first L1 cache comprising a first L1 cache data entry of a plurality of L1 cache data entries to store data. The processor also includes an L2 cache comprising a first L2 cache data entry of a plurality of L2 cache data entries. The first L2 cache data entry corresponds to the first L1 cache data entry, and each of the plurality of L2 cache data entries is associated with a corresponding presence bit (pbit) of a plurality of pbits. Each of the plurality of pbits indicates a status of a corresponding one of the plurality of L2 cache data entries. The processor also includes a cache controller which, in response to a first request among a plurality of requests to access the data at the first L1 cache data entry, determines that a copy of the data is stored in the first L2 cache data entry and retrieves the copy of the data from the first L2 cache data entry in view of the status of the pbit.
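As a behavioral sketch, the controller checks the pbit of the matching L2 entry before serving the request from the L2 copy. The Python model below simplifies the pbit to a single present/valid flag; the class and its methods are illustrative assumptions, not structures from the patent.

    class PbitCacheController:
        def __init__(self):
            self.l2 = {}  # address -> (data, pbit)

        def install(self, addr, data, pbit=True):
            self.l2[addr] = (data, pbit)

        def access(self, addr):
            # On a request targeting the L1 entry, find the corresponding
            # L2 entry and retrieve the copy in view of its pbit status.
            if addr in self.l2:
                data, pbit = self.l2[addr]
                if pbit:
                    return data
            return None  # assumed fallback to a further memory level

    ctrl = PbitCacheController()
    ctrl.install(0x80, "cached data", pbit=True)
    print(ctrl.access(0x80))  # cached data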