Prefetching into a cache to minimize main memory access time and cache
size in a computer system
    3.
    Invention Grant (Expired)

    Publication No.: US5499355A

    Publication Date: 1996-03-12

    Application No.: US339920

    Application Date: 1994-11-15

    Abstract: A cache subsystem for a computer system having a processor and a main memory is described. The cache subsystem includes a prefetch buffer coupled to the processor and the main memory. The prefetch buffer stores first data prefetched from the main memory in accordance with a predicted address for the next memory fetch by the processor. The predicted address is based upon the address of the last memory fetch from the processor. A main cache is coupled to the processor and the main memory. The main cache is not coupled to the prefetch buffer and does not receive data from the prefetch buffer. The main cache stores second data fetched from the main memory in accordance with the address of the last memory fetch by the processor only if the address of the last memory fetch is an unpredictable address. The address of the last memory fetch is unpredictable if neither the prefetch buffer nor the main cache contains that address and the data associated with it.

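    The fetch policy described in the abstract can be illustrated with a short sketch. The Python model below is an illustration only, not the patented implementation; it assumes a simple sequential predictor (predicted address = last address + 1) and a single-entry prefetch buffer, neither of which is specified by the abstract.

    # Sketch of the fetch policy: the prefetch buffer holds data at the
    # predicted next address, and the main cache is filled only on an
    # "unpredictable" miss, i.e. when neither structure holds the address.
    class CacheSubsystem:
        def __init__(self, main_memory):
            self.memory = main_memory      # address -> data (models main memory)
            self.main_cache = {}           # filled only on unpredictable misses
            self.prefetch_buffer = {}      # holds the single prefetched entry

        def fetch(self, address):
            if address in self.prefetch_buffer:
                data = self.prefetch_buffer[address]   # predicted address was right
            elif address in self.main_cache:
                data = self.main_cache[address]        # previously unpredictable address
            else:
                data = self.memory[address]            # unpredictable: neither holds it,
                self.main_cache[address] = data        # so the main cache stores it
            # Prefetch at the predicted next address (assumed sequential here).
            predicted = address + 1
            if predicted in self.memory:
                self.prefetch_buffer = {predicted: self.memory[predicted]}
            return data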

    Method and apparatus for address mapping of dynamic random access memory
    6.
    Invention Grant (Expired)

    Publication No.: US5390308A

    Publication Date: 1995-02-14

    Application No.: US869529

    Application Date: 1992-04-15

    IPC Classes: G06F12/02 G06F12/06

    CPC Classes: G06F12/0215 G06F12/0607

    Abstract: A method and apparatus for remapping the row addresses of memory requests to random access memory. A master device such as a central processing unit (CPU) issues a memory request comprising a memory address to the memory. The memory consists of multiple memory banks, each bank having a plurality of rows of memory elements. Associated with each memory bank is a sense amplifier latch which, in the present invention, functions as a row cache for the memory bank. The memory address issued as part of the memory request is composed of device identification bits that identify the memory bank to access, row bits that identify the row to access, and column address bits that identify the memory element within the row to access. When memory is to be accessed, the row of data identified by the row bits is loaded into the sense amplifier latch and then provided to the requesting master device. When a memory request is issued, control logic determines whether the requested row is already located in the sense amplifier latch. If the row is already located in the sense amplifier latch, data is immediately provided to the requesting master device. If the row is not loaded into the sense amplifier latch, the memory bank is first accessed to load the row into the latch before the data is provided to the requesting master device. Because the memory access is faster when the requested row is already located in the latch, and because memory accesses frequently exhibit spatial and temporal locality, address remapping is performed to distribute neighboring accesses among the banks of memory. By distributing accesses among the banks of memory, the probability that the requested row is located in a latch is increased and the contention for a single latch is decreased.

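    The remapping idea in the abstract can be sketched as a bit swap between the bank-select bits and the low-order row bits, so that consecutive rows fall into different banks and therefore into different sense-amplifier latches. The field widths and the particular swap below are illustrative assumptions, not taken from the patent.

    # Illustrative address split: |bank|row|column|, widths chosen arbitrarily.
    COL_BITS, ROW_BITS, BANK_BITS = 10, 12, 2

    def split(addr):
        col = addr & ((1 << COL_BITS) - 1)
        row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
        bank = addr >> (COL_BITS + ROW_BITS)
        return bank, row, col

    def remap(addr):
        # Swap the bank bits with the low-order row bits: neighboring rows
        # now map to different banks (and different row latches).
        bank, row, col = split(addr)
        new_bank = row & ((1 << BANK_BITS) - 1)
        new_row = (row >> BANK_BITS) | (bank << (ROW_BITS - BANK_BITS))
        return new_bank, new_row, col

    class Bank:
        def __init__(self):
            self.open_row = None            # row currently held in the sense-amp latch

        def access(self, row):
            hit = (row == self.open_row)    # fast case: row already latched
            if not hit:
                self.open_row = row         # otherwise load the row into the latch
            return hit

    With such a swap in place, a sequential walk over rows touches the banks in round-robin order, which is what raises the open-row hit probability and reduces contention for any single latch.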

    Slot determination mechanism using pulse counting
    7.
    Invention Grant (Expired)

    Publication No.: US5179670A

    Publication Date: 1993-01-12

    Application No.: US444633

    Application Date: 1989-12-01

    IPC Classes: G06F13/40

    CPC Classes: G06F13/4068

    Abstract: A slot determination mechanism wherein a number of bus units establish their positions along the bus and the total number of units on the bus. The units are connected in a bidirectional daisy chain. A one-cycle reset pulse is sent downstream to unit 1 (the most upstream unit). Each unit, on receiving one or more pulses from upstream, sends that many plus one pulses downstream and then sends one pulse upstream. Each unit then transmits upstream whatever it receives from downstream. The number of pulses a unit receives from upstream provides its slot number. The total number of pulses received from upstream and downstream provides the total number of units.

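    The pulse exchange in the abstract can be modelled with a short simulation. The function below is a sketch under one reading of the abstract: unit k counts k pulses from upstream (its slot number), forwards k+1 pulses downstream, returns one pulse upstream, and relays everything arriving from downstream; the names and structure are illustrative, not from the patent.

    def determine_slots(num_units):
        # Pulses each unit counts from its upstream and downstream neighbours.
        from_upstream = [0] * num_units
        from_downstream = [0] * num_units

        # Phase 1: the one-cycle reset pulse travels downstream, growing by
        # one pulse at every unit it passes through.
        pulses = 1                            # reset pulse entering unit 1
        for k in range(num_units):
            from_upstream[k] = pulses         # slot number of unit k+1
            pulses += 1                       # "that many plus one" sent downstream

        # Phase 2: every unit sends one pulse upstream, and each unit relays
        # whatever it receives from downstream toward the upstream end.
        for sender in range(num_units):
            for receiver in range(sender):    # all units upstream of the sender
                from_downstream[receiver] += 1

        slots = list(from_upstream)
        totals = [u + d for u, d in zip(from_upstream, from_downstream)]
        return slots, totals

    # Example: four units -> slots [1, 2, 3, 4], totals [4, 4, 4, 4].
    print(determine_slots(4))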