Mechanism for broadside reads of CAM structures
    1.
    Invention Grant
    Mechanism for broadside reads of CAM structures (Expired)

    Publication No.: US06493792B1

    Publication Date: 2002-12-10

    Application No.: US09495155

    Filing Date: 2000-01-31

    IPC Classification: G11C15/00

    CPC Classification: G11C15/00 G06F12/1027

    Abstract: A CAM providing for the identification of a plurality of multiple-bit tag values stored in the CAM, having logic circuitry for comparing each bit of an inputted test value to the corresponding bits of all stored tag values. A bit select is employed for generating a plurality of test bits for sequential input into the logic circuitry. The logic circuitry compares the plurality of test bits to the corresponding bit of each stored tag value and generates a “hit” signal if the selected bit is the same as the corresponding bit of the stored tag value. Storage means are employed for recording the results of the compare with the hit signal.
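
    The mechanism lends itself to a simple behavioral model: probe one bit position at a time with a test bit, record the per-entry hit signals, and rebuild each stored tag from the recorded compares. A minimal Python sketch of that idea follows (this is not the patented logic circuitry; the BroadsideCAM class and its method names are illustrative):

```python
class BroadsideCAM:
    """Behavioral model of a CAM whose contents can be read out one bit
    column at a time ("broadside") via the match logic."""

    def __init__(self, tags, width):
        self.tags = list(tags)   # stored multiple-bit tag values
        self.width = width       # tag width in bits

    def compare_bit(self, bit_pos, test_bit):
        """Return one hit signal per entry: hit if the selected bit of the
        stored tag equals the test bit."""
        return [((tag >> bit_pos) & 1) == test_bit for tag in self.tags]

    def broadside_read(self):
        """Reconstruct every stored tag by sequentially probing each bit
        position with a test bit and recording the hit signals."""
        recovered = [0] * len(self.tags)
        for pos in range(self.width):
            hits = self.compare_bit(pos, 1)      # probe with test bit = 1
            for entry, hit in enumerate(hits):
                if hit:                          # record the compare result
                    recovered[entry] |= 1 << pos
        return recovered


cam = BroadsideCAM(tags=[0b1010, 0b0111, 0b1100], width=4)
assert cam.broadside_read() == [0b1010, 0b0111, 0b1100]
```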


    Masking error detection/correction latency in multilevel cache transfers
    2.
    Invention Grant
    Masking error detection/correction latency in multilevel cache transfers (In Force)

    Publication No.: US06874116B2

    Publication Date: 2005-03-29

    Application No.: US10443103

    Filing Date: 2003-05-22

    CPC Classification: G06F11/1064 G06F12/0897

    Abstract: A method, and a corresponding apparatus, mask error detection and correction latency during multilevel cache transfers. The method includes the steps of transferring error protection encoded data lines from a first cache, checking the error protection encoded data lines for errors, wherein the checking is completed after the transferring begins, receiving the error protection encoded data lines in a second cache, and, upon detecting an error in a data line, preventing further transfer of the data line from the second cache.
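
    The key point is that the error check overlaps the transfer rather than gating it, and a detected error only blocks further forwarding from the receiving cache. A rough Python sketch of that ordering follows, using simple parity as a stand-in for the patent's error protection code (function and variable names are assumptions):

```python
def parity_ok(line):
    """Stand-in error check: `line` is (data, check_bit).  A real design
    would use an ECC such as SEC-DED rather than simple parity."""
    data, check_bit = line
    return (bin(data).count("1") & 1) == check_bit


def transfer_lines(source_cache, dest_cache, addresses):
    """Forward protected lines from source to destination, performing the
    error check after each transfer has already begun rather than before
    it.  Returns the addresses whose lines were found bad and therefore
    must not be transferred further from the destination cache."""
    blocked = set()
    for addr in addresses:
        line = source_cache[addr]
        dest_cache[addr] = line          # transfer starts immediately...
        if not parity_ok(line):          # ...the check completes afterwards
            blocked.add(addr)            # block further transfer downstream
    return blocked


src = {0x40: (0b1011, 1), 0x80: (0b1011, 0)}   # second line has a bad check bit
dst = {}
print(transfer_lines(src, dst, [0x40, 0x80]))  # {128}: the line at 0x80 is blocked
```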


    Unified cache port consolidation
    3.
    Invention Grant
    Unified cache port consolidation (In Force)

    Publication No.: US06704820B1

    Publication Date: 2004-03-09

    Application No.: US09507033

    Filing Date: 2000-02-18

    IPC Classification: G06F12/00

    CPC Classification: G06F12/0857

    Abstract: A method and apparatus consolidate ports on a unified cache. The apparatus uses a plurality of access connections with a single port of a memory. The apparatus comprises a multiplexor and a logic circuit. The multiplexor is connected to the plurality of access connections. The multiplexor has a control input and a memory connection. The logic circuit produces an output signal tied to the control input. In another form, the apparatus comprises means for selectively coupling a single one of the plurality of access connections to the memory, and a means for controlling the means for coupling. Preferably, the plurality of access connections comprises a data connection and an instruction connection, and the memory is cache memory. The method uses a single memory access connection for a plurality of access types. The method accepts one or more memory access requests on one or more respective ones of a plurality of connections. If there are memory access requests simultaneously active on two or more of the plurality of connections, then the method selects one of the simultaneously active connections and connects the selected connection to the single memory access connection.
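
    A small Python sketch of the arbitration described above: when requests are simultaneously active on the data and instruction connections, the control logic picks one and the multiplexor routes it to the single cache port. The fixed data-over-instruction priority shown here is an assumption for illustration, not the patent's policy:

```python
def consolidate_port(requests):
    """Select which of the active access connections drives the single
    cache port this cycle.  `requests` maps connection name -> request
    address or None; returns (selected connection, request) or (None, None)."""
    # Assumed fixed priority: favor the data connection over instruction fetch.
    for connection in ("data", "instruction"):
        req = requests.get(connection)
        if req is not None:
            return connection, req
    return None, None


# Both connections active in the same cycle: the mux control picks "data".
print(consolidate_port({"data": 0x1000, "instruction": 0x2000}))  # ('data', 4096)
# Only the instruction connection active: it gets the port.
print(consolidate_port({"data": None, "instruction": 0x2000}))    # ('instruction', 8192)
```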


    Updating and invalidating store data and removing stale cache lines in a prevalidated tag cache design
    4.
    Invention Grant
    Updating and invalidating store data and removing stale cache lines in a prevalidated tag cache design (Expired)

    Publication No.: US06470437B1

    Publication Date: 2002-10-22

    Application No.: US09466306

    Filing Date: 1999-12-17

    Applicant: Terry L Lyon

    Inventor: Terry L Lyon

    IPC Classification: G06F12/10

    Abstract: In a computer architecture using a prevalidated tag cache design, logic circuits are added to enable store and invalidation operations without impacting integer load data access times and to invalidate stale cache lines. The logic circuits may include a translation lookaside buffer (TLB) architecture to handle store operations in parallel with a smaller, faster integer load TLB architecture. A store valid module is added to the TLB architecture. The store valid module sets a valid bit when a new cache line is written. The valid bit is cleared on the occurrence of an invalidation operation. The valid bit prevents multiple store updates or invalidates for cache lines that are already invalid. In addition, an invalidation will block load hits on the cache line. Control logic is added to remove stale cache lines. When a cache line fill is being processed, the control logic determines whether the cache line exists in any other cache segments. If the cache line exists, the control logic directs the clearing of the store valid bits associated with the cache line.
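
    A behavioral Python sketch of the store-valid bookkeeping described above; the class and method names are illustrative, and the model leaves out the TLB side of the design:

```python
class StoreValidArray:
    """Tracks one 'store valid' bit per cache line in a prevalidated tag
    cache.  The bit is set when a line is filled, cleared by an
    invalidation, and consulted before store updates and load hits."""

    def __init__(self, num_lines):
        self.valid = [False] * num_lines

    def fill(self, line, stale_copies):
        """New cache line written: set its valid bit and clear the valid
        bits of any stale copies found in other cache segments."""
        self.valid[line] = True
        for stale in stale_copies:
            self.valid[stale] = False

    def invalidate(self, line):
        """Invalidation clears the bit, so later store updates or repeated
        invalidates to this line are ignored and load hits are blocked."""
        self.valid[line] = False

    def allow_store(self, line):
        return self.valid[line]

    def allow_load_hit(self, line):
        return self.valid[line]


sv = StoreValidArray(num_lines=4)
sv.fill(line=2, stale_copies=[0])   # line 2 filled; stale copy at line 0 cleared
sv.invalidate(2)
print(sv.allow_store(2), sv.allow_load_hit(2))  # False False
```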


    Parallel distributed function translation lookaside buffer
    5.
    Invention Grant
    Parallel distributed function translation lookaside buffer (In Force)

    Publication No.: US06874077B2

    Publication Date: 2005-03-29

    Application No.: US10648405

    Filing Date: 2003-08-27

    Applicant: Terry L Lyon

    Inventor: Terry L Lyon

    IPC Classification: G06F12/08 G06F12/10

    CPC Classification: G06F12/1054 G06F12/1027

    Abstract: In a computer system, a parallel, distributed function translation lookaside buffer (TLB) includes a small, fast TLB and a second, larger but slower TLB. The two TLBs operate in parallel, with the small TLB receiving integer load data and the large TLB receiving other virtual address information. By distributing functions, such as load and store instructions and integer and floating point instructions, between the two TLBs, the small TLB can operate with low latency and avoid thrashing and similar problems, while the larger TLB provides high bandwidth for memory-intensive operations. This mechanism also provides a parallel store update and invalidation mechanism which is particularly useful for prevalidated cache tag designs.
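
    A Python sketch of the function split between the two TLBs: integer load translations go to the small, fast TLB and everything else to the larger one, with both probed in parallel. The structure sizes, names, and fill behavior are assumptions for illustration:

```python
class DistributedTLB:
    """Two translation lookaside buffers operated in parallel: a small
    low-latency TLB for integer load addresses and a larger, slower TLB
    for other translations (stores, floating point, instruction)."""

    def __init__(self, small_entries=32):
        self.small = {}                 # virtual page -> physical page
        self.large = {}
        self.small_entries = small_entries

    def insert(self, vpage, ppage):
        # The large TLB holds every translation; the small TLB caches a
        # subset reserved for the integer-load path.
        self.large[vpage] = ppage
        if len(self.small) < self.small_entries:
            self.small[vpage] = ppage

    def translate(self, vpage, access_type):
        if access_type == "int_load":
            return self.small.get(vpage)    # fast, low-latency path
        return self.large.get(vpage)        # stores, FP, and other accesses


tlb = DistributedTLB()
tlb.insert(vpage=0x1234, ppage=0xABCD)
print(hex(tlb.translate(0x1234, "int_load")))   # 0xabcd via the small TLB
print(hex(tlb.translate(0x1234, "store")))      # 0xabcd via the large TLB
```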


    Parallel distributed function translation lookaside buffer
    6.
    Invention Grant
    Parallel distributed function translation lookaside buffer (Expired)

    Publication No.: US06625714B1

    Publication Date: 2003-09-23

    Application No.: US09466494

    Filing Date: 1999-12-17

    Applicant: Terry L Lyon

    Inventor: Terry L Lyon

    IPC Classification: G06F12/10

    CPC Classification: G06F12/1054 G06F12/1027

    Abstract: In a computer system, a parallel, distributed function translation lookaside buffer (TLB) includes a small, fast TLB and a second, larger but slower TLB. The two TLBs operate in parallel, with the small TLB receiving integer load data and the large TLB receiving other virtual address information. By distributing functions, such as load and store instructions and integer and floating point instructions, between the two TLBs, the small TLB can operate with low latency and avoid thrashing and similar problems, while the larger TLB provides high bandwidth for memory-intensive operations. This mechanism also provides a parallel store update and invalidation mechanism which is particularly useful for prevalidated cache tag designs.


    Method and system for early tag accesses for lower-level caches in parallel with first-level cache
    7.
    Invention Grant
    Method and system for early tag accesses for lower-level caches in parallel with first-level cache (In Force)

    Publication No.: US06427188B1

    Publication Date: 2002-07-30

    Application No.: US09501396

    Filing Date: 2000-02-09

    IPC Classification: G06F12/00

    Abstract: A system and method are disclosed which determine in parallel, for multiple levels of a multi-level cache, whether any one of such multiple levels is capable of satisfying a memory access request. Tags for multiple levels of a multi-level cache are accessed in parallel to determine whether the address for a memory access request is contained within any of the multiple levels. For instance, in a preferred embodiment, the tags for the first level of cache and the tags for the second level of cache are accessed in parallel. Also, additional levels of cache tags, up to N levels, may be accessed in parallel with the first-level cache tags. Thus, by the end of the access of the first-level cache tags, it is known whether a memory access request can be satisfied by the first level, the second level, or any additional N levels of cache that are accessed in parallel. Additionally, in a preferred embodiment, the multi-level cache is arranged such that the data array of a level of cache is accessed only if it is determined that such level of cache is capable of satisfying a received memory access request. Additionally, in a preferred embodiment, the multi-level cache is partitioned into N ways of associativity, and only a single way of a data array is accessed to satisfy a memory access request, thereby preserving the remaining ways of a data array to save power and resources that may be accessed to satisfy other instructions.
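
    A Python sketch of the parallel tag lookup across levels: every level's tags are probed with the request address at once, and only the hitting level (and only the hitting way) needs its data array accessed. The address split and array layout are invented for illustration:

```python
def parallel_tag_lookup(level_tags, address, set_bits=6, line_bits=6):
    """Probe the tag arrays of every cache level in parallel (modeled here
    as a loop) and return (level, way) for the closest level whose tags
    match, or None.  level_tags[level][way] maps set index -> stored tag."""
    index = (address >> line_bits) & ((1 << set_bits) - 1)
    tag = address >> (line_bits + set_bits)
    for level, ways in enumerate(level_tags):        # conceptually simultaneous
        for way, tags in enumerate(ways):
            if tags.get(index) == tag:
                return level, way                    # only this data way is read
    return None


# L1 misses but L2 way 1 hits: by the time the L1 tag check completes it is
# already known the request can be satisfied from L2, and only that single
# way of the L2 data array needs to be accessed.
l1_tags = [{}, {}]
l2_tags = [{}, {0x3: 0x5}]
addr = (0x5 << 12) | (0x3 << 6)                       # tag 0x5, set 0x3, offset 0
print(parallel_tag_lookup([l1_tags, l2_tags], addr))  # (1, 1)
```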


    Cache connection with bypassing feature
    8.
    Invention Grant
    Cache connection with bypassing feature (Expired)

    Publication No.: US06728823B1

    Publication Date: 2004-04-27

    Application No.: US09507203

    Filing Date: 2000-02-18

    IPC Classification: G06F13/14

    CPC Classification: G06F12/0897 G06F12/0888

    Abstract: A source cache transfers data to an intermediate cache along a data connection. The intermediate cache is provided between the source cache and a target, and includes a memory array. The source cache may also transfer data to the target along the data connection while bypassing the memory array of the intermediate cache.
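
    A minimal Python sketch of the bypass path: a line arriving from the source cache over the shared connection can fill the intermediate cache's memory array, be forwarded straight to the target while skipping that array, or both. Names and control signals are illustrative:

```python
def transfer_line(line, addr, intermediate_array, target_buffer,
                  fill_intermediate=True, bypass_to_target=False):
    """Move one line from the source cache along the shared data connection.
    Control signals decide whether it is written into the intermediate
    cache's memory array, forwarded straight to the target (bypassing that
    array), or both."""
    if fill_intermediate:
        intermediate_array[addr] = line          # normal fill of the intermediate cache
    if bypass_to_target:
        target_buffer.append((addr, line))       # bypass path around the memory array
    return intermediate_array, target_buffer


intermediate, target = {}, []
transfer_line("critical line", 0x100, intermediate, target,
              fill_intermediate=True, bypass_to_target=True)
print(intermediate, target)   # the target sees the line without waiting on the array
```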


    Cache chain structure to implement high bandwidth low latency cache memory subsystem
    9.
    Invention Grant
    Cache chain structure to implement high bandwidth low latency cache memory subsystem (In Force)

    Publication No.: US06557078B1

    Publication Date: 2003-04-29

    Application No.: US09510283

    Filing Date: 2000-02-21

    IPC Classification: G06F13/00

    Abstract: The inventive cache uses a queuing structure which provides out-of-order cache memory access support for multiple accesses, as well as support for managing bank conflicts and address conflicts. The inventive cache can support four data accesses that are hits per clock, support one access that misses the L1 cache every clock, and support one instruction access every clock. The responses are interspersed in the pipeline, so that conflicts in the queue are minimized. Non-conflicting accesses are not inhibited; conflicting accesses, however, are held up until the conflict clears. The inventive cache provides out-of-order support after the retirement stage of a pipeline.
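
    A simplified Python sketch of the conflict handling in the access queue: each cycle, queued accesses that do not conflict on a bank or an address with anything already issued may go out of order, while conflicting accesses are held until the conflict clears. The issue width and bank mapping used here are assumptions, not the patent's parameters:

```python
def issue_cycle(queue, max_issue=4, bank_bits=2, line_bits=6):
    """Issue up to `max_issue` non-conflicting accesses from the queue,
    possibly out of order; conflicting accesses are held back."""
    issued, used_banks, used_lines, held = [], set(), set(), []
    for addr in queue:
        bank = (addr >> line_bits) & ((1 << bank_bits) - 1)   # bank conflict key
        line = addr >> line_bits                              # address conflict key
        if len(issued) < max_issue and bank not in used_banks and line not in used_lines:
            issued.append(addr)
            used_banks.add(bank)
            used_lines.add(line)
        else:
            held.append(addr)            # held until the conflict clears
    return issued, held


# 0x000 and 0x100 map to the same bank, so 0x100 is held while the younger,
# non-conflicting accesses 0x040 and 0x080 are allowed to issue around it.
print(issue_cycle([0x000, 0x100, 0x040, 0x080]))   # ([0, 64, 128], [256])
```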


    Apparatus and method for virtual address aliasing and multiple page size support in a computer system having a prevalidated cache
    10.
    Invention Grant
    Apparatus and method for virtual address aliasing and multiple page size support in a computer system having a prevalidated cache (In Force)

    Publication No.: US06493812B1

    Publication Date: 2002-12-10

    Application No.: US09465722

    Filing Date: 1999-12-17

    Applicant: Terry L Lyon

    Inventor: Terry L Lyon

    IPC Classification: G06F12/10

    CPC Classification: G06F12/1054 G06F2212/652

    Abstract: A computer micro-architecture employing a prevalidated cache tag design includes circuitry to support virtual address aliasing and multiple page sizes. Support for various levels of address aliasing is provided through a physical address CAM, page size mask compares, and a column copy tag function. Also supported are address aliasing that invalidates aliased lines, address aliasing with TLB entries of the same page size, and address aliasing with TLB entries of different sizes. Multiple page sizes are supported with extensions to the prevalidated cache tag design by adding page size mask RAMs and virtual and physical address RAMs.
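
    A Python sketch of the page size mask compare mentioned above: each TLB entry carries a mask derived from its page size, and a virtual address matches when it agrees with the entry's virtual page under that mask, letting entries of different page sizes coexist. The 48-bit address width and helper names are assumptions for illustration:

```python
def page_mask(page_size_bytes):
    """Mask selecting the virtual-page-number bits for a given page size
    (a 48-bit virtual address is assumed for illustration)."""
    return ~(page_size_bytes - 1) & ((1 << 48) - 1)


def tlb_match(entry, vaddr):
    """An entry matches when the address agrees with the entry's virtual
    page under the entry's own page-size mask, so 4 KB and 4 MB pages can
    sit side by side in the same TLB."""
    mask = page_mask(entry["page_size"])
    return (vaddr & mask) == (entry["vpage"] & mask)


entries = [
    {"vpage": 0x1000,   "page_size": 4 * 1024},          # 4 KB page
    {"vpage": 0x400000, "page_size": 4 * 1024 * 1024},   # 4 MB page
]
print([tlb_match(e, 0x1ABC) for e in entries])     # [True, False]
print([tlb_match(e, 0x432100) for e in entries])   # [False, True]
```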
