Method and apparatus for increasing the speed of memory access in a
virtual memory system having fast page mode
    1.
    Invention Grant
    Status: Expired

    Publication No.: US5265236A

    Publication Date: 1993-11-23

    Application No.: US47876

    Filing Date: 1993-04-12

    IPC Classes: G06F12/08 G06F12/10

    Abstract: In the memory access unit of the present invention, the memory request logic is centralized in the memory management unit (MMU). The MMU instructs the MCU, which interfaces directly with the DRAMs, on the type of memory access to perform. By centralizing the memory requests, the MMU is able to maintain an account of each memory access, thereby providing the MMU with the means to determine whether a memory access fulfills the requirements of a fast page mode access before a request is made to the MCU. The MMU comprises a row address comparator which can execute the row address comparison in parallel with the cache lookup. Therefore, if the cache lookup determines that a memory access is required, a specific fast page mode memory access request can be made without the memory controller incurring the additional delay of checking the row address. Thus, by using the memory access unit of the present invention, the system can default to fast page mode access without the additional penalty normally incurred by comparing the row address serially.

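    The parallel row comparison the abstract describes can be sketched in a few lines. This is a toy model, not the patent's circuit: the `Mmu` class, `ROW_SHIFT` value, and the request-type strings are all invented for illustration. The MMU remembers the row of the last DRAM access, computes the row comparison alongside the cache lookup, and on a miss can immediately name the access type for the MCU:

```python
ROW_SHIFT = 12  # assumed: DRAM row address is the bits above a 4 KiB column/page offset

class Mmu:
    """Toy MMU that tracks the last DRAM row to pick the access type up front."""

    def __init__(self):
        self.last_row = None  # row address of the most recent DRAM access

    def access(self, addr, cache_hit):
        """Return the request type the MMU would send to the MCU."""
        row = addr >> ROW_SHIFT
        same_row = (row == self.last_row)  # row compare runs in parallel with the cache lookup
        if cache_hit:
            return "no-dram-access"        # cache satisfied the access; no DRAM request
        self.last_row = row
        return "fast-page-mode" if same_row else "full-ras-cas"
```

    In this sketch a second miss to the same row defaults straight to the fast page mode request, so the memory controller never has to check the row address serially.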

    Apparatus and method for a space saving translation lookaside buffer for
content addressable memory
    2.
    Invention Grant
    Status: Expired

    Publication No.: US5222222A

    Publication Date: 1993-06-22

    Application No.: US629258

    Filing Date: 1990-12-18

    IPC Classes: G06F12/08 G06F12/10

    CPC Classes: G06F12/1027 G06F2212/652

    Abstract: A method and apparatus for saving memory space in a buffer, whereby the valid bit in each entry of the translation lookaside buffer for a cache memory is collapsed into one of the level bits indicating the length of the virtual address. During the lookup of the translation lookaside buffer, the virtual address in each entry is compared with the virtual address from the CPU if the level/valid bit is set, i.e., the entry is valid. If the level/valid bit is not set, then no comparison takes place and the lookup continues to the next entry. The length of the virtual address to be compared is further determined by the status of the remaining level bits.
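    A minimal sketch of the collapsed level/valid bit, under invented assumptions: a 32-bit virtual address, four level bits per entry, and each set level bit adding 8 bits to the tag comparison width. None of these widths come from the patent; only the mechanism does — the first level bit doubles as the valid bit, and the remaining bits select how much of the address to compare:

```python
def tlb_lookup(entries, vaddr):
    """entries: list of (tag, level_bits); level_bits[0] doubles as the valid bit."""
    for tag, levels in entries:
        if not levels[0]:            # level/valid bit clear -> entry invalid, skip compare
            continue
        # remaining level bits determine the length of the virtual address to compare
        nbits = 8 * sum(levels)      # assumed: each set level bit adds 8 tag bits
        shift = 32 - nbits
        if (vaddr >> shift) == (tag >> shift):
            return tag               # hit: matching (partial) virtual tag
    return None                      # miss
```

    An invalid entry costs no comparison at all, which is the point of folding the valid bit into the level field instead of storing it separately.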

    Method and apparatus for optimizing supervisor mode store operations in
a data cache
    3.
    Invention Grant
    Status: Expired

    Publication No.: US5606687A

    Publication Date: 1997-02-25

    Application No.: US132795

    Filing Date: 1993-10-07

    IPC Classes: G06F12/08

    CPC Classes: G06F12/0888

    Abstract: A system and method for performing conditional cache allocate operations to a data cache in a computer system. As supervisor mode operations typically do not exhibit the data locality of access frequently found in user mode operations, it has been determined that performance benefits can be achieved by inhibiting cache allocate operations during supervisor mode. When a write miss to the cache occurs, the memory management unit checks the state of the processor status register to determine the mode of the processor. If the processor status register indicates that the processor is in supervisor mode, the memory management unit issues a signal to the data cache controller indicating that the data is non-cacheable. When the data cache controller receives a non-cacheable signal, the cache allocate process is not performed. The non-cacheable signal is issued by the memory management unit while the processor is in supervisor mode regardless of the state of the cacheable status bit associated with the memory. If the processor is not in supervisor mode, the memory management unit issues the non-cacheable signal to the data cache controller based upon the state of the cacheable status bit associated with the memory. This status bit is typically found in the corresponding page table entry of a translation lookaside buffer. Therefore, although a supervisor mode operation inhibits a cache allocate operation, subsequent non-supervisor mode operations to the same data will proceed based upon the state of the cacheable status bit associated with the memory.

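    The decision rule in the abstract reduces to a small truth table. The function name and parameters below are illustrative, not from the patent, but the logic follows it directly: in supervisor mode the data is always reported non-cacheable, and otherwise the page's cacheable status bit decides:

```python
def cacheable_signal(supervisor_mode, page_cacheable_bit):
    """MMU decision on a write miss: may the cache controller allocate a line?"""
    if supervisor_mode:
        return False               # supervisor mode: always non-cacheable, no allocate
    return page_cacheable_bit      # user mode: defer to the PTE's cacheable status bit
```

    Because the page's status bit is untouched, a later user-mode access to the same data can still allocate normally, matching the abstract's final sentence.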

    Method and apparatus for interconnection of modular electronic components
    4.
    Invention Grant
    Status: Expired

    Publication No.: US5984732A

    Publication Date: 1999-11-16

    Application No.: US970659

    Filing Date: 1997-11-14

    Applicant: Peter A. Mehring

    Inventor: Peter A. Mehring

    IPC Classes: H01R13/514 H05K5/00

    CPC Classes: H05K5/0021 H01R13/514

    Abstract: A method and an apparatus for interconnection of modular electronic components are provided. At least one foot is located on the bottom side of a component to provide mechanical support to the component. At least one receptacle is located on the top side of the component. The size and number of the receptacles correspond to the size and number of the feet. The feet contain at least one electrical connector. When multiple components are stacked, the receptacles of the lower component accept the corresponding feet and electrical connectors of the upper component, thereby forming an electrical connection. The modular electronic components comprise computer components and stereo system components. The electrical connections comprise power connections and signaling connections.


    Method and apparatus for the pipelining of data during direct memory
accesses
    5.
    Invention Grant
    Status: Expired

    Publication No.: US5590286A

    Publication Date: 1996-12-31

    Application No.: US131970

    Filing Date: 1993-10-07

    IPC Classes: G06F13/28 G06F13/38

    CPC Classes: G06F13/28

    Abstract: A method and apparatus for the pipelining of data during direct memory accesses. The processor includes an external bus controller, which receives data transmitted across the external bus from an external device and forwards the data onto the memory bus for transfer to the memory. Similarly, the bus controller receives data to be written to the external device from the memory and transfers it across the external bus to the external device. The bus controller includes logic to detect burst transfers and word alignment in order to determine the minimum number of words that can be transferred across the memory bus while the data transfer from the external device is ongoing. Therefore, instead of waiting for the entire block of data to be received into the processor before transferring it to the memory, portions of the block, for example two words at a time, are transferred to the memory while additional data is being received at the processor. If two words are transferred at a time across the memory bus, then at the end of a block transfer only one additional cycle is required to transfer the last two words of data to the memory. Similarly, for a write operation to the external device, data can be transferred piecewise across the slower external bus as it is received in the bus controller, in order to minimize the time required to complete the transfer.

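    The piecewise forwarding can be modeled as chunking an incoming word stream. This sketch is an assumption-laden simplification (the function name, the two-word `CHUNK` size, and the list-of-tuples output are all invented): as each word arrives over the external bus, the controller forwards a memory-bus transfer as soon as a full chunk is buffered, leaving at most one trailing transfer after the last word arrives:

```python
CHUNK = 2  # assumed: two words per memory-bus transfer, alignment already satisfied

def pipelined_transfer(block):
    """Model forwarding a DMA block to memory in chunks as it streams in.

    Returns the sequence of memory-bus transfers; all but possibly the last
    happen while the external-bus transfer is still in progress.
    """
    buf, transfers = [], []
    for word in block:            # one external-bus beat per incoming word
        buf.append(word)
        if len(buf) == CHUNK:     # enough words buffered for one memory-bus cycle
            transfers.append(tuple(buf))
            buf = []
    if buf:                       # trailing partial chunk: the one extra cycle at the end
        transfers.append(tuple(buf))
    return transfers
```

    Only the final transfer happens after the external bus goes idle, which is why the abstract says a block costs just one additional memory-bus cycle beyond the external transfer itself.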

    Method and apparatus for the reduction of tablewalk latencies in a
translation look aside buffer
    6.
    Invention Grant
    Status: Expired

    Publication No.: US5586283A

    Publication Date: 1996-12-17

    Application No.: US132796

    Filing Date: 1993-10-07

    IPC Classes: G06F12/10

    CPC Classes: G06F12/1027 G06F2212/684

    Abstract: A translation look aside buffer including virtual page table pointer tags provides a system and method for accessing page table entries in the page memory of the translation look aside buffer with decreased latency caused by accesses to increasing levels of page tables during a table walk of the page table. Virtual tags identifying page table pointers at a predetermined level of the page table, higher than the initial context level of the page table, are included in the tag memory of the translation look aside buffer. These virtual tags provide a pointer which points directly to the page table pointer at that predetermined level of the page table. Therefore, if a TLB miss occurs wherein a tag for a page table entry corresponding to the virtual address is not found, a comparison is performed to determine whether a corresponding virtual tag PTP is located in the tag memory. If the corresponding virtual tag PTP is found in the tag memory, access is gained to the PTP in the page table without the need to perform a time-consuming table walk through the lower levels of the page table.

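    A toy three-level walk shows the latency saving. Everything structural here is assumed, not taken from the patent: dicts stand in for page tables, the 4-bit index fields per level are invented, and `ptp_tags` plays the role of the virtual PTP tags in the tag memory. On a miss, a PTP-tag hit jumps straight to the last-level table instead of walking from the context level:

```python
def translate(vaddr, tlb, ptp_tags, page_table):
    """Return (paddr, levels_walked) for a toy 3-level table walk with PTP tags."""
    if vaddr in tlb:                         # ordinary TLB hit: no walk at all
        return tlb[vaddr], 0
    vpn = vaddr >> 12                        # assumed 4 KiB pages
    if vpn in ptp_tags:                      # virtual PTP tag hit: skip levels 0 and 1
        table, levels = ptp_tags[vpn], 1
    else:                                    # full walk from the context level
        table = page_table[(vpn >> 8) & 0xF]  # level 0 index (assumed 4-bit field)
        table = table[(vpn >> 4) & 0xF]       # level 1 index
        levels = 3
    paddr = table[vpn & 0xF]                 # final level yields the translation
    return paddr, levels
```

    The `levels_walked` count makes the benefit visible: a PTP-tag hit touches one table instead of three.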

    Computer system with a shared address bus and pipelined write operations
    7.
    Invention Grant
    Status: Expired

    Publication No.: US6141741A

    Publication Date: 2000-10-31

    Application No.: US705057

    Filing Date: 1996-08-29

    Abstract: A computer system with a multiplexed address bus that is shared by both system memory and slave devices is described. The slave devices are incorporated into an existing system memory configuration by providing a bus controller to execute a two-cycle address sequence on the multiplexed address bus. The address sequence is followed by a transfer of data. A random latency can exist between the time of receiving address information and the time of receiving data corresponding to the address information. This random latency can be exploited by the system CPU for other computational purposes. The bus controller of the system executes multiple, or pipelined, data writes to the bus before an acknowledgement for the first data write is received. In this scheme, the acknowledgement for the first data write is typically sent during the same time period that the subsequent data writes are being received. Consequently, data transfer acknowledgements overlap data writes. This overlapping operation allows the bus to be completely utilized during write operations, thereby improving data bandwidth.

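    The bandwidth argument can be made concrete with a simple cycle count. This model is an invented illustration (the function and its one-cycle default acknowledgement latency are assumptions, not figures from the patent): with pipelining, acknowledgements overlap later writes, so only the final acknowledgement adds latency; without it, every write stalls for its acknowledgement before the next can start:

```python
def write_cycles(n_writes, ack_latency=1, pipelined=True):
    """Bus cycles to complete n data writes when each ack takes ack_latency cycles."""
    if pipelined:
        # acks arrive while subsequent writes are on the bus; only the last ack is exposed
        return n_writes + ack_latency
    # non-pipelined: write, wait for ack, repeat
    return n_writes * (1 + ack_latency)
```

    For a burst of writes the pipelined cost approaches one cycle per write, i.e., the bus stays fully utilized, which is the bandwidth improvement the abstract claims.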