ADDRESS-PARTITIONED MULTI-CLASS PHYSICAL MEMORY SYSTEM
    Invention application - Pending (published)

    Publication No.: US20150261662A1

    Publication date: 2015-09-17

    Application No.: US14206512

    Filing date: 2014-03-12

    CPC classification number: G06F12/023 G06F12/0284 G06F2212/1044

    Abstract: A multilevel memory system includes a plurality of memories and a processor having a memory controller. The memory controller classifies each memory in accordance with a plurality of memory classes based on its level, its type, or both. The memory controller partitions a unified memory address space into contiguous address blocks and allocates the address blocks among the memory classes. In some implementations, the memory controller then can partition the address blocks assigned to each given memory class into address subblocks and interleave the address subblocks among the memories of the memory class.

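    The abstract describes a two-step address decode that a short sketch can make concrete. The block size, subblock size, class assignments, and device counts below are illustrative assumptions, not values from the application: a contiguous address block selects a memory class, then subblock interleaving selects a memory within that class.

        #include <cstdint>
        #include <cstdio>

        // Illustrative parameters; the application does not fix block or subblock
        // sizes, class count, or device counts.
        constexpr uint64_t kBlockSize      = 1ull << 30;          // 1 GiB contiguous address blocks
        constexpr uint64_t kSubblockSize   = 1ull << 12;          // 4 KiB interleave granularity
        constexpr int      kClassOfBlock[] = {0, 0, 1, 1, 1, 1};  // blocks 0-1 -> class 0 (e.g. in-package),
                                                                  // blocks 2-5 -> class 1 (e.g. off-package)
        constexpr int      kDevicesInClass[] = {2, 4};            // memories per class

        struct Target { int mem_class; int device; uint64_t subblock_offset; };

        // Two-step decode of a unified physical address: the contiguous block picks
        // the memory class, then subblock interleaving picks a device in that class.
        Target decode(uint64_t addr) {
            uint64_t block    = addr / kBlockSize;
            int      cls      = kClassOfBlock[block];
            uint64_t subblock = addr / kSubblockSize;
            int      dev      = static_cast<int>(subblock % kDevicesInClass[cls]);
            return {cls, dev, addr % kSubblockSize};   // device-local address computation omitted
        }

        int main() {
            Target t = decode(0x120000000ull);   // 4.5 GiB: block 4, so class 1
            std::printf("class=%d device=%d offset=0x%llx\n",
                        t.mem_class, t.device, (unsigned long long)t.subblock_offset);
            return 0;
        }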

    INTERCONNECT ARCHITECTURE FOR THREE-DIMENSIONAL PROCESSING SYSTEMS

    Publication No.: US20210312952A1

    Publication date: 2021-10-07

    Application No.: US17224603

    Filing date: 2021-04-07

    Abstract: A processing system includes a plurality of processor cores formed in a first layer of an integrated circuit device and a plurality of partitions of memory formed in one or more second layers of the integrated circuit device. The one or more second layers are deployed in a stacked configuration with the first layer. Each of the partitions is associated with a subset of the processor cores that have overlapping footprints with the partitions. The processing system also includes first memory paths between the processor cores and their corresponding subsets of partitions. The processing system further includes second memory paths between the processor cores and the partitions.
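
    As a rough illustration of the two kinds of memory paths, the sketch below models a core-to-partition access on an assumed 2D grid of cores with one memory partition stacked above each core; the layout and hop counts are hypothetical, not taken from the application.

        #include <cstdio>
        #include <cstdlib>

        struct Hops { int vertical; int horizontal; };

        // A core reaching the partition stacked directly above it (overlapping
        // footprint) uses only a vertical connection: the first memory path.
        // Reaching any other partition adds horizontal hops across the core layer
        // before going up: the second memory path.
        Hops route(int core_x, int core_y, int part_x, int part_y) {
            return {1, std::abs(core_x - part_x) + std::abs(core_y - part_y)};
        }

        int main() {
            Hops local  = route(1, 1, 1, 1);   // first path: overlapping footprint
            Hops remote = route(1, 1, 3, 0);   // second path: another core's partition
            std::printf("local: %dv+%dh  remote: %dv+%dh\n",
                        local.vertical, local.horizontal, remote.vertical, remote.horizontal);
            return 0;
        }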

    Write Endurance Management Techniques in the Logic Layer of a Stacked Memory
    Invention application - Granted

    Publication No.: US20140181457A1

    Publication date: 2014-06-26

    Application No.: US13725305

    Filing date: 2012-12-21

    CPC classification number: G06F12/10 G06F11/1666 G06F11/2094

    Abstract: A system, method, and memory device embodying some aspects of the present invention for remapping external memory addresses and internal memory locations in stacked memory are provided. The stacked memory includes one or more memory layers configured to store data. The stacked memory also includes a logic layer connected to the memory layer. The logic layer has an Input/Output (I/O) port configured to receive read and write commands from external devices, a memory map configured to maintain an association between external memory addresses and internal memory locations, and a controller coupled to the I/O port, memory map, and memory layers, configured to store data received from external devices to internal memory locations.

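    A minimal sketch of the memory map kept in the logic layer follows. The 4 KiB remap granularity and the hash-map representation are assumptions made for illustration, and the wear-leveling policy that decides when to remap is omitted.

        #include <cstdint>
        #include <unordered_map>
        #include <cstdio>

        // The logic-layer memory map translates external block addresses to internal
        // block locations; changing an entry steers writes away from a worn block.
        constexpr uint64_t kBlockShift = 12;   // 4 KiB remap granularity (illustrative)

        class MemoryMap {
        public:
            // Translate an external address to an internal one; identity if unmapped.
            uint64_t translate(uint64_t external) const {
                uint64_t block = external >> kBlockShift;
                auto it = map_.find(block);
                uint64_t internal_block = (it != map_.end()) ? it->second : block;
                return (internal_block << kBlockShift) | (external & ((1ull << kBlockShift) - 1));
            }
            // Redirect a heavily written external block to a spare internal block.
            void remap(uint64_t external_block, uint64_t internal_block) {
                map_[external_block] = internal_block;
            }
        private:
            std::unordered_map<uint64_t, uint64_t> map_;
        };

        int main() {
            MemoryMap mm;
            mm.remap(0x10, 0x7F0);   // retire external block 0x10 to spare 0x7F0
            std::printf("0x10234 -> 0x%llx\n", (unsigned long long)mm.translate(0x10234));
            return 0;
        }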

    MEMORY SYSTEM WITH REGION-SPECIFIC MEMORY ACCESS SCHEDULING

    Publication No.: US20210200433A1

    Publication date: 2021-07-01

    Application No.: US17199949

    Filing date: 2021-03-12

    Abstract: An integrated circuit device includes a memory controller coupleable to a memory. The memory controller schedules memory accesses to regions of the memory based on memory timing parameters specific to those regions. A method includes receiving a memory access request at a memory device. The method further includes accessing, from a timing data store of the memory device, data representing a memory timing parameter specific to the region of memory cell circuitry targeted by the memory access request. The method also includes scheduling, at the memory controller, the memory access request based on the data.
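
    The scheduling step can be sketched as a lookup in a per-region timing store followed by a timing calculation. The 16 MiB region size and the single activate-to-access parameter below are illustrative assumptions rather than the parameters the application defines.

        #include <cstdint>
        #include <vector>
        #include <cstdio>

        // Region-specific scheduling: look up timing for the region an access
        // targets instead of applying one worst-case value to the whole device.
        constexpr uint64_t kRegionSize = 1ull << 24;   // 16 MiB regions (illustrative)

        struct RegionTiming { uint32_t activate_to_access_ns; };

        class TimingStore {
        public:
            explicit TimingStore(std::vector<RegionTiming> t) : timings_(std::move(t)) {}
            const RegionTiming& lookup(uint64_t addr) const { return timings_[addr / kRegionSize]; }
        private:
            std::vector<RegionTiming> timings_;
        };

        // Earliest issue time for a request, given when its row was activated.
        uint64_t schedule(const TimingStore& store, uint64_t addr, uint64_t activate_time_ns) {
            return activate_time_ns + store.lookup(addr).activate_to_access_ns;
        }

        int main() {
            TimingStore store({{14}, {11}, {18}});   // fast and slow regions (made-up values)
            std::printf("issue at %llu ns\n",
                        (unsigned long long)schedule(store, 1 * kRegionSize + 0x40, 100));
            return 0;
        }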

    Invalidation of Dead Transient Data in Caches
    Invention application - Pending (published)

    Publication No.: US20140173216A1

    Publication date: 2014-06-19

    Application No.: US13718398

    Filing date: 2012-12-18

    CPC classification number: G06F12/0891 Y02D10/13

    Abstract: Embodiments include methods, systems, and articles of manufacture directed to identifying transient data upon storing the transient data in a cache memory, and invalidating the identified transient data in the cache memory.

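    A sketch of the mechanism follows, under the assumption that transience is signalled when a line is filled and that a later bulk operation invalidates all flagged lines without writing them back; the interface shown is hypothetical.

        #include <cstdint>
        #include <vector>

        // Each cache line carries a "transient" flag set when the data is stored.
        // Once the transient data is known to be dead, flagged lines are invalidated
        // without the writeback a normal dirty eviction would require.
        struct CacheLine {
            uint64_t tag = 0;
            bool     valid = false;
            bool     dirty = false;
            bool     transient = false;
        };

        class Cache {
        public:
            explicit Cache(size_t lines) : lines_(lines) {}
            void fill(size_t idx, uint64_t tag, bool is_transient) {
                lines_[idx] = {tag, true, true, is_transient};
            }
            // Invalidate every line holding transient data; returns the count.
            size_t invalidate_transient() {
                size_t n = 0;
                for (auto& l : lines_) {
                    if (l.valid && l.transient) { l.valid = false; l.dirty = false; ++n; }
                }
                return n;
            }
        private:
            std::vector<CacheLine> lines_;
        };

        int main() {
            Cache c(64);
            c.fill(3, 0xABC, /*is_transient=*/true);
            return c.invalidate_transient() == 1 ? 0 : 1;
        }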

    Processor with Host and Slave Operating Modes Stacked with Memory
    Invention application - Pending (published)

    Publication No.: US20140181453A1

    Publication date: 2014-06-26

    Application No.: US13721395

    Filing date: 2012-12-20

    CPC classification number: G11C5/06 G06F12/02 G06F12/10 G06F13/1694 G11C7/1006

    Abstract: A system, method, and computer program product are provided for a memory device system. One or more memory dies and at least one logic die are disposed in a package and communicatively coupled. The logic die comprises a processing device configurable to manage virtual memory and operate in an operating mode. The operating mode is selected from a set of operating modes comprising a slave operating mode and a host operating mode.

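    The mode selection can be pictured with a small configuration sketch. The boot-time trigger used here (whether an external host is present) is an assumption; the abstract only states that the mode is selected from a set comprising a slave mode and a host mode.

        #include <cstdio>

        enum class OperatingMode { Slave, Host };

        struct LogicDieConfig {
            OperatingMode mode;
            bool manage_virtual_memory;   // the in-stack processor can handle translation in either mode
        };

        // Assumed policy: act as a slave when an external processor drives the stack,
        // otherwise take over as host.
        LogicDieConfig configure(bool external_host_present) {
            return {external_host_present ? OperatingMode::Slave : OperatingMode::Host, true};
        }

        int main() {
            LogicDieConfig cfg = configure(/*external_host_present=*/false);
            std::printf("mode=%s vm=%d\n",
                        cfg.mode == OperatingMode::Host ? "host" : "slave",
                        cfg.manage_virtual_memory ? 1 : 0);
            return 0;
        }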

    Compound Memory Operations in a Logic Layer of a Stacked Memory
    Invention application - Pending (published)

    Publication No.: US20140181427A1

    Publication date: 2014-06-26

    Application No.: US13724338

    Filing date: 2012-12-21

    CPC classification number: G06F9/3004 G06F9/3455 G06F15/7821

    Abstract: Some die-stacked memories will contain a logic layer in addition to one or more layers of DRAM (or other memory technology). This logic layer may be a discrete logic die or logic on a silicon interposer associated with a stack of memory dies. Additional circuitry/functionality is placed on the logic layer to implement functionality to perform various data movement and address calculation operations. This functionality would allow compound memory operations—a single request communicated to the memory that characterizes the accesses and movement of many data items. This eliminates the performance and power overheads associated with communicating address and control information on a fine-grain, per-data-item basis from a host processor (or other device) to the memory. This approach also provides better visibility of macro-level memory access patterns to the memory system and may enable additional optimizations in scheduling memory accesses.

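    One way to picture a compound operation is a single gather descriptor expanded entirely inside the logic layer, as sketched below; the strided-gather format is an illustrative choice, not a command format defined by the application.

        #include <cstddef>
        #include <cstdint>
        #include <vector>

        // The host sends one descriptor and the logic layer expands it into many
        // element accesses locally, so per-item addresses never cross the
        // host-memory interface.
        struct GatherCommand {
            uint64_t base;      // starting address in the stack
            uint64_t stride;    // bytes between consecutive elements
            uint32_t count;     // number of elements to collect
            uint32_t elem_size; // bytes per element
        };

        // Executed by the logic layer: performs the address calculation and data
        // movement that would otherwise be issued item-by-item by the host.
        std::vector<uint8_t> execute_gather(const GatherCommand& cmd,
                                            const std::vector<uint8_t>& dram) {
            std::vector<uint8_t> out;
            out.reserve(size_t(cmd.count) * cmd.elem_size);
            for (uint32_t i = 0; i < cmd.count; ++i) {
                size_t addr = static_cast<size_t>(cmd.base + i * cmd.stride);
                out.insert(out.end(), dram.begin() + addr, dram.begin() + addr + cmd.elem_size);
            }
            return out;
        }

        int main() {
            std::vector<uint8_t> dram(1024, 0);
            auto rows = execute_gather({/*base=*/8, /*stride=*/64, /*count=*/4, /*elem_size=*/4}, dram);
            return rows.size() == 16 ? 0 : 1;
        }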

    Prefetch Kernels on Data-Parallel Processors
    Invention application - Pending (published)

    Publication No.: US20140149677A1

    Publication date: 2014-05-29

    Application No.: US13685133

    Filing date: 2012-11-26

    Abstract: Embodiments include methods, systems and computer readable media configured to execute a first kernel (e.g. compute or graphics kernel) with reduced intermediate state storage resource requirements. These include executing a first and second (e.g. prefetch) kernel on a data-parallel processor, such that the second kernel begins executing before the first kernel. The second kernel performs memory operations that are based upon at least a subset of memory operations in the first kernel.

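    The idea can be sketched in plain host code, with two loops standing in for the GPU kernels (an assumption about structure, not the claimed implementation): the prefetch "kernel" issues only the main kernel's loads and is started first, so the data is already on its way when the compute kernel runs.

        #include <cstddef>
        #include <numeric>
        #include <vector>

        // "Prefetch kernel": replays a subset of the main kernel's memory operations
        // without the arithmetic. __builtin_prefetch is a GCC/Clang builtin used here
        // purely for illustration.
        void prefetch_kernel(const std::vector<float>& in, const std::vector<size_t>& idx) {
            for (size_t i : idx) {
                __builtin_prefetch(&in[i]);   // touch the addresses the main kernel will load
            }
        }

        // Main (compute) kernel: performs the same loads plus the arithmetic.
        float main_kernel(const std::vector<float>& in, const std::vector<size_t>& idx) {
            float sum = 0.0f;
            for (size_t i : idx) sum += in[i] * in[i];
            return sum;
        }

        int main() {
            std::vector<float> in(1 << 20, 1.0f);
            std::vector<size_t> idx(1 << 10);
            std::iota(idx.begin(), idx.end(), size_t{0});
            prefetch_kernel(in, idx);        // launched first, as in the abstract
            return main_kernel(in, idx) > 0.0f ? 0 : 1;
        }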
