Abstract:
According to one general aspect, an apparatus may include a memory circuit die configured to store a lookup table that converts first data to second data. The apparatus may also include a logic circuit die comprising combinatorial logic circuits configured to receive the second data. The apparatus may further include an optical via coupled between the memory circuit die and the logic circuit die and configured to transfer the second data between the memory circuit die and the logic circuit die.
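The data flow described above lends itself to a small behavioral model. The sketch below, in Python, is illustrative only: the particular LUT contents, the logic function, and the names (make_lookup_table, combinational_logic, optical_via) are assumptions rather than details of the disclosure, and the optical via is modeled as a lossless pass-through.

    def make_lookup_table(width=4):
        """Memory-die LUT: maps each input code ("first data") to a precomputed output ("second data")."""
        return {i: (i * 3) & 0xF for i in range(2 ** width)}  # example mapping only

    def combinational_logic(second_data):
        """Logic-die function applied to the data received over the optical via."""
        return (second_data ^ 0b1010) & 0xF  # arbitrary illustrative gates

    def optical_via(payload):
        """Models the die-to-die optical link as a lossless pass-through."""
        return payload

    lut = make_lookup_table()
    first_data = 0b0110
    second_data = optical_via(lut[first_data])   # memory die -> logic die
    result = combinational_logic(second_data)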
Abstract:
A high-bandwidth memory (HBM) system includes an HBM device and a logic circuit. The logic circuit receives a first command from the host device and converts the received first command to a processing-in-memory (PIM) command that is sent to the HBM device through the second interface. A time between when the first command is received from the host device and when the HBM system is ready to receive another command from the host device is deterministic. The logic circuit further receives a fourth command and a fifth command from the host device. The fifth command requests time-estimate information relating to a time between when the fifth command is received and when the HBM system is ready to receive another command from the host device. The time-estimate information includes a deterministic period of time and an estimated period of time for a non-deterministic period of time.
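A minimal Python sketch of the command handling described above. The class and method names are hypothetical and the timing values are placeholders; only the shape of the behavior (command conversion, and a time estimate split into deterministic and estimated non-deterministic parts) follows the abstract.

    class HBMLogicCircuit:
        DETERMINISTIC_NS = 100          # fixed, deterministic turnaround (placeholder)

        def handle_first_command(self, host_cmd):
            """Convert a host command into a PIM command for the HBM device."""
            pim_cmd = {"op": "PIM_" + host_cmd["op"], "addr": host_cmd["addr"]}
            return pim_cmd              # would be sent to the HBM device

        def handle_time_estimate_request(self, pending_work_items):
            """Answer a time-estimate query: a deterministic part plus an
            estimate for the non-deterministic part (e.g., pending PIM work)."""
            deterministic_ns = self.DETERMINISTIC_NS
            estimated_ns = 50 * pending_work_items   # placeholder estimate
            return deterministic_ns, estimated_ns

    logic = HBMLogicCircuit()
    pim = logic.handle_first_command({"op": "ADD", "addr": 0x1000})
    det_ns, est_ns = logic.handle_time_estimate_request(pending_work_items=3)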
Abstract:
A data chip that may pollute data is disclosed. The data chip may include a data array, read circuitry to read raw data from the data array, and a buffer to store the raw data. Using a pollution pattern stored in a mask register, a data pollution engine may pollute the raw data. Transmission circuitry may then transmit the polluted data.
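A minimal sketch of the pollution step in Python. The abstract does not fix how the pollution pattern is applied, so the XOR choice, the example mask value, and all names here are illustrative assumptions.

    MASK_REGISTER = bytes.fromhex("ff00ff00ff00ff00")   # example pollution pattern

    def read_raw_data(data_array: bytes, offset: int, length: int) -> bytes:
        """Read circuitry: pull raw data out of the data array into a buffer."""
        return data_array[offset:offset + length]

    def pollute(raw: bytes, mask: bytes) -> bytes:
        """Data pollution engine: combine the buffered raw data with the mask."""
        return bytes(b ^ m for b, m in zip(raw, mask))

    data_array = bytes(range(64))
    buffered = read_raw_data(data_array, offset=8, length=8)
    polluted = pollute(buffered, MASK_REGISTER)          # then transmitted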
Abstract:
A memory module includes one or more memory devices, a memory interface to a host computer, and a memory overprovisioning logic. The memory overprovisioning logic is configured to monitor memory usage of the one or more memory devices and provide a compression and/or deduplication ratio of the memory module to a kernel driver module of the host computer. The kernel driver module of the host computer is configured to update a virtual memory capacity of the memory module based on the compression and/or deduplication ratio.
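A short Python sketch of the capacity update loop described above, split between the module-side overprovisioning logic and the host-side kernel driver. The ratio definition and the update rule are assumptions for illustration only.

    def report_ratio(physical_bytes_used: int, logical_bytes_stored: int) -> float:
        """Module-side overprovisioning logic: compression/deduplication ratio."""
        return logical_bytes_stored / physical_bytes_used

    def update_virtual_capacity(physical_capacity_bytes: int, ratio: float) -> int:
        """Host-side kernel driver: scale the advertised virtual capacity."""
        return int(physical_capacity_bytes * ratio)

    ratio = report_ratio(physical_bytes_used=4 << 30, logical_bytes_stored=10 << 30)
    virtual_capacity = update_virtual_capacity(16 << 30, ratio)   # 16 GiB module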
Abstract:
A hybrid memory module includes a dynamic random access memory (DRAM) cache, a flash storage, and a memory controller. The DRAM cache includes one or more DRAM devices and a DRAM controller, and the flash storage includes one or more flash devices and a flash controller. The memory controller interfaces with the DRAM controller and the flash controller.
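A behavioral sketch, in Python, of one plausible read path for such a module: the memory controller consults the DRAM cache first and falls back to flash on a miss. The class names and the cache-fill policy are assumptions; the abstract itself only states that the memory controller interfaces with both controllers.

    class HybridMemoryModule:
        def __init__(self, flash_contents: dict):
            self.dram_cache = {}            # DRAM devices behind a DRAM controller
            self.flash = flash_contents     # flash devices behind a flash controller

        def read(self, addr):
            if addr in self.dram_cache:     # DRAM cache hit
                return self.dram_cache[addr]
            data = self.flash[addr]         # miss: fetch from flash storage
            self.dram_cache[addr] = data    # fill the cache for later accesses
            return data

    module = HybridMemoryModule(flash_contents={0x100: b"payload"})
    value = module.read(0x100)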
Abstract:
An embodiment includes a module comprising a memory bus interface, circuitry, and a controller coupled to the memory bus interface and the circuitry. The controller is configured to collect meta-data associated with the circuitry and to enable access to the meta-data in response to a memory access received through the memory bus interface.
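A Python sketch of the controller behavior: meta-data collected about the circuitry is exposed at reserved addresses reachable through the memory bus interface. The address map and field names are assumptions made for illustration.

    META_BASE = 0xFFFF_0000          # hypothetical reserved window for meta-data

    class ModuleController:
        def __init__(self):
            self.meta = {"temperature_c": 41, "error_count": 0}   # collected meta-data
            self._fields = list(self.meta)

        def memory_access(self, addr):
            """Serve a read arriving over the memory bus interface."""
            if addr >= META_BASE:                     # meta-data window
                field = self._fields[addr - META_BASE]
                return self.meta[field]
            raise NotImplementedError("normal memory path not modeled here")

    ctrl = ModuleController()
    temperature = ctrl.memory_access(META_BASE)       # reads 'temperature_c'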
Abstract:
According to one general aspect, an apparatus may include a random access memory array that, in turn, includes a reconfigurable look-up table. The reconfigurable look-up table may include memory cells configured to simultaneously store a plurality of look-up tables, wherein each look-up table is associated with a respective logic function. The reconfigurable look-up table may include a local row decoder configured to activate one or more rows of memory cells based upon a set of input signals. The reconfigurable look-up table may be configured to perform one logic function at a time, wherein the logic function is dynamically selected. The plurality of look-up tables stored in the memory cells may be configured to be dynamically altered via a write operation to the random access memory array.
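A Python sketch of a RAM-backed reconfigurable look-up table that holds several truth tables at once, with one selected per evaluation. The storage layout and names are illustrative, and the local row decoder is abstracted into simple indexing by the input bits.

    class ReconfigurableLUT:
        def __init__(self, num_inputs=2):
            self.num_inputs = num_inputs
            self.tables = {}                       # function name -> list of outputs

        def write_table(self, name, outputs):
            """Write operation to the RAM array: store or replace a truth table."""
            assert len(outputs) == 2 ** self.num_inputs
            self.tables[name] = list(outputs)

        def evaluate(self, name, inputs):
            """Apply the dynamically selected logic function to the input signals;
            the row decoder's job (picking a row from the inputs) becomes indexing."""
            row = 0
            for bit in inputs:
                row = (row << 1) | (bit & 1)
            return self.tables[name][row]

    lut = ReconfigurableLUT()
    lut.write_table("AND", [0, 0, 0, 1])
    lut.write_table("XOR", [0, 1, 1, 0])
    result = lut.evaluate("XOR", [1, 0])           # evaluates to 1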
Abstract:
A data structure and a mechanism to manage storage of objects are disclosed. The data structure can be used to manage storage of objects on any storage device, whether in memory or some other storage device. Given an object ID (OID) for an object, the system can identify a tuple that includes a device ID and an address. The device ID specifies the device storing the object, and the address specifies the address on the device where the object is stored. An application can then access the object using the device ID and the address.
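A minimal Python sketch of the OID-to-(device ID, address) mapping. A dictionary stands in for the disclosed data structure; the function names and example values are assumptions.

    object_map = {}                                  # OID -> (device_id, address)

    def store_mapping(oid, device_id, address):
        object_map[oid] = (device_id, address)

    def locate(oid):
        """Given an OID, return the tuple identifying where the object lives."""
        return object_map[oid]

    store_mapping(oid=42, device_id="nvme0", address=0x0040_0000)
    device_id, address = locate(42)                  # the application then accesses the object there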
Abstract:
In an Error Correction Code (ECC)-based memory, a Single Error Correction Double Error Detection (SECDED) scheme is used with data aggregation to correct more than one error in a memory word received in a memory burst. By completely utilizing the Hamming distance of the SECDED (128,120) code, 8 ECC bits can potentially correct one error in 120 data bits. Each memory burst is effectively “expanded” from its actual 64 data bits to 120 data bits by “sharing” an additional 56 data bits from all of the other related bursts. When a cache line of 512 bits is read, the SECDED (128,120) code is used in conjunction with all the received 64 ECC bits to correct more than one error in the actual 64 bits of data in a memory word. The data mapping of the present disclosure translates to a higher rate of error correction than the existing (72,64) SECDED code.
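The bit accounting in the abstract can be checked directly. The Python snippet below only verifies the code parameters and the per-cache-line arithmetic; it does not implement the SECDED encoder or the disclosed data mapping.

    r = 8                         # ECC bits per burst
    k = 120                       # data bits covered by one (128,120) codeword
    n = k + r                     # 128-bit codeword
    assert 2 ** (r - 1) >= n      # Hamming SECDED bound: 8 check bits suffice for 120 data bits

    burst_data_bits = 64
    cache_line_bits = 512
    bursts_per_line = cache_line_bits // burst_data_bits    # 8 bursts per cache line
    shared_bits = k - burst_data_bits                       # 56 bits "shared" from the other bursts
    ecc_bits_per_line = bursts_per_line * r                 # 64 ECC bits, as in the abstract

    assert bursts_per_line == 8 and shared_bits == 56 and ecc_bits_per_line == 64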
Abstract:
According to one general aspect, an apparatus may include a plurality of stacked integrated circuit dies that include a memory cell die and a logic die. The memory cell die may be configured to store data at a memory address. The logic die may include an interface to the stacked integrated circuit dies configured to communicate memory accesses between the memory cell die and at least one external device. The logic die may include a reliability circuit configured to ameliorate data errors within the memory cell die. The reliability circuit may include a spare memory configured to store data, and an address table configured to map a memory address associated with an error to the spare memory. The reliability circuit may be configured to determine if a memory access is associated with an error, and, if so, complete the memory access with the spare memory.
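A Python sketch of the remapping behavior in such a reliability circuit: addresses known to be faulty are redirected to spare memory via an address table. The names and the way faults are flagged are illustrative assumptions; error detection itself is not modeled.

    class ReliabilityCircuit:
        def __init__(self, backing_memory: dict):
            self.memory = backing_memory     # stands in for the memory cell die
            self.spare = {}                  # spare memory on the logic die
            self.remap = {}                  # address table: faulty addr -> spare slot

        def mark_faulty(self, addr):
            self.remap[addr] = addr          # reuse the address as the spare-slot key

        def write(self, addr, data):
            if addr in self.remap:           # access associated with an error
                self.spare[self.remap[addr]] = data
            else:
                self.memory[addr] = data

        def read(self, addr):
            if addr in self.remap:
                return self.spare[self.remap[addr]]
            return self.memory[addr]

    rc = ReliabilityCircuit(backing_memory={})
    rc.mark_faulty(0x200)
    rc.write(0x200, b"data")                 # completed with the spare memory
    assert rc.read(0x200) == b"data"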