Abstract:
According to one general aspect, an apparatus may include a memory, an erasure-based, non-volatile memory, and a processor. The memory may be configured to store a mapping table, wherein the mapping table indicates a rewriteable state of a plurality of memory addresses. The erasure-based, non-volatile memory may be configured to store information, at respective memory addresses, in an encoded format. The encoded format may include more bits than the unencoded version of the information, and the encoded format may allow the information to be over-written, at least once, without an intervening erase operation. The processor may be configured to perform garbage collection based, at least in part, upon the rewriteable state associated with the respective memory addresses.
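As a point of reference, a minimal Python sketch of the idea follows: a mapping table that records a per-address rewriteable state, and a garbage-collection step that consults that state. The three states, the class names, and the victim-selection policy (reclaim exhausted addresses first) are illustrative assumptions, not the claimed implementation.

```python
from enum import Enum

class RewriteState(Enum):
    ERASED = 0       # never written since the last erase
    REWRITEABLE = 1  # written once in encoded form; can be over-written once more
    EXHAUSTED = 2    # no further over-writes possible without an erase

class MappingTable:
    """Hypothetical mapping table: memory address -> rewriteable state."""
    def __init__(self, num_addresses):
        self.state = {addr: RewriteState.ERASED for addr in range(num_addresses)}

    def mark_written(self, addr):
        # The first write leaves the encoded page over-writeable; the next exhausts it.
        if self.state[addr] == RewriteState.ERASED:
            self.state[addr] = RewriteState.REWRITEABLE
        else:
            self.state[addr] = RewriteState.EXHAUSTED

    def mark_erased(self, addr):
        self.state[addr] = RewriteState.ERASED

def pick_gc_victims(table, limit):
    """Illustrative policy: reclaim EXHAUSTED addresses first, since
    REWRITEABLE ones can still absorb a write without an erase."""
    exhausted = [a for a, s in table.state.items() if s == RewriteState.EXHAUSTED]
    return exhausted[:limit]

table = MappingTable(num_addresses=8)
table.mark_written(3)
table.mark_written(3)                    # second write exhausts address 3
print(pick_gc_victims(table, limit=4))   # [3]
```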
Abstract:
According to one general aspect, a memory management unit (MMU) may be configured to interface with a heterogeneous memory system that comprises a plurality of types of storage mediums. Each type of storage medium may be based upon a respective memory technology and may be associated with one or more performance characteristics. The MMU may receive a data access, from a virtual machine, for the heterogeneous memory system. The MMU may also determine at least one target storage medium of the heterogeneous memory system to service the data access. The target storage medium may be selected based upon at least one performance characteristic associated with the target storage medium and a quality of service tag that is associated with the virtual machine and that indicates one or more performance characteristics. The MMU may route the data access by the virtual machine to the at least one target storage medium.
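The sketch below illustrates, under assumed names and metrics (read latency and bandwidth as the performance characteristics, and latency/bandwidth bounds as the quality of service tag), how such a routing decision might look; it is not the claimed MMU logic.

```python
from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    read_latency_us: float   # assumed performance characteristic
    bandwidth_mbps: float    # assumed performance characteristic

@dataclass
class QoSTag:
    """Hypothetical per-virtual-machine tag naming the performance it needs."""
    max_latency_us: float
    min_bandwidth_mbps: float

def route_access(mediums, qos):
    """Pick a medium whose characteristics satisfy the VM's QoS tag;
    fall back to the lowest-latency medium if none qualifies."""
    candidates = [m for m in mediums
                  if m.read_latency_us <= qos.max_latency_us
                  and m.bandwidth_mbps >= qos.min_bandwidth_mbps]
    pool = candidates or mediums
    return min(pool, key=lambda m: m.read_latency_us)

# Example: a heterogeneous system of DRAM, a PRAM-like medium, and NAND flash.
system = [Medium("DRAM", 0.1, 12800),
          Medium("PRAM", 1.0, 3200),
          Medium("NAND", 50.0, 500)]
print(route_access(system, QoSTag(max_latency_us=5.0, min_bandwidth_mbps=1000)).name)
```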
Abstract:
According to one general aspect, a computational memory may include memory cells configured to store data and a page table, wherein the page table maps, at least in part, a virtual address to a physical address. The computational memory may also include at least one processor-in-memory. Each processor-in-memory may be configured to: receive a request to execute an instruction utilizing a portion of the data stored by the memory cells, wherein the request includes the virtual address; request the physical address from a translator; and execute the instruction utilizing the physical address. The computational memory may further include the translator, which may be configured to, for each processor-in-memory, convert, by accessing the page table, a virtual address associated with a portion of the data to a physical address associated with that portion of the data.
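A toy Python sketch of the described arrangement follows; the page size, the instruction set, and the class names are assumptions made only to show a processor-in-memory asking a shared translator to resolve a virtual address through the page table.

```python
PAGE_SIZE = 4096  # assumed page size

class Translator:
    """Converts virtual to physical addresses by consulting the page table."""
    def __init__(self, page_table):
        self.page_table = page_table  # virtual page number -> physical page number

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        ppn = self.page_table[vpn]          # raises KeyError for an unmapped page
        return ppn * PAGE_SIZE + offset

class ProcessorInMemory:
    """Hypothetical PIM unit: executes a simple instruction on virtually
    addressed data after asking the translator for the physical address."""
    def __init__(self, memory_cells, translator):
        self.memory = memory_cells
        self.translator = translator

    def execute(self, instruction, vaddr, operand):
        paddr = self.translator.translate(vaddr)
        if instruction == "ADD":            # illustrative two-instruction set
            self.memory[paddr] += operand
        elif instruction == "STORE":
            self.memory[paddr] = operand
        return self.memory[paddr]

# Example: two pages of memory cells, virtual page 0 mapped to physical page 1.
cells = [0] * (2 * PAGE_SIZE)
pim = ProcessorInMemory(cells, Translator({0: 1}))
pim.execute("STORE", 8, 40)
print(pim.execute("ADD", 8, 2))   # 42, stored at physical address 4104
```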
Abstract:
According to one general aspect, an apparatus may include a host interface, a memory, a processor, and an erasure-based, non-volatile memory. The host interface may receive a write command, wherein the write command includes unencoded data. The memory may store a mapping table, wherein the mapping table indicates a rewriteable state of a plurality of memory addresses. The processor may select a memory address to store information included by the unencoded data based, at least in part, upon the rewriteable state of the memory address. The erasure-based, non-volatile memory may store, at the memory address, the information included by the unencoded data as encoded data, wherein the encoded data includes more bits than the unencoded data and wherein the encoded data can be over-written with a second unencoded data without an intervening erase operation.
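The abstract does not name a particular code, but a classic write-once-memory (WOM) code is one known way to realize such an encoded format; the sketch below uses the Rivest-Shamir 2-bits-into-3-cells scheme purely as an illustrative assumption of how encoded data can be over-written once without an erase.

```python
# First-generation codewords: 2 data bits -> 3 stored cells (classic Rivest-Shamir WOM code).
FIRST_GEN = {(0, 0): (0, 0, 0), (0, 1): (0, 0, 1),
             (1, 0): (0, 1, 0), (1, 1): (1, 0, 0)}
DECODE_FIRST = {v: k for k, v in FIRST_GEN.items()}

def decode(cells):
    """Recover the 2 data bits from the 3 cells."""
    if sum(cells) <= 1:                       # first-generation codeword
        return DECODE_FIRST[tuple(cells)]
    flipped = tuple(1 - c for c in cells)     # second generation stores the complement
    return DECODE_FIRST[flipped]

def write(cells, data):
    """Program 2 data bits into 3 cells; cells may only change 0 -> 1 (no erase).
    The first write uses a first-generation codeword, the over-write its complement."""
    if sum(cells) == 0:                       # fresh (erased) cells
        return list(FIRST_GEN[data])
    if decode(cells) == data:                 # value unchanged, nothing to program
        return list(cells)
    return [1 - c for c in FIRST_GEN[data]]   # complement codeword, only sets bits

cells = write([0, 0, 0], (0, 1))   # first write:  data 01 -> cells 001
cells = write(cells, (1, 0))       # over-write:   data 10 -> cells 101, no erase needed
print(decode(cells))               # (1, 0)
```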
Abstract:
A method for providing a memory translation layer includes: receiving write request streams from a host computer; and selectively storing each write request stream into one of a sequential zone, a K-associative zone, and a random zone of log blocks of a nonvolatile memory based on characteristics of the write request streams. A first group of the write request streams that are sequential and start from a header page of a log block are stored in the sequential zone. A second group of the write request streams that are sequential but do not start from a header page of a log block are stored in the K-associative zone. A third group of the write request streams that are random are stored in the random zone.
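A small Python sketch of the classification rule stated above follows; the log-block size and the definition of "sequential" (strictly consecutive logical pages) are assumptions for illustration.

```python
PAGES_PER_BLOCK = 64   # assumed log-block geometry

def is_sequential(pages):
    """A stream is treated as sequential if its logical pages are consecutive."""
    return all(b == a + 1 for a, b in zip(pages, pages[1:]))

def classify_stream(pages):
    """Route a write request stream to a zone of log blocks:
    sequential + header-aligned -> sequential zone,
    sequential but unaligned    -> K-associative zone,
    otherwise                   -> random zone."""
    if is_sequential(pages):
        starts_at_header = pages[0] % PAGES_PER_BLOCK == 0
        return "sequential" if starts_at_header else "k_associative"
    return "random"

print(classify_stream([128, 129, 130, 131]))  # sequential, header page -> 'sequential'
print(classify_stream([70, 71, 72, 73]))      # sequential, mid-block   -> 'k_associative'
print(classify_stream([5, 900, 42, 17]))      # random                  -> 'random'
```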
Abstract:
Inventive aspects include a high bandwidth peer-to-peer switched key-value system, method, and apparatus. The system can include a high bandwidth switch, multiple network interface cards communicatively coupled to the switch, one or more key-value caches to store a plurality of key-values, and one or more memory controllers communicatively coupled to the key-value caches and to the network interface cards. The memory controllers can include a key-value peer-to-peer logic section that can coordinate peer-to-peer communication between the memory controllers and the multiple network interface cards through the switch. The system can further include multiple transmission control protocol (TCP) offload engines that are each communicatively coupled to a corresponding one of the network interface cards. Each of the TCP offload engines can include a packet peer-to-peer logic section that can coordinate the peer-to-peer communication between the memory controllers and the network interface cards through the switch.
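The following toy Python sketch only illustrates the routing idea, with a hash-based stand-in for the switch and simplified controller objects; the class names and hashing scheme are assumptions, and the real system's NIC and TCP-offload data path is not modeled.

```python
import hashlib

class MemoryController:
    """Hypothetical memory controller fronting a key-value cache."""
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def put(self, key, value):
        self.cache[key] = value

    def get(self, key):
        return self.cache.get(key)

class Switch:
    """Toy stand-in for the high bandwidth switch: a NIC's peer-to-peer logic
    hashes the key and forwards the request straight to the owning controller."""
    def __init__(self, controllers):
        self.controllers = controllers

    def route(self, key):
        digest = hashlib.sha256(key.encode()).digest()
        return self.controllers[digest[0] % len(self.controllers)]

switch = Switch([MemoryController(f"mc{i}") for i in range(4)])
switch.route("user:42").put("user:42", b"profile-bytes")   # PUT lands peer-to-peer
print(switch.route("user:42").get("user:42"))              # GET reaches the same controller
```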
Abstract:
A method for migrating disks includes: dividing a disk pool including a plurality of disks into a random zone and a sequential zone based on the sequentiality and randomness of workloads running on the plurality of disks; monitoring a status of each disk in the disk pool based on a total cost of ownership (TCO); and migrating one or more workloads of an overheated disk to an idle disk based on the status of each disk. The overheated disk has a first TCO higher than a migration threshold, and the idle disk has a second TCO lower than an idling threshold.
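A compact Python sketch of the monitoring-and-migration step follows; the TCO values, thresholds, and one-workload-per-pass policy are illustrative assumptions rather than the claimed method.

```python
from dataclasses import dataclass, field

@dataclass
class Disk:
    name: str
    tco: float                      # monitored total-cost-of-ownership score
    workloads: list = field(default_factory=list)

def migrate(disks, migration_threshold, idling_threshold):
    """Move one workload from each overheated disk (TCO above the migration
    threshold) to the coolest idle disk (TCO below the idling threshold)."""
    idle = sorted((d for d in disks if d.tco < idling_threshold), key=lambda d: d.tco)
    moves = []
    for hot in (d for d in disks if d.tco > migration_threshold):
        if not idle or not hot.workloads:
            break
        target = idle[0]
        workload = hot.workloads.pop()
        target.workloads.append(workload)
        moves.append((workload, hot.name, target.name))
    return moves

pool = [Disk("d0", tco=9.2, workloads=["wl-a", "wl-b"]),
        Disk("d1", tco=1.3, workloads=[]),
        Disk("d2", tco=4.8, workloads=["wl-c"])]
print(migrate(pool, migration_threshold=8.0, idling_threshold=2.0))
# [('wl-b', 'd0', 'd1')]
```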