Abstract:
A system includes a plurality of storage processing accelerators (SPAs), at least one SPA of the plurality of SPAs including a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs including n SPEs (n is a natural number greater than zero), where the 1st to (n−1)th SPEs of the n SPEs are each configured to provide their output to a next SPE of the n SPEs in a pipeline, to be used as an input of the next SPE; and an acceleration platform manager (APM) connected to the plurality of SPAs and the plurality of SPEs, and configured to control data processing in the plurality of SPAs and the plurality of SPEs.
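A minimal sketch of the pipelined arrangement this abstract describes, in Python; the class and method names (SPE, SPA, process, run) are illustrative assumptions, not from the source, and each SPE is modeled simply as a callable stage whose output becomes the next stage's input.

    from typing import Callable, List

    class SPE:
        """Hypothetical storage processing engine: one programmable pipeline stage."""
        def __init__(self, fn: Callable):
            self.fn = fn
        def process(self, data):
            return self.fn(data)

    class SPA:
        """Hypothetical storage processing accelerator holding n SPEs in a pipeline."""
        def __init__(self, spes: List[SPE]):
            self.spes = spes
        def run(self, data):
            # The 1st through (n-1)th SPEs each feed their output to the next
            # SPE as its input; the nth SPE produces the final result.
            for spe in self.spes:
                data = spe.process(data)
            return data

    # Example: a 3-stage pipeline (pass-through -> filter -> aggregate).
    spa = SPA([SPE(lambda d: d), SPE(lambda d: [x for x in d if x > 0]), SPE(sum)])
    print(spa.run([-1, 2, 3]))  # prints 5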
Abstract:
A method includes: receiving, at an acceleration platform manager (APM) from an application service manager (ASM), application function processing information; allocating, by the APM, a first storage processing accelerator (SPA) from a plurality of SPAs, wherein at least one SPA of the plurality of SPAs comprises a plurality of programmable processors or storage processing engines (SPEs), the plurality of SPEs comprising n SPEs; enabling the plurality of SPEs in the first SPA, wherein, once enabled, at least one SPE of the plurality of SPEs in the first SPA is configured to process data based on the application function processing information; determining, by the APM, whether data processing is completed by the at least one SPE of the plurality of SPEs in the first SPA; and sending, by the APM, a result of the data processing by the SPEs of the first SPA to the ASM.
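The claimed control flow, sketched under assumptions: the APM, SPA, and ASM objects and their methods (enable, done, result, receive) are hypothetical stand-ins for whatever interfaces the real system exposes.

    import time

    class MockSPA:
        """Illustrative SPA stub: enable() starts processing; done()/result() report it."""
        def enable(self, info):
            self._result = "processed: " + info
        def done(self):
            return True
        def result(self):
            return self._result

    class MockASM:
        """Illustrative application service manager stub."""
        def receive(self, result):
            print("ASM received:", result)

    class APM:
        """Sketch of the acceleration platform manager's control loop."""
        def __init__(self, spas):
            self.spas = list(spas)              # pool of available SPAs
        def handle_request(self, asm, processing_info):
            spa = self.spas.pop(0)              # allocate a first SPA from the pool
            spa.enable(processing_info)         # enabled SPEs process per the info
            while not spa.done():               # determine if processing completed
                time.sleep(0.01)
            asm.receive(spa.result())           # send the result back to the ASM

    APM([MockSPA()]).handle_request(MockASM(), "count-rows")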
Abstract:
A distributed storage system can include a storage node (125, 130, 135). The storage node (125, 130, 135) can include a Solid State Drive (SSD) or other storage device that employs garbage collection (140, 145, 150, 155, 160, 165, 225, 230), a device garbage collection monitor (205), a garbage collection coordinator (210), an Input/Output (I/O) redirector (215), and an I/O resynchronizer (220). The device garbage collection monitor (205) can determine whether any storage devices (140, 145, 150, 155, 160, 165, 225, 230) need to perform garbage collection. The garbage collection coordinator (210) can schedule when the storage device (140, 145, 150, 155, 160, 165, 225, 230) can perform garbage collection. The I/O redirector (215) can redirect read requests (905) and write requests (1005) away from the storage device (140, 145, 150, 155, 160, 165, 225, 230) when it is performing garbage collection. The I/O resynchronizer (220) can ensure that data on the storage device (140, 145, 150, 155, 160, 165, 225, 230) is up-to-date after garbage collection finishes.
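One way the four roles could fit together, as a rough sketch; the Device and GCController classes and their needs_gc/start_gc/missed_writes methods are invented for illustration, since the abstract names the roles but not their interfaces.

    class Device:
        """Illustrative storage device stub exposing a GC hint and missed writes."""
        def __init__(self, name, needs=False):
            self.name, self._needs, self._missed = name, needs, []
        def needs_gc(self):
            return self._needs
        def start_gc(self):
            self._needs = False
        def missed_writes(self):
            return list(self._missed)
        def apply(self, write):
            pass

    class GCController:
        """Sketch combining the four cooperating roles named in the abstract."""
        def __init__(self, devices):
            self.devices = devices
            self.in_gc = set()
        def monitor(self):
            # Device garbage collection monitor: which devices need GC?
            return [d for d in self.devices if d.needs_gc()]
        def schedule(self, device):
            # Garbage collection coordinator: decide when the device may collect.
            self.in_gc.add(device)
            device.start_gc()
        def route(self, request):
            # I/O redirector: steer reads/writes away from collecting devices.
            available = [d for d in self.devices if d not in self.in_gc]
            return available[0] if available else None
        def resync(self, device):
            # I/O resynchronizer: bring the device up-to-date after GC finishes.
            for write in device.missed_writes():
                device.apply(write)
            self.in_gc.discard(device)

    ctrl = GCController([Device("ssd-a", needs=True), Device("ssd-b")])
    for dev in ctrl.monitor():
        ctrl.schedule(dev)
    print("reads go to:", ctrl.route("read").name)  # ssd-b while ssd-a collects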
Abstract:
A computing system includes: a monitor block configured to calculate a total access time based on a device access time, a traffic latency, traffic information, or a combination thereof; a name node block, coupled to the monitor block, configured to determine a data location of a data content; and a scheduler block, coupled to the name node block, configured to distribute a task assignment based on the total access time, the data location, device performance criteria, or a combination thereof for accessing the data content from a target device.
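A hedged sketch of the scheduling decision, assuming total access time is simply the sum of device access time and traffic latency; the abstract only says the inputs may be combined, so the weighting, function names, and replica fields here are assumptions.

    def total_access_time(device_access_time, traffic_latency):
        # Monitor block's combination rule, assumed here to be a plain sum.
        return device_access_time + traffic_latency

    def schedule(task, replicas):
        """Pick the replica holding the data with the lowest total access time."""
        best = min(replicas, key=lambda r: total_access_time(r["access_ms"], r["net_ms"]))
        return {"task": task, "target": best["device"]}

    replicas = [
        {"device": "ssd-0", "access_ms": 0.1, "net_ms": 2.0},
        {"device": "hdd-1", "access_ms": 8.0, "net_ms": 0.2},
    ]
    print(schedule("read-block-42", replicas))  # routes to ssd-0 (2.1 ms total)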
Abstract:
A system and method for leveraging a native operating system page cache when using non-block system storage devices is disclosed. A computer may include a processor, memory, and a non-block system storage device. A file system, which may include a page cache, may be stored in memory and run on the processor. A key-value file system (KVFS) may reside between the file system and the storage device and may map received file system commands to key-value system commands that may be executed by the storage device. Results of the key-value system commands may be returned to the file system, permitting the operating system to cache data in the page cache.
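A minimal sketch of a KVFS-style shim, assuming file-system reads and writes can be mapped onto a key-value device's get/put with the file path serving as the key; the KVStore interface is hypothetical, not the actual device API.

    class KVStore:
        """Stand-in for a key-value storage device."""
        def __init__(self):
            self._data = {}
        def put(self, key, value):
            self._data[key] = value
        def get(self, key):
            return self._data[key]

    class KVFS:
        """Maps received file-system commands to key-value commands."""
        def __init__(self, store):
            self.store = store
        def write(self, path, data):
            self.store.put(path, data)     # file write -> key-value PUT
            return len(data)
        def read(self, path):
            # File read -> key-value GET; the result returns through the file
            # system, so the operating system can still cache it in the page cache.
            return self.store.get(path)

    fs = KVFS(KVStore())
    fs.write("/tmp/example.txt", b"hello")
    print(fs.read("/tmp/example.txt"))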
Abstract:
A method of managing a database, the method including determining whether a deterministic threshold has been reached, determining whether a random threshold has been reached, and initiating a maintenance process on the database when either the deterministic threshold or the random threshold has been reached.
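A small sketch of the dual trigger, assuming the deterministic threshold is an operation count and the random threshold is a per-operation probability; both choices, and the MaintenanceTrigger name, are illustrative assumptions.

    import random

    class MaintenanceTrigger:
        def __init__(self, op_limit=10_000, random_prob=1e-4):
            self.op_limit = op_limit        # deterministic threshold (assumed: op count)
            self.random_prob = random_prob  # random threshold (assumed: probability)
            self.ops = 0

        def record_op(self):
            self.ops += 1
            deterministic = self.ops >= self.op_limit
            randomized = random.random() < self.random_prob
            if deterministic or randomized:
                self.ops = 0
                self.run_maintenance()

        def run_maintenance(self):
            print("initiating maintenance process")

    trigger = MaintenanceTrigger(op_limit=5)
    for _ in range(5):
        trigger.record_op()  # fires on the 5th op via the deterministic threshold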
Abstract:
According to one general aspect, a device may include a host interface circuit configured to communicate with a host device via a data protocol that employs data messages. The device may include a storage element configured to store data in response to a data message. The host interface circuit may be configured to detect when a tunneling command is embedded within the data message; extract tunneled message address information from the data message; retrieve, via the tunneled message address information, a tunneled message stored in a memory of the host device; and route the tunneled message to an on-board processor and/or data processing logic. The on-board processor and/or data processing logic may be configured to execute one or more instructions in response to the tunneled message.
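A sketch of the tunneling path, assuming the tunnel marker and address travel in the data message and that host memory can be modeled as a dictionary; the field names and the HOST_MEMORY stand-in are invented, since the abstract does not specify an encoding.

    # Fake host RAM: address -> tunneled message the device will fetch.
    HOST_MEMORY = {0x1000: {"op": "checksum", "range": (0, 4096)}}

    def handle_data_message(msg):
        if msg.get("tunnel"):                   # detect embedded tunneling command
            addr = msg["tunnel_addr"]           # extract tunneled message address info
            tunneled = HOST_MEMORY[addr]        # retrieve message from host memory
            return run_on_board(tunneled)       # route to on-board processor/logic
        return store(msg["payload"])            # ordinary data message path

    def run_on_board(cmd):
        print("on-board processor executing:", cmd["op"])

    def store(payload):
        print("storing", len(payload), "bytes")

    handle_data_message({"tunnel": True, "tunnel_addr": 0x1000})
    handle_data_message({"payload": b"abcd"})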