Abstract:
A multi-device storage system can be arranged into power saving systems by placing one or more storage devices into a reduced power consuming state when the storage activity associated with the system is sufficiently reduced that an attendant decrease in throughput will not materially affect users of the storage system. Where data redundancy is provided for, a redundant storage device can be placed into the reduced power consuming state and its redundancy responsibilities can be transitioned to a partition of a larger storage device. Such transitions can be based on specific parameters, such as write cycles or latency, crossing thresholds, including upper and lower thresholds; they can also be based on pre-set times, or on a combination thereof. Lifecycle information, including lifecycle information collected in real-time by storage devices on a block-by-block basis, can be utilized to obtain historical empirical data from which to select the pre-set times.
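For illustration only, the following is a minimal C sketch of the threshold-based transition decision described above, using hysteresis between lower and upper thresholds; the parameter names and threshold values are assumptions, not taken from the abstract.

```c
/* Sketch of the threshold-based power-state transition. Thresholds and
 * field names are illustrative; a real system would derive them from the
 * block-level lifecycle history mentioned in the abstract. */
#include <stdbool.h>
#include <stdio.h>

struct activity_sample {
    double writes_per_sec;   /* recent write rate against the redundant device */
    double avg_latency_ms;   /* recent average request latency */
};

/* Hysteresis: enter low power only below the lower thresholds, leave it
 * only above the upper thresholds, so the device does not oscillate. */
static bool should_be_low_power(const struct activity_sample *s,
                                bool currently_low_power)
{
    const double WRITE_LOW = 5.0,  WRITE_HIGH = 50.0;   /* hypothetical */
    const double LAT_LOW   = 2.0,  LAT_HIGH   = 20.0;   /* hypothetical */

    if (!currently_low_power)
        return s->writes_per_sec < WRITE_LOW && s->avg_latency_ms < LAT_LOW;
    /* already in low power: stay unless a parameter crosses its upper threshold */
    return !(s->writes_per_sec > WRITE_HIGH || s->avg_latency_ms > LAT_HIGH);
}

int main(void)
{
    struct activity_sample quiet = { 1.2, 0.8 };
    printf("enter low power: %d\n", should_be_low_power(&quiet, false));
    return 0;
}
```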
Abstract:
A secure container can comprise a security server, one or more container servers, and one or more sensors that can detect a breach of the physically secure computing environment provided by the container. A management server external to the container can be informed when the container is sealed and authorized and can subsequently provide a cryptographic key enabling the security server in the container to boot. Each container server can request and receive a cryptographic key from the security server enabling it to boot. If the container is breached, such keys can be withheld and any computing device that is powered off, or restarted, will be unable to complete a subsequent boot. If the container loses a support system and is degraded, then so long as the security server does not lose power, it can still provide the cryptographic keys to container servers restarted after the degradation is removed.
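A possible shape of the key-release decision made by the in-container security server is sketched below in C; the structure fields and function name are assumptions for illustration, since the abstract does not specify an API.

```c
/* Illustrative key-release check: keys are only handed to container
 * servers while the container is sealed, authorized, and unbreached. */
#include <stdbool.h>
#include <stdio.h>

struct container_state {
    bool sealed_and_authorized;  /* reported to the external management server */
    bool breach_detected;        /* any physical-breach sensor has tripped     */
    bool security_server_booted; /* management server released its key         */
};

/* Return the boot key for a container server, or NULL if it must be
 * withheld. A breach withholds keys, so any server that is powered off or
 * restarted cannot complete a subsequent boot. */
static const unsigned char *release_boot_key(const struct container_state *c,
                                             const unsigned char *key)
{
    if (!c->security_server_booted)   /* security server has no key itself yet */
        return NULL;
    if (c->breach_detected)           /* breach: withhold all keys */
        return NULL;
    if (!c->sealed_and_authorized)
        return NULL;
    return key;                       /* degraded-but-unbreached: keys still flow
                                         while the security server keeps power */
}

int main(void)
{
    static const unsigned char key[16] = { 0 };
    struct container_state ok = { true, false, true };
    struct container_state breached = { true, true, true };
    printf("sealed:   key %s\n", release_boot_key(&ok, key)       ? "released" : "withheld");
    printf("breached: key %s\n", release_boot_key(&breached, key) ? "released" : "withheld");
    return 0;
}
```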
Abstract:
A storage system can comprise storage devices having storage media with differing characteristics. An eviction handler can receive information regarding the state of storage media or of data stored thereon, as well as information regarding application or operating system usage, or expected usage, of data, or information regarding policy, including user-selected policy. Such information can be utilized by the eviction handler to optimize the use of the storage system by evicting data from storage media, including evicting data in order to perform maintenance on, or replace, such storage media, and evicting data to make room for other data, such as data copied to such storage media to facilitate pre-fetching or implement policy. The eviction handler can be implemented by any one or more of processes executing on a computing device, control circuitry of any one or more of the storage devices, or intermediate storage-centric devices.
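The weighting an eviction handler might apply to this information can be sketched as follows; the fields, weights, and scoring scheme are illustrative assumptions rather than anything specified by the abstract.

```c
/* Toy eviction scoring: combine media state, expected reuse, and policy
 * pins into a single score; higher scores are evicted first. */
#include <stdio.h>

struct cached_extent {
    double media_wear;       /* 0..1, state of the media holding the copy    */
    double expected_reuse;   /* 0..1, application/OS hint of future accesses */
    int    pinned_by_policy; /* user-selected policy forbids eviction        */
};

static double eviction_score(const struct cached_extent *e)
{
    if (e->pinned_by_policy)
        return -1.0;                      /* never evict pinned data */
    /* prefer evicting cold data on worn media, freeing room for
     * pre-fetch copies or pending maintenance/replacement */
    return (1.0 - e->expected_reuse) * 0.7 + e->media_wear * 0.3;
}

int main(void)
{
    struct cached_extent cold = { 0.8, 0.1, 0 }, hot = { 0.2, 0.9, 0 };
    printf("cold=%.2f hot=%.2f\n", eviction_score(&cold), eviction_score(&hot));
    return 0;
}
```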
Abstract:
The method may query the disk drive for its size, where the size may be the total number of logical blocks on the disk drive. The method may then receive a size response from the drive, where the size includes the total number of logical blocks on the disk drive. The number of usage blocks necessary to represent the number of logical blocks on the disk drive may then be determined and usage data may be stored in the usage blocks. The data may be stored in the buffer of the disk drive. The data may also be stored in the DDF of a RAID drive. The data may be used to permit incremental backups of disk drives by backing up only the blocks that are indicated as having been changed. In addition, information about the access to the drive may be collected and stored for later analysis.
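One way to perform the "number of usage blocks" calculation is shown below; it assumes, purely for illustration, a bitmap with one usage bit per logical block and a 512-byte usage block, neither of which is fixed by the abstract.

```c
/* Sketch: how many usage blocks are needed to track N logical blocks,
 * assuming one bit per logical block and 512-byte usage blocks. */
#include <stdint.h>
#include <stdio.h>

#define USAGE_BLOCK_BYTES    512u                        /* assumed size      */
#define BITS_PER_USAGE_BLOCK (USAGE_BLOCK_BYTES * 8u)    /* one bit per LBA   */

static uint64_t usage_blocks_needed(uint64_t total_logical_blocks)
{
    /* round up so a final partial bitmap still gets a whole usage block */
    return (total_logical_blocks + BITS_PER_USAGE_BLOCK - 1)
           / BITS_PER_USAGE_BLOCK;
}

int main(void)
{
    uint64_t lbas = 976773168ULL;   /* e.g. the size reported by a 500 GB drive */
    printf("%llu usage blocks\n",
           (unsigned long long)usage_blocks_needed(lbas));
    return 0;
}
```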
Abstract:
A heterogeneous and scalable bridge capable of translating a plurality of network protocols is adapted for coupling to a network switch fabric. The bridge uses at least one egress buffer interface and can perform port aggregation and bandwidth matching for various different port standards. The bridge is adapted for both networking and storage area networking protocols. A control unit implemented with the bridge is able to identify control and flow information from different protocols and adapt it to the respective interface to which it is to be transmitted.
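A very rough sketch of the control unit's classify-and-dispatch step is given below; the protocol names, frame layout, and aggregation rule are illustrative assumptions and not part of the abstract.

```c
/* Hypothetical dispatch: classify a frame's protocol and select the
 * egress buffer interface for its destination port, folding several
 * external ports onto fewer fabric ports (a crude form of aggregation). */
#include <stdint.h>
#include <stdio.h>

enum proto { PROTO_ETHERNET, PROTO_FIBRE_CHANNEL, PROTO_UNKNOWN };

struct frame {
    enum proto proto;      /* identified by inspecting control/flow fields */
    uint16_t   dest_port;  /* logical port behind the switch fabric        */
    uint32_t   len;
};

static uint16_t egress_queue_for(const struct frame *f, uint16_t fabric_ports)
{
    /* map many external ports onto the available fabric ports */
    return f->dest_port % fabric_ports;
}

int main(void)
{
    struct frame f = { PROTO_FIBRE_CHANNEL, 9, 2112 };
    printf("frame (proto %d) -> egress queue %u\n",
           (int)f.proto, egress_queue_for(&f, 4));
    return 0;
}
```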
Abstract:
In a computer system, a host-to-I/O bridge (e.g., a host-to-PCI-X bridge) includes core logic that interfaces with at least two host buses for coupling one or more central processing units and the bridge, and interfaces with at least two PCI-X buses for coupling at least two PCI-X devices and the bridge. The core logic includes at least two PCI-X configuration registers adapted to have bits set to partition the core logic for resource allocation. Embodiments of the present invention allow the system to program the logical bus zero to behave as a single logical bus zero or to be partitioned into two or more separate “bus zero” PCI-X buses with their own configuration resources. Partitioning allows performance and functional isolation and transparent I/O sharing.
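A simplified model of such a partitioning register is sketched below; the register layout and bit assignments are assumptions made for illustration, not the actual PCI-X configuration register format.

```c
/* Illustrative partitioning register: one enable bit per PCI-X bus
 * segment selects whether that segment is folded into a single logical
 * bus zero or exposed as its own "bus zero" with private resources. */
#include <stdint.h>
#include <stdio.h>

#define PARTITION_ENABLE_BIT(seg)  (1u << (seg))

struct pcix_core_logic {
    uint32_t partition_reg;   /* assumed: one enable bit per bus segment */
};

static int separate_bus_zero_count(const struct pcix_core_logic *cl,
                                   unsigned num_segments)
{
    int count = 0;
    for (unsigned s = 0; s < num_segments; s++)
        if (cl->partition_reg & PARTITION_ENABLE_BIT(s))
            count++;              /* this segment gets its own configuration resources */
    return count;
}

int main(void)
{
    struct pcix_core_logic cl = { .partition_reg = 0x3 };   /* both segments split */
    printf("%d partitioned bus-zero segments\n", separate_bus_zero_count(&cl, 2));
    return 0;
}
```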
Abstract:
A core logic chip set in a computer system provides a bridge between processor host and memory buses and a plurality of Accelerated Graphics Port (AGP) buses. Each of the plurality of AGP buses has the same logical bus number. The core logic chip set has an arbiter having Request (“REQ”) and Grant (“GNT”) signal lines for each AGP device connected to the plurality of AGP physical buses. Each of the plurality of AGP buses has its own read and write queues to provide transaction concurrency of AGP devices on different ones of the plurality of AGP buses when the transaction addresses are not the same or are M byte aligned. Upper and lower memory address range registers store upper and lower memory addresses associated with each AGP device. Whenever a transaction occurs, the transaction address is compared with the stored range of memory addresses. If a match between addresses is found, then strong ordering is used. If no match is found, then weak ordering may be used to improve transaction latency times. AGP device to AGP device transactions may occur without being starved by CPU host bus to AGP bus transactions.
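The range-compare step that selects strong or weak ordering can be illustrated with a small C sketch; the register structure and example addresses are assumptions for illustration.

```c
/* Sketch of the ordering decision: compare a transaction address with a
 * device's stored upper/lower address range; a hit forces strong
 * ordering, a miss allows weak ordering to reduce latency. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct agp_range_regs {
    uint64_t lower;   /* lower memory address register for this AGP device */
    uint64_t upper;   /* upper memory address register for this AGP device */
};

enum ordering { ORDER_STRONG, ORDER_WEAK };

static enum ordering ordering_for(uint64_t addr, const struct agp_range_regs *r)
{
    bool match = addr >= r->lower && addr <= r->upper;
    return match ? ORDER_STRONG : ORDER_WEAK;
}

int main(void)
{
    struct agp_range_regs dev0 = { 0x10000000, 0x1FFFFFFF };   /* example range */
    printf("hit  -> %s\n", ordering_for(0x10001000, &dev0) == ORDER_STRONG ? "strong" : "weak");
    printf("miss -> %s\n", ordering_for(0x30000000, &dev0) == ORDER_STRONG ? "strong" : "weak");
    return 0;
}
```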
Abstract:
A cache system and method in accordance with the invention includes a cache near the target devices and another cache at the requesting host side so that the data traffic across the computer network is reduced. A cache updating and invalidation method is described.
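A toy sketch of the two cache locations and the invalidation step follows; the direct-mapped structure and promotion/invalidation behavior are illustrative assumptions, not the patented method itself.

```c
/* Sketch: serve reads from the host-side cache when possible, otherwise
 * from the target-side cache (promoting the block across the network
 * once), and invalidate the host-side copy when the block is written. */
#include <stdbool.h>
#include <stdio.h>

#define CACHE_SLOTS 8

struct cache {
    long key[CACHE_SLOTS];
    int  value[CACHE_SLOTS];
    bool valid[CACHE_SLOTS];
};

static bool cache_get(struct cache *c, long key, int *out)
{
    unsigned slot = (unsigned)key % CACHE_SLOTS;
    if (c->valid[slot] && c->key[slot] == key) { *out = c->value[slot]; return true; }
    return false;
}

static void cache_put(struct cache *c, long key, int value)
{
    unsigned slot = (unsigned)key % CACHE_SLOTS;
    c->key[slot] = key; c->value[slot] = value; c->valid[slot] = true;
}

static void cache_invalidate(struct cache *c, long key)
{
    unsigned slot = (unsigned)key % CACHE_SLOTS;
    if (c->key[slot] == key) c->valid[slot] = false;
}

int main(void)
{
    struct cache host = {0}, target = {0};
    int v;
    cache_put(&target, 42, 7);          /* block cached near the target device */
    if (!cache_get(&host, 42, &v) && cache_get(&target, 42, &v))
        cache_put(&host, 42, v);        /* promote to the host-side cache */
    cache_invalidate(&host, 42);        /* a write to block 42 invalidates host copy */
    printf("host hit after invalidate: %d\n", cache_get(&host, 42, &v));
    return 0;
}
```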
Abstract:
A memory controller capable of supporting heterogeneous memory configurations enables seamless communications between a bus and memory modules having different characteristics. Thus, owners of computer systems need no longer replace entire memory arrays to take advantage of new memory modules; some memory modules may be upgraded to a new type while other memory modules of an older type remain. The memory controller receives memory requests from multiple processors and bus masters, identifies a memory module and memory access parameters for each request, accesses the memory, and returns the resulting data (during a read request) or stores the data (during a write request). In some systems, the memory controller of the present invention is a two-tier memory controller system having a first memory controller coupled to the bus and to a second tier of memory controllers, or RAM personality modules, that translate between the first memory controller and a particular type of memory module. Typically, a protocol representative of a typical clocked synchronous dynamic random access memory (SDRAM) is used between the tiers, although another protocol could be used. From the perspective of the processor bus or host bus coupled to the front end of the first memory controller, the entire memory controller system behaves as a single memory controller. From the perspective of memory, the back end of the RAM personality module is seen as a memory controller designed specifically to be configured for that memory type. Consequently, although the front end of the RAM personality module can be standardized across the system and made compatible with the back end of the first memory controller, in most embodiments of the present invention the back end of the RAM personality module differs among the controller modules in the second tier, according to the variety of the memory modules in the memory system.
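The two-tier arrangement can be modeled, very loosely, as a first-tier dispatcher with a standardized front-end interface and per-module back ends; the type names, address ranges, and back-end behavior below are illustrative assumptions only.

```c
/* Sketch: the first-tier controller presents a single memory controller
 * to the host bus and forwards each request, over a standardized front
 * end, to the RAM personality module owning that address; the back end
 * of each module is specific to its memory type. */
#include <stdint.h>
#include <stdio.h>

struct mem_request { uint64_t addr; int is_write; uint32_t data; };

struct personality_module {
    const char *memory_type;                           /* e.g. SDRAM vs. older type */
    uint64_t    base, size;                            /* address range served      */
    uint32_t  (*backend_access)(const struct mem_request *);  /* type-specific back end */
};

static uint32_t sdram_backend(const struct mem_request *r) { (void)r; return 0xA5; }
static uint32_t edo_backend(const struct mem_request *r)   { (void)r; return 0x5A; }

static uint32_t first_tier_dispatch(struct personality_module *mods, int n,
                                    const struct mem_request *req)
{
    for (int i = 0; i < n; i++)
        if (req->addr >= mods[i].base && req->addr < mods[i].base + mods[i].size)
            return mods[i].backend_access(req);   /* translated for that memory type */
    return 0;
}

int main(void)
{
    struct personality_module mods[] = {
        { "SDRAM", 0x00000000, 0x10000000, sdram_backend },
        { "EDO",   0x10000000, 0x08000000, edo_backend   },
    };
    struct mem_request req = { 0x12340000, 0, 0 };
    printf("read -> 0x%X\n", first_tier_dispatch(mods, 2, &req));
    return 0;
}
```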
Abstract:
A computer system having at least one central processing unit, system memory, and a core logic capable of accepting an AGP bus is provided with an AGP to AGP bridge connected to the standard AGP bus. The AGP to AGP bridge can accommodate two or more AGP-compatible devices that can be accessed through the standard AGP bus via the AGP to AGP bridge. A PCI to memory bridge is also provided within the AGP to AGP bridge so that PCI devices may be connected to the AGP to AGP bridge. The AGP to AGP bridge is fitted with an overall flow control logic that controls the transfer of data to or from the various AGP devices and the standard AGP bus that is connected to the core logic of the computer system. The AGP to AGP bridge can utilize a standard 32-bit AGP bus as well as dual 32-bit buses to enhance bandwidth. In an alternate embodiment of the invention, the dual 32-bit buses can be combined to form a single 64-bit bus to increase the available bandwidth. Alternate embodiments of the AGP to AGP bridge can accommodate the single 64-bit AGP bus for increased performance. Another alternate embodiment can accommodate peer-to-peer transfer of data between AGP buses on the bridge.
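A toy sketch of the bridge's flow control follows; the credit scheme, structure names, and the way the dual 32-bit buses are represented are all illustrative assumptions rather than the patented design.

```c
/* Sketch: forward a downstream request toward the core logic only when
 * upstream buffer credit is available; the dual 32-bit buses can act as
 * two independent paths or as one combined 64-bit path. */
#include <stdbool.h>
#include <stdio.h>

struct upstream_path { int credits; };   /* assumed buffer space toward core logic */

struct agp_bridge {
    struct upstream_path bus32[2];       /* dual 32-bit AGP buses */
    bool combined_64bit;                 /* alternate embodiment: one 64-bit bus */
};

static bool forward_upstream(struct agp_bridge *b, int preferred_bus)
{
    if (b->combined_64bit)
        preferred_bus = 0;               /* a single wide path carries everything */
    struct upstream_path *p = &b->bus32[preferred_bus];
    if (p->credits == 0)
        return false;                    /* hold the request in the bridge queue */
    p->credits--;                        /* consume a buffer slot and forward */
    return true;
}

int main(void)
{
    struct agp_bridge b = { { {2}, {2} }, false };
    for (int i = 0; i < 3; i++)
        printf("request %d forwarded: %d\n", i, forward_upstream(&b, 0));
    return 0;
}
```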