Abstract:
A write-through cache scheme is provided. A store data command is sent from a processing unit to a cache line of a cache array. It is then determined whether the address of the store data is valid, that is, whether the original data at the store address has previously been loaded into the cache. A write-through command is sent to a system bus as a function of whether the address of the store data is valid. A bus controller is employed to sense the write-through command. If the write-through command is sensed, the bus controller generates a clean command, the store data is written into the cache array and marked as modified, and the clean command is sent onto the system bus by the bus controller, thereby causing the modified data to be written to memory.
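For concreteness, the following is a minimal C++ sketch of the sequence described above. The CacheLine and BusController types, the 128-byte line size, the tag arithmetic, and the function names are illustrative assumptions rather than terms from the patent.

    #include <cstddef>
    #include <cstdint>

    // Illustrative cache line: 128-byte lines, so the tag drops the low 7 address bits.
    struct CacheLine {
        uint64_t tag       = 0;
        bool     valid     = false;    // original data was previously loaded
        bool     modified  = false;
        uint8_t  data[128] = {};
    };

    struct BusController {
        // On sensing a write-through command, the bus controller emits a clean
        // command on the system bus, which causes modified data to be written
        // to memory (modeled here as a stub).
        void send_clean(uint64_t /*address*/) { /* place clean command on the bus */ }
    };

    // Store path: write through only when the store address is valid,
    // i.e. the line holding it was previously loaded into the cache.
    void store_write_through(CacheLine& line, BusController& bus,
                             uint64_t address, const uint8_t* bytes,
                             size_t offset, size_t len) {
        bool address_valid = line.valid && line.tag == (address >> 7);
        if (!address_valid)
            return;                          // no write-through command issued

        // Write the store data into the cache array and mark it modified.
        for (size_t i = 0; i < len && offset + i < sizeof line.data; ++i)
            line.data[offset + i] = bytes[i];
        line.modified = true;

        // The write-through command reaches the system bus; the bus controller
        // senses it and sends the clean command, causing the modified data to
        // be written to memory.
        bus.send_clean(address);
    }

The point being illustrated is that a write-through command is raised only when the store hits previously loaded data, and that it is the bus controller, on sensing that command, that issues the clean command forcing the modified data to memory.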
Abstract:
The present invention provides for a cache-accessing system employing a binary tree with decision nodes. A cache comprising a plurality of sets is provided. A locking or streaming replacement strategy is employed for individual sets of the cache. A replacement management table is also provided. The replacement management table is employable for managing a replacement policy of information associated with the plurality of sets. A pseudo least recently used function is employed to determine the least recently used set of the cache, for purposes such as set replacement. An override signal line is also provided. The override signal is employable to enable an overwrite of a decision node of the binary tree. A value signal is also provided. The value signal is employable to overwrite the decision node of the binary tree.
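As a concrete illustration of a binary decision tree with an override, here is a small C++ sketch of a common tree pseudo-LRU arrangement over eight candidates. The tree depth, node numbering, and all names are assumptions; the abstract does not specify the encoding of the decision nodes or of the override and value signals.

    #include <cstdint>

    // Eight-candidate example: 7 decision nodes, node 0 at the root, children
    // of node i at 2*i+1 and 2*i+2. A node value of 0 steers left, 1 steers right.
    struct PlruTree {
        uint8_t node[7] = {0, 0, 0, 0, 0, 0, 0};
    };

    // Models the override signal line and value signal: force one decision
    // node to a supplied value, e.g. to steer replacement away from entries
    // reserved by a locking or streaming policy.
    void override_node(PlruTree& tree, int node_index, uint8_t value) {
        tree.node[node_index] = value & 1u;
    }

    // Walk the tree from the root to find the pseudo least recently used
    // candidate, flipping each visited node so it points away from the path
    // just taken.
    int select_victim(PlruTree& tree) {
        int idx = 0;
        int way = 0;
        for (int level = 0; level < 3; ++level) {
            int bit = tree.node[idx];
            tree.node[idx] = static_cast<uint8_t>(bit ^ 1);
            way = (way << 1) | bit;
            idx = 2 * idx + 1 + bit;
        }
        return way;   // 0..7, the replacement candidate
    }

Here override_node stands in for the override signal line and value signal: a policy drawn from the replacement management table can pin a decision node so that select_victim is steered away from protected entries.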
Abstract:
The present invention provides a method and apparatus for creating memory barriers in a Direct Memory Access (DMA) device. A memory barrier command is received and a memory command is received. The memory command is executed based on the memory barrier command. A bus operation is initiated based on the memory barrier command. A bus operation acknowledgment is received based on the bus operation. The memory barrier command is executed based on the bus operation acknowledgment. In a particular aspect, memory barrier commands are direct memory access sync (dmasync) and direct memory access enforce in-order execution of input/output (dmaeieio) commands.
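One way to picture the flow is as a small command queue, sketched below in C++. The DmaEngine type and its members are assumptions introduced for illustration, and dmasync and dmaeieio are modeled identically here for brevity, although in practice dmaeieio would order a narrower class of (input/output) accesses.

    #include <deque>

    enum class CmdKind { Get, Put, DmaSync, DmaEieio };

    struct DmaCmd {
        CmdKind kind = CmdKind::Get;
        // address, size, tag, ... omitted
    };

    struct DmaEngine {
        std::deque<DmaCmd> queue;
        bool barrier_pending = false;   // barrier bus operation still outstanding

        void issue_bus_sync() { /* initiate the bus operation for the barrier */ }
        void execute(const DmaCmd&) { /* perform an ordinary memory command */ }

        // Process the command at the head of the queue, if allowed.
        void step() {
            if (queue.empty() || barrier_pending)
                return;                           // younger commands wait for the ack
            DmaCmd cmd = queue.front();
            queue.pop_front();
            if (cmd.kind == CmdKind::DmaSync || cmd.kind == CmdKind::DmaEieio) {
                issue_bus_sync();                 // bus operation initiated for the barrier
                barrier_pending = true;           // hold later memory commands
            } else {
                execute(cmd);                     // memory command executed in order
            }
        }

        // Bus operation acknowledgment received: the barrier command is now
        // considered executed and queued memory commands may proceed.
        void on_bus_ack() { barrier_pending = false; }
    };

A barrier at the head of the queue initiates a bus operation and blocks younger memory commands; the acknowledgment of that bus operation is what completes the barrier and releases them.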
Abstract:
A method, an apparatus, and a computer program are provided for controlling memory access. Direct Memory Access (DMA) units have become commonplace in a number of bus architectures. However, managing limited system resources has become a challenge with multiple DMA units. In order to manage the multitude of commands generated and to preserve dependencies, embedded flags in commands or a barrier command are used. These operations can then control the order in which commands are executed so as to preserve dependencies.
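A short C++ sketch of how such ordering controls might be evaluated is given below. The tag-group interpretation of the embedded flag and all structure and field names are assumptions; the abstract states only that embedded flags or a barrier command are used to preserve dependencies.

    #include <cstdint>
    #include <vector>

    struct DmaCommand {
        uint32_t tag     = 0;       // dependency group the command belongs to
        bool     fence   = false;   // embedded flag: order after older commands with the same tag
        bool     barrier = false;   // barrier command: order after all older commands
    };

    // Decide whether a command may be issued, given the older commands
    // that are still outstanding ahead of it.
    bool may_issue(const DmaCommand& cmd,
                   const std::vector<DmaCommand>& older_outstanding) {
        if (cmd.barrier)
            return older_outstanding.empty();        // everything older must finish first
        if (cmd.fence) {
            for (const DmaCommand& older : older_outstanding)
                if (older.tag == cmd.tag)
                    return false;                    // same-tag dependency not yet satisfied
        }
        return true;                                 // no ordering constraint
    }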
Abstract:
The present invention provides for managing an atomic facility cache write back state machine. A first write back selection is made. A reservation pointer pointing to the reserved line in the atomic facility data array is established. A next write back selection is made. The entry for the reservation pointer is removed from the next write back selection, whereby the valid reservation line is precluded from being selected for the write back. This prevents a modified command from being invalidated.
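The exclusion of the reserved line can be sketched in a few lines of C++. The array size, field names, and selection policy here are illustrative assumptions only.

    constexpr int NUM_LINES = 4;   // lines in the atomic facility data array (illustrative)

    struct AtomicFacility {
        bool modified[NUM_LINES] = {};
        int  reservation_line    = -1;   // reservation pointer; -1 means no valid reservation
    };

    // Select a line to write back, skipping the line the reservation pointer
    // identifies, so the valid reservation line is never chosen and therefore
    // never invalidated by the write back.
    int select_write_back(const AtomicFacility& facility) {
        for (int line = 0; line < NUM_LINES; ++line) {
            if (line == facility.reservation_line)
                continue;                 // entry removed from the selection
            if (facility.modified[line])
                return line;              // eligible modified line
        }
        return -1;                        // nothing eligible to write back
    }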
Abstract:
Disclosed is a coherent cache system that operates in conjunction with non-homogeneous processing units. A set of processing units of a first configuration has conventional caches and directly accesses common or shared system physical and virtual address memory through the use of a conventional MMU (Memory Management Unit). Additional processors of a different configuration, and/or other devices that need to access system memory, are configured to store accessed data in compatible caches. Each of the caches is compatible with a coherent memory management bus, operating under a given protocol, that is interposed between the caches and the system memory.
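One way to express the common requirement on all of the caches is as a shared bus-protocol interface, sketched below in C++. The interface, its methods, and the response values are assumptions for illustration only.

    #include <cstdint>

    enum class SnoopResponse { Miss, SharedHit, ModifiedHit };

    // Protocol every attached cache is expected to answer on the shared
    // coherent bus, regardless of what sits behind the cache.
    struct CoherentCacheInterface {
        virtual SnoopResponse snoop(uint64_t address) = 0;   // respond to a bus snoop
        virtual void write_back(uint64_t address) = 0;       // push modified data to memory
        virtual ~CoherentCacheInterface() = default;
    };

A cache attached to a first-configuration processor and a cache attached to a different-configuration processor or device would both implement such an interface, which is what keeps their accesses to shared system memory coherent.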
Abstract:
The present invention provides a method for a processor to write data to a cache or other fast memory, without also writing it to main memory. Further, the data is “locked” into the cache or other fast memory until it is loaded for use. Data remains in the locking cache until it is specifically overwritten under software control. The locking cache or other fast memory can be used as additional system memory. In an embodiment of the invention, the locking cache is one or more sets of ways, but not all of the sets or ways, of a multiple set associative cache.
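A minimal C++ sketch of the replacement-side effect of locking is shown below. The mask encoding, the eight-way figure, and the names are assumptions; the abstract says only that one or more, but not all, of the sets or ways serve as the locking cache.

    constexpr int WAYS = 8;   // illustrative associativity

    struct SetState {
        unsigned lock_mask = 0;   // bit w set => way w is locked under software control
    };

    // Hardware replacement only ever chooses among the unlocked ways, so data
    // written into a locked way stays resident until software explicitly
    // overwrites it, and the locked ways can be treated as extra fast memory.
    int pick_replacement_way(const SetState& set) {
        for (int way = 0; way < WAYS; ++way)
            if (((set.lock_mask >> way) & 1u) == 0)
                return way;            // simplified policy: first unlocked way
        return -1;                     // every way locked: no eviction possible
    }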
Abstract:
The present invention provides for atomic update primitives in an asymmetric single-chip heterogeneous multiprocessor computer system having a shared memory with DMA transfers. At least one lock line command is generated from a set comprising a get lock line command with reservation, a put lock line conditional command, and a put lock line unconditional command.
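The following C++ sketch shows how the three lock line commands can combine into an atomic read-modify-write. The function names, the 128-byte line size, and the single-threaded in-memory stand-ins for the DMA commands are assumptions introduced for illustration; only the get-with-reservation, put-conditional, and put-unconditional operations come from the abstract.

    #include <cstdint>
    #include <cstring>

    constexpr int LOCK_LINE_BYTES = 128;      // assumed lock line size

    // Single-threaded stand-in for the three lock line commands, just to make
    // the usage pattern concrete; a real implementation would issue DMA
    // transfers and track the reservation in hardware.
    static uint8_t shared_line[LOCK_LINE_BYTES];
    static bool    reservation_valid = false;

    void get_lock_line_reserve(void* buf) {           // get lock line with reservation
        std::memcpy(buf, shared_line, LOCK_LINE_BYTES);
        reservation_valid = true;
    }
    bool put_lock_line_cond(const void* buf) {        // put lock line conditional
        if (!reservation_valid)
            return false;                             // reservation lost: store fails
        std::memcpy(shared_line, buf, LOCK_LINE_BYTES);
        reservation_valid = false;
        return true;
    }
    void put_lock_line_uncond(const void* buf) {      // put lock line unconditional
        std::memcpy(shared_line, buf, LOCK_LINE_BYTES);
        reservation_valid = false;
    }

    // Atomic update built from get-with-reservation plus put-conditional:
    // increment a 64-bit counter held at the start of the lock line.
    void atomic_increment() {
        uint8_t line[LOCK_LINE_BYTES];
        do {
            get_lock_line_reserve(line);
            uint64_t value;
            std::memcpy(&value, line, sizeof value);
            ++value;
            std::memcpy(line, &value, sizeof value);
        } while (!put_lock_line_cond(line));          // retry if the reservation was lost
    }

The conditional put succeeds only while the reservation is still held, so the loop retries the update whenever the lock line has been touched in between.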
Abstract:
Disclosed is an apparatus for controlling or managing the transmission of data packets over a multiplexed communication path, referred to herein as a bus, on a priority basis up to a given authorized BW (Bandwidth), in a given operational time period, for presently authorized devices or applications. Non-managed (not presently authorized) bus requests are handled on a prior-art “best effort” basis.
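As an illustration, the C++ sketch below charges managed requests against a per-period byte allocation and pushes everything else to best effort. The accounting scheme, the byte granularity, and all names are assumptions; the abstract does not specify how the authorized bandwidth is tracked.

    #include <cstdint>
    #include <map>

    struct Allocation {
        uint32_t authorized_bytes = 0;   // bandwidth allowed per operational time period
        uint32_t used_bytes       = 0;   // bandwidth consumed in the current period
    };

    struct BusBandwidthManager {
        std::map<int, Allocation> managed;   // keyed by authorized device/application id

        // Start of a new operational time period: usage counters reset.
        void new_period() {
            for (auto& entry : managed)
                entry.second.used_bytes = 0;
        }

        // True: grant the transfer at the managed (priority) level.
        // False: the requester is not presently authorized, or its allocation
        // for this period is exhausted, so the request falls back to
        // best-effort arbitration.
        bool grant_managed(int requester_id, uint32_t bytes) {
            auto it = managed.find(requester_id);
            if (it == managed.end())
                return false;
            Allocation& alloc = it->second;
            if (alloc.used_bytes + bytes > alloc.authorized_bytes)
                return false;
            alloc.used_bytes += bytes;
            return true;
        }
    };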