Abstract:
A content-aware application processing system is provided for allowing directed access to data stored in a non-cache memory, thereby bypassing cache-coherent memory. The processor includes a system interface to cache-coherent memory and a low-latency memory interface to a non-cache-coherent memory. The system interface directs memory accesses for ordinary load/store instructions executed by the processor to the cache-coherent memory. The low-latency memory interface directs memory accesses for non-ordinary load/store instructions executed by the processor to the non-cache memory, thereby bypassing the cache-coherent memory. The non-ordinary load/store instruction can be a coprocessor instruction. The memory can be a low-latency memory. The processor can include a plurality of processor cores.
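A minimal C sketch of the dispatch this abstract describes, treating the two memory paths as function calls; the names (issue_load, coherent_read, low_latency_read) and the enum are illustrative assumptions, not the patented interface:

```c
#include <stdint.h>
#include <stdio.h>

/* Instruction classes: the abstract distinguishes ordinary load/stores
 * from non-ordinary (e.g., coprocessor) load/stores. */
typedef enum { ORDINARY_LOADSTORE, COPROCESSOR_LOADSTORE } insn_class_t;

/* Stand-ins for the two memory paths. */
static uint64_t coherent_read(uint64_t addr)
{
    printf("cache-coherent read @ %#llx\n", (unsigned long long)addr);
    return 0;
}
static uint64_t low_latency_read(uint64_t addr)
{
    printf("low-latency read    @ %#llx\n", (unsigned long long)addr);
    return 0;
}

/* The dispatch: ordinary accesses use the cache-coherent system
 * interface; coprocessor accesses bypass it via the low-latency
 * memory interface. */
uint64_t issue_load(insn_class_t cls, uint64_t addr)
{
    return (cls == ORDINARY_LOADSTORE) ? coherent_read(addr)
                                       : low_latency_read(addr);
}

int main(void)
{
    issue_load(ORDINARY_LOADSTORE,    0x1000);  /* goes through the caches */
    issue_load(COPROCESSOR_LOADSTORE, 0x2000);  /* bypasses the caches     */
    return 0;
}
```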
Abstract:
Methods and apparatus are provided for selectively replicating a data structure in a low-latency memory. The memory includes multiple individual memory banks configured to store replicated copies of the same data structure. Upon receiving a request to access the stored data structure, a low-latency memory access controller selects one of the memory banks, then accesses the stored data from the selected memory bank. Selection of a memory bank can be accomplished using a thermometer technique comparing the relative availability of the different memory banks. Exemplary data structures that benefit from the resulting efficiencies include deterministic finite automata (DFA) graphs and other data structures that are loaded (i.e., read) more often than they are stored (i.e., written).
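A minimal C sketch of the bank-selection idea, assuming availability is tracked as an outstanding-request count per bank; select_bank and its representation are illustrative, standing in for the patent's thermometer comparison:

```c
#include <stdio.h>
#include <stddef.h>

#define NUM_BANKS 8

/* Outstanding-request count per bank, a stand-in for the relative
 * availability the thermometer technique compares. */
static unsigned bank_load[NUM_BANKS];

/* Among the banks holding a replica of the requested structure,
 * choose the one that is currently least busy. */
int select_bank(const int replicas[], size_t n)
{
    int best = replicas[0];
    for (size_t i = 1; i < n; i++)
        if (bank_load[replicas[i]] < bank_load[best])
            best = replicas[i];
    bank_load[best]++;   /* account for the access being issued */
    return best;
}

int main(void)
{
    int replicas[] = { 0, 2, 5 };  /* banks holding copies of one DFA graph */
    bank_load[0] = 3; bank_load[2] = 1; bank_load[5] = 4;
    printf("read from bank %d\n", select_bank(replicas, 3));  /* bank 2 */
    return 0;
}
```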
Abstract:
A computer-readable instruction is described for traversing deterministic finite automata (DFA) graphs to perform a pattern search in incoming packet data in real time. The instruction includes one or more pre-defined fields. One of the fields includes a DFA graph identifier for identifying one of several previously stored DFA graphs. Another one of the fields includes an input reference for identifying input data to be processed using the identified DFA graph. Yet another one of the fields includes an output reference for storing results generated responsive to the processed input data. The instruction is forwarded to a DFA engine adapted to process the input data using the identified DFA graph and to provide results as instructed by the output reference.
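A sketch of the instruction's fields as a C struct; the names, widths, and the explicit length field are assumptions, since the abstract only enumerates a graph identifier, an input reference, and an output reference:

```c
#include <stdint.h>

/* Illustrative layout of the fields the abstract enumerates; the
 * packing is not the patented encoding. */
typedef struct {
    uint16_t graph_id;    /* selects one of the previously stored DFA graphs */
    uint64_t input_ref;   /* where the packet bytes to be scanned live       */
    uint32_t input_len;   /* assumed length field for the input data         */
    uint64_t output_ref;  /* where the DFA engine writes its results         */
} dfa_instruction_t;
```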
Abstract:
A processor for traversing deterministic finite automata (DFA) graphs with incoming packet data in real time. The processor includes at least one processor core and a DFA module operating asynchronously to the at least one processor core for traversing at least one DFA graph stored in a non-cache memory with packet data stored in a cache-coherent memory.
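A toy C version of the traversal loop such a DFA module performs, here run as ordinary host code over a two-state graph; in the patented design the graph table would reside in non-cache memory and the loop would execute asynchronously to the cores:

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Two-state toy graph: state 1 means "saw the byte 'A' at least once". */
static uint16_t next_state[2][256];

uint16_t dfa_traverse(uint16_t state, const uint8_t *pkt, size_t len)
{
    for (size_t i = 0; i < len; i++)
        state = next_state[state][pkt[i]];  /* one graph lookup per byte */
    return state;
}

int main(void)
{
    next_state[0]['A'] = 1;          /* state 0 -> 1 on 'A'   */
    for (int b = 0; b < 256; b++)
        next_state[1][b] = 1;        /* state 1 is absorbing  */
    const uint8_t pkt[] = "xyzAxyz";
    printf("final state: %u\n", dfa_traverse(0, pkt, 7));  /* prints 1 */
    return 0;
}
```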
Abstract:
In one embodiment, a system includes memory ports distributed into subsets identified by a subset index, where each memory port has an individual wait time based on its respective workload. The system further comprises a first address hashing unit configured to receive a read request that includes a virtual memory address associated with a replication factor and refers to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware-based memory address referring to graph data in the memory ports within the subset indicated by the corresponding subset index. The system further comprises a memory replication controller configured to direct read requests for the hardware-based address to the memory port, within the indicated subset, that has the lowest individual wait time.
Abstract:
In one embodiment, a system comprises multiple memory ports distributed into multiple subsets, each subset identified by a subset index and each memory port having an individual wait time. The system further comprises a first address hashing unit configured to receive a read request that includes a virtual memory address associated with a replication factor and refers to graph data. The first address hashing unit translates the replication factor into a corresponding subset index based on the virtual memory address, and converts the virtual memory address to a hardware-based memory address that refers to graph data in the memory ports within the subset indicated by the corresponding subset index. The system further comprises a memory replication controller configured to direct read requests for the hardware-based address to the memory port, within the indicated subset, that has the lowest individual wait time.
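The two abstracts above describe the same mechanism. A minimal C sketch under stated assumptions: subset_index stands in for the address hashing unit (the actual hash is unspecified), and pick_port for the replication controller's lowest-wait-time selection:

```c
#include <stdint.h>

#define NUM_SUBSETS      4   /* assumption: 16 ports in 4 subsets */
#define PORTS_PER_SUBSET 4

/* Per-port wait times (e.g., queue depths); illustrative state. */
static unsigned wait_time[NUM_SUBSETS][PORTS_PER_SUBSET];

/* Address hashing unit: a replication factor R means the graph data
 * is replicated across R subsets, so address bits spread reads over
 * those R copies.  The particular hash is an assumption; the abstracts
 * only say the subset index is derived from the replication factor
 * and the virtual address. */
unsigned subset_index(uint64_t vaddr, unsigned repl_factor)
{
    return (unsigned)(vaddr >> 6) % repl_factor;
}

/* Memory replication controller: within the chosen subset, steer the
 * read to the port with the lowest individual wait time. */
unsigned pick_port(unsigned subset)
{
    unsigned best = 0;
    for (unsigned p = 1; p < PORTS_PER_SUBSET; p++)
        if (wait_time[subset][p] < wait_time[subset][best])
            best = p;
    wait_time[subset][best]++;   /* the new read adds to that port's load */
    return best;
}
```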
Abstract:
A method and apparatus for optimizing IPsec processing by providing execution units with windowing data during prefetch and by managing the coherency of security association data through ordered security association accesses. Providing execution units with windowing data allows initial parallel processing of IPsec packets. The security association access-ordering apparatus serializes access to the dynamic section of security association data according to packet arrival order, while otherwise allowing parallel processing of the IPsec packet by multiple execution units in a security processor.
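A minimal C11 sketch of one way to serialize the dynamic security-association section in packet arrival order while leaving the rest parallel: a ticket taken at arrival gates updates to the dynamic data. The ticket scheme is an assumed stand-in for the patent's access-ordering apparatus:

```c
#include <stdatomic.h>

/* Per-security-association ordering state; field names are illustrative. */
typedef struct {
    atomic_uint next_ticket;   /* assigned as each packet arrives        */
    atomic_uint now_serving;   /* advanced when dynamic data is updated  */
} sa_order_t;

/* At packet arrival: record this packet's place in line.  The static
 * SA fields (keys, algorithm parameters) need no ordering, so the
 * execution units can begin crypto work on many packets in parallel. */
unsigned sa_take_ticket(sa_order_t *o)
{
    return atomic_fetch_add(&o->next_ticket, 1);
}

/* Before touching the dynamic SA section (sequence number, anti-replay
 * window): wait until every earlier arrival has finished with it. */
void sa_wait_turn(sa_order_t *o, unsigned ticket)
{
    while (atomic_load(&o->now_serving) != ticket)
        ;  /* spin: an earlier packet still holds the dynamic section */
}

void sa_release(sa_order_t *o)
{
    atomic_fetch_add(&o->now_serving, 1);
}
```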
Abstract:
An efficient system and method for managing reads and writes on a bi-directional bus to optimize bus performance while avoiding bus contention and read/write starvation. In particular, by intelligently managing reads and writes on a bi-directional bus, bus latency can be reduced while still ensuring no bus contention or read/write starvation. This is accomplished by utilizing bus streaming control logic, separate queues for reads and writes, and a simple 2-to-1 mux.
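A minimal C sketch of the direction-selection decision, assuming the streaming control is approximated by a burst cap (MAX_STREAK) that keeps one direction flowing while preventing starvation of the other queue:

```c
#include <stdbool.h>

#define MAX_STREAK 8   /* assumed cap on back-to-back same-direction transfers */

typedef enum { DIR_READ, DIR_WRITE } bus_dir_t;

/* The 2-to-1 mux decision: pick which queue drives the bus next.
 * Streaming in one direction avoids turnaround cycles; the streak cap
 * guarantees the other queue is eventually served. */
bus_dir_t select_direction(bus_dir_t cur, unsigned streak,
                           bool reads_pending, bool writes_pending)
{
    bool cur_pending   = (cur == DIR_READ) ? reads_pending  : writes_pending;
    bool other_pending = (cur == DIR_READ) ? writes_pending : reads_pending;

    if (cur_pending && (streak < MAX_STREAK || !other_pending))
        return cur;                                      /* keep streaming */
    if (other_pending)
        return (cur == DIR_READ) ? DIR_WRITE : DIR_READ; /* turn the bus   */
    return cur;                                          /* idle: hold dir */
}
```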
Abstract:
A system and method is disclosed for tracking a large number of open pages in a computer memory system. The computer system contains one or more processors, each including a memory controller containing a page table; the page table is organized into a plurality of rows, each row able to store the address of an open memory page. A RIMM module containing RDRAM devices is coupled to each processor, with each RDRAM containing a plurality of memory banks. The page table increases system memory performance by tracking a large number of open memory pages. Associated with the page table is a bank active table that indicates which memory banks in each RDRAM device have open memory pages. When a system memory access misses because the address of a different open memory page occupies the same row of the page table, the page table enqueues the access to the RIMM module in a precharge queue. When an access misses because the corresponding row of the page table contains no open memory page address, the page table enqueues the access in a row-address-select ("RAS") queue. Accesses that hit open memory pages are enqueued in a column-address-select ("CAS") queue. An entry in the precharge queue is subsequently enqueued into the RAS queue, and a completed entry in the RAS queue is enqueued into the CAS read or CAS write queue.
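A minimal C sketch of the queue classification the abstract walks through, assuming a direct-mapped page-table row per page address; the table size and mapping are illustrative:

```c
#include <stdint.h>
#include <stdbool.h>

#define PT_ROWS 512   /* assumption: illustrative page-table size */

typedef struct { bool valid; uint32_t page_addr; } pt_row_t;
static pt_row_t page_table[PT_ROWS];

typedef enum { Q_CAS, Q_RAS, Q_PRECHARGE } queue_t;

/* Classify an access: a hit on an open page goes to the CAS queue; a
 * miss on an empty row goes to the RAS queue; a miss against a
 * different open page in the same row must precharge first (and later
 * moves precharge -> RAS -> CAS read/write). */
queue_t classify(uint32_t page_addr)
{
    pt_row_t *row = &page_table[page_addr % PT_ROWS];
    if (row->valid && row->page_addr == page_addr)
        return Q_CAS;        /* page hit                             */
    if (!row->valid)
        return Q_RAS;        /* page miss, row free: activate        */
    return Q_PRECHARGE;      /* page miss, row conflict: close first */
}
```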
Abstract:
In one embodiment, a system comprises a memory and a memory controller that provides a cache access path to the memory and a bypass-cache access path to the memory, receives requests to read graph data from the memory on the bypass-cache access path, and receives requests to read non-graph data from the memory on the cache access path. A method comprises receiving a request at a memory controller to read graph data from a memory on a bypass-cache access path, receiving a request at the memory controller to read non-graph data from the memory through a cache access path, and arbitrating, in the memory controller, among the requests.
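A minimal C sketch of the two-path controller, assuming a simple alternating (round-robin) policy for the arbitration, which the abstract leaves unspecified:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

typedef enum { PATH_CACHE, PATH_BYPASS } path_t;
typedef struct { path_t path; uint64_t addr; } mem_req_t;

/* Graph reads arrive on the bypass-cache path; non-graph reads come
 * through the cache path.  Either argument may be NULL when that path
 * has no pending request. */
static bool prefer_bypass;

const mem_req_t *arbitrate(const mem_req_t *cache_req,
                           const mem_req_t *bypass_req)
{
    if (cache_req && bypass_req) {
        const mem_req_t *pick = prefer_bypass ? bypass_req : cache_req;
        prefer_bypass = !prefer_bypass;   /* alternate paths for fairness */
        return pick;
    }
    return cache_req ? cache_req : bypass_req;  /* only one path pending */
}
```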