Abstract:
A resource interconnect architecture and an associated descriptor protocol provide more efficient communication between different resources in a data processing system. One embodiment uses a backdoor interconnect that allows some resources to communicate without using a central resource interconnect. Another embodiment uses nested descriptors that allow operations by different resources to be chained together without having to communicate back to an originating descriptor resource. In another embodiment, the descriptors are generated in hardware or in software. Other embodiments assign priority or privilege values to the descriptors that optimize processing and error-handling performance.
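The nested-descriptor idea can be illustrated with a minimal C sketch, assuming a simple chain in which each descriptor names the resource that should run the next step. The struct layout, the field names (resource_id, priority, privilege, nested), and the dispatch() stub are hypothetical and only show how chained operations could proceed without returning to the originating resource.

#include <stdint.h>
#include <stdio.h>

struct descriptor {
    uint8_t  resource_id;      /* resource that should run this step      */
    uint8_t  priority;         /* priority value used to order processing */
    uint8_t  privilege;        /* privilege value consulted on errors     */
    uint8_t  opcode;           /* operation requested of the resource     */
    struct descriptor *nested; /* next descriptor in the chain, or NULL   */
};

/* Stand-in for handing a descriptor to a resource over the central
 * resource interconnect (or, for some resource pairs, a backdoor
 * interconnect). */
static void dispatch(const struct descriptor *d)
{
    printf("resource %u runs op %u at priority %u\n",
           (unsigned)d->resource_id, (unsigned)d->opcode,
           (unsigned)d->priority);
}

int main(void)
{
    struct descriptor step2 = { .resource_id = 2, .priority = 1,
                                .opcode = 7, .nested = NULL };
    struct descriptor step1 = { .resource_id = 1, .priority = 1,
                                .opcode = 3, .nested = &step2 };

    /* Each resource forwards the nested descriptor directly to the next
     * resource; control never returns to the originating resource.      */
    for (const struct descriptor *d = &step1; d != NULL; d = d->nested)
        dispatch(d);
    return 0;
}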
Abstract:
Methods and apparatus are disclosed for performing lookup operations using associative memories, including, but not limited to, modifying search keys within an associative memory based on modification mappings, forcing a no-hit condition in response to a highest-priority matching entry that includes a force-no-hit indication, selecting among various sets or banks of associative memory entries in determining a lookup result, and detecting and propagating error conditions. In one implementation, each block retrieves a modification mapping from a local memory and modifies a received search key based on the mapping and received modification data. In one implementation, each associative memory entry includes a field indicating whether a successful match on the entry should force a no-hit result. In one implementation, an indication of which associative memory blocks or sets of entries to use in a particular lookup operation is retrieved from a memory.
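A minimal C sketch of the force-no-hit behavior follows, assuming ternary (value/mask) entries searched in priority order with the lowest index winning. The names cam_entry and cam_lookup, and the example table, are hypothetical and not taken from the source.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

struct cam_entry {
    uint32_t value;        /* pattern to compare against the search key     */
    uint32_t mask;         /* bits of the key that participate in the match */
    bool     force_no_hit; /* a match on this entry forces a no-hit result  */
    bool     valid;
};

/* Returns the index of the highest-priority (lowest-index) matching entry,
 * or -1 when there is no hit, including when that match forces a no-hit.  */
static int cam_lookup(const struct cam_entry *tbl, int n, uint32_t key)
{
    for (int i = 0; i < n; i++) {
        if (!tbl[i].valid)
            continue;
        if ((key & tbl[i].mask) == (tbl[i].value & tbl[i].mask))
            return tbl[i].force_no_hit ? -1 : i;
    }
    return -1;
}

int main(void)
{
    struct cam_entry tbl[] = {
        { 0x0A000000u, 0xFF000000u, true,  true }, /* 10/8: matches, forces no-hit */
        { 0x0A010000u, 0xFFFF0000u, false, true }, /* 10.1/16: would otherwise hit */
    };
    /* The key could first be rewritten using a modification mapping fetched
     * from local memory together with received modification data.           */
    uint32_t key = 0x0A010203u;
    printf("lookup result: %d\n", cam_lookup(tbl, 2, key)); /* prints -1 */
    return 0;
}

In this example the key matches both entries, but the highest-priority match carries the force-no-hit indication, so the lookup reports no hit rather than falling through to the lower-priority entry.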
Abstract:
Methods and apparatus are disclosed for generating and using an enhanced tree bitmap data structure in determining a longest prefix match, such as in a router or packet switching system. One implementation organizes the tree bitmap to minimize the number of internal nodes that must be accessed during a lookup operation. Each trie or search node includes a pointer to the best-match-so-far entry in the leaf or results array, which allows direct access to this result without having to parse a corresponding internal node. Moreover, one implementation stores the internal node for a particular level as the first element in its child array. Additionally, one implementation uses a general-purpose lookup engine that can traverse multiple tree bitmaps or other data structures simultaneously and can perform complete searches, partial searches, and resumption of partial searches, such as after receiving additional data on which to search.
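The role of the per-node best-match pointer can be sketched in C as follows, assuming 4-bit strides (16 children, with prefixes ending inside the stride tracked by an internal bitmap). The field names (ext_bitmap, int_bitmap, best_so_far) and the lpm_fallback helper are hypothetical, not taken from the source.

#include <stdint.h>
#include <stddef.h>

struct result_entry {
    uint32_t next_hop;                /* forwarding information for one prefix */
};

struct tbm_node {
    uint16_t ext_bitmap;              /* which of the 16 child strides exist    */
    uint16_t int_bitmap;              /* which prefixes end within this stride  */
    struct tbm_node *children;        /* child array; one implementation keeps
                                         the internal node as its first element */
    struct result_entry *results;     /* leaf/results array for this node       */
    struct result_entry *best_so_far; /* direct pointer to the best match seen
                                         so far, readable without parsing the
                                         corresponding internal node            */
};

/* On a miss deeper in the trie, the longest-prefix match is simply the
 * last non-NULL best_so_far pointer recorded along the search path.     */
static const struct result_entry *lpm_fallback(const struct tbm_node *last)
{
    return last ? last->best_so_far : NULL;
}

int main(void)
{
    struct result_entry default_route = { .next_hop = 1 };
    struct tbm_node root = { .best_so_far = &default_route };
    return lpm_fallback(&root) ? 0 : 1;
}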