Abstract:
Systems and methods for maintaining an order of read and write transactions for each source through a bridge in a bus fabric. The bridge provides a connection from a first bus to a second bus within the bus fabric. The first bus has a single path for read and write transactions and the second bus has separate paths for read and write transactions. The bridge maintains a pair of counters for each source in a SoC to track the numbers of outstanding read and write transactions. The bridge prevents a read transaction from being forwarded to the second bus if the corresponding write counter is non-zero, and the bridge prevents a write transaction from being forwarded to the second bus if the corresponding read counter is non-zero.
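Below is a minimal software sketch of the ordering rule described above, not the patented hardware itself; names such as SourceId and BridgeOrdering are illustrative assumptions. It shows per-source read/write counters and the two gating checks the bridge applies before forwarding a transaction to the split-path bus.

```cpp
// Sketch (assumed names): per-source ordering counters for a bridge between a
// combined-path bus and a bus with separate read and write paths.
#include <cstdint>
#include <unordered_map>

using SourceId = uint32_t;

class BridgeOrdering {
public:
    // A read may be forwarded only if the source has no outstanding writes.
    bool can_forward_read(SourceId src) const { return counters(src).writes == 0; }
    // A write may be forwarded only if the source has no outstanding reads.
    bool can_forward_write(SourceId src) const { return counters(src).reads == 0; }

    void on_read_forwarded(SourceId src)  { counters_[src].reads++; }
    void on_write_forwarded(SourceId src) { counters_[src].writes++; }
    void on_read_response(SourceId src)   { counters_[src].reads--; }
    void on_write_response(SourceId src)  { counters_[src].writes--; }

private:
    struct Pair { uint32_t reads = 0; uint32_t writes = 0; };
    Pair counters(SourceId src) const {
        auto it = counters_.find(src);
        return it == counters_.end() ? Pair{} : it->second;
    }
    std::unordered_map<SourceId, Pair> counters_;  // one counter pair per source
};
```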
Abstract:
An apparatus and associated method/processing unit are provided for utilizing a memory subsystem including NAND flash memory and dynamic random access memory. Further included is a first circuit for receiving DDR signals and converting the DDR signals to SATA signals. The first circuit includes embedded dynamic random access memory. Also provided is a second circuit for receiving the SATA signals and converting the SATA signals to NAND flash signals. The second circuit is communicatively coupled to the first circuit via a first memory bus associated with a SATA protocol, to the NAND flash memory via a second memory bus associated with a NAND flash protocol, and to the dynamic random access memory. In operation, data is fetched during a time between executions of a plurality of threads.
Abstract:
A method and system are provided for processing a reply message out of order from a first-in-first-out (FIFO) storage, while processing other messages in the order in which they were received in the FIFO storage. The system provides a second FIFO storage for storing any messages that have been retrieved from the first FIFO storage while searching for the reply message.
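A minimal sketch of this two-FIFO technique follows; it is an assumed software model, not the patented system. Messages popped from the primary FIFO while searching for the reply are parked in a secondary FIFO, and ordinary consumption drains the secondary FIFO first so the original order is preserved.

```cpp
// Sketch (illustrative names): out-of-order retrieval of a reply from one FIFO,
// with a second FIFO preserving the order of the other messages.
#include <optional>
#include <queue>
#include <string>

struct Message {
    bool is_reply = false;
    std::string payload;
};

class ReplyAwareFifo {
public:
    void push(const Message& m) { primary_.push(m); }

    // Drain the primary FIFO looking for a reply; non-reply messages are moved
    // to the secondary FIFO in their arrival order.
    std::optional<Message> find_reply() {
        while (!primary_.empty()) {
            Message m = primary_.front();
            primary_.pop();
            if (m.is_reply) return m;
            secondary_.push(m);
        }
        return std::nullopt;
    }

    // Ordinary messages are consumed in received order: parked ones first.
    std::optional<Message> next_in_order() {
        auto& q = secondary_.empty() ? primary_ : secondary_;
        if (q.empty()) return std::nullopt;
        Message m = q.front();
        q.pop();
        return m;
    }

private:
    std::queue<Message> primary_;
    std::queue<Message> secondary_;
};
```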
Abstract:
A message channel optimization method and system enable multi-flow access to the message channel infrastructure within a CPU of a processor-based system. A user (pcode) employs a virtual channel to submit message channel transactions, with the message channel driver processing the transactions “behind the scenes”. The message channel driver thus allows the user to continue processing without blocking other transactions from being processed. Each transaction will be processed, either immediately or at some future time, by the message channel driver. The message channel optimization method and system are useful for tasks involving message channel transactions as well as non-message channel transactions.
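The sketch below models one way such non-blocking submission could look in software; the queue-backed virtual channel, the driver class, and its method names are assumptions for illustration rather than the patented driver. Submission returns immediately, and the driver sends queued transactions either at once or on a later poll or completion event.

```cpp
// Sketch (assumed model): a software "virtual channel" queue in front of a single
// hardware message channel; submit() never blocks the caller.
#include <queue>

struct McTransaction { unsigned opcode; unsigned payload; };

class MessageChannelDriver {
public:
    // Non-blocking submission: the caller (e.g. pcode) continues immediately.
    void submit(const McTransaction& t) {
        pending_.push(t);
        poll();  // opportunistically start it now if the channel is idle
    }

    // Called periodically or from a completion handler.
    void poll() {
        if (!pending_.empty() && hw_channel_idle_) {
            hw_channel_idle_ = false;
            send_to_hardware(pending_.front());
            pending_.pop();
        }
    }

    void on_hardware_complete() { hw_channel_idle_ = true; poll(); }

private:
    void send_to_hardware(const McTransaction&) { /* write to the real message channel */ }
    std::queue<McTransaction> pending_;   // the virtual channel
    bool hw_channel_idle_ = true;
};
```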
Abstract:
A dual-host system and method with back-to-back non-transparent bridges and a proxy packet generating mechanism. The proxy packet generating mechanism enables the hosts to send interrupt-generating packets to each other.
Abstract:
A distributed interconnect bus apparatus for connecting peripheral devices. The apparatus can be utilized to wirelessly connect peripheral devices or to allow the connectivity of such devices over a network. The apparatus includes a first bridge coupled to a root component of an interconnect bus, and a second bridge coupled to an endpoint component of an interconnect bus. The apparatus may further include an acknowledgment (ACK) termination for generating at least an ACK signal, and a flow control mechanism including at least one receiver buffer for temporarily saving data packets of multiple different transactions.
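As a rough software model of the ACK termination and buffer-based flow control described above (an assumption-laden sketch, not the apparatus itself), a bridge can acknowledge packets locally only while its bounded receiver buffer has room, and forward them over the network or wireless link at its own pace.

```cpp
// Sketch (illustrative): local ACK termination plus a bounded receiver buffer
// holding packets from multiple transactions.
#include <cstddef>
#include <deque>
#include <vector>

struct Packet { unsigned transaction_id; std::vector<unsigned char> data; };

class BridgeReceiver {
public:
    explicit BridgeReceiver(std::size_t capacity) : capacity_(capacity) {}

    // Returns true (and generates a local ACK) only if the packet could be buffered;
    // returning false signals the sender to hold off, providing flow control.
    bool on_packet(const Packet& p) {
        if (buffer_.size() >= capacity_) return false;   // no room: apply back-pressure
        buffer_.push_back(p);
        send_local_ack(p.transaction_id);                // ACK terminated at the bridge
        return true;
    }

    // Forward buffered packets over the network/wireless link when it is ready.
    bool pop_for_forwarding(Packet& out) {
        if (buffer_.empty()) return false;
        out = buffer_.front();
        buffer_.pop_front();
        return true;
    }

private:
    void send_local_ack(unsigned /*transaction_id*/) { /* emit ACK toward the local endpoint */ }
    std::size_t capacity_;
    std::deque<Packet> buffer_;
};
```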
Abstract:
Methods and structure for emulating wide ports at an expander are provided. An exemplary system includes a Serial Attached Small Computer System Interface (SAS) expander. The expander includes a plurality of physical links, and a controller. The controller is able to identify a physical link coupled with a device, to generate a plurality of virtual physical links that are configured as a virtual wide port coupled with the device, and to present the virtual wide port at the expander in place of the physical link.
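One way to picture the controller's bookkeeping is the table sketched below; the structures and the phy-numbering scheme are purely hypothetical illustrations of mapping several virtual phys, grouped as one virtual wide port, onto a single physical link.

```cpp
// Sketch (assumed structures): a virtual wide port built on top of one physical link.
#include <cstdint>
#include <vector>

struct VirtualPhy {
    uint8_t  virtual_phy_id;     // phy number reported to the initiator (scheme assumed)
    uint64_t attached_sas_addr;  // same attached device address for every member
};

struct VirtualWidePort {
    uint8_t                 physical_phy_id;  // the one real link to the device
    std::vector<VirtualPhy> members;          // virtual phys advertised in its place
};

// Build a virtual wide port of the requested width on top of one physical link.
VirtualWidePort make_virtual_wide_port(uint8_t phys_phy, uint64_t device_addr, unsigned width) {
    VirtualWidePort port{phys_phy, {}};
    for (unsigned i = 0; i < width; ++i) {
        port.members.push_back(
            VirtualPhy{static_cast<uint8_t>(phys_phy * 8 + i), device_addr});
    }
    return port;
}
```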
Abstract:
A unified request queue includes multiple entries for servicing multiple types of requests. Each of the entries of the unified request queue is generally allocable to requests of any of the multiple request types. A number of entries in the unified request queue is reserved for a first request type among the multiple types of requests. The number of entries reserved for the first request type is dynamically varied based on a number of requests of the first request type rejected by the unified request queue due to allocation of entries in the unified request queue to other requests.
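A compact sketch of this allocation policy appears below; it assumes a simple counter-based model and a grow-only reservation rule with an arbitrary threshold, chosen only to illustrate reserving entries for the first request type in response to its rejections.

```cpp
// Sketch (assumed policy): a unified queue whose entries may serve any request type,
// with a reservation for one "first" type that grows as that type is rejected.
#include <cstddef>

class UnifiedRequestQueue {
public:
    explicit UnifiedRequestQueue(std::size_t total_entries) : total_(total_entries) {}

    // A request of the first (protected) type may use any free entry.
    bool try_allocate_first_type() {
        if (used_ < total_) { ++used_; return true; }
        ++rejected_first_type_;
        maybe_grow_reservation();
        return false;
    }

    // Other request types must leave the reserved entries free.
    bool try_allocate_other_type() {
        if (used_ + reserved_ < total_) { ++used_; return true; }
        return false;
    }

    void release() { if (used_ > 0) --used_; }

private:
    // Illustrative policy: reserve one more entry per threshold of rejections.
    void maybe_grow_reservation() {
        const std::size_t kThreshold = 4;  // arbitrary example value
        if (rejected_first_type_ >= kThreshold && reserved_ < total_) {
            ++reserved_;
            rejected_first_type_ = 0;
        }
    }

    std::size_t total_;
    std::size_t used_ = 0;
    std::size_t reserved_ = 0;
    std::size_t rejected_first_type_ = 0;
};
```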