Abstract:
A sliced graphics processing unit (GPU) architecture in processor-based devices is disclosed. In some aspects, a GPU based on a sliced GPU architecture includes multiple hardware slices. The GPU further includes a command processor (CP) circuit and an unslice primitive controller (PC_US). Upon receiving a graphics instruction from a central processing unit (CPU), the CP circuit determines a graphics workload and transmits the graphics workload to the PC_US. The PC_US then partitions the graphics workload into multiple subbatches and distributes each subbatch to a per-slice primitive controller (PC_S) of a hardware slice for processing.
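The partition-and-distribute step described above can be modeled in software. The following C++ sketch is illustrative only: the names (Subbatch, distribute, subbatch_size) and the round-robin assignment to slices are assumptions for illustration, not details taken from the disclosure.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Subbatch {
    std::size_t first_primitive;  // index of first primitive in this subbatch
    std::size_t count;            // number of primitives in this subbatch
};

// Partition a workload of `total_primitives` into subbatches of at most
// `subbatch_size` primitives and assign them round-robin to hardware slices,
// one list per per-slice primitive controller (PC_S).
std::vector<std::vector<Subbatch>> distribute(std::size_t total_primitives,
                                              std::size_t subbatch_size,
                                              std::size_t num_slices) {
    std::vector<std::vector<Subbatch>> per_slice(num_slices);
    std::size_t slice = 0;
    for (std::size_t start = 0; start < total_primitives; start += subbatch_size) {
        std::size_t count = std::min(subbatch_size, total_primitives - start);
        per_slice[slice].push_back({start, count});
        slice = (slice + 1) % num_slices;  // advance to the next slice
    }
    return per_slice;
}
```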
Abstract:
A cache memory may receive, from a client, a request for a long cache line of data. The cache memory may receive, from a memory, the requested long cache line of data. The cache memory may store the requested long cache line of data into a plurality of data stores across a plurality of memory banks as a plurality of short cache lines of data distributed across the plurality of data stores in the cache memory. The cache memory may also store a plurality of tags associated with the plurality of short cache lines of data into one of a plurality of tag stores in the plurality of memory banks.
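A minimal sketch of the addressing involved in splitting one long cache line across banks follows. The line sizes, bank count, and the choice to keep all tags for a long line in a single bank's tag store are assumptions made for illustration, not the patented layout.

```cpp
#include <cstdint>
#include <vector>

constexpr std::size_t kLongLineBytes  = 256;  // long cache line requested by the client
constexpr std::size_t kShortLineBytes = 64;   // short cache line stored per data store
constexpr std::size_t kNumBanks       = 4;    // memory banks, each with a data store

struct ShortLinePlacement {
    std::size_t bank;    // which memory bank's data store holds this short line
    std::uint64_t tag;   // tag recorded for this short line
};

// Plan where each short piece of a long line lands across the banks; the tags
// for all pieces would then be written into one bank's tag store.
std::vector<ShortLinePlacement> place_long_line(std::uint64_t long_line_addr) {
    std::vector<ShortLinePlacement> placements;
    for (std::size_t i = 0; i < kLongLineBytes / kShortLineBytes; ++i) {
        std::uint64_t short_addr = long_line_addr + i * kShortLineBytes;
        placements.push_back({i % kNumBanks, short_addr / kShortLineBytes});
    }
    return placements;
}
```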
Abstract:
This disclosure describes techniques for supporting inter-task communication in a parallel computing system. The techniques for supporting inter-task communication may use hardware-based atomic operations to maintain the state of a pipe. A pipe may refer to a First-In, First-Out (FIFO)-organized buffer that allows various tasks to interact with the buffer as data producers or data consumers. Various pipe implementations may use multiple state parameters to define the state of a pipe. The hardware-based atomic operations described in this disclosure may modify multiple pipe state parameters in an atomic fashion. Modifying multiple pipe state parameters in an atomic fashion may avoid race conditions that would otherwise occur when multiple producers and/or multiple consumers attempt to modify the state of a pipe at the same time. In this way, pipe-based inter-task communication may be supported in a parallel computing system.
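The effect of such a multi-parameter atomic update can be emulated in software by packing two pipe state parameters into one word and updating it with a single compare-and-swap, as in the sketch below. The field layout (head index and packet count in one 64-bit word) and the function name reserve_read are assumptions for illustration, not the hardware operations themselves.

```cpp
#include <atomic>
#include <cstdint>
#include <optional>

struct PipeState {
    std::atomic<std::uint64_t> packed{0};  // [63:32] head index, [31:0] packet count
    std::uint32_t capacity = 1024;         // pipe capacity in packets
};

// Consumer-side reserve: atomically advance the head and decrement the packet
// count, failing if the pipe is empty. Multiple consumers may race safely
// because both parameters change in one atomic step.
std::optional<std::uint32_t> reserve_read(PipeState& pipe) {
    std::uint64_t old_state = pipe.packed.load(std::memory_order_acquire);
    for (;;) {
        std::uint32_t head  = static_cast<std::uint32_t>(old_state >> 32);
        std::uint32_t count = static_cast<std::uint32_t>(old_state);
        if (count == 0) return std::nullopt;  // pipe empty, nothing to consume
        std::uint64_t new_state =
            (static_cast<std::uint64_t>((head + 1) % pipe.capacity) << 32) |
            (count - 1);
        if (pipe.packed.compare_exchange_weak(old_state, new_state,
                                              std::memory_order_acq_rel))
            return head;  // index of the packet reserved for this consumer
    }
}
```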
Abstract:
An apparatus and method are disclosed for transferring data from a first core to a second core of an integrated circuit (IC). The first core includes first and second memory blocks (e.g., first and second portions of a first-in-first-out (FIFO) memory coupled to first and second pre-multiplexers, respectively). The second core includes a multiplexer including first and second inputs coupled to the first and second memory blocks, respectively. Additionally, the second core includes a read controller configured to generate a first read control signal to cause the first and second memory blocks to transfer data to the first and second inputs of the multiplexer, respectively; and generate a second read control signal to cause the multiplexer to transfer data from the first and second inputs to an output of the multiplexer.
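A behavioral model of the two-step read described above is sketched here: one control action pops both memory blocks onto the multiplexer inputs, and a second selects which input reaches the output. All names (TransferPath, fifo_low, fifo_high) are hypothetical, and the model abstracts away clocking and handshaking in the actual circuit.

```cpp
#include <cstdint>
#include <queue>

struct TransferPath {
    std::queue<std::uint32_t> fifo_low;   // first memory block (first FIFO portion)
    std::queue<std::uint32_t> fifo_high;  // second memory block (second FIFO portion)
    std::uint32_t mux_in0 = 0;            // first multiplexer input
    std::uint32_t mux_in1 = 0;            // second multiplexer input

    // First read control signal: both memory blocks drive the multiplexer inputs.
    void apply_first_read_control() {
        mux_in0 = fifo_low.front();  fifo_low.pop();
        mux_in1 = fifo_high.front(); fifo_high.pop();
    }

    // Second read control signal: select which input the multiplexer forwards.
    std::uint32_t apply_second_read_control(bool select_high) const {
        return select_high ? mux_in1 : mux_in0;
    }
};
```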
Abstract:
A cache memory system includes a cache memory including a plurality of cache memory lines and a dirty buffer including a plurality of dirty masks. A cache controller is configured to allocate one of the dirty masks to each of the cache memory lines when a write to the respective cache memory line is not a full write to that cache memory line. Each of the dirty masks indicates dirty states of data units in one of the cache memory lines. The cache controller may include a dirty buffer index which stores identification (ID) information that associates the dirty masks with the cache memory lines to which the dirty masks are allocated. A cache line may include a fully dirty flag indicating when each byte in that cache line is dirty, so that a dirty mask does not need to be allocated for that cache line.
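The allocate-on-partial-write behavior and the fully dirty flag can be illustrated with the C++ sketch below. The line size, the on-demand growth of the dirty buffer, and the names (CacheLine, DirtyBuffer, record_write) are assumptions for illustration only.

```cpp
#include <bitset>
#include <cstddef>
#include <optional>
#include <vector>

constexpr std::size_t kLineBytes = 64;

struct CacheLine {
    bool fully_dirty = false;                  // set when every byte in the line is dirty
    std::optional<std::size_t> dirty_mask_id;  // index into the dirty buffer, if allocated
};

struct DirtyBuffer {
    std::vector<std::bitset<kLineBytes>> masks;  // each mask marks dirty bytes of one line
    std::size_t allocate() { masks.emplace_back(); return masks.size() - 1; }
};

// Record a write of `len` bytes at `offset` within a cache line.
void record_write(CacheLine& line, DirtyBuffer& buf,
                  std::size_t offset, std::size_t len) {
    if (offset == 0 && len == kLineBytes) {
        line.fully_dirty = true;        // full write: no dirty mask needs to be allocated
        return;
    }
    if (!line.dirty_mask_id)            // partial write: allocate a mask on demand
        line.dirty_mask_id = buf.allocate();
    for (std::size_t i = 0; i < len; ++i)
        buf.masks[*line.dirty_mask_id].set(offset + i);  // mark the written bytes dirty
}
```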
Abstract:
Systems and methods related to a memory system including a cache memory are disclosed. The cache memory system includes a cache memory including a plurality of cache memory lines and a dirty buffer including a plurality of dirty masks. A cache controller is configured to allocate one of the dirty masks to each of the cache memory lines when a write to the respective cache memory line is not a full write to that cache memory line. Each of the dirty masks indicates dirty states of data units in one of the cache memory lines. The cache controller stores identification (ID) information that associates the dirty masks with the cache memory lines to which the dirty masks are allocated.