Abstract:
An example method of optimizing a neural network having a plurality of layers includes: obtaining an architecture constraint for circuitry of an inference platform that implements the neural network; training the neural network on a training platform to generate network parameters and feature maps for the plurality of layers; and constraining the network parameters, the feature maps, or both based on the architecture constraint.
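As a rough illustration (not the claimed method), the sketch below shows one way such a constraint could be applied after training: quantizing trained weights to an assumed 8-bit fixed-point format supported by the inference circuitry. The bit-width, function names, and per-tensor scaling are all illustrative assumptions.

```python
# Hedged sketch: constrain trained weights to an assumed fixed-point grid.
import numpy as np

def constrain_weights(weights: np.ndarray, bits: int = 8) -> np.ndarray:
    """Clamp and quantize float weights to a symmetric fixed-point grid."""
    qmax = 2 ** (bits - 1) - 1                              # e.g. 127 for 8 bits
    scale = max(float(np.max(np.abs(weights))) / qmax, 1e-12)  # per-tensor scale
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale                                        # hardware-representable values

# Example: weights produced by training on the training platform
w = np.random.randn(64, 128).astype(np.float32)
w_constrained = constrain_weights(w, bits=8)
```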
Abstract:
A circuit for controlling the operation of a memory system having different types of memory is described. The circuit comprises a first memory having a first type of memory element and having a first access time; a second memory having a second type of memory element and having a second access time, wherein the second type of memory element is different than the first type of memory element; a memory control circuit enabling access to the first memory and the second memory; a delay buffer coupled to the second memory to compensate for a difference between the first access time and the second access time; and a circuit for merging outputs of the first memory and delayed outputs of the second memory to generate ordered output data. A method of controlling the operation of a memory system is also disclosed.
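A behavioral software analogy (not the circuit itself) may help: if the second memory returns data faster than the first, delaying its outputs equalizes the two latencies so a simple merge produces data in request order. The latency values and names below are illustrative assumptions.

```python
# Behavioral sketch: equal total latency on both paths restores request order.
FIRST_LATENCY = 4    # assumed access time of the first memory (cycles)
SECOND_LATENCY = 1   # assumed access time of the second memory (cycles)
DELAY = FIRST_LATENCY - SECOND_LATENCY   # depth of the delay buffer

def merged_output(requests, mem_first, mem_second):
    """requests: list of (cycle, which_memory, address) issued in cycle order."""
    ready = []
    for cycle, which, addr in requests:
        if which == "first":
            ready.append((cycle + FIRST_LATENCY, mem_first[addr]))
        else:
            # fast-memory output passes through the delay buffer first
            ready.append((cycle + SECOND_LATENCY + DELAY, mem_second[addr]))
    # both paths now have the same total latency, so sorting by ready time
    # yields the ordered output data
    return [data for _, data in sorted(ready, key=lambda t: t[0])]
```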
Abstract:
A device includes a data processing engine array having a plurality of data processing engines organized in a grid having a plurality of rows and a plurality of columns. Each data processing engine includes a core and a memory module including a memory and a direct memory access engine. Each data processing engine includes a stream switch connected to the core, the direct memory access engine, and the stream switch of one or more adjacent data processing engines. Each memory module includes a first memory interface directly coupled to the core in the same data processing engine and one or more second memory interfaces directly coupled to the core of each of the one or more adjacent data processing engines.
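For orientation only, the following Python sketch models the described grid topology: each engine is linked to its adjacent engines both through its stream switch and through direct memory interfaces. The class and field names, grid size, and memory size are assumptions made for the example.

```python
# Structural sketch of the engine grid and its adjacency (illustrative only).
from dataclasses import dataclass, field

@dataclass
class DataProcessingEngine:
    row: int
    col: int
    memory: bytearray = field(default_factory=lambda: bytearray(32 * 1024))  # assumed size
    stream_neighbors: list = field(default_factory=list)  # reachable via the stream switch
    memory_neighbors: list = field(default_factory=list)  # cores with a direct memory interface

def build_array(rows: int, cols: int):
    grid = [[DataProcessingEngine(r, c) for c in range(cols)] for r in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # adjacent engines
                if 0 <= r + dr < rows and 0 <= c + dc < cols:
                    neighbor = grid[r + dr][c + dc]
                    grid[r][c].stream_neighbors.append(neighbor)
                    grid[r][c].memory_neighbors.append(neighbor)
    return grid

array = build_array(rows=4, cols=8)
```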
Abstract:
The coherent accelerator processor interface (CAPI) provides high performance when using heterogeneous compute architectures, but CAPI is not compatible with the advanced extensible interface (AXI), which is used by many accelerators. The examples herein describe an AXI-CAPI adapter (e.g., a hardware architecture) that converts AXI signals to CAPI signals and vice versa. In one example, the AXI-CAPI adapter includes four modules: a low-level shim, a high-level shim, an AXI full module, and an AXI Lite module, which are organized in a hierarchy of hardware elements. Each of the modules can output a different version of the AXI signals using the hierarchical structure.
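Purely as a conceptual sketch of the layered-shim idea, the Python below chains four placeholder stages so each one exposes a different view of a transaction. The field names and conversion steps are placeholders, not the real AXI or CAPI signal sets.

```python
# Conceptual sketch only: a hierarchy of shims, each presenting a different view.
def low_level_shim(capi_txn: dict) -> dict:
    # assumed: normalize a raw CAPI-side transaction into a common form
    return {"addr": capi_txn["address"], "data": capi_txn["payload"]}

def high_level_shim(txn: dict) -> dict:
    # assumed: add handshaking state expected by the AXI-facing modules
    return {**txn, "valid": True}

def axi_full(txn: dict) -> dict:
    # assumed: present a burst-capable view
    return {**txn, "burst_len": 1}

def axi_lite(txn: dict) -> dict:
    # assumed: present a simplified single-beat view
    return {k: txn[k] for k in ("addr", "data", "valid")}

capi_txn = {"address": 0x1000, "payload": 0xDEADBEEF}
lite_view = axi_lite(axi_full(high_level_shim(low_level_shim(capi_txn))))
```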
Abstract:
A circuit for processing data is described. The circuit comprises an input for receiving a request for implementing a key-value store data transaction; a plurality of memory interfaces associated with different memory types enabling access to a plurality of memory devices associated with a key-value store; and a memory management circuit controlling the routing of data by way of the plurality of memory interfaces based upon a data transfer criterion.
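As a loose software analogy (not the circuit), routing key-value operations across memory types based on a data transfer criterion could look like the sketch below, where the criterion is an assumed value-size threshold and the memory names are illustrative.

```python
# Hedged sketch: route key-value writes to different memory types by size.
DRAM = {}
FLASH = {}
SIZE_THRESHOLD = 4096   # bytes; assumed data transfer criterion

def put(key, value: bytes):
    target = DRAM if len(value) < SIZE_THRESHOLD else FLASH
    target[key] = value

def get(key):
    return DRAM.get(key) if key in DRAM else FLASH.get(key)

put("session:42", b"small payload")   # routed to the fast memory
put("blob:7", bytes(1 << 20))         # routed to the dense memory
```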
Abstract:
Embodiments herein describe a learnable transform block disposed before, or in between, the neural network layers to transform received data into a more computation-friendly domain while preserving the discriminative features required for the neural network to generate accurate results. In one embodiment, during a training phase, an AI system learns parameters for the transform block that are then used during the inference phase to transform received data into the computation-friendly domain, which has a reduced input size. The transformed data may require fewer compute resources or less memory to process on the underlying hardware device that hosts the neural network.
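The abstract does not name a framework; as a hedged PyTorch sketch, a learnable transform placed before the network can reduce the input size, with its parameters learned jointly during training and reused at inference. The layer sizes and the choice of a linear projection are assumptions.

```python
# Hedged sketch of a learnable transform block placed before the network.
import torch
import torch.nn as nn

class TransformBlock(nn.Module):
    def __init__(self, in_features=1024, out_features=256):
        super().__init__()
        self.proj = nn.Linear(in_features, out_features)  # learned transform parameters

    def forward(self, x):
        return self.proj(x)  # data mapped to a smaller, cheaper-to-process domain

model = nn.Sequential(
    TransformBlock(1024, 256),       # learned during training, reused at inference
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),              # downstream neural network layers
)

x = torch.randn(8, 1024)
logits = model(x)                    # transformed input requires fewer compute resources
```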
Abstract:
Examples herein propose operating redundant ML models that have been trained using a boosting technique that considers hardware faults. The embodiments herein describe performing an evaluation process where the performance of a first ML model is measured in the presence of a hardware fault. The errors introduced by the hardware fault can then be used to train a second ML model. In one embodiment, a second evaluation process is performed where the combined performance of both the first and second trained ML models is measured in the presence of a hardware fault. The resulting errors can then be used when training a third ML model. In this manner, the three trained ML models are trained to be error aware. As a result, during operation, if a hardware fault occurs, the three ML models have better performance relative to three ML models that were not trained to be error aware.
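A conceptual sketch of that loop (assumptions throughout): each new model is fit to the errors the fault-injected ensemble still makes. The fault model (zeroing a fraction of weights) and the simple linear learners below are illustrative stand-ins for real ML models and real hardware faults.

```python
# Hedged sketch: boosting-style training against fault-injected ensemble errors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(512, 16))
y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=512)

def fit(X, target):
    w, *_ = np.linalg.lstsq(X, target, rcond=None)
    return w

def inject_fault(w, frac=0.25):
    faulty = w.copy()
    faulty[rng.random(w.shape) < frac] = 0.0   # assumed fault: dropped weights
    return faulty

models, residual = [], y.copy()
for _ in range(3):                             # three error-aware models
    models.append(fit(X, residual))
    # evaluate the ensemble with a hardware fault present, then train on its errors
    ensemble_out = sum(X @ inject_fault(m) for m in models)
    residual = y - ensemble_out

prediction = sum(X @ m for m in models)        # fault-free operation
```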
Abstract:
An integrated circuit (IC) can include a decomposer data mover circuit configured to read sub-arrays from array data stored in a source memory; generate metadata headers for the sub-arrays, wherein each metadata header includes location information indicating location of a corresponding sub-array within the array data; create data tiles, wherein each data tile includes a sub-array and a corresponding metadata header; and output the data tiles to compute circuitry within the IC. The IC can include a composer data mover circuit configured to receive processed versions of the data tiles from the compute circuitry; extract valid data regions from the processed versions of the data tiles; and write the valid data regions to a destination memory based on the location information from the metadata headers of the processed versions of the data tiles.
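As a software sketch of the decompose/compose flow (not the circuits), the example below splits an array into tiles whose metadata headers record each sub-array's location, then writes processed tiles back using those headers. The tile size and header fields are assumptions.

```python
# Hedged sketch: tiles carry location metadata so results land back in place.
import numpy as np

TILE = 64   # assumed square tile size

def decompose(array):
    tiles = []
    for r in range(0, array.shape[0], TILE):
        for c in range(0, array.shape[1], TILE):
            sub = array[r:r + TILE, c:c + TILE]
            header = {"row": r, "col": c, "height": sub.shape[0], "width": sub.shape[1]}
            tiles.append((header, sub.copy()))   # data tile = metadata header + sub-array
    return tiles

def compose(tiles, out_shape):
    out = np.zeros(out_shape, dtype=np.float32)
    for header, data in tiles:
        r, c = header["row"], header["col"]
        h, w = header["height"], header["width"]
        out[r:r + h, c:c + w] = data[:h, :w]     # valid region placed via the header
    return out

src = np.arange(256 * 256, dtype=np.float32).reshape(256, 256)
processed = [(hdr, sub * 2.0) for hdr, sub in decompose(src)]  # stand-in for the compute circuitry
dst = compose(processed, src.shape)
```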
Abstract:
A device may include a plurality of data processing engines. Each data processing engine may include a core and a memory module. Each core may be configured to access the memory module in the same data processing engine and a memory module within at least one other data processing engine of the plurality of data processing engines.
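For illustration only, the short sketch below models a core accessing the memory module in its own engine and a memory module in an adjacent engine. The class names, memory size, and two-engine layout are assumptions made for the example.

```python
# Illustrative sketch: local and neighbor memory-module access.
class MemoryModule:
    def __init__(self, size=32 * 1024):          # assumed memory size
        self.data = bytearray(size)

class Core:
    def __init__(self, local_memory, neighbor_memories=()):
        self.local = local_memory                  # memory module in the same engine
        self.neighbors = list(neighbor_memories)   # memory modules of adjacent engines

    def store(self, mem, addr, value):
        mem.data[addr] = value

engine_a_mem, engine_b_mem = MemoryModule(), MemoryModule()
core_a = Core(engine_a_mem, neighbor_memories=[engine_b_mem])
core_a.store(core_a.local, 0x10, 0x5A)           # access to its own memory module
core_a.store(core_a.neighbors[0], 0x10, 0xA5)    # access to an adjacent engine's memory
```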