-
1.
Publication No.: US20240211212A1
Publication Date: 2024-06-27
Application No.: US18601259
Filing Date: 2024-03-11
CPC Classification: G06F7/5443, G06F9/3867, G06F9/522, G06F40/20, G06N3/063
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for deep learning operations. A hardware interface between the memory of a host computer and the plurality of DPE chips communicatively connects the chips to the host memory during an inference operation, such that the deep learning operations span the plurality of DPE chips. Because of the multi-die architecture, multiple silicon devices can be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system that performs specific applications, such as object recognition, with high accuracy.
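To make the spanning idea concrete, here is a minimal Python sketch, an illustration only and not the patented implementation: a fully connected layer's weight matrix is partitioned column-wise across a few hypothetical DPE dies, each die computes a partial dot product over its slice, and the host-side interface gathers and accumulates the partial sums. All names here (DPEChip, spanned_inference, NUM_CHIPS) are invented for this sketch.

```python
import numpy as np

NUM_CHIPS = 4  # hypothetical number of DPE dies in the package

class DPEChip:
    """Stand-in for one dot-product engine die holding a weight slice."""
    def __init__(self, weight_slice: np.ndarray):
        self.weights = weight_slice  # programmed once, reused per inference

    def infer(self, input_slice: np.ndarray) -> np.ndarray:
        # The on-die dot-product computation, modeled here as a matmul.
        return self.weights @ input_slice

def spanned_inference(weights: np.ndarray, x: np.ndarray) -> np.ndarray:
    # Partition the input dimension so each die sees only a fragment
    # of the layer, mirroring how the abstract describes deep learning
    # operations being spanned across a plurality of DPE chips.
    w_slices = np.array_split(weights, NUM_CHIPS, axis=1)
    x_slices = np.array_split(x, NUM_CHIPS)
    chips = [DPEChip(w) for w in w_slices]
    # Host interface role: scatter inputs, gather and sum partial results.
    partials = [chip.infer(xs) for chip, xs in zip(chips, x_slices)]
    return np.sum(partials, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((128, 256)).astype(np.float32)
    x = rng.standard_normal(256).astype(np.float32)
    # The spanned result matches a single-device matmul up to float error.
    assert np.allclose(spanned_inference(W, x), W @ x, atol=1e-4)
```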
-
2.
Publication No.: US20210110243A1
Publication Date: 2021-04-15
Application No.: US16598329
Filing Date: 2019-10-10
Abstract: Systems and methods are provided for implementing a deep learning accelerator system interface (DLASI). The DLASI connects an accelerator having a plurality of inference computation units to the memory of the host computer system during an inference operation. The DLASI allows interoperability between the main memory of a host computer, which uses 64 B cache lines, for example, and inference computation units, such as tiles, which are designed with smaller on-die memory using 16-bit words. The DLASI can include several components that function collectively to provide the interface between the server memory and a plurality of tiles. For example, the DLASI can include: a switch connected to the plurality of tiles; a host interface; a bridge connected to the switch and the host interface; and a deep learning accelerator fabric protocol. The fabric protocol can also implement a pipelining scheme that optimizes the throughput of the accelerator's multiple tiles.
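One way to picture the word-width mismatch the DLASI bridges is the sketch below: a 64-byte host cache line is unpacked into thirty-two 16-bit words for tile consumption and repacked on the way back. The helper names and the little-endian layout are assumptions made for illustration, not the patent's actual wire format.

```python
import struct

CACHE_LINE_BYTES = 64  # host main memory transfers 64 B cache lines
WORD_BYTES = 2         # tiles operate on 16-bit words

def cache_line_to_words(line: bytes) -> list[int]:
    """Bridge direction host -> tiles: split one cache line into words."""
    assert len(line) == CACHE_LINE_BYTES
    # "<32H" = 32 little-endian unsigned 16-bit words per 64 B line.
    return list(struct.unpack("<32H", line))

def words_to_cache_line(words: list[int]) -> bytes:
    """Bridge direction tiles -> host: repack words into a cache line."""
    assert len(words) == CACHE_LINE_BYTES // WORD_BYTES
    return struct.pack("<32H", *words)

if __name__ == "__main__":
    line = bytes(range(64))
    words = cache_line_to_words(line)
    assert len(words) == 32
    assert words_to_cache_line(words) == line  # round-trip is lossless
```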
-
3.
Publication No.: US11947928B2
Publication Date: 2024-04-02
Application No.: US17017557
Filing Date: 2020-09-10
CPC Classification: G06F7/5443, G06F9/3867, G06F9/522, G06F40/20, G06N3/063
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for deep learning operations. A hardware interface between the memory of a host computer and the plurality of DPE chips communicatively connects the chips to the host memory during an inference operation, such that the deep learning operations span the plurality of DPE chips. Because of the multi-die architecture, multiple silicon devices can be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system that performs specific applications, such as object recognition, with high accuracy.
-
4.
Publication No.: US20220075597A1
Publication Date: 2022-03-10
Application No.: US17017557
Filing Date: 2020-09-10
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs inference computations for deep learning operations. A hardware interface between the memory of a host computer and the plurality of DPE chips communicatively connects the chips to the host memory during an inference operation, such that the deep learning operations span the plurality of DPE chips. Because of the multi-die architecture, multiple silicon devices can be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system that performs specific applications, such as object recognition, with high accuracy.