-
1.
Publication No.: US20180276068A1
Publication Date: 2018-09-27
Application No.: US15468619
Filing Date: 2017-03-24
Inventors: Gregg B. Lesartre, Craig Warner, Martin Foltin, Chris Michael Brueggen, Brian S. Birk, Harvey Ray
CPC Classification: H03M13/2906, G06F11/1048, G11C29/52, G11C2029/0409, G11C2029/0411
Abstract: In one example in accordance with the present disclosure, a system comprises a plurality of memory dies; a first region of memory allocated for primary ECC, spread across a first subset of at least one memory die of the plurality, with a portion of the primary ECC allocated to each data block; and a second region of memory allocated for secondary ECC, spread across a second subset of at least one memory die of the plurality. The system also comprises a memory controller configured to determine that an error within a first data block cannot be corrected using the portion of the primary ECC allocated to that data block, to access the second region allocated for secondary ECC stored on the at least one memory die, and to attempt to correct the error using both the primary and secondary ECC.
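A minimal, self-contained sketch of the tiered-correction flow this abstract describes, using toy row/column parity in place of the actual (unspecified) ECC codes: the per-block primary ECC is checked first, and the shared secondary ECC region is consulted only when primary correction is not sufficient.

# Toy illustration, not the patented codes: primary ECC = one parity bit per block,
# secondary ECC = per-bit-position parity across blocks; together they locate and
# fix a single-bit error in one block.
def parity(bits):
    return sum(bits) % 2

def encode(blocks):
    primary = [parity(b) for b in blocks]                 # per-block "primary ECC"
    secondary = [parity(col) for col in zip(*blocks)]     # shared "secondary ECC" region
    return primary, secondary

def correct_block(blocks, i, primary, secondary):
    block = list(blocks[i])
    if parity(block) == primary[i]:
        return block                                      # fast path: primary ECC checks out
    # Fall back to the secondary ECC region to locate the failing bit position.
    for pos in range(len(block)):
        column = [row[pos] for row in blocks]
        if parity(column) != secondary[pos]:
            block[pos] ^= 1                               # flip the suspect bit
            break
    return block

blocks = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
primary, secondary = encode(blocks)
blocks[1][2] ^= 1                                         # inject a single-bit error in block 1
print(correct_block(blocks, 1, primary, secondary))       # -> [0, 1, 1, 0]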
-
2.
Publication No.: US20240211212A1
Publication Date: 2024-06-27
Application No.: US18601259
Filing Date: 2024-03-11
CPC Classification: G06F7/5443, G06F9/3867, G06F9/522, G06F40/20, G06N3/063
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs the inference computations for deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation, such that the deep learning operations are spanned across the plurality of DPE chips. The multi-die architecture allows multiple silicon devices to be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system that performs specific applications, such as object recognition, with high accuracy.
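A rough sketch of the spanning idea in this abstract: a layer's weight matrix is partitioned across several DPE chips, each chip computes its slice of the matrix-vector product, and the host gathers the partial results. The DpeChip class and the row-wise partitioning are illustrative assumptions, not HPE's design.

# Model each DPE chip as holding a slice of the weight matrix and returning
# its partial dot-product result; the host concatenates the slices.
import numpy as np

class DpeChip:
    def __init__(self, weight_slice):
        self.weight_slice = weight_slice          # rows of W mapped onto this die

    def infer(self, x):
        return self.weight_slice @ x              # analog dot products, modeled digitally

def spanned_inference(weights, x, num_chips):
    # Host side: span the operation across chips, then gather the partial outputs.
    chips = [DpeChip(w) for w in np.array_split(weights, num_chips, axis=0)]
    return np.concatenate([chip.infer(x) for chip in chips])

W = np.random.randn(1024, 256)
x = np.random.randn(256)
assert np.allclose(spanned_inference(W, x, num_chips=4), W @ x)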
-
3.
Publication No.: US09830283B2
Publication Date: 2017-11-28
Application No.: US14786822
Filing Date: 2013-05-16
Inventors: Gary Gostin, Martin Goldstein, Russ W. Herrell, Craig Warner
CPC Classification: G06F13/20, G06F13/124, G06F13/4282
Abstract: According to an example, a multi-mode agent may include a processor interconnect (PI) interface to receive data from a processor and to selectively route the data to a node controller logic block, a central switch, or an optical interface based on one of a plurality of modes of operation of the multi-mode agent. The modes of operation may include a glueless mode, where the PI interface is to route the data directly to the optical interface, bypassing the node controller logic block and the central switch; a switched glueless mode, where the PI interface is to route the data directly to the central switch for routing to the optical interface, bypassing the node controller logic block; and a glued mode, where the PI interface is to route the data directly to the node controller logic block for routing to the central switch and further to the optical interface.
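A simplified sketch of the three routing modes listed in this abstract, with each data path written as an ordered list of stages; the stage names are placeholders for the hardware blocks the abstract mentions, not the actual agent implementation.

# Enumerate the modes and return the sequence of blocks the PI interface
# routes data through in each one.
from enum import Enum, auto

class Mode(Enum):
    GLUELESS = auto()            # PI -> optical interface, bypassing node controller and switch
    SWITCHED_GLUELESS = auto()   # PI -> central switch -> optical interface
    GLUED = auto()               # PI -> node controller -> central switch -> optical interface

def data_path(mode):
    if mode is Mode.GLUELESS:
        return ["optical_interface"]
    if mode is Mode.SWITCHED_GLUELESS:
        return ["central_switch", "optical_interface"]
    return ["node_controller_logic", "central_switch", "optical_interface"]

print(data_path(Mode.SWITCHED_GLUELESS))   # ['central_switch', 'optical_interface']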
-
4.
Publication No.: US11532356B2
Publication Date: 2022-12-20
Application No.: US17223435
Filing Date: 2021-04-06
Inventors: Amit S. Sharma, John Paul Strachan, Catherine Graves, Suhas Kumar, Craig Warner, Martin Foltin
Abstract: A DPE memristor crossbar array system includes a plurality of partitioned memristor crossbar arrays. Each of the plurality of partitioned memristor crossbar arrays includes a primary memristor crossbar array and a redundant memristor crossbar array. The redundant memristor crossbar array includes values that are mathematically related to values within the primary memristor crossbar array. In addition, the plurality of partitioned memristor crossbar arrays includes a block of shared analog circuits coupled to the plurality of partitioned memristor crossbar arrays. The block of shared analog circuits is to determine a dot product value of voltage values generated by at least one partitioned memristor crossbar array of the plurality of partitioned memristor crossbar arrays.
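One way to picture the "mathematically related" values in the redundant array is a differential pair: the primary array holds the positive part of each weight, the redundant array the negative part, and the shared analog block subtracts the two sets of column currents. This pairing is only an illustrative assumption; the abstract does not specify the exact relation.

# Model a crossbar column current as sum_i G[i, j] * V[i] (Ohm's + Kirchhoff's laws)
# and combine the primary and redundant arrays differentially.
import numpy as np

def crossbar_currents(conductances, voltages):
    return voltages @ conductances

def dot_product(primary_g, redundant_g, voltages):
    # Shared analog block reads both arrays and subtracts their column currents.
    return crossbar_currents(primary_g, voltages) - crossbar_currents(redundant_g, voltages)

weights = np.array([[0.5, -1.0], [2.0, 0.25]])
primary = np.clip(weights, 0.0, None)        # positive part programmed into the primary array
redundant = np.clip(-weights, 0.0, None)     # negative part programmed into the redundant array
v = np.array([1.0, 3.0])
assert np.allclose(dot_product(primary, redundant, v), v @ weights)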
-
5.
Publication No.: US10275307B2
Publication Date: 2019-04-30
Application No.: US15454813
Filing Date: 2017-03-09
Abstract: A method is provided. In an example, the method includes identifying a memory module that includes a plurality of memory dies. Each memory die of the plurality of memory dies includes a plurality of memory regions, and each memory die of the plurality of memory dies services a respective portion of a data access. An error pattern is detected in a first memory region of the plurality of memory regions. The first memory region is associated with a first memory die of the plurality of memory dies. Based on the detected error pattern, the first memory region of the first memory die is marked as erased without marking a second memory region of the first memory die as erased.
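A schematic sketch of the bookkeeping this abstract implies: when a recognizable error pattern is confined to one region of one die, only that (die, region) pair is marked as erased, while the die's other regions stay in service. The pattern test and the error-log format below are hypothetical placeholders, not the patented detection method.

# Track erased regions at (die, region) granularity rather than per die.
erased_regions = set()                               # (die_index, region_index) pairs

def matches_known_error_pattern(error_log):
    # Placeholder heuristic: several errors confined to the same region.
    return len(error_log) >= 3

def record_region_errors(die, region, error_log):
    if matches_known_error_pattern(error_log):
        erased_regions.add((die, region))            # mark only this region, not the whole die

record_region_errors(die=2, region=1, error_log=["bit7", "bit7", "bit3"])
print(erased_regions)                                # {(2, 1)}; other regions of die 2 stay in service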
-
6.
Publication No.: US20180336034A1
Publication Date: 2018-11-22
Application No.: US15597757
Filing Date: 2017-05-17
Inventors: Craig Warner, Qiong Cai, Paolo Faraboschi, Gregg B Lesartre
IPC Classification: G06F9/30, G06F12/0804, G06F12/0875, G06F15/78, G06F12/128
CPC Classification: G06F9/30185, G06F9/3004, G06F9/30043, G06F9/30047, G06F9/30076, G06F9/30181, G06F12/0804, G06F12/0875, G06F12/128, G06F15/7825, G06F2212/452, G06F2212/60, G06F2212/69
Abstract: In one example in accordance with the present disclosure, a compute engine block may comprise a data port connecting a processing core to a data cache, wherein the data port receives requests for accessing a memory, and a data communication pathway to enable servicing of data requests of the memory. The processing core may be configured to identify a value in a predetermined address range of a first data request and to adjust the bit size of a load instruction used by the processing core when a first value is identified.
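A loose sketch of the behavior described here: the core checks whether a request's address falls in a designated range and widens the load it issues when it does. The specific address window and load widths below are invented for illustration and are not taken from the patent.

# Pick a load width based on whether the request address hits a designated range.
WIDE_LOAD_RANGE = range(0x8000_0000, 0x9000_0000)    # hypothetical address window
DEFAULT_LOAD_BITS = 64
WIDE_LOAD_BITS = 256

def load_width_for(address):
    # Widen the load when the request targets the designated range.
    return WIDE_LOAD_BITS if address in WIDE_LOAD_RANGE else DEFAULT_LOAD_BITS

print(load_width_for(0x8000_1000))   # 256
print(load_width_for(0x0000_2000))   # 64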
-
7.
Publication No.: US20180260273A1
Publication Date: 2018-09-13
Application No.: US15454813
Filing Date: 2017-03-09
CPC Classification: G11C29/52, G06F11/1048, G11C2029/0401, G11C2029/0409
Abstract: A method is provided. In an example, the method includes identifying a memory module that includes a plurality of memory dies. Each memory die of the plurality of memory dies includes a plurality of memory regions, and each memory die of the plurality of memory dies services a respective portion of a data access. An error pattern is detected in a first memory region of the plurality of memory regions. The first memory region is associated with a first memory die of the plurality of memory dies. Based on the detected error pattern, the first memory region of the first memory die is marked as erased without marking a second memory region of the first memory die as erased.
-
8.
Publication No.: US11947928B2
Publication Date: 2024-04-02
Application No.: US17017557
Filing Date: 2020-09-10
CPC Classification: G06F7/5443, G06F9/3867, G06F9/522, G06F40/20, G06N3/063
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs the inference computations for deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation, such that the deep learning operations are spanned across the plurality of DPE chips. The multi-die architecture allows multiple silicon devices to be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system that performs specific applications, such as object recognition, with high accuracy.
-
9.
Publication No.: US20220075597A1
Publication Date: 2022-03-10
Application No.: US17017557
Filing Date: 2020-09-10
Abstract: Systems and methods are provided for a multi-die dot-product engine (DPE) to provision large-scale machine learning inference applications. The multi-die DPE leverages a multi-chip architecture. For example, a multi-chip interface can include a plurality of DPE chips, where each DPE chip performs the inference computations for deep learning operations. A hardware interface between a memory of a host computer and the plurality of DPE chips communicatively connects the plurality of DPE chips to the memory of the host computer system during an inference operation, such that the deep learning operations are spanned across the plurality of DPE chips. The multi-die architecture allows multiple silicon devices to be used for inference, enabling power-efficient inference for large-scale machine learning applications and complex deep neural networks. The multi-die DPE can be used to build a multi-device DNN inference system that performs specific applications, such as object recognition, with high accuracy.
-
10.
Publication No.: US11024379B2
Publication Date: 2021-06-01
Application No.: US16667773
Filing Date: 2019-10-29
Inventors: Amit Sharma, John Paul Strachan, Suhas Kumar, Catherine Graves, Martin Foltin, Craig Warner
Abstract: Systems and methods for providing write process optimization for memristors are described. Write process optimization circuitry manipulates the memristor's write operation, allowing the number of cycles in the write process to be reduced. Write process optimization circuitry can include write current integration circuitry that measures an integral of a write current over time. The write optimization circuitry can also include shaping circuitry. The shaping circuitry can shape a write pulse by determining the pulse's termination, width, and slope. The write pulse is shaped depending upon whether the target memristor device exhibits characteristics of "maladroit" cells or "adroit" cells. The pulse shaping circuitry uses the integral and the measured write current to terminate the write pulse in a manner that allows the memristor, whether it has maladroit cells or adroit cells, to reach a target state. Thus, the utility of memristors is enhanced by realizing an optimized write process with decreased latency and improved efficiency.
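A conceptual sketch of write-pulse termination driven by an integral of the write current, as this abstract describes: the pulse ends once enough charge has flowed, with a larger target for "maladroit" cells than for "adroit" ones. The thresholds, current waveform, and time step are made-up illustrative values, not the patented circuitry.

# Accumulate charge (integral of write current over time) and terminate the
# pulse once the cell-class-specific target is reached.
DT = 1e-9                                              # integration time step, 1 ns
CHARGE_TARGET = {"adroit": 2e-12, "maladroit": 5e-12}  # coulombs; made-up thresholds

def write_pulse_length(cell_class, current_samples):
    charge = 0.0
    for step, i_write in enumerate(current_samples):
        charge += i_write * DT                         # running integral of the write current
        if charge >= CHARGE_TARGET[cell_class]:
            return step + 1                            # terminate the pulse early
    return len(current_samples)                        # full-width pulse was needed

samples = [1e-3] * 10                                  # 1 mA write current, up to 10 ns
print(write_pulse_length("adroit", samples))           # -> 2
print(write_pulse_length("maladroit", samples))        # -> 5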