-
Publication Number: US20200104126A1
Publication Date: 2020-04-02
Application Number: US16147696
Application Date: 2018-09-29
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jeffrey Cook , Deborah Marr , Abhijit Davare , Asit Mishra , Steven Burns , Desmond Kirkpatrick , Andrey Ayupov , Anton Alexandrovich Sorokin , Eriko Nurvitadhi
IPC: G06F9/30 , G06F9/38 , G06F17/16 , G06F12/0831 , G06F12/084 , G06F7/57
Abstract: An apparatus and method for performing efficient, adaptable tensor operations. For example, one embodiment of a processor comprises: front end circuitry to schedule a plurality of matrix operations responsive to a tensor matrix multiplication instruction; a plurality of lanes to perform parallel execution of the matrix operations, each lane comprising: first, second, and third tile registers to store blocks of a first matrix (A), second matrix (B), and third matrix (C), respectively; at least one tensor arithmetic logic unit (TALU) to multiply a block of elements of the first matrix with a block of elements of the second matrix to generate a product and to accumulate the product with a block of elements of the third matrix, wherein each lane is to multiply one or more different blocks of the first and second matrix and to accumulate the resulting one or more products with one or more different blocks of the third matrix; and broadcast circuitry to broadcast one or more invariant matrix blocks to different tile registers within a lane and/or different tile registers across different lanes.
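The blocked multiply-accumulate described above can be sketched in software. This is a minimal illustrative model, not the patented hardware: the block loop plays the role of a lane's TALU step (multiply an A block by a B block, accumulate into a C block), and the `a` operand held constant across the inner loop marks the kind of invariant value the broadcast circuitry would distribute. Function and variable names are illustrative only.

```python
def blocked_matmul_accumulate(A, B, C, bs):
    """Accumulate A @ B into C using bs x bs blocks of square matrices."""
    n = len(A)
    for i0 in range(0, n, bs):              # block row of A / C
        for j0 in range(0, n, bs):          # block column of B / C
            for k0 in range(0, n, bs):      # inner block dimension
                # One "TALU" step: multiply an A block by a B block
                # and accumulate the product into the C block.
                for i in range(i0, min(i0 + bs, n)):
                    for k in range(k0, min(k0 + bs, n)):
                        # a is invariant across the j loop below:
                        # the analogue of a broadcast matrix block.
                        a = A[i][k]
                        for j in range(j0, min(j0 + bs, n)):
                            C[i][j] += a * B[k][j]
    return C
```

A usage note: in the hardware described, different lanes would each own different (i0, j0) block coordinates and run these block steps concurrently, rather than sequentially as in this sketch.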
-
Publication Number: US11093250B2
Publication Date: 2021-08-17
Application Number: US16147694
Application Date: 2018-09-29
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jaewoong Sim , Andrey Ayupov
Abstract: An apparatus and method for efficiently processing invariant operations on a parallel execution engine. For example, one embodiment of a processor comprises: a plurality of parallel execution lanes comprising execution circuitry and registers to concurrently execute a plurality of threads; front end circuitry coupled to the plurality of parallel execution lanes, the front end circuitry to arrange the threads into parallel execution groups and schedule operations of the threads to be executed across the parallel execution lanes, wherein the front end circuitry is to dynamically evaluate one or more variables associated with the operations to determine if one or more conditionally invariant operations will be invariant across threads of a parallel execution group and/or across the parallel execution lanes; a scheduler of the front end circuitry to responsively schedule a shared thread upon a determination that a conditionally invariant operation will be invariant across threads of a parallel execution group and/or across the parallel execution lanes.
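The core scheduling idea, checking at run time whether an operation's operands are identical across all lanes and, if so, executing it once as a shared thread instead of per lane, can be modeled in a few lines. This is a hypothetical software analogy, not the front-end circuitry itself; all names are illustrative.

```python
def schedule_op(op, operands_per_lane):
    """Dynamically evaluate operands across lanes: if they are
    invariant (identical for every lane), run the op once as a
    'shared thread' and broadcast the result; otherwise run per lane."""
    first = operands_per_lane[0]
    if all(ops == first for ops in operands_per_lane):
        result = op(*first)                              # executed once
        return [result] * len(operands_per_lane)
    return [op(*ops) for ops in operands_per_lane]       # per-lane execution
```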
-
Publication Number: US10776110B2
Publication Date: 2020-09-15
Application Number: US16147696
Application Date: 2018-09-29
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jeffrey Cook , Deborah Marr , Abhijit Davare , Asit Mishra , Steven Burns , Desmond Kirkpatrick , Andrey Ayupov , Anton Alexandrovich Sorokin , Eriko Nurvitadhi
IPC: G06F9/30 , G06F9/38 , G06F17/16 , G06F7/57 , G06F12/0831 , G06F12/084
Abstract: An apparatus and method for performing efficient, adaptable tensor operations. For example, one embodiment of a processor comprises: front end circuitry to schedule a plurality of matrix operations responsive to a tensor matrix multiplication instruction; a plurality of lanes to perform parallel execution of the matrix operations, each lane comprising: first, second, and third tile registers to store blocks of a first matrix (A), second matrix (B), and third matrix (C), respectively; at least one tensor arithmetic logic unit (TALU) to multiply a block of elements of the first matrix with a block of elements of the second matrix to generate a product and to accumulate the product with a block of elements of the third matrix, wherein each lane is to multiply one or more different blocks of the first and second matrix and to accumulate the resulting one or more products with one or more different blocks of the third matrix; and broadcast circuitry to broadcast one or more invariant matrix blocks to different tile registers within a lane and/or different tile registers across different lanes.
-
Publication Number: US20200104139A1
Publication Date: 2020-04-02
Application Number: US16147692
Application Date: 2018-09-29
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jeffrey Cook , Deborah Marr , Abhijit Davare , Andrey Ayupov
Abstract: An apparatus and method for data parallel single program multiple data (SPMD) execution. For example, one embodiment of a processor comprises: instruction fetch circuitry to fetch instructions of one or more primary threads; a decoder to decode the instructions to generate uops; a data parallel cluster (DPC) to execute microthreads comprising a subset of the uops, the DPC further comprising: a plurality of execution lanes to perform parallel execution of the microthreads; an instruction decode queue (IDQ) to store the uops prior to execution; and a scheduler to evaluate the microthreads based on associated variables including instruction pointer (IP) values, the scheduler to gang microthreads into fragments for parallel execution on the execution lanes based on the evaluation.
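The ganging step, grouping microthreads whose instruction pointers match into fragments so one uop stream can be issued across the lanes, can be sketched as follows. This is a simplified illustrative model of the scheduler's evaluation, not the actual DPC logic; the dictionary-shaped microthread records are an assumption for the example.

```python
def gang_by_ip(microthreads):
    """Gang microthreads into fragments by instruction pointer (IP):
    threads at the same IP can execute the same uop in lockstep
    across the parallel execution lanes."""
    fragments = {}
    for t in microthreads:
        fragments.setdefault(t["ip"], []).append(t)
    return list(fragments.values())
```

In effect, threads that have diverged (different IPs) end up in separate fragments and are scheduled in separate passes, while convergent threads share a fragment and full lane parallelism.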
-
Publication Number: US11379229B2
Publication Date: 2022-07-05
Application Number: US16987838
Application Date: 2020-08-07
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jeffrey Cook , Debbie Marr , Abhijit Davare , Asit Mishra , Steven Burns , Desmond A. Kirkpatrick , Andrey Ayupov , Anton Alexandrovich Sorokin , Eriko Nurvitadhi
IPC: G06F9/30 , G06F9/38 , G06F17/16 , G06F7/57 , G06F12/0831 , G06F12/084
Abstract: An apparatus and method for performing efficient, adaptable tensor operations. For example, one embodiment of a processor comprises: front end circuitry to schedule matrix operations responsive to a matrix multiplication instruction; a plurality of lanes to perform parallel execution of the matrix operations, wherein a lane comprises an arithmetic logic unit to multiply a block of a first matrix with a block of a second matrix to generate a product and to accumulate the product with a block of a third matrix, and wherein the matrix blocks are to be stored in registers within the lane; and broadcast circuitry to broadcast one or more invariant matrix blocks to at least one of different registers within the lane and different registers across different lanes.
-
Publication Number: US10915328B2
Publication Date: 2021-02-09
Application Number: US16220528
Application Date: 2018-12-14
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jeffrey Cook , Deborah Marr
Abstract: An apparatus and method for offloading iterative, parallel work to a data parallel cluster. For example, one embodiment of a processor comprises: a host processor to execute a primary thread; a data parallel cluster coupled to the host processor over a high speed interconnect, the data parallel cluster comprising a plurality of execution lanes to perform parallel execution of one or more secondary threads related to the primary thread; and a data parallel cluster controller integral to the host processor to offload processing of the one or more secondary threads to the data parallel cluster in response to one of the cores executing a parallel processing call instruction from the primary thread.
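The offload pattern above, a primary thread issuing a parallel processing call and the cluster controller fanning the work out over execution lanes, maps loosely onto a thread-pool dispatch. This is only a software analogy using the standard library, not the hardware interconnect or controller; `lanes` and `parallel_call` are illustrative names.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_call(work_fn, items, lanes=8):
    """The 'primary thread' offloads one secondary work item per
    element of items to a pool of lanes, then collects the results
    in order, analogous to the cluster controller dispatching
    secondary threads in response to a parallel processing call."""
    with ThreadPoolExecutor(max_workers=lanes) as pool:
        return list(pool.map(work_fn, items))
```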
-
Publication Number: US12223318B2
Publication Date: 2025-02-11
Application Number: US17214572
Application Date: 2021-03-26
Applicant: Intel Corporation
Inventor: Tyler Sondag , David Sheffield , Sofia Pediaditaki
Abstract: An apparatus and method for supporting deprecated instructions. For example, one embodiment of a processor comprises: a plurality of cores, each core comprising a current microarchitecture to execute instructions and process data, the current microarchitecture including hardware support for a virtual execution environment comprising a hypervisor running at a first privilege level and one or more virtual machines each running at a second privilege level, the microarchitecture further including partial hardware support for executing deprecated instructions associated with a prior microarchitecture; at least one core of the plurality of cores comprising: a decoder to decode the instructions, the decoder to specify one or more microoperations corresponding to each of the instructions; and execution circuitry to execute the corresponding microoperations; wherein either a first type or a second type of virtual machine exit is to be performed responsive to detecting a deprecated instruction in a first virtual machine, wherein responsive to the first type of virtual machine exit, the hypervisor is to perform a first emulation of the prior microarchitecture without reliance on the partial hardware support, and wherein responsive to the second type of virtual machine exit, the hypervisor is to perform a second emulation of the prior microarchitecture relying on the partial hardware support.
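The two-tier handling, choosing between pure software emulation and hardware-assisted emulation depending on whether the partial hardware support covers the trapped instruction, amounts to a dispatch decision in the hypervisor. The sketch below is a hypothetical model of that decision only; the set membership test and all names are assumptions for illustration.

```python
def handle_deprecated(insn, hw_assisted, emulate_sw, emulate_hw):
    """On a VM exit caused by a deprecated instruction, pick the
    handling path: hardware-assisted emulation when the partial
    hardware support covers insn (second exit type), full software
    emulation of the prior microarchitecture otherwise (first type)."""
    if insn in hw_assisted:
        return emulate_hw(insn)   # second type of VM exit
    return emulate_sw(insn)       # first type of VM exit
```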
-
Publication Number: US10831505B2
Publication Date: 2020-11-10
Application Number: US16147692
Application Date: 2018-09-29
Applicant: Intel Corporation
Inventor: Jonathan Pearce , David Sheffield , Srikanth Srinivasan , Jeffrey Cook , Deborah Marr , Abhijit Davare , Andrey Ayupov
Abstract: An apparatus and method for data parallel single program multiple data (SPMD) execution. For example, one embodiment of a processor comprises: instruction fetch circuitry to fetch instructions of one or more primary threads; a decoder to decode the instructions to generate uops; a data parallel cluster (DPC) to execute microthreads comprising a subset of the uops, the DPC further comprising: a plurality of execution lanes to perform parallel execution of the microthreads; an instruction decode queue (IDQ) to store the uops prior to execution; and a scheduler to evaluate the microthreads based on associated variables including instruction pointer (IP) values, the scheduler to gang microthreads into fragments for parallel execution on the execution lanes based on the evaluation.
-