-
Publication Number: US20200233663A1
Publication Date: 2020-07-23
Application Number: US16843015
Filing Date: 2020-04-08
Applicant: Google LLC
Inventor: William Lacy , Gregory Michael Thorson , Christopher Aaron Clark , Norman Paul Jouppi , Thomas Norrie , Andrew Everett Phelps
Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
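As a rough illustration (not part of the patent record), the following Python sketch models the banked vector memory the abstract describes: each processor unit reads its operands from its own memory bank and performs an arithmetic operation. The names NUM_UNITS, BANK_SIZE, VectorMemory, and vector_add are assumptions made for the sketch, not terms from the filing.

```python
# Hypothetical software model of a banked vector memory; all names are illustrative.
NUM_UNITS = 8    # processor units, each with its own processing resources
BANK_SIZE = 128  # words per vector memory bank


class VectorMemory:
    """Vector memory organized as one bank per processor unit."""

    def __init__(self):
        self.banks = [[0.0] * BANK_SIZE for _ in range(NUM_UNITS)]

    def read(self, unit: int, addr: int) -> float:
        return self.banks[unit][addr]

    def write(self, unit: int, addr: int, value: float) -> None:
        self.banks[unit][addr] = value


def vector_add(vmem: VectorMemory, addr_a: int, addr_b: int, addr_out: int) -> None:
    """Each processor unit adds the operands held in its own bank."""
    for unit in range(NUM_UNITS):
        vmem.write(unit, addr_out, vmem.read(unit, addr_a) + vmem.read(unit, addr_b))
```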
-
Publication Number: US20190332509A1
Publication Date: 2019-10-31
Application Number: US16411569
Filing Date: 2019-05-14
Applicant: Google LLC
Inventor: Thomas Norrie , Naveen Kumar
Abstract: A computer-implemented method, executed by one or more processors, includes monitoring execution of program code executed by a first processor component and monitoring execution of program code executed by a second processor component. A computing system stores data identifying hardware events in a memory buffer. The stored events occur across processor units that include at least the first and second processor components. The hardware events each include an event time stamp and metadata characterizing the event. The system generates a data structure identifying the hardware events. The data structure arranges the events in a time-ordered sequence and associates the events with at least the first or second processor component. The system stores the data structure in a memory bank of a host device and uses the data structure to analyze performance of the program code executed by the first or second processor components.
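A minimal sketch of the kind of time-ordered event structure the abstract describes, assuming a simple timestamp-plus-metadata record per event; the class, field, and function names are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class HardwareEvent:
    """One buffered hardware event: a time stamp plus metadata characterizing it."""
    timestamp: int   # hardware time stamp of the event
    component: str   # processor component the event is associated with
    metadata: dict   # additional data characterizing the event


def build_trace(buffered_events: list) -> list:
    """Arrange buffered events into a time-ordered sequence, as the abstract describes."""
    return sorted(buffered_events, key=lambda event: event.timestamp)


# Example: events from two processor components, ordered by time stamp.
trace = build_trace([
    HardwareEvent(120, "core1", {"op": "dma_done"}),
    HardwareEvent(100, "core0", {"op": "issue"}),
])
```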
-
Publication Number: US20240160909A1
Publication Date: 2024-05-16
Application Number: US18423203
Filing Date: 2024-01-25
Applicant: Google LLC
Inventor: Thomas Norrie , Andrew Everett Phelps , Norman Paul Jouppi , Matthew Leever Hedlund
CPC classification number: G06N3/063 , G06F9/544 , G06F13/16 , G06F15/8061 , G06F15/8092 , G06F17/16 , G06F2213/28
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores includes a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
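As an illustration only, the sketch below models the two data paths named in the abstract as plain Python methods: a DMA path that moves data from the shared memory into a core's vector memory, and a load-store path that moves data into a core's vector registers. The class and method names are assumptions, not the patented interfaces.

```python
class Core:
    """Hypothetical stand-in for a processor core and its private storage."""

    def __init__(self):
        self.vector_memory = {}     # bulk vector values staged over the DMA path
        self.vector_registers = {}  # values moved by load-store operations


class SharedMemory:
    """Hypothetical stand-in for the shared memory sitting between the cores."""

    def __init__(self):
        self.cells = {}

    def dma_to_vector_memory(self, core: Core, shared_addr, vmem_addr):
        # DMA data path: shared memory -> the core's vector memory.
        core.vector_memory[vmem_addr] = self.cells[shared_addr]

    def load_to_vector_register(self, core: Core, shared_addr, reg_name):
        # Load-store data path: shared memory -> the core's vector registers.
        core.vector_registers[reg_name] = self.cells[shared_addr]
```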
-
Publication Number: US11922292B2
Publication Date: 2024-03-05
Application Number: US15931970
Filing Date: 2020-05-14
Applicant: Google LLC
Inventor: Thomas Norrie , Andrew Everett Phelps , Norman Paul Jouppi , Matthew Leever Hedlund
CPC classification number: G06N3/063 , G06F9/544 , G06F13/16 , G06F15/8061 , G06F15/8092 , G06F17/16 , G06F2213/28
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores includes a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
-
Publication Number: US20230297372A1
Publication Date: 2023-09-21
Application Number: US18074990
Filing Date: 2022-12-05
Applicant: Google LLC
Inventor: William Lacy , Gregory Michael Thorson , Christopher Aaron Clark , Norman Paul Jouppi , Thomas Norrie , Andrew Everett Phelps
CPC classification number: G06F9/3001 , G06F7/588 , G06F9/30032 , G06F9/30036 , G06F9/30043 , G06F9/30098 , G06F9/3891 , G06F13/36 , G06F13/4068 , G06F13/4282 , G06F15/8053 , G06F15/8092 , G06F17/16 , G06F15/8046 , G06N3/063
Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.
-
Publication Number: US20220129364A1
Publication Date: 2022-04-28
Application Number: US17571373
Filing Date: 2022-01-07
Applicant: Google LLC
Inventor: Thomas Norrie , Naveen Kumar
Abstract: A computer-implemented method includes monitoring execution of program code by first and second processor components. A computing system detects that a trigger condition is satisfied by: i) identifying an operand in a portion of the program code; or ii) determining that a current time of a clock of the computing system indicates a predefined time value. The operand and the predefined time value are used to initiate trace events. When the trigger condition is satisfied, the system initiates trace events that generate trace data identifying respective hardware events occurring across the computing system. The system uses the trace data to generate a correlated set of trace data. The correlated trace data indicates a time-ordered sequence of the respective hardware events. The system uses the correlated set of trace data to analyze performance of the executing program code.
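A small sketch of the trigger logic the abstract describes, under the assumption that the operand is a marker embedded in the program code and the predefined time is a plain clock value; TRACE_OPERAND, PREDEFINED_TIME, and the function names are hypothetical.

```python
import time

TRACE_OPERAND = "TRACE_MARK"       # hypothetical operand embedded in the program code
PREDEFINED_TIME = 1_700_000_000.0  # hypothetical clock value that arms tracing


def trigger_satisfied(operands, now=None):
    """True when the trace operand is seen or the clock reaches the predefined time."""
    now = time.time() if now is None else now
    return TRACE_OPERAND in operands or now >= PREDEFINED_TIME


def maybe_start_trace(operands, start_trace):
    """Initiate trace events once the trigger condition is satisfied."""
    if trigger_satisfied(operands):
        start_trace()  # begins generating trace data for hardware events
```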
-
Publication Number: US11232012B2
Publication Date: 2022-01-25
Application Number: US16520558
Filing Date: 2019-07-24
Applicant: Google LLC
Inventor: Thomas Norrie , Naveen Kumar
Abstract: A computer-implemented method includes monitoring execution of program code by first and second processor components. A computing system detects that a trigger condition is satisfied by: i) identifying an operand in a portion of the program code; or ii) determining that a current time of a clock of the computing system indicates a predefined time value. The operand and the predefined time value are used to initiate trace events. When the trigger condition is satisfied, the system initiates trace events that generate trace data identifying respective hardware events occurring across the computing system. The system uses the trace data to generate a correlated set of trace data. The correlated trace data indicates a time-ordered sequence of the respective hardware events. The system uses the correlated set of trace data to analyze performance of the executing program code.
-
Publication Number: US20210263739A1
Publication Date: 2021-08-26
Application Number: US17007569
Filing Date: 2020-08-31
Applicant: Google LLC
Inventor: Thomas Norrie , Gurushankar Rajamani , Andrew Everett Phelps , Matthew Leever Hedlund , Norman Paul Jouppi
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for performing vector reductions using a shared scratchpad memory of a hardware circuit having processor cores that communicate with the shared memory. For each of the processor cores, a respective vector of values is generated based on computations performed at the processor core. The shared memory receives the respective vectors of values from respective resources of the processor cores using a direct memory access (DMA) data path of the shared memory. The shared memory performs an accumulation operation on the respective vectors of values using an operator unit coupled to the shared memory. The operator unit is configured to accumulate values based on arithmetic operations encoded at the operator unit. A result vector is generated based on performing the accumulation operation using the respective vectors of values.
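The following sketch illustrates the accumulation step described in the abstract, assuming the operator unit is configured for elementwise addition; operator_unit_add and reduce_over_cores are hypothetical names introduced only for this example.

```python
from functools import reduce


def operator_unit_add(accumulator, vector):
    """Elementwise addition, standing in for the arithmetic operation encoded at the operator unit."""
    return [a + v for a, v in zip(accumulator, vector)]


def reduce_over_cores(per_core_vectors):
    """Accumulate the vectors received over the DMA path into a single result vector."""
    return reduce(operator_unit_add, per_core_vectors)


# Example: two processor cores each contribute a vector of partial results.
result = reduce_over_cores([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])  # -> [5.0, 7.0, 9.0]
```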
-
Publication Number: US20210232898A1
Publication Date: 2021-07-29
Application Number: US15931970
Filing Date: 2020-05-14
Applicant: Google LLC
Inventor: Thomas Norrie , Andrew Everett Phelps , Norman Paul Jouppi , Matthew Leever Hedlund
Abstract: Methods, systems, and apparatus, including computer-readable media, are described for a hardware circuit configured to implement a neural network. The circuit includes a first memory, respective first and second processor cores, and a shared memory. The first memory provides data for performing computations to generate an output for a neural network layer. Each of the first and second cores includes a vector memory for storing vector values derived from the data provided by the first memory. The shared memory is disposed generally intermediate the first memory and at least one core and includes: i) a direct memory access (DMA) data path configured to route data between the shared memory and the respective vector memories of the first and second cores and ii) a load-store data path configured to route data between the shared memory and respective vector registers of the first and second cores.
-
Publication Number: US11016764B2
Publication Date: 2021-05-25
Application Number: US16843015
Filing Date: 2020-04-08
Applicant: Google LLC
Inventor: William Lacy , Gregory Michael Thorson , Christopher Aaron Clark , Norman Paul Jouppi , Thomas Norrie , Andrew Everett Phelps
IPC: G06F9/302 , G06F9/312 , G06F15/80 , G06F13/40 , G06F7/57 , G06N3/063 , G06N20/00 , G06F17/16 , G06F9/30 , G06F9/38 , G06F7/58 , G06F13/36 , G06F13/42
Abstract: A vector processing unit is described, and includes processor units that each include multiple processing resources. The processor units are each configured to perform arithmetic operations associated with vectorized computations. The vector processing unit includes a vector memory in data communication with each of the processor units and their respective processing resources. The vector memory includes memory banks configured to store data used by each of the processor units to perform the arithmetic operations. The processor units and the vector memory are tightly coupled within an area of the vector processing unit such that data communications are exchanged at a high bandwidth based on the placement of respective processor units relative to one another, and based on the placement of the vector memory relative to each processor unit.