-
Publication Number: US11308396B2
Publication Date: 2022-04-19
Application Number: US16455329
Application Date: 2019-06-27
Applicant: Amazon Technologies, Inc.
Inventor: Jindrich Zejda, Jeffrey T. Huynh, Drazen Borkovic, Sejong Oh, Ron Diamant, Randy Renfu Huang
Abstract: Techniques are disclosed for debugging a neural network execution on a target processor. A reference processor may generate a plurality of first reference tensors for the neural network. The neural network may be repeatedly reduced to produce a plurality of lengths. For each of the lengths, a compiler converts the neural network into first machine instructions, the target processor executes the first machine instructions to generate a first device tensor, and a debugger program determines whether the first device tensor matches a first reference tensor. A shortest length is identified for which the first device tensor does not match the first reference tensor. Tensor output is enabled for a lower-level intermediate representation of the shortest neural network, and the neural network is converted into second machine instructions, which are executed by the target processor to generate a second device tensor.
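To make the search concrete, here is a minimal sketch of the shortest-failing-length loop, with `run_reference` and `run_device` as hypothetical stand-ins for the compile-and-execute steps on the reference and target processors:

```python
import numpy as np

def find_shortest_failing_length(num_layers, run_reference, run_device):
    """Return the smallest prefix length whose device tensor diverges
    from the reference tensor, or None if every length matches."""
    for n in range(1, num_layers + 1):
        ref = run_reference(n)   # first reference tensor for length n
        dev = run_device(n)      # first device tensor for length n
        if not np.allclose(dev, ref):
            return n             # shortest mismatching length
    return None

# Toy demo: the "device" injects an error starting at layer index 2.
def run_prefix(n, buggy=False):
    x = np.ones(4)
    for i in range(n):
        x = x * (i + 1)
        if buggy and i == 2:
            x = x + 0.01         # simulated hardware defect
    return x

print(find_shortest_failing_length(
    5,
    run_reference=run_prefix,
    run_device=lambda n: run_prefix(n, buggy=True),
))  # -> 3; tensor dumps would then be enabled for this prefix
```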
-
Publication Number: US11232016B1
Publication Date: 2022-01-25
Application Number: US16138145
Application Date: 2018-09-21
Applicant: Amazon Technologies, Inc.
Inventor: Jeffrey T. Huynh, Ron Diamant, Sundeep Amirineni, Randy Renfu Huang
Abstract: Techniques disclosed herein relate generally to debugging complex computing systems, such as those executing neural networks. A neural network processor includes a processing engine configured to execute instructions to implement multiple layers of a neural network. The neural network processor includes a debugging circuit configured to generate error detection codes for input data to the processing engine or error detection codes for output data generated by the processing engine. The neural network processor also includes an interface to a memory device, where the interface is configured to save the error detection codes generated by the debugging circuit into the memory device. The error detection codes generated by the debugging circuit are compared with expected error detection codes generated using a functional model of the neural network to identify defects of the neural network.
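As a rough illustration of the comparison step, assuming CRC-32 as the error detection code (the abstract does not name a specific code) and per-layer output tensors from the functional model and from the device:

```python
import zlib
import numpy as np

def edc(tensor: np.ndarray) -> int:
    """Error detection code for a tensor: CRC-32 over its raw bytes
    (an assumption; the abstract leaves the code unspecified)."""
    return zlib.crc32(np.ascontiguousarray(tensor).tobytes())

def locate_defective_layer(model_outputs, device_outputs):
    """Compare the EDCs the debugging circuit saved to memory against
    those from the functional model; return the first mismatch."""
    for layer, (m, d) in enumerate(zip(model_outputs, device_outputs)):
        if edc(m) != edc(d):
            return layer
    return None

ref = [np.arange(8, dtype=np.float32) * k for k in range(1, 4)]
dev = [t.copy() for t in ref]
dev[1][0] += 1.0                           # simulated defect in layer 1
print(locate_defective_layer(ref, dev))    # -> 1
```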
-
Publication Number: US11188302B1
Publication Date: 2021-11-30
Application Number: US16267031
Application Date: 2019-02-04
Applicant: Amazon Technologies, Inc.
Inventor: Ron Diamant, Randy Renfu Huang, Richard John Heaton
Abstract: Top-k is a process by which the largest k elements among a set of elements are found. In various implementations, a top-k computation can be executed by a neural network accelerator, where the top-k computation is performed using a process that makes use of the accelerator's memory array. A set of numerical values on which to perform top-k can be stored in the memory array. The accelerator can locate the maximum value from among the set of numerical values, and can store the maximum value back into the memory array. The accelerator can next remove the maximum value from the set of numerical values, so that the next largest value can be found. To remove the maximum value, the accelerator can write a value representing negative infinity to the memory array at each location of the maximum value.
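The removal step is easy to sketch in software. A minimal numpy version, treating the memory array as an ordinary array; note that writing negative infinity at every location of the maximum removes duplicate maxima together:

```python
import numpy as np

def top_k(values: np.ndarray, k: int) -> np.ndarray:
    work = values.astype(np.float64).copy()  # scratch copy of the memory array
    out = np.empty(k)
    for i in range(k):
        m = work.max()                       # locate the maximum value
        out[i] = m                           # store it back as a result
        work[work == m] = -np.inf            # remove every occurrence
    return out

print(top_k(np.array([3.0, 9.0, 1.0, 9.0, 5.0]), 3))
# -> [9. 5. 3.]; both 9s are removed in the same pass
```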
-
Publication Number: US11144291B1
Publication Date: 2021-10-12
Application Number: US16698320
Application Date: 2019-11-27
Applicant: Amazon Technologies, Inc.
Inventor: Hongbin Zheng, Preston Pengra Briggs, Tobias Joseph Kastulus Edler von Koch, Taemin Kim, Randy Renfu Huang
Abstract: Methods of accelerating the execution of neural networks are disclosed. A description of a neural network may be received. A plurality of operators may be identified based on the description of the neural network. A plurality of symbolic models associated with the plurality of operators may be generated. For each symbolic model, a nested loop associated with an operator may be identified, a loop order may be defined, and a set of data dependencies may be defined. A set of inter-operator dependencies may be extracted based on the description of the neural network. The plurality of symbolic models and the set of inter-operator dependencies may be analyzed to identify a combinable pair of nested loops. The combinable pair of nested loops may be combined to form a combined nested loop.
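A toy example of the combination the abstract describes: two element-wise operators whose nested loops share an iteration space and loop order, and whose only inter-operator dependency is element-wise, so the pair is combinable:

```python
import numpy as np

H, W = 4, 4
a = np.random.rand(H, W)
b = np.empty((H, W))
c = np.empty((H, W))

# Before fusion: operator 1 fully produces b before operator 2 reads it.
for h in range(H):
    for w in range(W):
        b[h, w] = a[h, w] * 2.0       # operator 1
for h in range(H):
    for w in range(W):
        c[h, w] = b[h, w] + 1.0       # operator 2

# After fusion: the dependency c[h, w] -> b[h, w] is element-wise,
# so one traversal serves both operators and b never hits memory.
fused = np.empty((H, W))
for h in range(H):
    for w in range(W):
        t = a[h, w] * 2.0             # operator 1
        fused[h, w] = t + 1.0         # operator 2 reads the live value

assert np.allclose(fused, c)
```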
-
Publication Number: US20210304010A1
Publication Date: 2021-09-30
Application Number: US16836421
Application Date: 2020-03-31
Applicant: Amazon Technologies, Inc.
Inventor: Sudipta Sengupta, Randy Renfu Huang, Ron Diamant, Vignesh Vivekraja
Abstract: Methods and systems for training a neural network are provided. In one example, an apparatus comprises a memory that stores instructions; and a hardware processor configured to execute the instructions to: control a neural network processor to perform a loss gradient operation to generate data gradients; after the loss gradient operation completes, control the neural network processor to perform a forward propagation operation to generate intermediate outputs; control the neural network processor to perform a backward propagation operation based on the data gradients and the intermediate outputs to generate weight gradients; receive the weight gradients from the neural network processor; and update weights of a neural network based on the weight gradients.
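A toy sketch of that ordering for a one-weight model y = w * x with squared-error loss; the step names follow the abstract, while the model, data, and learning rate are invented for illustration:

```python
import numpy as np

def train_step(x, target, w, lr=0.1):
    # 1. Loss gradient operation: dL/dy for L = 0.5 * (y - target)^2.
    data_grad = w * x - target
    # 2. Forward propagation, run afterwards to regenerate the
    #    intermediate outputs rather than keeping them around.
    intermediate = x                   # input feeding the weight multiply
    # 3. Backward propagation: weight gradient from the data gradients
    #    plus the recomputed intermediate outputs.
    weight_grad = np.dot(data_grad, intermediate)
    # 4. The host receives the weight gradient and updates the weight.
    return w - lr * weight_grad

w = 0.0
for _ in range(50):
    w = train_step(np.array([1.0, 2.0]), np.array([3.0, 6.0]), w)
print(round(w, 3))  # converges toward 3.0
```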
-
Publication Number: US20210247984A1
Publication Date: 2021-08-12
Application Number: US17243415
Application Date: 2021-04-28
Applicant: Amazon Technologies, Inc.
Inventor: Jeffrey T. Huynh, Drazen Borkovic, Jindrich Zejda, Randy Renfu Huang, Ron Diamant
Abstract: Techniques are disclosed for reordering operations of a neural network to improve runtime efficiency. In some examples, a compiler receives a description of the neural network comprising a plurality of operations. The compiler may determine which execution engine of a plurality of execution engines is to perform each of the plurality of operations. The compiler may determine an order of performance associated with the plurality of operations. The compiler may identify a runtime inefficiency based on the order of performance and a hardware usage for each of the plurality of operations. An operation may be reordered to reduce the runtime inefficiency. Instructions may be compiled based on the plurality of operations, which include the reordered operation.
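A minimal cost-model sketch of such a reordering. The engines, durations, and in-order dispatch rule below are invented assumptions; the point is only that hoisting an independent operation bound to an idle engine shortens the schedule:

```python
# Each op: (name, engine, duration, dependencies).
ops = [
    ("conv1", "PE",   4, []),
    ("act1",  "ACT",  2, ["conv1"]),
    ("conv2", "PE",   4, ["act1"]),
    ("pool0", "POOL", 5, []),          # independent of the chain above
]

def makespan(order):
    """In-order dispatch: an op issues only after its predecessor in
    program order has started, then waits for its deps and engine."""
    t, engine_free, done = 0, {}, {}
    for name, eng, dur, deps in order:
        start = max([t, engine_free.get(eng, 0)] + [done[d] for d in deps])
        done[name] = engine_free[eng] = start + dur
        t = start                      # next op may issue once this starts
    return max(done.values())

print(makespan(ops))                   # 11: pool0 sits behind the chain
print(makespan([ops[3]] + ops[:3]))    # 10: hoisted pool0 overlaps conv1
```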
-
Publication Number: US20210097396A1
Publication Date: 2021-04-01
Application Number: US16588603
Application Date: 2019-09-30
Applicant: Amazon Technologies, Inc.
Inventor: Vignesh Vivekraja, Thiam Khean Hah, Randy Renfu Huang, Ron Diamant, Richard John Heaton
Abstract: Methods and systems for performing a training operation of a neural network are provided. In one example, a method comprises: performing backward propagation computations for a second layer of a neural network to generate second weight gradients; splitting the second weight gradients into portions; causing a hardware interface to exchange a first portion of the second weight gradients with a second computer system; performing backward propagation computations for a first layer of the neural network to generate first weight gradients while the exchange of the first portion of the second weight gradients is underway, the first layer being a lower layer than the second layer in the neural network; causing the hardware interface to transmit the first weight gradients to the second computer system; and causing the hardware interface to transmit the remaining portions of the second weight gradients to the second computer system.
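A rough sketch of the overlap using Python threads; `exchange` and `backward_layer1` are hypothetical stand-ins for the hardware interface and the layer-1 backward-propagation computation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def exchange(grads):
    """Stand-in for the hardware interface trading gradients with the
    second computer system (e.g., an all-reduce between workers)."""
    return grads   # a real implementation would transfer and combine

def backward_layer1():
    """Stand-in for the layer-1 backward-propagation computation."""
    return np.random.rand(256)

layer2_grads = np.random.rand(1024)
first, rest = np.split(layer2_grads, [256])   # split into portions

with ThreadPoolExecutor(max_workers=1) as pool:
    xfer = pool.submit(exchange, first)   # start exchanging portion 1...
    layer1_grads = backward_layer1()      # ...while layer 1 backprops
    first_done = xfer.result()

exchange(layer1_grads)   # then the first-layer weight gradients
exchange(rest)           # then the remaining layer-2 portions
```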
-
Publication Number: US12205013B1
Publication Date: 2025-01-21
Application Number: US17009483
Application Date: 2020-09-01
Applicant: Amazon Technologies, Inc.
Inventor: Thiam Khean Hah, Randy Renfu Huang, Richard John Heaton, Ron Diamant, Vignesh Vivekraja
Abstract: Accelerated convolution of neural networks can be performed by executing N computing engines (CEs) of a neural network processor in parallel. An input dataset can be divided spatially into N chunks such that a respective last portion of each chunk overlaps with a respective first portion of a subsequent chunk. Portions of each chunk can be processed by a respective CE to generate a respective portion of an output dataset. The overlapping intermediate states computed by each CE from processing the overlapping portion can be stored locally for sharing with a subsequent CE using an on-chip bus.
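A minimal 1-D analogue of the spatial split, assuming 'valid' convolution. In the described hardware the overlapping intermediate states are forwarded CE-to-CE over the on-chip bus rather than recomputed; here each chunk simply includes the kernel_size - 1 halo:

```python
import numpy as np

def conv1d_valid(x, k):
    """Reference 'valid' convolution (correlation) for the check below."""
    return np.array([np.dot(x[i:i + len(k)], k)
                     for i in range(len(x) - len(k) + 1)])

x = np.random.rand(32)
k = np.random.rand(5)
N = 4                          # number of computing engines
halo = len(k) - 1              # overlap between neighboring chunks
chunk = len(x) // N

outputs = []
for ce in range(N):            # each CE processes one overlapping chunk
    lo = ce * chunk
    hi = min(lo + chunk + halo, len(x))
    outputs.append(conv1d_valid(x[lo:hi], k))

# The per-CE portions concatenate into the full output dataset.
assert np.allclose(np.concatenate(outputs), conv1d_valid(x, k))
```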
-
Publication Number: US12182695B1
Publication Date: 2024-12-31
Application Number: US18474129
Application Date: 2023-09-25
Applicant: Amazon Technologies, Inc.
Inventor: Paul Gilbert Meyer, Thiam Khean Hah, Randy Renfu Huang, Ron Diamant, Vignesh Vivekraja
Abstract: A systolic array can implement an architecture tailored to perform matrix multiplications on sparse matrices. Each processing element in the systolic array may include a register configured to store a value, and a multiplexor configured to select an input element from multiple input data buses based on metadata associated with the value. Each processing element may also include a multiplier configured to multiply the selected input element with the value to generate a multiplication result, and an adder configured to add the multiplication result to a partial sum input to generate a partial sum output.
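A behavioral sketch of one such processing element; the bus count and values below are invented, and a real PE would of course be fixed-function hardware rather than Python:

```python
class ProcessingElement:
    """PE for the sparse matmul: metadata picks which input data bus
    feeds the multiplier for the stored nonzero weight value."""

    def __init__(self, value, bus_index):
        self.value = value           # stored weight (a nonzero)
        self.bus_index = bus_index   # metadata for the multiplexor

    def step(self, buses, partial_sum_in):
        selected = buses[self.bus_index]   # multiplexor selects a bus
        product = selected * self.value    # multiplier
        return partial_sum_in + product    # adder -> partial sum out

# Buses carry different input rows, so zero weights cost nothing:
# only the nonzeros of this weight column occupy PEs.
buses = [1.0, 2.0, 3.0]
pes = [ProcessingElement(0.5, 0), ProcessingElement(4.0, 2)]
acc = 0.0
for pe in pes:
    acc = pe.step(buses, acc)
print(acc)   # 0.5*1.0 + 4.0*3.0 = 12.5
```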
-
Publication Number: US12093806B1
Publication Date: 2024-09-17
Application Number: US16459501
Application Date: 2019-07-01
Applicant: Amazon Technologies, Inc.
Inventor: Jindrich Zejda, Ron Diamant, Jeffrey T. Huynh, Drazen Borkovic, Randy Renfu Huang, Richard John Heaton
Abstract: Static memory allocation may be performed for weight values across multiple processing units executing a neural network. A neural network may be received for execution across multiple processing units. A partitioning scheme may be applied to divide the neural network into subgraphs. The subgraphs may be assigned to different processing units. The weights for the operations of each subgraph may be statically allocated in the dedicated cache of its assigned processing unit as part of the instructions to execute the neural network across the processing units.
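A minimal sketch of the allocation step, with invented weight names and sizes; each processing unit's weights receive fixed offsets in its dedicated cache, which the compiler can then bake into the instructions:

```python
def allocate_weights(subgraphs, cache_size):
    """Statically lay out each subgraph's weights in the dedicated
    cache of its processing unit: {unit: {weight_name: offset}}."""
    layout = {}
    for unit, weights in enumerate(subgraphs):
        offset, table = 0, {}
        for name, nbytes in weights:
            if offset + nbytes > cache_size:
                raise MemoryError(f"unit {unit}: cache overflow at {name}")
            table[name] = offset        # fixed address, known at compile time
            offset += nbytes
        layout[unit] = table
    return layout

# Two subgraphs from the partitioning scheme, one per processing unit.
subgraphs = [
    [("conv1.w", 4096), ("conv2.w", 8192)],   # unit 0
    [("fc1.w", 16384)],                       # unit 1
]
print(allocate_weights(subgraphs, cache_size=32768))
```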