Configurable function approximation based on switching mapping table content

    Publication Number: US11423313B1

    Publication Date: 2022-08-23

    Application Number: US16218082

    Application Date: 2018-12-12

    Abstract: Methods and systems for performing hardware approximation of a function are provided. In one example, a system comprises a controller, configurable arithmetic circuits, and a mapping table. The mapping table stores a first set of function parameters in a first mode of operation and stores a second set of function parameters in a second mode of operation. Depending on the mode of operation, the controller may configure the arithmetic circuits to compute a first approximation result of a function at an input value based on the first set of function parameters, or to compute a second approximation result of the function at the input value based on the second set of function parameters and to perform post-processing, such as quantization, of the second approximation result.
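
    The mode-switched mapping table described in this abstract can be modeled in a few lines of Python. The sketch below is an illustrative software analogue, not the claimed circuit: the 16-segment piecewise-linear table, the example functions (exp and sigmoid), and the fixed-point quantization step are all assumptions.

        import math

        NUM_SEGMENTS = 16  # segment count is an illustrative assumption

        def build_table(func, lo, hi, segments=NUM_SEGMENTS):
            # One (slope, intercept) pair per segment for a piecewise-linear fit.
            step = (hi - lo) / segments
            table = []
            for i in range(segments):
                x0, x1 = lo + i * step, lo + (i + 1) * step
                slope = (func(x1) - func(x0)) / (x1 - x0)
                table.append((slope, func(x0) - slope * x0))
            return table

        def approximate(x, table, lo, hi, mode):
            # Shared arithmetic path: y = slope * x + intercept for the segment holding x.
            idx = min(int((x - lo) / (hi - lo) * len(table)), len(table) - 1)
            slope, intercept = table[idx]
            y = slope * x + intercept
            if mode == 2:                  # second mode adds post-processing,
                y = round(y * 256) / 256   # e.g. quantization to a fixed-point grid
            return y

        # First mode: the table holds parameters of one function (exp as an example).
        exp_table = build_table(math.exp, -4.0, 4.0)
        print(approximate(1.0, exp_table, -4.0, 4.0, mode=1))

        # Second mode: the same table storage is reloaded with another function's parameters.
        sig_table = build_table(lambda v: 1.0 / (1.0 + math.exp(-v)), -8.0, 8.0)
        print(approximate(1.0, sig_table, -8.0, 8.0, mode=2))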

    Breakpoints in neural network accelerator

    Publication Number: US11467946B1

    Publication Date: 2022-10-11

    Application Number: US16368351

    Application Date: 2019-03-28

    Abstract: Techniques are disclosed for setting a breakpoint for debugging a neural network. User input is received by a debugger program executable by a host processor indicating a target layer of a neural network at which to halt execution of the neural network. The neural network includes a first set of instructions to be executed by a first execution engine and a second set of instructions to be executed by a second execution engine. A first halt point is set within the first set of instructions and a second halt point is set within the second set of instructions. It is then determined that operation of the first execution engine and the second execution engine has halted. It is then determined that the first execution engine has reached the first halt point. The second execution engine is then caused to move through instructions until reaching the second halt point.
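
    A rough software analogue of this halt-point coordination is sketched below. The Engine class and its methods are hypothetical stand-ins for an accelerator's debug interface, and the engine names and halt-point indices are made up for illustration.

        class Engine:
            """Hypothetical debug view of one execution engine's instruction stream."""
            def __init__(self, name):
                self.name = name
                self.pc = 0
                self.halt_point = None

            def run_until_halt(self, stop_at):
                # Model the engine halting either at its halt point or at an
                # earlier synchronization stall (stop_at).
                self.pc = min(stop_at, self.halt_point)

            def at_halt_point(self):
                return self.pc == self.halt_point

            def step(self):
                self.pc += 1

        def break_at_layer(first, second, first_halt, second_halt):
            """Set a halt point in each instruction stream, then align both engines."""
            first.halt_point = first_halt
            second.halt_point = second_halt
            first.run_until_halt(stop_at=first_halt)        # first engine reaches its halt point
            second.run_until_halt(stop_at=second_halt - 3)  # second stalls a few instructions short
            assert first.at_halt_point()
            # Move the second engine through instructions until its halt point.
            while not second.at_halt_point():
                second.step()

        pool, act = Engine("pooling"), Engine("activation")
        break_at_layer(pool, act, first_halt=40, second_halt=25)
        print(pool.pc, act.pc)  # both engines now sit at their halt points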

    Hardware engine with configurable instructions

    Publication Number: US10942742B1

    Publication Date: 2021-03-09

    Application Number: US16216212

    Application Date: 2018-12-11

    Abstract: A reconfigurable processing circuit and system are provided. The system allows a user to program machine-level instructions in order to reconfigure the way the circuit behaves, including by adding new operations. The system can include a profile access content-addressable memory (CAM) configured to receive an execution step value from a step counter. The execution step value can be incremented and/or reset by a step management logic. The profile access CAM can select an entry of a profile table based on an opcode and the execution step value, and the processing engine can execute microcode based on the selected entry of the profile table. The profile access CAM can translate the opcode to an internal short instruction identifier in order to select the entry of the profile table. The system can further include an instruction decoding module configured to merge multiple instruction fields into a single effective instruction field.
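
    The opcode-and-step lookup can be illustrated with a small behavioral model. The CAM contents, microcode names, and field widths below are invented examples; they stand in for the profile access CAM, profile table, step management logic, and field-merging decoder named in the abstract.

        # CAM: (opcode, execution step value) -> profile table index.
        PROFILE_ACCESS_CAM = {
            ("MATMUL", 0): 0,
            ("MATMUL", 1): 1,
            ("NEW_OP", 0): 2,  # user-added operation mapped by reprogramming the CAM
        }

        PROFILE_TABLE = [
            {"microcode": "load_operands", "last_step": False},
            {"microcode": "multiply_accumulate", "last_step": True},
            {"microcode": "custom_sequence", "last_step": True},
        ]

        def execute(opcode):
            """Walk the profile table entries for an opcode, one execution step at a time."""
            step = 0
            while True:
                entry = PROFILE_TABLE[PROFILE_ACCESS_CAM[(opcode, step)]]
                print(f"{opcode} step {step}: {entry['microcode']}")
                if entry["last_step"]:  # step management logic resets the step counter
                    break
                step += 1               # otherwise it increments the step counter

        def merge_fields(fields, widths):
            """Merge several instruction fields into one effective instruction field."""
            value = 0
            for field, width in zip(fields, widths):
                value = (value << width) | (field & ((1 << width) - 1))
            return value

        execute("MATMUL")
        print(hex(merge_fields([0x3, 0x1F, 0x2], [4, 8, 4])))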

    Dynamic code loading for multiple executions on a sequential processor

    Publication Number: US11809953B1

    Publication Date: 2023-11-07

    Application Number: US17902702

    Application Date: 2022-09-02

    CPC classification number: G06N3/063 G06N5/04

    Abstract: Embodiments include techniques for enabling execution of N inferences on an execution engine of a neural network device. Instruction code for a single inference is stored in a memory that is accessible by a DMA engine, the instruction code forming a regular code block. A NOP code block and a reset code block for resetting an instruction DMA queue are stored in the memory. The instruction DMA queue is generated such that, when it is executed by the DMA engine, it causes the DMA engine to copy, for each of N inferences, both the regular code block and an additional code block to an instruction buffer. The additional code block is the NOP code block for the first N−1 inferences and is the reset code block for the Nth inference. When the reset code block is executed by the execution engine, the instruction DMA queue is reset.
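
    One way to picture the described queue layout is the sketch below, which builds a list of copy descriptors for N inferences. The descriptor fields and block addresses are illustrative assumptions rather than the device's actual DMA descriptor format.

        def build_instruction_dma_queue(n, regular_block, nop_block, reset_block):
            """For each inference, copy the regular code block plus one extra block:
            the NOP block for inferences 1..N-1 and the queue-resetting block for inference N."""
            queue = []
            for i in range(n):
                queue.append({"src": regular_block, "dst": "instruction_buffer"})
                extra = reset_block if i == n - 1 else nop_block
                queue.append({"src": extra, "dst": "instruction_buffer"})
            return queue

        queue = build_instruction_dma_queue(
            n=3,
            regular_block="regular_code@0x1000",
            nop_block="nop_code@0x2000",
            reset_block="reset_code@0x3000",
        )
        for descriptor in queue:
            print(descriptor)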

    Parametric mathematical function approximation in integrated circuits

    Publication Number: US10733498B1

    Publication Date: 2020-08-04

    Application Number: US16215405

    Application Date: 2018-12-10

    Abstract: Methods and systems for supporting parametric function computations in hardware circuits are proposed. In one example, a system comprises a hardware mapping table, a control circuit, and arithmetic circuits. The control circuit is configured to: in a first mode of operation, forward a set of parameters of a non-parametric function associated with an input value from the hardware mapping table to the arithmetic circuits to compute a first approximation of the non-parametric function at the input value; and in a second mode of operation, based on information indicating whether the input value is in a first input range or in a second input range from the hardware mapping table, forward a first parameter or a second parameter of a parametric function to the arithmetic circuits to compute, respectively, a second approximation or a third approximation of the parametric function at the input value.
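
    The two modes can be illustrated with a small Python model. The example functions are assumptions: exp stands in for the non-parametric case (where the table row selected by the input supplies slope and intercept), and leaky ReLU stands in for the parametric case (where the table row only indicates the input range, which selects the first or second parameter).

        import math

        def mode1_nonparametric(x, lo=-4.0, hi=4.0, segments=16):
            # Mode 1: the table row selected by x holds (slope, intercept) for a
            # piecewise-linear fit of a fixed function, here exp(x).
            idx = min(int((x - lo) / (hi - lo) * segments), segments - 1)
            x0 = lo + idx * (hi - lo) / segments
            x1 = x0 + (hi - lo) / segments
            slope = (math.exp(x1) - math.exp(x0)) / (x1 - x0)
            return slope * (x - x0) + math.exp(x0)

        def mode2_parametric(x, alpha=0.1):
            # Mode 2: the table row only says which input range x falls in; that
            # selects the first or second parameter of a parametric function
            # (leaky ReLU with slope alpha below zero, slope 1 at or above zero).
            slope = alpha if x < 0 else 1.0
            return slope * x

        print(mode1_nonparametric(1.0))
        print(mode2_parametric(-2.0), mode2_parametric(2.0))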

    Breakpoints in neural network accelerator

    Publication Number: US12210438B1

    Publication Date: 2025-01-28

    Application Number: US17947949

    Application Date: 2022-09-19

    Abstract: Techniques are disclosed for setting a breakpoint for debugging a neural network. User input is received by a debugger program executable by a host processor indicating a target layer of a neural network at which to halt execution of the neural network. The neural network includes a first set of instructions to be executed by a first execution engine and a second set of instructions to be executed by a second execution engine. A first halt point is set within the first set of instructions and a second halt point is set within the second set of instructions. It is then determined that operation of the first execution engine and the second execution engine has halted. It is then determined that the first execution engine has reached the first halt point. The second execution engine is then caused to move through instructions until reaching the second halt point.
