-
Publication No.: US10199987B2
Publication Date: 2019-02-05
Application No.: US15776780
Filing Date: 2016-01-29
Applicant: SOUTHEAST UNIVERSITY
Inventor: Chao Chen , Jianhui Wu , Hong Li , Cheng Huang , Meng Zhang
Abstract: A self-reconfigurable returnable mixer includes a self-reconfigurable transconductance stage. The input RF voltage signal is converted into an RF current through the self-reconfigurable transconductance stage. The RF current is converted into an IF signal through down-conversion and low-pass filtering, and the IF signal is fed back to the reconfigurable transconductance stage. The self-reconfigurable transconductance stage presents an open-loop structure to the input RF voltage signal, while presenting the topology of a negative-feedback amplifier to the fed-back IF signal. The self-reconfigurable transconductance stage thus achieves high-linearity IF gain while providing high bandwidth for the RF signal, effectively alleviating the trade-off between conversion gain and IF linearity in the conventional returnable structure.
-
Publication No.: US12154026B2
Publication Date: 2024-11-26
Application No.: US17284480
Filing Date: 2020-01-09
Applicant: SOUTHEAST UNIVERSITY
Inventor: Shengli Lu , Wei Pang , Ruili Wu , Yingbo Fan , Hao Liu , Cheng Huang
Abstract: A deep neural network hardware accelerator comprises: an AXI-4 bus interface, an input cache area, an output cache area, a weighting cache area, a weighting index cache area, an encoding module, a configurable state controller module, and a PE array. The input cache area and the output cache area are designed as a line cache structure. An encoder encodes the weightings according to an ordered quantization set, which stores all possible absolute values of the quantized weightings. During calculation, each PE unit reads data from the input cache area and the weighting index cache area, performs a shift calculation, and sends the result to the output cache area. The accelerator replaces floating-point multiplication with shift operations, reducing the requirements for computing resources, storage resources, and communication bandwidth, and increasing the calculation efficiency of the accelerator.
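The core idea of this abstract, replacing floating-point multiplication with bit shifts once weights are quantized to powers of two, can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual encoding scheme; the function names, the exponent range, and the sign/exponent storage format are all assumptions made for clarity.

```python
def quantize_to_pow2(w, exponents=range(-4, 4)):
    """Snap |w| to the nearest power of two drawn from an ordered
    quantization set; store only a sign and an exponent index."""
    if w == 0:
        return 0, None  # zero weight: nothing to shift
    best = min(exponents, key=lambda e: abs(abs(w) - 2.0 ** e))
    sign = 1 if w > 0 else -1
    return sign, best


def shift_multiply(x, sign, exp):
    """Multiply integer activation x by a power-of-two weight
    using only shifts and negation (no floating-point multiply)."""
    if sign == 0:
        return 0
    shifted = x << exp if exp >= 0 else x >> -exp
    return shifted if sign > 0 else -shifted


# Example: a weight of 0.26 quantizes to 2^-2, so multiplying an
# activation of 64 becomes a right shift by 2.
sign, exp = quantize_to_pow2(0.26)
print(shift_multiply(64, sign, exp))  # 64 >> 2 = 16
```

In hardware, the PE array would read only the small exponent indices from the weighting index cache area, which is what reduces the storage and bandwidth requirements the abstract mentions.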
-