-
Publication No.: US20240118932A1
Publication Date: 2024-04-11
Application No.: US18276372
Filing Date: 2022-01-20
Applicant: LYNXI TECHNOLOGIES CO., LTD
Inventor: Zhenzhi WU , Ruiqiang DING , Wei HE
IPC: G06F9/50
CPC classification number: G06F9/5038
Abstract: Provided are a signal processing method based on a many-core chip, an electronic device and a medium. The method includes: determining, according to a time domain signal to be processed and a time-frequency transform type of the time domain signal, a transform kernel matrix of the time domain signal; mapping the transform kernel matrix to a plurality of processing cores of the many-core chip; and mapping the time domain signal to the plurality of processing cores so that the plurality of processing cores determine, according to the transform kernel matrix and the time domain signal, a frequency domain signal corresponding to the time domain signal.
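For illustration, a minimal NumPy sketch of the idea in this abstract follows: a DFT transform kernel matrix is split row-wise across simulated processing cores, and each core computes its block of the frequency-domain signal. The core count, the row-wise split, and all function names are assumptions for illustration, not the patented mapping scheme.

```python
# Illustrative sketch only: build a DFT kernel matrix, split it across
# simulated "processing cores", and let each core compute its slice of
# the frequency-domain signal. Not the patented many-core mapping.
import numpy as np

def dft_kernel(n: int) -> np.ndarray:
    """Return the n x n DFT transform kernel matrix W[k, t] = exp(-2j*pi*k*t/n)."""
    k = np.arange(n).reshape(-1, 1)
    t = np.arange(n).reshape(1, -1)
    return np.exp(-2j * np.pi * k * t / n)

def many_core_transform(signal: np.ndarray, num_cores: int = 4) -> np.ndarray:
    """Split the kernel matrix row-wise; each 'core' produces one block of the spectrum."""
    kernel = dft_kernel(len(signal))
    row_blocks = np.array_split(kernel, num_cores, axis=0)      # one block per core
    partial_spectra = [block @ signal for block in row_blocks]  # per-core matrix-vector product
    return np.concatenate(partial_spectra)

if __name__ == "__main__":
    x = np.random.rand(64)                                      # time-domain signal
    assert np.allclose(many_core_transform(x), np.fft.fft(x))   # matches a reference FFT
```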
-
Publication No.: US20230196103A1
Publication Date: 2023-06-22
Application No.: US18015065
Filing Date: 2021-07-08
Applicant: LYNXI TECHNOLOGIES CO., LTD.
Inventor: Wei HE , Yaolong ZHU , Han LI
IPC: G06N3/08
CPC classification number: G06N3/08
Abstract: An embodiment of the present disclosure provides a weight precision configuration method, including: determining a pre-trained preset neural network including a plurality of layers, each having a preset weight precision; reducing, based on a current threshold, the weight precision of at least one layer in the preset neural network to obtain a corrected neural network having a recognition rate greater than the current threshold, wherein reducing the weight precision of a layer includes: adjusting the weight precision of the layer; setting, if a termination condition is met, the weight precision of the layer to a corrected weight precision that is less than or equal to the preset weight precision of the layer; and returning, if the termination condition is not met, to the operation of adjusting the weight precision of the layer; and determining a final weight precision of each layer to obtain a final neural network.
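A minimal Python sketch of the layer-wise precision-reduction loop follows; the helper `evaluate_recognition_rate`, the candidate bit-widths, and the greedy stop-on-drop rule are hypothetical stand-ins for the adjustment and termination conditions defined in the patent.

```python
# Hedged sketch: lower each layer's weight precision while the recognition
# rate stays above the current threshold. The evaluation callback and the
# candidate bit-widths are illustrative assumptions.
from typing import Callable, Dict, List, Sequence

def reduce_layer_precisions(
    layers: List[str],
    preset_precision: Dict[str, int],                        # e.g. {"conv1": 16, "fc": 16} bits
    evaluate_recognition_rate: Callable[[Dict[str, int]], float],
    threshold: float,
    candidate_bits: Sequence[int] = (8, 4, 2),               # tried from higher to lower precision
) -> Dict[str, int]:
    corrected = dict(preset_precision)
    for layer in layers:
        for bits in candidate_bits:                          # adjust this layer's weight precision
            if bits >= corrected[layer]:
                continue
            trial = dict(corrected, **{layer: bits})
            if evaluate_recognition_rate(trial) > threshold:
                corrected[layer] = bits                      # corrected precision <= preset precision
            else:
                break                                        # rate dropped: stop adjusting this layer
    return corrected

if __name__ == "__main__":
    # Dummy evaluator: recognition rate falls once any layer drops below 4 bits.
    dummy_rate = lambda cfg: 0.95 if min(cfg.values()) >= 4 else 0.80
    print(reduce_layer_precisions(["conv1", "fc"], {"conv1": 16, "fc": 16},
                                  dummy_rate, threshold=0.90))   # -> {'conv1': 4, 'fc': 4}
```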
-
Publication No.: US20230089320A1
Publication Date: 2023-03-23
Application No.: US17909417
Filing Date: 2021-08-17
Applicant: LYNXI TECHNOLOGIES CO., LTD.
Inventor: Wei HE , Yangshu SHEN , Yaolong ZHU
IPC: G06F30/392 , G06F30/33
Abstract: The disclosed method is applicable to a many-core system. The method includes: acquiring multiple pieces of routing information, each of which includes two logical nodes and a data transmission amount between the two logical nodes; determining a piece of unprocessed routing information with a maximum data transmission amount as current routing information; mapping each unlocked logical node of the current routing information to one unlocked processing node, and locking the mapped logical node and processing node, wherein if there is an unlocked edge processing node, the unlocked logical node is mapped to the unlocked edge processing node; and returning, if there is at least one unlocked logical node, to the step of determining the piece of unprocessed routing information with the maximum data transmission amount as the current routing information.
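The greedy loop in this abstract can be sketched as follows; the edge/inner node split, the route tuple format, and all names are illustrative assumptions about the many-core topology rather than the disclosed placement rule.

```python
# Sketch of the greedy mapping loop: handle routing entries in descending
# order of data transmission amount, map each unlocked logical node to an
# unlocked processing node (preferring edge nodes), and lock both.
from typing import Dict, List, Tuple

Route = Tuple[str, str, int]   # (logical node A, logical node B, data transmission amount)

def map_logical_to_processing(
    routes: List[Route],
    edge_nodes: List[str],
    inner_nodes: List[str],
) -> Dict[str, str]:
    mapping: Dict[str, str] = {}                     # locked logical node -> processing node
    free_edge, free_inner = list(edge_nodes), list(inner_nodes)
    # Unprocessed routing information with the maximum data amount comes first.
    for a, b, _amount in sorted(routes, key=lambda r: r[2], reverse=True):
        for logical in (a, b):
            if logical in mapping:                   # already locked
                continue
            pool = free_edge if free_edge else free_inner   # prefer an unlocked edge node
            if not pool:
                raise RuntimeError("no unlocked processing node left")
            mapping[logical] = pool.pop(0)           # lock logical node and processing node
    return mapping

if __name__ == "__main__":
    routes = [("n0", "n1", 120), ("n1", "n2", 300), ("n2", "n3", 50)]
    print(map_logical_to_processing(routes, edge_nodes=["e0", "e1"], inner_nodes=["p0", "p1"]))
```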
-
Publication No.: US20220091906A1
Publication Date: 2022-03-24
Application No.: US17434736
Filing Date: 2020-12-21
Applicant: LYNXI TECHNOLOGIES CO., LTD.
Inventor: Yangshu SHEN , Yaolong ZHU , Wei HE , Luping SHI
Abstract: Embodiments of the present disclosure provide a multitask parallel processing method and apparatus, a computer device and a storage medium. The method is applied to a neural network consisting of a plurality of nodes, the neural network including at least one closed-loop path, and the method includes: inputting a data sequence to be computed into the neural network in the form of data packets, each of the data packets including multiple pieces of data; and computing, by the nodes in the closed-loop path, all the data in a currently received data packet each time a computation flow is started.
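A toy simulation of the packet-wise scheduling described here follows; the ring of simple node functions and the packet size are assumptions used only to show the "whole packet per computation flow" behaviour, not the disclosed network.

```python
# Toy sketch: chunk the data sequence into packets; every node on the
# closed-loop path processes the ENTIRE current packet each time a
# computation flow starts. Node functions here are placeholders.
from typing import Callable, List

def run_closed_loop(
    data_sequence: List[float],
    node_fns: List[Callable[[float], float]],   # nodes forming the closed-loop path
    packet_size: int,
) -> List[float]:
    packets = [data_sequence[i:i + packet_size]
               for i in range(0, len(data_sequence), packet_size)]
    results: List[float] = []
    for packet in packets:                      # one computation flow per received packet
        for node in node_fns:                   # each node computes all data in the packet
            packet = [node(x) for x in packet]
        results.extend(packet)
    return results

if __name__ == "__main__":
    ring = [lambda x: x + 1.0, lambda x: x * 2.0, lambda x: x - 0.5]
    print(run_closed_loop([1.0, 2.0, 3.0, 4.0], ring, packet_size=2))
```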
-
Publication No.: US20230099117A1
Publication Date: 2023-03-30
Application No.: US17800176
Filing Date: 2021-04-22
Applicant: LYNXI TECHNOLOGIES CO., LTD.
Inventor: Zhenzhi WU , Yaolong ZHU , Luojun JIN , Wei HE , Qikun ZHANG
IPC: G06N3/063
Abstract: A computing core circuit, including: an encoding module, a route sending module, and a control module, wherein the control module is configured to control the encoding module to perform encoding processing on a pulse sequence determined by pulses to be transmitted of at least one neuron in a current computing core, so as to obtain an encoded pulse sequence, and control the route sending module to determine a corresponding route packet according to the encoded pulse sequence, so as to send the route packet. The present disclosure further provides a data processing method, a chip, a board, an electronic device, and a computer-readable storage medium.
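As a loose software analogue of the circuit described here (the actual encoding and packet formats are hardware-defined and not given in the abstract), the sketch below encodes a neuron pulse sequence with a hypothetical address-event scheme and wraps it into a route packet.

```python
# Hedged software analogue: encode the pulse (spike) sequence of a core's
# neurons and wrap the result into a route packet. The packet fields and
# the address-event encoding are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RoutePacket:
    source_core: int
    encoded_pulses: List[int] = field(default_factory=list)  # indices of neurons that fired

def encode_pulse_sequence(pulses: List[int]) -> List[int]:
    """Encode a binary pulse sequence as the indices of the firing neurons."""
    return [i for i, p in enumerate(pulses) if p]

def build_route_packet(core_id: int, pulses: List[int]) -> RoutePacket:
    """Control step: encode the pulses, then hand the result to the route sender."""
    return RoutePacket(source_core=core_id, encoded_pulses=encode_pulse_sequence(pulses))

if __name__ == "__main__":
    print(build_route_packet(core_id=3, pulses=[0, 1, 1, 0, 1]))
```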
-