1.
Publication No.: US20210287095A1
Publication Date: 2021-09-16
Application No.: US16816117
Filing Date: 2020-03-11
Applicant: QUALCOMM Incorporated
Inventor: Jamie Menjay LIN , Edwin Chongwoo PARK , Nojun KWAK
Abstract: A method for operating a low-bitwidth neural network includes converting a first activation to a non-negative value (e.g., absolute value). The first activation has a signed value. The sign of the activation is used to select a weight value. A product of the non-negative activation and the selected weight value is computed to determine a next activation. The next activation is quantized and supplied to a subsequent layer of the low-bitwidth neural network.
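The sign-based weight selection described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the two-weight selection (`w_pos`/`w_neg`), the uniform quantizer, and the 4-bit default are all assumptions for demonstration.

```python
import numpy as np

def sign_select_layer(activation, w_pos, w_neg, n_bits=4):
    """Sketch of sign-based weight selection for a low-bitwidth layer.

    The signed activation is split into a non-negative magnitude and a
    sign; the sign selects between two weight values (w_pos/w_neg are
    illustrative names, not from the patent), and the product of the
    magnitude and the selected weight forms the next activation, which
    is quantized before being supplied to the subsequent layer.
    """
    magnitude = np.abs(activation)                       # non-negative value of the activation
    selected = np.where(activation >= 0, w_pos, w_neg)   # sign selects the weight value
    next_act = magnitude * selected                      # product forms the next activation
    # uniform quantization to n_bits (quantizer choice is an assumption)
    levels = 2 ** n_bits - 1
    scale = np.max(np.abs(next_act)) or 1.0
    return np.round(next_act / scale * levels) / levels * scale
```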
2.
Publication No.: US20220065981A1
Publication Date: 2022-03-03
Application No.: US17461516
Filing Date: 2021-08-30
Applicant: QUALCOMM Incorporated
Inventor: Jamie Menjay LIN , Nojun KWAK , Fatih Murat PORIKLI
Abstract: Certain aspects of the present disclosure provide techniques for machine learning using basis decomposition, comprising receiving a first runtime record, where the first runtime record includes RF signal data collected in a physical space; processing the first runtime record using a plurality of basis machine learning (ML) models to generate a plurality of inferences; aggregating the plurality of inferences to generate a prediction comprising a plurality of coordinates; and outputting the prediction, where the plurality of coordinates indicate a location of a physical element in the physical space.
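The inference flow in this abstract can be sketched as below. The weighted-average aggregation and the callable-model interface are assumptions for illustration; the patent does not specify the aggregation scheme here.

```python
import numpy as np

def predict_location(rf_record, basis_models, weights=None):
    """Sketch of basis-decomposition inference over RF signal data.

    Each basis ML model maps the runtime record to a coordinate
    inference; the inferences are aggregated (here a weighted average,
    an assumed scheme) into a single coordinate prediction indicating
    the location of a physical element in the physical space.
    """
    # one inference per basis model
    inferences = np.stack([model(rf_record) for model in basis_models])
    if weights is None:
        weights = np.full(len(basis_models), 1.0 / len(basis_models))
    # aggregate the plurality of inferences into one prediction
    return np.average(inferences, axis=0, weights=weights)
```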
3.
Publication No.: US20210150347A1
Publication Date: 2021-05-20
Application No.: US17098159
Filing Date: 2020-11-13
Applicant: QUALCOMM Incorporated
Abstract: Aspects described herein provide a method of performing guided training of a neural network model, including: receiving supplementary domain feature data; providing the supplementary domain feature data to a fully connected layer of a neural network model; receiving from the fully connected layer supplementary domain feature scaling data; providing the supplementary domain feature scaling data to an activation function; receiving from the activation function supplementary domain feature weight data; receiving a set of feature maps from a first convolution layer of the neural network model; fusing the supplementary domain feature weight data with the set of feature maps to form fused feature maps; and providing the fused feature maps to a second convolution layer of the neural network model.
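The fusion path in this abstract can be sketched as follows. The sigmoid activation and channel-wise multiplication as the fusion operator are assumptions made for this illustration; the abstract names an activation function and a fusing step but not their exact forms.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_supplementary_features(feature_maps, supp_features, fc_weights, fc_bias):
    """Sketch of the guided-training fusion path.

    Supplementary domain feature data passes through a fully connected
    layer to produce feature scaling data; an activation function
    (sigmoid here, an assumption) turns the scaling data into feature
    weight data, which is fused with the first convolution layer's
    (C, H, W) feature maps by channel-wise multiplication (an assumed
    fusion operator) before the result goes to the second convolution layer.
    """
    scaling = supp_features @ fc_weights + fc_bias   # fully connected layer
    channel_weights = sigmoid(scaling)               # activation -> feature weight data
    # fuse: scale each channel of the feature maps by its weight
    return feature_maps * channel_weights[:, None, None]
```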