-
Publication No.: US20190325203A1
Publication Date: 2019-10-24
Application No.: US16471106
Filing Date: 2017-01-20
Applicant: INTEL CORPORATION
Inventor: Anbang Yao , Dongqi Cai , Ping Hu , Shandong Wang , Yurong Chen
Abstract: An apparatus for dynamic emotion recognition in unconstrained scenarios is described herein. The apparatus comprises a controller to pre-process image data and a phase-convolution mechanism to build the lower layers of a CNN such that the filters form pairs in phase. The apparatus also comprises a phase-residual mechanism configured to build the middle layers of the CNN via a plurality of residual functions, and an inception-residual mechanism to build the top layers of the CNN by introducing multi-scale feature extraction. Further, the apparatus comprises a fully connected mechanism to classify the extracted features.
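The layered construction described above can be pictured with a short, non-authoritative sketch. The CReLU-style reading of "filters form pairs in phase" (each filter paired with its negation), the residual block, and the multi-scale inception-residual block below are illustrative assumptions, not the patented architecture:

```python
# Illustrative sketch only; layer sizes and the filter/anti-filter pairing are assumptions.
import torch
import torch.nn as nn

class PhaseConv(nn.Module):
    """Lower layers: each learned filter is paired with its negation (opposite phase)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
    def forward(self, x):
        y = self.conv(x)
        return torch.relu(torch.cat([y, -y], dim=1))  # pair each response with its inverse

class ResidualBlock(nn.Module):
    """Middle layers: an identity shortcut around two convolutions (a residual function)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))

class InceptionResidual(nn.Module):
    """Top layers: multi-scale branches (1x1, 3x3, 5x5) fused and added to a shortcut.
    `ch` must be divisible by 4 so the branch outputs sum back to `ch` channels."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch // 2, 1)
        self.b3 = nn.Conv2d(ch, ch // 4, 3, padding=1)
        self.b5 = nn.Conv2d(ch, ch // 4, 5, padding=2)
    def forward(self, x):
        y = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)
        return torch.relu(x + y)
```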
-
Publication No.: US12217163B2
Publication Date: 2025-02-04
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/063 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps; soft attention is then applied by weighting the selected feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
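A minimal sketch of the RDQN attention path described in the second example is given below; the feature-map scoring, the top-k value, and all tensor shapes are assumptions made for illustration rather than the patented design:

```python
# Hedged sketch: hard attention selects k feature maps, soft attention weights them,
# the weighted result is stored in an LSTM, and Q values are read out per action.
import torch
import torch.nn as nn

class LocalAttentionRDQN(nn.Module):
    def __init__(self, n_maps=64, k=16, hidden=256, n_actions=4):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(3, n_maps, 8, stride=4), nn.ReLU())
        self.score = nn.Linear(n_maps, n_maps)          # one attention score per feature map
        self.lstm = nn.LSTM(k, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, n_actions)
        self.k = k

    def forward(self, img, state=None):
        maps = self.cnn(img)                            # (B, n_maps, H, W)
        pooled = maps.mean(dim=(2, 3))                  # per-map summary, (B, n_maps)
        scores = self.score(pooled)
        top_scores, idx = scores.topk(self.k, dim=1)    # hard attention: keep k maps
        weights = torch.softmax(top_scores, dim=1)      # soft attention: weight them
        selected = pooled.gather(1, idx) * weights      # weighted feature summaries
        out, state = self.lstm(selected.unsqueeze(1), state)   # stored in the LSTM
        return self.q_head(out[:, -1]), state           # Q value for each action

q_values, _ = LocalAttentionRDQN()(torch.rand(2, 3, 84, 84))
```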
-
Publication No.: US20240086693A1
Publication Date: 2024-03-14
Application No.: US18371934
Filing Date: 2023-09-22
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06N3/063 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/44 , G06V10/764 , G06V10/82 , G06V10/94 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/2148 , G06F18/217 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps; soft attention is then applied by weighting the selected feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
-
Publication No.: US11803739B2
Publication Date: 2023-10-31
Application No.: US17584216
Filing Date: 2022-01-25
Applicant: Intel Corporation
Inventor: Yiwen Guo , Yuqing Hou , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen , Libin Wang
IPC: G06K9/62 , G06N3/063 , G06N3/08 , G06V10/94 , G06F18/21 , G06F18/213 , G06F18/214 , G06N3/044 , G06N3/045 , G06V10/764 , G06V10/82 , G06V10/44 , G06V20/00
CPC classification number: G06N3/063 , G06F18/213 , G06F18/217 , G06F18/2148 , G06N3/044 , G06N3/045 , G06N3/08 , G06V10/454 , G06V10/764 , G06V10/82 , G06V10/94 , G06V10/955 , G06V20/00
Abstract: Methods and systems for budgeted and simplified training of deep neural networks (DNNs) are disclosed. In one example, a trainer is to train a DNN using a plurality of training sub-images derived from a down-sampled training image, and a tester is to test the trained DNN using a plurality of testing sub-images derived from a down-sampled testing image. In another example, in a recurrent deep Q-network (RDQN) having a local attention mechanism located between a convolutional neural network (CNN) and a long short-term memory (LSTM), a plurality of feature maps are generated by the CNN from an input image. Hard attention is applied by the local attention mechanism by selecting a subset of the generated feature maps; soft attention is then applied by weighting the selected feature maps to obtain weighted feature maps. The weighted feature maps are stored in the LSTM, and a Q value is calculated for different actions based on the weighted feature maps stored in the LSTM.
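The first example in this abstract (training on sub-images cut from a down-sampled image) is complementary to the attention sketch shown with the related publication above; a minimal, assumed version of that preprocessing step could look like this:

```python
# Hedged sketch of the budgeted preprocessing: down-sample, then tile into sub-images.
# The scale factor, sub-image size, and stride are illustrative assumptions.
import numpy as np

def derive_sub_images(image, scale=2, sub_size=32, stride=32):
    """image: (H, W, C) array. Returns (N, sub_size, sub_size, C) sub-images."""
    small = image[::scale, ::scale]                    # naive down-sampling
    h, w = small.shape[:2]
    subs = [small[y:y + sub_size, x:x + sub_size]
            for y in range(0, h - sub_size + 1, stride)
            for x in range(0, w - sub_size + 1, stride)]
    return np.stack(subs)

# The same derivation would feed both the trainer and the tester with sub-image batches.
train_subs = derive_sub_images(np.zeros((256, 256, 3), dtype=np.float32))
```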
-
Publication No.: US20230290134A1
Publication Date: 2023-09-14
Application No.: US18019450
Filing Date: 2020-09-25
Applicant: Intel Corporation
Inventor: Ping Hu , Anbang Yao , Xiaolong Liu , Yurong Chen , Dongqi Cai
IPC: G06V10/82 , G06N3/0464 , G06V40/16 , G06V10/77
CPC classification number: G06V10/82 , G06N3/0464 , G06V40/171 , G06V10/7715
Abstract: A method and system for recognizing multiple facial attributes using highly efficient neural networks.
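The abstract gives no architectural detail, so the sketch below is only one plausible, assumed reading: a single lightweight shared backbone with a small head per facial attribute. The attribute names, channel counts, and depthwise-separable stem are all invented for illustration:

```python
import torch
import torch.nn as nn

class MultiAttributeNet(nn.Module):
    """Assumed multi-task layout: shared efficient backbone, one classifier per attribute."""
    def __init__(self, attributes=("smile", "glasses", "gender"), feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1, groups=32),    # depthwise convolution
            nn.Conv2d(32, feat_dim, 1), nn.ReLU(),          # pointwise convolution
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict({a: nn.Linear(feat_dim, 2) for a in attributes})

    def forward(self, x):
        feat = self.backbone(x)
        return {a: head(feat) for a, head in self.heads.items()}

outputs = MultiAttributeNet()(torch.rand(1, 3, 128, 128))   # one logit pair per attribute
```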
-
Publication No.: US11790223B2
Publication Date: 2023-10-17
Application No.: US16475076
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Libin Wang , Yiwen Guo , Anbang Yao , Dongqi Cai , Lin Xu , Ping Hu , Shandong Wang , Wenhua Cheng , Yurong Chen
CPC classification number: G06N3/08 , G06F18/217 , G06F18/2148 , G06N3/045 , G06N3/063 , G06T1/20
Abstract: Methods and systems are disclosed for boosting deep neural networks for deep learning. In one example, in a deep neural network including a first shallow network and a second shallow network, a first training sample is processed by the first shallow network using equal weights. A loss for the first shallow network is determined based on the processed training sample using equal weights. Weights for the second shallow network are adjusted based on the determined loss for the first shallow network. A second training sample is processed by the second shallow network using the adjusted weights. In another example, in a deep neural network including a first weak network and a second weak network, a first subset of training samples is processed by the first weak network using initialized weights. A classification error for the first weak network on the first subset of training samples is determined. The second weak network is boosted using the determined classification error of the first weak network with adjusted weights. A second subset of training samples is processed by the second weak network using the adjusted weights.
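The boosting step in the second example reads like a classical sample-reweighting scheme; the sketch below uses the standard AdaBoost-style update as an assumed stand-in for "boosted ... with adjusted weights", not as the patent's exact rule:

```python
# Hedged sketch: raise the weights of samples the first weak network misclassified,
# then hand the renormalized weights to the second weak network.
import numpy as np

def boost_weights(weights, errors):
    """weights: (N,) sample weights; errors: (N,) bool, True where weak net 1 was wrong."""
    eps = np.clip((weights * errors).sum() / weights.sum(), 1e-12, 1 - 1e-12)
    alpha = 0.5 * np.log((1.0 - eps) / eps)              # confidence of weak network 1
    new_w = weights * np.exp(alpha * np.where(errors, 1.0, -1.0))
    return new_w / new_w.sum()

w0 = np.full(100, 1.0 / 100)                             # equal initial weights
errs = np.zeros(100, dtype=bool); errs[:20] = True       # pretend 20 samples were misclassified
w1 = boost_weights(w0, errs)                             # adjusted weights for the second network
```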
-
Publication No.: US11551335B2
Publication Date: 2023-01-10
Application No.: US16474848
Filing Date: 2017-04-07
Applicant: Intel Corporation
Inventor: Lin Xu , Liu Yang , Anbang Yao , Dongqi Cai , Libin Wang , Ping Hu , Shandong Wang , Wenhua Cheng , Yiwen Guo , Yurong Chen
Abstract: Methods and systems are disclosed that use camera devices for deep channel and Convolutional Neural Network (CNN) images and formats. In one example, image values are captured by a color sensor array in an image capturing device or camera. The image values provide color channel data. The image values captured by the color sensor array are input to a CNN having at least one CNN layer. The CNN provides CNN channel data for each layer. The color channel data and the CNN channel data form a deep channel image that is stored in a memory. In another example, image values are captured by a sensor array. The image values captured by the sensor array are input to a CNN having a first CNN layer. An output is generated at the first CNN layer using the image values captured by the sensor array. The output of the first CNN layer is stored as a feature map of the captured image.
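A few lines are enough to illustrate the deep channel image described in the first example: the raw color channels are stacked with the feature maps produced by a CNN layer. The channel counts and the single convolution below are assumptions for illustration only:

```python
import torch
import torch.nn as nn

sensor_rgb = torch.rand(1, 3, 64, 64)                    # color channel data from the sensor array
first_layer = nn.Conv2d(3, 8, kernel_size=3, padding=1)  # one CNN layer (assumed size)
feature_map = torch.relu(first_layer(sensor_rgb))        # CNN channel data / feature map
deep_channel_image = torch.cat([sensor_rgb, feature_map], dim=1)  # 3 + 8 = 11 channels
# The combined tensor is the deep channel image that would be stored in memory.
```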
-
Publication No.: US11341368B2
Publication Date: 2022-05-24
Application No.: US16475079
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Anbang Yao , Shandong Wang , Wenhua Cheng , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Yiwen Guo , Liu Yang , Yuqing Hou , Zhou Su , Yurong Chen
Abstract: Methods and systems are disclosed for advanced and augmented training of deep neural networks (DNNs) using synthetic data and innovative generative networks. A method includes training a DNN using synthetic data, training a plurality of DNNs using context data, associating features of the DNNs trained using context data with features of the DNN trained with synthetic data, and generating an augmented DNN using the associated features.
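How the features are "associated" is not specified in the abstract; the sketch below simply concatenates features from the synthetic-data DNN with features from the context-data DNNs ahead of a shared head, which is an assumed fusion rule rather than the patented one:

```python
import torch
import torch.nn as nn

feat_synth = torch.rand(1, 128)                      # features from the DNN trained on synthetic data
feat_ctx = [torch.rand(1, 64), torch.rand(1, 64)]    # features from DNNs trained on context data
fused = torch.cat([feat_synth, *feat_ctx], dim=1)    # associated feature vector (256-d)
augmented_head = nn.Linear(256, 10)                  # head of the augmented DNN (assumed 10 classes)
logits = augmented_head(fused)
```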
-
Publication No.: US11107189B2
Publication Date: 2021-08-31
Application No.: US16474927
Filing Date: 2017-04-07
Applicant: INTEL CORPORATION
Inventor: Shandong Wang , Yiwen Guo , Anbang Yao , Dongqi Cai , Libin Wang , Lin Xu , Ping Hu , Wenhua Cheng , Yurong Chen
IPC: G06K9/00 , G06T3/40 , G06N20/20 , G06N20/10 , G06K9/62 , G06N3/04 , G06N3/08 , G06N5/04 , G06T1/20
Abstract: Methods and systems are disclosed using improved Convolutional Neural Networks (CNNs) for image processing. In one example, an input image is down-sampled into smaller images with a lower resolution than the input image. The down-sampled smaller images are processed by a CNN whose last layer has fewer nodes than the last layer of a full CNN used to process the input image at full resolution. A result is output by this reduced CNN based on the processed down-sampled smaller images. In another example, shallow CNNs are built randomly. The randomly built shallow CNNs are combined to imitate a trained deep neural network (DNN).
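For the second example, a rough, assumed sketch of combining randomly built shallow CNNs is shown below; averaging their outputs is only one possible combination rule, chosen here for illustration:

```python
import torch
import torch.nn as nn

def random_shallow_cnn(num_classes=10):
    """A tiny randomly initialized CNN; depth and widths are illustrative assumptions."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_classes),
    )

ensemble = [random_shallow_cnn() for _ in range(5)]      # randomly built shallow networks
x = torch.rand(1, 3, 64, 64)                             # down-sampled input image
imitated = torch.stack([net(x) for net in ensemble]).mean(dim=0)   # combined prediction
```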
-
Publication No.: US20200279156A1
Publication Date: 2020-09-03
Application No.: US16645425
Filing Date: 2017-10-09
Applicant: INTEL CORPORATION
Inventor: Dongqi Cai , Anbang Yao , Ping Hu , Shandong Wang , Yurong Chen
Abstract: A system to perform multi-modal analysis has at least three distinct characteristics: an early abstraction layer for each data modality that integrates homogeneous feature cues coming from different deep learning architectures for that modality, a late abstraction layer that further integrates the heterogeneous features extracted from different models or data modalities and output from the early abstraction layers, and a propagation-down strategy for joint network training in an end-to-end manner. The system is thus able to consider correlations among homogeneous features and correlations among heterogeneous features at different levels of abstraction. The system further extracts and fuses discriminative information contained in these models and modalities for high-performance emotion recognition.
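The two abstraction stages can be pictured with a short sketch; the number of modalities, feature sizes, and fusion by concatenation are assumptions, not the patented design:

```python
import torch
import torch.nn as nn

class MultiModalEmotionNet(nn.Module):
    """Early abstraction per modality, then late abstraction across modalities."""
    def __init__(self, models_per_modality=2, feat=64, n_emotions=7):
        super().__init__()
        self.early_audio = nn.Linear(models_per_modality * feat, feat)   # fuses homogeneous audio features
        self.early_video = nn.Linear(models_per_modality * feat, feat)   # fuses homogeneous video features
        self.late = nn.Sequential(nn.Linear(2 * feat, feat), nn.ReLU(),
                                  nn.Linear(feat, n_emotions))           # fuses heterogeneous features

    def forward(self, audio_feats, video_feats):
        a = torch.relu(self.early_audio(torch.cat(audio_feats, dim=1)))
        v = torch.relu(self.early_video(torch.cat(video_feats, dim=1)))
        # End-to-end training lets gradients propagate down through both stages.
        return self.late(torch.cat([a, v], dim=1))

net = MultiModalEmotionNet()
scores = net([torch.rand(1, 64)] * 2, [torch.rand(1, 64)] * 2)   # per-emotion scores
```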
-