-
Publication No.: US11967175B2
Publication Date: 2024-04-23
Application No.: US18322517
Filing Date: 2023-05-23
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Sannyuya Liu , Zongkai Yang , Xiaoliang Zhu , Zhicheng Dai , Liang Zhao
IPC: G06K9/00 , G06V10/24 , G06V10/62 , G06V10/77 , G06V10/80 , G06V10/82 , G06V20/40 , G06V40/16 , G06V10/774
CPC classification number: G06V40/165 , G06V10/247 , G06V10/62 , G06V10/7715 , G06V10/806 , G06V10/82 , G06V20/41 , G06V40/171 , G06V40/174 , G06V10/774
Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting the faces contained in each video frame of a video sequence and extracting the corresponding facial ROIs, so as to obtain facial pictures for each video frame; aligning the facial pictures in each video frame on the basis of the location information of their facial feature points; inputting the aligned facial pictures into a residual neural network and extracting spatial features of the facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer to classify and recognize the facial expressions.
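
A minimal PyTorch sketch of the pipeline the abstract describes (ResNet spatial features, hybrid attention fusion, GRU temporal features, fully connected classifier). The internal structure of the hybrid attention module is not specified in the abstract; the channel-attention plus self-attention combination, the ResNet-18 backbone, and the hyperparameters below are illustrative assumptions, not the patented implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HybridAttention(nn.Module):
    """Assumed fusion: channel re-weighting plus self-attention across frames."""
    def __init__(self, dim):
        super().__init__()
        self.channel_gate = nn.Sequential(nn.Linear(dim, dim // 8), nn.ReLU(),
                                          nn.Linear(dim // 8, dim), nn.Sigmoid())
        self.self_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, x):                    # x: (batch, frames, dim)
        x = x * self.channel_gate(x)         # re-weight feature channels per frame
        fused, _ = self.self_attn(x, x, x)   # fuse information across frames
        return fused

class ExpressionRecognizer(nn.Module):
    def __init__(self, num_classes=7, dim=512):
        super().__init__()
        backbone = resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # spatial features per frame
        self.attention = HybridAttention(dim)
        self.gru = nn.GRU(dim, 256, batch_first=True)               # temporal features
        self.fc = nn.Linear(256, num_classes)                       # expression classifier

    def forward(self, clips):                # clips: (batch, frames, 3, H, W) of aligned face crops
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1).view(b, t, -1)
        temporal, _ = self.gru(self.attention(feats))
        return self.fc(temporal[:, -1])      # classify from the last time step

if __name__ == "__main__":
    model = ExpressionRecognizer()
    print(model(torch.randn(2, 8, 3, 112, 112)).shape)  # -> torch.Size([2, 7])
```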
-
Publication No.: US12036021B2
Publication Date: 2024-07-16
Application No.: US18511919
Filing Date: 2023-11-16
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Liang Zhao , Sannyuya Liu , Zongkai Yang , Xiaoliang Zhu , Jianwen Sun , Qing Li , Zhicheng Dai
IPC: A61B8/14 , A61B5/00 , A61B5/0205 , A61B5/1171 , A61B5/16 , G06N3/0464 , G06N3/08 , G06V10/30 , G06V40/16 , A61B5/024 , A61B5/08
CPC classification number: A61B5/16 , A61B5/0205 , A61B5/1176 , A61B5/725 , A61B5/726 , A61B5/7264 , G06N3/0464 , G06N3/08 , G06V10/30 , G06V40/161 , A61B5/02427 , A61B5/0816 , G06V2201/03
Abstract: The present disclosure provides a non-contact fatigue detection system and method based on rPPG. The system and method adopt multi-thread synchronous communication for real-time acquisition and processing of rPPG signals, enabling fatigue status detection. In this setup, the first thread handles real-time rPPG data capture, storage and concatenation, while the second thread conducts real-time analysis and fatigue detection of the rPPG data. Through a combination of skin detection and LUV color space conversion, the raw rPPG signal is extracted while interference from internal and external facial and environmental noise is effectively eliminated. Subsequently, an adaptive multi-stage filtering process enhances the signal-to-noise ratio, and a multi-dimensional fusion CNN model ensures accurate detection of respiration and heart rate. The final step fuses the respiration and heartbeat channels, which not only learns person-independent features for fatigue detection but also detects early fatigue with very high accuracy.
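
A minimal two-thread sketch of the acquisition/analysis split described in the abstract: one thread extracts a raw rPPG sample per frame via skin masking in LUV space, the other concatenates samples, filters them, and estimates heart rate. The synthetic frame source, skin-mask threshold, LUV channel choice, filter band, and window length are illustrative assumptions; the patented adaptive multi-stage filtering and the multi-dimensional fusion CNN are only indicated in comments.

```python
import queue
import threading
import numpy as np
import cv2
from scipy.signal import butter, filtfilt

samples = queue.Queue()
WINDOW = 256          # assumed analysis window (frames)
FPS = 30.0            # assumed camera frame rate

def capture_thread(num_frames=512):
    """Thread 1: capture frames, extract one raw rPPG sample per frame, enqueue it."""
    for _ in range(num_frames):
        # Stand-in for a real camera read (e.g. cv2.VideoCapture): a synthetic BGR face patch.
        frame = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
        luv = cv2.cvtColor(frame, cv2.COLOR_BGR2LUV)
        # Crude luminance-threshold mask as a placeholder for the patent's skin-detection step.
        mask = (luv[:, :, 0] > 40) & (luv[:, :, 0] < 230)
        samples.put(float(luv[:, :, 1][mask].mean()) if mask.any() else 0.0)
    samples.put(None)  # sentinel: capture finished

def analysis_thread():
    """Thread 2: concatenate samples, band-pass filter, then estimate heart rate."""
    signal = []
    while True:
        s = samples.get()
        if s is None:
            break
        signal.append(s)
        if len(signal) >= WINDOW:
            window = np.asarray(signal[-WINDOW:])
            # Assumed fixed heart-rate band 0.7-3 Hz; the patent instead applies
            # adaptive multi-stage filtering to both respiration and heart-rate channels.
            b, a = butter(3, [0.7 / (FPS / 2), 3.0 / (FPS / 2)], btype="band")
            filtered = filtfilt(b, a, window - window.mean())
            freqs = np.fft.rfftfreq(WINDOW, d=1.0 / FPS)
            hr_bpm = 60.0 * freqs[np.argmax(np.abs(np.fft.rfft(filtered)))]
            # In the described system, the filtered respiration and heart-rate channels
            # would be fed to a multi-dimensional fusion CNN for fatigue detection.
            print(f"estimated heart rate ~ {hr_bpm:.1f} bpm")

t1 = threading.Thread(target=capture_thread)
t2 = threading.Thread(target=analysis_thread)
t1.start(); t2.start(); t1.join(); t2.join()
```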
-