-
Publication number: US11967175B2
Publication date: 2024-04-23
Application number: US18322517
Filing date: 2023-05-23
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Sannyuya Liu , Zongkai Yang , Xiaoliang Zhu , Zhicheng Dai , Liang Zhao
IPC: G06K9/00 , G06V10/24 , G06V10/62 , G06V10/77 , G06V10/80 , G06V10/82 , G06V20/40 , G06V40/16 , G06V10/774
CPC classification number: G06V40/165 , G06V10/247 , G06V10/62 , G06V10/7715 , G06V10/806 , G06V10/82 , G06V20/41 , G06V40/171 , G06V40/174 , G06V10/774
Abstract: Provided are a facial expression recognition method and system combined with an attention mechanism. The method comprises: detecting faces contained in each video frame of a video sequence, and extracting the corresponding facial ROIs, so as to obtain facial pictures in each video frame; aligning the facial pictures in each video frame on the basis of location information of facial feature points of the facial pictures; inputting the aligned facial pictures into a residual neural network, and extracting spatial features of facial expressions corresponding to the facial pictures; inputting the spatial features of the facial expressions into a hybrid attention module to acquire fused features of the facial expressions; inputting the fused features of the facial expressions into a gated recurrent unit, and extracting temporal features of the facial expressions; and inputting the temporal features of the facial expressions into a fully connected layer, and classifying and recognizing the facial expressions.
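The pipeline above feeds per-frame spatial features through a hybrid attention module before the recurrent stage. A minimal NumPy sketch of one plausible attention-fusion step is shown below; the two scoring vectors and all function names are illustrative assumptions, not the patented architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hybrid_attention_fuse(frame_feats, w_self, w_ctx):
    """Fuse per-frame spatial features through a two-branch attention score.

    frame_feats: (T, D) array, one ResNet-style feature row per aligned face.
    w_self, w_ctx: (D,) scoring vectors standing in for two learned
    attention branches (hypothetical, for illustration only).
    Returns (T, D) fused features ready for a recurrent (GRU) stage.
    """
    scores = frame_feats @ w_self + frame_feats @ w_ctx  # (T,) per-frame score
    alpha = softmax(scores)                              # attention weights, sum to 1
    context = alpha @ frame_feats                        # (D,) weighted video context
    return frame_feats + context[None, :]                # residual-style fusion
```

The residual addition keeps each frame's own spatial feature while mixing in a video-level context vector, which is one common way such fused features are prepared for a GRU.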
-
Publication number: US12036021B2
Publication date: 2024-07-16
Application number: US18511919
Filing date: 2023-11-16
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Liang Zhao , Sannyuya Liu , Zongkai Yang , Xiaoliang Zhu , Jianwen Sun , Qing Li , Zhicheng Dai
IPC: A61B8/14 , A61B5/00 , A61B5/0205 , A61B5/1171 , A61B5/16 , G06N3/0464 , G06N3/08 , G06V10/30 , G06V40/16 , A61B5/024 , A61B5/08
CPC classification number: A61B5/16 , A61B5/0205 , A61B5/1176 , A61B5/725 , A61B5/726 , A61B5/7264 , G06N3/0464 , G06N3/08 , G06V10/30 , G06V40/161 , A61B5/02427 , A61B5/0816 , G06V2201/03
Abstract: The present disclosure provides a non-contact fatigue detection system and method based on rPPG. The system and method adopt multi-thread synchronous communication for real-time acquisition and processing of the rPPG signal, enabling fatigue status detection. In this setup, the first thread handles real-time rPPG data capture, storage and concatenation, while the second thread conducts real-time analysis and fatigue detection of the rPPG data. Through a combination of skin detection and LUV color space conversion, raw rPPG signal extraction is achieved, effectively eliminating interference from internal and external environmental facial noise. Subsequently, an adaptive multi-stage filtering process enhances the signal-to-noise ratio, and a multi-dimensional fusion CNN model ensures accurate detection of respiration and heart rate. The final step performs multi-channel data fusion of respiration and heartbeats, which not only learns person-independent features for fatigue detection but also detects early fatigue with very high accuracy.
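The raw-signal stage above averages skin pixels in a LUV-converted frame and then filters the resulting trace. A minimal sketch of those two steps, assuming frames are already LUV-converted (e.g. via OpenCV's `cv2.cvtColor(..., cv2.COLOR_BGR2LUV)`) and skin masks come from an external detector; the simple moving-average detrend stands in for the patent's adaptive multi-stage filtering.

```python
import numpy as np

def extract_rppg_raw(frames, skin_masks):
    """Average the U channel over detected skin pixels, frame by frame.

    frames: (T, H, W, 3) video, assumed already in LUV color space.
    skin_masks: (T, H, W) boolean masks from a skin detector.
    Returns a length-T raw rPPG trace.
    """
    trace = np.empty(len(frames))
    for t, (frame, mask) in enumerate(zip(frames, skin_masks)):
        trace[t] = frame[..., 1][mask].mean() if mask.any() else np.nan
    return trace

def detrend(signal, win=15):
    """Crude first filtering stage: subtract a moving-average baseline."""
    kernel = np.ones(win) / win
    baseline = np.convolve(signal, kernel, mode="same")
    return signal - baseline
```

In practice the detrended trace would then pass through bandpass filtering around plausible heart-rate and respiration bands before the CNN stage.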
-
Publication number: US10884112B2
Publication date: 2021-01-05
Application number: US16626571
Filing date: 2018-11-29
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Zongkai Yang , Sannvya Liu , Zhicheng Dai , Zengzhao Chen , Xiuling He
IPC: G01S11/06 , G01S5/02 , H04B17/318 , H04W4/33 , H04W4/80 , H04W4/021 , H04W4/02 , H04W8/00 , H04W64/00
Abstract: The disclosure discloses a fingerprint positioning method in a smart classroom, which is specifically as follows: first, Gaussian filtering is performed on the wireless signal strength (RSSI) values in the fingerprint database and their average is taken; next, the neighbor points whose signal strength is closest to that of the point to be measured are found; finally, with the Euclidean distance as the weight reference, the weighted center of mass of the nearest neighbor points is computed. A weight index is introduced as the exponent of the weight, and the coordinates of the node to be located are obtained. The disclosure achieves higher positioning accuracy, smaller fluctuations in positioning error and greater environmental adaptability.
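The weighted-centroid step described above can be sketched as a weighted k-nearest-neighbor lookup. This is a minimal illustration under the stated scheme (inverse Euclidean distance raised to a weight index `g`); the function name, `k`, and `g` defaults are assumptions, not values from the patent.

```python
import numpy as np

def wknn_locate(rssi_query, fingerprint_rssi, fingerprint_xy, k=3, g=2.0):
    """Weighted k-nearest-neighbor fingerprint positioning.

    rssi_query: (M,) averaged RSSI vector at the point to be measured.
    fingerprint_rssi: (N, M) Gaussian-filtered, averaged RSSI fingerprints.
    fingerprint_xy: (N, 2) reference-point coordinates.
    k: number of nearest fingerprints fused into the estimate.
    g: weight index, the exponent applied to the inverse distance.
    Returns the (2,) weighted-centroid coordinate estimate.
    """
    dists = np.linalg.norm(fingerprint_rssi - rssi_query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Weight index g sharpens the influence of the closest fingerprints.
    weights = 1.0 / (dists[nearest] ** g + 1e-9)
    weights /= weights.sum()
    return weights @ fingerprint_xy[nearest]
```

A larger `g` pulls the estimate toward the single closest fingerprint; `g` near zero approaches a plain centroid of the k neighbors.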
-
Publication number: US10916158B2
Publication date: 2021-02-09
Application number: US16697205
Filing date: 2019-11-27
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Zongkai Yang , Jingying Chen , Sannvya Liu , Ruyi Xu , Kun Zhang , Leyuan Liu , Shixin Peng , Zhicheng Dai
Abstract: The invention provides a classroom cognitive load detection system belonging to the field of education informationization, which includes the following. A task completion feature collecting module records a student's answer response time and correct answer rate when completing a task. A cognitive load self-assessment collecting module quantifies and analyzes mental effort and subjective task difficulty with a rating scale. An expression and attention feature collecting module captures classroom performance video of the student, obtains the face region through face detection, and counts the student's smiling-face duration and watching duration from the video analysis result. A feature fusion module fuses the aforesaid six indexes into a characteristic vector. A cognitive load determining module inputs the characteristic vector into a classifier to identify the student's classroom cognitive load level.
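The fusion step above concatenates six indexes into one characteristic vector before classification. A minimal sketch follows, with a nearest-centroid classifier standing in for whatever classifier the system actually trains; the index names and the classifier choice are assumptions for illustration.

```python
import numpy as np

# The six indexes named in the abstract (order is an assumption).
INDEXES = ("response_time", "correct_rate", "mental_effort",
           "task_difficulty", "smile_duration", "watch_duration")

def fuse(sample):
    """Fuse the six collected indexes (dict keyed by INDEXES) into a vector."""
    return np.array([sample[name] for name in INDEXES], dtype=float)

def nearest_centroid_predict(x, centroids):
    """Stand-in classifier: assign the load level of the closest class centroid."""
    labels = list(centroids)
    dists = [np.linalg.norm(x - centroids[label]) for label in labels]
    return labels[int(np.argmin(dists))]
```

A real deployment would scale the heterogeneous units (seconds, rates, scale scores) before fusing, since raw magnitudes would otherwise dominate the distance.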
-
Publication number: US12254692B2
Publication date: 2025-03-18
Application number: US18011847
Filing date: 2021-09-07
Applicant: CENTRAL CHINA NORMAL UNIVERSITY
Inventor: Sannyuya Liu , Zengzhao Chen , Zhicheng Dai , Shengming Wang , Xiuling He , Baolin Yi
Abstract: The present invention discloses a construction method and system for a descriptive model of classroom teaching behavior events. The construction method includes the following steps: acquiring classroom teaching video data to be trained; dividing the classroom teaching video data into multiple events according to the teacher's utterances by using voice activity detection technology; and performing multi-modal recognition on all events by using multiple artificial intelligence technologies to divide the events into sub-events in multiple dimensions, establishing an event descriptive model according to the sub-events, and describing the teacher's various teaching behavior events in the classroom. The present invention divides a classroom video according to voice, which ensures the completeness of the teacher's non-verbal behavior in each event to the greatest extent. Also, by extracting commonalities between different events, a descriptive model that uniformly describes all events is established, which not only completes the description of the teacher's various teaching behaviors but also reflects the correlation between events, so that the events are no longer isolated.
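The event-division step above relies on voice activity detection over the teacher's speech. A minimal energy-based VAD sketch is shown below; real systems typically use a trained VAD model, so this threshold scheme and all parameter values are illustrative assumptions only.

```python
import numpy as np

def vad_segments(audio, frame_len=400, energy_thresh=0.01, min_gap=3):
    """Energy-based voice activity detection: split audio into utterance events.

    audio: 1-D waveform samples, roughly in [-1, 1].
    frame_len: samples per analysis frame.
    energy_thresh: mean-square energy above which a frame counts as speech.
    min_gap: silent frames tolerated inside one event before it is closed.
    Returns a list of (start_frame, end_frame) event spans.
    """
    n = len(audio) // frame_len
    frames = audio[:n * frame_len].reshape(n, frame_len)
    speech = (frames ** 2).mean(axis=1) > energy_thresh
    events, start, gap = [], None, 0
    for i, is_speech in enumerate(speech):
        if is_speech:
            start, gap = (i if start is None else start), 0
        elif start is not None:
            gap += 1
            if gap > min_gap:                 # silence long enough: close event
                events.append((start, i - gap))
                start, gap = None, 0
    if start is not None:                     # audio ended mid-utterance
        events.append((start, n - 1))
    return events
```

Each returned span would then correspond to one candidate teaching-behavior event handed to the multi-modal recognition stage.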