-
Publication No.: US20200168071A1
Publication Date: 2020-05-28
Application No.: US16202108
Filing Date: 2018-11-28
Inventor: Chuan-Yu CHANG , Fu-Jen TSAI
Abstract: A mouth and nose occlusion detecting method includes a detecting step and a warning step. The detecting step includes a facial detecting step, an image extracting step and an occlusion determining step. In the facial detecting step, an image is captured by an image capturing device, and a facial portion image is obtained from the image. In the image extracting step, a mouth portion is extracted from the facial portion image so as to obtain a mouth portion image. In the occlusion determining step, the mouth portion image is entered into an occlusion convolutional neural network so as to produce a determining result, wherein the determining result is an occluding state or a normal state. In the warning step, a warning is provided according to the determining result.
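A minimal sketch of the pipeline the abstract describes, assuming OpenCV's bundled Haar frontal-face detector, a fixed lower-half-of-face crop as the mouth portion, and a placeholder occlusion_cnn callable; the patent's actual network architecture and mouth-localization method are not given here.

```python
# Sketch of the detect -> crop mouth -> CNN classify pipeline (assumptions noted above).
import cv2
import numpy as np

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_mouth(frame):
    """Facial detecting + image extracting steps: return a mouth-region crop, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # Assume the mouth/nose area occupies roughly the lower half of the face box.
    return frame[y + h // 2 : y + h, x : x + w]

def occluded_state(mouth_img, occlusion_cnn):
    """Occlusion determining step: occlusion_cnn is a hypothetical binary classifier."""
    inp = cv2.resize(mouth_img, (64, 64)).astype(np.float32) / 255.0
    prob = occlusion_cnn(inp[np.newaxis])          # assumed to return P(occluded)
    return "occluding" if float(prob) > 0.5 else "normal"
```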
-
Publication No.: US20220031191A1
Publication Date: 2022-02-03
Application No.: US17026288
Filing Date: 2020-09-20
Inventor: Chuan-Yu CHANG , Min-Hsiang CHANG
IPC: A61B5/08 , A61B5/1171 , A61B5/00
Abstract: A contactless breathing detection method is for detecting a breathing rate of a subject. The contactless breathing detection method includes a photographing step, a capturing step, a calculating step, and a converting step. The photographing step is performed to provide a camera to photograph the subject to generate a facial image. The capturing step is performed to provide a processor module to capture the facial image to generate a plurality of feature points. The calculating step is performed to drive the processor module to calculate the feature points according to an optical flow algorithm to generate a plurality of breathing signals. The converting step is performed to drive the processor module to convert the breathing signals to generate a plurality of power spectra, respectively. The processor module generates an index value by calculating the power spectra, and the breathing rate is extrapolated from the index value.
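The breathing-rate pipeline lends itself to a short sketch. The one below assumes cv2.goodFeaturesToTrack supplies the facial feature points, Lucas-Kanade optical flow provides the motion signals, and the breathing rate is read from the spectral peak in an assumed 0.1-0.7 Hz band; the patent's own feature-point choice, index value and frequency band are not specified here.

```python
# Sketch: track facial feature points with Lucas-Kanade optical flow, then
# estimate the breathing rate from the dominant low-frequency peak of the
# vertical-motion power spectrum. Frame rate `fps` and the 0.1-0.7 Hz band
# are assumptions, not values taken from the patent.
import cv2
import numpy as np

def breathing_rate(frames, fps):
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev, maxCorners=30, qualityLevel=0.01, minDistance=10)
    signals = []
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
        signals.append(np.mean(nxt[status.flatten() == 1, 0, 1]))  # mean vertical position
        prev, pts = gray, nxt
    sig = np.asarray(signals) - np.mean(signals)
    spectrum = np.abs(np.fft.rfft(sig)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= 0.1) & (freqs <= 0.7)                  # plausible breathing band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                                   # breaths per minute
```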
-
Publication No.: US20240164649A1
Publication Date: 2024-05-23
Application No.: US18324999
Filing Date: 2023-05-28
Inventor: Chuan-Yu CHANG , Yen-Qun GAO
CPC classification number: A61B5/01 , G06V40/171 , G06V2201/03
Abstract: A physiological signal measuring method includes a training thermal image providing step, a training step, a classification model generating step, a measurement thermal image providing step, a mask-wearing classifying step, a block identifying step and a measurement result generating step. The measurement thermal image providing step includes providing a measurement thermal image, which is an infrared thermal video used for measurement. The measurement result generating step includes generating a measurement result of at least one physiological parameter of the subject according to a plurality of signals of the forehead block, and the mask block or the nasal cavity block.
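A hedged sketch of the measurement result generating step only, assuming the forehead block and the mask (or nasal cavity) block have already been located as bounding boxes in each frame of the infrared thermal video and that the physiological parameter is a respiration-like rate; the block identification, mask-wearing classification and classification model are outside this sketch.

```python
# Sketch: build mean-temperature signals from per-frame block bounding boxes
# and read a rate from the dominant spectral peak. Block locations, frame rate
# and the 0.1-0.5 Hz band are assumptions.
import numpy as np

def block_signal(thermal_frames, box):
    """Mean temperature of one block (x, y, w, h) across all frames."""
    x, y, w, h = box
    return np.array([frame[y:y + h, x:x + w].mean() for frame in thermal_frames])

def dominant_rate(signal, fps, f_lo=0.1, f_hi=0.5):
    sig = signal - signal.mean()
    power = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(power[band])] * 60.0      # cycles per minute
```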
-
Publication No.: US20200167551A1
Publication Date: 2020-05-28
Application No.: US16202116
Filing Date: 2018-11-28
Inventor: Chuan-Yu CHANG , Man-Ju CHENG , Matthew Huei-Ming MA
Abstract: A facial stroke detection method includes a detecting step and a determining step. The detecting step includes a pre-processing step, a feature extracting step and a feature selecting step. In the pre-processing step, an image is captured by an image capturing device, and the image is pre-processed so as to obtain a post-processing image. In the feature extracting step, a plurality of image features are extracted from the post-processing image so as to form an image feature set. In the feature selecting step, a determining feature set is formed by selecting a part of the image features from the image feature set and is entered into a classifier. In the determining step, the classifier provides a determining result according to the determining feature set.
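A compact sketch of the feature extracting, feature selecting and determining steps, with left/right facial-asymmetry measurements and an SVM standing in for the patent's unnamed image features and classifier.

```python
# Sketch of feature extracting -> feature selecting -> classifier determining.
# The asymmetry features and the SVM are illustrative stand-ins; the concrete
# feature set and classifier are not named in the abstract.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def asymmetry_features(landmarks):
    """landmarks: (N, 2) array of facial points; compare the left and right halves."""
    mid_x = landmarks[:, 0].mean()
    left = landmarks[landmarks[:, 0] < mid_x]
    right = landmarks[landmarks[:, 0] >= mid_x]
    return np.array([
        abs(left[:, 1].mean() - right[:, 1].mean()),   # vertical droop difference
        abs(left[:, 1].std() - right[:, 1].std()),     # vertical spread difference
    ])

def train_and_classify(train_feats, train_labels, test_feats, k=2):
    selector = SelectKBest(f_classif, k=k).fit(train_feats, train_labels)  # feature selecting
    clf = SVC().fit(selector.transform(train_feats), train_labels)
    return clf.predict(selector.transform(test_feats))                     # determining result
```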
-
Publication No.: US20220028409A1
Publication Date: 2022-01-27
Application No.: US17004015
Filing Date: 2020-08-27
Inventor: Chuan-Yu CHANG , Jun-Ying LI
Abstract: A method for correcting infant crying identification includes the following steps: a detecting step provides an audio unit to detect a sound around an infant to generate a plurality of audio samples. A converting step provides a processing unit to convert the audio samples to generate a plurality of audio spectrograms. An extracting step provides a common model to extract the audio spectrograms to generate a plurality of infant crying features. An incremental training step provides an incremental model to train the infant crying features to generate an identification result. A judging step provides the processing unit to judge whether the identification result is correct according to a real result of the infant. When the identification result is different from the real result, an incorrect result is generated. A correcting step provides the processing unit to correct the incremental model according to the incorrect result.
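A short sketch of the correction loop, with scipy's spectrogram standing in for the common model's spectrogram features and sklearn's SGDClassifier.partial_fit standing in for the incremental model; neither is named in the abstract.

```python
# Sketch of the correcting loop: convert audio to a spectrogram feature vector,
# predict with an incremental model, and re-fit on samples the model got wrong.
import numpy as np
from scipy.signal import spectrogram
from sklearn.linear_model import SGDClassifier

def spectrogram_features(audio, sample_rate):
    """Converting + extracting steps: one feature vector per audio sample."""
    _, _, spec = spectrogram(audio, fs=sample_rate)
    return np.log1p(spec).mean(axis=1)

def correct_model(model, features, real_label):
    predicted = model.predict([features])[0]              # identification result
    if predicted != real_label:                           # judging step: incorrect result
        model.partial_fit([features], [real_label])       # correcting step
    return predicted

# Usage sketch: model = SGDClassifier() primed once with
# model.partial_fit(initial_features, initial_labels, classes=[0, 1]).
```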
-
Publication No.: US20200324897A1
Publication Date: 2020-10-15
Application No.: US16845048
Filing Date: 2020-04-09
Inventor: Ching-Ju CHEN , Chuan-Yu CHANG , Chia-Yan CHENG , Meng-Syue LI
Abstract: A buoy position monitoring method includes a buoy positioning step, an unmanned aerial vehicle receiving step and an unmanned aerial vehicle flying step. In the buoy positioning step, a plurality of buoys are put on a water surface. Each of the buoys is capable of sending a detecting signal. Each of the detecting signals is sent periodically and includes a position dataset of each of the buoys. In the unmanned aerial vehicle receiving step, an unmanned aerial vehicle is disposed at an initial position, and the unmanned aerial vehicle receives the detecting signals. In the unmanned aerial vehicle flying step, when at least one of the buoys is lost, the unmanned aerial vehicle flies to a predetermined position to establish contact with the at least one buoy that is lost.
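The loss-detection and dispatch logic can be sketched directly in plain Python; the 300-second timeout and the use of the last reported position as the predetermined position are assumptions, since the abstract only states that the detecting signals are periodic and that the vehicle flies to a predetermined position.

```python
# Sketch of the "buoy is lost" decision and the UAV dispatch target
# (timeout and target choice are assumptions, as noted above).
import time
from dataclasses import dataclass

@dataclass
class BuoyRecord:
    position: tuple        # (latitude, longitude) from the detecting signal
    last_heard: float      # timestamp of the last received detecting signal

def lost_buoys(records, timeout_s=300.0, now=None):
    """A buoy counts as lost when its periodic signal has not arrived within timeout_s."""
    now = time.time() if now is None else now
    return {bid for bid, rec in records.items() if now - rec.last_heard > timeout_s}

def dispatch_target(records, lost_id):
    """Fly the unmanned aerial vehicle toward the lost buoy's last reported position."""
    return records[lost_id].position
```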
-
Publication No.: US20200163560A1
Publication Date: 2020-05-28
Application No.: US16202110
Filing Date: 2018-11-28
Inventor: Chuan-Yu CHANG , Hsiang-Chi LIU , Matthew Huei-Ming MA
Abstract: A heart rate detection method includes a facial image data acquiring step, a feature points recognizing step, an effective displacement signal generating step and a heart rate determining step. The feature points recognizing step is for recognizing a plurality of feature points, wherein the number of feature points ranges from three to twenty, and the feature points include a center point between the two medial canthi, a point of a pronasale and a point of a subnasale of the face. The effective displacement signal generating step is for calculating an original displacement signal, which is converted to an effective displacement signal. The heart rate determining step is for transforming the effective displacement signal of each of the feature points to an effective spectrum, wherein a heart rate is determined from one of the effective spectra corresponding to the feature points.
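A minimal sketch of the effective displacement signal and heart rate determining steps, assuming each feature point's per-frame vertical position is already available from a landmark tracker and that the heart rate is taken from the spectral peak in an assumed 0.75-3.0 Hz band.

```python
# Sketch: turn a tracked feature point's per-frame vertical position into a
# displacement signal and pick the heart rate from its spectral peak. The
# 0.75-3.0 Hz band (45-180 bpm) and the mean-removal detrending are assumptions.
import numpy as np

def heart_rate_from_point(y_positions, fps, f_lo=0.75, f_hi=3.0):
    disp = np.asarray(y_positions, dtype=float)
    disp -= disp.mean()                                    # effective displacement signal
    power = np.abs(np.fft.rfft(disp)) ** 2                 # effective spectrum
    freqs = np.fft.rfftfreq(len(disp), d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(power[band])] * 60.0      # beats per minute

# Applied to each tracked feature point (e.g. the mid-canthi center point,
# pronasale and subnasale), the heart rate is then read from one of the
# resulting spectra.
```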
-