-
Publication No.: US11275932B2
Publication Date: 2022-03-15
Application No.: US16938858
Application Date: 2020-07-24
Inventor: Siqian Yang , Jilin Li , Yongjian Wu , Yichao Yan , Keke He , Yanhao Ge , Feiyue Huang , Chengjie Wang
Abstract: This application discloses a human attribute recognition method performed at a computing device. The method includes: determining a human body region image in a surveillance image; inputting the human body region image into a multi-attribute convolutional neural network model, to obtain, for each of a plurality of human attributes in the human body region image, a probability that the human attribute corresponds to a respective predefined attribute value, the multi-attribute convolutional neural network model being obtained by performing multi-attribute recognition and training on a set of pre-obtained training images by using a multi-attribute convolutional neural network; determining, for each of the plurality of human attributes in the human body region image, the attribute value of the human attribute based on the corresponding probability; and displaying the attribute values of the plurality of human attributes next to the human body region image.
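As a rough illustration of the recognition step in this abstract, the sketch below assumes a PyTorch trunk shared by one softmax head per attribute; the attribute names, value counts, and layer sizes are invented for the example and are not taken from the patent.

```python
import torch
import torch.nn as nn

# Hypothetical attributes and their numbers of predefined values (assumed for illustration).
ATTRIBUTES = {"gender": 2, "upper_body_color": 8, "carrying_bag": 2}

class MultiAttributeNet(nn.Module):
    """Shared convolutional trunk with one probability head per human attribute."""
    def __init__(self, attributes):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(64, n_values) for name, n_values in attributes.items()}
        )

    def forward(self, person_crop):
        feat = self.trunk(person_crop)
        # One probability distribution over the predefined values of each attribute.
        return {name: head(feat).softmax(dim=1) for name, head in self.heads.items()}

model = MultiAttributeNet(ATTRIBUTES).eval()
crop = torch.rand(1, 3, 128, 64)  # human body region cropped from the surveillance image
with torch.no_grad():
    probs = model(crop)
# Choose, for each attribute, the attribute value with the highest probability.
values = {name: int(p.argmax(dim=1)) for name, p in probs.items()}
print(values)
```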
-
Publication No.: US11200404B2
Publication Date: 2021-12-14
Application No.: US17089435
Application Date: 2020-11-04
Inventor: Yandan Zhao , Yichao Yan , Weijian Cao , Yun Cao , Yanhao Ge , Chengjie Wang , Jilin Li
Abstract: This application relates to feature point positioning technologies. The technologies involve positioning a target area in a current image; determining an image feature difference between a target area in a reference image and the target area in the current image, the reference image being a frame of image that is processed before the current image and that includes the target area; determining a target feature point location of the target area in the reference image; determining a target feature point location difference between the target area in the reference image and the target area in the current image according to a feature point location difference determining model and the image feature difference; and positioning a target feature point in the target area in the current image according to the target feature point location of the target area in the reference image and the target feature point location difference.
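A minimal sketch of the propagation step described here, with a small MLP standing in for the feature point location difference determining model; the point count, feature dimension, and network shape are assumptions for illustration only.

```python
import torch
import torch.nn as nn

N_POINTS = 68          # assumed number of feature points in the target area
FEAT_DIM = 128         # assumed dimensionality of the image feature

# Stand-in for the "feature point location difference determining model":
# it maps the image feature difference to per-point (dx, dy) offsets.
offset_model = nn.Sequential(
    nn.Linear(FEAT_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_POINTS * 2),
)

def position_current_points(ref_feat, cur_feat, ref_points):
    """Propagate feature points from the reference frame to the current frame.

    ref_feat, cur_feat: (FEAT_DIM,) image features of the target area in the
    reference and current frames; ref_points: (N_POINTS, 2) reference locations.
    """
    feat_diff = cur_feat - ref_feat                      # image feature difference
    offsets = offset_model(feat_diff).view(N_POINTS, 2)  # predicted location difference
    return ref_points + offsets                          # current-frame feature points

ref_feat, cur_feat = torch.rand(FEAT_DIM), torch.rand(FEAT_DIM)
ref_points = torch.rand(N_POINTS, 2) * 100
cur_points = position_current_points(ref_feat, cur_feat, ref_points)
print(cur_points.shape)  # torch.Size([68, 2])
```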
-
Publication No.: US11151360B2
Publication Date: 2021-10-19
Application No.: US16665060
Application Date: 2019-10-28
Inventor: Yanhao Ge , Jilin Li , Chengjie Wang
Abstract: A face attribute recognition method, an electronic device, and a storage medium are provided. The method may include obtaining a face image, inputting the face image into an attribute recognition model, performing a forward calculation on the face image using the attribute recognition model to obtain a plurality of attribute values according to different types of attributes, and outputting the plurality of attribute values, the plurality of attribute values indicating recognition results of a plurality of attributes of the face image. The attribute recognition model may be obtained through training based on a plurality of sample face images, a plurality of sample attribute recognition results of the plurality of sample face images, and the different types of attributes.
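For orientation, a hedged sketch of a single forward calculation producing values for several attribute types at once; the backbone, the three heads, and how each head's output is decoded are illustrative assumptions rather than the trained attribute recognition model.

```python
import torch
import torch.nn as nn

class FaceAttributeModel(nn.Module):
    """One forward pass yields values for several assumed attribute types."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gender_head = nn.Linear(16, 2)      # categorical attribute (assumed)
        self.glasses_head = nn.Linear(16, 2)     # binary attribute (assumed)
        self.age_head = nn.Linear(16, 1)         # continuous attribute (assumed)

    def forward(self, face):
        f = self.backbone(face)
        return {
            "gender": self.gender_head(f).softmax(1).argmax(1),   # class index
            "glasses": self.glasses_head(f).softmax(1).argmax(1),
            "age": self.age_head(f).squeeze(1),                   # regressed value
        }

model = FaceAttributeModel().eval()
with torch.no_grad():
    print(model(torch.rand(1, 3, 112, 112)))   # attribute values for one face image
```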
-
Publication No.: US10990803B2
Publication Date: 2021-04-27
Application No.: US16222941
Application Date: 2018-12-17
Inventor: Chengjie Wang , Jilin Li , Yandan Zhao , Hui Ni , Yabiao Wang , Ling Zhao
Abstract: When a target image is captured, the device provides a portion of the target image within a target detection region to a preset first model set to calculate positions of face key points and a first confidence value. The face key points and the first confidence value are output by the first model set for a single input of the portion of the target image into the first model set. When the first confidence value meets a first threshold corresponding to whether the target image is a face image, the device obtains a second target image corresponding to the positions of the face key points; the device inputs the second target image into the first model set to calculate a second confidence value, the second confidence value corresponding to the accuracy of the key point positioning, and outputs the face key points if the second confidence value meets a second threshold.
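The two-stage check can be pictured roughly as below, with `model_set` as a random stand-in for the preset first model set; the thresholds, key point count, and cropping rule are assumptions made for the example.

```python
import numpy as np

FACE_THRESHOLD = 0.5       # first threshold: is the detection region a face?
ACCURACY_THRESHOLD = 0.9   # second threshold: are the key points accurately placed?

def model_set(image_patch):
    """Stand-in for the preset first model set: one input, key points + confidence."""
    h, w = image_patch.shape[:2]
    keypoints = np.random.rand(5, 2) * [w, h]   # placeholder positions
    confidence = float(np.random.rand())        # placeholder confidence
    return keypoints, confidence

def detect_face_keypoints(target_image, detection_region):
    x, y, w, h = detection_region
    first_patch = target_image[y:y + h, x:x + w]
    keypoints, first_conf = model_set(first_patch)           # single forward pass
    if first_conf < FACE_THRESHOLD:
        return None                                          # region is not a face
    # Crop a second target image around the predicted key points and re-score it.
    xs, ys = keypoints[:, 0], keypoints[:, 1]
    x0, y0 = int(xs.min()), int(ys.min())
    x1, y1 = int(np.ceil(xs.max())), int(np.ceil(ys.max()))
    second_patch = first_patch[y0:y1 + 1, x0:x1 + 1]
    _, second_conf = model_set(second_patch)                 # scores positioning accuracy
    return keypoints if second_conf >= ACCURACY_THRESHOLD else None

result = detect_face_keypoints(np.zeros((480, 640, 3), dtype=np.uint8), (100, 100, 200, 200))
print(result)
```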
-
Publication No.: US10922529B2
Publication Date: 2021-02-16
Application No.: US16208183
Application Date: 2018-12-03
Inventor: Yicong Liang , Jilin Li , Chengjie Wang , Shouhong Ding
IPC: G06K9/00, G06K9/32, G06N3/04, G06K9/46, G06K9/62, G06T7/11, G06F21/32, G06N3/08, G06T5/00
Abstract: A device receives an image-based authentication request from a specified object and performs human face authentication in a manner depending on whether the object wears glasses. Specifically, the device designates a glasses region on a daily photograph of the specified object using a glasses segmentation model. If the regions of the human face in the daily photograph labeled as glasses exceed a first threshold amount, the device modifies the daily photograph by changing pixel values of the regions that are labeled as being obscured by glasses. The device extracts features of a daily human face from the daily photograph and features of an identification human face from the identification photograph. The device approves the authentication request if a matching degree between the features of the daily human face and the features of the identification human face is greater than a second threshold amount.
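A simplified sketch of the glasses handling and matching logic, assuming the segmentation model has already produced a boolean glasses mask and that matching uses cosine similarity; the thresholds and the constant fill value are illustrative assumptions.

```python
import numpy as np

GLASSES_AREA_THRESHOLD = 0.02   # assumed fraction of face pixels labeled as glasses
MATCH_THRESHOLD = 0.6           # assumed similarity threshold for approval

def remove_glasses(daily_photo, glasses_mask, fill_value=128):
    """Overwrite pixels the segmentation model labeled as obscured by glasses."""
    photo = daily_photo.copy()
    if glasses_mask.mean() > GLASSES_AREA_THRESHOLD:
        photo[glasses_mask] = fill_value
    return photo

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def authenticate(daily_feature, id_feature):
    """Approve when the daily-face and identification-face features match closely enough."""
    return cosine_similarity(daily_feature, id_feature) > MATCH_THRESHOLD

photo = np.random.randint(0, 256, (112, 112, 3), dtype=np.uint8)
mask = np.zeros((112, 112), dtype=bool)
mask[40:55, 20:92] = True                      # pretend the glasses model labeled this band
cleaned = remove_glasses(photo, mask)
print(authenticate(np.random.rand(256), np.random.rand(256)))
```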
-
Publication No.: US10607066B2
Publication Date: 2020-03-31
Application No.: US15462423
Application Date: 2017-03-17
Inventor: Jilin Li , Chengjie Wang , Feiyue Huang , Yongjian Wu , Hui Ni , Ruixin Zhang , Guofu Tan
Abstract: The present disclosure provides a living body identification method, an information generation method, and a terminal, and belongs to the field of biometric feature recognition. The method includes: providing lip language prompt information, the lip language prompt information including at least two target characters, the at least two target characters being at least one of: characters of a same lip shape, characters of opposite lip shapes, or characters whose lip shape similarity is in a preset range; collecting at least two frame pictures; detecting whether lip changes of a to-be-identified object in the at least two frame pictures meet a preset condition when the to-be-identified object reads the at least two target characters; and determining that the to-be-identified object is a living body if the preset condition is met. The present disclosure resolves a problem in the related art in which a terminal may incorrectly determine that a to-be-identified object is not a living body even though the object performs the operation indicated by the lip language prompt information, so that the terminal can accurately determine whether the to-be-identified object is a living body and determination accuracy is improved.
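One way to picture the lip change check is sketched below, using a mouth-openness ratio computed from four lip landmarks per frame; the landmark layout, the variation threshold, and the decision rule are assumptions for illustration, not the patented preset condition.

```python
import numpy as np

def mouth_openness(lip_landmarks):
    """Ratio of vertical lip opening to mouth width for one frame."""
    top, bottom, left, right = lip_landmarks          # (x, y) points, assumed order
    height = np.linalg.norm(np.subtract(top, bottom))
    width = np.linalg.norm(np.subtract(left, right)) + 1e-8
    return height / width

def is_living_body(frame_landmarks, expect_similar_shapes, variation_threshold=0.15):
    """Check whether lip changes across the collected frames match the prompted characters.

    expect_similar_shapes=True when the prompt used characters of the same lip shape
    (little variation expected); False when it used characters of opposite lip shapes.
    """
    openness = np.array([mouth_openness(lms) for lms in frame_landmarks])
    variation = openness.max() - openness.min()
    if expect_similar_shapes:
        return variation <= variation_threshold
    return variation > variation_threshold

frames = [((50, 60), (50, 80), (30, 70), (70, 70)),   # mouth fairly open
          ((50, 66), (50, 72), (30, 70), (70, 70))]   # mouth nearly closed
print(is_living_body(frames, expect_similar_shapes=False))   # True: lips clearly moved
```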
-
Publication No.: US20170316598A1
Publication Date: 2017-11-02
Application No.: US15652009
Application Date: 2017-07-17
Inventor: Chengjie Wang , Jilin Li , Feiyue Huang , Lei Zhang
CPC classification number: G06T15/10, G06K9/00, G06T7/73, G06T15/04, G06T19/20, G06T2207/30201, G06T2210/44, G06T2215/16, G06T2219/2021
Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes determining feature points on an acquired 2D human face image; determining posture parameters of a human face according to the feature points, and adjusting a posture of a universal 3D human face model according to the posture parameters; determining points on the universal 3D human face model corresponding to the feature points, and adjusting the corresponding points that are in a sheltered (occluded) status to obtain a preliminary 3D human face model; and performing deformation adjustment on the preliminary 3D human face model, and performing texture mapping on the deformed 3D human face model to obtain a final 3D human face model.
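As a rough sketch of the posture estimation step, the code below fits a weak-perspective projection that maps the universal 3D model's landmark points onto the detected 2D feature points via least squares; treating the posture parameters as a 2x4 projection matrix is an assumption made for the example.

```python
import numpy as np

def estimate_pose(points_2d, model_points_3d):
    """Fit an affine (weak-perspective) projection from 3D model landmarks to 2D feature points.

    points_2d: (N, 2) feature points on the face image.
    model_points_3d: (N, 3) corresponding points on the universal 3D face model.
    """
    n = model_points_3d.shape[0]
    homog = np.hstack([model_points_3d, np.ones((n, 1))])     # (N, 4) homogeneous coords
    # Solve homog @ P.T ≈ points_2d for the 2x4 projection P in the least-squares sense.
    P, *_ = np.linalg.lstsq(homog, points_2d, rcond=None)
    return P.T                                                # (2, 4) posture/projection matrix

def project(model_points_3d, P):
    n = model_points_3d.shape[0]
    homog = np.hstack([model_points_3d, np.ones((n, 1))])
    return homog @ P.T

model_pts = np.random.rand(68, 3)                 # assumed 68 landmarks on the 3D model
true_P = np.array([[120.0, 5.0, -3.0, 30.0],
                   [-4.0, 118.0, 6.0, 40.0]])
img_pts = project(model_pts, true_P)              # synthetic 2D feature points
P = estimate_pose(img_pts, model_pts)
print(np.allclose(P, true_P, atol=1e-6))          # the fit recovers the projection exactly
```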
-
Publication No.: US12272024B2
Publication Date: 2025-04-08
Application No.: US17739053
Application Date: 2022-05-06
Inventor: Xiaozhong Ji , Yun Cao , Ying Tai , Chengjie Wang , Jilin Li
IPC: G06T5/00, G06N20/20, G06T5/50, G06V10/774
Abstract: A method, an apparatus, and a device for image processing, and a training method for an image processing model, are provided. The training method includes obtaining a sample image set, the sample image set comprising a first number of sample images; constructing an image feature set based on the sample image set, the image feature set comprising an image feature extracted from each of the sample images in the sample image set; obtaining a training image set, the training image set comprising a second number of training images; constructing multiple training image pairs based on the training image set and the image feature set; and training the image processing model based on the multiple training image pairs.
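A hedged sketch of how the sample set, feature set, and training pairs relate; the mean-color feature extractor and the random pairing rule are placeholders, not the construction used in the patent.

```python
import random
import numpy as np

def build_feature_set(sample_images, extract_feature):
    """Image feature set: one extracted feature per sample image."""
    return [extract_feature(img) for img in sample_images]

def build_training_pairs(training_images, feature_set):
    """Pair every training image with a feature drawn from the image feature set."""
    return [(img, random.choice(feature_set)) for img in training_images]

# Illustrative stand-ins: random "images" and a mean-color feature extractor.
extract_feature = lambda img: img.mean(axis=(0, 1))
sample_images = [np.random.rand(32, 32, 3) for _ in range(100)]    # first number of samples
training_images = [np.random.rand(32, 32, 3) for _ in range(20)]   # second number of images

feature_set = build_feature_set(sample_images, extract_feature)
pairs = build_training_pairs(training_images, feature_set)
print(len(pairs), pairs[0][1].shape)   # 20 training pairs, each carrying a 3-dim feature
```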
-
Publication No.: US20240257554A1
Publication Date: 2024-08-01
Application No.: US18632696
Application Date: 2024-04-11
Inventor: Chao Xu , Junwei Zhu , Wenqing Chu , Ying Tai , Chengjie Wang
CPC classification number: G06V40/171, G06V10/467, G06V40/172, G10L25/63, G06V2201/07
Abstract: An image generation method, performed by an electronic device, includes obtaining an original face image frame, audio driving information, and emotion driving information; performing spatial feature extraction on the original face image frame to obtain an original face spatial feature corresponding to the original face image frame; performing feature interaction processing on the audio driving information and the emotion driving information to obtain a face local pose feature of a to-be-adjusted object uttering the voice content indicated by the audio driving information with the target emotion indicated by the emotion driving information; and performing, based on the original face spatial feature and the face local pose feature, face reconstruction processing on the to-be-adjusted object to generate a target face image frame.
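The three stages can be sketched as a small PyTorch module, with simple layers standing in for spatial feature extraction, feature interaction, and face reconstruction; every dimension and layer here is an assumption for illustration, not the patented architecture.

```python
import torch
import torch.nn as nn

AUDIO_DIM, EMOTION_DIM, SPATIAL_DIM, POSE_DIM = 80, 8, 256, 64  # assumed sizes

class EmotionAwareTalkingFace(nn.Module):
    """Skeleton of the three stages: spatial encoding, feature interaction, reconstruction."""
    def __init__(self):
        super().__init__()
        self.spatial_encoder = nn.Sequential(        # original face -> original face spatial feature
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, SPATIAL_DIM),
        )
        self.interaction = nn.Sequential(            # audio + emotion -> face local pose feature
            nn.Linear(AUDIO_DIM + EMOTION_DIM, 128), nn.ReLU(),
            nn.Linear(128, POSE_DIM),
        )
        self.decoder = nn.Sequential(                # spatial + pose -> target face image frame
            nn.Linear(SPATIAL_DIM + POSE_DIM, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, face_frame, audio_feat, emotion_feat):
        spatial = self.spatial_encoder(face_frame)
        pose = self.interaction(torch.cat([audio_feat, emotion_feat], dim=1))
        out = self.decoder(torch.cat([spatial, pose], dim=1))
        return out.view(-1, 3, 64, 64)

model = EmotionAwareTalkingFace().eval()
with torch.no_grad():
    frame = model(torch.rand(1, 3, 64, 64), torch.rand(1, AUDIO_DIM), torch.rand(1, EMOTION_DIM))
print(frame.shape)   # torch.Size([1, 3, 64, 64])
```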
-
Publication No.: US10817708B2
Publication Date: 2020-10-27
Application No.: US16297565
Application Date: 2019-03-08
Inventor: Chengjie Wang , Hui Ni , Yandan Zhao , Yabiao Wang , Shouhong Ding , Shaoxin Li , Ling Zhao , Jilin Li , Yongjian Wu , Feiyue Huang , Yicong Liang
IPC: G06K9/00
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
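A minimal sketch of the tracking loop, with random stand-ins for the detection and tracking models; the confidence threshold and key point count are assumptions, and the multi-face recognition step is only indicated by a comment.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8   # assumed preset threshold

def detect_keypoints(frame):
    """Stand-in full detection: used for the first frame or after tracking confidence drops."""
    return np.random.rand(68, 2) * frame.shape[1], float(np.random.rand())

def track_keypoints(frame, prev_keypoints):
    """Stand-in tracker: refines key points starting from the previous frame's coordinates."""
    return prev_keypoints + np.random.randn(*prev_keypoints.shape), float(np.random.rand())

def track_video(frames):
    prev_points, prev_conf = None, 0.0
    results = []
    for frame in frames:
        if prev_points is not None and prev_conf > CONFIDENCE_THRESHOLD:
            points, conf = track_keypoints(frame, prev_points)   # cheap tracking path
        else:
            points, conf = detect_keypoints(frame)               # full re-detection
        results.append(points)            # multi-face recognition would use these points
        prev_points, prev_conf = points, conf
    return results

video = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(5)]
print(len(track_video(video)))   # 5: one set of key points per processed frame
```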