-
1.
Publication No.: US11710335B2
Publication Date: 2023-07-25
Application No.: US17504682
Application Date: 2021-10-19
Inventor: Keke He , Jing Liu , Yanhao Ge , Chengjie Wang , Jilin Li
CPC classification number: G06V40/103 , G06F18/217 , G06F18/25 , G06V10/40 , G06V10/98
Abstract: The present disclosure describes human body attribute recognition methods and apparatus, electronic devices, and a storage medium. The method includes acquiring a sample image that contains a plurality of to-be-detected areas and is labeled with true values of human body attributes; generating, through a recognition model, a heat map of the sample image and heat maps of the to-be-detected areas to obtain a global heat map and local heat maps; fusing the global and local heat maps to obtain a fused image, and performing human body attribute recognition on the fused image to obtain predicted values; determining a focus area of each type of human body attribute according to the global and local heat maps; correcting the recognition model by using the focus areas, the true values, and the predicted values; and performing, based on the corrected recognition model, human body attribute recognition on a to-be-recognized image.
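A minimal Python sketch of the fusion step as described in the abstract; the max-aggregation of local maps and the 0.5 weight are placeholder assumptions, not values from the patent:

```python
import numpy as np

def fuse_heat_maps(global_map, local_maps, alpha=0.5):
    """Fuse a global heat map with per-area local heat maps.

    global_map: (H, W) array; local_maps: list of (H, W) arrays assumed to be
    aligned to the global map. The max-aggregation and the weight `alpha`
    are placeholder choices, not taken from the patent.
    """
    local_agg = np.stack(local_maps, axis=0).max(axis=0)   # strongest local response per pixel
    return alpha * global_map + (1.0 - alpha) * local_agg  # fused map used for attribute prediction

# Toy usage: one 64x48 global map and four local maps.
fused = fuse_heat_maps(np.random.rand(64, 48), [np.random.rand(64, 48) for _ in range(4)])
print(fused.shape)  # (64, 48)
```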
-
2.
Publication No.: US20230087489A1
Publication Date: 2023-03-23
Application No.: US17989109
Application Date: 2022-11-17
Inventor: Yunlong FENG , Xu Chen , Ying Tai , Chengjie Wang , Jilin Li
Abstract: An image processing method and apparatus are provided. During image matting on an original image, a plurality of segmented images covering different regions are first obtained through semantic segmentation. Then, according to the segmented images, lines of different widths are drawn on the contour line of a foreground region to obtain a target trimap. Finally, a target image is generated based on the target trimap. Because lines of different widths are drawn on the contour line of the foreground region, image matting can be tailored to each region, so that matting precision is improved in regions that require fine matting while the precision of the remaining regions is still ensured. In this way, the final matted image is fine and natural.
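A rough sketch of the trimap-building idea with OpenCV; the 0/128/255 trimap encoding and the per-region width masks are assumptions for illustration, not details from the patent:

```python
import numpy as np
import cv2

def build_trimap(foreground_mask, region_widths):
    """Build a trimap by drawing unknown bands of different widths on the
    foreground contour.

    foreground_mask: HxW binary array from semantic segmentation.
    region_widths:   list of (region_mask, band_width) pairs, e.g. a narrow
                     band for hair regions and a wide band elsewhere.
    """
    fg = (foreground_mask > 0).astype(np.uint8)
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    trimap = np.where(fg > 0, 255, 0).astype(np.uint8)        # 255 = foreground, 0 = background
    for region_mask, width in region_widths:
        band = np.zeros_like(fg)
        cv2.drawContours(band, contours, -1, color=1, thickness=width)
        trimap[(band > 0) & (region_mask > 0)] = 128          # 128 = unknown band for this region
    return trimap
```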
-
3.
Publication No.: US11087476B2
Publication Date: 2021-08-10
Application No.: US16890087
Application Date: 2020-06-02
Inventor: Changwei He , Chengjie Wang , Jilin Li , Yabiao Wang , Yandan Zhao , Yanhao Ge , Hui Ni , Yichao Xiong , Zhenye Gan , Yongjian Wu , Feiyue Huang
Abstract: A trajectory tracking method is provided for a computer device. The method includes performing motion tracking on head images in a plurality of video frames to obtain motion trajectories corresponding to the head images; acquiring face images corresponding to the head images in the video frames to obtain face image sets corresponding to the head images; determining, from the face image sets corresponding to the head images, at least two face image sets having the same face images; and combining the motion trajectories corresponding to the at least two face image sets having the same face images, to obtain a final motion trajectory of the trajectory tracking.
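A simple sketch of the merging step; face matching is reduced here to set intersection of face identities, and the greedy grouping is an illustrative stand-in for the patent's procedure:

```python
def merge_trajectories(trajectories, face_sets):
    """Merge head-motion trajectories whose face-image sets share faces.

    trajectories: dict head_id -> list of (frame_index, x, y) points
    face_sets:    dict head_id -> set of face identities seen for that head
    """
    merged, used = [], set()
    head_ids = list(trajectories)
    for i, a in enumerate(head_ids):
        if a in used:
            continue
        group = [a]
        for b in head_ids[i + 1:]:
            if b not in used and face_sets[a] & face_sets[b]:
                group.append(b)
                used.add(b)
        used.add(a)
        combined = sorted(p for h in group for p in trajectories[h])  # order by frame index
        merged.append(combined)
    return merged

# Toy usage: heads 1 and 2 saw the same face "A", so their trajectories merge.
tracks = {1: [(0, 10, 10), (1, 12, 11)], 2: [(2, 14, 12)], 3: [(0, 50, 50)]}
faces = {1: {"A"}, 2: {"A", "B"}, 3: {"C"}}
print(merge_trajectories(tracks, faces))  # [[(0, 10, 10), (1, 12, 11), (2, 14, 12)], [(0, 50, 50)]]
```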
-
4.
Publication No.: US10909356B2
Publication Date: 2021-02-02
Application No.: US16356924
Application Date: 2019-03-18
Inventor: Yicong Liang , Chengjie Wang , Shaoxin Li , Yandan Zhao , Jilin Li
Abstract: A facial tracking method can include receiving a first vector of a first frame and second vectors of second frames that are prior to the first frame in a video. The first vector is formed by coordinates of first facial feature points in the first frame and is determined based on a facial registration method. Each second vector is formed by coordinates of second facial feature points in the respective second frame and was previously determined based on the facial tracking method. A second vector of the first frame is determined according to a fitting function based on the second vectors of the second frames. The fitting function has a set of coefficients that are determined by solving a problem of minimizing a function formulated based on a difference between the second vector and the first vector of the first frame, and on a square sum of the coefficients.
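One reading of "a difference ... and a square sum of the coefficients" is a ridge-style fit; the closed-form solution and the regularization weight below are assumptions for this sketch:

```python
import numpy as np

def fit_frame_vector(prev_vectors, registration_vector, lam=1.0):
    """Smooth the current frame's landmark vector with a ridge-style fit.

    prev_vectors:        (k, d) array of landmark vectors tracked in the k
                         previous frames (the "second vectors").
    registration_vector: (d,) landmark vector from facial registration on the
                         current frame (the "first vector").
    """
    X = np.asarray(prev_vectors, dtype=float)         # (k, d)
    y = np.asarray(registration_vector, dtype=float)  # (d,)
    # Coefficients c minimize ||c @ X - y||^2 + lam * ||c||^2.
    c = np.linalg.solve(X @ X.T + lam * np.eye(X.shape[0]), X @ y)
    return c @ X                                      # fitted landmark vector for the current frame
```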
-
5.
Publication No.: US20200342214A1
Publication Date: 2020-10-29
Application No.: US16927812
Application Date: 2020-07-13
Inventor: Anping LI , Shaoxin Li , Chao Chen , Pengchen Shen , Jilin Li
Abstract: This application relates to a face recognition method performed at a computer server. After obtaining a to-be-recognized face image, the server inputs the to-be-recognized face image into a classification model and obtains a recognition result of the to-be-recognized face image through the classification model. The classification model is obtained by inputting a training sample marked with class information into the classification model, obtaining an output result for the training sample, calculating a loss of the classification model in the training process according to the output result, the class information, and the model parameters of the classification model, and performing back-propagation optimization on the classification model according to the loss.
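A hedged PyTorch sketch of one training step in the spirit of the abstract; the network shape, the L2 parameter term standing in for the parameter-dependent loss, and all hyperparameters are placeholder assumptions:

```python
import torch
import torch.nn as nn

# Placeholder classifier: input size, depth, and class count are arbitrary here.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 512), nn.ReLU(), nn.Linear(512, 1000))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels, weight_decay=1e-4):
    """One optimization step: the loss depends on the output, the class labels,
    and the model parameters (represented here by a plain L2 term)."""
    logits = model(images)                            # output result of the training samples
    loss = criterion(logits, labels)                  # class-information term
    loss = loss + weight_decay * sum((p ** 2).sum() for p in model.parameters())  # parameter term
    optimizer.zero_grad()
    loss.backward()                                   # back propagation
    optimizer.step()                                  # optimization
    return loss.item()

# Toy usage with random data.
print(train_step(torch.randn(4, 3, 112, 112), torch.randint(0, 1000, (4,))))
```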
-
6.
Publication No.: US10706263B2
Publication Date: 2020-07-07
Application No.: US16409341
Application Date: 2019-05-10
Inventor: Chengjie Wang , Jilin Li , Feiyue Huang , Kekai Sheng , Weiming Dong
IPC: G06K9/00
Abstract: Disclosed are an evaluation method and an evaluation device for a facial key point positioning result. In some embodiments, the evaluation method includes: acquiring a facial image and one or more positioning result coordinates of a key point of the facial image; performing a normalization process on the positioning result coordinates and an average facial model to obtain a normalized facial image; and extracting a facial feature value of the normalized facial image and calculating an evaluation result based on the facial feature value and a weight vector.
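A sketch of one possible pipeline matching the abstract's three steps; the Procrustes normalization, the toy pixel/shape feature, and the linear scoring are illustrative choices, since the abstract does not specify the actual feature or weights:

```python
import numpy as np

def evaluate_landmarks(image_gray, landmarks, mean_shape, weight_vector, bias=0.0):
    """Score a key point positioning result, roughly following the abstract.

    image_gray: HxW grayscale image; landmarks, mean_shape: (N, 2) coordinates.
    weight_vector must have length 3 * N for the toy feature used below.
    """
    # 1) Normalize: similarity-align the landmarks to the average facial model.
    src = landmarks - landmarks.mean(axis=0)
    dst = mean_shape - mean_shape.mean(axis=0)
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    U, _, Vt = np.linalg.svd(src.T @ dst)
    norm_pts = scale * src @ (U @ Vt) + mean_shape.mean(axis=0)

    # 2) Toy feature: image intensity at each landmark plus the normalized shape.
    h, w = image_gray.shape
    xs = np.clip(landmarks[:, 0].astype(int), 0, w - 1)
    ys = np.clip(landmarks[:, 1].astype(int), 0, h - 1)
    features = np.concatenate([image_gray[ys, xs].astype(float), norm_pts.ravel()])

    # 3) Linear evaluation result.
    return float(features @ weight_vector + bias)
```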
-
7.
Publication No.: US12288422B2
Publication Date: 2025-04-29
Application No.: US17585205
Application Date: 2022-01-26
Inventor: Jia Meng , Xinyao Wang , Shouhong Ding , Jilin Li
Abstract: A method of living body detection includes generating a detection interface in response to receipt of a living body detection request for verification of a detection object. The detection interface includes a first region with a viewing target for the detection object to track; the position of the first region in the detection interface changes during a detection time according to a first sequence of position changes of the first region. The method also includes receiving a video stream that is captured during the detection time, determining a second sequence of sight line changes of the detection object based on the video stream, and determining that the detection object is a living body at least partially based on the second sequence of sight line changes of the detection object matching the first sequence of position changes of the first region.
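A minimal sketch of the sequence-matching decision; the discrete left/right/top/bottom encoding and the 0.75 tolerance are assumptions, and the gaze estimator that produces the second sequence from the video is not shown:

```python
import random

POSITIONS = ["left", "right", "top", "bottom"]   # assumed discrete encoding of region positions

def generate_position_sequence(length=4):
    """First sequence: where the first region (and its viewing target) jumps
    to in the detection interface during the detection time."""
    return [random.choice(POSITIONS) for _ in range(length)]

def is_living_body(position_seq, gaze_seq, min_match_ratio=0.75):
    """Decide liveness by matching the sight-line-change sequence estimated
    from the video stream against the position-change sequence."""
    if len(gaze_seq) != len(position_seq):
        return False
    matches = sum(p == g for p, g in zip(position_seq, gaze_seq))
    return matches / len(position_seq) >= min_match_ratio

# Toy usage: a perfect match returns True.
seq = generate_position_sequence()
print(seq, is_living_body(seq, seq))
```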
-
8.
Publication No.: US12197640B2
Publication Date: 2025-01-14
Application No.: US17977646
Application Date: 2022-10-31
Inventor: Keke He , Zhengkai Jiang , Jinlong Peng , Yang Yi , Xiaoming Yu , Juanhui Tu , Yi Zhou , Yabiao Wang , Ying Tai , Chengjie Wang , Jilin Li , Feiyue Huang
Abstract: An image gaze correction method, apparatus, electronic device, computer-readable storage medium, and computer program product related to the field of artificial intelligence technologies are provided. The image gaze correction method includes: acquiring an eye image from an image; performing feature extraction processing on the eye image to obtain feature information of the eye image; performing, based on the feature information and a target gaze direction, gaze correction processing on the eye image to obtain an initially corrected eye image and an eye contour mask; performing, by using the eye contour mask, adjustment processing on the initially corrected eye image to obtain a corrected eye image; and generating a gaze corrected image based on the corrected eye image.
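A small sketch of the mask-guided adjustment step; the blending formula is a common reading of "adjustment processing ... by using the eye contour mask", and the correction network that produces the inputs is not reproduced:

```python
import numpy as np

def adjust_with_contour_mask(eye_image, corrected_eye, contour_mask):
    """Keep the corrected pixels inside the predicted eye contour and fall
    back to the original pixels outside it.

    eye_image, corrected_eye: HxWx3 float arrays in [0, 1];
    contour_mask: HxW (or HxWx1) float array in [0, 1].
    """
    m = contour_mask[..., None] if contour_mask.ndim == 2 else contour_mask
    return m * corrected_eye + (1.0 - m) * eye_image

# Toy usage with random data.
out = adjust_with_contour_mask(np.random.rand(32, 48, 3), np.random.rand(32, 48, 3), np.random.rand(32, 48))
print(out.shape)  # (32, 48, 3)
```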
-
9.
Publication No.: US11928893B2
Publication Date: 2024-03-12
Application No.: US17530428
Application Date: 2021-11-18
Inventor: Donghao Luo , Yabiao Wang , Chenyang Guo , Boyuan Deng , Chengjie Wang , Jilin Li , Feiyue Huang , Yongjian Wu
IPC: G06V20/40 , G06F18/213 , G06T7/246 , G06V40/20 , G06N3/02
CPC classification number: G06V40/20 , G06F18/213 , G06T7/246 , G06N3/02 , G06T2207/20081
Abstract: An action recognition method includes: obtaining original feature submaps of each of temporal frames on a plurality of convolutional channels by using a multi-channel convolutional layer; calculating, by using each of the temporal frames as a target temporal frame, motion information weights of the target temporal frame on the convolutional channels according to original feature submaps of the target temporal frame and original feature submaps of a next temporal frame, and obtaining motion information feature maps of the target temporal frame on the convolutional channels according to the motion information weights; performing temporal convolution on the motion information feature maps of the target temporal frame to obtain temporal motion feature maps of the target temporal frame; and recognizing an action type of a moving object in image data of the target temporal frame according to the temporal motion feature maps of the target temporal frame on the convolutional channels.
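A hedged PyTorch sketch of the motion-weighting and temporal-convolution steps; the pooling-plus-sigmoid weight, the averaging kernel, and the spatial pooling before the 1-D convolution are simplifications of my own, not the patent's layers:

```python
import torch
import torch.nn.functional as F

def motion_weighted_features(feats):
    """feats: (T, C, H, W) original feature submaps of T temporal frames.

    For each target frame, a per-channel motion information weight is derived
    from the difference with the next temporal frame, then used to reweight
    that frame's feature submaps into motion information feature maps.
    """
    diff = feats[1:] - feats[:-1]                                    # frame-to-next-frame differences
    diff = torch.cat([diff, torch.zeros_like(feats[:1])], dim=0)     # last frame has no successor
    weights = torch.sigmoid(diff.mean(dim=(2, 3)))                   # (T, C) motion information weights
    return feats * weights[:, :, None, None]                         # (T, C, H, W)

def temporal_conv(motion_feats, kernel_size=3):
    """Depthwise temporal convolution over the frame axis; spatial positions
    are pooled for brevity."""
    T, C, H, W = motion_feats.shape
    x = motion_feats.mean(dim=(2, 3)).t().unsqueeze(0)               # (1, C, T)
    weight = torch.ones(C, 1, kernel_size) / kernel_size             # simple averaging kernel per channel
    return F.conv1d(x, weight, padding=kernel_size // 2, groups=C)   # (1, C, T) temporal motion features

# Toy usage: 8 frames, 16 channels, 14x14 feature submaps.
out = temporal_conv(motion_weighted_features(torch.randn(8, 16, 14, 14)))
print(out.shape)  # torch.Size([1, 16, 8])
```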
-
10.
Publication No.: US11436739B2
Publication Date: 2022-09-06
Application No.: US16922196
Application Date: 2020-07-07
Inventor: Yabiao Wang , Yanhao Ge , Zhenye Gan , Yuan Huang , Changyou Deng , Yafeng Zhao , Feiyue Huang , Yongjian Wu , Xiaoming Huang , Xiaolong Liang , Chengjie Wang , Jilin Li
Abstract: The present disclosure describes a video image processing method and apparatus, a computer-readable medium, and an electronic device, relating to the field of image processing technologies. The method includes determining, by a device, a target-object region in a current frame in a video, the device including a memory storing instructions and a processor in communication with the memory. The method also includes determining, by the device, a target-object tracking image in a next frame corresponding to the target-object region, and sequentially performing, by the device, a plurality of sets of convolution processing on the target-object tracking image to determine a target-object region in the next frame. A quantity of convolutions in the first set of convolution processing is less than a quantity of convolutions in any other set of convolution processing.
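A sketch of sequential convolution sets where the first set deliberately has fewer convolutions than the later sets, mirroring the stated constraint; channel sizes and the box-regression head are placeholder choices:

```python
import torch
import torch.nn as nn

def conv_set(in_ch, out_ch, num_convs):
    """One 'set of convolution processing': `num_convs` 3x3 convolutions + ReLU."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1), nn.ReLU()]
    return nn.Sequential(*layers)

class TrackerHead(nn.Module):
    """First set has one convolution; later sets have two each."""
    def __init__(self):
        super().__init__()
        self.sets = nn.Sequential(
            conv_set(3, 16, num_convs=1),    # first set: fewest convolutions
            nn.MaxPool2d(2),
            conv_set(16, 32, num_convs=2),
            nn.MaxPool2d(2),
            conv_set(32, 64, num_convs=2),
        )
        self.box = nn.Linear(64, 4)          # (x, y, w, h) of the target-object region in the next frame

    def forward(self, tracking_image):       # (N, 3, H, W) crop taken around the previous region
        feat = self.sets(tracking_image).mean(dim=(2, 3))   # global average pooling -> (N, 64)
        return self.box(feat)

# Toy usage with a 64x64 tracking crop.
print(TrackerHead()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 4])
```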