-
Publication (Announcement) No.: US10909989B2
Publication (Announcement) Date: 2021-02-02
Application No.: US16213421
Filing Date: 2018-12-07
Inventor: Wei Li, Binghua Qian, Xingming Jin, Ke Li, Fuzhang Wu, Yongjian Wu, Feiyue Huang
Abstract: An identity vector generation method is provided. The method includes obtaining to-be-processed speech data. Corresponding acoustic features are extracted from the to-be-processed speech data. A posterior probability that each of the acoustic features belongs to each Gaussian distribution component in a speaker background model is calculated to obtain a statistic. The statistic is mapped to a statistic space to obtain a reference statistic, the statistic space being built from statistics of speech samples that exceed a threshold speech duration. A corrected statistic is determined according to the calculated statistic and the reference statistic, and an identity vector is generated according to the corrected statistic.
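The "statistic" in this abstract corresponds to the zeroth- and first-order statistics accumulated from per-frame GMM posteriors. Below is a minimal NumPy sketch of that accumulation step, assuming a diagonal-covariance speaker background model; the mapping into the statistic space and the correction step are not shown, and the function name is a placeholder.

```python
import numpy as np

def collect_statistics(features, means, covs, weights):
    """Zeroth/first-order statistics of acoustic features under a GMM
    speaker background model with diagonal covariances (an assumption).
    features: (T, D), means: (C, D), covs: (C, D), weights: (C,)."""
    T, D = features.shape
    C = means.shape[0]
    log_post = np.zeros((T, C))
    for c in range(C):
        diff = features - means[c]
        log_like = -0.5 * np.sum(diff * diff / covs[c] + np.log(2 * np.pi * covs[c]), axis=1)
        log_post[:, c] = np.log(weights[c] + 1e-12) + log_like
    # Normalize to per-frame posterior probabilities over the components.
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    N = post.sum(axis=0)       # zeroth-order statistic, shape (C,)
    F = post.T @ features      # first-order statistic, shape (C, D)
    return N, F
```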
-
Publication (Announcement) No.: US10607066B2
Publication (Announcement) Date: 2020-03-31
Application No.: US15462423
Filing Date: 2017-03-17
Inventor: Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Hui Ni, Ruixin Zhang, Guofu Tan
Abstract: The present disclosure discloses a living body identification method, an information generation method, and a terminal, belonging to the field of biometric feature recognition. The method includes: providing lip language prompt information, the lip language prompt information including at least two target characters, the at least two target characters being at least one of characters with the same lip shape, characters with opposite lip shapes, or characters whose lip shape similarity falls within a preset range; collecting at least two frame pictures; detecting whether lip changes of a to-be-identified object in the at least two frame pictures meet a preset condition when the to-be-identified object reads the at least two target characters; and determining that the to-be-identified object is a living body if the preset condition is met. The present disclosure resolves a problem in the related technology in which a terminal may incorrectly determine that a to-be-identified object is not a living body even when the object performs the operation indicated by the lip language prompt information, so that the terminal can accurately determine whether the to-be-identified object is a living body and improve determination accuracy.
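As an illustration of the frame-picture check described above, the sketch below derives a mouth-openness value from 2D lip landmarks in each frame and treats the "preset condition" as that value varying by at least a fixed amount while the target characters are read. The landmark indices, the threshold, and the condition itself are assumptions, not the patented method.

```python
import numpy as np

def mouth_openness(lip_landmarks):
    """Ratio of vertical mouth opening to mouth width from 2D lip landmarks.
    Assumes landmarks[0]/landmarks[1] are the mouth corners and
    landmarks[2]/landmarks[3] are the upper/lower inner-lip midpoints."""
    width = np.linalg.norm(lip_landmarks[0] - lip_landmarks[1])
    height = np.linalg.norm(lip_landmarks[2] - lip_landmarks[3])
    return height / (width + 1e-6)

def lip_change_meets_condition(frames_landmarks, min_range=0.15):
    """frames_landmarks: list of per-frame lip landmark arrays.
    The preset condition here is simply that the openness varies enough."""
    openness = np.array([mouth_openness(lm) for lm in frames_landmarks])
    return openness.max() - openness.min() >= min_range
```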
-
Publication (Announcement) No.: US09977997B2
Publication (Announcement) Date: 2018-05-22
Application No.: US15486102
Filing Date: 2017-04-12
Inventor: Xiang Bai, Feiyue Huang, Xiaowei Guo, Cong Yao, Baoguang Shi
CPC classification number: G06K9/6267, G06K9/42, G06K9/4604, G06K9/6256, G06K9/627
Abstract: Disclosed are a training method and apparatus for a CNN model, belonging to the field of image recognition. The method comprises: performing a convolution operation, a maximal pooling operation, and a horizontal pooling operation on training images to obtain second feature images; determining feature vectors according to the second feature images; processing the feature vectors to obtain category probability vectors; calculating a category error according to the category probability vectors and an initial category; adjusting model parameters based on the category error; continuing the parameter adjustment process with the adjusted model parameters; and using the model parameters obtained when the number of iterations reaches a preset number as the parameters of the trained CNN model. After the convolution operation and the maximal pooling operation at each convolution layer, a horizontal pooling operation is performed. Because the horizontal pooling operation extracts, from the feature images, feature images identifying features along the horizontal direction of the image, the trained CNN model can recognize an image of any size, which expands the applicable range of the trained CNN model in image recognition.
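A minimal NumPy sketch of the horizontal pooling step described above: each channel of a feature map is max-pooled along its width, so the pooled descriptor no longer depends on the input image width. The pooling statistic and the concatenation order used in the patent may differ from this sketch.

```python
import numpy as np

def horizontal_pooling(feature_maps):
    """feature_maps: (num_channels, height, width) output of a convolution +
    maximal pooling stage.  Max-pool along the width so each row of each
    channel contributes one value, independent of the input width."""
    return feature_maps.max(axis=2)          # shape (num_channels, height)

def feature_vector(feature_maps_per_level):
    """Concatenate horizontally pooled features from several convolution
    levels into the feature vector fed to the classifier."""
    return np.concatenate([horizontal_pooling(fm).ravel()
                           for fm in feature_maps_per_level])
```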
-
Publication (Announcement) No.: US20170316598A1
Publication (Announcement) Date: 2017-11-02
Application No.: US15652009
Filing Date: 2017-07-17
Inventor: Chengjie Wang, Jilin Li, Feiyue Huang, Lei Zhang
CPC classification number: G06T15/10, G06K9/00, G06T7/73, G06T15/04, G06T19/20, G06T2207/30201, G06T2210/44, G06T2215/16, G06T2219/2021
Abstract: A 3D human face reconstruction method and apparatus, and a server are provided. In some embodiments, the method includes: determining feature points on an acquired 2D human face image; determining posture parameters of the human face according to the feature points, and adjusting the posture of a universal 3D human face model according to the posture parameters; determining points on the universal 3D human face model that correspond to the feature points, and adjusting the corresponding points that are in a sheltered (occluded) state to obtain a preliminary 3D human face model; and performing deformation adjustment on the preliminary 3D human face model and then texture mapping on the deformed 3D human face model to obtain the final 3D human face.
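As one way the posture-parameter step could be realized, the sketch below fits a weak-perspective (scaled orthographic) projection from the 3D model's feature points to the detected 2D feature points by least squares; this particular fitting method is an assumption and not necessarily the one claimed in the patent.

```python
import numpy as np

def fit_weak_perspective(points_3d, points_2d):
    """Solve for a 2x3 projection matrix P and 2D offset t such that
    points_2d ≈ points_3d @ P.T + t (least squares).
    points_3d: (N, 3) model feature points, points_2d: (N, 2) image feature points."""
    n = points_3d.shape[0]
    A = np.hstack([points_3d, np.ones((n, 1))])        # (N, 4)
    # Solve A @ X ≈ points_2d for X (4 x 2): rotation/scale rows plus translation.
    X, *_ = np.linalg.lstsq(A, points_2d, rcond=None)
    P, t = X[:3].T, X[3]                                # P: (2, 3), t: (2,)
    return P, t
```

The posture parameters could then be read off P, and re-projecting the model points would indicate which of them are sheltered; those steps are outside this sketch.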
-
Publication (Announcement) No.: US20150172358A1
Publication (Announcement) Date: 2015-06-18
Application No.: US14626671
Filing Date: 2015-02-19
Inventor: Yuan Huang, Yongjian Wu, Feiyue Huang
CPC classification number: H04L67/02, G06F16/954, H04L67/06, H04L67/32, H04N5/765
Abstract: The present invention relates to an image uploading method, system, and client. The image uploading method comprises the following steps: acquiring an image uploading request; labeling the sequence information of the images; creating multiple threads according to the uploading request and the sequence information; and concurrently uploading the images on the created threads. The image uploading method, system, and client employ a multi-thread concurrent uploading mode to upload a plurality of images simultaneously, and can start uploading a second image before the uploading of a first image is completed, thus improving network resource utilization and saving overall time.
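A minimal sketch of the multi-thread concurrent upload described above, using Python's standard thread pool. Here upload_one is a placeholder for the actual HTTP upload call, and the sequence information is simply each image's index within the request.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_one(index, path):
    """Placeholder: send one image together with its sequence index.
    A real client would call an HTTP library with the service's upload URL."""
    print(f"uploading #{index}: {path}")
    return index

def upload_images(image_paths, max_threads=4):
    # Label every image with its sequence information (its position in the list),
    # then upload concurrently; a later image may start before an earlier one finishes.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(upload_one, i, p) for i, p in enumerate(image_paths)]
        return [f.result() for f in futures]
```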
-
Publication (Announcement) No.: US10817708B2
Publication (Announcement) Date: 2020-10-27
Application No.: US16297565
Filing Date: 2019-03-08
Inventor: Chengjie Wang, Hui Ni, Yandan Zhao, Yabiao Wang, Shouhong Ding, Shaoxin Li, Ling Zhao, Jilin Li, Yongjian Wu, Feiyue Huang, Yicong Liang
IPC: G06K9/00
Abstract: A facial tracking method is provided. The method includes: obtaining, from a video stream, an image that currently needs to be processed as a current image frame; and obtaining coordinates of facial key points in a previous image frame and a confidence level corresponding to the previous image frame. The method also includes calculating coordinates of facial key points in the current image frame according to the coordinates of the facial key points in the previous image frame when the confidence level is higher than a preset threshold; and performing multi-face recognition on the current image frame according to the coordinates of the facial key points in the current image frame. The method also includes calculating a confidence level of the coordinates of the facial key points in the current image frame, and returning to process a next frame until recognition on all image frames is completed.
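The per-frame control flow described above can be sketched as follows. The three callables stand in for a face detector, a keypoint regressor that refines keypoints from an initial guess, and a confidence estimator; they and the threshold value are assumptions supplied by the caller.

```python
def track_faces(frames, detect_faces, refine_keypoints, estimate_confidence,
                threshold=0.8):
    """Confidence-gated facial keypoint tracking over frames of a video stream."""
    prev_keypoints, prev_conf = None, 0.0
    results = []
    for frame in frames:
        if prev_keypoints is not None and prev_conf > threshold:
            # Previous frame was confident: derive current keypoints from its keypoints.
            keypoints = refine_keypoints(frame, prev_keypoints)
        else:
            # Otherwise detect faces and locate the keypoints from scratch.
            keypoints = detect_faces(frame)
        conf = estimate_confidence(frame, keypoints)
        results.append((keypoints, conf))
        prev_keypoints, prev_conf = keypoints, conf
    return results
```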
-
Publication (Announcement) No.: US10664693B2
Publication (Announcement) Date: 2020-05-26
Application No.: US15950929
Filing Date: 2018-04-11
Inventor: Feiyue Huang, Jilin Li, Chengjie Wang
Abstract: Aspects of the disclosure provide a method for adding a target contact to a user's friend list in a social network. A target image of a human body part of the target contact can be received from a user terminal. A target biological feature can be extracted from the target image. Whether the target biological feature matches a reference biological feature of a plurality of prestored reference biological features can be determined. A social account associated with the determined reference biological feature that matches the target biological feature may be determined, and added to the user's friend list.
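As an illustration of the matching step, the sketch below compares the target biological feature against the prestored reference features with cosine similarity and returns the social account tied to the best match above a threshold. The similarity measure and threshold are assumptions, not the patent's specification.

```python
import numpy as np

def match_account(target_feature, reference_features, accounts, threshold=0.7):
    """reference_features: (N, D) prestored reference biological features,
    accounts: list of N social accounts aligned with the rows.
    Returns the matching account, or None if no similarity passes the threshold."""
    refs = reference_features / np.linalg.norm(reference_features, axis=1, keepdims=True)
    target = target_feature / np.linalg.norm(target_feature)
    sims = refs @ target                      # cosine similarity per reference
    best = int(np.argmax(sims))
    return accounts[best] if sims[best] >= threshold else None
```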
-
Publication (Announcement) No.: US10438077B2
Publication (Announcement) Date: 2019-10-08
Application No.: US15728178
Filing Date: 2017-10-09
Inventor: Chengjie Wang, Jilin Li, Feiyue Huang, Yongjian Wu
Abstract: A face liveness detection method includes outputting a prompt to complete one or more specified actions in sequence within a specified time period, obtaining a face video, detecting a reference face image frame in the face video using a face detection method, locating a facial keypoint in the reference face image frame, tracking the facial keypoint in one or more subsequent face image frames, determining a state parameter of one of the one or more specified actions using a continuity analysis method according to the facial keypoint, and determining whether the one of the one or more specified actions is completed according to a continuity of the state parameter.
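A sketch of the state-parameter continuity check, assuming the state parameter is a single scalar per frame (for example a mouth-opening ratio for an "open your mouth" prompt). The level thresholds and the maximum per-frame jump are placeholders, not values from the patent.

```python
import numpy as np

def action_completed(state_values, open_level=0.4, closed_level=0.15, max_jump=0.2):
    """state_values: per-frame scalar state parameter of the prompted action,
    derived from the tracked facial keypoints.  The action counts as completed
    only if the state reaches the 'open' level, returns to the 'closed' level,
    and changes continuously (no implausible jumps) between frames."""
    s = np.asarray(state_values, dtype=float)
    continuous = np.all(np.abs(np.diff(s)) <= max_jump)
    reached_open = s.max() >= open_level
    returned_closed = s[-1] <= closed_level
    return bool(continuous and reached_open and returned_closed)
```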
-
Publication (Announcement) No.: US10395095B2
Publication (Announcement) Date: 2019-08-27
Application No.: US15703826
Filing Date: 2017-09-13
Inventor: Shouhong Ding, Jilin Li, Chengjie Wang, Feiyue Huang, Yongjian Wu, Guofu Tan
Abstract: Face model matrix training method, apparatus, and storage medium are provided. The method includes: obtaining a face image library, the face image library including k groups of face images, and each group of face images including at least one face image of at least one person, k>2, and k being an integer; separately parsing each group of the k groups of face images, and calculating a first matrix and a second matrix according to parsing results, the first matrix being an intra-group covariance matrix of facial features of each group of face images, and the second matrix being an inter-group covariance matrix of facial features of the k groups of face images; and training face model matrices according to the first matrix and the second matrix.
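The two matrices named above can be sketched in NumPy as a pooled intra-group covariance and a covariance of the group means; the exact centering and normalization used in the patent may differ from this sketch.

```python
import numpy as np

def face_model_matrices(groups):
    """groups: list of k arrays, each of shape (n_i, D), holding the facial
    features of one group of face images.  Returns (S_w, S_b): pooled
    intra-group covariance and inter-group covariance of the group means."""
    group_means = np.array([g.mean(axis=0) for g in groups])        # (k, D)
    overall_mean = np.concatenate(groups).mean(axis=0)
    D = group_means.shape[1]
    S_w = np.zeros((D, D))
    for g, m in zip(groups, group_means):
        centered = g - m
        S_w += centered.T @ centered
    S_w /= sum(len(g) for g in groups)
    centered_means = group_means - overall_mean
    S_b = centered_means.T @ centered_means / len(groups)
    return S_w, S_b
```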
-
Publication (Announcement) No.: US10325181B2
Publication (Announcement) Date: 2019-06-18
Application No.: US15703027
Filing Date: 2017-09-13
Inventor: Kun Xu, Xiaowei Guo, Feiyue Huang, Ruixin Zhang, Juhong Wang, Shimin Hu, Bin Liu
Abstract: An image classification method is provided. The method includes: inputting a to-be-classified image into a plurality of neural network models; obtaining data output by multiple non-input layers specified by each neural network model to generate a plurality of image features corresponding to the plurality of neural network models; respectively inputting the plurality of corresponding image features into linear classifiers, each of the linear classifiers being trained by one of the plurality of neural network models for determining whether an image belongs to a preset class; obtaining, using each neural network model, a corresponding probability that the to-be-classified image comprises an object image of the preset class; and determining, according to each obtained probability, whether the to-be-classified image includes the object image of the preset class.
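A sketch of the decision step: each neural network model contributes a feature extractor (the concatenated outputs of its specified non-input layers) and a matching linear classifier, and the per-model probabilities are combined to decide whether the image contains an object of the preset class. Simple averaging of the probabilities is an assumption; the patent may combine them differently.

```python
import numpy as np

def classify_with_ensemble(image, feature_extractors, linear_classifiers, threshold=0.5):
    """feature_extractors: callables mapping an image to a model's feature
    (outputs of its specified non-input layers).  linear_classifiers: matching
    callables returning P(image contains the preset class) from that feature."""
    probs = [clf(extract(image))
             for extract, clf in zip(feature_extractors, linear_classifiers)]
    # Average the per-model probabilities and compare with the decision threshold.
    return float(np.mean(probs)) >= threshold, probs
```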