-
Publication No.: US20220138455A1
Publication Date: 2022-05-05
Application No.: US17343575
Filing Date: 2021-06-09
Applicant: Pinscreen, Inc.
Inventors: Koki Nagano, Huiwen Luo, Zejian Wang, Jaewoo Seo, Liwen Hu, Lingyu Wei, Hao Li
Abstract: A system, method, and apparatus for generating a normalization of a single two-dimensional image of an unconstrained human face. The system receives the single two-dimensional image of the unconstrained human face; generates an undistorted face by removing perspective distortion via a perspective undistortion network; generates an evenly lit face by normalizing the lighting of the undistorted face via a lighting translation network; and generates a frontalized, neutral-expression face from the evenly lit face via an expression neutralization network.
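The abstract describes a three-stage pipeline in which each network's output feeds the next. A minimal sketch of that data flow, using identity/rescaling placeholders where the learned networks would run (the function names and behaviors are illustrative assumptions, not the patented models):

```python
import numpy as np

def perspective_undistortion_network(image):
    # Placeholder: the real network would remove perspective distortion.
    return image.copy()

def lighting_translation_network(image):
    # Placeholder: rescale so the mean intensity is 0.5, standing in for
    # the learned lighting normalization.
    return image / max(float(image.mean()), 1e-8) * 0.5

def expression_neutralization_network(image):
    # Placeholder: the real network would frontalize the face and
    # neutralize its expression.
    return image.copy()

def normalize_face(image):
    """Chain the three stages in the order the abstract describes."""
    undistorted = perspective_undistortion_network(image)
    evenly_lit = lighting_translation_network(undistorted)
    return expression_neutralization_network(evenly_lit)

face = np.random.rand(64, 64, 3).astype(np.float32)
result = normalize_face(face)
```

The point of the sketch is only the staging: each stage consumes the previous stage's normalized output, so the final image is simultaneously undistorted, evenly lit, and expression-neutral.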
-
Publication No.: US20200051303A1
Publication Date: 2020-02-13
Application No.: US16430204
Filing Date: 2019-06-03
Applicant: Pinscreen, Inc.
Inventors: Hao Li, Koki Nagano, Jaewoo Seo, Lingyu Wei, Jens Fursund
Abstract: A system and method for generating real-time facial animation is disclosed. The system pre-generates a series of key expression images from a single neutral image using a pre-trained generative adversarial neural network. The key expression images are used to generate a set of FACS expressions and associated textures, which may be applied to a three-dimensional model to generate facial animation. The FACS expressions and textures may be provided to a mobile device, enabling it to render convincing three-dimensional avatars in real time, with low processor load, through a blending process that uses the predetermined FACS expressions and textures.
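The blending step the abstract relies on can be illustrated as a linear blendshape-style mix: the neutral image plus weighted deltas of the pre-generated key expressions. This is a generic sketch of such blending, not the patented implementation; the expression names and weights are hypothetical:

```python
import numpy as np

def blend_expressions(neutral, key_expressions, weights):
    """Mix pre-generated key expressions into the neutral image.

    neutral:          HxWxC array for the neutral face.
    key_expressions:  dict name -> HxWxC array (pre-generated offline).
    weights:          dict name -> float blend weight.
    """
    out = neutral.astype(np.float64).copy()
    for name, w in weights.items():
        # Add the weighted difference between each key expression
        # and the neutral face (classic delta blendshape formulation).
        out += w * (key_expressions[name].astype(np.float64) - out * 0 - neutral)
    return out

neutral = np.zeros((2, 2, 3))
keys = {"smile": np.ones((2, 2, 3))}
blended = blend_expressions(neutral, keys, {"smile": 0.5})
```

Because the key expressions are fixed ahead of time, per-frame work reduces to a few weighted array additions, which is what makes the approach cheap enough for a mobile device.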
-
Publication No.: US20170243387A1
Publication Date: 2017-08-24
Application No.: US15438546
Filing Date: 2017-02-21
Applicant: Pinscreen, Inc.
Inventors: Hao Li, Joseph J. Lim, Kyle Olszewski
CPC Classification: G06T13/40, G06K9/00281, G06K9/00315, G06K9/00744, G06K9/6201
Abstract: There is disclosed a system and method for training a set of expression and neutral convolutional neural networks using a single performance mapped to a set of known phonemes and visemes in the form of predetermined sentences and facial expressions. Subsequent training of the convolutional neural networks can then occur using temporal data derived from audio data within the original performance, mapped to a set of professionally created three-dimensional animations. Thereafter, with sufficient training, the expression and neutral convolutional neural networks can generate facial animations from facial image data in real time without individual-specific training.
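The training described above anchors on a known phoneme-to-viseme correspondence. A toy lookup along those lines, using a common viseme grouping as an assumption (the patent's actual phoneme and viseme sets are not specified in the abstract):

```python
# Illustrative phoneme-to-viseme grouping (ARPAbet-style phoneme labels).
# This is a conventional example mapping, not the patented one.
PHONEME_TO_VISEME = {
    "AA": "jaw_open", "AE": "jaw_open",
    "B": "lips_closed", "M": "lips_closed", "P": "lips_closed",
    "F": "lip_on_teeth", "V": "lip_on_teeth",
    "OW": "lips_round", "UW": "lips_round",
}

def visemes_for(phonemes):
    """Map a phoneme sequence to target mouth shapes, defaulting to
    'neutral' for phonemes outside the table."""
    return [PHONEME_TO_VISEME.get(p, "neutral") for p in phonemes]

shapes = visemes_for(["B", "AA", "F", "ZZ"])
```

A fixed mapping like this is what lets a single scripted performance supply labeled mouth-shape targets for the initial supervised training pass, before the temporal audio-derived data is introduced.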