- Patent title: Style-aware audio-driven talking head animation from a single image
- Application number: US16788551
- Filing date: 2020-02-12
- Publication number: US11417041B2
- Publication date: 2022-08-16
- Inventors: Dingzeyu Li, Yang Zhou, Jose Ignacio Echevarria Vallespi, Elya Shechtman
- Applicant: ADOBE INC.
- Applicant address: San Jose, CA, US
- Assignee: ADOBE INC.
- Current assignee: ADOBE INC.
- Current assignee address: San Jose, CA, US
- Attorney/agent: Shook, Hardy & Bacon L.L.P.
- Primary classification: G06T13/20
- IPC classifications: G06T13/20; G06T13/40; G06T17/20

Abstract:
Embodiments of the present invention provide systems, methods, and computer storage media for generating an animation of a talking head from an input audio signal of speech and a representation (such as a static image) of a head to animate. Generally, a neural network can learn to predict a set of 3D facial landmarks that can be used to drive the animation. In some embodiments, the neural network can learn to detect different speaking styles in the input speech and account for the different speaking styles when predicting the 3D facial landmarks. Generally, template 3D facial landmarks can be identified or extracted from the input image or other representation of the head, and the template 3D facial landmarks can be used with successive windows of audio from the input speech to predict 3D facial landmarks and generate a corresponding animation with plausible 3D effects.
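The sketch below is illustrative only: a minimal PyTorch model in the spirit of the abstract, mapping a sliding window of audio features plus template 3D facial landmarks to predicted per-frame 3D landmarks. The module names, feature dimensions, landmark count, and the style-embedding design are assumptions for illustration, not the architecture claimed in the patent.

```python
# Hypothetical sketch of an audio-window + template-landmark predictor.
# Not the patented method; dimensions and design choices are assumed.
import torch
import torch.nn as nn


class AudioToLandmarks(nn.Module):
    def __init__(self, n_audio_feats=28, n_landmarks=68, style_dim=16):
        super().__init__()
        # Encode a sliding window of audio features (e.g. MFCC frames).
        self.audio_encoder = nn.LSTM(
            input_size=n_audio_feats, hidden_size=128, batch_first=True
        )
        # A learned embedding intended to capture speaking style from the audio.
        self.style_head = nn.Linear(128, style_dim)
        # Template 3D landmarks (n_landmarks x 3), flattened and encoded once per head.
        self.template_encoder = nn.Linear(n_landmarks * 3, 64)
        # Decoder predicts per-window landmark displacements from the
        # concatenated audio, style, and template codes.
        self.decoder = nn.Sequential(
            nn.Linear(128 + style_dim + 64, 256),
            nn.ReLU(),
            nn.Linear(256, n_landmarks * 3),
        )

    def forward(self, audio_window, template_landmarks):
        # audio_window: (batch, frames, n_audio_feats)
        # template_landmarks: (batch, n_landmarks, 3)
        _, (h, _) = self.audio_encoder(audio_window)
        audio_code = h[-1]                          # (batch, 128)
        style_code = self.style_head(audio_code)    # (batch, style_dim)
        template_code = self.template_encoder(
            template_landmarks.flatten(start_dim=1)
        )                                           # (batch, 64)
        delta = self.decoder(
            torch.cat([audio_code, style_code, template_code], dim=-1)
        ).view(-1, template_landmarks.shape[1], 3)
        # Animated landmarks = template landmarks + predicted displacement.
        return template_landmarks + delta


if __name__ == "__main__":
    model = AudioToLandmarks()
    audio = torch.randn(2, 30, 28)       # two windows of 30 audio frames each
    template = torch.randn(2, 68, 3)     # template 3D landmarks per head
    print(model(audio, template).shape)  # torch.Size([2, 68, 3])
```

In this reading, each successive audio window produces one set of 3D landmarks, which a downstream renderer would use to drive the animation of the input head image.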