Real-time high-quality facial performance capture

    Publication No.: US10019826B2

    Publication Date: 2018-07-10

    Application No.: US15833855

    Filing Date: 2017-12-06

    Abstract: A method, system, and non-transitory computer-readable medium for transferring a facial expression from a subject to a computer-generated character. The method can include receiving an input image depicting a face of the subject; matching a first facial model to the input image; and generating a displacement map representing finer-scale details not present in the first facial model, using a regression function that estimates the shape of the finer-scale details. The displacement map can be combined with the first facial model to create a second facial model that includes the finer-scale details, and the second facial model can be rendered, if desired, to create a computer-generated image of the subject's face that includes the finer-scale details.
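
The displacement-map step above can be pictured as regression followed by resampling onto the coarse mesh. The NumPy sketch below is a minimal toy illustration under assumed inputs (a per-texel feature map, a plain linear regressor, and a UV-mapped coarse mesh); the names and the linear regressor are hypothetical stand-ins, not the patent's regression function.

```python
import numpy as np

def estimate_displacement_map(features, weights, bias):
    """Toy stand-in for a learned regressor: predict one displacement value
    per texel from an (H, W, F) feature map derived from the input image."""
    h, w, f = features.shape
    disp = features.reshape(-1, f) @ weights + bias
    return disp.reshape(h, w)

def apply_displacements(vertices, normals, displacement, uv):
    """Combine the coarse model with the displacement map: offset each vertex
    along its normal by the value sampled (nearest texel) at its UV coordinate."""
    h, w = displacement.shape
    cols = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    rows = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return vertices + normals * displacement[rows, cols][:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=(64, 64, 8))          # toy image-derived features
    disp_map = estimate_displacement_map(features, rng.normal(size=8) * 0.01, 0.0)

    vertices = rng.normal(size=(100, 3))             # toy coarse ("first") facial model
    normals = np.tile([0.0, 0.0, 1.0], (100, 1))
    uv = rng.uniform(size=(100, 2))
    detailed = apply_displacements(vertices, normals, disp_map, uv)  # "second" model
    print(detailed.shape)
```

In practice the displacements would live in a texture and the regressor would be trained offline; the nearest-texel lookup here just keeps the sketch short.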

    Methods and systems of performing performance capture using an anatomically-constrained local model

    Publication No.: US09639737B2

    Publication Date: 2017-05-02

    Application No.: US14869735

    Filing Date: 2015-09-29

    IPC Classification: G06K9/00 G06T7/20 G06T7/00

    Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization. The optimization can solve for rigid local patch motion, local patch deformation, and the rigid motion of the anatomical bones. The solution can be formulated as an energy minimization problem for each frame that is obtained for performance capture.
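
The per-frame optimization can be illustrated with a drastically simplified energy: a data term that pulls one local patch toward observed points, plus an anatomical term that keeps the patch at its rest distance from a skull point. The SciPy sketch below is a toy single-patch example under those assumptions; the parameterization and weights are invented for illustration and are not the patent's formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def deform_patch(params, rest_patch, shape_basis):
    """Local shape-subspace deformation followed by a rigid patch transform.
    params = [3 axis-angle, 3 translation, K subspace coefficients]."""
    rot, t, coeffs = Rotation.from_rotvec(params[:3]), params[3:6], params[6:]
    deformed = rest_patch + np.tensordot(coeffs, shape_basis, axes=1)
    return rot.apply(deformed) + t

def residuals(params, rest_patch, shape_basis, observed, skull_point, rest_dist, w_anat=10.0):
    patch = deform_patch(params, rest_patch, shape_basis)
    data = (patch - observed).ravel()                        # data term: fit observed motion
    anat = w_anat * (np.linalg.norm(patch.mean(0) - skull_point) - rest_dist)
    return np.concatenate([data, [anat]])                    # anatomical constraint term

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rest_patch = rng.normal(size=(20, 3))
    shape_basis = rng.normal(size=(2, 20, 3)) * 0.1          # toy local shape subspace (K=2)
    skull_point = np.array([0.0, 0.0, -1.0])
    rest_dist = np.linalg.norm(rest_patch.mean(0) - skull_point)

    # Synthetic "observed" frame: a small rigid motion of the rest patch.
    observed = Rotation.from_rotvec([0.05, 0.0, 0.02]).apply(rest_patch) + [0.01, 0.02, 0.0]

    sol = least_squares(residuals, np.zeros(6 + len(shape_basis)),
                        args=(rest_patch, shape_basis, observed, skull_point, rest_dist))
    print("recovered translation:", sol.x[3:6])
```

A full solver along the lines of the abstract would stack such residuals for every patch, couple overlapping patches, and also solve for the rigid motion of the anatomical bones.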

    METHODS AND SYSTEMS OF PERFORMING PERFORMANCE CAPTURE USING AN ANATOMICALLY-CONSTRAINED LOCAL MODEL

    Publication No.: US20170091529A1

    Publication Date: 2017-03-30

    Application No.: US14869735

    Filing Date: 2015-09-29

    IPC Classification: G06K9/00 G06T7/00 G06T7/20

    Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization. The optimization can solve for rigid local patch motion, local patch deformation, and the rigid motion of the anatomical bones. The solution can be formulated as an energy minimization problem for each frame that is obtained for performance capture.

    RIGID STABILIZATION OF FACIAL EXPRESSIONS

    Publication No.: US20150213307A1

    Publication Date: 2015-07-30

    Application No.: US14497208

    Filing Date: 2014-09-25

    IPC Classification: G06K9/00

    CPC Classification: G06K9/00302 G06T13/40

    Abstract: Systems and techniques for performing automatic rigid stabilization of facial expressions are provided. The systems and techniques may include obtaining one or more shapes, the one or more shapes including one or more facial expressions of a subject. The systems and techniques may further include generating a subject-specific skull representation, and performing rigid stabilization of the one or more facial expressions by fitting the subject-specific skull representation to the one or more facial expressions of the subject.

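One way to picture rigid stabilization is as a weighted rigid alignment that factors head motion out of each captured expression using points assumed to move with the skull. The NumPy sketch below is a minimal stand-in (a weighted Kabsch alignment on hypothetical "skull-attached" vertex indices); it is not the subject-specific skull-fitting procedure claimed above.

```python
import numpy as np

def rigid_align(source, target, weights=None):
    """Weighted Kabsch/Procrustes: the rigid (R, t) that best maps `source`
    onto `target` in a least-squares sense."""
    w = np.ones(len(source)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    mu_s, mu_t = (w[:, None] * source).sum(0), (w[:, None] * target).sum(0)
    cov = (w[:, None] * (source - mu_s)).T @ (target - mu_t)
    u, _, vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return R, mu_t - R @ mu_s

def stabilize_expression(expression, neutral_skull_pts, skull_vertex_ids, weights=None):
    """Remove rigid head motion from one captured expression by aligning its
    (assumed) skull-attached vertices back onto their neutral positions."""
    R, t = rigid_align(expression[skull_vertex_ids], neutral_skull_pts, weights)
    return expression @ R.T + t

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    neutral = rng.normal(size=(500, 3))
    skull_ids = np.arange(50)                         # pretend these vertices sit over bone
    theta = 0.1                                       # fake captured head rotation + offset
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    expression = neutral @ Rz.T + [0.0, 0.05, 0.0]
    stabilized = stabilize_expression(expression, neutral[skull_ids], skull_ids)
    print(np.abs(stabilized[skull_ids] - neutral[skull_ids]).max())  # ~0 after stabilization
```

Fitting an actual skull representation, as the abstract describes, avoids having to guess which skin vertices stay rigid across expressions.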

    Methods and systems of enriching blendshape rigs with physical simulation

    Publication No.: US10297065B2

    Publication Date: 2019-05-21

    Application No.: US15347296

    Filing Date: 2016-11-09

    IPC Classification: G06T13/40

    Abstract: Methods, systems, and computer-readable memory are provided for determining time-varying anatomical and physiological tissue characteristics of an animation rig. For example, shape and material properties are defined for a plurality of sample configurations of the animation rig. The shape and material properties are associated with the plurality of sample configurations. An animation of the animation rig is obtained, and one or more configurations of the animation rig are determined for one or more frames of the animation. The determined one or more configurations include shape and material properties, and are determined using one or more sample configurations of the animation rig. A simulation of the animation rig is performed using the determined one or more configurations. Performing the simulation includes computing physical effects for addition to the animation of the animation rig.
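
As a rough mental model of the abstract: shape and material properties are stored at sampled rig configurations, interpolated for each animation frame, and then fed to a simulation that adds physical effects. The sketch below is a toy version under invented assumptions (inverse-distance blending in blendshape-weight space, a per-vertex damped spring standing in for the simulation); it is not the patent's method.

```python
import numpy as np

class SampleConfiguration:
    """One sampled rig configuration: blendshape weights plus the shape and
    per-vertex material stiffness associated with that pose (toy values)."""
    def __init__(self, weights, shape, stiffness):
        self.weights = np.asarray(weights, float)
        self.shape = np.asarray(shape, float)          # (N, 3) shape at this configuration
        self.stiffness = np.asarray(stiffness, float)  # (N,) per-vertex stiffness

def interpolate_configuration(frame_weights, samples):
    """Blend stored samples with inverse-distance weights in blendshape-weight
    space to get shape and material properties for the current frame."""
    d = np.array([np.linalg.norm(frame_weights - s.weights) for s in samples])
    w = 1.0 / (d + 1e-6)
    w /= w.sum()
    shape = sum(wi * s.shape for wi, s in zip(w, samples))
    stiffness = sum(wi * s.stiffness for wi, s in zip(w, samples))
    return shape, stiffness

def simulate_step(target_shape, stiffness, state, dt=1.0 / 24.0, damping=4.0):
    """One explicit step of a per-vertex damped spring pulled toward the rig's
    target shape; softer vertices lag behind, adding jiggle-like effects."""
    pos, vel = state
    vel = vel + dt * (stiffness[:, None] * (target_shape - pos) - damping * vel)
    pos = pos + dt * vel
    return pos, (pos, vel)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    base = rng.normal(size=(10, 3))
    samples = [SampleConfiguration([0.0], base, np.full(10, 50.0)),
               SampleConfiguration([1.0], base + [0.0, 0.2, 0.0], np.full(10, 10.0))]
    state = (base.copy(), np.zeros_like(base))
    for f in range(24):                               # one second of a 0 -> 1 weight ramp
        target, k = interpolate_configuration(np.array([f / 23.0]), samples)
        pos, state = simulate_step(target, k, state)
    print("lag behind the rig after one second:", np.linalg.norm(pos - target))
```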

    REAL-TIME HIGH-QUALITY FACIAL PERFORMANCE CAPTURE

    Publication No.: US20180096511A1

    Publication Date: 2018-04-05

    Application No.: US15833855

    Filing Date: 2017-12-06

    IPC Classification: G06T13/40 G06K9/00 G06T15/04

    Abstract: A method, system, and non-transitory computer-readable medium for transferring a facial expression from a subject to a computer-generated character. The method can include receiving an input image depicting a face of the subject; matching a first facial model to the input image; and generating a displacement map representing finer-scale details not present in the first facial model, using a regression function that estimates the shape of the finer-scale details. The displacement map can be combined with the first facial model to create a second facial model that includes the finer-scale details, and the second facial model can be rendered, if desired, to create a computer-generated image of the subject's face that includes the finer-scale details.

    METHODS AND SYSTEMS OF PERFORMING EYE RECONSTRUCTION USING A PARAMETRIC MODEL

    Publication No.: US20180012418A1

    Publication Date: 2018-01-11

    Application No.: US15204867

    Filing Date: 2016-07-07

    IPC Classification: G06T19/20 G06T13/20 G06T17/10

    CPC Classification: G06T17/00

    Abstract: Systems and techniques for reconstructing one or more eyes using a parametric eye model are provided. The systems and techniques may include obtaining one or more input images that include at least one eye. The systems and techniques may further include obtaining a parametric eye model including an eyeball model and an iris model. The systems and techniques may further include determining parameters of the parametric eye model from the one or more input images. The parameters can be determined to fit the parametric eye model to the at least one eye in the one or more input images. The parameters include a control map used by the iris model to synthesize an iris of the at least one eye. The systems and techniques may further include reconstructing the at least one eye using the parametric eye model with the determined parameters.
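
The fitting step can be pictured as a least-squares problem: choose eye parameters so that projected model points match observations in the image. The SciPy sketch below fits a hypothetical five-parameter eyeball/iris model (center, eyeball radius, iris radius) to synthetic limbus points under an assumed pinhole camera; it leaves out the iris control map and everything else specific to the patented model.

```python
import numpy as np
from scipy.optimize import least_squares

def project(points, focal=800.0, center=(320.0, 240.0)):
    """Assumed pinhole camera: perspective-project 3D points to pixels."""
    p = np.asarray(points, float)
    return np.column_stack([focal * p[:, 0] / p[:, 2] + center[0],
                            focal * p[:, 1] / p[:, 2] + center[1]])

def limbus_points(params, n):
    """Iris boundary of a spherical eyeball. params = [cx, cy, cz, R_eye, r_iris];
    the iris ring sits on the sphere, offset toward the camera from the center."""
    cx, cy, cz, R, r = params
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z_off = np.sqrt(max(R * R - r * r, 1e-9))
    return np.column_stack([r * np.cos(ang) + cx,
                            r * np.sin(ang) + cy,
                            np.full(n, cz - z_off)])

def residuals(params, observed2d):
    return (project(limbus_points(params, len(observed2d))) - observed2d).ravel()

if __name__ == "__main__":
    true = np.array([0.01, -0.02, 0.60, 0.012, 0.006])   # toy ground truth (meters)
    observed = project(limbus_points(true, 64))
    observed += np.random.default_rng(4).normal(scale=0.3, size=observed.shape)
    fit = least_squares(residuals, np.array([0.0, 0.0, 0.5, 0.012, 0.005]),
                        args=(observed,))
    # The limbus ring alone under-constrains some parameters (eyeball radius trades
    # off against depth), so a real fit would add more cues; here we just report cost.
    print("final least-squares cost:", fit.cost)
```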

    Methods and systems of generating an anatomically-constrained local model for performance capture

    Publication No.: US09652890B2

    Publication Date: 2017-05-16

    Application No.: US14869717

    Filing Date: 2015-09-29

    Abstract: Techniques and systems are described for generating an anatomically-constrained local model and for performing performance capture using the model. The local model includes a local shape subspace and an anatomical subspace. In one example, the local shape subspace constrains local deformation of various patches that represent the geometry of a subject's face. In the same example, the anatomical subspace includes an anatomical bone structure, and can be used to constrain movement and deformation of the patches globally on the subject's face. The anatomically-constrained local face model and performance capture technique can be used to track three-dimensional faces or other parts of a subject from motion data in a high-quality manner. Local model parameters that best describe the observed motion of the subject's physical deformations (e.g., facial expressions) under the given constraints are estimated through optimization. The optimization can solve for rigid local patch motion, local patch deformation, and the rigid motion of the anatomical bones. The solution can be formulated as an energy minimization problem for each frame that is obtained for performance capture.

    REAL-TIME HIGH-QUALITY FACIAL PERFORMANCE CAPTURE

    Publication No.: US20170024921A1

    Publication Date: 2017-01-26

    Application No.: US14871313

    Filing Date: 2015-09-30

    IPC Classification: G06T13/40 G06T17/20 G06T7/20

    Abstract: A method, system, and non-transitory computer-readable medium for transferring a facial expression from a subject to a computer-generated character. The method can include receiving an input image depicting a face of the subject; matching a first facial model to the input image; and generating a displacement map representing finer-scale details not present in the first facial model, using a regression function that estimates the shape of the finer-scale details. The displacement map can be combined with the first facial model to create a second facial model that includes the finer-scale details, and the second facial model can be rendered, if desired, to create a computer-generated image of the subject's face that includes the finer-scale details.
