Mechanism for facilitating dynamic simulation of avatars corresponding to changing user performances as detected at computing devices
    83.
    Invention Grant (In Force)

    Publication (Announcement) Number: US09489760B2

    Publication (Announcement) Date: 2016-11-08

    Application Number: US14358394

    Filing Date: 2013-11-14

    Abstract: A mechanism is described for facilitating dynamic simulation of avatars based on user performances according to one embodiment. A method of embodiments, as described herein, includes capturing, in real-time, an image of a user, the image including a video image over a plurality of video frames. The method may further include tracking changes in size of the user image, where the tracking of the changes may include locating one or more positions of the user image within each of the plurality of video frames; computing, in real-time, user performances based on the changes in the size of the user image over the plurality of video frames; and dynamically scaling an avatar associated with the user such that the avatar is dynamically simulated corresponding to the user performances.
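The scaling step the abstract describes can be sketched as follows. This is an illustrative approximation only, not the patented implementation: it assumes the tracked "size of the user image" is a per-frame bounding-box height, and that the avatar scale is proportional to that height relative to a baseline frame. All names are invented for illustration.

```python
# Hypothetical sketch of per-frame avatar scaling driven by the tracked
# size of the user's image across video frames. Baseline choice (first
# frame) is an assumption, not something the abstract specifies.

def avatar_scale_factors(box_heights, baseline=None):
    """Return a per-frame avatar scale factor from tracked box heights.

    box_heights: height (in pixels) of the user's bounding box per frame.
    baseline: reference height; defaults to the first frame's height.
    """
    if not box_heights:
        return []
    if baseline is None:
        baseline = box_heights[0]
    # A user moving toward the camera appears larger, so the avatar is
    # scaled up in proportion to the measured size change per frame.
    return [h / baseline for h in box_heights]

print(avatar_scale_factors([100, 110, 120, 100]))
```

A real system would smooth the per-frame heights before scaling to avoid jitter from tracking noise.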

    Facial movement based avatar animation
    84.
    Invention Grant (In Force)

    Publication (Announcement) Number: US09466142B2

    Publication (Announcement) Date: 2016-10-11

    Application Number: US13997271

    Filing Date: 2012-12-17

    CPC classification number: G06T13/40 G06K9/00315 G06T13/80 H04N7/157

    Abstract: Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
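The per-region selection scheme in the abstract can be sketched as below. This is a hedged illustration, not the patented method: it assumes each facial-feature parameter is normalized to [0, 1] and mapped linearly onto an index in that region's image series. Region names, file names, and series lengths are invented.

```python
# Illustrative sketch of composing one avatar frame from per-region image
# series, so each face region (eyes, mouth, ...) animates independently.

def select_region_image(series_length, parameter):
    """Map a facial-feature parameter in [0, 1] to a series index."""
    index = round(parameter * (series_length - 1))
    return max(0, min(series_length - 1, index))

def compose_avatar_frame(region_series, parameters):
    """Pick one image per facial region based on its tracked parameter."""
    return {
        region: images[select_region_image(len(images), parameters[region])]
        for region, images in region_series.items()
    }

series = {
    "eyes": ["eyes_open.png", "eyes_half.png", "eyes_closed.png"],
    "mouth": ["mouth_closed.png", "mouth_open.png"],
}
frame = compose_avatar_frame(series, {"eyes": 1.0, "mouth": 0.0})
print(frame)
```

Because each region is indexed by its own parameter, a blink can play in the eye series while the mouth series stays still, which is the independent-animation property the abstract claims.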

    AVATAR AUDIO COMMUNICATION SYSTEMS AND TECHNIQUES
    85.
    Invention Application (Pending - Published)

    Publication (Announcement) Number: US20160292903A1

    Publication (Announcement) Date: 2016-10-06

    Application Number: US14773933

    Filing Date: 2014-09-24

    Abstract: Examples of systems and methods for transmitting avatar sequencing data in an audio file are generally described herein. A method can include receiving, at a second device from a first device, an audio file comprising: facial motion data derived from a series of facial images captured at the first device; an avatar sequencing data structure from the first device, the avatar sequencing data structure comprising an avatar identifier and a duration; and an audio stream. The method can include presenting an animation of an avatar, at the second device, using the facial motion data and the audio stream.
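The payload the abstract describes (motion data, a sequencing structure of avatar identifier plus duration, and an audio stream, all carried in one audio file) can be modeled as a simple record. This is a minimal sketch under stated assumptions; every field name here is invented, and the abstract does not specify the actual encoding.

```python
# Hypothetical data model for the audio-file payload described in the
# abstract. Field names, units (milliseconds), and types are assumptions.

from dataclasses import dataclass

@dataclass
class AvatarSequenceEntry:
    avatar_id: str      # which avatar the receiver should display
    duration_ms: int    # how long that avatar stays on screen

@dataclass
class AvatarAudioFile:
    facial_motion_data: list   # per-frame motion parameters from the sender
    sequence: list             # list of AvatarSequenceEntry records
    audio_stream: bytes        # the encoded audio to play alongside

payload = AvatarAudioFile(
    facial_motion_data=[{"mouth_open": 0.4}, {"mouth_open": 0.9}],
    sequence=[AvatarSequenceEntry("fox", 1500), AvatarSequenceEntry("robot", 800)],
    audio_stream=b"...",
)
# Total display time implied by the sequencing data:
print(sum(entry.duration_ms for entry in payload.sequence))
```

Bundling the sequencing data with the audio lets the receiving device animate and switch avatars in sync with playback without a separate control channel.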

    FACIAL GESTURE DRIVEN ANIMATION COMMUNICATION SYSTEM
    86.
    Invention Application (In Force)

    Publication (Announcement) Number: US20160292901A1

    Publication (Announcement) Date: 2016-10-06

    Application Number: US14773911

    Filing Date: 2014-09-24

    Abstract: Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
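The three-module architecture in the abstract (image capture, facial recognition, communication) amounts to a simple pipeline: one motion record is computed and transmitted per captured image. The sketch below is an invented illustration of that data flow only; the feature extraction is a stub, and all names are assumptions.

```python
# Hedged sketch of the capture -> recognize -> transmit pipeline described
# in the abstract. A real facial recognition module would run landmark
# tracking; this stub only shows the one-record-per-image data flow.

def compute_motion_data(image):
    # Placeholder for the facial recognition module's per-image output.
    return {"frame": image, "mouth_open": 0.0}

def transmit(images, animate):
    # The communication module forwards one motion record per captured
    # image to the animation device's callback.
    for image in images:
        animate(compute_motion_data(image))

received = []
transmit(["img0", "img1", "img2"], received.append)
print(len(received))  # one motion record per captured image
```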

    FURRY AVATAR ANIMATION
    87.
    Invention Application (In Force)

    Publication (Announcement) Number: US20160247308A1

    Publication (Announcement) Date: 2016-08-25

    Application Number: US14763773

    Filing Date: 2014-09-24

    CPC classification number: G06T13/40 G06T1/20 G06T15/005 G06T15/503 G06T15/80

    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may comprise an avatar animation engine to receive a plurality of fur shell texture data maps associated with a furry avatar, and drive an avatar model to animate the furry avatar, using the plurality of fur shell texture data maps. The plurality of fur shell texture data maps may be generated through sampling of fur strands across a plurality of horizontal planes. Other embodiments may be described and/or claimed.
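The map-generation step the abstract mentions (sampling fur strands across a plurality of horizontal planes) matches the classic shell-texturing idea: each plane becomes one texture layer, and a texel is filled where a strand is tall enough to pierce that plane. The sketch below is an illustration of that idea under stated assumptions (a small boolean grid, strands as (x, y, height) triples); it is not the patented pipeline, which renders on a GPU.

```python
# Illustrative sketch of generating fur shell texture maps by sampling
# strand heights against a stack of horizontal planes. Grid size and the
# strand representation are assumptions for illustration.

def fur_shell_maps(strands, num_layers, grid=4):
    """strands: list of (x, y, height) triples with height in [0, 1].

    Returns num_layers boolean grids; a cell is set in layer k when a
    strand at that cell is tall enough to reach that shell plane.
    """
    layers = []
    for k in range(num_layers):
        plane = (k + 1) / num_layers            # height of this shell plane
        grid_map = [[False] * grid for _ in range(grid)]
        for x, y, h in strands:
            if h >= plane:                      # strand pierces this plane
                grid_map[y][x] = True
        layers.append(grid_map)
    return layers

maps = fur_shell_maps([(0, 0, 1.0), (1, 1, 0.5)], num_layers=2)
print(maps[0][1][1], maps[1][1][1])  # the short strand reaches only layer 0
```

Rendering the layers from innermost to outermost with blending then gives the volumetric fur look that driving a bare avatar mesh cannot.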
