AVATAR-BASED VIDEO ENCODING
    71.
    Invention Application

    Publication No.: US20180025506A1

    Publication Date: 2018-01-25

    Application No.: US15450295

    Filing Date: 2017-03-06

    Abstract: Techniques are disclosed for performing avatar-based video encoding. In some embodiments, a video recording of an individual may be encoded utilizing an avatar that is driven by the facial expression(s) of the individual. In some such cases, the resultant avatar animation may accurately mimic the facial expression(s) of the recorded individual. Some embodiments can be used, for example, in video sharing via social media and networking websites, or in video-based communications (e.g., peer-to-peer video calls; videoconferencing). In some instances, use of the disclosed techniques may help to reduce communications bandwidth use, preserve the individual's anonymity, and/or provide enhanced entertainment value (e.g., amusement) for the individual.
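    The bandwidth-reduction claim can be illustrated with a minimal sketch: rather than transmitting pixel data for every frame, only a small set of facial expression parameters is sent and used to drive an avatar on the receiving side. All names and sizes below are illustrative assumptions, not values from the patent.

```python
# Compare the per-frame cost of raw video versus an avatar parameter
# stream. Resolution and parameter counts are assumed for illustration.

FRAME_W, FRAME_H = 640, 480          # raw video resolution (assumed)
BYTES_PER_PIXEL = 3                  # 24-bit RGB
NUM_EXPRESSION_PARAMS = 48           # e.g., blendshape weights (assumed)
BYTES_PER_PARAM = 4                  # 32-bit float

def raw_frame_bytes() -> int:
    """Bytes needed to send one uncompressed video frame."""
    return FRAME_W * FRAME_H * BYTES_PER_PIXEL

def avatar_frame_bytes() -> int:
    """Bytes needed to send one frame's expression parameters instead."""
    return NUM_EXPRESSION_PARAMS * BYTES_PER_PARAM

def bandwidth_ratio() -> float:
    """How many times smaller the avatar parameter stream is."""
    return raw_frame_bytes() / avatar_frame_bytes()
```

    Under these assumed numbers the parameter stream is several thousand times smaller per frame, which also explains the anonymity benefit: no pixels of the individual's face ever leave the device.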

    Avatar facial expression animations with head rotation

    Publication No.: US09761032B2

    Publication Date: 2017-09-12

    Application No.: US14443339

    Filing Date: 2014-07-25

    Abstract: Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include an avatar animation engine configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine may be configured to drive an avatar model with facial and skeleton animations to animate an avatar, using the facial motion parameters and the head gesture parameters, to replicate a facial expression of the user on the avatar that includes the impact of head pose rotation of the user. Other embodiments may be described and/or claimed.
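    The two parameter sets described above can be sketched as a simple per-frame data structure: facial motion parameters drive the facial animation, while head gesture parameters drive the skeleton (head) rotation. Field names and representations below are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AnimationFrame:
    # Facial motion parameters, e.g. blendshape weights in [0, 1] (assumed).
    facial_motion: dict[str, float] = field(default_factory=dict)
    # Head gesture parameters as yaw, pitch, roll in degrees (assumed).
    head_pose: tuple[float, float, float] = (0.0, 0.0, 0.0)

def drive_avatar(frame: AnimationFrame) -> dict:
    """Combine facial and skeleton animation into one avatar update, so
    the replicated expression includes the impact of head rotation."""
    return {
        "blendshapes": dict(frame.facial_motion),
        "skeleton": {"head_rotation": frame.head_pose},
    }
```

    Keeping the two parameter families separate lets the engine animate the face mesh and rotate the head skeleton independently, then compose both in a single update.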

    FACIAL MOVEMENT BASED AVATAR ANIMATION
    74.
    Invention Application

    Publication No.: US20170193684A1

    Publication Date: 2017-07-06

    Application No.: US15290444

    Filing Date: 2016-10-11

    CPC classification number: G06T13/40 G06K9/00315 G06T13/80 H04N7/157

    Abstract: Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
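    The per-region selection described above can be sketched as follows: each facial region has its own ordered series of predetermined images, and a normalized facial feature parameter picks one image from each series. The series contents and the parameter-to-index mapping are assumptions for illustration only.

```python
# Each region has a series of images ordered by movement progression,
# e.g. an eye-blink series from fully open to fully closed (assumed).
AVATAR_SERIES = {
    "eyes":  ["eyes_open.png", "eyes_half.png", "eyes_closed.png"],
    "mouth": ["mouth_closed.png", "mouth_half.png", "mouth_open.png"],
}

def select_image(region: str, parameter: float) -> str:
    """Map a normalized facial feature parameter in [0, 1] to one of the
    predetermined images in the region's series."""
    series = AVATAR_SERIES[region]
    index = min(int(parameter * len(series)), len(series) - 1)
    return series[index]

def compose_avatar(params: dict[str, float]) -> dict[str, str]:
    """Select one image per region, so regions animate independently."""
    return {region: select_image(region, p) for region, p in params.items()}
```

    Because each region indexes into its own series, the eyes can blink while the mouth stays still, which is the independent-region animation the abstract describes.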

    Mechanism for facilitating dynamic simulation of avatars corresponding to changing user performances as detected at computing devices
    78.
    Invention Grant (In Force)

    Publication No.: US09489760B2

    Publication Date: 2016-11-08

    Application No.: US14358394

    Filing Date: 2013-11-14

    Abstract: A mechanism is described for facilitating dynamic simulation of avatars based on user performances, according to one embodiment. A method of embodiments, as described herein, includes capturing, in real-time, an image of a user, the image including a video image over a plurality of video frames. The method may further include tracking changes in the size of the user image, where the tracking may include locating one or more positions of the user image within each of the plurality of video frames; computing, in real-time, user performances based on the changes in the size of the user image over the plurality of video frames; and dynamically scaling an avatar associated with the user such that the avatar is dynamically simulated corresponding to the user performances.
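    The scaling step can be sketched minimally: track the size of the user's image (here, a bounding-box area, which is an assumed representation) across frames and scale the avatar in proportion, so the avatar grows as the user approaches the camera and shrinks as they move away.

```python
def box_size(box: tuple[int, int, int, int]) -> int:
    """Area of a (left, top, right, bottom) bounding box locating the
    user image within a video frame."""
    left, top, right, bottom = box
    return (right - left) * (bottom - top)

def avatar_scale(reference_box: tuple[int, int, int, int],
                 current_box: tuple[int, int, int, int]) -> float:
    """Scale factor for the avatar, relative to the user-image size in a
    reference frame (e.g. the first frame)."""
    return box_size(current_box) / box_size(reference_box)
```

    Applying this factor per frame yields the dynamic simulation described: the avatar's on-screen size continuously mirrors the tracked size of the user.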


    Facial movement based avatar animation
    79.
    Invention Grant (In Force)

    Publication No.: US09466142B2

    Publication Date: 2016-10-11

    Application No.: US13997271

    Filing Date: 2012-12-17

    CPC classification number: G06T13/40 G06K9/00315 G06T13/80 H04N7/157

    Abstract: Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as a user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.


    AVATAR AUDIO COMMUNICATION SYSTEMS AND TECHNIQUES
    80.
    Invention Application (Pending - Published)

    Publication No.: US20160292903A1

    Publication Date: 2016-10-06

    Application No.: US14773933

    Filing Date: 2014-09-24

    Abstract: Examples of systems and methods for transmitting avatar sequencing data in an audio file are generally described herein. A method can include receiving, at a second device from a first device, an audio file comprising: facial motion data derived from a series of facial images captured at the first device; an avatar sequencing data structure from the first device, the avatar sequencing data structure comprising an avatar identifier and a duration; and an audio stream. The method can include presenting an animation of an avatar, at the second device, using the facial motion data and the audio stream.
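    The container described above can be sketched as a hypothetical data structure: an audio file that carries facial motion data and an avatar sequencing structure (avatar identifier plus duration) alongside the audio stream. The field layout here is an assumption for illustration, not the actual file format.

```python
from dataclasses import dataclass

@dataclass
class AvatarSequence:
    avatar_id: str       # which avatar model to animate
    duration_ms: int     # how long this avatar is shown

@dataclass
class AvatarAudioFile:
    facial_motion: list[dict]        # per-frame facial motion data
    sequences: list[AvatarSequence]  # avatar sequencing data structure
    audio_stream: bytes              # the accompanying audio

def total_duration_ms(f: AvatarAudioFile) -> int:
    """Total animation time covered by the sequencing data, which the
    second device can use to pace playback against the audio stream."""
    return sum(s.duration_ms for s in f.sequences)
```

    A sequence list (rather than a single identifier) would let the sender switch avatars mid-message, with each entry's duration telling the receiver when to switch.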

