Abstract:
Examples of systems and methods for non-facial animation in a facial performance-driven avatar system are generally described herein. A method for facial gesture driven body animation may include capturing a series of images of a face, and computing facial motion data for each of the images in the series of images. The method may include identifying an avatar body animation based on the facial motion data, and animating a body of an avatar using the avatar body animation.
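For illustration only, a minimal sketch of how per-frame facial motion data might be reduced to a gesture and mapped to a body animation; the gesture labels, thresholds, and lookup table below are hypothetical assumptions, not taken from the disclosure:

```python
# Hypothetical sketch: map facial motion data to an avatar body animation.
# Gesture labels, thresholds, and the mapping table are illustrative only.

FACIAL_GESTURE_TO_BODY_ANIMATION = {
    "mouth_open": "jump",
    "eyebrows_raised": "wave_arms",
    "head_shake": "shrug",
}

def classify_gesture(motion_data):
    """Reduce one frame's facial motion data to a coarse gesture label."""
    if motion_data.get("mouth_openness", 0.0) > 0.7:
        return "mouth_open"
    if motion_data.get("eyebrow_raise", 0.0) > 0.6:
        return "eyebrows_raised"
    if abs(motion_data.get("head_yaw_delta", 0.0)) > 0.3:
        return "head_shake"
    return None

def identify_body_animation(motion_sequence):
    """Identify a body animation from the facial motion data of a series of images."""
    for frame_data in motion_sequence:
        gesture = classify_gesture(frame_data)
        if gesture in FACIAL_GESTURE_TO_BODY_ANIMATION:
            return FACIAL_GESTURE_TO_BODY_ANIMATION[gesture]
    return "idle"

frames = [{"eyebrow_raise": 0.8}, {"mouth_openness": 0.2}]
print(identify_body_animation(frames))  # -> "wave_arms"
```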
Abstract:
Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
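A minimal sketch of how a detected sequence of facial expressions could be matched to a mapped alternative avatar expression; the expression names, mapped sequences, and fixed window length are illustrative assumptions rather than the claimed mapping:

```python
# Hypothetical sketch: match a detected expression sequence against a table of
# alternative avatar expressions (e.g., exaggerated cartoon animations).

ALTERNATIVE_EXPRESSIONS = {
    ("wink", "smile"): "hearts_in_eyes",
    ("eyebrow_raise", "eyebrow_raise"): "head_explodes",
}

def find_alternative_expression(expression_sequence, window=2):
    """Scan the detected expression sequence for a mapped subsequence."""
    seq = tuple(expression_sequence)
    for start in range(len(seq) - window + 1):
        candidate = seq[start:start + window]
        if candidate in ALTERNATIVE_EXPRESSIONS:
            return ALTERNATIVE_EXPRESSIONS[candidate]
    return None  # no mapping: fall back to direct expression animation

detected = ["neutral", "wink", "smile", "neutral"]
print(find_alternative_expression(detected))  # -> "hearts_in_eyes"
```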
Abstract:
A mechanism is described for facilitating dynamic simulation of avatars based on user performances according to one embodiment. A method of embodiments, as described herein, includes capturing, in real-time, an image of a user, the image including a video image over a plurality of video frames. The method may further include tracking changes in size of the user image, where the tracking may include locating one or more positions of the user image within each of the plurality of video frames; computing, in real-time, user performances based on the changes in the size of the user image over the plurality of video frames; and dynamically scaling an avatar associated with the user such that the avatar is dynamically simulated corresponding to the user performances.
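A minimal sketch of one way the size tracking and scaling could be realized, assuming an upstream person detector supplies a per-frame user bounding box; the square-root scaling rule is an illustrative choice, not the claimed method:

```python
# Hypothetical sketch: track the user's image size across video frames and
# scale the avatar accordingly (e.g., the avatar grows as the user approaches
# the camera). The bounding-box input format is an assumption.

def track_user_size(frames):
    """Return the user bounding-box area in each frame.

    Each frame is assumed to carry an (x, y, width, height) user bounding box
    produced by an upstream detector.
    """
    return [w * h for (_, _, w, h) in frames]

def scale_avatar(base_scale, sizes):
    """Scale the avatar in proportion to the change in user image size."""
    reference = sizes[0]
    # Square root of the area ratio approximates a linear size change.
    return [base_scale * (size / reference) ** 0.5 for size in sizes]

bboxes = [(100, 80, 50, 120), (98, 78, 60, 144), (95, 75, 75, 180)]
sizes = track_user_size(bboxes)
print(scale_avatar(1.0, sizes))  # -> [1.0, 1.2, 1.5]: avatar grows
```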
Abstract:
Avatars are animated using predetermined avatar images that are selected based on facial features of a user extracted from video of the user. A user's facial features are tracked in a live video, facial feature parameters are determined from the tracked features, and avatar images are selected based on the facial feature parameters. The selected images are then displayed or sent to another device for display. Selecting and displaying different avatar images as the user's facial movements change animates the avatar. An avatar image can be selected from a series of avatar images representing a particular facial movement, such as blinking. An avatar image can also be generated from multiple avatar feature images selected from multiple avatar feature image series associated with different regions of a user's face (eyes, mouth, nose, eyebrows), which allows different regions of the avatar to be animated independently.
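A minimal sketch of per-region image selection, assuming normalized facial feature parameters in [0, 1] and short per-region image series; the file names and series lengths are hypothetical:

```python
# Hypothetical sketch: select one avatar feature image per facial region so
# that regions (eyes, mouth, ...) animate independently; a renderer would then
# composite the selected images into one avatar frame.

AVATAR_FEATURE_SERIES = {
    # Each region maps to an ordered image series, e.g. eyes_0 = fully open
    # through eyes_4 = fully closed (a blink sequence).
    "eyes": ["eyes_0.png", "eyes_1.png", "eyes_2.png", "eyes_3.png", "eyes_4.png"],
    "mouth": ["mouth_0.png", "mouth_1.png", "mouth_2.png"],
}

def select_feature_image(region, parameter):
    """Map a normalized facial feature parameter in [0, 1] to a series index."""
    series = AVATAR_FEATURE_SERIES[region]
    index = min(int(parameter * len(series)), len(series) - 1)
    return series[index]

def compose_avatar_frame(feature_parameters):
    """Pick one image per region for later layering into a single frame."""
    return {region: select_feature_image(region, value)
            for region, value in feature_parameters.items()}

# Eyes half-closed mid-blink, mouth wide open:
print(compose_avatar_frame({"eyes": 0.5, "mouth": 0.9}))
```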
Abstract:
Examples of systems and methods for transmitting avatar sequencing data in an audio file are generally described herein. A method can include receiving, at a second device from a first device, an audio file comprising: facial motion data derived from a series of facial images captured at the first device; an avatar sequencing data structure from the first device, the avatar sequencing data structure comprising an avatar identifier and a duration; and an audio stream. The method can include presenting an animation of an avatar, at the second device, using the facial motion data and the audio stream.
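A minimal sketch of what such a payload could look like, assuming a JSON header in a length-prefixed container followed by raw audio bytes; the disclosure does not fix a concrete file format, so the layout below is an assumption:

```python
# Hypothetical sketch of the bundled payload: facial motion data, an avatar
# sequencing data structure (avatar identifier plus duration), and an audio
# stream. Field names and serialization are illustrative only.

from dataclasses import dataclass, asdict
import json

@dataclass
class AvatarSequenceEntry:
    avatar_id: str    # which avatar to show
    duration_ms: int  # how long to show it

@dataclass
class AvatarAudioFile:
    facial_motion_data: list  # per-frame motion data from the first device
    sequence: list            # list of AvatarSequenceEntry
    audio_stream: bytes       # encoded audio

    def serialize(self) -> bytes:
        header = json.dumps({
            "facial_motion_data": self.facial_motion_data,
            "sequence": [asdict(entry) for entry in self.sequence],
        }).encode()
        # Assumed container: 4-byte header length, header, then audio bytes.
        return len(header).to_bytes(4, "big") + header + self.audio_stream

payload = AvatarAudioFile(
    facial_motion_data=[{"mouth_openness": 0.4}],
    sequence=[AvatarSequenceEntry("fox", 1500), AvatarSequenceEntry("robot", 2000)],
    audio_stream=b"\x00\x01\x02",
)
print(len(payload.serialize()))
```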
Abstract:
Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
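A minimal sketch of the capture-and-transmit side, assuming a stubbed facial recognition step and a simple newline-delimited JSON transport; the abstract does not specify the module interfaces, so everything here is illustrative. Sending only motion data, rather than video, keeps the transmitted payload small:

```python
# Hypothetical sketch: compute facial motion data per captured image and send
# only that data to the animation device, which animates the avatar locally.

import json
import socket

def compute_facial_motion_data(image):
    """Stand-in for the facial recognition module; returns per-image data."""
    # A real implementation would run landmark tracking on `image`.
    return {"mouth_openness": 0.3, "eyebrow_raise": 0.1, "head_yaw": -2.0}

def stream_motion_data(images, host, port):
    """Transmit facial motion data for each captured image to the animator."""
    with socket.create_connection((host, port)) as conn:
        for image in images:
            motion = compute_facial_motion_data(image)
            conn.sendall(json.dumps(motion).encode() + b"\n")

# Usage (hypothetical endpoint):
# stream_motion_data(captured_images, "animation-device.local", 9000)
```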
Abstract:
Apparatuses, methods, and storage media associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may comprise an avatar animation engine to receive a plurality of fur shell texture data maps associated with a furry avatar, and drive an avatar model to animate the furry avatar, using the plurality of fur shell texture data maps. The plurality of fur shell texture data maps may be generated through sampling of fur strands across a plurality of horizontal planes. Other embodiments may be described and/or claimed.
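A minimal sketch of generating shell texture maps by sampling strands at evenly spaced horizontal planes: a strand marks an opaque texel in every plane it is tall enough to cross, so higher shells are sparser and the layered result tapers like fur. The strand representation, map resolution, and shell count are illustrative assumptions:

```python
# Hypothetical sketch: build one texture map per horizontal sampling plane by
# slicing randomly generated fur strands. Parameters are illustrative only.

import random

def make_fur_shell_maps(num_strands=2000, map_size=64, num_shells=8):
    """Return one boolean texture map per horizontal sampling plane."""
    random.seed(0)
    # Each strand: a root texel (u, v) and a normalized height in [0, 1).
    strands = [(random.randrange(map_size), random.randrange(map_size),
                random.random()) for _ in range(num_strands)]
    maps = []
    for shell in range(num_shells):
        plane_height = shell / num_shells  # 0 = skin, toward 1 = fur tips
        shell_map = [[False] * map_size for _ in range(map_size)]
        for u, v, height in strands:
            if height >= plane_height:  # strand crosses this plane
                shell_map[v][u] = True
        maps.append(shell_map)
    return maps

shells = make_fur_shell_maps()
densities = [sum(cell for row in m for cell in row) for m in shells]
print(densities)  # opaque texel count decreases toward the fur tips
```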
Abstract:
Disclosed in some examples are various modifications to the shape regression technique for use in real-time applications, along with methods, systems, and machine-readable media that utilize the resulting facial landmark tracking methods.
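A minimal sketch of the cascaded shape regression update that such trackers build on, namely shape <- shape + R_t(features(image, shape)) at each cascade stage; the shape-indexed feature extractor and the "learned" stages below are stubs for illustration, not the modifications the disclosure describes:

```python
# Hypothetical sketch: cascaded shape regression for facial landmarks.
# Each stage predicts a shape increment from features sampled at the
# current landmark positions ("shape-indexed" features).

import numpy as np

def shape_indexed_features(image, shape):
    """Stand-in: sample image intensities at the current landmark positions."""
    h, w = image.shape
    coords = np.clip(shape, [0, 0], [w - 1, h - 1]).astype(int)
    return image[coords[:, 1], coords[:, 0]].astype(float)

def run_cascade(image, initial_shape, regressors):
    """Apply each stage: shape <- shape + R_t(features(image, shape))."""
    shape = initial_shape.copy()
    for regress in regressors:
        features = shape_indexed_features(image, shape)
        shape = shape + regress(features)
    return shape

rng = np.random.default_rng(0)
image = rng.random((120, 120))
initial = rng.uniform(20, 100, size=(68, 2))  # e.g., a 68-landmark mean shape
# Toy stages: fixed nudges standing in for trained regressors.
stages = [lambda f, k=k: np.full((68, 2), 0.1 * k) for k in range(5)]
print(run_cascade(image, initial, stages).shape)  # -> (68, 2)
```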