Abstract:
Methods and apparatus to detect collision of a virtual camera with objects in a three-dimensional volumetric model are disclosed herein. An example virtual camera system disclosed herein includes cameras to obtain images of a scene in an environment. The example virtual camera system also includes a virtual camera generator to create a 3D volumetric model of the scene based on the images, identify a 3D location of a virtual camera to be disposed in the 3D volumetric model, and detect whether a collision occurs between the virtual camera and one or more objects in the 3D volumetric model.
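The collision check described above can be illustrated with a minimal sketch. Here the virtual camera is modeled as a sphere and an object in the volumetric model as an axis-aligned bounding box; the function name, shapes, and radii are illustrative assumptions, not the patent's actual method.

```python
def camera_collides_with_aabb(cam_center, cam_radius, box_min, box_max):
    """Return True if a spherical camera volume intersects an axis-aligned box."""
    # Clamp the camera center to the box to find the closest point on the box.
    closest = [max(lo, min(c, hi))
               for c, lo, hi in zip(cam_center, box_min, box_max)]
    # A collision occurs if that closest point lies within the camera's radius.
    dist_sq = sum((c, p) and (c - p) ** 2 for c, p in zip(cam_center, closest))
    return dist_sq <= cam_radius ** 2

# Camera well outside the unit box: no collision.
print(camera_collides_with_aabb((5.0, 0.0, 0.0), 0.5,
                                (-1, -1, -1), (1, 1, 1)))  # prints False
```

The clamp-then-distance test is the standard sphere-vs-AABB intersection check; a production system would run it against the bounding volumes of every reconstructed object near the proposed camera location.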
Abstract:
Methods, systems and apparatuses may provide for technology that identifies a player captured in a multi-camera video feed of a game that involves the identified player and estimates a first field of view from the perspective of the identified player for a selected frame of the multi-camera video feed. Additionally, the technology automatically generates, based on the first field of view, a camera path for a replay of the selected frame from the perspective of the identified player. In one example, the technology also determines a trajectory of a projectile captured in the multi-camera video feed, estimates, based on the trajectory, a second field of view from the perspective of the projectile, and automatically generates, based on the second field of view, a replay of one or more selected frames of the multi-camera video feed from the perspective of the projectile.
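One simple way to realize the camera-path generation step is to interpolate keyframes from an existing broadcast view to the identified player's estimated eye position and gaze. The sketch below is a hypothetical illustration; the data layout and linear interpolation are assumptions, not the disclosed technique.

```python
def lerp(a, b, t):
    """Linear interpolation between two equal-length tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def generate_replay_path(broadcast_cam, player_eye, player_gaze, steps=4):
    """Build camera keyframes that fly from the broadcast camera to the
    player's estimated first-person field of view."""
    path = []
    for i in range(steps + 1):
        t = i / steps
        path.append({
            "pos": lerp(broadcast_cam["pos"], player_eye, t),
            "dir": lerp(broadcast_cam["dir"], player_gaze, t),
        })
    return path
```

A renderer would then place the virtual camera at each keyframe of the volumetric model to produce the first-person replay; the same routine could target a projectile's position and trajectory instead of a player's.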
Abstract:
Methods, systems, and storage media for generating and displaying animations of simulated biomechanical motions are disclosed. In embodiments, a computer device may obtain sensor data of a sensor affixed to a user's body or equipment used by the user, and may use inverse kinematics to determine desired positions and orientations of an avatar based on the sensor data. In embodiments, the computer device may adjust or alter the avatar based on the inverse kinematics, and generate an animation for display based on the adjusted avatar. Other embodiments may be disclosed and/or claimed.
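The inverse-kinematics step can be sketched in its simplest form: solving a planar two-link limb (e.g. upper arm and forearm) for the joint angles that place the end effector where the sensor says it should be. This analytic law-of-cosines solution is a textbook example, assumed here for illustration; a full avatar rig would use many more joints and an iterative solver.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Return (shoulder, elbow) angles placing a two-link limb's end at (x, y)."""
    d_sq = x * x + y * y
    # Law of cosines gives the elbow angle from the target distance.
    cos_elbow = (d_sq - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))  # clamp for unreachable targets
    elbow = math.acos(cos_elbow)
    # Shoulder angle: aim at the target, then correct for the bent elbow.
    shoulder = (math.atan2(y, x)
                - math.atan2(l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)))
    return shoulder, elbow
```

Feeding per-frame sensor positions through such a solver yields joint angles that drive the avatar's skeleton, which the animation system then renders.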
Abstract:
Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver.
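Once video analysis attributes viewing events and interest levels to individual viewers, deriving per-viewer favorite lists is a straightforward aggregation. The record format and scoring below are illustrative assumptions.

```python
from collections import Counter, defaultdict

def favorite_programs(viewing_log, top_n=3):
    """Given (viewer, program, interest_score) records inferred from video
    analysis, return each viewer's top-N programs by cumulative interest."""
    scores = defaultdict(Counter)
    for viewer, program, interest in viewing_log:
        scores[viewer][program] += interest
    return {viewer: [prog for prog, _ in counts.most_common(top_n)]
            for viewer, counts in scores.items()}
```

Because the log is keyed by viewer identity rather than by receiver, several people sharing one television each accumulate their own favorites list.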
Abstract:
Examples of systems and methods for transmitting facial motion data and animating an avatar are generally described herein. A system may include an image capture device to capture a series of images of a face, a facial recognition module to compute facial motion data for each of the images in the series of images, and a communication module to transmit the facial motion data to an animation device, wherein the animation device is to use the facial motion data to animate an avatar on the animation device.
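The bandwidth advantage of transmitting facial motion data rather than video can be sketched as follows: the sender computes per-landmark displacements between frames and serializes only those small deltas for the animation device. The landmark representation and JSON container are assumptions for illustration.

```python
import json

def facial_motion_data(prev_landmarks, curr_landmarks):
    """Per-landmark (dx, dy) displacement between two consecutive frames."""
    return [(cx - px, cy - py)
            for (px, py), (cx, cy) in zip(prev_landmarks, curr_landmarks)]

def encode_for_transmission(frame_id, motion):
    # Serialize to JSON for the communication module; a real system
    # might use a compact binary protocol instead.
    return json.dumps({"frame": frame_id, "motion": motion})
```

On the receiving side, the animation device applies each delta to its avatar's corresponding control points, reproducing the face's motion without ever receiving the images themselves.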
Abstract:
Systems and methods may provide for identifying one or more facial expressions of a subject in a video signal and generating avatar animation data based on the one or more facial expressions. Additionally, the avatar animation data may be incorporated into an audio file associated with the video signal. In one example, the audio file is sent to a remote client device via a messaging application. Systems and methods may also facilitate the generation of avatar icons and doll animations that mimic the actual facial features and/or expressions of specific individuals.
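The idea of incorporating animation data into an audio file can be illustrated by bundling both into a single message payload, so the receiver replays the animated avatar in sync with the voice track. The container format here is purely an illustrative assumption.

```python
import base64
import json

def attach_animation_to_audio(audio_bytes, animation_frames):
    """Bundle an audio clip with per-frame avatar animation data
    (e.g. blendshape weights) into one messaging payload."""
    return json.dumps({
        "audio": base64.b64encode(audio_bytes).decode("ascii"),
        "animation": animation_frames,
    })

def extract(payload):
    """Recover the audio bytes and animation frames on the receiving client."""
    msg = json.loads(payload)
    return base64.b64decode(msg["audio"]), msg["animation"]
```

A messaging application would send this payload to the remote client, which decodes the audio and drives the avatar's expressions frame by frame as the audio plays.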
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, the apparatus may comprise an avatar animation engine to receive a plurality of fur shell texture data maps associated with a furry avatar, and drive an avatar model to animate the furry avatar, using the plurality of fur shell texture data maps. The plurality of fur shell texture data maps may be generated through sampling of fur strands across a plurality of horizontal planes. Other embodiments may be described and/or claimed.
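The shell-texture generation described above can be sketched minimally: each fur strand has a height, and shell k records which texels are still "inside" a strand at that horizontal sampling plane. The strand representation and opacity rule are illustrative assumptions.

```python
def sample_fur_shells(strands, num_shells):
    """Sample fur strands across horizontal planes.

    Each strand is (u, v, height) with height in [0, 1]. Shell k contains
    the (u, v) texels whose strand reaches the plane at height k/(num_shells-1),
    so outer shells are progressively sparser, giving the layered fur look.
    """
    shells = []
    for k in range(num_shells):
        plane_height = k / (num_shells - 1)
        shells.append({(u, v) for u, v, h in strands if h >= plane_height})
    return shells
```

At render time the avatar model is drawn once per shell, slightly offset along the surface normal, with each shell's texture masking where fur is present at that height.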
Abstract:
Disclosed in some examples are various modifications to the shape regression technique for use in real-time applications, and methods, systems, and machine readable mediums which utilize the resulting facial landmark tracking methods.
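The core loop of shape regression for landmark tracking can be sketched generically: a cascade of regressors, each predicting an increment to the current landmark estimate from shape-indexed features. The regressor and feature interfaces below are assumptions; the real-time modifications the disclosure refers to would change how the stages are trained and evaluated, not this overall structure.

```python
def cascaded_shape_regression(initial_shape, regressors, features_fn):
    """Refine a landmark estimate through a cascade of stage regressors.

    Each stage extracts features indexed by the current shape estimate and
    predicts a shape increment, so errors shrink stage by stage.
    """
    shape = list(initial_shape)
    for regressor in regressors:
        feats = features_fn(shape)     # shape-indexed features for this stage
        delta = regressor(feats)       # predicted per-landmark increment
        shape = [s + d for s, d in zip(shape, delta)]
    return shape
```

For video, the converged shape from one frame typically initializes the cascade on the next frame, which is part of what makes the approach attractive for real-time tracking.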