Abstract:
Generally, this disclosure describes a video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters.
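The sender-side flow described above (capture, detect, extract, convert, transmit) can be illustrated with a minimal sketch. The AvatarFrame layout and the extract_features and send helpers below are hypothetical placeholders for illustration, not code from the disclosure:

```python
# Minimal sketch of the sender-side avatar pipeline, assuming a fixed
# parameter vector per frame. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class AvatarFrame:
    avatar_id: int           # the user's avatar selection (sent at least once)
    params: list[float]      # animation parameters derived from facial features

def extract_features(image) -> list[float]:
    """Hypothetical: detect a face and map it to normalized feature values."""
    return [0.0] * 8  # e.g., mouth openness, eye blink, head yaw, ...

def send(packet: AvatarFrame) -> None:
    """Hypothetical transport; a real system would serialize and transmit."""
    print(f"tx avatar={packet.avatar_id} params={packet.params}")

def on_captured_image(image, avatar_id: int) -> None:
    params = extract_features(image)   # face detection + feature extraction
    send(AvatarFrame(avatar_id, params))

on_captured_image(image=None, avatar_id=3)
```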
Abstract:
A device, method, and system of video and audio sharing among communication devices may comprise a communication device for generating and sending a packet containing information related to the video and audio, and another communication device for receiving the packet and rendering the information related to the audio and video. In some embodiments, the first communication device may comprise: an audio encoding module to encode a piece of audio into an audio bit stream; an avatar data extraction module to extract avatar data from a piece of video and generate an avatar data bit stream; and a synchronization module to generate synchronization information for synchronizing the audio bit stream with the avatar data bit stream. In some embodiments, the other communication device may comprise: an audio decoding module to decode an audio bit stream into decoded audio data; an avatar animation module to animate an avatar model based on an avatar data bit stream to generate an animated avatar model; and a synchronizing and rendering module to synchronize and render the decoded audio data and the animated avatar model by utilizing the synchronization information.
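A short sketch can make the sender/receiver split concrete. It assumes timestamp-based synchronization information; the packet layout and frame duration are illustrative assumptions, not details from the disclosure:

```python
# Minimal sketch: pair each encoded audio chunk with the avatar data for
# the same instant under one timestamp, then render them together.
from dataclasses import dataclass

@dataclass
class Packet:
    ts_ms: int                 # synchronization info shared by both streams
    audio_bits: bytes          # encoded audio chunk
    avatar_data: list[float]   # extracted avatar parameters for this instant

def sender(audio_chunks, avatar_params, frame_ms=40):
    """Generate packets that tie the audio and avatar streams together."""
    for i, (a, v) in enumerate(zip(audio_chunks, avatar_params)):
        yield Packet(ts_ms=i * frame_ms, audio_bits=a, avatar_data=v)

def receiver(packets):
    """Decode audio and pose the avatar model, rendered per timestamp.
    Actual decode/animate calls are hypothetical and elided here."""
    for p in packets:
        print(f"t={p.ts_ms}ms: play {len(p.audio_bits)}B audio, "
              f"pose avatar with {p.avatar_data}")

receiver(sender([b"\x00" * 160] * 3, [[0.1], [0.2], [0.3]]))
```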
Abstract:
Techniques are disclosed for performing avatar-based video encoding. In some embodiments, a video recording of an individual may be encoded utilizing an avatar that is driven by the facial expression(s) of the individual. In some such cases, the resultant avatar animation may accurately mimic the facial expression(s) of the recorded individual. Some embodiments can be used, for example, in video sharing via social media and networking websites. Some embodiments can be used, for example, in video-based communications (e.g., peer-to-peer video calls; videoconferencing). In some instances, use of the disclosed techniques may help to reduce communications bandwidth use, preserve the individual's anonymity, and/or provide enhanced entertainment value (e.g., amusement) for the individual.
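The bandwidth claim can be illustrated with back-of-the-envelope arithmetic: transmitting a handful of expression parameters per frame costs far less than transmitting pixels. The frame size and parameter count below are hypothetical, not figures from the disclosure:

```python
# Illustrative sketch of the bandwidth argument: per-frame avatar
# parameters replace per-frame pixels. Numbers are assumptions.
FRAME_W, FRAME_H = 640, 480
PARAMS_PER_FRAME = 16                  # assumed expression/pose parameter count

raw = FRAME_W * FRAME_H * 3            # uncompressed RGB frame, bytes
avatar = PARAMS_PER_FRAME * 4          # one 32-bit float per parameter, bytes
print(f"raw frame: {raw} B, avatar frame: {avatar} B, ratio ~{raw // avatar}:1")
```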
Abstract:
A method for trajectory generation based on player tracking is described herein. The method includes determining a temporal association for a first player in a captured field of view and determining a spatial association for the first player. The method also includes deriving a global player identification based on the temporal association and the spatial association and generating a trajectory based on the global player identification.
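One way to read the two associations: temporal association reuses a per-camera track's identity across frames, while spatial association merges detections from different cameras that land near the same field position. The keying scheme and matching radius below are illustrative assumptions:

```python
# Minimal sketch, assuming per-camera track IDs (temporal association)
# and cross-camera position matching (spatial association).
from collections import defaultdict

SPATIAL_THRESH = 1.0  # meters; hypothetical cross-camera matching radius

def assign_global_id(cam, trk, pos, by_track, last_pos):
    """Temporal association: a known (camera, track) keeps its global ID.
    Spatial association: otherwise, match by proximity to known players."""
    if (cam, trk) in by_track:
        gid = by_track[(cam, trk)]
    else:
        gid = next((g for g, p in last_pos.items()
                    if sum((a - b) ** 2 for a, b in zip(pos, p)) ** 0.5
                    < SPATIAL_THRESH), len(last_pos))
        by_track[(cam, trk)] = gid
    last_pos[gid] = pos
    return gid

by_track, last_pos, trajectories = {}, {}, defaultdict(list)
detections = [  # (camera, per-camera track id, field position, frame)
    (0, 7, (10.0, 5.0), 0), (1, 2, (10.2, 5.1), 0), (0, 7, (10.5, 5.3), 1),
]
for cam, trk, pos, t in detections:
    gid = assign_global_id(cam, trk, pos, by_track, last_pos)
    trajectories[gid].append((t, pos))   # trajectory keyed by global player ID
print(dict(trajectories))
```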
Abstract:
Methods, systems and apparatuses may provide for technology that selects a player from a plurality of players based on an automated analysis of two-dimensional (2D) video data associated with a plurality of cameras, wherein the selected player is nearest to a projectile depicted in the 2D video data. The technology may also track a location of the selected player over a subsequent plurality of frames in the 2D video data and estimate a location of the projectile based on the location of the selected player over the subsequent plurality of frames.
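The idea can be sketched in a few lines: pick the player nearest the ball, then use that player's tracked positions as a proxy for the ball (e.g., when the ball itself is occluded). The coordinates and the co-location estimator below are synthetic assumptions:

```python
# Sketch under assumed 2D pixel coordinates; data is synthetic.
def nearest_player(players, projectile):
    """Select the player with minimum squared distance to the projectile."""
    return min(players, key=lambda p: (p[0] - projectile[0]) ** 2
                                      + (p[1] - projectile[1]) ** 2)

projectile = (100.0, 40.0)
players_frame0 = [(90.0, 42.0), (300.0, 200.0)]
carrier = nearest_player(players_frame0, projectile)

# Tracked positions of the selected player over subsequent frames
track = [(92.0, 41.0), (95.0, 40.5), (99.0, 40.0)]
estimated_projectile = track[-1]   # simplest estimator: co-located with carrier
print(f"selected {carrier}, estimated projectile at {estimated_projectile}")
```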
Abstract:
A system (600) includes multiple cameras (104) disposed about an area (102), a processor (606), and a memory (608) communicatively coupled to the processor. The memory stores instructions that cause the processor to perform operations including receiving a set of video data (602) associated with the cameras. In an embodiment, the set of video data includes a set of image frames associated with a set of ball tracking data (618, 622). In an embodiment, the operations include selecting a first image frame (626) associated with a first change in acceleration and a second image frame (628) associated with a second change in acceleration. In an embodiment, the operations include generating a set of virtual camera actions (630) based on the first image frame and the second image frame.
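A minimal sketch of the frame-selection step, assuming 1D ball positions sampled per frame: acceleration is the second difference of position, and frames where it jumps are candidates. The threshold and the action mapping are illustrative assumptions:

```python
# Sketch: find frames where the ball's acceleration changes sharply,
# then map the first and second such frames to virtual camera actions.
def accelerations(positions):
    vel = [b - a for a, b in zip(positions, positions[1:])]
    return [b - a for a, b in zip(vel, vel[1:])]

positions = [0.0, 1.0, 2.0, 3.0, 3.5, 3.6, 3.6, 4.5, 6.0]  # synthetic track
acc = accelerations(positions)

ACC_JUMP = 0.4  # hypothetical threshold for a "change in acceleration"
key_frames = [i + 2 for i, (a, b) in enumerate(zip(acc, acc[1:]))
              if abs(b - a) > ACC_JUMP]

first, second = key_frames[0], key_frames[-1]
actions = [("cut_to_ball_cam", first), ("slow_motion_replay", second)]
print(actions)
```

The second difference is a crude stand-in for whatever acceleration estimate the tracking data actually carries; real position data would be filtered first.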
Abstract:
Methods, systems and apparatuses may provide for technology that extracts one or more motion features from filtered position data associated with a projectile in a game and identifies a turning point in a trajectory of the projectile based on the one or more motion features. The technology may also automatically designate the turning point as a highlight moment if one or more of the turning point or the trajectory satisfies a proximity condition with respect to a target area in the game.
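A sketch with synthetic data: take the turning point to be where vertical velocity flips sign (the apex), and the proximity condition to be distance from a target area. The target location and radius are illustrative assumptions:

```python
# Sketch: apex detection plus a proximity check against a target area.
TARGET, RADIUS = (10.0, 3.0), 2.0   # e.g., a basket position; hypothetical

def turning_point(track):
    """Index where the projectile's vertical velocity changes sign."""
    for i in range(1, len(track) - 1):
        rising = track[i][1] - track[i - 1][1] > 0
        falling = track[i + 1][1] - track[i][1] <= 0
        if rising and falling:
            return i
    return None

track = [(6.0, 1.0), (7.5, 2.2), (9.0, 3.1), (10.5, 3.2), (12.0, 2.5)]
i = turning_point(track)
x, y = track[i]
near = ((x - TARGET[0]) ** 2 + (y - TARGET[1]) ** 2) ** 0.5 <= RADIUS
print(f"turning point at frame {i}, highlight={near}")
```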
Abstract:
Methods and apparatus to generate photo-realistic three-dimensional models of a photographed environment are disclosed. An apparatus includes an object position calculator to determine a three-dimensional (3D) position of an object detected within a first image of an environment and within a second image of the environment. The apparatus further includes a 3D model generator to generate a 3D model of the environment based on the first image and the second image. The apparatus also includes a model integrity analyzer to detect a difference between the 3D position of the object and the 3D model. The 3D model generator automatically modifies the 3D model based on the difference in response to the difference satisfying a confidence threshold.
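The consistency check at the heart of this can be sketched simply: triangulate an object's 3D position from two views, compare it against the model, and patch the model when the discrepancy is confidently large. The midpoint triangulation stand-in and the threshold are assumptions for illustration:

```python
# Minimal sketch of the model-integrity loop with a toy one-point "model".
CONF_THRESH = 0.5  # meters; hypothetical confidence threshold

def triangulate(p_cam1, p_cam2):
    """Hypothetical stand-in for two-view triangulation: midpoint here."""
    return tuple((a + b) / 2 for a, b in zip(p_cam1, p_cam2))

model = {"wall": (0.0, 0.0, 4.0)}              # toy 3D model: one named point
obj_3d = triangulate((0.1, 0.0, 4.9), (-0.1, 0.0, 5.1))  # from the two images

diff = sum((a - b) ** 2 for a, b in zip(obj_3d, model["wall"])) ** 0.5
if diff > CONF_THRESH:                         # difference satisfies threshold
    model["wall"] = obj_3d                     # automatically modify the model
print(model)
```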
Abstract:
An example apparatus is disclosed herein that includes a memory and at least one processor. The at least one processor is to execute instructions to: select a gesture from a database, the gesture including a sequence of poses; translate the selected gesture into an animated avatar performing the selected gesture for display at a display device; display a prompt for a user to perform the gesture demonstrated by the animated avatar; capture an image of the user performing the selected gesture; and perform a comparison between the gesture performed by the user in the captured image and the selected gesture to determine whether there is a match.
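The comparison step can be sketched by treating gestures as pose sequences and matching the user's captured poses against the reference within a tolerance. The database contents and tolerance below are illustrative assumptions:

```python
# Sketch: per-pose matching of a captured gesture against a reference.
GESTURES = {"wave": [(0.0, 1.0), (0.2, 1.1), (0.0, 1.0)]}  # toy pose sequence
TOL = 0.15  # hypothetical per-coordinate tolerance

def matches(captured, reference, tol=TOL):
    """Match when every captured pose is within tolerance of the reference."""
    if len(captured) != len(reference):
        return False
    return all(abs(cx - rx) <= tol and abs(cy - ry) <= tol
               for (cx, cy), (rx, ry) in zip(captured, reference))

selected = GESTURES["wave"]              # gesture selected from the database
# An animate/prompt/capture step would run here (hypothetical, elided).
captured = [(0.05, 1.0), (0.18, 1.08), (-0.02, 0.97)]  # user's performance
print("match" if matches(captured, selected) else "no match")
```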
Abstract:
Methods, systems and apparatuses may provide for technology that detects an individual in a real-time multi-camera video feed and generates three-dimensional (3D) skeletal data based on the real-time multi-camera video feed. The technology may also automatically identify a frontal body orientation of the individual based on the 3D skeletal data and one or more anthropometric constraints.
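One plausible reading of the geometry: the frontal direction is the normal of the torso plane spanned by the shoulder and spine vectors, accepted only when the skeleton passes a simple anthropometric check. The joint values and shoulder-width range below are synthetic assumptions:

```python
# Sketch: frontal orientation from a toy 3D skeleton with one
# anthropometric constraint on shoulder width.
import numpy as np

joints = {  # toy 3D skeleton, meters
    "l_shoulder": np.array([-0.20, 1.45, 0.0]),
    "r_shoulder": np.array([0.20, 1.45, 0.0]),
    "mid_hip":    np.array([0.00, 0.95, 0.0]),
}

shoulder_vec = joints["r_shoulder"] - joints["l_shoulder"]
spine_vec = (joints["l_shoulder"] + joints["r_shoulder"]) / 2 - joints["mid_hip"]

# Anthropometric constraint (assumed range): plausible shoulder width
if 0.25 <= np.linalg.norm(shoulder_vec) <= 0.60:
    normal = np.cross(spine_vec, shoulder_vec)   # points out of the torso plane
    frontal = normal / np.linalg.norm(normal)
    print("frontal direction:", frontal)
```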