Abstract:
Systems and techniques for positional analysis using computer vision sensor synchronization are described herein. A set of sensor data may be obtained for a participant of an activity. A video stream may be captured in response to detection of a start of the activity in the set of sensor data. The video stream may include images of the participant engaging in the activity. A key stage of the activity may be identified by evaluation of the sensor data. A key frame may be selected from the video stream using a timestamp of the sensor data used to identify the key stage of the activity. A skeletal map may be generated for the participant in the key frame using key points of the participant extracted from the key frame. Instructional data may be selected using the skeletal map. The instructional data may be displayed on a display device.
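A minimal sketch of the timestamp-based key-frame selection described above, assuming timestamped sensor samples and video frames; the peak-magnitude heuristic for the key stage and the data layout are illustrative assumptions, not the claimed method.

```python
# Sketch: pick the key-stage sensor sample, then the video frame nearest in time.
from dataclasses import dataclass

@dataclass
class SensorSample:
    timestamp: float      # seconds since start of capture
    magnitude: float      # e.g. accelerometer magnitude

@dataclass
class VideoFrame:
    timestamp: float
    image: object         # placeholder for pixel data

def find_key_stage(samples):
    """Pick the sensor sample treated as the key stage (here: peak magnitude)."""
    return max(samples, key=lambda s: s.magnitude)

def select_key_frame(frames, key_sample):
    """Select the video frame whose timestamp is closest to the key-stage sample."""
    return min(frames, key=lambda f: abs(f.timestamp - key_sample.timestamp))

samples = [SensorSample(t / 10.0, 5 - abs(5 - t)) for t in range(11)]
frames = [VideoFrame(t / 30.0, None) for t in range(31)]
key = find_key_stage(samples)
print(select_key_frame(frames, key).timestamp)   # frame aligned with the key stage
```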
Abstract:
A mechanism is described to facilitate gesture matching according to one embodiment. A method of embodiments, as described herein, includes selecting a gesture from a database during an authentication phase, translating the selected gesture into an animated avatar, displaying the avatar, prompting a user to perform the selected gesture, capturing a real-time image of the user, and comparing the gesture performed by the user in the captured image to the selected gesture to determine whether there is a match.
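An illustrative sketch of that authentication flow; the gesture database, the feature-vector comparison, and the threshold are stand-ins for whatever representation an embodiment would actually use.

```python
# Sketch: pick a challenge gesture, prompt the user, compare captured features.
import math
import random

GESTURE_DB = {
    "wave":      [0.9, 0.1, 0.4],
    "thumbs_up": [0.2, 0.8, 0.7],
}

def select_gesture():
    """Pick a challenge gesture from the database for this authentication attempt."""
    name = random.choice(list(GESTURE_DB))
    return name, GESTURE_DB[name]

def extract_features(image):
    """Placeholder for pose/gesture features extracted from the captured image."""
    return image  # in this sketch the 'image' is already a feature vector

def gestures_match(expected, observed, threshold=0.25):
    """Accept when the distance between feature vectors is small enough."""
    return math.dist(expected, observed) <= threshold

name, expected = select_gesture()
print(f"Please perform: {name}")          # would drive the animated avatar prompt
captured = [v + 0.05 for v in expected]   # stand-in for a real-time capture
print("match" if gestures_match(expected, extract_features(captured)) else "no match")
```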
Abstract:
Examples of systems and methods for augmented facial animation are generally described herein. A method for mapping facial expressions to an alternative avatar expression may include capturing a series of images of a face, and detecting a sequence of facial expressions of the face from the series of images. The method may include determining an alternative avatar expression mapped to the sequence of facial expressions, and animating an avatar using the alternative avatar expression.
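A minimal sketch of mapping a detected expression sequence to an alternative avatar expression; the sequence table, expression names, and window size are invented for illustration.

```python
# Sketch: look up the most recent expression sub-sequence in a mapping table.
EXPRESSION_MAP = {
    ("eyebrow_raise", "smile"): "avatar_ears_perk",
    ("wink", "wink"): "avatar_sparkle_eyes",
}

def map_to_alternative(sequence, window=2):
    """Return the alternative avatar expression for the latest matching sub-sequence."""
    recent = tuple(sequence[-window:])
    return EXPRESSION_MAP.get(recent)

detected = ["neutral", "eyebrow_raise", "smile"]   # from the series of face images
print(map_to_alternative(detected))                # -> "avatar_ears_perk"
```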
Abstract:
An embodiment of a semiconductor package apparatus may include technology to recognize an action in a video, determine a synchronization point in the video based on the recognized action, and align sensor-related information with the video based on the synchronization point. Other embodiments are disclosed and claimed.
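A minimal sketch of aligning sensor readings with video once a synchronization point is known, assuming the recognized action yields both a frame index and a sensor timestamp; the inputs are placeholders.

```python
# Sketch: derive a time offset from the synchronization point, then shift readings.
def compute_offset(action_frame_index, fps, action_sensor_time):
    """Offset (seconds) to add to sensor timestamps so they line up with video time."""
    video_time = action_frame_index / fps
    return video_time - action_sensor_time

def align_sensor_to_video(readings, offset):
    """Shift each (timestamp, value) reading into the video's time base."""
    return [(t + offset, v) for t, v in readings]

offset = compute_offset(action_frame_index=90, fps=30.0, action_sensor_time=2.5)
print(align_sensor_to_video([(2.5, 1.0), (3.0, 0.4)], offset))
```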
Abstract:
Methods and apparatus relating to a unified environmental mapping framework are described. In an embodiment, Environmental Mapping (EM) logic performs one or more operations to extract illumination information for an object from an environmental map in response to a determination that the object has a diffuse surface and/or specular surface. Memory, coupled to the EM logic, stores data corresponding to the environmental map. Other embodiments are also disclosed and claimed.
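A toy sketch of choosing how illumination is extracted from an environmental map based on surface type; the tiny lat-long map and the sampling rules are illustrative only, not the disclosed EM logic.

```python
# Sketch: diffuse surfaces use an averaged lookup, specular surfaces a sharp one.
ENV_MAP = [
    [0.2, 0.4, 0.9],   # one row per latitude band, one value per longitude bin
    [0.3, 0.6, 0.8],
]

def sample(u, v):
    """Nearest-texel lookup into the lat-long environment map (u, v in [0, 1])."""
    row = min(int(v * len(ENV_MAP)), len(ENV_MAP) - 1)
    col = min(int(u * len(ENV_MAP[0])), len(ENV_MAP[0]) - 1)
    return ENV_MAP[row][col]

def illumination(surface, u, v):
    """Extract illumination according to whether the surface is diffuse or specular."""
    if surface == "diffuse":
        texels = [t for row in ENV_MAP for t in row]
        return sum(texels) / len(texels)   # crude stand-in for an irradiance average
    return sample(u, v)                    # mirror-like lookup for a specular surface

print(illumination("diffuse", 0.5, 0.5), illumination("specular", 0.9, 0.1))
```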
Abstract:
Various embodiments of this disclosure may describe apparatuses, methods, and systems including an encoding engine to encode and/or compress one or more objects of interest within individual image frames with higher bit densities than the bit density employed to encode and/or compress their background. The image processing system may further include a context engine to identify a region of interest including at least a part of the one or more objects of interest, and scale the region of interest within individual image frames to emphasize the objects of interest. Other embodiments may also be disclosed or claimed.
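A sketch of giving regions of interest a higher bit density than the background by assigning per-block quantization parameters; the block size, QP values, and rectangular ROI are assumptions for illustration.

```python
# Sketch: lower QP (more bits) for blocks overlapping the region of interest.
def roi_qp_map(frame_w, frame_h, roi, block=16, roi_qp=22, bg_qp=36):
    """Return a QP per block; a lower QP means a higher bit density for that block."""
    x0, y0, x1, y1 = roi
    qp_map = []
    for by in range(0, frame_h, block):
        row = []
        for bx in range(0, frame_w, block):
            inside = bx < x1 and bx + block > x0 and by < y1 and by + block > y0
            row.append(roi_qp if inside else bg_qp)
        qp_map.append(row)
    return qp_map

for row in roi_qp_map(64, 32, roi=(16, 0, 48, 32)):
    print(row)   # ROI blocks get QP 22, background blocks QP 36
```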
Abstract:
Examples of systems and methods for three-dimensional model customization for avatar animation using a sketch image selection are generally described herein. A method for rendering a three-dimensional model may include presenting a plurality of sketch images to a user on a user interface, and receiving a selection of sketch images from the plurality of sketch images to compose a face. The method may include rendering the face as a three-dimensional model, the three-dimensional model for use as an avatar.
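A minimal sketch of composing a face from per-feature sketch selections; the part names, library entries, and the flat "model" structure are placeholders for a real 3-D rendering pipeline.

```python
# Sketch: validate the user's sketch picks and assemble a face for the avatar model.
SKETCH_LIBRARY = {
    "eyes":  ["round_eyes", "narrow_eyes"],
    "nose":  ["small_nose", "wide_nose"],
    "mouth": ["smile_mouth", "neutral_mouth"],
}

def compose_face(selections):
    """Check each selected sketch against the library and build a face description."""
    face = {}
    for part, choice in selections.items():
        if choice not in SKETCH_LIBRARY.get(part, []):
            raise ValueError(f"unknown sketch {choice!r} for {part}")
        face[part] = choice
    return face

def render_as_model(face):
    """Stand-in for rendering: tag the composed face as a three-dimensional avatar model."""
    return {"type": "avatar_model", "features": face}

print(render_as_model(compose_face({"eyes": "round_eyes", "nose": "wide_nose",
                                    "mouth": "smile_mouth"})))
```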
Abstract:
A device, method, and system for screenshot grabbing and sharing comprise projecting contents on a display device connected with a communication device, and grabbing a screenshot from the contents projected on the display device in response to a screenshot grabbing request from another communication module connected with the communication device.
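A simplified sketch of handling such a grab request; the framebuffer copy and the request object are stand-ins for the projection path and the peer communication module.

```python
# Sketch: keep a copy of the projected contents and return it on a 'grab' request.
class Projector:
    def __init__(self):
        self.framebuffer = None

    def project(self, contents):
        """Show contents on the connected display and keep a copy for grabbing."""
        self.framebuffer = contents

    def handle_request(self, request):
        """On a 'grab' request from another module, return the current screenshot."""
        if request.get("type") == "grab" and self.framebuffer is not None:
            return {"screenshot": self.framebuffer, "to": request.get("from")}
        return None

p = Projector()
p.project("slide-3-pixels")
print(p.handle_request({"type": "grab", "from": "tablet-7"}))
```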
Abstract:
Apparatuses, systems, media and/or methods may involve providing work assistance. One or more user actions may be recognized, which may be observed by an image capture device, wherein the user actions may be directed to a work surface incapable of electronically processing one or more of the user actions. One or more regions of interest may be identified from the work surface and/or content may be extracted from the regions of interest, wherein the regions of interest may be determined based at least on one or more of the user actions. Additionally, one or more support operations associated with the content may be implemented.
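A sketch of turning observed user actions over a passive work surface into support operations; the action types, the OCR stub, and the glossary lookup are assumptions for illustration.

```python
# Sketch: actions mark regions of interest, content is extracted, a support op runs.
def regions_from_actions(actions):
    """Treat each 'circle' gesture as marking a region of interest on the surface."""
    return [a["bbox"] for a in actions if a["kind"] == "circle"]

def extract_content(surface_image, bbox):
    """Placeholder for OCR / image analysis over the marked region."""
    return surface_image.get(bbox, "")

def support_operation(text):
    """Example support action: offer a definition for the extracted term."""
    glossary = {"ohm": "unit of electrical resistance"}
    return glossary.get(text.lower(), f"search results for {text!r}")

surface = {(10, 10, 80, 30): "Ohm"}          # fake capture keyed by region
actions = [{"kind": "circle", "bbox": (10, 10, 80, 30)},
           {"kind": "tap", "bbox": (0, 0, 5, 5)}]
for box in regions_from_actions(actions):
    print(support_operation(extract_content(surface, box)))
```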