Abstract:
An intuitive interface may allow users of a computing device (e.g., children, etc.) to create imaginary three-dimensional (3D) objects of any shape using body gestures performed by the users as a primary or only input. A user may make motions while in front of an imaging device that senses movement of the user. The interface may allow first-person and/or third-person interaction during creation of objects, which may map a body of a user to a body of an object presented by a display. In an example process, the user may start by scanning an arbitrary body gesture into an initial shape of an object. Next, the user may perform various gestures using his or her body, which may result in various edits to the object. After the object is completed, the object may be animated, possibly based on movements of the user.
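A minimal Python sketch of the scan-then-edit flow this abstract outlines, assuming a skeleton tracker that yields 3D joint positions; the function names (scan_pose_to_shape, apply_stretch_gesture) and the voxel-grid shape representation are illustrative assumptions, not the patented interface.

```python
import numpy as np

def scan_pose_to_shape(joints: np.ndarray, grid: int = 32) -> np.ndarray:
    """Rasterize 3D joint positions into a voxel grid as the object's initial shape."""
    shape = np.zeros((grid, grid, grid), dtype=bool)
    # Normalize joints into the grid's index range and mark the enclosing voxels.
    lo, hi = joints.min(axis=0), joints.max(axis=0)
    idx = ((joints - lo) / (hi - lo + 1e-9) * (grid - 1)).astype(int)
    shape[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return shape

def apply_stretch_gesture(shape: np.ndarray, axis: int, factor: int) -> np.ndarray:
    """Example edit gesture: stretch the object along one axis."""
    return np.repeat(shape, factor, axis=axis)

# Mock usage: a 20-joint skeleton pose is scanned, then stretched vertically.
pose = np.random.rand(20, 3)
obj = scan_pose_to_shape(pose)
obj = apply_stretch_gesture(obj, axis=2, factor=2)
print(obj.shape)  # (32, 32, 64)
```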
Abstract:
Implementations of the subject matter described herein relate to mixed reality rendering of objects. According to the embodiments of the subject matter described herein, while rendering an object, a wearable computing device takes lighting conditions in the real world into account, thereby increasing the realism of the rendered object. In particular, the wearable computing device acquires environment lighting information for an object to be rendered and renders the object to a user based on the environment lighting information. In this way, the object rendered by the wearable computing device can be more realistic and accurate. The user will thus have a better interaction experience.
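As a hedged illustration of the core idea, the sketch below modulates a virtual object's diffuse color by ambient light estimated from an environment capture; estimate_ambient_light and shade_diffuse are hypothetical names, and the mean-RGB estimate stands in for whatever lighting model the device actually uses.

```python
import numpy as np

def estimate_ambient_light(env_image: np.ndarray) -> np.ndarray:
    """Use the mean RGB of an environment capture as a crude ambient light term."""
    return env_image.reshape(-1, 3).mean(axis=0)

def shade_diffuse(albedo: np.ndarray, ambient: np.ndarray) -> np.ndarray:
    """Scale surface albedo by the ambient light color, clamped to [0, 1]."""
    return np.clip(albedo * ambient, 0.0, 1.0)

env = np.random.rand(64, 64, 3)      # mock environment capture
albedo = np.array([0.8, 0.2, 0.2])   # reddish virtual object
print(shade_diffuse(albedo, estimate_ambient_light(env)))
```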
Abstract:
Implementations of the subject matter described herein relate to mixed reality object rendering based on ambient light conditions. According to the embodiments of the subject matter described herein, while rendering an object, a wearable computing device acquires light conditions of the real world, thereby increasing the realism of the rendered object. In particular, the wearable computing device is configured to acquire an image of the environment where the wearable computing device is located. The image is adjusted based on a camera parameter used when the image was captured. Subsequently, ambient light information is determined based on the adjusted image. In this way, the wearable computing device can obtain more accurate ambient light information, so as to render to the user an object with enhanced realism. Accordingly, the user can have a better interaction experience.
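A rough sketch of the adjustment step under stated assumptions: the abstract does not name the camera parameters, so exposure time and ISO gain are assumed here, and the image is divided by them to recover comparable relative radiance before the ambient estimate is taken.

```python
import numpy as np

def normalize_exposure(image: np.ndarray, exposure_s: float, iso: float) -> np.ndarray:
    """Undo camera settings: relative radiance ~ pixel / (exposure_time * gain)."""
    gain = iso / 100.0
    return image.astype(np.float64) / (exposure_s * gain)

def ambient_from_image(image: np.ndarray) -> float:
    """Average relative radiance as a scalar ambient intensity estimate."""
    return float(image.mean())

# Mock capture taken at 1/60 s and ISO 400.
capture = np.random.randint(0, 256, (48, 64, 3), dtype=np.uint8)
radiance = normalize_exposure(capture, exposure_s=1 / 60, iso=400)
print(ambient_from_image(radiance))
```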
Abstract:
The claimed subject matter includes techniques for printing three-dimensional (3D) objects. An example method includes obtaining a 3D model and processing the 3D model to generate layers of tool path information. The processing includes automatically optimizing the orientation of the 3D model to reduce the amount of support material used in the printing. The method also includes printing the 3D object using the layers.
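One plausible reading of the orientation optimization, sketched below: sample candidate rotations of the model and keep the one that minimizes the area of faces overhanging past the printer's limit, which would otherwise need support. The 45° threshold, random rotation sampling, and function names are assumptions, not the patented algorithm.

```python
import numpy as np

def support_area(normals: np.ndarray, areas: np.ndarray,
                 max_overhang_deg: float = 45.0) -> float:
    """Total area of faces tilted past the overhang limit (assumed 45 degrees)."""
    cos_limit = -np.cos(np.radians(max_overhang_deg))
    needs_support = normals[:, 2] < cos_limit  # normal points steeply downward
    return float(areas[needs_support].sum())

def best_orientation(normals: np.ndarray, areas: np.ndarray, n_samples: int = 64):
    """Pick the sampled rotation minimizing the support-needing surface area."""
    rng = np.random.default_rng(0)
    best_R, best_cost = np.eye(3), np.inf
    for _ in range(n_samples):
        R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthonormal basis
        if np.linalg.det(R) < 0:
            R[:, 0] = -R[:, 0]                            # force a proper rotation
        cost = support_area(normals @ R.T, areas)
        if cost < best_cost:
            best_R, best_cost = R, cost
    return best_R, best_cost

# Mock usage: 200 random unit face normals with random areas.
rng = np.random.default_rng(1)
n = rng.standard_normal((200, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
R, cost = best_orientation(n, np.abs(rng.standard_normal(200)))
print(cost)
```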
Abstract:
The present disclosure provides a method, apparatus, and system for three-dimensional (3D) face tracking. The method for 3D face tracking may comprise: obtaining a two-dimensional (2D) face image; performing a local feature regression on the 2D face image to determine 3D face representation parameters corresponding to the 2D face image; and generating a 3D facial mesh and corresponding 2D facial landmarks based on the determined 3D face representation parameters. The present disclosure may improve tracking accuracy and reduce memory cost, and accordingly may be effectively applied in broader application scenarios.
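A minimal sketch of a cascaded local feature regression loop consistent with the abstract, assuming linear regression stages; project, extract_local_features, and the additive parameter update are illustrative placeholders for components the abstract does not specify.

```python
import numpy as np

def track_face(image, params, project, extract_local_features, regressors):
    """Cascaded regression from a 2D image to 3D face representation parameters.

    project(params)                      -> (N, 2) projected 2D landmarks
    extract_local_features(image, lmks)  -> 1-D local feature vector
    regressors                           -> list of (W, b) linear stages
    """
    for W, b in regressors:
        landmarks = project(params)                     # current 2D landmark guess
        phi = extract_local_features(image, landmarks)  # features around landmarks
        params = params + W @ phi + b                   # regress an additive update
    return params

# Mock usage with random stand-ins for the learned components.
rng = np.random.default_rng(0)
stages = [(0.01 * rng.standard_normal((10, 32)), np.zeros(10)) for _ in range(3)]
p = track_face(
    image=None,                           # stand-in; features are mocked below
    params=np.zeros(10),                  # initial 3D face parameters
    project=lambda q: np.zeros((68, 2)),  # mock landmark projection
    extract_local_features=lambda img, lm: rng.standard_normal(32),
    regressors=stages,
)
print(p.shape)  # (10,)
```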
Abstract:
In this disclosure, a solution for denoising a curve mesh is proposed. For a curve mesh including a polygonal facet, a noisy normal and a ground-truth normal of a first facet in the mesh are obtained. Then, based on the noisy normal, a first geometric feature of the first facet is determined from a plurality of neighboring facets of the first facet in the mesh. Next, based on the first geometric feature and the ground-truth normal, a mapping from the first geometric feature to the ground-truth normal of the first facet is determined for denoising the mesh.
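A small sketch of learning such a mapping, under the assumption that a facet's geometric feature is the concatenated noisy normals of its neighboring facets and that the mapping is fit by linear least squares; the actual feature construction and learner are not specified by the abstract.

```python
import numpy as np

def facet_feature(noisy_normals: np.ndarray, neighbors: np.ndarray, i: int) -> np.ndarray:
    """Concatenate the noisy normals of facet i's neighbors into one feature vector."""
    return noisy_normals[neighbors[i]].ravel()

def fit_denoising_map(features: np.ndarray, gt_normals: np.ndarray) -> np.ndarray:
    """Least-squares fit of a linear map from geometric features to ground-truth normals."""
    W, *_ = np.linalg.lstsq(features, gt_normals, rcond=None)
    return W

def denoise_normal(W: np.ndarray, feature: np.ndarray) -> np.ndarray:
    """Predict a facet normal from its feature and renormalize it."""
    n = feature @ W
    return n / (np.linalg.norm(n) + 1e-12)

# Mock data: 500 facets, each with 4 neighboring facets (feature dim 4 x 3 = 12).
rng = np.random.default_rng(0)
noisy = rng.standard_normal((500, 3))
nbrs = rng.integers(0, 500, size=(500, 4))
F = np.stack([facet_feature(noisy, nbrs, i) for i in range(500)])
G = rng.standard_normal((500, 3))  # mocked ground-truth normals
W = fit_denoising_map(F, G)
print(denoise_normal(W, F[0]))
```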
Abstract:
Some implementations disclosed herein provide techniques and arrangements to render global light transport in real time or near real time. For example, in a pre-computation stage, a first computing device may render points of surfaces (e.g., using multiple light bounces and the like). Attributes for each of the points may be determined. A plurality of machine learning algorithms may be trained using particular portions of the attributes. For example, a first machine learning algorithm may be trained using a first portion of the attributes and a second machine learning algorithm may be trained using a second portion of the attributes. The trained machine learning algorithms may be used by a second computing device to render components (e.g., diffuse and specular components) of indirect shading in real time.
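The sketch below mimics the two-regressor split, with plain linear least squares standing in for the unspecified machine learning algorithms: each regressor is trained offline on its own portion of the per-point attributes, and at runtime their predictions are summed into the indirect shading. The attribute split and dimensions are illustrative assumptions.

```python
import numpy as np

def fit_linear(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares regressor standing in for an unspecified ML model."""
    W, *_ = np.linalg.lstsq(X, y, rcond=None)
    return W

# Offline stage: attributes of pre-rendered surface points, split in two.
rng = np.random.default_rng(0)
attrs = rng.standard_normal((1000, 12))         # mock per-point attributes
diffuse_gt = rng.standard_normal((1000, 3))     # pre-rendered diffuse indirect
specular_gt = rng.standard_normal((1000, 3))    # pre-rendered specular indirect
W_diff = fit_linear(attrs[:, :8], diffuse_gt)   # first attribute portion
W_spec = fit_linear(attrs[:, 8:], specular_gt)  # second attribute portion

# Runtime stage: cheap per-point evaluation of the learned components.
def indirect_shading(a: np.ndarray) -> np.ndarray:
    return a[:8] @ W_diff + a[8:] @ W_spec

print(indirect_shading(attrs[0]))
```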
Abstract:
The implementations of the subject matter described herein relate to an octree-based convolutional neural network. In some implementations, there is provided a computer-implemented method for processing a three-dimensional shape. The method comprises obtaining an octree representing the three-dimensional shape. Nodes of the octree include empty nodes and non-empty nodes. The empty nodes exclude the three-dimensional shape and are leaf nodes of the octree, and the non-empty nodes include at least a part of the three-dimensional shape. The method further comprises, for nodes in the octree at a depth associated with a convolutional layer of a convolutional neural network, performing a convolution operation of the convolutional layer to obtain an output of the convolutional layer.
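A toy sketch of a convolution restricted to octree nodes at one depth, assuming a hash-map octree in which only non-empty nodes store features and empty (absent) neighbors contribute zeros; the data layout and weight shape are illustrative, not the patent's implementation.

```python
import numpy as np

def octree_conv(nodes: dict, depth: int, weights: np.ndarray) -> dict:
    """Convolve features of non-empty nodes at one octree depth.

    nodes[depth] maps integer coordinates (x, y, z) -> feature vector (C_in,);
    weights has shape (27, C_in, C_out), one slice per 3x3x3 neighbor offset.
    """
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                            for dy in (-1, 0, 1)
                            for dz in (-1, 0, 1)]
    out = {}
    for (x, y, z) in nodes[depth]:
        acc = np.zeros(weights.shape[2])
        for k, (dx, dy, dz) in enumerate(offsets):
            nb = nodes[depth].get((x + dx, y + dy, z + dz))
            if nb is not None:            # empty (absent) nodes contribute zero
                acc += nb @ weights[k]
        out[(x, y, z)] = acc
    return out

# Example: two occupied nodes at depth 3 carrying 4-channel features.
tree = {3: {(0, 0, 0): np.ones(4), (1, 0, 0): np.ones(4)}}
W = np.random.default_rng(0).standard_normal((27, 4, 8))
print(octree_conv(tree, 3, W)[(0, 0, 0)].shape)  # (8,)
```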