Motion representations for articulated animation

    Publication number: US11836835B2

    Publication date: 2023-12-05

    Application number: US17364218

    Application date: 2021-06-30

    Applicant: Snap Inc.

    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data and identify driving image data used to modify image feature data in the source image data. Using an image transformation neural network, they generate modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data. The image transformation neural network is trained to identify, for each image in the source image data, a driving image from the driving image data; the identified driving image is used by the network to modify the corresponding source image using motion estimation differences between the identified driving image and the corresponding source image. The modified source image data is then stored.
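    The pipeline in the abstract, matching each source image to a driving image and applying the motion difference between them, can be sketched roughly as follows. This is a minimal illustrative sketch, not the patented implementation: the nearest-feature matching stands in for the learned identification step, and keypoint displacements stand in for the full motion representation. All function names are hypothetical.

    ```python
    import numpy as np

    def select_driving_frame(source_feat, driving_feats):
        # Stand-in for the trained network's matching: pick the driving
        # frame whose feature vector is closest to the source features.
        dists = [np.linalg.norm(source_feat - d) for d in driving_feats]
        return int(np.argmin(dists))

    def motion_difference(src_kp, drv_kp):
        # Motion estimation difference as per-keypoint displacement
        # between the identified driving image and the source image.
        return drv_kp - src_kp

    def apply_motion(src_kp, delta):
        # Modify the source keypoints by the estimated motion difference;
        # a real system would warp image pixels, not just keypoints.
        return src_kp + delta
    ```

    In the actual system these steps are learned end to end; the sketch only shows how the per-image match and the motion difference compose.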

    AUTODECODING LATENT 3D DIFFUSION MODELS

    Publication number: US20240420407A1

    Publication date: 2024-12-19

    Application number: US18211149

    Application date: 2023-06-16

    Applicant: Snap Inc.

    Abstract: Systems and methods for generating static and articulated 3D assets are provided that include a 3D autodecoder at their core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded into a volumetric representation for rendering view-consistent appearance and geometry. The appropriate intermediate volumetric latent space is then identified, and robust normalization and de-normalization operations are implemented to learn a 3D diffusion model from 2D images or monocular videos of rigid or articulated objects. The methods are flexible enough to use either existing camera supervision or no camera information at all, instead learning the camera information efficiently during training. The generated results are shown to outperform state-of-the-art alternatives on various benchmark datasets and metrics, including multi-view image datasets of synthetic objects, real in-the-wild videos of moving people, and a large-scale, real video dataset of static objects.
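    The normalization and de-normalization the abstract mentions are what let a diffusion model operate in the intermediate volumetric latent space. A minimal sketch of that round trip, assuming simple per-element statistics over the dataset of latents (the patent's "robust" operations may differ):

    ```python
    import numpy as np

    def fit_latent_stats(latents):
        # Per-element mean/std over a dataset of volumetric latents
        # (shape: [num_samples, *latent_dims]); epsilon avoids /0.
        mean = latents.mean(axis=0)
        std = latents.std(axis=0) + 1e-6
        return mean, std

    def normalize(z, mean, std):
        # Map a latent into the standardized space the diffusion
        # model is trained in.
        return (z - mean) / std

    def denormalize(z, mean, std):
        # Invert the normalization before decoding the latent into
        # a volumetric representation for rendering.
        return z * std + mean
    ```

    The round trip is lossless by construction, which is what allows diffusion to be trained in the normalized space and the samples to be decoded afterward.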

    PLOTTING BEHIND THE SCENES WITH LEARNABLE GAME ENGINES

    Publication number: US20240307783A1

    Publication date: 2024-09-19

    Application number: US18121268

    Application date: 2023-03-14

    Applicant: Snap Inc.

    CPC classification number: A63F13/67 A63F13/57

    Abstract: A framework trains game-engine-like neural models from annotated videos to generate a Learnable Game Engine (LGE) that maintains states of the scene, objects, and agents in it, and enables rendering the environment from a controllable viewpoint. The LGE models the logic of the game and the rules of physics, making it possible for the user to play the game by specifying both high- and low-level action sequences. The LGE also unlocks a director's mode, where the game is played by plotting behind the scenes: specifying high-level actions and goals for the agents using text-based instructions. To implement the director's mode, a trained diffusion-based animation model navigates the scene under high-level constraints, enabling play against an adversary and devising a strategy to win a point. To render the resulting state of the environment and its agents, a compositional neural radiance field (NeRF) representation is used in a synthesis model.
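    The split between low-level actions and high-level, director-style goals can be illustrated with a toy state loop. This is only a conceptual sketch with hypothetical names: a simple interpolation toward a goal stands in for the diffusion-based animation model, and no rendering (NeRF) step is shown.

    ```python
    import numpy as np

    def step_low_level(state, action):
        # Advance the engine state by one per-agent action
        # (a toy integrator stands in for the learned physics).
        state = dict(state)
        state["agent"] = state["agent"] + action
        return state

    def step_high_level(state, goal, n_steps=10):
        # "Director's mode": decompose a high-level goal position into
        # a sequence of low-level actions (stand-in for the
        # diffusion-based animation model's planning).
        delta = (goal - state["agent"]) / n_steps
        for _ in range(n_steps):
            state = step_low_level(state, delta)
        return state
    ```

    Each call returns a new state rather than mutating the old one, mirroring how an engine-like model keeps the scene state explicit and replayable.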

    Motion representations for articulated animation

    Publication number: US11798213B2

    Publication date: 2023-10-24

    Application number: US17364218

    Application date: 2021-06-30

    Applicant: Snap Inc.

    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data and identify driving image data used to modify image feature data in the source image data. Using an image transformation neural network, they generate modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data. The image transformation neural network is trained to identify, for each image in the source image data, a driving image from the driving image data; the identified driving image is used by the network to modify the corresponding source image using motion estimation differences between the identified driving image and the corresponding source image. The modified source image data is then stored.

    MOTION REPRESENTATIONS FOR ARTICULATED ANIMATION

    Publication number: US20210407163A1

    Publication date: 2021-12-30

    Application number: US17364218

    Application date: 2021-06-30

    Applicant: Snap Inc.

    Abstract: Systems and methods herein describe novel motion representations for animating articulated objects consisting of distinct parts. The described systems and methods access source image data and identify driving image data used to modify image feature data in the source image data. Using an image transformation neural network, they generate modified source image data comprising a plurality of modified source images depicting modified versions of the image feature data. The image transformation neural network is trained to identify, for each image in the source image data, a driving image from the driving image data; the identified driving image is used by the network to modify the corresponding source image using motion estimation differences between the identified driving image and the corresponding source image. The modified source image data is then stored.
