Personalized videos
    Granted Patent

    Publication Number: US12177559B2

    Publication Date: 2024-12-24

    Application Number: US18242016

    Filing Date: 2023-09-05

    Applicant: Snap Inc.

    Abstract: Systems and methods for providing personalized videos are provided. An example method includes receiving preprocessed videos including a target face and facial expression parameters of the target face, modifying the preprocessed videos to generate one or more personalized videos by replacing the target face with a source face, where the source face is modified to adopt the facial expression parameters of the target face, providing a user interface enabling a user to share at least one personalized video of the one or more personalized videos with a further user of a further computing device, determining that an application to be used to share the personalized video does not allow auto-play of the personalized video in a video format, in response to the determination, exporting the personalized video of the one or more personalized videos into an image file, and sharing the image file via the application.
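    A minimal sketch of the export decision the abstract describes, assuming a capability table for sharing apps and a hypothetical video-to-GIF conversion helper; none of these names come from the patent:

        VIDEO_AUTOPLAY_APPS = {"app_a", "app_b"}  # assumed capability table

        def convert_video_to_gif(src_path: str, dst_path: str) -> None:
            """Placeholder for a real transcoding step (e.g. via ffmpeg)."""
            raise NotImplementedError("plug in an actual video-to-image converter")

        def export_for_sharing(personalized_video_path: str, target_app: str) -> str:
            """Return the path of the file to hand to the sharing application."""
            if target_app in VIDEO_AUTOPLAY_APPS:
                # The app can auto-play the video format, so share the video as-is.
                return personalized_video_path
            # Otherwise export to an animated image file so it still plays inline.
            image_path = personalized_video_path.rsplit(".", 1)[0] + ".gif"
            convert_video_to_gif(personalized_video_path, image_path)
            return image_path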

    Advanced video editing techniques using sampling patterns

    Publication Number: US12176005B2

    Publication Date: 2024-12-24

    Application Number: US18243487

    Filing Date: 2023-09-07

    Applicant: Snap Inc.

    Abstract: Systems and methods provide for advanced video editing techniques using sampling patterns. In one example, a computing device can receive a selection of a clip of a video and a sampling pattern. The computing device can determine a respective number of frames to sample from the clip for each interval of time over a length of time for a new clip. For example, the computing device can determine a function corresponding to the pattern that relates time to the number of frames to sample, a histogram corresponding to the pattern, or a definite integral corresponding to the pattern, among other approaches. The computing device can extract these numbers of frames from the clip and generate the new clip from the extracted frames. The computing device can present the new clip as a preview and send the new clip to other computing devices.
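    A short sketch, under assumptions, of the definite-integral variant mentioned in the abstract: a sampling pattern f(t) is integrated over each interval of the new clip to decide how many source frames to pull from that interval. The example pattern, interval scheme, and frame-selection rule are illustrative only:

        def frames_per_interval(pattern, num_intervals, total_frames):
            """Split [0, 1] into equal intervals; allocate frames in proportion to
            the definite integral of `pattern` over each interval (midpoint rule)."""
            weights = [pattern((i + 0.5) / num_intervals) for i in range(num_intervals)]
            total = sum(weights)
            return [round(total_frames * w / total) for w in weights]

        def build_new_clip(source_frames, counts):
            """Take `counts[i]` evenly spaced frames from the i-th slice of the clip."""
            new_clip, n = [], len(source_frames)
            slice_len = n / len(counts)
            for i, count in enumerate(counts):
                start, end = int(i * slice_len), int((i + 1) * slice_len)
                for k in range(count):
                    idx = start + int(k * (end - start) / max(count, 1))
                    new_clip.append(source_frames[min(idx, n - 1)])
            return new_clip

        # Example: a pattern that samples more densely toward the end of the clip.
        counts = frames_per_interval(lambda t: t * t, num_intervals=5, total_frames=30)
        preview = build_new_clip(list(range(300)), counts)  # 300 dummy source frames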

    Dynamic over-rendering in late-warping

    Publication Number: US12175623B2

    Publication Date: 2024-12-24

    Application Number: US18373563

    Filing Date: 2023-09-27

    Applicant: Snap Inc.

    Abstract: A method for adjusting an over-rendered area of a display in an AR device is described. The method includes identifying an angular velocity of a display device, a most recent pose of the display device, previous warp poses, and previous over-rendered areas, and adjusting a size of a dynamic over-rendered area based on a combination of the angular velocity, the most recent pose, the previous warp poses, and the previous over-rendered areas.
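    A rough sketch of the kind of heuristic the abstract describes: the over-rendered margin grows when angular velocity or recent warp-pose drift is high and is smoothed against previous margins. The weights, units, and clamps below are assumptions, not values from the patent:

        def adjust_overrender_margin(angular_velocity_deg_s, recent_pose_deltas_deg,
                                     previous_margins_px, min_px=8.0, max_px=96.0):
            """Return the margin (pixels) to over-render around the visible area."""
            # Predicted head motion over roughly one frame (~11 ms at 90 Hz).
            predicted_motion = angular_velocity_deg_s * 0.011
            # Average drift between render pose and warp pose over recent frames.
            recent_drift = sum(recent_pose_deltas_deg) / max(len(recent_pose_deltas_deg), 1)
            previous = previous_margins_px[-1] if previous_margins_px else min_px
            target = 12.0 * predicted_motion + 6.0 * recent_drift  # hypothetical weights
            # Smooth against the previous margin so the area does not oscillate.
            smoothed = 0.7 * previous + 0.3 * target
            return max(min_px, min(max_px, smoothed))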

    Situational-risk-based AR display
    Granted Patent

    Publication Number: US12175605B2

    Publication Date: 2024-12-24

    Application Number: US17700733

    Filing Date: 2022-03-22

    Applicant: Snap Inc.

    Abstract: Content is displayed to a user of an augmented reality device. In response to receiving an indication of an increased level of risk, the amount of content displayed to the user is reduced. The indication of an increased level of risk may be generated by or received from an associated transportation device. Adjusting the amount of displayed content may include moving one or more content elements out of a central field of view of the augmented reality device, reducing the size or visual prominence of a content element, or eliminating a content element from the display.
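    A minimal sketch of the adjustment described in the abstract, assuming a three-level risk scale reported by the transportation device; the thresholds, fields, and the rule that navigation content survives are illustrative assumptions:

        from dataclasses import dataclass

        @dataclass
        class ContentElement:
            name: str
            in_central_fov: bool
            scale: float = 1.0
            visible: bool = True

        def apply_risk_level(elements, risk):
            """risk: 0 = normal, 1 = elevated, 2 = high (hypothetical scale)."""
            for el in elements:
                if risk >= 1 and el.in_central_fov:
                    el.in_central_fov = False      # move out of the central field of view
                    el.scale = min(el.scale, 0.6)  # and reduce its size
                if risk >= 2 and el.name != "navigation":
                    el.visible = False             # eliminate non-essential content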

    Customizable avatar generation system

    Publication Number: US12175570B2

    Publication Date: 2024-12-24

    Application Number: US17657286

    Filing Date: 2022-03-30

    Applicant: Snap Inc.

    Abstract: Systems, methods, and computer-readable media are described for a customizable avatar generation system. The methods include accessing text data, processing, using at least one processor, the text data to determine first characteristics of the text data, selecting a personalized avatar from a plurality of personalized avatars based on matching the first characteristics with second characteristics of the plurality of personalized avatars, generating a customized avatar based on the text data and the selected personalized avatar, and causing the customized avatar to be displayed on a display of a computing device.
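    An illustrative sketch of the matching step in the abstract: derive simple characteristics from the text, then select the personalized avatar whose characteristics overlap the most. The keyword-based extraction and tag sets are stand-ins for whatever analysis the system actually uses:

        def text_characteristics(text):
            """Crude stand-in for the real text analysis (sentiment, topic, etc.)."""
            lowered = text.lower()
            tags = set()
            if any(w in lowered for w in ("great", "love", "yay", "!")):
                tags.add("happy")
            if "?" in lowered:
                tags.add("curious")
            if any(w in lowered for w in ("ugh", "sad", "tired")):
                tags.add("sad")
            return tags or {"neutral"}

        def select_avatar(text, avatars):
            """Pick the avatar whose characteristic tags best match the text's."""
            wanted = text_characteristics(text)
            return max(avatars, key=lambda name: len(avatars[name] & wanted))

        avatars = {"smiling": {"happy"}, "thinking": {"curious"}, "resting": {"neutral", "sad"}}
        print(select_avatar("Love this, great job!", avatars))  # -> "smiling"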

    AUTODECODING LATENT 3D DIFFUSION MODELS

    Publication Number: US20240420407A1

    Publication Date: 2024-12-19

    Application Number: US18211149

    Filing Date: 2023-06-16

    Applicant: Snap Inc.

    Abstract: Systems and methods for generating static and articulated 3D assets are provided that include a 3D autodecoder at their core. The 3D autodecoder framework embeds properties learned from the target dataset in the latent space, which can then be decoded into a volumetric representation for rendering view-consistent appearance and geometry. The appropriate intermediate volumetric latent space is then identified and robust normalization and de-normalization operations are implemented to learn a 3D diffusion from 2D images or monocular videos of rigid or articulated objects. The methods are flexible enough to use either existing camera supervision or no camera information at all—instead efficiently learning the camera information during training. The generated results are shown to outperform state-of-the-art alternatives on various benchmark datasets and metrics, including multi-view image datasets of synthetic objects, real in-the-wild videos of moving people, and a large-scale, real video dataset of static objects.
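    A small sketch, under assumptions, of the normalization and de-normalization step the abstract mentions: autodecoder latents are scaled toward unit statistics before diffusion training and mapped back before volumetric decoding. Per-channel statistics are an assumption; the operations used in the actual method may differ:

        import numpy as np

        def fit_latent_stats(latents):
            """latents: (num_objects, channels, depth, height, width) volumetric grid."""
            mean = latents.mean(axis=(0, 2, 3, 4), keepdims=True)
            std = latents.std(axis=(0, 2, 3, 4), keepdims=True) + 1e-6
            return mean, std

        def normalize(latents, mean, std):
            return (latents - mean) / std   # input to the diffusion model

        def denormalize(latents, mean, std):
            return latents * std + mean     # input to the volumetric decoder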

    CURATED CONTEXTUAL OVERLAYS FOR AUGMENTED REALITY EXPERIENCES

    Publication Number: US20240420382A1

    Publication Date: 2024-12-19

    Application Number: US18819587

    Filing Date: 2024-08-29

    Applicant: Snap Inc.

    Inventor: Tejas Bahulkar

    Abstract: Example systems, devices, media, and methods are described for curating and presenting a contextual overlay that includes graphical elements and virtual elements in an augmented reality experience. A contextual overlay application implements and controls the capturing of frames of video data within a field of view of the camera. An image processing system detects, in the captured frames of video data, one or more food items in the physical environment. Detecting food items may involve computer vision and machine-trained classification models. The method includes retrieving data associated with the detected food item, curating a contextual overlay based on the retrieved data and a configurable profile, and presenting the contextual overlay on the display.
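    An illustrative pipeline sketch for the flow in the abstract: detect food items in a captured frame, retrieve data for each, and keep only the fields allowed by a configurable profile. The detector, lookup function, and profile shape are hypothetical:

        def curate_overlay(frame, detect, lookup, profile):
            """Return overlay entries to render next to detected food items."""
            overlay = []
            for item in detect(frame):            # e.g. a machine-trained classifier
                data = lookup(item["label"])      # e.g. a nutrition-facts service
                if data is None:
                    continue
                # Keep only the fields the user's profile asks for (calories, allergens, ...).
                fields = {k: v for k, v in data.items() if k in profile["fields"]}
                overlay.append({"label": item["label"], "anchor": item["box"], "info": fields})
            return overlay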
