Dense feature scale detection for image matching

    Publication No.: US11367205B1

    Publication Date: 2022-06-21

    Application No.: US16721483

    Application Date: 2019-12-19

    Applicant: Snap Inc.

    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image is used to generate multiple scaled images, which are input into a feature net that outputs feature data for each scale. An attention net generates an attention map from the input image; the attention map assigns emphasis as a soft distribution across the different scales based on texture analysis. The feature data and the attention map can be combined through multiplication and then summed to generate dense features for comparison.
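
The multiply-and-sum combination described above can be sketched in NumPy: per-scale feature maps are weighted by a softmax attention distribution over scales and summed. This is a minimal illustration, not the patented implementation; `dense_features`, its argument shapes, and the use of a softmax are assumptions about how the abstract's "soft distribution" might be realized.

```python
import numpy as np

def dense_features(feature_maps, attention_logits):
    """Combine per-scale feature maps into dense features.

    feature_maps: list of S arrays, each (H, W, C) -- hypothetical
        outputs of the feature net on S scaled copies of the input.
    attention_logits: (H, W, S) array -- hypothetical attention-net
        output; a softmax over the scale axis yields the "soft
        distribution" of emphasis across scales.
    """
    stacked = np.stack(feature_maps, axis=-1)            # (H, W, C, S)
    # Numerically stable softmax over the scale axis.
    logits = attention_logits - attention_logits.max(axis=-1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)             # (H, W, S)
    # Multiply feature data by attention weights, then sum over scales.
    return (stacked * attn[:, :, None, :]).sum(axis=-1)  # (H, W, C)
```

With uniform attention logits, the result reduces to the mean of the per-scale feature maps, which is a quick sanity check on the weighting.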

    Efficient human pose tracking in videos

    Publication No.: US11315259B2

    Publication Date: 2022-04-26

    Application No.: US16949594

    Application Date: 2020-11-05

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion, and may track the joint locations using a one-shot learner neural network trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
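
One plausible reading of the deep/shallow split is a keyframe schedule: the expensive deep network runs on a sparse subset of frames and the cheap shallow network covers the rest. The function and the `keyframe_interval` parameter below are hypothetical; the abstract does not say how frames are partitioned.

```python
def split_frames(num_frames, keyframe_interval=5):
    """Partition frame indices between the deep and shallow networks.

    Every `keyframe_interval`-th frame goes to the deep CNN; all other
    frames go to the shallow CNN. Both the schedule and the parameter
    name are assumptions for illustration.
    """
    deep = [i for i in range(num_frames) if i % keyframe_interval == 0]
    shallow = [i for i in range(num_frames) if i % keyframe_interval != 0]
    return deep, shallow
```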

    Local augmented reality persistent sticker objects

    Publication No.: US11308706B2

    Publication Date: 2022-04-19

    Application No.: US16927273

    Application Date: 2020-07-13

    Applicant: Snap Inc.

    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input received at the device associates a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area; the global tracking comprises using a global tracking template to track movement in the video image frames captured following that determination. When the global tracking determines that the target is back within the boundary area, local tracking resumes along with presentation of the AR sticker object on an output display of the device.
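
The local/global hand-off can be pictured as a two-state machine keyed on whether the target sits inside the boundary area. The class and method names below are illustrative only, and the actual template-matching computation is omitted.

```python
class StickerTracker:
    """Toy state machine for the local/global tracking hand-off."""

    def __init__(self, boundary):
        self.boundary = boundary  # (x_min, y_min, x_max, y_max)
        self.mode = "local"       # local tracking renders the AR sticker

    def update(self, target_pos):
        x, y = target_pos
        x0, y0, x1, y1 = self.boundary
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if self.mode == "local" and not inside:
            self.mode = "global"  # target left the boundary: switch to global template
        elif self.mode == "global" and inside:
            self.mode = "local"   # target returned: resume local tracking
        return self.mode
```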

    EYE TEXTURE INPAINTING
    Invention Application

    Publication No.: US20210319540A1

    Publication Date: 2021-10-14

    Application No.: US17355687

    Application Date: 2021-06-23

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
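
A toy composition of the texture from the two segmented regions, assuming boolean masks as the segmentation output. Filling uncovered pixels with the mean sclera colour is a crude stand-in for inpainting, not the patented method; `eye_texture` and its arguments are hypothetical names.

```python
import numpy as np

def eye_texture(region, iris_mask, sclera_mask):
    """Compose an eye texture from iris and sclera pixels.

    region: (H, W, 3) float crop around the eye; iris_mask and
    sclera_mask are boolean (H, W) arrays (hypothetical segmentation
    outputs). Pixels covered by neither mask are filled with the mean
    sclera colour as a placeholder inpainting step.
    """
    texture = np.zeros_like(region)
    texture[iris_mask] = region[iris_mask]
    texture[sclera_mask] = region[sclera_mask]
    hole = ~(iris_mask | sclera_mask)
    if sclera_mask.any():
        texture[hole] = region[sclera_mask].mean(axis=0)
    return texture
```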

    EFFICIENT HUMAN POSE TRACKING IN VIDEOS

    Publication No.: US20230010480A1

    Publication Date: 2023-01-12

    Application No.: US17660462

    Application Date: 2022-04-25

    Applicant: Snap, Inc.

    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames, with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion, and may track the joint locations using a one-shot learner neural network trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.

    Object modeling using light projection

    Publication No.: US11164376B1

    Publication Date: 2021-11-02

    Application No.: US16116590

    Application Date: 2018-08-29

    Applicant: Snap Inc.

    Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.

    Real-time bokeh effect
    Invention Grant

    Publication No.: US11087513B1

    Publication Date: 2021-08-10

    Application No.: US16186164

    Application Date: 2018-11-09

    Applicant: Snap Inc.

    Abstract: Systems and methods are provided for receiving an image from a camera of a mobile device, analyzing the image to determine a subject of the image, segmenting the subject of the image to generate a mask indicating an area of the image comprising the subject of the image, applying a bokeh effect to a background region of the image to generate a processed background region, generating an output image comprising the subject of the image and the processed background region, and causing the generated output image to display on a display of the mobile device.
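
The mask-then-blend pipeline above can be sketched with a simple box blur standing in for the bokeh kernel, which the abstract leaves unspecified. `apply_bokeh` and its parameters are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def apply_bokeh(image, subject_mask, blur_radius=2):
    """Blur the background and keep the masked subject sharp.

    image: (H, W, 3) float array; subject_mask: (H, W) boolean array
    from a hypothetical segmentation step. A box blur approximates the
    bokeh effect applied to the background region.
    """
    k = 2 * blur_radius + 1
    pad = ((blur_radius,) * 2, (blur_radius,) * 2, (0, 0))
    padded = np.pad(image, pad, mode="edge")
    blurred = np.zeros_like(image)
    for dy in range(k):  # accumulate the k*k sliding window
        for dx in range(k):
            blurred += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    blurred /= k * k
    # Composite: subject pixels from the original, background blurred.
    return np.where(subject_mask[..., None], image, blurred)
```

On a uniform image the blur is a no-op, so the output equals the input regardless of the mask, which is a convenient correctness check.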
