DENSE FEATURE SCALE DETECTION FOR IMAGE MATCHING

    Publication No.: US20220292697A1

    Publication Date: 2022-09-15

    Application No.: US17825994

    Filing Date: 2022-05-26

    Applicant: Snap Inc.

    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
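The combination step in the abstract (per-scale feature maps multiplied by a soft attention distribution, then summed over scales) can be sketched as follows. This is a minimal illustration, not the patented implementation: `feature_net` and `attention_net` are stand-ins for the trained CNNs, and the array shapes are assumptions.

```python
import numpy as np

def dense_features(image, scales, feature_net, attention_net):
    """Combine per-scale feature maps weighted by a soft attention map.

    feature_net(image, scale) -> (H, W, C) feature map for one scaled copy
    (resampling back to the input resolution is assumed to happen inside).
    attention_net(image) -> (H, W, S) soft distribution over the S scales,
    summing to 1 at each pixel. Both callables are hypothetical stand-ins
    for the trained networks described in the abstract.
    """
    # Feature data for each scaled copy of the input image: (H, W, C, S)
    feats = np.stack([feature_net(image, s) for s in scales], axis=-1)
    # Attention map assigning per-pixel emphasis to each scale: (H, W, S)
    attn = attention_net(image)
    # Multiply feature data by attention weights, then sum over scales.
    return (feats * attn[:, :, None, :]).sum(axis=-1)  # (H, W, C)
```

With a uniform attention map the result is simply the average of the per-scale feature maps; a peaked attention map selects the scale the texture analysis favors at each pixel.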

    IMAGE AND POINT CLOUD BASED TRACKING AND IN AUGMENTED REALITY SYSTEMS

    Publication No.: US20210174578A1

    Publication Date: 2021-06-10

    Application No.: US17248833

    Filing Date: 2021-02-10

    Applicant: Snap Inc.

    Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. Point cloud data describing an environment is then accessed. A two-dimensional surface of an image of an environment is captured, and a portion of the image is matched to a portion of key points in the point cloud data. An augmented reality object is then aligned within one or more images of the environment based on the match of the point cloud with the image. In some embodiments, building façade data may additionally be used to determine a device location and place the augmented reality object within an image.
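The matching step (a portion of the image matched to a portion of the key points in the point cloud) can be sketched as nearest-neighbor descriptor matching between 2D image keypoints and 3D point-cloud key points. The data layout, descriptor distance, and threshold below are illustrative assumptions; the abstract does not specify a descriptor or matcher.

```python
import numpy as np

def match_image_to_point_cloud(img_kp, img_desc, pc_points, pc_desc,
                               max_dist=0.7):
    """Match 2D image keypoints to 3D point-cloud key points by nearest
    descriptor distance (hypothetical layout; the patent does not fix one).

    img_kp: (N, 2) pixel coordinates; img_desc: (N, D) descriptors
    pc_points: (M, 3) world coordinates; pc_desc: (M, D) descriptors
    Returns (2D point, 3D point) correspondences suitable for a PnP-style
    pose solve that aligns the AR object with the image.
    """
    matches = []
    for i, d in enumerate(img_desc):
        dists = np.linalg.norm(pc_desc - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < max_dist:  # accept only sufficiently close matches
            matches.append((img_kp[i], pc_points[j]))
    return matches
```

The resulting 2D–3D correspondences are what refines the first position estimate and anchors the augmented reality object in the scene.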

    EFFICIENT HUMAN POSE TRACKING IN VIDEOS

    Publication No.: US20210125342A1

    Publication Date: 2021-04-29

    Application No.: US16949594

    Filing Date: 2020-11-05

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames. The composite network generates the joint data with a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion, and tracks the joint locations using a one-shot learner neural network trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
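The efficiency idea in the abstract, a deep CNN on one portion of the frames and a shallow CNN on the rest, can be sketched as a simple keyframe schedule. The interval-based schedule and the function names are assumptions; the patent splits work between the two networks but does not publish this exact policy.

```python
def track_joints(frames, deep_net, shallow_net, keyframe_interval=5):
    """Run the expensive deep CNN only on keyframes and the cheap shallow
    CNN on the frames in between (a hypothetical schedule illustrating the
    deep/shallow split described in the abstract).

    deep_net(frame) -> joint estimate from scratch
    shallow_net(frame, prev) -> joint estimate refined from the previous one
    """
    joints = []
    prev = None
    for t, frame in enumerate(frames):
        if prev is None or t % keyframe_interval == 0:
            prev = deep_net(frame)           # accurate, from scratch
        else:
            prev = shallow_net(frame, prev)  # cheap, refines previous estimate
        joints.append(prev)
    return joints
```

Because joint locations change little between adjacent frames, the shallow refinement preserves accuracy while most frames skip the expensive network.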

    Scaled perspective zoom on resource constrained devices

    Publication No.: US10757319B1

    Publication Date: 2020-08-25

    Application No.: US15624277

    Filing Date: 2017-06-15

    Applicant: Snap Inc.

    Abstract: A dolly zoom effect can be applied to one or more images captured via a resource-constrained device (e.g., a mobile smartphone) by manipulating the size of a target feature while the background in the one or more images changes due to physical movement of the resource-constrained device. The target feature can be detected using facial recognition or shape detection techniques. The target feature can be resized before the size is manipulated as the background changes (e.g., changes perspective).
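Under a pinhole model, a target's apparent size scales as 1/distance, so keeping the target a fixed on-screen size while the device moves means rescaling it by the ratio of current to initial distance. This is a hedged sketch of that compensation; the patent manipulates the target's size as the background perspective changes but does not publish this formula.

```python
def target_scale_factor(initial_distance, current_distance):
    """Scale factor that holds the target feature's on-screen size fixed
    as the device physically moves (pinhole model: apparent size ~ 1/d).

    Moving twice as close doubles the target's natural apparent size, so a
    factor of 0.5 keeps it constant while the background still changes,
    which produces the dolly zoom effect.
    """
    if initial_distance <= 0 or current_distance <= 0:
        raise ValueError("distances must be positive")
    return current_distance / initial_distance
```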

    Dense feature scale detection for image matching

    Publication No.: US10552968B1

    Publication Date: 2020-02-04

    Application No.: US15712990

    Filing Date: 2017-09-22

    Applicant: Snap Inc.

    Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.

    Local augmented reality persistent sticker objects

    Publication No.: US10055895B2

    Publication Date: 2018-08-21

    Application No.: US15010847

    Filing Date: 2016-01-29

    Applicant: Snap Inc.

    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking based on a determination that the target is outside a boundary area is used. The global tracking comprises using a global tracking template for tracking movement in the video image frames captured following the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
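The mode switch in the abstract, local template tracking while the target is inside the boundary area and global tracking once it leaves, can be sketched as a small decision function. The boundary representation and names are illustrative assumptions.

```python
def tracking_mode(target_pos, boundary):
    """Choose local vs. global tracking, mirroring the switch described in
    the abstract. `boundary` is (x_min, y_min, x_max, y_max); this layout
    is an assumption for illustration.
    """
    x, y = target_pos
    x0, y0, x1, y1 = boundary
    if x0 <= x <= x1 and y0 <= y <= y1:
        return "local"   # track the target template, render the AR sticker
    return "global"      # track overall frame movement until the target returns
```

When global tracking later places the target back inside the boundary, local tracking resumes and the AR sticker object is presented again.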
