Local augmented reality persistent sticker objects

    Publication Number: US11308706B2

    Publication Date: 2022-04-19

    Application Number: US16927273

    Filing Date: 2020-07-13

    Applicant: Snap Inc.

    Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area. The global tracking comprises using a global tracking template to track movement in the video image frames captured after the determination that the target is outside the boundary area. When the global tracking determines that the target is within the boundary area, local tracking is resumed, along with presentation of the AR sticker object on an output display of the device.
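    The local-to-global hand-off described above can be illustrated with simple template matching. The sketch below is an assumption-laden approximation rather than the patented method: the class name, the 0.5 match threshold, and the use of a central crop as the global tracking template are all illustrative, and OpenCV (cv2) is assumed to be available.

```python
# Illustrative sketch only: names, thresholds, and the central-crop global
# template are assumptions, not details taken from the patent.
import cv2


class StickerTracker:
    def __init__(self, first_frame, target_box, boundary, match_thresh=0.5):
        x, y, w, h = target_box
        # First target template, cut from the user-selected portion of the first image.
        self.target_template = first_frame[y:y + h, x:x + w].copy()
        self.boundary = boundary             # (bx, by, bw, bh) local-tracking area
        self.match_thresh = match_thresh
        self.local_mode = True
        self.global_template = None          # captured when the target exits the boundary
        self.last_global_shift = None

    def _best_match(self, frame, template):
        # Normalized cross-correlation; returns the best match location and its score.
        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        return loc, score

    def _inside_boundary(self, loc):
        bx, by, bw, bh = self.boundary
        return bx <= loc[0] <= bx + bw and by <= loc[1] <= by + bh

    def update(self, frame):
        """Return the sticker anchor location, or None while tracking globally."""
        loc, score = self._best_match(frame, self.target_template)
        visible = score >= self.match_thresh and self._inside_boundary(loc)

        if self.local_mode:
            if visible:
                return loc                   # local tracking: render the sticker at loc
            # Target left the boundary area: keep a global template (a central crop
            # here) and switch to tracking overall frame movement.
            h, w = frame.shape[:2]
            self.global_template = frame[h // 4:3 * h // 4, w // 4:3 * w // 4].copy()
            self.local_mode = False
            return None

        if visible:
            self.local_mode = True           # target re-entered: resume local tracking
            return loc
        # Still outside: match the global template to estimate overall frame movement.
        shift, _ = self._best_match(frame, self.global_template)
        self.last_global_shift = shift
        return None
```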

    Processing and formatting video for interactive presentation

    Publication Number: US20210295874A1

    Publication Date: 2021-09-23

    Application Number: US17303817

    Filing Date: 2021-06-08

    Applicant: Snap Inc.

    Abstract: Systems and methods are described for determining a first media item related to an event, of a plurality of stored media items each comprising video content related to the event, that was captured in a device orientation corresponding to a first device orientation detected for a first computing device; providing, to the first computing device, the first media item to be displayed on the first computing device; in response to a detected change to a second device orientation for the first computing device, determining a second media item that was captured in a device orientation corresponding to the second device orientation detected for the first computing device; and providing, to the first computing device, the second media item to be displayed on the first computing device.
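    A minimal sketch of the orientation-matched selection step, assuming a simple in-memory data model; the MediaItem fields and function names below are hypothetical and not taken from the patent.

```python
# Illustrative data model and function names; they are assumptions, not the
# patent's interfaces.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class MediaItem:
    event_id: str
    orientation: str   # orientation at capture time, e.g. "portrait" or "landscape"
    url: str


def select_media(items: List[MediaItem], event_id: str,
                 device_orientation: str) -> Optional[MediaItem]:
    """Pick a stored item for the event whose capture orientation matches the device."""
    for item in items:
        if item.event_id == event_id and item.orientation == device_orientation:
            return item
    return None


def on_orientation_change(items: List[MediaItem], event_id: str,
                          new_orientation: str,
                          send_to_device: Callable[[MediaItem], None]) -> None:
    """Re-run the match when the device reports a changed orientation."""
    item = select_media(items, event_id, new_orientation)
    if item is not None:
        send_to_device(item)   # provide the newly matched media item for display
```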

    Processing and formatting video for interactive presentation

    Publication Number: US11122218B2

    Publication Date: 2021-09-14

    Application Number: US16722721

    Filing Date: 2019-12-20

    Applicant: Snap Inc.

    Abstract: Systems and methods are described for determining that a user interaction with a display of a computing device during display of a video comprising a sequence of frames indicates a region of interest in a current frame of the sequence of frames of the displayed video. For each frame of the sequence of frames after the current frame, the frame is cropped to generate a cropped frame comprising a portion of the frame including the region of interest in the frame, the cropped frame is enlarged based on a display size corresponding to an angle or orientation of the computing device during display of the video, and the enlarged cropped frame replaces the frame such that the enlarged cropped frame is displayed in the sequence of frames of the video on the display of the computing device instead of the frame.
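    The per-frame crop-and-enlarge step can be approximated as below. This is a rough sketch under simplifying assumptions (a fixed rectangular region of interest and a display size chosen from the device orientation), using OpenCV for resizing; it is not the patented implementation.

```python
# Rough sketch: the crop geometry and display-size handling are simplified
# assumptions, not the patented method. Requires OpenCV (cv2) and numpy frames.
import cv2


def crop_and_enlarge(frames, roi, display_size):
    """Replace each frame after the interaction with an enlarged crop around the ROI.

    frames: iterable of HxWx3 arrays (the frames following the current frame)
    roi: (x, y, w, h) region of interest indicated by the user interaction
    display_size: (width, height) chosen from the device's current angle/orientation
    """
    x, y, w, h = roi
    out = []
    for frame in frames:
        cropped = frame[y:y + h, x:x + w]                      # portion containing the ROI
        enlarged = cv2.resize(cropped, display_size,
                              interpolation=cv2.INTER_LINEAR)  # scale up to the display size
        out.append(enlarged)                                   # shown instead of the original frame
    return out
```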

    Neural networks for facial modeling

    Publication Number: US11100311B2

    Publication Date: 2021-08-24

    Application Number: US16509083

    Filing Date: 2019-07-11

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for modeling facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial features within the portion of the face. The systems and methods generate a descriptor function representing the set of facial features, fit objective functions of the descriptor function, identify an identification probability for each facial feature, and assign an identification to each facial feature.
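    A toy numpy sketch of the per-feature identification step outlined in this abstract. The descriptor, the label set, and the softmax-over-templates probability model below are placeholders chosen for illustration; the abstract does not specify them.

```python
# Toy sketch: the descriptor, label set, and probability model are placeholders.
import numpy as np

FEATURE_LABELS = ["left_eye", "right_eye", "nose", "mouth"]   # illustrative label set


def descriptor(patch: np.ndarray) -> np.ndarray:
    """Toy descriptor for one detected facial feature: a normalized flattened patch."""
    v = patch.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-8)


def identification_probabilities(desc: np.ndarray,
                                 label_templates: np.ndarray) -> np.ndarray:
    """Softmax over similarity to one template row per label in FEATURE_LABELS."""
    scores = label_templates @ desc
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()


def assign_identifications(feature_patches, label_templates):
    """For each facial feature, keep the most probable label and its probability."""
    results = []
    for patch in feature_patches:
        probs = identification_probabilities(descriptor(patch), label_templates)
        results.append((FEATURE_LABELS[int(np.argmax(probs))], float(probs.max())))
    return results
```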

    Dense captioning with joint inference and visual context

    Publication Number: US10726306B1

    Publication Date: 2020-07-28

    Application Number: US16226035

    Filing Date: 2018-12-19

    Applicant: Snap Inc.

    Abstract: A dense captioning system and method is provided for analyzing an image to generate proposed bounding regions for a plurality of visual concepts within the image, generating a region feature for each proposed bounding region to generate a plurality of region features of the image, and determining a context feature for the image using the proposed bounding region that is the largest in size of the proposed bounding regions. For each region feature of the plurality of region features of the image, the dense captioning system and method further provides for analyzing the region feature to determine a detection score that indicates a likelihood that the region feature comprises an actual object, and generating a caption for a visual concept in the image using the region feature and the context feature when the detection score is above a specified threshold value.
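    The control flow in this abstract (a context feature from the largest proposed region, a detection-score gate, then captioning) can be sketched as follows. The callables propose_regions, extract_feature, score_region, and generate_caption are hypothetical stand-ins for the learned components, and the 0.5 threshold is an assumption.

```python
# Hypothetical stand-ins: propose_regions, extract_feature, score_region and
# generate_caption represent learned components not specified in the abstract.
def dense_caption(image, propose_regions, extract_feature,
                  score_region, generate_caption, threshold=0.5):
    regions = propose_regions(image)                   # proposed (x, y, w, h) bounding regions
    largest = max(regions, key=lambda r: r[2] * r[3])  # largest proposed region by area
    context_feature = extract_feature(image, largest)  # visual context for the image

    captions = []
    for region in regions:
        region_feature = extract_feature(image, region)
        score = score_region(region_feature)           # likelihood of an actual object
        if score > threshold:                          # caption only confident regions
            captions.append((region, generate_caption(region_feature, context_feature)))
    return captions
```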

    Neural networks for facial modeling

    Publication Number: US10395100B1

    Publication Date: 2019-08-27

    Application Number: US16226084

    Filing Date: 2018-12-19

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for modeling facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial features within the portion of the face. The systems and methods generate a descriptor function representing the set of facial features, fit objective functions of the descriptor function, identify an identification probability for each facial feature, and assign an identification to each facial feature.

    Neural networks for facial modeling

    Publication Number: US10198626B2

    Publication Date: 2019-02-05

    Application Number: US15297789

    Filing Date: 2016-10-19

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for modeling facial representations using image segmentation with a client device. The systems and methods receive an image depicting a face, detect at least a portion of the face within the image, and identify a set of facial features within the portion of the face. The systems and methods generate a descriptor function representing the set of facial features, fit objective functions of the descriptor function, identify an identification probability for each facial feature, and assign an identification to each facial feature.

    Systems and methods for content tagging

    Publication Number: US10157333B1

    Publication Date: 2018-12-18

    Application Number: US15247697

    Filing Date: 2016-08-25

    Applicant: Snap Inc.

    Abstract: Systems, methods, devices, media, and computer-readable instructions are described for local image tagging in a resource-constrained environment. One embodiment involves processing image data using a deep convolutional neural network (DCNN) comprising at least a first subgraph and a second subgraph, the first subgraph comprising at least a first layer and a second layer; processing the image data using at least the first layer of the first subgraph to generate first intermediate output data; processing, by the mobile device, the first intermediate output data using at least the second layer of the first subgraph to generate first subgraph output data; and, in response to a determination that each layer reliant on the first intermediate output data has completed processing, deleting the first intermediate output data from the mobile device. Additional embodiments involve convolving entire pixel resolutions of the image data against kernels in different layers of the DCNN.
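    A compact sketch of the memory-management idea in this abstract: run the two layers of the first subgraph, then release the first intermediate output once every layer that relies on it has finished. The layer callables are placeholders; this is not the patented DCNN.

```python
# first_layer and second_layer are placeholder callables (e.g. convolution steps).
def run_first_subgraph(image_data, first_layer, second_layer):
    first_intermediate = first_layer(image_data)        # first intermediate output data
    subgraph_output = second_layer(first_intermediate)  # first subgraph output data
    # Every layer reliant on the first intermediate output has completed processing,
    # so drop the reference and let the constrained device reclaim that memory.
    del first_intermediate
    return subgraph_output
```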
