-
Publication number: US11367205B1
Publication date: 2022-06-21
Application number: US16721483
Filing date: 2019-12-19
Applicant: Snap Inc.
Inventor: Shenlong Wang , Linjie Luo , Ning Zhang , Jia Li
Abstract: Dense feature scale detection can be implemented using multiple convolutional neural networks trained on scale data to more accurately and efficiently match pixels between images. An input image can be used to generate multiple scaled images. The multiple scaled images are input into a feature net, which outputs feature data for the multiple scaled images. An attention net is used to generate an attention map from the input image. The attention map assigns emphasis as a soft distribution to different scales based on texture analysis. The feature data and the attention data can be combined through a multiplication process and then summed to generate dense features for comparison.
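The multiply-and-sum combination described in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the array shapes, the softmax normalization of the attention map, and the function name are all assumptions.

```python
import numpy as np

def dense_features(feature_maps, attention):
    """Combine per-scale feature maps into dense features by weighting
    each scale with a soft attention distribution and summing.

    feature_maps: shape (S, H, W, C) -- one feature map per scale
    attention:    shape (S, H, W)    -- per-pixel scores over scales
    """
    # Turn the attention scores into a soft distribution over scales
    # at each pixel (softmax along the scale axis).
    e = np.exp(attention - attention.max(axis=0, keepdims=True))
    weights = e / e.sum(axis=0, keepdims=True)
    # Multiply each scale's features by its weight, then sum over scales.
    return (feature_maps * weights[..., None]).sum(axis=0)

# Toy example: 3 scales of 4x4 feature maps with 8 channels.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4, 4, 8))
attn = rng.standard_normal((3, 4, 4))
dense = dense_features(feats, attn)
```

With uniform (all-zero) attention scores the soft distribution degenerates to equal weights, so the result is simply the mean over scales.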
-
Publication number: US11315259B2
Publication date: 2022-04-26
Application number: US16949594
Filing date: 2020-11-05
Applicant: Snap Inc.
Inventor: Yuncheng Li , Linjie Luo , Xuecheng Nie , Ning Zhang
Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames; the joint data is generated by a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames. The joint locations are tracked using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
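The composite-network idea of running a deep network on one portion of the frames and a shallow network on another can be sketched as a routing step. The periodic-keyframe partition below is an illustrative assumption; the abstract does not specify how the two portions are chosen.

```python
def route_frames(frames, keyframe_interval=5):
    """Split a frame sequence into the portion handled by the deep
    network and the portion handled by the shallow network.
    Returns (deep_indices, shallow_indices)."""
    deep, shallow = [], []
    for i, _ in enumerate(frames):
        # Assumption: the expensive deep network runs on periodic
        # keyframes; the cheap shallow network handles the rest.
        (deep if i % keyframe_interval == 0 else shallow).append(i)
    return deep, shallow

# 12 frames with a deep keyframe every 4th frame.
d, s = route_frames(range(12), keyframe_interval=4)
```

Any other partition (e.g. scene-change detection) would plug into the same interface.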
-
Publication number: US11308706B2
Publication date: 2022-04-19
Application number: US16927273
Filing date: 2020-07-13
Applicant: Snap Inc.
Inventor: Jia Li , Linjie Luo , Rahul Bhupendra Sheth , Ning Xu , Jianchao Yang
Abstract: Systems and methods for local augmented reality (AR) tracking of an AR object are disclosed. In one example embodiment, a device captures a series of video image frames. A user input is received at the device associating a first portion of a first image of the video image frames with an AR sticker object and a target. A first target template is generated to track the target across frames of the video image frames. In some embodiments, global tracking is used based on a determination that the target is outside a boundary area; the global tracking comprises using a global tracking template for tracking movement in the video image frames captured following that determination. When the global tracking determines that the target is within the boundary area, local tracking is resumed along with presentation of the AR sticker object on an output display of the device.
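The switch between local and global tracking hinges on whether the target lies inside the boundary area. A minimal sketch of that decision, assuming axis-aligned `(x0, y0, x1, y1)` rectangles for both the target box and the boundary (illustrative names, not the patented representation):

```python
def choose_tracking_mode(target_box, boundary):
    """Return "local" if the target box lies entirely inside the
    boundary rectangle, else "global". Both arguments are
    (x0, y0, x1, y1) axis-aligned rectangles."""
    x0, y0, x1, y1 = target_box
    bx0, by0, bx1, by1 = boundary
    inside = x0 >= bx0 and y0 >= by0 and x1 <= bx1 and y1 <= by1
    # Local tracking (and AR sticker rendering) resumes once the
    # target re-enters the boundary; otherwise global tracking runs.
    return "local" if inside else "global"

mode = choose_tracking_mode((10, 10, 30, 30), (0, 0, 100, 100))
```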
-
Publication number: US20220036647A1
Publication date: 2022-02-03
Application number: US17499659
Filing date: 2021-10-12
Applicant: Snap Inc.
Inventor: Soumyadip Sengupta , Linjie Luo , Chen Cao , Menglei Chai
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
-
Publication number: US20210319540A1
Publication date: 2021-10-14
Application number: US17355687
Filing date: 2021-06-23
Applicant: Snap Inc.
Inventor: Chen Cao , Wen Zhang , Menglei Chai , Linjie Luo
Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
-
Publication number: US20230343033A1
Publication date: 2023-10-26
Application number: US18216327
Filing date: 2023-06-29
Applicant: Snap Inc.
Inventor: Chen Cao , Menglei Chai , Linjie Luo , Soumyadip Sengupta
CPC classification number: G06T17/30 , G06V20/647 , G06V40/165 , G06V40/171
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
-
Publication number: US11769307B2
Publication date: 2023-09-26
Application number: US17728553
Filing date: 2022-04-25
Applicant: Snap Inc.
Inventor: Nathan Jurgenson , Linjie Luo , Jonathan M. Rodriguez, II , Rahul Bhupendra Sheth , Jia Li , Xutao Lv
IPC: G06T19/00 , G06T7/73 , G06V20/10 , G06V20/20 , G06F3/01 , G06F3/04815 , G06T7/20 , G06T13/80 , G06T19/20 , G06T7/246
CPC classification number: G06T19/006 , G06F3/012 , G06F3/04815 , G06T7/20 , G06T7/246 , G06T7/73 , G06T13/80 , G06T19/20 , G06V20/10 , G06V20/20 , G06T2200/04 , G06T2207/30244 , G06T2219/2004 , G06V2201/07
Abstract: Systems and methods for image based location estimation are described. In one example embodiment, a first positioning system is used to generate a first position estimate. A set of structure façade data describing one or more structure façades associated with the first position estimate is then accessed. A first image of an environment is captured, and a portion of the image is matched to part of the structure façade data. A second position is then estimated based on a comparison of the structure façade data with the portion of the image matched to the structure façade data.
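The refinement step of matching a captured image portion against stored façade data can be sketched as picking the best-matching façade template. Normalized correlation as the matching score, the `(position, template)` pair representation, and the assumption that the façade set was already retrieved for the first position estimate are all illustrative choices.

```python
import numpy as np

def refine_position(facades, image_patch):
    """Return the known position of the facade whose stored template
    best matches the captured patch.

    facades: list of (position, template) pairs associated with the
             first position estimate
    image_patch: 2D array cropped from the captured environment image
    """
    def score(template):
        # Normalized cross-correlation between patch and template.
        a = (image_patch - image_patch.mean()).ravel()
        b = (template - template.mean()).ravel()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0

    best_pos, _ = max(facades, key=lambda f: score(f[1]))
    return best_pos

# Toy example: two candidate facades near the first estimate.
patch = np.array([[0.0, 1.0], [2.0, 3.0]])
facades = [((40.7, -74.0), patch[::-1, ::-1].copy()),  # reversed: poor match
           ((40.8, -74.1), patch.copy())]              # identical: best match
refined = refine_position(facades, patch)
```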
-
Publication number: US20230010480A1
Publication date: 2023-01-12
Application number: US17660462
Filing date: 2022-04-25
Applicant: Snap, Inc.
Inventor: Yuncheng Li , Linjie Luo , Xuecheng Nie , Ning Zhang
Abstract: Systems, devices, media, and methods are presented for a human pose tracking framework. The human pose tracking framework may identify a message with video frames and generate, using a composite convolutional neural network, joint data representing joint locations of a human depicted in the video frames; the joint data is generated by a deep convolutional neural network operating on one portion of the video frames and a shallow convolutional neural network operating on another portion of the video frames. The joint locations are tracked using a one-shot learner neural network that is trained to track the joint locations based on a concatenation of feature maps and a convolutional pose machine. The human pose tracking framework may store the joint locations and cause presentation of a rendition of the joint locations on a user interface of a client device.
-
Publication number: US11164376B1
Publication date: 2021-11-02
Application number: US16116590
Filing date: 2018-08-29
Applicant: Snap Inc.
Inventor: Soumyadip Sengupta , Linjie Luo , Chen Cao , Menglei Chai
Abstract: A shape generation system can generate a three-dimensional (3D) model of an object from a two-dimensional (2D) image of the object by projecting vectors onto light cones created from the 2D image. The projected vectors can be used to more accurately create the 3D model of the object based on image element (e.g., pixel) values of the image.
-
Publication number: US11087513B1
Publication date: 2021-08-10
Application number: US16186164
Filing date: 2018-11-09
Applicant: Snap Inc.
Inventor: Kun Duan , Nan Hu , Linjie Luo , Chongyang Ma , Guohui Wang
Abstract: Systems and methods are provided for receiving an image from a camera of a mobile device, analyzing the image to determine a subject of the image, segmenting the subject of the image to generate a mask indicating an area of the image comprising the subject of the image, applying a bokeh effect to a background region of the image to generate a processed background region, generating an output image comprising the subject of the image and the processed background region, and causing the generated output image to display on a display of the mobile device.
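The composite of a sharp subject over a blurred background can be sketched as a mask-weighted blend. The mask convention (1 = subject) and the pluggable `blur` callable are illustrative assumptions; the abstract does not specify the blur used for the bokeh effect.

```python
import numpy as np

def apply_bokeh(image, mask, blur):
    """Keep the masked subject sharp and replace the background with
    a blurred copy of the image.

    image: HxWx3 float array
    mask:  HxW array in [0, 1], 1 where the subject is
    blur:  callable that blurs the full image (e.g. a Gaussian filter)
    """
    background = blur(image)          # blurred copy gives the bokeh look
    m = mask[..., None]               # broadcast mask over the channel axis
    return m * image + (1.0 - m) * background

# Toy example: 2x2 image, subject in the top-left pixel, and a "blur"
# that just replaces everything with the image mean.
img = np.arange(12, dtype=float).reshape(2, 2, 3)
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
out = apply_bokeh(img, mask, blur=lambda x: np.full_like(x, x.mean()))
```

In practice a real blur (e.g. `scipy.ndimage.gaussian_filter`) would replace the stand-in lambda.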
-