Eye texture inpainting
    5.
    Granted Patent

    Publication Number: US11074675B2

    Publication Date: 2021-07-27

    Application Number: US16051083

    Application Date: 2018-07-31

    Applicant: Snap Inc.

    Abstract: Systems, devices, media, and methods are presented for generating texture models for objects within a video stream. The systems and methods access a set of images as the set of images are being captured at a computing device. The systems and methods determine, within a portion of the set of images, an area of interest containing an eye and extract an iris area from the area of interest. The systems and methods segment a sclera area within the area of interest and generate a texture for the eye based on the iris area and the sclera area.
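
    The abstract outlines a pipeline of locating the eye region, extracting the iris, segmenting the sclera, and composing a texture from those regions. Below is a minimal illustrative sketch of such a pipeline; the Hough-circle iris detector, the HSV sclera heuristic, and all function names and thresholds are assumptions, since the abstract does not specify an implementation.

```python
# Illustrative sketch only: the patent abstract gives high-level steps, not this code.
import cv2
import numpy as np

def extract_iris(eye_roi: np.ndarray) -> np.ndarray:
    """Return an iris mask found via Hough circle detection (assumed approach)."""
    gray = cv2.medianBlur(cv2.cvtColor(eye_roi, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray.shape[0], param1=100, param2=30,
                               minRadius=gray.shape[0] // 8,
                               maxRadius=gray.shape[0] // 2)
    mask = np.zeros(gray.shape, dtype=np.uint8)
    if circles is not None:
        x, y, r = (int(v) for v in np.round(circles[0, 0]))
        cv2.circle(mask, (x, y), r, 255, thickness=-1)
    return mask

def segment_sclera(eye_roi: np.ndarray, iris_mask: np.ndarray) -> np.ndarray:
    """Rough sclera mask: bright, low-saturation pixels outside the iris (assumed heuristic)."""
    hsv = cv2.cvtColor(eye_roi, cv2.COLOR_BGR2HSV)
    bright_low_sat = cv2.inRange(hsv, np.array((0, 0, 120), np.uint8),
                                 np.array((180, 60, 255), np.uint8))
    return cv2.bitwise_and(bright_low_sat, cv2.bitwise_not(iris_mask))

def build_eye_texture(eye_roi: np.ndarray) -> np.ndarray:
    """Compose an eye texture image from the iris and sclera regions."""
    iris_mask = extract_iris(eye_roi)
    sclera_mask = segment_sclera(eye_roi, iris_mask)
    texture = np.zeros_like(eye_roi)
    texture[iris_mask > 0] = eye_roi[iris_mask > 0]
    texture[sclera_mask > 0] = eye_roi[sclera_mask > 0]
    return texture
```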

    AUGMENTED EXPRESSION SYSTEM
    6.
    Patent Application

    Publication Number: US20190325631A1

    Publication Date: 2019-10-24

    Application Number: US16387092

    Application Date: 2019-04-17

    Applicant: Snap Inc.

    Abstract: The present invention relates to improvements to systems and methods for determining a current location of a client device, and for identifying and selecting appropriate geo-fences based on the current location of the client device. An improved geo-fence selection system performs operations that include associating media content with a geo-fence that encompasses a portion of a geographic region, sampling location data from a client device, defining a boundary based on the sampled location data from the client device, detecting an overlap between the boundary and the geo-fence, retrieving the media content associated with the geo-fence, and loading the media content at a memory location of the client device, in response to detecting the overlap.
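
    The abstract describes deriving a boundary from sampled device locations, detecting overlap between that boundary and a geo-fence, and preloading the associated media when they overlap. A minimal sketch of that overlap check follows; the circular geo-fence model, the haversine distance, and all class and function names are illustrative assumptions rather than the patented implementation.

```python
# Illustrative sketch only: data model and overlap test are assumptions.
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

@dataclass
class GeoFence:
    lat: float
    lon: float
    radius_m: float
    media_id: str  # media content associated with this geo-fence

def device_boundary(samples):
    """Define a boundary from sampled locations: centroid plus maximum spread."""
    lat = sum(s[0] for s in samples) / len(samples)
    lon = sum(s[1] for s in samples) / len(samples)
    spread_m = max(haversine_m(lat, lon, s_lat, s_lon) for s_lat, s_lon in samples)
    return lat, lon, spread_m

def media_to_preload(samples, fences):
    """Return media ids whose geo-fence overlaps the device boundary."""
    b_lat, b_lon, b_radius = device_boundary(samples)
    return [f.media_id for f in fences
            if haversine_m(b_lat, b_lon, f.lat, f.lon) <= b_radius + f.radius_m]
```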

    Avatar style transformation using neural networks
    7.
    Granted Patent

    Publication Number: US12182921B2

    Publication Date: 2024-12-31

    Application Number: US17724235

    Application Date: 2022-04-19

    Applicant: Snap Inc.

    Abstract: Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing at least one program and a method for transforming a motion style of an avatar from a first style to a second style. The program and method include: retrieving, by a processor from a storage device, an avatar depicting motion in a first style; receiving user input selecting a second style; obtaining, based on the user input, a trained machine learning model that performs a non-linear transformation of motion from the first style to the second style; and applying the obtained trained machine learning model to the retrieved avatar to transform the avatar from depicting motion in the first style to depicting motion in the second style.
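
    The flow in the abstract is: retrieve an avatar whose motion is in a first style, take a user-selected second style, look up a trained model for that style pair, and apply it to transform the motion. Below is a minimal sketch of that lookup-and-apply flow; the flattened pose representation, the small MLP, and the model registry are assumptions, as the abstract does not disclose a concrete network.

```python
# Illustrative sketch only: network architecture and registry are assumptions.
import torch
import torch.nn as nn

class MotionStyleTransfer(nn.Module):
    """Non-linear mapping from a source-style pose sequence to a target style."""
    def __init__(self, pose_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (frames, pose_dim), e.g. flattened joint rotations per frame
        return self.net(motion)

# Hypothetical registry mapping (source_style, target_style) to a trained model.
MODEL_REGISTRY: dict[tuple[str, str], MotionStyleTransfer] = {}

def transform_avatar_motion(motion: torch.Tensor,
                            source_style: str,
                            target_style: str) -> torch.Tensor:
    """Obtain the trained model for the selected style pair and apply it."""
    model = MODEL_REGISTRY[(source_style, target_style)]
    model.eval()
    with torch.no_grad():
        return model(motion)
```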
