METHODS FOR A RASTERIZATION-BASED DIFFERENTIABLE RENDERER FOR TRANSLUCENT OBJECTS

    Publication No.: US20240096018A1

    Publication Date: 2024-03-21

    Application No.: US17932640

    Application Date: 2022-09-15

    Applicant: Lemon Inc.

    CPC classification number: G06T17/20 G06T2210/62

    Abstract: Systems and methods for rendering a translucent object are provided. In one aspect, the system includes a processor coupled to a storage medium that stores instructions, which, upon execution by the processor, cause the processor to receive at least one mesh representing at least one translucent object. For each pixel to be rendered, the processor performs a rasterization-based differentiable rendering of the pixel to be rendered using the at least one mesh and determines a plurality of values for the pixel to be rendered based on the rasterization-based differentiable rendering. The rasterization-based differentiable rendering can include performing a probabilistic rasterization process along with aggregation techniques to compute the plurality of values for the pixel to be rendered. The plurality of values includes a set of color channel values and an opacity channel value. Once values are determined for all pixels, an image can be rendered.
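
    The probabilistic rasterization step replaces the hard inside/outside test of conventional rasterization with a smooth coverage probability, so gradients can flow back to the mesh vertices. Below is a minimal sketch of that idea in PyTorch; the function name, the signed squared-distance input, and the sigmoid parameterization are illustrative assumptions, not the claimed method.

```python
import torch

def probabilistic_coverage(signed_sq_dist: torch.Tensor, sigma: float = 1e-4) -> torch.Tensor:
    """Soft, differentiable coverage of a pixel by projected mesh faces.

    signed_sq_dist: squared screen-space distance from the pixel to each face
                    boundary, negated when the pixel lies inside the face.
    sigma:          temperature controlling how sharply coverage falls off at edges.
    """
    # The sigmoid turns the hard inside/outside test into a probability in (0, 1),
    # so pixels near an edge still receive gradients w.r.t. vertex positions.
    return torch.sigmoid(-signed_sq_dist / sigma)
```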

    PORTRAIT STYLIZATION FRAMEWORK TO CONTROL THE SIMILARITY BETWEEN STYLIZED PORTRAITS AND ORIGINAL PHOTO

    Publication No.: US20230146676A1

    Publication Date: 2023-05-11

    Application No.: US17519711

    Application Date: 2021-11-05

    Applicant: Lemon Inc.

    CPC classification number: G06T9/002 G06T11/60 G06N3/08

    Abstract: Systems and methods directed to controlling the similarity between stylized portraits and an original photo are described. In examples, an input image is received and encoded using a variational autoencoder to generate a latent vector. The latent vector may be blended with latent vectors that best represent a face in the original user portrait image. The resulting blended latent vector may be provided to a generative adversarial network (GAN) generator to generate a controlled stylized image. In examples, one or more layers of the stylized GAN generator may be swapped with one or more layers of the original GAN generator. Accordingly, a user can interactively determine how much stylization vs. personalization should be included in a resulting stylized portrait.
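
    A compact way to read the pipeline: encode the photo to a latent vector, blend it toward the latents recovered for the user's face, then decode with the GAN generator. The sketch below assumes linear interpolation for the blend and treats the encoder and generator as opaque callables; all names are placeholders rather than the patented components.

```python
import torch

def stylize(photo: torch.Tensor, encoder, gan_generator,
            w_face: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Sketch of the encode -> blend -> generate path from the abstract.

    encoder:       variational autoencoder mapping an image to a latent vector
    gan_generator: GAN generator mapping a latent vector to a stylized image
    w_face:        latent vector that best represents the face in the original photo
    alpha:         similarity control; larger values keep more of the original face
    """
    w_input = encoder(photo)                             # encode the input image
    w_blend = (1.0 - alpha) * w_input + alpha * w_face   # blend toward the user's face
    return gan_generator(w_blend)                        # controlled stylized image
```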

    Methods for a rasterization-based differentiable renderer for translucent objects

    Publication No.: US12148095B2

    Publication Date: 2024-11-19

    Application No.: US17932640

    Application Date: 2022-09-15

    Applicant: Lemon Inc.

    Abstract: Systems and methods for rendering a translucent object are provided. In one aspect, the system includes a processor coupled to a storage medium that stores instructions, which, upon execution by the processor, cause the processor to receive at least one mesh representing at least one translucent object. For each pixel to be rendered, the processor performs a rasterization-based differentiable rendering of the pixel to be rendered using the at least one mesh and determines a plurality of values for the pixel to be rendered based on the rasterization-based differentiable rendering. The rasterization-based differentiable rendering can include performing a probabilistic rasterization process along with aggregation techniques to compute the plurality of values for the pixel to be rendered. The plurality of values includes a set of color channel values and an opacity channel value. Once values are determined for all pixels, an image can be rendered.
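
    This is the granted counterpart of the application listed above. To complement the rasterization sketch shown there, the following illustrates the aggregation step that combines per-face contributions into color channels and an opacity channel for one pixel; the probabilistic union for opacity and the depth-weighted softmax for color are assumptions in the spirit of soft rasterization, not the claimed formulas.

```python
import torch

def aggregate_pixel(coverage: torch.Tensor, face_rgb: torch.Tensor,
                    face_alpha: torch.Tensor, face_depth: torch.Tensor,
                    gamma: float = 1e-2):
    """Combine per-face contributions into RGB + opacity for one pixel.

    coverage:   [F] soft coverage probabilities from the rasterization step
    face_rgb:   [F, 3] shaded color of each face at this pixel
    face_alpha: [F] per-face translucency in [0, 1]
    face_depth: [F] view-space depth (smaller means closer to the camera)
    """
    # Opacity: probabilistic union of the translucent faces covering the pixel.
    opacity = 1.0 - torch.prod(1.0 - coverage * face_alpha)
    # Color: softmax weights favor well-covered, nearer faces while staying differentiable.
    logits = torch.log(coverage.clamp_min(1e-8)) - face_depth / gamma
    weights = torch.softmax(logits, dim=0)
    rgb = (weights.unsqueeze(-1) * face_rgb).sum(dim=0)
    return rgb, opacity
```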

    Neural network architecture for face tracking

    Publication No.: US11803996B2

    Publication Date: 2023-10-31

    Application No.: US17390440

    Application Date: 2021-07-30

    Applicant: Lemon Inc.

    CPC classification number: G06T13/40 G06N3/08 G06V40/162 G06V40/171 G06V40/176

    Abstract: Techniques for face tracking comprise receiving landmark data associated with a plurality of images indicative of at least one facial part. Representative images corresponding to the plurality of images may be generated based on the landmark data. Each representative image may depict a plurality of segments, and each segment may correspond to a region of the at least one facial part. The plurality of images and corresponding representative images may be input into a neural network to train the neural network to predict a feature associated with a subsequently received image comprising a face. An animation associated with a facial expression may be controlled based on output from the trained neural network.
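
    One plausible reading of the "representative images" is a label map in which each facial region outlined by the landmarks becomes its own filled segment. The sketch below draws such a map with PIL; the region set, the integer label encoding, and the image size are assumptions for illustration.

```python
import numpy as np
from PIL import Image, ImageDraw

def representative_image(landmarks_by_region: dict, size=(256, 256)) -> np.ndarray:
    """Build a segment-per-region label map from facial landmarks.

    landmarks_by_region: maps a region name (e.g. "left_eye", "mouth") to an
                         ordered list of (x, y) landmark points outlining it.
    """
    canvas = Image.new("L", size, 0)                           # background segment = 0
    draw = ImageDraw.Draw(canvas)
    for label, points in enumerate(landmarks_by_region.values(), start=1):
        draw.polygon([tuple(p) for p in points], fill=label)   # one segment per facial part
    return np.array(canvas)
```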

    ASYMMETRIC FACIAL EXPRESSION RECOGNITION

    Publication No.: US20230046286A1

    Publication Date: 2023-02-16

    Application No.: US17402344

    Application Date: 2021-08-13

    Applicant: Lemon Inc.

    Abstract: The present disclosure describes techniques for facial expression recognition. A first loss function may be determined based on a first set of feature vectors associated with a first set of images depicting facial expressions and a first set of labels indicative of the facial expressions. A second loss function may be determined based on a second set of feature vectors associated with a second set of images depicting asymmetric facial expressions and a second set of labels indicative of the asymmetric facial expressions. The first loss function and the second loss function may be used to determine a maximum loss function. The maximum loss function may be applied during training of a model. The trained model may be configured to predict at least one asymmetric facial expression in a subsequently received image.
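
    The key training detail is that the two losses are not summed but combined by taking their maximum, so the asymmetric-expression set cannot be drowned out by the easier set. A minimal sketch, assuming cross-entropy as the per-set loss:

```python
import torch
import torch.nn.functional as F

def max_expression_loss(logits_a: torch.Tensor, labels_a: torch.Tensor,
                        logits_b: torch.Tensor, labels_b: torch.Tensor) -> torch.Tensor:
    """Maximum of the two loss functions described in the abstract.

    logits_a / labels_a: predictions and labels for the first image set
    logits_b / labels_b: predictions and labels for the asymmetric-expression set
    """
    loss_a = F.cross_entropy(logits_a, labels_a)
    loss_b = F.cross_entropy(logits_b, labels_b)
    # Optimizing the larger loss keeps the harder set from being ignored during training.
    return torch.maximum(loss_a, loss_b)
```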

    Systems for multi-task joint training of neural networks using multi-label datasets

    Publication No.: US12243292B2

    Publication Date: 2025-03-04

    Application No.: US17929449

    Application Date: 2022-09-02

    Applicant: Lemon Inc.

    Abstract: Systems and methods for multi-task joint training of a neural network including an encoder module and a multi-headed attention mechanism are provided. In one aspect, the system includes a processor configured to receive input data including a first set of labels and a second set of labels. Using the encoder module, features are extracted from the input data. Using a multi-headed attention mechanism, training loss metrics are computed. A first training loss metric is computed using the extracted features and the first set of labels, and a second training loss metric is computed using the extracted features and the second set of labels. A first mask is applied to filter the first training loss metric, and a second mask is applied to filter the second training loss metric. A final training loss metric is computed based on the filtered first and second training loss metrics.
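
    A rough reading of the loss computation: each head gets a per-sample loss against its own label set, a binary mask zeroes out samples that lack labels for that task, and the filtered losses are combined into the final metric. The per-head cross-entropy and the simple sum below are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def masked_multitask_loss(logits_a, labels_a, mask_a,
                          logits_b, labels_b, mask_b) -> torch.Tensor:
    """Joint training loss over two label sets with per-sample masks.

    mask_a / mask_b: float tensors of shape [batch] marking which samples carry
                     labels for each task, so single-task samples contribute no
                     gradient to the other head.
    """
    loss_a = F.cross_entropy(logits_a, labels_a, reduction="none")  # [batch]
    loss_b = F.cross_entropy(logits_b, labels_b, reduction="none")  # [batch]
    # Apply the masks to filter each training loss metric, then average per task.
    loss_a = (loss_a * mask_a).sum() / mask_a.sum().clamp_min(1.0)
    loss_b = (loss_b * mask_b).sum() / mask_b.sum().clamp_min(1.0)
    return loss_a + loss_b                                          # final training loss metric
```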

    Portrait stylization framework to control the similarity between stylized portraits and original photo

    Publication No.: US12217466B2

    Publication Date: 2025-02-04

    Application No.: US17519711

    Application Date: 2021-11-05

    Applicant: Lemon Inc.

    Abstract: Systems and methods directed to controlling the similarity between stylized portraits and an original photo are described. In examples, an input image is received and encoded using a variational autoencoder to generate a latent vector. The latent vector may be blended with latent vectors that best represent a face in the original user portrait image. The resulting blended latent vector may be provided to a generative adversarial network (GAN) generator to generate a controlled stylized image. In examples, one or more layers of the stylized GAN generator may be swapped with one or more layers of the original GAN generator. Accordingly, a user can interactively determine how much stylization vs. personalization should be included in a resulting stylized portrait.
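
    This is the granted counterpart of the application listed above. To complement the latent-blending sketch there, the following illustrates the layer-swapping idea: copy selected layers of the original GAN generator into the stylized one so the user can trade stylization for personalization. Matching parameters by name prefix is an assumption about how the two generators line up.

```python
import copy
import torch.nn as nn

def swap_layers(stylized_gen: nn.Module, original_gen: nn.Module, layer_prefixes):
    """Replace chosen layers of the stylized generator with the original generator's.

    layer_prefixes: names of the layers to take from the original generator,
                    e.g. the coarse (early) synthesis blocks.
    """
    mixed = copy.deepcopy(stylized_gen)
    src_state = original_gen.state_dict()
    dst_state = mixed.state_dict()
    for key in dst_state:
        # Copy every parameter/buffer whose name starts with a chosen layer prefix.
        if any(key.startswith(prefix) for prefix in layer_prefixes):
            dst_state[key] = src_state[key].clone()
    mixed.load_state_dict(dst_state)
    return mixed
```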
