-
Publication No.: US20240062008A1
Publication Date: 2024-02-22
Application No.: US17820437
Filing Date: 2022-08-17
Applicant: Snap Inc.
Inventor: Arnab Ghosh , Jian Ren , Pavel Savchenkov , Sergey Tulyakov
IPC: G06F40/289 , G06T11/60 , H04L51/10 , G06F3/04842 , G06F16/583
CPC classification number: G06F40/289 , G06T11/60 , H04L51/10 , G06F3/04842 , G06F16/5846 , G06T2200/24
Abstract: A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text, which is processed with a text-to-image model to generate an image based on the model input text. The generated image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
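The claimed flow (conversation text → model input text → text-to-image model → display → user-triggered send) can be sketched as a minimal pipeline. This is an illustrative stub, not the patented implementation: `Image`, `build_model_input`, `text_to_image`, and `handle_message` are hypothetical names, and the text-to-image model is replaced by a placeholder.

```python
from dataclasses import dataclass

@dataclass
class Image:
    """Stand-in for a generated bitmap; a real system would hold pixel data."""
    prompt: str
    width: int = 512
    height: int = 512

def build_model_input(conversation_text: str) -> str:
    # Derive model input text from the raw conversation text,
    # here simply by trimming whitespace and normalizing case.
    return conversation_text.strip().lower()

def text_to_image(model_input: str) -> Image:
    # Placeholder for a text-to-image model (e.g. a diffusion model).
    return Image(prompt=model_input)

def handle_message(conversation_text: str, send) -> Image:
    """Generate an image from chat text and transmit it via `send`."""
    image = text_to_image(build_model_input(conversation_text))
    # In the claimed method, transmission happens only after explicit
    # user input selecting the displayed image.
    send(image)
    return image

outbox = []
img = handle_message("  Draw a Cat on the Moon  ", outbox.append)
```

In the patent, the send step is gated on user input on the portable device; the callback here stands in for that confirmation-and-transmit path.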
-
Publication No.: US11741940B2
Publication Date: 2023-08-29
Application No.: US17355834
Filing Date: 2021-06-23
Applicant: Snap Inc.
Inventor: Pavel Savchenkov , Maxim Lukin , Aleksandr Mashrabov
CPC classification number: G10L13/00 , G06T13/40 , G06V10/764 , G06V10/82 , G06V40/171 , G10L13/08
Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; generating, based on the sequence of sets of acoustic features, a sequence of sets of mouth key points; generating, based on the sequence of sets of mouth key points, a sequence of sets of facial key points; generating, by the computing device and based on the sequence of sets of the facial key points and the target image, a sequence of frames; and generating, based on the sequence of frames, an output video. Each of the frames includes the target face modified based on at least one set of mouth key points of the sequence of sets of mouth key points.
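The abstract describes a chain of staged generators: text → acoustic feature sets → mouth key points → facial key points → frames. A toy sketch of that staging, with every model replaced by a trivial stub (all function names and the per-character featurization are assumptions for illustration):

```python
def text_to_acoustic_features(text: str) -> list[list[float]]:
    # One acoustic feature set per character, standing in for a TTS front end.
    return [[float(ord(c))] for c in text]

def acoustic_to_mouth_keypoints(features: list[list[float]]) -> list[list[tuple[float, float]]]:
    # One set of mouth key points per acoustic feature set (stubbed mapping).
    return [[(f[0] % 10, f[0] % 7)] for f in features]

def mouth_to_facial_keypoints(mouth_sets):
    # Extend each mouth set to a full facial key-point set (stubbed).
    return [m + [(0.0, 0.0)] for m in mouth_sets]

def render_frames(facial_sets, target_face="target"):
    # Each frame is the target face modified by one facial key-point set.
    return [(target_face, kp) for kp in facial_sets]

frames = render_frames(mouth_to_facial_keypoints(
    acoustic_to_mouth_keypoints(text_to_acoustic_features("hi"))))
```

The point of the staging is that each set in one sequence drives exactly one set in the next, so the frame count follows the acoustic timeline.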
-
Publication No.: US20220284654A1
Publication Date: 2022-09-08
Application No.: US17751796
Filing Date: 2022-05-24
Applicant: Snap Inc.
Inventor: Eugene Krokhalev , Aleksandr Mashrabov , Pavel Savchenkov
Abstract: Disclosed are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes at least one input frame and the at least one input frame includes a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face in the target image, where the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression of the first face, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
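Applying a 2D deformation to a target image amounts to warping pixels by a per-pixel displacement field. A minimal sketch with nearest-neighbour sampling (the function name and the choice of nearest-neighbour interpolation are assumptions; how the patent derives the field from the input frame is not shown here):

```python
import numpy as np

def apply_2d_deformation(image: np.ndarray, dx: np.ndarray, dy: np.ndarray) -> np.ndarray:
    """Warp `image` by per-pixel displacements (dx, dy), nearest-neighbour sampled."""
    h, w = image.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Each output pixel pulls from the displaced source location, clipped to bounds.
    src_y = np.clip(np.round(ys + dy).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + dx).astype(int), 0, w - 1)
    return image[src_y, src_x]

img = np.arange(16).reshape(4, 4)
# A uniform dx of +1 samples each pixel from its right neighbour,
# shifting the content one pixel to the left.
shift_left = apply_2d_deformation(img, dx=np.ones((4, 4)), dy=np.zeros((4, 4)))
```

Working purely in 2D, as claimed, avoids fitting a 3D face model per frame, which is what makes this style of portrait animation cheap enough for real-time use.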
-
Publication No.: US20220172438A1
Publication Date: 2022-06-02
Application No.: US17107410
Filing Date: 2020-11-30
Applicant: Snap Inc.
Inventor: Pavel Savchenkov , Yurii Volkov , Jeremy Baker Voss
Abstract: In some embodiments, users' experience of engaging with augmented reality technology is enhanced by providing a process, referred to as face animation synthesis, that replaces an actor's face in the frames of a video with a user's face from the user's portrait image. The resulting face in the frames of the video retains the facial expressions, as well as the color and lighting, of the actor's face but, at the same time, has the likeness of the user's face. An example face animation synthesis experience can be made available to users of a messaging system by providing a face animation synthesis augmented reality component.
-
Publication No.: US20240296614A1
Publication Date: 2024-09-05
Application No.: US18641472
Filing Date: 2024-04-22
Applicant: Snap Inc.
Inventor: Eugene Krokhalev , Aleksandr Mashrabov , Pavel Savchenkov
CPC classification number: G06T13/80 , G06T7/174 , G06V40/167 , G06T2207/20084
Abstract: Provided are systems and methods for portrait animation. An example method includes receiving, by a computing device, scenario data including information concerning movements of a first head, receiving, by the computing device, a target image including a second head and a background, determining, by the computing device and based on the target image and the information concerning the movements of the first head, two-dimensional (2D) deformations of the second head in the target image, applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video, the at least one output frame including the second head displaced according to the movements of the first head, and filling, by the computing device and using a background prediction neural network, a portion of the background in gaps between the displaced second head and the background.
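When the second head is displaced, previously occluded background pixels become gaps that must be filled. The patent uses a background prediction neural network for this; the sketch below substitutes a trivial mean-of-visible-background fill so it stays runnable (function name and fill strategy are illustrative assumptions, not the claimed network):

```python
import numpy as np

def fill_background_gaps(frame: np.ndarray, gap_mask: np.ndarray) -> np.ndarray:
    """Fill gap pixels left between the displaced head and the background.

    A trained background-prediction network would hallucinate plausible
    content here; this stub paints gaps with the mean of the visible
    background instead.
    """
    out = frame.astype(float).copy()
    predicted = out[~gap_mask].mean()
    out[gap_mask] = predicted
    return out

frame = np.array([[10.0, 10.0],
                  [10.0,  0.0]])                  # bottom-right pixel is a gap
mask = np.array([[False, False],
                 [False, True]])
filled = fill_background_gaps(frame, mask)
```

The mask marks exactly the pixels exposed by the head's motion; everything outside it is left untouched, matching the claim that only a portion of the background is filled.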
-
Publication No.: US11861936B2
Publication Date: 2024-01-02
Application No.: US17869794
Filing Date: 2022-07-21
Applicant: Snap Inc.
Inventor: Pavel Savchenkov , Dmitry Matov , Aleksandr Mashrabov , Alexey Pchelnikov
IPC: G06V40/16 , G06N3/04 , G06Q30/0251 , G06T11/00
CPC classification number: G06V40/161 , G06N3/04 , G06Q30/0254 , G06Q30/0269 , G06T11/001 , G06V40/174 , G06V40/178
Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
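The key idea here is splitting the source-face parameter vector into a portion estimated from the visible region and a portion predicted for the occluded rest, then synthesizing with the target's expression. A shape-level sketch, with every estimator stubbed (all names, the mean-based estimate, and the prior-blended prediction are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_visible_params(visible_face: np.ndarray) -> np.ndarray:
    # First portion of source-face parameters, fit to the visible region (stub).
    return visible_face.mean(axis=0)

def predict_hidden_params(visible_params: np.ndarray, prior_mean: np.ndarray) -> np.ndarray:
    # Second portion, predicted from the visible portion plus a prior (stub).
    return 0.5 * visible_params + 0.5 * prior_mean

def synthesize(identity_params: np.ndarray, expression_params: np.ndarray) -> np.ndarray:
    # Parametric model input: identity from the source, expression from the target.
    return np.concatenate([identity_params, expression_params])

visible = rng.normal(size=(5, 3))           # features from the visible region
first = estimate_visible_params(visible)    # shape (3,)
second = predict_hidden_params(first, np.zeros(3))
source_params = np.concatenate([first, second])
target_expression = np.ones(4)              # expression read off the target video
output_face = synthesize(source_params, target_expression)
```

The separation matters because only the first portion is directly observable; the second must be inferred, which is what lets reenactment work from a partially visible source face.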
-
Publication No.: US11568589B2
Publication Date: 2023-01-31
Application No.: US17751796
Filing Date: 2022-05-24
Applicant: Snap Inc.
Inventor: Eugene Krokhalev , Aleksandr Mashrabov , Pavel Savchenkov
Abstract: Disclosed are systems and methods for portrait animation. An example method includes receiving, by a computing device, a scenario video, where the scenario video includes at least one input frame and the at least one input frame includes a first face, receiving, by the computing device, a target image, where the target image includes a second face, determining, by the computing device and based on the at least one input frame and the target image, two-dimensional (2D) deformations of the second face in the target image, where the 2D deformations, when applied to the second face, modify the second face to imitate at least a facial expression of the first face, and applying, by the computing device, the 2D deformations to the target image to obtain at least one output frame of an output video.
-
Publication No.: US20220358784A1
Publication Date: 2022-11-10
Application No.: US17869794
Filing Date: 2022-07-21
Applicant: Snap Inc.
Inventor: Pavel Savchenkov , Dmitry Matov , Aleksandr Mashrabov , Alexey Pchelnikov
Abstract: Provided are systems and methods for face reenactment. An example method includes receiving visual data including a visible portion of a source face, determining, based on the visible portion of the source face, a first portion of source face parameters associated with a parametric face model, where the first portion corresponds to the visible portion, predicting, based partially on the visible portion of the source face, a second portion of the source face parameters, where the second portion corresponds to the rest of the source face, receiving a target video that includes a target face, determining, based on the target video, target face parameters associated with the parametric face model and corresponding to the target face, and synthesizing, using the parametric face model, based on the source face parameters and the target face parameters, an output face that includes the source face imitating a facial expression of the target face.
-
Publication No.: US11114086B2
Publication Date: 2021-09-07
Application No.: US16509370
Filing Date: 2019-07-11
Applicant: SNAP INC.
Inventor: Pavel Savchenkov , Maxim Lukin , Aleksandr Mashrabov
Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.
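Because the frames and the synthesized audio are both derived from the same acoustic feature sequence, they must share one timeline. The arithmetic can be sketched as below; the 80 feature-sets-per-second rate and the function name are illustrative assumptions (the patent does not specify a rate):

```python
def frames_for_audio(num_feature_sets: int, features_per_second: int, fps: int) -> int:
    """Number of video frames needed to cover audio of the given length.

    The acoustic feature sequence fixes the audio duration; the video must
    span the same duration at its own frame rate so that mouth motion and
    synthesized speech stay in sync.
    """
    duration_seconds = num_feature_sets / features_per_second
    return round(duration_seconds * fps)

# e.g. 160 feature sets at an assumed 80 sets/s is 2 s of audio,
# which at 30 fps requires 60 frames.
n_frames = frames_for_audio(160, 80, 30)
```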
-
Publication No.: US20200234690A1
Publication Date: 2020-07-23
Application No.: US16509370
Filing Date: 2019-07-11
Applicant: SNAP INC.
Inventor: Pavel Savchenkov , Maxim Lukin , Aleksandr Mashrabov
Abstract: Provided are systems and methods for text and audio-based real-time face reenactment. An example method includes receiving an input text and a target image, the target image including a target face; generating, based on the input text, a sequence of sets of acoustic features representing the input text; determining, based on the sequence of sets of acoustic features, a sequence of sets of scenario data indicating modifications of the target face for pronouncing the input text; generating, based on the sequence of sets of scenario data, a sequence of frames, wherein each of the frames includes the target face modified based on at least one of the sets of scenario data; generating, based on the sequence of frames, an output video; and synthesizing, based on the sequence of sets of acoustic features, audio data and adding the audio data to the output video.