-
Publication Number: US12231806B2
Publication Date: 2025-02-18
Application Number: US18223215
Filing Date: 2023-07-18
Applicant: Snap Inc.
Inventor: Yurii Monastyrshyn, Illia Tulupov
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
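As an illustration of the anchoring step described in the abstract, the hedged Python sketch below links user-drawn points to the nearest face landmarks and re-renders them each frame so the drawing follows the face; `detect_landmarks`, `anchor_stroke`, and `render_stroke` are hypothetical names for illustration, not the published claims.

```python
# Hypothetical sketch: anchoring a user-drawn stroke to face landmarks so it
# follows the face across frames. `detect_landmarks` stands in for whatever
# face tracker the real system uses.
from typing import Callable, Iterable, List, Tuple
import numpy as np

Point = Tuple[float, float]

def anchor_stroke(stroke: List[Point], landmarks: np.ndarray) -> List[Tuple[int, np.ndarray]]:
    """For each stroke point, store (index of nearest landmark, offset from it)."""
    anchored = []
    for p in stroke:
        p = np.asarray(p, dtype=float)
        idx = int(np.argmin(np.linalg.norm(landmarks - p, axis=1)))
        anchored.append((idx, p - landmarks[idx]))
    return anchored

def render_stroke(anchored, landmarks: np.ndarray) -> List[Point]:
    """Re-project the stroke onto the landmark positions found in the current frame."""
    return [tuple(landmarks[idx] + offset) for idx, offset in anchored]

def process_stream(frames: Iterable[np.ndarray], stroke: List[Point],
                   detect_landmarks: Callable[[np.ndarray], np.ndarray]):
    anchored = None
    for frame in frames:
        lms = detect_landmarks(frame)              # (N, 2) array of facial landmarks
        if anchored is None:
            anchored = anchor_stroke(stroke, lms)  # link user input to face positions once
        yield frame, render_stroke(anchored, lms)  # drawing tracks the face in each frame
```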
-
Publication Number: US12093607B2
Publication Date: 2024-09-17
Application Number: US17876842
Filing Date: 2022-07-29
Applicant: Snap Inc.
Inventor: Xin Chen, Yurii Monastyrshyn, Fedir Poliakov, Shubham Vij
CPC classification number: G06F3/167, G06F3/0482, G06N3/044, G06N3/08, G06T11/001, G10L15/08, G10L2015/088, G10L15/16
Abstract: An audio control system can control interactions with an application or device using keywords spoken by a user of the device. The audio control system can use machine learning models (e.g., a neural network model) trained to recognize one or more keywords. Which machine learning model is activated can depend on the active location in the application or device. Responsive to detecting keywords, different actions are performed by the device, such as navigation to a pre-specified area of the application.
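The sketch below shows one plausible way to route audio through a context-specific keyword model and dispatch an action when a keyword is recognized, as the abstract describes; the `KeywordModel` class, keyword lists, and actions are assumptions for illustration only, not Snap's implementation.

```python
# Illustrative sketch only: the active keyword-spotting model depends on the
# current location in the app, and a detected keyword triggers an action.
from typing import Callable, Dict, Optional
import numpy as np

class KeywordModel:
    """Stand-in for a trained keyword-spotting network (e.g. a recurrent model)."""
    def __init__(self, keywords):
        self.keywords = keywords
    def predict(self, audio: np.ndarray) -> Optional[str]:
        return None  # a real model would return a detected keyword or None

class AudioControl:
    def __init__(self):
        # Which model is active depends on where the user is in the application.
        self.models: Dict[str, KeywordModel] = {
            "camera": KeywordModel(["capture", "flip"]),
            "chat": KeywordModel(["send", "cancel"]),
        }
        self.actions: Dict[str, Callable[[], None]] = {
            "capture": lambda: print("taking snap"),
            "flip": lambda: print("switching camera"),
            "send": lambda: print("sending message"),
            "cancel": lambda: print("discarding draft"),
        }

    def on_audio(self, location: str, audio: np.ndarray) -> None:
        model = self.models.get(location)
        if model is None:
            return
        keyword = model.predict(audio)   # run only the model for this screen
        if keyword in self.actions:
            self.actions[keyword]()      # e.g. navigate to a pre-specified area
```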
-
Publication Number: US20220166816A1
Publication Date: 2022-05-26
Application Number: US17538545
Filing Date: 2021-11-30
Applicant: Snap Inc.
Inventor: Artem Gaiduchenko, Artem Yerofieiev, Bohdan Pozharskyi, Gabriel Lupin, Oleksii Kholovchuk, Travis Chen, Yurii Monastyrshyn, Denys Makoviichuk
Abstract: A method for triggering changes to real-time special effects included in a live streaming video starts with a processor transmitting in real-time a video stream captured by a camera via a network. The processor causes a live streaming interface that includes the video stream to be displayed on the plurality of client devices. The processor receives a trigger to apply one of a plurality of special effects to the video stream and determines a first special effect of the plurality of special effects is associated with the trigger. The processor applies in real-time the first special effect to the video stream to generate a video stream having the first special effect and transmits in real-time the video stream having the first special effect via the network. The processor causes the live streaming interface that includes the video stream having the first special effect to be displayed on the plurality of client devices. Other embodiments are disclosed.
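A hedged sketch of the trigger-to-effect flow follows; the `EFFECTS` table and `live_stream` generator are placeholders for illustration, not the patented implementation.

```python
# Rough sketch: the most recent trigger selects which special effect is applied
# to each frame before the stream is transmitted to viewers.
from typing import Callable, Dict, Iterable, Optional
import numpy as np

Effect = Callable[[np.ndarray], np.ndarray]

EFFECTS: Dict[str, Effect] = {
    "confetti": lambda frame: frame,   # placeholder effect functions
    "hearts":   lambda frame: frame,
}

def live_stream(frames: Iterable[np.ndarray], triggers: Iterable[Optional[str]]):
    """Apply the effect associated with the latest trigger to each frame in real time."""
    active: Optional[Effect] = None
    for frame, trigger in zip(frames, triggers):
        if trigger is not None:
            active = EFFECTS.get(trigger)        # determine the effect linked to the trigger
        out = active(frame) if active else frame
        yield out                                # transmit to the viewers' live interface
```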
-
Publication Number: US20220078370A1
Publication Date: 2022-03-10
Application Number: US17530094
Filing Date: 2021-11-18
Applicant: Snap Inc.
Inventor: Yurii Monastyrshyn, Illia Tulupov
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.
-
Publication Number: US20210006759A1
Publication Date: 2021-01-07
Application Number: US17024074
Filing Date: 2020-09-17
Applicant: Snap Inc.
Inventor: Yurii Monastyrshyn
Abstract: Systems, devices, media, and methods are presented for receiving a set of images in a video stream, converting one or more images of the set of images to a set of single channel images, generating a set of approximation images from the set of single channel images, and generating a set of binarized images by thresholding the set of approximation images.
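One plausible reading of this pipeline is sketched below, assuming OpenCV is available and treating the "approximation images" as a blurred, downscaled copy of the single-channel image; the patent may compute them differently.

```python
# Minimal sketch of the pipeline: single-channel conversion, an approximation
# image, and binarization by thresholding.
import cv2

def binarize_stream(frames):
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # convert to a single channel
        approx = cv2.pyrDown(gray)                       # approximation image (downscaled)
        _, binary = cv2.threshold(
            approx, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # binarized by thresholding
        yield binary
```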
-
Publication Number: US10102423B2
Publication Date: 2018-10-16
Application Number: US15199482
Filing Date: 2016-06-30
Applicant: Snap, Inc.
Inventor: Victor Shaburov, Yurii Monastyrshyn, Oleksandr Pyshchenko, Sergei Kotcur
Abstract: Systems, devices, and methods are presented for segmenting an image of a video stream with a client device by receiving one or more images depicting an object of interest and determining pixels within the one or more images corresponding to the object of interest. The systems, devices, and methods identify a position of a portion of the object of interest and determine a direction for the portion of the object of interest. Based on the direction of the portion of the object of interest, a histogram threshold is dynamically modified for identifying pixels as corresponding to the portion of the object of interest. The portion of the object of interest is replaced with a graphical interface element aligned with the direction of the portion of the object of interest.
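The sketch below shows one way a direction estimate for a portion of the object could drive a dynamically modified histogram threshold, roughly mirroring the abstract; the PCA-based direction estimate and the specific threshold adjustment are assumptions, not the claimed algorithm.

```python
# Rough sketch: estimate the direction of an object region from its pixel
# coordinates, adjust a threshold based on that direction, and use the
# threshold to select the region's pixels.
import numpy as np

def principal_direction(mask: np.ndarray) -> float:
    """Angle (radians) of the dominant axis of the masked pixels, via PCA."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = np.cov(pts.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    return float(np.arctan2(major[1], major[0]))

def segment_portion(gray: np.ndarray, rough_mask: np.ndarray, base_threshold: int = 128):
    angle = principal_direction(rough_mask)
    # Hypothetical adjustment: bias the histogram threshold by the portion's orientation.
    threshold = base_threshold + int(20 * abs(np.sin(angle)))
    portion_mask = (gray > threshold) & rough_mask
    return portion_mask, angle   # the angle can be used to align the overlay graphic
```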
-
Publication Number: US20180075292A1
Publication Date: 2018-03-15
Application Number: US15816776
Filing Date: 2017-11-17
Applicant: Snap Inc.
Inventor: Victor Shaburov, Yurii Monastyrshyn
CPC classification number: G06K9/00315, G06K9/00201, G06K9/00248, G06K9/00261, G06K9/00281, G06K9/6209, G06Q30/0281, G06T7/337, G06T7/344, G06T2207/10016, G06T2207/30201, G10L25/57, G10L25/63, H04N7/147, H04N7/15
Abstract: Methods and systems for videoconferencing include recognition of emotions related to one videoconference participant, such as a customer. This ultimately enables another videoconference participant, such as a service provider or supervisor, to handle angry, annoyed, or distressed customers. One example method includes the steps of receiving a video that includes a sequence of images, detecting at least one object of interest (e.g., a face), locating feature reference points of the at least one object of interest, aligning a virtual face mesh to the at least one object of interest based on the feature reference points, finding, over the sequence of images, at least one deformation of the virtual face mesh that reflects face mimics, determining that the at least one deformation refers to a facial emotion selected from a plurality of reference facial emotions, and generating a communication bearing data associated with the facial emotion.
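A simplified sketch of the mesh-deformation-to-emotion flow follows; `fit_face_mesh`, the neutral mesh, and the reference deformations are stand-ins for the components the patent actually claims, and the nearest-reference classification is an assumption.

```python
# Simplified sketch: compare the aligned face mesh against a neutral reference,
# classify the deformation as one of several reference emotions, and notify
# another participant (e.g. a supervisor) when a negative emotion is detected.
from typing import Callable, Dict, Iterable
import numpy as np

def classify_emotion(deformation: np.ndarray,
                     references: Dict[str, np.ndarray]) -> str:
    """Pick the reference facial emotion whose deformation is closest."""
    return min(references, key=lambda e: np.linalg.norm(deformation - references[e]))

def monitor_call(frames: Iterable[np.ndarray],
                 fit_face_mesh: Callable[[np.ndarray], np.ndarray],
                 neutral_mesh: np.ndarray,
                 references: Dict[str, np.ndarray],
                 notify: Callable[[str], None]) -> None:
    for frame in frames:
        mesh = fit_face_mesh(frame)          # align the virtual face mesh to the face
        deformation = mesh - neutral_mesh    # mesh deformation relative to a neutral pose
        emotion = classify_emotion(deformation, references)
        if emotion in ("angry", "annoyed", "distressed"):
            notify(emotion)                  # generate a communication bearing the emotion
```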
-
Publication Number: US20240119396A1
Publication Date: 2024-04-11
Application Number: US18541970
Filing Date: 2023-12-15
Applicant: Snap Inc.
Inventor: Victor Shaburov, Yurii Monastyrshyn
IPC: G06Q10/0639, G06V40/16, H04N21/4402
CPC classification number: G06Q10/06395, G06Q10/06393, G06V40/174, H04N21/440218, G10L17/26
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
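Below is an illustrative aggregation of per-frame emotion labels into a simple work quality parameter; the negative-emotion set, the weighting, and the specific metric are assumptions, not the claimed method.

```python
# Illustrative aggregation only: turn per-frame emotion labels (e.g. produced by
# the mesh-deformation classification above) into a work quality parameter.
from collections import Counter
from typing import Iterable

NEGATIVE = {"angry", "annoyed", "distressed"}

def work_quality(emotions: Iterable[str]) -> dict:
    counts = Counter(emotions)
    total = sum(counts.values()) or 1
    negative_share = sum(counts[e] for e in NEGATIVE) / total
    return {
        "frames_scored": total,
        "negative_share": negative_share,                   # fraction of frames with negative affect
        "quality_score": round(1.0 - negative_share, 3),    # simple work quality parameter
    }
```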
-
Publication Number: US11922356B1
Publication Date: 2024-03-05
Application Number: US16667366
Filing Date: 2019-10-29
Applicant: Snap Inc.
Inventor: Victor Shaburov, Yurii Monastyrshyn
IPC: G06Q10/0639, G06V40/16, G10L25/63, H04N21/4402, G10L15/22, G10L17/26, H04N7/15, H04N21/442, H04N21/4788
CPC classification number: G06Q10/06395, G06Q10/06393, G06V40/174, H04N21/440218, G10L2015/227, G10L17/26, G10L25/63, H04N7/15, H04N21/44218, H04N21/4788
Abstract: Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
-
Publication Number: US20230362327A1
Publication Date: 2023-11-09
Application Number: US18223215
Filing Date: 2023-07-18
Applicant: Snap Inc.
Inventor: Yurii Monastyrshyn, Illia Tulupov
CPC classification number: H04N5/77, G06T3/40, G06F3/04883, H04N21/44008, H04N21/84, G06V40/165, G06T2207/30201
Abstract: Systems, devices, media, and methods are presented for generating graphical representations within frames of a video stream in real time. The systems and methods receive a frame depicting a portion of a face, identify user input, and identify positions on the portion of the face corresponding to the user input. The systems and methods generate a graphical representation of the user input linked to positions on the portion of the face and render the graphical representation within frames of the video stream in real time.