-
Publication No.: US20210326421A1
Publication Date: 2021-10-21
Application No.: US17231672
Filing Date: 2021-04-15
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Ganesh SIVARAMAN , Avrosh KUMAR , Ivan ANTOLIC-SOBAN
Abstract: Embodiments described herein provide for a voice biometrics system executing machine-learning architectures capable of passive, active, continuous, or static operations, or a combination thereof. The system passively and/or continuously, and in some cases also actively and/or statically, enrolls speakers as they speak into or around an edge device (e.g., car, television, radio, phone). The system identifies users on the fly without requiring a new speaker to mirror prompted utterances for reconfiguring operations, and manages speaker profiles as speakers provide utterances. The machine-learning architectures implement a passive and continuous voice biometrics system, possibly without knowledge of speaker identities. The system creates identities in an unsupervised manner, sometimes passively enrolling and recognizing known or unknown speakers, and offers personalization and security across a wide range of applications, including media content for over-the-top services, IoT devices (e.g., personal assistants, vehicles), and call centers.
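The passive, unsupervised enrollment flow described above can be sketched as a running-mean profile store: each incoming utterance embedding is scored against every enrolled profile, updating the best match or creating a new identity on the fly. This is a minimal illustration under assumed choices (cosine scoring, a fixed 0.7 threshold, mean-pooled profiles), not the patented implementation.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class PassiveEnroller:
    """Toy unsupervised enrollment: one running-mean embedding per speaker profile."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.profiles = []  # list of (mean_embedding, utterance_count)

    def observe(self, embedding):
        embedding = np.asarray(embedding, dtype=float)
        scores = [cosine(embedding, mean) for mean, _ in self.profiles]
        if scores and max(scores) >= self.threshold:
            # Known speaker: fold the new utterance into that profile's mean.
            i = int(np.argmax(scores))
            mean, n = self.profiles[i]
            self.profiles[i] = ((mean * n + embedding) / (n + 1), n + 1)
            return i
        # Unknown speaker: create a new identity, no prompted utterances needed.
        self.profiles.append((embedding, 1))
        return len(self.profiles) - 1
```

A profile here is just a centroid; a production system would store richer statistics and handle profile aging and merging.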
-
Publication No.: US20210233541A1
Publication Date: 2021-07-29
Application No.: US17155851
Filing Date: 2021-01-22
Applicant: PINDROP SECURITY, INC.
Inventor: Tianxiang CHEN , Elie KHOURY
Abstract: Embodiments described herein provide for systems and methods for implementing a neural network architecture for spoof detection in audio signals. The neural network architecture contains layers defining embedding extractors that extract embeddings from input audio signals. Spoofprint embeddings are generated for particular system enrollees to detect attempts to spoof the enrollee's voice. Optionally, voiceprint embeddings are generated for the system enrollees to recognize the enrollee's voice. The voiceprints are extracted using features related to the enrollee's voice. The spoofprints are extracted using features related to how the enrollee speaks and other artifacts. The spoofprints facilitate detection of efforts to fool voice biometrics using synthesized speech (e.g., deepfakes) that spoof and emulate the enrollee's voice.
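The dual-embedding decision can be illustrated by scoring a test utterance against both the enrolled voiceprint and the enrolled (genuine) spoofprint. The cosine scoring, the AND rule, and the thresholds below are illustrative assumptions, not the claimed method.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(test_voice_emb, test_spoof_emb, enrolled_voiceprint, enrolled_spoofprint,
           voice_thresh=0.7, spoof_thresh=0.7):
    """Accept only if the voice matches the voiceprint AND the speaking-style/
    artifact embedding matches the enrollee's genuine spoofprint."""
    same_speaker = cosine(test_voice_emb, enrolled_voiceprint) >= voice_thresh
    genuine = cosine(test_spoof_emb, enrolled_spoofprint) >= spoof_thresh
    return same_speaker and genuine
```

Separating the two prints lets a deepfake that matches the voiceprint still be rejected on the spoofprint axis.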
-
Publication No.: US20200321009A1
Publication Date: 2020-10-08
Application No.: US16907951
Filing Date: 2020-06-22
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Parav NAGARSHETH , Kailash PATIL , Matthew GARLAND
Abstract: An automated speaker verification (ASV) system incorporates a first deep neural network to extract deep acoustic features, such as deep CQCC features, from a received voice sample. The deep acoustic features are processed by a second deep neural network that classifies the deep acoustic features according to a determined likelihood of including a spoofing condition. A binary classifier then classifies the voice sample as being genuine or spoofed.
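The two-stage pipeline (deep acoustic feature extractor, likelihood network, binary decision) can be sketched with stand-in networks. The fixed random projections below are placeholders for the two trained DNNs, not actual deep CQCC extraction; only the pipeline shape is taken from the abstract.

```python
import numpy as np

def deep_cqcc_stub(audio):
    # Stand-in for DNN #1: in the patent this extracts deep CQCC-style
    # acoustic features; here, a fixed random projection of the waveform.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((8, len(audio)))
    return np.tanh(W @ np.asarray(audio, dtype=float))

def spoof_likelihood(features):
    # Stand-in for DNN #2: maps deep features to a spoofing-condition likelihood.
    rng = np.random.default_rng(1)
    w = rng.standard_normal(len(features))
    return 1.0 / (1.0 + np.exp(-(w @ features)))  # sigmoid output in (0, 1)

def is_genuine(audio, threshold=0.5):
    # Final binary classifier: genuine vs. spoofed voice sample.
    return spoof_likelihood(deep_cqcc_stub(audio)) < threshold
```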
-
Publication No.: US20190096424A1
Publication Date: 2019-03-28
Application No.: US16200283
Filing Date: 2018-11-26
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Matthew GARLAND
Abstract: Methods, systems, and apparatuses for audio event detection, in which the determination of the type of sound data is made at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
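The cluster-level decision idea can be sketched as follows: frames are first grouped (a minimal K-means stands in for the K-means/GMM/agglomerative options named above), then each cluster is classified from its pooled frame scores, so a single noisy frame cannot flip the decision. The frame scorer is a placeholder supplied by the caller.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Minimal K-means over frame feature vectors.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def classify_clusters(X, labels, frame_scorer):
    """Classify each cluster from the mean of its frames' scores
    (cluster-level decision, robust to local frame behavior)."""
    return {j: ("event" if frame_scorer(X[labels == j]).mean() > 0 else "background")
            for j in set(labels.tolist())}
```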
-
Publication No.: US20180075849A1
Publication Date: 2018-03-15
Application No.: US15818231
Filing Date: 2017-11-20
Applicant: PINDROP SECURITY, INC.
Inventor: Elie KHOURY , Matthew GARLAND
CPC classification number: G10L17/08 , G06N3/04 , G06N3/08 , G10L15/16 , G10L17/02 , G10L17/04 , G10L17/18 , G10L17/22
Abstract: The present invention is directed to a deep neural network (DNN) having a triplet network architecture, which is suitable to perform speaker recognition. In particular, the DNN includes three feed-forward neural networks, which are trained according to a batch process utilizing a cohort set of negative training samples. After each batch of training samples is processed, the DNN may be trained according to a loss function, e.g., utilizing a cosine measure of similarity between respective samples, along with positive and negative margins, to provide a robust representation of voiceprints.
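The cosine-similarity loss with positive and negative margins can be illustrated as a hinge objective over an anchor, a positive sample, and a cohort of negatives. The specific margin values and hinge form below are illustrative assumptions, not the patent's exact loss.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two voiceprint embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triplet_cosine_loss(anchor, positive, negatives, pos_margin=0.9, neg_margin=0.3):
    """Penalize the anchor for scoring below pos_margin against its positive,
    and above neg_margin against any member of the negative cohort."""
    loss = max(0.0, pos_margin - cosine(anchor, positive))
    for neg in negatives:
        loss += max(0.0, cosine(anchor, neg) - neg_margin)
    return loss
```

In batch training, gradients of such a loss would push same-speaker embeddings together and cohort embeddings apart.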
-
Publication No.: US20250029614A1
Publication Date: 2025-01-23
Application No.: US18777278
Filing Date: 2024-07-18
Applicant: PINDROP SECURITY, INC.
Inventor: David LOONEY , Nikolay GAUBITCH , Elie KHOURY
IPC: G10L17/02
Abstract: Disclosed are systems and methods including software processes executed by a server for: obtaining, by a computer, an audio signal including synthetic speech; extracting, by the computer, metadata from a watermark of the audio signal by applying a set of keys associated with a plurality of text-to-speech (TTS) services to the audio signal, the metadata indicating an origin of the synthetic speech in the audio signal; and generating, by the computer, based on the extracted metadata, a notification indicating that the audio signal includes the synthetic speech.
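The key-trial extraction loop can be sketched as follows, using a keyed MAC over a byte payload as a toy stand-in for an audio-domain watermark. The service names and key scheme are hypothetical; only the "try each TTS service's key, and a successful extraction reveals the origin" pattern comes from the abstract.

```python
import hashlib
import hmac

# Hypothetical per-service watermark keys (placeholder names).
SERVICE_KEYS = {"tts_vendor_a": b"key-a", "tts_vendor_b": b"key-b"}

def embed_watermark(payload: bytes, key: bytes) -> bytes:
    # Toy watermark: payload followed by a 32-byte keyed MAC tag.
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def extract_metadata(blob: bytes):
    """Try each known service key; a valid tag identifies the TTS origin."""
    payload, tag = blob[:-32], blob[-32:]
    for service, key in SERVICE_KEYS.items():
        if hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).digest(), tag):
            return {"origin": service, "metadata": payload.decode()}
    return None  # no key matched: watermark absent or from an unknown service
```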
-
Publication No.: US20240355323A1
Publication Date: 2024-10-24
Application No.: US18388447
Filing Date: 2023-11-09
Applicant: PINDROP SECURITY, INC.
Inventor: Umair Altaf , Sai Pradeep PERI , Lakshay PHATELA , Payas GUPTA , Yitao SUN , Svetlane AFANASEVA , Kailash PATIL , Elie KHOURY , Bradley MAGNETTA , Vijay BALASUBRAMANIYAN , Tianxiang CHEN
Abstract: Disclosed are systems and methods including software processes executed by a server that detect audio-based synthetic speech ("deepfakes") in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns to detect synthetic speech. Additionally or alternatively, the server executes a voice "liveness" detection system for detecting machine speech, such as synthetic speech or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection in call audio signals to detect liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks, based on human-provided feedback.
-
Publication No.: US20230326462A1
Publication Date: 2023-10-12
Application No.: US18329138
Filing Date: 2023-06-05
Applicant: Pindrop Security, Inc.
Inventor: Elie KHOURY , Matthew GARLAND
IPC: G10L17/00 , H04M1/27 , G10L17/24 , G10L15/19 , G10L17/08 , G06N7/01 , G10L15/07 , G10L15/26 , G10L17/04
CPC classification number: G10L17/00 , H04M1/271 , G10L17/24 , G10L15/19 , G10L17/08 , G06N7/01 , G10L15/07 , G10L15/26 , G10L17/04 , H04M2203/40
Abstract: Utterances of at least two speakers in a speech signal may be distinguished, and the associated speaker identified, by use of diarization together with automatic speech recognition of identifying words and phrases commonly found in the speech signal. The diarization process clusters turns of the conversation, while recognized special-form phrases and entity names identify the speakers. A trained probabilistic model deduces which entity name(s) correspond to the clusters.
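The mapping from recognized special-form phrases to diarization clusters can be illustrated with a majority vote over self-introductions, a simple stand-in for the trained probabilistic model. The phrase patterns and the rule that an introduction names the speaker of that turn are illustrative assumptions.

```python
import re
from collections import Counter, defaultdict

# Special-form introduction phrases followed by a capitalized entity name.
INTRO = re.compile(r"\b(?:this is|my name is)\s+([A-Z][a-z]+)")

def name_clusters(turns):
    """turns: (cluster_id, transcript) pairs from diarization + ASR.
    Assign each cluster the entity name it most often introduces itself with."""
    votes = defaultdict(Counter)
    for cluster, text in turns:
        for name in INTRO.findall(text):
            votes[cluster][name] += 1
    return {c: v.most_common(1)[0][0] for c, v in votes.items()}
```

A probabilistic model would additionally weight ASR confidence and phrase type rather than counting raw matches.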
-
Publication No.: US20230290357A1
Publication Date: 2023-09-14
Application No.: US18321353
Filing Date: 2023-05-22
Applicant: Pindrop Security, Inc.
Inventor: Elie KHOURY , Matthew GARLAND
IPC: G10L17/20 , G10L17/02 , G10L17/04 , G10L17/18 , G10L19/028
CPC classification number: G10L17/20 , G10L17/02 , G10L17/04 , G10L17/18 , G10L19/028
Abstract: A system for generating channel-compensated features of a speech signal includes a channel noise simulator that degrades the speech signal, a feed forward convolutional neural network (CNN) that generates channel-compensated features of the degraded speech signal, and a loss function that computes a difference between the channel-compensated features and handcrafted features for the same raw speech signal. Each loss result may be used to update connection weights of the CNN until a predetermined threshold loss is satisfied, and the CNN may be used as a front-end for a deep neural network (DNN) for speaker recognition/verification. The DNN may include convolutional layers, a bottleneck features layer, multiple fully-connected layers, and an output layer. The bottleneck features may be used to update connection weights of the convolutional layers, and dropout may be applied to the convolutional layers.
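The training objective described above, a difference between the CNN's channel-compensated features and handcrafted features of the same raw signal, can be sketched as a mean-squared loss. The MSE metric is one plausible choice for illustration; the abstract does not fix the distance function.

```python
import numpy as np

def feature_matching_loss(compensated, handcrafted):
    """Mean-squared difference between channel-compensated features (from the
    CNN run on the degraded signal) and handcrafted features of the clean
    signal; training updates CNN weights until this falls below a threshold."""
    compensated = np.asarray(compensated, dtype=float)
    handcrafted = np.asarray(handcrafted, dtype=float)
    return float(np.mean((compensated - handcrafted) ** 2))
```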
-
Publication No.: US20230137652A1
Publication Date: 2023-05-04
Application No.: US17977521
Filing Date: 2022-10-31
Applicant: Pindrop Security, Inc.
Inventor: Elie KHOURY , Tianxiang CHEN , Avrosh KUMAR , Ganesh SIVARAMAN , Kedar PHATAK
Abstract: Disclosed are systems and methods including computing processes executing machine-learning architectures for voice biometrics, in which the machine-learning architecture implements one or more language compensation functions. Embodiments include an embedding extraction engine (sometimes referred to as an "embedding extractor") that extracts speaker embeddings and determines a speaker similarity score for determining or verifying the likelihood that speakers in different audio signals are the same speaker. The machine-learning architecture further includes a multi-class language classifier that determines a language likelihood score indicating the likelihood that a particular audio signal includes a spoken language. The features and functions of the machine-learning architecture described herein may implement the various language compensation techniques to provide more accurate speaker recognition results, regardless of the language spoken by the speaker.
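One simple form of language compensation can be sketched: compute the cosine speaker similarity, then shift the score when the language classifier's most likely languages for the two signals disagree (cross-language comparisons of the same speaker tend to score lower). The additive offset and its value are purely illustrative assumptions, not the patented compensation functions.

```python
import numpy as np

def softmax(z):
    # Convert language-classifier logits to a probability distribution.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def compensated_similarity(emb_a, emb_b, lang_logits_a, lang_logits_b, offset=0.1):
    """Cosine speaker similarity, boosted when the detected languages differ."""
    score = float(np.dot(emb_a, emb_b) / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
    if int(np.argmax(softmax(lang_logits_a))) != int(np.argmax(softmax(lang_logits_b))):
        score += offset  # compensate for the cross-language mismatch
    return score
```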