Cross-channel enrollment and authentication of voice biometrics

    Publication number: US12266368B2

    Publication date: 2025-04-01

    Application number: US17165180

    Filing date: 2021-02-02

    Abstract: Embodiments described herein provide systems and methods for voice-based cross-channel enrollment and authentication. The systems control for and mitigate variations in audio signals received across any number of communication channels by training and employing a neural network architecture comprising a speaker verification neural network and a bandwidth expansion neural network. The bandwidth expansion neural network is trained on narrowband audio signals to generate estimated wideband audio signals corresponding to the narrowband audio signals. These estimated wideband audio signals may be fed into one or more downstream applications, such as the speaker verification neural network or an embedding extraction neural network. The speaker verification neural network can then compare and score inbound embeddings for a current call against enrolled embeddings, regardless of the channel used to receive the inbound signal or the enrollment signal.
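The final scoring step the abstract describes can be illustrated with a minimal sketch. The bandwidth expansion network and embedding extractor are assumed to run upstream, and the function names, embedding values, and threshold below are illustrative, not taken from the patent:

```python
import math

def cosine_score(inbound: list[float], enrolled: list[float]) -> float:
    """Cosine similarity between an inbound embedding and an enrolled embedding."""
    dot = sum(a * b for a, b in zip(inbound, enrolled))
    norms = math.sqrt(sum(a * a for a in inbound)) * math.sqrt(sum(b * b for b in enrolled))
    return dot / norms

def verify(inbound: list[float], enrolled: list[float], threshold: float = 0.7) -> bool:
    # Channel-agnostic comparison: both embeddings are assumed to be extracted
    # from (estimated) wideband audio, so one threshold serves all channels.
    return cosine_score(inbound, enrolled) >= threshold
```

Because both enrollment and test embeddings are derived from estimated wideband audio, the comparison does not need per-channel thresholds.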

    AUDIOVISUAL DEEPFAKE DETECTION
    Invention application

    Publication number: US20250037507A1

    Publication date: 2025-01-30

    Application number: US18919049

    Filing date: 2024-10-17

    Abstract: The embodiments execute machine-learning architectures for biometric-based identity recognition (e.g., speaker recognition, facial recognition) and deepfake detection (e.g., speaker deepfake detection, facial deepfake detection). The machine-learning architecture includes layers defining multiple scoring components, including sub-architectures for speaker deepfake detection, speaker recognition, facial deepfake detection, facial recognition, and a lip-sync estimation engine. The machine-learning architecture extracts and analyzes various types of low-level features from both audio data and visual data, combines the various scores, and uses the scores to determine the likelihood that the audiovisual data contains deepfake content and the likelihood that a claimed identity of a person in the video matches the identity of an expected or enrolled person. This enables the machine-learning architecture to perform identity recognition and verification, and deepfake detection, in an integrated fashion for both audio data and visual data.
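As a rough illustration of combining the component scores, a simple weighted fusion could look like the following. The component names, weights, and score values are hypothetical; the abstract does not specify the fusion rule:

```python
def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-component scores (higher = more likely deepfake)."""
    total = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total

# Hypothetical outputs from the scoring sub-architectures:
scores = {"speaker_deepfake": 0.9, "facial_deepfake": 0.8, "lip_sync": 0.7}
weights = {"speaker_deepfake": 1.0, "facial_deepfake": 1.0, "lip_sync": 0.5}
fused = fuse_scores(scores, weights)
```

The same fusion pattern applies to the identity-recognition side, with the speaker and facial recognition scores combined into a match likelihood.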

    Speaker recognition with quality indicators

    Publication number: US12190905B2

    Publication date: 2025-01-07

    Application number: US17408281

    Filing date: 2021-08-20

    Abstract: Embodiments described herein provide a machine-learning architecture for modeling quality measures for enrollment signals. Modeling these enrollment signals enables the machine-learning architecture to identify deviations from an expected or ideal enrollment signal in future test-phase calls. These deviations can be used to generate quality measures for the various audio descriptors or characteristics of the audio signals. The quality measures can then be fused at the score level with the speaker recognition model's embedding comparisons for verifying the speaker. Fusing the quality measures with the similarity scoring essentially calibrates the speaker recognition model's outputs based on what is actually expected for the enrolled caller and what is actually observed for the current inbound caller.
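Score-level fusion of a quality measure with an embedding-similarity score might be sketched as follows. The deviation-to-quality mapping, the choice of descriptor (SNR in dB), and the weight `alpha` are assumptions for illustration only:

```python
def quality_measure(observed: float, expected: float, tolerance: float) -> float:
    """Map the deviation of an audio descriptor (e.g., SNR in dB) from its
    enrollment expectation into [0, 1]: 1.0 = as expected, 0.0 = far off."""
    return max(0.0, 1.0 - abs(observed - expected) / tolerance)

def calibrated_score(similarity: float, quality: float, alpha: float = 0.8) -> float:
    # Score-level fusion: temper the raw embedding similarity when the
    # observed signal quality deviates from what enrollment predicts.
    return alpha * similarity + (1.0 - alpha) * quality
```

A call whose descriptors match the enrollment expectation keeps its similarity score nearly intact, while a degraded call is pulled toward a more cautious decision.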

    CALLER VERIFICATION VIA CARRIER METADATA
    Invention publication

    Publication number: US20240171680A1

    Publication date: 2024-05-23

    Application number: US18423858

    Filing date: 2024-01-26

    Abstract: Embodiments described herein provide passive caller verification and/or passive fraud-risk assessments for calls to customer call centers. Systems and methods may be used in real time as a call is coming into a call center. An analytics server of an analytics service looks at the purported Caller ID of the call, as well as the unaltered carrier metadata, which the analytics server then uses to generate or retrieve one or more probability scores using one or more lookup tables and/or a machine-learning model. A probability score indicates the likelihood that information derived using the Caller ID information has occurred or should occur given the carrier metadata received with the inbound call. The one or more probability scores may be used to generate a risk score for the current call that indicates the probability of the call being valid (e.g., originating from a verified caller or calling device, non-fraudulent).
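A lookup-table-based probability score and a combined risk score could be sketched as below. The table contents, carrier names, and geometric-mean combination are hypothetical choices, not the patent's actual tables or scoring rule:

```python
# Hypothetical lookup table keyed by (carrier claimed via Caller ID lookup,
# carrier observed in the unaltered metadata) -> probability of that pairing.
CARRIER_TABLE = {
    ("carrier_a", "carrier_a"): 0.95,  # metadata consistent with the Caller ID
    ("carrier_a", "carrier_b"): 0.05,  # mismatch: possibly a spoofed Caller ID
}

def probability_score(claimed: str, observed: str, default: float = 0.01) -> float:
    return CARRIER_TABLE.get((claimed, observed), default)

def risk_score(prob_scores: list[float]) -> float:
    """Combine per-attribute probability scores into one validity score
    (a geometric mean here; the actual combination rule is unspecified)."""
    product = 1.0
    for p in prob_scores:
        product *= p
    return product ** (1.0 / len(prob_scores))
```

Because the check is entirely metadata-driven, it can run passively before the caller says a word.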

    BEHAVIORAL BIOMETRICS USING KEYPRESS TEMPORAL INFORMATION

    Publication number: US20240169040A1

    Publication date: 2024-05-23

    Application number: US18515128

    Filing date: 2023-11-20

    CPC classification number: G06F21/316

    Abstract: Embodiments include a computing device that executes software routines and/or one or more machine-learning architectures, including a neural network-based embedding extraction system that produces an embedding vector representing a user's keypress behavior, where the system extracts the behaviorprint embedding vector from keypress features and later references it for authenticating users. Embodiments may extract and evaluate keypress features, such as keypress sequences, keypress pressure or volume, and temporal keypress features, such as the duration of keypresses and the interval between keypresses, among others. Some embodiments employ a deep neural network architecture that generates a behaviorprint embedding vector representation of the keypress duration and interval features that is used for enrollment and at inference time to authenticate users.
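Extracting the temporal keypress features the abstract mentions (hold durations and inter-press intervals) from press/release timestamps might look like this sketch; the event representation is an assumption, and a real system would feed these features into the embedding extractor:

```python
def keypress_features(events: list[tuple[float, float]]) -> dict[str, list[float]]:
    """events: (press_time, release_time) per keypress, in seconds.
    Returns the hold durations and inter-press intervals from which a
    behaviorprint embedding would later be extracted."""
    durations = [release - press for press, release in events]
    intervals = [events[i + 1][0] - events[i][0] for i in range(len(events) - 1)]
    return {"durations": durations, "intervals": intervals}
```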

    UNSUPERVISED KEYWORD SPOTTING AND WORD DISCOVERY FOR FRAUD ANALYTICS

    Publication number: US20240062753A1

    Publication date: 2024-02-22

    Application number: US18385632

    Filing date: 2023-10-31

    Inventor: Hrishikesh Rao

    Abstract: Embodiments described herein provide for a computer that detects one or more keywords of interest using acoustic features in order to detect or query commonalities across multiple fraud calls. Embodiments described herein may implement unsupervised keyword spotting (UKWS) or unsupervised word discovery (UWD) to identify commonalities across a set of calls, where both UKWS and UWD employ Gaussian mixture models (GMMs) and one or more dynamic time-warping algorithms. A user may indicate a training exemplar or occurrence of call-specific information, referred to herein as “a named entity,” such as a person's name, an account number, an account balance, or an order number. The computer may perform a redaction process that computationally nullifies the import of the named entity in the modeling processes described herein.
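The dynamic time-warping step that both UKWS and UWD rely on can be sketched on one-dimensional sequences; a real system would compare frame-level acoustic feature vectors (e.g., GMM posteriors) rather than scalars, so this is only the alignment skeleton:

```python
def dtw_distance(a: list[float], b: list[float]) -> float:
    """Dynamic time-warping cost between two feature sequences; a low cost
    between segments of different calls suggests a repeated keyword."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]
```

The warping lets the same word spoken at different speeds align with near-zero cost, which is what makes cross-call keyword discovery feasible without transcripts.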

    System and method for cluster-based audio event detection

    Publication number: US11842748B2

    Publication date: 2023-12-12

    Application number: US17121291

    Filing date: 2020-12-14

    CPC classification number: G10L25/45 G10L25/27 G10L25/51 G10L25/78

    Abstract: Methods, systems, and apparatuses for audio event detection, where the determination of a type of sound data is made at the cluster level rather than at the frame level. The techniques provided are thus more robust to the local behavior of features of an audio signal or audio recording. The audio event detection is performed by using Gaussian mixture models (GMMs) to classify each cluster or by extracting an i-vector from each cluster. Each cluster may be classified based on an i-vector classification using a support vector machine or probabilistic linear discriminant analysis. The audio event detection significantly reduces potential smoothing error and avoids any dependency on accurate window-size tuning. Segmentation may be performed using a generalized likelihood ratio and a Bayesian information criterion, and the segments may be clustered using hierarchical agglomerative clustering. Audio frames may be clustered using K-means and GMMs.
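The cluster-level (rather than frame-level) decision can be illustrated with a toy classifier. Real embodiments classify clusters via GMMs or i-vector scoring rather than a mean-energy threshold, so the features, assignments, and threshold here are placeholders:

```python
def classify_clusters(frames: list[float], assignments: list[int],
                      threshold: float) -> dict[int, str]:
    """Label each cluster of frames (e.g., from hierarchical agglomerative
    clustering) as a whole, so one noisy frame cannot flip the decision."""
    clusters: dict[int, list[float]] = {}
    for value, cluster_id in zip(frames, assignments):
        clusters.setdefault(cluster_id, []).append(value)
    return {
        cluster_id: "speech" if sum(values) / len(values) >= threshold else "non-speech"
        for cluster_id, values in clusters.items()
    }
```

Aggregating before classifying is what gives the method its robustness to local feature behavior and removes the dependence on window-size tuning.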

    Deep neural network based speech enhancement

    Publication number: US11756564B2

    Publication date: 2023-09-12

    Application number: US16442279

    Filing date: 2019-06-14

    CPC classification number: G10L21/0232 G06N3/048 G10L25/30

    Abstract: A computer may segment a noisy audio signal into audio frames and execute a deep neural network (DNN) to estimate an instantaneous function of the clean speech spectrum and the noisy audio spectrum in each audio frame. This instantaneous function may correspond to a ratio of the a-priori signal-to-noise ratio (SNR) and the a-posteriori SNR of the audio frame. The computer may add the estimated instantaneous function to the original noisy audio frame to output an enhanced speech audio frame.
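For comparison, the classic (non-neural) Wiener gain is one instantaneous function of the a-priori SNR that can be applied per spectral bin. The patent's DNN instead estimates such a function directly from the noisy frame; this sketch only shows how an estimated gain would be applied, not the patent's additive formulation:

```python
def wiener_gain(snr_prior: float) -> float:
    """Wiener suppression gain computed from the a-priori SNR (linear scale)."""
    return snr_prior / (1.0 + snr_prior)

def enhance_bin(noisy_magnitude: float, snr_prior: float) -> float:
    # Apply the gain to one noisy magnitude-spectrum bin; a DNN-based
    # enhancer would supply its own estimated function here instead.
    return wiener_gain(snr_prior) * noisy_magnitude
```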
