Audiovisual deepfake detection
    Invention Grant

    Publication Number: US12142083B2

    Publication Date: 2024-11-12

    Application Number: US17503152

    Filing Date: 2021-10-15

    Abstract: The embodiments execute machine-learning architectures for biometric-based identity recognition (e.g., speaker recognition, facial recognition) and deepfake detection (e.g., speaker deepfake detection, facial deepfake detection). The machine-learning architecture includes layers defining multiple scoring components, including sub-architectures for speaker deepfake detection, speaker recognition, facial deepfake detection, facial recognition, and a lip-sync estimation engine. The machine-learning architecture extracts and analyzes various types of low-level features from both audio data and visual data, combines the various scores, and uses the scores to determine the likelihood that the audiovisual data contains deepfake content and the likelihood that a claimed identity of a person in the video matches the identity of an expected or enrolled person. This enables the machine-learning architecture to perform identity recognition and verification, and deepfake detection, in an integrated fashion for both audio data and visual data.
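
    The abstract does not say how the component scores are combined. As a minimal sketch of that fusion step, assuming a simple weighted sum with illustrative component names, scores, and uniform weights (none of which come from the patent), the final decision score could be computed as follows:

```python
def fuse_scores(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-component scores with a weighted sum.

    The fusion rule, weights, and score semantics are assumptions for
    illustration; the patent does not disclose its combination method.
    """
    return sum(weights[name] * scores[name] for name in scores)

# Hypothetical component scores in [0, 1], higher meaning "more likely genuine".
scores = {
    "speaker_recognition": 0.91,  # claimed identity vs. enrolled voiceprint
    "speaker_deepfake": 0.88,     # audio judged unlikely to be synthetic
    "face_recognition": 0.90,
    "face_deepfake": 0.83,
    "lip_sync": 0.76,             # audio-visual synchrony estimate
}
weights = {name: 0.2 for name in scores}  # uniform placeholder weights

print(f"fused authenticity score: {fuse_scores(scores, weights):.3f}")
```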

    DEEPFAKE DETECTION
    Published Application (Pending)

    Publication Number: US20240355337A1

    Publication Date: 2024-10-24

    Application Number: US18388364

    Filing Date: 2023-11-09

    CPC classification number: G10L17/24

    Abstract: Disclosed are systems and methods, including software processes executed by a server, that detect audio-based synthetic speech ("deepfakes") in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns that indicate synthetic speech. Additionally or alternatively, the server executes a voice "liveness" detection system for detecting machine speech, such as synthetic or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection on call audio signals to assess the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
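
    Of the listed checks, phrase repetition detection is the simplest to illustrate. The sketch below flags word n-grams that recur verbatim in a transcript; the n-gram length, the threshold, and the helper name are hypothetical, and the patent's actual detection logic is not disclosed at this level of detail:

```python
from collections import Counter

def repeated_phrases(transcript: str, n: int = 4, min_count: int = 2) -> list[tuple[str, int]]:
    """Flag word n-grams that repeat verbatim in a call transcript.

    Exact repetition of long phrases is rare in natural conversation, so
    verbatim repeats can hint at replayed or templated synthetic speech.
    Parameters here are illustrative assumptions.
    """
    words = transcript.lower().split()
    ngrams = (" ".join(words[i:i + n]) for i in range(len(words) - n + 1))
    counts = Counter(ngrams)
    return [(phrase, count) for phrase, count in counts.items() if count >= min_count]

transcript = (
    "please verify my account number please verify my account number "
    "thank you for your help"
)
print(repeated_phrases(transcript))
# [('please verify my account', 2), ('verify my account number', 2)]
```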

    Robust spoofing detection system using deep residual neural networks

    Publication Number: US11862177B2

    Publication Date: 2024-01-02

    Application Number: US17155851

    Filing Date: 2021-01-22

    CPC classification number: G10L17/18 G10L17/02 G10L17/04 G10L17/08 G10L17/22

    Abstract: Embodiments described herein provide systems and methods for implementing a neural network architecture for spoof detection in audio signals. The neural network architecture contains layers defining embedding extractors that extract embeddings from input audio signals. Spoofprint embeddings are generated for particular system enrollees to detect attempts to spoof the enrollee's voice. Optionally, voiceprint embeddings are generated for the system enrollees to recognize the enrollee's voice. The voiceprints are extracted using features related to the enrollee's voice. The spoofprints are extracted using features related to how the enrollee speaks, along with other artifacts. The spoofprints facilitate detection of efforts to fool voice biometrics using synthesized speech (e.g., deepfakes) that spoofs and emulates the enrollee's voice.
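
    As a sketch of the enrollment-and-scoring flow the abstract implies, assuming mean-pooled, L2-normalized embeddings and cosine scoring (conventional choices, not confirmed by the abstract), a spoofprint or voiceprint can be compared against a test embedding as follows; random vectors stand in for the deep residual extractor's output:

```python
import numpy as np

def enroll(embeddings: np.ndarray) -> np.ndarray:
    """Aggregate per-utterance embeddings into a single print
    (mean followed by L2 normalization, a common convention)."""
    profile = embeddings.mean(axis=0)
    return profile / np.linalg.norm(profile)

def cosine_score(enrolled_print: np.ndarray, test_embedding: np.ndarray) -> float:
    """Cosine similarity between an enrolled print and a test embedding."""
    test_embedding = test_embedding / np.linalg.norm(test_embedding)
    return float(enrolled_print @ test_embedding)

rng = np.random.default_rng(0)
spoofprint = enroll(rng.normal(size=(5, 256)))  # 5 enrollment utterances, dim 256
test_embedding = rng.normal(size=256)           # embedding of an inbound utterance

# The score would be compared against a threshold tuned on labeled data to
# decide whether the utterance is consistent with the enrollee's genuine speech.
print(f"similarity: {cosine_score(spoofprint, test_embedding):.3f}")
```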

    DEEPFAKE DETECTION
    Published Application (Pending)

    Publication Number: US20240363103A1

    Publication Date: 2024-10-31

    Application Number: US18388412

    Filing Date: 2023-11-09

    CPC classification number: G10L15/08

    Abstract: Disclosed are systems and methods, including software processes executed by a server, that detect audio-based synthetic speech ("deepfakes") in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns that indicate synthetic speech. Additionally or alternatively, the server executes a voice "liveness" detection system for detecting machine speech, such as synthetic or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection on call audio signals to assess the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
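
    Background change detection is described only at a high level. One plausible reading, sketched below, compares a spectral fingerprint of the acoustic background across call segments and flags an abrupt change, which can indicate spliced or replayed audio; the framing, fingerprint, and threshold are all assumptions for illustration:

```python
import numpy as np

def background_profile(frames: np.ndarray) -> np.ndarray:
    """Average magnitude spectrum over (assumed non-speech) frames,
    used as a rough fingerprint of the acoustic background."""
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    profile = spectra.mean(axis=0)
    return profile / (np.linalg.norm(profile) + 1e-9)

def background_changed(seg_a: np.ndarray, seg_b: np.ndarray, threshold: float = 0.9) -> bool:
    """Flag a change when the profiles' cosine similarity drops below a
    placeholder threshold; a real system would tune this on labeled calls."""
    similarity = float(background_profile(seg_a) @ background_profile(seg_b))
    return similarity < threshold

rng = np.random.default_rng(1)
seg_a = rng.normal(size=(50, 512))  # 50 frames of 512 samples: one background
tone = 3.0 * np.sin(np.linspace(0.0, 20.0, 512))
seg_b = rng.normal(size=(50, 512)) + tone  # same noise plus a new background tone

print(background_changed(seg_a, seg_b))  # True: the background fingerprint shifted
```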

    DEEPFAKE DETECTION
    Published Application (Pending)

    Publication Number: US20240355322A1

    Publication Date: 2024-10-24

    Application Number: US18388428

    Filing Date: 2023-11-09

    CPC classification number: G10L15/08 G06N20/00

    Abstract: Disclosed are systems and methods, including software processes executed by a server, that detect audio-based synthetic speech ("deepfakes") in a call conversation. The server applies an NLP engine to transcribe call audio and analyze the text for anomalous patterns that indicate synthetic speech. Additionally or alternatively, the server executes a voice "liveness" detection system for detecting machine speech, such as synthetic or replayed speech. The system performs phrase repetition detection, background change detection, and passive voice liveness detection on call audio signals to assess the liveness of a speech utterance. An automated model update module allows the liveness detection model to adapt to new types of presentation attacks based on human-provided feedback.
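
    The automated model update module is described only as feedback-driven. A minimal sketch of such a loop follows, assuming a simple accumulate-then-retrain policy; the class name, threshold, and retraining trigger are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class LivenessModelUpdater:
    """Collect analyst feedback on liveness decisions and trigger retraining
    once enough corrected examples accumulate. All names and the batch-size
    policy are illustrative, not from the patent."""
    feedback: list[tuple[str, bool]] = field(default_factory=list)
    retrain_batch_size: int = 100

    def add_feedback(self, call_id: str, was_live: bool) -> None:
        self.feedback.append((call_id, was_live))
        if len(self.feedback) >= self.retrain_batch_size:
            self.retrain()

    def retrain(self) -> None:
        # Placeholder: a real system would fine-tune the liveness model on the
        # corrected examples so it adapts to new presentation attacks.
        print(f"retraining on {len(self.feedback)} human-labeled calls")
        self.feedback.clear()

updater = LivenessModelUpdater(retrain_batch_size=2)
updater.add_feedback("call-001", was_live=False)  # analyst flags a missed deepfake
updater.add_feedback("call-002", was_live=True)   # triggers the retraining step
```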

    SPEAKER EMBEDDING CONVERSION FOR BACKWARD AND CROSS-CHANNEL COMPATIBILITY

    Publication Number: US20230005486A1

    Publication Date: 2023-01-05

    Application Number: US17855149

    Filing Date: 2022-06-30

    Abstract: Embodiments include a computer executing a voice-biometric machine-learning architecture for speaker recognition. The machine-learning architecture includes embedding extractors that extract embeddings for enrolling or verifying inbound speakers, and embedding convertors that convert enrollment voiceprints from a first type of embedding to a second type of embedding. The embedding convertor maps the feature vector space of the first type of embedding to the feature vector space of the second type of embedding. The embedding convertor takes as input enrollment embeddings of the first type and generates as output converted enrolled embeddings, which are aggregated into a converted enrolled voiceprint of the second type. To verify an inbound speaker, a second embedding extractor generates an inbound voiceprint of the second type of embedding, and scoring layers determine the similarity between the inbound voiceprint and the converted enrolled voiceprint, both of which are the second type of embedding.
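
    A minimal sketch of the conversion-and-verification flow follows. An untrained single-layer mapping stands in for the learned embedding convertor, and mean aggregation plus cosine scoring are assumed as conventional choices; the dimensions and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
DIM_A, DIM_B = 128, 256  # illustrative dimensions of the two embedding types

# Stand-in for the trained convertor network that maps the first embedding
# space onto the second; here just one random affine layer with tanh.
W = rng.normal(scale=0.1, size=(DIM_B, DIM_A))
b = np.zeros(DIM_B)

def convert(embedding_a: np.ndarray) -> np.ndarray:
    """Map a first-type embedding into the second embedding space."""
    out = np.tanh(W @ embedding_a + b)
    return out / np.linalg.norm(out)

# Convert each legacy enrollment embedding, then aggregate into a converted
# enrolled voiceprint in the new space.
legacy_embeddings = rng.normal(size=(3, DIM_A))
converted = np.stack([convert(e) for e in legacy_embeddings])
converted_voiceprint = converted.mean(axis=0)
converted_voiceprint /= np.linalg.norm(converted_voiceprint)

# Verify an inbound speaker: the second extractor yields a second-type
# embedding, scored against the converted voiceprint by cosine similarity.
inbound_voiceprint = rng.normal(size=DIM_B)
inbound_voiceprint /= np.linalg.norm(inbound_voiceprint)
print(f"similarity: {float(converted_voiceprint @ inbound_voiceprint):.3f}")
```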
