BI-DIRECTIONAL RECURRENT ENCODERS WITH MULTI-HOP ATTENTION FOR SPEECH EMOTION RECOGNITION

    Publication No.: US20220076693A1

    Publication Date: 2022-03-10

    Application No.: US17526810

    Filing Date: 2021-11-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.
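
    The abstract above describes fusing an audio feature vector with a textual feature vector through a neural attention mechanism before classifying the emotion. The sketch below is a minimal illustration of that idea, assuming NumPy, random stand-in features, a two-hop attention structure, and a toy linear classifier; none of the dimensions, weights, or the exact hop count come from the patent itself.

```python
# Minimal NumPy sketch of attention-based audio/text fusion for emotion
# classification, loosely following the abstract above. All dimensions,
# weights, and the two-hop structure are illustrative assumptions, not
# the patented architecture.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_hop(query, keys):
    """One attention hop: blend the key vectors using the query."""
    scores = keys @ query                      # (T,)
    weights = softmax(scores)                  # attention over time steps
    return weights @ keys                      # weighted sum -> (d,)

rng = np.random.default_rng(0)
d = 64                                         # shared feature size (assumed)
audio_frames = rng.normal(size=(120, d))       # per-frame audio features
text_vector = rng.normal(size=d)               # utterance-level text feature

# Hop 1: the text vector queries the audio frames; hop 2: the blended
# vector re-queries the audio frames.
hidden = attention_hop(text_vector, audio_frames)
hidden = attention_hop(hidden, audio_frames)

# Classify the hidden feature vector into candidate emotions.
emotions = ["angry", "happy", "sad", "neutral"]
W = rng.normal(size=(len(emotions), d))        # stand-in classifier weights
probs = softmax(W @ hidden)
print(emotions[int(np.argmax(probs))], probs.round(3))
```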

    UTILIZING BI-DIRECTIONAL RECURRENT ENCODERS WITH MULTI-HOP ATTENTION FOR SPEECH EMOTION RECOGNITION

    Publication No.: US20210050033A1

    Publication Date: 2021-02-18

    Application No.: US16543342

    Filing Date: 2019-08-16

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.

    Generating synthetic code-switched data for training language models

    Publication No.: US12242820B2

    Publication Date: 2025-03-04

    Application No.: US17651555

    Filing Date: 2022-02-17

    Applicant: Adobe Inc.

    Abstract: Techniques for training a language model for code switching content are disclosed. Such techniques include, in some embodiments, generating a dataset, which includes identifying one or more portions within textual content in a first language, the identified one or more portions each including one or more of offensive content or non-offensive content; translating the identified one or more salient portions to a second language; and reintegrating the translated one or more portions into the textual content to generate code-switched textual content. In some cases, the textual content in the first language includes offensive content and non-offensive content, the identified one or more portions include the offensive content, and the translated one or more portions include a translated version of the offensive content. In some embodiments, the code-switched textual content is at least part of a synthetic dataset usable to train a language model, such as a multilingual classification model.
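
    The pipeline in this abstract (identify portions, translate them, reintegrate them) can be illustrated with a short sketch. Everything below is assumed for illustration: the toy offensive-word lexicon, the `translate` stub, and the token-level span detection; a real system would rely on a trained classifier and a machine-translation service.

```python
# Hedged sketch of the code-switching data pipeline described above:
# flag candidate spans in English text, translate only those spans, and
# splice the translations back in. The offensive-word list and the
# `translate` stub are placeholders, not the patented components.
from typing import List

OFFENSIVE = {"idiot", "stupid"}                # toy lexicon (assumption)

def find_spans(tokens: List[str]) -> List[int]:
    """Return indices of tokens to code-switch (here: flagged words)."""
    return [i for i, tok in enumerate(tokens) if tok.lower() in OFFENSIVE]

def translate(word: str, target_lang: str = "es") -> str:
    """Stand-in for a machine-translation call."""
    toy_dict = {"idiot": "idiota", "stupid": "estúpido"}
    return toy_dict.get(word.lower(), word)

def code_switch(sentence: str) -> str:
    tokens = sentence.split()
    for i in find_spans(tokens):
        tokens[i] = translate(tokens[i])       # reintegrate translated span
    return " ".join(tokens)

print(code_switch("you are such an idiot sometimes"))
# -> "you are such an idiota sometimes"
```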

    Bi-directional recurrent encoders with multi-hop attention for speech emotion recognition

    Publication No.: US12236975B2

    Publication Date: 2025-02-25

    Application No.: US17526810

    Filing Date: 2021-11-15

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer readable media for determining speech emotion. In particular, a speech emotion recognition system generates an audio feature vector and a textual feature vector for a sequence of words. Further, the speech emotion recognition system utilizes a neural attention mechanism that intelligently blends together the audio feature vector and the textual feature vector to generate attention output. Using the attention output, which includes consideration of both audio and text modalities for speech corresponding to the sequence of words, the speech emotion recognition system can apply attention methods to one of the feature vectors to generate a hidden feature vector. Based on the hidden feature vector, the speech emotion recognition system can generate a speech emotion probability distribution of emotions among a group of candidate emotions, and then select one of the candidate emotions as corresponding to the sequence of words.

    Image captioning
    Invention grant

    Publication No.: US12210825B2

    Publication Date: 2025-01-28

    Application No.: US17455533

    Filing Date: 2021-11-18

    Applicant: ADOBE INC.

    Abstract: Systems and methods for image captioning are described. One or more aspects of the systems and methods include generating a training caption for a training image using an image captioning network; encoding the training caption using a multi-modal encoder to obtain an encoded training caption; encoding the training image using the multi-modal encoder to obtain an encoded training image; computing a reward function based on the encoded training caption and the encoded training image; and updating parameters of the image captioning network based on the reward function.
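
    The abstract describes scoring a generated caption against its image with a multi-modal encoder and using that score as a reward for updating the captioning network. Below is a hedged sketch of the reward computation only, with placeholder encoders and a baseline caption for variance reduction; the actual multi-modal model and update rule are not specified here.

```python
# Illustrative sketch of the reward computation described above: embed the
# generated caption and the image with a shared multi-modal encoder and use
# their similarity as a reward (e.g. for a policy-gradient update of the
# captioner). The random "encoders" below are placeholders for a real model.
import numpy as np

rng = np.random.default_rng(1)
embed_dim = 256

def encode_text(caption: str) -> np.ndarray:
    """Placeholder text branch of the multi-modal encoder."""
    local = np.random.default_rng(abs(hash(caption)) % (2**32))
    return local.normal(size=embed_dim)

def encode_image(image: np.ndarray) -> np.ndarray:
    """Placeholder image branch of the multi-modal encoder."""
    return image.mean(axis=(0, 1)) @ rng.normal(size=(3, embed_dim))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

image = rng.random(size=(224, 224, 3))
generated = "a dog running on the beach"
baseline = "a photo"                           # baseline caption (assumed)

# Reward = similarity(caption, image); advantage relative to the baseline.
reward = cosine(encode_text(generated), encode_image(image))
advantage = reward - cosine(encode_text(baseline), encode_image(image))
print(f"reward={reward:.3f} advantage={advantage:+.3f}")
```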

    Intent detection
    Invention grant

    Publication No.: US12182524B2

    Publication Date: 2024-12-31

    Application No.: US17453562

    Filing Date: 2021-11-04

    Applicant: ADOBE INC.

    Abstract: Systems and methods for natural language processing are described. One or more aspects of a method, apparatus, and non-transitory computer readable medium include receiving a text phrase; encoding the text phrase using an encoder to obtain a hidden representation of the text phrase, wherein the encoder is trained during a first training phase using self-supervised learning based on a first contrastive loss and during a second training phase using supervised learning based on a second contrastive loss; identifying an intent of the text phrase from a predetermined set of intent labels using a classification network, wherein the classification network is jointly trained with the encoder in the second training phase; and generating a response to the text phrase based on the intent.
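
    The abstract mentions two training phases with two contrastive losses. As a rough illustration, the sketch below computes a self-supervised contrastive loss over paired views and a supervised contrastive loss over shared intent labels, using random unit-normalized embeddings as stand-ins for the encoder's hidden representations; the temperature, batch layout, and labels are assumptions.

```python
# Minimal sketch of the two contrastive losses mentioned above: a
# self-supervised loss over augmented view pairs (phase one) and a
# supervised contrastive loss that pulls together examples sharing an
# intent label (phase two). Embeddings are random stand-ins.
import numpy as np

def contrastive_loss(logits, positive_mask):
    """Mean negative log-probability of each row's positives."""
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    masked = np.where(positive_mask > 0, log_prob, 0.0)
    pos_counts = positive_mask.sum(axis=1)
    return -masked.sum(axis=1) / np.maximum(pos_counts, 1)

rng = np.random.default_rng(2)
n, d, temp = 8, 32, 0.1
z = rng.normal(size=(n, d))
z = z / np.linalg.norm(z, axis=1, keepdims=True)        # unit-normalize
sim = (z @ z.T) / temp
np.fill_diagonal(sim, -np.inf)                          # exclude self-pairs

# Phase 1 (self-supervised): rows 0..3 and 4..7 are two views of the same texts.
ssl_mask = np.zeros((n, n))
for i in range(4):
    ssl_mask[i, i + 4] = ssl_mask[i + 4, i] = 1.0

# Phase 2 (supervised): examples with the same intent label are positives.
labels = np.array([0, 0, 1, 1, 0, 0, 1, 1])
sup_mask = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(sup_mask, 0.0)

print("phase-1 loss:", contrastive_loss(sim, ssl_mask).mean().round(3))
print("phase-2 loss:", contrastive_loss(sim, sup_mask).mean().round(3))
```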

    MULTILINGUAL SEMANTIC SEARCH UTILIZING META-DISTILLATION LEARNING

    Publication No.: US20250068924A1

    Publication Date: 2025-02-27

    Application No.: US18449291

    Filing Date: 2023-08-14

    Applicant: Adobe Inc.

    Abstract: The present disclosure relates to systems, non-transitory computer-readable media, and methods for providing multilingual semantic search results utilizing meta-learning and knowledge distillation. For example, in some implementations, the disclosed systems perform a first inner learning loop for a monolingual to bilingual meta-learning task for a teacher model. Additionally, in some implementations, the disclosed systems perform a second inner learning loop for a bilingual to multilingual meta-learning task for a student model. In some embodiments, the disclosed systems perform knowledge distillation based on the first inner learning loop for the monolingual to bilingual meta-learning task and the second inner learning loop for the bilingual to multilingual meta-learning task. Moreover, in some embodiments, the disclosed systems perform an outer learning loop and update parameters of a deep learning language model based on the first inner learning loop, the second inner learning loop, and the knowledge distillation.
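
    The abstract outlines two inner learning loops (teacher and student), a knowledge-distillation term, and an outer update. The toy loop below mirrors that structure with linear encoders, squared-error stand-ins for the retrieval and distillation losses, a frozen teacher, and a first-order outer step; all model sizes, learning rates, and loss forms are illustrative assumptions rather than the disclosed method.

```python
# Schematic sketch of the meta-distillation structure described above:
# inner loop 1 adapts the teacher (monolingual -> bilingual task), inner
# loop 2 adapts the student (bilingual -> multilingual task), a distillation
# term ties the student's scores to the teacher's, and an outer step updates
# the student. Linear "encoders" and squared losses are toy stand-ins.
import numpy as np

rng = np.random.default_rng(3)
d, lr_inner, lr_outer, kd_weight = 16, 0.1, 0.05, 0.5

teacher = rng.normal(size=(d, d)) * 0.1        # toy encoder weights
student = rng.normal(size=(d, d)) * 0.1

def retrieval_loss(W, queries, docs):
    """Toy task loss: push matched query/doc embeddings together."""
    return float(((queries @ W - docs @ W) ** 2).mean())

def task_grad(W, queries, docs):
    diff = queries @ W - docs @ W              # analytic gradient of the toy loss
    return (queries - docs).T @ diff * (2.0 / diff.size)

def distill_grad(W_student, W_teacher, queries):
    diff = queries @ W_student - queries @ W_teacher
    return queries.T @ diff * (2.0 / diff.size)

q_mono, d_bi = rng.normal(size=(8, d)), rng.normal(size=(8, d))
q_bi, d_multi = rng.normal(size=(8, d)), rng.normal(size=(8, d))

for step in range(3):                          # outer loop
    # Inner loop 1: teacher adapts on the monolingual -> bilingual task.
    t_fast = teacher - lr_inner * task_grad(teacher, q_mono, d_bi)
    # Inner loop 2: student adapts on the bilingual -> multilingual task.
    s_fast = student - lr_inner * task_grad(student, q_bi, d_multi)
    # Knowledge distillation: match the student's scores to the teacher's.
    kd_loss = float(((q_bi @ s_fast - q_bi @ t_fast) ** 2).mean())
    task_loss = retrieval_loss(s_fast, q_bi, d_multi)
    # Outer update (first-order): combine task and distillation gradients.
    outer = task_grad(s_fast, q_bi, d_multi) + kd_weight * distill_grad(s_fast, t_fast, q_bi)
    student = student - lr_outer * outer
    print(f"step {step}: task={task_loss:.3f} distill={kd_loss:.3f}")
```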

    PERFORMING VIDEO MOMENT RETRIEVAL UTILIZING DEEP LEARNING

    Publication No.: US20250028758A1

    Publication Date: 2025-01-23

    Application No.: US18354833

    Filing Date: 2023-07-19

    Applicant: Adobe Inc.

    Inventor: Seunghyun Yoon

    Abstract: The present disclosure relates to systems, methods, and non-transitory computer-readable media that learn parameters for a natural language video localization model utilizing a curated dataset. In particular, in some embodiments, the disclosed systems generate a set of similarity scores between a target query and a video dataset that includes a plurality of digital videos. For instance, the disclosed systems determine a false-negative threshold by utilizing the set of similarity scores to exclude a subset of false-negative samples from the plurality of digital videos. Further, the disclosed systems determine a negative sample distribution and generate a curated dataset that includes a subset of negative samples with the subset of false-negative samples excluded.
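
    The curation procedure in this abstract (score videos against the query, exclude likely false negatives above a threshold, then sample negatives from a distribution over the rest) can be sketched as follows. The random embeddings, cosine scoring, 95th-percentile threshold, and score-weighted sampling are assumptions made for the example, not values from the disclosure.

```python
# Hedged sketch of the dataset-curation idea above: score every video in the
# pool against the target query, drop videos that look too similar (likely
# false negatives), and sample training negatives from what remains.
import numpy as np

rng = np.random.default_rng(4)
n_videos, d = 1000, 128

query = rng.normal(size=d)
videos = rng.normal(size=(n_videos, d))        # stand-in video embeddings

def cosine_scores(q, V):
    q = q / np.linalg.norm(q)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ q

scores = cosine_scores(query, videos)

# False-negative threshold: anything scoring in the top 5% is too close to
# the query to be trusted as a negative, so exclude it from the pool.
threshold = np.quantile(scores, 0.95)
candidate_mask = scores < threshold

# Negative sample distribution over the remaining pool (here, harder
# negatives get slightly higher probability; uniform would also work).
pool = np.flatnonzero(candidate_mask)
weights = np.exp(scores[pool])
weights = weights / weights.sum()
negatives = rng.choice(pool, size=32, replace=False, p=weights)

print(f"excluded {int((~candidate_mask).sum())} likely false negatives, "
      f"sampled {len(negatives)} negatives")
```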
