GENERATIVE LANGUAGE MODEL FOR FEW-SHOT ASPECT-BASED SENTIMENT ANALYSIS

    Publication Number: US20220366145A1

    Publication Date: 2022-11-17

    Application Number: US17468950

    Filing Date: 2021-09-08

    Abstract: Sentiment analysis is a task in natural language processing. The embodiments are directed to using a generative language model to extract an aspect term, an aspect category, and their corresponding polarities. The generative language model may be trained as a single-task, joint-task, or multi-task model. The single-task generative language model determines a term polarity from the aspect term in the sentence or a category polarity from an aspect category in the sentence. The joint-task generative language model determines both the aspect term and the term polarity, or the aspect category and the category polarity. The multi-task generative language model determines the aspect term, term polarity, aspect category, and category polarity of the sentence.
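
    For illustration only, a minimal sketch of the joint-task formulation is given below, assuming an off-the-shelf Hugging Face causal language model prompted with a hand-written few-shot example; the gpt2 checkpoint, the prompt wording, and the output format are assumptions made for the sketch, not the trained generative model described in the patent.

        # Joint-task sketch: the model generates "<aspect term>: <polarity>" pairs for a sentence.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        def extract_aspect_and_polarity(sentence: str) -> str:
            # Few-shot prompt: one worked example, then the target sentence.
            prompt = (
                "Sentence: The pasta was great but the service was slow.\n"
                "Aspects and polarities: pasta: positive; service: negative\n"
                f"Sentence: {sentence}\n"
                "Aspects and polarities:"
            )
            inputs = tokenizer(prompt, return_tensors="pt")
            with torch.no_grad():
                output_ids = model.generate(
                    **inputs,
                    max_new_tokens=20,
                    do_sample=False,
                    pad_token_id=tokenizer.eos_token_id,
                )
            continuation = output_ids[0, inputs["input_ids"].shape[1]:]
            return tokenizer.decode(continuation, skip_special_tokens=True).strip()

        print(extract_aspect_and_polarity("The battery life is excellent."))

    In practice the generative model would be fine-tuned on aspect/polarity targets so that the generated continuation reliably follows this format; an un-tuned checkpoint is used here only to keep the sketch self-contained.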

    Systems and methods for out-of-distribution classification

    Publication Number: US11481636B2

    Publication Date: 2022-10-25

    Application Number: US16877325

    Filing Date: 2020-05-18

    Abstract: An embodiment provided herein preprocesses the input samples to the classification neural network, e.g., by adding Gaussian noise to word/sentence representations so that the function of the neural network satisfies the Lipschitz property, i.e., a small change in the input does not cause a large change in the output if the input sample is in-distribution. A method induces properties in the feature representation of the neural network such that, for out-of-distribution examples, the feature representation magnitude is close to zero or the feature representation is orthogonal to all class representations. A method generates examples that are structurally similar to in-domain examples but semantically out-of-domain, for use in out-of-domain classification training. A method prunes feature representation dimensions to mitigate the long-tail error of unused dimensions in out-of-domain classification. Using these techniques, the accuracy of both in-domain and out-of-distribution identification can be improved.
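
    As a rough illustration of two of the ideas above, the sketch below adds Gaussian noise to sentence representations during training and scores possible out-of-distribution inputs by feature magnitude and similarity to the class weight vectors; the dimensions, noise scale, and scoring rule are assumptions made for the sketch, not the patented method.

        # Gaussian-noise augmentation of sentence representations plus a simple
        # out-of-distribution score: low feature norm or low similarity to every
        # class vector suggests an out-of-distribution input.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class NoisyClassifier(nn.Module):
            def __init__(self, in_dim=768, feat_dim=256, num_classes=5, sigma=0.1):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
                self.classifier = nn.Linear(feat_dim, num_classes, bias=False)
                self.sigma = sigma

            def forward(self, sentence_repr):
                feat = self.encoder(sentence_repr)
                if self.training:
                    # Gaussian noise on the representation during training only.
                    feat = feat + self.sigma * torch.randn_like(feat)
                return self.classifier(feat)

            @torch.no_grad()
            def ood_score(self, sentence_repr):
                feat = self.encoder(sentence_repr)
                cos = F.cosine_similarity(
                    feat.unsqueeze(1), self.classifier.weight.unsqueeze(0), dim=-1
                )
                # Lower score => more likely out-of-distribution.
                return feat.norm(dim=-1) * cos.max(dim=-1).values

        model = NoisyClassifier()
        logits = model(torch.randn(4, 768))            # noisy training-style forward pass
        model.eval()
        scores = model.ood_score(torch.randn(4, 768))  # OOD scores for a test batch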

    EFFICIENT DETERMINATION OF USER INTENT FOR NATURAL LANGUAGE EXPRESSIONS BASED ON MACHINE LEARNING

    Publication Number: US20210374353A1

    Publication Date: 2021-12-02

    Application Number: US17005316

    Filing Date: 2020-08-28

    Abstract: An online system allows user interactions using natural language expressions. The online system uses a machine learning based model to infer the intent represented by a user expression. The machine learning based model takes as input a user expression and an example expression and computes a score indicating whether the user expression matches the example expression. Based on the scores, an intent inference module determines the most applicable intent for the expression. The online system determines a confidence threshold such that user expressions with high confidence are assigned the most applicable intent and user expressions with low confidence are assigned an out-of-scope intent. The online system encodes the example expressions using the machine learning based model. The online system may compare an encoded user expression with the encoded example expressions to identify a subset of example expressions used to determine the most applicable intent.
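
    A minimal sketch of the scoring-and-threshold flow is shown below; the hashed bag-of-words encoder, the example expressions, and the 0.7 confidence threshold are placeholders for the trained machine learning based model and threshold selection described in the abstract.

        # Encode example expressions once, score a user expression against them,
        # and fall back to an out-of-scope intent when confidence is low.
        import torch
        import torch.nn.functional as F

        VOCAB_DIM = 512

        def encode(text: str) -> torch.Tensor:
            # Hashed bag-of-words stand-in for a trained sentence encoder.
            vec = torch.zeros(VOCAB_DIM)
            for token in text.lower().split():
                vec[hash(token) % VOCAB_DIM] += 1.0
            return F.normalize(vec, dim=0)

        examples = {
            "check_balance": ["what is my account balance", "show my balance"],
            "reset_password": ["i forgot my password", "reset my password please"],
        }

        # Example expressions are encoded ahead of time.
        encoded = [(intent, encode(e)) for intent, exprs in examples.items() for e in exprs]

        def infer_intent(user_expression: str, threshold: float = 0.7) -> str:
            user_vec = encode(user_expression)
            scores = torch.stack([torch.dot(user_vec, vec) for _, vec in encoded])
            best = int(scores.argmax())
            # Low confidence => assign the out-of-scope intent.
            return encoded[best][0] if scores[best] >= threshold else "out_of_scope"

        print(infer_intent("please reset my password"))  # close to a reset_password example
        print(infer_intent("order a pizza"))             # no close example => out_of_scope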

    Systems and Methods for Out-of-Distribution Classification

    Publication Number: US20210150365A1

    Publication Date: 2021-05-20

    Application Number: US16877325

    Filing Date: 2020-05-18

    Abstract: An embodiment provided herein preprocesses the input samples to the classification neural network, e.g., by adding Gaussian noise to word/sentence representations so that the function of the neural network satisfies the Lipschitz property, i.e., a small change in the input does not cause a large change in the output if the input sample is in-distribution. A method induces properties in the feature representation of the neural network such that, for out-of-distribution examples, the feature representation magnitude is close to zero or the feature representation is orthogonal to all class representations. A method generates examples that are structurally similar to in-domain examples but semantically out-of-domain, for use in out-of-domain classification training. A method prunes feature representation dimensions to mitigate the long-tail error of unused dimensions in out-of-domain classification. Using these techniques, the accuracy of both in-domain and out-of-distribution identification can be improved.

    Parameter utilization for language pre-training

    Publication Number: US12072955B2

    Publication Date: 2024-08-27

    Application Number: US17532851

    Filing Date: 2021-11-22

    CPC classification number: G06F18/2148 G06F18/2163 G06F40/00

    Abstract: Embodiments are directed to pre-training a transformer model using more parameters for sophisticated patterns (PSP++). The transformer model is divided into a held-out model and a main model. A forward pass and a backward pass are performed on the held-out model, where the forward pass determines the self-attention hidden states of the held-out model and the backward pass determines the loss of the held-out model. A forward pass is performed on the main model to determine the self-attention hidden states of the main model. The self-attention hidden states of the main model are concatenated with the self-attention hidden states of the held-out model. A backward pass is performed on the main model to determine the loss of the main model. The parameters of the held-out model are updated to reflect the loss of the held-out model, and the parameters of the main model are updated to reflect the loss of the main model.
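
    The sketch below shows one possible reading of this training step in plain PyTorch, with a held-out encoder layer, a main encoder layer, and concatenation of their hidden states before the main model's prediction head; the layer sizes, the toy objective, and the separate optimizers are assumptions for the sketch, not the PSP++ configuration.

        # One illustrative training step: forward/backward on the held-out model,
        # forward on the main model, concatenation of hidden states, backward on
        # the main model, and separate parameter updates.
        import torch
        import torch.nn as nn

        d_model, vocab = 128, 1000
        embed = nn.Embedding(vocab, d_model)
        held_out = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        main = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        head_held = nn.Linear(d_model, vocab)        # predicts from held-out states
        head_main = nn.Linear(2 * d_model, vocab)    # predicts from concatenated states

        opt_held = torch.optim.Adam([*held_out.parameters(), *head_held.parameters()], lr=1e-4)
        opt_main = torch.optim.Adam([*main.parameters(), *embed.parameters(),
                                     *head_main.parameters()], lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()

        tokens = torch.randint(0, vocab, (8, 32))    # toy batch of token ids
        x = embed(tokens)

        # Forward and backward pass on the held-out model.
        h_held = held_out(x)
        loss_held = loss_fn(head_held(h_held).view(-1, vocab), tokens.view(-1))
        opt_held.zero_grad()
        loss_held.backward(retain_graph=True)        # keep the shared embedding graph
        opt_held.step()

        # Forward pass on the main model; concatenate its hidden states with the
        # (detached) held-out hidden states, then backward and update the main model.
        h_main = main(x)
        h_cat = torch.cat([h_main, h_held.detach()], dim=-1)
        loss_main = loss_fn(head_main(h_cat).view(-1, vocab), tokens.view(-1))
        opt_main.zero_grad()
        loss_main.backward()
        opt_main.step()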

    Generative language model for few-shot aspect-based sentiment analysis

    Publication Number: US11853706B2

    Publication Date: 2023-12-26

    Application Number: US17468950

    Filing Date: 2021-09-08

    CPC classification number: G06F40/30 G06F40/284 G06N3/04 G06N3/08

    Abstract: Sentiment analysis is a task in natural language processing. The embodiments are directed to using a generative language model to extract an aspect term, an aspect category, and their corresponding polarities. The generative language model may be trained as a single-task, joint-task, or multi-task model. The single-task generative language model determines a term polarity from the aspect term in the sentence or a category polarity from an aspect category in the sentence. The joint-task generative language model determines both the aspect term and the term polarity, or the aspect category and the category polarity. The multi-task generative language model determines the aspect term, term polarity, aspect category, and category polarity of the sentence.

    PARAMETER UTILIZATION FOR LANGUAGE PRE-TRAINING

    Publication Number: US20220391640A1

    Publication Date: 2022-12-08

    Application Number: US17532851

    Filing Date: 2021-11-22

    Abstract: Embodiments are directed to pre-training a transformer model using more parameters for sophisticated patterns (PSP++). The transformer model is divided into a held-out model and a main model. A forward pass and a backward pass are performed on the held-out model, where the forward pass determines the self-attention hidden states of the held-out model and the backward pass determines the loss of the held-out model. A forward pass is performed on the main model to determine the self-attention hidden states of the main model. The self-attention hidden states of the main model are concatenated with the self-attention hidden states of the held-out model. A backward pass is performed on the main model to determine the loss of the main model. The parameters of the held-out model are updated to reflect the loss of the held-out model, and the parameters of the main model are updated to reflect the loss of the main model.

    SYSTEMS AND METHODS FOR FEW-SHOT INTENT CLASSIFIER MODELS

    Publication Number: US20220366893A1

    Publication Date: 2022-11-17

    Application Number: US17534008

    Filing Date: 2021-11-23

    Abstract: Some embodiments of the current disclosure disclose methods and systems for training a natural language processing intent classification model to perform few-shot classification tasks. In some embodiments, a pair consisting of an utterance and a first semantic label labeling the utterance may be generated, and a neural network that is configured to perform natural language inference tasks may be utilized to determine whether an entailment relationship exists between the utterance and the semantic label. The semantic label may be predicted as the intent class of the utterance based on the entailment relationship, and the pair may be used to train the natural language processing intent classification model to perform few-shot classification tasks.
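
    For illustration, the sketch below predicts an intent by scoring entailment between an utterance (the premise) and a natural-language description of each candidate intent (the hypothesis), using an off-the-shelf NLI checkpoint; the roberta-large-mnli model and the hypothesis template are assumptions for the sketch, not the training procedure of the disclosed intent classification model.

        # Zero-shot-style intent prediction via NLI entailment scores.
        import torch
        from transformers import AutoModelForSequenceClassification, AutoTokenizer

        name = "roberta-large-mnli"
        tokenizer = AutoTokenizer.from_pretrained(name)
        model = AutoModelForSequenceClassification.from_pretrained(name)
        model.eval()

        # Index of the "entailment" class, read from the checkpoint's own config.
        entail_id = next(i for label, i in model.config.label2id.items()
                         if "entail" in label.lower())

        def predict_intent(utterance: str, intent_labels: list) -> str:
            scores = {}
            for label in intent_labels:
                hypothesis = f"The user wants to {label}."   # illustrative template
                inputs = tokenizer(utterance, hypothesis,
                                   return_tensors="pt", truncation=True)
                with torch.no_grad():
                    logits = model(**inputs).logits
                # Probability that the utterance entails the intent description.
                scores[label] = logits.softmax(dim=-1)[0, entail_id].item()
            return max(scores, key=scores.get)

        print(predict_intent("my card was charged twice",
                             ["dispute a charge", "reset a password", "book a flight"]))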
