Textual explanations for abstract syntax trees with scored nodes

    Publication Number: US12260306B2

    Publication Date: 2025-03-25

    Application Number: US17891350

    Application Date: 2022-08-19

    Abstract: Herein is a machine learning (ML) explainability (MLX) approach in which a natural language explanation is generated from analysis of a parse tree, such as the parse tree of a suspicious database query or of web browser JavaScript. In an embodiment, a computer selects a relevant subset of non-leaf nodes based on a respective relevance score for each non-leaf node in the parse tree of a statement. The non-leaf nodes are grouped in the parse tree into groups that represent respective portions of the statement. Based on the relevant subset of groups, i.e. those that contain at least one relevant non-leaf node, a natural language explanation of why the statement is anomalous is generated.
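
    Below is a minimal Python sketch of the selection-and-grouping step the abstract describes. The Node structure, the score threshold, and the sentence template are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                     # grammar rule, e.g. "WHERE_CLAUSE"
    text: str                      # portion of the statement this node covers
    score: float = 0.0             # relevance score from an anomaly detector
    children: list = field(default_factory=list)

def non_leaf_nodes(root):
    """Yield every node in the parse tree that has at least one child."""
    stack = [root]
    while stack:
        node = stack.pop()
        if node.children:
            yield node
            stack.extend(node.children)

def explain(root, threshold=0.5):
    """Select the relevant subset of non-leaf nodes, group them by the
    statement portion they cover, and emit one sentence per relevant group."""
    relevant = [n for n in non_leaf_nodes(root) if n.score >= threshold]
    groups = {}
    for node in relevant:
        groups.setdefault(node.text, []).append(node.label)
    return [
        f"The fragment '{text}' contributed to the anomaly "
        f"(matched rules: {', '.join(labels)})."
        for text, labels in groups.items()
    ]
```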

    ENCODING LOG-SPECIFIC ATTRIBUTES WITH NLP MODELS

    Publication Number: US20250021759A1

    Publication Date: 2025-01-16

    Application Number: US18219763

    Application Date: 2023-07-10

    Abstract: Herein is natural language processing (NLP) to detect an anomalous log entry using a language model that infers an encoding of the log entry from novel generation of numeric lexical tokens. In an embodiment, a computer extracts an original numeric lexical token from a variable sized log entry. Substitute numeric lexical token(s) that represent the original numeric lexical token are generated, such as with a numeric exponent or by trigonometry. The log entry does not contain the substitute numeric lexical token. A novel sequence of lexical tokens that represents the log entry and contains the substitute numeric lexical token is generated. The novel sequence of lexical tokens does not contain the original numeric lexical token. The computer hosts and operates a machine learning model that generates, based on the novel sequence of lexical tokens that represents the log entry, an inference that characterizes the log entry with unprecedented accuracy.
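
    Below is a minimal Python sketch of the token-substitution step, assuming one exponent token and one bounded trigonometric token per original number; the token formats and the numeric regex are illustrative, not the patented encoding.

```python
import math
import re

def substitute_numeric(token):
    """Generate substitute tokens that represent an original numeric token:
    one for its magnitude (exponent) and one bounded trigonometric value."""
    value = float(token)
    exponent = 0 if value == 0 else math.floor(math.log10(abs(value)))
    mantissa = value / (10 ** exponent) if value != 0 else 0.0
    # Bounded tokens suit a language model's fixed vocabulary better
    # than raw, unbounded numbers.
    return [f"<exp_{exponent}>", f"<sin_{math.sin(mantissa):.2f}>"]

def tokenize_log_entry(entry):
    """Produce the novel token sequence: every original numeric token is
    dropped and replaced by its substitute tokens."""
    tokens = []
    for tok in entry.split():
        if re.fullmatch(r"-?\d+(\.\d+)?", tok):
            tokens.extend(substitute_numeric(tok))
        else:
            tokens.append(tok)
    return tokens

# The original token "1532" never appears in the novel sequence:
print(tokenize_log_entry("GET /index took 1532 ms"))
# ['GET', '/index', 'took', '<exp_3>', '<sin_1.00>', 'ms']
```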

    GENERAL PURPOSE SQL REPRESENTATION MODEL

    Publication Number: US20240370429A1

    Publication Date: 2024-11-07

    Application Number: US18143776

    Application Date: 2023-05-05

    Abstract: In an embodiment, a computer generates sentence fingerprints that represent respective pluralities of similar database statements. Based on the sentence fingerprints, an artificial neural network is trained. After training on a large corpus of fingerprinted database statements, the artificial neural network is ready for zero-shot transfer learning to a downstream training task. Database statement fingerprinting also anonymizes literal values in raw SQL statements, so the trained artificial neural network can be safely reused without risk of disclosing sensitive data in its vocabulary. After training, the artificial neural network infers a fixed-size encoded database statement from a new database statement. Based on the fixed-size encoded database statement, the new database statement is detected as anomalous, which increases database security and preserves database throughput by not executing the anomalous database statement.
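
    Below is a minimal Python sketch of the fingerprinting step that anonymizes literal values. The regex-based literal collapsing and the placeholder choice are illustrative simplifications; a real fingerprinter would use the SQL grammar.

```python
import re

def fingerprint(sql):
    """Collapse literal values so that similar statements share one
    fingerprint and no sensitive literal survives anonymization."""
    sql = re.sub(r"'[^']*'", "?", sql)           # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)   # numeric literals
    return " ".join(sql.split()).lower()         # normalize whitespace/case

# Two similar statements map to the same anonymized sentence fingerprint:
a = fingerprint("SELECT * FROM users WHERE id = 42")
b = fingerprint("SELECT * FROM users WHERE id = 7")
assert a == b == "select * from users where id = ?"
```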

    TRAINING SYNTAX-AWARE LANGUAGE MODELS WITH AST PATH PREDICTION

    Publication Number: US20240345815A1

    Publication Date: 2024-10-17

    Application Number: US18202564

    Application Date: 2023-05-26

    CPC classification number: G06F8/427

    Abstract: In an embodiment, a computer stores and operates a logic encoder that is an artificial neural network that infers a fixed-size encoded logic from textual or tokenized source logic. Without machine learning, a special parser generates a parse tree that represents the source logic and a fixed-size correctly encoded tree that represents the parse tree. For finetuning the logic encoder, an encoded tree generator is an artificial neural network that accepts the fixed-size encoded logic as input and responsively infers a fixed-size incorrectly encoded tree that represents the parse tree. The neural weights of the logic encoder (and optionally of the encoded tree generator) are adjusted based on backpropagation of error (i.e. loss) as a numerically measured difference between the fixed-size incorrectly encoded tree and the fixed-size correctly encoded tree.
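
    Below is a minimal PyTorch sketch of one finetuning step, assuming MSE as the numerically measured difference between the incorrectly and correctly encoded trees; the layer shapes and module choices are illustrative, not the patented architecture.

```python
import torch
import torch.nn as nn

EMBED, ENCODING, TREE = 64, 128, 256

# Logic encoder: tokenized source logic -> fixed-size encoded logic.
logic_encoder = nn.Sequential(
    nn.Linear(EMBED, ENCODING), nn.ReLU(), nn.Linear(ENCODING, ENCODING))
# Encoded tree generator: encoded logic -> fixed-size encoded tree.
tree_generator = nn.Linear(ENCODING, TREE)

optimizer = torch.optim.Adam(
    list(logic_encoder.parameters()) + list(tree_generator.parameters()))

tokenized_logic = torch.randn(EMBED)   # stand-in for embedded source logic
correct_tree = torch.randn(TREE)       # from the non-ML parser; fixed target

optimizer.zero_grad()
encoded_logic = logic_encoder(tokenized_logic)
predicted_tree = tree_generator(encoded_logic)  # "incorrectly encoded tree"
loss = nn.functional.mse_loss(predicted_tree, correct_tree)
loss.backward()    # backpropagate the measured difference as error
optimizer.step()   # adjust neural weights of both networks
```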

    BACKPROPAGATION-BASED EXPLAINABILITY METHOD FOR UNSUPERVISED ANOMALY DETECTION MODELS BASED ON AUTOENCODER ARCHITECTURES

    Publication Number: US20240037372A1

    Publication Date: 2024-02-01

    Application Number: US17873491

    Application Date: 2022-07-26

    CPC classification number: G06N3/0454 G06N3/088 G06N3/084

    Abstract: The present invention relates to machine learning (ML) explainability (MLX). Herein are techniques for a novel relevance propagation rule in layer-wise relevance propagation (LRP) for feature attribution-based explanation (ABX) for a reconstructive autoencoder. In an embodiment, a reconstruction layer of a reconstructive neural network in a computer generates a reconstructed tuple that is based on an original tuple that contains many features. A reconstruction residual cost function calculates a reconstruction error that measures a difference between the original tuple and the reconstructed tuple. Applied to the reconstruction error is a novel reconstruction relevance propagation rule that assigns a respective reconstruction relevance to each reconstruction neuron in the reconstruction layer. Based on the reconstruction relevance of the reconstruction neurons, a respective feature relevance of each feature is determined, from which an ABX explanation may be automatically generated.
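
    Below is a minimal numpy sketch of the relevance-assignment step for the simple case of one reconstruction neuron per feature. Distributing the total error in proportion to each neuron's squared residual is an illustrative choice, not necessarily the patented propagation rule.

```python
import numpy as np

original = np.array([1.0, 5.0, 2.0])       # original tuple of features
reconstructed = np.array([1.1, 2.0, 2.2])  # output of the reconstruction layer

residual = (original - reconstructed) ** 2
reconstruction_error = residual.sum()      # reconstruction residual cost

# Assign each reconstruction neuron a relevance proportional to its
# contribution to the total reconstruction error.
reconstruction_relevance = residual / reconstruction_error

# With one reconstruction neuron per feature, feature relevance follows
# directly; a deeper autoencoder would propagate it layer by layer (LRP).
for i, r in enumerate(reconstruction_relevance):
    print(f"feature {i}: relevance {r:.2f}")
```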

    SUPER-FEATURES FOR EXPLAINABILITY WITH PERTURBATION-BASED APPROACHES

    Publication Number: US20230334343A1

    Publication Date: 2023-10-19

    Application Number: US17719617

    Application Date: 2022-04-13

    CPC classification number: G06N5/04

    Abstract: In an embodiment, a computer hosts a machine learning (ML) model that infers a particular inference for a particular tuple that is based on many features. The features are grouped into predefined super-features that each contain a disjoint (i.e. nonintersecting, mutually exclusive) subset of features. For each super-feature, the computer: a) randomly selects many permuted values from original values of the super-feature in original tuples, b) generates permuted tuples that are based on the particular tuple and a respective permuted value, and c) causes the ML model to infer a respective permuted inference for each permuted tuple. A surrogate model is trained based on the permuted inferences. For each super-feature, a respective importance of the super-feature is calculated based on the surrogate model. Super-feature importances may be used to rank super-features by influence and/or generate a local ML explainability (MLX) explanation.
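
    Below is a minimal Python sketch of the permutation-and-surrogate loop, assuming a LIME-style binary mask over super-features, a linear surrogate, and scikit-learn-style model.predict; the sampling scheme and importance measure are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def super_feature_importance(model, X_original, x, super_features, n_perm=200):
    """model: fitted black box with .predict; X_original: original tuples;
    x: the particular tuple to explain; super_features: disjoint lists of
    column indices. Returns one importance per super-feature."""
    rng = np.random.default_rng(0)
    masks, inferences = [], []
    for _ in range(n_perm):
        permuted = x.copy()
        mask = []
        for cols in super_features:
            # Randomly select a permuted value for this super-feature
            # from its original values in the original tuples.
            donor = X_original[rng.integers(len(X_original))]
            swapped = rng.random() < 0.5
            if swapped:
                permuted[cols] = donor[cols]
            mask.append(float(swapped))
        masks.append(mask)
        inferences.append(model.predict(permuted.reshape(1, -1))[0])
    # Train the surrogate on (which super-features were permuted) -> inference.
    surrogate = LinearRegression().fit(np.array(masks), np.array(inferences))
    # A coefficient's magnitude serves as that super-feature's importance.
    return np.abs(surrogate.coef_)
```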

    PARTIAL GRAPH PATH PREDICTION AND NEXT TOKEN PREDICTION JOINT TRAINING ALGORITHM FOR GENERATIVE LANGUAGE MODELS

    Publication Number: US20250165852A1

    Publication Date: 2025-05-22

    Application Number: US18514391

    Application Date: 2023-11-20

    Abstract: During pretraining, a computer generates three untrained machine learning models that are a token sequence encoder, a token predictor, and a decoder that infers a frequency distribution of graph traversal paths. A sequence of lexical tokens is generated that represents a lexical text in a training corpus. A graph is generated that represents the lexical text. In the graph, multiple traversal paths are selected that collectively represent a sliding subsequence of the sequence of lexical tokens. From the subsequence, the token sequence encoder infers an encoded sequence that represents the subsequence of the sequence of lexical tokens. The decoder and token predictor accept the encoded sequence as input for respective inferencing for which respective training losses are measured. Both training losses are combined into a combined loss that is used to increase the accuracy of the three machine learning models by, for example, backpropagation of the combined loss.
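
    Below is a minimal PyTorch sketch of the joint training step, assuming mean-pooled embeddings as the token sequence encoder, equal loss weights, and KL divergence against the path frequency distribution; all shapes and heads are illustrative, not the patented models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN, N_PATHS = 1000, 128, 50

embed = nn.Embedding(VOCAB, HIDDEN)           # token sequence encoder
token_predictor = nn.Linear(HIDDEN, VOCAB)    # next-token prediction head
path_decoder = nn.Linear(HIDDEN, N_PATHS)     # traversal-path frequency head

optimizer = torch.optim.Adam(
    list(embed.parameters()) + list(token_predictor.parameters())
    + list(path_decoder.parameters()))

subsequence = torch.randint(VOCAB, (16,))        # sliding window of token ids
next_token = torch.randint(VOCAB, (1,))          # ground-truth next token
path_dist = F.softmax(torch.randn(N_PATHS), 0)   # ground-truth path frequencies

optimizer.zero_grad()
encoded = embed(subsequence).mean(dim=0)      # fixed-size encoded sequence
token_loss = F.cross_entropy(token_predictor(encoded).unsqueeze(0), next_token)
path_loss = F.kl_div(F.log_softmax(path_decoder(encoded), 0), path_dist,
                     reduction="sum")
combined = token_loss + path_loss   # equal weights assumed for illustration
combined.backward()                 # one backpropagation trains all three parts
optimizer.step()
```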
