SUPERVISED MODEL SELECTION VIA DIVERSITY CRITERIA

    Publication (Announcement) No.: US20250077876A1

    Publication (Announcement) Date: 2025-03-06

    Application No.: US18239416

    Application Date: 2023-08-29

    Abstract: Techniques for selecting machine-learned (ML) models using diversity criteria are provided. In one technique, for each ML model of multiple ML models, output data is generated based on input data to the ML model. Multiple pairs of ML models are identified, where each ML model in the multiple pairs is from the multiple ML models. For each pair of ML models in the multiple pairs of ML models: (1) first output data that was previously generated by a first ML model in the pair is identified; (2) second output data that was previously generated by a second ML model in the pair is identified; (3) a diversity value that is based on the first and second output data is generated; and (4) the diversity value is added to a set of diversity values. A subset of the multiple ML models is selected based on the set of diversity values.
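    As one illustration of the technique in this abstract, the Python sketch below generates a diversity value for each pair of models from their outputs and selects a subset based on the resulting set of diversity values. The disagreement-rate diversity measure and the greedy selection rule are illustrative assumptions, not details taken from the patent.

        # Sketch: pairwise diversity values over model outputs, then subset selection.
        from itertools import combinations
        import numpy as np

        def disagreement(outputs_a, outputs_b):
            """Fraction of inputs on which two models' outputs differ (assumed measure)."""
            return float(np.mean(np.asarray(outputs_a) != np.asarray(outputs_b)))

        def select_diverse_models(model_outputs, k):
            """model_outputs: dict of model name -> outputs generated on shared input data."""
            diversity = {}  # the set of diversity values, keyed by model pair
            for a, b in combinations(model_outputs, 2):
                diversity[(a, b)] = disagreement(model_outputs[a], model_outputs[b])
            # Greedy illustrative rule: keep models appearing in the most diverse pairs.
            selected = []
            for a, b in sorted(diversity, key=diversity.get, reverse=True):
                for name in (a, b):
                    if name not in selected and len(selected) < k:
                        selected.append(name)
            return selected, diversity

        outputs = {"m1": [0, 1, 1, 0], "m2": [0, 1, 0, 0], "m3": [1, 0, 1, 1]}
        subset, pairwise = select_diverse_models(outputs, k=2)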

    SIMULTANEOUS DATA SAMPLING AND FEATURE SELECTION VIA WEAK LEARNERS

    Publication (Announcement) No.: US20250013909A1

    Publication (Announcement) Date: 2025-01-09

    Application No.: US18218970

    Application Date: 2023-07-06

    Abstract: From many features and many multidimensional points, a computer generates exploratory training configurations. Each point contains a value for each of the features. Each exploratory training configuration identifies a random subset of the features and a random subset of the points. A performance score is generated for each of the exploratory training configurations. A feature weight is generated for each of the features that is based on the performance scores of the exploratory training configurations whose random subset of features contains the feature. A point weight is generated for each of the points that is based on the performance scores of the exploratory training configurations whose random subset of the many points contains the point. A machine learning model is trained using an optimized training corpus that consists of a subset of the many features based on feature weight and a subset of the many points based on point weight.
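    The sketch below illustrates the weighting scheme in this abstract: random exploratory configurations are scored, and each feature and each point inherits the mean score of the configurations whose random subsets contain it. The decision-tree weak learner, the cross-validation scoring, and the top-k cut-offs are illustrative assumptions.

        # Sketch: simultaneous point and feature weighting via random configurations.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))            # many multidimensional points
        y = (X[:, 0] + X[:, 3] > 0).astype(int)   # labels used only for scoring
        n_points, n_features = X.shape

        feature_scores = [[] for _ in range(n_features)]
        point_scores = [[] for _ in range(n_points)]
        for _ in range(50):                        # exploratory training configurations
            feats = rng.choice(n_features, size=4, replace=False)
            points = rng.choice(n_points, size=80, replace=False)
            score = cross_val_score(DecisionTreeClassifier(max_depth=2),
                                    X[np.ix_(points, feats)], y[points], cv=3).mean()
            for f in feats:
                feature_scores[f].append(score)    # configurations containing feature f
            for p in points:
                point_scores[p].append(score)      # configurations containing point p

        feature_weight = np.array([np.mean(s) if s else 0.0 for s in feature_scores])
        point_weight = np.array([np.mean(s) if s else 0.0 for s in point_scores])
        # Optimized training corpus: top-weighted points restricted to top-weighted features.
        best_features = np.argsort(feature_weight)[-4:]
        best_points = np.argsort(point_weight)[-100:]
        X_opt, y_opt = X[np.ix_(best_points, best_features)], y[best_points]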

    EXPERT-OPTIMAL CORRELATION: CONTAMINATION FACTOR IDENTIFICATION FOR UNSUPERVISED ANOMALY DETECTION

    Publication (Announcement) No.: US20240095231A1

    Publication (Announcement) Date: 2024-03-21

    Application No.: US18075824

    Application Date: 2022-12-06

    CPC classification number: G06F16/2365

    Abstract: In a computer, each of multiple anomaly detectors infers an anomaly score for each of many tuples. For each tuple, a synthetic label is generated that indicates for each anomaly detector: the anomaly detector, the anomaly score inferred by the anomaly detector for the tuple and, for each of multiple contamination factors, the contamination factor and, based on the contamination factor, a binary class of the anomaly score. For each particular anomaly detector excluding a best anomaly detector, a similarity score is measured for each contamination factor. The similarity score indicates how similar the binary classes for that contamination factor are between the particular anomaly detector and the best anomaly detector. For each contamination factor, a combined similarity score is calculated based on the similarity scores for the contamination factor. Based on the contamination factor that has the highest combined similarity score, the computer detects that an additional anomaly detector is inaccurate.
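    The sketch below illustrates the contamination-factor search in this abstract; the simple label-match similarity metric and the synthetic anomaly scores are illustrative assumptions.

        # Sketch: pick the contamination factor whose binary classes agree most with
        # the best detector's binary classes, averaged over the other detectors.
        import numpy as np

        def binarize(scores, contamination):
            """Label the top `contamination` fraction of scores as anomalies (class 1)."""
            return (scores >= np.quantile(scores, 1.0 - contamination)).astype(int)

        rng = np.random.default_rng(0)
        scores_by_detector = {d: rng.random(500) for d in ("iforest", "lof", "ocsvm")}
        best_detector = "iforest"                  # assumed already identified
        contamination_factors = [0.01, 0.05, 0.1, 0.2]

        combined_similarity = {}
        for c in contamination_factors:
            best_labels = binarize(scores_by_detector[best_detector], c)
            similarities = []
            for name, scores in scores_by_detector.items():
                if name == best_detector:
                    continue
                labels = binarize(scores, c)       # binary class per tuple
                similarities.append(float(np.mean(labels == best_labels)))
            combined_similarity[c] = float(np.mean(similarities))

        chosen_factor = max(combined_similarity, key=combined_similarity.get)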

    N-1 EXPERTS: MODEL SELECTION FOR UNSUPERVISED ANOMALY DETECTION

    Publication (Announcement) No.: US20230334364A1

    Publication (Announcement) Date: 2023-10-19

    Application No.: US18075667

    Application Date: 2022-12-06

    CPC classification number: G06N20/00

    Abstract: In an embodiment, in a computer, each of several anomaly detectors infers a respective anomaly inference for each of many test tuples. For each candidate anomaly detector, a respective fitness score is measured against each available anomaly detector that is not the candidate; the fitness score indicates how similar the anomaly inferences of the candidate anomaly detector are to the anomaly inferences of the available anomaly detector. The fitness scores of the candidate anomaly detector are combined into a combined fitness score for that candidate. The anomaly detector that has the highest combined fitness score is selected for further operation, such as inferring an anomaly inference for a new tuple during retraining or in production.
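    A minimal sketch of the N-1 expert selection in this abstract follows; using rank correlation of anomaly scores as the pairwise fitness measure is an illustrative assumption.

        # Sketch: each candidate detector is scored against the other N-1 detectors,
        # and the detector with the highest combined fitness is retained.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        inferences = {d: rng.random(300) for d in ("det_a", "det_b", "det_c", "det_d")}

        combined_fitness = {}
        for candidate, candidate_scores in inferences.items():
            fitness_scores = []
            for other, other_scores in inferences.items():
                if other == candidate:
                    continue                      # only the N-1 available detectors
                rho, _ = spearmanr(candidate_scores, other_scores)
                fitness_scores.append(rho)        # similarity to this available detector
            combined_fitness[candidate] = float(np.mean(fitness_scores))

        # Retained for further operation, e.g. scoring new tuples in production.
        best_detector = max(combined_fitness, key=combined_fitness.get)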

    USING HYPERPARAMETER PREDICTORS TO IMPROVE ACCURACY OF AUTOMATIC MACHINE LEARNING MODEL SELECTION

    Publication (Announcement) No.: US20200334569A1

    Publication (Announcement) Date: 2020-10-22

    Application No.: US16388830

    Application Date: 2019-04-18

    Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by using hyperparameter predictors. In an embodiment, for each mini-machine learning model (MML model) of a plurality of MML models, a respective hyperparameter predictor set that predicts a respective set of hyperparameter settings for a first data set is trained. Each MML model represents a respective reference machine learning model (RML model) of a plurality of RML models. A first plurality of data set samples is generated from the first data set. A first plurality of first meta-feature sets is generated, each first meta-feature set describing a respective first data set sample of said first plurality. A respective target set of hyperparameter settings is generated for each MML model using a hypertuning algorithm. The first plurality of first meta-feature sets and the respective target set of hyperparameter settings are used to train the respective hyperparameter predictor set. Each hyperparameter predictor set is used during training and inference to improve the accuracy of automatically selecting an RML model per data set.
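    The sketch below shows, for a single reference model type, how a hyperparameter predictor could map meta-features of data set samples to a tuned hyperparameter value. The specific meta-features, the grid-search hypertuner, and the single predicted hyperparameter (max_depth) are illustrative assumptions.

        # Sketch: train a hyperparameter predictor from (meta-features -> tuned setting) pairs.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import GridSearchCV
        from sklearn.tree import DecisionTreeClassifier

        def meta_features(X, y):
            """Tiny meta-feature set describing one data set sample (assumed choice)."""
            return [X.shape[0], X.shape[1], float(np.mean(y)), float(np.std(X))]

        rng = np.random.default_rng(0)
        X_full = rng.normal(size=(1000, 8))
        y_full = (X_full[:, 0] > 0).astype(int)

        meta_sets, target_settings = [], []
        for _ in range(10):                        # data set samples
            idx = rng.choice(len(X_full), size=200, replace=False)
            Xs, ys = X_full[idx], y_full[idx]
            tuner = GridSearchCV(DecisionTreeClassifier(), {"max_depth": [2, 4, 8]}, cv=3)
            tuner.fit(Xs, ys)                      # hypertuning per sample
            meta_sets.append(meta_features(Xs, ys))
            target_settings.append(tuner.best_params_["max_depth"])

        # Hyperparameter predictor for this reference model: meta-features -> max_depth.
        predictor = RandomForestRegressor(n_estimators=50).fit(meta_sets, target_settings)
        predicted_depth = predictor.predict([meta_features(X_full, y_full)])[0]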

    MULTIPLIER TUNING POSTPROCESSING FOR MACHINE LEARNING BIAS MITIGATION

    Publication (Announcement) No.: US20240403674A1

    Publication (Announcement) Date: 2024-12-05

    Application No.: US18529300

    Application Date: 2023-12-05

    Abstract: In an embodiment, a computer infers, from an input (e.g. that represents a person) that contains a value of a sensitive feature that has a plurality of multipliers, a probability of a majority class (i.e. an outcome). Based on the value of the sensitive feature in the input, from the multipliers of the sensitive feature, a multiplier is selected that is specific to both of the sensitive feature and the value of the sensitive feature. The input is classified based on a multiplicative product of the probability of the majority class and the multiplier that is specific to both of the sensitive feature and the value of the sensitive feature. In an embodiment, a black-box bi-objective optimizer generates multipliers on a Pareto frontier from which a user may interactively select a combination of multipliers that provide a best tradeoff between fairness and accuracy.
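    The sketch below illustrates the multiplier-based postprocessing in this abstract; the multiplier values, the sensitive-feature name, and the 0.5 decision cut-off are illustrative assumptions (in the patent, the multipliers come from a bi-objective optimizer over fairness and accuracy).

        # Sketch: classify using the product of the majority-class probability and a
        # multiplier specific to the input's value of the sensitive feature.
        def classify_with_multiplier(input_row, majority_probability, multipliers,
                                     sensitive_feature="group", cutoff=0.5):
            value = input_row[sensitive_feature]          # value of the sensitive feature
            multiplier = multipliers[value]               # multiplier specific to that value
            adjusted = majority_probability * multiplier  # multiplicative product
            return ("majority" if adjusted >= cutoff else "minority"), adjusted

        multipliers = {"group_a": 0.9, "group_b": 1.1}    # one multiplier per sensitive value
        label, adjusted = classify_with_multiplier(
            {"group": "group_b", "age": 41}, majority_probability=0.47,
            multipliers=multipliers)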

    THRESHOLD TUNING FOR IMBALANCED MULTI-CLASS CLASSIFICATION MODELS

    Publication (Announcement) No.: US20240303541A1

    Publication (Announcement) Date: 2024-09-12

    Application No.: US18386196

    Application Date: 2023-11-01

    CPC classification number: G06N20/00 G06N7/01

    Abstract: In an embodiment, a computer generates, from an input, an inference that contains multiple probabilities respectively for multiple mutually exclusive classes that contain a first class and a second class. The probabilities contain (e.g. due to overfitting) a higher probability for the first class and a lower probability for the second class. In response to a threshold exceeding the higher probability, the input is automatically and more accurately classified as the second class. One, some, or almost all classes may have a respective distinct threshold that can be concurrently applied for acceleration. Data parallelism may simultaneously apply a threshold to a batch of multiple inputs for acceleration.
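    The sketch below applies per-class thresholds to a batch of inferences in one vectorized pass; the threshold values and the fall-back-to-runner-up rule are illustrative assumptions.

        # Sketch: demote a prediction to the runner-up class whenever the top class's
        # threshold exceeds its inferred probability; data-parallel over the batch.
        import numpy as np

        def apply_thresholds(probabilities, thresholds):
            """probabilities: (batch, n_classes); thresholds: one per class."""
            probabilities = np.asarray(probabilities)
            order = np.argsort(probabilities, axis=1)[:, ::-1]  # classes by probability
            top, runner_up = order[:, 0], order[:, 1]
            top_prob = probabilities[np.arange(len(probabilities)), top]
            demote = thresholds[top] > top_prob                 # threshold exceeds probability
            return np.where(demote, runner_up, top)

        probs = [[0.55, 0.30, 0.15],   # demoted to class 1 because 0.60 > 0.55
                 [0.72, 0.25, 0.03]]   # kept as class 0 because 0.72 >= 0.60
        thresholds = np.array([0.60, 0.20, 0.20])
        predicted_classes = apply_thresholds(probs, thresholds)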

    AUTOMLX COUNTERFACTUAL EXPLAINER (ACE)

    Publication (Announcement) No.: US20240303515A1

    Publication (Announcement) Date: 2024-09-12

    Application No.: US18512438

    Application Date: 2023-11-17

    CPC classification number: G06N5/04

    Abstract: A computer stores a reference corpus that consists of many reference points that each has a respective class. Later, an expected class and a subject point (i.e. instance to explain) that does not have the expected class are received. Multiple reference points that have the expected class are selected as starting points. Based on the subject point and the starting points, multiple discrete interpolated points are generated that have the expected class. Based on the subject point and the discrete interpolated points, multiple continuous interpolated points are generated that have the expected class. A counterfactual explanation of why the subject point does not have the expected class is directly generated based on continuous interpolated point(s) and, thus, indirectly generated based on the discrete interpolated points. For acceleration, neither way of interpolation (i.e. counterfactual generation) is iterative. Generated interpolated points can be reused to amortize resources consumed while generating counterfactuals.
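    The sketch below illustrates the two-stage (discrete, then continuous) interpolation in this abstract; the random feature-copy masks, the linear blending weights, and the toy classifier are illustrative assumptions.

        # Sketch: generate counterfactual points that have the expected class by
        # (1) copying feature values from reference starting points onto the subject
        # and (2) continuously blending the subject with the surviving points.
        import numpy as np

        def counterfactual(subject, expected_class, reference_X, reference_y, predict, rng):
            starts = reference_X[reference_y == expected_class]   # starting points
            # Discrete interpolation: one vectorized pass, not iterative.
            masks = rng.random(starts.shape) < 0.5
            discrete = np.where(masks, starts, subject)
            discrete = discrete[predict(discrete) == expected_class]
            # Continuous interpolation: blend subject with each discrete interpolated point.
            alphas = np.linspace(0.1, 1.0, 10)[:, None, None]
            blended = (1 - alphas) * subject + alphas * discrete[None, :, :]
            blended = blended.reshape(-1, subject.shape[-1])
            blended = blended[predict(blended) == expected_class]
            # The explanation is built from the valid point closest to the subject.
            return blended[np.argmin(np.linalg.norm(blended - subject, axis=1))]

        rng = np.random.default_rng(0)
        reference_X = rng.normal(size=(100, 3))                   # reference corpus
        reference_y = (reference_X[:, 0] > 0).astype(int)         # respective classes
        predict = lambda X: (np.atleast_2d(X)[:, 0] > 0).astype(int)  # toy model
        subject = np.array([-1.5, 0.2, 0.3])                      # lacks the expected class
        explanation_point = counterfactual(subject, 1, reference_X, reference_y, predict, rng)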

    FAST AND ACCURATE ANOMALY DETECTION EXPLANATIONS WITH FORWARD-BACKWARD FEATURE IMPORTANCE

    Publication (Announcement) No.: US11966275B2

    Publication (Announcement) Date: 2024-04-23

    Application No.: US17992743

    Application Date: 2022-11-22

    CPC classification number: G06F11/006 G06N20/00 G06F2201/82

    Abstract: The present invention relates to machine learning (ML) explainability (MLX). Herein are local explanation techniques for black box ML models based on coalitions of features in a dataset. In an embodiment, a computer receives a request to generate a local explanation of which coalitions of features caused an anomaly detector to detect an anomaly. During unsupervised generation of a new coalition, a first feature is randomly selected from features in a dataset. Which additional features in the dataset can join the coalition, because they have mutual information with the first feature that exceeds a threshold, is detected. For each feature that is not in the coalition, values of the feature are permuted in imperfect copies of original tuples in the dataset. An average anomaly score of the imperfect copies is measured. Based on the average anomaly score of the imperfect copies, a local explanation is generated that references (e.g. defines) the coalition.
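    The sketch below illustrates coalition-based local explanation of an anomaly detector; the mutual-information estimator, the 0.2 threshold, and the IsolationForest detector are illustrative assumptions.

        # Sketch: grow a coalition around a random first feature, permute every
        # feature outside the coalition in imperfect copies of the dataset, and
        # report the coalition with the average anomaly score of those copies.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))
        X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.1, size=500)  # correlated pair
        detector = IsolationForest(random_state=0).fit(X)           # black-box detector

        first = int(rng.integers(X.shape[1]))                       # random first feature
        mi = mutual_info_regression(X, X[:, first])
        coalition = {first} | {j for j in range(X.shape[1]) if j != first and mi[j] > 0.2}

        copies = X.copy()                                            # imperfect copies
        for j in range(X.shape[1]):
            if j not in coalition:
                copies[:, j] = rng.permutation(copies[:, j])
        avg_anomaly_score = float(np.mean(-detector.score_samples(copies)))
        explanation = {"coalition": sorted(coalition), "avg_anomaly_score": avg_anomaly_score}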
