Abstract:
Systems, apparatuses, methods, and computer-readable media for modeling malicious behavior that occurs in the absence of users. A system trains an anomaly detection model using attributes associated with a first plurality of events representing system activity on one or more clean machines when users are not present. Next, the system utilizes the trained anomaly detection model to remove benign events from a second plurality of events captured from infected machines when users are not present. Then, the system utilizes malicious events, from the second plurality of events, to train a classifier. Next, the classifier identifies a first set of attributes that are able to predict whether an event is caused by malware with a predictive power greater than a threshold.
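The pipeline described above (learn benign behavior from clean machines, filter it out of infected-machine telemetry, then rank attributes by predictive power) might be sketched as follows. This is a minimal illustration, not the patented implementation: the event representation, the set-membership "anomaly model", and the frequency-based predictive-power rule are all assumptions.

```python
from collections import Counter

# Illustrative event data; names and attributes are invented for this sketch.
CLEAN_EVENTS = [{"proc": "svchost", "net": False},
                {"proc": "update", "net": True}]
INFECTED_EVENTS = [{"proc": "svchost", "net": False},   # also seen on clean hosts
                   {"proc": "dropper", "net": True},
                   {"proc": "dropper", "net": True}]

def train_benign_model(events):
    """'Anomaly model' reduced to the set of attribute tuples seen on clean hosts."""
    return {tuple(sorted(e.items())) for e in events}

def filter_benign(events, model):
    """Keep only events the clean-machine model has never seen (likely malicious)."""
    return [e for e in events if tuple(sorted(e.items())) not in model]

def attribute_predictive_power(malicious, threshold=0.5):
    """Rank attribute/value pairs by how often they appear in malicious events,
    keeping those whose predictive power exceeds the threshold."""
    counts = Counter(kv for e in malicious for kv in e.items())
    total = len(malicious)
    return {kv: c / total for kv, c in counts.items() if c / total > threshold}

model = train_benign_model(CLEAN_EVENTS)
malicious = filter_benign(INFECTED_EVENTS, model)
predictive = attribute_predictive_power(malicious)
```

A real system would replace the set lookup with a trained anomaly detector and the frequency rule with a proper classifier; the control flow between the stages is the point of the sketch.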
Abstract:
The disclosed computer-implemented method for automated whitelisting of files may include (1) obtaining telemetry information that identifies files located on a set of computing systems, (2) establishing a whitelist of files for the set of computing systems by, for each file identified by the telemetry information, (A) calculating an amount by which a cost for using the whitelist will increase if the file is included in the whitelist, (B) calculating an amount by which whitelist coverage of files in the set of computing systems will increase if the file is included in the whitelist, (C) determining whether to include the file in the whitelist by balancing the increase in the cost against the increase in whitelist coverage, and (3) using the whitelist to protect the set of computing systems from undesirable files. Various other methods, systems, and computer-readable media are also disclosed.
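The cost-versus-coverage balancing in steps (A)–(C) can be sketched as a greedy per-file decision. The flat per-entry cost, the install-count coverage measure, and the tradeoff factor below are assumptions for illustration, not details from the abstract.

```python
def build_whitelist(files, cost_fn, coverage_fn, tradeoff=1.0):
    """Greedy sketch: include a file only if its marginal coverage gain
    outweighs its marginal cost (weighted by an assumed tradeoff factor)."""
    whitelist = []
    for f in files:
        delta_cost = cost_fn(f, whitelist)
        delta_cov = coverage_fn(f, whitelist)
        if delta_cov >= tradeoff * delta_cost:
            whitelist.append(f)
    return whitelist

# Hypothetical telemetry: number of machines on which each file was observed.
install_counts = {"app.exe": 900, "rare.dll": 3, "lib.dll": 450}
wl = build_whitelist(
    install_counts,
    cost_fn=lambda f, wl: 10,                    # flat maintenance cost per entry
    coverage_fn=lambda f, wl: install_counts[f], # machines covered by this file
)
# rare.dll (coverage 3 < cost 10) is excluded
```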
Abstract:
The disclosed computer-implemented method for detecting security threats may include (1) detecting, by a software security program, a security incident at a client device such that the software security program generates a signature report to identify the security incident, (2) querying an association database with the signature report to deduce another signature report that a different software security program would have predictably generated at the client device, the different software security program having been unavailable at the client device at a time of detecting the security incident, and (3) performing at least one protective action to protect the client device from a security threat associated with the security incident based on the other signature report deduced by querying the association database. Various other methods, systems, and computer-readable media are also disclosed.
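The association-database query in step (2) amounts to mapping an observed signature to the reports an absent security product would predictably have generated. A minimal sketch, with invented signature names and a dictionary standing in for the association database:

```python
# Hypothetical association database: observed signature -> signatures other
# (absent) security programs would likely have generated for the same incident.
ASSOCIATIONS = {
    "sigA.trojan.gen": ["vendorB.trojan.generic", "vendorC.backdoor.x"],
}

def deduce_reports(observed_signature, associations):
    """Deduce reports a different, unavailable security program would
    predictably have generated at the client device."""
    return associations.get(observed_signature, [])

def protect(client, deduced):
    """Perform a protective action per deduced report (illustrative labels)."""
    return [f"quarantine:{client}:{sig}" for sig in deduced]

deduced = deduce_reports("sigA.trojan.gen", ASSOCIATIONS)
actions = protect("host-1", deduced)
```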
Abstract:
The disclosed computer-implemented method for estimating confidence scores of unverified signatures may include (1) detecting a potentially malicious event that triggers a malware signature whose confidence score is above a certain threshold, (2) detecting another event that triggers another signature whose confidence score is unknown, (3) determining that the potentially malicious event and the other event occurred within a certain time period of one another, and then (4) assigning, to the other signature, a confidence score based at least in part on the potentially malicious event and the other event occurring within the certain time period of one another. Various other methods, systems, and computer-readable media are also disclosed.
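The time-window logic in steps (1)–(4) can be sketched as a pass over timestamped signature firings. The 80%-of-neighbor derivation rule and the event tuples below are assumptions; the abstract only requires that the assigned score be based on the temporal co-occurrence.

```python
def assign_confidence(events, window, high_threshold=0.8):
    """events: list of (timestamp, signature, confidence-or-None) tuples.
    An unknown signature that fires within `window` of a signature whose
    confidence exceeds `high_threshold` inherits a derived score
    (sketch rule: 80% of the trusted neighbor's score)."""
    scored = {sig: conf for _, sig, conf in events if conf is not None}
    for t1, sig1, conf1 in events:
        if conf1 is None or conf1 <= high_threshold:
            continue
        for t2, sig2, conf2 in events:
            if conf2 is None and abs(t2 - t1) <= window:
                scored[sig2] = round(0.8 * conf1, 2)  # assumed derivation rule
    return scored

events = [(100, "known.malware", 0.95), (103, "unverified.sig", None)]
scores = assign_confidence(events, window=10)
```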
Abstract:
Identifying and protecting against malicious apps installed on client devices. In some embodiments, a method may include (a) identifying client devices, (b) identifying apps installed on the client devices, (c) assigning each of the apps known to be a malicious app with a highest app suspicion score, (d) assigning each of the other apps as an unknown app with a lowest app suspicion score, (e) assigning each of the client devices with a device suspicion score, (f) assigning each of the unknown apps with an updated app suspicion score, (g) repeating (e), and repeating (f) with a normalization, until the device suspicion scores and the app suspicion scores converge within a convergence threshold, (h) identifying one of the unknown apps as a malicious app, and (i) protecting against the malicious app by directing performance of a remedial action to protect the client devices from the malicious app.
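The iteration in steps (e)–(g) resembles a bipartite score-propagation scheme (in the spirit of HITS-style algorithms) between devices and apps. The particular update rules below (device score = max app score, app score = normalized mean of host-device scores) are assumptions chosen to make the convergence loop concrete:

```python
def converge_scores(device_apps, known_malicious, iters=50, eps=1e-6):
    """Sketch of the iterative scoring. Known-malicious apps are pinned at the
    highest score; unknown apps start at the lowest and are updated, with a
    normalization, until scores converge within eps or iters is exhausted."""
    apps = {a for installed in device_apps.values() for a in installed}
    app_score = {a: 1.0 if a in known_malicious else 0.0 for a in apps}
    for _ in range(iters):
        # (e) device suspicion score: most suspicious app it hosts (assumed rule)
        dev_score = {d: max(app_score[a] for a in installed)
                     for d, installed in device_apps.items()}
        # (f) unknown-app score: mean score of its host devices (assumed rule)
        new = dict(app_score)
        for a in apps - known_malicious:
            hosts = [d for d, installed in device_apps.items() if a in installed]
            new[a] = sum(dev_score[d] for d in hosts) / len(hosts)
        peak = max(new.values()) or 1.0
        new = {a: (s if a in known_malicious else s / peak)  # normalization
               for a, s in new.items()}
        if max(abs(new[a] - app_score[a]) for a in apps) < eps:
            app_score = new
            break
        app_score = new
    return app_score

devices = {"d1": {"mal.apk", "unknown.apk"}, "d2": {"unknown.apk"},
           "d3": {"clean.apk"}}
scores = converge_scores(devices, known_malicious={"mal.apk"})
```

Here the unknown app that shares a device with known malware converges toward a high suspicion score, while the isolated clean app stays at zero.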
Abstract:
Securing a network device by automatically identifying files belonging to an application. In one embodiment, a method may include collecting file attributes for multiple files from multiple network devices, examining a hash of file contents of each of the multiple files to identify multiple unique files in the multiple files, summarizing the file attributes for each of the multiple unique files to generate a sketch of file attributes for each of the multiple unique files, clustering the multiple unique files into multiple applications, making a security action decision for one application of the multiple applications, and performing a security action on a network device based on the security action decision.
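The deduplicate-then-cluster flow can be sketched with a content hash for uniqueness and a crude grouping key in place of the attribute sketches. Clustering by install directory is an assumption standing in for the patent's sketch-based clustering; the file records are invented.

```python
from collections import defaultdict
import hashlib

# Hypothetical file records collected from multiple network devices.
records = [
    {"path": "C:/App1/a.dll", "contents": b"aaa"},
    {"path": "C:/App1/a.dll", "contents": b"aaa"},   # same file on another device
    {"path": "C:/App1/b.exe", "contents": b"bbb"},
    {"path": "C:/App2/c.exe", "contents": b"ccc"},
]

def unique_files(records):
    """Examine a hash of file contents to identify the unique files."""
    seen = {}
    for r in records:
        h = hashlib.sha256(r["contents"]).hexdigest()
        seen.setdefault(h, r)
    return list(seen.values())

def cluster_by_app(files):
    """Cluster unique files into applications; shared install directory is the
    stand-in grouping attribute here."""
    clusters = defaultdict(list)
    for f in files:
        clusters[f["path"].rsplit("/", 1)[0]].append(f["path"])
    return dict(clusters)

apps = cluster_by_app(unique_files(records))
```

A security action decision (e.g. block or allow) would then be made once per application cluster rather than once per file.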
Abstract:
The disclosed computer-implemented method for classifying security events as targeted attacks may include (1) detecting a security event in connection with at least one organization, (2) comparing the security event against a targeted-attack taxonomy that identifies a plurality of characteristics of targeted attacks, (3) determining that the security event is likely targeting the organization based at least in part on comparing the security event against the targeted-attack taxonomy, and then in response to determining that the security event is likely targeting the organization, (4) classifying the security event as a targeted attack. Various other methods, systems, and computer-readable media are also disclosed.
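The taxonomy comparison in steps (2)–(4) reduces to matching an event's characteristics against a catalog of targeted-attack characteristics. The taxonomy entries and the two-characteristic match threshold below are invented for illustration:

```python
# Hypothetical taxonomy of targeted-attack characteristics.
TARGETED_ATTACK_TAXONOMY = {"spear_phishing", "custom_malware", "low_volume"}

def classify_event(event_characteristics, taxonomy, match_threshold=2):
    """Classify the event as a targeted attack if it shares at least
    `match_threshold` characteristics with the taxonomy (assumed rule)."""
    matches = set(event_characteristics) & taxonomy
    return "targeted-attack" if len(matches) >= match_threshold else "opportunistic"

label = classify_event({"spear_phishing", "custom_malware"},
                       TARGETED_ATTACK_TAXONOMY)
```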
Abstract:
A computer-implemented method for using event-correlation graphs to generate remediation procedures may include (1) detecting a suspicious event involving a first actor within a computing system, (2) constructing, in response to detecting the suspicious event involving the first actor, an event-correlation graph that includes (i) a first node that represents the first actor, (ii) a second node that represents a second actor, and (iii) an edge that interconnects the first node and the second node and represents an additional suspicious event involving the first actor and the second actor, and (3) using the event-correlation graph to generate a procedure for remediating an effect of an attack on the computing system that is reflected in the event-correlation graph. Various other methods, systems, and computer-readable media are also disclosed.
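The graph construction and remediation steps can be sketched as an undirected actor graph whose edges record suspicious events, with remediation derived from the component reachable from the first suspicious actor. The actor names and the terminate-style remediation labels are illustrative only.

```python
from collections import defaultdict

class EventCorrelationGraph:
    """Minimal sketch: nodes are actors; an edge records a suspicious event
    interconnecting two actors."""

    def __init__(self):
        self.edges = defaultdict(set)

    def add_event(self, actor_a, actor_b):
        """Record a suspicious event involving both actors."""
        self.edges[actor_a].add(actor_b)
        self.edges[actor_b].add(actor_a)

    def remediation_targets(self, first_actor):
        """All actors reachable from the initial suspicious actor."""
        seen, stack = set(), [first_actor]
        while stack:
            actor = stack.pop()
            if actor not in seen:
                seen.add(actor)
                stack.extend(self.edges[actor])
        return seen

g = EventCorrelationGraph()
g.add_event("evil.exe", "payload.dll")      # initial suspicious event
g.add_event("payload.dll", "svchost.exe")   # additional suspicious event
procedure = [f"terminate:{a}" for a in sorted(g.remediation_targets("evil.exe"))]
```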
Abstract:
The disclosed computer-implemented method for dynamically validating remote requests within enterprise networks may include (1) receiving, on a target system within an enterprise network, a request to access a portion of the target system from a remote system within the enterprise network, (2) performing a validation operation to determine whether the remote system is trustworthy to access the portion of the target system by (A) querying an enterprise security system to authorize the request from the remote system and (B) receiving, from the enterprise security system in response to the query, a notification indicating whether the remote system is trustworthy to access the portion of the target system, and then (3) determining whether to grant the request based at least in part on the notification received from the enterprise security system as part of the validation operation. Various other methods, systems, and computer-readable media are also disclosed.
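The query/notification exchange in steps (2)–(3) can be sketched as a target-system handler that consults the enterprise security system before granting access. The trust-store model and the host names are assumptions for illustration:

```python
class EnterpriseSecuritySystem:
    """Stand-in for the enterprise security system: answers authorization
    queries from a simple trusted-host set (illustrative model)."""

    def __init__(self, trusted):
        self.trusted = trusted

    def authorize(self, remote, portion):
        # Notification: whether the remote system is trustworthy for this portion.
        return remote in self.trusted

def handle_request(remote, portion, security_system):
    """Target-system side: validate the remote system, then decide the grant."""
    trustworthy = security_system.authorize(remote, portion)
    return "granted" if trustworthy else "denied"

ess = EnterpriseSecuritySystem(trusted={"build-server"})
```

For example, `handle_request("build-server", "/share", ess)` would be granted while an unknown host's request would be denied.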