US20150269379A1 - Using confidence about user intent in a reputation system - Google Patents
- Publication number
- US20150269379A1
- Authority
- US
- United States
- Prior art keywords
- clients
- reports
- reputation
- client
- confidence
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
-
- H04L51/12—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/21—Monitoring or handling of messages
- H04L51/212—Monitoring or handling of messages using filtering or selective blocking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1441—Countermeasures against malicious traffic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/034—Test or assess a computer or a system
Definitions
- This invention relates generally to computer security and particularly to detecting attempts to manipulate a reputation system for detecting malicious objects.
- A wide variety of malicious software (malware) can attack modern computers. Malware threats include computer viruses, worms, Trojan horse programs, spyware, adware, crimeware, and phishing websites. Malicious entities sometimes attack servers that store sensitive or confidential data that can be used to the malicious entity's own advantage. Similarly, other computers, including home computers, must be constantly protected from malicious software that can be transmitted when a user communicates with others via electronic mail, when a user downloads new programs or program updates, and in many other situations. The different options and methods available to malicious entities for attack on a computer are numerous.
- Conventional signature-based techniques for detecting malware are becoming less effective.
- Modern malware is often targeted and delivered to only a relative handful of computers. For example, a Trojan horse program can be designed to target computers in a particular department of a particular enterprise.
- Such malware might never be encountered by security analysts, and thus the security software might never be configured with signatures for detecting it.
- Mass-distributed malware, in turn, can contain polymorphisms that make every instance of the malware unique. As a result, it is difficult to develop signature strings that reliably detect all instances of the malware.
- A reputation system can determine the reputation of a file or other object encountered on a computer in order to assess the likelihood that the object is malware.
- One way to develop the reputation for an object is to collect reports from networked computers on which the object is found and base the reputation on information within the reports.
- An embodiment of the method comprises receiving, from clients, reports identifying an object detected at the clients. The method further comprises determining information about the clients from the reports. In addition, the method comprises generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients. The method also comprises calculating, based at least in part on the reports from the clients and the confidence metrics for the clients, a reputation score of the object and storing the reputation score of the object.
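As a rough sketch of the claimed method, confidence-weighted aggregation of client reports could look like the following (all names and the averaging formula are illustrative assumptions; the patent does not prescribe a particular implementation):

```python
from collections import defaultdict

def score_objects(reports, confidence_for_client):
    """Aggregate client reports into per-object reputation scores.

    `reports` is an iterable of (client_id, object_id) pairs, and
    `confidence_for_client` maps a client id to a confidence metric
    in [0, 1].  Each report is weighted by the reporting client's
    confidence, so reports from low-confidence clients contribute
    little to an object's reputation score.
    """
    weight_sum = defaultdict(float)   # confidence-weighted report mass
    report_count = defaultdict(int)   # raw report count per object
    for client_id, object_id in reports:
        weight_sum[object_id] += confidence_for_client[client_id]
        report_count[object_id] += 1
    # Reputation in [0, 1]: the mean confidence of the reporters.
    return {obj: weight_sum[obj] / report_count[obj] for obj in weight_sum}
```

A client that many forged identifiers imitate would hold confidence near zero, so even a flood of its reports barely moves the score.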
- Embodiments of the computer-readable medium store computer program instructions for determining a reputation of an object in a reputation system, the instructions comprising instructions for receiving reports from clients in the reputation system, the reports identifying an object detected at the clients and determining information about the clients from the reports.
- The instructions further comprise instructions for generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients.
- The instructions additionally comprise instructions for calculating a reputation score of the object responsive at least in part to the reports from the clients and the confidence metrics for the clients and storing the reputation score of the object.
- Embodiments of the computer system comprise a computer-readable storage medium storing executable computer program instructions comprising instructions for receiving reports from clients in the reputation system, the reports identifying an object detected at the clients and determining information about the clients from the reports.
- The instructions further comprise instructions for generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients.
- The instructions additionally comprise instructions for calculating a reputation score of the object responsive at least in part to the reports from the clients and the confidence metrics for the clients and storing the reputation score of the object.
- The computer system further comprises a processor for executing the computer program instructions.
- FIG. 1 is a high-level block diagram of a computing environment according to one embodiment of the present invention.
- FIG. 2 is a high-level block diagram of a computer for acting as a security server and/or a client according to one embodiment.
- FIG. 3 is a high-level block diagram illustrating modules within the reputation module according to one embodiment.
- FIG. 4 is a flowchart illustrating the operation of the reputation module in determining reputation scores about objects using information from reports received from clients according to one embodiment.
- FIG. 1 is a high-level block diagram of a computing environment 100 according to one embodiment.
- FIG. 1 illustrates a security server 102 connected to a network 114 .
- The network 114 is also connected to multiple clients 112.
- FIG. 1 and the other figures use like reference numerals to identify like elements.
- A letter after a reference numeral, such as “112A,” indicates that the text refers specifically to the element having that particular reference numeral.
- A reference numeral in the text without a following letter, such as “112,” refers to any or all of the elements in the figures bearing that reference numeral (e.g., “112” in the text refers to reference numerals “112A,” “112B,” and/or “112C” in the figures).
- Embodiments of the computing environment 100 can have thousands or millions of clients 112 , as well as multiple servers 102 . In some embodiments, the clients 112 are only connected to the network 114 for a certain period of time or not at all.
- The client 112 is an electronic device that can host malicious software.
- In one embodiment, the client 112 is a conventional computer system executing, for example, a Microsoft Windows-compatible operating system (OS), Apple OS X, and/or a Linux distribution.
- In another embodiment, the client 112 is another device having computer functionality, such as a personal digital assistant (PDA), mobile telephone, video game system, etc.
- The client 112 typically stores numerous computer files and/or software applications (collectively referred to as “objects”) that can host malicious software.
- Malicious software is generally defined as software that executes on the client 112 surreptitiously or that has some surreptitious functionality. Malware can take many forms, such as parasitic viruses that attach to legitimate files, worms that exploit weaknesses in the computer's security in order to infect the computer and spread to other computers, Trojan horse programs that appear legitimate but actually contain hidden malicious code, and spyware that monitors keystrokes and/or other actions on the computer in order to capture sensitive information or display advertisements.
- The client 112 executes a security module 110 for detecting the presence of malware.
- The security module 110 can be, for example, incorporated into the OS of the computer or part of a separate comprehensive security package. In one embodiment, the security module 110 is provided by the entity that operates the security server 102.
- The security module 110 can communicate with the security server 102 via the network 114 in order to download information utilized to detect malicious software.
- The security module 110 can also communicate with the security server 102 via the network 114 to submit information about objects detected at the client 112.
- The security module 110 submits identifiers of objects detected at the client to the security server 102 and receives reputation scores for the objects in return.
- The reputation score represents an assessment of the trustworthiness of the object. An object with a high reputation score has a good reputation and is unlikely to contain malware. An object with a low reputation score, conversely, has a poor reputation and might contain malware.
- The security module 110 uses the reputation score, along with other factors such as behaviors, to evaluate whether an object at the client 112 is malware.
- The security module 110 can report the outcome of the evaluation to the security server 102.
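As an illustration of how a security module might act on such a score, a simple bucketing of the reputation value could look like this (the 0.3 and 0.7 thresholds are assumptions, not values taken from the patent):

```python
def classify(reputation_score, low=0.3, high=0.7):
    """Map a reputation score in [0, 1] to a rough verdict.

    High scores indicate a good reputation (unlikely to be malware);
    low scores indicate a poor reputation.  Threshold values are
    illustrative only; a real security module would combine the score
    with other signals such as observed behaviors.
    """
    if reputation_score >= high:
        return "likely-legitimate"
    if reputation_score <= low:
        return "possible-malware"
    return "unknown"
```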
- The security server 102 is provided by a security software vendor or other entity.
- The security server 102 can include one or more standard computer systems configured to communicate with clients 112 via the network 114.
- The security server 102 receives reports containing identifiers of objects and other information from the clients 112 via the network 114.
- The security server 102 sends reputation scores for the objects to the clients 112 via the network 114 in response.
- The security server 102 comprises a data store 104 and a reputation module 106.
- The reputation module 106 determines reputation scores of the objects based on factors such as how often the objects are encountered by the clients 112. These reputation scores are stored in the data store 104 by the reputation module 106.
- The reputation module 106 accesses the data store 104 in response to queries or submissions from clients 112 via the network 114.
- An embodiment of the reputation module 106 also determines confidence metrics for the clients 112 .
- The confidence metric for a client 112 indicates an amount of confidence in the veracity of the information received from that client, where a high confidence metric indicates that the information is likely true. For example, a high volume of reports coming from a particular client 112 might indicate that the client is being controlled by a malicious entity that is attempting to influence object reputations by submitting false reports.
- The reputation module 106 can detect such attempts to influence object reputations and lower the confidence metrics of the corresponding clients 112.
- The reputation module 106 can discount the weights of reports from clients 112 having low confidence metrics, and boost the weights of reports from clients having high confidence metrics, when determining the reputations for objects. Therefore, the reputation module 106 is resistant to attempts from malicious entities to manipulate or otherwise “game” the security server 102.
- The network 114 enables communications between the security server 102 and the clients 112.
- In one embodiment, the network 114 uses standard communications technologies and/or protocols and comprises the Internet.
- The network 114 can include links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, digital subscriber line (DSL), asynchronous transfer mode (ATM), InfiniBand, PCI Express Advanced Switching, etc.
- The networking protocols used on the network 114 can include multiprotocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the User Datagram Protocol (UDP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), etc.
- The data exchanged over the network 114 can be represented using technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), etc.
- All or some of the links can be encrypted using conventional encryption technologies such as secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), Internet Protocol security (IPsec), etc.
- The entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above.
- FIG. 2 is a high-level block diagram of a computer 200 for acting as a security server 102 and/or a client 112 according to one embodiment. Illustrated are at least one processor 202 coupled to a chipset 204 . Also coupled to the chipset 204 are a memory 206 , a storage device 208 , a keyboard 210 , a graphics adapter 212 , a pointing device 214 , and a network adapter 216 . A display 218 is coupled to the graphics adapter 212 . In one embodiment, the functionality of the chipset 204 is provided by a memory controller hub 220 and an I/O controller hub 222 . In another embodiment, the memory 206 is coupled directly to the processor 202 instead of the chipset 204 .
- The storage device 208 is any computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device.
- The memory 206 holds instructions and data used by the processor 202.
- The pointing device 214 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 210 to input data into the computer system 200.
- The graphics adapter 212 displays images and other information on the display 218.
- The network adapter 216 couples the computer system 200 to a local or wide area network.
- A computer 200 can have different and/or other components than those shown in FIG. 2.
- The computer 200 can also lack certain illustrated components.
- For example, a computer 200 acting as a security server 102 may lack a keyboard 210, pointing device 214, graphics adapter 212, and/or display 218.
- The storage device 208 can be local and/or remote from the computer 200 (such as embodied within a storage area network (SAN)).
- The computer 200 is adapted to execute computer program modules for providing functionality described herein.
- The term “module” refers to computer program logic utilized to provide the specified functionality.
- A module can be implemented in hardware, firmware, and/or software.
- Program modules are stored on the storage device 208, loaded into the memory 206, and executed by the processor 202.
- Embodiments of the entities described herein can include other and/or different modules than the ones described here.
- The functionality attributed to the modules can be performed by other or different modules in other embodiments.
- This description occasionally omits the term “module” for purposes of clarity and convenience.
- FIG. 3 is a high-level block diagram illustrating a detailed view of the reputation module 106 according to one embodiment.
- The reputation module 106 is incorporated into the security server 102 as a standalone application or as part of another product. As shown in FIG. 3, the reputation module 106 includes multiple modules.
- One of skill in the art will recognize that other embodiments of the reputation module 106 may have different and/or other modules than those described here, and that functionalities may be distributed among the modules in various ways.
- A communications module 302 exchanges information with the security modules 110 of the clients 112 via the network 114.
- The communications module 302 receives information regarding objects, such as files, detected at the clients 112 by the security modules 110.
- For example, the communications module 302 can receive a report from a security module 110 containing an identifier of an object detected at a client 112, along with a request for the reputation score of the object.
- The identifier of the object can be, for example, a hash of the object.
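For instance, such an identifier could be computed as a cryptographic digest of the object's contents. SHA-256 is used here as an assumption; the text says only "a hash":

```python
import hashlib

def object_identifier(path, chunk_size=65536):
    """Return a hex SHA-256 digest of the file at `path`.

    Hashing in fixed-size chunks keeps memory use constant even for
    large files, and the digest uniquely identifies the object's
    contents regardless of its file name or location.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```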
- The communications module 302 interacts with the other modules of the reputation module 106 to determine the reputation score for the identified object and provides the score to the requesting security module 110.
- The reports also include information that the communications module 302 can use to identify the clients 112 that submit them.
- A report includes a unique identifier of the client 112.
- The unique identifier can be a cryptographic key or token that accompanies the report and/or is used to sign or authenticate the report.
- The communications module 302 can also detect other information that can be used to identify the clients 112, such as the IP addresses from which the reports are received.
- The communications module 302 can access information in the data store 104 that correlates the unique identifier with additional information about the client 112, such as its geographic location, age in the system (e.g., time elapsed since the client's first report), other reports it submitted, etc.
- A confidence module 304 determines confidence metrics for clients 112.
- A confidence metric represents an amount of confidence in the veracity of the reports received from a client 112.
- In one embodiment, the confidence metric is a continuous value from zero to one (inclusive) and is stored in the data store 104.
- The confidence metric can be associated with entities other than a client 112.
- For example, the confidence metric can be associated with a particular user of a client 112 or with a particular instance of a security module 110.
- For clarity, this description refers to confidence metrics as being associated with clients 112, but it will be understood that “clients” as used in this sense also refers to other entities with which confidence metrics can be associated.
- The confidence module 304 uses information received from the clients 112 and/or information about the clients received from other sources to calculate the clients' confidence metrics.
- The confidence module 304 uses the clients' unique identifiers to associate and correlate information from the clients. Depending upon the embodiment, the confidence module 304 can use a variety of different factors to determine the confidence metric for a client.
- An embodiment of the confidence module 304 uses the system age of a client as a factor in calculating the client's confidence metric.
- The system age of a client is the elapsed time that the client 112 has been active.
- The system age for a client 112 can be the time elapsed from when the first report was received from the client, from when the security module 110 was installed on the client, or from when the security module was registered with the security server 102.
- A client 112 that is “older” receives a higher confidence metric.
- A client 112 that has only recently started submitting reports may be unreliable or otherwise untrustworthy.
- For example, a malicious entity may forge a large number of legitimate (and new) client identifiers and then submit, in high volume, reports attempting to boost the reputation scores of malware.
- The confidence module 304 can recognize these reports as coming from “young” clients 112 and use this factor to decrease the confidence metrics for these clients.
- The distinction between “young” and “old” clients can be established using a predetermined value. For example, clients 112 having an age of less than six months can be considered “young” while other clients can be considered “old.”
- The confidence module 304 may also calculate a client's age based on characteristics other than elapsed time. In one embodiment, a client's age is measured based on how many reports of “prevalent” objects the client has submitted. Certain software applications, for example, are prevalent, or ubiquitous, among the clients 112: a very large percentage of clients are likely to have at least one of a limited number of web browsing applications installed. The security modules 110 report such prevalent applications to the security server 102 along with other detected objects. The confidence module 304 can treat a client 112 that has submitted more reports of prevalent objects as “older” than clients that have submitted fewer reports of such objects. Such treatment will tend to decrease the confidence metrics of clients 112 that submit reports for only non-prevalent objects and therefore might be attempting to boost reputation scores of malware.
- The confidence module 304 can also calculate a client's age based on the ratio of reports submitted for prevalent objects to reports submitted for “rare” objects, where “rare” objects are objects reported by very few clients. If a client 112 tends to submit more reports for rare objects than for prevalent objects, the client might be attempting to boost reputation scores of malware. Therefore, such clients 112 are treated as being “young” and have decreased confidence metrics. Factoring in a client's age on the system makes “gaming” the reputation system expensive, because a client must be “old” to have a higher confidence metric.
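A hypothetical age factor combining the six-month cutoff with the prevalent-to-rare report mix might look like this (the cutoff matches the example in the text; all other values and the blending rule are assumptions):

```python
from datetime import datetime, timedelta

def age_factor(first_report, now, prevalent_reports, rare_reports,
               young_cutoff=timedelta(days=180)):
    """Heuristic age factor in [0, 1].

    A client is "young" if it has been active for less than the cutoff
    (six months here, per the example in the text) and gets no credit.
    Among "old" clients, a report mix dominated by rare objects, with
    few prevalent ones, further reduces the factor.
    """
    if now - first_report < young_cutoff:
        return 0.0                      # young clients get no credit
    total = prevalent_reports + rare_reports
    if total == 0:
        return 0.5                      # no report mix to judge by
    return prevalent_reports / total    # mostly-rare reporters score low
```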
- An embodiment of the confidence module 304 uses the geographic location of a client 112 as a factor in calculating the client's confidence metric. Because most clients are not in multiple parts of the world at once, reports about the same object submitted by the same client from different geographic locations received close in time are indicative of suspicious behavior. Such reports might indicate, for example, that the client's identifier is being spoofed by multiple malicious actors. Therefore, the confidence module 304 may reduce the client's confidence metric based on receiving such reports. Additionally, different geographic locations may have varying confidence metrics. Thus, a client 112 submitting a report from a particularly suspicious geographic location may have a lower confidence metric than another client submitting an equivalent report from a less suspicious geographic location.
- The confidence module 304 can also use the frequency of reports submitted by a client 112 as a factor in calculating the client's confidence metric. By tracking the patterns of report submissions of clients, the confidence module 304 may detect an abnormal number of submissions by a particular client. The threshold of what constitutes an “abnormal” deviation from the expected submission pattern may vary from one client to another, based on the client's previous submission patterns. For example, a client 112 that has historically submitted only a few reports but suddenly submits a large volume of reports may be compromised. As a result, the confidence module 304 may decrease that client's confidence metric.
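A simple baseline-relative volume check of this kind could be sketched as follows (the multiplier and floor are illustrative assumptions):

```python
def is_abnormal_volume(history, current, multiplier=10, floor=20):
    """Flag a submission count far above the client's own baseline.

    `history` is a list of per-period report counts for the client;
    `current` is the latest period's count.  A burst only counts as
    abnormal if it is both many times the historical mean and above
    an absolute floor, so a quiet client is not flagged for a trivial
    increase.  Both parameters are illustrative, not from the patent.
    """
    if not history:
        return current > floor
    baseline = sum(history) / len(history)
    return current > floor and current > multiplier * baseline
```

The per-client baseline realizes the idea above that the "abnormal" threshold varies from one client to another based on its previous submission patterns.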
- The confidence module 304 can also use the client identifiers to determine confidence metrics.
- The confidence module 304 can identify certain client identifiers as invalid, forged, hacked, or otherwise compromised. This identification can be performed, for example, by correlating the identifier in a received report with a list of identifiers maintained in the data store 104.
- The confidence module 304 may recognize compromised identifiers and utilize that information as a factor in calculating confidence metrics for the affected clients. Thus, a lower confidence metric may be given to a client with an invalid or compromised identifier.
- The confidence module 304 can also use the IP addresses from which reports are received to influence the confidence metrics. Certain IP addresses can be identified as belonging to malicious entities or otherwise associated with low confidence. Therefore, the confidence module 304 can lower the confidence metrics of clients that send reports from such IP addresses.
- Other information received by the confidence module 304 can also influence the confidence metrics of clients. If a client submits malformed or bogus reports, the confidence module 304 has reason to suspect that the client has been compromised and can reduce the client's confidence metric. In embodiments where the client 112 is associated with a credit card account to which the confidence module 304 has access (such as when the user of the client has purchased the security module 110 from the security server 102 using a credit card), the confidence module 304 can use observed credit activity, such as a chargeback request, to influence the confidence metric. In other embodiments, aggregating reports based on one or more factors described above, such as geographic location, may also identify clients submitting suspicious reports based on irregular reporting patterns and influence the client's confidence metric.
- Additional factors and heuristics can also influence the confidence metrics of clients. For example, receiving simultaneous submissions from more than one IP address for a single client (the same client identifier) may indicate that the client has been compromised. Receiving an unusually high rate of submissions from a client, receiving repetitive reports about a few files, and identifying clients that submit a disproportionately large or small number of files (or a disproportionate number of files of a given characteristic, e.g., files that no one else ever submits) may further influence the confidence metrics of those clients. In one embodiment, identifying clients known to send spam is another factor the confidence module 304 uses to influence the confidence metrics of clients.
- An embodiment of the confidence module 304 uses one or more of the factors described above to determine the confidence metric for a client 112 .
- The confidence module 304 can ascribe different weights to different factors. For example, the age of the client 112 can have a significant influence on the confidence metric, while the geographic location of the client can have only a minor influence.
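Such a weighted combination of per-factor scores might be sketched as follows (the factor names and weight values are illustrative; the example weights echo the age-dominant, geography-minor weighting mentioned above):

```python
def confidence_metric(factors, weights):
    """Combine per-factor scores (each in [0, 1]) into one metric.

    `factors` and `weights` are dicts keyed by factor name.  The
    result is a weighted mean, normalized over the factors that are
    actually present so missing factors do not drag the metric down.
    """
    total_weight = sum(weights[name] for name in factors)
    combined = sum(factors[name] * weights[name] for name in factors)
    return combined / total_weight

# Illustrative weights: age dominates, geography is a minor signal.
weights = {"age": 0.6, "frequency": 0.3, "geography": 0.1}
```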
- Some embodiments use multiple confidence metrics, where each confidence metric corresponds to a single factor (e.g., age).
- The confidence module 304 uses the calculated confidence metrics to assign the clients to a whitelist or a blacklist.
- The whitelist lists clients 112 that have high confidence metrics and are therefore presumed trustworthy.
- The blacklist lists clients 112 that have low confidence metrics and are therefore presumed untrustworthy.
- The confidence module 304 uses thresholds to assign clients 112 to the lists. If a client's confidence metric falls below a certain threshold, that client is listed on the blacklist. Likewise, if a client's confidence metric is greater than a certain threshold, the client is listed on the whitelist.
- In some embodiments, the threshold is the same for both lists; in other embodiments, each list has a separate threshold.
- Some embodiments of the confidence module 304 quantize the clients' confidence metrics to zero or one based on predetermined criteria. For example, a “young” client 112 having an age of less than six months, or a client that concurrently submits reports from two different geographic areas, can receive a confidence metric of zero irrespective of the other factors.
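Putting the separate thresholds and the quantization rules together, a hypothetical assignment routine could look like this (field names and threshold values are assumptions):

```python
def assign_list(client, white_threshold=0.8, black_threshold=0.2):
    """Place a client on the whitelist or blacklist by its metric.

    Separate thresholds (illustrative values) leave a middle band of
    clients on neither list.  The quantization rules override the
    computed metric: a "young" client (under six months), or one that
    concurrently reports from two geographic areas, is forced to zero
    regardless of its other factors.
    """
    metric = client["confidence"]
    if client.get("age_days", 0) < 180 or client.get("concurrent_geos", False):
        metric = 0.0                       # quantize to zero
    if metric >= white_threshold:
        return "whitelist"
    if metric <= black_threshold:
        return "blacklist"
    return None                            # on neither list
```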
- An object reputation module 306 calculates reputation scores for objects and stores the reputation scores in the data store 104 .
- A reputation score of an object represents an assessment of the trustworthiness of the object.
- In one embodiment, the reputation score is a number from zero to one (inclusive).
- A low reputation score indicates that the object is likely to be malware, while a high reputation score indicates that the object is likely to be legitimate.
- The object reputation module 306 calculates the reputation scores for objects based at least in part on the reported prevalence of the objects on the clients. Objects that are widely distributed among clients, such as a popular word processing application, are more likely to be legitimate, while objects that are rarely encountered by the clients may be malware. Thus, an object having a high prevalence receives a higher reputation score in one embodiment.
- The reputation scores for objects can also be based on the hygiene scores of the clients 112 on which the objects are primarily found.
- A hygiene score represents an assessment of the trustworthiness of the client 112.
- “Trustworthiness” in the context of hygiene refers to the client's propensity for getting infected by malware, where a client 112 more frequently infected with malware is less trustworthy. For example, an object that is frequently found on clients 112 having high hygiene scores is likely to receive a high reputation score indicating a good reputation. In contrast, an object that is primarily found on clients 112 having low hygiene scores is likely to receive a low reputation score indicating a poor reputation.
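A minimal hygiene-weighted score of this kind could be sketched as follows (hygiene scores are assumed to lie in [0, 1]; the neutral default for an unreported object is an assumption):

```python
def hygiene_weighted_reputation(reporting_clients):
    """Reputation as the mean hygiene of the clients reporting an object.

    `reporting_clients` maps client ids to hygiene scores in [0, 1].
    An object found mostly on high-hygiene (rarely infected) machines
    gets a high score; one found mostly on low-hygiene machines gets a
    low score.
    """
    if not reporting_clients:
        return 0.5          # no evidence either way: neutral (assumption)
    return sum(reporting_clients.values()) / len(reporting_clients)
```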
- Reputation scores can also be based on other factors, such as reputation scores of websites on which objects are found, reputation scores of developers and/or distributors of the objects, and other characteristics such as whether the objects are digitally signed.
- The object reputation module 306 influences the reputation scores for objects based on the confidence metrics of the clients that submitted reports associated with the objects.
- The object reputation module 306 excludes reports from clients 112 having confidence metrics below a threshold when calculating the reputation scores for objects. For example, reports from clients 112 on the blacklist described above can be excluded.
- An embodiment of the object reputation module 306 uses reports from only clients having confidence metrics above a threshold when calculating the reputation scores for objects. For example, only reports from clients 112 on the whitelist described above can be used.
- The object reputation module 306 uses the ratios of low- and/or high-confidence metric clients to other clients reporting an object over a given time period to influence the reputation score for the object.
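The blacklist-style exclusion and whitelist-style inclusion rules can be sketched as a simple filter; the threshold parameters are illustrative:

```python
def filter_reports(reports, exclude_below=None, include_above=None):
    """Select which client reports count toward an object's reputation.

    reports: list of (client_id, confidence_metric) pairs.
    exclude_below: drop reports whose client confidence falls below this
        threshold (blacklist-style exclusion).
    include_above: keep only reports whose client confidence exceeds this
        threshold (whitelist-style inclusion).
    """
    kept = list(reports)
    if exclude_below is not None:
        kept = [(cid, conf) for cid, conf in kept if conf >= exclude_below]
    if include_above is not None:
        kept = [(cid, conf) for cid, conf in kept if conf > include_above]
    return kept
```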
- The object reputation module 306 acts according to the philosophy that an object primarily reported by clients 112 having low confidence metrics should probably have a low reputation score.
- The object reputation module 306 remains flexible enough to enable real-time detection of reputation "gaming."
- The object reputation module 306, in concert with the other modules of the reputation module 106, tracks the confidence metrics of the clients 112 on a per-object basis.
- The object reputation module 306 can determine the number of clients with low confidence metrics that have reported the object and the number of clients with high confidence metrics that have reported the object, where "high" and "low" confidence levels are determined using thresholds. If a sufficient ratio of high-confidence metric clients to other-confidence metric clients (i.e., non-high-confidence metric clients) have reported the object over a given time period, the object reputation module 306 increases the reputation score for the object. In contrast, if a sufficient ratio of low-confidence metric clients to other-confidence metric clients (i.e., non-low-confidence metric clients) have reported the object over the same time period, or a different time period, the object reputation module 306 decreases the reputation score for the object. Therefore, real-time detection of reputation "gaming" is enabled, and the object reputation module 306 may respond quickly to attempts at reputation "gaming" by malicious entities.
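The per-object ratio test described above might look like the following sketch, where the confidence thresholds, the required ratios, and the adjustment step are all illustrative assumptions:

```python
def adjust_for_gaming(score, confidences, high=0.8, low=0.2,
                      high_ratio=0.75, low_ratio=0.75, step=0.1):
    """Nudge an object's reputation score using the confidence mix of the
    clients that reported it within a time window.

    If high-confidence reporters dominate, raise the score; if
    low-confidence reporters dominate (possible "gaming"), lower it.
    """
    if not confidences:
        return score
    n = len(confidences)
    n_high = sum(1 for c in confidences if c >= high)
    n_low = sum(1 for c in confidences if c <= low)
    if n_high / n >= high_ratio:
        return min(1.0, score + step)
    if n_low / n >= low_ratio:
        return max(0.0, score - step)
    return score
```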
- The object reputation module 306 uses the confidence metrics to weight the reports.
- A report having a confidence level of 1.0 can be weighted twice as much as a report having a confidence level of 0.5 when calculating the reputation score for an object.
- Thus, 200 reports from clients having confidence metrics of 0.50 can have the same influence on the reputation score as 100 reports from clients having confidence metrics of 1.0.
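To make the equivalence concrete, a sketch of confidence-weighted aggregation in which each report contributes its client's confidence metric as its weight:

```python
def weighted_report_mass(confidences):
    """Total influence of a batch of reports when each report is weighted
    by the submitting client's confidence metric. A confidence-1.0 report
    counts twice as much as a confidence-0.5 report."""
    return sum(confidences)
```

Under this weighting, 200 reports at confidence 0.50 and 100 reports at confidence 1.0 carry identical mass.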
- One embodiment of the object reputation module 306 uses machine learning to calculate the reputation scores for the objects.
- A statistical machine learning algorithm can use the confidence metrics, prevalence of reports, and other information about the clients 112 as features to build a classifier for determining the reputation scores.
- The classifier can be trained using features for a set of objects for which the actual dispositions are known (i.e., whether the objects are legitimate or malware is known).
- An adjustment module 308 modifies the confidence metrics for clients 112 and reputation scores for objects as the values change over time. Confidence metrics can affect reputation scores, and, in some embodiments, reputation scores can affect confidence metrics. The adjustment module 308 modifies the confidence metrics and reputation scores as additional reports are received by the security server 102 over time. The adjustment module 308 can modify the metrics and scores continuously, on a periodic basis, and/or at other times depending upon the embodiment.
- An object that is primarily found on young clients may receive a low reputation score because it does not appear prevalent. Once the clients are no longer "young," the reports from the clients are no longer disregarded and the object now appears more prevalent.
- The adjustment module 308 consequently increases the reputation score for the object.
- A client 112 with a high confidence metric may become compromised and submit numerous reports for objects that are subsequently found to be malware. In such a case, the adjustment module 308 can adjust the confidence metric for the client 112 downward.
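A sketch of such a downward adjustment; the step sizes and the asymmetric up/down movement are illustrative assumptions:

```python
def adjust_confidence(confidence, verdicts, down_step=0.2, up_step=0.05):
    """Revise a client's confidence metric as the true dispositions of the
    objects it reported become known.

    verdicts: True for each reported object subsequently found to be
    malware, False for each object confirmed legitimate. Bad verdicts
    pull the metric down sharply; good ones nudge it back up slowly.
    """
    for is_malware in verdicts:
        if is_malware:
            confidence = max(0.0, confidence - down_step)
        else:
            confidence = min(1.0, confidence + up_step)
    return confidence
```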
- A training module 310 can generate one or more reputation classifiers used to support machine-learning based calculation of reputation scores.
- The training module generates a reputation classifier based on a dataset of features associated with the clients 112 and the objects. These features can include the confidence metrics for the clients 112, hygiene scores of the clients, reputations of objects, prevalence of objects, etc.
- The reputation classifier is a statistical model which specifies values such as weights or coefficients that are associated with the features used to determine the reputation scores. Suitable types of statistical models for use as the reputation classifier include but are not limited to: neural networks, Bayesian models, regression-based models, and support vector machines (SVMs).
- The reputation classifier can be trained by identifying objects for known cases of malware and legitimate software, and using historical client reports for those objects as ground truths.
- The training module 310 may generate the reputation classifier on a periodic basis or at other times.
- The training module 310 stores the generated reputation classifier for use by the object reputation module 306.
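As a concrete (and deliberately tiny) stand-in for the regression-based models named above, the following sketch trains a logistic-regression classifier on two assumed features per object — prevalence and mean reporter confidence — using known-legitimate and known-malware objects as ground truth. The features, hyperparameters, and function names are illustrative:

```python
import math

def train_reputation_classifier(features, labels, epochs=500, lr=0.5):
    """Fit logistic-regression weights. features: one vector per object
    (values scaled to [0, 1]); labels: 1.0 for known-legitimate objects,
    0.0 for known malware."""
    weights = [0.0] * len(features[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = bias + sum(w * xi for w, xi in zip(weights, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted reputation
            err = p - y
            bias -= lr * err
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights, bias

def classify_reputation(weights, bias, x):
    """Reputation score in (0, 1) for a feature vector x."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))
```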
- FIG. 4 is a flowchart illustrating the operation of the reputation module 106 in determining reputation scores about objects using information from reports received from clients 112 according to one embodiment. It should be understood that these steps are illustrative only. Different embodiments of the reputation module 106 may perform the steps in different orders, omit certain steps, and/or perform additional steps not shown in FIG. 4 .
- The reputation module 106 receives 402 reports submitted by clients 112 about objects.
- The reports identify objects detected at the client and can accompany requests for reputation scores for the objects.
- The reputation module 106 determines 404 information about the clients 112 that submit the reports. As explained above, the information can include an identifier of the client, a hash or other identifier of the object, and other information about the client.
- The reputation module 106 uses the information in the report to determine other information about the client, such as the client's age, geographic location, IP address, etc.
- The reputation module 106 uses the determined information to generate 406 confidence metrics for the clients 112.
- The reputation module 106 generates 408 reputation scores for the objects based, for example, on the prevalence of the objects at the clients.
- The generated reputation scores are influenced by the confidence metrics of the clients.
- The reputation module 106 provides 410 the object reputation scores to the clients 112.
- The clients 112 can use the reputation scores to detect malware at the clients.
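The flow of steps 402-410 can be condensed into a sketch. The age-based confidence heuristic and the weighted-prevalence scoring formula here are illustrative assumptions standing in for the fuller computations described above:

```python
def handle_reports(reports, client_info, mature_age_days=180):
    """Steps 402-410 in miniature: receive reports, derive per-client
    confidence metrics from client information, then score each object by
    the confidence-weighted fraction of clients reporting it.

    reports: mapping of object id -> list of reporting client ids.
    client_info: mapping of client id -> {"age_days": int}.
    """
    # Steps 404-406: determine client information and generate confidence
    # metrics (older clients are trusted more, capped at 1.0).
    confidence = {cid: min(1.0, info["age_days"] / mature_age_days)
                  for cid, info in client_info.items()}
    # Step 408: reputation = confidence-weighted share of reporting clients.
    total_weight = sum(confidence.values()) or 1.0
    scores = {obj: sum(confidence.get(cid, 0.0) for cid in set(reporters)) / total_weight
              for obj, reporters in reports.items()}
    # Step 410: the scores would be provided back to the requesting clients.
    return confidence, scores
```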
- The techniques described above may be applicable to various other types of detection systems, such as spam filters for messaging applications and other mechanisms designed to detect malware that utilize reputation scores of objects and confidence metrics of clients.
Abstract
Reputations of objects are determined by a reputation system using reports from clients identifying the objects. Confidence metrics for the clients are generated using information determined from the reports. Confidence metrics indicate the amounts of confidence in the veracity of the reports. Reputation scores of objects are calculated using the reports from the clients and the confidence metrics for the clients. Confidence metrics and reputation scores are stored in correlation with identifiers for the objects. An object's reputation score is provided to a client in response to a request.
Description
- This application is a continuation of U.S. patent application Ser. No. 12/540,907, filed on Aug. 13, 2009, which is incorporated by reference in its entirety.
- 1. Field of the Invention
- This invention relates generally to computer security and particularly to detecting attempts to manipulate a reputation system for detecting malicious objects.
- 2. Description of the Related Art
- A wide variety of malicious software (malware) can attack modern computers. Malware threats include computer viruses, worms, Trojan horse programs, spyware, adware, crimeware, and phishing websites. Malicious entities sometimes attack servers that store sensitive or confidential data that can be used to the malicious entity's own advantage. Similarly, other computers, including home computers, must be constantly protected from malicious software that can be transmitted when a user communicates with others via electronic mail, when a user downloads new programs or program updates, and in many other situations. The different options and methods available to malicious entities for attack on a computer are numerous.
- Conventional techniques for detecting malware, such as signature string scanning, are becoming less effective. Modern malware is often targeted and delivered to only a relative handful of computers. For example, a Trojan horse program can be designed to target computers in a particular department of a particular enterprise. Such malware might never be encountered by security analysts, and thus the security software might never be configured with signatures for detecting such malware. Mass-distributed malware, in turn, can contain polymorphisms that make every instance of the malware unique. As a result, it is difficult to develop signature strings that reliably detect all instances of the malware.
- Newer techniques for detecting malware involve the use of reputation systems. A reputation system can determine the reputation of a file or other object encountered on a computer in order to assess the likelihood that the object is malware. One way to develop the reputation for an object is to collect reports from networked computers on which the object is found and base the reputation on information within the reports.
- However, because such a reputation system relies on reports from what are essentially unknown parties, it is susceptible to subversion by malicious actors. For example, an entity distributing malware could attempt to “game” the reputation system by submitting false reports indicating that the malware is legitimate. Thus, there is a need for a reputation system that is able to withstand such attempts to subvert its operation.
- The above and other needs are met by a method, computer-readable medium, and computer system for determining a reputation of an object in a reputation system. An embodiment of the method comprises receiving, from clients, reports identifying an object detected at the clients. The method further comprises determining information about the clients from the reports. In addition, the method comprises generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients. The method also comprises calculating, based at least in part on the reports from the clients and the confidence metrics for the clients, a reputation score of the object and storing the reputation score of the object.
- Embodiments of the computer-readable medium store computer program instructions for determining a reputation of an object in a reputation system, the instructions comprising instructions for receiving reports from clients in the reputation system, the reports identifying an object detected at the clients and determining information about the clients from the reports. The instructions further comprise instructions for generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients. The instructions additionally comprise instructions for calculating a reputation score of the object responsive at least in part to the reports from the clients and the confidence metrics for the clients and storing the reputation score of the object.
- Embodiments of the computer system comprise a computer-readable storage medium storing executable computer program instructions comprising instructions for receiving reports from clients in the reputation system, the reports identifying an object detected at the clients and determining information about the clients from the reports. The instructions further comprise instructions for generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients. The instructions additionally comprise instructions for calculating a reputation score of the object responsive at least in part to the reports from the clients and the confidence metrics for the clients and storing the reputation score of the object. The computer system further comprises a processor for executing the computer program instructions.
- The features and advantages described in this disclosure and in the following detailed description are not all-inclusive, and particularly, many additional features and advantages will be apparent to one of ordinary skill in the relevant art in view of the drawings, specification, and claims hereof. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter, resort to the claims being necessary to determine such inventive subject matter.
-
FIG. 1 is a high-level block diagram of a computing environment according to one embodiment of the present invention. -
FIG. 2 is a high-level block diagram of a computer for acting as a security server and/or a client according to one embodiment. -
FIG. 3 is a high-level block diagram illustrating modules within the reputation module according to one embodiment. -
FIG. 4 is a flowchart illustrating the operation of the reputation module in determining reputation scores about objects using information from reports received from clients according to one embodiment. - The figures depict various embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
-
FIG. 1 is a high-level block diagram of a computing environment 100 according to one embodiment. FIG. 1 illustrates a security server 102 connected to a network 114. The network 114 is also connected to multiple clients 112. FIG. 1 and the other figures use like reference numerals to identify like elements. A letter after a reference numeral, such as "112A," indicates that the text refers specifically to the element having that particular reference numeral. A reference numeral in the text without a following letter, such as "112," refers to any or all of the elements in the figures bearing that reference numeral (e.g. "112" in the text refers to reference numerals "112A," "112B," and/or "112C" in the figures). Only three clients 112 are shown in FIG. 1 in order to simplify and clarify the description. Embodiments of the computing environment 100 can have thousands or millions of clients 112, as well as multiple servers 102. In some embodiments, the clients 112 are only connected to the network 114 for a certain period of time or not at all. - The client 112 is an electronic device that can host malicious software. In one embodiment, the client 112 is a conventional computer system executing, for example, a Microsoft Windows-compatible operating system (OS), Apple OS X, and/or a Linux distribution. In another embodiment, the client 112 is another device having computer functionality, such as a personal digital assistant (PDA), mobile telephone, video game system, etc. The client 112 typically stores numerous computer files and/or software applications (collectively referred to as "objects") that can host malicious software.
- Malicious software, sometimes called “malware,” is generally defined as software that executes on the client 112 surreptitiously or that has some surreptitious functionality. Malware can take many forms, such as parasitic viruses that attach to legitimate files, worms that exploit weaknesses in the computer's security in order to infect the computer and spread to other computers, Trojan horse programs that appear legitimate but actually contain hidden malicious code, and spyware that monitors keystrokes and/or other actions on the computer in order to capture sensitive information or display advertisements.
- The client 112 executes a security module 110 for detecting the presence of malware. The security module 110 can be, for example, incorporated into the OS of the computer or part of a separate comprehensive security package. In one embodiment, the security module 110 is provided by the entity that operates the
security server 102. The security module 110 can communicate with the security server 102 via the network 114 in order to download information utilized to detect malicious software. The security module 110 can also communicate with the security server 102 via the network 114 to submit information about objects detected at the client 112. - In one embodiment, the security module 110 submits identifiers of objects detected at the client to the
security server 102 and receives reputation scores for the objects in return. The reputation score represents an assessment of the trustworthiness of the object. An object with a high reputation score has a good reputation and is unlikely to contain malware. An object with a low reputation score, conversely, has a poor reputation and might contain malware. The security module 110 uses the reputation score, along with other factors such as behaviors, to evaluate whether an object at the client 112 is malware. The security module 110 can report the outcome of the evaluation to the security server 102. - The
security server 102 is provided by a security software vendor or other entity. The security server 102 can include one or more standard computer systems configured to communicate with clients 112 via the network 114. The security server 102 receives reports containing identifiers of objects and other information from the clients 112 via the network 114. The security server 102 sends reputation scores for the objects to the clients 112 via the network 114 in response. - In one embodiment, the
security server 102 comprises a data store 104 and a reputation module 106. The reputation module 106 determines reputation scores of the objects based on factors such as how often the objects are encountered by the clients 112. These reputation scores are stored in the data store 104 by the reputation module 106. The reputation module 106 accesses the data store 104 in response to queries or submissions from clients 112 via the network 114. - An embodiment of the
reputation module 106 also determines confidence metrics for the clients 112. The confidence metric for a client 112 indicates an amount of confidence in the veracity of the information received from that client, where a high confidence metric indicates that the information is likely true. For example, a high volume of reports coming from a particular client 112 might indicate that the client is being controlled by a malicious entity that is attempting to influence object reputations by submitting false reports. The reputation module 106 can detect such attempts to influence object reputations and lower the confidence metrics of the corresponding clients 112. The reputation module 106 can discount the weights of reports from clients 112 having low confidence metrics, and boost the weights of reports from clients having high confidence metrics, when determining the reputations for objects. Therefore, the reputation module 106 is resistant to attempts from malicious entities to manipulate or otherwise "game" the security server 102. - The
network 114 enables communications between the security server 102 and the clients 112. In one embodiment, the network 114 uses standard communications technologies and/or protocols and comprises the Internet. Thus, the network 114 can include links using technologies such as Ethernet, 802.11, worldwide interoperability for microwave access (WiMAX), 3G, digital subscriber line (DSL), asynchronous transfer mode (ATM), InfiniBand, PCI Express Advanced Switching, etc. Similarly, the networking protocols used on the network 114 can include multiprotocol label switching (MPLS), the transmission control protocol/Internet protocol (TCP/IP), the User Datagram Protocol (UDP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), etc. The data exchanged over the network 114 can be represented using technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), etc. In addition, all or some of the links can be encrypted using conventional encryption technologies such as secure sockets layer (SSL), transport layer security (TLS), virtual private networks (VPNs), Internet Protocol security (IPsec), etc. In another embodiment, the entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above. -
FIG. 2 is a high-level block diagram of a computer 200 for acting as a security server 102 and/or a client 112 according to one embodiment. Illustrated are at least one processor 202 coupled to a chipset 204. Also coupled to the chipset 204 are a memory 206, a storage device 208, a keyboard 210, a graphics adapter 212, a pointing device 214, and a network adapter 216. A display 218 is coupled to the graphics adapter 212. In one embodiment, the functionality of the chipset 204 is provided by a memory controller hub 220 and an I/O controller hub 222. In another embodiment, the memory 206 is coupled directly to the processor 202 instead of the chipset 204. - The
storage device 208 is any computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 206 holds instructions and data used by the processor 202. The pointing device 214 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 210 to input data into the computer system 200. The graphics adapter 212 displays images and other information on the display 218. The network adapter 216 couples the computer system 200 to a local or wide area network. - As is known in the art, a
computer 200 can have different and/or other components than those shown in FIG. 2. In addition, the computer 200 can lack certain illustrated components. In one embodiment, a computer 200 acting as a security server 102 lacks a keyboard 210, pointing device 214, graphics adapter 212, and/or display 218. Moreover, the storage device 208 can be local and/or remote from the computer 200 (such as embodied within a storage area network (SAN)). - As is known in the art, the
computer 200 is adapted to execute computer program modules for providing functionality described herein. As used herein, the term "module" refers to computer program logic utilized to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on the storage device 208, loaded into the memory 206, and executed by the processor 202. - Embodiments of the entities described herein can include other and/or different modules than the ones described here. In addition, the functionality attributed to the modules can be performed by other or different modules in other embodiments. Moreover, this description occasionally omits the term "module" for purposes of clarity and convenience.
-
FIG. 3 is a high-level block diagram illustrating a detailed view of the reputation module 106 according to one embodiment. In some embodiments, the reputation module 106 is incorporated into the security server 102 as a standalone application or as part of another product. As shown in FIG. 3, the reputation module 106 includes multiple modules. One of skill in the art will recognize that other embodiments of the reputation module 106 may have different and/or other modules than those described here, and that functionalities may be distributed among the modules in various ways. - A
communications module 302 exchanges information with security modules 110 of clients 112 via the network 114. The communications module 302 receives information regarding objects, such as files, detected at the clients 112 by the security modules 110. For example, the communications module 302 can receive a report from a security module 110 containing an identifier of an object detected at a client 112, along with a request for the reputation score of the object. The identifier of the object can be, for example, a hash of the object. The communications module 302 interacts with the other modules of the reputation module 106 to determine the reputation score for the identified object and provides the score to the requesting security module 110. - In one embodiment, the reports also include information that the
communications module 302 can use to identify the clients 112 that submit reports. In one embodiment, a report includes a unique identifier of the client 112. The unique identifier can be a cryptographic key or token that accompanies the report and/or is used to sign or authenticate the report. The communications module 302 can also detect other information that can be used to identify the clients 112, such as the IP addresses from which the reports are received. Depending upon the embodiment, the communications module 302 can access information in the data store 104 that correlates the unique identifier with additional information about the client 112, such as its geographic location, age in the system (e.g., time elapsed since the client's first report), other reports it submitted, etc. - A
confidence module 304 determines confidence metrics for clients 112. As mentioned above, a confidence metric represents an amount of confidence in the veracity of the report received from a client 112. In one embodiment, the confidence metric is a continuous value from zero to one (inclusive) and is stored in the data store 104. Depending upon the embodiment, the confidence metric can be associated with entities other than a client 112. For example, the confidence metric can be associated with a particular user of a client 112 or with a particular instance of a security module 110. For clarity, this description refers to confidence metrics as being associated with clients 112, but it will be understood that "clients" as used in this sense also refers to other entities with which confidence metrics can be associated. - The
confidence module 304 uses information received from the clients 112 and/or information about the clients received from other sources to calculate the clients' confidence metrics. The confidence module 304 uses the clients' unique identifiers to associate and correlate information from the clients. Depending upon the embodiment, the confidence module 304 can use a variety of different factors to determine the confidence metric for a client. - An embodiment of the
confidence module 304 uses the system age of a client as a factor in calculating the client's confidence metric. The system age of a client is the elapsed time that a client 112 has been active. For example, the system age for a client 112 can be the time elapsed from when the first report was received from the client, from when the security module 110 was installed on the client, or from when the security module was registered with the security server 102. - In general, a client 112 that is "older" receives a higher confidence metric. A client 112 that has only recently started submitting reports may be unreliable or otherwise untrustworthy. For example, a malicious entity may forge a large number of legitimate (and new) client identifiers and then submit, in high volume, reports attempting to boost the reputation scores of malware. The
confidence module 304 can recognize these reports as coming from "young" clients 112 and use this factor to decrease the confidence metrics for these clients. The distinction between "young" and "old" clients can be established using a predetermined value. For example, clients 112 having an age of less than six months can be considered "young" while other clients can be considered "old." - The
confidence module 304 may also calculate a client's age based on characteristics other than elapsed time. In one embodiment, a client's age is measured based on how many reports of "prevalent" objects the client has submitted. Certain software applications, for example, are prevalent, or ubiquitous, among the clients 112. For example, a very large percentage of clients are likely to have at least one of a limited number of web browsing applications installed. The security modules 110 report such prevalent applications to the security server 102 along with other detected objects. The confidence module 304 can treat a client 112 that has submitted more reports of prevalent objects as "older" than clients that have submitted fewer reports of such objects. Such treatment will tend to decrease the confidence metrics of clients 112 that submit reports for only non-prevalent objects and therefore might be attempting to boost reputation scores of malware. - Similarly, the
confidence module 304 can calculate a client's age based on a ratio of reports submitted for prevalent objects to reports submitted for "rare" objects, where "rare" objects are objects reported by very few clients. If a client 112 tends to submit more reports for rare objects than for prevalent objects, the client might be attempting to boost reputation scores of malware. Therefore, such clients 112 are treated as being "young" and will have decreased confidence metrics. Factoring in a client's age on the system makes "gaming" the reputation system expensive because a client must be "old" to have a higher confidence metric.
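The two age measures above — elapsed time and the prevalent-to-rare report mix — might be combined along these lines. The six-month maturity horizon matches the example in the text; the combination rule and other constants are illustrative assumptions:

```python
def age_based_confidence(days_active, prevalent_reports, rare_reports,
                         mature_days=180):
    """Confidence factor in [0, 1] derived from a client's "age".

    Combines the elapsed-time measure ("young" clients, active less than
    about six months, score low) with the report-mix measure (clients
    reporting mostly rare objects look "young" regardless of elapsed
    time). Taking the minimum means a client must look old by both
    measures to earn high confidence.
    """
    time_component = min(1.0, days_active / mature_days)
    total = prevalent_reports + rare_reports
    mix_component = prevalent_reports / total if total else 0.5
    return min(time_component, mix_component)
```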
- An embodiment of the confidence module 304 uses the geographic location of a client 112 as a factor in calculating the client's confidence metric. Because most clients are not in multiple parts of the world at once, reports about the same object submitted by the same client from different geographic locations received close in time are indicative of suspicious behavior. Such reports might indicate, for example, that the client's identifier is being spoofed by multiple malicious actors. Therefore, the confidence module 304 may reduce the client's confidence metric based on receiving such reports. Additionally, different geographic locations may have varying confidence metrics. Thus, a client 112 submitting a report from a particularly suspicious geographic location may have a lower confidence metric than another client submitting an equivalent report from a less suspicious geographic location.
confidence module 304 can also use the frequency of reports submitted by a client 112 as a factor in calculating the client's confidence metric. By tracking the patterns of report submissions of clients, the confidence module 304 may detect an abnormal number of submissions by a particular client. The threshold of what constitutes an “abnormal” deviation from the expected submission pattern may vary from one client to another, based on the client's previous submission patterns. For example, a client 112 that has historically submitted only a few reports but suddenly submits a large volume of reports may be compromised. As a result, the confidence module 304 may decrease that client's confidence metric. - The
confidence module 304 can also use the client identifiers to determine confidence metrics. The confidence module 304 can identify certain client identifiers as invalid, forged, hacked, or otherwise compromised. This identification can be performed, for example, by correlating the identifier in a received report with a list of identifiers maintained in the data store 104. The confidence module 304 may recognize compromised identifiers and utilize that information as a factor in calculating confidence metrics for the affected clients. Thus, a lower confidence metric may be given to a client with an invalid, or compromised, identifier. - Further, the
confidence module 304 can use IP addresses of clients from which reports are received to influence the confidence metrics. Certain IP addresses can be identified as belonging to malicious entities or otherwise associated with low confidence. Therefore, the confidence module 304 can lower the confidence metrics of clients that send reports from certain IP addresses. - Other information received by the
confidence module 304 can also influence the confidence metrics of clients. If a client submits malformed or bogus reports, the confidence module 304 has reason to suspect that the client has been compromised and can reduce the client's confidence metric. In embodiments where the client 112 is associated with a credit card account to which the confidence module 304 has access (such as when the user of the client has purchased the security module 110 from the security server 102 using a credit card), the confidence module 304 can use observed credit activity, such as a chargeback request, to influence the confidence metric. In other embodiments, aggregating reports based on one or more factors described above, such as geographic location, may also identify clients with irregular reporting patterns, which in turn influences those clients' confidence metrics. - Additional factors and heuristics received by the
confidence module 304 may be used to influence the confidence metrics of clients. For example, receiving simultaneous submissions from more than one IP address by one client (the same client identifier) may indicate that the client has been compromised. Receiving an unusually high rate of submissions from a client, receiving repetitive reports about a few files from clients, and identifying clients that submit a disproportionately large or small number of files (or a disproportionate number of files of a given characteristic—e.g., the client seems to submit files that no one else ever submits) may further influence the confidence metrics of those clients. In addition, identifying clients known to send spam is another factor used by the confidence module 304 to influence the confidence metrics of clients, in one embodiment. - An embodiment of the
confidence module 304 uses one or more of the factors described above to determine the confidence metric for a client 112. In embodiments where the confidence module 304 uses multiple factors, the confidence module can ascribe different weights to different factors. For example, the age of the client 112 can have a significant influence on the confidence metric, while the geographic location of the client can have only a minor influence. In addition, some embodiments use multiple confidence metrics, where each confidence metric corresponds to a single factor (e.g., age). - In one embodiment, the
confidence module 304 uses the calculated confidence metrics to assign the clients to a whitelist or blacklist. Generally, the whitelist lists clients 112 that have high confidence metrics and are therefore presumed trustworthy. The blacklist, in contrast, lists clients 112 that have low confidence metrics and are therefore presumed untrustworthy. In some embodiments, the confidence module 304 uses thresholds to assign clients 112 to the lists. If a client's confidence metric falls below a certain threshold, that client is listed on the blacklist. Likewise, if a client's confidence metric is greater than a certain threshold, the client is listed on the whitelist. In some embodiments, the threshold is the same for both lists; in other embodiments, each list has a separate threshold. - Similarly, some embodiments of the
confidence module 304 quantize the clients' confidence metrics to zero or one based on predetermined criteria. For example, a “young” client 112 having an age of less than six months, or a client that concurrently submits reports from two different geographic areas, can receive a confidence metric of zero irrespective of the other factors. - An
object reputation module 306 calculates reputation scores for objects and stores the reputation scores in the data store 104. As mentioned above, a reputation score of an object represents an assessment of the trustworthiness of the object. In one embodiment, the reputation score is a number from zero to one (inclusive). A low reputation score indicates that the object is likely to be malware, while a high reputation score indicates that the object is likely to be legitimate. - In one embodiment, the
object reputation module 306 calculates the reputation scores for objects based at least in part on the reported prevalence of the objects on the clients. Objects that are widely distributed among clients, such as a popular word processing application, are more likely to be legitimate, while objects that are rarely encountered by the clients may be malware. Thus, an object having a high prevalence receives a higher reputation score in one embodiment. - In some embodiments, the reputation scores for objects can also be based on the hygiene scores of the clients 112 on which the objects are primarily found. A hygiene score represents an assessment of the trustworthiness of the client 112. “Trustworthiness” in the context of hygiene refers to the client's propensity for getting infected by malware, where a client 112 more frequently infected with malware is less trustworthy. For example, an object that is frequently found on clients 112 having high hygiene scores is likely to receive a high reputation score indicating a good reputation. In contrast, an object that is primarily found on clients 112 having low hygiene scores is likely to receive a low reputation score indicating a poor reputation. Reputation scores can also be based on other factors, such as reputation scores of websites on which objects are found, reputation scores of developers and/or distributors of the objects, and other characteristics such as whether the objects are digitally signed.
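As a rough illustration of how prevalence and client hygiene might combine into a reputation score, consider the sketch below. The linear blend and its `prevalence_weight` parameter are assumptions made for illustration, not a method specified in this description.

```python
def reputation_from_reports(prevalence, reporter_hygiene, prevalence_weight=0.5):
    """Blend object prevalence and reporter hygiene into a reputation score.

    prevalence: fraction of clients reporting the object, in [0, 1].
    reporter_hygiene: hygiene scores (in [0, 1]) of the clients on which
    the object was found.
    prevalence_weight: assumed blend factor between the two signals.
    """
    if reporter_hygiene:
        mean_hygiene = sum(reporter_hygiene) / len(reporter_hygiene)
    else:
        mean_hygiene = 0.0  # object seen nowhere: no hygiene evidence
    # Higher prevalence and higher reporter hygiene both push the
    # score toward 1.0 (likely legitimate).
    return prevalence_weight * prevalence + (1 - prevalence_weight) * mean_hygiene
```

Under this toy model, a widely distributed object found mostly on high-hygiene clients scores near 1.0, while a rare object found only on low-hygiene clients scores near 0.0.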
- In addition, the
object reputation module 306 influences the reputation scores for objects based on the confidence metrics of the clients that submitted reports associated with the object. In one embodiment, the object reputation module 306 excludes reports from clients 112 having confidence metrics below a threshold when calculating the reputation scores for objects. For example, reports from clients 112 on the blacklist described above can be excluded. Similarly, an embodiment of the object reputation module 306 uses only reports from clients having confidence metrics above a threshold when calculating the reputation scores for objects. For example, only reports from clients 112 on the whitelist described above can be used. - In another embodiment, the
object reputation module 306 uses the ratios of low- and/or high-confidence metric clients to other clients reporting an object over a given time period to influence the reputation score for the object. In this embodiment, the object reputation module 306 acts according to the philosophy that an object primarily reported by clients 112 having low confidence metrics should probably have a low reputation score. At the same time, the object reputation module 306 remains flexible enough to enable real-time detection of reputation “gaming.” For this embodiment, the object reputation module 306 in concert with the other modules of the reputation module 106 tracks the confidence metrics of the clients 112 on a per-object basis. - For a given object, the
object reputation module 306 can determine the number of clients with low confidence metrics that have reported the object and the number of clients with high confidence metrics that have reported the object, where “high” and “low” confidence levels are determined using thresholds. If a sufficient ratio of high-confidence metric clients to other-confidence metric clients (i.e., non-high-confidence metric clients) have reported the object over a given time period, the object reputation module 306 increases the reputation score for the object. In contrast, if a sufficient ratio of low-confidence metric clients to other-confidence metric clients have reported the object over the same time period, or a different time period, the object reputation module 306 decreases the reputation score for the object. Therefore, real-time detection of reputation “gaming” is enabled, and the object reputation module 306 may respond quickly to attempts by malicious entities to “game” object reputations. - In one embodiment, the
object reputation module 306 uses the confidence metrics to weight the reports. Thus, a report having a confidence level of 1.0 can be weighted twice as much as a report having a confidence level of 0.5 when calculating the reputation score for an object. Said another way, 200 reports from clients having confidence metrics of 0.50 can have the same influence on the reputation score as 100 reports from clients having confidence metrics of 1.0. - One embodiment of the
object reputation module 306 uses machine learning to calculate the reputation scores for the objects. A statistical machine learning algorithm can use the confidence metrics, prevalence of reports, and other information about the clients 112 as features to build a classifier for determining the reputation scores. The classifier can be trained using features for a set of objects for which the actual dispositions are known (i.e., whether the objects are legitimate or malware is known). - An
adjustment module 308 modifies the confidence metrics for clients 112 and reputation scores for objects as the values change over time. Confidence metrics can affect reputation scores, and, in some embodiments, reputation scores can affect confidence metrics. The adjustment module 308 modifies the confidence metrics and reputation scores as additional reports are received by the security server 102 over time. The adjustment module 308 can modify the metrics and scores continuously, on a periodic basis, and/or at other times depending upon the embodiment. - For example, in an embodiment where reports from young clients are disregarded (e.g., clients less than six months old receive confidence metrics of zero), an object that is primarily found on young clients may receive a low reputation score because it does not appear prevalent. Once the clients are no longer “young,” the reports from the clients are no longer disregarded and the object now appears more prevalent. The
adjustment module 308 consequently increases the reputation score for the object. Similarly, a client 112 with a high confidence metric may become compromised and submit numerous reports for objects that are subsequently found to be malware. In such a case, the adjustment module 308 can adjust the confidence metric for the client 112 downward. - A
training module 310 can generate one or more reputation classifiers used to support machine-learning based calculation of reputation scores. In one embodiment, the training module generates a reputation classifier based on a dataset of features associated with the clients 112 and the objects. These features can include the confidence metrics for the clients 112, hygiene scores of the clients, reputations of objects, prevalence of objects, etc. The reputation classifier is a statistical model which specifies values such as weights or coefficients that are associated with the features used to determine the reputation scores. Suitable types of statistical models for use as the reputation classifier include but are not limited to: neural networks, Bayesian models, regression-based models, and support vector machines (SVMs). The reputation classifier can be trained by identifying objects for known cases of malware and legitimate software, and using historical client reports for those objects as ground truths. Depending on the embodiment, the training module 310 may generate the reputation classifier on a periodic basis or at other times. The training module 310 stores the generated reputation classifier for use by the object reputation module 306. -
FIG. 4 is a flowchart illustrating the operation of the reputation module 106 in determining reputation scores about objects using information from reports received from clients 112 according to one embodiment. It should be understood that these steps are illustrative only. Different embodiments of the reputation module 106 may perform the steps in different orders, omit certain steps, and/or perform additional steps not shown in FIG. 4 . - As shown in
FIG. 4 , the reputation module 106 receives 402 reports submitted by clients 112 about objects. The reports identify objects detected at the clients and can accompany requests for reputation scores for the objects. The reputation module 106 determines 404 information about the clients 112 that submit the reports. As explained above, the information can include an identifier of the client, a hash or other identifier of the object, and other information about the client. The reputation module 106 uses the information in the report to determine other information about the client, such as the client's age, geographic location, IP address, etc. The reputation module 106 uses the determined information to generate 406 confidence metrics for the clients 112. The reputation module 106 generates 408 reputation scores for the objects based, for example, on the prevalence of the objects at the clients. The generated reputation scores are influenced by the confidence metrics of the clients. The reputation module 106 provides 410 the object reputation scores to the clients 112. The clients 112 can use the reputation scores to detect malware at the clients. - The techniques described above may be applicable to various other types of detection systems, such as spam filters for messaging applications and other mechanisms designed to detect malware that utilize reputation scores of objects and confidence metrics of clients.
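The flow of FIG. 4 (receive 402, determine 404, generate confidence metrics 406, generate reputation scores 408, provide 410) can be condensed into a minimal end-to-end sketch. The report field names, the age-only confidence rule, and the normalization below are assumptions made for illustration, not the claimed method.

```python
def handle_reports(reports):
    """Steps 402-410 in miniature.

    reports: dicts with 'client_id', 'object_hash', and
    'client_age_months' (hypothetical field names). Returns a
    reputation score per object, computed as confidence-weighted
    prevalence normalized over all reporting clients.
    """
    # 404/406: determine client information and generate confidence
    # metrics (here, only the six-month age rule from the text).
    confidence = {}
    for r in reports:
        confidence[r["client_id"]] = 0.0 if r["client_age_months"] < 6 else 1.0
    # 408: generate reputation scores from confidence-weighted prevalence,
    # so reports from low-confidence clients contribute nothing.
    weights = {}
    for r in reports:
        h = r["object_hash"]
        weights[h] = weights.get(h, 0.0) + confidence[r["client_id"]]
    total = sum(confidence.values()) or 1.0
    # 410: these scores would then be provided back to the clients.
    return {h: w / total for h, w in weights.items()}
```

For example, an object reported by one "old" client and one "young" client ends up with the same score as an object reported by a single "old" client, because the young client's report is disregarded.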
- The foregoing description of the embodiments of the invention has been presented for the purpose of illustration; it is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible in light of the above disclosure.
- Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the invention be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments of the invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the following claims.
Claims (20)
1. A method of using a computer to determine a reputation of an object in a reputation system, comprising:
receiving reports from clients in the reputation system, the reports identifying an object detected at the clients;
determining a prevalence of the object on the clients in the reputation system based on the reports received from the clients;
determining information about the clients from the reports received from the clients;
generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients, wherein higher confidence metrics for the clients indicate that information in reports received from the clients is more likely to be true;
calculating a reputation score of the object responsive at least in part to the reports received from the clients, the prevalence of the object, and the confidence metrics for the clients, wherein a high prevalence of the object on the clients causes the object to receive a higher reputation score indicating that the object is unlikely to contain malicious software; and
storing the reputation score of the object.
2. The method of claim 1 , wherein a report from a client includes a request for the reputation score of the object and further comprising:
providing the reputation score of the object to the client.
3. The method of claim 1 , wherein the high prevalence of the object on the clients indicates a likelihood that the object is legitimate.
4. The method of claim 1 , wherein the determined information about the clients includes a geographic location of a client in the reputation system, and wherein a confidence metric for the client is based at least in part on the geographic location of the client.
5. The method of claim 1 , wherein the determined information about the clients includes an expected frequency submission pattern for reports submitted by a client, and wherein a confidence metric for the client is based at least in part on whether a frequency of reports submitted by the client deviates from the expected submission pattern.
6. The method of claim 1 , wherein calculating the reputation score of the object comprises:
using a confidence metric threshold to identify clients having low confidence metrics;
determining a ratio of clients having low confidence metrics that submitted reports identifying the object to all clients that submitted reports identifying the object; and
calculating the reputation score of the object responsive at least in part to the determined ratio.
7. The method of claim 1 , wherein calculating the reputation score of the object comprises:
using a confidence metric threshold to identify clients having high confidence metrics;
determining a ratio of clients having high confidence metrics that submitted reports identifying the object to all clients that submitted reports identifying the object; and
calculating the reputation score of the object responsive at least in part to the determined ratio.
8. The method of claim 1 , wherein calculating the reputation score of the object comprises:
using a statistical machine learning algorithm to calculate the reputation score of the object.
9. A non-transitory computer-readable storage medium storing executable computer program instructions for determining a reputation of an object in a reputation system, the computer program instructions comprising instructions for:
receiving reports from clients in the reputation system, the reports identifying an object detected at the clients;
determining a prevalence of the object on the clients in the reputation system based on the reports received from the clients;
determining information about the clients from the reports received from the clients;
generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients, wherein higher confidence metrics for the clients indicate that information in reports received from the clients is more likely to be true;
calculating a reputation score of the object responsive at least in part to the reports received from the clients, the prevalence of the object, and the confidence metrics for the clients, wherein a high prevalence of the object on the clients causes the object to receive a higher reputation score indicating that the object is unlikely to contain malicious software; and
storing the reputation score of the object.
10. The non-transitory computer-readable storage medium of claim 9 , wherein a report from a client includes a request for the reputation score of the object and the computer program instructions further comprise:
providing the reputation score of the object to the client.
11. The non-transitory computer-readable storage medium of claim 9 , wherein the high prevalence of the object on the clients indicates a likelihood that the object is legitimate.
12. The non-transitory computer-readable storage medium of claim 9 , wherein the determined information about the clients includes a geographic location of a client in the reputation system, and wherein a confidence metric for the client is based at least in part on the geographic location of the client.
13. The non-transitory computer-readable storage medium of claim 9 , wherein the determined information about the clients includes an expected frequency submission pattern for reports submitted by a client, and wherein a confidence metric for the client is based at least in part on whether a frequency of reports submitted by the client deviates from the expected submission pattern.
14. The non-transitory computer-readable storage medium of claim 9 , wherein calculating the reputation score of the object comprises:
using a confidence metric threshold to identify clients having low confidence metrics;
determining a ratio of clients having low confidence metrics that submitted reports identifying the object to all clients that submitted reports identifying the object; and
calculating the reputation score of the object responsive at least in part to the determined ratio.
15. The non-transitory computer-readable storage medium of claim 9 , wherein calculating the reputation score of the object comprises:
using a confidence metric threshold to identify clients having high confidence metrics;
determining a ratio of clients having high confidence metrics that submitted reports identifying the object to all clients that submitted reports identifying the object; and
calculating the reputation score of the object responsive at least in part to the determined ratio.
16. A computer system for determining a reputation of an object in a reputation system, the computer system comprising:
a non-transitory computer-readable storage medium storing executable computer program instructions, the computer program instructions comprising instructions for:
receiving reports from clients in the reputation system, the reports identifying an object detected at the clients;
determining a prevalence of the object on the clients in the reputation system based on the reports received from the clients;
determining information about the clients from the reports received from the clients;
generating confidence metrics for the clients responsive to the determined information about the clients, the confidence metrics indicating amounts of confidence in the veracity of the reports received from the clients, wherein higher confidence metrics for the clients indicate that information in reports received from the clients is more likely to be true;
calculating a reputation score of the object responsive at least in part to the reports received from the clients, the prevalence of the object, and the confidence metrics for the clients, wherein a high prevalence of the object on the clients causes the object to receive a higher reputation score indicating that the object is unlikely to contain malicious software; and
storing the reputation score of the object.
17. The computer system of claim 16 , wherein the determined information about the clients includes a geographic location of a client in the reputation system, and wherein a confidence metric for the client is based at least in part on the geographic location of the client.
18. The computer system of claim 16 , wherein the determined information about the clients includes an expected frequency submission pattern for reports submitted by a client, and wherein a confidence metric for the client is based at least in part on whether a frequency of reports submitted by the client deviates from the expected submission pattern.
19. The computer system of claim 16 , wherein calculating the reputation score of the object comprises:
using a confidence metric threshold to identify clients having low confidence metrics;
determining a ratio of clients having low confidence metrics that submitted reports identifying the object to all clients that submitted reports identifying the object; and
calculating the reputation score of the object responsive at least in part to the determined ratio.
20. The computer system of claim 16 , wherein calculating the reputation score of the object comprises:
using a confidence metric threshold to identify clients having high confidence metrics;
determining a ratio of clients having high confidence metrics that submitted reports identifying the object to all clients that submitted reports identifying the object; and
calculating the reputation score of the object responsive at least in part to the determined ratio.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/731,826 US20150269379A1 (en) | 2009-08-13 | 2015-06-05 | Using confidence about user intent in a reputation system |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/540,907 US9081958B2 (en) | 2009-08-13 | 2009-08-13 | Using confidence about user intent in a reputation system |
US14/731,826 US20150269379A1 (en) | 2009-08-13 | 2015-06-05 | Using confidence about user intent in a reputation system |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/540,907 Continuation US9081958B2 (en) | 2009-08-13 | 2009-08-13 | Using confidence about user intent in a reputation system |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150269379A1 true US20150269379A1 (en) | 2015-09-24 |
Family
ID=42829476
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/540,907 Expired - Fee Related US9081958B2 (en) | 2009-08-13 | 2009-08-13 | Using confidence about user intent in a reputation system |
US14/731,826 Abandoned US20150269379A1 (en) | 2009-08-13 | 2015-06-05 | Using confidence about user intent in a reputation system |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/540,907 Expired - Fee Related US9081958B2 (en) | 2009-08-13 | 2009-08-13 | Using confidence about user intent in a reputation system |
Country Status (6)
Country | Link |
---|---|
US (2) | US9081958B2 (en) |
EP (1) | EP2465071A1 (en) |
JP (1) | JP5599884B2 (en) |
CN (1) | CN102656587B (en) |
CA (1) | CA2763201C (en) |
WO (1) | WO2011019720A1 (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180034835A1 (en) * | 2016-07-26 | 2018-02-01 | Microsoft Technology Licensing, Llc | Remediation for ransomware attacks on cloud drive folders |
US10250623B1 (en) * | 2017-12-11 | 2019-04-02 | Malwarebytes, Inc. | Generating analytical data from detection events of malicious objects |
US10873456B1 (en) * | 2019-05-07 | 2020-12-22 | LedgerDomain, LLC | Neural network classifiers for block chain data structures |
US11102239B1 (en) * | 2017-11-13 | 2021-08-24 | Twitter, Inc. | Client device identification on a network |
US11736290B1 (en) | 2022-11-07 | 2023-08-22 | Ledgerdomain Inc. | Management of recipient credentials leveraging private keys on keystores read by provisioned devices |
US11741215B1 (en) | 2022-11-07 | 2023-08-29 | Ledgerdomain Inc. | Recipient credentialing leveraging private keys on keystores read by provisioned devices |
US11741216B1 (en) | 2022-11-07 | 2023-08-29 | Ledgerdomain Inc. | Credential revocation leveraging private keys on keystores read by provisioned devices |
US11769577B1 (en) | 2020-01-15 | 2023-09-26 | Ledgerdomain Inc. | Decentralized identity authentication framework for distributed data |
US11829510B2 (en) | 2020-01-15 | 2023-11-28 | Ledgerdomain Inc. | Secure messaging in a machine learning blockchain network |
US11848754B1 (en) | 2022-11-07 | 2023-12-19 | Ledgerdomain Inc. | Access delegation leveraging private keys on keystores read by provisioned devices |
US12105842B1 (en) | 2020-01-15 | 2024-10-01 | Ledgerdomain Inc. | Verifiable credentialling and message content provenance authentication |
US12141267B1 (en) | 2023-08-28 | 2024-11-12 | Ledgerdomain Inc. | Recipient credentialing leveraging private keys on keystores read by provisioned devices |
Families Citing this family (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8190868B2 (en) | 2006-08-07 | 2012-05-29 | Webroot Inc. | Malware management through kernel detection |
US8250657B1 (en) | 2006-12-29 | 2012-08-21 | Symantec Corporation | Web site hygiene-based computer security |
US8312536B2 (en) | 2006-12-29 | 2012-11-13 | Symantec Corporation | Hygiene-based computer security |
US8499063B1 (en) | 2008-03-31 | 2013-07-30 | Symantec Corporation | Uninstall and system performance based software application reputation |
US8595282B2 (en) | 2008-06-30 | 2013-11-26 | Symantec Corporation | Simplified communication of a reputation score for an entity |
US8413251B1 (en) | 2008-09-30 | 2013-04-02 | Symantec Corporation | Using disposable data misuse to determine reputation |
US8838976B2 (en) * | 2009-02-10 | 2014-09-16 | Uniloc Luxembourg S.A. | Web content access using a client device identifier |
US8904520B1 (en) | 2009-03-19 | 2014-12-02 | Symantec Corporation | Communication-based reputation system |
US8381289B1 (en) | 2009-03-31 | 2013-02-19 | Symantec Corporation | Communication-based host reputation system |
US11489857B2 (en) | 2009-04-21 | 2022-11-01 | Webroot Inc. | System and method for developing a risk profile for an internet resource |
US8800030B2 (en) * | 2009-09-15 | 2014-08-05 | Symantec Corporation | Individualized time-to-live for reputation scores of computer files |
US9082128B2 (en) * | 2009-10-19 | 2015-07-14 | Uniloc Luxembourg S.A. | System and method for tracking and scoring user activities |
US8341745B1 (en) | 2010-02-22 | 2012-12-25 | Symantec Corporation | Inferring file and website reputations by belief propagation leveraging machine reputation |
US8510836B1 (en) * | 2010-07-06 | 2013-08-13 | Symantec Corporation | Lineage-based reputation system |
US8935785B2 (en) * | 2010-09-24 | 2015-01-13 | Verisign, Inc | IP prioritization and scoring system for DDoS detection and mitigation |
US9122877B2 (en) | 2011-03-21 | 2015-09-01 | Mcafee, Inc. | System and method for malware and network reputation correlation |
US8838992B1 (en) * | 2011-04-28 | 2014-09-16 | Trend Micro Incorporated | Identification of normal scripts in computer systems |
US9118702B2 (en) * | 2011-05-31 | 2015-08-25 | Bce Inc. | System and method for generating and refining cyber threat intelligence data |
US9106680B2 (en) | 2011-06-27 | 2015-08-11 | Mcafee, Inc. | System and method for protocol fingerprinting and reputation correlation |
AU2012100459B4 (en) | 2011-08-15 | 2012-11-22 | Uniloc Usa, Inc. | Personal control of personal information |
US9223978B2 (en) | 2011-10-28 | 2015-12-29 | Confer Technologies, Inc. | Security policy deployment and enforcement system for the detection and control of polymorphic and targeted malware |
US8881273B2 (en) * | 2011-12-02 | 2014-11-04 | Uniloc Luxembourg, S.A. | Device reputation management |
AU2012100470B4 (en) * | 2012-02-15 | 2012-11-29 | Uniloc Usa, Inc. | Anonymous whistle blower system with reputation reporting of anonymous whistle blowers |
AU2012100464B4 (en) | 2012-02-20 | 2012-11-29 | Uniloc Usa, Inc. | Computer-based comparison of human individuals |
US9311650B2 (en) * | 2012-02-22 | 2016-04-12 | Alibaba Group Holding Limited | Determining search result rankings based on trust level values associated with sellers |
US8931043B2 (en) | 2012-04-10 | 2015-01-06 | Mcafee, Inc. | System and method for determining and using local reputations of users and hosts to protect information in a network environment |
US9152784B2 (en) | 2012-04-18 | 2015-10-06 | Mcafee, Inc. | Detection and prevention of installation of malicious mobile applications |
US9124472B1 (en) | 2012-07-25 | 2015-09-01 | Symantec Corporation | Providing file information to a client responsive to a file download stability prediction |
AU2014302603A1 (en) | 2013-06-24 | 2016-01-07 | Cylance Inc. | Automated system for generative multimodel multiclass classification and similarity analysis using machine learning |
US9065849B1 (en) * | 2013-09-18 | 2015-06-23 | Symantec Corporation | Systems and methods for determining trustworthiness of software programs |
US10237303B2 (en) | 2013-09-29 | 2019-03-19 | Mcafee, Llc | Prevalence-based reputations |
US9319423B2 (en) | 2013-11-04 | 2016-04-19 | At&T Intellectual Property I, L.P. | Malware and anomaly detection via activity recognition based on sensor data |
US9262296B1 (en) | 2014-01-31 | 2016-02-16 | Cylance Inc. | Static feature extraction from structured files |
US8930916B1 (en) | 2014-01-31 | 2015-01-06 | Cylance Inc. | Generation of API call graphs from static disassembly |
US20150304343A1 (en) | 2014-04-18 | 2015-10-22 | Intuit Inc. | Method and system for providing self-monitoring, self-reporting, and self-repairing virtual assets in a cloud computing environment |
EP3103070B1 (en) | 2014-02-07 | 2023-09-13 | Cylance Inc. | Application execution control utilizing ensemble machine learning for discernment |
US9866581B2 (en) | 2014-06-30 | 2018-01-09 | Intuit Inc. | Method and system for secure delivery of information to computing environments |
US10757133B2 (en) | 2014-02-21 | 2020-08-25 | Intuit Inc. | Method and system for creating and deploying virtual assets |
US10121007B2 (en) | 2014-02-21 | 2018-11-06 | Intuit Inc. | Method and system for providing a robust and efficient virtual asset vulnerability management and verification service |
US11294700B2 (en) | 2014-04-18 | 2022-04-05 | Intuit Inc. | Method and system for enabling self-monitoring virtual assets to correlate external events with characteristic patterns associated with the virtual assets |
US9323924B1 (en) * | 2014-05-09 | 2016-04-26 | Symantec Corporation | Systems and methods for establishing reputations of files |
US9794341B2 (en) * | 2014-06-30 | 2017-10-17 | Sandisk Technologies Llc | Data storage verification in distributed storage system |
US9118714B1 (en) * | 2014-07-23 | 2015-08-25 | Lookingglass Cyber Solutions, Inc. | Apparatuses, methods and systems for a cyber threat visualization and editing user interface |
US9313218B1 (en) * | 2014-07-23 | 2016-04-12 | Symantec Corporation | Systems and methods for providing information identifying the trustworthiness of applications on application distribution platforms |
CN105376265B (en) * | 2014-07-24 | 2019-04-02 | 阿里巴巴集团控股有限公司 | Method and device for using network-exhaustible resources |
US10102082B2 (en) | 2014-07-31 | 2018-10-16 | Intuit Inc. | Method and system for providing automated self-healing virtual assets |
US9798883B1 (en) | 2014-10-06 | 2017-10-24 | Exabeam, Inc. | System, method, and computer program product for detecting and assessing security risks in a network |
US10083295B2 (en) * | 2014-12-23 | 2018-09-25 | Mcafee, Llc | System and method to combine multiple reputations |
US9465940B1 (en) | 2015-03-30 | 2016-10-11 | Cylance Inc. | Wavelet decomposition of software entropy to identify malware |
WO2016201593A1 (en) | 2015-06-15 | 2016-12-22 | Nokia Technologies Oy | Control of unwanted network traffic |
US9992211B1 (en) * | 2015-08-27 | 2018-06-05 | Symantec Corporation | Systems and methods for improving the classification accuracy of trustworthiness classifiers |
US10496815B1 (en) | 2015-12-18 | 2019-12-03 | Exabeam, Inc. | System, method, and computer program for classifying monitored assets based on user labels and for detecting potential misuse of monitored assets based on the classifications |
CA3015352A1 (en) | 2016-02-23 | 2017-08-31 | Carbon Black, Inc. | Cybersecurity systems and techniques |
US11140167B1 (en) | 2016-03-01 | 2021-10-05 | Exabeam, Inc. | System, method, and computer program for automatically classifying user accounts in a computer network using keys from an identity management system |
US10178108B1 (en) * | 2016-05-31 | 2019-01-08 | Exabeam, Inc. | System, method, and computer program for automatically classifying user accounts in a computer network based on account behavior |
US10169581B2 (en) | 2016-08-29 | 2019-01-01 | Trend Micro Incorporated | Detecting malicious code in sections of computer files |
US10333965B2 (en) | 2016-09-12 | 2019-06-25 | Qualcomm Incorporated | Methods and systems for on-device real-time adaptive security based on external threat intelligence inputs |
US20180077188A1 (en) * | 2016-09-12 | 2018-03-15 | Qualcomm Incorporated | Methods And Systems For On-Device Real-Time Adaptive Security Based On External Threat Intelligence Inputs |
US10069823B1 (en) * | 2016-12-27 | 2018-09-04 | Symantec Corporation | Indirect access control |
US10887325B1 (en) | 2017-02-13 | 2021-01-05 | Exabeam, Inc. | Behavior analytics system for determining the cybersecurity risk associated with first-time, user-to-entity access alerts |
US10645109B1 (en) | 2017-03-31 | 2020-05-05 | Exabeam, Inc. | System, method, and computer program for detection of anomalous user network activity based on multiple data sources |
US10841338B1 (en) | 2017-04-05 | 2020-11-17 | Exabeam, Inc. | Dynamic rule risk score determination in a cybersecurity monitoring system |
US9825994B1 (en) | 2017-04-19 | 2017-11-21 | Malwarebytes Inc. | Detection and removal of unwanted applications |
CN107465686A (en) * | 2017-08-23 | 2017-12-12 | 杭州安恒信息技术有限公司 | IP reputation computation method and device based on heterogeneous network big data |
CN107370754B (en) * | 2017-08-23 | 2020-04-07 | 杭州安恒信息技术股份有限公司 | Website protection method based on IP credit rating scoring model of cloud protection |
US10586038B2 (en) | 2017-09-08 | 2020-03-10 | Qualcomm Incorporated | Secure stack overflow protection via a hardware write-once register |
CN107819631B (en) * | 2017-11-23 | 2021-03-02 | 东软集团股份有限公司 | Equipment anomaly detection method, device and equipment |
US11423143B1 (en) | 2017-12-21 | 2022-08-23 | Exabeam, Inc. | Anomaly detection based on processes executed within a network |
US11431741B1 (en) | 2018-05-16 | 2022-08-30 | Exabeam, Inc. | Detecting unmanaged and unauthorized assets in an information technology network with a recurrent neural network that identifies anomalously-named assets |
US10965708B2 (en) * | 2018-06-06 | 2021-03-30 | Whitehat Security, Inc. | Systems and methods for machine learning based application security testing |
US11178168B1 (en) | 2018-12-20 | 2021-11-16 | Exabeam, Inc. | Self-learning cybersecurity threat detection system, method, and computer program for multi-domain data |
US11625366B1 (en) | 2019-06-04 | 2023-04-11 | Exabeam, Inc. | System, method, and computer program for automatic parser creation |
TWI721446B (en) * | 2019-06-05 | 2021-03-11 | 中國信託商業銀行股份有限公司 | Personal credit scoring method based on big data of household registration |
US11956253B1 (en) | 2020-06-15 | 2024-04-09 | Exabeam, Inc. | Ranking cybersecurity alerts from multiple sources using machine learning |
US12063226B1 (en) | 2020-09-29 | 2024-08-13 | Exabeam, Inc. | Graph-based multi-staged attack detection in the context of an attack framework |
CN113282922B (en) * | 2021-06-29 | 2024-08-20 | 北京安天网络安全技术有限公司 | Method, device, equipment and medium for protecting and controlling mobile storage equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050114452A1 (en) * | 2003-11-03 | 2005-05-26 | Prakash Vipul V. | Method and apparatus to block spam based on spam reports from a community of users |
US20050198181A1 (en) * | 2004-03-02 | 2005-09-08 | Jordan Ritter | Method and apparatus to use a statistical model to classify electronic communications |
US20060253581A1 (en) * | 2005-05-03 | 2006-11-09 | Dixon Christopher J | Indicating website reputations during website manipulation of user information |
US20090282476A1 (en) * | 2006-12-29 | 2009-11-12 | Symantec Corporation | Hygiene-Based Computer Security |
US20100235915A1 (en) * | 2009-03-12 | 2010-09-16 | Nasir Memon | Using host symptoms, host roles, and/or host reputation for detection of host infection |
US8001606B1 (en) * | 2009-06-30 | 2011-08-16 | Symantec Corporation | Malware detection using a white list |
US8566932B1 (en) * | 2009-07-31 | 2013-10-22 | Symantec Corporation | Enforcing good network hygiene using reputation-based automatic remediation |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5712914A (en) * | 1995-09-29 | 1998-01-27 | Intel Corporation | Digital certificates containing multimedia data extensions |
US6845453B2 (en) * | 1998-02-13 | 2005-01-18 | Tecsec, Inc. | Multiple factor-based user identification and authentication |
MXPA02009908A (en) * | 2000-04-05 | 2006-03-09 | Ods Properties Inc | Interactive wagering systems and methods for restricting wagering access. |
US7092943B2 (en) * | 2002-03-01 | 2006-08-15 | Enterasys Networks, Inc. | Location based data |
US7269732B2 (en) * | 2003-06-05 | 2007-09-11 | Sap Aktiengesellschaft | Securing access to an application service based on a proximity token |
US7055392B2 (en) | 2003-07-04 | 2006-06-06 | Robert Bosch Gmbh | Micromechanical pressure sensor |
US7437755B2 (en) * | 2005-10-26 | 2008-10-14 | Cisco Technology, Inc. | Unified network and physical premises access control server |
KR100721522B1 (en) * | 2005-11-28 | 2007-05-23 | 한국전자통신연구원 | Method for providing location based service using location token |
JP2007164465A (en) | 2005-12-14 | 2007-06-28 | Hitachi Ltd | Client security management system |
US7860752B2 (en) * | 2006-08-30 | 2010-12-28 | Ebay Inc. | System and method for measuring reputation using take volume |
US8849921B2 (en) | 2007-06-28 | 2014-09-30 | Symantec Corporation | Method and apparatus for creating predictive filters for messages |
CN101399683B (en) | 2007-09-25 | 2011-05-11 | 中国科学院声学研究所 | Credit computing method in credit system |
US8220034B2 (en) * | 2007-12-17 | 2012-07-10 | International Business Machines Corporation | User authentication based on authentication credentials and location information |
US8001582B2 (en) * | 2008-01-18 | 2011-08-16 | Microsoft Corporation | Cross-network reputation for online services |
US8799630B2 (en) * | 2008-06-26 | 2014-08-05 | Microsoft Corporation | Advanced security negotiation protocol |
US8595282B2 (en) | 2008-06-30 | 2013-11-26 | Symantec Corporation | Simplified communication of a reputation score for an entity |
CN101459718B (en) | 2009-01-06 | 2012-05-23 | 华中科技大学 | Spam voice call filtering method and system based on a mobile communication network |
JP5540836B2 (en) | 2010-03-31 | 2014-07-02 | ソニー株式会社 | Base station, communication system, and communication method |
- 2009
- 2009-08-13 US US12/540,907 patent/US9081958B2/en not_active Expired - Fee Related
- 2010
- 2010-08-10 CN CN201080032172.7A patent/CN102656587B/en active Active
- 2010-08-10 WO PCT/US2010/045022 patent/WO2011019720A1/en active Application Filing
- 2010-08-10 EP EP10752209A patent/EP2465071A1/en not_active Withdrawn
- 2010-08-10 JP JP2012524786A patent/JP5599884B2/en active Active
- 2010-08-10 CA CA2763201A patent/CA2763201C/en active Active
- 2015
- 2015-06-05 US US14/731,826 patent/US20150269379A1/en not_active Abandoned
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10715533B2 (en) * | 2016-07-26 | 2020-07-14 | Microsoft Technology Licensing, Llc. | Remediation for ransomware attacks on cloud drive folders |
US20180034835A1 (en) * | 2016-07-26 | 2018-02-01 | Microsoft Technology Licensing, Llc | Remediation for ransomware attacks on cloud drive folders |
US11102239B1 (en) * | 2017-11-13 | 2021-08-24 | Twitter, Inc. | Client device identification on a network |
US10250623B1 (en) * | 2017-12-11 | 2019-04-02 | Malwarebytes, Inc. | Generating analytical data from detection events of malicious objects |
US11764959B1 (en) * | 2019-05-07 | 2023-09-19 | Ledgerdomain Inc. | Neural network classifiers for block chain data structures |
US10873456B1 (en) * | 2019-05-07 | 2020-12-22 | LedgerDomain, LLC | Neural network classifiers for block chain data structures |
US10963786B1 (en) | 2019-05-07 | 2021-03-30 | Ledgerdomain Inc. | Establishing a trained machine learning classifier in a blockchain network |
US12028452B2 (en) | 2019-05-07 | 2024-07-02 | LedgerDomain, LLC | Establishing a trained machine learning classifier in a blockchain network |
US11848758B1 (en) | 2020-01-15 | 2023-12-19 | Ledgerdomain Inc. | Secure messaging in a blockchain network |
US11769577B1 (en) | 2020-01-15 | 2023-09-26 | Ledgerdomain Inc. | Decentralized identity authentication framework for distributed data |
US11829510B2 (en) | 2020-01-15 | 2023-11-28 | Ledgerdomain Inc. | Secure messaging in a machine learning blockchain network |
US12105842B1 (en) | 2020-01-15 | 2024-10-01 | Ledgerdomain Inc. | Verifiable credentialling and message content provenance authentication |
US11741216B1 (en) | 2022-11-07 | 2023-08-29 | Ledgerdomain Inc. | Credential revocation leveraging private keys on keystores read by provisioned devices |
US11741215B1 (en) | 2022-11-07 | 2023-08-29 | Ledgerdomain Inc. | Recipient credentialing leveraging private keys on keystores read by provisioned devices |
US11848754B1 (en) | 2022-11-07 | 2023-12-19 | Ledgerdomain Inc. | Access delegation leveraging private keys on keystores read by provisioned devices |
US11736290B1 (en) | 2022-11-07 | 2023-08-22 | Ledgerdomain Inc. | Management of recipient credentials leveraging private keys on keystores read by provisioned devices |
US12141267B1 (en) | 2023-08-28 | 2024-11-12 | Ledgerdomain Inc. | Recipient credentialing leveraging private keys on keystores read by provisioned devices |
Also Published As
Publication number | Publication date |
---|---|
US20110040825A1 (en) | 2011-02-17 |
EP2465071A1 (en) | 2012-06-20 |
JP5599884B2 (en) | 2014-10-01 |
CN102656587A (en) | 2012-09-05 |
CA2763201A1 (en) | 2011-02-17 |
CN102656587B (en) | 2016-06-08 |
WO2011019720A1 (en) | 2011-02-17 |
US9081958B2 (en) | 2015-07-14 |
JP2013502009A (en) | 2013-01-17 |
CA2763201C (en) | 2016-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9081958B2 (en) | Using confidence about user intent in a reputation system | |
US8621654B2 (en) | Using metadata in security tokens to prevent coordinated gaming in a reputation system | |
US9246931B1 (en) | Communication-based reputation system | |
US9038186B1 (en) | Malware detection using file names | |
US8250657B1 (en) | Web site hygiene-based computer security | |
JP5510937B2 (en) | Simplified transmission of entity reputation scores | |
US8516587B1 (en) | Using temporal attributes to detect malware | |
US8312537B1 (en) | Reputation based identification of false positive malware detections | |
JP5610451B2 (en) | Individual validity period for computer file reputation scores | |
US8756691B2 (en) | IP-based blocking of malware | |
US9262638B2 (en) | Hygiene based computer security | |
US8019689B1 (en) | Deriving reputation scores for web sites that accept personally identifiable information | |
US8726391B1 (en) | Scheduling malware signature updates in relation to threat awareness and environmental safety | |
US8510836B1 (en) | Lineage-based reputation system | |
US8381289B1 (en) | Communication-based host reputation system | |
US8201255B1 (en) | Hygiene-based discovery of exploited portals |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SYMANTEC CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:RAMZAN, ZULFIKAR;BOGORAD, WALTER;ZAVERI, AMEET;AND OTHERS;SIGNING DATES FROM 20090805 TO 20090812;REEL/FRAME:035834/0174 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: NORTONLIFELOCK INC., CALIFORNIA Free format text: CHANGE OF NAME;ASSIGNOR:SYMANTEC CORPORATION;REEL/FRAME:053306/0878 Effective date: 20191104 |