AI threat intelligence is the future, and the future is now
Threat intelligence services and tools get a boost from advanced technology like AI and, specifically, machine learning. Learn how that works.
The next progression in organizations using threat intelligence is adding AI threat intelligence capabilities, in the form of machine learning technologies, to improve attack detection. Machine learning is a form of AI that enables computers to analyze data and learn its significance. The rationale for pairing machine learning with threat intelligence is to enable computers to detect attacks more rapidly than humans can and to stop those attacks before more damage occurs. In addition, because the volume of threat intelligence is often so large, traditional detection technologies inevitably generate too many false positives. Machine learning can analyze the threat intelligence and condense it into a smaller set of things to look for, thereby reducing the number of false positives.
This sounds fantastic, but there's a catch -- actually, a few catches. Expecting AI to magically improve security is unrealistic, and deploying machine learning without preparation and ongoing support may make things worse.
Here are three steps enterprises should take to use AI threat intelligence tools with machine learning capabilities to improve attack detection.
Use the highest quality threat intelligence feeds
AI threat intelligence products that use machine learning work by taking inputs, analyzing them and producing outputs. For attack detection, machine learning's inputs include threat intelligence, and its outputs are either alerts indicating attacks or automated actions stopping attacks. If the threat intelligence contains errors, it feeds bad data to the detection tools, and their machine learning algorithms may in turn produce bad outputs: garbage in, garbage out.
Many organizations subscribe to multiple sources of threat intelligence. These include feeds, which contain machine-readable signs of attacks, like the IP addresses of computers issuing attacks and the file names used by malware. Other sources of threat intelligence are services, which generally provide human-readable prose describing the newest threats. Machine learning can use feeds but not services.
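To make the feed-versus-service distinction concrete, here is a minimal sketch of ingesting a machine-readable feed. The CSV layout, field names and confidence scores are hypothetical; real feeds use formats such as STIX/TAXII or vendor-specific CSV and JSON, but the idea is the same: feeds yield structured indicators that software can act on, while services yield prose that only humans can read.

```python
import csv
import io

# Hypothetical CSV threat feed: indicator type, value, confidence score.
# Real feeds use formats such as STIX/TAXII or vendor-specific CSV/JSON.
FEED = """type,value,confidence
ip,203.0.113.7,90
filename,invoice_2931.exe,75
ip,198.51.100.24,40
"""

def parse_feed(text, min_confidence=50):
    """Parse a CSV feed, keeping only indicators above a confidence floor."""
    indicators = []
    for row in csv.DictReader(io.StringIO(text)):
        if int(row["confidence"]) >= min_confidence:
            indicators.append((row["type"], row["value"]))
    return indicators

print(parse_feed(FEED))
# Keeps the two high-confidence indicators and drops the low-confidence IP.
```

A detection tool, or a machine learning pipeline in front of one, would consume these tuples directly; a service's prose report has no equivalent structured form to hand over.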
Organizations should use the highest quality threat intelligence feeds for machine learning. Characteristics to consider include the following:
- How often is the feed updated? Threats can change quickly, so the feed should be updated continually.
- How accurate is the data in the feed? For example, is an IP address reported to be issuing attacks really doing that?
- How comprehensive is the feed? Does it cover threats from around the world? Does it include the types of information about threats that your detection tools need?
It's hard to directly evaluate the quality of threat intelligence, but you can indirectly evaluate it based on the number of false positives that occur from using it. High-quality threat intelligence should lead to minimal false positives when it's used directly by detection tools -- without machine learning.
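The indirect evaluation described above can be reduced to a single number: of the alerts a feed generated when used directly by detection tools, what fraction did analysts fail to confirm as real attacks? The feed names and verdicts below are invented for illustration.

```python
def false_positive_rate(alerts):
    """alerts: list of (indicator, confirmed) pairs, where confirmed is True
    when an analyst validated the alert as a real attack.
    Returns the false positive rate in [0, 1]."""
    if not alerts:
        return 0.0
    false_positives = sum(1 for _, confirmed in alerts if not confirmed)
    return false_positives / len(alerts)

# Hypothetical analyst review of alerts raised directly from two feeds,
# with no machine learning in the loop:
feed_a = [("203.0.113.7", True), ("198.51.100.24", True), ("192.0.2.9", False)]
feed_b = [("203.0.113.7", False), ("198.51.100.24", False), ("192.0.2.9", True)]

print(false_positive_rate(feed_a))  # 1 of 3 alerts unconfirmed
print(false_positive_rate(feed_b))  # 2 of 3 alerts unconfirmed
```

Comparing rates like these across candidate feeds, before any machine learning is layered on top, gives a rough but practical quality ranking.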
Give machine learning the context it needs to minimize false positives
False positives are a real concern if you're using threat intelligence with machine learning to do things like automatically block attacks. Mistakes will disrupt benign activity and could negatively affect operations.
Ultimately, threat intelligence is just one part of assessing risk. Another part is understanding context -- like the role, importance and operational characteristics of each computer. Providing contextual information to machine learning can help it get more value from threat intelligence. Suppose threat intelligence indicates a particular external IP address is malicious. Detecting outgoing network traffic from an internal database server to that address might merit a different action than outgoing network traffic to the same address from a server that sends files to subscribers every day.
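The database-versus-file-server scenario can be sketched as a context-aware decision rule. The host names, context fields and action labels are hypothetical; the point is that the same threat intelligence match yields different responses once asset context is factored in.

```python
# Hypothetical context-aware response: the same known-bad destination IP
# triggers different actions depending on the internal host's role.
MALICIOUS_IPS = {"203.0.113.7"}

ASSET_CONTEXT = {
    "db-server-01":  {"role": "database", "sends_files_externally": False},
    "pub-server-02": {"role": "publisher", "sends_files_externally": True},
}

def choose_action(src_host, dst_ip):
    """Pick a response by combining threat intelligence with asset context."""
    if dst_ip not in MALICIOUS_IPS:
        return "allow"
    ctx = ASSET_CONTEXT.get(src_host, {})
    # Outbound traffic from a database server to a known-bad IP is far more
    # suspicious than the same traffic from a host that ships files daily.
    if ctx.get("role") == "database":
        return "block_and_alert"
    if ctx.get("sends_files_externally"):
        return "alert_only"
    return "block_and_alert"

print(choose_action("db-server-01", "203.0.113.7"))   # block_and_alert
print(choose_action("pub-server-02", "203.0.113.7"))  # alert_only
```

In a real deployment this context would come from an asset inventory or CMDB and would be one of the inputs, alongside threat intelligence, that the machine learning model weighs.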
The toughest part of using machine learning is providing the actual learning. Machine learning needs to be told what's good and what's bad, as well as when it makes mistakes so it can learn from them. This requires frequent attention from skilled humans. A common way of teaching machine learning-enabled technologies is to put them into a monitor-only mode where they identify what's malicious but don't block anything. Humans review the machine learning tool's alerts and validate them, letting it know which were erroneous. Without feedback from humans, machine learning can't improve on its mistakes.
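The monitor-only feedback loop above can be sketched as follows. The class and field names are invented for illustration: the detector only records suspected attacks, analysts validate each alert, and the validated verdicts become labeled examples for retraining.

```python
# Sketch of a monitor-only feedback loop: the detector records alerts but
# blocks nothing; analyst verdicts become labeled training data.
class MonitorOnlyDetector:
    def __init__(self):
        self.pending = []        # alerts awaiting analyst review
        self.training_data = []  # (features, label) pairs for retraining

    def raise_alert(self, alert_id, features):
        """Record a suspected attack without taking any blocking action."""
        self.pending.append((alert_id, features))

    def analyst_verdict(self, alert_id, is_real_attack):
        """Turn an analyst's review into a labeled training example."""
        for aid, features in self.pending:
            if aid == alert_id:
                self.training_data.append((features, is_real_attack))
                self.pending.remove((aid, features))
                return
        raise KeyError(alert_id)

detector = MonitorOnlyDetector()
detector.raise_alert("a1", {"dst_ip": "203.0.113.7", "bytes_out": 9_000_000})
detector.analyst_verdict("a1", is_real_attack=False)  # analyst: false positive
print(detector.training_data)
```

Only after the model's verdicts consistently match the analysts' would an organization consider letting it act automatically.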
Use threat intelligence and machine learning to complement and enhance threat hunting
Conventional wisdom is to avoid relying on AI threat intelligence that uses machine learning to detect attacks because of concern over false positives. That makes sense in some environments, but not in others. Older detection techniques are more likely to miss the latest attacks, which may not follow the patterns those techniques typically look for. Machine learning can help security teams find the latest attacks, but with potentially higher false positive rates. If missing attacks is a greater concern than the resources needed to investigate additional false positives, then relying more heavily on machine learning-driven automation may make sense.
Many organizations will find it best to use threat intelligence without machine learning for some purposes, and to get machine learning-generated insights for other purposes. For example, threat hunters might use machine learning to get suggestions of things to investigate that would have been infeasible for them to find on their own in large threat intelligence data sets. Also, don't forget about threat intelligence services -- their reports can provide invaluable insights for threat hunters on the newest threats. These insights often include things that can't easily be automated into something machine learning can process.
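One simple flavor of machine-generated hunting suggestion is rarity analysis: surfacing outbound destinations that only a handful of internal hosts contact, which would be tedious to spot by hand in a large data set. The host names, destinations and threshold below are hypothetical, and real tools use far richer statistical models; this sketch only illustrates the shape of the idea.

```python
# Sketch: surface hunting leads by flagging outbound destinations contacted
# by very few internal hosts. A destination the whole fleet talks to is
# usually less interesting to a hunter than one only a single host touches.
def hunting_leads(connections, max_hosts=1):
    """connections: list of (internal_host, external_dst) pairs.
    Returns destinations contacted by at most max_hosts distinct hosts."""
    hosts_per_dst = {}
    for host, dst in connections:
        hosts_per_dst.setdefault(dst, set()).add(host)
    return sorted(dst for dst, hosts in hosts_per_dst.items()
                  if len(hosts) <= max_hosts)

conns = [
    ("ws-01", "cdn.example.net"), ("ws-02", "cdn.example.net"),
    ("ws-03", "cdn.example.net"), ("ws-02", "203.0.113.7"),
]
print(hunting_leads(conns))  # only the destination contacted by one host
```

The output is a lead, not a verdict: a human hunter still decides whether the rare destination is malicious, which is exactly the complementary division of labor this section recommends.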