Information Security
- Editor's letter: When cyberthreats are nebulous, how can you plan?
- Cover story: AI for good or evil? AI dangers, advantages and decisions
- Infographic: Enterprises feel the pain of cybersecurity staff shortages
- Feature: A cybersecurity skills gap demands thinking outside the box
- Column: Report shows CISOs, IT unprepared for privacy regulations
- Column: CISOs, does your incident response plan cover all the bases?

AI for good or evil? AI dangers, advantages and decisions
Good guys and bad guys both use AI, but the bad guys don't need to worry about complying with rules and regulations. What can security leaders do to level the playing field?
AI isn't inherently moral -- it can be used for evil just as well as for good. And while it may appear that AI provides an advantage for the good guys in security now, the pendulum may swing when the bad guys really embrace it to do things like unleashing malware infections that can learn from their hosts. It's imperative that CISOs and all security team leaders stay aware of the lurking AI dangers.
AI defined
When people talk about AI in cybersecurity, the terms machine learning and deep learning tend to be used interchangeably with artificial intelligence.

What's the difference? "AI is a big subfield of computer science, and it examines the ability of a program or machine to accomplish tasks that normally require human intelligence like perception, reasoning, abstraction and learning," explained Michelle Cantos, strategic intelligence analyst for security vendor FireEye.
According to Sridhar Muppidi, vice president and CTO for IBM Security, machine learning is a big part of AI; in security, it is primarily used to extract anomalies and outliers from giant data haystacks and to evaluate risk. "It involves training a model on a specific element of data using algorithms," he said. "Deep learning is a type of self-learning or on-the-fly learning."
Using AI for good
The main ways AI is being used for good today are for "predictive analytics, intelligence consolidation and to act as a trusted advisor that can respond automatically," IBM Security's Muppidi said.
Jon Oltsik, senior principal analyst at the consultancy Enterprise Strategy Group (ESG), sees it the same way. In an email interview, he said, "Most organizations I speak to or research are adding AI [and machine learning] as a layer of defense."
AI processes large amounts of data much faster than humans. "The huge volume of data we observe on a daily basis would bury any normal researcher," Cantos said. "But AI applications, specifically machine learning, help us do the heavy lifting to dig analysts out from under the data."
"I would characterize it as a 'helper app,'" said Oltsik, "in that it is used to supplement human analysis."
Machine learning models can "learn from previous calculations and sort of adapt to new environments where they can perform trend analysis, make predictions and even examine past behaviors to see how threat actors might change in the future," Cantos noted.
Machine learning can also illuminate relationships within the data set that a human might not see. For instance, a cybersecurity application can be used in the security operations center (SOC), where it can help triage alerts by identifying which ones need to be dealt with immediately and which are probably false positives, Cantos added.
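To make that kind of triage concrete, here is a minimal, hypothetical sketch: an unsupervised anomaly detector (scikit-learn's IsolationForest, chosen purely for illustration) scores alerts so the most unusual ones reach an analyst first. The features and numbers are invented and are not any vendor's actual model.

```python
# Minimal illustrative sketch: scoring SOC alerts with an unsupervised
# anomaly detector so analysts can triage the most unusual ones first.
# Feature names and thresholds are hypothetical, not any vendor's product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend each alert is summarized by a few numeric features,
# e.g. [bytes transferred, distinct destination ports, failed logins].
routine_alerts = rng.normal(loc=[5_000, 3, 1], scale=[1_000, 1, 1], size=(500, 3))
odd_alert = np.array([[250_000, 60, 40]])          # clearly unusual activity
alerts = np.vstack([routine_alerts, odd_alert])

model = IsolationForest(contamination=0.01, random_state=0).fit(alerts)
scores = model.score_samples(alerts)               # lower score = more anomalous

# Surface the lowest-scoring alerts for human review.
worst = np.argsort(scores)[:5]
for idx in worst:
    print(f"alert {idx}: score={scores[idx]:.3f}, features={alerts[idx]}")
```

In a real SOC the features would come from SIEM or endpoint telemetry, and the scores would feed an analyst queue rather than a print statement.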

While most of the industry seems focused on delivering the value of AI and machine learning quickly, it's important to point out that AI itself needs to be safeguarded. "Many customers are realizing this right now and want some level of protection, but it's not mainstream yet," said IBM Security's Muppidi. "Much like we scan the application for vulnerabilities, we need to scan AI models for potential blind spots or open threat issues. There's a growing realization that we can't trust it 100%."
This realization is critical because AI can be mistrained or tricked as part of an attack. "State-sponsored actors and hackers will try to compromise systems, whether that involves an AI-enabled system or not," Cantos said. "But nation states tend to have more resources and funds to devote to this problem. It means they can develop more sophisticated tools to target new, higher-level, sophisticated environments like AI-enabled systems."
As we rely on AI-driven decision making, it's important to realize it's a model that's "fundamentally uncanny, in that we can observe that it works, but we don't always fully understand why," said Michael Tiffany, co-founder and president of cybersecurity vendor White Ops, which specializes in botnet mitigation.
"This brings up a whole new form of vulnerability. If it's suddenly mistrained, there's no way to test it." Tiffany also noted that some "practical attacks within this domain" have taken place, "increasing the noise in a system still establishing what a baseline is." In other words, by raising the "noise floor," hackers can obscure anomalies that might otherwise catch the eye of the security team and their tools.
AI dangers
AI is already widely used for fraud -- including for operating botnets out of infected computers that work solely as internet traffic launderers, Tiffany said. But myriad other ways exist for AI to be harnessed.

Oltsik of ESG said he has not yet seen AI used for targeted attacks. "It's more embedded in things like disinformation through bots, sock puppets" and the like.
Some parties are particularly well suited to using machine learning for evil purposes. "These are the people who are already doing mass exploitation, which is the willy-nilly compromise of millions of computers," Tiffany said. "If you have the ability to build a botnet, it means you have an infection vector that potentially works for millions of computers. And if you can infect millions of computers, you can do lots of interesting things with them -- one of which is to learn from them."
A big, but sometimes overlooked, truth when it comes to the use of AI is that, unlike corporate America, cybercriminals don't have to care about or comply with the General Data Protection Regulation, privacy regulations -- or laws and regulations of any kind, for that matter. This allows them to do vastly more invasive data collection, according to Tiffany.
"This means they can develop an information advantage," he pointed out. "If you can infect a million computers and make your malware learn from them, you can make some extraordinarily lifelike malware because you have a training set that almost no legitimate AI researcher would ever have."

Cybercriminals are already using AI; it isn't something looming on the horizon. "But it's not like all the bad guys get together to compare notes. There's a complex ecosystem of different criminal groups of different levels of sophistication," Tiffany said. "The hardest to detect operations are the ones that are doing the best job of learning."
Years ago, one of the ways you could differentiate between a real human web visitor and a bot was that bots tended to look robotic. They repeated the same actions with similar time patterns. "There was an evolutionary period when people started tuning their bots to look more random -- rather than having them work 24/7," Tiffany explained. "But you could still identify their populations because, although they might be installed on a diverse number of computers, all of the bots fundamentally behaved like each other, and classical AI techniques could cluster them together."
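A hypothetical sketch of that clustering idea: if scripted bots all pace their requests nearly identically, a density-based clustering algorithm such as DBSCAN (used here only as an illustration) collapses them into one tight cluster, while genuinely human sessions scatter. The behavioral features below are invented.

```python
# Rough sketch of clustering sessions by behavior: scripted bots that all
# pace their requests the same way collapse into one dense cluster, while
# human sessions stay spread out. Features and values are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Each session: [mean seconds between requests, variance of that gap]
humans = np.column_stack([rng.uniform(2, 60, 200), rng.uniform(5, 400, 200)])
bots = np.column_stack([rng.normal(5.0, 0.05, 50), rng.normal(0.1, 0.01, 50)])
sessions = np.vstack([humans, bots])

features = StandardScaler().fit_transform(sessions)
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(features)

# Dense clusters (label >= 0) are candidate bot populations; -1 means "noise",
# which is where most genuinely individual human sessions should land.
for label in sorted(set(labels)):
    print(f"cluster {label}: {np.sum(labels == label)} sessions")
```

The point Tiffany makes next is that this kind of clustering works only as long as the bots actually behave alike.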
Every compromised computer is owned by a different person with individual habits: how that person moves a mouse, uses other input devices and sleeps, among other usage patterns. "If each bot is training off their pet human, if you will, then the bots will not only become more lifelike -- they'll become uniquely lifelike. That's where we are today on the bot evolutionary scale," Tiffany said.

Is the reality of malware bots learning from their hosts not enough doom for you? Consider two words: autonomous weapons. As AI systems continue to get smarter, AI dangers multiply. "Criminals and rogue states are building autonomous weapons," said Staffan Truvé, CTO for Recorded Future, an internet technology company specializing in real-time threat intelligence. "And they won't be following any international conventions. Autonomous weapons are the area we should be most worried about with AI."
Another real concern is that researchers may unintentionally unleash AI demons. "If you look back at the original Robert Morris internet worm from 1988, he had no idea when he wrote it that it would get out of control," Truvé said. "Analogously, we could see a researcher or some group launch an uncontrollable botnet. It's not unlikely, and it could start spreading. So we need to do research on these kinds of systems; otherwise we won't be able to defend ourselves against them. Others will inevitably make similar mistakes."
Time for defenders to change their approach?
Now might be a good time to rethink our cybersecurity defense strategy.
Tiffany compared his company's defensive security development to running an arms race.
Right now, a lot of defensive security work isn't really about presenting an impregnable barrier to adversaries. Rather, it's about creating a better barrier than other potential victims have, so that predators choose a different victim, Tiffany said. "A lot of security works like this: It's not about outrunning the bear; it's about outrunning the other people who are running from the bear."
This model assumes attackers you thwart won't decide to modify their attack to target you specifically, Tiffany added, but instead will just try their attack against someone else. In other words, this model of security thinking doesn't assume an adaptive arms race.
"At first it felt like amazing breakthroughs in applying machine learning were giving defenders an advantage, but some of those gains are being eroded because the other side is using AI as well," Tiffany said.
ESG's Oltsik noted that there is growing pressure in the cybersecurity industry for security tools that work together better. Collaboration is increasing too, he noted, in the form of Information Sharing and Analysis Centers and threat intelligence.

Decades spent trying to make computers and networks more difficult to break into haven't made them substantially harder to compromise. "Perhaps it's time to bring arms race mechanics to this fight," Tiffany suggested. "This involves the same shift in thinking on display between the U.S. and the Soviets during the Cold War, where increasing the tempo of the adaptation on the part of the Western allies, who could sustainably fund that innovation, exposed vulnerabilities on the other side's ability to keep up in that arms race. So it eventually stopped being rationally sustainable."
Are we opening Pandora's box with AI? The impact of inherent AI dangers remains to be seen. But, done right, AI is a good thing, said IBM Security's Muppidi. "We will need to leverage the best practices -- and there are a lot of them -- from training data to scanning and to hardening the model and data from any distortion."