5G networks vulnerable to adversarial ML attacks
A team of academic researchers introduced an attack technique that could disrupt 5G networks, underscoring the need for new defenses against adversarial machine learning attacks.
A research paper published this week has called into question the security protections placed on 5G networks.
A team of academic researchers from the University of Liechtenstein claimed that a surprisingly simple jamming strategy could allow an attacker with zero insider knowledge to disrupt traffic on next-generation networks, even those protected by advanced defenses. The key to the attacks, according to the research team, is the use of an adversarial machine learning (ML) technique that does not rely on any prior knowledge or reconnaissance of the targeted network.
In a research paper published on July 4, the team described how the shift to 5G networks has enabled a new class of adversarial machine learning attacks. The paper, titled "Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples," was authored by Giovanni Apruzzese, Rodion Vladimirov, Aliya Tastemirova and Pavel Laskov.
As 5G networks are deployed, and more devices begin to use those networks to move traffic, the current methods of managing network packets no longer hold up. To compensate for this, the researchers noted, many carriers are planning to use machine learning models that can better sort and prioritize traffic.
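The paper does not publish any carrier's code, but the idea is easy to picture. Below is a minimal, hypothetical Python sketch (the scikit-learn model, feature names and values are all illustrative assumptions, not drawn from the paper) of the kind of per-packet classifier a carrier might use to sort and prioritize traffic:

```python
# Hypothetical sketch only: a per-packet traffic classifier of the general
# kind the researchers describe. Features and values are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Assumed per-packet features: [payload_bytes, inter_arrival_ms, flow_pkt_count]
video = np.column_stack([rng.normal(1200, 100, 500),   # big, fast, long flows
                         rng.normal(5, 1, 500),
                         rng.normal(900, 50, 500)])
web = np.column_stack([rng.normal(400, 80, 500),       # small, bursty, short flows
                       rng.normal(50, 10, 500),
                       rng.normal(40, 10, 500)])

X = np.vstack([video, web])
y = np.array([1] * 500 + [0] * 500)  # 1 = prioritize (video), 0 = best effort

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[1150, 4.8, 880]]))  # -> [1]: route as high priority
```

A model like this makes routing decisions from surface features of the packets themselves, which is exactly the property the attack abuses.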
Those machine learning models proved to be the weak point: by confusing them and redirecting their priorities, attackers can tamper with how traffic is handled. The researchers suggested that injecting junk data into otherwise legitimate network packets, a technique known as a "myopic" attack, could take down a 5G mobile setup.
The basic idea, the researchers wrote, lies in making slight changes to the data the models process. By doing something as simple as appending additional data to a packet request, an attacker feeds the machine learning setup unexpected information. Over time, those poisoned requests could alter the behavior of the machine learning software enough to thwart legitimate network traffic and ultimately slow or stop the flow of data.
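To make that mechanic concrete, here is a hedged Python sketch (a toy size-based classifier of my own construction; the researchers' actual models and 5G feature sets are not reproduced here) showing how appending junk bytes can push a packet's size feature across a learned decision boundary and flip its priority class:

```python
# Illustrative sketch: appending junk bytes shifts a size-based feature
# and flips a toy classifier's decision. Nothing here is the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Train a toy classifier on one assumed feature: payload size in bytes.
small = rng.normal(300, 50, 1000)    # label 0: best-effort traffic
large = rng.normal(1100, 100, 1000)  # label 1: high-priority traffic
X = np.concatenate([small, large]).reshape(-1, 1)
y = np.array([0] * 1000 + [1] * 1000)
clf = LogisticRegression(max_iter=1000).fit(X, y)

packet_payload = b"\x17\x03\x03" + bytes(300)  # a legitimate small packet
junk = bytes(800)                              # attacker-appended junk data
perturbed_payload = packet_payload + junk

for payload in (packet_payload, perturbed_payload):
    size = np.array([[len(payload)]])
    label = "high priority" if clf.predict(size)[0] else "low priority"
    print(len(payload), "bytes ->", label)
```

The toy model routes the 303-byte packet as low priority and the padded 1,103-byte version as high priority, the kind of misclassification such perturbations exploit.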
While the real-world results would depend on the type of 5G network and machine learning model being deployed, the research team's lab tests produced striking results. In five of the six experiments performed, the network was taken down using a technique that required no knowledge of the carrier, its infrastructure or its machine learning technology.
"It is simply required to append junk data to the network packets," Apruzzese told SearchSecurity. "Indeed, [one example] targets a model that is agnostic of the actual payload of network packets."
The effects are relatively benign in the long term, but the service outages and slowed network traffic would certainly cause problems for anyone hoping to use the targeted network.
More important, the team said, is how the research underscores the need for a better threat model to test and address vulnerabilities in the machine learning systems that future networks plan to deploy in the wild.
"The 5G paradigm enables a new class of harmful adversarial ML attacks with a low entry barrier, which cannot be formalized with existing adversarial ML threat models," the team wrote. "Furthermore, such vulnerabilities must be proactively assessed."
Adversarial machine learning and artificial intelligence have been concerns within the infosec community for some time. While the number of attacks in the wild is believed to be extremely small, many experts have cautioned that algorithmic models can be fed poisoned data and manipulated by threat actors.
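As a rough illustration of that poisoning concern (entirely synthetic data and a generic scikit-learn model, unrelated to the paper's experiments), flipping the labels in one region of the training set is enough to measurably degrade a simple classifier:

```python
# Illustrative-only sketch of training-data poisoning: flipping labels in
# a targeted region of feature space degrades the trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, (2000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # a simple learnable rule
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression().fit(X_tr, y_tr)

poisoned_y = y_tr.copy()
region = X_tr[:, 0] > 1.0               # attacker targets one region...
poisoned_y[region] = 1 - poisoned_y[region]  # ...and flips those labels
poisoned = LogisticRegression().fit(X_tr, poisoned_y)

print("clean test accuracy:   ", round(clean.score(X_te, y_te), 3))
print("poisoned test accuracy:", round(poisoned.score(X_te, y_te), 3))
```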