The EU AI Act’s Cybersecurity Gamble: Hackers Don’t Need Permission
As AI development advances, its use in cybersecurity is becoming inevitable – it can help detect and prevent cyber threats in unprecedented ways.
But, of course, there is the other side: bad actors can also use AI to develop more sophisticated attack methods, empowering their illicit activities. And criminals generally don’t bother adhering to any constraints on how this technology may be used.
As the EU forges ahead with the AI Act, many people in the industry find themselves wondering: will this regulation actually make Europe more secure? Or will it become an obstacle, piling new challenges on businesses that are trying to leverage artificial intelligence to protect themselves?
Here’s my take on this topic.
The AI Act’s cybersecurity measures
The EU AI Act is the first major regulatory framework to set clear AI development and deployment rules. Among its many provisions, the AI Act directly addresses cybersecurity risks by introducing measures to ensure AI systems are secure and used responsibly.
It does so by introducing a risk-based classification of AI applications, with different compliance requirements for each class. Naturally, high-risk systems (those that could negatively affect people’s health and safety) are subject to the strictest security and transparency demands.
Additionally, these high-risk systems must undergo regular mandatory security testing to identify vulnerabilities before cybercriminals can exploit them. At the same time, the Act establishes clearer transparency and reporting obligations. These are solid first steps toward bringing structure to the industry and legitimizing it.
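The Act itself does not spell out what such testing has to look like, but to make the idea concrete, here is a minimal sketch of one kind of check a routine test suite could include: measuring how a model’s accuracy degrades when its inputs are perturbed adversarially, in the spirit of FGSM. The toy classifier, the synthetic data, the perturbation budget and the acceptance threshold below are all hypothetical illustrations, not anything prescribed by the regulation.

```python
# A toy robustness check: train a tiny classifier, then measure how its
# accuracy drops when inputs are nudged adversarially (FGSM-style).
# All data, parameters and thresholds here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for whatever the real system consumes.
X = rng.normal(size=(400, 10))
w_true = rng.normal(size=10)
y = (X @ w_true > 0).astype(float)

# Minimal logistic-regression "model" fitted with plain gradient descent.
w = np.zeros(10)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

def accuracy(features: np.ndarray) -> float:
    return float(((features @ w > 0).astype(float) == y).mean())

# FGSM-style perturbation: move each input in the direction that increases
# the model's loss, within a small budget epsilon.
p = 1.0 / (1.0 + np.exp(-(X @ w)))
grad_x = np.outer(p - y, w)    # gradient of the loss w.r.t. the inputs
epsilon = 0.2                  # hypothetical perturbation budget
X_adv = X + epsilon * np.sign(grad_x)

clean_acc, adv_acc = accuracy(X), accuracy(X_adv)
print(f"clean accuracy: {clean_acc:.2f}, under perturbation: {adv_acc:.2f}")

# A testing policy could then flag the model if robustness falls below a floor.
ROBUSTNESS_FLOOR = 0.5         # hypothetical acceptance threshold
print("robustness check:", "PASS" if adv_acc >= ROBUSTNESS_FLOOR else "FAIL")
```

A real audit would of course cover far more than adversarial robustness (data poisoning, model extraction, and so on), but the shape is the same: an automated, repeatable check with a pass/fail criterion.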
But when it comes to cybersecurity, this approach has its share of complications and downsides.
Requiring AI systems to undergo so many checks and certifications means that, in practice, the release of security updates slows down considerably. If every modification to an AI-based security measure needs a lengthy approval process, attackers get plenty of time to exploit known weaknesses while the businesses they target are tied up in red tape and left vulnerable in the meantime.
The issue of transparency is also a double-edged sword. The AI Act requires developers to disclose technical details about their AI systems to government bodies in order to ensure accountability. A valid goal, admittedly, but it introduces another critical vulnerability: if that information leaks, it could fall into the hands of bad actors, effectively handing them a map of how to exploit those AI systems. Security through obscurity is a weak defence on its own, but mandated disclosure strips away even that layer of protection.
Compliance as the source of vulnerability?
There’s another layer of risk that we need to take a harder look at: the compliance-first mindset.
The stricter regulation becomes, the more security teams will focus on building systems that tick legal checkboxes rather than counter real-world threats. There is a very high chance of this resulting in AI systems that are technically compliant but operationally brittle.
Systems built for compliance will inevitably share patterns, and once malicious actors learn those patterns, it becomes that much easier for them to engineer exploits around them. The end result? Similarly built systems are left equally defenceless.