putilov_denis - stock.adobe.com

Guest Post

GenAI development should follow secure-by-design principles

Every company wants a piece of the GenAI pie, but rushing to develop a product without incorporating secure-by-design principles could harm their business and customers.

The rise of generative AI is akin to the California gold rush. Like the mass influx of fortune seekers who descended upon San Francisco in the late 1840s, today's Silicon Valley has its sights set on generative AI in pursuit of digital treasure. Given how dangerous the gold rush was and how long it took to incorporate safety measures, the time is now for organizations using GenAI to follow secure-by-design principles and follow CISA's example.

The mad dash to GenAI makes sense for companies. Beyond writing faux movie scripts and passing school exams, GenAI is projected to generate as much as $4.4 trillion annually into the global economy. The hype surrounding its potential is real, but it has also created a problematic environment where go-to-market timelines and cost efficiency seemingly precede safety and security. The faster AI system developers can build and scale advanced AI models, the more "gold" they'll collect via ROI before the market saturates. It's a dangerous line of thinking that could have serious ramifications from a cybersecurity perspective.

We've reached a major inflection point amid the early stages of GenAI's ascent. It's important to remember there were virtually no laws or regulations governing mining practices during the gold rush. Overwork and lax safety measures led to tens of thousands of deaths from mining-related accidents and violence. It wasn't until two decades later that the United States enacted the General Mining Law of 1872.

History is a great teacher. While advanced AI models will redefine the boundaries of possibility for digital innovation, they also add a new layer of complexity to the cyberthreat landscape. We can't afford to be years -- let alone two decades -- late on governance this time.

"Google Cloud Cybersecurity Forecast 2024" warned that attackers will use GenAI and large language models (LLMs) for cyberattacks, such as phishing, smishing and other social engineering attacks. In addition, a Fastly research report found more than two-thirds (67%) of IT decision-makers believe GenAI will open new attack avenues, while nearly half (46%) are concerned about their inability to defend against AI-enabled threats.

We're not talking about the latest doomsday scenario crafted by cybersecurity vendors hoping to sell products. This is a real and imminent threat driven by the rapid acceleration of digital transformation. Nation-state adversaries could use GenAI to target U.S. critical infrastructure sites, such as electric grids, water treatment plants and healthcare facilities, putting lives at risk. We're in a race against the clock to put stronger parameters in place that facilitate secure AI systems and foster a safer future. The stakes are too high to be caught behind the curve.

CISA's "2023-2024 CISA Roadmap for Artificial Intelligence," released Nov. 23, 2023, underscored that notion and stressed the importance of integrating security as a core component of the AI system development lifecycle. The roadmap outlined four strategic goals -- cyberdefense, risk reduction and resilience, operational collaboration and agency unification -- driven by the following:

  1. Responsibly use AI to support CISA's mission. Use AI-enabled software tools to strengthen cyberdefense through responsible, ethical and safe usage.
  2. Assure AI systems. Facilitate the adoption of secure-by-design principles to drive safe AI software development and implementation across the public and private sectors.
  3. Protect critical infrastructure from malicious use of AI. Monitor AI-based cyberthreats in partnership with government agencies and industry partners to safeguard U.S. critical infrastructure from adversaries.
  4. Collaborate with and communicate on key AI efforts with the interagency, international partners and the public. Coordinate with international partners to advance global AI security best practices, and ideate effective policy approaches for the U.S. government's national AI strategy.
  5. Expand AI expertise in CISA workforce. Lead efforts to actively recruit and develop AI-enabled employees through skills-based hiring approaches and cybersecurity certification training.

Among the five pillars, the second and third present the highest degree of difficulty. There isn't a straightforward solution to executing them at scale, but it starts by ensuring AI system developers weigh security objectives and business objectives as equal.

Blending secure by design with AI alignment

The introduction of CISA's roadmap called for AI system developers to prioritize secure-by-design principles as a top business priority:

The security challenges associated with AI parallel cybersecurity challenges associated with previous generations of software that manufacturers did not build to be secure by design, putting the burden of security on the customer. Although AI software systems might differ from traditional forms of software, fundamental security practices still apply. … As the use of AI grows and becomes increasingly incorporated into critical systems, security must be a core requirement and integral to AI system development from the outset and throughout its lifecycle.

Implemented during the early stages of product development, secure-by-design principles help reduce an application's exploit surface before it is made available for broad use -- promoting the security of the customer as a core business requirement rather than a technical feature. It requires more than merely enforcing major AI suppliers, like OpenAI and Google, to ingrain stringent guardrails within their products, however. Guardrails can be bypassed; just ask any penetration tester. Consider them a temporary solution.

The larger challenge is that, in addition to assuring AI systems, we also must protect everything AI is capable of touching -- critical infrastructure and private networks alike. Therefore, secure by design must be implemented through the lens of AI alignment, ensuring systems are built to uphold fundamental human values and ethical boundaries. Outside of Big Tech's power players, thousands of lesser-known advanced AI models are currently in development, and many of them will be open sourced with weaker guardrails than the GPT-4s and Bards of the world. Without AI alignment, all it takes is the right product in the wrong hands to wreak havoc.

There's no denying that AI alignment is an expensive and time-intensive undertaking. OpenAI is dedicating 20% of its total computing power to achieving AI alignment by 2027. Developers should view it as a necessary cost of doing business, especially with the potential consequences of inaction.

Assessing the consequences of inaction around AI security

Beyond danger to human life, failing to prioritize safe and secure AI systems could have legal consequences for AI system developers. CISA's roadmap emphasized the importance of holding developers more accountable for damages caused by their products, also a key point of the Biden administration's executive order on AI in October. This would shift the burden of responsibility away from victims, opening potential pathways for criminal or civil penalties in the wake of a major attack. We're seeing a similar trend across cybersecurity amid new federal regulations, with the Security and Exchange Commission recently issuing fraud charges against SolarWinds and its CISO for allegedly concealing cyber-risk from investors and customers.

It's a dilemma. On one hand, if an AI-powered attack succeeds because a utility provider's weak security architecture let it fall to a zero-day exploit, should the developer still be deemed liable? What if that developer didn't take standard measures to prevent its product from being exploited for adversarial intent? Who bears the ultimate responsibility? Is it reasonable and even possible to share responsibility and liability?

There may not be a right answer, but regardless, developers need to be cognizant about the financial and brand reputational risk of inaction. While danger to human life should be enough, quantifying the correlation between cyber-risk and business risk is an effective way to move the needle. Meanwhile, cyberdefenders have a role to play as well. Making cyber-resilience an organizational priority through strong cyber hygiene is non-negotiable in today's threat environment.

The rise of GenAI in 2023 showed how much can change in a year. And, while we can't predict where the AI era is headed, a steadfast commitment to facilitating safe and secure systems is paramount to navigating it safely. By following CISA's roadmap and blending secure by design with AI alignment throughout the development lifecycle, we can take proactive steps to ensure AI remains a force for good.

Ed Skoudis, president of SANS Technology Institute, is founder of SANS Penetration Testing Curriculum and Counter Hack.

Dig Deeper on Security operations and management

Networking
CIO
Enterprise Desktop
Cloud Computing
ComputerWeekly.com
Close