What Amazon Q prompt injection reveals about AI security
Experts say a malicious prompt injection in the Amazon Q extension for VS Code doesn't represent a fundamentally new threat, but reflects how AI amplifies security risks.
It was an attack scenario that has played out in code repositories -- particularly open source ones -- for years: a leaked credential allowed an attacker to publish a malicious command.
An anonymous person submitted the command to the GitHub repository belonging to the Visual Studio Code (VS Code) extension for the Amazon Q coding agent. The command was published in version 1.84 of the extension on July 17 and remained available until July 19. According to an Amazon postmortem published July 23 and updated July 25, the command author gained access to the release process for the repository using an "inappropriately scoped GitHub token in [the repo's] CodeBuild configuration."
The command instructed the agent to "clean a system to a near-factory state and delete file-system and cloud resources," according to a July 23 report that was confirmed by an Amazon spokesperson.
An Amazon spokesperson said last week that staff detected the malicious command through code inspection, but didn't say how it had escaped notice for multiple days. The command would not have executed successfully due to a syntax error, according to the postmortem. A person claiming to be the author of the command said the command had intentionally been disabled, but was published to demonstrate Amazon's lax security.
"Open source projects traditionally welcome assistance from the general public, and even in private repositories, software engineers are in danger of blindly accepting pull requests from strangers, as it is such a common, boring, repetitive task," said Adrian Sanabria, an independent security consultant.
In this case, AI wasn't the problem -- it was the bait, said Matt Moore, CTO and co-founder at supply chain security vendor Chainguard.
"The real issue lies in the brittle scaffolding supporting that [AI] tooling: unmanaged credentials, insufficient isolation and a lack of layered defenses," Moore said.
"Before AI, we could generally rely on code and particular resources to affect software behavior," said David Strauss, chief architect and co-founder at WebOps company Pantheon, in an email interview. "With AI, almost anything a project ships could conceivably affect the software's behavior -- even a change to a natural language string. We're getting to the point where even the contents of a 'readme.txt' file could plausibly influence AI-integrated tooling. Merge even non-code changes with caution!"
Prompt injections also don't necessarily require access to a formal merge process, Sanabria said.
"Any input a generative AI model might encounter could contain malicious prompts," he said. "It is now commonplace for LinkedIn members to put prompts into their LinkedIn profile in an attempt to catch recruiters using AI to contact them."
"Even a junior developer or an intern who's going to school for coding, they now can build much more impactful things," Flug said. "You're getting people who aren't well-versed ... in security protocols. They can do a lot more cool things, but there's also a lot more risk involved."
AI agents can also read through huge repositories of data much faster than a human can, he said.
"There's no such thing as security through obscurity anymore," Flug said. "The thing that was 20 years old and lived 50 layers down in Microsoft SharePoint that only one person knew how to navigate to -- agents are going to find that in a second."
Cybersecurity vendors such as Cloudflare and Palo Alto Networks already offer tools designed to filter inappropriate AI inputs and outputs, Sanabria said.
"Ultimately, I think this security layer is going to be an essential wrapper for AI models in the same way most enterprise applications and SaaS services sit behind a WAF [web application firewall] today," he said.
In the case of the Amazon Q VS Code extension, new tools wouldn't necessarily have been required to mitigate the threat, according to Chainguard's Moore.
"This is why short-lived credentials are important. Long-lived, static tokens are liabilities waiting to be discovered, leaked and misused," Moore said. "This is why defense in depth is important. Had there been robust branch protections, enforced signed commits or credential federation in place, this attack might never have been possible."
Beth Pariseau, a senior news writer for Informa TechTarget, is an award-winning veteran of IT journalism covering DevOps. Have a tip? Email her or reach out @PariseauTT.