
Security risks of AI-generated code and how to manage them
Application security teams are understandably worried about how developers use GenAI and LLMs to create code. But it's not all doom and gloom; GenAI can help secure code, too.
Large language model (LLM)-based coding assistants, such as GitHub Copilot and Amazon CodeWhisperer, have revolutionized the software development landscape. These AI tools dramatically boost productivity by generating boilerplate code, suggesting complex algorithms and explaining unfamiliar codebases. In fact, research by digital consultancy Publicis Sapient found teams can see up to a 50% reduction in network engineering time using AI-generated code.
However, as AI content generators become embedded in development workflows, security concerns emerge. Consider the following:
- Does AI-generated code introduce new vulnerabilities?
- Can security teams trust code that developers might not fully understand?
- How do teams maintain security oversight when code creation becomes increasingly automated?
Let's explore AI-generated code security risks for DevSecOps teams and how application security (AppSec) teams can ensure the code used doesn't introduce vulnerabilities.
The security risks of AI-generated coding assistants
In February 2025, Andrej Karpathy, a former research scientist and founding member of OpenAI, described a "new kind of coding … where you fully give in to the vibes, embrace exponentials and forget that the code even exists." This tongue-in-cheek description of vibe coding prompted a flurry of comments from cybersecurity professionals concerned about a potential rise in vulnerable software due to unchecked use of LLM-based coding assistants.
Five security risks of using AI-generated code include the following.
Code based on public domain training
The foremost security risk of AI-generated code is that coding assistants have been trained on codebases in the public domain, many of which contain vulnerable code. Without guardrails, they can reproduce that vulnerable code in new applications. A recent academic paper found that at least 48% of AI-generated code suggestions contained vulnerabilities.
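To make the risk concrete, here is a minimal sketch contrasting a database lookup written in the insecure style that pervades public repositories with a parameterized alternative; the function, table and column names are invented for illustration.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern common in public code that an assistant may happily reproduce:
    # user input is interpolated directly into the SQL string (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchone()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats crafted input such as
    # "' OR '1'='1" as data, not as SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchone()
```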
Code generated without considering security
AI coding tools do not understand security intent; they reproduce code that appears correct simply because similar code is prevalent in the training data set. This is analogous to copy-pasting code from developer forums and expecting it to be secure.
Code could use deprecated or vulnerable dependencies
A related concern is that coding assistants might ingest vulnerable or deprecated dependencies into new projects in their attempts to solve coding tasks. Left ungoverned, this can lead to significant supply chain vulnerabilities.
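One lightweight mitigation is to audit what an assistant actually pulls in. The hypothetical sketch below checks installed Python packages against a denylist of known-bad versions; in practice, the denylist would come from an advisory database or a dedicated tool such as pip-audit rather than being hard-coded.

```python
from importlib import metadata

# Hypothetical denylist for illustration; real data would come from an
# advisory feed or a tool such as pip-audit, not a hard-coded dictionary.
KNOWN_BAD = {
    "example-lib": {"1.0.0", "1.0.1"},  # placeholder package and versions
}

def audit_installed_packages() -> list[str]:
    """Return findings for installed packages that match the denylist."""
    findings = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in KNOWN_BAD.get(name, set()):
            findings.append(f"{name}=={dist.version} is a known-vulnerable version")
    return findings

if __name__ == "__main__":
    for finding in audit_installed_packages():
        print(finding)
```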
Code used is assumed to be vetted and secure
Another risk is that developers could become overconfident in AI-generated code. Many developers mistakenly assume that AI code suggestions are vetted and secure. A Snyk survey revealed that nearly 80% of developers and practitioners said they thought AI-generated code was more secure than human-written code -- a dangerous assumption.
Remember that AI-generated code is only as good as its training data and input prompts. LLMs have a knowledge cutoff and lack awareness of new and emergent vulnerability patterns. Similarly, if a prompt fails to specify a security requirement, the generated code might lack basic security controls or protections.
Code could use another company's IP or codebase illegally
Coding assistants present significant intellectual property (IP) and data privacy concerns. They might generate large chunks of licensed open source code verbatim, contaminating the new codebase with encumbered IP. Some tools protect against the reuse of large chunks of public domain code, but without such protection, AI can suggest copyrighted code or proprietary algorithms. To get useful suggestions, developers might also prompt these tools with proprietary code or confidential logic; that input could be stored or later used in model training, potentially leaking secrets.
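One practical safeguard, sketched here under the assumption that prompts pass through an internal wrapper before reaching an external model, is to scrub likely secrets from code before it is sent. The regex patterns are illustrative only; a real deployment would rely on a maintained secret-scanning ruleset.

```python
import re

# Illustrative patterns, not a complete ruleset.
REDACTION_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<REDACTED_AWS_ACCESS_KEY>"),
    (re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*=\s*['\"][^'\"]+['\"]"),
     r"\1 = '<REDACTED>'"),
]

def scrub_prompt(source: str) -> str:
    """Replace likely secrets in code before it is sent to an external LLM."""
    for pattern, replacement in REDACTION_PATTERNS:
        source = pattern.sub(replacement, source)
    return source

print(scrub_prompt('db_password = "hunter2"  # connect to billing DB'))
```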
The security benefits of AI-generated coding assistants
Many of the security risks of AI-generated code are self-evident and have prompted speculation about a looming crisis in the software industry. The benefits are significant too, however, and might outweigh the downsides.
Reduced development time
AI pair-programming with coding assistants can speed up development by handling boilerplate code, potentially reducing human error. Developers can generate code for repetitive tasks quickly, freeing time to focus on security-critical logic. Simply reducing the cognitive load on developers to produce repetitive or error-prone code can result in significantly less vulnerable code.
Providing security suggestions
AI models trained on vast code corpora might recall secure coding techniques that a developer could overlook. For instance, users can prompt ChatGPT to include security features, such as input validation, proper authentication or rate limiting, in its code suggestions. ChatGPT can also recognize vulnerabilities when asked -- for example, a developer can tell ChatGPT to review code for SQL injection or other flaws, and it attempts to identify issues and suggest fixes. This on-demand security expertise can help developers catch common mistakes earlier in the software development lifecycle.
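As an illustration, the hypothetical handler below shows the kind of allow-list input validation and basic rate limiting a security-minded prompt can elicit; the limits, names and in-memory request log are placeholders, not production-ready choices.

```python
import re
import time
from collections import defaultdict

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list validation
RATE_LIMIT = 5          # requests allowed
WINDOW_SECONDS = 60.0   # per rolling window

_request_log: dict[str, list[float]] = defaultdict(list)

def is_rate_limited(client_id: str) -> bool:
    """Very simple in-memory limiter; real services would use Redis or similar."""
    now = time.monotonic()
    recent = [t for t in _request_log[client_id] if now - t < WINDOW_SECONDS]
    _request_log[client_id] = recent
    if len(recent) >= RATE_LIMIT:
        return True
    recent.append(now)
    return False

def handle_lookup(client_id: str, username: str) -> str:
    if is_rate_limited(client_id):
        return "429 Too Many Requests"
    if not USERNAME_RE.fullmatch(username):
        return "400 Bad Request: invalid username"
    # ... perform the actual lookup here ...
    return f"200 OK: looked up {username}"
```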
Security reviews
Probably the biggest impact coding assistants can have on the security posture of new codebases comes from their ability to parse those codebases and act as an expert reviewer or a second pair of eyes. Prompting an assistant from a security perspective -- preferably a different one than was used to generate the code -- lets this kind of AI-driven code review augment a security professional's efforts by quickly covering a lot of ground.
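A minimal sketch of this second-pair-of-eyes review, assuming the OpenAI Python client (openai 1.x) and an API key in the environment, might look like the following; the model name and prompt wording are placeholders to adapt.

```python
from openai import OpenAI  # assumes the openai 1.x client library is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = (
    "Act as an application security reviewer. Identify injection, authentication "
    "and input-validation issues in the following code, and suggest fixes:\n\n{code}"
)

def review_code(snippet: str, model: str = "gpt-4o") -> str:
    """Ask a separate model to review a snippet for security flaws."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": REVIEW_PROMPT.format(code=snippet)}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(review_code("query = f\"SELECT * FROM users WHERE name = '{name}'\""))
```

Using a different model or vendor from the one that generated the code reduces the chance that author and reviewer share the same blind spots, and the output still needs human triage.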
AI coding platforms are evolving to prioritize security. GitHub Copilot, for example, introduced an AI-based vulnerability filtering system that blocks insecure code patterns. At the same time, the Cursor AI editor can integrate with security scanners, such as Aikido Security, to flag issues as code is written, highlighting vulnerabilities or leaked secrets within the integrated development environment (IDE) itself.
Best practices for secure adoption of coding assistants
Follow these best practices to ensure the secure use of code assistants:
- Treat AI suggestions as unreviewed code. Never assume AI-generated code is secure. Treat it with the same scrutiny as a snippet from an unknown developer. Before merging, always perform code reviews, linting and security testing on AI-written code. In practice, this means running static application security testing (SAST) tools, dependency checks and manual review on any code from Copilot or ChatGPT, just as with any human-written code.
- Maintain human oversight and judgment. Use AI as an assistant, not a replacement. Make sure developers remain in the loop, understanding and vetting what the AI code generator produces. Encourage a culture of skepticism.
- Use AI deliberately for security. Turn the tool's strengths into an advantage for AppSec. For example, prompt the AI to focus on security, such as "Explain any security implications of this code" or "Generate this function using secure coding practices (input validation, error handling, etc.)." Remember that any AI output is a starting point; the development team must vet and integrate it correctly.
- Enable and embrace security features. Take advantage of the AI tool's built-in safeguards. For example, if using Copilot, enable the vulnerability filtering and license blocking options to automatically reduce risky suggestions.
- Integrate security scanning in the workflow. Augment AI coding with automated security tests in the DevSecOps pipeline. For instance, use IDE plugins or continuous integration pipelines that run static analysis on new code contributions -- this flags insecure patterns, whether written by a human or AI. Some modern setups integrate AI and SAST; for example, the Cursor IDE's integration with Aikido Security can scan code in real time for secrets and vulnerabilities as it's being written. A minimal pipeline sketch follows this list.
- Establish policies for AI use. Organizations should develop clear guidelines that outline how developers can use AI code tools. Define what types of data can and cannot be shared in prompts to prevent leakage of crown-jewel secrets.
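As one example of wiring these practices together, the sketch below assumes Bandit and pip-audit are available in the CI environment and that application code lives under src/ (adjust paths for your repository); it fails the build if either tool reports findings.

```python
import subprocess
import sys

# Assumed CI checks: Bandit for static analysis of Python code under src/,
# pip-audit for known-vulnerable dependencies in the current environment.
CHECKS = [
    (["bandit", "-r", "src", "-q"], "static analysis (bandit)"),
    (["pip-audit"], "dependency audit (pip-audit)"),
]

def main() -> int:
    failed = False
    for command, label in CHECKS:
        print(f"Running {label}...")
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"FAILED: {label}")
            failed = True
    return 1 if failed else 0

if __name__ == "__main__":
    sys.exit(main())
```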
By recognizing both the benefits and the risks of AI code generation, developers and security professionals can strike a balance. Tools such as Copilot, ChatGPT and Cursor can boost productivity and even enhance security through quick access to best practices and automated checks. But without the proper checks and mindset, they can just as easily introduce new vulnerabilities.
In summary, AI coding tools can improve AppSec, but only if they are integrated with strong DevSecOps practices. Pair the AI's speed with human oversight and automated security checks to ensure nothing critical slips through.
Colin Domoney is a software security consultant who evangelizes DevSecOps and helps developers secure their software. He previously worked for Veracode and 42Crunch and authored a book on API security. He is currently a CTO and co-founder, and an independent security consultant.