https://www.techtarget.com/searchsecurity/tip/Security-risks-of-AI-generated-code-and-how-to-manage-them
Large language model-based coding assistants, such as GitHub Copilot and Amazon CodeWhisperer, have revolutionized the software development landscape. These AI tools dramatically boost productivity by generating boilerplate code, suggesting complex algorithms and explaining unfamiliar codebases. In fact, research by digital consultancy Publicis Sapient found teams can see up to a 50% reduction in software engineering time using AI-generated code.
However, as AI content generators become embedded in development workflows, security concerns emerge. Consider the following: in February 2025, Andrej Karpathy, a former research scientist and founding member of OpenAI, described a "new kind of coding … where you fully give in to the vibes, embrace exponentials and forget that the code even exists." This tongue-in-cheek description of vibe coding prompted a flurry of comments from cybersecurity professionals concerned about a potential rise in vulnerable software from unchecked use of coding assistants based on large language models (LLMs).
Let's explore AI-generated code security risks for DevSecOps teams and how application security (AppSec) teams can ensure the code used doesn't introduce vulnerabilities.
Five security risks of using AI-generated code include the following.
The foremost security risk of AI-generated code is that coding assistants have been trained on codebases in the public domain, many of which contain vulnerable code. Without any guardrails, they reproduce vulnerable code in new applications. A recent academic paper found that at least 48% of AI-generated code suggestions contained vulnerabilities.
AI coding tools do not understand security intent; they reproduce code that appears correct based on its prevalence in the training data set. This is analogous to copy-pasting code from developer forums and expecting it to be secure.
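To make the pattern concrete, the following sketch (in Python, with table and function names chosen purely for illustration) contrasts the string-built SQL query that assistants frequently reproduce from public code with the parameterized form that prevents injection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_insecure(name: str):
    # Pattern commonly seen in public code and in AI suggestions: string
    # formatting builds the query, so input such as "' OR '1'='1" changes
    # the query's meaning (SQL injection).
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(name: str):
    # Parameterized query: the driver treats `name` strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_insecure("' OR '1'='1"))       # returns every row in the table
print(find_user_parameterized("' OR '1'='1"))  # returns nothing
```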
A related concern is that coding assistants might ingest vulnerable or deprecated dependencies into new projects in their attempts to solve coding tasks. Left ungoverned, this can lead to significant supply chain vulnerabilities.
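One lightweight guardrail is to check any suggested dependency against a public vulnerability database before adopting it. The sketch below shows one possible approach using the OSV.dev query API and only the Python standard library; the package name and version are illustrative, and real pipelines would more likely rely on a dedicated tool such as pip-audit or npm audit.

```python
import json
import urllib.request

def check_osv(package: str, version: str, ecosystem: str = "PyPI") -> list:
    """Query the OSV.dev database for known vulnerabilities in a package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": package, "ecosystem": ecosystem},
    }).encode()
    req = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp).get("vulns", [])

# Example: an old library version an assistant might still suggest.
for vuln in check_osv("requests", "2.19.1"):
    print(vuln["id"], "-", vuln.get("summary", "no summary"))
```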
Another risk is that developers could become overconfident in AI-generated code. Many developers mistakenly assume that AI code suggestions are vetted and secure. A Snyk survey revealed that nearly 80% of developers and practitioners said they thought AI-generated code was more secure than human-written code -- a dangerous trend.
Remember that AI-generated code is only as good as its training data and input prompts. LLMs have a knowledge cutoff and lack awareness of new and emergent vulnerability patterns. Similarly, if a prompt fails to specify a security requirement, the generated code might lack basic security controls or protections.
Coding assistants present significant intellectual property (IP) and data privacy concerns. They might generate large chunks of licensed open source code verbatim, leading to IP contamination in the new codebase. Some tools protect against the verbatim reuse of large chunks of public code, but without such protection, an assistant can suggest copyrighted code or proprietary algorithms. To get useful suggestions, developers might also prompt these tools with proprietary code or confidential logic. That input could be stored or later used in model training, potentially leaking secrets.
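Where hosted assistants are permitted at all, one precaution is to scrub obvious secrets from snippets before they leave the developer's machine. The sketch below is a deliberately rough illustration of that idea; real deployments would rely on purpose-built secret scanners and organizational policy rather than a handful of regexes.

```python
import re

# Rough patterns for common credential formats -- illustrative only.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key IDs
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]+['\"]"),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----[\s\S]+?-----END (?:RSA |EC )?PRIVATE KEY-----"),
]

def redact(snippet: str) -> str:
    """Replace likely secrets with a placeholder before sharing code externally."""
    for pattern in SECRET_PATTERNS:
        snippet = pattern.sub("[REDACTED]", snippet)
    return snippet

code_to_share = 'api_key = "sk-live-1234567890abcdef"\nprint("hello")'
print(redact(code_to_share))  # the credential assignment is replaced with [REDACTED]
```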
Many of the AI-generated code security risks are self-evident, leading to industry speculation about a looming crisis in software security. The benefits are significant too, however, and might outweigh the downsides.
AI pair programming with coding assistants can speed up development by handling boilerplate code, potentially reducing human error. Developers can generate code for repetitive tasks quickly, freeing time to focus on security-critical logic. Simply reducing the cognitive load of producing repetitive or error-prone code can result in significantly less vulnerable code.
AI models trained on vast code corpora might recall secure coding techniques that a developer could overlook. For instance, users can prompt ChatGPT to include security features, such as input validation, proper authentication or rate limiting, in its code suggestions. ChatGPT can also recognize vulnerabilities when asked -- for example, a developer can tell ChatGPT to review code for SQL injection or other flaws, and it attempts to identify issues and suggest fixes. This on-demand security expertise can help developers catch common mistakes earlier in the software development lifecycle.
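As a rough illustration of what such prompts should ask for, the sketch below (plain Python, with illustrative limits and patterns) adds allowlist input validation and a simple sliding-window rate limit in front of a lookup handler:

```python
import re
import time
from collections import defaultdict, deque

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allowlist, not a denylist
WINDOW_SECONDS, MAX_REQUESTS = 60, 30
_history = defaultdict(deque)

def allow_request(client_id: str) -> bool:
    """Sliding-window rate limit: at most MAX_REQUESTS per WINDOW_SECONDS per client."""
    now = time.monotonic()
    window = _history[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS:
        return False
    window.append(now)
    return True

def handle_lookup(client_id: str, username: str) -> str:
    if not allow_request(client_id):
        return "429 Too Many Requests"
    if not USERNAME_RE.fullmatch(username):
        return "400 Invalid username"
    return f"200 OK: looked up {username}"

print(handle_lookup("10.0.0.5", "alice_1"))      # accepted
print(handle_lookup("10.0.0.5", "' OR '1'='1"))  # rejected by validation
```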
Probably the biggest impact coding assistants can have on the security posture of new codebases comes from their ability to parse those codebases and act as an expert reviewer or a second pair of eyes. Prompting an assistant -- preferably a different one from that used to generate the code -- to review changes from a security perspective lets this kind of AI-driven code review augment a security professional's efforts by quickly covering a lot of ground.
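The sketch below illustrates one way to automate that second pair of eyes, assuming the OpenAI Python SDK and an API key in the environment; the model name and review prompt are illustrative, and any chat-capable model, hosted or local, could fill the same role.

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

REVIEW_PROMPT = (
    "You are an application security reviewer. Examine the following diff for "
    "injection flaws, missing input validation, weak authentication, hardcoded "
    "secrets and insecure dependencies. Report each finding with severity, the "
    "affected lines and a suggested fix. If nothing is found, say so explicitly."
)

def security_review(diff_text: str, model: str = "gpt-4o-mini") -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content

# Example: ask one assistant to review a change produced by a different one.
print(security_review('+query = f"SELECT * FROM users WHERE id = {user_id}"'))
```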
AI coding platforms are evolving to prioritize security. GitHub Copilot, for example, introduced an AI-based vulnerability filtering system that blocks insecure code patterns. At the same time, the Cursor AI editor can integrate with security scanners, such as Aikido Security, to flag issues as code is written, highlighting vulnerabilities or leaked secrets within the integrated development environment (IDE) itself.
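A similar feedback loop can be wired into any workflow with open source scanners. The sketch below assumes the Bandit scanner for Python is installed and simply runs it over AI-generated files as a pre-commit or CI step, failing the check when findings are reported; IDE integrations and commercial tools provide the same loop with richer rules.

```python
import json
import subprocess
import sys

def scan(paths: list) -> int:
    """Run Bandit over the given paths, print findings and return an exit code."""
    result = subprocess.run(
        ["bandit", "-r", *paths, "-f", "json", "-q"],
        capture_output=True,
        text=True,
    )
    findings = json.loads(result.stdout or "{}").get("results", [])
    for f in findings:
        print(f'{f["filename"]}:{f["line_number"]} '
              f'[{f["issue_severity"]}] {f["issue_text"]}')
    return 1 if findings else 0

if __name__ == "__main__":
    # Usage: python scan_generated.py path/to/generated/code
    sys.exit(scan(sys.argv[1:] or ["."]))
```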
To use coding assistants securely, combine them with human oversight and automated security checks at every stage of development.
By recognizing both the benefits and the risks of AI code generation, developers and security professionals can strike a balance. Tools such as Copilot, ChatGPT and Cursor can boost productivity and even enhance security through quick access to best practices and automated checks. But without the proper checks and mindset, they can just as easily introduce new vulnerabilities.
In summary, AI coding tools can improve AppSec, but only if they are integrated with strong DevSecOps practices. Pair the AI's speed with human oversight and automated security checks to ensure nothing critical slips through.
Colin Domoney is a software security consultant who evangelizes DevSecOps and helps developers secure their software. He previously worked for Veracode and 42Crunch and authored a book on API security. He is currently a CTO and co-founder, as well as an independent security consultant.
29 May 2025