Vibe coding -- using generative AI to help write code -- has gained traction as developers tap into AI to build software. Rather than hand-code every line of logic, developers interact with AI systems using natural language and iterative adjustment.
In short, developers convey desired outcomes, workflows or user experiences to the AI system. In response, the AI acts like a copilot by generating, tweaking or refactoring code in real time. The result: a feedback loop of human intent and machine generation.
This approach is emerging as developers increasingly adopt large language models and GenAI assistants -- among them, GitHub Copilot, ChatGPT and others -- to accelerate prototyping, innovation and iteration.
Several trends are pushing vibe coding forward:
Rapid productivity gains. Developers can move from concept to working prototype more quickly.
Lowered skill barrier. AI assists with syntax, dependencies, scaffolding and patterns.
Cultural momentum. Developer communities prize creativity and fluidity.
AI service maturity. GenAI is embedded across integrated development environments and platforms.
While vibe coding can be beneficial, it introduces several risks that organizations must contend with.
Basis for managing vibe coding risk: It's just fancy AI risk
In an article posted by Trusted Cyber Annex, I illustrated how organizations rely on AI, as shown in the following diagram.
It contextualizes vibe coding within a few simple concepts: the organization, the developer and the AI agent. There are some differences between using an internal AI agent and an external agent from a risk perspective -- specifically in terms of control over data gathering. However, most organizations use an external AI agent for vibe coding. To that end, let's focus on the organization, developer, the AI agent and the arrow that links them together.
Security risks unique to vibe coding
To appropriately understand the security risks of vibe coding, let's break down threats based on each element in the picture. Risks in vibe coding include, but are not limited to the following:
Developer risk
Improper training. The developer is not sure what is and is not an acceptable use of the AI coding agent. This initial risk contributes to many of the issues below.
AI agent risk
Improperly trained models. The AI agent could be trained on data that is not relevant to the use case -- niche coding language or paradigm.
Poisoned models. The AI agent could have been trained on a data set in which the outputs of the model were intentionally engineered by an internal or external party to be malicious.
Developer-caused AI agent risks
Data leaks. The developer could send data within a prompt or referenced material that contains sensitive data -- e.g. personally identifiable information (PII) or financials.
Prompt injections. The developer could send a seemingly valid prompt, but data copied from other sources has hidden instructions that cause the AI agent to behave in unintended ways.
AI agent-created developer risks
Insecure code. The AI agent could return code that accomplishes the task but is vulnerable to known exploitable vulnerabilities, such as SQL injection or cross-site scripting.
Hallucinations. Even if the AI agent is trained on the correct data, its response could be non-functional or contain errors.
Cyber supply chain. The AI agent could return code that imports libraries that are actively being exploited or known to be vulnerable.
Organizational risks
Technical debt. The velocity and volume of the code produced can cause the next iteration of the product or feature to take longer due to poor architecture decisions, security flaws or code that simply isn't needed.
Accountability. As the code base grows and morphs with many developers vibe coding at the same time, the lines between who wrote the code -- and thus who is responsible -- begin to blur. Additionally, if code is written by an AI agent and works on the first execution, the developer could commit the code without fully understanding how it works. If the code breaks later in the lifecycle, no one will know how to fix it.
Vibe coding security best practices
Organizations that are extremely risk-averse might want to ban vibe coding outright. Rather than taking that step -- which is likely impractical -- organizations should focus on controlled enablement. Consider the following vibe coding best practices:
Good governance. Appoint an AI czar or director to oversee AI adoption in all forms across the organization.
Human‑in‑the‑loop. Treat AI outputs as drafts and ensure human review and oversight.
Policy framework. Create rules for acceptable use and define secure-by-design expectations.
Visibility over prohibition. Map usage, inventory tools and define approved AI environments.
Prompt and input sanitization. Include explicit security requirements in prompts and avoid using secrets or PII. Set instructions apart from data.
Code review and testing. Subject AI-generated code to static or dynamic analysis and dependency scanning.
Auditability and traceability. Maintain clear logs, tagging and version history for AI-generated content.
Harnessing the opportunity securely
Vibe coding represents a significant evolution in how software is built, merging human creativity with AI-driven acceleration. Ignoring this approach could stifle innovation; embracing it carelessly invites vulnerabilities and compliance failures.
Vibe coding represents a significant evolution in how software is built, merging human creativity with AI-driven acceleration. Ignoring this approach could stifle innovation, yet embracing it carelessly invites vulnerabilities and compliance failures.
Remember that vibe coding will accelerate the production of the current quality of software. If an organization has guardrails in place for normal coding procedures, propagating those to AI agents will produce higher-quality code. If the quality of an organization's code is already suspect, AI agents will create significantly more suspect code.
CISOs and other security leaders should pursue secure enablement: Accept vibe coding as part of the modern software development lifecycle, embed visibility and governance, adapt secure development policies to AI workflows and provide traceability for audits. By doing so, CISOs can create a culture of responsible, resilient and future‑ready development.
Matthew Smith is a vCISO and management consultant specializing in cybersecurity risk management and AI.