Getty Images/iStockphoto

3 enterprise AI horror stories for CISOs from 2025

In a race to adopt innovative technology, organizations across the globe made mistakes in enterprise AI deployment. What lessons can you learn from this year's AI horror stories?

2025 has been a turbulent year in the AI sector. As the industry continues to grow, users and stakeholders have witnessed AI booms and unfortunate busts -- some of which were near catastrophic.

In McKinsey & Company's March 2025 report, "The State of AI: How organizations are rewiring to capture value," the firm reported that 78% of organizations use AI in some capacity, while 71% use generative AI (GenAI) as of Q3 2024. AI use is widespread and will continue growing into new industries and verticals. However, eager adoption can often lead to missteps that interrupt business functions and are catastrophic to organizational well-being.

Read on to learn about some of these enterprise AI horror stories from the last year and glean valuable lessons from high-profile IT disasters that brought financial, legal and reputational harm to their organizations.

GitHub prompt injection

In May 2025, Invariant Labs discovered a glaring architectural vulnerability within the GitHub Model Context Protocol (MCP) server. The vulnerability enabled attackers to use AI agents to collect private information.

Normally, MCP connects AI applications with external tools, services and data sources. Developers configure personal access tokens (PATs) to grant their tools specific permissions and authenticate their requests within MCP. PATs are convenient because they provide custom permissions and lifespan, align with one user identity and automate authentication. Unchecked PATs can create security vulnerabilities, however.

Consider a developer who asks an AI assistant to review current project issues. The developer connects their assistant to the MCP using a PAT, which then provides access to every project issue within the users' repositories. The PAT can also read any public repository that contains public contributions, such as commits, pull requests, code reviews -- and malicious prompt injections, such as the GitHub MCP vulnerability.

Once the AI assistant reads this malicious prompt injection, the prompt directs the AI to go rogue. The assistant can then access private repositories and steal sensitive organizational data.

"Prompt injection -- a simple language trick -- can still breach systems no matter how advanced the AI," said Archie Jackson, global head of IT and Chief Information Security Officer at Incedo, in a LinkedIn article. "Almost any AI-powered service can be compromised with nothing more than carefully crafted text or hidden commands."

To prevent prompt injection attacks, Invariant Labs recommended that organizations implement granular permission controls for PATs and continuous security monitoring.

Replit deletes customer database

In July 2025, an experiment with the AI coding tool Replit led to the erasure of a user's entire database that contained the data of hundreds of executives and companies.

Jason Lemkin, founder of B2B software community SaaStr, encountered issues with Replit, even before the tool's agent deleted his company's database. According to posts on Lemkin's X account, the agent experienced hallucinations, faked reports and data, and created a facsimile algorithm to "make it look like it was still working." The agent proceeded to make changes to Lemkin's infrastructure during a designated "code and action freeze" that should have prevented the agent from making unauthorized changes.

After telling Lemkin what it had done, the Replit agent seemed to acknowledge the gravity of the situation. "This was a catastrophic failure on my part," it replied to Lemkin. "I destroyed months of work in seconds."

Replit CEO Amjad Masad responded to Lemkin's experience on X and assured consumers that the company had placed new safeguards to prevent similar issues in the future. Lemkin also said SaaStr recovered its deleted data.

For enterprises, the best defense against this kind of loss is to do the following:

  • Create strict separation in development, staging and production environments.
  • Configure appropriate permissions and access controls for users and agents.
  • Establish monitoring capabilities and comprehensive backup and recovery strategies.

"These platforms shouldn't be able to make drastic changes to production without explicit consent," said Kelly Vaughn, senior manager of engineering at Zapier, in a LinkedIn post. "We're all learning … But this is your reminder: these tools are exactly that -- tools. Trust the tools if you want. But design your systems like you don't."

Commonwealth Bank of Australia chatbot failure

August 2025 saw the Commonwealth Bank of Australia (CBA) roll back its decision to lay off dozens of customer service workers after it experienced a chatbot failure.

In June, the CBA introduced a chatbot tasked with handling customers' inquiries. Information Age reported that the bank could successfully divert 2,000 calls per week from its call centers. The idea was to direct customers to the chatbot to eliminate as many "simple" queries as possible, saving the "complex customer queries" for human staff. This decision led to CBA laying off 45 employees.

The bank rushed the initiative far too quickly, however. According to the Finance Sector Union (FSU), its members within the bank reported workload increases after CBA implemented the chatbot. In a statement, FSU said "call volumes were rising, with management scrambling to offer overtime and even pulling team leaders onto the phones."

By August, CBA walked back the decision to terminate its call center employees. The bank also admitted fault, saying it "should have been more thorough in our assessment of the roles required."

Enterprises need to carefully evaluate the business and employee implications of any technology rollout, according to Kate Russell, director of Hum(ai)n.

"When leaders chase efficiency without preparing their people for change, the result isn't productivity, it's breakdown," Russell said in a LinkedIn post.

Russell recommended that businesses approach AI adoption through a framework that builds employee trust, creates space for experimentation with tools, links the tools to a greater business purpose and empowers employees to own and advocate for the tool.

Enterprises can also use several change management strategies for AI adoption that can help them deploy AI tools that provide a positive ROI.

Everett Bishop is an assistant site editor for Informa TechTarget covering AI and emerging technologies. He graduated from the University of New Haven in 2019.

Next Steps

Dig Deeper on Enterprise applications of AI