
How watermarking AI content benefits businesses

AI watermarking is being used to identify deepfakes, enhance transparency and build trust in AI-generated content, but reliability issues and false positives present challenges.

The dramatic growth of AI-generated content is reshaping business risk faster than most organizations can adapt. Those risks include intellectual property theft, erosion of consumer trust and deepfake attacks, which now occur with alarming regularity.

According to Gartner's 2025 cybersecurity leaders survey of nearly 500 senior business executives, 62% of organizations experienced at least one deepfake attack in the previous 12 months, 43% reported at least one deepfake audio call incident and 37% had experienced deepfakes in video calls.

The cost of deepfakes can be painful. A single incident cost global engineering firm Arup $25.6 million in January 2024, when a finance employee joined what appeared to be a video conference with the CFO and other colleagues, all of them AI-generated deepfakes.

The inconvenient truth is that AI-generated forgeries have become increasingly difficult to detect.

AI watermarks are a potential solution, providing a degree of verifiable authenticity to AI-generated content. There's momentum behind watermark use as new regulations, such as the EU AI Act and California's AI Transparency Act, include watermark use requirements.

How AI watermarking works

Watermarking isn't a new concept. It's been used for centuries on banknotes, postage stamps and official documents to prove authenticity and limit forgery risk. The digital age has adapted this time-tested approach for new media.

AI watermarking embeds a recognizable, unique signal into content during or after generation, creating a digital signature that verifies authenticity without degrading quality. The process involves two stages: encoding the watermark during or after content generation and detecting it later. Two primary approaches have emerged:

  1. Metadata-based systems. The Coalition for Content Provenance and Authenticity (C2PA) oversees an open source technical standard used to verify the origin and subsequent history of media. It cryptographically embeds information about who created content, when, where and with what tools. This data travels with the file and can be verified using free tools, with any tampering becoming immediately evident. The C2PA coalition now includes Adobe, BBC, Google, Meta, Microsoft, OpenAI, Publicis Groupe, Sony and Truepic.
  2. Pattern-based systems. These watermarks alter content in ways imperceptible to people but detectable by algorithms. Google DeepMind's SynthID subtly biases the words an AI chooses during text generation, creating statistical patterns invisible to readers but identifiable through analysis. SynthID watermarks content across text, images, audio and video.
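The statistical principle behind pattern-based text watermarks can be sketched in a few lines of Python. This is a simplified illustration of the "green list" technique described in watermarking research, not Google's actual SynthID algorithm; the function names and the 50% vocabulary split are illustrative assumptions:

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly split the vocabulary, seeded by the previous token.
    A watermarking generator biases its word choices toward this 'green' half."""
    def score(tok: str) -> int:
        return hashlib.sha256(f"{prev_token}|{tok}".encode()).digest()[0]
    ranked = sorted(vocab, key=score)
    return set(ranked[: int(len(vocab) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detection score: the fraction of words drawn from the green list seeded
    by their predecessor. Unmarked text hovers near 0.5; watermarked runs higher."""
    transitions = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in transitions if tok in green_list(prev, vocab))
    return hits / max(len(transitions), 1)
```

A generator that consistently favors green-listed words pushes the detection score well above the roughly 50% expected from unmarked text, which is how a detector can flag watermarked output without seeing the original prompt.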

Business benefits of watermarking

Watermarking has a range of potential business benefits.

Regulatory compliance and risk management

The EU AI Act is among the first regulations to include an AI watermarking mandate: it requires machine-readable marking of AI-generated outputs by August 2026. Penalties for noncompliance reach €15 million or 3% of the preceding financial year's global turnover, whichever is higher. California's AI Transparency Act mandates that AI providers with more than one million monthly users have invisible watermarks in place by January 2026.

Complying with these emerging regulations is the strongest rationale for acting now, according to Nik Kale, principal engineer at Cisco CX Engineering. Companies that implement watermarking will be better prepared operationally, even if underlying techniques remain imperfect, he said.

"The strongest argument for adopting watermarking is strategic readiness, building operational experience and governance muscle memory ahead of future regulatory and policy requirements," Kale said.

Jean-Claude Renaud, CEO of Winston AI, an AI content detection technology provider, had a similar take: "Watermarking makes sense as part of a broader trust and governance stack, not as a silver bullet," he explained. "Implemented early, it helps businesses prepare for regulation, partner requirements and future provenance standards without scrambling later."

Authenticity and IP protection

With the growing volume of deepfake incidents, watermarking provides a verification infrastructure. Businesses that attempt to track the provenance of AI-generated content demonstrate a level of maturity with the technology to customers, partners and regulators, Kale said. "It isn't a guarantee of security," he added, "but it signals seriousness and preparedness when used as part of a broader governance program."

Customer trust and transparency

Businesses using AI-generated content must be concerned about how a lack of transparency can erode trust. Watermarks can help mitigate this risk. "To customers, regulators and enterprise buyers, watermarking shows intent: you're taking content transparency seriously, even if the tooling isn't perfect yet," Renaud said.

Sector-specific benefits

For AI use in education and healthcare, watermarking is taking on added significance. Universities and colleges could use it to help alleviate concerns about students no longer creating original content for courses, said Tiffany Masson, founder of AI consultancy Falkovia. Pressure will continue for them to prove that systems, policies and procedures are in place, she said. In healthcare, she noted, transparency is critical for providers using AI-generated recommendations to ensure patients receive ethical care.

Competitive differentiation

Early adopters building watermarking infrastructure ahead of regulatory deadlines gain compliance advantages and brand trust positioning. This approach demonstrates proactive governance to customers and business partners.

Challenges and limitations of AI watermarking

While watermarking offers important benefits, the current technology faces constraints that businesses must understand.

Reliability issues

The track record shows persistent reliability issues. OpenAI launched an AI text detector for ChatGPT in January 2023 but shut it down six months later, citing its low accuracy rate. This failure underscores a fundamental challenge: Watermarks are often easy to remove or degrade, particularly through routine content workflows.

"Most watermarking systems hold up reasonably well against light compression or simple re-encoding," Renaud said. "Once you introduce cropping, resizing, screenshots, format hopping or copy-paste workflows, reliability drops quickly."

The bigger issue is that watermarking only works when the entire pipeline cooperates, Renaud added. If a single step strips metadata, flattens content or re-renders it, the watermark is gone.
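Why stripped or altered metadata defeats verification can be shown with a minimal sketch of a signed provenance manifest. This is a toy illustration using an HMAC-signed JSON blob, not the actual C2PA manifest format, and the key handling is deliberately simplified:

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a real signing certificate

def attach_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Bundle content with a signed provenance manifest (simplified, not real C2PA)."""
    manifest = {"creator": creator, "tool": tool,
                "content_hash": hashlib.sha256(content).hexdigest()}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"content": content, "manifest": manifest,
            "signature": hmac.new(SECRET, payload, hashlib.sha256).hexdigest()}

def verify(bundle: dict) -> bool:
    """Valid only if the manifest is present, unmodified and still matches the content."""
    manifest = bundle.get("manifest")
    if manifest is None:  # a pipeline step stripped the metadata
        return False
    payload = json.dumps(manifest, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(bundle.get("signature", ""), expected):
        return False  # the manifest itself was edited
    return manifest["content_hash"] == hashlib.sha256(bundle["content"]).hexdigest()
```

In a real C2PA workflow, certificates and signed hashes play the role of the shared key here, but the failure mode is the same: any pipeline step that drops or rewrites the manifest leaves the content unverifiable.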

Enterprises must set realistic expectations, Kale said. "From an enterprise risk management perspective," he explained, "organizations should consider watermarking as a deterrent and a signal of intent, rather than a reliable method to prevent tampering or to serve as forensic evidence."

False positive risks

Malicious actors could add watermarks to authentic human-created content to cast doubt on its legitimacy. Random chance could also produce patterns that mimic watermarks, leading to incorrect accusations. These risks complicate business decision-making around content verification and dispute resolution.

Barriers to adoption

Technical fragility is limiting widespread adoption of watermarking technology. Beyond that, Renaud pointed to the following obstacles:

  • Fragmentation. There's no universal standard that works across models, platforms and downstream tools. A watermark applied in one system might be unreadable in another.
  • False confidence. Some businesses assume watermarks equal protection or traceability when, in fact, they're easy to remove, both intentionally and unintentionally. This mindset creates a dangerous gap between perception and reality.
  • Lack of immediate ROI. Watermarking is mostly defensive. It doesn't drive revenue, improve performance or enhance user experience on its own.

The future of AI watermarking

Watermarking is evolving rapidly. Emerging techniques like watermark ensembling let multiple watermarks coexist without overwriting one another, creating stronger provenance chains. Zero-knowledge-proof systems enable verification without exposing detection algorithms.

Yet an arms race is underway. Researchers have found watermarks can be removed through sophisticated attacks, and commercial bypass services advertise high success rates. The realistic goal, according to experts, is to raise barriers rather than achieve perfect detection.

Alternative approaches are gaining traction alongside watermarking. Post-hoc detection tools analyze statistical patterns in content, though accuracy varies. The consensus view favors a layered defense: watermarking combined with metadata standards, detection tools and organizational protocols.

IT leaders should consider the following steps to benefit from AI watermarking's potential:

  • Audit all AI systems and outputs, classifying them by regulatory risk category.
  • Adopt C2PA Content Credentials for published media.
  • Establish internal policies requiring disclosure of AI-generated content.
  • Join industry standards bodies to influence evolving requirements.

With more regulations on the horizon that could include watermarking requirements, time is of the essence for businesses to be ready to comply.

"Businesses that move now gain learning, operational readiness and credibility," Renaud said. "In practice, that's often more valuable than waiting for a technically perfect system that may never fully arrive."

Sean Michael Kerner is an IT consultant, technology enthusiast and tinkerer. He has pulled Token Ring, configured NetWare and been known to compile his own Linux kernel. He consults with industry and media organizations on technology issues.

Next Steps

How to ensure AI transparency, explainability and trust

How to detect AI-generated content

6 steps in fact-checking AI-generated content

Generative AI ethics: 11 biggest concerns and risks
