When AI-generated content looks authentic but can’t be verified, people hesitate and trust erodes. Eoin Shanley at DigiCert explains how organisations should be protecting themselves

Earlier this year, the central bank of Italy issued a warning after an AI-generated deepfake of its governor, Fabio Panetta, appeared on TV shows and media platforms endorsing dubious investment products. While the bank proactively alerted the public to the scams, confidence in the authenticity of its legitimate digital content was shaken.
To many casual observers scrolling social feeds, AI-generated content now appears legitimate enough to share, debate or question. Internet users have roughly a 50-50 chance of telling “real” from “fake”, and this blurred line is eroding consumer trust in businesses.
Major global events, especially those tied to health, policy or corporate leadership, have always attracted rumours and misinformation. But in an era where images, video and fabricated narratives can be created in seconds and distributed globally, the scale and speed of that distortion have changed. What once required coordinated campaigns can now be done by individuals with a laptop.
The result isn’t just false information. It’s doubt over whether anything can be believed at all.
The new problem isn’t fakery, but ambiguity
We tend to frame the modern information crisis as a battle between “real” and “fake”. But that framing misses the deeper issue: ambiguity.
When content looks authentic but can’t be verified, people hesitate. When multiple versions of a story circulate with equal visual credibility, trust erodes. Over time, this creates an environment where bad actors don’t even need to persuade, only to confuse.
This ambiguity can become fertile ground for manipulated narratives, impersonation and misleading claims that can damage even the strongest brand identity.
AI content must be instantly verifiable
AI-generated images, video and audio are now good enough to pass casual scrutiny. More importantly, they are cheap, fast and endlessly remixable. This lowers the barrier not just for sophisticated attackers, but for everyday opportunists chasing attention, clicks or profit.
This misinformation can affect nearly every aspect of a business, from consumers deciding whether a promotion is legitimate to investors evaluating whether announcements are real. In this environment, asking people to be more sceptical is not a solution: scepticism without verification simply leads to disengagement. Instead, people need instant, verifiable proof of who created a piece of content, whether it has been altered and where it has come from.
This is where the emerging concept of content trust becomes essential. Rather than trying to identify what’s fake after the fact, organisations must make authenticity visible at the point of consumption.
Unlike platform-based verification, these content credentials must be attached to the file itself. Whether the content is reposted, screenshotted or shared on platforms outside the original publisher’s control, it must remain cryptographically signed and verifiable, providing tamper-evident provenance and transparency.
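To make the principle concrete, here is a minimal sketch of file-level signing and verification using an Ed25519 key pair and Python’s widely used cryptography library. Production content-credential standards such as C2PA embed a far richer provenance manifest in the file; the file contents and key handling below are illustrative assumptions, not a description of any vendor’s workflow.

```python
# Minimal sketch: a detached signature over a media file's exact bytes,
# so any alteration is detectable. Assumes the "cryptography" package;
# the placeholder content and key management here are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Sign the exact bytes of the content at publication time."""
    return private_key.sign(content)


def verify_content(public_key: Ed25519PublicKey, content: bytes,
                   signature: bytes) -> bool:
    """Return True only if the bytes match what the publisher signed."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        # A single changed bit, a crop or a re-encode fails verification.
        return False


# Usage: the signature travels with the file, so verification still works
# after reposting or re-hosting, with no dependence on any platform.
publisher_key = Ed25519PrivateKey.generate()
media = b"...bytes of an announcement video..."  # placeholder content
signature = sign_content(publisher_key, media)

assert verify_content(publisher_key.public_key(), media, signature)
assert not verify_content(publisher_key.public_key(),
                          media + b"tampered", signature)
```

Because the signature is bound to the bytes rather than to a hosting platform, verification survives redistribution, which is exactly the property platform-based badges lack.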
Trust should be built in, not bolted on
For decades, the internet has relied on invisible trust mechanisms, such as certificates, encryption and identity verification, which quietly protect users without requiring them to understand the underlying technology. Digital content deserves the same treatment.
Retrospectively verifying whether a social media post is real, once the internet has already debated its legitimacy, is not a viable way forward for businesses. Instead, cryptographic assurances must be embedded directly into content, allowing everyday users to distinguish verified sources from unverified claims without the question ever becoming a debate.
Cryptographic protection and verification must also apply to a business’s AI systems. Every day, AI agents are being embedded into operational functions. Securing consumer trust in digital content is important, but maintaining trust internally is equally so. A rogue AI has the potential to disrupt every aspect of internal processes, undermining not only consumer confidence if data is mishandled but also investor trust. As AI takes on a growing role as a digital workforce, every action must be clearly attributable and controlled. This demands a verifiable chain of custody for models, to ensure they have not been tampered with, are running in trusted environments and are handling sensitive data securely.
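The same signing primitive extends naturally to model artefacts: a deployment can refuse to load any model whose digest was not signed by a trusted release key. The sketch below, in the same vein as the earlier example, is a hedged illustration; the function names and placeholder artefact are assumptions, not a specific product’s API.

```python
# Minimal sketch of a chain-of-custody check before loading a model.
# Assumes the publisher ships a signature over the artefact's SHA-256
# digest; the placeholder bytes and names here are hypothetical.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def verify_model_artifact(model_bytes: bytes, digest_signature: bytes,
                          publisher_key: Ed25519PublicKey) -> bool:
    """Load only models whose digest the publisher actually signed."""
    digest = hashlib.sha256(model_bytes).digest()
    try:
        publisher_key.verify(digest_signature, digest)
        return True
    except InvalidSignature:
        return False


# Usage: sign once at release, verify on every load.
release_key = Ed25519PrivateKey.generate()
model_bytes = b"...serialised model weights..."  # placeholder artefact
digest_signature = release_key.sign(hashlib.sha256(model_bytes).digest())

assert verify_model_artifact(model_bytes, digest_signature,
                             release_key.public_key())
assert not verify_model_artifact(model_bytes + b"x", digest_signature,
                                 release_key.public_key())
```

Signing the digest rather than the raw weights keeps the check cheap at load time while still making any tampering with the artefact detectable.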
Trust established, not rebuilt
The internet has outgrown its ability to signal trust clearly. When everything can look real, authenticity must be provable; in an age of AI-generated everything, trust itself becomes the most valuable content of all.
Businesses are battling ever-present impersonation and misinformation. Instead of operating on the defensive, scrambling to re-establish trust after a malicious actor has already exploited it, organisations must be able to confirm the authenticity of digital content from the day it is created, without an expiry date.
Provenance only works if trust is anchored somewhere reliable. The cryptographic principles that secure websites, software and digital identities must now apply to the content we consume every day.
Eoin Shanley is Director of Product Management at DigiCert
