AI risk is real

Andrew Bayers at Resilience outlines the issues surrounding artificial intelligence and the safeguarding of data

AI is no longer a novelty. It is now deeply embedded in how businesses operate, from generating client communications and analysing data to supporting decision-making. With this comes a pressing question for senior leaders: what happens to the sensitive information that employees feed into AI tools, and what exposures might arise when that data is no longer within the company’s control?

Understanding AI-driven risks

When using a large language model (LLM), such as ChatGPT, the risk of data exposure increases due to both human behaviour and the inherent design of generative AI systems. Employees may inadvertently input proprietary financial data, intellectual property or customer records into public-facing AI platforms. Depending on the platform’s configuration, this information may be stored, logged, or even incorporated into future model training datasets. In rare cases, such data could reappear in responses to unrelated users if not properly sanitised, creating a chain of privacy and compliance risks.

In 2023, Samsung employees sparked widespread concern when they entered source code, meeting transcripts, and other proprietary information into ChatGPT, highlighting how easily corporate information can be exposed. While the Samsung incident did not involve personally identifiable information (PII), had it contained such data, the company might have faced regulatory penalties, class-action lawsuits, and reputational harm.

Emerging legal guidance suggests that companies remain responsible for any data employees put into LLMs, as organisations are typically deemed the “data controller” under privacy frameworks such as GDPR, or the “covered entity” under regulations such as HIPAA. From a regulatory perspective, an employee’s disclosure of client PII to an LLM is treated much the same as a breach caused by an unsecured server or a successful phishing attack.

Beyond user behaviour, vulnerabilities within LLM systems themselves can also result in data exposure. A 2023 software flaw in ChatGPT revealed portions of users’ conversation histories. While the flaw was quickly remediated, the incident underscores how even minor technical defects can lead to unintended disclosure within these complex and rapidly evolving platforms. This is a new frontier for enterprise risk management, sitting at the intersection of employee use, AI system design, and evolving privacy law, and it requires clear governance, employee awareness and training, and vendor due diligence.

AI also introduces a broader range of data-related risks with equally serious consequences. Inadvertent data sharing between interconnected AI platforms and third-party tools can expose confidential business information or customer records without authorisation. Automated data-processing errors may distort analysis, introduce unintended bias, undermine financial accuracy, or trigger regulatory non-compliance. These risks underscore that AI-related exposures extend beyond data leaks, encompassing the integrity, transfer, and governance of information across increasingly complex digital ecosystems.

Leadership, governance and strategic oversight

For executives, these risks demand more than technical attention. Boards require clear visibility into how employees are using AI in day-to-day operations, along with assurance that controls exist to monitor and limit what data enters AI platforms. These are business continuity and governance issues that require board-level oversight.
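
As a purely illustrative sketch, and not a description of Resilience’s services or any particular vendor’s tooling, the Python below shows one way such a control might work in practice: a simple gateway that screens an employee’s prompt for common PII patterns and redacts them before anything is sent to an external LLM. The function names (redact_pii, submit_to_llm) and the patterns themselves are hypothetical and deliberately simplified; production deployments would typically rely on dedicated data-loss-prevention tooling with far more robust detection.

```python
import re

# Deliberately simplified, illustrative patterns for common PII.
# Real data-loss-prevention controls use far more robust detection.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK_NI_NUMBER": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b", re.I),
}


def redact_pii(prompt: str) -> tuple[str, list[str]]:
    """Replace likely PII with placeholders and report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt, findings


def submit_to_llm(prompt: str) -> None:
    """Hypothetical gateway: screen a prompt before it leaves the organisation."""
    safe_prompt, findings = redact_pii(prompt)
    if findings:
        # In practice this event would also be logged for governance reporting.
        print(f"Redacted before submission: {', '.join(findings)}")
    # ...forward safe_prompt to the approved LLM provider here...
    print("Submitting:", safe_prompt)


submit_to_llm(
    "Summarise the complaint from jane.doe@example.com about card 4111 1111 1111 1111."
)
```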

AI governance should be embedded into enterprise risk frameworks, covering acceptable use policies, vendor management, data-retention practices, and training. Establishing cross-functional oversight that unites legal, IT, and risk teams helps align AI deployment with the company’s broader risk appetite and regulatory obligations.

Insurance as a proactive partner

The role of insurance is evolving to address the unique exposures of AI adoption. Traditional cyber policies already cover data exposure, operational errors, and malicious activity, but protecting against these perils in AI systems adds new complexity to building corporate resilience.

At Resilience, the focus is on building stronger organisations through a combination of financial protection, active risk management, and access to expert resources. Clients benefit not only from financial coverage but also from proactive support to identify weaknesses, enhance controls, and strengthen resilience before an incident occurs. And when incidents do happen, as they inevitably will, insurance provides more than indemnification. A strong partner delivers forensic specialists, crisis communicators, breach coaches, and recovery experts to contain damage and accelerate recovery. For leadership teams under regulatory and reputational pressure, that coordinated expertise can mean the difference between a controlled recovery and a prolonged crisis.

Turning AI risk into readiness

The strategic opportunity of AI remains immense, but success depends on balance. Companies must pair productivity gains with sound governance, robust vendor management, comprehensive employee training, and oversight of employee use of AI to prevent inadvertent data exposure. Insurance should be viewed not merely as a safety net but as an active enabler of resilience.

For the C-suite, the call to action is clear: review how your organisation uses AI, assess how data is managed, and identify where new exposures are emerging. Confirm that your cyber cover addresses AI-specific risks, from vendor outages to liability for inadvertent data sharing. Work with an insurer that serves as a strategic partner, not just a payer, and use their expertise to strengthen controls and operational resilience.

Companies that take this proactive approach will safeguard their operations and gain a competitive advantage by adopting AI with greater speed, confidence and trust. 

Andrew Bayers is Director of Threat Intelligence at Resilience

Main image courtesy of iStockPhoto.com and IvelinRadkov
