Business Reporter

Could AI be the next boardroom scandal?

Sponsored by Gowling WLG

AI is reshaping business, but weak oversight could spark major governance failures. Gowling WLG explains the risks and how leaders can stay ahead


Artificial intelligence (AI) is rapidly transforming the business landscape, promising efficiency, innovation and competitive advantage. Yet as organisations embrace these technologies, a critical question arises: could unchecked AI systems trigger the next wave of corporate governance scandals and litigation?

 

The spectrum of governance risks

 

AI’s integration into core business functions creates new governance risks. Unlike traditional IT, AI operates autonomously and often opaquely, which can expose organisations to reputational damage, regulatory action and shareholder litigation.

 

Policies alone are insufficient: businesses must actively manage the ethical and operational challenges of AI use. The UK government now expects organisations to adopt “responsible AI” principles of safety, transparency, fairness, accountability and redress.

 

As AI increasingly shapes decisions affecting employees, customers and society, leaders must look beyond compliance to ask whether their AI practices support the values they promise stakeholders and reflect on the wider societal impact of autonomous decision‑making.

 

Your new fiduciary duties

 

Directors’ legal and ethical duties now extend clearly to AI oversight. They are expected to scrutinise model risk, explainability, data provenance and accountability – failures in any of these areas can cause real‑world harm, from discriminatory outcomes to financial loss. As AI permeates functions from HR to procurement, governance mechanisms must be robust and updated far more frequently than with previous technologies.

 

If an AI system harms stakeholders, where does accountability sit: with the developer, the board or the system’s operators? And how can organisations ensure that the drive for efficiency does not undermine fairness, trust and the values they claim to uphold?

 

Personal liability for directors

 

Could directors face personal liability for algorithmic harm, similar to financial mismanagement or certain GDPR breaches? The legal landscape is evolving, but even without new legislation, directors may still be accountable if AI-driven decisions lead to reidentification, discrimination, negligence or other harms.

 

As regulators and courts begin to address AI failures, the risk of personal exposure is both real and increasing. Some traditional directors' and officers' (D&O) insurance policies are now introducing exclusions for AI-related risks, leaving directors with significant potential vulnerability.

 

Multidisciplinary governance

 

Effective AI governance cannot be siloed and, unlike other technologies, cannot be left to the IT department alone. It must span IT, cyber-security, HR, data collection and use, PR and marketing, legal, procurement and operations, with clear ownership and accountability.

 

This multidisciplinary approach is challenging, requiring businesses to assemble teams with diverse skills and ensure continuous learning. The pace of AI development means yesterday’s mitigations may be obsolete tomorrow, making cross-functional collaboration essential.

 

Transparency, trust and expectations

 

Transparency is a moving target in the world of AI. What constitutes appropriate disclosure varies across countries, cultures and demographics. Businesses must calibrate their approach to labelling AI-generated content and explaining AI-driven decisions to meet evolving stakeholder expectations.

 

For multinational organisations, this often means adhering to the most stringent regulatory standards – at present, typically those set by the EU – to ensure compliance and maintain trust across jurisdictions. Cultural differences, however, will be harder to identify and harmonise.

 

Lessons from litigation

 

The risk of IP infringement, both in training data and AI outputs, is real and increasing. Recent litigation such as Getty Images v Stability AI highlights the dangers of using protected content without proper licences, while ongoing UK consultations underscore the complexity of balancing innovation with rights protection.

 

Companies developing AI must also safeguard their own R&D by securing appropriate IP protection across relevant jurisdictions. Directors therefore need clear policies and controls to protect their organisation’s rights and avoid costly, cross‑border disputes, especially given the global nature of both AI deployment and IP law.

 

Reputation management: the cost of cutting corners

 

AI offers opportunities to reduce costs, but not without reputational risks. For example, retailers using AI-generated images of models may save money, but can face backlash from customers (as well as unions and guilds representing actors) who perceive this as inauthentic or unethical.

 

Similarly, the use of deepfakes or AI-generated actors in marketing and advertising can provoke public outcry and damage brand trust. Brands must weigh the immediate cost-saving benefits of AI against potential long-term reputational harm.

 

Upskilling for the AI era

 

Effective AI governance requires new skills at both board and operational levels. Directors must develop enough AI literacy to challenge assumptions, understand risks and make informed decisions.

 

Meanwhile, organisations must cultivate teams with diverse technical, ethical, legal and operational expertise. Shared terminology and a culture of continuous learning are essential; without them, misunderstandings can lead to inconsistent decision-making and increased exposure.

 

Avoiding history’s mistakes

 

History shows that systemic scandals often stem from failures in risk management – whether financial crises, data breaches or product recalls. AI poses similar risks: it is complex, fast‑moving and difficult to govern. The Dutch child benefits scandal, where an AI tool wrongly identified fraud and triggered severe consequences for thousands of families, demonstrates how serious the harm can be.

 

Professional reputations are also at stake, as seen when a major accountancy firm refunded fees to the Australian government after AI‑generated errors appeared in an official report. Businesses must therefore stress‑test AI systems for worst‑case scenarios and learn from past failures.

 

Key takeaways for business leaders

 

  • AI oversight is a boardroom imperative: risks are significant and rapidly evolving
  • Legal and ethical duties now include AI: directors must be proactive and vigilant
  • ESG concerns are real: what is the company’s response to the environmental impact of AI?
  • Multidisciplinary governance is essential: collaboration across functions is key
  • Transparency and compliance matter: adhere to the highest regulatory standards
  • IP and reputational risks are real: safeguards and policies must be robust
  • Continuous learning is vital: upskill and adapt to stay ahead
  • Protecting investment in AI innovation is key: IP policies must be reviewed to ensure they are fit for purpose

 

Proactive governance for a safer future

 

The message for business leaders is clear: proactive AI governance is essential. Implement robust frameworks, conduct regular risk assessments and seek legal consultation to mitigate exposure and protect investment. Do not wait for regulation to catch up – the time to act is now.

 

As AI reshapes the boardroom, the greatest risk may be failing to ask the right questions. The future of corporate governance will be defined not just by compliance, but by the courage to lead with insight, integrity and imagination.


For tailored advice and support on governance and risk management, contact the AI team at Gowling WLG


Alexandra Brodie, Jocelyn Paulley and Patrick Arben, Partners, and Dan Smith, Legal Director, Gowling WLG

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd.