
Scott Bridgen at Diligent outlines a strategic approach to managing AI practices
If you were a fly on the wall in boardrooms around the globe, you would likely hear the question: “How can we effectively harness the potential of AI without compromising our governance, data privacy and ethical standards?”
As organisations navigate this complex AI landscape, they must balance the vast opportunities presented by AI against the significant risks it entails. This is especially true of governance and compliance risks as businesses look to integrate AI throughout their operations, and it demands a nuanced approach.
Deloitte research revealed that around 7 million people in the UK have already used generative AI at work, underscoring the urgency of establishing AI policies. The integration of AI with governance, risk and compliance (GRC) processes has become a business-critical issue that demands strategic oversight and a carefully planned approach.
While the pressure is on for leadership teams to start maximising the benefits of AI, it is critical to establish appropriate resources and structures from the outset.
Balancing the benefits and risks of AI
AI is reshaping business processes across industries by streamlining operations, boosting productivity and driving innovation, and it is becoming increasingly integral to business operations. In fact, 67% of business leaders are increasing their investment in AI. Alongside these significant benefits, however, AI also introduces risks of data misuse, algorithmic bias and potential legal violations if left unchecked.
A recent roundtable report highlighted key concerns about AI adoption among UK directors, including data privacy issues, a lack of internal capabilities and unpredictable AI hallucinations. To mitigate these risks, organisations need a robust governance framework to guide AI adoption effectively, with human oversight as a central element throughout.
Human oversight underpins AI decision-making
Effective AI governance requires a delicate balance between driving technological innovation and upholding ethical responsibility. To achieve this balance, leadership teams must take a proactive role in setting clear policies, ensuring transparency and maintaining control over AI-driven decisions.
One of the primary challenges organisations face is ensuring AI systems are not only efficient but also accurate and unbiased. This is particularly critical for audit and compliance teams, whose work depends on consistent, verifiable outputs. If AI models produce inaccurate results or reflect biased assumptions, they can undermine the integrity of audits, trigger regulatory breaches or distort risk assessments.
The integration of AI into these departments is often complex for two reasons: existing controls and verification methods were not designed for probabilistic outputs, and detecting bias in training data requires specialist expertise. Furthermore, legal and risk teams may struggle to predict AI’s long-term regulatory implications.
These challenges underline the need for human oversight at every stage, ensuring that AI complements human expertise and does not replace critical judgment.
Addressing the privacy concern in the room
In highly sensitive industries where confidentiality is crucial, such as legal and finance, data privacy and security pose major concerns. Potential data breaches in these sectors could lead to substantial reputational damage, regulatory penalties and considerable financial losses. This underscores the importance of working with trusted AI providers that adhere to stringent data privacy and security standards.
According to research, AI-related privacy and security incidents surged by more than half (56.4%) in 2024 compared with the previous year, with 233 reported cases globally. In light of these threats, businesses must insist on full transparency from their AI providers. Clear documentation outlining how their systems handle and protect stored data, particularly concerning the use of proprietary data for training AI models, should be a baseline requirement.
As a first step, organisations should adopt reputable AI tools that streamline GRC analytics and reduce the workload for compliance teams.
Education starts at the top
For many organisations, particularly small and medium enterprises, a predominant concern is the lack of in-house AI expertise. Deloitte research revealed that only 18% of European respondents are ‘highly’ or ‘very highly’ prepared in the areas of generative AI risk and governance. Long-term success in AI adoption requires boards and leaders to be armed with the information they need to effectively assess AI’s implications, limitations and opportunities.
As such, comprehensive training initiatives that educate employees on AI in governance, risk and compliance must be a cornerstone of any company’s strategy for managing the adoption of the technology. Regulatory changes, such as future updates to the EU AI Act, must also remain top of mind, given the shifting implications of non-compliance.
To address these challenges, all teams must be educated on the capabilities and limitations of AI. Since AI systems can inadvertently introduce bias or perpetuate inaccuracies, human intervention in the process of validation and decision-making remains essential.
This is especially important in high-stakes industries such as finance, healthcare and legal services. A useful rule of thumb for businesses is to ensure a human is always in the loop wherever AI is involved.
Navigating the ethical tightrope
As AI systems continue to evolve and grow in capability, the accompanying ethical issues also become increasingly complex. While directors should embrace AI, they must also take responsibility for applying it within ethical frameworks.
One fundamental aspect of an ethical AI framework is ensuring that the technology is used responsibly and aligns with the values of the organisation. It is essential to establish clear guidelines and effective monitoring, as well as robust governance structures.
Ethical pitfalls can arise quickly if AI tools are used without oversight. For example, relying on generative AI to draft legal or HR documents may produce biased or inaccurate content that, if not carefully reviewed, could cause serious damage to a company’s reputation or financial performance. Similarly, inputting confidential client or employee data into public AI systems creates serious risks around privacy and trust, and could even breach the EU’s General Data Protection Regulation.
As AI becomes further ingrained in business operations, how organisations handle its ethical implications will come to define the success or failure of their AI initiatives in the years ahead.
Cyber-security policies to safeguard innovation
As with any new technology, implementing AI comes with security risks, such as data exposure when using public AI models, and these risks are exacerbated by a lack of awareness and expertise among leadership. To mitigate them, businesses should roll out an overarching AI policy that prioritises cyber-security and is complemented by regular, mandatory staff training on AI usage.
Effective AI implementation requires a delicate balance between governance, data privacy frameworks and ethical oversight, and the associated risks must be confronted proactively and head-on.
By fostering a culture of informed decision-making, and ensuring strong human oversight and supervision, organisations can responsibly integrate AI into governance processes and ensure the technology delivers maximum value, ultimately leading to long-term business growth.
Scott Bridgen is GM of Risk and Audit at Diligent
