Business Reporter

The risk of waiting for AI regulation

Don’t wait for AI regulation to catch up, advises Martin Davies at Drata; build governance now

As we head towards 2026, many organisations find themselves caught between regulatory delays and technological reality. The EU AI Act was supposed to provide clear guardrails for artificial intelligence deployment, but key guidance documents remain incomplete. AI systems aren’t waiting – they are already making decisions, processing data and embedding themselves into daily operations at pace. 

 

For many, this regulatory uncertainty feels like permission to wait. After all, if the rule makers haven’t figured out the rules yet, why invest in compliance frameworks that might be rendered obsolete?

 

However, the gap between AI’s evolution and regulatory frameworks isn’t a reason to delay action. In fact, it is precisely why organisations need to move faster. AI doesn’t pause for legislation. The algorithms making hiring decisions, processing customer data or executing commands in your systems today carry the same risks irrespective of whether the EU has published a code of practice. 

 

Recent proposals from the European Commission to simplify GDPR and AI regulations have muddied the waters further. While the intention may be to support innovation, the result is increased ambiguity about what responsible AI use looks like. Organisations waiting for clarity from regulators are betting their reputation on a timeline they cannot control.

 

 

Building governance ahead of the rulebook

The delayed EU AI Act Code of Practice was supposed to help organisations understand which AI systems pose high risks and which don’t. Without it, companies lack a clear scorecard for assessment, but this doesn’t mean they’re powerless.

 

You can start by mapping where AI exists in your organisation. Look beyond the obvious systems such as chatbots and recommendation engines to the embedded intelligence in procurement tools, HR platforms and operational software as well. Many organisations don’t know the full extent of AI in their technology stack. You can’t govern what you can’t see.

 

Next, assess autonomy and impact. The most critical question is what decisions the AI is making independently. An AI that helps workers draft emails poses vastly different risks from one that executes database commands or screens job applicants. Focus your governance efforts on systems that can make consequential decisions without human oversight.

 

Create internal frameworks even when external standards lag. Define what high risk means for your organisation based on the potential impact to customers, data sensitivity and operational criticality. Establish clear policies on AI use within your company, including what employees can and can’t input into AI tools – especially regarding commercially sensitive or customer data.

 

Many organisations already use tools such as Gemini or ChatGPT; that use needs to happen within proper guardrails.

 

Document the training data your AI systems use and understand their limitations. Establish human review processes for automated decisions that affect individuals. If your AI can make hiring choices, credit determinations or operational commands, ensure humans review the outcomes and understand how decisions were reached. Transparency matters even when the underlying technology is complex.

 

Finally, document everything. When regulators do provide frameworks, you’ll need evidence of your governance approach. Audit trails showing what controls you implemented, when and why will be invaluable. They also protect you if something goes wrong – being able to demonstrate reasonable care matters for liability.

 

Far from an exercise in predicting what regulators will eventually require, this is about understanding the risks your organisation faces now. Financial services companies have decades of experience here – they don’t wait for perfect regulatory clarity before implementing risk controls. The same muscle memory needs to develop across every sector touching AI.

 

 

The trust imperative in an uncertain regulatory landscape

Indeed, governance is a competitive signal in a crowded market. 

 

Consider two software providers offering similar solutions. One can demonstrate robust AI governance, clear policies on data use and regular audits of automated decision-making. The other shrugs and says they’re waiting to see what regulators decide. Which one wins the enterprise contract? The question answers itself.

 

This matters for organisations in heavily regulated sectors or those serving as critical components in someone else’s supply chain. Under frameworks such as DORA, financial services firms are responsible for the security posture of their technology providers. If you’re that provider, your customer’s compliance obligations become your sales requirement. Dismissing AI governance as bureaucratic overhead means watching deals go to competitors who took it seriously.

 

Trust is also remarkably fragile. The European Commission’s recent proposals, which would allow companies to use personal data for AI training without prior consent, have sparked fierce backlash from privacy advocates. Public sensitivity around AI and data rights is intensifying – one poorly governed AI system that produces discriminatory outcomes or mishandles customer data can undo years of brand building overnight.

 

Organisations that demonstrate responsibility through sustained compliance and security commitments build resilience. When regulatory frameworks do solidify (and they will), companies with mature governance programmes will adapt quickly.

 

 

Preparing for the inevitable  

The EU AI Act may be delayed, but it’s not going away. Cross-border AI companies will face compliance requirements regardless of how individual member states implement the framework.

 

UK organisations may wonder how this affects them, but any business serving European customers or markets will need to demonstrate alignment. We saw the same pattern with GDPR and NIS2 – early adoption creates advantages, whilst reactive scrambling creates costs.

 

The European Commission’s deregulation proposals have been criticised for happening without proper procedures or stakeholder consultation. Whether these changes proceed or not, businesses cannot take them as a signal to delay or reduce efforts. AI innovation shouldn’t come at the expense of individual privacy and security.

 

The steps outlined earlier – mapping AI systems, assessing risk, establishing policies and documenting controls – will help to maintain continuity of trust with customers. In a market where customers increasingly care about how their data is used and how decisions affecting them are made, governance becomes a key differentiator. 

 


 

Martin Davies is Senior Audit Alliance Manager at Drata

 

Main image courtesy of iStockPhoto.com and WANAN YOSSINGKUM


Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543