Mark Wilkinson at OpenText explains how organisations can ensure GDPR compliance in an age of artificial intelligence
As widespread use of AI intensifies, so must the emphasis on remaining compliant with the UK’s General Data Protection Regulation (GDPR). GDPR sets rules for the collection and processing of individuals’ personal data, applying to any organisation that handles it. The regulation outlines seven clear principles, including purpose limitation, accuracy and accountability, that organisations must follow to ensure the safe and ethical processing of personal data.
Unfortunately, a report from Stanford University found that an increasing number of businesses are training their AI models on proprietary and potentially sensitive datasets in order to gain a competitive advantage. While this may be effective for curating personalised customer journeys, it rings alarm bells over data governance and protection.
The customer trust gap
A recent report by KPMG and the University of Melbourne found that only 54% of consumers globally are willing to trust businesses’ AI use. This far-reaching unease suggests that ongoing concerns around transparency are undermining customer confidence and posing wider reputational, financial and legal risks. In this context, regulations like GDPR play a crucial role in enforcing accountability around AI practices.
Though just over half of consumers are willing to trust a company’s AI adoption, a considerable 92% of businesses plan to increase their AI investments over the next three years. For those intending to leverage user-specific data, and thereby improve reliability, decision-making and understanding of business context, transparency will be key.
AI misuse implications
AI’s widespread implementation feels relatively new to consumers, making it easier for scaremongering discussions to take hold. For example, there has been frequent media discourse around the prominent role that generative AI (GenAI) plays in fraud and impersonation schemes.
These fear-driven narratives are likely an aggravating factor behind a finding in our 2024 GenAI Consumer Trends and Privacy Report: two-thirds of consumers are concerned that AI systems are collecting their personal data.
The necessity of AI
In a landscape of unease, business leaders must balance public concern with the benefits of AI’s capabilities. The technology’s popularity rests on tangible benefits for the wider business landscape: it can help employees save up to 122 hours a year and could boost the UK economy by £200 billion over the same period. Companies that avoid AI adoption risk falling behind and losing their competitive edge.
Staying compliant
How can companies lean into the growing trend of training AI models on customer data and curating personalised experiences, without risking non-compliance with GDPR? I suggest using accessible data to deliver unique value and intuitive results, and driving transparent processes that increase consumer trust.
The first step towards GDPR success is understanding the principles behind the regulation. Company-wide training sessions and open communication across the business build a knowledge foundation that can underpin a long-term technology strategy, one that considers ethical data use and prioritises transparent processes.
Take Meta, for example. To better understand and reflect users’ languages, cultures and histories, Meta has been transparent about its plans to train GenAI models on people’s interactions with its AI. Aware of the risks this poses to customer privacy, the social media behemoth has highlighted that its approach rests on GDPR’s ‘legitimate interests’ basis: improving services and the user experience. This visibility is vital, reassuring customers that their data is being used only to enhance the experience and reliability of the AI models.
To drive both compliance and innovation, companies must keep up to date with AI-focused regulations that protect user data and ensure ethical data training. For those working across the EU and UK, a key example is the European Commission’s AI Code of Practice and Training Data Summary Template, introduced in 2025.
Part of the broader AI Act framework, these measures require providers of general-purpose AI models to publish a sufficiently detailed summary of their training data, reflecting growing pressure from governments and regulatory bodies to ensure that AI systems are trained on appropriate, transparent and legally compliant data. For UK-only businesses, the EU framework can still serve as a useful guide to ethical AI use.
Building compliance
Failure to comply with GDPR can erode profits, customer trust and brand image. As today’s business landscape becomes increasingly data-centred, organisations must strike a balance between harnessing AI’s capabilities and ensuring the ethical use of customer data.
By implementing company-wide training, ensuring transparency, and keeping up to date with regulatory developments, companies can drive both consumer loyalty and innovation.
Mark Wilkinson is OpenText’s Senior Vice President of the Global Business Network division