Business Reporter

Unlocking the potential of AI

Camilla Winlo, Head of Data Privacy at Gemserv, argues that data privacy is the key to achieving real benefits from artificial intelligence

Each time you use an app to navigate your way from point A to point B, use a dictation tool to transcribe speech to text, or unlock your mobile using facial recognition, you’re relying on artificial intelligence (AI). This reliance extends to businesses too, as organisations across industries pour more investment into AI to increase efficiency, streamline processes and improve customer service.


In late 2021, Gartner predicted that worldwide AI software revenue would total $62.5 billion in 2022, an increase of 21.3% on 2021. At present, AI is predominantly used for pattern recognition and for automating repetitive tasks – both areas where it outperforms humans.

Leveraging AI for pattern recognition frees employees to focus on more interesting work, but automating tasks without sufficient testing can produce biased outcomes. The fact is that machines make mistakes too – just faster and more consistently than humans do.

One of the most well-known cases of pattern recognition error involved Amazon, which implemented a recruitment engine intended to help it screen the huge number of applications it received. Amazon’s software was programmed to compare and recognise patterns in CVs to find the candidates most likely to succeed in the interview process.


The results revealed that the tool had taught itself bias. Because the roles it was trained on had historically been filled mostly by men, the tool learned that the company was looking for a man to fill them. Amazon initially tried to correct the algorithm, but ultimately it disbanded the development team.

Leveraging data privacy for AI

The General Data Protection Regulation (GDPR) sets out rules for automated decision-making that has particularly significant implications for the individual. It gives individuals the right to request human review of such decisions, and its recitals set out further protections, such as ensuring that statistical procedures are appropriate and that the risk of errors is minimised.

Other GDPR rules, such as the requirement to establish a legal basis for every processing activity, also affect AI.

Several jurisdictions, including the EU, the US and Brazil, have chosen to introduce legislation to help govern AI. The EU has proposed an Artificial Intelligence Act that imposes conditions on high-risk AI systems. The US has set out an AI Bill of Rights outlining five principles for responsible AI. And Brazil has passed an AI Bill that sets out aims and principles for AI development in the country.

In some sectors where AI has specific uses – healthcare, for example – rules are already imposed by existing laws. The UK has chosen not to legislate specifically for AI to date, but that decision could change down the line.

Use of AI in the public sector

AI has been named one of the UK's four Grand Challenges and is supported by an AI Sector Deal worth up to £950 million. To facilitate the adoption of AI by both private and public sector organisations, the government has set up three bodies: the Centre for Data Ethics and Innovation, the AI Council and the Office for AI.

Governments typically address challenges with wide social impact, so the controlled use of AI in the public sector is particularly important. While there is potential to benefit the public on a large scale, the risk of harmful outcomes is heightened by the scale of processing involved and, in many instances, the lack of alternatives.

For many people, public sector AI-driven services will be the first AI that they encounter, and the experience will go on to influence their perception of AI more widely.


Used correctly, AI has the potential to revolutionise public services by facilitating cost effective, socially beneficial outcomes. This is of particular importance in our current economic climate, where the public sector is under pressure to cut costs.


Gaining public trust in AI

To achieve the potential social benefits that AI can bring, it’s critical that individuals feel they can trust public and private sector organisations to process their personal data securely. People need to believe that AI-driven outcomes will be fair, and that the process of interacting with the AI will be simple and effective.


There are a number of data privacy tools that AI developers can use, alongside data privacy experts, to identify the risks to potential users, address them and ensure that people feel comfortable interacting with the technology. This is a commercial imperative as well as a regulatory requirement. These risk controls should be treated as just as intrinsic a part of system design requirements as any other operational processing activity.


Camilla Winlo is Head of Data Privacy at Gemserv


Main image courtesy of iStockPhoto.com

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543