
The pros and cons of regulating AI

Jeremy Swinfen Green explores the extent to which the EU’s AI Act is likely to promote or hinder the responsible use of artificial intelligence applications.


Artificial intelligence (AI) isn’t new. The concept of robots has been around for over 100 years, while machine learning computer programs emerged in the early 1950s and the first driverless car was demonstrated back in 1986.

 

But AI only captured the public’s imagination in late 2022, when a demo version of OpenAI’s ChatGPT tool was released. Since then, policymakers around the world have been playing catch up, trying to understand how, indeed whether, this new technology should be regulated.

 

One of the first sets of regulations designed to control AI was proposed by the European Commission in April 2021. China was also early into the space, formulating provisions governing recommendation algorithms that year and passing laws specifically regulating generative AI (as opposed to all AI) in 2023.

 

Few other legislatures have passed laws specifically designed to regulate AI. Most countries, including the USA at the federal level, Japan and the UK, are taking more of a wait-and-see approach, proposing principles that underpin good AI practice but relying on existing laws, such as privacy requirements, to manage the way the market evolves.

 

The EU, hoping to be a global leader in AI regulation, appears to be something of an outlier. But is its Act the right way forward?

 

The EU AI Act

 

The EU’s Artificial Intelligence Act was passed on 13 March 2024. The Act aims to promote the responsible use of AI within the European Union and introduces a comprehensive regulatory framework to govern the development and use of artificial intelligence systems.

 

Promoting responsible business

 

It can be argued that the EU’s AI Act represents a significant step toward promoting the responsible use of AI, requiring organisations to protect fundamental rights, including safety and privacy, when they use AI technologies.

 

To achieve this, the act emphasises the importance of transparency in AI systems. Businesses must provide clear and understandable information about whether AI is being used to formulate decisions or content, and must explain the limitations of their AI systems as well as how they were developed and trained. For example, a company using AI for loan approvals must disclose the parameters that drive the decisions. The intention is that this will help people understand how decisions are made, and thus allow them to hold organisations accountable for any biases or errors.
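To make the idea concrete, here is a minimal sketch of the kind of disclosure this points towards, assuming a lender scoring applicants with a simple logistic-regression model. The feature names, toy data and scikit-learn model are illustrative assumptions, not anything the act specifies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income_k", "debt_ratio", "years_employed"]  # hypothetical inputs

# Toy training data standing in for a real loan book (income in £000s).
X = np.array([[55, 0.2, 8], [32, 0.6, 1], [48, 0.3, 5], [25, 0.7, 0.5]])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = declined

model = LogisticRegression().fit(X, y)

def explain_decision(applicant):
    """Report the decision and each feature's contribution to its score."""
    decision = "approved" if model.predict([applicant])[0] == 1 else "declined"
    print(f"Loan {decision}. Contributions to the decision score:")
    # For a linear model, coefficient * value is a per-feature contribution.
    for name, coef, value in zip(FEATURES, model.coef_[0], applicant):
        print(f"  {name}: {coef * value:+.3f}")

explain_decision([40, 0.4, 3])
```

Even this toy case hints at the difficulty raised below: per-feature contributions are easy to report for a linear model, and far harder for a deep network.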

 

While transparency may seem a simple and beneficial requirement, many AI systems that use deep learning will in fact be difficult, if not impossible, to make truly transparent. Should they therefore be banned under the EU’s act?

 

The AI Act also places a strong emphasis on data quality. The data used to train and deploy AI systems must be accurate and representative, the intention being to prevent unfair decisions. For example, an organisation using AI for recruitment must ensure that the data used to train the AI model is free from biases. In practice, this may be impossible to achieve, as almost all databases are partial in some way. Privacy is also an important element of the act, with a requirement that training data be collected and used in compliance with the GDPR. This doubling-up approach to legislation contrasts with the UK’s approach, which relies on existing laws, like the UK GDPR, to ensure privacy.
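One way to picture what checking for representativeness might involve: the sketch below applies the conventional “four-fifths” disparate-impact heuristic to hypothetical recruitment training data before any model is fitted. The data, the groups and the 0.8 threshold are illustrative assumptions, not requirements drawn from the act.

```python
from collections import defaultdict

# Each record: (applicant group, 1 if historically hired else 0).
training_data = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
                 ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

totals, hires = defaultdict(int), defaultdict(int)
for group, hired in training_data:
    totals[group] += 1
    hires[group] += hired

rates = {g: hires[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(f"Hire rates: {rates}; disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:  # conventional four-fifths heuristic
    print("Warning: training data shows a skew the model is likely to learn.")
```

A check like this can flag an obvious skew, but it cannot prove the absence of bias: a dataset can pass simple ratio tests and still be unrepresentative in subtler ways, which is exactly the difficulty noted above.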

 

Human oversight and control is also an important principle. Organisations are required to ensure that humans are ultimately responsible for any decisions made by AI systems. For example, a company deploying an AI-driven medical diagnosis system must ensure that healthcare professionals make the final diagnosis, with the AI system used as a decision-support tool rather than a substitute for human judgement.
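A rough sketch of that human-in-the-loop pattern, assuming a hypothetical diagnostic model behind an ai_diagnose() placeholder: the system surfaces the model’s suggestion but refuses to record an outcome until a named clinician enters one.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    label: str        # the model's proposed diagnosis
    confidence: float

def ai_diagnose(scan_id: str) -> Suggestion:
    # Placeholder for a real model call; hard-coded for illustration.
    return Suggestion(label="benign", confidence=0.87)

def final_diagnosis(scan_id: str, clinician: str) -> str:
    suggestion = ai_diagnose(scan_id)
    print(f"AI suggests '{suggestion.label}' ({suggestion.confidence:.0%} confidence)")
    decision = ""
    while not decision:  # no outcome is recorded without a human decision
        decision = input(f"{clinician}, enter the final diagnosis: ").strip()
    return decision

print(final_diagnosis("scan-001", "Dr Patel"))
```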

 

This seems sensible at first glance. However, as AI is baked further into organisational systems, including the IoT systems common in critical national infrastructure, the requirement for human oversight may become burdensome and even damaging, especially where humans are worse at making decisions than machines.

 

These principles are all excellent, although, as indicated, they may be flawed if relied on too stringently. There are also some important principles that the act seems less focused on. Accountability, for example, gets only a brief mention (and that mention muddles accountability and responsibility, which are very different things). Contestability and redress (the ability to do something about an unfair decision) are also barely mentioned, despite being a big part of the EU’s successful GDPR.

 

Risk-based approach

 

A problematic feature of the AI Act is its risk-based approach. The act distinguishes between different levels of risk associated with AI applications and imposes stricter requirements on high-risk AI systems. For example, AI systems used in critical infrastructure, such as transportation and healthcare, are subject to more rigorous oversight compared to those used in less critical areas such as, say, marketing or sport. This approach is designed to ensure that the regulatory burden is proportionate to the potential harm posed by an AI system.
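In compliance terms, the tiered logic works something like the lookup sketched below: each use case maps to a risk tier, and the tier determines the obligations. The tier assignments loosely paraphrase the act’s broad categories and are illustrative only, not legal advice.

```python
# Hypothetical mapping of use cases to the act's broad risk tiers.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "critical_infrastructure": "high",
    "recruitment": "high",
    "chatbot": "limited",      # transparency duties only
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "prohibited": ["do not deploy"],
    "high": ["risk assessment", "human oversight", "conformity assessment"],
    "limited": ["disclose AI use to users"],
    "minimal": [],
}

def obligations_for(use_case: str) -> list[str]:
    tier = RISK_TIERS.get(use_case, "unclassified")
    return OBLIGATIONS.get(tier, ["classify before deployment"])

print(obligations_for("recruitment"))
# ['risk assessment', 'human oversight', 'conformity assessment']
```

The weakness explored below is visible even in miniature: the table is fixed in advance, so a genuinely dangerous use case that nobody thought to list simply falls through to the default.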

 

Again, this seems sensible at first glance. But will this rather rigid approach to defining what is high risk stand up to the potential evolution of AI? The principle of ensuring public safety (and other “fundamental rights”) is already built into the act. Defining high-risk applications can only confuse this.

 

And do low-risk applications have a lower need to protect fundamental rights? Of course not. It’s perfectly possible that certain AI-powered sports, for example, could be developed that are extremely risky to participants. But these, as the current regulations stand, would not be deemed high risk. Trying to second-guess where risk will lie is, in itself, risky.

 

The act goes further, banning the use of AI in certain areas that the EU feels pose significant risks to individuals’ rights and safety. For example, it prohibits the use of AI for social scoring or emotion tracking at work.

 

By prohibiting these practices, the AI Act is attempting to prevent potential abuses of the technology that could result in undue surveillance and discrimination. It is very hard to know at this stage whether beneficial activities will be restricted as an unintended consequence, or whether other, harmful, activities will escape a net that was drawn up before those activities were ever imagined.

 

The downsides of regulation

 

Critics of the EU’s AI Act argue that it won’t be effective in promoting responsible use of AI by businesses, for several reasons. First of all, the definition of AI systems used in the act could be a problem: “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”.

 

That sounds impressive and it is certainly complicated. But if we cut this definition down to its essentials, we might get something like: “a machine with a degree of autonomy that infers, from the input it receives, how to generate outputs”. Leaving aside the fact that not all AI systems are autonomous and the assumption that the objective of an AI system is to “infer how to generate outputs” (which seems odd), this is a very mechanistic description with no mention of (or relevance to) the essence of AI – which is quasi-intelligence, or the ability to perform tasks that would otherwise be performed by a human.

 

But there are other problems. Overregulation may well stifle innovation and create barriers to entry, particularly for smaller businesses, limiting their ability to compete with larger companies. There is also considerable ambiguity and complexity, making it hard for businesses to understand and comply with the act’s requirements. Businesses may also be worried about the costs of compliance, including conducting risk assessments, implementing technical standards and undergoing conformity assessments.

 

More significantly (because regulation always comes at a cost), it can be argued that the act’s one-size-fits-all approach does not take into account the diverse, and indeed currently unknown, range of AI applications and the contexts in which they are, or may be, used.

 

Rather than regulate, the UK government has taken a more flexible approach, expressing some basic principles that responsible organisations should follow when implementing AI, including: contestability and redress; accountability and governance; safety and security; transparency and explainability; and fairness.

 

Some of these principles are already covered by various laws and regulations. Fairness, for example, is at least partly covered by the Equality Act 2010, while accountability is built into accepted corporate governance principles. Others will become better understood as our knowledge of AI, of how it works and where the risks lie, grows.

 

Transparency and explainability, for example, is a highly complex area. The goal is clear (the ability to know how decisions are arrived at), but it can be met at very different levels of detail: was AI used at all (easy); what parameters were used (harder); how were the different parameters weighed against one another (much harder, and a moving target). At present, agreeing what counts as adequate explainability and what doesn’t is open to enormous debate.
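The escalation is easier to see in miniature. The sketch below answers each of the three questions in turn for a hypothetical linear scoring model; the model, feature names and weights are invented for illustration.

```python
# A hypothetical linear scoring model, reduced to the facts needed to
# answer the three levels of explainability question.
MODEL = {
    "uses_ai": True,                                           # level 1: easy
    "parameters": ["income", "debt_ratio", "account_age"],     # level 2: harder
    "weights": {"income": 0.7, "debt_ratio": -1.2, "account_age": 0.4},  # level 3
}

def explain(level: int) -> str:
    if level == 1:
        return f"AI used in this decision: {MODEL['uses_ai']}"
    if level == 2:
        return "Parameters considered: " + ", ".join(MODEL["parameters"])
    # Level 3: how the parameters trade off against one another. For a
    # linear model this is just the weights; for a deep network there may
    # be no faithful answer at all, which is the moving target in the text.
    return "Relative weights: " + str(MODEL["weights"])

for lvl in (1, 2, 3):
    print(explain(lvl))
```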

 

AI is still in a state of considerable flux. There is an argument for legislation now, just as there is an argument for waiting a while to see how the field evolves. But it seems clear that any law that makes assumptions about the abilities, contexts and applications of AI is bound to fail in time (possibly very soon). Far better to focus on the outcomes, on what is not allowed, than to attempt to define what is allowed.

 

By attempting to define what constitutes a high-risk AI application, it can be argued that the EU’s AI Act is fundamentally flawed.
