
AI and humans: Who’s in charge?

Jeremy Swinfen Green explores the factors that influence how people and artificial intelligence should work together to maximise the benefits of machine learning responsibly


Artificial intelligence (AI) is transforming the way businesses operate, enabling decisions based on complex data to be made at great speed and often more accurately than humans can manage.

 

However, its successful deployment hinges on finding the right balance between the expertise and experience of humans and the competencies of artificial intelligence. Collaboration between AI systems and humans is most effective when tasks are allocated based on the strengths of each, ensuring that AI augments human decision-making rather than replacing it.

 

Relative strengths

 

Understanding the respective strengths of AI and humans is fundamental to integrating AI into decision-making processes.

 

AI systems excel at rapidly analysing vast amounts of data, identifying patterns and performing repetitive, data-intensive tasks with accuracy and efficiency. This capability allows AI to support organisations in many areas, including financial modelling, medical diagnostics and predictive market analytics.

 

However, while AI thrives on data and rules, it lacks the contextual understanding, lateral thinking and ethical reasoning that people can provide. Humans bring the ability to understand complex social and cultural factors. They can exercise creativity. And they can make nuanced judgments based on factors that a purely logical AI system may be unable to consider.

 

Therefore, while AI can significantly improve decision-making speed and accuracy, humans must remain actively involved to ensure ethical considerations, strategic oversight and an understanding of implications beyond those that can be derived from the immediate data set.

 

Models of human-AI collaboration

 

Several models exist for collaboration between humans and AI, each defined by the level of responsibility and oversight given to the AI system.

 

AI as an assistant

 

In this model, AI functions as a support tool while humans maintain full decision-making authority. For example, in medical imaging, AI can scan radiographs to detect anomalies, but a radiologist makes the final diagnosis. Similarly, AI-generated legal contracts must be reviewed by human lawyers before approval.

 

In this model, humans are always expected to play an active part in the decision-making process.

 

This model ensures that while AI improves efficiency, and may well also increase accuracy (for example with medical scans), humans retain ultimate control over the process, keeping outcomes in line with human norms and expectations and strengthening the trust that other stakeholders can place in the system.
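To make the pattern concrete, here is a minimal sketch in Python of how an "assistant" arrangement can be wired up: the model's output is treated purely as advisory, and a named human reviewer records the final decision. The function and field names (suggest_diagnosis, Suggestion and so on) are hypothetical, not taken from any real imaging system.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    finding: str       # what the model thinks it sees
    confidence: float  # the model's own confidence score, 0.0 to 1.0

def suggest_diagnosis(scan_id: str) -> Suggestion:
    # Stand-in for a real imaging model; it only ever produces a draft finding.
    return Suggestion(finding="possible anomaly, upper-left quadrant", confidence=0.87)

def record_final_diagnosis(scan_id: str, reviewer: str) -> str:
    suggestion = suggest_diagnosis(scan_id)
    # The AI output is advisory; the human reviewer's verdict is what gets recorded.
    print(f"AI suggestion for {scan_id}: {suggestion.finding} ({suggestion.confidence:.0%})")
    return input(f"{reviewer}, please confirm or amend the diagnosis: ")
```

The key design choice is that the system stores the human's wording, not the model's, as the decision of record.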

 

AI and humans as partners

 

Here, AI and humans work together in a fairly equal partnership. AI takes on more responsibilities, but humans intervene as needed. An example is AI-powered customer complaint triaging, where the AI system resolves standard queries while humans handle complex cases, or cases where a customer simply wants to speak to another human. Another example is financial trading, where AI executes trades but human traders oversee and adjust parameters if necessary.

 

In this model, responsibility for some tasks is delegated to the AI system, while humans retain the ability to step in when required.

 

The benefit of this model is that AI streamlines some operations while allowing humans to focus on higher-value activities.
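A rough sketch of the complaint-triage example above might look like the following. The routing rules, the confidence threshold and the "speak to a human" check are illustrative assumptions rather than a description of any particular product.

```python
ESCALATION_THRESHOLD = 0.75  # assumed: below this the AI does not act alone

def handle_complaint(text: str, classify) -> str:
    """classify is any model that returns (category, confidence) for a complaint."""
    category, confidence = classify(text)
    wants_human = "speak to a human" in text.lower()

    if wants_human or confidence < ESCALATION_THRESHOLD or category == "complex":
        # Humans intervene as needed: customer request, low confidence or a hard case.
        return route_to_human(text)
    return resolve_automatically(text, category)

def route_to_human(text: str) -> str:
    return "queued for a human agent"

def resolve_automatically(text: str, category: str) -> str:
    return f"standard resolution applied to {category} query"

# Example: a routine billing query is handled automatically.
print(handle_complaint("My invoice total looks wrong", lambda t: ("billing", 0.92)))
```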

 

AI as the main actor

 

In certain environments, AI may take the lead, with human intervention occurring only in exceptional circumstances. In systems with high transaction volumes, such as manufacturing or banking, where efficiency is highly desired, AI can be responsible for most decisions. Autonomous fraud detection systems are an example of this approach: AI analyses transactions for suspicious activity and makes decisions (such as halting a payment) that stand unless a human overrules them.

 

In this model, AI is responsible for completing day-to-day tasks, with humans brought in only when needed.

 

The benefit of this model is that AI streamlines operations, with humans required to act only in unusual circumstances.
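As an illustration of this division of labour, the sketch below lets the AI's decision stand by default while logging every action so that a human can later overrule it. The threshold, field names and the screen_transaction/human_overrule functions are hypothetical.

```python
from datetime import datetime, timezone

BLOCK_THRESHOLD = 0.95  # assumed risk score above which payments are halted

def screen_transaction(txn_id: str, risk_score: float, audit_log: list) -> str:
    """The AI decides by default; every decision is logged for later human review."""
    decision = "halted" if risk_score >= BLOCK_THRESHOLD else "approved"
    audit_log.append({"txn": txn_id, "score": risk_score, "decision": decision,
                      "by": "ai", "at": datetime.now(timezone.utc).isoformat()})
    return decision

def human_overrule(txn_id: str, new_decision: str, reviewer: str, audit_log: list) -> None:
    # The exceptional path: a named person reverses the automated decision.
    audit_log.append({"txn": txn_id, "decision": new_decision,
                      "by": reviewer, "at": datetime.now(timezone.utc).isoformat()})
```

The audit trail matters as much as the decision itself: it is what makes exceptional human intervention, and accountability for it, possible.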

 

Allocating responsibility

 

Deciding how much responsibility AI should be given will depend on multiple factors, including task complexity and the risks associated with the task.

 

The complexity of the task is fundamental to where responsibility should lie. Simple tasks with clear rules and little ambiguity, such as spam email filtering, can reasonably be handled by AI. However, complex tasks that require subjective judgment, such as assessing whether a business contract will be easily understood by both parties, may need a human to take responsibility.

 

There is also a relationship between a task’s value and its complexity: tasks that require complex physical actions, such as mending a domestic boiler, may currently be beyond the practical scope of AI, even though it is possible to build an AI-powered robot to do the equivalent job in a commercial context, where the value justifies the investment.

 

The level of risk associated with a task is also significant. In low-risk scenarios, such as recommending a film to watch on television, it is reasonable for AI to operate autonomously (although even here, there should be a degree of oversight, if only for commercial reasons).

 

However, in high-risk areas such as legal sentencing or confirmation of a medical diagnosis – areas where a wrong decision could have catastrophic consequences – human involvement remains crucial. AI decisions in such contexts must be closely and continuously monitored to prevent unintended harm, human decision-makers must remain actively involved, and quality-control processes should verify that this involvement is sufficient and appropriate.

 

Tasks where there may be regulatory and legal consequences to a decision can be considered high risk. Regulated industries such as finance and healthcare impose strict accountability on human decision-makers. AI cannot operate independently in these sectors without sufficient human oversight. Examples include hiring processes and loan approvals.
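One simple way to operationalise these two factors is to encode them as an explicit policy that maps task complexity and risk to a collaboration model. The categories and the mapping in the sketch below are assumptions that each organisation would need to set for itself.

```python
def oversight_level(complexity: str, risk: str) -> str:
    """complexity and risk are 'low' or 'high'; returns a collaboration model."""
    if risk == "high":
        # Regulated or safety-critical work: the AI assists, humans decide.
        return "ai_as_assistant"
    if complexity == "high":
        # Ambiguous but lower-stakes work: share the task and escalate when unsure.
        return "ai_human_partnership"
    # Simple, low-risk, high-volume work, such as spam filtering.
    return "ai_as_main_actor"

# Examples: spam filtering runs autonomously; loan approvals keep humans in charge.
assert oversight_level("low", "low") == "ai_as_main_actor"
assert oversight_level("high", "high") == "ai_as_assistant"
```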

 

The EU’s AI Act attempts to identify these high-risk areas, either prohibiting certain uses outright or requiring specific safeguards from any organisation that wishes to employ AI in them. While a risk-based approach is undoubtedly sensible, questions remain over whether the wording of the AI Act, and specifically its attempt to assign specific risk levels to certain activities, will deliver against its aims without hamstringing innovation in the EU.

 

Evolving human-AI interactions

 

AI technology is changing rapidly, and the uses to which it is put are evolving just as quickly. To maximise AI’s effectiveness, therefore, both AI systems and human users must continuously learn and adapt. Static AI systems that cannot learn from new data will require frequent human intervention to ensure they remain relevant and accurate as the environment around them changes.

 

A different problem arises with self-improving AI systems that use machine learning techniques. Because these models refine their algorithms over time, continuous monitoring is also required. They may become more effective, increasing the level of autonomy that can be given to them, but it also becomes harder for human overseers to understand how they reach their decisions. With the decision mechanisms hidden, scrupulous attention must be paid to the potential for harm.

 

The way that human stakeholders behave in their use and management of AI is affected by two main factors: whether people trust the AI; and whether the right people are available to interact with the AI.

 

Cultural and psychological factors are important. User confidence in AI inevitably affects how an organisation can use it. If people believe AI is a threat to their jobs or a poor substitute for human decision-making, they are likely to undermine it. Gradual adoption, together with active change management and the support of AI “champions” within the organisation, may be necessary to build trust.

 

The availability of human oversight is also important, and this will vary by context. In call centres or remotely supervised operating theatres, human supervisors can intervene readily when AI systems encounter issues. However, in applications such as autonomous vehicles and remote machinery, where humans are not immediately available, AI may need to operate with a higher degree of autonomy; in that case any potential risks must be carefully managed.

 

Striking the right balance

 

While AI can significantly enhance decision-making and operational efficiency, it cannot function in isolation. The ideal balance between AI and human responsibility depends on several factors, including task complexity and risk level, as well as human and ethical considerations.

 

Ultimately, when AI is given any responsibility for decision-making, however small, accountability must rest with humans. Businesses that strategically integrate AI while maintaining an adequate and appropriate level of human oversight will be positioned to harness the full potential of artificial intelligence in a responsible and effective manner.
