
When it comes to AI, it turns out things really are either black or white

Should we blindly trust AI to solve our problems, or find the right balance between good outcomes and proper control?

If we are to entrust artificial intelligence with more and more aspects of our lives, it surely stands to reason that we should be able to dig into the nuts and bolts of any AI to understand how it makes its decisions when we need to – and to fix things when they go wrong.

 

Perhaps surprisingly, however, this is the subject of intense debate among the world’s machine learning scientists.

On one side of the argument are “white box”, or “explainable”, AI models, whose decision patterns can be understood by a human expert and which, it is argued, are therefore superior. On the other are the “black box” models, which others contend are more accurate than explainable models in certain situations (such as image processing), because they are produced directly from data by an algorithm. This, the theory goes, makes them more efficient.

 

White versus black


Black box models used in machine learning are produced directly from data by an algorithm, which means that no one, not even their designers, can see how the input variables are combined to generate predictions.

Such models can be so intricate a function of those variables that no human being could feasibly comprehend how they relate to one another to reach a final prediction, even with a full list of the inputs in hand.

 

By contrast, in white box AIs the relationships between the variables – and by extension, the way the AI forms its final prediction – are far more obvious. A linear model, in which variables are weighted and added together, or a brief logical statement, may combine only a small number of variables.
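To make the contrast concrete, here is a minimal sketch of the kind of white box model described above: a simple linear scoring model in which each variable’s contribution to the final prediction can be read off directly. The feature names, weights and figures are hypothetical, chosen purely for illustration and not taken from any real system.

```python
# Illustrative white box model: a weighted linear score.
# Feature names and weights are hypothetical, used only to show how an
# interpretable model exposes each variable's contribution to the result.

WEIGHTS = {
    "years_of_credit_history": 0.6,
    "missed_payments_last_year": -1.5,
    "debt_to_income_ratio": -2.0,
}
BIAS = 1.0


def score(applicant: dict) -> float:
    """Final prediction: a bias term plus the weighted sum of the inputs."""
    return BIAS + sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)


def explain(applicant: dict) -> dict:
    """Per-variable contributions -- the explanation a white box model offers."""
    return {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}


applicant = {
    "years_of_credit_history": 7,
    "missed_payments_last_year": 1,
    "debt_to_income_ratio": 0.4,
}

print(score(applicant))    # the prediction itself
print(explain(applicant))  # how each variable pushed the prediction up or down
```

A black box alternative – say, a deep neural network trained on the same inputs – might in some settings score applicants more accurately, but no such per-variable breakdown falls out of it naturally.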

 

How much control are we ready to relinquish?


The FICO-sponsored 2018 Explainable Machine Learning Challenge served as a case study for debating the benefits and drawbacks of explainable models versus black box ones. The audience, made up of influential figures in finance, robotics and machine learning, was asked to imagine a scenario in which they had cancer and required surgery to remove a tumour. They had two options. The first was a human surgeon who could explain any aspect of the procedure but carried a 15 per cent chance of killing the patient.

The alternative was a robotic arm, which had only a 2 per cent chance of failure. The robot couldn’t do much by way of explanation, as it was designed as a black box model. In this case, complete trust in the robot was necessary; no questions could be directed at it, and no detailed explanation of how it reached its decisions could be given.

The audience was then instructed to raise their hands to indicate which of the two they thought should perform the life-saving procedure. Every hand but one went up for the robot.

 

Although it may seem obvious that a 2 per cent chance of dying is preferable to a 15 per cent chance, framing the risks associated with AI systems in this manner raises some important and intriguing questions. Why is the robot required to be a black box? Would the robot lose its capacity to carry out precise surgery if it were able to explain itself? Wouldn’t better communication between the robot and the patient improve patient care rather than undermine it?

 

According to some studies, black box models do not actually appear to be more accurate in many healthcare domains, or in other high-stakes machine learning applications where life-changing decisions are being made – and their inscrutable nature can conceal a wide range of potentially harmful errors.

 

In the meantime, although black box AI is typically too complex for most people to wrap their heads around easily, it is widely used in applications that produce insights from a data set. Perhaps regrettably, the popularity of black box AI has enabled businesses to market complex or proprietary models for critical decisions, even when very straightforward interpretable models already exist for the same tasks.

 

This enables companies that develop such models to carve out a piece of the market without considering the long-term effects. As the 2018 Explainable Machine Learning Challenge demonstrated, the claims of their creators – that opaqueness is a prerequisite for these machines’ accuracy – have, until now, largely gone unchallenged.

 

Giving transparency a chance


The debate over the two approaches to machine learning has significant implications, as its outcome will likely have a tremendous influence over how our financial, healthcare and criminal justice systems – to name a few – develop in the future.

 

Given that’s the case, perhaps developers should sign up to a rule that black box machine learning models are considered only as a second option, when no explainable model can be created that achieves the same level of accuracy. After all, if it ain’t broke, why fix it?
