Business Reporter

The case for ethical AI

Martin Weis of Infosys Consulting argues that ethical artificial intelligence must be a core value for every organisation in 2023

 

AI now suffuses almost every facet of modern life. As a result, questions are rightly being asked about how organisations manage these transformative technologies.

 

From privacy concerns to biases in algorithmic decision-making, the development and deployment of AI raise serious ethical questions, some of which are already being tackled through increasing regulatory oversight.

 

AI presents huge opportunities, but they must be pursued with close regard to the ethical concerns the technology raises: transparency, accountability, explainability, bias and discrimination, security and privacy. Here we explore why an ethical approach to AI must now be a primary concern for leaders, and for the organisation as a whole.

 

Charting an evolving regulatory landscape

The impact of biased AI decision-making, as seen in recruitment and financial services, has focused public attention on both the intended and unintended effects of AI. As the technology develops, so does the regulatory landscape, making it hard to navigate.

 

However, the general direction is one of increased scrutiny, reporting and accountability. Where organisations fail to meet these demands, the costs will be both financial and reputational.

 

To chart this regulatory path, businesses must ground themselves in a comprehensive AI risk framework. Unfortunately, many organisations have surged ahead with the development and adoption of AI without laying this foundation. As a result, they are having to reverse engineer their ethical strategy and unpick problematic deployments.

 

A lack of visibility into how their models work, or how decisions are made, means that responding to changing regulations is an uphill battle. Not all AI models will present ethical dilemmas, but the framework for making this assessment must exist.

 

For particularly sensitive industries such as health or finance, the risks are far higher, and regulators are paying close attention. This makes it even more crucial to assess risk thoroughly before forging ahead.

 

Bid goodbye to the black box

The decision-making processes behind AI models are complex and may rely on many inputs that require specific expertise to comprehend. However, for AI to be ethically sound, its decision-making must be transparent. The issue of opaque, "black box" AI is the subject of much debate.

 

As explained in the journal Nature: ‘the decision-making process of a machine-learning model is often referred to as a black box — researchers and users typically know the inputs and outputs, but it is hard to see what’s going on inside.’

 

This lack of transparency leads to a lack of accountability. Why is the algorithm making the decisions it does? And could those decisions reflect unethical biases in the underlying data?

 

This opacity is being met by a move towards explainable AI (XAI). Organisations must be able to explain why they are using a given AI system and how it works. This won’t always be straightforward, but investment must be made in ensuring, and communicating, transparency.

 

Of course, this can only be achieved if organisations have first built a risk matrix that puts explainability and accountability at its core. Such values must be integrated throughout the lifecycle of AI initiatives, both to ensure end-to-end transparency and to stay ahead of the shifting regulatory landscape.

 

AI ethics as a core value

AI technologies are now helping to deliver strategic objectives across all areas of business, from HR to finance, operations to sustainability. It is therefore essential that ethical AI and governance are embedded across the organisation.

 

Treating ethical AI as a core value means the CEO must set a strong strategic vision for responsible AI, working with leaders across the business to ensure it permeates the whole organisation.

 

The companies engaging with AI ethics most successfully are those that commit to cross-functional working groups. Bringing together legal and compliance, technology, HR and operations, these groups can collectively navigate regulatory frameworks and assess risk and opportunity to drive effective, ethical AI.

 

Cascading ethical AI as a culture

While it is vital to drive ethical AI among an organisation’s leaders, this is not enough. Embedding AI ownership and accountability across the business means ensuring all employees understand the impact and importance of AI to their role and to the objectives of the organisation.

 

With AI affecting all aspects of our lives, personal and professional, educating and empowering employees should be considered a key aspect of ethical AI. Given the widening AI skills gap, upskilling employees in ethical AI, and in its supporting frameworks and policies, is critical.

 

By engaging employees in a culture of ethical AI, and providing upskilling opportunities, employers will not only demonstrate embedded ethics but will empower employees to uncover novel uses for AI that can deliver solid business benefits. 

 

Ethical AI is a win-win

Organisations of all shapes and sizes are increasingly under the spotlight for every decision they take. Corporate values are no longer bullet points in brochures but are being used to hold companies to account. Ethical grey areas are being questioned and exposed, with customers and consumers making decisions based on a myriad of values-based factors.

 

CSR (corporate social responsibility) has shot to the top of the agenda and this increasingly means companies taking responsibility for their use of technology. When designed, governed, and implemented correctly, responsible AI can deliver on a host of CSR objectives leading to better outcomes across society. However, where it is opaque, biased or unaccountable it destroys trust.

 

While AI is still nascent in many organisations, it is proving instrumental in innovation across industries. The lesson for leaders is not to wait until AI is fully embedded before considering its governance. By then the horse will have bolted.

 

Make sure to embed ethical AI as a cultural imperative now and create the optimal conditions for it to thrive.

 


 

Martin Weis, AI&A Partner at Infosys Consulting

 

Main image courtesy of iStockPhoto.com

Business Reporter

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543
