Business Reporter

The EU AI Act: the dawn of a new AI era


Mark your calendars, because a long, complex and adventurous journey begins – your journey to AI compliance. The EU Parliament has formally adopted the EU AI Act. Despite a few formalities remaining, the importance of this event cannot be overstated.


The EU AI Act is the world’s first, and so far only, set of binding requirements to mitigate AI risks. The goal is to enable institutions to exploit AI fully, in a safe, trustworthy and inclusive manner.


The extraterritorial effect of the rules, the hefty fines and the pervasiveness of the requirements across the AI value chain mean that most organisations using AI, regardless of territory, must comply with the Act. And some of its requirements will be enforceable this autumn, so there is a lot to do and little time to do it.


If you haven’t done so before, start by assembling your AI compliance team. Meeting the requirements will require strong collaboration among teams, from IT and data science to legal and risk management, as well as close support from the C-suite.


Introducing the five chapters of AI compliance


The EU AI Act is very broad in scope, and it contains complex requirements. Deciding how to tackle it is the first difficult decision to take, but we are here to help. We are working on new research that helps organisations structure their AI compliance activities in an effective manner. This approach comprises five chapters that organisations can run simultaneously and asynchronously, depending on their priorities and needs.


The five chapters of AI compliance are:


Risk management, auditing, and monitoring


This chapter includes classic risk management activities, with two main components depending on the ultimate subjects of the risk activities: internal or external. Internal risk management activities include creating an inventory of AI systems, running risk assessments to determine the risk classification and inherent risks of each use case, and so on. External risk management activities are aimed at delivering conformity assessments (where required), documentation for compliance authorities and support for third-party audits. Auditing and monitoring are also part of this chapter.
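As an illustration only, an internal AI inventory with per-use-case risk tiers might be sketched as below. The field names and tier labels are hypothetical, loosely mirroring the Act's risk categories; real classification criteria come from the Act itself and forthcoming guidance.

```python
from dataclasses import dataclass

# Hypothetical risk tiers loosely mirroring the Act's categories:
# "unacceptable", "high", "limited", "minimal".
@dataclass
class AISystem:
    name: str
    purpose: str
    risk_tier: str  # assigned during the internal risk assessment

def high_risk_systems(inventory):
    """Return the systems that would need conformity assessment work."""
    return [s for s in inventory if s.risk_tier == "high"]

inventory = [
    AISystem("cv-screener", "rank job applicants", "high"),
    AISystem("chat-faq", "answer customer questions", "limited"),
]

flagged = high_risk_systems(inventory)
print([s.name for s in flagged])  # prints ['cv-screener']
```

Even a simple inventory like this gives the compliance team a single place to track which systems trigger the heavier external obligations.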


Data governance, quality and policies


This chapter is about all things data, covering a broad array of data governance activities: building a better understanding of data sources, which is the foundation of key principles of responsible AI (such as transparency and explainability); ensuring data quality; and tracking data provenance.
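A minimal sketch of what tracking data provenance can mean in practice: a metadata record per dataset, checked for completeness before the dataset feeds a model. The fields shown (source, licence, collection date, known gaps) are illustrative assumptions, not fields mandated by the Act.

```python
from datetime import date

# Hypothetical provenance record for one training dataset.
provenance = {
    "dataset": "customer-feedback-2023",
    "source": "internal CRM export",
    "licence": "internal use only",
    "collected_on": date(2023, 11, 1).isoformat(),
    "known_gaps": ["no records before 2021"],
}

def provenance_complete(record, required=("dataset", "source", "licence")):
    """Flag records missing the metadata needed for transparency reporting."""
    return all(record.get(field) for field in required)

print(provenance_complete(provenance))  # prints True
```

Records that fail the check can be routed to data owners before the dataset is used, which is one concrete way the governance and transparency requirements meet day-to-day practice.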


It covers policies, too. From expected usage of systems to the protection of intellectual property, privacy, consumers and employees, organisations will need to refresh existing policies and create new ones to meet the new requirements.


Technical measurements


Despite the lack of details in technical standards and protocols, at a minimum organisations must prepare to measure and report on the performance of their AI systems.


This is arguably one of the main challenges of the new requirements. Companies must start by measuring the performance of their AI and generative AI (GenAI) systems against critical principles of responsible AI, checking for issues such as bias, discrimination and unfairness. This chapter will become longer and more detailed over time, as new standards and technical specifications emerge.
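To make the measurement task concrete, here is a sketch of one widely used fairness check, the demographic parity gap: the difference in positive-outcome rates between two groups. The metric choice, the toy data and any acceptable threshold are assumptions on our part; the Act's harmonised technical standards are still being written.

```python
def positive_rate(outcomes):
    """Share of positive outcomes (1 = favourable decision) in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy model decisions (1 = approved, 0 = rejected) for two applicant groups.
group_a = [1, 1, 0, 1, 0]  # 60% approved
group_b = [1, 0, 0, 0, 1]  # 40% approved

gap = demographic_parity_gap(group_a, group_b)
print(round(gap, 2))  # prints 0.2
```

Reporting a number like this per system and per protected attribute is the kind of routine measurement organisations should be ready to produce, whatever specific metrics the final standards settle on.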


AI culture and literacy


Building a robust AI culture is also a pillar of AI governance, and the AI Act makes it even more urgent. Organisations must run AI literacy programmes to meet compliance requirements. These range from standard training for employees who are involved daily in managing AI systems to initiatives that align the design, execution and delivery of business objectives through the use of AI.


Literacy also transforms a “human in the loop” into a professional able to perform effective oversight of AI systems, a role that, according to the Act, must come with adequate levels of competence, authority and training.


Communication and instructions


An organisation’s ability to communicate about its use of AI and GenAI in the products and services it brings to market is another critical element of AI compliance. But communication is not only disclosure to consumers or employees about the use of the technology; it also includes the creation of “instructions” that must accompany certain AI systems. Expected outcomes from risk assessments, as well as remedies, must also be part of this disclosure exercise.


by Enza Iannopollo, Principal Analyst, Forrester

© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543
