Business Reporter

Fostering a positive AI culture in business

Graham Glass at CYPHER Learning argues that artificial intelligence should be treated as a friend, not a foe

 

With the launch of GPT-4 in 2023, suddenly the whole world seemed to be dabbling in AI. But what does that mean for HR teams and businesses in 2024, as AI filters into the workplace?

 

Misusing AI when handling confidential information could lead to data security breaches, potentially resulting in fines, reputational damage, and loss of customer confidence. But holding employees back from using AI technology could drive AI usage into the shadows; in fact, “shadow AI” is already a workplace challenge. 

 

The challenge HR professionals face is how to foster a culture of innovation while managing the new risks that AI could introduce. Yet fewer than a third of UK businesses (28%) have created policies relating to GenAI. And while firm guidelines on acceptable use are a good start, for policy to be effective it must go hand in hand with education.

 

So, what does a safe, measured approach to AI use look like?

 

Shadow AI and the rise of BYOAI 

At the dawn of this chatbots-for-all era, whether it was suggesting recipes for leftovers, writing a wedding speech, or designing a new kitchen, people quickly found ways for AI to make their lives easier. As with the Bring Your Own Device (BYOD) trend that accompanied the proliferation of smartphones, a BYOAI culture has started to creep into the workplace.

 

From drafting emails to writing code, people are using AI hacks to make their work lives a little easier, sometimes surreptitiously. In fact, Deloitte estimates that 61% of employees who work with computers already use GenAI in their day-to-day work.

 

While it’s great that people are finding new ways to be more productive, the quiet, unsanctioned emergence of AI in our professional lives does present risks. In 2023, Samsung revealed that some of its employees had accidentally leaked sensitive data via ChatGPT, alerting companies to the risks of open AI platforms. From tech giants like Apple and Samsung to government entities like the UK Department for Work and Pensions, organisations are rightly growing wary of staff using AI without managers knowing.

 

However, an all-out ban has the potential to do more harm than good. Staff will use AI technology even if it’s forbidden: research shows that 69% of UK HR and business leaders believe employees are breaking the rules, but leaders often don’t find out until something goes wrong.

 

Ultimately, if staff are using AI, organisations need to know about it. Instead of turning a blind eye, leaders should foster a culture where people can be open about AI usage.

 

Responsible AI use: guardrails for success

A good starting point is to establish ground rules. A clearly communicated AI governance policy that defines acceptable use can help mitigate the risk of misuse.

 

Upcoming cooperative UK-US government policy on AI will require HR leaders and staff to prepare for new regulation and compliance hurdles, with non-compliance leading to significant fines. Businesses would do well to get their plans in order now to avoid a compliance headache later.

 

A good starting point for businesses is to ask questions that relate to their operations and possible risks. For example, should the company disclose when AI is used to generate copy or create a graphic? Which use cases in your organisation are suitable candidates for AI? Which are off-limits?

 

By putting AI policies in place that establish clear red lines, businesses can improve worker confidence and ensure staff know the company’s stance on AI use. It will also reduce the chances of employees inadvertently doing something wrong with AI.

 

Education, education, education

Education also plays a key role in governing AI use. Engaging, memorable, story-based, and timely training on AI can grab users’ attention. This helps bring staff on board and highlights the implications of AI gone wrong.

 

Businesses can hold regular training sessions to acclimatise employees to AI dos and don’ts. There’s plenty of evidence that many workers are unsure of, or even afraid of, AI technology; demystifying it is practically a precondition for success. HR and L&D teams can work with department leads to personalise training so it’s relevant to individual workers and their roles.

 

Employee comprehension of AI “rules of the road” can be tracked and verified via quizzes to ensure that staff truly understand what is expected of them, and the potential consequences if any rules are breached. This can go a long way in helping drive worker confidence, accountability, and compliance regarding AI – and, with AI out of the shadows and responsibly controlled, might even encourage innovation.

 

AI for the future

As staff continue to cultivate more inventive relationships with AI, it’s important to remember the positives. AI adoption offers myriad benefits, including enhanced productivity, innovation and problem-solving capabilities.

 

So it’s equally important to look for ways to encourage safer AI use. Ultimately, if people find new and useful ways to use AI, that could help the business, and the chances are leaders will want to know about it. Simply prohibiting or penalising AI is a dead-end strategy that will stop companies unlocking its benefits.

 

As AI is here to stay, it’s better to harness and control it in bright sunlight with vivid, recognised guardrails in place. This could mean employing AI mainly in well-controlled ways, typically as a component of a technology solution with a limited, well-defined mission and an accountable vendor.

 

Ultimately, if you are not steering the use of AI and laying down ground rules on what flies and what doesn’t, then it’s difficult to mitigate the risks effectively. So, it’s high time to embrace not just AI but smart AI governance, and help make it a productive assistant that empowers all staff to succeed.

 


 

Graham Glass is CEO and Founder of CYPHER Learning 

 

Main image courtesy of iStockPhoto.com and demaerre
