
Generative AI and the safeguarding of privacy

Prashanth Rao at Hexaware Technologies outlines five steps Business Process Outsourcers must take to safeguard privacy and security while using generative AI

 

Generative AI is one of the biggest emerging technologies in recent memory – everybody is talking about it. Whether it’s providing commentary at the Wimbledon tennis championships or creating new works of art, generative AI is already being used in many fascinating projects.

 

For businesses, the potential impact is massive, with McKinsey saying generative AI could add $4.4 trillion to the global economy – more than the entire GDP of the UK. It’s only natural that businesses are very excited about the potential rewards.

 

Still, at the same time, they need to understand the risks involved with using this technology and have solid plans to mitigate them. This is especially true for Business Process Outsourcers (BPOs).

 

Recognising the risks

For BPOs, generative AI could bring many benefits. For instance, it could help them make better decisions by enhancing analytics, underpin privacy for clients by generating anonymised data, and deliver better customer experiences by creating personalised content.

 

BPOs will naturally be cautious when adopting any new technology, but they must be especially careful when starting to use generative AI. Their entire business model relies on clients trusting them to be experts who can add value with minimum risk – so they cannot afford to compromise on privacy and security.

 

Tools like ChatGPT and Bard are developed, owned and maintained by third parties, so BPOs should be wary of the risks they could bring – for instance, where is the data they share stored, and what else is it used for?

 

BPOs hold a lot of sensitive information ranging from personal details to financial transactions. Accidentally leaking some of this data while using generative AI – a mistake Samsung recently made – could have serious consequences.

 

Key considerations

As generative AI is still an emerging technology, BPOs don't have any existing handbooks or internal 'best practices' to follow. However, there are four top-level issues they should keep top of mind.

 

The first is data governance – they need clear policies and processes for handling data, covering ownership, access, usage, retention and disposal. Failing to set clear governance risks non-compliance with rules and regulations, ultimately leading to regulatory penalties and reputational damage.
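To show what such a policy can look like in practice, here is a minimal Python sketch. The category names, owners and retention periods are illustrative assumptions rather than part of any specific framework; the point is that ownership, usage, retention and disposal rules can be made machine-checkable before data is passed to a generative AI service.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical policy record for one category of data.
# Field names and values are illustrative assumptions.
@dataclass
class DataPolicy:
    category: str           # e.g. "customer_pii"
    owner: str              # accountable business owner
    allowed_uses: set[str]  # purposes the data may be used for
    retention_days: int     # how long records may be kept
    may_leave_org: bool     # may it be sent to external AI services?

POLICIES = {
    "customer_pii": DataPolicy("customer_pii", "Data Protection Officer",
                               {"support", "billing"}, 365, False),
    "anonymised_usage": DataPolicy("anonymised_usage", "Analytics Lead",
                                   {"analytics", "model_prompts"}, 730, True),
}

def check_use(category: str, purpose: str, created: date, external: bool) -> bool:
    """Return True only if the proposed use complies with the stored policy."""
    policy = POLICIES.get(category)
    if policy is None:
        return False  # unclassified data is never forwarded
    within_retention = date.today() - created <= timedelta(days=policy.retention_days)
    return (purpose in policy.allowed_uses
            and within_retention
            and (policy.may_leave_org or not external))
```

A check like this could sit in front of any integration that forwards data to an external model, so the policy is enforced rather than merely documented.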

 

BPOs should also guard against data bias – they must ensure data is being used and acted upon ethically, keeping activity transparent, fair and accountable.

 

Data security is another key concern – BPOs must guard against unauthorised access, theft or modification of data while ensuring generative AI does not introduce new vulnerabilities. They also need to keep data quality high – producing accurate, reliable and consistent data that meets business needs and customer demands.

 

A strategy to uphold privacy and security

Once they’ve fully considered the potential issues, BPOs need to set a top-level strategy for upholding privacy and security while using generative AI.

 

Here are five ingredients for a strong strategy:

  1. Implementing access controls – BPOs can ensure they have a firm grip on who can access their data by using a combination of authentication and authorisation mechanisms.
  2. Training employees – BPOs must educate employees on the need to be good data custodians. Creating awareness of the importance of data security will ensure generative AI is being used responsibly across the organisation.
  3. Encrypting data – Whether data is at rest or in transit, BPOs should use robust encryption keys and algorithms to keep it as safe as possible.
  4. Auditing third parties – When using third-party solutions, BPOs must thoroughly vet their partners' data protection measures. This enables them to make informed choices about the risks partners bring and to hold them accountable if something goes wrong.
  5. Maintaining a paper trail – BPOs should keep logs of all generative AI activities, such as the creation, modification, deletion or access of data. This will let them track what has been taking place and the impact it has had, which in turn will help them make sound decisions about their future use of generative AI. (A brief sketch of points 1, 3 and 5 follows this list.)
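Below is a minimal, purely illustrative Python sketch of how points 1, 3 and 5 could fit together. The role names, the call_generative_model stub and the log format are assumptions, not anything prescribed here; encryption uses the widely available cryptography package, and a real deployment would load keys from a key-management service rather than generating them in code.

```python
import logging
from cryptography.fernet import Fernet  # pip install cryptography

# --- 1. Access controls: only authorised roles may submit prompts --------
AUTHORISED_ROLES = {"analyst", "automation_engineer"}  # illustrative roles

# --- 3. Encryption at rest: stored responses are encrypted ---------------
_key = Fernet.generate_key()   # in practice, fetch from a key management service
_cipher = Fernet(_key)

# --- 5. Paper trail: every generative AI action is logged ----------------
logging.basicConfig(filename="genai_audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
audit = logging.getLogger("genai_audit")

def call_generative_model(prompt: str) -> str:
    """Hypothetical stand-in for a real third-party generative AI client."""
    return f"[model response to: {prompt[:40]}...]"

def generate_for_user(user: str, role: str, prompt: str) -> str | None:
    if role not in AUTHORISED_ROLES:                    # authorisation check
        audit.warning("DENIED user=%s role=%s", user, role)
        return None
    audit.info("REQUEST user=%s role=%s", user, role)   # log the access
    response = call_generative_model(prompt)
    stored = _cipher.encrypt(response.encode())         # encrypt before storage
    audit.info("RESPONSE user=%s bytes_stored=%d", user, len(stored))
    return response
```

For example, generate_for_user("a.smith", "analyst", "Summarise this anonymised ticket history") would be logged and its response stored encrypted, while a request from an unauthorised role would be refused and recorded in the same audit trail.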

Reaping rewards, minimising risks

The potential of generative AI is great, but BPOs must resist the temptation to adopt it in a rush. If they make a mistake and breach customers' trust, BPOs could damage those relationships and potentially lose customers forever.

 

To prevent this, they need to take a step back and consider the risks, then set a strategy that will uphold privacy and security. With this strategy, they can confidently move forward and begin reaping the rewards generative AI can bring.

 


 

Prashanth Rao is Head of Customer Experience Automation Transformation at Hexaware Technologies

 

