
Artificial intelligence: verify, then trust

Michel Isnard at GitLab argues that safe AI adoption needs guardrails

 

UK organisations are at a defining moment in AI adoption. The policies they put in place, the strategies they create, and the ways they adapt their workflows to incorporate AI tools will help shape the future success of their businesses. 

 

When implemented strategically, generative AI has the potential to augment functions and improve business processes across teams, from software development to marketing, finance, and beyond. 

 

While many are hurrying to incorporate AI into their workflows to gain a competitive edge, the leaders who see the most significant benefits to their daily operations will be those who take a strategic, measured approach to AI adoption. 

 

According to GitLab’s global research, organisations can be around 55% more productive with generative AI, so the stakes of getting the roll-out right are high. 

 

Let’s examine some practical ways organisations can set themselves up for success. 

 

Ensuring privacy from the start

To use AI tools safely and successfully, organisations should put guardrails in place early. By enacting policies around what is, and what is not, an acceptable use of AI, organisations can prepare for responsible and sustainable adoption. 
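
To make such a policy checkable rather than purely aspirational, some organisations encode it as data that tooling can evaluate. The sketch below is a minimal illustration of that idea, assuming a hypothetical organisation that classifies its data and approves AI use cases per classification; every category and rule shown is an invented example, not a recommended policy.

    from dataclasses import dataclass

    # Illustrative mapping from data classification to the AI use cases
    # permitted for it. Categories and rules are hypothetical examples
    # for this sketch, not a recommended or complete policy.
    POLICY = {
        "public":       {"code_completion", "drafting", "summarisation"},
        "internal":     {"code_completion", "summarisation"},
        "confidential": set(),  # no AI processing without explicit review
    }

    @dataclass
    class AIUseRequest:
        data_classification: str  # e.g. "public", "internal", "confidential"
        use_case: str             # e.g. "code_completion"

    def is_permitted(request: AIUseRequest) -> bool:
        """Deny by default: allow only uses the policy explicitly lists."""
        allowed = POLICY.get(request.data_classification, set())
        return request.use_case in allowed

    assert is_permitted(AIUseRequest("public", "drafting"))
    assert not is_permitted(AIUseRequest("confidential", "drafting"))

The deny-by-default shape matters here: a use case absent from the policy is rejected rather than silently allowed.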

 

Without carefully considering how AI tools store and protect proprietary corporate, customer, and partner data, for example, organisations may expose themselves to security risks, fines, customer attrition, and reputational damage.

 

This is especially important for organisations in highly regulated environments, such as the public sector, financial services, or healthcare, which must comply with strict external regulatory and compliance obligations.

 

In fact, GitLab’s survey also showed that nearly half (48%) of respondents were concerned that AI-generated code for business-critical software applications might not be subject to the same copyright protection as human-written code, while 42% worried that AI-generated code might introduce security vulnerabilities.  

 

To ensure that intellectual property is contained and protected, organisations should create strict policies outlining the approved usage of AI-generated code in their innovation strategies and the underlying software development.

 

Leaders should conduct a thorough due diligence assessment when incorporating third-party AI platforms. It is vital to ensure that their data, both the model prompts and the outputs, will not be used for AI/ML model training and fine-tuning, which could inadvertently expose their intellectual property to other companies.
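
Due diligence of this kind is easier to apply consistently when the answers are captured in a structured form. The following minimal sketch shows one way to record and evaluate a vendor's responses; the field names and the 30-day retention threshold are invented for illustration, and no code check substitutes for contractual and legal review.

    from dataclasses import dataclass

    # Hypothetical due-diligence record for a third-party AI provider.
    @dataclass
    class VendorAssessment:
        vendor: str
        trains_on_customer_prompts: bool  # are prompts used for model training?
        trains_on_customer_outputs: bool  # are outputs used for fine-tuning?
        contractual_opt_out: bool         # is non-training guaranteed in writing?
        retention_days: int               # how long prompts/outputs are stored

    def passes_due_diligence(a: VendorAssessment, max_retention_days: int = 30) -> bool:
        """A vendor passes only if customer data stays out of training and
        fine-tuning, in writing, and retention is bounded."""
        return (
            not a.trains_on_customer_prompts
            and not a.trains_on_customer_outputs
            and a.contractual_opt_out
            and a.retention_days <= max_retention_days
        )

    candidate = VendorAssessment("example-ai-vendor", False, False, True, 14)
    print(passes_due_diligence(candidate))  # True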

 

While the providers of many popular AI tools can be less than transparent about the sources of their model training data, transparency will be foundational to the longevity of AI in the UK. When models, training data, and acceptable use policies are opaque and closed to inspection, it is harder for organisations to use those models safely and responsibly. 

 

Starting with a low-risk area

Organisations should look for safe and responsible ways to incorporate AI that align with their business goals while also considering how they may need to update security and privacy policies. 

 

First, organisations can avoid the potential pitfalls of implementing a new technology, such as data leakage and security vulnerabilities, by beginning where risk is lowest in their organisation. Identifying the lower-risk areas first allows them to build best practices for those areas before additional teams adopt AI.

 

Starting with the lower-risk areas means that those best practices can then scale safely across the organisation, its supply chain, and customer relationships. 

 

Baselining shared goals

Second, leaders should start by setting up conversations between their technical teams, legal teams, and their AI service providers. Agreeing on a baseline of shared goals can be critical to deciding where to focus and how to minimise risk with AI.

 

From these shared starting points, organisations can begin setting guardrails and policies for AI implementation, covering employee use, data sanitisation, in-product disclosures, and moderation capabilities. Organisations must also be willing to participate in well-tested vulnerability detection and remediation programmes.
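
As a concrete illustration of one of those guardrails, data sanitisation, the sketch below strips obvious personal data and credentials from a prompt before it leaves the organisation. The patterns are deliberately simple, invented examples; a production deployment would rely on a vetted PII and secret-detection library.

    import re

    # Minimal prompt-sanitisation sketch. These patterns are illustrative
    # only and are not a complete catalogue of sensitive data.
    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
        (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
        (re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
    ]

    def sanitise_prompt(prompt: str) -> str:
        """Redact obvious personal data and credentials before a prompt
        leaves the organisation's boundary."""
        for pattern, replacement in REDACTIONS:
            prompt = pattern.sub(replacement, prompt)
        return prompt

    print(sanitise_prompt("Ask jane.doe@example.com; api_key=sk-123abc"))
    # -> "Ask [REDACTED_EMAIL]; api_key=[REDACTED]"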

 

Choosing the right partner

Third, organisations can benefit from partners who help them adopt AI securely and ensure they are building on security and privacy best practices. Engaging with expert partners enables organisations to adopt AI successfully without sacrificing adherence to compliance standards or risking relationships with their customers and stakeholders. 

 

Company and IT leaders’ principal worries over AI and data privacy typically fall into one of three categories:

  1. Identifying the data sets being used to train AI/ML models;
  2. Pinpointing how the organisation is using proprietary data; and
  3. Understanding the retention of that proprietary data, including model output.

The more transparent an organisation’s partner or technology vendor is on these points, the better informed its leaders can be when assessing the business relationship. 

 

Proactive security and contingency planning

Finally, company leaders can create security policies and contingency plans covering the use of AI, and review how AI services handle proprietary and customer data, including the storage of prompts sent to, and outputs received from, their AI models. 
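
One practical building block for such a review is an audit trail of AI usage. The sketch below assumes, hypothetically, that requests to AI services pass through an internal gateway; it logs hashes of prompts and outputs so that reviewers can trace usage without the log itself retaining proprietary text. All names are illustrative.

    import hashlib
    import json
    import time

    def audit_record(user: str, vendor: str, prompt: str, output: str) -> str:
        """Build one audit-log entry for a single AI request. Hashes, not
        raw text, are stored, so the log supports review and incident
        response without duplicating proprietary data."""
        record = {
            "timestamp": time.time(),
            "user": user,
            "vendor": vendor,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        }
        return json.dumps(record)

    print(audit_record("a.user", "example-ai-vendor", "draft a release note", "..."))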

 

Without these guardrails, missteps could seriously set back the safe future adoption of AI. Although AI tools have the undoubted potential to transform companies and their core processes and workflows, they come with real risks, and technologists and business leaders are responsible for managing those risks properly. 

 

Since UK organisations are still in the earliest stages of embracing AI technologies for daily operations, it is too soon for many to fully assess the broader risks and issues that may require specific guidelines and frameworks.

 

As a result, most UK companies and public bodies have yet to develop standard AI guidelines. Still, many are already putting legal protections and assurances in place covering where code is sent, how it is processed, and what legal recourse exists for copyright infringement over intellectual property used by generative AI tools. 
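
Assurances about where code is sent are easier to uphold when they are enforced technically as well as contractually. The sketch below assumes a hypothetical internal proxy through which all AI traffic flows; the endpoint URL and processing region are invented for illustration.

    # Hypothetical egress control: AI traffic is allowed only to endpoints
    # whose data-processing terms have been reviewed. The entries below
    # are invented examples.
    APPROVED_AI_ENDPOINTS = {
        "https://ai.example-vendor.com/v1": "EU",  # processing region per contract
    }

    def check_egress(url: str) -> str:
        """Allow a request only if its destination is on the approved list."""
        for endpoint, region in APPROVED_AI_ENDPOINTS.items():
            if url.startswith(endpoint):
                return f"allowed (data processed in {region})"
        raise PermissionError(f"AI endpoint not approved: {url}")

    print(check_egress("https://ai.example-vendor.com/v1/completions"))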

 

How leaders in the UK adopt and integrate AI today will directly shape their competitiveness in this fast-moving field. By thoughtfully and strategically identifying priority areas in which to incorporate AI, brands can reap the benefits of these tools without creating vulnerabilities, compromising compliance standards, or risking relationships with their vital stakeholders.

Michel Isnard is Head of EMEA at GitLab 

 

Main image courtesy of iStockPhoto.com
