ao link
Business Reporter
Business Reporter
Business Reporter
Search Business Report
My Account
Remember Login
My Account
Remember Login

Where does the UK stand in the AI superpower race?

Laurie Mercer at HackerOne considers the role that international collaboration can play in promoting Britain’s place as an AI superpower

 

Last year, WPI’s report highlighted that the UK could be the next big global AI superpower, leading the way with public sector investment. But, in an increasingly crowded market, how does the UK fare now compared to its competition, and what role does security play in an increasingly malicious environment?

 

The UK has a long and distinguished history when it comes to AI. Alan Turing, considered the godfather of computer science and AI, made groundbreaking contributions during World War II with his work on the Turing Machine and the creation of the Turing Test. In 1951, Christopher Strachey created the first-ever computer chess program at the University of Manchester’s Ferranti Mark I computer. Other notable AI contributors include Thomas Bayes, Karen Spärck Jones, Geoffrey Hinton, Demis Hassablis, Nick Bostrom, and Andrew Ng.

 

Today, there is a rapidly changing and advancing ecosystem of AI-enabled apps and services. The stellar rise of ChatGPT and other generative AI solutions in less than a year has brought about further evolution.

 

In addition, the government is seriously investing in the industry for further development, and convening the first global summit on AI. So, perhaps it is time to consider some of the security concerns of AI and whether the UK can truly become an AI superpower. 

 

Securing new models

With new technology comes new vulnerabilities. Typically, AI models are open-source or trained on public information. The next wave of AI models will be trained on private or proprietary information, which raises the question of how to protect this information, and how to share vulnerability information between organisations to contain risk. 

 

Consider an AI model trained on health records to help identify early signs of cancer – or indeed any illness – in the population. Bad actors could infiltrate such a system to identify who in the neighbourhood was afflicted and exploit this fact. Something similar occurred in Singapore when a sophisticated attack on the Ministry of Health exposed one-third of all patient records.

 

That means novel methods of securing AI systems are necessary. During the infancy of cloud computing both novel and existing vulnerabilities profilerated as both attackers and defenders adapted to a new attack surface. The same will apply to AI: we have seen it most recently with prompt injection leading to Remote Control Execution in Langchain, and the takeover of Meta’s huggingface hub, both of which demonstrate the risk of poisoning and theft of AI models.

 

This developing attack surface is the focus of the global Open Web Application Security Project (OWASP). It recently published the Top 10 Vulnerabilities for Large Language Model Applications project, which includes prompt injection, insecure output handling, training data poisoning, and model theft.

 

Another interesting example is “Red Teaming.” AI labs around the world are incentivising and building communities of diverse talent to attack them, for good. These red team activities will enable AI services to prioritise best practices for secure coding tools, supply chain security, the management of exposed assets, and rewards for the disclosure of AI vulnerabilities, among other things. 

 

AI as a co-worker for productivity and profit gains 

Generative AI will best function as a complementary tool, not a replacement, to human labour. This means a potentially vast increase in productivity for UK organisations. Take Github Copilot, for example: an AI programming assistant that helps developers code more quickly, enabling them to concentrate on bigger problems, staying ‘in the flow’ longer, and making work more fulfilling.

 

Such digital assistants will enable knowledge workers to boost productivity substantially, shipping more code faster, contributing to faster product release cycles, and the potential for an acceleration in innovation and product development.

 

However, when it comes to AI-generated code, the issue is that a developer can create code without necessarily having the skills or ability to assess for security vulnerabilities. AI speeds innovation, but it also speeds the creation of vulnerabilities. Organisations must include human gatekeepers to review the generated code for flaws and weaknesses.

 

Where does the UK stand?

Many countries are making impressive strides to harness AI. The UK is positioned strongly, backed by its cultural heritage, native talent, and governmental support. These advantages all aid the UK in playing a leading role in the setting of standards, the development of tools, and the discovery of novel vulnerabilities.

 

Yet, the magnitude of AI’s potential means perhaps the secret to global success will lie in international collaboration, not competition. 

 

With the UK-hosted ‘AI Safety Summit’ fast approaching, the global combined strength at the event will aim to consider the risks of AI, especially at the frontier of development, and discuss how AI can be secured with rapid international action. And as our world is as digitally interconnected as ever, this global collaboration will allow continued innovation whilst ensuring worldwide security. 

 

The starting pistol in the AI race has barely been fired, but perhaps, looking at the wider picture, worldwide AI coordination is a better goal for the UK to strive towards, rather than looking for pole position and power. 

 


 

Laurie Mercer is a Security Architect at HackerOne

 

Main image courtesy of iStockPhoto.com

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543