
The risks and rewards of AI-generated computer code

Bertijn Eldering at HackerOne explains how vibe coding is reshaping both innovation and risk


In common with many other disciplines, AI is bringing profound change to the way software is developed. Once the exclusive preserve of industry specialists, software creation is now open to professionals and hobbyists alike through AI code-generation tools that are transforming traditional development processes. For those without formal programming knowledge, this falls under the banner of ‘vibe coding’, whereby AI translates the intent behind a piece of software into executable code.


This brings some impressive advantages. Not only does vibe coding lower the barrier to entry, but it can also accelerate prototyping and enable non-developers to build functional applications without the need for traditional programming skills or specialist tools.


At the same time, the emergence of vibe coding also signals a move towards community-centred software creation, where human oversight and shared responsibility become as integral to innovation as the code itself. Embedding security principles into this process, from the initial prompt through to production, ensures that progress and protection evolve in tandem. Indeed, applying a secure-by-design mindset can help raise the overall security baseline without slowing innovation.


On the flipside, however, it also removes development and review processes that industry professionals have honed over many decades. Yes, vibe coding democratises software development, but it also raises the very real prospect of introducing insecure, unverified or hallucinated code into repositories at scale. In fact, around 62% of AI-generated code solutions contain design flaws or known security vulnerabilities. These are often not exotic issues but familiar web weaknesses: string-concatenated SQL queries that expose injection risks, missing validation or access checks, and logic errors that emerge when business context is ignored.
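
To illustrate the first of those weaknesses, the sketch below (a minimal, hypothetical Python example, not drawn from the research cited above) shows how small the gap is between an injectable, string-concatenated query and a parameterised one.

    import sqlite3

    conn = sqlite3.connect("example.db")  # hypothetical database and schema

    def find_user_unsafe(username):
        # Pattern often seen in generated code: user input is concatenated
        # straight into the SQL string, so input such as "' OR '1'='1" can
        # change the meaning of the query entirely.
        query = "SELECT * FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(username):
        # Parameterised query: the driver treats the input purely as data,
        # closing off the injection route.
        return conn.execute(
            "SELECT * FROM users WHERE name = ?", (username,)
        ).fetchall()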


The generative model doesn’t “think security”; it simply offers the shortest path to functional code, often baking in unsafe routines that mirror the patterns in its training set. Left unchecked, that means a production system may run, but it remains exposed, and the exposure is growing as the feel-good speed of development eclipses careful security review.


On the wider industry side, recent research has put unchecked AI issues into sharp perspective, revealing a 210% increase in AI-related vulnerability reports over the past year, alongside a 540% surge in prompt injection attacks, which have become the fastest-growing threat vector. At the same time, organisations expanded AI program adoption by 270%, illustrating how rapidly both risk and innovation are scaling across the software ecosystem.


From vibe coding to vibe hacking

Let’s take a step back and examine some of the core issues more closely. Fundamentally, AI can quickly and effectively generate functional code, but it typically does so without awareness of secure design or regulatory requirements. As a result, security holes are often present by default, including missing validation, exposed APIs and sensitive information written directly into source code instead of being stored securely.
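
The last of those defaults is easy to picture. As a purely illustrative sketch (the key and environment variable names are assumptions, not taken from any real system), the difference between a hard-coded secret and one read from the environment is a single line of Python:

    import os

    # Risky default often produced by generated code: the secret sits in the
    # source file, so it ends up in version control, logs and shared snippets.
    # API_KEY = "sk-live-1234567890abcdef"   # hypothetical hard-coded key

    # Safer default: read the secret from the environment (or a secrets
    # manager) so the source code never contains it.
    API_KEY = os.environ["PAYMENTS_API_KEY"]  # hypothetical variable name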


If that weren’t concerning enough, vibe-coded applications are often deployed without testing or governance, especially in SMEs or personal projects. The volume of these applications is growing quickly as users are empowered to turn innovative ideas into real-world digital tools. The level of risk multiplies significantly when these projects handle sensitive or regulated data, such as personally identifiable information (PII) or protected health information (PHI).


Let’s be clear: the majority of the security issues this creates are driven not by malicious intent but by misplaced trust in automation. Beyond the immediate security risks, the result is a growing “trust deficit” between perceived and actual software security.


Others, however, are using vibe coding tools deliberately to create malicious code or to modify existing applications for exploitation. WormGPT is just one example, positioned as an ‘AI tool for hacking and dark work’. It makes it very easy for cybercriminals to create sophisticated, malicious code without needing the associated knowledge or experience.


WormGPT is only one of many AI ‘vibe hacking’ tools out there; the underlying point is that they lower the barrier to cybercrime. Taking this a step further, autonomous AI agents are also capable of finding and exploiting vulnerabilities without human oversight. By automating the process, these ‘hackbots’ can scan vast attack surfaces and produce tailored exploits in real time. Their use amplifies the scale and speed of attacks, making attribution far harder and shifting the security imperative from occasional testing to continuous mitigation.


Stronger safeguards

These are worrying developments, but where do they leave the security industry and the teams responsible for keeping threats at bay? Firstly, the vibe coding trend (in all its forms) requires that industry professionals adopt a new mindset, whereby the emphasis on periodic testing is replaced with an approach that builds security into the development process from the start.


This should begin with an understanding of where risk enters the process, in particular through early threat modelling that treats AI-generated code as a distinct source of potential weakness rather than relying on the dangerous assumption that it meets established standards.


This means that human-in-the-loop review, including community collaboration, remains absolutely essential for ensuring that every AI output is checked for accuracy, compliance and secure design before deployment. There is also an important role here for security researchers and bug bounty programs, which can provide a trusted external check on defences and, in doing so, help organisations see how these systems behave under real-world pressure. When integrated with AI-driven testing and triage, this combination of human insight and automation delivers a stronger defence at every stage of the software lifecycle.


Oversight also matters, and developers need a record of which prompts, models and data sources shaped each piece of generated code. Governance frameworks must adapt to define how AI tools are utilised and approved, as well as how sensitive data is managed.
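
What such a record might look like is an open design question. As a purely illustrative sketch (the field and model names are assumptions, not an established standard), it could be as simple as structured metadata committed alongside each generated file:

    from dataclasses import dataclass, field, asdict
    from datetime import datetime, timezone
    import json

    @dataclass
    class GenerationRecord:
        # Hypothetical provenance fields; the names are illustrative only.
        file_path: str      # where the generated code ended up
        model: str          # which model produced it
        prompt: str         # the prompt that shaped it
        data_sources: list  # context or data fed into the prompt
        reviewed_by: str    # human-in-the-loop sign-off
        generated_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    record = GenerationRecord(
        file_path="app/payments.py",          # hypothetical file
        model="example-code-model-v1",        # hypothetical model name
        prompt="Create an endpoint that refunds a customer order",
        data_sources=["internal API schema"],
        reviewed_by="j.smith",
    )

    # Stored alongside the code (for example, committed to the repository),
    # the record gives reviewers and auditors a trail from prompt to production.
    print(json.dumps(asdict(record), indent=2))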


The risks associated with vibe coding in all its forms aren’t just something to bear in mind for the future; the pace of development is so rapid that organisations need to be examining their processes and potential vulnerabilities right now. In the rush to adopt AI, it’s vital also to create safeguards that ensure software development processes retain their strong security focus, and that human-in-the-loop processes continue to provide the guardrails that maximise protection.


Bertijn Eldering is an Associate Sales Engineer at HackerOne


Main image courtesy of iStockPhoto.com and JuSun
