ao link
Business Reporter
Business Reporter
Business Reporter
Search Business Report
My Account
Remember Login
My Account
Remember Login

AI is changing how we code - let’s keep it safe

Michael Burch at Security Journey describes how AI is changing software development and what the future could look like

 

We are in the midst of National Coding Week, an annual event dedicated to encouraging people of all ages and abilities to explore the world of coding. In today’s digital world, coding has become an invaluable skill for everyone, not just those working in tech, as it opens the door to innovation, creativity, and career advancement.

 

This year’s theme, Artificial Intelligence (AI), couldn’t be more timely given the transformative impact AI is having on software development. From automating repetitive tasks to generating entire applications from simple prompts, AI is making coding more accessible than ever before. But with this accessibility comes a fresh set of challenges and responsibilities, especially around ethics, data security and the role of human oversight.

 

 

A new era of software development

One of the most transformative trends currently reshaping software development is vibe coding, a term coined just seven months ago to describe a groundbreaking approach where users express their intentions in natural language and AI translates those ideas into executable code. This intuitive method is totally changing how software is built, making development more accessible than ever before.

 

Vibe coding platforms have seen explosive growth, with Stockholm-based startup Lovable emerging as the fastest-growing software startup in history. The appeal spans across experience levels: seasoned developers are leveraging vibe coding to accelerate workflows, while junior analysts and complete newcomers are using it to bring their ideas to life without needing traditional coding and programming skills.

 

Even high-profile tech leaders, such as Google CEO Sundar Pichai, have acknowledged the potential of this paradigm shift. Meanwhile, individuals from non-technical backgrounds are proving that coding is no longer reserved for the few with formal training.

 

This democratisation of software development marks a profound cultural shift. According to the latest Stack Overflow Developer Survey, one-third of those learning to code now view AI tools positively, emphasising growing trust and enthusiasm. With barriers to entry rapidly disappearing, there’s a renewed optimism that more people will be empowered to explore coding, innovate freely, and contribute to the digital economy.

 

 

The dangers of AI dependency

However, the rise of AI-assisted coding is not without its risks. A study of 800 developers revealed that code generated using GitHub Copilot had a 41% higher bug rate. While experienced developers are more likely to review AI-generated code and catch and correct these issues, junior developers may lack the skills, training and knowledge to review it with the same rigour.

 

It is therefore no surprise that 74% of organisations have experienced incidents due to insecure code, with nearly half having suffered from multiple breaches. As AI tools become more integrated into development workflows, they don’t just increase productivity; they also introduce new security vulnerabilities, especially when developers assume that AI’s output can be trusted.

 

 

AI skills are in, but so is security

AI tools are here to stay, and developers should absolutely be leveraging them. When peers look down on others for using AI - especially in the name of some old-school notion of “real” engineering - that’s not just unhelpful, it’s counterproductive. Bias like that has no place in modern development teams, and those who cling to it will be left behind as AI continues to reshape the way we build software.

 

Proficiency in AI-assisted development is an asset in the current job climate and research by Clutch shows that 3 in 4 developers believe AI tool proficiency impacts hiring decisions. But it’s also introducing new vulnerabilities at scale – it is like handing a power tool to someone who’s never built a house.  

 

It is therefore essential to pair AI with a strong foundation in secure coding practices and principles, and have the ability to critically evaluate and review AI-generated code before deployment. This means embedding AI-specific risks into development training, treating large language models (LLMs) as untrusted components during threat modelling and mandating testing and documentation for AI-generated code

 

For beginners, secure coding knowledge is especially important. Understanding the fundamentals of secure development and issues like prompt injection, model leakage and hallucination squatting can all help prevent costly mistakes and lay the groundwork for long-term success, particularly when using AI tools.

 

As AI’s role in software development continues to grow, every step forward with AI invites a counterpunch from threat actors. We must train developers not just to use these tools, but to challenge them. This National Coding Week, let’s celebrate the idea that anyone can code.

 

Whether you’re 15 or 50, a hobbyist or a professional, there’s a place for everyone in the coding community. But it must be done securely. With the right support, training, and secure code knowledge, AI can be a powerful ally as we continue on this journey of digital transformation. 

 


 

Michael Burch is Director of Application Security at Security Journey

 

Main image courtesy of iStockPhoto.com and Khanchit Khirisutchalual

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543