Business Reporter

Is AI getting dumber?

Tom Smith at consumer insights platform GWI thinks that, if the issue of training data is not addressed, it’s only a matter of time before AI’s progress plateaus

 

Is AI getting dumber? It’s a provocative question, especially at a time when AI seems to be everywhere, doing everything from driving our cars to writing our emails.

 

If you ask tech leaders like Sam Altman and Elon Musk, they’d probably say no. In fact, Altman has spoken publicly about his prediction that AI will eventually outsmart us. But what if the opposite is already beginning to happen? What if, instead of getting smarter, AI is hitting its ceiling?

 

The truth of the matter is that today’s AI isn’t evolving in a vacuum. It’s not learning like a human would, through intuition, experience or emotion. Its intelligence is entirely dependent on the data we feed it, and when it comes to data, even Altman and Musk admit we are running into a problem.

 

So no, AI isn’t getting dumber yet. But it’s only a matter of time until it does.

 

 

The ‘Peak AI’ problem

The rapid growth we’ve seen in AI models hasn’t happened because the algorithms have evolved on their own. Every breakthrough we’ve seen so far has been driven by feeding these systems vast amounts of data scraped from sources across the internet, such as journals, publications and social media. But that supply is now running out.

 

It’s a phenomenon Altman and Musk have dubbed “Peak AI”, with Ilya Sutskever, co-founder of OpenAI, likening internet data to fossil fuels: a finite resource that will eventually be depleted.

 

“Peak AI” is a problem because without new knowledge to fuel further training, even the most advanced AI systems will struggle to improve. And that’s a growing concern for businesses increasingly relying on AI.

 

 

AI stuck in a loop: the risk of model collapse

The shortage of fresh data to feed the constant training cycle that AI relies on has led to an emerging concern: model collapse. This is what happens when an AI system has no more external data to train on and, as a result, “feeds” on its own output in a closed loop, steadily deteriorating in performance.
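The mechanics of this closed loop can be shown in miniature. The Python sketch below is purely illustrative (a toy resampling loop, not how production language models are actually trained): each “generation” is trained on the previous generation’s output by resampling from it, and any piece of data that fails to appear in one generation’s sample is lost for good, so the diversity of the data can only shrink.

```python
import random

def simulate_collapse(vocab_size=200, sample_size=200, generations=30, seed=0):
    """Toy model-collapse loop: each generation 'trains' on the previous
    generation's output by resampling from it with replacement. A token
    that misses one sample can never reappear, so diversity only shrinks."""
    rng = random.Random(seed)
    corpus = list(range(vocab_size))        # generation 0: real, diverse data
    diversity = [len(set(corpus))]
    for _ in range(generations):
        corpus = [rng.choice(corpus) for _ in range(sample_size)]
        diversity.append(len(set(corpus)))  # distinct tokens surviving
    return diversity

diversity = simulate_collapse()
print(diversity[0], "->", diversity[-1])    # diversity shrinks generation by generation
```

Real models collapse in subtler ways, typically losing the rare “tail” of their data first, but the underlying mechanism is the same: without fresh external data, each training cycle can only reproduce, and gradually narrow, what the previous one produced.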

 

For businesses that depend on AI in their workflows, model collapse is a real operational risk. It can lead to AI systems producing inaccurate responses and, in extreme cases, even becoming completely unusable.

 

The risk of model collapse highlights a simple lesson: poor training data in means poor results out. Ensuring the quality of that training data is crucial.

 

 

Synthetic data isn’t a substitute for real experience

To combat the shortage of new data, many businesses have turned to synthetic data as a solution. Synthetic data, such as AI-generated survey responses and predictive outputs, is designed to mimic real-world patterns in place of human responses.

 

However, relying solely on synthetic data is a double-edged sword. Without genuine human insights in their training data, AI systems can quickly fall into a loop of recycled, synthetic content, pushing us towards ‘model collapse’ territory.

 

Over time, an AI trained only on synthetic data may also start to repeat the flaws and biases of the old data: each new generation becomes more distorted and less reflective of what is actually happening in the real world. That is a serious problem for anyone drawing business insights from synthetic data.

 

When thinking about the use of synthetic data and the risk of model collapse, it’s important to remember that although an AI model can talk to you like a human, it can’t think like one. For businesses, the biggest issues lie in authenticity and accuracy.

 

AI-generated data is based on patterns and information the AI has already encountered. In short, it can only generate what it already knows. As a result, it lacks the authentic responses and context that come from real human experience.

 

The most important thing businesses need to remember is that AI works best when it’s grounded in real life. Synthetic data can help fill small gaps, but it isn’t a sustainable foundation on its own.

 

 

AI isn’t getting dumber – but the data is

So, is AI getting dumber? Not exactly… but it’s not on a guaranteed path to getting smarter. AI’s future depends less on better algorithms and models and more on better data – the kind that reflects real human experience. Synthetic inputs can only take us so far.

 

While this isn’t a dead end for businesses using AI, it should act as a wake-up call. AI isn’t getting dumber because it’s defective or broken, but because we’ve mistaken quantity for quality. Today’s models aren’t failing us yet, but they will if we don’t give them what they truly need: authentic, real-world data.

 

In the end, the goal isn’t just to make AI smarter, but to prevent it from falling behind. To do this, businesses should aim to keep it grounded in reality, reflecting what people actually think, feel, and do.

 


 

Tom Smith is CEO and founder of consumer insights platform GWI.

 

Main image courtesy of iStockPhoto.com and PhonlamaiPhoto

Business Reporter

Winston House, 3rd Floor, Units 306-309, 2-4 Dollis Park, London, N3 1HF

23-29 Hendon Lane, London, N3 1RT

020 8349 4363

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543