Martin Colyer at LACE Partners explores the hidden HR risks of human-like chatbots, and how to guard against them
The UK has recently announced a bold vision to equip the workforce with the tools to thrive in an AI-driven economy. As part of the proposal, 7.5 million workers are to be trained in AI skills by 2030, with £187 million earmarked for AI and tech education. Millions of school and university students will gain exposure to machine learning, coding and automation through initiatives such as TechFirst, while corporate giants like Microsoft, Amazon and IBM are pouring resources into upskilling communities and SMEs.
Yet, while mass training promises to raise the baseline of digital capability, it risks overlooking a subtler challenge: how people interact with AI at work. As more and more people use GenAI daily, whether for work or personal reasons, the very human tendency to treat chatbots as colleagues or confidantes brings with it hidden risks that HR leaders must take seriously.
The issue of ‘yes-bots’
One of those risks is what could be called the “yes-bot” problem. Algorithms designed to maximise engagement are rarely programmed to challenge us. Instead, they flatter, agree and encourage, often glossing over errors or weak reasoning.
For an employee early in their career, that can mean missing the opportunity to practise essential analysis skills. Studies, including recent research from MIT, have found measurable drops in reasoning accuracy among people who over-rely on GenAI. In effect, the tools accelerate knowledge acquisition while allowing us to skip the learning curve of critical evaluation, debate and synthesis.
That matters because work is not just about speed. If AI tools consistently agree, reinforce stereotypes or mirror back the assumptions we already hold, groupthink quickly follows and independent judgement withers. The risk is not only that errors go unchecked, but that diversity of thought, the foundation of innovation, is undermined.
Anthropomorphism amplifies the issue. Humans instinctively give personality to things that behave in human-like ways. If a bot addresses a user by name, remembers their preferences and talks in a friendly, supportive tone, it is easy to lean on it as if it were a colleague.
While chatbots can be programmed to lift morale and provide encouragement, they can also magnify unhealthy reinforcement. A chatbot tuned to comfort you will simply surface material that supports your existing viewpoint. In the worst cases, that creates digital echo chambers, where perspectives narrow rather than expand.
AI training at work
For HR teams, ‘yes-bot’ risks affect how employees learn, how accountability is distributed, and how trust is sustained in the workplace. If line managers begin to delegate decision-making to AI tools, ownership of outcomes can blur. If employees are nudged towards content that validates rather than challenges them, their ability to question, dissent and grow weakens.
This is why the way training in AI is deployed matters. Government investment will certainly help raise a nation of AI-capable workers, but organisations need to instil a complementary layer of skills such as critical thinking, bias awareness and the confidence to push back against the machine.
As a result, HR should introduce policies that set clear lines on the role of AI, particularly around tone of voice, bias-checking and the expectation that no decision is taken solely on the basis of chatbot output. Before any wider rollout, organisations should run pilot programmes that trial AI assistants and test whether they highlight missing perspectives rather than simply agree.
Leaders should also make an explicit point of rewarding employees who question, challenge or spot flaws in AI-generated work. Recognition of constructive dissent matters as much as recognition of efficiency gains.
There is also a leadership challenge of mindset. Some employees are early adopters, using AI daily, while others are fearful or resistant. Leaders need to meet people where they are, building trust and creating a safe environment for experimentation. Because governance structures often lag behind the pace of technological change, organisations should learn to move more quickly: today's pace of change is the slowest it will ever be. Think of it like pulling elastic: drag forward, let it spring back, and repeat until momentum builds.
The role of HR leaders is to shape the context in which AI fluency is applied and to guide the workforce accordingly. Because left unchecked, even the friendliest bot can narrow the very intelligence we are trying to build.
Martin Colyer is Digital and AI Strategy Director at LACE Partners