Business Reporter

AI: automation or empathy?

Simon Noble at Cezanne asks whether avatars are ready for the realities of work

 

AI avatars have moved from sci-fi novelty to boardroom reality. Whether it’s UBS deploying virtual assistants, Klarna trialling AI-powered agents, or Zoom offering virtual avatars for meetings, the use of synthetic personas in the workplace is no longer experimental; it’s becoming the norm.  

 

For business leaders, the potential is enticing: AI avatars promise efficiency, scalability and cost savings. But while technology may streamline processes, when it comes to people, empathy is not optional. 

 

As AI becomes ever-present in our daily working lives, it’s time for an honest look at the pros, the pitfalls and the limits of what avatars can and should do. 


Smart support for everyday tasks 

Used well, AI avatars can significantly ease the administrative load in HR and beyond, freeing human teams to focus on higher-value work that requires judgement, empathy and strategic thinking. Implemented thoughtfully, these tools offer real-time assistance, reduce bottlenecks and provide consistency across departments and time zones. 

 

AI avatars are especially effective for routine, repetitive queries: areas where employees want quick, accurate information without the need for face-to-face conversation or interpretation. For example, AI avatars can respond instantly to questions about annual leave balances, company policies, expense procedures or training schedules. In doing so, they spare employees from chasing answers over email. The result is a smoother employee experience and a more productive HR team. 


The importance of empathy  

For roles where culture, communication, or emotional intelligence are central, removing the human element does more than feel impersonal; it can actively undermine trust and discourage engagement. Put simply, AI avatars, no matter how advanced, lack the emotional depth, nuance and intuition that human beings bring to sensitive conversations. 

 

Employees might be entirely comfortable asking an avatar how many days of annual leave they have left. But when the conversation shifts to a more complex or emotional topic, such as reporting harassment, raising concerns about workplace discrimination, or sharing experiences of stress, burnout or financial strain, the limitations of AI become immediately apparent. 

 

Empathy is not just about responding politely or using the right tone; it’s about listening actively, reading between the lines, offering reassurance, and making people feel seen and heard. This is especially important in HR functions where psychological safety and trust are essential.  

 

There’s also a reputational risk that comes with an over-reliance on AI tech. Humans form opinions quickly, and if their first interaction with an organisation is through a lifeless AI avatar, particularly in interviews or onboarding, it may suggest a culture more focused on process than on people. This can be especially damaging in sectors that rely on strong interpersonal relationships, such as education, care, health, creative industries or non-profit organisations. 


Appropriate use  

The use of AI avatars often sits in a grey area, where the ethical expectations surrounding what they can and should handle are unclear. 

 

For example, asking an avatar about annual leave policies is appropriate; however, what about checking sick leave entitlements while dealing with a long-term health issue? Or submitting expenses that relate to a personal or sensitive situation? These interactions fall into grey areas, as they appear administrative, yet can involve emotions, stress, or confidential context that an avatar is not equipped to manage. 

 

The more intelligent and human-like avatars become, the more difficult it is to draw the line between helpful assistance and inappropriate automation. People may begin to assume that the avatar is capable of handling more than it should. This can create false expectations and, in some cases, a damaging loss of trust if the avatar responds poorly or inappropriately. 

 

From a regulatory perspective, these grey areas introduce significant risk. AI avatars often interact with data that falls under strict privacy and compliance regimes, such as the EU GDPR and the UK GDPR. 

 

This includes employee health records, financial data, performance history, and even disciplinary notes. If these systems are not carefully controlled, audited, and monitored, they can inadvertently expose personal information, store it inappropriately, or fail to secure it properly. The consequences are not only legal but also reputational. 


Choosing technology with purpose

The future of AI in the workplace is not about replacing people. It should focus on empowering them, removing repetitive tasks, improving access to information, and freeing up time for work that requires empathy, creativity, and critical thinking. 

 

Organisations that succeed with AI avatars will not be those that adopt the latest technology for its own sake. They will be those that introduce it with care, align it with their values, and continually assess how it affects their people and culture. 

 

In the drive to innovate, we must not lose sight of what matters most: human connection, trust, and the reassurance of being heard by someone who understands. Technology should enhance the workplace, not remove the humanity from it. 


Simon Noble is CEO of Cezanne

 

Main image courtesy of iStockPhoto.com and imaginima


© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543