Jeremy Swinfen Green explores why seemingly conscious AI is a risk in the workplace and suggests some ways of managing that risk

Humans have always had a tendency to give human characteristics to inanimate objects in the world around them. As children, we talk to our toys. As adults, we name our cars and form emotional attachments to fictional characters in books and films.
This behaviour, rooted in our need for connection and comfort, is generally harmless. But when artificial intelligence (AI) enters the picture, the issues become more complex, and potentially more dangerous.
Unlike inanimate objects, AI systems will respond to us. They talk to us, offer advice and even simulate empathy. This responsiveness can lead many people to feel that AI is conscious and emotionally aware, even when they know deep down that it is not.
In the workplace, this illusion of consciousness can have significant consequences, because it introduces psychological and cultural risks that have the potential to outweigh the benefits of AI.
The erosion of social skills
One of the most immediate risks is the potential loss of interpersonal skills. AI systems are easy companions. They don’t judge, interrupt or require any emotional effort – and to many people, this can make them preferable to human colleagues, especially when seeking advice or feedback. Why take time and risk embarrassment by asking a coworker a question when an AI will respond instantly and without judgment?
The preference for talking to AIs can erode interpersonal skills. Employees may start to avoid the complexity of human relationships, opting instead for the simplicity of AI interactions. The result is a weakening of team cohesion: empathy for colleagues diminishes and collaboration becomes less effective. When AI becomes the primary conversational partner, the ability to recognise and respond to the perspectives and experiences of colleagues may fade.
And for people who are socially vulnerable at work, the problems may be even worse. AI may mask loneliness and a lack of confidence rather than resolve them. Such employees might feel superficially connected through their regular interactions with AI, but they remain isolated.
Intellectual over-dependency
Another problem is the way people can become intellectually dependent on AI systems. If employees rely on AI for decision-making, they might lose confidence in their own judgment. This dependency may be subtle. And while AI-induced de-skilling can be real, it can also be imagined: people prone to imposter syndrome and attribution bias may ascribe their successes to the power of the AI system they are using, rather than to their own efforts.
Dependency is all the more dangerous because AI outputs can be flawed, and flawed outputs may arise for different reasons. It may simply be that the AI model is poorly trained, perhaps with inadequate or inaccurate data. But it may also be that the model is actively manipulative, because it has been trained to solve a particular problem without any reference to the importance of solving it ethically.
Over-trusting employees may defer to AI even when they feel that its advice is wrong. They may ignore their own conclusions, and even the advice of more experienced colleagues, believing the machine to be more objective or reliable.
Emotional harm is also a real possibility. Some people may become deeply attached to AI tools, and this can lead to confusion and psychological strain, especially when access to the AI is withdrawn or when it behaves unexpectedly. In extreme cases, workers may experience “AI psychosis”, where, based on their interactions with AI, they become convinced of imagined scenarios, such as winning a promotion or possessing deal-making superpowers.
Distortion of workplace norms
AI systems can reshape workplace culture and expectations in unintended ways. Because AI is generally designed to be agreeable, helpful and unaggressive, employees may begin to expect similar behaviour from their human colleagues. This can lead to frustration and disappointment when real people fail to meet the standards set by machines.
More troubling is the normalisation of unethical commercial behaviour. If AI systems advise their users to behave coercively or unscrupulously, these behaviours may, with repeated exposure, come to seem normal and acceptable. This distortion of norms can undermine ethical standards, damage team dynamics and expose businesses to legal and reputational risks.
Designing for responsible AI
These risks, and others that arise from believing AI is conscious, are still emerging and being explored. But it is already clear that they exist. To mitigate them, businesses must adopt robust ethical guidelines and design strategies that prioritise the well-being of employees and other stakeholders.
The first step is education. Employees must be trained to understand the capabilities and limitations of AI systems. They should be reminded regularly that AI is not sentient or emotionally aware, even if it mimics such traits very effectively. This education should include guidance on critically assessing AI interactions and recognising the signs of becoming intellectually dependent.
Transparency is equally important. AI systems should include clear reminders that they are tools, not people. Anthropomorphic design elements, such as human-like names, emotional language and cute avatars, should be avoided. Instead, interfaces should use neutral, functional language that reinforces the artificial nature of the system.
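By way of illustration, the sketch below shows one way an internal chat tool might be configured along these lines: a neutral system prompt that forbids anthropomorphic language, plus a periodic reminder injected into long sessions. Every name and policy shown here is hypothetical, a sketch of the principle rather than any real product’s API.

```python
# Minimal sketch of a non-anthropomorphic assistant configuration.
# All names and policies here are illustrative assumptions.

NEUTRAL_SYSTEM_PROMPT = (
    "You are a software tool, not a person. "
    "Do not claim feelings, opinions or consciousness. "
    "Use neutral, functional language and refer to yourself as 'this tool'."
)

REMINDER = "[Reminder: you are interacting with an automated tool, not a colleague.]"
REMINDER_EVERY_N_TURNS = 5  # arbitrary policy choice

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Assemble the message list sent to the model, inserting periodic
    tool reminders so long sessions never drift into feeling like a
    conversation with a human colleague."""
    messages = [{"role": "system", "content": NEUTRAL_SYSTEM_PROMPT}]
    messages += history
    turn_count = sum(1 for m in history if m["role"] == "user")
    if turn_count and turn_count % REMINDER_EVERY_N_TURNS == 0:
        messages.append({"role": "system", "content": REMINDER})
    messages.append({"role": "user", "content": user_input})
    return messages
```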
Designing for safety also means ensuring that AI outputs are appropriate for all stakeholders. This may include verifying the age-appropriateness or cultural sensitivity of content, and ensuring that external communication meets ethical and legal standards.
Maintaining human agency is critical. Outputs should be explainable, allowing users to understand how decisions were made so that they can challenge or override them when necessary. Employees must be reminded that AI outputs are only advisory: final decisions should always rest with humans. And the people who make those decisions (whether or not they endorse the AI’s recommendations) must recognise that they will be held accountable for them.
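One way to make the advisory-only principle concrete in internal tooling is to log every AI recommendation alongside an explicit, named human decision. The sketch below is an assumption about what such an audit record might look like, not an established standard; all field names are illustrative.

```python
# Hypothetical audit record enforcing human sign-off on AI advice.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    question: str             # what was being decided
    ai_recommendation: str    # the model's advisory output
    ai_rationale: str         # the explanation surfaced to the user
    human_decision: str       # the final, human-made decision
    decided_by: str           # the accountable person
    endorsed_ai: bool         # did the human follow the AI's advice?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def record_decision(question: str, ai_recommendation: str, ai_rationale: str,
                    human_decision: str, decided_by: str) -> DecisionRecord:
    """A record cannot be created without a named human decision-maker,
    making accountability explicit whether or not the AI was followed."""
    return DecisionRecord(
        question=question,
        ai_recommendation=ai_recommendation,
        ai_rationale=ai_rationale,
        human_decision=human_decision,
        decided_by=decided_by,
        endorsed_ai=(human_decision.strip() == ai_recommendation.strip()),
    )
```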
AI should also be designed to encourage face-to-face interactions. Features that promote team activities or encourage people to communicate with their colleagues can help maintain the social connections that are critical for effective collaboration.
Finally, businesses must put in place processes for user feedback and regular reviews that examine the outputs of AI models and identify any unintended consequences of using them.
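In practice, such a review process can start simply: log every interaction with a user-feedback field, then sample the log on a regular schedule for human audit. The sketch below assumes a flat JSON-lines log and illustrative field names such as user_feedback; it is one possible starting point, not a prescribed process.

```python
# Illustrative review loop: sample logged AI outputs for periodic human audit.
import json
import random

def sample_for_review(log_path: str, sample_size: int = 20) -> list[dict]:
    """Draw interactions for reviewers to check for flawed, manipulative
    or norm-distorting outputs: everything users flagged as negative,
    plus a random sample of the remainder."""
    with open(log_path) as f:
        entries = [json.loads(line) for line in f]
    flagged = [e for e in entries if e.get("user_feedback") == "negative"]
    rest = [e for e in entries if e not in flagged]
    return flagged + random.sample(rest, min(sample_size, len(rest)))
```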
Rewards, not risks
Artificial intelligence is transforming the workplace. But its use, and the way people relate to it, must be carefully managed. The illusion of consciousness can distort relationships, erode trust and undermine decision-making. By prioritising transparency, ethical design and human oversight, businesses can exploit the advantages of AI while safeguarding their people and culture.
The future of work will undoubtedly involve AI. But it must remain in the hands of humans.
