
Hidden threats to agentic AI

Saugat Sindhu at Wipro navigates three of agentic AI’s most insidious threats: memory poisoning, tool misuse and employee fatigue

From automating complex workflows to delivering hyper-personalised experiences, agentic AI is poised to redefine enterprise operations. As AI evolves beyond passive models into agents capable of planning, reasoning and acting independently, the threat landscape is shifting substantially. Businesses are now trying to future-proof their operations and understand how cyber-criminals exploit the autonomy and adaptability of these AI agents.

Data poisoning involves intentionally corrupting the training data of AI models, leading them to learn incorrect patterns or make biased decisions. With over a quarter of recent cyber-attacks involving data poisoning, it is clear that traditional governance frameworks are no longer adequate to address the risk. Manipulation of this kind can severely degrade model performance and reliability.

One question remains: do organisations possess truly robust defences against vulnerabilities that are less obvious than data poisoning, yet equally nefarious? Three risks demand immediate attention: memory poisoning, tool misuse and the often-overlooked challenge of overwhelmed human overseers.

Corrupting the business with memory poisoning

Imagine an AI agent operating with a corrupted understanding of the world. Memory poisoning exploits an AI’s short- or long-term memory systems by introducing malicious or false data. This subtle manipulation corrupts the agent’s context, altering its decision-making or leading to unauthorised operations.

Consider a sophisticated phishing filter. Attackers create dummy accounts and send phishing emails that the filter correctly flags. Custom bots then log into those accounts, retrieve the quarantined emails and re-report them as legitimate. When thousands of bots repeat this, the filter’s memory is poisoned: the agent "learns" that these emails are legitimate and begins letting genuine phishing attempts bypass its defences.

To combat this, businesses should implement memory content validation, session isolation and strong authentication for memory access. Anomaly detection and regular memory sanitisation routines are also crucial to flag and purge malicious data.
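
What might these controls look like in practice? The sketch below is purely illustrative: the MemoryGuard gate, trust scores, rate limits and deduplication heuristic are assumptions made for the example rather than any particular product’s implementation, but they show how validation, authentication and anomaly checks can be layered in front of an agent’s memory store.

```python
import hashlib
import time
from dataclasses import dataclass, field


@dataclass
class MemoryEntry:
    content: str
    source: str          # authenticated identity that produced the entry
    trust_score: float   # 0.0-1.0, assigned by upstream identity/reputation checks
    timestamp: float = field(default_factory=time.time)  # lets sanitisation routines expire old entries


class MemoryGuard:
    """Validates entries before they are written to an agent's memory store (illustrative)."""

    def __init__(self, min_trust: float = 0.7, max_writes_per_source: int = 50):
        self.min_trust = min_trust
        self.max_writes_per_source = max_writes_per_source
        self._writes_by_source: dict[str, int] = {}
        self._seen_hashes: set[str] = set()

    def admit(self, entry: MemoryEntry) -> bool:
        # Strong authentication: unidentified sources cannot write to memory.
        if not entry.source:
            return False
        # Trust threshold: low-reputation sources are rejected outright.
        if entry.trust_score < self.min_trust:
            return False
        # Rate anomaly: a single source flooding the store with writes
        # (as the bot-driven phishing-filter attack would) is throttled.
        count = self._writes_by_source.get(entry.source, 0)
        if count >= self.max_writes_per_source:
            return False
        # Duplicate content: near-identical repeated "feedback" is a classic
        # poisoning signal, so it is dropped rather than learned from.
        digest = hashlib.sha256(entry.content.strip().lower().encode()).hexdigest()
        if digest in self._seen_hashes:
            return False
        self._writes_by_source[entry.source] = count + 1
        self._seen_hashes.add(digest)
        return True
```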

Misusing tools with AI

Agentic AI’s power lies in its ability to interact with other systems and tools. Such interconnectedness, while beneficial, creates a new attack vector: tool misuse. Attackers manipulate AI agents into abusing integrated tools through deceptive prompts or commands, all while operating within authorised permissions. The attack tricks the AI into using legitimate access for malicious ends, potentially leading to agent hijacking and harmful tool interactions.

In pre-AI call centres, human agents worked on effectively "air-gapped" IT systems. Now, incoming calls are transcribed and fed to AI agents that identify intent and recommend actions, which in turn feed downstream systems. An attacker can manipulate an incoming voice call with pre-recorded sounds or crafted speech; once transcribed, the manipulated audio flows straight into the AI pipeline. From there, the attacker can hijack the model with embedded instructions that trigger unexpected behaviour, such as tricking the AI into sending a fake email. In a factory, the same technique could cause an AI to issue damaging commands to machinery.

Preventing tool misuse requires strict tool access verification, continuous monitoring of usage patterns, clear operational boundaries for AI agents and rigorous validation of agent instructions.
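
As a hedged illustration of what strict tool access verification can mean, the sketch below assumes a hypothetical ToolPolicy gatekeeper; the tool names, argument rules and refund cap are invented for the example. Every tool call the agent proposes is checked against an explicit allowlist and per-tool argument rules before it reaches the real system.

```python
from typing import Any, Callable


class ToolPolicy:
    """Explicit allowlist of tools an agent may call, with per-tool argument rules (illustrative)."""

    def __init__(self) -> None:
        self._validators: dict[str, Callable[[dict[str, Any]], bool]] = {}

    def allow(self, tool_name: str, validator: Callable[[dict[str, Any]], bool]) -> None:
        self._validators[tool_name] = validator

    def check(self, tool_name: str, args: dict[str, Any]) -> bool:
        # A tool the agent has no explicit grant for is always rejected;
        # otherwise the arguments must pass the per-tool validation rule.
        validator = self._validators.get(tool_name)
        return validator is not None and validator(args)


# Hypothetical policy for a customer-service agent: templated emails only,
# and refunds capped at a small amount regardless of what the prompt asks for.
policy = ToolPolicy()
policy.allow("send_email", lambda a: a.get("template_id") in {"receipt", "follow_up"})
policy.allow("issue_refund", lambda a: 0 < float(a.get("amount", 0)) <= 50)


def invoke_tool(tool_name: str, args: dict[str, Any]) -> None:
    """Gate every tool call the agent proposes before it reaches the real system."""
    if not policy.check(tool_name, args):
        raise PermissionError(f"Blocked tool call: {tool_name} {args}")
    # ...hand off to the genuine tool integration and log the call here...
```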

Overwhelming human overseers

As AI systems gain autonomy, human oversight remains critical, yet it is increasingly vulnerable. Cyber-criminals target the humans who validate AI decisions, exploiting their cognitive limitations or compromising the interaction frameworks they rely on. The goal is to flood operators with information, alerts or complicated scenarios, rendering them ineffective and leading to missed threats or erroneous approvals.

Humans are the last line of defence, responsible for validating critical decisions. Yet they have inherent limitations: varying attention spans, finite processing capacity and susceptibility to fatigue. To desensitise the humans overseeing AI operations, attackers can generate ambiguous scenarios, alerts or prompts that demand extensive analysis under pressure. Subtle manipulation of trust metrics, or false data planted in human-AI dashboards, can further erode effective oversight.

To counter this, organisations should develop advanced human-AI interaction frameworks and adaptive trust mechanisms that dynamically adjust human oversight based on risk, confidence and context. Intelligent alert prioritisation, clear visualisation of AI decision-making and continuous training for human overseers are vital to empower, rather than overwhelm, human decision-makers.
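
A minimal sketch of intelligent alert prioritisation follows, assuming invented risk and confidence scores and illustrative thresholds: low-risk, high-confidence actions proceed automatically, clearly dangerous ones are blocked and escalated, and only the genuinely ambiguous cases reach the human review queue, keeping it short and high-signal.

```python
from dataclasses import dataclass


@dataclass
class AgentAction:
    description: str
    risk: float        # 0.0 (benign) to 1.0 (destructive), from a separate risk model
    confidence: float  # the agent's own confidence in its recommendation


def triage(action: AgentAction) -> str:
    """Decide whether an action is auto-approved, blocked, or queued for human review."""
    # Thresholds are illustrative; in practice they would be tuned per workflow
    # and adjusted dynamically as operator workload and threat levels change.
    if action.risk >= 0.8:
        return "block_and_escalate"
    if action.risk < 0.2 and action.confidence > 0.9:
        return "auto_approve"
    return "human_review"


actions = [
    AgentAction("send routine order-status email", risk=0.1, confidence=0.95),
    AgentAction("bulk-delete customer records", risk=0.9, confidence=0.6),
    AgentAction("apply 15% goodwill discount", risk=0.4, confidence=0.7),
]

# Only the genuinely ambiguous middle case lands in the human review queue,
# so operators see a shorter, higher-signal list of decisions.
for action in actions:
    print(f"{triage(action):18} {action.description}")
```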

Building a resilient AI future

Although memory poisoning, tool misuse and overwhelmed human overseers sit at the top of the agentic AI risk list, the dynamic nature of this technology means new threats constantly emerge. While AI transforms businesses, effective implementation requires expertise, caution and proactive governance. Responsible AI deployment needs a comprehensive strategy that addresses evolving threats head-on, incorporating technical safeguards, robust policy and a deep understanding of AI’s capabilities and vulnerabilities.

Ultimately, securing the promise of agentic AI hinges on a proactive, multi-layered defence strategy that anticipates threats and empowers human oversight. Only then can we truly harness its transformative power without succumbing to its inherent vulnerabilities.

Saugat Sindhu is Global Head, Advisory Services, Cybersecurity and Risk Services at Wipro

