Mark Jow at Gigamon outlines best practices for safeguarding AI adoption in organisations
While enterprises pour billions into AI initiatives, they’re simultaneously building the digital equivalent of houses on quicksand. The foundation supporting many of these projects is undermined by poor data visibility, fragmented tools, and unmanaged risk. AI offers transformative potential, yet without structural resilience, that potential is at risk of collapse.
Across industries, executive teams are under pressure to deliver AI-driven gains in efficiency, speed, and differentiation. But while the ambition is clear, many initiatives begin without the right foundations. Projects are often pushed forward before data quality, governance, and business value are properly established.
As complexity builds, these gaps grow wider. The result is a pattern of stalled progress, rising costs, and unmet expectations, pointing to strategic misalignment that slows momentum and erodes trust.
The challenge now facing leaders is to ensure that AI adoption is not only rapid but responsible.
If you can’t trust the data, don’t feed it
AI is only as smart as the data it's given. Feed it the wrong information, and it will confidently produce the wrong answers. Despite this, many organisations are pushing forward without the data integrity needed to support safe, reliable outcomes. In our 2025 Hybrid Cloud Security Survey of more than 1,000 global security and IT leaders, 46 percent admitted their teams lack access to clean, high-quality data for AI workloads. That's nearly half operating without a clear view of what their models are learning from.
The impact goes far beyond poor performance. Bad data leads to bad decisions, and bad decisions, at scale, become business risks. The danger isn't hypothetical either. Gartner estimates that by 2028, one in four enterprise breaches will be traced to AI agents used in ways for which they were never intended. In some cases, attackers won't even need to break in; they'll simply manipulate the data and let the model do the damage.
This is the kind of risk that slips beneath the radar, rarely looking urgent until it becomes costly. To stay ahead of it, organisations need to understand their data pipelines from end to end. That means knowing where the data comes from, how it moves, and whether it can be trusted before it shapes a recommendation, a prediction, or a decision.
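What that looks like in practice varies by stack, but the underlying control is simple: verify completeness and provenance before data ever reaches a model. Below is a minimal, hypothetical Python sketch of such a pre-ingestion gate; the field names, trusted sources, and threshold are illustrative assumptions, not prescribed values.

```python
from dataclasses import dataclass

# Hypothetical guardrails; real names and thresholds depend on your pipeline.
TRUSTED_SOURCES = {"crm_export", "billing_db", "support_tickets"}
REQUIRED_FIELDS = {"record_id", "source", "timestamp", "payload"}
MAX_FAILURE_RATE = 0.02  # reject the whole batch if >2% of records fail checks

@dataclass
class GateResult:
    accepted: list
    rejected: list
    failure_rate: float

def gate_batch(records: list) -> GateResult:
    """Admit only records that are complete and come from a trusted source."""
    accepted, rejected = [], []
    for rec in records:
        complete = REQUIRED_FIELDS.issubset(rec)        # all fields present
        trusted = rec.get("source") in TRUSTED_SOURCES  # known provenance
        (accepted if complete and trusted else rejected).append(rec)
    failure_rate = len(rejected) / len(records) if records else 0.0
    if failure_rate > MAX_FAILURE_RATE:
        # Fail loudly: better no data than untrusted data feeding a model.
        raise ValueError(f"batch rejected: {failure_rate:.1%} of records failed checks")
    return GateResult(accepted, rejected, failure_rate)
```

A gate like this won't catch subtle poisoning on its own, but it turns provenance from an assumption into a hard requirement, and it gives security teams a single, monitorable choke point.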
Closing the gap with better governance
In its current state, AI innovation is outpacing the ability of organisations to implement appropriate safeguards. The result is an expanding attack surface. The 2025 survey found that nearly half of respondents experienced an increase in attacks targeting large language models (LLMs). Meanwhile, phishing, ransomware, and deepfake incidents have become more effective and widespread.
These developments highlight the urgent need for proactive governance. Today, there is little regulatory clarity around AI security or deployment standards. In this vacuum, organisations must take responsibility for establishing their own internal guardrails. Every AI deployment should include a defined risk framework, supported by real-time monitoring and executive oversight. Trusting AI without verification is not an option.
Boards must be engaged, informed, and active in shaping AI risk strategy. Leadership teams that fail to build security into the foundation of their AI plans risk technical setbacks as well as reputational and financial damage.
Don’t let your tools work against you
In response to growing complexity, many organisations have added more tools to their security stacks. The intent is understandable, yet the outcome can be counterproductive. With an average of 15 tools in use, many organisations now face greater fragmentation and less control: 55 percent of leaders acknowledged that their existing tools fall short in detecting AI threats effectively.
This tool overload generates excess noise, reduces visibility, and increases the likelihood of blind spots. Rather than resolving the challenge, it compounds it. Nearly half of respondents said they have had to make compromises because their tools do not integrate effectively across hybrid cloud environments. What starts as an effort to strengthen defences can lead to silos, overlapping functions, and manual handoffs between systems that were never designed to work together.
Instead of speeding up response, these fragmented stacks slow it down, leaving security teams with less clarity, more alerts, and growing fatigue.
To regain control, organisations need to focus less on volume and more on cohesion. Streamlining toolsets, aligning capabilities, and reducing friction between platforms should be treated as a strategic priority. Adding more technology won't fix a fractured foundation. The path forward starts with integrating and tightening what's already in place.
Building a roadmap for responsible AI
Avoiding fragmentation, blind spots, and rushed decisions requires more than reactive fixes. As AI becomes more deeply embedded across the enterprise, organisations need a clear framework for deploying it securely and sustainably. That framework starts with three priorities:
1. Bake security into AI from the start. AI systems should be treated with the same rigour as any other critical infrastructure. Security and data governance teams need a seat at the table from day one to help define access controls, risk thresholds, and monitoring requirements (a sketch of what that can look like follows this list). Embedding security early, rather than layering it on later, helps prevent blind spots and ensures AI operates within trusted boundaries.
2. Invest in cross-functional education. The impact of AI isn’t limited to technical teams. Legal, compliance, product, and executive leadership all need a baseline understanding of how AI works, where it adds value, and where it introduces risk. Building that awareness across business functions supports stronger governance, better decision-making, and a more unified approach to responsible deployment.
3. Plan for long-term costs and complexity. AI is not a plug-and-play solution. It brings ongoing demands, from infrastructure and tooling to audits and talent. Still, nearly half of enterprises allocate 20 percent or less of their IT budgets to security. Sustained investment is essential to support AI’s growth and protect against evolving threats. Organisations that plan for this complexity from the outset are far better positioned to scale securely.
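By way of illustration for the first priority, here is a minimal, hypothetical Python sketch of what a defined risk framework can look like when written down as an enforceable artefact rather than a slide. Every name, role, and threshold below is an illustrative assumption, not prescribed guidance.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AIDeploymentPolicy:
    """Hypothetical per-deployment guardrails, agreed before go-live."""
    name: str
    allowed_sources: frozenset  # data provenance whitelist
    allowed_roles: frozenset    # access controls
    max_risk_score: float       # risk threshold, 0.0 (safe) to 1.0
    monitoring_required: bool = True  # real-time monitoring on by default
    executive_owner: str = "CISO"     # named accountable owner

    def permits(self, source: str, role: str, risk_score: float) -> bool:
        """A request proceeds only if every guardrail is satisfied."""
        return (
            source in self.allowed_sources
            and role in self.allowed_roles
            and risk_score <= self.max_risk_score
        )

# Illustrative example: a customer-support assistant with tight defaults.
policy = AIDeploymentPolicy(
    name="support-assistant",
    allowed_sources=frozenset({"support_tickets", "kb_articles"}),
    allowed_roles=frozenset({"support_agent", "support_lead"}),
    max_risk_score=0.3,
)

assert policy.permits("support_tickets", "support_agent", risk_score=0.1)
assert not policy.permits("crm_export", "support_agent", risk_score=0.1)
```

The value is not in the code itself but in the discipline it enforces: the guardrails live in one reviewable place, and any request outside them fails by default.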
With these foundations in place, AI can drive innovation without compromising resilience, trust, or long-term business health.
Mark Jow is Technical Evangelist EMEA at Gigamon