Andres Rodriguez at Nasuni argues that treating history as an asset, not an archive, will unlock AI’s real power

Data may be the currency of AI, but its real value lies not in what a company knows today but in how that knowledge was created.
AI can identify patterns in raw data files to make predictions and recommendations. But its true power emerges when AI understands the story behind the data: how a bridge was designed, how a terminal was built, how a company actually worked. When AI can see not just the ‘what’ but the ‘why’ behind decisions, the ceiling on what it can achieve rises dramatically.
The value of version history for AI
Capturing not just the current state of data but every iteration of its evolution creates a much bigger and more representative foundation for model training. With access to full version history, AI can form a true picture of how a business operates: its decisions, workflows and value creation. This rich, evolving knowledge is far more instructive than static snapshots and can significantly enhance the quality and accuracy of AI outputs.
It becomes a definitive record of an organisation’s path to innovation: how plans changed, what failed, what improved, and how the final solution took shape. Enterprises don’t invest in AI to repeat past mistakes or recycle ideas they’ve already tried. They want smarter, more trustworthy automation, and that depends on delivering highly contextualised information.
Realising powerful competitive advantages
Public datasets give AI the volume it needs to learn, and many organisations struggle to replicate this scale with proprietary data alone. However, by tapping into their own historical business data, enterprises can go from terabytes to petabytes of training material, enabling them to create deeply specialised, domain-specific and organisation-specific AI models that competitors can’t match.
In this new AI economy, a company’s value is directly tied to its knowledge: how well it can access it, understand it and put it to work. Businesses that rely on public data to drive growth and innovation will quickly fall behind, while those that can feed AI systems with their own highly contextualised historical business knowledge will turn everyday operations into an engine for innovation.
Overcoming the unstructured data hurdle
If historical data can drastically improve AI’s outputs and create a real competitive advantage, why aren’t more businesses using it? Because most version history falls under the unstructured data umbrella: huge datasets scattered across systems and teams, often messy, unlabelled, inconsistent or even irrelevant. It’s famously difficult to manage.
Feeding AI models with incomplete or poorly curated data undermines their usefulness. Too many organisations’ AI projects fail because the data feeding them is fragmented, duplicated or ungoverned. What’s needed instead is clean, contextualised information. Preparing unstructured data for AI is the key to unlocking the value of version history.
Giving AI the context it needs to succeed
Enterprises have a real opportunity to build more actionable, meaningful, and trustworthy AI by leveraging their institutional memory. This isn’t a single project, but a fundamental shift in how organisations understand and use their knowledge. By realising the power of time-stamped, historically rich data, businesses can create tailored training data that no public dataset can match.
As AI becomes embedded into processes, tools and decision-making, the organisations that succeed will be those that treat their history as an asset, not an archive. Context is now a key competitive differentiator, because AI doesn’t just need raw data; it requires understanding.
Andres Rodriguez is the Founder of Nasuni

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543