Dr Megha Kumar at CyXcel explains why corporate responsibility can no longer be optional

Artificial intelligence has moved beyond experimentation. It is now embedded across critical infrastructure, financial markets, healthcare systems, education platforms and public services. Yet despite its transformative potential, AI also carries risks that could destabilise economies and societies on a global scale, from environmental damage and systemic bias to mass disinformation and acute water and energy crises.
We are racing to develop increasingly autonomous and powerful systems, including those that may approach superintelligence, without confidence that they will remain aligned with human interests. At the same time, too many organisations continue to treat AI primarily as an efficiency driver, a means to automate workflows, cut costs or gain competitive advantage, rather than as a force capable of shaping or shattering the future.
This mindset must change. Businesses are no longer passive adopters of a neutral tool. They are participants in a shared ecosystem with collective responsibilities. AI governance is not solely the domain of governments or technologists. Private enterprise must reimagine its role, and reassert its collective power, to ensure AI is safe, secure and accountable.
AI risk is systemic
The risks associated with AI are no longer hypothetical. Generative models are already amplifying disinformation at scale. Algorithmic systems continue to replicate and entrench historical biases in areas such as lending and hiring. AI-enabled cyber capabilities are lowering the barrier to sophisticated attacks, making it possible for individuals with little technical background to generate highly disruptive malware. And the immense computational power required to train AI systems is contributing to surging energy demand and water consumption, placing strain on fragile infrastructure.
These risks do not respect sectoral boundaries. A vulnerability in one domain can cascade across others. Financial markets rely on automated trading systems. Hospitals depend on predictive diagnostics. Utilities use AI to manage grids and water distribution. Consequently, a failure of safety or security in any one area could have ripple effects far beyond its origin.
The notion that AI governance can be compartmentalised within technical teams or compliance departments is therefore dangerously outdated. AI risk is systemic risk. And systemic risk demands collective management.
Every sector has a duty to collaborate
It is tempting for businesses outside the technology sector to view AI safety as someone else’s responsibility - perhaps that of major AI developers or regulators. This is a profound miscalculation.
Every sector that adopts AI inherits responsibility for how it shapes outcomes. That responsibility extends beyond internal risk management. Organisations must collaborate across industries to establish shared standards, conduct joint stress-testing and exchange intelligence on vulnerabilities and misuse.
Cross-sector coordination is therefore not optional - it is essential. Businesses must apply pressure within their own supply chains, demanding transparency from AI vendors regarding model training data, safety testing, alignment mechanisms and environmental impact. Procurement decisions can shape market incentives more powerfully than regulation alone.
Collectively, businesses have leverage. They must use it.
Business leaders as custodians of the future
Every executive overseeing AI adoption is more than a tactical operator optimising quarterly performance. They are, whether they acknowledge it or not, custodians of the future.
To treat AI purely as a productivity enhancer is wilful negligence. Boards should be asking not only “Can we deploy this?” but also “Should we?” and “What are the consequences if everyone does?”
Responsible AI governance is crucial, and corporate responsibility must translate into concrete action.
How to translate this into concrete action
Firstly, AI governance should sit alongside cyber-security, financial risk and regulatory compliance at the highest level of oversight. Boards must receive regular briefings on AI exposure, model performance, supply chain dependencies and emerging threats.
Secondly, before deployment, organisations should assess not only technical performance but also societal, environmental and geopolitical implications. Scenario planning exercises can reveal unintended consequences and adversarial misuse.
Procurement teams should also request documentation on training data, bias mitigation strategies, security controls and environmental footprint, and contracts should include clear accountability for failures.
In addition, AI systems must be developed and integrated with cyber-security at their core. This includes secure coding practices, access controls, continuous monitoring and resilience against model manipulation or data poisoning.
Finally, performance metrics should reward responsible deployment, not just rapid adoption. Embedding accountability into executive compensation and project evaluation signals seriousness of intent.
The conversation must leave the boardroom
Internal governance, however, is only the starting point. The scale of AI risk demands that businesses elevate the conversation to industry bodies, trade associations and international forums. AI safety must become a standing agenda item at sectoral gatherings and cross-industry coalitions. Standards for model evaluation, cyber-security resilience and responsible deployment must be harmonised rather than fragmented.
Global coordination is particularly urgent. AI development is transnational: supply chains span continents and data flows across borders. Without alignment between major economies, safety efforts will be undermined.
Forums such as the G20 and the United Nations could provide platforms for dialogue on shared principles and norms, with sustained engagement from the private sector. However, geopolitical discord is deepening and multilateral government-led institutions are weakening, not least due to the disruptive posture of the US administration and the growing assertiveness of China. Businesses cannot afford to wait for geopolitical tensions to ease. Rather, this is a moment for the private sector collectively to fill the leadership vacuum. Companies are well placed to anticipate the pace of technological change and the practical realities of deployment.
Businesses should not wait to be regulated into responsibility. They should proactively advocate for coherent international frameworks, contribute expertise to policy discussions, and resist the temptation to exploit jurisdictional loopholes.
The alternative is a patchwork of conflicting rules that increases risk rather than reducing it.
Shaping or shattering the future?
The trajectory of AI development suggests that capabilities will continue to accelerate. The choice facing corporate leaders is stark. They can continue to treat AI as a siloed technical tool, optimised for efficiency and growth. Or they can recognise that they are stewards of a technology with global catastrophic potential - one that could either shape or shatter the future.
Private enterprise has historically been a driver of innovation and prosperity. In the age of AI, it must also become a guardian of stability and security. The future will not be shaped by technologists alone, nor by governments acting in isolation. It will be influenced by whether organisations across sectors accept that their responsibility extends beyond quarterly profit to the preservation of a safe and resilient global system.
To do anything less would be to gamble with a future none of us can afford to lose.
Dr Megha Kumar is Chief Product Officer and Head of Geopolitical Risk at CyXcel
