On 18 November 2025, AI Talk host Kevin Craine was joined by Bryan Finster, Principal Engineer; Brian Greenberg, Chief Information Officer, RHR International; and Derek Ashmore, AI Enablement Principal, Asperitas Consulting.
Views on news
For your organization to fully capitalize on AI, you must first earn and maintain trust. That requires a robust, proactive governance framework, built from the ground up to address the unique ethical and legal challenges of agentic systems; without one, unchecked AI risk silently erodes value.
Organizations that fail to build in transparency and security could see a 50 per cent decline in model adoption, business goal attainment and user acceptance. And while AI governance is key, it is not the same as trust, which is grounded in users’ experience of AI’s benefits, such as gains in productivity or software development efficiency.
Even when developers use AI tools to write code, the outcome remains their responsibility. Platforms, however, can play a central role by providing ecosystems where coding happens in a safe environment protected by guardrails. Agentic AI is a very new technology and therefore requires caution in how it is deployed by businesses at scale.
How agentic AI tools are advancing code writing
For developers, there has been a shift from writing code to concentrating on outcomes and spec-driven development – although developers who were good at continuous delivery can fall into the AI workflow with no problem at all. Available tools differ considerably in their accuracy and general performance, and while they evolve at light speed, corporate procurement models are too slow to keep up. So, although AI can make development faster by orders of magnitude, without efficient upstream and downstream workflows it will only lead to a spike in the number of undelivered projects.
Agentic AI is leveraged most broadly in cyber security and employee onboarding, but accountability for an agent going rogue is still not clarified. While AI can take commodity coders’ jobs, demand will remain for developers who build spec-driven systems and understand how to use AI tools safely. At this stage of the technology’s development, it is better to ask agents only for suggestions on critical changes to an environment that can affect many people, and to have human coders execute those changes. To avoid responsibility issues, AI agents must at a minimum be configured not to touch git.
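One way to enforce a "no git" rule is to filter commands before an agent's shell tool runs them. The sketch below is a minimal, hypothetical illustration: the function names and the hand-off to a real executor are assumptions, not part of any specific agent framework.

```python
# Hypothetical command filter for an agent's shell tool.
# It blocks any invocation of git so the agent can only
# propose version-control operations, never execute them.
import shlex

BLOCKED_COMMANDS = {"git"}  # illustrative; extend as needed


def is_allowed(command: str) -> bool:
    """Return False if the command, or any pipeline stage, runs git."""
    for stage in command.split("|"):
        tokens = shlex.split(stage)
        # Strip any leading path so "/usr/bin/git" is also caught.
        if tokens and tokens[0].rsplit("/", 1)[-1] in BLOCKED_COMMANDS:
            return False
    return True


def run_agent_command(command: str) -> str:
    """Gate a command before handing it to the (assumed) real executor."""
    if not is_allowed(command):
        return "Refused: agents may suggest git operations but not run them."
    # ... in a real system, hand off to the sandboxed shell executor here ...
    return f"Executing: {command}"
```

For example, `run_agent_command("git push origin main")` is refused, while `run_agent_command("ls -la")` passes through. A production guardrail would sit at the sandbox boundary rather than in application code, but the allow/deny principle is the same.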
As traditional QA processes are slow, quality must be shifted left by incorporating it into the model-building process. One warning signal that the code is not good enough is a surging defect rate. You should also monitor the news for system failures that may have been caused by agentic AI, for proposals for new AI legislation, and for the security questionnaires that your business receives from customers. To ensure that deployments remain safe at scale and over time, use automated testing alongside static and dynamic code analysis.
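Watching for a surging defect rate can itself be automated. The sketch below is a minimal illustration, assuming you track defect counts per period (the function name, window size and threshold factor are all hypothetical choices, not from the panel):

```python
# Minimal defect-rate alarm: flag when the latest period's defect
# count exceeds the recent baseline by a chosen factor.
def defect_rate_surging(rates, window=4, factor=1.5):
    """Return True if the newest rate is more than `factor` times
    the average of the previous `window` periods."""
    if len(rates) <= window:
        return False  # not enough history to judge
    baseline = sum(rates[-window - 1:-1]) / window
    return rates[-1] > factor * baseline
```

For instance, with weekly counts of `[2, 2, 2, 2, 6]` the latest week is three times the baseline and would trip the alarm, which is the kind of signal that should pause or tighten agentic deployments rather than wait for a quarterly review.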
The panel’s advice

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543