Business Reporter

Developing AI talent for safer, smarter agentic systems in 2025

Today’s technology leaders face an unprecedented challenge in integrating artificial intelligence (AI) into the workplace: building teams that can keep these increasingly powerful and influential systems both capable and controlled.

 

The trend toward AI adoption is evident, with 33 per cent of leaders eager to incorporate generative AI into small projects and 24 per cent actively searching for practical applications. However, successful implementation requires a deeper understanding of AI fundamentals and the challenges that come with it.

 

Fusing technical excellence with responsible AI design

 

Technology leaders are seeking individuals who understand the intricate skill of AI scaffolding. This involves creating frameworks that enable AI systems to develop and execute sophisticated plans while adhering to established boundaries.

 

Consider a strategic planning AI that uses “chain-of-thought” prompting to deconstruct a market expansion strategy into specific steps, each validated against the company’s values and risk parameters before being executed. This reasoning process keeps the AI within operational limits while still providing valuable insight. Advanced scaffolding techniques of this kind have become essential skills.
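To make the idea concrete, here is a minimal sketch of step-by-step plan validation. Everything here is illustrative: `PlanStep`, the `estimated_risk` field, and the `RISK_LIMIT` threshold are invented stand-ins for whatever values and risk parameters a real organisation would encode.

```python
from dataclasses import dataclass

@dataclass
class PlanStep:
    description: str
    estimated_risk: float  # 0.0 (safe) to 1.0 (high risk) -- assumed scale

RISK_LIMIT = 0.5  # hypothetical company risk parameter

def validate_step(step: PlanStep) -> bool:
    """A step passes only if it stays within the assumed risk limit."""
    return step.estimated_risk <= RISK_LIMIT

def execute_plan(steps: list[PlanStep]) -> list[str]:
    """Execute steps in order, halting before the first invalid step."""
    executed = []
    for step in steps:
        if not validate_step(step):
            break  # the chain stops before any out-of-bounds action runs
        executed.append(step.description)
    return executed

plan = [
    PlanStep("Survey target market", 0.1),
    PlanStep("Pilot launch in one region", 0.3),
    PlanStep("Acquire local competitor", 0.9),  # exceeds the risk limit
]
print(execute_plan(plan))  # only the first two steps are executed
```

The key design point is that validation happens per step, before execution, rather than on the finished plan, so an out-of-bounds action never runs.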

 

Now, imagine an AI evaluating different approaches to sensitive customer service issues. Professionals could consider implementing “tree-of-thought” methodologies that allow AI systems to explore multiple reasoning paths at once, maintaining clear audit trails of their decision-making process while ensuring responses align with company values. Systems such as these require advanced memory management to maintain context and prevent unauthorised goal drift, which is crucial for ensuring safety in sensitive business operations.
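A tree-of-thought style search with an audit trail might look like the following sketch. The candidate paths, the banned-phrase “value check”, and the scoring rule are all invented for illustration; a production system would use far richer evaluators.

```python
def score_against_values(response: str) -> float:
    """Assumed value check: penalise responses containing banned phrases."""
    banned = {"ignore the customer", "deny everything"}
    return 0.0 if any(b in response for b in banned) else 1.0

def explore_paths(paths: list[list[str]], audit: list[str]) -> list[str]:
    """Evaluate each reasoning path, log every decision, return the best.

    A path's score is its weakest step, so one bad step sinks the path.
    """
    best, best_score = [], -1.0
    for path in paths:
        score = min(score_against_values(step) for step in path)
        audit.append(f"path={path!r} score={score}")  # audit trail entry
        if score > best_score:
            best, best_score = path, score
    return best

audit_trail: list[str] = []
candidates = [
    ["acknowledge complaint", "deny everything"],
    ["acknowledge complaint", "offer refund"],
]
best = explore_paths(candidates, audit_trail)
print(best)  # the path that passes the value check
```

Note that the audit list records every path considered, not just the winner, which is what makes the decision process reviewable after the fact.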

 

The talent gap challenge: AI safety

 

The significance of value alignment expertise is underscored by recent instances in which AI systems have displayed unexpected behaviours. The most desirable applicants show proficiency in reinforcement learning from human feedback (RLHF) and constitutional AI techniques.

 

Inadequate oversight of an AI system at one major tech company led to the development of subtle but troubling biases in customer interactions – a situation that proper constitutional guardrails could have prevented.

 

The shortage of AI safety talent is particularly acute because these roles require a nuanced understanding of both technical systems and human values. Traditional computer science education often overlooks essential aspects such as goal decomposition and strategy planning. This gap became evident when a financial services AI was seen optimising for short-term metrics at the expense of long-term customer relationships – a classic example of inadequate goal-structure design.

 

Organisations require professionals who understand how to implement scaffolding techniques such as recursive task decomposition. For example, when an AI system manages supply chain optimisation, it should break complex decisions down into discrete sub-tasks, each checked against environmental and social responsibility metrics before implementation.
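Recursive task decomposition with per-leaf checks can be sketched as below. The task tree, the `passes_responsibility_check` rule, and the supplier names are hypothetical examples, not a real supply chain model.

```python
# Hypothetical task tree: a task maps to its sub-tasks; absent keys are leaves.
SUBTASKS = {
    "optimise supply chain": ["choose suppliers", "plan shipping routes"],
    "choose suppliers": ["audit supplier A", "audit supplier B"],
}

def passes_responsibility_check(task: str) -> bool:
    """Assumed environmental/social check applied to leaf-level tasks."""
    return "supplier B" not in task  # pretend supplier B failed an audit

def decompose_and_check(task: str) -> list[str]:
    """Recursively break a task down; only leaves that pass are approved."""
    children = SUBTASKS.get(task)
    if children is None:  # leaf task: apply the check directly
        return [task] if passes_responsibility_check(task) else []
    approved = []
    for child in children:
        approved.extend(decompose_and_check(child))
    return approved

print(decompose_and_check("optimise supply chain"))
# ['audit supplier A', 'plan shipping routes']
```

Because the check runs at the leaves, a failing sub-task is filtered out without blocking the unrelated branches of the plan.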

 

Prioritising safety in AI teams

 

Organisations at the early stages of their AI journey don’t need to immediately hire top-tier AI safety experts. A business can start by training existing technical teams in fundamental safety principles and scaffolding techniques. Smaller organisations can partner with AI safety consultancies or join industry consortiums focused on responsible AI development.

 

When hiring dedicated AI safety professionals, organisations should look for candidates with experience in implementing oversight mechanisms in complex systems. Certifications in AI ethics and safety, such as the IEEE’s AI ethics certifications, can indicate valuable expertise. More importantly, organisations should identify professionals who can demonstrate practical experience implementing safety measures in production environments.

 

The future of AI talent demands professionals with expertise in meta-learning frameworks – systems that can safely acquire new capabilities while remaining true to their core values. For instance, a customer service AI might learn new response patterns from interactions, but only after validating them against established ethical guidelines and customer satisfaction metrics.
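That validation gate can be sketched in a few lines. The blocklist, the satisfaction threshold, and the example patterns are assumptions made for illustration; a real system would use trained classifiers rather than keyword checks.

```python
ETHICS_BLOCKLIST = {"mislead", "pressure"}  # hypothetical banned behaviours
SATISFACTION_THRESHOLD = 0.8  # assumed minimum customer satisfaction score

def passes_ethics(pattern: str) -> bool:
    """Crude stand-in for an ethical-guidelines check."""
    return not any(term in pattern for term in ETHICS_BLOCKLIST)

def adopt_if_valid(repertoire: list[str], pattern: str, satisfaction: float) -> bool:
    """Add a learned pattern to the agent's repertoire only if every check passes."""
    if passes_ethics(pattern) and satisfaction >= SATISFACTION_THRESHOLD:
        repertoire.append(pattern)
        return True
    return False

repertoire: list[str] = []
adopt_if_valid(repertoire, "offer a goodwill discount", 0.9)      # adopted
adopt_if_valid(repertoire, "pressure customer to upgrade", 0.95)  # rejected
print(repertoire)  # ['offer a goodwill discount']
```

The point of the gate is that learning is conditional: a pattern that scores well on one metric (satisfaction) is still rejected if it fails the ethics check.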

 

Organisations need to understand that creating safe agentic AI calls for a workforce that is proficient in safety and alignment in addition to development. Finding and developing talent that can create and deploy strong scaffolding while guaranteeing clear and understandable thought processes is essential to success.

 

AI roles will continue to evolve, but must consider AI safety

 

The emergence of agentic AI systems brings unprecedented risks as well as opportunities. Successful organisations will prioritise assembling teams that can ensure these systems continue to reflect human values and interests, achieved through sophisticated scaffolding and oversight mechanisms.

 

Business leaders must act decisively. Organisations should begin by assessing their current AI safety capabilities. They must then determine areas of expertise that need to be filled and create a detailed plan for developing or obtaining the required skills. Leaders must assess prospective team members or training initiatives based on both technical proficiency and ethical awareness.

 

The demand for people capable of ensuring the safe development and deployment of agentic AI systems will only increase as these systems become more powerful and autonomous. It’s crucial that leaders look to invest in building teams with the right expertise to implement proper safety measures that will form the foundation of responsible AI development.

 

Creating specialised roles focused on AI safety and alignment allows companies to develop clear career paths that emphasise safety expertise. One way to do this is to build internal centres of excellence for AI safety, where teams can create and exchange best practices for scaffolding and oversight systems.

Eleanor Watson, AI ethics engineer and AI Faculty at Singularity University and member of the IEEE
© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543