Shadow AI and the risks to embedded software

Corey Hamilton at Black Duck explains how the uncontrolled use of AI is bringing new and significant risks to organisations


AI is transforming the way software is developed, offering unprecedented speed and efficiency. Yet as development teams increasingly integrate AI into their workflows, new risks are emerging, particularly around software reliability, security, and governance. Embedded systems remain at the heart of critical infrastructure, vehicles, medical devices, and IoT products, making the stakes exceptionally high.


While AI promises to accelerate code creation and simplify complex tasks, uncontrolled or unsupervised usage, which is often referred to as shadow AI, can introduce defects, vulnerabilities, and compliance challenges that are easy to overlook until it’s too late.


Shadow AI risks to software reliability

AI has become impossible to avoid, yet many organisations haven’t invested the time and research needed to harness its incredible potential without introducing unnecessary risks, and this lack of oversight is leaving them exposed. Recent research indicates that almost nine out of ten development teams are already using AI to help write code, and the vast majority of organisations are integrating with open-source AI models. While many organisations permit developers to use AI, 25% prohibit its use, and of those, 72% admit that developers are using it anyway, against policy.


Shadow AI refers to the use of AI tools inside an organisation without the knowledge or approval of the IT or security team. This form of unsanctioned adoption introduces risks that organisations may not even be aware of until it is too late. These risks include a heightened likelihood of critical coding defects, the introduction of security vulnerabilities, increased costs associated with patching software after it’s been deployed, reputational damage if insecure products reach the market, and even the loss of intellectual property if sensitive data is shared with external AI tools.


The pressures driving the adoption of shadow AI are easy to understand. Organisations face constant deadline pressure to release products, updates, and new features. AI can help developers accelerate their output and hit deadlines; however, this accelerated pace needs to be matched by increased testing to maintain the same level of quality. If AI is used without proper processes and oversight, quality and security are likely to suffer. The very speed that AI offers may be undermining the reliability and safety of the final product.


How to manage shadow AI

The answer is not to ban AI. The power and potential of AI have been compared to the invention of the internet: the technology is quickly becoming commonplace, and organisations that resist it risk falling behind their competition, much as those who resisted the internet did two decades ago. Instead, organisations must find ways to harness its benefits safely by putting proper guardrails in place.


The first step is to invest the time and research to determine what AI usage should look like in your organisation. Those that try to prevent the inevitable use of AI are often the ones that haven’t invested in understanding what this technology can, and should, mean to their business. It’s important to enable velocity and innovation while ensuring that systems and processes are in place to minimise the risks that come with it.


Another consideration is regulation. Governments and industry bodies are already developing frameworks to ensure AI is used responsibly across various sectors, particularly those that are safety critical. Staying informed and compliant with these evolving standards not only reduces risk but also demonstrates a commitment to the safe and effective use of AI.


Once this initial step has been completed, it’s important to articulate the new AI policy to the rest of the organisation, including how it will be enforced. Employees need certainty about the tools they are allowed to use, how to use them, and how their activity will be monitored. A lack of clarity only encourages shadow AI to thrive.


Finally, communication between leadership and development teams is crucial. Research shows that while the vast majority of CTOs feel optimistic about meeting release schedules, just over half of developers feel the same way about the quality of those releases, a gap that likely reflects the compromises made to code quality or testing in order to meet deadlines. If developers feel that deadlines are consistently unrealistic, they will inevitably feel pressure to turn to unapproved tools or compromise on software quality. Leaders must be willing to listen to concerns and set achievable goals, so that AI is used to enhance quality and speed rather than to compensate for poor planning.


Harnessing AI while minimising risk

It’s easy to see why developers have rushed to adopt AI technologies. The productivity gains are enormous, but the organisations that realise the greatest benefit are those that enable their teams to leverage AI within a well-defined framework, one that supports the needs of the business while also accounting for the risks that come with it.


Leading organisations often account for these risks by:

  • Articulating a clearly defined AI policy, including how it will be enforced
  • Aligning software tests with any relevant industry or region-specific regulations
  • Treating AI coding assistants like unreliable interns, whose output requires rigorous testing, validation, and peer review
  • Automating code scans in the IDE and CI/CD pipelines to find quality, security, or license compliance issues early, when they’re easiest to resolve (a minimal pipeline gate is sketched after this list)
  • Producing a software bill of materials (SBOM) that provides visibility into all open source and third-party dependencies, including any AI models being used (see the SBOM sketch below)
  • Providing ongoing training about the benefits and potential pitfalls of AI usage
  • Revisiting the organisation’s AI strategy and policies frequently to ensure they continue to leverage new advancements and account for emerging risks 
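
To make the automated scanning point concrete, below is a minimal sketch of a pre-merge quality gate in Python. The command-line scanner, scan-tool, and its JSON report format are hypothetical placeholders; a real pipeline would invoke the organisation’s chosen static analysis or software composition analysis tool and adapt the parsing accordingly.

```python
"""Minimal CI quality gate sketch.

Assumes a hypothetical scanner CLI named "scan-tool" that prints a JSON
report of the form {"findings": [{"file": ..., "line": ..., "severity": ...,
"message": ...}]}. Swap in your organisation's real SAST/SCA tool.
"""
import json
import subprocess
import sys


def run_scan(source_dir):
    """Run the (hypothetical) scanner and return its list of findings."""
    result = subprocess.run(
        ["scan-tool", "--format", "json", source_dir],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout).get("findings", [])


def main():
    findings = run_scan("src/")
    # Treat critical- and high-severity findings as merge blockers.
    blockers = [f for f in findings if f.get("severity") in ("critical", "high")]
    for f in blockers:
        print(f"{f.get('file')}:{f.get('line')}: {f.get('message')}")
    # A non-zero exit code fails the CI job, so issues are fixed before merge.
    return 1 if blockers else 0


if __name__ == "__main__":
    sys.exit(main())
```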

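The SBOM point can be illustrated in the same way. The sketch below emits a CycloneDX-style JSON document using only the Python standard library; the component names and versions are purely illustrative, and real projects would normally use a dedicated SBOM generator. The key idea is that AI models can be recorded alongside conventional open-source dependencies.

```python
"""Minimal CycloneDX-style SBOM sketch (illustrative components only)."""
import json

# Example dependencies for an embedded project; names and versions are
# placeholders. Note the AI model listed alongside conventional libraries:
# CycloneDX 1.5 introduced the "machine-learning-model" component type.
components = [
    {"type": "library", "name": "freertos-kernel", "version": "10.6.2"},
    {"type": "library", "name": "mbedtls", "version": "3.5.0"},
    {"type": "machine-learning-model", "name": "example-anomaly-detector",
     "version": "1.0"},
]

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": components,
}

with open("sbom.json", "w") as fh:
    json.dump(sbom, fh, indent=2)

print(f"Wrote sbom.json with {len(components)} components")
```
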
Ultimately, the responsibility for producing reliable and secure software rests primarily with development teams. By empowering them to embrace the benefits of AI while providing the necessary guardrails and support, organisations can keep pace with evolving capabilities and position themselves as leaders in their industry.


Corey Hamilton is Embedded Solutions Manager at Black Duck


Main image courtesy of iStockPhoto.com and James Brown
