Orla Daly at Skillsoft describes how to build quality and responsible AI use into business workflows

From automating processes to generating insights, AI promises speed and efficiency. However, when AI outputs lack rigour or relevance, they create what we call ‘workslop’, defined by Harvard Business Review as ‘AI-generated work content that masquerades as good work but lacks the substance to meaningfully advance a given task’.
As AI adoption increases, the scale of the issue is growing. A recent survey of US desk workers revealed that 40% of employees believe they’ve received workslop in the last month, and 53% admit that at least some of the work they themselves send may be workslop. These figures highlight the urgency for organisations to act.
To use AI effectively, organisations must do more than adopt tools: they must set clear guardrails, empower employees to spot poor-quality outputs and build skills that allow humans and AI to work together seamlessly. Those that recognise that skills are the common thread between humans and AI, and that deploy both optimally, will not only accelerate transformation but also create a lasting competitive edge.
Why clear AI policies are non-negotiable
AI is transforming how businesses operate, but without well-defined policies, the risks can outweigh the rewards. Clear AI policies establish boundaries and expectations that ensure AI is used safely, consistently and in line with organisational goals. Left to experiment without guidance, employees may unintentionally introduce legal, ethical or security risks. Strong policies are foundational: they help prevent the misuse of sensitive data, support regulatory compliance and protect organisations from reputational harm.
Think of AI policies like brakes on a car – they’re not there to slow you down, but to help you move faster and with confidence. However, policies and guardrails are only as good as how well they are understood and applied. Success lies in creating a culture of continuous learning that evolves alongside technology and regulation, turning AI from novelty into a competitive advantage.
Spotting AI ‘workslop’ before it slows you down
AI-generated content can be a productivity booster, but only if it’s high quality. Employees can spot AI ‘workslop’ by keeping the human in the loop and applying a critical eye for signs of low-quality machine-generated content.
One of the most common indicators, and relatively easy to spot, is vague, repetitive or generic language that doesn’t feel tailored to the situation. When writing relies on broad statements or avoids specifics, it’s often a clue that the text was generated quickly without real understanding. Repetition is also a red flag as AI workslop often reuses phrases, adds filler, or includes sentences that contribute little to the overall message.
Factual accuracy is another critical area. AI systems can confidently present incorrect or outdated information, so employees need to be vigilant and flag details that don’t look quite right. Fabricated quotes, sources or numbers are common issues and should always be verified. Approaching AI output with a critical and analytical mindset and human judgment will be essential to ensure the final work is accurate. Verification of facts and sources ensures that AI remains a trusted collaborator, not a liability.
The good news is that AI itself can help in monitoring for poor quality. Asking AI agents to present opposing points of view or to qualify their sources can be an effective way to validate outputs.
As AI continues to be a boardroom priority, the challenge isn’t about adding more tools but prioritising impact and building the human infrastructure to make AI meaningful. Organisations that validate outputs and invest in training connected to business outcomes will reduce the risk of creating workslop, and turn AI from experimentation into real, measurable advancements.
Avoiding slop, making AI output valuable
AI is here to stay, but its success depends on how responsibly we use it and how well we position it as a collaborator or teammate. To avoid workslop, organisations must set clear policies, train employees to critically assess AI outputs and prioritise quality over quantity. Responsible application means embedding AI into workflows with purpose, ensuring outputs are not only efficient but also accurate, relevant and aligned with strategic objectives. Much as when partnering with colleagues, working with AI requires active engagement and critical thinking.
A future-ready workforce depends on humans and intelligent systems working together. The goal is not simply to implement AI tools, but to create value that drives business outcomes. When AI delivers context-aware, trustworthy information, it becomes a true enabler and a powerful competitive advantage. Anything less adds noise, erodes confidence and slows progress.
Orla Daly is Chief Information Officer at Skillsoft

© 2025, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543