
Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn’t be the only one.
Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it.
This can create a lot of extra work for others in identifying and correcting errors, not to mention reputational hits. Just this week, consulting firm Deloitte Australia formally apologised after a A$440,000 report prepared for the federal government was found to contain multiple AI-generated errors.
Against this backdrop, the term “workslop” has entered the conversation. Popularised in a recent Harvard Business Review article, it refers to AI-generated content that looks good but “lacks the substance to meaningfully advance a given task”.
Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn’t have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.
The rise of AI-generated ‘workslop’
According to a recent survey reported in the Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month.
The survey’s research team from BetterUp Labs and Stanford Social Media Lab found on average, each instance took recipients almost two hours to resolve, which they estimated would result in US$9 million (about A$13.8 million) per year in lost productivity for a 10,000-person firm.
Those who had received workslop reported annoyance and confusion, with many perceiving the person who had sent it to them as less reliable, creative, and trustworthy. This mirrors prior findings that there can be trust penalties to using AI.
Invisible AI, visible costs
These findings align with our own recent research on AI use at work. In a representative survey of 32,352 workers across 47 countries, we found complacent over-reliance on AI and covert use of the technology are common.
While many employees in our study reported improvements in efficiency or innovation, more than a quarter said AI had increased their workload, pressure, and time spent on mundane tasks. Half said they use AI instead of collaborating with colleagues, raising concerns that workplace collaboration will erode.
Making matters worse, many employees hide their AI use; 61% avoided revealing when they had used AI and 55% passed off AI-generated material as their own. This lack of transparency makes it challenging to identify and correct AI-driven errors.
What you can do to reduce workslop
Without guidance, AI can generate low-value, error-prone work that creates busywork for others. So, how can we curb workslop to better realise AI’s benefits?
If you’re an employee, three simple steps can help.
- start by asking, “Is AI the best way to do this task?” Our research suggests this is a question many users skip. If you can’t explain or defend the output, don’t use it

- if you proceed, verify and work with AI output like an editor: check facts, test code, and tailor output to the context and audience

- when the stakes are high, be transparent about how you used AI and what you checked, to signal rigour and avoid being perceived as incompetent or untrustworthy.

What employers can do
For employers, investing in governance, AI literacy, and human-AI collaboration skills is key.
Employers need to provide employees with clear guidelines and guardrails on effective use, spelling out when AI is and is not appropriate.
That means forming an AI strategy, identifying where AI will have the highest value, being clear about who is responsible for what, and tracking outcomes. Done well, this reduces risk and downstream rework from workslop.
Because workslop comes from how people use AI – not as an inevitable consequence of the tools themselves – governance only works when it shapes everyday behaviours. That requires organisations to build AI literacy alongside policies and controls.
Organisations must work to close the AI literacy gap. Our research shows that AI literacy and training are associated with more critical AI engagement and fewer errors, yet fewer than half of employees report receiving any training or policy guidance.
Employees need the skills to use AI selectively, accountably and collaboratively. Teaching them when to use AI, how to do so effectively and responsibly, and how to verify AI output before circulating it can reduce workslop.
Steven Lockey’s position is funded by the Chair in Trust research partnership between the University of Melbourne and KPMG Australia.
Nicole Gillespie receives funding from the Australian Research Council and the Chair in Trust research partnership between the University of Melbourne and KPMG Australia.