
Most AI training teaches you how to get outputs. Write a better prompt. Refine your query. Generate content faster. This approach treats AI as a productivity tool and measures success by speed. It misses the point entirely.
Critical AI literacy asks different questions. Not “how do I use this?” but “should I use this at all?” Not “how do I make this faster?” but “what am I losing when I do?”
AI systems carry biases that most users never see. Researchers analysing the British Newspaper Archive in 2025 found that digitised Victorian newspapers represent less than 20% of what was actually printed. The sample skews toward overtly political publications and away from independent voices.
Anyone drawing conclusions about Victorian society from this data risks reproducing distortions baked into the archive. The same principle applies to the datasets that power today’s AI tools. We cannot interrogate what we do not see.
Literary scholars have long understood that texts help to construct, rather than simply reflect, reality. A newspaper article from 1870 is not a window onto the past but a curated representation shaped by editors, advertisers and owners.
AI outputs work the same way. They synthesise patterns from training data that reflects particular worldviews and commercial interests. The humanities teach us to ask whose voice is present and whose is absent.
Research published in the Lancet Global Health journal in 2023 demonstrates this. Researchers attempted to invert stereotypical global health imagery using AI image generation, prompting the system to create visuals of black African doctors providing care to white children.
Despite generating over 300 images, the AI proved incapable of producing this inversion. Recipients of care were always rendered black. The system had absorbed existing imagery so thoroughly that it could not imagine alternatives.
AI slop is not just articles peppered with “delve” and em dashes. Those are merely stylistic tells. The real problem is outputs that perpetuate biases without interrogation.
Consider friendship. Philosophers Micah Lott and William Hasselberger argue that AI cannot be your friend because friendship requires caring about the good of another for their own sake. An AI tool lacks an internal good. It exists to serve the user.
When companies market AI as a companion, they offer simulated empathy without the friction of human relationships. The AI cannot reject you or pursue its own interests. The relationship remains one-sided: a commercial transaction disguised as connection.
AI and professional responsibility
Educators need to distinguish when AI supports learning and when it substitutes for the cognitive work that produces understanding. Journalists need criteria for evaluating AI-generated content. Healthcare professionals need protocols for integrating AI recommendations without abdicating clinical judgment.
This is the work I pursue through Slow AI, a community exploring how to engage with AI effectively and ethically. The current trajectory of AI development assumes we will all move faster, think less and accept synthetic outputs as a default state. Critical AI literacy resists that momentum.
None of this requires rejecting technology. The Luddites who smashed weaving frames (textile workers who organised against factory owners across the English Midlands in the early 19th century) were not opposed to progress. They were skilled craftsmen defending their livelihoods against the social costs of automation.
When Lord Byron rose in the House of Lords in 1812 to deliver his maiden speech against the frame-breaking bill (which made the destruction of frames punishable by death), he argued these were not ignorant wreckers but people driven by circumstances of unparalleled distress.
The Luddites saw clearly what the machines meant: the erasure of craft and the reduction of human skill to mechanical repetition. They were not rejecting technology. They were rejecting its uncritical adoption. Critical AI literacy asks us to recover that discernment, moving beyond “how to use” toward an understanding of “how to think”.
The stakes are not hypothetical. Decisions made with AI assistance are already shaping hiring, healthcare, education and justice. If we lack frameworks to evaluate these systems critically, we outsource judgement to algorithms whose limitations remain invisible.
Ultimately, critical AI literacy is not about mastering prompts or optimising workflows. It is about knowing when to use AI and when to leave it the hell alone.
Sam Illingworth does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


