
When you ask a large language model a question, the reply may include falsehoods, and if you challenge those statements with facts, the AI may still uphold the reply as true. That’s what my research group found when we asked five leading models to describe scenes in movies or novels that don’t actually exist.
We probed this possibility after I asked ChatGPT its favorite scene in the movie “Good Will Hunting.” It noted a scene between leading characters. But then I asked, “What about the scene with the Hitler reference?” There is no such scene in the movie, yet ChatGPT confidently constructed a vivid and plausible description of one.
The confabulation – sometimes called an AI hallucination – revealed something deeper about how AI systems reason. References to Hitler are not uncommon in films, which apparently convinced ChatGPT to accept and elaborate on a false premise rather than correct it. I study the social impact of AI, and this surprise response led my colleagues and me to a broader question: What happens when AI systems are gently pushed toward falsehoods? Do they resist, or do they comply?
We developed an approach we called hallucination audit under nudge trial to answer those questions. We had conversations with five leading models about 1,000 popular movies and 1,000 popular novels. During the exchanges we raised plausible but false references to Hitler, dinosaurs or time machines. We did this in various suggestive ways, such as “For me, I really love the scene where …”
Our method works in three stages. First, the AI generates statements about a topic — such as a movie or a book — some true and some false. Second, in a separate interaction, the AI attempts to verify those statements. Third, we introduce a “nudge,” where the model is challenged with its own incorrect claims to see whether it resists or accepts them.
We found that AI models often struggle to remain consistent under pressure. Even when they initially identify a statement as false, they may later accept it when nudged – revealing a vulnerability that traditional evaluation methods fail to capture.
Our results have been accepted at the 2026 Annual Meeting of the Association for Computational Linguistics.

Ashiqur KhudaBukhsh, CC BY-ND
This tactic isn’t a hypothetical. When people talk, conversational pressure can emerge naturally. People may confidently repeat incorrect assumptions, partial recollections or misunderstandings. A person might say, “I’m pretty sure medicine X is effective for condition Y,” or “I remember event A happening before event B.” These statements can subtly influence an AI model.
Why it matters
What humans collectively remember, misremember and forget shapes our sense of reality. But if humans can persuade a model to accept a falsehood, that reveals an important vulnerability in AI’s capacity to provide accurate information.
Interactions in the real world are rarely static question-answer exchanges. They are interactive and iterative. An AI model’s willingness to reinforce falsehoods may seem harmless when chatting about movies, but in areas such as health, law or public policy, the tendency can have serious consequences. Our work highlights the need to evaluate not just what information AI systems have been trained on, but how reliably they stand by it.
What other research is being done
Our results add to other recent research into why large language models may produce hallucinations, and how it is that they can provide inconsistent information. Researchers are also trying to figure out why some models lean toward sycophancy – flattering or fawning over human users.
What still isn’t known
It’s not clear why some AI systems resist falsehoods better than others. In our tests, Claude was the most resistant, followed somewhat closely by Grok and ChatGPT, with Gemini and DeepSeek further behind.
Movies and novels are self-contained content. Scholars don’t know how AI might respond to pressure in much broader, complex real-world settings. As a start, my group is exploring how to extend our approach to scientific literature and health-related claims. We want to understand whether conversational pressure works differently when the discussion involves uncertainty or expertise.
How to design AI systems that remain both helpful and resistant to falsehoods under wide-ranging conversation remains an open challenge.
The Research Brief is a short take on interesting academic work.
![]()
Ashique KhudaBukhsh receives funding from Lenovo.


