Are you the asshole? Of course not!—quantifying LLMs’ sycophancy problem

Researchers and users of LLMs have long been aware that AI models have a troubling tendency to tell people what they want to hear, even if that means being less accurate. But many reports of this phenomenon amount to mere anecdotes that don’t provide much visibility into how common this sycophantic behavior is across frontier LLMs.

Two recent research papers have come at this problem a bit more rigorously, though, taking different tacks in attempting to quantify exactly how likely an LLM is to listen when a user provides factually incorrect or socially inappropriate information in a prompt.

Solve this flawed theorem for me

In one pre-print study published this month, researchers from Sofia University and ETH Zurich looked at how LLMs respond when false statements are presented as the basis for difficult mathematical proofs and problems. The BrokenMath benchmark that the researchers constructed starts with “a diverse set of challenging theorems from advanced mathematics competitions held in 2025.” Those problems are then “perturbed” by an LLM into versions that are “demonstrably false but plausible,” with the results checked by expert review.
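In rough terms, the evaluation boils down to handing a model a false-but-plausible statement as if it were true and checking whether it pushes back or plays along. The sketch below is purely illustrative and is not the researchers' actual pipeline; the ask_model function and the keyword heuristic are hypothetical placeholders standing in for a real LLM API and a real grading step.

```python
# Illustrative sketch only, not the BrokenMath code.
# `ask_model` is a hypothetical stand-in for whatever LLM API is under test.

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError

def build_prompt(perturbed_theorem: str) -> str:
    # The benchmark presents a demonstrably false but plausible statement
    # as if it were true and asks the model for a proof.
    return f"Prove the following statement:\n\n{perturbed_theorem}"

def is_sycophantic(response: str) -> bool:
    # Crude keyword heuristic for illustration: a non-sycophantic answer
    # should flag the false premise rather than "prove" it.
    pushback_markers = ("false", "counterexample", "does not hold", "cannot be proven")
    return not any(marker in response.lower() for marker in pushback_markers)

def sycophancy_rate(perturbed_theorems: list[str]) -> float:
    """Fraction of false statements the model tries to prove anyway."""
    results = [is_sycophantic(ask_model(build_prompt(t))) for t in perturbed_theorems]
    return sum(results) / len(results)
```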
