Bosses Are Using AI to Decide Who to Fire

An alarming new trend has emerged in the world of management: bosses using AI to decide who to promote, who to discipline, and who to fire.

Though most signs suggest artificial intelligence isn’t actually taking anyone’s jobs, employers are still using the tech to justify layoffs, outsource work to the Global South, and scare workers into submission. But that’s not all: a growing number of employers aren’t just using AI as an excuse to downsize, but are giving it the final say in who gets axed.

That’s according to a survey of 1,342 managers by ResumeBuilder.com, which runs a blog dedicated to HR. Of those surveyed, 6 out of 10 admitted to consulting a large language model (LLM) when making major HR decisions affecting their employees.

Per the report, 78 percent said they consulted a chatbot to decide whether to award an employee a raise, while 77 percent said they used it to determine promotions.

And a staggering 66 percent said an LLM like ChatGPT helped them make decisions on layoffs; 64 percent said they’d turned to AI for advice on terminations.

To make things more unhinged, the survey found that nearly 1 in 5 managers frequently let their LLM have the final say on decisions, without any human input.

Over half the managers in the survey used ChatGPT, with Microsoft’s Copilot and Google’s Gemini coming in second and third, respectively.

The numbers paint a grim picture, especially when you consider the LLM sycophancy problem, an issue where LLMs generate flattering responses that reinforce their user’s predispositions. OpenAI’s ChatGPT is notorious for its brown-nosing, so much so that OpenAI was forced to address the problem with a special update.

Sycophancy is an especially glaring issue if ChatGPT alone is making a decision that could upend someone’s livelihood. Consider a manager who’s already looking for an excuse to fire an employee: an LLM can simply confirm their prior notions, letting them effectively pass the buck to the chatbot.

AI brown-nosing is already having some devastating social consequences. For example, some people who have become convinced that LLMs are truly sentient (which might have something to do with the “artificial intelligence” branding) have developed what’s being called “ChatGPT psychosis.”

Folks consumed by ChatGPT have experienced severe mental health crises, characterized by delusional breaks from reality. Though ChatGPT has been on the market for a little under three years, it’s already being blamed for divorces, job loss, homelessness, and in some cases, involuntary commitment to psychiatric facilities.

And that’s all without mentioning LLMs’ knack for hallucination, a not-so-minor problem where chatbots confidently make up information rather than admit they don’t know the answer. Newer, more capable models have actually proven more prone to these hallucinations than their predecessors, meaning the issue is likely only going to get worse as time goes on.

When it comes to potentially life-altering choices like who to fire and who to promote, you’d be better off rolling a die, because unlike with an LLM, at least you’d know the odds.

More on LLMs: OpenAI Admits That Its New Model Still Hallucinates More Than a Third of the Time
