Is AI-led Vishing the Smartest Scam Yet?

A 2025 CrowdStrike report found that voice phishing, often called ‘vishing’, rose by 442% in 2024. Vishing is a cyberattack that uses phone calls or voice messages to manipulate users into providing sensitive information.

Shedding light on how vishing works in a blog post, Stephanie Carruthers, IBM’s global lead of cyber range and cyber crisis management, notes that as systems become more secure, attackers are shifting their focus to people. 

Unlike software, people can’t be patched or updated. And it’s much harder to ignore a ringing phone than to delete a suspicious email.

The tactic is proving effective, and AI is changing the game for both attackers and defenders. 

Why Vishing Works So Well

Carruthers has run countless social engineering tests where her team calls a company’s help desk pretending to be an employee. According to her, they’ve succeeded every single time. These exercises are designed to help organisations find and fix weaknesses. 

Real attackers use the same technique to do real harm—stealing data, installing malware, or tricking employees into sending money. 

“If you look at a lot of major data breaches now, you’ll see that it was actually a phone call that started the breach,” Carruthers said. 

Once a scammer can access an employee’s account, they can easily move through a company’s systems unnoticed.

This risk worsens because many employees now use personal smartphones for work tasks. These devices often lack the same security controls as corporate systems, giving attackers more ways to get in.

AI, The Catalyst for Vishing Scams


The arrival of AI tools is making vishing even more dangerous. Deepfake audio can now mimic authentic voices almost perfectly, allowing scammers to impersonate trusted figures—like a manager, a CEO or even a family member.

Sooraj Sathyanarayanan, a security researcher, told AIM, “I’ve personally tested the voice capabilities of ChatGPT, Gemini, and Grok. All of them are scary good. With just a few seconds of audio, these tools can clone a voice and hold a full conversation — tone, accent, emotions, etc.”

He added, “This is exactly what attackers are going to exploit. We’re not talking about robocalls anymore. We’re talking about AI-powered deepfake voices that can call your parents, your boss, your bank and sound exactly like you.”

Sathyanarayanan explained that LLMs elevate the danger of vishing scams because they can think and improvise on the fly. Their capacity to improvise, respond to inquiries, and guide conversations mirrors human interaction. When paired with stolen voice data and personal details readily available on social media, this creates an ideal tool for social engineering attacks.

This isn’t just theoretical. A security researcher from Palo Alto Networks was recently targeted with an AI-generated voice that sounded like his daughter.

As per IBM’s blog post, Carruthers herself went head-to-head with an AI-powered chatbot built to carry out vishing scams. She expected the bot to fumble—but it didn’t. It performed so well that she feared it might win.

“When I heard it start making calls, I was like, ‘Oh no,’” she mentioned. The chatbot used different voices and styles, and convinced people to take real-world actions, like visiting certain websites or sharing information.

What Can Be Done?

Sathyanarayanan told AIM that individuals should establish passphrases with close contacts, something only the genuine caller and receiver would know.

In addition, he advised users to verify identities through alternate channels before acting on requests involving money, credentials, or other sensitive information.

He also mentioned that companies should train their employees and not rely on caller ID. Companies can go a step further and build zero trust into communication workflows, Sathyanarayanan noted.
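The advice above can be illustrated with a minimal sketch. This is a hypothetical example, not a real product workflow: the keyword list, function names, and decision logic are all assumptions, showing how a receiver might flag a sensitive request, check a pre-agreed passphrase, and fall back to out-of-band verification when the check fails.

```python
import hmac

# Hypothetical keyword list for flagging requests that involve
# money, credentials, or other sensitive information.
SENSITIVE_KEYWORDS = {"password", "transfer", "wire", "credentials", "otp"}

def is_sensitive(request: str) -> bool:
    """Return True if the request mentions money or secrets."""
    return any(word in request.lower() for word in SENSITIVE_KEYWORDS)

def verify_caller(spoken: str, shared: str) -> bool:
    """Compare the spoken passphrase against the pre-agreed one.

    hmac.compare_digest gives a constant-time comparison, so response
    timing does not leak how much of the passphrase matched.
    """
    return hmac.compare_digest(spoken.strip().lower(), shared.strip().lower())

def handle_request(request: str, spoken: str, shared: str) -> str:
    """Decide whether to act on a caller's request."""
    if not is_sensitive(request):
        return "proceed"
    if verify_caller(spoken, shared):
        return "proceed"
    # Failed challenge: confirm through an alternate channel
    # (e.g. call back on a known number) before acting.
    return "verify out-of-band"
```

A convincing voice alone never clears the check here: a sensitive request without the passphrase is routed to out-of-band verification, which is the zero-trust posture Sathyanarayanan describes.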

The post Is AI-led Vishing the Smartest Scam Yet? appeared first on Analytics India Magazine.
