Most vendors are mislabeling their products as “agentic AI,” setting unrealistic expectations around tools that are essentially copilots or intelligent automation with a chat interface, according to new research from HFS.
This “agentic-washing” — the gap between what is marketed and what is actually sold — has become the next big trust issue in enterprise AI. Vendors are rebadging copilots as “agents” to imply autonomy and business impact, according to the research authored by Hansa Iyengar, practice leader (BFS & IT Services) at HFS Research.
A Research and Markets report on AI agents projected the market to grow from $5.1 billion in 2024 to $47.1 billion by 2030, a CAGR of 44.8% over the period.
Surveying over 1,300 professionals to “learn about the state of AI agents”, the report found that 51% of respondents were already using AI agents in production, 63% of mid-sized companies had deployed agents in production, and 78% had active plans to integrate AI agents.
The HFS report said regulators on both sides of the Atlantic are already targeting false claims, setting up a collision between hype and compliance.
Gartner forecasts that 40% of enterprise applications are expected to feature task-specific AI agents by the end of 2026, up from less than 5% in 2025. These agents will evolve from AI assistants, currently embedded in most enterprise apps by end-2025, to autonomous, task-capable systems that enhance productivity, collaboration, and workflow orchestration.
Gartner predicts that agentic AI could account for around 30% of enterprise application software revenue by 2035, surpassing $450 billion.
“We are still seeing AI assistants being deployed which are agent washed,” Anushree Verma, senior director analyst, Gartner, told AIM.
She added that the rapid growth in popularity of agentic AI in India is largely driven by hype; adoption remains very low for now, with use cases showing little ‘AI agency’. Early examples, according to her, take the form of virtual assistant software architectures, which creates even further confusion.
“Customer service and knowledge management remain the top use cases which have advanced the level of ‘AI agency’ in these implementations. We do have some other emerging use cases, for example, SOC agents, Agents for SDLC, Simulation, etc,” she said.
Devil is in the Details
HFS clarifies the differences.
Copilots are assistants confined to a single app or workflow, triggered by a user, with limited memory and no autonomous planning or open tool choice.
AI agents are individual systems executing specific tasks with policies, telemetry, and rollback.
Agentic AI refers to orchestrated, autonomous systems that coordinate multiple agents, maintain context, and adapt dynamically to achieve broader business outcomes.
If a vendor’s AI can’t decompose goals, choose tools across systems, remember context, and recover from failure, HFS says the vendor is selling not agentic AI but an AI-assisted workflow.
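HFS’s four-capability bar can be sketched as a simple check. The capability names below mirror the report’s wording, but the function and data shape are illustrative, not an HFS artifact:

```python
# Illustrative sketch of HFS's bar for "agentic AI": a product must
# clear all four capabilities or it is an AI-assisted workflow.
# The function and dict shape are hypothetical, not from HFS.

AGENTIC_CAPABILITIES = (
    "decomposes_goals",
    "chooses_tools_across_systems",
    "remembers_context",
    "recovers_from_failure",
)

def classify(product: dict) -> str:
    """Label a product 'agentic AI' only if it clears all four checks."""
    if all(product.get(cap, False) for cap in AGENTIC_CAPABILITIES):
        return "agentic AI"
    return "AI-assisted workflow"

# A copilot bounded to one app and triggered by the user clears none.
copilot = {"remembers_context": False}
print(classify(copilot))  # -> AI-assisted workflow
```

The point of the conjunction is that missing even one capability drops the product out of the agentic category.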
The research referred to UiPath’s Autopilot and Automation Anywhere’s Co-Pilot to illustrate the rebadging trend.
Both products deliver productivity gains through text-to-automation or natural-language prompts, but they operate within bounded stacks, not open-world autonomy.
ServiceNow positions its AI Agents as skills-based orchestrators across IT and HR workflows, but again, scope is defined by policy guardrails and configured skills.
The three companies did not respond to AIM’s queries.
Verma explained that agentic AI refers to a class of systems developed using various architectures, design patterns, and frameworks, encompassing both single-agent and multi-agent designs. These systems are capable of performing unsupervised tasks, making decisions, and executing end-to-end processes.
AI agents, by contrast, are autonomous or semi-autonomous software entities that use AI techniques to perceive, make decisions, take actions, and achieve goals in their digital or physical environments.
“It effectively means that Agentic AI practice is used for creating AI agents,” she said.
Still an Aspiration
Most deployments today remain at Levels 1 and 2 of HFS’ “five levels of agentic maturity.” Copilots handle departmental tasks under human oversight. A smaller group reaches Level 3, where processes are coordinated across bounded systems.
Levels 4 and 5, where multi-agent systems own business outcomes and evolve with minimal human input, remain aspirational.
Roadmaps such as Intuit’s GenOS describe “done-for-you agentic experiences,” but HFS classifies them as emerging claims pending production-grade evidence.
The risks of overstatement are growing.
The US Federal Trade Commission launched “Operation AI Comply” in September 2024, warning that deceptive AI marketing falls under consumer-protection laws.
In parallel, the Council of Europe’s legally binding AI treaty requires lifecycle transparency, impact assessment, and oversight.
Enforcement has already begun. DoNotPay, which marketed itself as the “world’s first robot lawyer,” faces FTC action for deceptive autonomy claims and has been ordered to compensate customers.
Rytr, an AI writing assistant, enabled the mass production of fabricated reviews, in violation of consumer-protection standards.
Delphia and Global Predictions, which claimed to be the “first regulated AI financial advisor,” paid $400,000 in penalties after regulators found their claims misleading.
Check Before Subscribing
HFS recommends CIOs use its “two-gate Agentic Reality test” before buying into vendors’ claims:
Gate one asks whether the system demonstrates agency, goal decomposition, tool use, memory, policy guardrails, and telemetry.
Gate two tests readiness to scale, requiring multi-agent coordination, API execution, fraud prevention, compliance hooks, and lifecycle support.
If two or more Gate 1 items fail, buyers are looking at an assisted workflow, not an agent.
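The two-gate test, including the two-or-more-failures rule, can be sketched in code. The item names paraphrase the report, and the scoring function is a hypothetical illustration, not an HFS tool:

```python
# Illustrative scoring of HFS's two-gate Agentic Reality test.
# Item names paraphrase the report; the >=2 Gate 1 failures rule is
# from the text, but this function itself is a hypothetical sketch.

GATE_1 = ("agency", "goal_decomposition", "tool_use",
          "memory", "policy_guardrails", "telemetry")
GATE_2 = ("multi_agent_coordination", "api_execution",
          "fraud_prevention", "compliance_hooks", "lifecycle_support")

def agentic_reality_test(evidence: set) -> str:
    """Score a vendor claim against both gates, given demonstrated items."""
    gate1_failures = [item for item in GATE_1 if item not in evidence]
    if len(gate1_failures) >= 2:
        return "assisted workflow"  # not an agent
    if all(item in evidence for item in GATE_2):
        return "agent, ready to scale"
    return "agent, not ready to scale"

# One Gate 1 miss (policy_guardrails) and no Gate 2 evidence.
demo = {"agency", "goal_decomposition", "tool_use", "memory", "telemetry"}
print(agentic_reality_test(demo))  # -> agent, not ready to scale
```

Treating the gates as sets of demonstrated evidence keeps the burden of proof on the vendor: anything not shown counts as a failure.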
CIOs should also enforce claims contractually — write “agent” into agreements, demand telemetry, set governance thresholds, define KPIs, require architecture disclosure, and link payments to performance.
“The bottom line: if a vendor wants a premium for agentic AI, they must earn it with evidence,” HFS said.
“If a product can’t plan, pick tools across systems, remember context, and recover from failure, it’s a copilot. Label it, limit it, and buy useful assistance at assistant rates.”
Ashish Kumar, chief data scientist at Indium, said the tech works, but the skill gap is real. Agentic AI needs more than prompts and APIs; it requires thoughtful design, orchestration, modularity, and people who understand both software and business logic.
The post Enterprises Beware: Agent-Washing Clouds the Future of AI appeared first on Analytics India Magazine.