The SF-based startup, founded by former NVIDIA, Cohere, and DeepMind researchers, is working to close a widening security gap as companies race to put autonomous agents into production
General Analysis, a company building security infrastructure for agentic AI, today announced $10 million in seed funding led by Altos Ventures, with participation from 645 Ventures, Menlo Ventures, Y Combinator, and additional strategic investors and angels. The company was founded by former NVIDIA, Cohere, and DeepMind researchers and is working to close a widening security gap as companies race to put autonomous agents into production. General Analysis is already working with enterprise customers in support and finance whose products and workflows are used by hundreds of millions of users.
In March, General Analysis’s adversarial agent convinced 50 live customer service AI agents to give away more than $10 million in fabricated perks — million-dollar gift cards, years of free home security, whatever it could extract — in roughly three minutes per target. Out of 55 bots tested, only five refused. That kind of stress test, along with the defenses it informs, is what General Analysis runs for enterprise customers before their agents reach production.
General Analysis was founded by Rez Havaei, previously an AI researcher at Cohere and NVIDIA, with Maximilian Li, AI safety researcher from Harvard, and Rex Liu, machine learning researcher at Caltech. The trio built the company on the view that securing AI agents is a distinct technical discipline requiring skills and methods different from traditional cybersecurity. Agentic systems behave non-deterministically, and their failures cannot be anticipated by reading code alone.
Across nearly every industry, enterprises are deploying AI agents into increasingly consequential workflows. The upside is large enough that delaying deployment is often not a serious option, and the burden of making these systems safe has largely fallen on security teams. But there is not yet a mature playbook for securing agentic AI.
Last summer, researchers at General Analysis showed that a widely used Supabase integration in Cursor, a code generation AI agent, could allow an internal agent to be hijacked by a single malicious support ticket, tricking it into leaking a complete private database. Simon Willison, the British engineer who coined the term “prompt injection,” cited the finding as a case of the “lethal trifecta” — an AI system that simultaneously holds private data, ingests untrusted content, and can communicate externally.
Security teams are often stuck choosing between agents that are locked down to the point of uselessness and agents whose risks they cannot actually see. “We hear from security teams that they want agents that are secure by design,” said Rez Havaei, CEO of General Analysis. “What that often turns into in practice is a stack of isolation layers and ad hoc context restrictions that makes a system feel more controlled. Those measures either fail to eliminate the underlying vulnerability or constrain the agent enough to limit its usefulness. The problem is that feeling safer and being safer are not the same thing.”
“Our position is that security for AI systems is an empirical problem. It has to be grounded in rigorous measurement of how those systems behave under realistic and adversarial conditions. You cannot prove an agent is safe,” said Maximilian Li, co-founder of General Analysis. “You can only measure how often it fails, and how badly, and drive both numbers down.”
From that premise, General Analysis aims to help enterprises configure defenses through realistic empirical measurement. Different defensive layers carry different tradeoffs, and there is no universal configuration that makes every agentic system robust. The company combines adversarial evaluations with a broad defensive toolkit to identify the failure modes present in a deployment, measure the effect of different interventions, and help customers determine which configurations materially reduce risk.
“One advantage of agents is that they are much easier to study systematically than the human workflows they are beginning to replace,” said Rex Liu, co-founder of General Analysis. “Many of those workflows were never especially secure to begin with, and their failures are often hard to observe or improve rigorously. But as those workflows become agentic, they also become more measurable and more improvable — which creates a path for businesses to become more secure in practice than they were before.”
“Agentic systems represent a paradigm shift in security. Safety and security in the AI era demand continuous adversarial testing rooted in deep research, not static rule sets,” said Tae Yoon, Partner at Altos Ventures. “Rez, Rex, and Max are exactly the kind of team this moment calls for: technically brilliant, deeply scrappy, and moving incredibly fast. We’re proud to lead this round and partner with them from the earliest days.”
To get in touch with the General Analysis team, reach out at info@generalanalysis.com.
The post General Analysis Raises $10M in Seed Funding to Secure Agentic AI first appeared on AI-Tech Park.


