Aria Networks announces the general availability of its Deep Networking solution – designed from the ground up for the AI factory era to maximize Model FLOPs Utilization and token efficiency. A fundamentally new approach combining hardened SONiC, end-to-end telemetry, and intelligent agents across every layer of the stack.
Aria Networks Raises $125M to Build Networks that Think – Backed by Sutter Hill Ventures, Atreides Management, Valor Equity Partners, and Eclipse Ventures.
Gavin Baker of Atreides Management joins Aria Networks’ board, alongside Stefan Dyckerhoff of Sutter Hill Ventures and the founding team.
Today, Aria Networks announces the general availability of the Network that Thinks – the world’s first AI-native network, built from the ground up to maximize Token Efficiency. At the core of the network is Deep Networking, a fundamentally different approach to how networks operate.
Token Efficiency is the defining metric of the AI factory era and the single best proxy for whether an AI cluster is delivering on its investment. Token Efficiency directly relates to Model FLOPs Utilization (MFU) and cost per token – improvements in either translate directly into improvements in revenue. And, as tokens become the currency of intelligence, we empower operators to become the lowest-cost producers in the market – turning infrastructure efficiency into a competitive advantage.
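The MFU-to-cost-per-token relationship can be written down directly. The sketch below uses purely hypothetical figures (hourly cluster cost, peak FLOP/s, FLOPs per token) and assumes token throughput scales linearly with MFU:

```python
def cost_per_token(cluster_cost_per_hour, peak_flops, mfu, flops_per_token):
    """Dollars per token, assuming useful throughput = peak_flops * mfu."""
    tokens_per_hour = peak_flops * mfu * 3600 / flops_per_token
    return cluster_cost_per_hour / tokens_per_hour

# Hypothetical cluster: $2,000/hr, 10 exaFLOP/s peak, 2 TFLOPs per token.
base = cost_per_token(2000, 10e18, mfu=0.35, flops_per_token=2e12)
better = cost_per_token(2000, 10e18, mfu=0.40, flops_per_token=2e12)
print(f"{(1 - better / base):.1%} cheaper per token")  # MFU 35% -> 40%
```

Under these assumptions, cost per token falls in direct proportion to the MFU gain, which is why network-driven MFU improvements show up immediately in the economics.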
The network is at the center of this equation, not merely as a bottleneck but as a potential multiplier. When the network underperforms, it drags down every other component in the stack. When it is optimized, it lifts them all. While the network comprises only 10-15% of the total cluster cost, its impact is substantial. A mere 1% improvement in MFU recoups the entire cost of the network.
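The break-even claim above can be sanity-checked with back-of-the-envelope arithmetic. All figures below (cluster cost, network share, baseline MFU, lifetime output multiple) are illustrative assumptions, not Aria’s published numbers:

```python
# Illustrative break-even sketch: how an MFU improvement can offset the
# network's share of cluster cost. All figures are hypothetical.

cluster_cost = 500e6        # total cluster capex, USD (assumed)
network_share = 0.125       # network at ~10-15% of cluster cost (midpoint)
network_cost = cluster_cost * network_share

baseline_mfu = 0.35         # baseline MFU (assumed)
mfu_gain = 0.01             # +1 percentage point of MFU

# Relative increase in useful compute (and thus tokens produced):
relative_gain = mfu_gain / baseline_mfu   # ~2.9%

# Value of the extra output over the cluster's useful life, assuming its
# lifetime output is worth a multiple of capex (assumed 5x):
lifetime_output_value = cluster_cost * 5.0
recouped = lifetime_output_value * relative_gain

print(f"network cost:        ${network_cost / 1e6:.1f}M")
print(f"value of +1pt MFU:   ${recouped / 1e6:.1f}M")
print(f"covers network cost: {recouped >= network_cost}")
```

The exact break-even point depends heavily on the assumed baseline MFU and lifetime output multiple; the sketch only shows that a one-point MFU gain can plausibly be worth the network’s entire cost.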
Suboptimal network performance prevents the full realization of gains from every other infrastructure investment: in training, it affects how quickly gradients are synchronized; in disaggregated inference, it affects how efficiently KV caches are transferred and how seamlessly jobs are scheduled across thousands of xPUs. Inference clusters in particular are growing larger and more complex, introducing bigger networking challenges – not just for the backend, but for the frontend as well.
AI factories seek solutions that let them produce tokens more efficiently and at the lowest cost – enabling both the fastest production and the cheapest consumption of intelligence.
Aria was built to unlock this leverage. Deep Networking is our answer, a fundamentally different approach that turns the network from a constraint into a competitive advantage.
Legacy networking solutions treat telemetry as an afterthought and rely on static configurations that were designed for a different era. Deep Networking changes that, and is built on five pillars, all of which must be present to deliver the desired outcome:
- AI-optimized hardware and hardened SONiC. Aria’s switch platform, built from the ground up on AI-native SONiC, delivers leading 800GbE and 1.6T switching in liquid-cooled and air-cooled form factors.
- Fine-grained, end-to-end telemetry. 100–10,000x finer resolution than traditional tools, collected across switches, transceivers, and hosts in a single unified view.
- Intelligent agents at every layer. Specialized agents evaluate signals, extract insights, and take action at the appropriate resolution – from the switching ASIC all the way up to cloud orchestration.
- Networking expertise built in. Every agent and every decision is grounded in deep networking domain knowledge – the system doesn’t just see data, it understands what it means.
- Continuous updates. New capabilities are developed seamlessly and continuously, keeping the network at the forefront of performance for every new workload.
The combination of these five elements creates a flywheel: the more workloads the system sees, the smarter it gets – delivering a seamlessly optimized network.
Deep Networking is not just a technology architecture; it is a set of outcomes that operators experience from day one:
- Seamless, automatic network fine-tuning. The platform continuously fine-tunes every aspect of the networking fabric for the specific cluster it serves, without manual intervention – across routing, load balancing, congestion management, and failover – eliminating the manual, error-prone workflows that slow down traditional deployments.
- Intent-based configuration. Operators express what they need, and the platform configures the fabric accordingly.
- Real-time, adaptive performance optimization. The system doesn’t wait for a ticket or a threshold breach. It continuously evaluates network state and takes action in real time to keep accelerators productive and every token flowing. The network adapts to each workload, each topology, each failure condition, automatically.
- Agentic partnership with operators. Operators gain fine-grained telemetry data at their fingertips, are alerted to issues as they arise, can ask questions about any alert in natural language, and collaborate with Aria’s agents to devise strategies for resolving issues or optimizing performance. This is not a black box; it is a partnership.
- Embedded Field Deployment Engineers. Aria’s FDEs are not a professional services add-on. They are an extension of the Aria solution itself, embedded directly within the customer’s team, managing the full lifecycle from architecture to performance tuning, co-developing alongside them, and integrating the Aria network within the full AI factory stack.
These outcomes give operators a critical advantage: they can remain accelerator-agnostic and extract more value from their accelerator investments. Critically, they can scale their clusters linearly while maintaining peak MFU.
Alongside today’s platform announcement, Aria is pleased to announce that Gavin Baker, Managing Partner and CIO at Atreides Management LP, has joined the company’s board of directors, underscoring the conviction and strategic partnership behind Aria’s mission. Together with Mansour Karam, Subhachandra Chandra, and Stefan Dyckerhoff, they bring decades of networking expertise combined with a cutting-edge AI infrastructure focus. Aria Networks is also pleased to announce that Atreides Management, Valor Equity Partners, and Eclipse Ventures join Sutter Hill Ventures as investors in Aria Networks.
Ethernet has become the dominant fabric for new AI back-end deployments, driven by its openness, ubiquity, and multi-vendor scalability. Liquid cooling adoption is projected to reach 76% of AI servers this year as rack densities quickly approach 1MW. And the transition to 1.6T is accelerating faster than 800G ever did, with over 22 million ports expected to ship by 2027. Aria’s switch platform meets this moment, delivering leading 800GbE and 1.6T switching in liquid-cooled and air-cooled form factors with no vendor lock-in.
Aria Networks is poised to redefine how AI infrastructure is built, deployed, and optimized at scale. As demand for high-performance AI continues to accelerate, Aria Networks remains committed to pushing the boundaries of network intelligence by helping customers unlock greater efficiency, maximize accelerator performance, and drive down the cost of innovation in the AI factory era.
Aria Networks already has customer orders in hand and is actively deploying. For product inquiries, please contact sales@arianetworks.com.
The post Aria Networks Launches the Network that Thinks first appeared on AI-Tech Park.


