Cloud-based AI is fast becoming the backbone of digital transformation. However, a recent report from Tenable reveals a concerning pattern: Nearly 70% of cloud AI workloads carry at least one unremediated vulnerability. The rest aren’t necessarily safer; they just haven’t been properly audited yet.
Satnam Narang, senior staff research engineer at Tenable, drew attention to a quiet but systemic risk. He pointed to a widespread reliance on default service accounts in Google Vertex AI—77% of organisations continue to use these overprivileged Compute Engine identities. It’s not just a bad habit; it’s a risk multiplier.
Every AI service layered on top inherits this exposure, creating a cascading security debt few teams are equipped to handle.
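The fix Narang points to is straightforward in principle: run AI workloads under a dedicated, narrowly scoped identity rather than the default Compute Engine service account. Below is a minimal sketch using the google-cloud-aiplatform Python SDK, assuming its CustomJob.run(service_account=...) parameter; the project, region, image, and account names are illustrative placeholders, not values from the report.

```python
from google.cloud import aiplatform

# Illustrative values -- replace with your own project, region, and image.
PROJECT = "my-project"
REGION = "us-central1"

# A dedicated service account granted only what the job needs
# (e.g. roles/aiplatform.user plus read access to its training bucket),
# instead of the broad default Compute Engine service account.
TRAINING_SA = f"vertex-training@{PROJECT}.iam.gserviceaccount.com"

aiplatform.init(project=PROJECT, location=REGION)

job = aiplatform.CustomJob(
    display_name="train-model",
    worker_pool_specs=[{
        "machine_spec": {"machine_type": "n1-standard-4"},
        "replica_count": 1,
        "container_spec": {"image_uri": f"gcr.io/{PROJECT}/trainer:latest"},
    }],
)

# Passing service_account overrides the default identity for this job.
job.run(service_account=TRAINING_SA)
```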
And the problem doesn’t end with permissions. From misconfigured AI training buckets to vulnerable open-source components, the cloud AI stack is riddled with entry points for attackers and exit points for sensitive data.
When the Risk Is Built-In, Not Bolted On
Cloud AI environments are uniquely complex. They involve constantly shifting combinations of services, datasets, models, and access layers. But instead of adapting, many organisations still use legacy vulnerability management tools that rely on the Common Vulnerability Scoring System (CVSS). CVSS only measures technical severity, not the likelihood of real-world exploitation.
“Half the bugs it labels ‘High’ or ‘Critical’ rarely see real-world exploits,” Narang told AIM. Instead, he advocates for a risk-based model like Vulnerability Priority Rating (VPR), which combines threat intelligence, asset context, and exploit telemetry to predict which flaws are most likely to be weaponised.
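Tenable's VPR itself is proprietary, but the shift Narang describes can be illustrated with a simple heuristic: rank findings by exploit activity and asset context rather than by CVSS alone. The fields and weights below are assumptions made for the sketch, not the actual VPR formula.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve: str
    cvss: float              # technical severity only
    exploit_in_wild: bool    # threat-intelligence signal
    internet_exposed: bool   # asset context
    touches_training_data: bool

def risk_score(f: Finding) -> float:
    """Illustrative composite score: exploitability and context
    outweigh raw severity. Not the actual VPR algorithm."""
    score = f.cvss
    score += 4.0 if f.exploit_in_wild else 0.0
    score += 2.0 if f.internet_exposed else 0.0
    score += 2.0 if f.touches_training_data else 0.0
    return score

findings = [
    Finding("CVE-A", cvss=9.8, exploit_in_wild=False,
            internet_exposed=False, touches_training_data=False),
    Finding("CVE-B", cvss=6.5, exploit_in_wild=True,
            internet_exposed=True, touches_training_data=True),
]

# CVE-B (moderate CVSS, actively exploited, exposed, data-adjacent)
# outranks CVE-A (critical CVSS, but isolated and unexploited).
for f in sorted(findings, key=risk_score, reverse=True):
    print(f.cve, round(risk_score(f), 1))
```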
He believes that a training data leak that compromises a customer-facing AI model is more devastating than a high-CVSS bug in an isolated dev environment.
“Prioritisation must be risk-based: focus first on data that powers safety-critical or customer-facing models and on vulnerabilities with active exploit code.”
He emphasised the importance of risk context; without it, security teams might patch the wrong thing.
Identities and a Platform Approach to Save the Day
One of the most overlooked risks in cloud AI is identity sprawl. As human and machine accounts proliferate across on-prem and cloud systems, tracking who has access to what becomes almost impossible until something breaks. Dormant accounts with admin privileges, machine identities with excessive entitlements—these are not bugs; they are features of a rushed deployment strategy.
To tackle these challenges, Narang suggested, “Start by merging every human and machine identity on-prem and cloud into a single, authoritative directory so you can see exactly which accounts are federated, over-privileged, or lying dormant and enforce least-privilege access at scale.”
He added that organisations should implement AI-powered analytics on the network to assess the blast radius of each identity. Monitor entitlements, device health, authentication patterns, and misconfigurations. Then, identify and prioritise remediation actions such as adjusting roles, rotating keys, or enabling just-in-time elevation.
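As a rough illustration of that triage, the sketch below flags identities that combine broad roles with long dormancy, the pattern Narang singles out, and suggests a remediation. The identity records, role names, and thresholds are hypothetical; in practice this data would come from a consolidated directory and cloud audit logs.

```python
from datetime import datetime, timedelta

# Hypothetical consolidated inventory of human and machine identities.
identities = [
    {"name": "svc-legacy-etl", "roles": ["roles/owner"], "last_auth": datetime(2024, 1, 10)},
    {"name": "analyst-01", "roles": ["roles/viewer"], "last_auth": datetime(2025, 6, 1)},
]

ADMIN_ROLES = {"roles/owner", "roles/editor", "roles/iam.securityAdmin"}
DORMANCY_THRESHOLD = timedelta(days=90)

def remediation(identity: dict, now: datetime) -> str | None:
    """Return a suggested action for risky identities, else None."""
    dormant = now - identity["last_auth"] > DORMANCY_THRESHOLD
    privileged = bool(ADMIN_ROLES & set(identity["roles"]))
    if dormant and privileged:
        return "disable account, rotate keys, re-grant via just-in-time elevation"
    if privileged:
        return "review roles and scope down to least privilege"
    return None

now = datetime(2025, 7, 1)
for ident in identities:
    action = remediation(ident, now)
    if action:
        print(f"{ident['name']}: {action}")
```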
This approach empowers teams to address the most critical vulnerabilities swiftly, without disrupting business operations. Narang believes that zero-trust policies, conditional access, and real-time revocation become the safety rails. He warned that most companies still rely on patchwork solutions, using different tools for different clouds, leading to security blind spots.
Furthermore, he said, “Most often, organisations adopt a myriad of point solutions to address different security concerns in the cloud. This creates blind spots caused by data silos, as different tools are being used to assess different cloud environments.”
“Instead, organisations need a platform approach to tackle the growing risks of cloud and AI,” he said.
The Fallout of a Missed Misconfiguration
Sometimes, it’s not sophisticated attackers but simple oversights that lead to real-world breaches. Narang shared an incident from March 2023, when OpenAI disclosed a flaw in a Redis library (CVE-2023-28858) that allowed some ChatGPT Plus users to see fragments of other users’ conversation history and, in some cases, payment data.
It wasn’t a breach by an external actor, but it did expose the names, emails, credit card types, and expiry details of 1.2% of ChatGPT Plus subscribers.
This was caused by a low-level vulnerability in a widely used open-source component, combined with a lack of robust data isolation. In cloud AI, such scenarios are cautionary lessons.
Narang stressed that even minor bugs in supporting infrastructure can trigger large-scale privacy incidents. The more integrated and automated AI becomes, the greater the blast radius of each oversight.
Securing the Pipeline, Not Just the Output
When AIM asked for insights on protecting training and testing data, Narang said that it requires a ground-up rethink. He suggested treating every notebook, model artefact, feature store, and dataset as a monitored asset under a single inventory. These assets must then be classified by sensitivity—PII, IP, safety-critical—and assigned protections accordingly.
Encryption in transit and at rest is non-negotiable. Data buckets should be private by default, with access gated by short-lived credentials and narrow IAM policies. The future, according to Narang, lies in platforms that combine CNAPP (Cloud-Native Application Protection Platform) with DSPM (Data Security Posture Management), giving teams real-time insight into which datasets are internet-exposed, which accounts are overprivileged, and which vulnerabilities matter now.
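A minimal sketch of two of those controls with the google-cloud-storage Python client: enforcing public-access prevention on a training-data bucket, and handing out short-lived access via a signed URL instead of long-lived keys. The bucket and object names are placeholders, and generating signed URLs assumes credentials capable of signing (for example, a service-account key or impersonation).

```python
from datetime import timedelta
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-training-data")  # placeholder name

# Private by default: block any public grants on this bucket and
# require uniform, IAM-only access control.
bucket.iam_configuration.public_access_prevention = "enforced"
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()

# Short-lived access instead of standing credentials: a V4 signed URL
# scoped to a single object, expiring after 15 minutes.
url = bucket.blob("datasets/train.parquet").generate_signed_url(
    version="v4",
    expiration=timedelta(minutes=15),
    method="GET",
)
print(url)
```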
“To limit damage if a leak occurs, adopt privacy-preserving techniques—masked or synthetic data, differential privacy, strict versioning with immutable logs, and digital watermarking to prove provenance,” he said. More importantly, these controls need to be embedded in the MLOps toolchain itself—so security isn’t retrofitted, but inherited.
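One small sketch of such a control is masking direct identifiers before data ever reaches a training pipeline. The column names and salt handling below are illustrative only; real deployments would keep the salt in a secret manager and layer formal differential-privacy or synthetic-data tooling on top.

```python
import hashlib
import os

# Illustrative: in production, fetch the salt from a secret manager and rotate it.
SALT = os.environ.get("MASKING_SALT", "rotate-me")

def mask(value: str) -> str:
    """Replace a direct identifier with a salted, irreversible token,
    so records can still be joined but raw PII never enters the training set."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

record = {"email": "jane@example.com", "plan": "pro", "tickets_opened": 4}
masked = {**record, "email": mask(record["email"])}
print(masked)
```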
Cloud AI Doesn’t Just Need Speed; It Needs Safety
Cloud AI workloads are scaling fast, but many are running on insecure defaults, misconfigured identities, and blind trust in legacy tools.
As Narang’s insights suggest, cloud security must evolve alongside AI. Organisations that embed visibility, automation, and risk-based prioritisation into their cloud AI strategies will be better equipped to defend themselves.