Palo Alto Networks Unit 42 Identifies Security Risks in Google Cloud Vertex AI Agents
Unit 42, Palo Alto Networks’ threat intelligence team, has uncovered a set of security risks in Google Cloud’s Vertex AI platform that could allow malicious or compromised AI agents to access sensitive data and cloud resources beyond their intended scope.
The research focuses on Vertex AI Agent Engine, a platform used to build and deploy autonomous AI agents capable of interacting with enterprise systems, data and services.
At a high level, Unit 42 demonstrated how an attacker could create a seemingly legitimate AI agent that secretly extracts its own credentials and uses them to gain broader access within a cloud environment. This behavior effectively turns the agent into a “double agent,” operating as both a trusted tool and a potential insider threat.
Overview of the Attack Mechanism
The issue stems from how permissions are assigned to AI agents by default. Unit 42 found that service accounts linked to deployed agents were granted overly broad permissions, enabling access to resources beyond what was strictly required. By exploiting this, researchers were able to extract credentials, as illustrated in the sketch after this list, and use them to:
· Access data stored in cloud storage within the customer environment
· Retrieve sensitive deployment information and configurations
· Gain visibility into restricted internal components supporting the AI platform
Importantly, this was not a single vulnerability, but rather a chain of misconfigurations and design gaps that, when combined, expanded the agent’s effective access.
Broader Security Implications
As organizations increasingly adopt AI agents to automate workflows and decision-making, these systems are being granted high levels of trust and access. This research highlights a critical shift in the threat landscape:
· AI agents can act autonomously, often without continuous human oversight
· If compromised, they behave like trusted insiders, not external attackers
· Over-permissioned agents can significantly expand the attack surface
The findings underscore the risks of deploying AI systems without strict adherence to the principle of least privilege.
Mitigation and Industry Response
Palo Alto Networks responsibly disclosed the findings to Google. In response, Google updated its documentation to provide greater clarity on how Vertex AI uses service accounts and permissions.
The research highlights the need for organizations to institutionalize rigorous AI security reviews as part of their deployment lifecycle. This includes enforcing least-privilege access through dedicated, custom service accounts (the Bring Your Own Service Account, or BYOSA, approach), carefully validating permission boundaries, and restricting OAuth scopes to prevent unnecessary access. It also underscores the importance of treating AI agent deployment with the same level of scrutiny as production code, including conducting thorough security reviews prior to deployment.
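As one illustration of what such a pre-deployment review step could look like, the sketch below uses Google Cloud’s Resource Manager testIamPermissions API to check which permissions the agent’s service account actually holds and to flag anything beyond an approved baseline. The project ID and permission lists are hypothetical placeholders; this is a minimal sketch, not an implementation prescribed by the research.

```python
# Sketch of a pre-deployment least-privilege check: ask Cloud Resource
# Manager which of a candidate permission set the current credentials
# (e.g. the agent's dedicated service account) actually hold on the
# project, and fail the review if anything outside the approved
# baseline is granted. All names below are illustrative placeholders.
import google.auth
from google.auth.transport.requests import AuthorizedSession

PROJECT_ID = "my-agent-project"      # placeholder
ALLOWED = {"storage.objects.get"}    # approved baseline (placeholder)
CANDIDATES = [                       # permissions worth probing for
    "storage.objects.get",
    "storage.objects.list",
    "storage.buckets.list",
    "iam.serviceAccounts.getAccessToken",
]

credentials, _ = google.auth.default()
session = AuthorizedSession(credentials)

url = (
    "https://cloudresourcemanager.googleapis.com/v1/"
    f"projects/{PROJECT_ID}:testIamPermissions"
)
resp = session.post(url, json={"permissions": CANDIDATES})
resp.raise_for_status()
granted = set(resp.json().get("permissions", []))

excess = granted - ALLOWED
if excess:
    raise SystemExit(f"Over-permissioned service account, revoke: {sorted(excess)}")
print("Service account is within the approved permission baseline.")
```

Run under the agent’s dedicated (BYOSA) service account, a check like this turns least privilege from a policy statement into an enforceable gate in the deployment pipeline.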
As AI agents become more autonomous, ensuring tighter control over their permissions and behavior will be critical to minimizing risk. Solutions such as Prisma AIRS, Cortex AI-SPM, and Cortex Cloud Identity Security can support organizations in addressing this emerging AI security gap.
The findings point to a broader architectural challenge: as AI systems become more deeply integrated into enterprise infrastructure, security risks increasingly emerge from how components interact, rather than from isolated software flaws.
Even when individual systems function as intended, their combined behavior can introduce unintended exposure. As AI adoption accelerates, organizations will need to rethink how they manage trust, permissions and isolation, particularly for autonomous systems that can act on their behalf.