Palo Alto Networks has published new research highlighting a significant vulnerability in the Vertex AI Agent Engine on Google Cloud Platform (GCP) that could allow attackers to access sensitive data and compromise enterprise cloud environments.
Conducted by its Unit 42 research team, the study reveals that the artificial intelligence agents deployed in Vertex AI ship with excessively broad default permissions. Malicious actors can exploit this configuration to turn the agents into “double agents”: insider threats capable of covertly exfiltrating information.
A security challenge in the age of AI
The report warns that these risks arise amid the accelerated adoption of AI agents, where security becomes a critical factor. According to the report, 90% of organizations already face pressure to relax their access controls in order to implement AI-based automation. This creates a new attack surface in which autonomous agents can become critical risk vectors if they are not managed under strict security principles.
The findings highlight that, although AI agents are key tools in digital transformation, deploying them without adequate controls can pose a significant risk to organizations.
Among the most relevant discoveries of the Unit 42 investigation are:
• Privilege escalation: Researchers compromised a service agent (P4SA) by leveraging its excessive default permissions, gaining expanded access within the cloud environment.
• Unrestricted data access: The compromised agent could read all data stored in the customer’s Google Cloud Storage buckets.
• Internal resource exposure: The vulnerability also allowed access to restricted Google repositories and the download of key Vertex AI engine container images, revealing details of the internal infrastructure.
• Latent risk in Google Workspace: Broad OAuth 2.0 permissions could, in certain scenarios, grant access to services such as Gmail or Drive, expanding the scope of an attack.
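The Workspace risk above comes down to the OAuth 2.0 scopes an agent is granted. As a minimal sketch, the check below flags scopes that fall outside a least-privilege allowlist; the scope URLs are real Google OAuth scopes, but the allowlist policy and function names are assumptions for illustration, not part of the Unit 42 report.

```python
# Hypothetical audit: flag requested OAuth 2.0 scopes that exceed a
# least-privilege allowlist before granting them to an AI agent.
# The allowlist below is an assumed example policy.
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/cloud-platform.read-only",
}

# Scopes broad enough to reach Gmail, Drive, or the whole project,
# the kind of access the report warns about.
BROAD_SCOPES = {
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/cloud-platform",
}

def audit_scopes(requested):
    """Return requested scopes that are broad or not on the allowlist."""
    return sorted(
        s for s in requested
        if s in BROAD_SCOPES or s not in ALLOWED_SCOPES
    )

flagged = audit_scopes([
    "https://www.googleapis.com/auth/cloud-platform.read-only",
    "https://mail.google.com/",
])
print(flagged)  # → ['https://mail.google.com/']
```

A review like this would run before the agent is deployed, so a Gmail-wide scope never reaches production unnoticed.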
Mitigation and recommendations
Palo Alto Networks shared its findings with Google, which has updated the official Vertex AI documentation to provide greater transparency about the use of resources, accounts, and agents. The change aims to help customers understand the risks of default configurations and the importance of proactive security measures.
Given this scenario, Unit 42 recommends that organizations integrate AI-specific security reviews into their processes, including permission validation, access restriction, and risk assessment before deployment in production.
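The permission-validation step recommended above can be sketched as an offline review of a project's IAM policy, for example one exported with `gcloud projects get-iam-policy PROJECT --format=json`. The role names below are real GCP basic roles, but the sample policy, function name, and review policy are assumptions for illustration.

```python
# A minimal pre-deployment IAM review, assuming a policy dict in the
# shape produced by `gcloud projects get-iam-policy --format=json`.
# Basic roles like roles/editor grant very broad access, the kind of
# excessive default permission the research describes.
OVERLY_BROAD_ROLES = {"roles/owner", "roles/editor"}

def find_broad_bindings(policy):
    """Yield (role, member) pairs where a service account holds a broad role."""
    for binding in policy.get("bindings", []):
        role = binding.get("role")
        if role in OVERLY_BROAD_ROLES:
            for member in binding.get("members", []):
                if member.startswith("serviceAccount:"):
                    yield role, member

# Hypothetical sample policy for demonstration.
sample_policy = {
    "bindings": [
        {"role": "roles/editor",
         "members": ["serviceAccount:agent@example.iam.gserviceaccount.com"]},
        {"role": "roles/storage.objectViewer",
         "members": ["user:dev@example.com"]},
    ]
}

for role, member in find_broad_bindings(sample_policy):
    print(f"review before deployment: {member} holds {role}")
```

Flagged bindings would then be narrowed to predefined or custom roles scoped to only the resources the agent actually needs.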
