BeyondTrust has released new research from its Phantom Labs team revealing a 466.7% year-over-year increase in AI agents operating inside enterprise environments. The findings, surfaced through BeyondTrust’s Identity Security Insights on the Pathfinder Platform, point to the rapid emergence of what researchers call a “shadow AI workforce”—AI-driven identities deployed across cloud services and enterprise applications without centralized governance or clear visibility into the privileges they hold.
“Organizations are introducing thousands of new machine identities through AI agents, often without realizing the level of access those agents inherit,” said Fletcher Davis, Director of Research for BeyondTrust Phantom Labs. “In many environments we studied, AI agents were operating with privileges comparable to human administrators. As organizations move from chatbot use cases to more autonomous agentic AI, the identity attack surface will only expand.”
Key Findings
Phantom Labs researchers identified several concerning patterns across assessed environments:
- Shadow AI agents operating outside formal IT governance, often deployed through low-code platforms or embedded enterprise applications
- AI agent identities that appear appropriately governed in static reports but can elevate privileges in unexpected ways during use
- Machine and AI identities outnumbering human identities by orders of magnitude, a gap that continues to widen
- Long-lived API keys and static credentials used by AI agents without rotation policies or lifecycle controls
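The last finding—static credentials with no rotation policy—is straightforward to audit. As a minimal illustrative sketch (not BeyondTrust tooling), the helper below flags active access keys older than a rotation window, assuming key metadata shaped like the entries AWS IAM returns for a user's access keys; the 90-day window is a placeholder for whatever your credential policy specifies:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rotation window; adjust to your organization's policy.
MAX_KEY_AGE = timedelta(days=90)

def stale_keys(key_metadata, now=None, max_age=MAX_KEY_AGE):
    """Return IDs of active keys whose CreateDate exceeds max_age.

    key_metadata: iterable of dicts shaped like IAM access-key
    metadata entries, e.g.
    {"AccessKeyId": "AKIA...", "CreateDate": datetime, "Status": "Active"}.
    """
    now = now or datetime.now(timezone.utc)
    return [
        k["AccessKeyId"]
        for k in key_metadata
        if k.get("Status") == "Active" and now - k["CreateDate"] > max_age
    ]
```

In practice the metadata would come from an inventory of machine and AI-agent credentials; the same age check applies to any static secret store.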
This growth is being driven by rapid adoption of AI-enabled enterprise platforms, including Microsoft Copilot and Azure AI Foundry, AI capabilities embedded in Salesforce and ServiceNow, AI-powered coding assistants, and AI features within collaboration tools such as Jira and Confluence. Some organizations already operate well over 1,000 AI agents, many of which security teams were not fully aware existed.
Unlike traditional service accounts, AI agents can inherit permissions from users or service roles, interact with APIs and enterprise tools, and act autonomously across systems. That combination of autonomy and privilege creates attack paths that traditional security tools were not designed to detect. BeyondTrust’s Identity Security Insights is purpose-built to uncover these hidden identity relationships, map real-world attack paths, and provide actionable guidance to reduce risk.
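The privilege-inheritance chains described above can be modeled as a graph of identities and the roles they can assume. As an illustrative sketch only (not the Identity Security Insights engine), a breadth-first search over such a graph shows how an attack path from an AI agent to an administrative role would be surfaced; the identity names are hypothetical:

```python
from collections import deque

def attack_path(edges, start, target):
    """Breadth-first search over an identity graph.

    edges: dict mapping an identity to the identities or roles it can
    assume or inherit (e.g. an AI agent inheriting a service role).
    Returns the shortest path from start to target, or None if no
    path exists.
    """
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

For example, with `edges = {"copilot-agent": ["svc-automation"], "svc-automation": ["iam-admin"]}`, the search returns the three-hop path from the agent to the admin role—exactly the kind of indirect privilege relationship a static permissions report would miss.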
Building on Ongoing Phantom Labs Research
These findings build on a growing body of Phantom Labs research into how AI platforms introduce identity and privilege risks:
In earlier work, researchers demonstrated a real-world breach scenario involving Microsoft Copilot Studio where AI agents leaked secrets and granted unauthorized access to cloud infrastructure despite existing security controls.
Separate research into AWS Bedrock uncovered how creating long-term API keys can automatically generate IAM users with overly broad permissions, and led to the release of bedrock-keys-security, an open-source tool for detecting and blocking those exposures (available on GitHub).
Free AI Security Posture Assessment
BeyondTrust’s Identity Security Risk Assessment (ISRA), powered by Identity Security Insights, gives organizations visibility into AI agent risk as part of a broader identity security posture analysis. The assessment connects across enterprise identity systems and AI agent infrastructure to identify unmanaged AI identities, detect shadow AI, and map cross-domain privilege paths with prescriptive remediation guidance aligned to MITRE ATT&CK.