
Introduction
The rapid advancement of Large Language Models (LLMs) and Generative AI (GenAI) has ushered in a new era of technology. AI and LLMs are being embedded in every product, featured on every software roadmap, and headlining every industry analyst presentation. Now, the AI revolution is impacting not just the processing of information but also automation, where AI is no longer just a tool but an active participant in enterprise workflows. This shift is driven by Agentic AI: AI systems that can function autonomously, make decisions, retrieve real-time data, and execute complex actions across the enterprise environment. While these AI agents promise tremendous productivity gains, they also introduce significant identity security challenges that organizations must address proactively.
In this post, we explore the two primary flavors of AI agents that we expect to see in enterprises, their benefits and risks, and why a robust identity security framework is critical to managing them effectively.
Understanding AI Agents: Key Characteristics
AI agents differ from traditional LLM-based chatbots (like ChatGPT) in several key ways. AI agents have:
- Goal-driven autonomy: Unlike simple automation scripts that follow direct and explicit commands, AI agents pursue objectives independently, continuously adapting based on inputs and results at each stage.
- Real-world connectivity: These agents will integrate with multiple enterprise systems, retrieving, processing, and writing real-time data.
- Decision-making capabilities: AI agents analyze data, apply logic, and execute tasks without constant human oversight.
- Cross-application orchestration: Leveraging LLMs, they operate across multiple enterprise applications, blurring traditional application and system-specific security boundaries.
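The goal-driven, adaptive loop these characteristics describe can be sketched in miniature. This is a hedged toy illustration, not any vendor's implementation: the planning step and "tool" below are stand-ins for LLM-driven decision-making and real enterprise integrations.

```python
# Minimal sketch of a goal-driven agent loop: the agent is given an
# objective, not a script, and repeatedly decides its next action based
# on the current state. All names here are hypothetical.

def run_agent(goal_amount: int, max_steps: int = 10) -> list:
    """Toy agent whose goal is to accumulate `goal_amount` units of stock."""
    state = {"stock": 0}
    actions_taken = []

    def decide(state):
        # Stand-in for an LLM planning step: compare state to the goal.
        if state["stock"] < goal_amount:
            return "order_stock"
        return "done"

    # Stand-in for a real enterprise integration (e.g., a procurement API).
    tools = {"order_stock": lambda s: s.update(stock=s["stock"] + 3)}

    for _ in range(max_steps):
        action = decide(state)
        if action == "done":
            break
        tools[action](state)          # execute the chosen tool
        actions_taken.append(action)  # audit trail of agent actions
    return actions_taken

print(run_agent(7))  # the agent adapts: it orders until the goal is met
```

The point of the sketch is the shape of the loop: the human supplies a goal, and the agent decides, acts, and re-evaluates until the goal is met, which is exactly what makes its access footprint hard to predict in advance.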
These characteristics make AI agents incredibly powerful but also exponentially more complex to secure.
The Rise of AI Agents in the Enterprise
Organizations are embedding AI agents into both customer-facing products and internal workforce-facing operations. We expect initial use cases to include:
- Software development: AI agents will generate, debug, optimize, and potentially deploy code automatically.
- Marketing and content creation: They can draft content, run A/B testing, optimize campaigns, and analyze audience engagement.
- Customer support: Building on the already extensive use of LLM-driven chatbots, agentic AI will extend the workflow to making customer account changes, ordering replacement parts, processing refunds, and upselling subscriptions.
- Supply chain management: In addition to optimizing logistics and forecasting demand, agents will enable placing orders with suppliers, checking inventory, and leveraging voice mode to enable automated connections to vendors without deep technical infrastructure.
Nevertheless, early missteps in enterprise LLM deployments tend to stick in memory. Case in point: Air Canada's AI customer service mishap, in which a chatbot deployed by Air Canada provided incorrect information about bereavement fares, leading to a customer dispute. Air Canada then tried to disclaim responsibility for the chatbot's incorrect answer, and lost in small claims court. The incident certainly highlights the risks of such deployments, but the bigger mistake would be concluding that the technology isn't ready for prime time. LLMs are advancing at a pace rarely seen, and the chatbot interaction in that case took place in 2022, an eternity in the evolutionary timelines of LLMs.
A common truism is that AI today is the worst it will ever be. The future is coming without doubt, and AI and AI agents will be a significant part of the enterprise landscape.
The Two Primary Flavors of Enterprise AI Agents
In thinking more deeply about the implications of AI agents, we should distinguish between two “flavors.”
1. Enterprise-Managed AI Agents
- These are typically top-down, organization-approved AI implementations.
- They connect via APIs and service accounts to integrate seamlessly with enterprise workflows.
- Examples include Google Agents, which automate enterprise decision-making across multiple applications, and Goldman Sachs' "GS AI Assistant."
2. Employee-Managed AI Agents
- These agents are individually adopted by employees, often without explicit organizational approval.
- They typically operate within a user’s browser session and leverage employee credentials for access.
- These agents will be able to automate workflows even in systems that require interactive MFA, typically a barrier to API-based authentication.
- MFA services like Windows Phone Link and Okta Verify have modules that enable strong authentication actions directly from a desktop or laptop.
- Examples include OpenAI Operator and Anthropic’s “Computer Use” mode that an employee downloads and deploys on their company or personal computer.
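The structural difference between the two flavors can be made concrete in terms of identity. In this hedged sketch (all account and scope names are hypothetical), an enterprise-managed agent carries its own service-account identity with explicitly granted scopes, while an employee-managed agent inherits whatever the employee's session allows:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the identity each agent flavor presents and the
# permissions it effectively holds. Names and scopes are illustrative.

@dataclass
class EnterpriseAgent:
    service_account: str                              # dedicated non-human identity
    granted_scopes: set = field(default_factory=set)  # explicit, reviewable API scopes

    def effective_permissions(self, _employee_perms):
        return self.granted_scopes                    # only what was granted

@dataclass
class EmployeeAgent:
    employee: str                                     # acts AS the employee

    def effective_permissions(self, employee_perms):
        return employee_perms                         # inherits the full session

employee_perms = {"crm:read", "crm:write", "hr:read", "finance:approve"}

managed = EnterpriseAgent("svc-support-agent", {"crm:read", "crm:write"})
shadow = EmployeeAgent("alice")

print(managed.effective_permissions(employee_perms))  # scoped subset
print(shadow.effective_permissions(employee_perms))   # everything Alice has
```

The sketch shows why the governance problems differ: the enterprise-managed agent's footprint is whatever was explicitly granted and can be reviewed, while the employee-managed agent's footprint is the employee's entire entitlement set.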
Of course, there will be many hybrid solutions across these two types. We expect the "Enterprise Personal Assistant" AI agent to become a common model: the enterprise-approved version that helps each employee do their job.
The Productivity-Security Tradeoff
When considering how to treat AI agents in the enterprise, management teams are faced with a familiar tradeoff between productivity and security.
Productivity Gains
- AI agents enable employees and enterprises to automate complex tasks at unprecedented speed, steadily reducing human effort. In some ways, AI agents will start to displace enterprise applications, bypassing the need to code specific workflows, integrations, and granular policies. In theory, you can tell the agent the end goal and let it figure out how to get there.
- Individual employees benefit from AI assistance, increasing their efficiency and ability to do their jobs. Individuals are quickly realizing that they need to leverage AI to keep pace with their peers and to keep their skills relevant in the workforce.
Security Risks
A number of specific security risks are front-of-mind with AI agents, including:
- Data confidentiality is front-and-center, as feeding company information into LLM training and inference is essential for getting good results.
- Data integrity of key operational and compliance-governed metrics is a growing concern, as agents will increasingly be able to execute transactions and to write and edit data in key systems.
- Liability and governance of AI-driven decision-making introduce complexity in tracking and auditing actions, and ultimately raise the question of how responsibility and liability are divided among the employee, the company, and the technology providers.
Ultimately, the promise and rapidly accelerating capabilities of AI agents will be too powerful to ignore and too advantageous not to adopt. The best path is to prepare early, manage, and guide how they are used.
Identity Security Challenges
More specifically, agentic AI brings identity security challenges into focus, in different ways depending on the flavor.
Challenges with Enterprise-Managed AI Agents
- Least Privilege enforcement becomes complex – organizations will aspire and push to make AI agents as "general purpose" as feasible rather than building up a fragmented set of tools. General-purpose AI agents will require broad permissions across systems because their use cases are less well-defined. This makes defining Least Privilege difficult: it is not exactly clear what the agents will be doing.
- Separation of Duties (SoD) concerns – similarly, when general-purpose AI agents hold different roles for different business purposes across applications, they can create compliance and security loopholes in SoD.
- Dynamic nature of AI – The landscape and use cases for AI agents are changing fast and only expected to accelerate. As LLMs evolve and expand, static security policies become impractical to define and difficult to enforce.
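The SoD concern above can be made tangible: toxic-combination checks that already exist for human identities apply just as well to an agent's accumulated entitlements. A minimal sketch, with hypothetical entitlement names and toxic pairs:

```python
# Hypothetical toxic combinations: pairs of entitlements that, held by a
# single identity (human or agent), violate Separation of Duties.
TOXIC_PAIRS = [
    {"ap:create_vendor", "ap:approve_payment"},   # invoice fraud risk
    {"code:deploy_prod", "code:approve_review"},  # self-approved deploys
]

def sod_violations(entitlements: set) -> list:
    """Return every toxic pair fully contained in one identity's grants."""
    return [pair for pair in TOXIC_PAIRS if pair <= entitlements]

# A "general purpose" agent accumulates roles across business functions:
agent_grants = {"ap:create_vendor", "ap:approve_payment", "crm:read"}
print(sod_violations(agent_grants))
```

The check itself is ordinary identity governance; what changes with general-purpose agents is that a single identity plausibly ends up holding entitlements from many business functions at once, making such collisions the norm rather than the exception.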
Different challenges emerge for the Employee-Managed AI Agents:
Challenges with Employee-Managed AI Agents
- Over-permissioning risks – Employees may grant AI assistants excessive access for convenience. In a world of Single Sign-On (SSO) solutions like Okta and Microsoft Entra ID, the easiest way to let an agent work on my behalf is to grant it access to my identity provider account, which gives it access to essentially EVERYTHING I have as a member of the workforce. Under federated authentication, granting access to only an app or two is actually harder than giving access to everything.
- Unintended consequences of goal-driven behavior – an AI agent given a seemingly reasonable goal could take actions outside intended parameters. How do you specify and validate the proper set of goals for an agent, particularly one that may not be fully approved or even known about by the organization? For example, if an employee asked an AI agent to "maximize the chance of getting me promoted," might it decide to pursue strategies around highlighting the largest failures of other likely candidates for the higher role? Would it mine the expense reporting system for questionable charges by these competitors?
- Persistent data access – To execute effectively against longer-term goals, AI agents tend to retain and recall information far longer than a chatbot handling simple queries. Enterprise data will likely persist and may be recalled in unexpected ways, raising legitimate data security concerns.
- Audit and compliance complexity – Even today, organizations struggle to differentiate between humans and non-humans accessing different systems (the "NHI security" problem). With the adoption of AI agents that operate from a user's machine, via a browser, with a human's enterprise credentials, the problem of differentiating human from non-human becomes orders of magnitude more difficult. When an auditor asks an organization to attest to the accuracy of an audit report showing "every AI agent that has touched customer data" (as they undoubtedly will), what will the response be?
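The audit problem in the last bullet can be illustrated with a deliberately crude heuristic. This is a hedged sketch assuming access logs with per-session request timestamps; real non-human-identity detection would need far richer signals (input cadence, navigation patterns, device posture) than request rate alone:

```python
# Crude sketch: flag sessions whose sustained request rate exceeds what a
# human plausibly produces. Rate alone is only an illustration of why
# browser-based agents using human credentials are hard to spot in logs.

def looks_non_human(timestamps: list, max_human_rps: float = 2.0) -> bool:
    """True if average requests/second over the session exceeds the cap."""
    if len(timestamps) < 2:
        return False
    duration = timestamps[-1] - timestamps[0]
    rate = (len(timestamps) - 1) / duration if duration > 0 else float("inf")
    return rate > max_human_rps

human_session = [0.0, 1.5, 4.0, 7.2]       # a few clicks over several seconds
agent_session = [0.0, 0.1, 0.2, 0.3, 0.4]  # ten requests per second
print(looks_non_human(human_session))  # False
print(looks_non_human(agent_session))  # True
```

An agent that paces its requests to mimic a human defeats this check trivially, which is precisely why attestation over "every AI agent that has touched customer data" is such an uncomfortable ask.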
We expect a wide range of risk-tolerance levels from different employees around leveraging AI agents independently. Some will inquire before acting, many will do their best to comply, and some will plan to beg forgiveness rather than ask permission. Security teams will be largely ill-equipped to see and control the use of these agents, although endpoint security solutions like Endpoint Detection and Response (EDR) will be useful for company-managed machines. For personal machines, this will be much more challenging. Mobile will be even more challenging, but OS-level restrictions on agentic AI operation will make this less of an issue, at least initially.
Future Implications & Recommendations
Anticipated AI Adoption Trends
An interesting trend to watch is how organizations decide to manage access of AI agents. One of the obvious answers is to fit these AI agents into existing policy and enforcement structures- based on the human organizational structure. In this paradigm, agents will evolve to do a single business function (e.g., Marketing or Customer Support) rather than consolidated “super-agents” with multiple functions. Then, the next evolution will be the emergence of the “AI native” organization- with policies and security controls designed and built to optimize the productivity and effectiveness of these agents rather than forcing them into a human organizational structure.
Necessary Security Restrictions
Certain actions should initially be restricted and only considered at a high level of organizational AI Agent maturity, including:
- Transactions in financial systems
- Deploying production code
- “Write” permissions on critical data systems
- Access to privileged accounts
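These restrictions can be expressed as policy-as-code. A minimal sketch with hypothetical action names, denying the high-risk actions above to any agent identity until the organization explicitly opts in at a higher maturity level; a real deployment would encode this in the organization's authorization or policy engine:

```python
# Sketch: deny-by-default policy for high-risk actions by AI agents.
# Action names are hypothetical stand-ins for the restricted categories.

RESTRICTED_ACTIONS = {
    "finance:execute_transaction",   # transactions in financial systems
    "code:deploy_prod",              # deploying production code
    "data:write_critical",           # writes to critical data systems
    "iam:use_privileged_account",    # access to privileged accounts
}

def is_allowed(identity_type: str, action: str,
               agent_program_mature: bool = False) -> bool:
    """Deny restricted actions to agents unless the org has opted in."""
    if identity_type == "agent" and action in RESTRICTED_ACTIONS:
        return agent_program_mature
    return True

print(is_allowed("agent", "code:deploy_prod"))  # False: denied by default
print(is_allowed("agent", "crm:read"))          # True: unrestricted action
print(is_allowed("human", "code:deploy_prod"))  # True: humans unaffected
```

The design choice worth noting is the direction of the default: agents start from denial on these categories, and the `agent_program_mature` flag models a deliberate, organization-level decision to relax it rather than a per-request exception.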
The good news is that the core identity security issues in the world of agentic AI come back to the ones we've struggled with for years: Least Privilege is the foundation; you need to understand what you have today to know where you want to go; start small, learn quickly, and iterate. Unfortunately, many organizations still struggle with these problems and are challenged to answer "who can, has, and should take what action on what resource?" Agentic AI simply accelerates the timelines. The organizations that start early will make the most progress; there is no option to not participate.
Conclusion
The future of enterprise AI is both exciting and complex. AI agents will drive efficiency and innovation, but without robust identity security, they pose significant risks. The most important starting point for organizations is to acknowledge the tremendous pull there is to adopt this technology, not just from top-down strategic company initiatives but also from individual employees trying to be productive, wanting to keep their professional skills relevant, and being simply excited about finding real-world applications for one of the most exciting technologies we’ve seen in decades.
Determining a strategy for the “security” of AI agents quickly expands to one about “trust.” How much capability and access you provide an AI Agent depends primarily on how much you trust it. How tight the endpoint security controls are reveals how much an organization trusts its employees.
There have been a lot of discussions about how AI will change the nature of work, and the arrival of AI agents will drive this forward. In a sense, to maximize productivity, it becomes important for EVERY employee to become a manager, a manager of their AI agents, rather than actually doing the work an agent can do instead. The focus will be on how well employees understand the capabilities, strengths, and weaknesses of the agents they oversee. They will spend more time defining the roles, responsibilities, and constraints of their agents. Instead of weekly one-on-ones, they will be doing prompt and goal engineering and deciding tactically how to set the tasks they expect from their agents. Where are the checkpoints and reviews on progress made? How do you measure results and provide feedback loops for future development and improvement?
For all their current limitations, we can be sure that AI agents will be experts at fully exploring the world we make accessible to them. Working 24 hours a day at superhuman speed, they will find every accessible resource in a way that humans never had the bandwidth to do. In this way, the scope of identity security expands even further in the world of Agentic AI: defining the boundaries of Least Privilege, and what exactly IS the minimum needed for someone (or something) to do a job, will become everyone's job and even more central to enterprise security.