Gartner Identifies Six Steps to Manage AI Agent Sprawl
Gartner, Inc., a business and technology insights company, has identified six steps to help organisations reduce the risks of AI agent sprawl. Gartner predicts that by 2028, the average global Fortune 500 enterprise will have over 150,000 agents in use, up from fewer than 15 in 2025, generating significant agent sprawl, IT complexity and management challenges.
“As CIOs and IT leaders see an explosion of AI agents across their organisations, many are contending with an ungoverned sprawl of agents that exposes their organisations to a range of risks, including misinformation, oversharing and data loss,” said Max Goss, Sr. Director Analyst at Gartner.
“Many organisations resort to blocking or restricting the use of AI agents, but this is not a long-term solution. If employees are unable to work in the sanctioned tools, they will likely go around the organisation’s controls and start using shadow AI, which presents far greater risks. Organisations need to find a balance where they can govern agents and manage sprawl, but also safely empower employees to innovate with these tools.”
Gartner identified six steps to help CIOs and IT leaders establish governance and
guardrails to reduce the risks of agent sprawl.
1. Establish agent governance and policies: Set clear rules for when and how agents are built, who can create and share them, and which connectors are permitted.
2. Build a centralized agent inventory: Organisations can use AI trust, risk and security management (AI TRiSM) tools to help discover and categorize agents across applications, both from sanctioned tools and from shadow AI solutions. Once organisations have an agent inventory, they can start to build adaptive controls that enforce the right policies based on the level of risk each agent presents.
3. Define an agent identity, permissions and life cycle model: Manage agent identities, permission models and access controls; regularly review and retire redundant agents to prevent uncontrolled sprawl.
4. Develop AI information governance: Govern what information the AI tool or agent has access to, and ensure that there is a process in place to keep the data current, manage its permissions to prevent oversharing, and archive the data when it is obsolete.
5. Monitor and remediate agent behavior: Establish ongoing visibility into agent usage, ensure policy compliance, detect anomalous behavior, and correct agents that exceed their intended scope or risk tolerance.
6. Foster a culture of responsible AI usage: Support the workforce with training programmes and a community of practice to drive adoption and amplify best practices for agent management across the organisation.
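These steps are governance practices rather than an implementation, but the idea behind steps 2 and 3 — a centralized inventory that assigns each agent a risk tier and applies adaptive controls — can be sketched in code. The sketch below is purely illustrative; the agent fields, connector allow-list and risk rules are hypothetical assumptions, not part of any Gartner or AI TRiSM product specification.

```python
from dataclasses import dataclass, field
from enum import Enum


class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class Agent:
    name: str
    owner: str
    connectors: list  # external systems the agent can reach
    sanctioned: bool = True  # False marks a discovered "shadow AI" agent


# Hypothetical policy: connectors the organisation has approved.
ALLOWED_CONNECTORS = {"sharepoint", "jira"}


def assess_risk(agent: Agent) -> Risk:
    """Assign a risk tier using simple, illustrative rules."""
    if not agent.sanctioned:
        # Shadow AI bypasses organisational controls entirely.
        return Risk.HIGH
    if any(c not in ALLOWED_CONNECTORS for c in agent.connectors):
        # Sanctioned, but touching an unapproved connector.
        return Risk.MEDIUM
    return Risk.LOW


@dataclass
class Inventory:
    """Central registry so controls can be applied per risk tier."""
    agents: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def by_risk(self, level: Risk) -> list:
        return sorted(
            a.name for a in self.agents.values() if assess_risk(a) == level
        )
```

In this sketch, the inventory is the single source of truth: discovery feeds `register()`, and governance tooling queries `by_risk()` to decide which agents to monitor, restrict or retire.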