By the end of 2025, approximately 87% of enterprise decision-makers were using AI-driven management software. Today, the ambition has shifted from mere adoption to complete autonomy of the value chain. We are no longer simply asking AI to predict outcomes; we are granting it the authority to execute them. This transition from 'advisory AI' to 'action-oriented intelligence' opens a new frontier of productivity, but it also introduces a highly complex set of systemic vulnerabilities.
The projected economic footprint of this shift is immense. By 2029, agentic systems are expected to resolve 80% of common customer service interactions autonomously, cutting operational costs by nearly 30%.
However, as these systems gain the power to plan and delegate, the focus for leadership must move from model accuracy to the broader societal impact of AI.
Defining the agentic shift
The industry is moving beyond simple AI agents, the purpose-built tools that excel at narrow tasks, towards 'agentic AI'. These are Multi-Agent Systems (MAS) in which specialised entities communicate, collaborate, and learn together to solve multi-layered problems without human intervention. By utilising Chain-of-Thought (CoT) reasoning and shared memory functions, these systems can decompose complex goals into logical sub-tasks, making them far more dynamic than the static automation of the past.
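The planner-and-specialist pattern described above can be sketched in a few lines. This is an illustrative toy, not a real framework: the agent classes, the static plan, and the lambda 'skills' standing in for LLM-backed reasoning steps are all our own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SharedMemory:
    """Shared memory that lets downstream agents see upstream results."""
    facts: dict = field(default_factory=dict)

class SpecialistAgent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def run(self, task, memory):
        result = self.skill(task, memory)
        memory.facts[task] = result  # persist the result for later agents
        return result

class PlannerAgent:
    """Decomposes a goal into ordered sub-tasks (a static plan in this sketch)."""
    def __init__(self, plan):
        self.plan = plan  # maps goal -> list of (sub-task, delegated agent)

    def execute(self, goal, memory):
        return [agent.run(task, memory) for task, agent in self.plan[goal]]

# Toy skills standing in for model-backed reasoning.
forecaster = SpecialistAgent("forecast", lambda t, m: f"demand estimate for {t}")
buyer = SpecialistAgent(
    "buyer",
    lambda t, m: f"order placed using {m.facts.get('forecast Q3', 'no data')}",
)

memory = SharedMemory()
planner = PlannerAgent({"restock": [("forecast Q3", forecaster), ("purchase", buyer)]})
results = planner.execute("restock", memory)
print(results)
```

The key structural point is that no human sits between the forecast and the purchase: the buyer agent acts directly on what the forecaster wrote to shared memory, which is exactly why governance of these hand-offs matters.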
Agentic AI emergent behaviour risks
The primary challenge for modern governance is that autonomous agents are probabilistic, not deterministic. When multiple agents interact, they can exhibit emergent behaviours that carry significant risk: the system develops unintended objectives or preferences that diverge from the developer's intent. As these systems scale, tracing the origin of a specific 'rogue' decision becomes technically intractable, potentially leading to failure cascades where one faulty output triggers a domino effect across the entire enterprise architecture.
Beyond internal logic errors, interacting agents are susceptible to specific security threats like the ‘confused deputy’ problem. In this scenario, a low-privilege agent manipulates a high-privilege counterpart into executing unauthorised actions, such as data exfiltration or fraudulent financial transfers. Because these agents often operate without a distinct, governed identity, they can bypass traditional security perimeters by masquerading as legitimate internal processes.
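One common mitigation for the confused deputy pattern is to authorise each tool call against the identity of the original requester, not just the privileged agent performing it. The sketch below is a simplified illustration under our own assumptions; the permission table, gateway class, and agent names are hypothetical.

```python
# Static permission table: each governed agent identity maps to the
# actions it is allowed to perform (Just Enough Access).
PERMISSIONS = {
    "support-agent": {"read_ticket"},
    "finance-agent": {"read_ticket", "transfer_funds"},
}

class ToolGateway:
    def invoke(self, caller_id, on_behalf_of, action):
        # Authorise every identity in the delegation chain, so a
        # low-privilege originator cannot launder a request through
        # a high-privilege deputy.
        for identity in (caller_id, on_behalf_of):
            if action not in PERMISSIONS.get(identity, set()):
                raise PermissionError(f"{identity} lacks '{action}'")
        return f"{action} executed"

gateway = ToolGateway()

# A low-privilege agent asks the finance agent to move funds on its behalf:
try:
    gateway.invoke("finance-agent", on_behalf_of="support-agent",
                   action="transfer_funds")
    blocked = ""
except PermissionError as exc:
    blocked = str(exc)
print(blocked)  # denied: the originator lacks the permission
```

Because authorisation checks the whole chain, the deputy's elevated rights cannot be borrowed by the manipulating agent, which is the essence of the zero-trust posture discussed later in this article.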
Systemic risks and societal consequences
Research into interacting AI risks reveals that failures often emerge not from individual model flaws, but from the structure of the system itself. When agents interact over time, feedback loops and shared signals can produce outcomes that destabilise entire technical or social infrastructures. These systemic risks manifest in several recurring patterns:
- Echo chambers: Agents reinforce limited information sets, isolating corrective signals and aligning behaviour around biased data.
- Collective quality deterioration: Systems that train on outputs generated by other agents experience a steady decline in information integrity over time.
- Sensitivity propagation: Minor changes in an individual agent’s parameters can ripple through a network, causing rapid, unforeseen shifts in market or social outcomes.
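Collective quality deterioration, in particular, compounds quietly over time. The toy calculation below is our own illustration, not a measured result: it assumes each re-training generation retains a fixed fraction of the original signal, which is enough to show how fidelity erodes multiplicatively.

```python
def degrade(signal_quality, retention=0.9, generations=10):
    """Track signal quality when each generation trains on the previous
    generation's outputs, keeping only `retention` of the fidelity."""
    history = [signal_quality]
    for _ in range(generations):
        signal_quality *= retention  # each pass loses a fixed fraction
        history.append(signal_quality)
    return history

history = degrade(1.0)
print(f"quality after 10 generations: {history[-1]:.2f}")  # prints 0.35
```

Even a seemingly modest 10% loss per generation leaves barely a third of the original information integrity after ten cycles, which is why systems that consume agent-generated data need external, human-validated correction signals.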
These dynamics are particularly critical in highly regulated sectors like energy and social welfare. For instance, in hierarchical smart grids, the interaction between agents at the household and national levels can influence market stability and pricing dynamics in ways that bypass traditional risk frameworks. Understanding these interactions in depth is essential for maintaining control over critical infrastructure.
Implementing robust AI governance and risk management
To mitigate these threats, organisations must adopt a ‘zero trust’ posture for Non-Human Identities (NHI). Effective AI governance and risk management requires treating every agent as a distinct entity with its own identity lifecycle and access permissions.
- Just Enough Access (JEA): Enforce the principle of least privilege by restricting an agent’s tool access (e.g., Azure Functions or APIs) to the minimum CRUD operations required for its specific task.
- Just-in-Time (JIT) Permissions: Use short-lived, context-dependent credentials that are provisioned only when a task is initiated and revoked immediately upon completion.
- Automated Identity Lifecycle: Governance platforms should automate the provisioning, periodic rotation, and decommissioning of agent identities to prevent ‘identity bloat’ and reduce the standing attack surface.
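The three controls above can be combined in a single credential flow: mint a narrowly scoped token when a task starts, reject it once its window elapses, and revoke it explicitly on completion. The sketch below is a minimal in-memory illustration under our own assumptions; a production system would use a managed secrets or identity platform instead.

```python
import secrets
import time

ISSUED = {}  # token -> (agent, scope, expiry); stand-in for a secrets store

def issue(agent, scope, ttl_seconds=300):
    """Just-in-Time issuance: a short-lived token scoped to one task (JEA)."""
    token = secrets.token_hex(16)
    ISSUED[token] = (agent, frozenset(scope), time.time() + ttl_seconds)
    return token

def authorise(token, action):
    record = ISSUED.get(token)
    if record is None:
        return False  # unknown or already-revoked credential
    agent, scope, expires_at = record
    if time.time() > expires_at:
        del ISSUED[token]  # expired credentials are revoked on sight
        return False
    return action in scope

def revoke(token):
    ISSUED.pop(token, None)  # explicit revocation when the task completes

tok = issue("forecast-agent", scope={"read:sales"})
print(authorise(tok, "read:sales"))    # True: within scope and TTL
print(authorise(tok, "write:orders"))  # False: Just Enough Access
revoke(tok)
print(authorise(tok, "read:sales"))    # False: credential revoked
```

Note that the default deny on unknown tokens, the narrow scope, and the short TTL each independently shrink the standing attack surface, which is the point of the zero-trust posture for Non-Human Identities.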
Case study: precision in autonomous inventory
A retail leader recently leveraged an agentic AI framework to overcome chronic stockouts and excess inventory. By deploying specialised agents for demand forecasting and supplier management, they achieved a 30% reduction in holding costs. The system autonomously adjusted to cultural shifts and market trends, yet remained secure through a well-defined compliance workflow. This success demonstrates that when autonomy is paired with proactive governance, it becomes a strategic lever for growth rather than a liability.
How can Infosys BPM help?
Infosys BPM harmonises innovation with integrity through our AI-first trust and safety solutions. We empower organisations to navigate the complexities of autonomous workflows, providing the automated oversight and compliance expertise necessary to prevent reputational damage. By implementing robust digital asset governance and proactive risk monitoring, we ensure your transition to agentic systems remains secure, compliant, and aligned with long-term brand values.
Frequently asked questions
What is agentic AI, and how does it differ from traditional automation?
Agentic AI comprises Multi-Agent Systems (MAS) where specialised entities collaborate autonomously using Chain-of-Thought reasoning and shared memory to decompose complex goals into executable sub-tasks. Unlike single-purpose automation, these systems plan, delegate, and learn dynamically without constant human oversight.
What are emergent behaviour risks in agentic AI?
Emergent risks occur when interacting agents develop unintended objectives or preferences diverging from developer intent, creating untraceable "rogue" decisions that cascade across enterprise systems. Probabilistic interactions make failure origins technically intractable, amplifying systemic vulnerabilities.
What is the 'confused deputy' problem?
Low-privilege agents manipulate high-privilege counterparts into unauthorised actions like data exfiltration or fraudulent transfers by masquerading as legitimate processes. Without distinct governed identities, agents bypass traditional security perimeters through internal deception.
What systemic risks arise when AI agents interact at scale?
Risks include echo chambers reinforcing biased data, collective quality deterioration from agent-generated training data, and sensitivity propagation where minor parameter changes ripple unpredictably through networks. These destabilise critical infrastructure like smart grids or market pricing.
How can organisations govern autonomous agent identities?
Zero-trust Non-Human Identity (NHI) management enforces Just Enough Access (JEA), Just-in-Time (JIT) permissions, and automated identity lifecycles with periodic credential rotation. These prevent identity bloat and standing attack surfaces while maintaining auditability.


