
AI agents of chaos? Not when trust is a priority

By Michael Bradshaw
Global Practice Leader, Applications, Data and AI
Ideas lab | 5/03/2026 | Read time: 1 min

The age of always-on AI has arrived.

Complex tasks can now be completed by AI agents around the clock. Thorny workflows can be smoothed out into seamless operations in hours rather than months. And companies struggling to turn their AI investments into clear business value can finally close the gap.

While this may sound idyllic, the unfolding reality is far messier.

No matter the size or design of the technology environment, AI agents can go rogue. From misconstruing prompt instructions to deleting an entire live database, the potential for chaos is raising concerns around trust, compliance and risk, especially for enterprises in highly regulated industries.

However, forgoing AI adoption is not an option for enterprises that want to remain competitive, and more enterprises in critical sectors such as banking, healthcare and government will scale agentic AI in the coming year. In fact, some of the world’s largest banks have started deploying AI agents to assist employees. And Gartner predicts some 40% of enterprise applications will include task-specific AI agents by the end of 2026 — up from less than 5% in 2025.

When operations are mission-critical and compliance isn’t optional, enterprises are right to be concerned. There is little room for error: even one agent running amok can lead to serious data security breaches, compliance failures and dissatisfied customers. Without a strong foundation of trust and transparency, enterprises will not see the transformative benefits that agentic AI stands to deliver.

Policy as code offers an innovative solution to this urgent challenge.

For many organizations, building trust in technology has previously involved rigid rules-based systems, manual documentation and periodic audits. These traditional approaches can be costly, time-consuming and prone to human mistakes. And they were never designed for systems that rapidly adapt and act, at times unpredictably, all on their own.

With a policy as code approach, enterprises can translate organizational rules, regulatory requirements and operational controls into machine-readable code that restricts AI agents’ actions and provides strict, concrete guardrails. When applied to agentic AI, policy as code allows regulated enterprises to overcome one of the biggest barriers associated with scaling the technology: making agent actions consistent, traceable and explainable.
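To make the idea concrete, here is a minimal sketch of what "organizational rules as machine-readable code" can look like. The policy rules, agent names and resource names are hypothetical illustrations, not part of any specific product; the pattern is simply a set of predicates that every proposed agent action must satisfy before it is allowed to proceed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentAction:
    agent_id: str
    operation: str   # e.g. "read", "write", "delete"
    resource: str    # e.g. "prod/customers_db"

# Organizational rules expressed as code, not prose: each policy is a
# predicate that must hold for the action to be permitted.
POLICIES = [
    # No agent may ever delete a production resource.
    lambda a: not (a.operation == "delete" and a.resource.startswith("prod/")),
    # Only the (hypothetical) kyc-agent may touch customer PII.
    lambda a: a.resource != "prod/customer_pii" or a.agent_id == "kyc-agent",
]

def is_permitted(action: AgentAction) -> bool:
    """Evaluate every policy; a single failure blocks the action."""
    return all(policy(action) for policy in POLICIES)
```

Because the rules are data and code rather than documentation, every agent passes through the same gate, and changing a rule means changing one reviewable line rather than retraining people.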

In essence, policy as code turns agents of chaos into careful and controlled collaborators. Because AI agents can only act on instructions explicitly defined in code, this approach dramatically lowers the risk of agents acting outside of approved boundaries. Automated guardrails immediately block unsanctioned behavior and greatly reduce the impact of hallucinations that can quickly lead agents astray.

This code-driven approach also helps enterprises enforce policy consistently. It removes human error from the equation — error that can otherwise produce governance gaps, uneven enforcement of rules and unconstrained agents with inappropriate access to sensitive data.

Additionally, many regulated enterprises have assumed — often correctly — that the efforts required to make AI systems trusted and compliant could swallow their value, or else end up creating even more work for risk teams that cannot match AI’s machine speed. But with policy as code, all agents’ decisions and actions are logged by design, creating transparent audit trails that not only improve compliance reporting but also solve the challenge of tracking agents’ otherwise opaque behavior across an entire technology footprint.
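One way to see how "logged by design" differs from after-the-fact logging is to make a single gate both decide and record, so no agent action can occur without leaving an audit entry. This is an illustrative sketch under that assumption; the function and field names are hypothetical.

```python
import time
from typing import Callable

AUDIT_LOG: list[dict] = []

def gated_execute(agent_id: str, operation: str, resource: str,
                  check: Callable[[str, str, str], bool]) -> bool:
    """Evaluate the policy check and record the decision in one step,
    so every decision -- allowed or blocked -- lands in the audit trail."""
    permitted = check(agent_id, operation, resource)
    AUDIT_LOG.append({
        "ts": time.time(),
        "agent": agent_id,
        "operation": operation,
        "resource": resource,
        "decision": "allowed" if permitted else "blocked",
    })
    return permitted

def blocked_actions() -> list[dict]:
    """Compliance reporting becomes a query over structured records."""
    return [e for e in AUDIT_LOG if e["decision"] == "blocked"]
```

Because the log entry is written in the same step as the decision, auditors query structured records instead of reconstructing agent behavior from scattered application logs.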

Perhaps most importantly, policy as code includes balanced human oversight — always crucial for any successful AI deployment. Teams need access to real-time dashboards and digital twins to observe agent behavior and intervene when needed.

“The paradox of autonomous intelligence: its value grows not when AI agents have free rein, but when we define and enforce the parameters that govern their actions and engineer trust into every workflow.”

Michael Bradshaw

Global Practice Leader, Applications, Data and AI

For enterprises that act now, significant benefits will follow. Use cases are already taking shape across industries. In banking, Know Your Customer workflows can be completed in minutes rather than weeks as AI agents gather data and assess risks with compliance embedded at every step. In manufacturing, policy as code can transform supply chain compliance from batch processing to real-time screening of supplier data against regulatory requirements, enabling companies to proactively manage risks long before they lead to operational disruptions or hefty fines.

To be sure, enterprises that pursue this approach will need both trusted expertise and technology to accurately and expeditiously translate their policies. And policy as code is just one building block required for AI to successfully scale. Enterprises must ensure the right strategy, skills and infrastructure are in place at every step to accelerate AI transformation.

In the year ahead, establishing the right guardrails will be fundamental to scaling agentic AI with confidence. As leaders do so, they’ll confront the paradox of autonomous intelligence: its value grows not when AI agents have free rein, but when people define and enforce the parameters that govern their actions and engineer trust into every workflow.

