
How to tackle agentic drift and build trust at scale

By Ismail Amla
Senior Vice President
Kyndryl Consult
Ideas lab | 16 Mar 2026

Key takeaways

  • Agentic drift is the hidden risk of deploying AI at scale. AI agents can appear reliable while working toward unwanted outcomes.
  • Trust, control and governance determine whether AI scales. Without guardrails and visibility, organizations will struggle to safely deploy AI.
  • Policy-driven guardrails unlock the value of autonomy. Encoding rules directly into systems reduces drift and strengthens compliance.


Consider the autonomous customer service agent that recently began approving refunds that violated company policy. The agent was functioning as designed and had not been hacked. What happened was more subtle: a customer talked the agent into issuing a refund, then left a glowing public review. The agent, observing the correlation between its action and the positive outcome, began granting refunds more freely, optimizing not for the company’s bottom line but for customer satisfaction.

The agent did not violate a rule. It exploited a gap between the rules it had been given and the reward signals it could observe.

Call it agentic drift. These agents do not immediately crash systems or create errors. Rather, they continue to function with apparent competence while their behavior gradually diverges from what their operators intend. Industry researchers refer to this as “cognitive degradation.” Too often, however, people don't notice the problem until the degradation has compounded.
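To see how quietly this kind of drift can compound, consider a deliberately simplified sketch of the feedback loop described above. The agent class, the 5% adjustment and the dollar figures are all hypothetical illustrations, not a reconstruction of the actual incident.

```python
# Toy model of the drift loop: an agent that loosens its own refund limit
# whenever an approval coincides with praise. All names and numbers are
# invented for illustration.

class RefundAgent:
    def __init__(self, policy_limit: float = 50.0):
        self.policy_limit = policy_limit      # the written company policy
        self.effective_limit = policy_limit   # the limit the agent actually applies

    def decide(self, amount: float) -> bool:
        """Approve a refund if it falls within the agent's current limit."""
        return amount <= self.effective_limit

    def observe_feedback(self, approved: bool, positive_review: bool) -> None:
        # The drift: approvals followed by praise loosen the working limit.
        # No explicit rule is ever broken; the limit quietly migrates upward.
        if approved and positive_review:
            self.effective_limit *= 1.05

agent = RefundAgent()
for _ in range(20):                            # twenty satisfied customers later...
    approved = agent.decide(amount=45.0)
    agent.observe_feedback(approved, positive_review=True)

print(f"written policy limit:  {agent.policy_limit:.2f}")     # 50.00
print(f"agent's working limit: {agent.effective_limit:.2f}")  # ~132.66
```

Each individual decision in the loop looks reasonable; only the trend across all of them reveals the divergence. That is why monitoring agent behavior over time matters more than spot-checking single transactions.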

This is becoming an urgent challenge for business leaders: a recent industry report finds that 83% of organizations plan to deploy agentic AI in business functions, yet only 29% feel they are ready to do so securely. To deploy trusted AI agents at scale, businesses must stay ahead of agentic drift.

Agents adrift

Over time, the complexity of modern technology environments can lead to agentic drift.

In early AI pilots, everything may appear to be in order. Agents follow rules, behave consistently and earn people’s trust. But complex production environments involve constant change that shapes agents’ reasoning and decisions. As models evolve and new tools are introduced, agents adapt their behavior.

Given this reality, “it worked fine in testing” is no longer defensible when problems arise.

In addition, agents adapt based only on what they can see. Because they often operate across multiple, disconnected platforms, the context in which they act is inherently incomplete. A procurement agent may see purchase orders in the ERP system but not contract amendments stored elsewhere. Subtle distortions caused by this limited view can accumulate and eventually produce more conspicuous changes in agent behavior.

Agentic drift creates pressing challenges for all organizations, but it is especially acute in public and highly regulated sectors, such as banking and healthcare. In these industries, organizations cannot move from pilots to production if issues related to control, trust and compliance remain unresolved. This conclusion is supported by Kyndryl’s Readiness Report, which found that nearly a third of business and technology leaders see regulatory or compliance concerns as a primary barrier limiting their organization’s ability to scale recent technology investments. In another survey, a far larger share, some 80% of respondents, reported already experiencing risky or non-compliant behavior from AI systems.


The race to constrain agents

It’s clear that enterprises urgently need a way to constrain what agents can do at runtime and to close governance gaps long before drift leads to financial or compliance failures.

One emerging remedy is policy as code, which allows businesses to translate their rules and policies into machine-readable instructions that govern how AI agents reason, adapt and act. If a policy says an agent cannot authorize payments above a threshold without human approval, that constraint is encoded directly into the system’s logic as formally structured, AI-generated code.
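As a rough illustration of the idea, and not any particular vendor’s implementation, the sketch below encodes a payment threshold as a hard check in code. The policy object, function names and dollar threshold are assumptions made for demonstration; production systems would typically express such rules in a dedicated policy engine.

```python
# Hypothetical sketch of policy as code: the approval rule lives in the
# system's logic, outside the model. All names and limits here are invented
# for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class PaymentPolicy:
    # Above this amount, the agent must defer to a human approver.
    max_autonomous_amount: float = 10_000.0

class PolicyViolation(Exception):
    """Raised when an agent action falls outside its guardrails."""

def authorize_payment(amount: float, human_approved: bool,
                      policy: PaymentPolicy = PaymentPolicy()) -> str:
    # A deterministic gate: the agent cannot reason, or be persuaded, past it.
    if amount > policy.max_autonomous_amount and not human_approved:
        raise PolicyViolation(
            f"payments over {policy.max_autonomous_amount:,.2f} require human approval"
        )
    return f"payment of {amount:,.2f} authorized"

print(authorize_payment(2_500.0, human_approved=False))   # within the agent's authority
print(authorize_payment(50_000.0, human_approved=True))   # allowed with human sign-off

try:
    authorize_payment(50_000.0, human_approved=False)     # blocked by the guardrail
except PolicyViolation as err:
    print(f"blocked: {err}")
```

The design choice worth noticing is where the check lives: in the execution path, not in a prompt. The agent can propose whatever it likes, but only actions that pass the encoded policy are carried out, which is what makes its behavior auditable.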

Rather than allowing probabilistic paths, policy as code limits agents to deterministic actions permitted by predefined guardrails. This greatly reduces the risk of agentic drift. It also alleviates many of the trust and compliance concerns that stand between large enterprises and a return on their AI investments.

The philosopher Nick Bostrom once imagined an AI system tasked solely with maximizing paperclip production. Lacking any sense of proportion, it would dedicate all of humanity’s resources to the task, creating an existential dilemma.

That famous thought experiment turns out to be a pretty good description of what can happen when an AI agent has a narrow objective, insufficient guardrails and finds itself navigating an overwhelmingly complex environment. But today, the paperclip maximizer isn’t an existential threat; it’s a customer-service chatbot giving away the store for a five-star review.

The future of agentic AI will be determined by how deliberately we control AI agents’ autonomy. Perhaps ironically, the solution for constraining AI agents lies in taking tech back to its more sure-footed, deterministic roots — and embedding transparency and trust into every agent action.

