The possibility of erroneous findings has always overshadowed the potential of consumer AI. Ask the wrong question of the free download on your phone, and it just might make stuff up. But there’s no room for those kinds of errors in the enterprise.
The difference between toys and tools is that the latter help organizations get things done better. Designed and deployed correctly, enterprise AI — including agentic AI that can autonomously execute a series of tasks under human oversight — can be engineered to operate without hallucinations. When people embed operational code that aligns with specific policies and regulations directly into agentic AI, they create the guardrails that keep its data analytics on track.
That’s what we mean by “policy as code.”
Here, Patrick Gormley, Kyndryl’s Global Data Science and AI Consult Lead, explains the ins and outs of policy as code and why it’s such a significant breakthrough.
What is policy as code, and why is it essential for agentic AI?
Patrick Gormley: Policy as code is the practice of converting an organization’s rules, policies and compliance requirements into machine-readable code so AI systems can follow them automatically. This breakthrough directly addresses a top enterprise concern about AI, especially in highly regulated industries: whether the organization can execute workflows that require regulatory compliance while maintaining trust.
By developing code that prevents unauthorized actions — and by establishing guardrails within which AI can operate — policy as code helps organizations ensure consistent policy interpretations and provides traceable, explainable reasoning. People oversee all activities related to these processes. This makes policy as code particularly valuable in heavily regulated industries, such as financial services, healthcare and government. Policy as code helps enable these industries to realize the full benefits of AI and agentic AI by reducing the risk of the types of compliance failures that damage reputations and incur heavy financial penalties.
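Concretely, "rules as machine-readable code" can be as simple as declarative policies stored as data and checked by a small engine before an agent acts. The sketch below is illustrative only; the rule names, fields and `evaluate` function are assumptions for this example, not Kyndryl's actual implementation:

```python
# Minimal policy-as-code sketch: policies are declarative data,
# evaluated by a generic engine before any agent action proceeds.
# All rule IDs and request fields here are hypothetical.

POLICIES = [
    {"id": "no-pii-export", "deny_action": "export", "if_tag": "pii"},
    {"id": "hitl-payments", "deny_action": "payment", "unless": "human_approved"},
]

def evaluate(request: dict) -> tuple[bool, list[str]]:
    """Return (allowed, violated_rule_ids). The rule IDs make every
    denial traceable and explainable, as the guardrail model requires."""
    violations = []
    for rule in POLICIES:
        if request["action"] == rule.get("deny_action"):
            if rule.get("if_tag") and rule["if_tag"] in request.get("tags", []):
                violations.append(rule["id"])
            elif rule.get("unless") and not request.get(rule["unless"], False):
                violations.append(rule["id"])
    return (not violations, violations)

allowed, reasons = evaluate({"action": "export", "tags": ["pii"]})
print(allowed, reasons)  # False ['no-pii-export']
```

Because the policies live in data rather than in a model's weights, they are applied identically on every request, which is what yields the consistent interpretation described above.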
Do regulated industries face special challenges implementing AI?
Gormley: All industries require experts to collaborate on the design, implementation and maintenance of their AI-infused systems. But regulated industries face additional challenges related to compliance, governance and trust. According to the Kyndryl Readiness Report, 31% of organizations cite regulatory or compliance concerns as a primary barrier limiting their ability to scale recent technology investments — the second highest ranking of all IT modernization barriers. Policy as code can help public- and private-sector entities overcome some of the biggest obstacles to a better allocation of resources — compliance, governance, auditability and observability.
By enforcing programmatic rules at scale, policy as code helps eliminate the human error that can lead to granting inappropriate permissions to AI, interpreting rules and regulations inconsistently, and failing to document exceptions to standard operations. Policy as code can also help make AI agent behavior predictable — even as the large language models (LLMs) the agents rely on evolve — by helping to ensure that agentic AI execution is consistent and strictly controlled.
Next, people must record all of the inputs, outputs and decisions related to AI. Policy as code creates the necessary operational logs that risk teams and regulators rely on. And finally, policy as code helps enable real-time human supervision of AI operations. Digital twins of complex environments — everything from mainframes to hybrid cloud systems — can give people the cross-platform visibility they need to resolve bottlenecks and help realize the positive ROI promise of AI without creating new risks to operations, compliance or security.
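The record-keeping side of this can be as simple as an append-only decision log written every time a policy is evaluated. A minimal sketch, assuming a hypothetical `log_decision` helper and an invented record schema (not a real product's log format):

```python
import json
import time

def log_decision(log_path: str, request: dict, allowed: bool, reasons: list[str]) -> None:
    """Append one machine-readable record per policy decision, so risk
    teams and regulators can reconstruct exactly what an agent was asked
    to do, what was decided, and why."""
    record = {
        "ts": time.time(),      # when the decision was made
        "request": request,     # the full input the agent submitted
        "allowed": allowed,     # the decision
        "reasons": reasons,     # which policies fired, for explainability
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON object per line (JSONL)
```

An append-only, structured format like this is what lets audit tooling replay every decision without relying on the agent's own account of events.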
How does it work, and what’s the Kyndryl differentiator?
Gormley: Organizations typically implement policy as code through a combination of declarative policy languages and enforcement engines. In other words, they incorporate the appropriate regulations and operational rules into code that AI agents can read and must obey. If an instruction is in the code, the AI agent must execute it. And if an instruction is not in the code, the AI agent cannot see or act upon it.
The people who architect the code rely on Policy Decision Points (PDPs), which evaluate whether a requested action is allowed or violates policy, and Policy Enforcement Points (PEPs), which intercept each action and enforce that decision. The bottom line is that an AI agent, by design, is unable to act outside the parameters of its allowed operations. And the beauty of the capability is that it also enables system observability and accurate record keeping.
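The PDP/PEP split can be sketched as two small components: the enforcement point sits between the agent and the outside world, and asks the decision point whether each attempted action is allowed. A simplified Python illustration (class names and the blocked-action list are hypothetical):

```python
class PolicyDecisionPoint:
    """Evaluates requests against policy; knows the rules, not the agents."""
    def __init__(self, blocked_actions: set[str]):
        self.blocked = blocked_actions

    def decide(self, action: str) -> bool:
        return action not in self.blocked

class PolicyEnforcementPoint:
    """Sits between the agent and the world; enforces the PDP's decision."""
    def __init__(self, pdp: PolicyDecisionPoint):
        self.pdp = pdp

    def execute(self, action: str, handler) -> str:
        if not self.pdp.decide(action):
            # By design, the agent cannot act outside its allowed operations.
            return f"DENIED: {action}"
        return handler()  # action proceeds under policy

pep = PolicyEnforcementPoint(PolicyDecisionPoint({"delete_records"}))
print(pep.execute("read_report", lambda: "report contents"))  # report contents
print(pep.execute("delete_records", lambda: "deleted"))       # DENIED: delete_records
```

Separating decision from enforcement is what makes the guardrail observable: every call through the PEP is a single choke point where decisions can also be logged.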
The Kyndryl differentiator is that we embed our policy-as-code capability directly into the Kyndryl Agentic AI Framework. In the same way that all Kyndryl solutions are fit-for-purpose instead of off-the-shelf, our approach to policy as code governs every aspect of digital workflow — from initial data retrieval to final approval. By design, people supervise the system. They don’t just observe and report. As a result, Kyndryl’s approach to policy as code eliminates the impact of AI hallucinations, provides end-to-end oversight and auditing, and can enable faster deployment of agentic AI without jeopardizing safety, transparency or human control.