Why Kyndryl’s integrated AI delivery teams include a Human Systems Architect

AI is enabling what once seemed impossible — but its value extends far beyond automation. Before we can automate any process, we must understand it in context. We have to ask: how does work actually move through an organization, and what needs to change before AI can create value within it?

You don’t build on a foundation without assessing what it can bear. Organizations are stacking intelligence onto operational structures that were never designed to carry it, then wondering why things buckle. You have to assess load capacity before you deploy — how work flows, where decisions are made, what the workforce can absorb, and at what pace. That’s a human systems architecture problem. And it requires a Human Systems Architect (HSA) to solve it.

Here, Diana Wolfe, Ph.D., Vice President and Head of AI Research and Strategy at Kyndryl Consult, shares how the firm is adding this new role to its integrated AI delivery teams to help customers close the gap between AI investment and real-world value.

Kyndryl recently introduced the Human Systems Architect role. Tell us about it.

Diana Wolfe: Every agentic AI deployment redesigns how people work. Most organizations discover this after the build, when adoption stalls, decisions break down, and accountability gaps emerge. The human side then becomes a retrofit problem. Kyndryl has created the Human Systems Architect (HSA) to recognize and solve these challenges during the system build, not after it.

The HSA is the practitioner who designs the collaboration layer between people and AI agents as a system is being created. It’s a new discipline built for an era where human systems demand the same rigor as technical systems. The role sits alongside our Forward Deployed Engineers (FDEs) and Industry SMEs as a core element of our AI delivery model.

In practice, HSAs do three things: architect, integrate, and realize. Architecting means mapping the knowledge, decisions, and workflows embedded in an organization's teams and optimizing them for human-agent delivery. Integrating means connecting the agent system to the people, decisions, and collaboration patterns that make work real. Realizing means delivering measurable value to the organization, its teams, and every person who engages with the system.

How do delivery teams need to operate as agentic AI introduces autonomous agents?

Wolfe: With generative AI, you're deploying a tool that a person uses. With agentic AI, you're deploying agents that act: they make decisions, execute tasks, and escalate exceptions. Before delivery teams deploy, they need to know which decisions the agent makes autonomously, which require human supervision, and where the escalation boundaries are. Having the HSA work alongside the FDE throughout the development process helps ensure we address those questions holistically.

For example, Kyndryl's policy-as-code capability gives us governance infrastructure: the machine-readable organizational rules, regulatory requirements, and operational controls that determine how agents execute. But a person must map the workflows, capture the tacit knowledge, classify agent tasks and human tasks, and calibrate agentic AI autonomy levels node by node. That person is the HSA. Policy as code defines what agents are allowed to do. HSAs define what agents should do, and how they work alongside the people who depend on them.
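One way to picture the calibration Wolfe describes — which decisions an agent owns, which require supervision, and where the escalation boundaries sit — is as a small per-node policy table. The sketch below is purely illustrative: the node names, autonomy levels, and confidence thresholds are hypothetical and do not represent Kyndryl's actual policy-as-code format.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS = "autonomous"   # agent acts without review
    SUPERVISED = "supervised"   # agent proposes, a person approves
    HUMAN_ONLY = "human_only"   # agent may not act; a person decides

@dataclass
class DecisionNode:
    name: str
    autonomy: Autonomy
    escalation_threshold: float  # confidence below this escalates to a person

def route(node: DecisionNode, confidence: float) -> str:
    """Return who acts on a given workflow decision."""
    if node.autonomy is Autonomy.HUMAN_ONLY:
        return "human"
    if confidence < node.escalation_threshold:
        return "escalate"  # outside the agent's boundary
    if node.autonomy is Autonomy.SUPERVISED:
        return "agent_with_approval"
    return "agent"

# Hypothetical calibration, set node by node during the build
ticket = DecisionNode("triage_ticket", Autonomy.AUTONOMOUS, 0.6)
refund = DecisionNode("issue_refund", Autonomy.SUPERVISED, 0.8)

print(route(ticket, 0.9))  # agent
print(route(refund, 0.9))  # agent_with_approval
print(route(refund, 0.5))  # escalate
```

The point of the sketch is that autonomy is not a single global setting: each decision node carries its own level and escalation boundary, which is exactly the mapping work the HSA does before deployment.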


How does a delivery team address human-centered versus technology-centered issues?

Wolfe: The 2025 Kyndryl Readiness Report found that 48% of business leaders say a resistant culture stifles innovation, and Kyndryl's 2025 People Readiness Report found that 71% of leaders say their workforces aren't prepared for AI. Agentic AI deployment doesn't happen in isolation, and resistance isn't irrational: people resist what they don't understand. When you deploy agentic AI into a workflow where the people doing the work have had no input into how their roles change, who makes the decisions, or what supervision looks like, resistance is the predictable outcome.

Organizations that build the human experience alongside the technical system see value from their AI investments. For example, in a process-discovery engagement for a policy-as-code delivery with one customer, we found that nearly 30% of critical decisions were made outside any documented process. Without an HSA, those decisions risk being designed out of the system. HSAs surface patterns from each engagement and document them to inform an organization's conversations about becoming AI-native. Those insights compound over time, illuminating trends and helping organizations avoid friction in AI adoption.

The information that HSAs uncover helps leaders make enterprise-level decisions based on real-world data, not best guesses. Ultimately, these insights and the actions they drive strengthen organizations and position them to withstand inevitable change.

Diana Wolfe

Ph.D., Vice President and Head of AI Research and Strategy, Kyndryl Consult
