What it means to trust and govern AI systems you don’t fully control

Tune in as experts explore how governance must evolve to meet the realities of agentic systems, hybrid workforces, and the challenges leaders face in this transformative era.

Season 6 | Episode 2 | 8 Apr 2026
Hosted by Tom Rourke
Featured experts: Dr. Ashwin Mehta | Dr. Diana Wolfe

Episode notes

What happens when AI governance is no longer just about limiting risk, but about enabling trust, experimentation, and value realization at scale?

Conversation highlights

Please note the transcript has been modified for clarity and length.

Tom Rourke (Host): When considering the debate between rules-based and principles-based governance, how do you see governance applying to the rapid changes we are seeing with AI in our industry?

Dr. Diana Wolfe: In risk‑averse, regulated environments, governance has traditionally looked backward, establishing controls based on known risks. What’s interesting is that AI doesn’t naturally fit that model. If we want to maximize its potential, we have to accept that agentic AI will operate through runtime decision chains across systems, and there’s little precedent for governing that. That raises questions about how this works alongside our human systems and what we need to change structurally. This is where we need to lean into the human side of the problem, where adoption alone is no longer enough. What does governance look like when we allow more autonomy for these decision‑making entities within our processes? (Hear the full response at 04:45)

Tom Rourke: As we look ahead, what represents progress for you personally and for the world? What does your North Star look like?

Dr. Ashwin Mehta: My business‑oriented North Star is scalability. I’ve written about “puppet master” competencies, the skills humans in the workforce will need when they manage agents. That core skill set includes review, understanding guardrails, monitoring, and everything else we’ve discussed. At a societal level, the North Star should be enabling people to create value in this puppet master role. They manage the value chain so everyone else benefits, avoiding a future where people sit in organizations doing nothing. (Hear the full response at 22:13)

More to the story

Smart AI for a smarter future
Read Dr. Mehta’s article, “Smart AI, sharper minds: Designing to avoid cognitive atrophy” from the Kyndryl Institute.

“Journey towards AI-native” report
Learn how Kyndryl’s Agentic AI Framework drives agility, security, and continuous transformation in the journey toward becoming AI-native.

“AI Unleashed” podcast
Tune into Dr. Mehta's AI Unleashed podcast for thought-provoking discussions on the latest in AI innovation and its transformative impact across industries.
