AI investment is booming.

With multibillion-dollar deals1 stacking up and funding rounds hitting all-time highs2, global spending on AI infrastructure is on track to exceed $200 billion.3 But behind the hype, another pattern is emerging: most organizations are caught in pilot paralysis.

Nearly three-quarters of leaders say they have more pilots than they can realistically scale, according to The Kyndryl Readiness Report 2025, and over half admit that innovation stalls out. An MIT report5 focused on generative AI found an even starker divide: Just 5% of integrated AI pilots are actually driving revenue.

As a digital policy consultant for enterprises around the world, I work closely with leaders navigating these challenges. They know inaction is a losing strategy, and they’ve eagerly launched into AI experimentation. The problem is that too many are finding themselves mid-air without knowing how to stick the landing.

What organizations need is a mechanism for experimenting and innovating, while measuring their trajectory to ensure it aligns with their business strategy.

This mechanism is sound governance, of which policy is a key component. The right policy is not a brake on innovation, but a framework that enables it by defining the boundaries that make governance actionable, clearly outlining who can experiment, with what data, under which conditions, and when success has been achieved.

In other words, governance creates the freedom for employees to harness AI and new technologies with clear success indicators. These indicators are critical in determining whether a pilot can and should scale across the wilderness of enterprise operations.

But governing can be a complex endeavor when AI continues to advance with remarkable speed and organizations need the freedom to experiment with emerging capabilities. Simply standing up an AI governance board or appointing a chief AI leader will not inherently solve the pilot problem. And writing policies that serve more as red tape than business catalysts will threaten the freedom organizations need, constraining rather than accelerating innovation.

For organizations looking to move beyond pilot paralysis, their surest escape lies in transforming policy and governance from red tape into a bridge that connects AI experimentation with undeniable business value.

Diagnosing the pilot problem

Dead-end pilots are often the result of not knowing how or when to exit the experimentation phase. Without a playbook for scaling innovation, even successful pilots fail to become standard operating procedure.

Across enterprises, I spot the same signs of friction.

Goals and markers of success are vague. The C-suite may be enthusiastic about AI, but there is no shared vision of what they hope to achieve. Measurements to gauge pilot success — such as ROI or innovation metrics — remain undefined.

Lines of accountability are similarly unclear. No one is responsible for recognizing that a pilot has proven business value and should be operationalized. Additional AI responsibilities may be siloed in standalone teams — or else fall to a chief AI officer and governance board focused more on safeguards and compliance than business outcomes.

To be sure, most companies have AI policies. But with a weak link to corporate governance and strategy, they often amount to little more than shelfware: neatly documented but never used. These policies fail to drive daily decision making, leading to confusion around the parameters that determine whether and how a pilot moves forward.

Paradoxically, too much freedom can sow chaos. Organizations may hesitate to proceed. And when they do, they’re more likely to govern too early, stifling innovation, or too late, exposing their business to greater risk.


Overcoming pilot paralysis

Recently, I worked with an organization running three different pilots in three markets that had the same goal: to target new customers using CRM data. The pilots were duplicative, siloed, and, perhaps unsurprisingly, they all stalled out.

Things started to change when the company deployed a governance framework and policy mandating AI pilot knowledge-sharing. More employees voluntarily participated, recognizing that building their AI skillsets made them more effective in daily work. And the organization started to see fewer but higher-quality pilots — with more making it into production.

This is just one example of how governance and policy create connective tissue that invites collaboration, data-sharing, and consistency.

I like to think of sound policy as a walled garden where innovation can thrive. While any number of potential hazards may exist outside of the garden, risks have been minimized within it, and safety precautions are in place. This garden is unique to each organization; its size and the strength of its walls depend on the appetite for risk. But like the right policies, it will always be structured enough to promote safety and flexible enough to make room for creativity.

Sound policy creates freedom within a framework so organizations can move from pilot to production. It can clarify who triggers pilot operationalization, for example, and what minimum criteria must be demonstrated and sustained throughout the AI lifecycle — such as audit completion, bias testing, or demonstration of ROI. At the same time, it gives teams license to safely experiment and uncover AI opportunities that are worth scaling.
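As an illustration of what those criteria might look like once written down unambiguously, here is a minimal policy-as-code sketch. The evidence fields, the 10% ROI floor, and the decision labels in the hypothetical `evaluate_pilot` function are all invented for the example; real criteria would reflect an organization's own risk appetite and lifecycle requirements.

```python
from dataclasses import dataclass

@dataclass
class PilotEvidence:
    """Illustrative evidence a pilot must demonstrate and sustain (field names are hypothetical)."""
    audit_complete: bool    # an independent audit of the pilot has finished
    bias_test_passed: bool  # the model cleared the organization's bias testing
    roi_pct: float          # measured return on investment, in percent

def evaluate_pilot(evidence: PilotEvidence, min_roi_pct: float = 10.0) -> str:
    """Return a governance decision: operationalize, keep iterating, or stop.

    Thresholds and labels are placeholders; actual policy would tie them
    to the organization's risk appetite and business objectives.
    """
    if not (evidence.audit_complete and evidence.bias_test_passed):
        return "iterate"  # safety criteria unmet: stay in the experimentation phase
    if evidence.roi_pct < min_roi_pct:
        return "stop-or-pivot"  # safe, but not valuable enough to scale
    return "operationalize"  # clear trigger to move from pilot to production

# Example: a pilot that cleared its audit and bias tests with 14% ROI
print(evaluate_pilot(PilotEvidence(True, True, 14.0)))  # -> operationalize
```

The point of the sketch is not the code itself but the discipline it forces: every criterion must be named, measurable, and assigned a decision.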

Unleashing the power of policy

In my decades of experience helping companies unleash the power of digital policy, I’ve seen how good governance advances innovation. But true AI readiness requires far more than putting pen to paper. As organizations work to scale AI, there are four critical factors that can accelerate their success.

The culture factor

Good governance comes more easily to some organizations than others. The difference is less about geography or industry than about culture.

Leaders operating in a strong governance culture prioritize precision, creative problem-solving, and adaptability. Their engineering mindset enables them to govern decisively while staying nimble enough to keep up with fast-evolving technology.

Their cultures are also defined by a spirit of inclusion that signals everyone has a role to play in the AI age. Leaders encourage open dialogue about business value and risk. And because organizations prioritize change management, employees trust in the company’s AI strategy, making them more likely to generate ideas that push innovation forward.

I saw the culture factor in action at one large organization where a country team proposed piloting a new AI-powered service. In other organizations, the project might have been deprioritized or never suggested at all because of its potential risks involving sensitive data.

This team’s culture, however, prized innovation and out-of-the-box thinking. The team piloted the service using synthetic data and found it was an overwhelming success. Because of their culture, they were able to quickly move past the pilot stage, confident that tackling the challenge of incorporating real data would pay off.

The business imperative

Every organization should ask a critical question as it develops policy: How does this connect to business strategy?

What’s governed is less important than having the right governance fundamentals in place — first and foremost business alignment. Organizations that built this muscle while governing older technology are now adapting to AI with greater ease. They’ll also be ready to pivot in a new direction when the next wave of technology arrives.

I recently advised a European financial institution that unified its AI policy with its business and enterprise risk policies. The AI policy became an extension of the organization’s broader governance, rather than a one-off invention.

This approach helped the organization assess each AI use case by weighing risk against business value — or, as I like to think of it, deciding when to move forward with pilot operationalization.

By aligning governance thresholds with business objectives — like fraud reduction targets and customer-experience metrics — the institution stopped treating compliance as a bottleneck and began using it as a decision accelerant.

Within six months, the company scaled two AI models enterprise-wide. Its unified approach to governance gave the green light for innovation.
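To make the thresholds-as-accelerant idea concrete, here is a minimal sketch of how such gates might be expressed. The metric names, target values, and the `meets_business_objectives` helper are hypothetical stand-ins, not the institution's actual policy.

```python
# Hypothetical governance thresholds tied to business objectives.
# Metric names and target values are illustrative assumptions.
thresholds = {
    "fraud_reduction_pct": 5.0,         # pilot must cut fraud by at least 5%
    "customer_satisfaction_delta": 0.0, # and must not degrade CX scores
}

def meets_business_objectives(measured: dict[str, float]) -> bool:
    """A use case advances only if every governed metric clears its threshold."""
    return all(measured.get(metric, float("-inf")) >= target
               for metric, target in thresholds.items())

# Example: a fraud-detection pilot measured against the thresholds above
print(meets_business_objectives(
    {"fraud_reduction_pct": 7.2, "customer_satisfaction_delta": 0.4}))  # -> True
```

Expressed this way, compliance stops being a queue of subjective reviews and becomes a pass/fail check any team can run for itself.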

The shared framework

Most C-suite leaders expect AI to fundamentally change roles and responsibilities in the next year.4 Yet few can demonstrate how they’ll lead their workforce through that transformation.

Executives don’t need to “speak AI.” But they do need translators who close the gap between technical and business fluency, enabling diverse stakeholders to speak the same language of value, risk, and readiness.

The AI translator model centralizes accountability for policy interpretation and alignment, creating a shared framework for both business and technical leaders. This model works especially well in large organizations where pilots extend across business units and silos are inevitable.

Often part of an AI governance office, translators ensure pilots align with business strategy and adhere to shared principles for responsible use without constraining local innovation. For this model to work, organizations need policy templates that standardize decision-making while leaving room for local adaptation.

Translators also collaborate with executives to clarify intended AI outcomes — whether efficiency, risk reduction, customer growth, or others — and convert those into measurable goals and guiding principles. These in turn help teams understand where experimentation is encouraged, when governance begins, and how every AI initiative measures up against indicators of success.

The right timing

Successful enterprises know when to shift from free experimentation to governed operationalization.

An auto parts manufacturer I advised experienced this firsthand as it explored how to use AI to improve supply chain efficiency. In a break from tradition, leadership chose to democratize experimentation, offering stipends for the best pilot ideas across the enterprise.

One winning idea came from the maintenance team. They proposed a predictive maintenance model that used sensor data to anticipate equipment failures and automatically trigger parts orders. But rather than immediately formalize governance, the company waited until the pilot demonstrated measurable ROI, showing a 14% reduction in unplanned downtime within three months.
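A quick sketch of the arithmetic behind that metric follows; the baseline and pilot downtime figures are invented for illustration, since only the 14% reduction comes from the pilot itself.

```python
# Hypothetical downtime figures; only the 14% reduction is from the pilot.
baseline_downtime_hours = 100.0  # unplanned downtime in the quarter before the pilot
pilot_downtime_hours = 86.0      # unplanned downtime during the three-month pilot

reduction = (baseline_downtime_hours - pilot_downtime_hours) / baseline_downtime_hours
print(f"Unplanned downtime reduced by {reduction:.0%}")  # -> 14%
```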

Only then did the company choose to codify specific operational policies, including model-monitoring standards, data-handling rules, and escalation procedures.

This approach gave the manufacturer a playbook for scaling similar AI use cases across its global plants. Rather than rush to govern everything at once, the organization learned to let policy evolve with maturity.

For other leaders, the manufacturer’s experience underscores a key lesson: policy shouldn’t precede innovation but follow it, allowing successful patterns to be formalized into repeatable practices.


Looking to the future

As organizations work to break free from pilot paralysis, indicators of progress include pilot-to-production conversion rates, diverse use cases across geographies, and a workforce actively engaged in ideation.

Solving the pilot challenge means organizations can move closer to achieving that elusive return on their AI investments. But their readiness is also cumulative: leaders’ ability to operationalize AI today will inform how they adopt and capitalize on emerging technologies like 6G wireless, XR (Extended Reality), and quantum computing in the years ahead.

Policy will remain the constant that determines scalability. Organizations that lean into AI governance now will be ready to adopt and benefit from future technologies faster. Their success will be driven by turning policy into a launchpad that not only accelerates their journey, but prepares them to stick the landing with precision, control, and confidence.
