
Outpacing risk: Security readiness for the AI era

By Cory Musselman
Chief Information Security Officer, Kyndryl
By Tony De Bos
Vice President of Security and Resiliency, Kyndryl
Ideas lab | 9/04/2026 | Read time: 1 min


A new era of digital security

This week, Anthropic revealed details about the startling power of its new AI model, Mythos, a development that may fundamentally reframe our relationships with the machines that surround us and alter the systems that run the global economy.

While we now have only glimpses of what the tool is capable of, what we’ve learned is by turns exhilarating and frightening. The model appears to be a step change beyond anything that has come before, with extraordinary new abilities to scrutinize and write code and solve complex problems.

With that power comes a profound shift in risk for IT and operational technology systems. The company says the new AI-powered model can “surpass all but the most skilled humans at finding and exploiting software vulnerabilities” and that it has already detected thousands of them. To respond, the company launched Project Glasswing, an assemblage of some of the world’s leading technology players, to proactively guard against the model’s exploitation by malevolent actors.

For global enterprises, this sets the stage for a new paradigm in cybersecurity.

Implications for global enterprises

The ability to scrutinize old code faster and at scale means that the model has the potential to identify and exploit vulnerabilities that have slumbered for decades. And while human hackers might have found those cracks through weeks of labor, new AI tools can theoretically locate them and launch attacks much more frequently and quickly. Enterprises could once have taken comfort in the age and inaccessibility of their vital systems — the risk was always there, but opportunity costs were simply much higher for attackers. That calculation has now changed.

This underscores a truism of cybersecurity: AI doesn’t create new vulnerabilities; it exposes existing ones faster and at scale.

We are entering a new era for security and resiliency, where old best practices will have to be radically updated to meet these new challenges. Timelines for patching code could be condensed to mere hours. And code for new software will have to pass a much higher standard, with no room for errors, which will be detected at machine speed.

Fighting fire with fire

AI developments like these pit the sheer power of new tools against enterprise readiness. Advantage shifts to organizations that can combine AI capabilities with deep, environment-wide observability and the ability to operationalize changes safely and quickly.

This is even more pronounced in mission-critical technologies — those that run banks, healthcare systems, utilities, insurance companies, travel and other essential services. Crucially, those systems cannot simply be turned off while a threat is isolated. If a model can find issues quickly, but an enterprise cannot implement and validate remediation without breaking production, the risk remains.

While the terrain is still changing, there are some basic questions about AI readiness that organizations should be asking themselves:

How do we get faster?

Speed-to-protect will become the essential defensive capability for organizations facing supercharged AI attacks. Here, scale matters. Advanced AI, with the right guardrails in the right environment, can help organizations keep pace. AI-driven automated offensive testing across enterprise environments can help, but only with sufficient observability across the entire enterprise. This shifts cyber resilience from alerting and response to continuous, adaptive control. The winners will be the organizations that can deploy the newest tools faster than their adversaries and that have built their technology estates strategically to bolster resilience and agency.

How do we govern internal AI agents?

Agentic AI systems can help organizations detect vulnerabilities faster and reduce human bottlenecks. But those same systems also introduce a new category of internal risk: a misconfigured or compromised agent could cause as much harm as an external attacker. Organizations need an AI operating model that is explicit about identity, permissions, and separation of duties — what agents can access, what they can change, what requires human approval, and how actions are logged and audited.
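The operating model described above — explicit permissions, human-approval gates for sensitive changes, and a complete audit trail — can be sketched in code. This is a minimal illustration, not any vendor's implementation; every name here (AgentPolicy, "patch-agent", the environment labels) is hypothetical.

```python
# Minimal sketch of an AI-agent operating model: default-deny permissions,
# a human-in-the-loop gate for sensitive resources, and an audit log.
# All names are illustrative, not drawn from any real product or framework.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentPolicy:
    name: str
    can_read: set                       # resources the agent may inspect
    can_change: set                     # changes the agent may make autonomously
    needs_approval: set                 # changes requiring human sign-off
    audit_log: list = field(default_factory=list)

    def request_change(self, resource: str, approved_by: Optional[str] = None) -> bool:
        """Decide whether a change is allowed; log every decision either way."""
        if resource in self.can_change:
            allowed = True
        elif resource in self.needs_approval:
            allowed = approved_by is not None   # human-in-the-loop gate
        else:
            allowed = False                     # default-deny: separation of duties
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": self.name,
            "resource": resource,
            "approved_by": approved_by,
            "allowed": allowed,
        })
        return allowed

# Example: a patching agent may update test systems on its own,
# but production changes require a named human approver.
patch_agent = AgentPolicy(
    name="patch-agent",
    can_read={"test-env", "prod-env"},
    can_change={"test-env"},
    needs_approval={"prod-env"},
)
assert patch_agent.request_change("test-env") is True
assert patch_agent.request_change("prod-env") is False                  # no approver: denied
assert patch_agent.request_change("prod-env", approved_by="cso") is True
assert len(patch_agent.audit_log) == 3                                  # every attempt logged
```

The key design choice is that denial is the default and the log records denied attempts as well as approved ones, so a misconfigured or compromised agent leaves evidence even when it fails.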

What are the most critical assets we need to protect?

Enterprises are increasingly concerned about where and how their critical data is stored and processed, and how much control they have over the systems they use to operate. Few can exert maximum effort everywhere at once. Enterprises need to identify their “crown jewels” and prioritize their security. Strategies include vaulting and air-gapping data — physically separating it from vulnerable systems — while considering structures that diminish the “blast radius” of attacks.

This is just the most recent reminder of what technology executives already know: The advancement of artificial intelligence technology is accelerating, new AI models are ever-more powerful, and the pace of progress shows no sign of abating.

In just a few years, AI has evolved from a useful tool to an essential component of modern business and, now, to a potentially existential threat. More than ever, enterprises need trusted partners to guide them into this unnerving new reality – partners with experience in their mission-critical systems, the tools for observability and the technology to keep up with the pace of change.

