By Kris Lovejoy, Global Security and Resiliency Practice Leader at Kyndryl
Some technologies slip quietly into use. Agentic AI is not one of them. From the moment it emerged, it carried bold promises of autonomy and action — systems that don’t just respond, but reason, plan, and execute.
That excitement is justified. Agentic AI is real, advancing quickly, and already beginning to transform how enterprises think about automation. But like any breakthrough, the story is still being written. Progress exists alongside hype, and the possibilities come with growing pains. What’s clear is that we’re at the start of something consequential — a shift from tools that answer questions to systems that get things done.
The emergence of goal-oriented automation
At its core, agentic AI is about creating systems that can "get things done" with a degree of independence. What is real and tangible today is the ability to automate complex, multi-step tasks that were previously beyond the reach of simple automation. For example, an agentic system tasked with "planning a business trip to New York" could proceed to book flights, reserve a hotel, and add appointments to a calendar, all without step-by-step human intervention.
Modern agentic AI can also interact with other software and APIs, allowing it to pull information from a database, send an email, apply a patch, or interact with a website to complete its tasks. Additionally, agentic capabilities are being integrated with large language models (LLMs). This means that instead of just generating text, an LLM can now use its generated plan to take actions in the digital world.
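The pattern described above — a goal decomposed into a plan, with each step dispatched to a tool or API — can be sketched in a few lines. This is a minimal illustration, not a production design: the planner here is hard-coded, and the tool names (`book_flight`, `reserve_hotel`, `add_calendar_event`) are hypothetical stand-ins; in a real agentic system an LLM would generate the plan and the tools would call live services.

```python
from typing import Callable

# Tool registry: each "tool" is an ordinary function the agent may invoke.
# In practice these would wrap real APIs (airline booking, calendar, etc.).
TOOLS: dict[str, Callable[[str], str]] = {
    "book_flight": lambda arg: f"flight booked: {arg}",
    "reserve_hotel": lambda arg: f"hotel reserved: {arg}",
    "add_calendar_event": lambda arg: f"event added: {arg}",
}

def plan(goal: str) -> list[tuple[str, str]]:
    """Stand-in planner: decomposes a goal into (tool, argument) steps.
    A real agentic system would use an LLM to produce this plan."""
    if "business trip" in goal:
        return [
            ("book_flight", "NYC, outbound Monday"),
            ("reserve_hotel", "Midtown, 3 nights"),
            ("add_calendar_event", "client meeting, Tuesday 10am"),
        ]
    return []  # no plan for goals the planner doesn't recognize

def run_agent(goal: str) -> list[str]:
    """Execute each planned step by dispatching to the matching tool."""
    return [TOOLS[tool_name](argument) for tool_name, argument in plan(goal)]

print(run_agent("plan a business trip to New York"))
```

The key design point is the separation of planning from execution: the agent's autonomy lives in the planner, while the tool registry defines the hard boundary of what it is allowed to do — which is also where the guardrails discussed later in this piece attach.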
In industry-specific settings, we are seeing real-world applications of agentic AI. In customer service, for example, AI agents can handle complex queries that require accessing and updating customer records. In finance, they can perform market analysis and execute trades based on predefined parameters. In cybersecurity, they can identify and respond to threats in real time.
Gartner predicts that by 2028, 33% of enterprise software applications will incorporate agentic AI, up from less than 1% in 2024, allowing 15% of daily work decisions to be made autonomously.
The reality behind the excitement
Agentic AI has generated extraordinary buzz — and for good reason. These systems mark a leap beyond traditional models, moving from answering prompts to carrying out multi-step tasks with initiative. The potential is vast. But it’s worth distinguishing today’s capabilities from tomorrow’s vision.
Current agents excel when operating within clear guardrails and defined objectives. They can automate workflows, coordinate across tools, and adapt based on feedback — making them powerful partners in enterprise environments. What they don’t yet do is make open-ended, nuanced decisions without human oversight. The “reasoning” we see is reasoning-like: sophisticated simulations of thought built from algorithms and data, not consciousness.
That’s why governance is essential. Without intentional design, oversight, and accountability, even the best agents can loop, misinterpret, or escalate problems in unexpected ways. Enterprises that thrive will be those that pair agentic AI with clear frameworks for reliability, ethics, and human decision-making. Their greatest value today lies not in replacing humans, but in amplifying them — reducing cognitive load, accelerating tasks, and enabling people to focus on judgment, context, and strategy.
Cutting through the noise
Clarity matters. In the rush to capture attention, many products are branded “agentic” even when they are closer to traditional automation wrapped in a conversational interface. That gap between marketing and reality fuels confusion and risks eroding trust.
In that same vein, true autonomy — a “set it and forget it” approach — is rare, and in many cases undesirable. What makes current deployments successful is precisely the combination of human oversight, intervention, and judgment. Ignoring that truth doesn’t help adoption; it sets the stage for disappointment.
This is why transparency and governance are essential. Agentic AI can make errors, reflect bias, or be misused. We’ve already seen chatbots mislead customers and agents fabricate information to complete a task. These moments don’t negate the technology’s promise; they underscore the need for clear guardrails, accountability, and a commitment to responsible use.
The reality is powerful but limited automation that thrives when paired with human guidance. The opportunity now is not to oversell, but to build trust — by showing where agentic AI truly delivers value today while laying the foundation for its longer-term evolution.