Agentic AI: The reality behind the hype 

By Kris Lovejoy, Global Security and Resiliency Practice Leader at Kyndryl

Some technologies arrive quietly, almost invisibly. Not so agentic AI: these are systems designed to act, and they arrive with the promise of autonomy.

The reality isn’t so clear. Agentic AI is certainly real, but it’s also restless, imperfect, and often oversold.

The technology itself refers to artificial intelligence systems that can act on their own to achieve goals. Unlike traditional AI models that simply respond to human prompts, agentic AI can independently plan, make decisions, and take multi-step actions to accomplish a given objective. While the concept holds immense promise, the current landscape is a mix of genuine progress, significant hype, and some claims that border on unrealistic.

What's real: The emergence of goal-oriented automation 

At its core, agentic AI is about creating systems that can "get things done" with a degree of independence. The real and tangible aspects of agentic AI today include the ability to automate complex, multi-step tasks that were previously beyond the reach of simple automation. For example, an agentic system could be tasked with "planning a business trip to New York," and it would then proceed to book flights, reserve a hotel, and add appointments to a calendar, all without step-by-step human intervention.

Modern agentic AI can also interact with other software and APIs, allowing it to pull information from a database, send an email, apply a patch, or interact with a website to complete its tasks.  Additionally, agentic capabilities are being integrated with large language models (LLMs). This means that instead of just generating text, an LLM can now use its generated plan to take actions in the digital world.
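The pattern described above, in which a plan is broken into steps and each step is dispatched to a software tool, can be sketched in a few lines of Python. Everything here is hypothetical and deliberately simplified: the plan is hard-coded and the tools are stubs, whereas a production system would have a language model generate the plan and real APIs behind each tool.

```python
# Hypothetical sketch of an agentic loop: a plan is produced for a goal,
# then each step is dispatched to a tool (in practice, an API call).

def book_flight(destination):          # stand-in for a flight-booking API
    return f"flight booked to {destination}"

def reserve_hotel(destination):        # stand-in for a hotel-reservation API
    return f"hotel reserved in {destination}"

def add_calendar_entry(destination):   # stand-in for a calendar API
    return f"calendar updated for {destination}"

TOOLS = {
    "book_flight": book_flight,
    "reserve_hotel": reserve_hotel,
    "add_calendar_entry": add_calendar_entry,
}

def plan(goal):
    # In a real system an LLM would generate this plan from the goal;
    # it is hard-coded here to keep the sketch self-contained.
    return ["book_flight", "reserve_hotel", "add_calendar_entry"]

def run_agent(goal, destination):
    # Execute each planned step without step-by-step human intervention.
    results = []
    for step in plan(goal):
        tool = TOOLS.get(step)
        if tool is None:               # guardrail: ignore unknown actions
            continue
        results.append(tool(destination))
    return results

print(run_agent("plan a business trip", "New York"))
```

The guardrail check in the loop reflects a point made throughout this article: current agentic systems act only within a defined set of permitted actions, not with open-ended autonomy.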

In industry-specific settings, we are seeing real-world applications of agentic AI. In customer service, for example, AI agents can handle complex queries that require accessing and updating customer records. In finance, they can perform market analysis and execute trades based on predefined parameters. In cybersecurity, they can identify and respond to threats in real-time.

Industry analysts forecast that by 2028, 33% of enterprise software applications will incorporate agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.

The hype: Where excitement outpaces capability 

The excitement surrounding agentic AI has led to some exaggerated claims and unrealistic expectations. While agentic AI can operate independently within a defined scope, the idea of a fully autonomous AI making complex, nuanced decisions without any human oversight is still science fiction. Current systems operate within carefully constructed guardrails and are often best suited for well-defined, repetitive tasks.

It’s important to remember that when we talk about AI, “thinking” and “reasoning” are analogies. While agentic AI can exhibit “reasoning-like” behavior by breaking down a problem into smaller steps, this is a product of sophisticated algorithms and large datasets, not genuine consciousness or understanding. The “thinking” is a simulation, and these systems can fail in unexpected and illogical ways when faced with novel situations.

Likewise, demonstrations of agentic AI often showcase a smooth, seamless process. However, in reality, these systems are prone to errors, can get stuck in loops, and may misinterpret instructions. Their reliability in complex, open-ended environments is still a major challenge. Consequently, the narrative of AI agents replacing human jobs on a large scale in the near future is overblown. While they can augment human capabilities and automate certain tasks, they lack the critical thinking, adaptability, and common-sense reasoning required for most professional roles. 

The hoax: Misleading claims and unrealistic expectations 

While there’s no widespread, orchestrated hoax, some claims about agentic AI are misleading and contribute to a distorted perception of its current capabilities. The so-called hoax lies in the significant gap between marketing and reality.

Many companies are quick to label their products as “agentic AI” to capitalize on the hype, even if the underlying technology is more akin to traditional automation with a conversational interface. As a result, many discussions about agentic AI downplay the crucial role that human oversight, intervention, and correction play in their current successful deployments. True “set it and forget it” autonomy is rare and often risky.  
 
Agentic AI promoters are also guilty of minimizing its risks and limitations. The potential for agentic AI to make mistakes, exhibit biases present in its training data, or be used for malicious purposes is often understated in optimistic portrayals. Documented failures have shown that these systems can behave in unpredictable and undesirable ways. For instance, an airline was held legally responsible after its AI-powered chatbot provided a customer with incorrect information, and research has shown that agentic systems can sometimes “hallucinate” and invent facts to complete a task. 

It is clear that agentic AI is a real and rapidly developing field with the potential to significantly impact various industries. However, it’s crucial to approach the topic with a healthy dose of skepticism. The current reality is one of powerful but limited automation that still relies heavily on human guidance and oversight. The vision of truly autonomous, thinking machines remains a long-term goal, and the present is more about a gradual evolution of AI’s capabilities than an overnight revolution.
