By Kris Lovejoy, Global Security and Resiliency Practice Leader at Kyndryl
The rise of agentic AI marks a monumental leap in technological capability and a fundamental challenge to cybersecurity paradigms.
Autonomous agents can reason, plan and execute complex tasks, enabling enterprises to tackle difficult problems, improve customer experiences and continuously optimize operations. However, the autonomy and adaptability that make these systems so powerful also introduce a new class of vulnerabilities beyond the reach of traditional security models.
For decades, “Secure by Design” has helped enterprises enable resilience by embedding security early. Yet the autonomous nature of agents is now revealing limitations these principles never anticipated.
Today’s leaders face an inflection point: They must evolve their strategies beyond legacy defenses and embrace a new blueprint built for autonomous intelligence. Their ability to realize the potential of agentic AI and mitigate its risks will depend on redefining security for the agentic era.
Why traditional security models fall short
Secure by Design principles were traditionally predicated on predictable, rule-based systems. They led enterprises to focus on hardening defined perimeters, validating known inputs and preventing exploits of specific code vulnerabilities.
Agentic AI shatters these assumptions. Its dynamic, adaptive and often opaque nature creates a fundamentally different attack surface that renders static defenses inadequate.
While traditional application security focuses on identifying code vulnerabilities, the most severe attacks on AI systems often target training data to corrupt outputs. Traditional security models are also designed for predictable inputs, meaning they’re ill-equipped to defend against adversarial prompts that use cleverly crafted natural language to manipulate agents and override or “jailbreak” safety restrictions.
Agentic AI further skirts traditional approaches by blurring trusted boundaries. Many advanced AI models act with unprecedented speed and agility, making it more difficult to detect when a system has been compromised or is following malicious instructions.
Treating security as a final checkpoint will fall short of securing agentic AI systems that operate across complex technology systems. Rather, agentic AI requires embracing a DevSecOps approach that integrates security throughout the entire development lifecycle, from model training to deployment. Legacy approval processes cannot accommodate the automated, continuous security validation that agentic systems require.
The growing risk of inaction
The potential consequences of failing to evolve security approaches are severe and multifaceted — inaction is not an option.
Risks now extend beyond traditional data breaches to the manipulation of autonomous systems that can interact with the physical world. An agent operating with broad permissions can be hijacked through subtle prompt manipulation, turning a helpful assistant into a malicious actor capable of exfiltrating data, executing unauthorized financial transactions or causing physical disruption.
Multiagent systems are also susceptible to chain reactions. A single compromised agent can misdirect other agents, leading to a domino effect of systemic failure, misinformation and unpredictable behavior. Compromised agents can enable malicious goals to rapidly spread across interconnected systems, breaching containment boundaries and amplifying harm.
Data poisoning and model theft present additional risks. Attackers may corrupt an agent's training data to introduce biases or hidden vulnerabilities. Sophisticated adversaries can also reverse-engineer proprietary models through repeated queries, compromising intellectual property.
The autonomous nature of AI agents also makes traditional compliance frameworks insufficient. Without proper enterprise controls, agentic AI systems that process sensitive data may expose organizations to compliance and regulatory lapses. Violating regulations like the General Data Protection Regulation (GDPR) can result in substantial fines, loss of certifications and reputational damage.
The Open Web Application Security Project (OWASP) Top 10, a list of the most critical security risks for large language models — which serve as the reasoning engine of agentic AI underscores many of these emerging threats, including prompt injection, training data poisoning, and excessive agency. Given these risks, leaders face an urgent imperative to adopt a new security blueprint.
A new blueprint for securing agentic AI
Government agencies and industry leaders have begun to formulate new frameworks to help enterprises advance agentic AI. Organizations like the Coalition for Secure AI (CoSAI), the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the National Institute of Standards and Technology (NIST) and major tech companies like Google and Microsoft have all contributed to a growing consensus on what a modern Secure by Design approach must entail.
Securing agentic AI requires a multilayered strategy that extends beyond traditional security to address the unique lifecycle and operational realities of autonomous intelligence. A unified framework for secure agentic AI involves four core pillars: foundational governance, secure development lifecycle, robust operational security and adaptive monitoring and response.
Conclusion
The age of agentic AI has arrived. Organizations now have a profound responsibility to develop and deploy these powerful systems securely. The Secure by Design principles that have served enterprises in the past are insufficient for this new reality. By embracing a holistic, lifecycle-based approach that prioritizes governance, secure development, robust operational controls, and adaptive monitoring, organizations can realize the immense potential of agentic AI without sacrificing safety or security. The call to action is clear: now is the time to recalibrate security for the agentic AI era.