Last updated at Mon, 09 Jun 2025 20:18:12 GMT

From writing assistance to intelligent summarization, generative AI has already transformed the way businesses work. But we’re now entering a new phase where AI doesn’t just generate content, but takes independent action on our behalf.

This next evolution is called ‘agentic AI’, and it’s moving fast. Amazon recently announced a dedicated R&D group focused on agentic systems. OpenAI is advancing its Codex Agent SDK to build more capable AI “workers.” And a growing number of businesses are actively experimenting with autonomous agents to handle everything from code generation to system orchestration.

While the potential is significant, so are the risks. These new systems bring fresh challenges for security teams, from unpredictable behavior and decision-making to new forms of supply chain exposure.

Here are five things every security leader needs to know right now.

1. Agentic AI is moving from research to reality

Unlike traditional generative AI, which responds to single prompts, agentic AI systems operate more autonomously, often over longer durations and with less human supervision. They can make decisions, learn from feedback, and complete multi-step tasks using reasoning and planning capabilities.

Some agents even have memory and goal-setting functions, enabling them to adapt to changing conditions and take initiative. This has huge implications for productivity but also opens the door to a new class of operational and security risks.

According to Forrester(1), agentic AI represents a shift “from words to actions,” with agents poised to become embedded across knowledge work, development, cloud operations, and customer-facing systems. Security teams must now consider not just what AI is generating, but what it’s doing.

2. Emerging use cases span development, robotics, and IT automation

Agentic AI has been surrounded by hype, but we’re already seeing practical use cases emerge across development, automation, and robotics.

  • Amazon’s new R&D group is focused on building AI agents for robotics and software orchestration, aiming to automate real-world tasks with physical and digital components.
  • OpenAI’s Codex Agent SDK is enabling developers to build custom agents that can interact with APIs, browse the web, and execute instructions without human involvement.
  • In enterprise IT, some early agentic tools are being used to generate and deploy scripts, configure systems, and resolve tickets across helpdesk platforms.

As these systems become more capable, they also become harder to predict. Agentic AI doesn’t just follow rules; it works toward outcomes. That makes it both valuable and volatile in enterprise environments.

3. The attack surface is expanding in new and subtle ways

One of the most critical risks that agentic AI introduces is decision unpredictability. These systems operate with a degree of autonomy, which means they can take action based on reasoning that isn’t always traceable or transparent. That creates blind spots for traditional controls.

Other risks include:

  • Prompt injection and manipulation, where attackers feed malicious instructions into agent workflows
  • Unintended lateral movement, especially when agents interact with APIs or third-party services
  • Supply chain exposure, as agents increasingly rely on external tools, plugins, and data sources to function

As noted at Infosecurity Europe, many of today’s AI threat models don’t yet account for agents that can generate, interpret, and act on instructions in dynamic environments. Traditional AppSec and identity controls will need to evolve to monitor not just access, but behavior over time.

4. Governance, observability, and containment are critical

As with earlier generations of AI, governance will define how successfully agentic systems can be adopted and secured.

Experts across MIT Sloan and Thoughtworks agree: organizations must rethink how they apply principles like least privilege, role-based access, and anomaly detection in an agentic context. That includes:

  • Observing how agents reason and make decisions
  • Restricting the actions they’re allowed to take (especially with sensitive data or infrastructure)
  • Implementing containment strategies that limit blast radius in case of failure or manipulation

Agent-based systems can’t be treated like static applications. Security teams need tools that provide ongoing insight into agent activity, and the ability to intervene when needed.

This is especially important when agents are integrated into security workflows themselves. If an agent is responsible for triaging alerts or executing playbooks, who’s accountable when it fails? And how do you audit its decisions?

5. Security teams have an opportunity to lead — but the window is narrow

We’re still in the early stages of agentic AI adoption, which gives security leaders a rare opportunity to influence how these systems are implemented from the outset. That includes building safe defaults, engaging with developers early, and applying threat modeling and testing before agents are deployed in production.

At Rapid7, we’ve already begun evaluating agent behavior through the lens of exposure, intent, and exploitability — the same principles that guide how we think about modern attack surfaces. Our goal is to help customers harness the speed and scale of AI without sacrificing visibility or control.

We’ve also introduced AI-powered application coverage in Exposure Command to help customers identify misconfigurations and application-layer weaknesses that could be exploited by or through autonomous tools.

Where security goes from here

Agentic AI represents the next wave of transformation. It’s not just generating output; it’s taking action. And while the business potential is huge, so is the responsibility to deploy it safely.

The attackers of 2025 are not just writing better phishing emails. They’re weaponizing automation, scaling social engineering, and skipping the learning curve. Security teams need to respond with visibility, control, and collaboration. Because when everyone has access to the same technology, it’s those who use it responsibly and defensively that come out ahead.

The time to prepare is now. Agentic AI is moving quickly…and it’s not waiting for security to catch up.


(1) Forrester (2025) With Agentic AI, Generative AI Is Evolving From Words to Actions. [online] Available at: https://reprint.forrester.com/reports/with-agentic-ai-generative-ai-is-evolving-from-words-to-actions-9c6cf2d9/index.html