Two numbers tell the story of AI agents in 2026:

  • 97 million: monthly MCP SDK downloads
  • 14.4%: organizations whose AI agents all went live with full security approval

The first number is a triumph. The second is a disaster waiting to happen.


The Protocol That Won

In November 2024, Anthropic released the Model Context Protocol (MCP), a standard way for AI agents to connect to external tools. Databases. APIs. File systems. Calendars. MCP gave agents a universal interface to the digital world.

The adoption velocity was remarkable. By March 2025, OpenAI announced support across the Agents SDK, Responses API, and ChatGPT desktop. Sam Altman posted simply: “People love MCP and we are excited to add support across our products.” In April, Google DeepMind confirmed MCP support in Gemini models. By December, Anthropic donated the protocol to the newly formed Agentic AI Foundation under the Linux Foundation, with OpenAI, Block, AWS, Google, Microsoft, Cloudflare, and Bloomberg as founding members.

The numbers tell the story: 97 million monthly SDK downloads across Python and TypeScript. Over 10,000 active servers. First-class client support in Claude, ChatGPT, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code.

One year. From internal tool to critical infrastructure.

Think of it as USB-C for AI. Before MCP, if you had 10 AI apps and 20 data sources, you needed 200 custom integrations. With MCP, you need 30.
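The arithmetic behind that claim: point-to-point integration grows multiplicatively, while a shared protocol grows additively. A quick sketch:

```python
def integrations_without_protocol(apps: int, sources: int) -> int:
    # Every app needs its own custom adapter for every data source.
    return apps * sources

def integrations_with_protocol(apps: int, sources: int) -> int:
    # Each app implements one client; each source implements one server.
    return apps + sources

print(integrations_without_protocol(10, 20))  # 200 custom adapters
print(integrations_with_protocol(10, 20))     # 30 protocol implementations
```

The gap widens fast: at 50 apps and 100 sources, it's 5,000 integrations versus 150.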

The problem of “how does my agent access my stuff” was solved.

But MCP is only half the story.


The Other Half

MCP connects agents to tools. It doesn’t connect agents to each other.

Your calendar agent can read your schedule. Your travel agent can book flights. But they can’t coordinate. They don’t know each other exists. They’re strangers sharing the same computer.

At the personal level, this is annoying. My AI assistant manages my calendar, email, and smart home. Works great when I’m the one talking to it. But when my wife’s AI assistant needs to schedule something with mine? Radio silence. They literally cannot communicate.

At the enterprise level, this is a strategic gap. A Salesforce agent can’t coordinate with a SAP agent. A customer service bot can’t hand off to a logistics bot. Every cross-system workflow requires humans in the middle.

Google saw this coming. In April 2025, they launched the Agent2Agent (A2A) protocol with backing from Salesforce, SAP, PayPal, and over 150 companies. IBM launched a competing protocol in March, then merged it into A2A by August. The enterprise world is consolidating around a standard for agent-to-agent communication.

But here’s the problem: while the industry figures out how agents will talk to each other, security is falling dangerously behind.


The Security Time Bomb

The numbers are alarming.

Gravitee surveyed over 900 executives and technical practitioners for their State of AI Agent Security 2026 report. The findings confirm a massive shift: 80.9% of technical teams have moved past the planning phase into active testing or production. AI agents are no longer experimental. They are production infrastructure.

But only 14.4% report all AI agents going live with full security/IT approval.

The disconnect between executive perception and technical reality is dangerous. 82% of executives feel confident that existing policies protect them from unauthorized agent actions. The ground truth tells a different story:

  • On average, only 47.1% of an organization’s AI agents are actively monitored or secured
  • More than half of all agents operate without any security oversight or logging
  • 88% of organizations reported confirmed or suspected AI agent security incidents in the last year
  • In healthcare, that number jumps to 92.7%

These aren’t minor glitches. The report includes practitioner stories of agents gaining unauthorized write access to databases and attempting to exfiltrate sensitive information. The risk isn’t about hallucinations anymore. It’s about agents being too effective at taking actions they were never intended to take.

The core problem is identity. Most organizations still treat agents as extensions of human users or generic service accounts:

  • Only 21.9% treat AI agents as independent, identity-bearing entities
  • 45.6% rely on shared API keys for agent-to-agent authentication
  • 27.2% have reverted to custom, hardcoded logic to manage authorization

When agents share credentials or use hardcoded logic, accountability breaks down. And here’s the kicker: 25.5% of deployed agents can create and task other agents. If Agent A creates Agent B, who’s responsible when Agent B goes wrong?

Security teams cannot protect what they cannot see. When agents interact with production data before they’re even vetted, “Shadow AI” becomes a backdoor into the enterprise.


The OWASP Response

The security community is scrambling to catch up. OWASP released their Top 10 Risks for Agentic Applications for 2026, covering everything from traditional threats to AI-specific nightmares.

The list reads like a preview of incidents waiting to happen:

ASI01: Agent Goal Hijack. Attackers manipulate an agent’s tasks by exploiting the model’s inability to distinguish legitimate instructions from external data. One documented example: hidden instructions embedded in a webpage triggered an export of the user’s browser history when parsed by an AI agent.

ASI02: Tool Misuse and Exploitation. Agents use legitimate tools in unsafe ways. A customer support chatbot with access to a financial API was manipulated into processing unauthorized refunds because its access wasn’t restricted to read-only.

ASI03: Identity and Privilege Abuse. Permissions get inherited in unexpected ways. An employee creates an agent using their personal credentials. When that agent is shared with coworkers, all requests execute with the creator’s elevated permissions.

ASI04: Agentic Supply Chain Vulnerabilities. Third-party models and tools may be compromised from the start. A coding assistant automatically installs a compromised package containing a backdoor, allowing attackers to scrape CI/CD tokens and SSH keys from the agent’s environment.

ASI05: Unexpected Code Execution. Agents generate and execute code in real-time, opening doors for malicious scripts. Through prompt injection, an agent can be tricked into downloading and executing attacker-provided commands.
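One mitigation for ASI02 is to enforce permission scopes in the tool-dispatch layer rather than trusting the model’s judgment: the agent’s scope is fixed at deployment time, so a manipulated prompt cannot widen it. A minimal sketch (the tool names and `dispatch` wrapper are hypothetical, not from any specific framework):

```python
# Hypothetical permission gate around agent tool calls.
READ_ONLY_TOOLS = {"get_balance", "get_order_status"}
WRITE_TOOLS = {"issue_refund", "update_order"}

def dispatch(tool: str, scopes: set, args: dict, registry: dict):
    """Execute a tool call only if the agent's scopes permit it."""
    if tool in WRITE_TOOLS and "write" not in scopes:
        raise PermissionError(f"agent not authorized to call {tool}")
    if tool not in READ_ONLY_TOOLS | WRITE_TOOLS:
        raise LookupError(f"unknown tool: {tool}")
    return registry[tool](**args)

registry = {
    "get_balance": lambda account: 42.0,  # stub implementations
    "issue_refund": lambda account, amount: f"refunded {amount}",
}

# A read-only support agent can query, but the refund path is blocked
# regardless of what the prompt convinces the model to attempt:
scopes = {"read"}
print(dispatch("get_balance", scopes, {"account": "A1"}, registry))
try:
    dispatch("issue_refund", scopes, {"account": "A1", "amount": 10}, registry)
except PermissionError as e:
    print("blocked:", e)
```

The point is architectural: the check lives outside the model, in code the attacker’s prompt never reaches.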

These aren’t theoretical. Documented attacks targeting AI development agents are already in the wild.

And that’s just individual agents. What happens when they start coordinating?


Now Multiply That By Swarms

If individual agents are a security challenge, agent swarms are an order of magnitude worse.

On January 24, 2026, developer Mike Kelly discovered a hidden feature in Claude Code that Anthropic hadn’t announced. He found “Swarms”, a feature-flagged capability that transforms Claude Code from a single AI assistant into a team orchestrator. Instead of one AI writing code, multiple specialist agents work in parallel, coordinating via shared task boards and messaging each other to solve problems.

The discovery hit 281 points on Hacker News with over 200 developers debating whether this is the future of development or a dangerous step too far. Kelly had to create an unlock tool just to access it. Anthropic built this powerful capability, then hid it behind feature flags. No announcement, no documentation, no official release.

The architecture is sophisticated: a team lead that plans and delegates but doesn’t write code itself. It spawns specialists (frontend, backend, testing, documentation) who work simultaneously with fresh context windows per agent. A testing agent validates changes continuously. Workers coordinate amongst themselves, not just with the human.
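The coordination pattern described (a lead that delegates, workers pulling from a shared task board) resembles a classic work-queue design. A toy sketch in Python threads, purely illustrative and unrelated to Anthropic’s actual implementation:

```python
import queue
import threading

# Shared task board: the lead enqueues tasks, specialists consume them.
task_board = queue.Queue()
results = {}
results_lock = threading.Lock()

def specialist(role: str):
    # Each worker only sees the tasks it pulls, akin to a fresh context.
    while True:
        task = task_board.get()
        if task is None:  # shutdown signal
            task_board.task_done()
            break
        with results_lock:
            results[task] = f"{role} completed {task}"
        task_board.task_done()

def lead(tasks, roles):
    # The lead plans and delegates but does no work itself.
    workers = [threading.Thread(target=specialist, args=(r,)) for r in roles]
    for w in workers:
        w.start()
    for t in tasks:
        task_board.put(t)
    for _ in workers:
        task_board.put(None)
    task_board.join()
    for w in workers:
        w.join()

lead(["frontend", "backend", "tests", "docs"], ["worker-1", "worker-2"])
print(len(results))  # 4 tasks completed
```

Even this toy version shows the security issue: once workers pull tasks from a shared board, anything that can write to the board can task every worker.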

Three days later, on January 27, China’s Moonshot AI released Kimi K2.5 as open-source. Its “Agent Swarm” feature deploys up to 100 AI agents working in parallel, achieving 4.5x speedup in execution time and handling up to 1,500 tool calls simultaneously.

Gartner reports a 1,445% surge in multi-agent system inquiries from Q1 2024 to Q2 2025. By the end of 2026, they project 40% of enterprise applications will include task-specific AI agents, up from less than 5% in 2025.

The attack surface implications are severe. As Kaspersky’s analysis of the OWASP framework notes: “A single error, caused by hallucination, a prompt injection, or any other glitch, can ripple through and amplify across a chain of autonomous agents.”

One compromised agent in a swarm of 100 isn’t a single point of failure. It’s a propagation vector. The attack surface isn’t linear. When agents coordinate, vulnerabilities compound.
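The compounding is easy to quantify: with n agents that can all message each other, the number of communication channels, each a potential injection path, grows quadratically:

```python
def channels(n: int) -> int:
    # Undirected agent-to-agent links: n choose 2.
    return n * (n - 1) // 2

for n in (2, 10, 100):
    print(n, channels(n))
# 2 agents -> 1 channel; 10 -> 45; 100 -> 4950
```

Doubling the swarm roughly quadruples the paths a compromise can travel.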


The Missing Layer

So where does this leave us?

MCP solved the agent-to-tool problem. Near-universal adoption, clear benefits, security issues at least understood if not always addressed.

Agent-to-agent communication is next. Two main approaches are emerging:

Centralized (A2A): Google’s protocol, now under the Linux Foundation with 150+ member companies. Enterprise-focused, HTTP-based, uses “Agent Cards” for discovery. Clear governance, vendor support, accountability. But also: central registries, platform dependencies, and the risks that come with concentrated control.

Decentralized (ANTS, others): The ANTS Protocol takes a different approach. It’s building a full stack for agent interaction: identity, discovery, and communication, all without central control. Agents get human-readable handles (like @myagent), authenticate with Ed25519 cryptographic signatures, and communicate through a decentralized relay network built on Nostr infrastructure.

The security model is identity-first: every message is cryptographically signed at the source. No central registry decides who can participate. No platform can revoke your agent’s identity. If Agent A messages Agent B, both parties verify signatures directly, no intermediary required.
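The sign-at-source model is straightforward to illustrate. Below is a minimal sketch using the widely used `cryptography` package; the key handling and message format are illustrative only, not the actual ANTS wire format:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Each agent holds its own keypair; the public key IS its identity.
agent_a_key = Ed25519PrivateKey.generate()
agent_a_pub = agent_a_key.public_key()

message = b"schedule: dinner Friday 19:00"
signature = agent_a_key.sign(message)

# Agent B verifies directly against A's public key; no registry involved.
try:
    agent_a_pub.verify(signature, message)
    print("verified")
except InvalidSignature:
    print("rejected")

# A tampered message fails verification:
try:
    agent_a_pub.verify(signature, b"schedule: dinner Friday 20:00")
except InvalidSignature:
    print("tamper detected")
```

Because verification needs only the sender’s public key, no platform sits in the trust path, which is exactly why no platform can revoke the identity either.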

Neither approach has won. Both solve real problems. A2A offers enterprise governance and Fortune 500 backing. ANTS offers censorship resistance, self-sovereign identity, and infrastructure that no single company controls.

But regardless of which approach prevails, the security gap remains. The industry is building agent-to-agent infrastructure before figuring out:

  1. Identity: How do you verify who an agent is? Only 21.9% treat agents as identity-bearing entities.
  2. Trust: How do you know an agent is legitimate? 45.6% still use shared API keys.
  3. Audit: How do you track what an agent did? More than half operate without logging.
  4. Boundaries: How do you limit what an agent can do? 25.5% can spawn other agents.
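A minimal form of the identity and audit requirements above: every agent gets its own principal ID, and every action record carries the delegation chain back to a human, so “Agent A tasked Agent B” stays traceable. A hypothetical schema sketch (the field names are illustrative, not from any standard):

```python
import datetime
import json
import uuid

def new_agent_id() -> str:
    # Each agent is an independent principal, not a shared service account.
    return f"agent-{uuid.uuid4()}"

def audit_record(agent_id: str, action: str, delegation_chain: list) -> dict:
    # delegation_chain preserves who tasked whom, back to a human principal.
    return {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "delegation_chain": delegation_chain,
    }

human = "user:alice"
agent_a = new_agent_id()
agent_b = new_agent_id()  # spawned by agent_a

rec = audit_record(agent_b, "db.write", [human, agent_a, agent_b])
print(json.dumps(rec, indent=2))
```

With shared API keys, that chain collapses to a single anonymous credential; the 45.6% figure means nearly half of organizations cannot reconstruct it.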

MCP focused on functionality first, security second. It worked because tool access is relatively contained. The damage a rogue agent can do through a calendar API is limited.

Agent-to-agent communication is different. When agents can coordinate, delegate, and act on each other’s behalf, the blast radius expands dramatically. A compromised agent isn’t just accessing tools. It’s potentially influencing other agents, propagating through networks, executing coordinated actions.

We have maybe 12–18 months before agent swarms go mainstream. The Claude Code Swarms feature is already built, just not enabled. Kimi K2.5 is shipping now. The capability is here.

The security frameworks are not.


What Happens Next

For builders deploying agents today:

  • Don’t skip security review. The 14.4% approval rate exists because organizations move fast. But shadow agents create unquantified risk.
  • Use MCP for tool access. It’s the standard. Don’t reinvent it.
  • Watch A2A and decentralized alternatives like ANTS. Agent-to-agent is coming. Your architecture decisions now will affect your options later.
  • Treat agents as identity-bearing entities. Not extensions of users. Not generic service accounts. Independent principals with their own credentials and audit trails.
  • Plan for swarms. Even if you’re not using them yet, your vendors and partners will be.

For the industry:

  • Security frameworks must catch up to adoption. The current gap is unsustainable.
  • Agent identity needs standardization. Not just “who deployed this agent” but “how do I verify this agent is who it claims to be.”
  • Audit trails need to work across agent boundaries. When Agent A tells Agent B to do something, both actions need to be traceable.
  • We need circuit breakers. When something goes wrong in an agent swarm, there must be ways to stop propagation.
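The circuit-breaker idea maps directly onto the classic pattern from distributed systems: after a threshold of failures, calls to a misbehaving agent are cut off rather than retried, containing the blast radius. A minimal sketch:

```python
import time

class CircuitBreaker:
    """Stops forwarding work to an agent after repeated failures."""

    def __init__(self, max_failures: int = 3, reset_after: float = 60.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: agent quarantined")
            self.opened_at = None  # half-open: allow a single probe
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result

def flaky_agent():
    raise ValueError("agent misbehaving")

breaker = CircuitBreaker(max_failures=2, reset_after=60.0)
for _ in range(2):
    try:
        breaker.call(flaky_agent)
    except ValueError:
        pass
try:
    breaker.call(flaky_agent)
except RuntimeError as e:
    print(e)  # circuit open: agent quarantined
```

In a swarm, the same gate would sit on every inter-agent channel, so one compromised agent gets quarantined instead of tasking the other 99.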


The Easy Problem and the Hard One

MCP proved we can standardize agent infrastructure quickly: 97 million monthly downloads within 14 months of release. A protocol that started at one company became an industry standard, now governed by a foundation with the biggest names in AI as founding members.

That was the easy problem. Agents talking to tools is relatively safe. Tools are passive. They do what they’re told. The risk surface is manageable.

Agents talking to agents is the hard problem. Agents are active. They make decisions. They can be manipulated. And when they coordinate, small failures become large ones. 100 agents working in parallel, 1,500 tool calls simultaneously, decisions cascading through networks of autonomous systems.

The 14.4% approval rate should concern everyone building in this space. Because when agent swarms become mainstream, that’s not individual agents going rogue. That’s coordinated, autonomous systems operating at scale, mostly without security oversight.

88% of organizations already report security incidents. We haven’t even started deploying swarms at scale yet.

We solved half the problem. The other half is harder, riskier, and coming faster than most organizations are prepared for.

The infrastructure is being built right now. The question is whether security will be part of the foundation, or an afterthought bolted on later.

We built speed. We skipped trust.

The 14.4% number tells you which way we’re headed. The next major breach won’t be an exposed database or a leaked API key. It’ll be an ungoverned agent that nobody knew existed, making decisions nobody approved, in a swarm nobody can audit.

If you’re building agents, start designing for identity and governance now. Not after your first incident.


Sources:

  • Gravitee, “State of AI Agent Security 2026 Report” (2026)
  • OWASP, “Top 10 Risks for Agentic Applications 2026”
  • Pento, “A Year of MCP: From Internal Experiment to Industry Standard”
  • Kaspersky, “AI agents in your organization: managing the risks”
  • ByteIota, “Claude Code Swarms: Hidden Multi-Agent Feature Discovered”
  • Moonshot AI, Kimi K2.5 documentation
  • Gartner, multi-agent system adoption projections