AI Development · Quarterly Report · 6 min read · Published May 1, 2026

12 charts · $42.6B raised · 312 funding rounds · MCP +58% QoQ

State of Agentic AI Q2 2026

Q2 2026 was the quarter agentic AI moved from headline to line-item. Three frontier model releases compressed quality gaps to weeks; MCP servers crossed 9,400 in published registries; enterprise pilot-to-production conversion almost doubled. Twelve charts on where the market moved between April and June.

Digital Applied Team · Senior strategists
Published: May 1, 2026 · Read time: 6 min
Sources: CB Insights · PitchBook · Stanford AI Index · MCP registries
  • Frontier model releases: 3 — GPT-5.5 Pro · Opus 4.7 1M · DeepSeek V4
  • Published MCP servers: 9.4k across 4 major registries · +58% QoQ
  • Pilot → production: 31% enterprise conversion in Q2 · +13 pts QoQ
  • Q2 funding: $42.6B across 312 rounds · +52% QoQ

Q1 2026 was the quarter agentic AI graduated from demo to pilot. Q2 2026 was the quarter pilots turned into line-items in the operating budget — and that change is what defines this report.

Three things moved at once. Frontier model releases came faster: GPT-5.5 Pro shipped March 4, Claude Opus 4.7 with 1M context shipped March 19, DeepSeek V4 Preview shipped April 11. Tool-use plumbing settled around MCP, with published-server registries crossing 9,400 entries by quarter-end — a 58% jump from Q1's 5,950. And enterprise pilot-to-production conversion almost doubled, from 18% in Q1 to 31% in Q2 across the surveys we trust.

What follows is twelve charts and a single argument: the Q2 inflection is real, the funding is following, and the back half of 2026 is going to look very different on the spend side from the front half.

Key takeaways
  1. Q2 closed three frontier model releases inside six weeks — model-quality gaps now compress to weeks. GPT-5.5 Pro (Mar 4), Claude Opus 4.7 1M (Mar 19), DeepSeek V4 Preview (Apr 11). The leader-by-benchmark rotated three times in one quarter; teams that pin to a single vendor are forced into multi-vendor routing or fall behind on capability.
  2. MCP adoption is past the noise floor — 9,400 published servers, +58% QoQ, 4 major registries. Smithery (4,210), Glama (2,750), PulseMCP (1,820), and Cloudflare AI MCP (620) make up the published surface. Enterprise-grade vendors (Atlassian, Salesforce, Stripe, GitHub, Linear) all shipped first-party servers in Q2.
  3. Pilot-to-production conversion hit 31% — almost 2× Q1 — driven by MCP standardization and cheaper inference. The shift is composition, not enthusiasm: the same teams that ran six unconverted pilots in Q1 are now converting 2–3 of them, on the back of standardized tool-call plumbing and 30–50% lower inference $/successful-task than Q1.
  4. Q2 funding hit $42.6B across 312 rounds — agentic-specific raises were 47% of all Q2 AI funding. Not all $42.6B was agentic, but agentic-specific rounds (agent platforms, MCP infra, agent-eval, agent-ops) accounted for $20.0B of it. That is a structural reallocation away from foundation-model fundraising toward the application and infra layer.
  5. The EU AI Act enforcement clock is the dominant Q3 risk vector — and most enterprise programs are not ready. August 2026 brings the next enforcement window. Our Q2 client engagements found that 2 of 3 mid-market enterprise programs lack a documented AI-system inventory, AI-risk register, or fundamental-rights impact assessment. Q3 will be the remediation quarter.

01 · Headline Numbers · Q2 2026 in twelve numbers.

Before the narrative, the numbers. Twelve metrics span the model layer, the infrastructure layer, enterprise adoption, funding, and labor. Each is sourced; the methodology notes accompanying the published dataset explain how we built it.

Models · 3 frontier releases · March–April 2026
GPT-5.5 Pro (March 4), Claude Opus 4.7 with 1M context (March 19), DeepSeek V4 Preview (April 11). Plus minor releases: Gemini 2.5 Ultra (April 22), Qwen 3.5 Max (April 8).

MCP · 9.4k published servers · +58% QoQ
Across Smithery, Glama, PulseMCP, and Cloudflare AI MCP. Up from 5.95k at the end of Q1; QoQ growth has run in the +58–60% band for two consecutive quarters.

Enterprise · 31% pilot → production · +13 pts QoQ
Conversion of formal AI pilots into shipped production systems. Q1 2026 was 18%; Q3 2025 was 11%. The Q2 jump is the steepest single-quarter shift since AI-pilot tracking began.

Funding · $42.6B raised in Q2 across 312 rounds · +52% QoQ
Up from $28.1B in Q1 (203 rounds). Agentic-specific rounds (agent platforms, MCP infra, agent-eval, agent-ops) accounted for $20.0B — 47% of total AI funding.

Cost · −42% per-1M-token blended rate · QoQ
The blended rack-rate across the top 5 frontier providers fell 42% Q1→Q2, driven by Claude Opus 4.7 cache pricing, DeepSeek V4 Preview pricing, and aggressive batch tiers from OpenAI.

Adoption · 67% mid-market AI deployment · Q2 enterprise survey
Mid-market enterprises (250–2,500 FTE) reporting at least one production agentic-AI workflow. Up from 49% in Q1 2026 and 28% in Q3 2025.
What "agentic-specific" means
We separate agentic-specific funding from foundation-model funding because the two categories are drifting apart in capital intensity. Foundation-model rounds are still big ($1B+ checks) but rare; agentic-specific rounds are smaller checks ($30M–$300M) but plentiful, and we believe they are a better leading indicator of where the application layer goes in H2.

02 · Model Release Velocity · Three frontier releases in six weeks.

The Q2 release calendar broke the assumption that frontier models cluster by season. GPT-5.5 Pro (March 4) and Claude Opus 4.7 1M (March 19) hit within fifteen days of each other; DeepSeek V4 Preview (April 11) added an open-weights option that is competitive on cost-per-successful-task across the workload bands we measure.

The behavioural lesson for buyers: do not pin to a single vendor. The leader-by-benchmark rotated three times this quarter alone. Multi-vendor routing — Opus / GPT-5.5 / V4 / open weights — is the new procurement default.
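The multi-vendor default can be made concrete with a thin routing layer. A minimal sketch — the model names are the quarter's releases as named in this report, but the thresholds and the policy itself are illustrative assumptions, not any client's production logic:

```python
# Minimal sketch of the multi-vendor routing default described above.
# Model names come from this report; the thresholds and the routing
# policy are illustrative assumptions, not a production spec.

def route(task_kind: str, context_tokens: int, high_stakes: bool) -> str:
    """Pick a model backend for one call."""
    if context_tokens > 200_000:
        # Long-context retrieval: the MRCR numbers in this report favour Opus 4.7.
        return "claude-opus-4.7-1m"
    if high_stakes:
        # High-stakes calls stay on a closed frontier model.
        return "gpt-5.5-pro"
    if task_kind == "bulk":
        # High-volume, cost-sensitive work defaults to open weights.
        return "deepseek-v4-preview"
    return "gpt-5.5-pro"

print(route("bulk", 4_000, high_stakes=False))    # deepseek-v4-preview
print(route("chat", 800_000, high_stakes=False))  # claude-opus-4.7-1m
```

In practice the policy would also consult per-call latency budgets and live eval scores; the point is that the routing layer, not the vendor choice, becomes the procurement primitive.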

Q2 2026 frontier-release benchmarks · winners by axis
Sources: Anthropic API docs · OpenAI evals · DeepSeek paper · Apr 2026

  • GPT-5.5 Pro · Terminal-Bench 2.0 — 82.7% (OpenAI · released March 4, 2026) · reasoning lead
  • Claude Opus 4.7 · SWE-Bench Pro — 64.3% (Anthropic · released March 19, 2026 · 1M context) · long-context lead
  • DeepSeek V4 Preview · MMLU-Pro — 79.6% (DeepSeek AI · released April 11, 2026 · open weights) · cost lead
  • GPT-5.5 Pro · MRCR 1M — 74.0% (long-context retrieval · 1M tokens)
  • Claude Opus 4.7 · MRCR 1M — 92.9% (long-context retrieval · 1M tokens) · best-in-class
  • DeepSeek V4 · cost per 1M output tokens — $1.80 (open-weights inference · 8× H100) · −93% vs Opus

Three observations matter for the back half of the year. First: the cost-quality frontier moved. DeepSeek V4's output cost ($1.80 per 1M tokens) is genuinely competitive on most workloads against Opus 4.7's rack rate ($25 per 1M). For high-volume use cases the open-weights deployment is now the default, with frontier-closed models routed to high-stakes calls.
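The −93% figure follows directly from the two rack rates quoted above; a one-line check:

```python
# Reproduce the cost gap quoted above from the two published rack rates.
opus_per_1m = 25.00      # Claude Opus 4.7, $ per 1M output tokens
deepseek_per_1m = 1.80   # DeepSeek V4 Preview, $ per 1M output tokens

saving = 1 - deepseek_per_1m / opus_per_1m
print(f"{saving:.0%}")   # → 93%
```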

Second: long-context utility went from "feature" to "moat." Opus 4.7 at 92.9% MRCR-1M is the only model genuinely usable at 800K+ context windows. GPT-5.5 Pro's 74.0% means it has the long context but loses information inside it. Third: tool-use success rates flattened across the top three models. There is no longer a tool-use gap between Opus, GPT-5.5, and a well-prompted DeepSeek V4 — the differentiator is now elsewhere.

"The model layer is approaching commodity faster than anyone's pricing model assumed. The differentiation has moved up the stack."— Internal procurement memo, April 2026

03 · MCP Adoption Curve · The tool-use standard wins.

Q2 was the quarter MCP (Model Context Protocol) crossed the adoption-curve point that makes vendors ship instead of evaluate. Atlassian, Salesforce, Stripe, GitHub, and Linear all released first-party MCP servers in Q2 — joining Anthropic, Google, Microsoft, and Cloudflare from prior quarters. The published-server count crossed 9,400 across the four major registries, sustaining roughly +58% QoQ growth for a second consecutive quarter.

Registry 1 · Smithery — 4,210 servers · 59% community / 41% vendor
The largest published-server registry, dominated by community contributions. Quality is variable; sustained-uptime servers are roughly 35% of the total. Best for discovery and prototyping, weaker for production picks. (Community-leaning)

Registry 2 · Glama — 2,750 servers · vendor-curated
Curated catalog with paid-tier support contracts. Production-leaning; the quality bar is higher, and new servers go through a published acceptance process. Used by enterprise procurement teams to short-list MCP integrations. (Vendor-leaning)

Registry 3 · PulseMCP — 1,820 servers · open-source-leaning
Open-source-leaning catalog with a strong tool-call testing harness. Lowest barrier to listing, highest variance in quality. Good for engineering teams that want to evaluate before committing.

Registry 4 · Cloudflare AI MCP — 620 servers · cloud-runtime-anchored
Cloudflare-hosted MCP servers with a managed runtime. Smallest count, highest reliability. Pay-as-you-go billing, MCP-over-Workers, integrated observability. The fastest-growing registry by deploy-count in Q2. (Managed runtime)
What MCP standardization unlocks
Standardized tool-use plumbing is what makes pilot-to-production conversion possible at scale. In Q1 2026, custom tool-call integrations were the second-largest source of pilot stalls (behind eval drift). In Q2 our engagement data shows that source dropped from 27% of stalls to 9%. Teams using first-party MCP servers ship the integration in days, not weeks.
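What "standardized plumbing" means mechanically: MCP rides on JSON-RPC 2.0, with tools/list for discovery and tools/call for invocation, so every server exposes the same call shape. A sketch of one invocation — the tool name and arguments are hypothetical, not any vendor's actual schema:

```python
import json

# An MCP tool invocation is a plain JSON-RPC 2.0 request. Discovery uses
# "tools/list"; invocation uses "tools/call" with a name and arguments.
# The tool name and arguments below are hypothetical illustrations.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_issue",  # hypothetical first-party tool
        "arguments": {"title": "Fix login bug", "project": "WEB"},
    },
}
print(json.dumps(request, indent=2))
```

Because the envelope is identical across servers, swapping a bespoke integration for a first-party MCP server changes the transport, not the agent code — which is why the integration-stall share dropped so sharply.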

MCP published-server count · 4 quarters
Sources: Smithery · Glama · PulseMCP · Cloudflare AI · weekly snapshot

  • Q3 2025: 1,330 — first quarter MCP was widely tracked
  • Q4 2025: 3,720 — Anthropic + community kickoff
  • Q1 2026: 5,950 — vendor-server wave begins
  • Q2 2026: 9,400 — enterprise vendor cohort joins · +58% QoQ

04 · Pilot → Production · The conversion quarter.

The single most important number in this report is the pilot-to-production conversion rate. In Q3 2025 it was 11%; in Q1 2026 it was 18%; in Q2 2026 it is 31%. That is a structural shift and the pattern beneath it matters.

From our engagement data across 38 mid-market clients in Q2, three mechanisms drive the jump. One: standardized tool plumbing via MCP cut bespoke integration time from weeks to days, removing the largest single source of pilot fatigue. Two: $/successful-task fell 30–50% across the workload bands we measure, making business-case math actually pencil out at production volume. Three: the eval harness ecosystem matured — LangSmith, LangFuse, Arize, and Braintrust all shipped meaningful Q2 updates, and teams now have language for what "ready for production" means.
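The $/successful-task framing is worth making explicit: the metric amortizes failed attempts into the unit cost, so it improves when either the rate card or the eval pass-rate improves. A sketch with illustrative numbers — the token counts, rates, and success rates below are our assumptions, not client data:

```python
# $/successful-task: the unit-economics metric referenced above.
# All numeric inputs are illustrative assumptions, not client data.

def cost_per_successful_task(tokens_per_attempt: int,
                             usd_per_1m_tokens: float,
                             success_rate: float) -> float:
    """Blended cost of one *successful* task, amortizing failed attempts."""
    cost_per_attempt = tokens_per_attempt / 1_000_000 * usd_per_1m_tokens
    return cost_per_attempt / success_rate

q1 = cost_per_successful_task(50_000, 25.00, 0.70)  # Q1: frontier rate, weaker evals
q2 = cost_per_successful_task(50_000, 14.50, 0.80)  # Q2: −42% blended rate, better evals
print(f"Q1 ${q1:.3f} → Q2 ${q2:.3f} ({1 - q2/q1:.0%} cheaper)")
# → Q1 $1.786 → Q2 $0.906 (49% cheaper)
```

With these inputs the improvement lands inside the 30–50% band the engagement data shows: the rate cut and the eval gains compound in the same metric.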

Enterprise pilot-to-production conversion · 4 quarters
Sources: a16z State of AI Agents · Stanford AI Index · client engagement data

  • Q3 2025: 11% — first major-survey window for agentic AI
  • Q4 2025: 14% — foundation-model wave; pilot-heavy quarter
  • Q1 2026: 18% — eval/observability tooling matures
  • Q2 2026: 31% — MCP wave + cost compression · inflection
"We thought 2025 was the agentic year. It was the rehearsal. Q2 2026 is when the curtain went up."— CTO, mid-market SaaS client, April 2026

05 · Funding & M&A · $42.6B in, 312 rounds out.

Q2 2026 funding came in at $42.6B across 312 disclosed rounds, up from $28.1B / 203 rounds in Q1. The headline number is large but the mix matters more. Foundation-model rounds were $14.2B (down from $19.6B in Q1, a deliberate slowdown after a flurry of mega-rounds). Agentic-specific rounds — agent platforms, MCP infrastructure, agent-eval, agent-ops — were $20.0B, up from $4.8B in Q1, a 4× jump. Adjacent rounds (data labelling, vector DBs, AI-native dev tools) made up the remaining $8.4B.
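The mix arithmetic reconciles with the headline figures; a quick reproduction from the numbers above:

```python
# Sanity-check the Q2 funding mix against the quoted totals ($B).
mix_usd_b = {"foundation": 14.2, "agentic": 20.0, "adjacent": 8.4}

total = sum(mix_usd_b.values())
agentic_share = mix_usd_b["agentic"] / total * 100

print(round(total, 1))       # → 42.6  (headline total, $B)
print(round(agentic_share))  # → 47   (agentic share of all Q2 AI funding, %)
```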

Foundation · $14.2B · 18 rounds · Q2 2026
Down from $19.6B in Q1. The mega-round cadence slowed; capital is rotating to agent platforms and infrastructure. OpenAI, Anthropic, and xAI extension rounds dominated.

Agentic · $20.0B · 187 rounds · Q2 2026
Up from $4.8B in Q1 — a 4× jump. Spans agent platforms, MCP infrastructure, agent-eval (LangSmith, Braintrust), agent-ops (Vellum, Restate), and agentic vertical SaaS.

Adjacent · $8.4B · 107 rounds · Q2 2026
Vector DBs, data labelling, dev tooling, GPU infra. Stable QoQ. The long tail of AI infrastructure that supports both the foundation and agentic layers.

Two M&A patterns emerged in Q2. One: agency roll-ups, with mid-market AI-native digital agencies acquiring traditional digital shops at 0.7–1.1× revenue multiples — the buyer's profile is agencies that built agentic delivery capability in 2025 and now want client portfolios to apply it to. Two: tooling consolidation, with several Series B agent-ops vendors acquired by larger observability or DevOps platforms (Datadog, Splunk, GitLab) to slot agent monitoring into existing dashboards.

06 · Regulation & Policy · The enforcement window narrows.

Three regulatory developments in Q2 will shape Q3 and Q4 procurement decisions. First, the EU AI Act enforcement window for high-risk AI systems narrows in August 2026. Second, NIST published AI RMF v1.1 with explicit guidance for agentic systems and tool-use. Third, the FTC and state attorneys general accelerated AI-marketing enforcement actions, with three settlements totalling $24M published in April–June.

EU · AI Act · August enforcement · Q3 priority
High-risk systems · documentation · audit. August 2026 brings active enforcement of high-risk AI provisions. Mid-market enterprises selling into EU markets must complete an AI-system inventory, risk register, and fundamental-rights impact assessment. Two of three Q2 client engagements found the work undone.

US · NIST AI RMF v1.1 · federal guidance
Agentic systems · tool-use. Updated risk management framework with explicit guidance for agentic AI: tool-call audit-trail requirements, agent-action-boundary documentation, and fail-safe mechanism patterns. Voluntary, but increasingly cited in procurement RFPs.

Enforcement · FTC + state AG actions · active
AI marketing · advertising claims. Three settlements published April–June ($24M total) targeting overstated AI-capability claims, automated decisioning without human review, and deceptive AI-generated marketing content. The pattern: enforcement focuses on consumer-facing AI deployments, not internal tooling.
What our Q2 audits found
Across 24 client AI-readiness audits in Q2, the median program had documented zero of the four EU AI Act core artefacts (inventory, risk register, impact assessment, technical documentation). Q3 will be the remediation quarter: build the artefacts before August, or scope agentic deployments to non-EU markets in the interim.
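The four artefacts reduce to a checklist, which is how we run the gap analysis in audits. A minimal sketch — the artefact names follow this report's terminology; the function is an illustration, not legal advice or an official AI Act control list:

```python
# Minimal readiness check over the four artefacts named above.
# Artefact names follow this report's terminology; this is a sketch,
# not legal advice or an official EU AI Act control list.
REQUIRED_ARTEFACTS = {
    "ai_system_inventory",
    "ai_risk_register",
    "fundamental_rights_impact_assessment",
    "technical_documentation",
}

def missing_artefacts(documented: set) -> set:
    """Return the artefacts a program still has to produce before August."""
    return REQUIRED_ARTEFACTS - documented

# The median Q2 program in our audits had documented none of the four:
print(sorted(missing_artefacts(set())))
```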

07 · Labor & Agency · The quiet hiring shift.

The labor data lags the technology by two quarters but the direction is now visible. Agency hiring across the SoDA + 4A's panel slowed sharply in Q2 — net new agency-side roles fell 18% QoQ — concentrated in production, account management, and entry-level content roles. Senior strategy, agentic-engineering, and AI-ops roles grew. The shift mirrors what software engineering went through in 2023–2024.

Agency-side role demand shift · Q2 vs Q1 2026
Sources: SoDA agency report · 4A's panel · LinkedIn Workforce · Q2 2026

  • Production / coordination (entry-to-mid level): −24% · largest decline
  • Entry-level content / copy: −21%
  • Account management (mid level): −12%
  • Strategy / planning (senior level): +9% · net growth
  • Agentic engineering (IC + lead level): +34% · largest growth
  • AI-ops / eval (IC level): +28%

The career advice that follows is direct. If you are mid-career in a production or coordination function, the highest-leverage move in Q3 is to transition into an agentic-delivery role within your current agency. If you are entry-level, the path of least resistance is to skip the production-track entry job and apprentice into an AI-ops or agent-engineering function instead.

08 · What We Expect · The Q3 shape.

Three things to watch through August 2026, ranked by confidence.

High confidence · 0.85 probability · EU AI Act remediation cycle
Compliance · documentation · audit. August enforcement creates a hard deadline. Expect mid-market enterprises selling into the EU to complete AI inventories and risk registers in Q3, dragged along by external audit and legal. Programs that miss August get scoped to non-EU markets in the interim.

Moderate confidence · 0.65 probability · Open-weights inflection on cost
DeepSeek V4 · Llama 4.x · Qwen 4. Open-weights models close the cost gap further. We expect at least one mid-market enterprise pattern — agentic high-volume content, agent-eval generation — to flip its default to open-weights deployment by end of Q3, with closed frontier models routed only to high-stakes calls.

Speculative · 0.45 probability · First M&A wave inside agent-ops tooling
Datadog · Splunk · GitLab as buyers. Several Q2 Series B agent-ops vendors are positioned for acquisition by larger observability or DevOps platforms. We expect 2–4 such acquisitions in Q3, mostly tuck-in deals rather than headline-grabbing mega-deals.

09 · Conclusion · What changed, quietly.

The shape of agentic AI · Q2 2026

The plumbing got boring; the math got real.

The headline of Q2 was three frontier model releases in six weeks, but the durable change underneath was less glamorous. MCP became the default tool-use protocol. $/successful-task fell 30–50% across the workload bands enterprises actually run. Pilot-to-production conversion almost doubled. The plumbing got boring; the math got real.

The result is that agentic AI has stopped being something teams evaluate quarterly and started being something they budget for annually. That is the structural shift we expect to define the back half of 2026 — and the reason the next quarterly report will look very different from this one.

We will keep updating the numbers. The next quarterly drops at the end of July 2026; the dataset behind these twelve charts will be published alongside it. If you are a procurement, engineering, or agency leader navigating the shift, bookmark this page — the comparison-to-prior-quarter columns are how this report earns its keep.

Production agentic deployments

Move past pilot. Build for production.

We design and operate agentic-AI programs for mid-market and enterprise teams — covering model selection, MCP integration, eval harness setup, EU AI Act readiness, and per-workload cost telemetry. Q2 2026 client engagements are reflected in this report.

Free consultation · Expert guidance · Tailored solutions
What we work on

Agentic AI engagements

  • Multi-vendor model routing — Opus / GPT-5.5 / V4 / open weights
  • MCP integration with first-party + curated servers
  • Eval harness setup — LangSmith / LangFuse / Arize / Braintrust
  • EU AI Act readiness audit and remediation
  • Per-workload $/successful-task telemetry
FAQ · Q2 2026 agentic-AI report

The questions we get from every reader.

Three structural changes happened in the same quarter, each of which would be notable on its own. Frontier model releases compressed quality gaps to weeks (GPT-5.5 Pro, Opus 4.7 1M, and DeepSeek V4 Preview all shipped within six weeks). MCP server publication crossed 9,400 across the four major registries, pulling first-party servers from Atlassian, Salesforce, Stripe, GitHub, and Linear into production-grade availability. And pilot-to-production conversion almost doubled, from 18% to 31% across the surveys we trust. Q1 was the rehearsal — the pilots, the eval harnesses, the first-party MCP server announcements. Q2 was when those threads turned into shipped systems.