If your search program is already dialed in and CPAs are creeping up, AI Mode ads won’t “beat” your best keywords. The real job is incremental pipeline—measured with guardrails, not vibes.

That's the wrong bar anyway. The real question is whether AI Mode ads add incremental conversions (or qualified pipeline) you weren't getting before, without wrecking unit economics.

Early signals are mixed, which is exactly why this needs an ops-grade test plan, not a dashboard victory lap. In an analysis of 250+ campaigns, Mike Ryan (SMEC) reported AI Max delivered a 13% lift in conversion value, alongside higher CPA and variable ROAS (source: expert opinions compilation in the research brief). That’s not “awareness only.” It’s also not “plug-and-print efficiency.”

And here’s the 2026 wrinkle: measurement itself is getting fuzzier. The same brief flags that Consent Mode 2.0 modeled conversions can overestimate by ~10–20% in small accounts, with a recommendation to validate against CRM outcomes (source: latest developments compilation). If the numerator might be inflated, arguing about CPA deltas to the second decimal is theater.
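To make that concrete, here is a small sketch of how a 10–20% overcount in modeled conversions moves true CPA. The spend and conversion numbers are illustrative, not from the brief:

```python
def cpa_range(spend: float, reported_conversions: float,
              overcount_high: float = 0.20) -> tuple[float, float]:
    """Bound CPA when modeled conversions may overcount.

    If reported = true * (1 + e), then true = reported / (1 + e),
    so the deflated ("pessimistic") CPA is face_value * (1 + e).
    """
    face_value = spend / reported_conversions
    pessimistic = face_value * (1 + overcount_high)
    return face_value, pessimistic

# Illustrative numbers: $10,000 spend, 200 reported conversions.
face, worst = cpa_range(10_000, 200)
print(f"face-value CPA: ${face:.2f}; if 20% overcounted: ${worst:.2f}")
```

A $50 face-value CPA becomes $60 under the pessimistic assumption. If your target CPA sits inside that band, the platform dashboard cannot settle the argument on its own.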

So what matters now? Google’s AI Mode launched in early 2026 and is pushing ads into conversational journeys—plus shopping ads and “Direct Offers” for retailers—while AI Max reduces reliance on keywords by using Gemini to interpret landing pages and generate/optimize ad components (source: latest developments compilation). That changes where intent shows up, how queries get formed, and where conversions may happen.

AI Mode isn’t competing with your best keywords

The most common misread is to benchmark AI Mode traffic against the most polished part of the account: branded search and bottom-funnel non-brand that’s been tuned for years. That comparison makes any new surface look bad. Of course the legacy campaigns are efficient. They’ve been sanded down by query mining, bid adjustments, negative lists, landing page tests, and budget triage.

AI Mode (and AI Max as the matching layer) is built for coverage you probably don’t have: longer, exploratory, multi-step searches where users refine intent through follow-ups. Expert commentary in the brief frames this as “assisted discovery” for informational queries, with commercial intent captured as the conversation tightens (source: expert opinions compilation). Different job. Different economics.

But the data tells a different story than “it’s just awareness.” The ClickUp case study cited in the brief attributes outcomes to AI-driven search improvements including 20% incremental conversions, 16% ROAS increase, 22% lower CPA, and a 15% higher conversion rate (source: expert opinions compilation). That’s lower-funnel impact—when the landing pages and intent matching line up.

Mike Ryan (SMEC) found AI Max delivered a 13% lift in conversion value across 250+ campaigns, but with higher CPA and variable ROAS.

The tension is the point: AI Mode can drive conversion value while also making efficiency less predictable. That’s not a contradiction. It’s what happens when you trade tight keyword control for broader intent capture.

One move: prove incrementality with a holdout (not last-click)

If you only change one thing, change this: stop trying to “win” the AI Mode debate in-platform. Run an incrementality test with a real baseline, a real holdout, and CRM-validated outcomes.

The hypothesis (make it falsifiable): If we run AI Max / AI Mode-style expansion against a defined holdout, then qualified pipeline (or conversion value) will increase versus baseline because the system will match us to incremental long-tail and conversational intent we aren’t capturing with our current keyword set.

Seen from the other side, here’s what could also be true: you get more conversions in Google Ads reporting, but they’re modeled, inflated, or cannibalized from existing non-brand. The research brief explicitly warns that modeled conversions can overestimate performance in smaller accounts and recommends CRM validation (source: latest developments compilation). So the test has to be built to survive that.
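One way to build that survival into the readout: compare CRM-validated conversion rates between treatment and holdout directly, with a standard two-proportion z-test, rather than arguing over platform-reported deltas. A minimal sketch; the group sizes and counts below are hypothetical:

```python
import math

def incrementality_readout(treat_conv: int, treat_n: int,
                           hold_conv: int, hold_n: int) -> tuple[float, float]:
    """Absolute lift and two-proportion z-score, treatment vs. holdout.

    Feed this CRM-validated conversion counts, not platform-modeled ones.
    """
    p_t = treat_conv / treat_n
    p_h = hold_conv / hold_n
    pooled = (treat_conv + hold_conv) / (treat_n + hold_n)
    se = math.sqrt(pooled * (1 - pooled) * (1 / treat_n + 1 / hold_n))
    z = (p_t - p_h) / se if se > 0 else 0.0
    return p_t - p_h, z

# Hypothetical counts: 260/10,000 in treatment vs. 200/10,000 in holdout.
lift, z = incrementality_readout(260, 10_000, 200, 10_000)
print(f"lift: {lift:.4f}, z: {z:.2f}")  # z > 1.96 ≈ significant at 95%
```

The design choice that matters is the input: if the counts come from the CRM, a 10–20% inflation in modeled conversions can't manufacture a lift that isn't there.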

Run it this week: a conversion-proof AI Mode test

Here’s the 5-minute version you can run this week:

Setup: Define the holdout up front (matched geos or a mirrored campaign split), lock a baseline window, and agree on the success metric: CRM-validated conversions or qualified pipeline, not platform-modeled numbers.

Launch: Turn on AI Max / AI Mode-style expansion for the treatment side only. Keep budgets, bidding strategy, and landing pages steady everywhere else, so the expansion is the only variable.

Readout: Compare treatment vs. holdout on conversions and CPA, then reconcile platform-reported conversions against the CRM to catch the modeled-conversion inflation the brief warns about.

Next test: If lift shows up but efficiency softens, iterate on landing page and intent alignment before adding budget. If there is no CRM-validated lift, kill it.

If you see lift but efficiency softens, don't panic. That's consistent with Mike Ryan's finding: higher conversion value with higher CPA and variable ROAS (source: expert opinions compilation). The next iteration is usually landing page and intent alignment, because AI Max is described as using Gemini to analyze landing pages and match user intent, including automated headline generation and optimization (source: latest developments compilation). If the page doesn't speak the language of the query, the system can't rescue you.

The trade-off: more coverage, less certainty

AI Mode ads can be both awareness and conversion. That’s not fence-sitting; it’s how conversational journeys work. Calvin Scharffs (Direct Digital Holdings) argues AI Mode’s “fan-out” subtopic breakdown can place relevant ads into those journeys, shifting emphasis from raw volume to quality engagement and lifetime value in high-consideration categories (source: expert opinions compilation). That’s a B2B-friendly framing.

The risk is also real. Automation can create performance cliffs. The research brief cites scenarios where Smart Bidding/PMax can underperform manual by 30–50% in failure cases, with a recommendation for quarterly CRM validation (source: latest developments compilation). So no, this isn’t “set it and forget it.” It’s “set it, measure it like an adult, and kill it if it doesn’t earn its keep.”
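"Measure it like an adult" can be operationalized as a pre-agreed guardrail check that runs at every readout. A sketch with hypothetical thresholds; the 20% overcount tolerance and 30% CPA drift limit are assumptions you would set per account:

```python
def guardrail_flags(platform_conversions: float, crm_conversions: float,
                    test_cpa: float, baseline_cpa: float,
                    max_overcount: float = 0.20,
                    max_cpa_drift: float = 0.30) -> list[str]:
    """Return reasons to pause/review a test; an empty list means keep running."""
    flags = []
    if crm_conversions > 0:
        overcount = platform_conversions / crm_conversions - 1
        if overcount > max_overcount:
            flags.append(f"platform overcounts CRM by {overcount:.0%}")
    if baseline_cpa > 0:
        drift = test_cpa / baseline_cpa - 1
        if drift > max_cpa_drift:
            flags.append(f"CPA up {drift:.0%} vs. baseline")
    return flags

# Hypothetical readout: platform reports 240 conversions, CRM confirms 190;
# test CPA of $72 vs. a $50 baseline. Both guardrails should trip.
print(guardrail_flags(240, 190, 72, 50))
```

The point of agreeing on thresholds before launch is that nobody gets to renegotiate them after the dashboard shows a number they like.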

Circle back to the original constraint: a mature search program doesn’t need another awareness channel pretending to be direct response. It needs incremental pipeline. AI Mode might deliver it—or it might just reshuffle credit inside modeled attribution. The only honest way to tell is a holdout tied to CRM reality. Everything else is just a nicer chart.