If your organic clicks and form-fills are sliding 20–30% while Sales says “inbound feels fine,” you’re not crazy—you’re watching AI search remove the evidence your dashboards depend on. That’s the real risk in 2026: not that marketing stops working, but that the proof marketing relies on gets harder to capture and easier to dismiss.
Forrester has been blunt about the direction of travel. As buyers use answer engines and AI-generated results, visible interactions decline, and the engagement-based accountability model starts to wobble. The same research notes that many teams are already seeing 20–30% declines in web traffic and demand volume as zero-click behavior rises.
Here’s the pattern interrupt: Forrester also reports that 90% of B2B marketing leaders say AI visibility is an investment-level priority. Marketing leaders are prioritizing visibility in systems that—by design—can reduce clicks. That’s not a contradiction. It’s the new job.
One clear move: Replace “engagement proof” with an AI Visibility + Sales Telemetry Scorecard that can survive fewer clicks. Not a rebrand of SEO reporting. An accountability reset built for answer engines.
The accountability model is built on engagement. Forrester says so.
For two decades, B2B marketing has cashed the same check: if systems can show buyers engaged with marketing, then marketing must be contributing. It’s why marketing-sourced pipeline, marketing-influenced revenue, lead volume, MQLs, and website engagement became board-safe language.
Forrester’s research frames how deep that dependency runs: eight of the top 12 criteria used to evaluate B2B marketing rely on proof of engagement. That’s the foundation. And it’s brittle.
The uncomfortable part is that this bargain was always a little shaky. Engagement is measurable, yes. It’s also incomplete. Buyers talk to peers. They build shortlists in private. They forward PDFs. They sit in internal meetings marketing never sees. Engagement metrics were a proxy, not reality. Convenient. Sometimes useful. Often overstated.
The context, however, is more complex now. AI search doesn’t just change where discovery happens; it changes what gets recorded. When the research journey compresses into an answer, the “evidence trail” disappears.
AI search removes clicks—and with them, your attribution receipts
Answer engines and AI-enhanced SERPs shift optimization away from keyword rankings and toward Answer Engine Optimization (AEO): content designed to be retrieved, cited, and summarized by systems like Perplexity and Copilot (as described across sources cited in the research brief, including Forrester and MarketingProfs).
That shift matters because the buyer’s interaction is increasingly with the model, not the site. The research brief captures the behavioral change in a few sharp data points:
- Google AI Overviews are associated with a 34.5% reduction in website clicks (as cited in the research brief from an unspecified study referenced in search results).
- Organic CTR can drop by as much as 61% when AI Overviews appear (also cited in the research brief from an unspecified publisher in recent news search results).
- At the same time, 90% of B2B buyers who use AI Overviews click through to at least one cited source (research brief, unspecified publisher). So it’s not “no clicks.” It’s different clicks.
Now add the second-order effect: buyers ask longer, more specific questions in AI tools—10–11 words versus 2–3 words on Google (as cited in the research brief from unspecified expert-opinion sources). That changes what “top-of-funnel” even looks like. It’s less browsing. More decision-shaped queries.
And there’s another twist. One cited finding in the brief says visits from AI-powered search convert at 4.4x the rate of traditional organic traffic (unspecified publisher). Lower volume, higher intent. Great for pipeline quality. Terrible for teams still graded on raw lead count and last-click attribution.
The mismatch: what the business needs won’t show up in legacy metrics
Forrester frames the core mismatch clearly: the work marketers must prioritize now—buyer preference and visibility in generative AI search—often won’t appear in legacy engagement metrics. So marketing can help the business while looking like it’s underperforming.
That’s the credibility trap for a VP of Demand Gen in 2026. The dashboard says traffic is down. Leads are down. Multi-touch attribution shows less “influence.” Meanwhile, buyers are still getting answers—just not on your site.
The research brief attributes a practical constraint to Perplexity CEO Aravind Srinivas: 90% of buyers fact-check AI citations. So the model’s answer isn’t the finish line. The citations are the battleground. If your brand isn’t cited, you’re not merely losing traffic; you’re losing the chance to be verified.
And if you’re assuming “we’ll just be cited because we rank,” don’t get comfortable. Another cited stat in the brief says 76% of AI Overview citations come from top-10 organic results, and 26% of brands are absent entirely (unspecified publisher). AI visibility can reinforce incumbents. It can also erase you.
Run it this week: an AI Visibility + Sales Telemetry scorecard (directional, auditable)
This is not a tooling project. It’s a measurement design change. The goal is simple: keep accountability intact when clicks decline.
Hypothesis (make it falsifiable): If we track AI answer visibility (presence + citations) alongside first-party intent and Sales stage conversion, then we’ll see stable or improving qualified pipeline even when web engagement falls, because AI search is shifting discovery into zero-click environments while sending fewer, higher-intent visits.
Setup
- Audience: 10–20 high-intent buyer questions tied to your ICP (role + use case). Use the longer-query reality: aim for question formats, not head terms.
- Owners: Demand Gen (scorecard owner), SEO/Content (AEO inputs), RevOps (pipeline and stage definitions), Sales (stage hygiene).
- Tools: Whatever you already use for CRM + web analytics; add a lightweight process for AI visibility checks (manual is fine for week one). Tool choice matters less than consistent sampling.
- Timeline: 14 days to baseline, then weekly readouts.
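If it helps to make the setup concrete, it can be written down as a small, auditable config before week one. A minimal sketch; the questions, owners, and field names below are placeholders, not recommendations:

```python
# Minimal scorecard config sketch. Every value here is a hypothetical
# placeholder to be replaced with your own ICP questions and owners.
scorecard_config = {
    # Target 10-20 high-intent, question-shaped queries (not head terms).
    "priority_questions": [
        "How do RevOps teams measure AI-sourced pipeline?",   # hypothetical
        "What replaces MQL volume when organic clicks fall?",  # hypothetical
    ],
    "owners": {
        "scorecard": "Demand Gen",
        "aeo_inputs": "SEO/Content",
        "pipeline_definitions": "RevOps",
        "stage_hygiene": "Sales",
    },
    "baseline_window_days": 14,
    "readout_cadence": "weekly",
}

print(scorecard_config["readout_cadence"])
```

Writing it down this way forces the week-one arguments (which questions, who owns what) to happen before the first readout, not during it.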
Launch
- Step 1 — Baseline AI visibility: For each priority question, check whether your brand is present in AI answers and whether you’re cited. Record: present/not present, cited/not cited, which URL is cited (if any). Keep screenshots. Make it auditable.
- Step 2 — Map citations to downstream proof: For cited URLs, track what happens next: demo requests, contact sales, trial starts, pricing page visits—whatever “hand-raise” exists in your motion. Don’t over-interpret last-click. Just capture direction.
- Step 3 — Tie to Sales telemetry: In CRM, create one field or tag for “AI-sourced/AI-influenced discovery” based on self-report or SDR capture when it’s explicitly mentioned. It will be incomplete. That’s fine. Directional beats imaginary precision.
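Steps 1–3 reduce to a log-and-aggregate routine that a spreadsheet or a few lines of code can handle. A minimal sketch in Python, assuming a hand-kept list of weekly checks (all field names and sample records are hypothetical):

```python
from collections import defaultdict

# Each record is one weekly check of one priority question.
# Fields are hypothetical; keep screenshots alongside for auditability.
checks = [
    {"week": 1, "question": "q1", "present": True,  "cited": True,  "cited_url": "/guide"},
    {"week": 1, "question": "q2", "present": True,  "cited": False, "cited_url": None},
    {"week": 1, "question": "q3", "present": False, "cited": False, "cited_url": None},
]

def weekly_visibility(checks):
    """Aggregate presence and citation rates per week across priority questions."""
    by_week = defaultdict(lambda: {"checked": 0, "present": 0, "cited": 0})
    for c in checks:
        w = by_week[c["week"]]
        w["checked"] += 1
        w["present"] += c["present"]  # booleans sum as 0/1
        w["cited"] += c["cited"]
    return {
        week: {
            "presence_rate": w["present"] / w["checked"],
            "citation_rate": w["cited"] / w["checked"],
        }
        for week, w in by_week.items()
    }

print(weekly_visibility(checks))
# week 1: present on 2 of 3 questions, cited on 1 of 3
```

The CRM side (Step 3) stays deliberately crude: one tag per record when AI discovery is explicitly mentioned, joined to these weekly rates at readout time.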
Readout
- Success = stable or improving qualified pipeline per week while AI visibility (presence/citations) increases on priority questions.
- Guardrails = lead-to-opportunity conversion rate and sales cycle progression (stage-to-stage). If volume drops but conversion improves, that’s a trade-off worth debating, not a failure.
- Stop-loss = if qualified pipeline drops for two consecutive weeks and AI visibility is flat or declining, pause content production aimed at AEO and diagnose (ranking, citation eligibility, credibility signals, or question selection).
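The readout rules above can be expressed as one decision function, which keeps the weekly meeting honest. A sketch under simple assumptions (thresholds, comparisons, and inputs are illustrative, not prescriptive):

```python
def readout_decision(pipeline_by_week, citation_rate_by_week):
    """Apply the success / guardrail / stop-loss rules from the scorecard.

    pipeline_by_week: qualified pipeline per week, oldest first
    citation_rate_by_week: citation rate on priority questions, oldest first
    """
    # Stop-loss: two consecutive weekly pipeline drops with flat/declining visibility.
    pipeline_falling = (
        len(pipeline_by_week) >= 3
        and pipeline_by_week[-1] < pipeline_by_week[-2] < pipeline_by_week[-3]
    )
    visibility_flat_or_down = (
        len(citation_rate_by_week) >= 2
        and citation_rate_by_week[-1] <= citation_rate_by_week[-2]
    )
    if pipeline_falling and visibility_flat_or_down:
        return "stop-loss"  # pause AEO content production and diagnose

    # Success: stable-or-better pipeline while visibility improves.
    pipeline_stable = (
        len(pipeline_by_week) < 2 or pipeline_by_week[-1] >= pipeline_by_week[-2]
    )
    visibility_up = (
        len(citation_rate_by_week) >= 2
        and citation_rate_by_week[-1] > citation_rate_by_week[-2]
    )
    if pipeline_stable and visibility_up:
        return "success"
    return "monitor"  # watch guardrails: lead-to-opp conversion, stage progression

# Example: pipeline dropped two weeks running while citations stalled.
print(readout_decision([100, 90, 80], [0.30, 0.30, 0.30]))  # stop-loss
```

The guardrail metrics (conversion rate, stage-to-stage progression) stay outside the function on purpose: they frame the debate in the "monitor" case rather than trigger automatic action.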
Next test
- Pick the 3 questions where you’re absent (remember the “26% absent entirely” risk) and rewrite one existing page per question to be citation-friendly: concise answer up top, proof points that can be lifted, and clean structure. Keep SEO fundamentals intact—citations still skew toward top-10 results.
The trade-off (say it out loud): This will reduce your reliance on easy-to-report engagement metrics before Finance is emotionally ready. Expect friction. The payoff is a measurement story that survives fewer clicks.
Forrester’s warning isn’t that marketing is doomed. It’s that the old accountability model can’t survive a world where buyers get answers without leaving the results page. In 2026, the teams that keep credibility won’t be the ones with the cleanest attribution dashboards. They’ll be the ones who can show, with receipts, that visibility in AI answers correlates with sales-stage movement—even when the click counters go quiet.