If your dashboards pull day-level Google Ads data older than ~3 years, they’re on a clock. Starting June 1, 2026, Google Ads will cap hourly/daily/weekly reporting to a 37-month rolling window—and anything outside it won’t be available in the UI or via APIs. That’s not a strategy problem. It’s an ops problem with a very predictable failure mode.

Meanwhile, Google will keep monthly/quarterly/annual (aggregated) reporting for 11 years, and reach and frequency metrics get 3 years. Same platform, different clocks. (Google Ads Developer Blog)

Here’s the uncomfortable part: most teams won’t notice until a board deck, a QBR, or a pipeline post-mortem is due—and the Looker/Power BI view starts throwing errors or, worse, backfills stop matching last quarter’s numbers.

Primary tactic: build a “retention-ready” archive pipeline now so your team owns the last 37+ months of granular performance data outside Google Ads.

What’s changing (and where it breaks first)

Google announced a tiered retention policy on May 1, 2026. The key line: hourly/daily/weekly data (for periods shorter than one month) will only be available for 37 months, starting June 1, 2026. (Google Ads Developer Blog; PPC Land)

That change doesn’t stay contained inside the Google Ads UI. PPC Land reports it applies across common access paths: Google Ads API, Google Ads Scripts, Google Analytics Data API, BigQuery Data Transfer Service, and the BI dashboards that query those sources.

When a query asks for a date range outside the allowed window, it can return a DateRangeError. (PPC Land) That’s the clean failure mode. The messier one is partial loads and missing granularity that nobody spots until a KPI definition gets challenged.
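
If a pipeline has to keep querying Google Ads directly, one defensive move is to clamp the requested range before the query ever leaves your code. A minimal sketch, assuming a rolling 37-month cutoff measured from today (the exact boundary semantics are Google's to define, so treat this as a conservative guard, not a spec):

```python
from datetime import date
from dateutil.relativedelta import relativedelta

RETENTION_MONTHS = 37  # rolling window per the announced policy

def clamp_to_retention_window(start: date, end: date, today: date | None = None) -> tuple[date, date]:
    """Clamp a requested reporting range to the rolling 37-month window
    so the query never triggers an out-of-range failure."""
    today = today or date.today()
    earliest = today - relativedelta(months=RETENTION_MONTHS)
    if end < earliest:
        # The whole range has aged out: this query belongs to the archive.
        raise ValueError(f"Range {start}..{end} predates cutoff {earliest}; query the archive instead.")
    return max(start, earliest), end

# Example: a multi-year lookback gets trimmed instead of failing outright.
print(clamp_to_retention_window(date(2022, 1, 1), date(2026, 7, 1), today=date(2026, 7, 1)))
```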

One more detail matters for architecture: DV360/CM360 APIs are reported as unaffected by this specific Google Ads retention change. (PPC Land) So if the paid media mix includes programmatic alongside Google Ads, the data retention problem may be uneven across channels—which is exactly how reporting arguments start.

Why this matters now: your “long-term” analysis just got shorter

Thirty-seven months sounds generous until the day-level work shows up: weekday seasonality, promo windows, budget pacing, creative fatigue, and those “why did qualified pipeline dip for two weeks?” investigations. Monthly aggregates can keep the storyline alive for 11 years, but they won’t tell you where inside the month things broke.

Search Engine Journal and PPC Land both make the practical point: if a team wants multi-year day-level analysis, it needs to warehouse/export that data before it ages out. Google has also recommended advertisers archive/export older data for long-term analysis needs. (Search Engine Journal)

There’s also a second-order impact that’s easy to miss: Google Cloud release notes indicate that BigQuery Data Transfer Service connectors for Google Ads (and related connectors) will stop populating data for backfill runs earlier than 37 months from the current date, effective June 1, 2026. That means “we’ll just backfill later” stops being a plan.
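
In practice, that argues for a guard on every manual backfill. A hedged sketch using the google-cloud-bigquery-datatransfer client, where the transfer config name is a placeholder and the trimming mirrors the reported connector behavior:

```python
from datetime import datetime, timezone
from dateutil.relativedelta import relativedelta
from google.cloud import bigquery_datatransfer_v1

# Placeholder: projects/{project}/locations/{location}/transferConfigs/{id}
TRANSFER_CONFIG = "projects/my-project/locations/us/transferConfigs/my-ads-config"
RETENTION_MONTHS = 37

def request_backfill(start: datetime, end: datetime) -> None:
    """Trigger a manual transfer run, trimmed to the portion of the range
    the connector can still populate after June 1, 2026."""
    cutoff = datetime.now(timezone.utc) - relativedelta(months=RETENTION_MONTHS)
    if end <= cutoff:
        raise ValueError(f"Range ends before cutoff {cutoff:%Y-%m-%d}; nothing left to backfill.")
    start = max(start, cutoff)  # an out-of-window run would load nothing, silently

    client = bigquery_datatransfer_v1.DataTransferServiceClient()
    client.start_manual_transfer_runs(
        request={
            "parent": TRANSFER_CONFIG,
            "requested_time_range": {"start_time": start, "end_time": end},
        }
    )
```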

And yes, people are already annoyed. In the Search Engine Journal write-up, a response quoted from X captures the sentiment bluntly:

“We have franchise data for 15 years. Utilize this to manage risk when making changes, comparing seasonal trends, testing, testing, and testing… Now you’re withholding the thing that matters?”

That frustration is real. But the fix isn’t arguing with the policy. It’s changing what the system depends on.

The one move: build a retention-ready archive (so your dashboards don’t lie)

Here’s the 5-minute version you can run this week: stop treating Google Ads as your long-term database. Keep using it for activation and short-horizon optimization. But for historical truth, move the source of record to your warehouse (or at least to controlled storage) on a schedule you own.

The hypothesis (make it falsifiable): If we export and store granular Google Ads reporting data daily/weekly into our own warehouse, then our YoY and seasonality analyses will stay consistent after June 1, 2026, because our reporting layer will no longer depend on Google’s rolling 37-month window.

Step 1 — Inventory: list every dashboard, scheduled extract, and script that queries Google Ads by day/week/hour. Include Looker Studio/Power BI models and any ad-hoc notebooks that the team treats as “the truth.” This is the boring part. It’s also where most breakage hides.
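
For the BigQuery slice of that inventory, the Data Transfer Service API can enumerate scheduled Google Ads transfers for you. A sketch, assuming the standard connector data source ids (verify against your own project; dashboards and ad-hoc scripts still need a manual pass):

```python
from google.cloud import bigquery_datatransfer_v1

# Assumption: "google_ads" is the connector's data source id; legacy
# setups may still use "adwords".
ADS_SOURCE_IDS = {"google_ads", "adwords"}

def list_ads_transfers(project: str, location: str = "us") -> list[str]:
    """Enumerate scheduled BigQuery transfers that pull from Google Ads,
    as a starting point for the extract inventory."""
    client = bigquery_datatransfer_v1.DataTransferServiceClient()
    parent = f"projects/{project}/locations/{location}"
    return [
        f"{cfg.display_name} ({cfg.name})"
        for cfg in client.list_transfer_configs(parent=parent)
        if cfg.data_source_id in ADS_SOURCE_IDS
    ]
```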

Step 2 — Decide what granularity you actually need: keep monthly in-platform for long-range exec trend views (11 years stays available). (Google Ads Developer Blog) Move daily/weekly into your own storage for the windows where you need diagnostic power. This is a trade-off: warehousing everything is expensive and noisy; warehousing nothing is reckless.
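
One way to keep that trade-off honest is to write the decision down as config the pipeline reads. A hypothetical plan; every name and number here is yours to set, not Google's:

```python
# Hypothetical retention plan read by the archive pipeline.
GRANULARITY_PLAN = {
    "hourly":  {"warehouse": False, "note": "pull ad hoc while inside the 37-month window"},
    "daily":   {"warehouse": True,  "keep_years": 7,
                "fields": ["segments.date", "campaign.id", "metrics.clicks", "metrics.cost_micros"]},
    "weekly":  {"warehouse": True,  "keep_years": 7},
    "monthly": {"warehouse": False, "note": "stays in-platform on the 11-year tier"},
}
```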

Step 3 — Export on a schedule and store it: Google explicitly recommends archiving/exporting older data ahead of retention cutoffs. (Search Engine Journal) Do it via the Google Ads API for automation, or manual downloads as a stopgap. If the org already uses BigQuery, be careful with “backfill later” assumptions, because connector behavior changes after June 1, 2026.
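
For the API route, a minimal sketch using the official google-ads Python client. The customer ID and credentials path are placeholders, and the GAQL field list is deliberately small:

```python
from google.ads.googleads.client import GoogleAdsClient

CUSTOMER_ID = "1234567890"  # placeholder, digits only (no dashes)

# GAQL: one row per campaign per day; extend the SELECT list as needed.
QUERY = """
    SELECT
      segments.date,
      campaign.id,
      campaign.name,
      metrics.impressions,
      metrics.clicks,
      metrics.cost_micros
    FROM campaign
    WHERE segments.date BETWEEN '2026-05-01' AND '2026-05-31'
"""

def export_daily_rows():
    """Stream daily campaign stats and yield plain dicts, ready to write
    to Parquet/CSV or load into the warehouse."""
    client = GoogleAdsClient.load_from_storage("google-ads.yaml")  # local credentials file
    service = client.get_service("GoogleAdsService")
    for batch in service.search_stream(customer_id=CUSTOMER_ID, query=QUERY):
        for row in batch.results:
            yield {
                "date": row.segments.date,
                "campaign_id": row.campaign.id,
                "campaign_name": row.campaign.name,
                "impressions": row.metrics.impressions,
                "clicks": row.metrics.clicks,
                "cost": row.metrics.cost_micros / 1_000_000,  # micros to currency units
            }
```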

Step 4 — Add failure-proofing: update queries/ETL to handle DateRangeError cleanly and alert on it. (PPC Land) A pipeline that fails loudly is annoying; a pipeline that fails quietly rewrites history.
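
A sketch of that failure-proofing around the export above. PPC Land names the error DateRangeError, but the exact enum may differ, so match on the message defensively; send_alert is a hypothetical hook for your own alerting:

```python
from google.ads.googleads.errors import GoogleAdsException

def send_alert(message: str) -> None:
    """Hypothetical hook; wire this to Slack, PagerDuty, or email."""
    print(f"ALERT: {message}")

def run_export_safely(export_fn):
    """Fail loudly on retention-window errors instead of committing a
    partial table that quietly rewrites history."""
    try:
        return list(export_fn())
    except GoogleAdsException as ex:
        for error in ex.failure.errors:
            # Match defensively; the error name for out-of-window
            # queries may not be exactly "DateRangeError" in every path.
            if "date" in error.message.lower() and "range" in error.message.lower():
                send_alert(f"Retention-window failure: {error.message}")
        raise  # never swallow: a loud failure beats a silent partial load
```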

Step 5 — Redesign the executive view: keep long-range reporting at monthly/quarterly to reduce noise and avoid false precision. Then let the operator layer drill into your archived daily/weekly table when something looks off. Two layers. Two jobs.
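
The operator/executive split can be as simple as one rollup job over the archived daily table. A sketch with pandas, reusing the column names from the export sketch above:

```python
import pandas as pd

def build_exec_layer(daily: pd.DataFrame) -> pd.DataFrame:
    """Roll the archived daily table up to months for the executive view;
    the operator layer keeps querying the daily table directly."""
    daily = daily.assign(month=pd.to_datetime(daily["date"]).dt.to_period("M"))
    return (
        daily.groupby(["month", "campaign_id"], as_index=False)
        [["impressions", "clicks", "cost"]]
        .sum()
    )
```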

Run it this week (setup, owners, metrics, guardrails)

Setup: one core account (or MCC) first; pick the 2–3 dashboards that leadership uses plus the one the paid team lives in. Tools can be whatever the RevOps stack supports (API pull + warehouse, or interim exports), but the workflow must be scheduled and owned.

Owners: Demand Gen (requirements + validation), Marketing Ops/RevOps (pipeline + definitions), Data/Analytics (storage + monitoring). Nobody has to love it. Someone has to own it.

Budget range: keep it small at first—this is mostly engineering time, not media spend. The cost shows up in data storage and maintenance, so start with only the fields and segments that matter for decisions.

Timeline: 5 business days for inventory + a first export job; 2–3 weeks to harden monitoring and refactor dashboards to read from the warehouse for historical views.

Success = the same YoY chart (daily or weekly) matches within an agreed tolerance when run from the archive vs. run from in-platform for overlapping periods.
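
A sketch of that success check in pandas; the 2% tolerance is an example, not a recommendation, so agree on yours before migrating anything:

```python
import pandas as pd

TOLERANCE = 0.02  # example threshold

def matches_within_tolerance(archive: pd.DataFrame, platform: pd.DataFrame) -> bool:
    """Compare daily cost totals from the archive vs. an in-platform pull
    over an overlapping period; both frames need 'date' and 'cost' columns."""
    merged = archive.merge(platform, on="date", suffixes=("_archive", "_platform"))
    rel_diff = (
        (merged["cost_archive"] - merged["cost_platform"]).abs()
        / merged["cost_platform"].clip(lower=1e-9)
    )
    return bool((rel_diff <= TOLERANCE).all())
```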

Guardrails = (1) data freshness lag stays under 24 hours for daily tables, (2) schema changes trigger alerts, not silent nulls.
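
Guardrail (1) is a one-query check. A sketch against a hypothetical archive table in BigQuery:

```python
from datetime import date, timedelta
from google.cloud import bigquery

TABLE = "my-project.ads_archive.campaign_daily"  # hypothetical table name

def freshness_ok(max_lag: timedelta = timedelta(hours=24)) -> bool:
    """Guardrail (1): the newest row in the daily archive must be no more
    than 24 hours behind today."""
    client = bigquery.Client()
    rows = client.query(f"SELECT MAX(date) AS latest FROM `{TABLE}`").result()
    latest = next(iter(rows)).latest  # a DATE column comes back as datetime.date
    return latest is not None and (date.today() - latest) <= max_lag
```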

Stop-loss = if the archive pipeline has more than 2 failed loads in a week or any “silent partial” loads, pause dashboard migration until monitoring is fixed.
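
The stop-loss is just arithmetic over a load log. A sketch, assuming a hypothetical log schema with run_date, status, rows_loaded, and rows_expected columns:

```python
import pandas as pd

def should_pause_migration(load_log: pd.DataFrame) -> bool:
    """Stop-loss: more than 2 failed loads in the trailing 7 days, or any
    'ok' run that loaded fewer rows than expected (a silent partial)."""
    week_ago = pd.Timestamp.now() - pd.Timedelta(days=7)
    recent = load_log[pd.to_datetime(load_log["run_date"]) >= week_ago]
    failed = int((recent["status"] == "failed").sum())
    silent_partial = bool(
        ((recent["status"] == "ok") & (recent["rows_loaded"] < recent["rows_expected"])).any()
    )
    return failed > 2 or silent_partial
```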

One important “when this is wrong” clause: if the motion genuinely doesn’t need multi-year day-level analysis—short sales cycles, constant positioning changes, minimal seasonality—then a 37-month window may be fine. PPC Land and Search Engine Journal both hint at this nuance. The bigger risk is still operational: a report you rely on shouldn’t break because a platform changed retention rules.

Google’s new policy doesn’t remove history. It changes who’s responsible for it. After June 1, 2026, teams that treated the ad platform as a database will lose detail by default; teams that built an archive will keep their baseline—and keep arguing about the business, not the dashboard.