If your team is “using AI” but the output still feels like random acts of marketing, the constraint isn’t prompting. It’s context.
That’s why Emily Kramer wrote that MKT1 ran its first-ever Buildathon in Claude Code on Friday—and that 1,400+ marketers RSVPed to build a marketing-strategy skill inside Claude. Not a content calendar. Not “50 LinkedIn hooks.” A strategy layer that gets referenced every time the team asks the model to do work.
That RSVP count matters because it shows where the pain sits in 2026: marketers aren't struggling to generate words. They're struggling to generate specific work that compounds into qualified pipeline.
The adoption numbers are high. The integration gap is the story.
Most marketing orgs aren’t debating whether to use AI anymore. The research brief puts it bluntly: 94% of marketers have adopted AI in workflows, and 88% use AI for daily tasks (Search results synthesis, AI Adoption and Implementation Rates). That’s not early adoption. That’s default behavior.
And yet, plenty of teams are still stuck in the “experiment” zone: 43% are experimenting with AI while only 32% say they’ve fully implemented it (Search results synthesis, AI Adoption and Implementation Rates). Same tools. Very different outcomes.
Here’s the tension worth sitting with: if almost everyone is using AI daily, why does so much AI-assisted marketing still look interchangeable?
Because most usage clusters around tasks where generative models are already good at producing volume. The brief lists common use cases like content creation (50%), content optimization (51%), brainstorming (45%), and automating repetitive tasks (43%) (Search results synthesis, Top AI Use Cases). Another dataset in the same brief says marketers report AI most impacts content creation/copywriting (64.5%), plus SEO/content optimization and brainstorming (both 43.9%) (Search results synthesis, Task table: AI Impact).
All useful. None of it guarantees coherent strategy.
That’s the pattern interrupt in Kramer’s point: “AI adoption” can easily become “content acceleration.” And content acceleration without constraints is just noise—faster.
“Random acts of marketing” is what happens when AI has no ICP, no narrative, no goals.
Kramer’s framing is unusually operational. In the transcript attached to her post, she calls out “random acts of marketing” as “checking the boxes” and “doing the cool thing you saw [a] competitor” that has nothing to do with your company. Then she connects that failure mode directly to AI: “as we use AI more and more AI gives us random generic outputs not specific to our company.”
But the fix she proposes isn’t “write better prompts.” It’s giving Claude context about “your specific company and your specific marketing strategy, including things like your ICP’s and your storylines, your positioning and your goals.”
In other words: the model can’t respect constraints it can’t see.
This is also where expert consensus in the research brief lines up. Multiple sources summarized there argue that marketing strategy is what makes AI-generated content effective; without strategy and oversight, AI tends to produce generic output that doesn’t differentiate brands or drive ROI (Search results synthesis, expert opinions on the importance of marketing strategy in AI content generation).
So the Buildathon’s premise is less “teach people Claude Code” and more “force the uncomfortable work upstream.” Define the ICP. Decide the storylines. Write down positioning in plain language. Clarify goals. Then encode that into a reusable skill so the next 200 requests aren’t reinventing the wheel.
And yes, that’s slower than asking for 30 ad headlines. It’s also where incrementality starts to become possible, because you can finally hold the inputs steady long enough to measure lift.
One tactic: build a “context layer” skill before you let AI touch production work
Here’s the short version you can run this week: treat your AI strategy layer like a product spec. Build it once, then iterate as you learn. Kramer describes it as “a context layer… in Claude to make every other output that you do better,” instead of “a million ‘strategy’ docs no one ever looks at again.”
Primary tactic: Create a single, versioned “Marketing Strategy Skill” in Claude that every marketer uses as the default starting point for briefs, campaigns, and content.
The hypothesis (make it falsifiable): If we require all AI-assisted marketing work to reference a shared strategy skill (ICP, positioning, narratives, goals), then revision cycles will drop and conversion rates on priority motions will improve because the model will stop generating off-strategy variants that create rework and inconsistent messaging.
When this is wrong: If the company doesn’t have real positioning clarity—or Sales and Product disagree on ICP—encoding “strategy” into a skill will just formalize the confusion. The output will be consistent. It will also be consistently wrong.
Run it this week (setup, launch, readout, next test)
Setup (90 minutes): Assign one owner from Demand Gen and one from PMM. Their job isn’t to write a manifesto. It’s to produce a usable constraint set. Keep it tight: ICP definition, disqualifiers, positioning, 3 narrative pillars, proof points the team is allowed to use, and the current quarter’s pipeline goal (directional is fine).
Tools: Claude (and Claude Code if the team is already working that way). A shared doc for version history. Nothing else required.
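To make “usable constraint set” concrete, here is a minimal sketch of what scaffolding that skill could look like, assuming Claude skills follow Anthropic’s Agent Skills convention of a folder holding a SKILL.md with YAML frontmatter. Every name and field value below is a placeholder for the owners to fill in, not MKT1’s actual skill.

```python
# Sketch only: scaffold a versioned "Marketing Strategy Skill".
# Assumes the Agent Skills convention (a folder containing SKILL.md
# with YAML frontmatter); every value below is a placeholder.
from pathlib import Path
from textwrap import dedent

SKILL_DIR = Path("marketing-strategy")  # hypothetical folder name

SKILL_MD = dedent("""\
    ---
    name: marketing-strategy
    description: Shared strategy context (ICP, positioning, narratives, goals). Reference before drafting any marketing asset.
    ---

    # Marketing Strategy Skill (v1)

    ## ICP
    - Who we sell to: <segment, company size, buyer roles>
    - Disqualifiers: <who we explicitly do NOT target>

    ## Positioning
    <One plain-language paragraph. No adjectives without proof.>

    ## Narrative pillars
    1. <pillar one>
    2. <pillar two>
    3. <pillar three>

    ## Approved proof points
    - <customer result, metric, or quote the team may cite>

    ## Current-quarter goal
    <directional pipeline goal>
    """)


def scaffold() -> Path:
    """Write the skeleton; the Demand Gen and PMM owners fill in placeholders."""
    SKILL_DIR.mkdir(exist_ok=True)
    path = SKILL_DIR / "SKILL.md"
    path.write_text(SKILL_MD, encoding="utf-8")
    return path


if __name__ == "__main__":
    print(f"Scaffolded {scaffold()}")
```

Checking the folder into the repo the team already works in is what makes “versioned” mean git history rather than filename suffixes.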
Launch (this week): Pick one workflow where AI is already used heavily—like ad concepting, landing page drafts, outbound email variants, or webinar abstracts. Make a simple guardrail: no AI output ships unless the prompt references the strategy skill.
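If briefs and prompts live as files somewhere shared, the guardrail can even be checked mechanically. Here is a sketch under that assumption; the skill name, folder, and file pattern are all hypothetical:

```python
# Sketch only: fail the ship checklist if a brief never references the
# strategy skill. The skill name, folder, and glob are hypothetical.
import sys
from pathlib import Path

SKILL_NAME = "marketing-strategy"  # hypothetical skill name
BRIEFS_DIR = Path("briefs")        # hypothetical folder of briefs/prompts


def missing_references() -> list[Path]:
    """Return every brief file that never mentions the skill by name."""
    return [
        p for p in BRIEFS_DIR.glob("**/*.md")
        if SKILL_NAME not in p.read_text(encoding="utf-8")
    ]


if __name__ == "__main__":
    offenders = missing_references()
    for p in offenders:
        print(f"FAIL: {p} never references '{SKILL_NAME}'")
    sys.exit(1 if offenders else 0)
```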
What to measure, and what not to over-interpret (a minimal tracking sketch follows this list):
- Primary metric: time-to-usable-draft (from request to approved v1). This captures rework.
- Secondary metrics: (1) internal revision count per asset, (2) message consistency score in peer review (simple 1–5 rubric).
- Guardrails: watch volume and speed separately. Faster drafts that reduce quality are a trap.
- Stop-loss threshold: if revision cycles increase for two consecutive sprints, pause and audit the skill. The constraint set is probably missing disqualifiers or clear proof points.
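A hand-kept log is enough for this; no analytics stack required. Below is a minimal sketch assuming one CSV row per asset; the file path, column names, and yes/no encoding are invented for illustration. Splitting on whether the skill was referenced also sets up the with/without comparison in the readout step.

```python
# Sketch only: compute the week's readout from a hand-kept CSV log.
# Assumed columns (all invented for illustration):
#   asset_id, with_skill (yes/no), requested_at, approved_at,
#   revisions, consistency_1to5
import csv
from datetime import datetime
from statistics import mean

TIME_FMT = "%Y-%m-%d %H:%M"  # assumed timestamp format in the log


def load(path: str = "ai_draft_log.csv") -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))


def hours_to_usable_draft(row: dict) -> float:
    """Primary metric: request to approved v1, in hours."""
    start = datetime.strptime(row["requested_at"], TIME_FMT)
    done = datetime.strptime(row["approved_at"], TIME_FMT)
    return (done - start).total_seconds() / 3600


def readout(rows: list[dict]) -> None:
    """Print mean metrics split by whether the skill was referenced."""
    for label in ("yes", "no"):
        group = [r for r in rows if r["with_skill"] == label]
        if not group:
            continue
        print(
            f"with_skill={label} (n={len(group)}): "
            f"mean draft time {mean(hours_to_usable_draft(r) for r in group):.1f}h, "
            f"mean revisions {mean(int(r['revisions']) for r in group):.1f}, "
            f"mean consistency {mean(float(r['consistency_1to5']) for r in group):.1f}/5"
        )


if __name__ == "__main__":
    readout(load())
```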
Readout (end of week): Pull 10 samples. Compare “with skill” vs “without skill” outputs. Don’t claim causality from a dashboard. Just look for the operational signal: did the team spend less time fixing basic strategic misalignment?
Next test (week two): Add one missing piece of context that tends to cause churn—like pricing boundaries, implementation constraints, or the “why now” narrative—and see if revision count drops again.
The real outcome isn’t better copy. It’s fewer unforced errors.
The Buildathon format is telling, too. In the research brief, “buildathons” show up mostly as platform-led programs (Replit, Airtable, Babson sponsorships) where the emphasis is building something concrete with milestones—not a one-off hack (Search results synthesis, buildathons query results). Kramer’s use of that format for marketing strategy is a quiet reframing: treat strategy as a build artifact, not a slide deck.
And the comments under her post underline the same theme from different angles. Anish Kapoor calls it “the single biggest failure point in AI adoption today: treating LLMs as content mills rather than strategy partners,” pointing to “context engineering” as the missing foundational work. Jon Itkin adds that skill-building “forces us to look at the logic underlying our work and see if it actually works/scales.” Those aren’t hype lines. They’re operator complaints.
So the open loop from the lede closes here: 1,400 RSVPs isn’t just excitement about Claude Code. It’s a referendum on what’s broken in most AI marketing workflows.
Speed was never the hard part. The hard part is deciding what “good” looks like—then making it hard for the team (and the model) to drift. A shared context layer doesn’t make marketing easy. It makes it less random.