
April 2026

The myth of the multi-month discovery

Most discovery phases waste 80% of their time on artefacts no one decides from. Here's what 5 days of validation actually looks like — and why agencies are afraid to scope it.

Design Sprints · Strategy · Product

Engineering software company. They'd been building for two years. Real money in. Real team. Real roadmap. We ran one design thinking workshop. Five days. A user walks into the session with slides ready. UI mockups. Specific feature requests. "If you built this, it would save us a week per month."

They'd been trying to tell the company for years. Nobody was listening.

That's the data point that should bother anyone who sells, scopes, or signs off on a "discovery phase" longer than two weeks. The customers usually know. They've been telling someone. The constraint isn't research — it's the format and forum in which research gets converted into a decision.

I'm going to make a contrarian claim in this article and back it with three concrete engagements. Here it is:

Most multi-month discovery phases are not about finding answers. They are about producing artefacts that survive a procurement process. They generate confidence in the buyer that they spent enough money. They rarely produce a decision the team executes on.

Five days of structured validation, run correctly, outperforms three months of conventional discovery for the actual job to be done: deciding what to build next with enough conviction to actually build it.

This is going to sound like consultant self-promotion. It isn't — I've sold multi-month discoveries plenty of times when the budget required it. I'm telling you what the evidence I've collected over 26 years actually says.

What the data looks like

Three engagements I can talk about publicly:

Parliament of Finland. Brief: a clear directional answer for a critical service decision. Conventional scope: 8–12 weeks of stakeholder interviews, surveys, journey mapping. We delivered the validated direction in 24 hours. Not the polish — the decision-grade artefact. The system worked because the question was scoped before the clock started.

Admicom. Real ERP company, real strategic question. Conventional scope: a quarter-long discovery, probably more. We ran a 5-day sprint: validated direction, prototyped, tested with five real users, decision memo. The quote we still use: a user arrived with the answer in their head, slides ready, mockups drawn — they'd been waiting years to be heard. The 5 days wasn't compressing research; it was building the forum where the research could surface.

Engineering company. Two years building before we ran the workshop. After: the user with slides. The lesson: time spent in conventional discovery doesn't add up to clarity. The org didn't lack data. It lacked a structured place for a customer to say what they already knew.

These three aren't outliers. They're the rule when sprints are run correctly. The outliers are the multi-month discoveries that do produce a decision — and when I dig into those, the decision usually got made in week one or two of the engagement, and the rest of the time was spent generating artefacts to defend it internally.

Why discovery phases stretch

It's not because the work is hard. It's because the incentive structure pulls toward length:

  • The agency's incentive. Longer scope = larger contract = better margin. Discovery phases are easier to sell at six figures than five-figure sprints, even when the sprint is what the client actually needs.
  • The client's procurement. Procurement teams trust duration as a proxy for rigor. A 12-week deliverable looks like "real work." A 5-day deliverable looks like a workshop with snacks.
  • The internal sponsor's risk profile. A long discovery is easier to defend internally if the recommendation is unpopular. "We did three months of research" beats "we ran a sprint."
  • The fear of the answer. Most discoveries that take three months take three months because the team doesn't want to commit to a direction yet. The discovery becomes a decision-deferral mechanism.

None of these incentives are about the actual quality of the decision. They're about the politics around it. Recognize that, and the multi-month discovery starts to look like what it usually is: an expensive way to delay an inevitable answer.

What 5 days actually looks like

A real 5-day validation sprint, run correctly, has roughly this shape:

  • Day 1: Frame the bet. Translate the strategic question into a single user-facing question worth testing. Lock the decision criteria. Pre-write the post-sprint memo so everyone knows what evidence they're hunting for.
  • Day 2: Map what's known. Two-hour expert interviews with the people inside the company who already know the answer (and have usually been ignored). Surface every assumption that, if wrong, kills the bet.
  • Day 3: Build. AI-augmented prototype production. Smallest believable artefact that tests the top three assumptions. Recruit five real users for tomorrow.
  • Day 4: Test. Five sessions, 45 minutes each. Same prototype, different brains. Capture verbatim quotes against assumptions, not against the UI.
  • Day 5: Decide. Fill in the pre-written memo with evidence. One-page recommendation: build, don't build, or test further. Hand off to engineering with prototype + recordings + memo.

Five days. End-to-end. Decision-grade.

The skeptical read on this is "but where's the rigor?" The honest read is: the rigor is in who you put in the room and what question you scoped before the clock started. Most discoveries fail because they start without a sharp question and try to find one in week six.

The detailed version — agendas, prompts, decision-artefact templates, agency framing language — lives in the playbook I send out. Free.

When the long discovery is right

I want to be honest about where this generalizes and where it doesn't. The 5-day version works when:

  • The strategic question can be expressed as a user-facing prototype.
  • You can recruit five users from the actual target segment in a week.
  • The decision-makers are willing to be in the room on Day 5.

It does not work — and you genuinely need a longer engagement — when:

  • You're entering a regulated industry with compliance constraints that need legal cycles.
  • The "user" is an enterprise procurement committee with a 90-day cycle.
  • The work is service redesign across multiple operational teams (this is what the transformation engagement exists for).
  • The decision requires quantitative validation at scale (in which case you don't need discovery — you need an experimentation platform).

If you're outside those edge cases, the 5-day version probably outperforms whatever you've been quoted.

The agency framing problem

If you run an agency and you've read this far thinking "fine, but I can't sell a 5-day sprint at the margin we need" — that's the right honest objection. The answer most of the agencies I work with land on is: don't sell the sprint. Sell the year.

A 5-day sprint that produces a real decision opens the door to the implementation work that follows. The economics of "sprint-then-build" beat "discovery-then-maybe-build": fewer dead engagements, faster client value, better case studies, higher renewal rate. The first sprint is often run at near-cost; the engagement that follows pays for it ten times over.

This is what I help agency partners scope when we work together. The methodology is portable. The framing language is the harder part.

What to do Monday

  • If you're a buyer: any agency that quotes a 12-week discovery for a question that fits on a sticky note is selling you their cost structure, not your answer. Push back. Ask what the 5-day version of the same question looks like.
  • If you're an agency: scope a 5-day sprint as the entry point on your next pitch. You don't have to drop the long discovery offering — just lead with the version that produces the fastest decision.
  • If you're an in-house team: stop calling it "discovery." Call it "validation." That single word change reframes the budget conversation, the timeline expectations, and the success criteria.

26 years of pattern recognition tells me one thing about this work: the customers usually know. The job is to build a structured place for them to say what they've been waiting to say. That place takes a week, not a quarter.


If you want the templates, agendas, and decision-memo formats I use, the playbook is free. If you have a specific question and want to see whether it fits the 5-day shape, the 2-minute quiz routes you to the right starting point.
