// 01 · what it does

Learn before committing.

discovery-plan maps the assumptions behind a problem, ranks them by cost-of-being-wrong, selects the right research method for each assumption, defines evidence thresholds before any research begins (what "validated" and "invalidated" look like in practice), and sequences the work with explicit dependencies. The output is not a solution — it's a map of what you need to learn before you're ready to commit to one.
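
To make that output concrete, here is a minimal sketch of the plan's shape as Python dataclasses. Every name in it (Assumption, Method, sequence, the field names) is a hypothetical illustration of the structure described above, not the skill's actual output format.

sketch.discovery-plan.py
from dataclasses import dataclass, field
from enum import Enum

class Method(Enum):
    # illustrative research methods; the skill may recommend others
    INTERVIEWS = "user interviews"
    SURVEY = "survey"
    FUNNEL_ANALYSIS = "funnel analysis"
    USABILITY_TEST = "usability test"

@dataclass
class Assumption:
    statement: str                  # the belief the decision depends on
    cost_of_being_wrong: int        # 1 (cheap to be wrong) .. 5 (expensive)
    method: Method                  # research method matched to this assumption
    validated_when: str             # evidence bar, written before research runs
    invalidated_when: str           # what disconfirmation would look like
    depends_on: list[str] = field(default_factory=list)  # statements that must resolve first

def sequence(assumptions: list[Assumption]) -> list[Assumption]:
    """Order research so dependencies resolve first, then the costliest risks."""
    done: set[str] = set()
    pending = sorted(assumptions, key=lambda a: -a.cost_of_being_wrong)
    ordered: list[Assumption] = []
    while pending:
        # pick the highest-cost assumption whose dependencies are already researched
        ready = next((a for a in pending if all(d in done for d in a.depends_on)), None)
        if ready is None:
            raise ValueError("circular dependency between assumptions")
        pending.remove(ready)
        done.add(ready.statement)
        ordered.append(ready)
    return ordered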

PMs skip discovery when they feel they already know the answer, or when the schedule doesn't leave room for it. discovery-plan is designed for exactly that situation: it makes the assumptions explicit, surfaces which ones are load-bearing, and sizes the research investment against the cost of being wrong. The pre-defined evidence threshold is the most important output — you decide what "good enough to proceed" means before you run the study, not after the data comes back and you're tempted to rationalize.
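
Using the hypothetical Assumption type sketched above, a pre-committed threshold for the checkout example below might read:

# hypothetical instance; the specific numbers are illustrative, not recommended bars
checkout_trust = Assumption(
    statement="Payment-step drop-off is driven by trust, not price",
    cost_of_being_wrong=5,  # a wrong answer means building the wrong fix
    method=Method.INTERVIEWS,
    validated_when="6+ of 10 interviewees raise trust concerns unprompted",
    invalidated_when="fewer than 3 of 10 mention trust at all",
)

Writing validated_when and invalidated_when as fixed fields is the point: the bar exists before the data does.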

Day 6 opens Week 2 because evidence comes before analysis. Before synthesizing feedback (Day 7), interpreting data (Day 8), or mapping the competitive landscape (Day 9), you need a framework for what you're trying to learn. The discovery plan defines the question; the rest of the week builds the answer.

// 02 · sample prompts

Two ways in.

prompt.basic.txt
/discovery-plan

I want to understand why users abandon our checkout flow. We have data showing drop-off at the payment step, but we don't know whether it's a trust issue, a price-sensitivity issue, or a UX problem. Help me plan what to learn before we start building solutions.

prompt.advanced.txt
/discovery-plan

Problem: guide activation drop-off. 38% of guides who register never publish an experience within 30 days of signup. We don't know if the root cause is setup friction (listing form is too complex), uncertainty about time-to-first-booking (guides don't know if anyone will book them), pricing anxiety (guides don't know what to charge), or something we haven't identified yet.

Decision to inform: whether and how to invest in listing setup guidance, pricing transparency tooling, or a milestone email sequence. These are the three candidate solutions the team is considering. I need to know which problem is actually load-bearing before committing engineering resources.

What's known:
- Exit survey verbatims point to listing setup friction and pricing uncertainty (anecdotal, n≈40 over 6 months)
- Zendesk tickets from guides who never published show "didn't know what to put" as a common theme
- No activation funnel data exists — we don't know where in the listing setup process guides abandon
- Jordan Lee's squad owns the listing object; discovery findings will need to be shared with them before any build decision

What's unknown but assumed:
- The relative weight of friction vs. uncertainty vs. pricing anxiety
- Whether guides who abandon early are qualitatively different from those who complete setup
- Whether the problem is worse in specific experience categories (e.g., guides listing high-risk activities like alpine climbing vs. surf lessons)

Timeline: decision needed before Q3 planning in 6 weeks. Research budget: 4 interviews/week, no paid research panels. Quantitative data requests through Fernando Lopez (Data Lead) take 2–3 days.

Please produce a discovery plan: map assumptions, rank by cost-of-being-wrong, select research methods with pre-defined evidence thresholds, and sequence the work.

// 03 · reflection

Three questions.

1. Which assumption did the skill rank as highest risk — and do you agree that being wrong about that one would be the most expensive mistake?
2. What evidence threshold did it set for "validated" — is that bar high enough to justify a build decision, or is it too easy to clear?
3. What research method did the skill recommend that you wouldn't have reached for — and what would it tell you that your default method wouldn't?