// 01 · what it does

Patterns, not action items.

retro-synthesis processes retrospective notes — one or many — and produces a pattern analysis rather than a summary. Each recurring pattern is named, mapped to the sprints where it appeared, and classified as improving, stable, or getting worse. Action items are tracked across retros: resolved items are acknowledged; items that appear in multiple sprints without resolution are flagged with a count. With a single retro, the skill surfaces themes. With four or more, it surfaces systemic problems.

Individual retros produce action items that get written down and forgotten. retro-synthesis with multiple retros produces patterns — which are different in kind. A problem that appeared in three sprints isn't something to fix with one more action item; it's something to understand as a structural failure. The skill forces that distinction, and the follow-through tracker makes it impossible to pretend that repeated inaction is just an oversight.
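The follow-through tracker can be sketched in a few lines. This is a hypothetical illustration, not the skill's implementation: `ActionItem` and `follow_through` are invented names, and the real skill works on free-form retro notes rather than pre-structured records.

```python
from dataclasses import dataclass, field


@dataclass
class ActionItem:
    """One action item, with the sprints where it appeared unresolved."""
    title: str
    owner: str
    sprints_open: list[int] = field(default_factory=list)
    resolved: bool = False


def follow_through(items: list[ActionItem]) -> list[str]:
    """Acknowledge resolved items; flag items open across multiple sprints with a count."""
    report = []
    for item in items:
        if item.resolved:
            report.append(f"resolved: {item.title} ({item.owner})")
        elif len(item.sprints_open) > 1:
            report.append(
                f"repeated x{len(item.sprints_open)}: {item.title} ({item.owner}), "
                f"open in sprints {item.sprints_open}"
            )
    return report


# The Sprint 4/5 example from the basic prompt, hand-structured:
items = [
    ActionItem("Enforce required reviewers", "Dev Lead", sprints_open=[4, 5]),
    ActionItem("Payment failure retry state", "Nina W.", sprints_open=[15], resolved=True),
]
print(follow_through(items))
```

The point of the count is social, not computational: "repeated x2" is a fact the team can't wave away the way a lone unchecked action item can be.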

Day 15 closes the strategy week because strategy is only as good as the process that executes it. The PM spent a week prioritizing, deciding, and aligning. Today they look back at what their team's process is actually producing and what it keeps failing to fix.

// 02 · sample prompts

Two ways in.

prompt.basic.txt
/retro-synthesis

Here are notes from two recent retrospectives. Find patterns and track action item follow-through:

Retro 1 — Sprint 4: Went well: deployment pipeline is much faster. Didn't go well: we merged a PR without a review and it caused a production incident. Action: enforce required reviewers in GitHub. Owner: Dev Lead.

Retro 2 — Sprint 5: Went well: no production incidents. Didn't go well: the required reviewers rule was set up but people are bypassing it for "small" changes. Action: team agreement on what counts as "small." Owner: Dev Lead. Status: not started (same action as last sprint).

prompt.advanced.txt
/retro-synthesis

Synthesize these five retrospectives. Find recurring patterns with trend classification (improving / stable / getting worse). Track action item follow-through across all five. Identify the 1–2 focus areas that most need attention next quarter.

---

RETRO: Instant Book Beta — Sprint 14
Went well: conversion lift was clear in early data; guide interviews surfaced concrete adoption blockers.
Didn't go well: analytics events were added after launch, so early funnel data is incomplete and we can't measure the first two weeks of the beta properly. Support team learned about the Instant Book rollout from customer tickets — not from us.
Action items:
- Define instrumentation requirements before beta launch, not after. Owner: Fernando Lopez (Data). Status: not started.
- Create internal comms checklist for rollout coordination with support. Owner: PM (me). Status: in progress.

---

RETRO: Android GA — Sprint 15
Went well: crash rate improved significantly; QA caught payment issues before public launch.
Didn't go well: scope changed mid-sprint when leadership asked for push notification parity with iOS. We had to drop a planned story to accommodate. Payment edge cases were more complex than estimated — the team underestimated by roughly 4 points.
Action items:
- Create launch-readiness checklist for mobile releases. Owner: Chris Okafor (EM). Status: in progress.
- Establish scope freeze protocol: no new scope in sprint after Day 3. Owner: Chris Okafor (EM). Status: not started.

---

RETRO: Expanded Guide Pro Analytics Dashboard — Sprint 16
Went well: Guide Pro beta users liked the listing view data; no significant bugs in QA.
Didn't go well: dashboard shipped with metrics but no recommendations, so guides don't know what action to take. PM and design disagreed late in the sprint on whether the analytics should be prescriptive — we should have resolved this in the PRD. Analytics events on the dashboard itself weren't instrumented at launch.
Action items:
- Define "actionable analytics" standard before adding new dashboard modules. Owner: Product. Status: not started.
- Analytics instrumentation checklist: add dashboard event requirements to Definition of Done. Owner: Fernando Lopez (Data). Status: not started (Fernando now has two open instrumentation action items).

---

RETRO: Instant Book Rollout Expansion — Sprint 17
Went well: guide education modal shipped on schedule; guide opt-in improved from 22% to 27% after modal launch. Data from the modal funnel was clean.
Didn't go well: IB-089 (opt-in funnel analytics events) fired inconsistently on Android — the Android implementation had a timing bug that caused some events to drop. We didn't catch this in QA because Android test coverage for analytics events is thin. Pattern: analytics gaps appearing in every launch.
Action items:
- IB-087 (Guide opt-in analytics) — reopened; still unresolved from Sprint 14. Owner: Nina W. Status: reopened.
- Add analytics event testing to Android QA checklist. Owner: Elena T. Status: not started.
- AND-142 (Payment failure retry state) — closed. Resolved by Nina W. in Sprint 17.

---

RETRO: Android GA Prep — Sprint 18
Went well: payment edge-case regression suite is green; push notification reliability at 98%; Android GA looks solid.
Didn't go well: scope of Android GA grew mid-sprint when Priya Anand (Marketing) requested a push notification campaign flow not in the original GA spec. We accommodated it but dropped a monitoring improvement ticket. The launch-readiness checklist (action item from Sprint 15, owner Chris) was created but wasn't actually used during GA planning — the team didn't know it existed.
Action items:
- Launch checklist adoption: the artifact exists but isn't operationalized. Third sprint this pattern appears in some form. Owner: Product. Status: not started.
- Scope intake process: mid-sprint additions from stakeholders need a triage step before acceptance. Owner: Chris Okafor. Status: not started (same as Sprint 15 scope freeze protocol — different framing, same underlying problem).
- Priya Anand comms process: establish a pre-sprint window for marketing to surface launch requirements. Owner: PM + Priya. Status: not started.
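The trend classification the advanced prompt asks for (improving / stable / getting worse) could be approximated by comparing how often a pattern shows up in later sprints versus earlier ones. A minimal sketch under that assumption; `classify_trend` is an invented name and the skill's actual heuristics are unspecified.

```python
def classify_trend(occurrences: list[bool]) -> str:
    """Naive trend read: pattern frequency in the later half vs the earlier half.

    occurrences[i] is True if the pattern appeared in sprint i of the window.
    """
    mid = len(occurrences) // 2
    early = sum(occurrences[:mid]) / max(mid, 1)
    late = sum(occurrences[mid:]) / max(len(occurrences) - mid, 1)
    if late > early:
        return "getting worse"
    if late < early:
        return "improving"
    return "stable"


# Analytics-gap pattern across Sprints 14-18 (per the retro notes above,
# it appeared in 14, 16, and 17):
print(classify_trend([True, False, True, True, False]))
```

A real synthesis would weigh severity as well as frequency, but even this crude split makes the question concrete: is the pattern concentrated in recent sprints, or fading?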

// 03 · reflection

Three questions.

  1. Which pattern did the skill classify as "getting worse" — and do you agree with that read based on the evidence across all five sprints?
  2. What unaddressed action item has the highest compounding cost if it goes unfixed another quarter — and why hasn't it been fixed?
  3. Where in your real team's retro process would this synthesis change what gets prioritized — and what's getting in the way of doing it this way now?