// 01 · what it is

A system, not a chatbot.

The PM Agent Kit encodes the accumulated judgment of a senior product manager as a set of portable, invocable skills that run in Claude Code. It’s not a collection of prompts or templates — it’s a system: each skill reads your company context, applies PM judgment from a shared reference library, and produces structured, reviewable artifacts.

The kit is designed to travel. Everything in .claude/skills/ and references/ is company-agnostic — it encodes how a strong PM thinks, not what any specific company does. Company-specific context lives in company/ and gets rebuilt at each new role. Your accumulated work lives in knowledge/ and follows you too.

The workbook is the structured practice companion. Each day introduces one skill against a consistent fictional product — Terrain — so you can see exactly how the kit performs at full capacity before applying it to your real work.

// 02 · the skills

Eighteen invocable skills. One for every PM job.

Week 01 · Writing & thinking
01 /prd-draft · Turn any rough input into a complete, structured PRD
02 /doc-review · Evaluate a PM document against expert quality criteria
03 /generate-tasks · Break a spec into user stories with acceptance criteria
04 /sprint-plan · Assess backlog health and optionally draft a sprint plan
05 /status-update · Synthesize delivery health into a stakeholder-ready update

Week 02 · Evidence & discovery
06 /discovery-plan · Map assumptions by risk and plan the cheapest credible validation
07 /user-feedback · Cluster customer feedback into themes, severity, and implications
08 /data-analysis · Interpret product metrics in context and generate visualizations
09 /competitive-intel · Read the competitive landscape and translate signals into implications
10 /business-case · Build the investment argument: problem, impact, cost, risks

Week 03 · Strategy & alignment
11 /roadmap-prioritization · Sequence initiatives and produce a defensible rationale
12 /decision-log · Capture decisions with context and reasoning that stays findable
13 /alignment-memo · Codify team operating norms and escalation paths
14 /meeting-brief · Prepare for a specific meeting with context, agenda, and decisions
15 /retro-synthesis · Find patterns in retrospective feedback before they become permanent

Week 04 · Comms & integration
16 /launch-checklist · Generate a complete checklist covering all launch readiness dimensions
17 /one-pager · Compress a complex initiative into a single-page executive summary
18 /presentation-deck · Build a narrative deck and export to HTML, PDF, or PPTX
// 03 · reference library

26 files of encoded PM judgment.

These aren’t documentation — they’re the judgment that runs underneath every skill. pm-philosophy.md is a good example: it encodes the reasoning behind every quality criterion in the system, and skills across all four weeks load it to understand why a standard exists, not just what it is. The same file shapes how a PRD gets drafted, how a retro gets synthesized, and how a business case gets structured.
File · What it encodes

PM Judgment
pm-philosophy.md · The reasoning behind every quality criterion in the system
pm-smell-test.md · Red flags that signal an artifact isn’t ready to move forward
decision-frameworks.md · How to structure decisions that stay legible at six months
prioritization-judgment.md · How to sequence competing initiatives with transparent reasoning
pushback-and-negotiation.md · How to protect scope, navigate disagreement, and escalate when needed

Quality Criteria
quality-criteria-prd.md · What a strong PRD must satisfy across problem, solution, and data dimensions
quality-criteria-tech-spec.md · What a good engineering spec must surface for PM review
quality-criteria-ticket.md · What a good user story or ticket must contain
quality-criteria-project-brief.md · Quality bar for project briefs calibrated to maturity stage
quality-criteria-general-document.md · Six universal quality dimensions for any PM document
acceptance-criteria.md · AC standards written for AI coding agent implementers

Communication & Structure
communication-quality.md · What makes PM communication worth reading across all formats
narrative-structure.md · How to build a presentation arc where each slide earns the next
story-structure.md · How to scope and organize stories for AI coding agent implementers
audience-registers.md · Behavioral profiles for adjusting depth and tone by audience type
slide-design.md · How to compose slides for hierarchy, density, and visual clarity
visualization-standards.md · How to select chart types and render findings that carry the insight themselves
branding-guidelines.md · What brand consistency looks like in presentations and how to apply it

Research & Analysis
discovery-methods.md · How to identify assumptions, rank by risk, and plan credible validation
user-feedback-analysis.md · How to synthesize feedback into what customers need, not just what they said
data-interpretation.md · How to turn metrics into product decisions, not reports
competitive-analysis.md · How to translate market signals into product implications
business-case-standards.md · How to argue whether to invest in an initiative and at what level
launch-readiness.md · What a complete launch covers across all readiness dimensions
sprint-planning.md · What makes a sprint plan and backlog ready for the team

Output Format
agent-readable-output.md · Shared vocabulary for structured output blocks across all skills
// 04 · knowledge

Artifacts accumulate. The kit gets richer over time.

Every skill that produces a document saves its output to a typed folder inside knowledge/. The folder isn’t just storage — it’s context. When /doc-review evaluates a PRD, it can read prior PRDs in knowledge/prds/ to understand your norms. When /sprint-plan drafts a new plan, it can read prior plans to calibrate velocity assumptions. The kit learns from its own output.

Directory · Written by
knowledge/prds/ · /prd-draft
knowledge/tasks/ · /generate-tasks
knowledge/sprint-plans/ · /sprint-plan (Draft mode)
knowledge/status-updates/ · /status-update (Draft mode)
knowledge/decisions/ · /decision-log
knowledge/meeting-briefs/ · /meeting-brief
knowledge/memos/ · /alignment-memo
knowledge/discovery-plans/ · /discovery-plan
knowledge/user-feedback/ · /user-feedback
knowledge/data-analyses/ · /data-analysis
knowledge/competition/ · /competitive-intel
knowledge/business-cases/ · /business-case
knowledge/roadmaps/ · /roadmap-prioritization
knowledge/retros/ · /retro-synthesis
knowledge/launch-checklists/ · /launch-checklist
knowledge/one-pagers/ · /one-pager
knowledge/presentations/ · /presentation-deck
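As a rough sketch, the convention above amounts to a simple lookup from skill command to typed folder. The mapping entries come from the table; the `save_artifact` helper and its behavior are hypothetical, shown only to make the convention concrete:

```python
from pathlib import Path

# Skill command -> knowledge/ subdirectory, per the table above (abridged).
KNOWLEDGE_DIRS = {
    "/prd-draft": "knowledge/prds",
    "/generate-tasks": "knowledge/tasks",
    "/sprint-plan": "knowledge/sprint-plans",
    "/decision-log": "knowledge/decisions",
    # ...the remaining skills follow the same pattern
}

def save_artifact(skill: str, filename: str, text: str) -> Path:
    """Hypothetical helper: write a skill's output into its typed folder."""
    folder = Path(KNOWLEDGE_DIRS[skill])
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / filename
    path.write_text(text, encoding="utf-8")
    return path
```

Because every skill writes to a predictable location, a later skill can list that folder and read prior artifacts as context without any extra bookkeeping.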
// 05 · company context

Portable by design.

The company/ directory holds everything that changes when you change jobs: product facts, team structure, sprint process, tool configurations. Skills read these files to produce output that’s specific to your context rather than generic.

company/facts/ · Product areas, customers, team structure, glossary
company/norms/ · Sprint process, decision-making, communication patterns
company/interfaces/ · Tool configs: Jira, Slack, data sources, branding

When context files are missing or still contain placeholder text, each skill applies a declared degradation rule — either proceeding with a visible caveat or stopping to tell you what it needs. Output quality scales directly with context depth. The workbook’s Terrain files give every skill real, substantive context from Day 0.

// 06 · how skills work

Every skill follows the same execution model.

step.01 · load context

Read company context and references

The skill reads any files it declared as context-required or context-optional. If a required file is missing or still contains placeholder text, the skill applies its degradation rule — proceeding with a caveat or stopping to explain what’s needed. It then loads the relevant files from references/ as its judgment criteria.
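A minimal sketch of that load-and-degrade step, assuming placeholder files can be detected by a marker string. The file-handling logic follows this section; the marker value and function signature are illustrative, not the kit's actual implementation:

```python
from pathlib import Path

PLACEHOLDER_MARKER = "TODO"  # illustrative: how a template might flag unfilled text

def load_context(required: list[str], optional: list[str]) -> tuple[dict, list[str]]:
    """Read declared context files; collect caveats for degraded required ones."""
    context, caveats = {}, []
    for name in required + optional:
        path = Path(name)
        if not path.exists():
            if name in required:
                caveats.append(f"missing required context: {name}")
            continue
        text = path.read_text(encoding="utf-8")
        if name in required and PLACEHOLDER_MARKER in text:
            caveats.append(f"placeholder text in: {name}")
            continue
        context[name] = text
    return context, caveats
```

Whether the skill then proceeds with the caveats visible or stops and asks for what it needs is its declared degradation rule.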

step.02 · run intake

Assess input and adapt

Before executing, the skill assesses how much signal your input provides.
Rich input (problem, audience, and constraints are all clear) · restates its understanding and proceeds.
Moderate input (some gaps) · asks up to three targeted questions with specific options.
Thin input (a sentence or a vague ask) · presents a structured interpretation of what it’s inferring and asks you to confirm before continuing.
Across all tiers, the cap is four questions.
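The triage above could be sketched as follows. The three tiers and question caps come from this section; the boolean signal heuristic is invented purely for illustration:

```python
def assess_input(has_problem: bool, has_audience: bool, has_constraints: bool) -> dict:
    """Map input signal to the skill's intake behavior (illustrative heuristic)."""
    signals = sum([has_problem, has_audience, has_constraints])
    if signals == 3:
        # Rich input: everything needed is present, no questions asked.
        return {"tier": "rich", "action": "restate and proceed", "max_questions": 0}
    if signals >= 1:
        # Moderate input: fill specific gaps with targeted, optioned questions.
        return {"tier": "moderate", "action": "ask targeted questions", "max_questions": 3}
    # Thin input: infer a structured interpretation and confirm it first.
    return {"tier": "thin", "action": "present interpretation and confirm", "max_questions": 4}
```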

step.03 · execute

Apply reference judgment and produce the artifact

The skill executes against your input using the loaded reference files as its standard. A /prd-draft generates toward quality-criteria-prd.md. A /doc-review evaluates against the same file. The same judgment that catches problems in a review is what prevents them during generation.
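One way to picture that symmetry: a single criteria file serves as both the generation target and the review rubric. Only the file name comes from this document; the prompt shapes and function below are invented to illustrate the idea:

```python
from pathlib import Path

def build_prompt(mode: str, criteria_path: str, material: str) -> str:
    """Build a drafting or reviewing prompt from the same criteria file (sketch)."""
    criteria = Path(criteria_path).read_text(encoding="utf-8")
    if mode == "draft":
        # Generation: the criteria are the target to write toward.
        return f"Write a PRD that satisfies every criterion below.\n\n{criteria}\n\nInput:\n{material}"
    if mode == "review":
        # Evaluation: the same criteria become the rubric to judge against.
        return f"Evaluate this PRD against every criterion below.\n\n{criteria}\n\nDocument:\n{material}"
    raise ValueError(f"unknown mode: {mode}")
```

Because both prompts load the same file, a criterion added once tightens drafting and reviewing at the same time.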

step.04 · draft-confirm

You review before anything moves

All skills start at draft-confirm: the agent produces output, you review it before using it. Nothing is sent, shared, or published automatically. Autonomy is earned through consistent output quality over time — it’s graduated, not assumed.