---
name: reply-learning
description: Self-improving loop for community reply quality. Use this skill after receiving feedback on draft replies, after reviewing examples of good/bad replies, or when asked to "learn from this", "update the reply skill", or "what patterns do you see". This skill teaches how to extract durable patterns from specific feedback and codify them into the draft-warp-reply skill.
---
Extract durable patterns from feedback and examples, then codify them into the draft-warp-reply skill. The goal is always mental models and concepts, never reactive rules tied to a single situation.
Use this skill when:
- You receive feedback on a draft reply ("this is too brand-y", "you're hallucinating", "this isn't right")
- You're asked to review examples of past replies or triage decisions
- You're asked to look at reaction data or community engagement patterns
- You're explicitly asked to improve the reply skill
Start from the specific feedback or example. What happened? Be concrete:
- "I fabricated a known issue to sound helpful"
- "I conceded the competitor's framing of Warp as heavy"
- "The tone sounded like a community manager, not a builder"
The specific failure is a symptom. Find the underlying cause:
- Fabricating context → tendency to fill knowledge gaps with plausible filler
- Conceding framing → not confident enough in Warp's positioning
- Brand-y tone → optimizing for "sounding helpful" instead of actually being helpful
Check: would this principle apply to situations beyond the one that triggered it? If it only applies to one narrow case, it's too specific. If it applies broadly, it's a pattern worth codifying.
Good pattern: "If you don't know, say less, not more" — applies to bug reports, feature questions, roadmap inquiries, anything where you might be tempted to fill a gap.
Too specific: "Don't say 'we're tracking this' on bug reports" — this is just one manifestation of the broader principle.
Test: Can you think of 3+ different situations where this principle would change behavior? If yes, it's a pattern. If not, it's a rule.
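To make the test concrete, here is the "say less, not more" principle run through it, using the situation types already named above (the specific behaviors are illustrative, not taken from the actual skill):

```markdown
Principle: "If you don't know, say less, not more"
- Bug report you can't verify → acknowledge it without inventing a tracking status
- Feature question with no confirmed answer → describe what exists today, nothing more
- Roadmap inquiry → decline to hint at plans rather than fabricate a timeline
```

Three distinct situations where behavior changes, so it passes as a pattern.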
Read .agents/skills/draft-warp-reply/SKILL.md before adding anything. Often the right move is to sharpen an existing principle rather than add a new one. The skill should stay lean — every line needs to justify its context window cost.
Ask:
- Does an existing principle already cover this? → Sharpen its wording
- Does an existing principle contradict what you've learned? → Edit or remove it
- Is something in the skill actively leading to worse replies? → Delete it
- Is this a genuinely new dimension not yet covered? → Add it in the right section
- Is this a specific topic that keeps coming up? → Add to Topic-specific notes (but only if the pattern itself isn't general enough to live in Principles)
Adding, editing, and deleting are equally valid. A wrong principle does more damage than a missing one, but a missing one still leaves a gap. Match the action to what you've learned.
Principles describe how to think. Rules describe what to do. Prefer principles.
Principle: "Own Warp's frame — don't let someone else's comparison define what Warp is." → This changes thinking across all comparison scenarios.
Rule: "When someone compares Warp to Ghostty, lead with Warp's AI features." → This only helps in one situation and might be wrong in the next.
Place each learning in the section of draft-warp-reply where it belongs:
- Attitude — how to think and feel about the interaction (confidence, empathy, honesty)
- Behavior — how to act on those attitudes (be precise, say less, find hooks)
- Hard constraints — bright lines that should never be crossed
- Topic-specific notes — only for genuinely recurring topics with non-obvious correct answers (e.g., subscription redirect)
Most learnings belong in Attitude or Behavior.
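For orientation, the skill's four sections might look something like this (section names come from the list above; the example lines are hypothetical placeholders, not the skill's real contents):

```markdown
## Attitude
- Own Warp's frame — don't let someone else's comparison define what Warp is.

## Behavior
- If you don't know, say less, not more.

## Hard constraints
- Never fabricate known issues, fixes, or roadmap items.

## Topic-specific notes
- Subscription redirect: <the non-obvious correct answer, verified>
```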
Update .agents/skills/draft-warp-reply/SKILL.md with the change. Keep it tight — if the skill is getting long, look for principles that overlap and can be merged.
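As a sketch, a "sharpen an existing principle" edit might look like this diff against SKILL.md (both the before and after lines are hypothetical):

```diff
 ## Behavior
-- Be honest about what you know.
+- If you don't know, say less, not more: never fill a knowledge gap
+  with plausible-sounding filler about issues, fixes, or timelines.
```

The vague line is replaced rather than kept alongside a new one, so the skill's line count stays flat.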
When reviewing reaction data, triage patterns, or engagement analytics:
- Look for what the data says the team actually does vs. what the skill currently says to do. Mismatches are the highest-value finds.
- Quantify but don't over-index — percentages change, but the underlying reasoning usually doesn't. Codify the reasoning, not the numbers.
- Look for surprising skips and surprising replies — the edge cases reveal judgment calls that the skill might not yet capture.
- Check for consistent patterns across team members — if everyone makes the same call, it's probably a principle worth writing down. If they disagree, it's a judgment call that may not need codifying.
Anti-patterns to watch for:
- Adding a rule for every mistake — the skill should have fewer, sharper principles, not an ever-growing list of don'ts.
- Codifying preferences as principles — if it's a stylistic choice that could reasonably go either way, it's not a principle.
- Keeping stale patterns — if a principle no longer matches how the team actually behaves, update or remove it.
- Over-engineering the skill — the draft-warp-reply skill should stay under ~200 lines. If it's growing past that, consolidate.