See Boris’s post: these tips come from the Claude Code team, and there’s no single “right” setup—experiment and keep what works.
- Run 3–5 Claude sessions at once, one per task.
- The team’s preferred approach is `git worktree`, so each session has its own isolated working directory.
- Some folks also keep a dedicated “analysis” worktree for log reading / BigQuery-style investigation.
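A minimal, self-contained sketch of the worktree-per-session setup (the throwaway repo and branch names below are invented; substitute your own project):

```shell
# Self-contained demo: a temp repo standing in for your project.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# One worktree (and branch) per Claude session; each session runs in
# its own directory, so parallel edits never clobber each other.
# (You could add a third, e.g. an "analysis" worktree for log reading.)
git worktree add -q -b feature-auth ../wt-feature-auth   # session 1
git worktree add -q -b ci-fix      ../wt-ci-fix          # session 2

git worktree list    # one line per checkout
```

Start a separate `claude` session in each directory; when a task is done, clean up with `git worktree remove <path>`.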
- For anything non-trivial, spend effort on a solid plan first, then let Claude implement from it.
- A pattern: have one Claude draft the plan, then have a second Claude review it (as a “staff engineer” reviewer).
- If things go sideways, stop pushing forward—switch back to plan mode and re-plan (including verification steps).
- Treat your `CLAUDE.md` as a living set of rules and project norms Claude should follow.
- After correcting Claude, explicitly ask it to update `CLAUDE.md` so it won’t repeat the mistake.
- Ruthlessly iterate on this file over time until you see the mistake rate drop.
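For illustration, this is the kind of rule you (or Claude itself) might append after a correction; the rule text here is invented:

```shell
# Append a norm to CLAUDE.md; Claude Code reads this file at the start
# of every session in the repo, so the correction sticks.
cat >> CLAUDE.md <<'EOF'
- Never edit generated files under build/; change the generator instead.
- Run the test suite before reporting a task as done.
EOF
```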
- One workflow: maintain a notes directory per project/task, updated after each PR, and point Claude at it.
- If you do something more than once a day, turn it into a skill or slash command.
- Examples from the team:
  - A `/techdebt` command that runs at the end of sessions to find and remove duplication.
  - A command that syncs recent context (e.g., the last 7 days of Slack/GDrive/Asana/GitHub) into a single dump.
  - “Analytics-engineer” agents that write dbt models, do code review, and test changes in dev.
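A sketch of how a command like `/techdebt` could be defined: Claude Code picks up project slash commands from markdown files in `.claude/commands/`, and the prompt body below is invented:

```shell
# Create a project-scoped slash command; the file name becomes the
# command name, the file body becomes the prompt it runs.
mkdir -p .claude/commands
cat > .claude/commands/techdebt.md <<'EOF'
Review the changes from this session. Find duplicated or near-duplicated
logic, list each instance, and apply the smallest refactor that removes
it. Re-run the tests afterwards.
EOF
# Inside a Claude Code session in this repo, typing /techdebt now runs
# the prompt above.
```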
- Wire in Slack MCP and paste a bug thread, then simply tell Claude to fix it.
- Or delegate broadly: “Go fix the failing CI tests” (avoid micromanaging the method).
- For distributed systems, point Claude at Docker logs and let it troubleshoot.
- Challenge Claude: make it justify changes and act like a reviewer (“prove this works”, compare main vs your branch, etc.).
- If a fix is mediocre, ask for a fresh rewrite given what it learned (“scrap it and implement the elegant solution”).
- Write detailed specs and remove ambiguity before handing off work—specificity improves autonomy.
- The team likes Ghostty (fast rendering, good color/unicode support).
- Use `/statusline` so you always see context usage and the current git branch.
- Consider naming/color-coding tabs (often one tab per task/worktree), sometimes with tmux.
- Use voice dictation to produce longer, richer prompts faster than typing.
- If you want more “compute” on a task, explicitly say to use subagents.
- Offload subtasks to subagents to keep your main agent’s context clean and focused.
- More advanced: route sensitive permission checks to a stronger model via hooks (e.g., for security scanning/auto-approval patterns).
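A rough sketch of the hook side of this idea as a `PreToolUse` guard script. The file path, the pattern list, and the exit-code semantics (exit 2 blocks the tool call and feeds stderr back to Claude) are assumptions based on the hooks docs; verify against your Claude Code version, and register the script under `PreToolUse` in `.claude/settings.json`:

```shell
mkdir -p .claude/hooks
cat > .claude/hooks/guard.sh <<'EOF'
#!/bin/sh
# Reads the pending tool call (JSON) on stdin.
# Exit 0 allows it; exit 2 blocks it and returns stderr to Claude.
input=$(cat)
case "$input" in
  *"rm -rf"*|*"DROP TABLE"*|*force-push*)
    # For borderline cases you could instead route the decision to a
    # stronger model, e.g. (hypothetical model name):
    #   claude -p --model <stronger-model> "Is this tool call safe? $input"
    echo "Blocked by guard hook: potentially destructive command." >&2
    exit 2 ;;
esac
exit 0
EOF
chmod +x .claude/hooks/guard.sh
```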
- Ask Claude Code to use a database CLI (e.g., `bq`) to pull metrics and analyze them inline.
- The team keeps a BigQuery skill in-repo and uses it routinely for analytics inside Claude Code.
- The same idea generalizes to any datastore with a CLI, MCP server, or API.
- Enable a Learning/Explanatory output style in `/config` so Claude explains why it made changes.
- Ask for a visual HTML presentation to explain unfamiliar code (Claude can produce surprisingly good “slides”).
- Request ASCII diagrams for protocols/codebases.
- Build a spaced-repetition skill: you explain your understanding, Claude asks follow-ups, and it stores the results for review.
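One way such a skill could be scaffolded. The `.claude/skills/<name>/SKILL.md` layout with YAML frontmatter follows the Agent Skills convention, but the frontmatter fields and the instruction body below are a hedged sketch, not the team’s actual skill:

```shell
# Scaffold a project skill; the SKILL.md body tells Claude how to run
# a review session and where to store results (paths are invented).
mkdir -p .claude/skills/spaced-repetition
cat > .claude/skills/spaced-repetition/SKILL.md <<'EOF'
---
name: spaced-repetition
description: Quiz the user on past topics and store results for review
---
When invoked: ask the user to explain a previously studied topic in
their own words, probe gaps with follow-up questions, then append a
dated summary of what they got right and wrong to notes/reviews.md.
EOF
```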