@jarodtaylor
Last active March 25, 2026 13:13
Claude Code Setup Audit Skill

A Claude Code skill that performs a comprehensive audit of your entire Claude Code configuration — surfacing dead weight, conflicts, stale rules, and wasted context tokens so you can keep your setup lean and effective.

Why This Exists

Claude Code configurations grow organically. You add a rule to fix a bad output, paste in a convention from a blog post, duplicate a preference across files without realizing it. Over time, your CLAUDE.md, skills, and settings accumulate cruft that eats context tokens without improving results.

This skill gives you a structured way to periodically audit everything and get an honest assessment of what to cut, merge, reword, or relocate.

What It Audits

  • CLAUDE.md (both global and project-level)
  • settings.json (both global and project-level)
  • All installed skills and their bundled references
  • Slash commands and hooks
  • MCP configurations (.mcp.json and ~/.claude/mcp.json)
  • Any other instruction files it can find (.cursorrules, .windsurfrules, etc.)

What It Checks For

Every rule, instruction, and preference is evaluated against six questions:

  1. Already default behavior? — Claude does this without being told. Dead weight.
  2. Conflicts with another rule? — Two rules contradict each other, especially across global vs. project layers.
  3. Redundant / duplicate? — Same intent covered elsewhere in different words.
  4. Bandaid fix? — Added to fix one specific bad output rather than improve things generally.
  5. Too vague? — So ambiguous it gets interpreted differently every time (e.g., "be more natural").
  6. Stale / orphaned? — References tools, paths, or frameworks that no longer exist in the project.

It also estimates the token cost of your full configuration and flags how much of your context budget is being spent on non-actionable rules.

What You Get Back

  • A summary of your setup's size and token footprint
  • Every conflict found between files
  • A prioritized list of recommended cuts (sorted by impact)
  • Rules that should be merged into single, cleaner versions
  • Rewrites for rules with the right intent but poor wording
  • Global vs. project layer misplacements
  • A diff-style changelist for your CLAUDE.md (not a full rewrite — surgical changes you can review and apply selectively)

Installation

Copy the SKILL.md file into your global skills directory:

~/.claude/skills/audit-setup/SKILL.md

That's it. No dependencies, no bundled scripts.

Usage

Inside any Claude Code session:

/audit-setup

The skill is set to disable-model-invocation: true, meaning Claude will never run it automatically. It only fires when you explicitly invoke it. This is intentional — you don't want an audit kicking off mid-task because you casually mentioned your config.

Recommended Cadence

Run it whenever you feel like your setup has drifted, or on a regular schedule — monthly works well if you're actively adding rules. It's also worth running after you install a new batch of skills or make significant changes to your CLAUDE.md.

Credit

Inspired by this prompt from @itsolelehmann, adapted and extended into a reusable Claude Code skill with additional checks for stale rules, token cost analysis, layer conflicts, and diff-style output.

---
name: audit-setup
description: Comprehensive audit of your Claude Code configuration — CLAUDE.md, skills, context files, settings, MCP configs, and hooks. Use when you want to clean up, deduplicate, or optimize your Claude Code setup. Invoke this skill whenever you mention auditing, reviewing, cleaning up, or optimizing your Claude Code configuration, memory files, or instruction files.
disable-model-invocation: true
---

Claude Code Setup Audit

You are performing a comprehensive audit of the user's Claude Code configuration. The goal is to eliminate bloat, resolve conflicts, surface stale rules, and produce a prioritized changelist the user can act on.

Phase 1: Discovery

Read everything before forming any opinions. Do not respond until you have completed this full scan.

Scan these locations (in order):

  1. Project-level config (.claude/ in the current project root)

    • CLAUDE.md (project instructions)
    • settings.json
    • skills/ (every SKILL.md and any bundled references)
    • commands/ (slash commands)
    • hooks/
    • Any other .md or config files in .claude/
  2. Global config (~/.claude/)

    • CLAUDE.md (global instructions)
    • settings.json
    • skills/ (every SKILL.md and any bundled references)
    • commands/
    • hooks/
  3. MCP configuration

    • .mcp.json in the project root
    • ~/.claude/mcp.json (global MCP config)
  4. Other instruction files

    • .cursorrules, .windsurfrules, or similar if present
    • Any README.md sections that appear to contain agent instructions
    • Any other *.md files in the project root that look like they contain rules or preferences

Build a mental inventory of every rule, instruction, convention, and preference you find across all files before proceeding.

Phase 2: Analysis

For each rule, instruction, or preference you found, evaluate it against these six questions:

Q1 — Already Default Behavior?

Is this something Claude already does without being told? If Claude's base behavior already covers it, the rule is dead weight consuming context tokens for no benefit.

Q2 — Conflicts With Another Rule?

Does this contradict, or sit in tension with, another rule elsewhere in the setup? Pay special attention to conflicts between layers (global vs. project-level): project-level should override global, but contradictions still cause confusion.

Q3 — Redundant / Duplicate?

Does this repeat something already covered by a different rule or file? Look for both exact duplicates and semantic duplicates (same intent, different wording).
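Exact duplicates are easy to spot; semantic ones less so. A crude first-pass heuristic — a sketch only, since the skill itself relies on the model's judgment rather than string matching — is fuzzy similarity over rule text:

```python
import difflib

def semantic_duplicates(rules: list[str], threshold: float = 0.8):
    """Flag rule pairs whose wording is suspiciously similar.

    A rough proxy for 'same intent, different words' — true semantic
    comparison needs an embedding model or the LLM itself.
    """
    pairs = []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            ratio = difflib.SequenceMatcher(
                None, rules[i].lower(), rules[j].lower()).ratio()
            if ratio >= threshold:
                pairs.append((rules[i], rules[j], round(ratio, 2)))
    return pairs
```

Anything this flags is worth a closer look; anything it misses may still be a semantic duplicate, which is why the real check is left to the model.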

Q4 — Bandaid Fix?

Does this read like it was added to fix one specific bad output rather than improve outputs overall? These tend to be overly narrow, reference a specific incident, or micromanage a single behavior.

Q5 — Too Vague to Be Actionable?

Is this so vague that it would be interpreted differently on every invocation? Examples: "be more natural," "use a good tone," "write clean code," "be thorough." If you can't objectively verify compliance, the rule is noise.

Q6 — Stale or Orphaned?

Does this reference tools, file paths, frameworks, APIs, dependencies, or workflows that no longer exist in the project? Check whether referenced paths actually exist and whether mentioned tools are still configured.

Phase 3: Token Cost Assessment

Estimate the approximate token cost of the user's full configuration (all loaded CLAUDE.md content, skill descriptions, and always-loaded instruction files). Note:

  • What percentage of the setup is actionable vs. dead weight
  • Which files or sections are the biggest offenders
  • Whether the skill description budget (~2% of context window) is being pressured

Phase 4: Output

Present your findings in this exact structure:

1. Setup Summary

A brief overview of what you found: how many files, how many distinct rules/instructions, and the estimated token footprint.

2. Conflicts Between Files

List every conflict found between any two files. For each conflict:

  • File A: [path] — the rule
  • File B: [path] — the contradicting rule
  • Recommendation: which to keep and why

3. Recommended Cuts

A numbered list of everything you'd remove. For each item:

  • Location: file path and the specific rule
  • Reason: one-line explanation citing which question (Q1–Q6) it fails
  • Impact: low / medium / high (how much context is being wasted)

Sort by impact (high first).

4. Recommended Merges

Rules that aren't individually wrong but should be consolidated. For each:

  • Rules to merge: list the overlapping rules and their locations
  • Proposed single rule: the merged version

5. Recommended Rewrites

Rules that have the right intent but are too vague, too narrow, or poorly worded. For each:

  • Current: the rule as-is
  • Proposed: the improved version
  • Why: what's better about the new version

6. Global vs. Project Layer Issues

Any rules that are in the wrong layer (e.g., project-specific rules in global config, or generic preferences buried in a project-level file that should be global).

7. Changelist for CLAUDE.md

Rather than a full rewrite, provide a diff-style changelist:

  • Lines/sections to remove (with reason)
  • Lines/sections to merge (with proposed replacement)
  • Lines/sections to reword (with proposed replacement)
  • Suggested reordering if the current structure buries important rules

This applies to both global and project CLAUDE.md if both exist.
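The changelist's target shape is the familiar unified-diff format, which the standard library can render. This sketch shows the output shape only, not how the model generates its proposals:

```python
import difflib

def changelist(original: str, proposed: str, path: str = "CLAUDE.md") -> str:
    """Render proposed surgical edits as a reviewable unified diff."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed.splitlines(keepends=True),
        fromfile=f"a/{path}", tofile=f"b/{path}",
    ))
```

Because the output is a plain diff, the user can apply hunks selectively rather than accepting a wholesale rewrite.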

Guidelines

  • Be direct. If a rule is useless, say so — the user asked for honesty.
  • Preserve intent. When proposing merges or rewrites, don't lose the user's actual preferences.
  • Respect layers. Global config = cross-project defaults. Project config = overrides for this specific codebase.
  • Don't invent new rules. Only work with what exists.
  • If a rule is good and well-written, skip it silently. Only surface problems.
  • When in doubt about whether something is "default behavior," err on the side of keeping it — false positives are more annoying than a slightly heavy config.