---
name: Claude Code Subagent Specialist
description: Refine and troubleshoot Claude Code subagents by optimizing prompts, tool access, descriptions, and performance. Use when improving existing subagents, debugging activation issues, or optimizing delegation patterns. NOT for initial creation - use /agents command first.
---

# Claude Code Subagent Refinement & Troubleshooting

## When to Use This Skill

Use this skill when:

- Refining existing subagent prompts for better performance
- Troubleshooting why a subagent isn't activating
- Optimizing tool access and permissions
- Improving subagent descriptions for better delegation
- Debugging context management issues
- Testing and validating subagent behavior
- Converting ad-hoc workflows to reusable subagents

Do NOT use this skill for:

- Initial creation - use the /agents command instead (it provides an interactive UI)
- Creating slash commands (use the claude-code-slash-commands skill)
- General Claude Code troubleshooting

Important: Always start with /agents to create subagents. Use this skill to refine them afterward.

## Quick Reference: Subagent Structure

---
name: agent-name              # Lowercase, kebab-case identifier
description: When to use      # Triggers automatic delegation
tools: Tool1, Tool2          # Optional: omit to inherit all
model: sonnet                # Optional: sonnet/opus/haiku/inherit
---

System prompt defining role, capabilities, and behavior.
Include specific instructions, constraints, and examples.

File Locations:

- Project: .claude/agents/ (highest priority)
- User: ~/.claude/agents/ (shared across projects)
- Plugin: agents/ in plugin directory
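If it's unclear which definition Claude Code will pick up, listing all three locations side by side usually settles it. A minimal sketch, assuming the paths above; the plugin path is a placeholder to adjust for your setup:

```bash
# List subagent definitions in priority order: project, then user, then plugin.
# "path/to/plugin/agents" is a placeholder - substitute your plugin's directory.
for dir in .claude/agents "$HOME/.claude/agents" path/to/plugin/agents; do
  echo "== $dir =="
  ls -1 "$dir"/*.md 2>/dev/null || echo "(no agents found)"
done
```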

## Common Problems & Solutions

### Problem 1: Subagent Never Activates

Symptoms: Claude doesn't delegate to your subagent

Diagnose:

# Check your description field
---
description: Helper agent  # ❌ Too vague
---

Fix - Make Description Specific:

# Before (vague)
---
description: Helps with security
---

# After (specific)
---
description: Analyze code for security vulnerabilities including SQL injection, XSS, authentication flaws, and hardcoded secrets. Use PROACTIVELY when reviewing code for security issues.
---

Best Practices for Descriptions:

- Include specific trigger words and scenarios
- Add "use PROACTIVELY" or "MUST BE USED" for automatic activation
- Mention the domain/context clearly
- List key capabilities or checks
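To spot vague descriptions across all your agents at once, a rough length check works surprisingly well: very short descriptions are usually the ones that fail to trigger. A sketch, assuming each description fits on a single description: line:

```bash
# Flag agent descriptions that are suspiciously short (likely too vague to trigger).
for f in .claude/agents/*.md; do
  desc=$(grep -m1 '^description:' "$f" | cut -d: -f2-)
  [ "${#desc}" -lt 60 ] && echo "REVIEW $f ->${desc}"
done
```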

### Problem 2: Subagent Has Wrong Tools

Symptoms: Subagent can't complete tasks or has too many permissions

Diagnose:

# Check current tool configuration
cat .claude/agents/my-agent.md | grep "tools:"

Fix - Whitelist Specific Tools:

# Inherits all tools (may be too permissive)
---
name: security-analyzer
description: Security analysis
---

# Restricted to read-only tools (better)
---
name: security-analyzer
description: Security analysis
tools: Read, Grep, Glob
---

Tool Access Strategies:

1. Inherit All (Default):

# Omit 'tools' field entirely
---
name: general-helper
description: General assistance
---

Use when: Agent needs full flexibility

2. Read-Only Access:

---
tools: Read, Grep, Glob, Bash(git log:*), Bash(git diff:*)
---

Use when: Analysis, review, documentation

3. Specific Permissions:

---
tools: Read, Write, Edit, Bash(npm test:*)
---

Use when: Implementation with validation

4. No File Access:

---
tools: WebFetch, WebSearch, Bash
---

Use when: Research, external data gathering
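To audit which of these strategies each of your agents currently uses, you can print every agent's tools: line and flag the ones that omit it (and therefore inherit everything). A minimal sketch, assuming single-line tools: fields:

```bash
# Report each agent's tool whitelist; a missing tools: line means it inherits all tools.
for f in .claude/agents/*.md; do
  tools=$(grep -m1 '^tools:' "$f")
  echo "$f -> ${tools:-INHERITS ALL (no tools: field)}"
done
```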

### Problem 3: Poor Quality Output

Symptoms: Subagent completes tasks but results are inconsistent or low-quality

Diagnose: Check system prompt specificity

Fix - Enhance System Prompt:

# Before (vague)
---
name: code-reviewer
---

You review code for issues.
# After (specific)
---
name: code-reviewer
---

You are a senior code reviewer specializing in production-ready code quality.

## Your Responsibilities

1. **Logic & Correctness**
   - Verify algorithm correctness
   - Check edge case handling
   - Validate error conditions

2. **Code Quality**
   - Ensure single responsibility principle
   - Check for code duplication (DRY)
   - Verify meaningful naming

3. **Security**
   - Identify injection vulnerabilities
   - Check authentication/authorization
   - Flag hardcoded secrets

4. **Performance**
   - Spot O(n²) or worse algorithms
   - Identify unnecessary loops
   - Check resource cleanup

## Output Format

For each issue found:
- **Severity**: Critical/High/Medium/Low
- **Location**: file:line
- **Issue**: Clear description
- **Fix**: Specific code example

## Constraints

- Only report actionable issues
- Provide code examples for fixes
- Focus on high-impact problems first
- No nitpicking style issues unless severe

System Prompt Best Practices:

- Define role and expertise level
- List specific responsibilities
- Include output format requirements
- Add examples of good/bad cases
- Specify constraints and boundaries
- Use headings for scannability

### Problem 4: Context Pollution

Symptoms: Main conversation gets cluttered with subagent details

Understand: Subagents have isolated context windows - only their final output returns to the main conversation

Fix - Structure Output Properly:

# System prompt guidance
---
name: research-agent
---

Research [topic] and return ONLY:
1. Key findings (3-5 bullet points)
2. Relevant URLs
3. Recommendation

Do NOT include:
- Full article text
- Research methodology
- Intermediate thoughts

Best Practices:

- Explicitly tell the subagent what to return
- Request summaries, not full details
- Have the subagent filter before returning
- Use structured output formats

### Problem 5: Activation Too Broad/Narrow

Symptoms: Subagent activates for wrong tasks OR misses relevant tasks

Diagnose - Test Trigger Scenarios:

# Test cases for a "security-analyzer" subagent

Should Activate:
- "Review this auth code for vulnerabilities"
- "Check if we're handling passwords securely"
- "Scan for SQL injection risks"

Should NOT Activate:
- "Write unit tests" (different concern)
- "Refactor this function" (not security-focused)
- "Add logging" (different task)

Fix - Refine Description:

# Too narrow
---
description: Checks for SQL injection only
---

# Too broad
---
description: Helps with code
---

# Just right
---
description: Analyze code for security vulnerabilities including SQL injection, XSS, CSRF, authentication issues, and secrets exposure. Use when reviewing code for security concerns or compliance requirements.
---

### Problem 6: Model Selection Issues

Symptoms: Subagent too slow/expensive OR too simple for task

Fix - Choose Right Model:

# Fast, simple tasks (formatting, linting)
---
model: haiku
---

# Complex reasoning (architecture, design)
---
model: opus
---

# Balanced (most cases)
---
model: sonnet
---

# Same as main conversation
---
model: inherit
---

Model Selection Guide:

| Model   | Use For                             | Avoid For                          |
|---------|-------------------------------------|------------------------------------|
| haiku   | Simple transforms, quick checks     | Complex reasoning, creativity      |
| sonnet  | General tasks, balanced quality     | When opus is specifically needed   |
| opus    | Complex architecture, creative work | Simple/repetitive tasks (cost)     |
| inherit | Task complexity matches main thread | When you need different capability |
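To see how these choices are distributed across your existing agents, a quick tally of the model: field helps; agents without one fall back to the default. A small sketch:

```bash
# Count how many agents request each model; agents with no model: line use the default.
grep -h '^model:' .claude/agents/*.md | sort | uniq -c | sort -rn
grep -L '^model:' .claude/agents/*.md | sed 's/^/uses default: /'
```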

## Optimization Patterns

### Pattern 1: Role-Based Pipeline

Create specialized agents for each workflow stage:

# 1. Spec Agent
---
name: product-spec-writer
description: Create detailed product specifications from user requirements
tools: Read, Write, WebSearch
model: opus
---

You convert user requirements into detailed product specs.

[Detailed prompt...]
# 2. Architect Agent
---
name: solution-architect
description: Design system architecture from product specs
tools: Read, Write, Grep, Glob
model: opus
---

You design scalable system architectures.

[Detailed prompt...]
# 3. Implementer Agent
---
name: code-implementer
description: Implement features from architectural designs
tools: Read, Write, Edit, Bash(npm test:*)
model: sonnet
---

You implement features following architectural guidelines.

[Detailed prompt...]

Usage: Chain with hooks or explicit handoffs

### Pattern 2: Domain Specialists

# Frontend Specialist
---
name: frontend-specialist
description: React/TypeScript UI development and component design. Use PROACTIVELY for frontend work.
tools: Read, Write, Edit, Grep, Bash(npm:*)
---

You are a React/TypeScript expert specializing in modern frontend development.

## Tech Stack
- React 18+ with hooks
- TypeScript (strict mode)
- Tailwind CSS
- Component-driven architecture

## Principles
- Functional components only
- Custom hooks for logic reuse
- Accessibility (WCAG AA)
- Performance (lazy loading, memoization)

[More specific guidance...]
# Backend Specialist
---
name: backend-specialist
description: Node.js/Express API development, database design, and server architecture. Use PROACTIVELY for backend work.
tools: Read, Write, Edit, Grep, Bash(npm:*), Bash(docker:*)
---

You are a Node.js backend expert.

[Similar detailed structure...]

### Pattern 3: Security-First Architecture

# Security Analyzer (Read-Only)
---
name: security-analyzer
description: Analyze code for security vulnerabilities before allowing modifications. MUST BE USED before code changes in sensitive areas.
tools: Read, Grep, Glob, Bash(git diff:*)
---

You are a security analyst. Review code for vulnerabilities BEFORE changes are made.

## Security Checks
1. Authentication/Authorization
2. Input validation
3. SQL injection
4. XSS vulnerabilities
5. CSRF protection
6. Secrets management

## Output
Return ONLY:
- Security score (1-10)
- Critical issues (block changes)
- Warnings (allow with caution)

### Pattern 4: Test-Driven Subagent

---
name: test-first-developer
description: Write comprehensive tests before implementing features. Use PROACTIVELY for TDD workflows.
tools: Read, Write, Bash(npm test:*)
model: sonnet
---

You are a TDD expert. For every feature request:

1. **Analyze Requirements**
   - Extract testable behaviors
   - Identify edge cases

2. **Write Tests FIRST**
   - Unit tests for logic
   - Integration tests for workflows
   - Edge case coverage

3. **Run Tests** (they should fail)
   ```bash
   npm test
   ```

4. **Implement** ONLY enough to pass tests

5. **Refactor** while keeping tests green

Test Structure

describe('Feature', () => {
  it('handles happy path', () => {})
  it('handles edge case 1', () => {})
  it('throws on invalid input', () => {})
})

Never implement before tests exist.


## Testing & Validation

### 1. Test Trigger Accuracy

Create test scenarios:

```markdown
# Test Plan for "api-developer" subagent

## Positive Tests (Should Activate)
1. "Create a REST endpoint for user authentication"
   - Expected: Activates
   - Actual: ___

2. "Add GraphQL mutation for updating profile"
   - Expected: Activates
   - Actual: ___

## Negative Tests (Should NOT Activate)
1. "Write unit tests for the API"
   - Expected: Does not activate (testing concern)
   - Actual: ___

2. "Review API security"
   - Expected: Does not activate (security concern)
   - Actual: ___

## Results
- Precision: X% (correct activations / total activations)
- Recall: Y% (correct activations / should activate)
```
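If you log each trial as a line of the form activated,should_activate (a made-up CSV format just for this sketch - nothing in Claude Code produces it), the two percentages can be computed mechanically:

```bash
# Compute precision/recall from results.csv, where each line is "activated,should_activate" (1 or 0).
awk -F, '
  $1==1 && $2==1 { tp++ }
  $1==1 && $2==0 { fp++ }
  $1==0 && $2==1 { fn++ }
  END {
    printf "Precision: %.0f%%\n", (tp+fp) ? 100*tp/(tp+fp) : 0
    printf "Recall:    %.0f%%\n", (tp+fn) ? 100*tp/(tp+fn) : 0
  }
' results.csv
```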

### 2. Test Output Quality

# Quality Checklist

Task: "Review auth.js for security issues"

Subagent Output Should Include:
- [ ] Specific vulnerabilities identified
- [ ] File:line locations
- [ ] Severity ratings
- [ ] Concrete fix suggestions
- [ ] Code examples for fixes

Should NOT Include:
- [ ] Generic advice
- [ ] Full file listings
- [ ] Unrelated issues
- [ ] Style nitpicks

### 3. Test Tool Access

# Verify tool restrictions work
# Give subagent a task requiring forbidden tool

# Example: Read-only subagent shouldn't be able to edit
# Test by asking it to "fix the security issue"
# Should fail or request permission
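You can also check the configuration statically before running a live test. A small sketch that flags write-capable tools in an agent that is supposed to be read-only (the file name is just an example):

```bash
# Fail loudly if a supposedly read-only agent whitelists write-capable tools.
# "security-analyzer.md" is an example - point this at your own agent file.
if grep -m1 '^tools:' .claude/agents/security-analyzer.md | grep -Eq 'Write|Edit'; then
  echo "WARNING: read-only agent has write tools in its whitelist"
fi
```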

### 4. Performance Testing

# Performance Metrics

Task: "Generate API documentation"

Metrics:
- Time to complete: ___
- Tokens used: ___
- Quality score (1-10): ___
- Required follow-ups: ___

Optimization targets:
- < 30 seconds for docs
- < 5000 tokens
- Quality >= 8
- 0 follow-ups needed
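Wall-clock time is the easiest of these metrics to capture directly. A rough sketch, assuming the claude CLI's headless -p (print) mode is available in your install (check claude --help first); the task and subagent name are examples:

```bash
# Time one representative task; rerun after each refinement and compare.
time claude -p "Use the doc-writer subagent to generate API documentation for src/api.js"
```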

## Refinement Workflow

### Step 1: Baseline Performance

# Document current behavior
echo "Task: [Specific task]
Expected: [What should happen]
Actual: [What actually happens]
Issues: [Problems observed]
" > .claude/agents/refinement-notes.md

### Step 2: Identify Root Cause

Common causes:

- Description too vague → Won't activate
- Prompt lacks specificity → Poor output
- Wrong tools → Can't complete task
- Wrong model → Too slow/simple
- Output not filtered → Context pollution

### Step 3: Make Targeted Changes

Only change ONE thing at a time:

  1. Update description OR
  2. Refine prompt OR
  3. Adjust tools OR
  4. Change model

### Step 4: Test Changes

# Test with same scenarios
# Compare before/after results
# Document improvements
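Building on the notes file from Step 1, appending a dated before/after entry each round keeps improvements traceable. A minimal sketch:

```bash
# Append a dated before/after entry to the refinement log started in Step 1.
cat >> .claude/agents/refinement-notes.md <<EOF

## $(date +%F) - changed: [what you changed]
Before: [behavior on the test scenario]
After:  [behavior after the change]
EOF
```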

### Step 5: Iterate

Repeat until the subagent meets your quality bar.

## Best Practices Summary

### Description Writing

# Template
description: [Action verb] [domain/task] [including specific capabilities]. Use [PROACTIVELY / MUST BE USED] when [trigger scenario].

Examples:

description: Analyze Python code for performance bottlenecks including O(n²) algorithms, memory leaks, and inefficient database queries. Use PROACTIVELY when optimizing Python applications.

description: Generate comprehensive API documentation from code including endpoints, parameters, responses, and examples. Use when documenting REST or GraphQL APIs.

description: Review frontend code for accessibility issues following WCAG 2.1 AA standards. MUST BE USED for all UI component changes.

### System Prompt Structure

# Role Definition
You are a [role] specializing in [domain].

## Responsibilities
1. [Primary responsibility]
2. [Secondary responsibility]
3. [Additional responsibilities]

## Process
1. [Step 1]
2. [Step 2]
3. [Step 3]

## Output Format
[Specific structure required]

## Examples

### Good Example
[Show what good looks like]

### Bad Example
[Show what to avoid]

## Constraints
- [Important limitation]
- [Another constraint]

### Tool Selection Strategy

Decision Tree:

1. Does it need to modify files?
   No → Read, Grep, Glob only
   Yes → Continue

2. Does it need to run tests/builds?
   No → Read, Write, Edit only
   Yes → Add Bash(test:*), Bash(build:*)

3. Does it need external data?
   Yes → Add WebFetch, WebSearch
   No → Continue

4. Does it need git operations?
   Yes → Add Bash(git:*) with specific commands
   No → Done

### Model Selection

Choose model based on:

1. Task complexity
   - Simple transforms → haiku
   - Standard coding → sonnet
   - Complex reasoning → opus

2. Cost sensitivity
   - High volume, simple → haiku
   - Balanced → sonnet
   - Quality critical → opus

3. Speed requirements
   - Real-time needed → haiku
   - Standard → sonnet
   - Can wait → opus

Default: sonnet (best balance)

## Debugging Checklist

When a subagent doesn't work as expected, walk through this checklist (a quick script for the mechanical checks follows the list):

- [ ] Description is specific and includes trigger words
- [ ] Description includes "PROACTIVELY" or "MUST BE USED" if needed
- [ ] System prompt defines role clearly
- [ ] System prompt includes process/steps
- [ ] System prompt specifies output format
- [ ] System prompt has examples
- [ ] Tools match required capabilities
- [ ] Tools follow least-privilege principle
- [ ] Model appropriate for task complexity
- [ ] File location correct (.claude/agents/)
- [ ] YAML frontmatter valid
- [ ] Name uses kebab-case
- [ ] Tested with positive/negative scenarios
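The mechanical items on this list (file location, valid-looking frontmatter, kebab-case name) lend themselves to a quick script. A rough sketch, not a full validator:

```bash
# Check location, frontmatter delimiter, and kebab-case names for every project agent.
for f in .claude/agents/*.md; do
  head -1 "$f" | grep -q '^---$' || echo "$f: missing YAML frontmatter"
  name=$(grep -m1 '^name:' "$f" | awk '{print $2}')
  echo "$name" | grep -Eq '^[a-z0-9]+(-[a-z0-9]+)*$' || echo "$f: name not kebab-case ($name)"
done
```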

## Common Anti-Patterns

### Anti-Pattern 1: Generic Description

---
description: Helps with coding
---

Why Bad: Won't trigger reliably

Fix: Be specific about domain and triggers

### Anti-Pattern 2: No Process Defined

You are a code reviewer. Review code.

Why Bad: Inconsistent results

Fix: Define step-by-step process

### Anti-Pattern 3: All Tools Granted

---
# Omitting tools when only reads needed
---

Why Bad: Unnecessary permissions, security risk

Fix: Whitelist minimum required tools

### Anti-Pattern 4: Verbose System Prompt

You are an expert developer with 20 years of experience who has worked on numerous projects across different industries... [3000 words]

Why Bad: Token waste, slower activation

Fix: Be concise, focus on process and format

### Anti-Pattern 5: No Output Structure

Review the code and tell me about issues.

Why Bad: Inconsistent format, hard to parse

Fix: Define exact output format

## Advanced Techniques

### Technique 1: Chained Subagents

Use hooks or explicit handoffs:

// .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Please use security-analyzer subagent to review this file' && exit 0"
          }
        ]
      }
    ]
  }
}

### Technique 2: Context Injection

---
name: context-aware-developer
---

Before starting any task:
1. Read PROJECT_CONTEXT.md
2. Review ARCHITECTURE.md
3. Check CODING_STANDARDS.md

Then proceed with development following documented patterns.

### Technique 3: Quality Gates

---
name: pr-ready-checker
description: Verify code is PR-ready before submitting. MUST BE USED before creating pull requests.
tools: Read, Grep, Bash(npm test:*), Bash(npm run lint:*)
---

Verify PR readiness:

1. **Tests Pass**
   ```bash
   npm test
   ```
   All tests must pass.

2. **Linting Clean**
   ```bash
   npm run lint
   ```
   Zero warnings or errors.

3. **Coverage Adequate**
   - New code > 80% covered
   - Overall coverage not decreased

4. **Documentation Updated**
   - README if public API changed
   - Inline comments for complex logic

5. **No Debug Code**
   - No console.log
   - No debugger statements
   - No commented code

Return: "PR Ready: Yes/No" + blockers list


### Technique 4: Iterative Refinement Prompt

```yaml
---
name: iterative-implementer
---

When implementation fails or produces errors:

1. **Analyze Failure**
   - What was the error?
   - Why did it happen?
   - What was I trying to achieve?

2. **Adjust Approach**
   - How should I do it differently?
   - What did I learn?

3. **Re-implement**
   - Apply new approach
   - Test immediately

4. **Verify**
   - Did it work?
   - If not, repeat from step 1

Never give up after one failure. Iterate until success.

```

## Migration: Ad-Hoc to Subagent

### When to Migrate

Migrate repetitive prompts to subagents when:

- You've used the same prompt 3+ times
- Prompt has a clear pattern/structure
- Task benefits from isolation
- Multiple team members need it

### Migration Process

#### Step 1: Extract Pattern

# Repeated prompts you've used:

1. "Review auth.js for security issues including SQL injection, XSS, and auth flaws"
2. "Check payment.js for security vulnerabilities like injection and secrets"
3. "Analyze api.js for security problems including validation and auth"

# Common pattern:
Review [file] for security [vulnerability types]

#### Step 2: Generalize

---
name: security-reviewer
description: Review code for security vulnerabilities including SQL injection, XSS, authentication flaws, and hardcoded secrets. Use PROACTIVELY for security reviews.
tools: Read, Grep, Glob
---

Review provided files for security vulnerabilities:

[Extract common structure from your prompts]

#### Step 3: Test & Refine

Test with previous use cases, refine until quality matches or exceeds manual prompts.

## Resources


Remember: Start with /agents command for creation. Use this skill for refinement. Iterate based on real usage. Test thoroughly. Document learnings.