
Agent Teams — Multi-Agent Coordination

Agent Teams let multiple Claude Code agents work on different parts of a project simultaneously. This guide covers when and how to use them effectively.

When to Use Teams

Good candidates:

- Large features touching multiple independent modules
- Parallel work: frontend + backend, tests + implementation
- Bulk operations: migrating many files, updating many modules

Skip teams for:

- Single-file changes
- Sequential tasks where each step depends on the previous
- Tasks under ~30 minutes of work

Setup

Enable Agent Teams in ~/.claude/settings.json:

{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}

Custom Agents (YAML Frontmatter)

Define specialized agents in .claude/agents/<name>.md using YAML frontmatter:

---
name: builder
description: Implementation agent for writing code. Use when the task requires creating or modifying files.
tools: Read, Write, Edit, Bash, Glob, Grep
model: sonnet
---

# Builder Agent

You are a focused implementation agent. Your job is to write clean,
tested code based on the plan provided.

## Rules
- Follow the coding standards in CLAUDE.md
- Run lint and tests after every file change
- Never modify files outside your assigned module

Frontmatter Fields

| Field | Required | Description |
|-------|----------|-------------|
| name | Yes | Agent identifier (kebab-case) |
| description | Yes | When to use this agent (include trigger phrases) |
| tools | No | Comma-separated list of allowed tools (restricts agent capabilities) |
| model | No | Model to use: opus, sonnet, or haiku |
| hooks | No | Agent-specific hook overrides (same format as settings.local.json) |
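A simple script can catch missing or misspelled fields before an agent file is used. The sketch below is hypothetical (not part of Claude Code) and uses a deliberately naive `key: value` parser rather than a full YAML library:

```python
"""Sketch: validate required frontmatter fields in .claude/agents/*.md.

Assumes flat `key: value` frontmatter lines; a real validator would use a
YAML parser. The field names follow the table above."""
from pathlib import Path

REQUIRED = {"name", "description"}
OPTIONAL = {"tools", "model", "hooks"}

def frontmatter_fields(text: str) -> set[str]:
    """Return top-level field names from a '---'-delimited frontmatter block."""
    parts = text.split("---")
    if len(parts) < 3:
        return set()  # no frontmatter block found
    fields = set()
    for line in parts[1].splitlines():
        # Top-level keys start at column 0; indented/nested lines are skipped.
        if ":" in line and not line.startswith((" ", "\t", "-")):
            fields.add(line.split(":", 1)[0].strip())
    return fields

def check_agent(path: Path) -> list[str]:
    """Report missing required fields and unknown fields for one agent file."""
    fields = frontmatter_fields(path.read_text())
    errors = [f"{path.name}: missing required field '{f}'"
              for f in sorted(REQUIRED - fields)]
    errors += [f"{path.name}: unknown field '{f}'"
               for f in sorted(fields - REQUIRED - OPTIONAL)]
    return errors

if __name__ == "__main__":
    for agent in Path(".claude/agents").glob("*.md"):
        for err in check_agent(agent):
            print(err)
```

Run it from the repo root; silence means every agent file declares at least `name` and `description`.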

Agent-Specific Hooks

Agents can have their own hooks that run only when that agent is active:

---
name: builder
description: Implementation agent with lint validation
tools: Read, Write, Edit, Bash, Glob, Grep
hooks:
  PostToolUse:
    - matcher: "Write|Edit"
      hooks:
        - type: command
          command: "python3 .claude/hooks/lint-on-edit.py"
---
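For reference, the `lint-on-edit.py` script referenced above might look like the following sketch. It assumes Claude Code's hook interface (a JSON payload on stdin whose `tool_input` carries the edited `file_path`, with a non-zero exit code surfacing stderr back to the agent); the `ruff` invocation is illustrative:

```python
"""Hypothetical sketch of .claude/hooks/lint-on-edit.py."""
import subprocess
import sys

def should_lint(file_path: str) -> bool:
    """Lint only Python files; let everything else pass through."""
    return file_path.endswith(".py")

def run_hook(payload: dict) -> int:
    """Return the hook's exit code: 0 = pass, 2 = report lint errors."""
    file_path = payload.get("tool_input", {}).get("file_path", "")
    if not should_lint(file_path):
        return 0
    result = subprocess.run(["ruff", "check", file_path],
                            capture_output=True, text=True)
    if result.returncode != 0:
        # stderr is what gets surfaced back to the agent
        print(result.stdout, file=sys.stderr)
        return 2
    return 0
```

A complete script would end with `sys.exit(run_hook(json.load(sys.stdin)))`.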

Common Agent Patterns

| Agent | Role | Model | Tools |
|-------|------|-------|-------|
| Builder | Write code, implement features | sonnet | Read, Write, Edit, Bash, Glob, Grep |
| Reviewer | Read-only code review | haiku | Read, Glob, Grep |
| Researcher | Explore codebase, gather context | haiku | Read, Glob, Grep, WebSearch |
| Tester | Write and run tests | sonnet | Read, Write, Edit, Bash, Glob, Grep |
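As a concrete instance of the table, a read-only reviewer could be defined in `.claude/agents/reviewer.md` (hypothetical contents, following the builder example earlier):

```markdown
---
name: reviewer
description: Read-only code review agent. Use after implementation to review changes for correctness and style.
tools: Read, Glob, Grep
model: haiku
---

# Reviewer Agent

You are a read-only reviewer. Never modify files; report findings as a
prioritized list with file and line references.
```

Restricting `tools` to Read, Glob, and Grep guarantees the agent cannot edit files even if prompted to.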

Team Structure

Typical Team Layout

Team Lead (you)
├── Agent A — Module/feature 1
├── Agent B — Module/feature 2
└── Agent C — Tests / verification

Module Boundaries

Define clear boundaries so agents don't conflict:

## Module Boundaries (in CLAUDE.md)

| Domain | Directories | Owner |
|--------|------------|-------|
| API | src/api/ | Agent A |
| Models | src/models/ | Agent B |
| Tests | tests/ | Agent C |

**Shared files (coordinate before editing):**
- src/config.py
- docker-compose.yaml
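The boundary table can also be enforced mechanically. The sketch below is a hypothetical helper (the boundary dict mirrors the CLAUDE.md example above) that flags any changed path an agent does not own:

```python
"""Sketch: check that an agent's changed files stay inside its boundaries."""

# Mirrors the module boundaries table; adjust per project.
BOUNDARIES = {
    "agent-a": ["src/api/"],
    "agent-b": ["src/models/"],
    "agent-c": ["tests/"],
}
# Shared files are exempt here but require coordination before editing.
SHARED = {"src/config.py", "docker-compose.yaml"}

def violations(agent: str, changed_paths: list[str]) -> list[str]:
    """Return changed paths the agent does not own (shared files excluded)."""
    owned = BOUNDARIES.get(agent, [])
    return [
        p for p in changed_paths
        if p not in SHARED and not any(p.startswith(prefix) for prefix in owned)
    ]
```

Wired into a PostToolUse hook or a pre-merge check, a non-empty result would stop an agent before it drifts outside its module.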

Coordination Patterns

1. Shared Files Protocol

When multiple agents need the same file:

- **Read-only sharing**: multiple agents can read shared files freely
- **Write coordination**: only one agent writes to a file at a time
- **Signal completion**: an agent announces when it is done with a shared file
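One way to implement write coordination is a lock-file convention. Everything in this sketch (the `.claude/locks` directory, the naming scheme) is a hypothetical convention, not a built-in Claude Code mechanism:

```python
"""Sketch: lock-file protocol for exclusive writes to shared files."""
import os
from pathlib import Path

LOCK_DIR = Path(".claude/locks")

def _lock_path(shared_file: str) -> Path:
    return LOCK_DIR / (shared_file.replace("/", "__") + ".lock")

def acquire(shared_file: str, agent: str) -> bool:
    """Try to claim exclusive write access; False if another agent holds it."""
    LOCK_DIR.mkdir(parents=True, exist_ok=True)
    try:
        # O_CREAT | O_EXCL makes creation atomic: only one agent can win.
        fd = os.open(_lock_path(shared_file), os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        return False
    os.write(fd, agent.encode())  # record who holds the lock
    os.close(fd)
    return True

def release(shared_file: str) -> None:
    """Signal completion so the next agent can write."""
    _lock_path(shared_file).unlink(missing_ok=True)
```

Agents would call `acquire` before editing a shared file and `release` when done, retrying later if `acquire` returns False.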

2. Dependency Chain

When Agent B depends on Agent A's output:

Agent A: Create database models → Signal done
Agent B: Wait for models → Create API endpoints
Agent C: Wait for endpoints → Write integration tests
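The signal/wait handshake above can be sketched with marker files. The `.claude/signals` directory and stage names are hypothetical conventions:

```python
"""Sketch: signal completion and wait on upstream stages via marker files."""
import time
from pathlib import Path

SIGNAL_DIR = Path(".claude/signals")

def signal_done(stage: str) -> None:
    """Upstream agent calls this when finished, e.g. signal_done('models')."""
    SIGNAL_DIR.mkdir(parents=True, exist_ok=True)
    (SIGNAL_DIR / f"{stage}.done").touch()

def wait_for(stage: str, timeout_s: float = 600, poll_s: float = 5) -> bool:
    """Downstream agent blocks until the stage signals, or times out."""
    marker = SIGNAL_DIR / f"{stage}.done"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if marker.exists():
            return True
        time.sleep(poll_s)
    return False
```

Agent A would call `signal_done("models")`, and Agent B would start with `wait_for("models")` before creating the API endpoints.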

3. Parallel Independence

When agents work on truly independent modules:

Agent A: Frontend component (src/components/)
Agent B: Backend service (src/services/)
Agent C: Documentation (docs/)
No coordination needed — merge at the end.

Verification Commands

Include verification commands in CLAUDE.md so agents can self-check:

## Verification

- **Syntax check**: `python3 -c "import py_compile; py_compile.compile('file.py', doraise=True)"`
- **Tests**: `pytest tests/ -x`
- **Lint**: `ruff check src/`
- **Type check**: `mypy src/`
- **Build**: `npm run build`
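An agent can run all of these in one pass and report a single result before handing off. A minimal sketch, with the command list mirroring the CLAUDE.md example above (adjust per project):

```python
"""Sketch: run verification commands and report pass/fail per check."""
import subprocess

CHECKS = [
    ("lint", ["ruff", "check", "src/"]),
    ("types", ["mypy", "src/"]),
    ("tests", ["pytest", "tests/", "-x"]),
]

def run_checks(checks=CHECKS) -> dict[str, bool]:
    """Return {check_name: passed} so an agent can self-verify."""
    results = {}
    for name, cmd in checks:
        proc = subprocess.run(cmd, capture_output=True)
        results[name] = proc.returncode == 0
    return results
```

A failing entry tells the agent exactly which verification step to rerun verbosely before declaring its module done.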

Known Limitations

  • Context window: Each agent has its own context — they don't share conversation history
  • File conflicts: Two agents editing the same file can cause conflicts
  • Coordination overhead: Teams add communication cost — only worth it for parallel work
  • Experimental: Agent Teams is an experimental feature and may change

Skills + Agent Teams

You can create skills that spawn agent teams — combining repeatable workflows with parallel execution.

Example: /agent-team-review

A code review skill that spawns 3-4 specialized reviewers:

/agent-team-review my-module

→ Spawns:
  - accounting-reviewer (field mapping, data integrity)
  - security-reviewer (access rights, SQL injection, XSS)
  - quality-reviewer (error handling, performance)
  - translation-reviewer (i18n, terminology)

→ Each reviews independently
→ Lead merges into unified P0-P3 report

When to use this pattern

  • Tasks that are naturally parallel (review, audit, testing)
  • Large scope where a single pass would fill the context window
  • Each sub-task is independent and well-defined

When NOT to use

  • Simple tasks (coordination overhead > benefit)
  • Tasks with heavy file dependencies between teammates
  • Small changes where a single /code-review is faster

Best Practices

  1. Define boundaries upfront in CLAUDE.md's module boundaries table
  2. List shared files that need coordination
  3. Include verification commands so agents can self-check
  4. Keep teams small (2-4 agents) to minimize coordination overhead
  5. Use sequential tasks within an agent, parallel tasks across agents
  6. Create skills for repeatable team patterns — if you spawn the same team structure often, make it a skill