Tool System — 42 Tools and a Governance Pipeline
The Tool interface, fail-closed defaults, all 42 tools categorized, and the 14-step execution pipeline that runs before any tool touches your filesystem.
The Tool Interface
Every tool in Claude Code implements the same interface defined in Tool.ts. The key methods:
- call() — the actual execution
- inputSchema — JSON Schema for input validation
- validateInput() — pre-execution validation beyond schema
- checkPermissions() — returns whether the current permission context allows this call
- isReadOnly() — signals that this tool makes no persistent changes
- isDestructive() — signals that this tool can cause irreversible harm
- isConcurrencySafe() — whether multiple instances can run in parallel
- prompt() — additional context injected into the system prompt when this tool is active
The interface is designed around risk, not capability. The first questions asked about any tool are not “what can it do” but “what can it damage” and “who needs to approve it.”
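What such an interface might look like, sketched below. The method names come from the text; the concrete types (`PermissionContext`, `PermissionResult`), the generics, and the toy `readTool` instance are assumptions made for illustration, not the actual Tool.ts definitions.

```typescript
// Hypothetical sketch of the Tool interface. Signatures are assumptions.
interface PermissionContext {
  allowedTools: Set<string>;
}

interface PermissionResult {
  behavior: "allow" | "deny" | "ask";
  message?: string;
}

interface Tool<Input = unknown, Output = unknown> {
  name: string;
  inputSchema: object;                        // JSON Schema for input validation
  call(input: Input): Promise<Output>;        // the actual execution
  validateInput(input: Input): string | null; // extra validation beyond the schema
  checkPermissions(ctx: PermissionContext): PermissionResult;
  isReadOnly(): boolean;                      // makes no persistent changes
  isDestructive(): boolean;                   // can cause irreversible harm
  isConcurrencySafe(): boolean;               // safe to run in parallel with itself
  prompt(): string;                           // extra system-prompt context when active
}

// A toy read-only tool satisfying the interface.
const readTool: Tool<{ path: string }, string> = {
  name: "Read",
  inputSchema: { type: "object", properties: { path: { type: "string" } } },
  call: async ({ path }) => `contents of ${path}`,
  validateInput: ({ path }) => (path.length > 0 ? null : "path must be non-empty"),
  checkPermissions: (ctx) =>
    ctx.allowedTools.has("Read") ? { behavior: "allow" } : { behavior: "ask" },
  isReadOnly: () => true,
  isDestructive: () => false,
  isConcurrencySafe: () => true,
  prompt: () => "",
};
```

Note that three of the eight members exist purely to describe risk (`isReadOnly`, `isDestructive`, `checkPermissions`), which is the point the text makes: the shape of the interface encodes "what can it damage" as prominently as "what can it do".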
buildTool() and Fail-Closed Defaults
Tools are constructed via buildTool(), which applies defaults. The defaults are fail-closed: if you build a tool without specifying isDestructive, it’s treated as destructive. If you don’t specify isReadOnly, it’s treated as capable of writes. If you don’t specify isConcurrencySafe, it’s treated as unsafe to parallelize.
You opt out of restrictions by explicitly declaring a tool safe, not by omitting a declaration. This means a new tool that’s incompletely specified will be treated with maximum caution rather than minimum caution.
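The principle can be sketched as defaulting every omitted flag to its most cautious value. The real buildTool() signature isn't shown in the text; this hypothetical version only illustrates the fail-closed resolution rule.

```typescript
// Hypothetical ToolSpec: safety flags are optional at the call site.
type ToolSpec = {
  name: string;
  isReadOnly?: boolean;
  isDestructive?: boolean;
  isConcurrencySafe?: boolean;
};

// Sketch of fail-closed defaults: undefined means "assume the worst".
function buildTool(spec: ToolSpec) {
  return {
    name: spec.name,
    isReadOnly: spec.isReadOnly ?? false,               // assume it writes
    isDestructive: spec.isDestructive ?? true,          // assume irreversible harm
    isConcurrencySafe: spec.isConcurrencySafe ?? false, // assume unsafe to parallelize
  };
}

// An incompletely specified tool gets maximum caution, not minimum.
const newTool = buildTool({ name: "SomeNewTool" });
```

The design consequence: forgetting a declaration can only make a tool harder to run, never easier, so an under-specified tool fails toward extra approval prompts rather than toward silent execution.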
The 42 Tools
Categorized by function:
File operations (most frequently used): Read, Write, Edit, MultiEdit, Glob, Grep, LS, NotebookRead, NotebookEdit
Shell: Bash — the single most dangerous tool, covered extensively in the permission system (Ch5)
Agent dispatch: AgentTool (spawns sub-agents), Task (manages async tasks)
MCP: MCPTool — wraps external MCP servers as first-class tools
Web: WebFetch, WebSearch
User interaction: AskPermission, TodoRead, TodoWrite, exit_plan_mode
Mode switching: Tools that change the agent’s operating mode (plan mode, auto-accept mode)
This distribution is intentional. File operations dominate because that’s what coding agents actually do most of the time. The shell tool is singular — there’s one Bash tool, not a collection of shell utilities, because having a single entry point makes governance tractable.
The 14-Step Execution Pipeline
Before call() runs, a tool call goes through this pipeline:
1. Tool lookup — resolve the tool name from the model’s output
2. Input parsing — parse the raw JSON arguments
3. Schema validation — validate against inputSchema
4. Custom validation — run validateInput()
5. Permission check — call checkPermissions() against current permission state
6. Pre-tool hooks — run registered PreToolUse hooks (can block or modify input)
7. Hook permission decision — resolve hook output against settings (see Ch5)
8. Speculative Classifier — async risk classification (runs in parallel from step 5)
9. Concurrency gate — check isConcurrencySafe() and queue if needed
10. Execution — run call()
11. Result validation — check output format
12. Post-tool hooks — run PostToolUse hooks
13. Failure hooks — run PostToolUseFailure hooks if execution failed
14. Result injection — package output for the next loop iteration
Steps 5-8 are the security layer and are covered in detail in Ch5. The important point here is the ordering: schema validation happens before permission checks, and permission checks happen before hooks. You can’t bypass schema validation through a hook.
Speculative Classifier
Step 8 runs in parallel with steps 5-7, not sequentially after them. The Speculative Classifier is an async process that classifies the risk level of the pending tool call while the permission and hook machinery is running.
The “speculative” name is accurate: the classifier is making a prediction about whether this call is safe before the execution decision is final. Its output feeds into step 7 (the hook permission decision) as additional signal. If the classifier finishes before the hooks do, its result is available immediately. If it finishes after, it’s available for logging and post-hoc auditing.
This is an engineering tradeoff: running the classifier in parallel provides risk signal without adding sequential latency, so the happy path (safe call, hooks approve quickly) pays no extra wall-clock cost.
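The scheduling pattern can be sketched with promises: start the classifier, await the hook path, then consume the classifier's verdict only if it already finished. Here classifyRisk and runPermissionAndHooks are hypothetical stand-ins, and the delays simulate a fast classifier racing slower hook machinery.

```typescript
const delay = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Hypothetical async risk classifier (step 8).
async function classifyRisk(call: string): Promise<"safe" | "risky"> {
  await delay(5); // simulated classifier latency
  return call.includes("rm -rf") ? "risky" : "safe";
}

// Hypothetical permission + hook machinery (steps 5-7).
async function runPermissionAndHooks(_call: string): Promise<boolean> {
  await delay(25); // simulated: slower than the classifier
  return true;
}

async function decide(call: string): Promise<{ allowed: boolean; classifierUsed: boolean }> {
  let riskVerdict: "safe" | "risky" | null = null;
  // Started but not awaited: runs concurrently with the hook path.
  const classifier = classifyRisk(call).then((v) => (riskVerdict = v));

  const hooksApprove = await runPermissionAndHooks(call); // steps 5-7

  if (riskVerdict !== null) {
    // Classifier finished first: its verdict feeds the decision (step 7).
    return { allowed: hooksApprove && riskVerdict === "safe", classifierUsed: true };
  }
  // Classifier still running: don't add latency; keep its result for auditing.
  void classifier.then(() => { /* post-hoc audit log */ });
  return { allowed: hooksApprove, classifierUsed: false };
}
```

The key property is that the slower of the two paths sets the latency floor; the classifier never extends it, it only enriches whatever decision is being made when it lands.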
Streaming Tool Execution
The loop architecture from Ch2 enables a specific optimization: tools can start running before the model finishes streaming its response. If the model outputs a complete tool call JSON early in its response — before it’s finished with any surrounding explanation text — the pipeline can begin processing that call while the model continues generating.
In practice this means that on calls where the model’s tool invocation appears near the start of its output, the tool’s checkPermissions() and PreToolUse hooks may have already run by the time the model’s streaming response completes. The net effect is reduced wall-clock latency on tool-heavy tasks.
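The enabling trick is detecting a complete tool-call object inside a still-growing stream buffer. A sketch of that detection follows; the `{"tool": ...}` wire format is an assumption (real formats differ), and the brace-depth scan deliberately ignores braces inside JSON strings, which a production parser would have to handle.

```typescript
// Scan an accumulating stream buffer for the first complete tool-call JSON.
// Returns null until the object has fully arrived.
function extractToolCall(buffer: string): { name: string; input: unknown } | null {
  const start = buffer.indexOf('{"tool"'); // hypothetical wire format
  if (start === -1) return null;
  let depth = 0;
  for (let i = start; i < buffer.length; i++) {
    if (buffer[i] === "{") depth++;
    else if (buffer[i] === "}" && --depth === 0) {
      try {
        const obj = JSON.parse(buffer.slice(start, i + 1));
        return { name: obj.tool, input: obj.input };
      } catch {
        return null; // balanced braces but not valid JSON
      }
    }
  }
  return null; // the object hasn't finished streaming yet
}
```

As each chunk arrives, the caller re-runs extractToolCall on the growing buffer; the first non-null result can enter the pipeline while the model keeps generating its surrounding explanation.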
Reference: This chapter draws on Xiao Tan’s (@tvytlx) Claude Code Architecture Deep Dive V2.0 report.