Tool Design &
MCP Integration
How you define tools is how the agent thinks. This domain covers the full stack of tool design — from writing descriptions that produce reliable routing, to implementing structured errors that enable intelligent recovery, to configuring MCP servers and knowing when to reach for built-in tools. Every task statement here directly maps to scenarios the exam will test.
Design effective tool interfaces with clear descriptions and boundaries
The Core Concept
When Claude encounters multiple tools, it has no access to their source code or runtime behavior — only the description text you provide. That description is re-evaluated on every single tool call, making it the most leveraged piece of configuration in your agent design.
The problem compounds with overlapping capabilities. When two tools could plausibly answer the same request, the model falls back to heuristics — and those heuristics are not predictable across invocations.
Knowledge the Exam Tests
Descriptions as Selection Mechanism
Minimal descriptions → unreliable selection. The model has nothing else to differentiate between similar tools.
Required Description Components
Input formats, example queries, edge cases, boundary explanations, and explicit when-to-use-vs-alternatives.
Ambiguity Causes Misrouting
analyze_content vs analyze_document with near-identical descriptions → unpredictable routing failures.
System Prompt Interference
Keyword-sensitive instructions in the system prompt create unintended tool associations that override well-written descriptions.
- Writing descriptions that differentiate purpose, expected inputs, outputs, and when-to-use vs alternatives
- Renaming tools to eliminate functional overlap (e.g., analyze_content → extract_web_results)
- Splitting generic tools into purpose-specific tools with defined input/output contracts (analyze_document → extract_data_points + summarize_content + verify_claim_against_source)
- Reviewing system prompts for keyword-sensitive instructions that might override well-written descriptions
Anatomy of a Production Tool Description
A complete tool description has five required components. Missing any one of them degrades routing reliability.
{
"name": "search_customer_orders",
"description": """
Search for a customer's order history by customer ID or email.
Use this tool when:
- User asks about their orders, deliveries, or purchases
- You need order IDs before calling process_refund
- User references a specific order number
Do NOT use this tool for:
- Checking inventory (use check_inventory instead)
- Looking up product descriptions (use get_product_details)
Input formats accepted:
- customer_id: "cust_12345" or integer 12345
- email: full address, case-insensitive
Returns: List of orders with order_id, status, total, items[], created_at
Edge cases:
- Returns empty list if customer has no orders (not an error)
- If both customer_id and email provided, customer_id takes precedence
""",
"input_schema": {
"type": "object",
"properties": {
"customer_id": { "type": "string", "description": "e.g. 'cust_12345'" },
"email": { "type": "string", "description": "Customer email address" }
}
}
}
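These five components can also be linted mechanically before a tool ships. A minimal sketch in Python: the marker keywords and the missing_components helper are illustrative assumptions, not part of any Anthropic SDK.

```python
# Hypothetical lint for tool descriptions: checks that each of the five
# required components discussed above is present. The keyword heuristics
# are illustrative, not an official rule set.
REQUIRED_COMPONENTS = {
    "use-when": ["use this tool when"],
    "do-not-use": ["do not use", "not for"],
    "input-formats": ["input format"],
    "returns": ["returns:"],
    "edge-cases": ["edge case"],
}

def missing_components(description: str) -> list[str]:
    """Return the names of required components absent from a description."""
    text = description.lower()
    return [name for name, markers in REQUIRED_COMPONENTS.items()
            if not any(m in text for m in markers)]

weak = "Analyzes content and returns results"
print(missing_components(weak))
# → ['use-when', 'do-not-use', 'input-formats', 'returns', 'edge-cases']
```

Running this over the search_customer_orders description above returns an empty list; running it over a one-line description flags everything.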
Weak vs Strong Descriptions
The exam frequently presents minimal descriptions and asks what's wrong, or asks which description causes misrouting. Internalize this contrast.

Weak:
"description": "Analyzes content and returns results"

Strong:
"description": "Parses structured data from web search result JSON. Use ONLY for raw search API output. NOT for PDFs, uploads, or documents — use analyze_document for those."
The Split Pattern
Generic tools that accept a mode parameter are a routing antipattern. The model must guess the correct mode from within a single tool call — the same disambiguation problem that descriptions are supposed to solve, now embedded deeper inside.
One generic tool:
analyze_document(doc_id, mode="extract"|"summarize"|"verify")
30% of extraction requests trigger summarization mode.

Split into purpose-specific tools:
extract_data_points(doc_id)
summarize_content(doc_id)
verify_claim_against_source(claim, doc_id)
Each tool = one purpose.
System Prompt Interference
This is the least obvious knowledge point in 2.1 and therefore high-probability on the exam. Keyword-sensitive instructions in the system prompt can fire on partial matches and override tool selection logic.
// System prompt instruction:
//   "When users ask about orders, prioritize the order tool."
// Tools available:
//   search_orders — "Search customer order history"
//   process_refund — "Process refunds for order items"
//   lookup_invoice — "Retrieve invoice for order billing"
// User: "I need my order invoice for tax purposes"
// Expected: lookup_invoice
// Actual: search_orders (keyword "order" fires the instruction)
The fix: rewrite the system prompt to use specific, unambiguous criteria. "When users ask about order status or tracking, use search_orders." Specificity in the system prompt is as important as specificity in descriptions.
Exam Traps for Task 2.1
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Add prompt instructions to clarify which tool to use | Prompts are evaluated alongside descriptions; keyword instructions often create new collisions | Rewrite descriptions to be unambiguous. Rename if needed. |
| Consolidate overlapping tools into one generic tool | One tool doing multiple things via hidden modes moves disambiguation inside the tool — same root problem | Split into purpose-specific tools, each with a single clear contract |
| Keep description short — "less noise for the model" | No — detailed descriptions reduce ambiguity. The model needs signal, not silence | Include input formats, outputs, edge cases, and negative examples |
| Keep the vague name, improve only the description | Tool names carry semantic signal too. A vague name undermines even a perfect description | Rename to match the scope of the description (analyze_content → extract_web_results) |
🔨 Implementation Task
Build a Tool Suite with Routing Disambiguation
Build a 4-tool customer support suite and achieve deterministic routing across all ambiguous user requests.
- Create 4 tools: search_orders, process_refund, check_shipping_status, escalate_to_human
- Write each description with: use-when, do-not-use-when, input format, output structure, edge cases
- Write a system prompt with a keyword-sensitive instruction that causes a collision — then fix it
- Write 3 ambiguous test prompts (e.g., "where is my stuff") and verify routing is deterministic
- Find a tool doing two things, split it, and prove routing accuracy improves
Exam Simulation — Task 2.1
An agent exposes analyze_content (description: "Analyzes content") and analyze_document (description: "Analyzes documents and content"). During testing, requests for customer order PDFs are routed to analyze_content 40% of the time. What is the most effective fix?

A team ships an analyze_document tool with a mode parameter accepting "extract_data", "summarize", and "verify_claim". The team finds 30% of extraction requests trigger summarization. Without changing underlying logic, what restructuring best resolves this?

An agent has search_orders, process_refund (mentions "order refunds"), and lookup_invoice (mentions "order invoices"). User asks "I need my order invoice for tax purposes" — agent routes to search_orders. What is the primary cause?
Answer: A keyword-sensitive system prompt instruction fires on "order" and routes to search_orders — the model is following the instruction correctly, but the instruction is too broad. Fix: rewrite the system prompt to use specific criteria. A is wrong: The model is obeying the prompt — that's the problem. B is wrong: Tools have no explicit priority weights. D is wrong: List ordering has marginal effect; the keyword collision is dominant.

Implement structured error responses for MCP tools
The Core Concept
When an MCP tool fails, the agent must decide: retry with same inputs? retry with modified inputs? try an alternative? escalate to the coordinator? inform the user? Every one of those decisions requires knowing what kind of failure happened and whether retrying makes sense.
A generic status string like "error": "Operation failed" forces the agent to guess — or worse, to blindly retry a non-retryable failure in an infinite loop. Structured error responses hand the coordinator the data it needs to route appropriately.
Error Categories
The exam tests knowledge of these four distinct error types and their recovery implications. Confusing transient with validation errors, and failing to mark business errors as non-retryable, are the primary failure modes.
Transient
Timeouts, service unavailability, network interruptions. Retryable. The subagent should attempt local recovery before escalating.
Validation
Invalid input, malformed parameters, missing required fields. Not retryable without modifying input. Surface to coordinator for correction.
Business
Policy violations — refund exceeds threshold, restricted action. Not retryable. Requires a customer-friendly explanation and potentially human escalation.
Permission
Insufficient access rights, auth token expired. Conditionally retryable after re-authentication. Always propagate to coordinator with context.
The Required Error Schema
The exam tests both the field names and what each field enables. Know every field in the schema and why it exists.
{
"isError": true, // MCP isError flag — signals failure to agent
"errorCategory": "transient", // transient | validation | permission | business
"isRetryable": true, // drives coordinator retry decision
"message": "Order service timed out. Retry is safe.",
"attemptedQuery": "order_id: ORD-8821", // what was tried (for coordinator context)
"partialResults": null // any partial data recovered before failure
}

{
"isError": true,
"message": "Operation failed"
}
Agent cannot determine if retrying makes sense. Will loop or stall.

{
"isError": true,
"errorCategory": "business",
"isRetryable": false,
"message": "Refund exceeds $500 policy limit. Requires manager approval."
}
Returning isError: true for an empty result is a common mistake that causes the agent to retry a perfectly valid response.
Recovery Strategy
The exam tests the layered recovery pattern: subagents handle what they can locally, and only propagate what they can't resolve — along with everything the coordinator needs to make a good decision.
- For transient errors: subagent implements local retry with exponential backoff. Only propagates to coordinator after local retries exhausted — includes partial results and what was attempted
- For validation errors: subagent returns immediately with structured error and the specific invalid parameter — coordinator must correct input before re-delegating
- For business errors: subagent returns with isRetryable: false and a customer-friendly explanation — coordinator decides whether to escalate to human
- For permission errors: subagent returns with context about what access was needed — coordinator handles re-authentication and re-delegation
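The layered strategy above can be sketched as a small coordinator-side dispatcher. The action names returned here are illustrative, not from any SDK; the field names follow the error schema in this section.

```python
# Sketch of coordinator-side recovery routing driven by the structured
# error fields (isError, errorCategory, isRetryable). The returned action
# names ("retry", "correct_input", ...) are illustrative stand-ins.
def route_error(error: dict) -> str:
    if not error.get("isError"):
        return "continue"                  # success, including empty results
    category = error.get("errorCategory")
    if category == "transient" and error.get("isRetryable"):
        return "retry"                     # after subagent's local retries ran out
    if category == "validation":
        return "correct_input"             # fix the named parameter, re-delegate
    if category == "permission":
        return "reauthenticate"            # then re-delegate with fresh credentials
    return "escalate_to_human"             # business errors and unknowns: fail safe

route_error({"isError": True, "errorCategory": "business", "isRetryable": False})
# → "escalate_to_human"
```

An empty result set arrives with isError false, so it falls through to "continue" rather than triggering a retry.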
Include partialResults in the error payload so the coordinator can use what was retrieved rather than starting over.
Exam Traps for Task 2.2
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Return generic "Operation failed" status | Hides failure type and retryability — agent can't make recovery decisions | Return errorCategory + isRetryable + descriptive message |
| Return isError: true for an empty result set | Empty results from a valid query are a success, not an error — triggers unnecessary retries | Return success with empty array; use isError only for actual failures |
| Subagent terminates entire workflow on timeout | Kills all work done so far; coordinator may have recovery strategies available | Local retry first; propagate structured error + partial results if unresolved |
| Mark business rule violations as retryable | Causes infinite retry loops — policy violations won't resolve themselves on retry | isRetryable: false + customer-friendly message for all business errors |
🔨 Implementation Task
Build a Structured Error Handler for an Order Tool
Implement a process_refund MCP tool that returns correctly structured errors for all four error categories.
- Implement transient error handling: timeout after 5s returns retryable error with attempted query and any partial results
- Implement validation error: missing or invalid order_id returns non-retryable error with specific invalid field named
- Implement business error: refund amount > $500 returns non-retryable error with customer-friendly policy explanation
- Implement permission error: expired token returns structured error guiding coordinator to re-authenticate
- Implement empty result handling: customer with no orders returns success with empty array, NOT an error
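A plain-Python stand-in for the tool handler might look like the following. The $500 limit and field names come from this section; the order-ID format and the function signature are assumptions, and timeout/transient handling is omitted for brevity.

```python
# Plain-Python stand-in for a process_refund MCP handler, returning the
# structured error schema from this section for each category.
# The "ORD-" prefix convention is an illustrative assumption.
def process_refund(order_id, amount, token_valid=True) -> dict:
    if not token_valid:
        return {"isError": True, "errorCategory": "permission",
                "isRetryable": True,   # retryable only after re-authentication
                "message": "Auth token expired. Re-authenticate and retry."}
    if not order_id or not order_id.startswith("ORD-"):
        return {"isError": True, "errorCategory": "validation",
                "isRetryable": False,
                "message": f"Invalid order_id: {order_id!r}. Expected an 'ORD-' id."}
    if amount > 500:
        return {"isError": True, "errorCategory": "business",
                "isRetryable": False,
                "message": "Refund exceeds $500 policy limit. Requires manager approval."}
    return {"isError": False, "refunded": amount}
```

Note that a valid refund for a customer with no matching orders would still return success with empty data, never isError: true.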
Exam Simulation — Task 2.2
A process_refund tool is called for a $750 refund but the policy maximum is $500. The tool currently returns {"error": "Operation failed"}. The agent retries the call four times before stalling. What is the correct fix?
Answer: Return a structured business error — isRetryable: false tells the agent to stop immediately and take a different action (human escalation). The customer-friendly message enables the agent to communicate the reason clearly. A is wrong: A retry cap is a band-aid; it doesn't tell the agent why it failed or what to do next. C is wrong: This is a business error, not a validation error — and marking it retryable causes unnecessary retries. D is wrong: System prompt instructions are probabilistic; structured error metadata is deterministic.

A search_orders tool returns {"isError": true, "message": "No orders found"} when a customer has no order history. The coordinator agent keeps retrying the call assuming it's a transient failure. What is the root cause and fix?
Answer: An empty result set is a success, not an error — return {"orders": [], "total": 0} with a 200-equivalent success status. Marking it as an error corrupts the signal the coordinator uses to make decisions. A is wrong: Parsing error message strings to detect empty results is fragile and bypasses the structured error system. C is wrong: Adding isRetryable: false treats the symptom but not the cause — it's still being reported as an error when it isn't one. D is wrong: System prompt instruction is probabilistic and doesn't fix the corrupted data signal.

Distribute tools appropriately across agents and configure tool_choice
The Core Concept
Tool selection reliability degrades as tool count increases. With 4–5 focused tools, a well-described agent routes correctly and consistently. With 18 tools, the model's decision space becomes cluttered — especially when several tools are broadly described or outside the agent's specialization.
The second problem is specialization boundary violations: a synthesis agent that also has web search tools will occasionally use them when it should be synthesizing — because the tools are available and the user query contains search-like phrasing.
A scoped cross-role tool (e.g., verify_fact) for a high-frequency need is acceptable.
Tool Count Impact
Too Many Tools (18)
Decision complexity overwhelms description clarity. Models begin selecting based on superficial name matches rather than description logic.
Role-Scoped (4–5)
Each tool is clearly the best choice for its use case. The model routes correctly because ambiguity is structurally eliminated.
Cross-Specialization Risk
A synthesis agent with web search tools will attempt web searches for fact-checking instead of flagging uncertainty to the coordinator.
Constrained Alternatives
Replace generic tools with scoped versions: fetch_url → load_document that validates document URLs only, preventing misuse.
tool_choice Configuration
Three options with very different behaviors. The exam will test whether you can select the right one for a given scenario.
| Value | Behavior | Use When |
|---|---|---|
| "auto" | Model may call a tool OR return plain text — its choice | Normal conversational agent; the model decides if tools are needed |
| "any" | Model must call some tool — cannot return conversational text | Structured output extraction where text responses are invalid; guarantee a tool is invoked |
| {"type":"tool","name":"X"} | Model must call exactly the named tool | Force a prerequisite step before enrichment tools (e.g., always run extract_metadata first); subsequent steps handled in follow-up turns |
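Note that in the raw Messages API request body, tool_choice is an object rather than a bare string. The table's shorthand corresponds to request fragments like these (a sketch; consult the current API reference for the authoritative shape):

```json
{ "tool_choice": { "type": "auto" } }
{ "tool_choice": { "type": "any" } }
{ "tool_choice": { "type": "tool", "name": "extract_metadata" } }
```

Each line shows the field as it would appear inside a single request; only one form is sent per call.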
When you force tool_choice: {"type":"tool","name":"extract_metadata"}, only that call is constrained — it returns, and you then send a follow-up turn to process the result and call enrichment tools. The forced selection only controls the first call.
Scoped Tool Access Pattern
The solution to the 85%/15% verification problem — where a synthesis agent needs simple fact-checks 85% of the time — is a scoped cross-role tool rather than full web search access.
Over-provisioned synthesis agent:
- web_search
- fetch_url
- search_database
- synthesize_findings
- format_report
Agent attempts web searches instead of synthesizing.

Role-scoped synthesis agent:
- synthesize_findings
- format_report
- verify_fact (scoped lookup)
85% of verifications handled directly. 15% escalated to coordinator → web search agent.
- Restrict each subagent's tool set to those relevant to its role, preventing cross-specialization misuse
- Replace generic tools with constrained alternatives (e.g., fetch_url → load_document that validates document URLs)
- Provide scoped cross-role tools for high-frequency needs while routing complex cases through the coordinator
- Use tool_choice: "any" to guarantee the model calls a tool rather than returning conversational text
- Use forced tool_choice to ensure prerequisite tools are called first; process subsequent steps in follow-up turns
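The first of these rules can be enforced with a simple check over a planned distribution. A sketch using the agent and tool names from this section; the ALLOWED sets themselves are illustrative.

```python
# Sketch: validate a role-scoped tool distribution against the rules in
# this section (max 5 tools per agent, no out-of-role tools).
# The allowed-set definitions are illustrative, not a standard.
ALLOWED = {
    "web_search_agent": {"web_search", "fetch_url"},
    "synthesis_agent": {"synthesize_findings", "format_report", "verify_fact"},
}

def scoping_violations(distribution):
    problems = []
    for agent, tools in distribution.items():
        if len(tools) > 5:
            problems.append(f"{agent}: {len(tools)} tools (max 5)")
        for t in sorted(tools - ALLOWED.get(agent, set())):
            problems.append(f"{agent}: {t} is outside its specialization")
    return problems

bad = {"synthesis_agent": {"synthesize_findings", "format_report", "web_search"}}
print(scoping_violations(bad))
# → ['synthesis_agent: web_search is outside its specialization']
```

Running this in CI on the agent configuration catches over-provisioning before it reaches testing.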
Exam Traps for Task 2.3
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Give synthesis agent all web search tools for "flexibility" | Over-provisioned agents misuse tools outside their specialization — synthesis agent will web search instead of synthesizing | Scoped verify_fact tool for the 85% common case; complex verifications route through coordinator |
| Use tool_choice: "auto" when structured output is required | "auto" allows the model to return plain text instead of calling the extraction tool | Use tool_choice: "any" to guarantee a tool call; or force the specific extraction tool |
| Give all agents the same complete tool set for consistency | Consistency at the cost of scoping — every agent's decision space is bloated and cross-specialization errors multiply | Each agent gets only the tools for its role; coordinator routes tasks requiring different tools to appropriate agents |
🔨 Implementation Task
Design a Role-Scoped Multi-Agent Tool Architecture
Design the tool distribution for a 3-agent research system: coordinator, web search agent, synthesis agent.
- List the tools each of the 3 agents receives — verify no agent has more than 5 tools and none have tools outside their specialization
- Identify the high-frequency cross-role need in the synthesis agent and design a scoped tool for it
- Write a scenario where tool_choice: "any" is the correct choice and explain why "auto" would fail
- Write a scenario where forced tool selection is required — implement the prerequisite chain across two turns
- Replace one generic tool (fetch_url) with a constrained alternative and explain what misuse it prevents
Exam Simulation — Task 2.3
An extraction agent is configured with tool_choice: "auto" and has one tool: extract_invoice_data. During testing, 15% of calls return a text response saying "I'll analyze this invoice" instead of calling the tool. What is the correct fix?
Answer: tool_choice: "any" guarantees the model calls one of the available tools on every invocation — it cannot return plain text. With a single tool, this is the cleanest guarantee. A is wrong: tool_choice: "auto" is the current configuration that's already failing — it allows the model to choose text when it judges it appropriate. C is wrong: "none" forces the model to return only text with no tool calls — the exact opposite of what's needed. D is wrong: Adding more tools increases choice but doesn't change the fundamental behavior of "auto" mode.

A coordinator agent has web_search, fetch_document, extract_quotes, validate_claim, cross_reference, summarize_section, calculate_statistics, and format_output. During testing, the coordinator occasionally calls format_output mid-analysis (generating partial formatted results before research completes) and calls calculate_statistics before all data is available. What is the most architecturally sound fix?
Answer: Split the pipeline into stage-scoped agents — an agent without format_output physically cannot call it prematurely. The stage-gating is structural, not instructional. A is wrong: The premature calls show prompt instructions aren't reliably followed for tool ordering — the same failure pattern continues. B is partially right (removing format_output from the coordinator is correct) but doesn't solve the calculate_statistics problem and leaves the analysis stage fragmented. C makes no change to the architecture — tool_choice: "auto" is already the default, and better documentation is the same approach that already failed.

Integrate MCP servers into Claude Code and agent workflows
The Core Concept
MCP (Model Context Protocol) is the mechanism for exposing external services — GitHub, databases, Jira, custom APIs — as tools available to Claude. The exam tests two layers: where configuration belongs (project vs user scope) and how to make MCP tools reliable in production (descriptions, resources, credential management).
.mcp.json is version-controlled and shared across the team. ~/.claude.json is personal and never shared. Understanding which configuration to use for which purpose is the #1 tested knowledge point in 2.4.
Server Scoping Rules
Project-Level: .mcp.json
Shared team tooling. Version controlled. Available to all developers who clone the repo. Use for: GitHub, Jira, databases, any team-standard external tools.
User-Level: ~/.claude.json
Personal or experimental servers. Never shared via version control. Use for: personal integrations, tools under development, anything not yet ready for team-wide use.
Discovery at Connection Time
Tools from all configured MCP servers are discovered simultaneously at connection time and all become available to the agent at once — no per-request server selection.
Community vs Custom
Use existing community MCP servers for standard integrations (Jira, GitHub). Reserve custom MCP server development for team-specific workflows without community alternatives.
Configuration & Credential Management
Never hardcode credentials in .mcp.json. Use environment variable expansion. The ${VAR} syntax is expanded at runtime from the shell environment — the config file itself contains no secrets and is safe to commit.
{
"mcpServers": {
"github": {
"type": "sse",
"url": "https://github.mcp.example.com/sse",
"env": {
"GITHUB_TOKEN": "${GITHUB_TOKEN}" // ✓ expanded from shell env
}
},
"jira": {
"type": "sse",
"url": "https://jira.mcp.example.com/sse",
"env": {
"JIRA_API_KEY": "${JIRA_API_KEY}" // ✓ never commit actual keys
}
}
}
}
{
"mcpServers": {
"github": {
"env": {
"GITHUB_TOKEN": "ghp_abc123xyz789actual_token_value" // ✗ secret in git
}
}
}
}
MCP Resources — The Often-Missed Knowledge Point
MCP resources are a mechanism for exposing content catalogs to agents — not as callable tools, but as structured data the agent can read at startup. This reduces exploratory tool calls because the agent already knows what data is available before deciding what to look up.
What Resources Expose
Issue summaries, documentation hierarchies, database schemas, file catalogs — anything that helps the agent understand the landscape before making tool calls.
Why It Matters
Without resources, agents spend multiple tool calls exploring what exists. With resources, the agent reads the catalog once and makes targeted calls directly.
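A resource catalog as returned by a server's resource listing might look like this. The URIs and entries are hypothetical; the uri/name/description/mimeType fields follow the shape MCP uses for resources.

```json
{
  "resources": [
    {
      "uri": "docs://catalog/architecture",
      "name": "Architecture docs index",
      "description": "Hierarchy of design docs: ADRs, service diagrams, runbooks",
      "mimeType": "application/json"
    },
    {
      "uri": "db://schema/orders",
      "name": "Orders schema",
      "description": "Tables, columns, and foreign keys for the orders database",
      "mimeType": "application/json"
    }
  ]
}
```

An agent that reads this catalog at startup can go straight to the orders schema instead of probing the database with exploratory tool calls.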
Exam Traps for Task 2.4
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Put team-shared MCP servers in ~/.claude.json | User-level config is personal and not shared via version control — teammates won't have the server | Shared team tooling goes in project-level .mcp.json, committed to the repo |
| Hardcode API tokens in .mcp.json | Secrets in version control — major security violation | Use ${ENV_VAR} expansion. The variable is set in shell environment, not in the file. |
| Build a custom MCP server for Jira integration | Jira has a community MCP server — building a custom one wastes effort and creates maintenance burden | Use existing community MCP servers for standard integrations; custom only for team-specific workflows |
| Leave MCP tool descriptions minimal (same mistake as 2.1) | Claude will prefer its familiar built-in tools (Grep, Bash) over MCP tools with weak descriptions | Enhance MCP descriptions to explicitly explain what the MCP tool provides that built-ins cannot |
🔨 Implementation Task
Configure a Multi-Server MCP Setup for a Development Team
Configure a project with GitHub and a custom documentation MCP server — with correct scoping and secure credential management.
- Write a .mcp.json with GitHub (community server) and a team-custom documentation server — using env var expansion for both tokens
- Write a ~/.claude.json entry for a personal experimental server you're testing — explain why it goes here and not in .mcp.json
- Write enhanced descriptions for both MCP tools that explicitly explain what they provide beyond Claude's built-in Grep and Bash tools
- Design one MCP resource that exposes a documentation hierarchy — write what the resource catalog would contain and explain how it reduces exploratory tool calls
- Identify which of your tool descriptions from step 3 could collide with Claude's built-in tools and fix the collision
Exam Simulation — Task 2.4
A developer configures an MCP server, but it is not available to other developers on the team. What is the cause?
Answer: The server was configured in ~/.claude.json (user-level config on their local machine), which is not committed to version control and doesn't exist on any other developer's machine. User-level configuration is personal and non-shareable. B is wrong: .mcp.json at the project root IS the correct solution — but the question asks for the cause of the problem, not the fix. C is wrong: CLAUDE.md does not configure MCP servers — it provides instructions and conventions for Claude Code, not server definitions. D is wrong: Shared MCP configuration is a real feature of Claude Code via project-level .mcp.json committed to the repository.

Claude keeps using its built-in Grep tool to search the codebase instead of a configured MCP search_codebase server that provides richer results including test coverage data, dependency graphs, and usage frequency. The MCP tool description currently reads: "Searches the codebase." What is the best fix?

A team configures GitHub and Linear in the project's .mcp.json, and Datadog in the user's ~/.claude.json. After onboarding a new developer, they report GitHub and Linear work correctly but Datadog tools are not available. What is the most likely cause?
Answer: ~/.claude.json is a user-level configuration file stored on the individual developer's machine and not committed to version control. When a new developer clones the repo, they get the project's .mcp.json (GitHub, Linear) but not the original developer's personal ~/.claude.json (Datadog). A is wrong: User-level and project-level configs are loaded additively — one doesn't override the other. Both GitHub/Linear and Datadog should be available if both config files exist. B is wrong: User-level MCP servers are available in all directories, not just the home directory. D is plausible but specific — authentication expiry would affect the original developer too. The pattern of GitHub/Linear working but Datadog absent points to the configuration not existing on the new machine, not to credential expiry.

Select and apply built-in tools (Read, Write, Edit, Bash, Grep, Glob) effectively
The Core Concept
Claude Code's built-in tools are the primary interface for file and codebase operations. The exam tests precise selection — not just "which tool reads files" but "given this specific task, which combination of tools is correct, and what do you do when your first choice fails?"
The two most commonly confused pairs: Grep vs Glob (content search vs path matching) and Edit vs Read+Write (targeted modification vs full file replacement fallback).
Built-in Tool Reference
Grep — Searches inside files for text patterns. Use for: finding all callers of a function, locating error messages, finding import statements, searching for variable names across the codebase.
Glob — Matches file paths by name or extension patterns. Use for: finding all test files, locating all config files, getting a list of all TypeScript files in a directory.
Read — Loads the complete content of a file into context. Use for: understanding a complete module, loading a config file, or as the first step in a Read → Write fallback when Edit fails.
Write — Writes complete file content. Use for: creating new files, or as the second step in a Read → Write fallback when Edit cannot find a unique anchor.
Edit — Modifies a file by matching unique anchor text and replacing it. Use for: targeted bug fixes, adding a conditional, changing a specific line. Fails if the anchor text appears more than once.
Bash — Executes shell commands. Use for: running tests, executing scripts, git operations, package installs, anything requiring system-level execution.
Critical Usage Patterns
The Edit → Read + Write Fallback:
Failure case: the anchor return null; appears multiple times in the file. Edit cannot determine which one to modify and throws the error "Anchor text not unique".
The fallback:
1. Read the full file
2. Identify the correct occurrence by context (line number/surroundings)
3. Write the complete modified file
Reliable. Always works.
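Outside Claude Code, the same fallback can be sketched in plain Python, with pathlib's read_text/write_text standing in for the Read and Write tools and an occurrence index standing in for "context" (the helper name is illustrative):

```python
# Sketch of the Read -> modify -> Write fallback for a non-unique anchor:
# read the whole file, locate the intended occurrence, rewrite the file.
from pathlib import Path

def edit_nth_occurrence(path: Path, anchor: str, replacement: str, n: int) -> None:
    """Replace the n-th (0-based) occurrence of `anchor` in the file."""
    text = path.read_text()                      # step 1: Read the full file
    idx = -1
    for _ in range(n + 1):                       # step 2: locate by occurrence index
        idx = text.find(anchor, idx + 1)
        if idx == -1:
            raise ValueError(f"fewer than {n + 1} occurrences of {anchor!r}")
    # step 3: Write the complete modified file
    path.write_text(text[:idx] + replacement + text[idx + len(anchor):])
```

Because the whole file is rewritten, this path never fails on duplicate anchors, at the cost of holding the full file content.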
Incremental Codebase Understanding:
- Start with Grep to find entry points — search for main function names, entry module exports, top-level imports
- Use Read selectively to follow imports and trace flows from those entry points — not reading all files upfront
- For function usage tracing: first use Grep to identify all exported names, then search for each name across the codebase
- Never use Read on every file in a large codebase — build understanding incrementally from the most relevant starting points
Exam Traps for Task 2.5
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Use Grep to find all *.test.tsx files | Grep searches file contents, not file names — wrong tool for path matching | Use Glob with pattern **/*.test.tsx |
| Use Glob to find all files that import React | Glob matches file paths, not file contents — can't search inside files | Use Grep with pattern import.*React |
| Retry Edit after "anchor not unique" error | The anchor won't become unique on retry — structural problem | Fall back to Read → modify → Write immediately |
| Read all files in a large codebase upfront for context | Fills context window with irrelevant content; context degradation follows | Grep to find entry points → Read selectively to follow specific imports |
🔨 Implementation Task
Navigate and Modify a Real Codebase Using Built-in Tools Only
Using only Claude Code's built-in tools, complete these codebase operations on a sample project.
- Find all test files in the project — use the correct tool (Glob) and write the glob pattern
- Find all files that import a specific utility function — use the correct tool (Grep) and write the search pattern
- Make a targeted edit to a unique line — use Edit. Then intentionally trigger the non-unique anchor failure and implement the Read + Write fallback
- Trace the full usage of an exported function: find all exported names → grep for each across the codebase → document the dependency graph
- Build understanding of an unfamiliar module incrementally: start with one Grep, identify the 3 most relevant files, Read only those 3
Exam Simulation — Task 2.5
A project's test files end with the .test.tsx suffix. Which built-in tool and approach correctly finds all of these files?
Answer: Glob with the pattern **/*.test.tsx matches all test files regardless of location. A is wrong: Grep searches file contents for patterns, not file names — using it to find files named .test.tsx is the wrong tool. C is wrong: Manually recursing directories is inefficient and fills context unnecessarily. D may work but Bash is a lower-level fallback — when a dedicated tool (Glob) exists for the task, use it. The exam expects you to know the purpose-built tool for each operation.

On a related question about refactoring via Bash and sed: sed is a pattern-replacement tool. It cannot handle semantic variation in how different files use the old interface.