Claude Code Configuration & Workflows
Domain 3 is the hands-on configuration domain. It tests whether you know exactly where to put a CLAUDE.md file so team members receive it, which frontmatter keys control skill isolation, and which CLI flag prevents CI pipeline hangs. The exam rewards precision — knowing the right answer often means knowing the exact file path or the exact flag name.
This domain maps primarily to Scenario 2 (Code Generation with Claude Code) and Scenario 5 (CI/CD Integration). Both appear in all four scenario slots the exam presents, making Domain 3 knowledge reliably tested.
Configure CLAUDE.md files with appropriate hierarchy, scoping, and modular organisation
The Core Concept
Claude Code reads CLAUDE.md files from multiple locations and merges them into a single instruction set. The level at which you place a file determines who receives it — and whether it travels with the codebase via version control.
A classic exam trap: instructions saved to ~/.claude/CLAUDE.md (user-level) instead of the project-level file. User-level config is never committed to version control — it exists only on the original developer's machine.

Configuration Hierarchy
User-Level — ~/.claude/CLAUDE.md
Personal preferences. Never committed to version control. Applies only on that developer's machine. Use for personal shortcuts, preferred styles not enforced team-wide.
Project-Level — .claude/CLAUDE.md or root CLAUDE.md
Shared team standards. Committed to version control. Received by every developer who clones the repo. This is where coding conventions, testing standards, and architecture rules belong.
Directory-Level — src/payments/CLAUDE.md
Subdirectory-specific instructions. Applies when Claude is working in that directory. Use for package-specific conventions that don't apply to the whole project.
/memory Command
Verify which memory files are currently loaded. Use to diagnose why instructions are inconsistent across sessions — check which CLAUDE.md files Claude actually sees.
```
my-project/
├── CLAUDE.md                  # ← project-level, committed to git
├── .claude/
│   ├── CLAUDE.md              # ← alternative project-level location
│   ├── rules/
│   │   ├── testing.md         # ← topic-specific rules
│   │   ├── api-conventions.md
│   │   └── deployment.md
│   ├── commands/              # ← project-scoped slash commands
│   └── skills/                # ← project-scoped skills
└── src/
    └── payments/
        └── CLAUDE.md          # ← directory-level, payments only

~/.claude/CLAUDE.md            # ← user-level, NEVER committed
```
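The project-level pieces of this layout can be scaffolded in a throwaway directory for practice. This is a minimal sketch; the demo/ directory and the rule contents are placeholders, not part of any spec:

```shell
# Scaffold the project-level pieces of the hierarchy in a throwaway directory.
# ~/.claude/CLAUDE.md is deliberately left out: it belongs to the user, not the repo.
mkdir -p demo/.claude/rules demo/src/payments

printf '# Team standards\nAll code must pass lint before commit.\n' > demo/CLAUDE.md
printf '# Testing rules\n' > demo/.claude/rules/testing.md
printf '# Payments conventions\nAlways use idempotency keys.\n' > demo/src/payments/CLAUDE.md

# Everything under demo/ is safe to commit; verify the files exist:
ls demo/CLAUDE.md demo/.claude/rules/testing.md demo/src/payments/CLAUDE.md
```

Everything under demo/ maps to a committed path in the tree above; the user-level file is the one piece that must never appear in the repository.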
Modular Organisation with @import
Large CLAUDE.md files become hard to maintain. The @import syntax lets you split into focused topic files and include only what's relevant per package.
```
# Project Root CLAUDE.md

## Universal Standards
All code must be TypeScript strict mode.
All API endpoints require input validation.

## Imported Standards
@import .claude/rules/testing.md
@import .claude/rules/api-conventions.md

---

# src/payments/CLAUDE.md (directory-level, payments team)

## Payment Service Standards
Always use idempotency keys for payment calls.
Never log raw card numbers or CVV values.

## Payment-Specific Imports
@import ../../.claude/rules/pci-compliance.md
```
- Use the .claude/rules/ directory for topic-specific files as an alternative to a single large CLAUDE.md
- Use @import to selectively include relevant standards per package — a payments package imports PCI rules, a frontend package imports UI conventions
- Diagnose "teammate not receiving instructions" by checking whether the file is at user-level vs project-level
- Use the /memory command to verify exactly which CLAUDE.md files are loaded in the current session
Exam Traps for Task 3.1
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| New team member not receiving instructions → check project CLAUDE.md | If teammates don't receive instructions that work on your machine, the instructions are in ~/.claude/CLAUDE.md, not version-controlled project-level | Move shared instructions to .claude/CLAUDE.md or root CLAUDE.md and commit |
| Use a single monolithic CLAUDE.md for all rules | Becomes unmaintainable; loads irrelevant rules into every session context; harder for maintainers to own their sections | Split into .claude/rules/*.md files; use @import to compose per-context |
| Place personal API key preference rules in project CLAUDE.md | Personal preferences leak to all teammates via version control — conflicts with team standards | Personal preferences go in ~/.claude/CLAUDE.md; team standards go in project-level |
🔨 Implementation Task
Build and Verify a Complete CLAUDE.md Hierarchy
Create all three config levels, verify inheritance, and deliberately break sharing to confirm the diagnostic path.
- Create project-level CLAUDE.md with 3 universal rules; commit to git
- Create ~/.claude/CLAUDE.md with 2 personal rules; verify only you see them
- Create src/payments/CLAUDE.md with payment-specific rules; verify they load only when working in that directory
- Split the project CLAUDE.md into 3 .claude/rules/*.md files using @import — test that all rules still load
- Run the /memory command; verify the output matches exactly which files you expect to be loaded
Exam Simulation — Task 3.1
Question 1 (answer explanation): The user-level file (~/.claude/CLAUDE.md) lives only on the developer's own machine and is never committed to version control. The fix: move the instructions to project-level (.claude/CLAUDE.md or root CLAUDE.md) and commit. A is wrong: CLAUDE.md support is universal across Claude Code versions. C is wrong: @import is a native CLAUDE.md feature. D is wrong on two counts: first, it misidentifies the root cause — the problem is that the file lives at user-level and was never committed, not a subdirectory inheritance issue. Second, D's premise is itself false: a project-level CLAUDE.md does apply across all subdirectories within the project — there is no need to create one in each folder.

Question 2: A team maintains a single CLAUDE.md with 800 lines covering TypeScript standards, testing conventions, PCI compliance rules, infrastructure deployment guidelines, and frontend UI patterns. Different team members own different sections but regularly overwrite each other's changes. What is the best restructuring approach?

Answer explanation: Splitting into .claude/rules/*.md files with @import gives each team ownership of their file while keeping everything version-controlled and composable. A would work for directory-scoped rules but doesn't help when conventions span multiple directories (e.g., PCI rules apply to multiple packages). C addresses access control, not maintainability — conflicts still occur. D is wrong: moving shared conventions to user-level means they won't reach teammates at all.

Question 3: A team uses all three levels: user-level (~/.claude/CLAUDE.md), project-level (root CLAUDE.md), and src/payments/CLAUDE.md with PCI-DSS rules. A developer working in src/payments/ reports the PCI-DSS rules are active, but their personal preferences from ~/.claude/CLAUDE.md are absent. What is the most accurate diagnosis?

Create and configure custom slash commands and skills
The Core Concept
Custom slash commands are Markdown templates stored as files — they can be shared team-wide or kept personal. Skills are more powerful: they run in their own context and support frontmatter configuration that controls isolation, tool access, and how they prompt for arguments.
context: fork is the key that prevents skill output from polluting the main conversation. When a skill does verbose exploration or generates a lot of intermediate reasoning, it runs in an isolated sub-agent context and returns only the summary. Without it, all that output becomes part of the main session's context window.

Slash Commands: Scoping

Commands in .claude/commands/ are committed to version control and shared with every teammate; commands in ~/.claude/commands/ are personal and visible only on your own machine.
Skills: Frontmatter Configuration
Skills use YAML frontmatter at the top of the SKILL.md file. Four frontmatter keys the exam tests:
```
---
name: architecture-review
description: Reviews system architecture and returns a focused summary
context: fork                     # ← runs in isolated subagent context
allowed-tools:                    # ← restricts which tools can run
  - read_file
  - search_code
  - Grep
argument-hint: "path/to/module"   # ← prompts for arg if missing
---
Review the architecture of the module at {{args}}.

Analyse:
1. Dependency structure and circular imports
2. Public API surface — is it minimal?
3. Separation of concerns violations

Return a concise summary with top 3 findings.
```
context: fork
Runs the skill in an isolated sub-agent. Verbose output stays in the fork and doesn't accumulate in the main conversation. The skill returns only a summary to the main session. Use for: exploration, brainstorming, verbose analysis.
allowed-tools
Restricts which tools the skill can invoke. A write-only skill can be locked to Write and Edit only — preventing it from running destructive Bash commands accidentally. Principle of least privilege for skills.
argument-hint
When a developer invokes the skill without providing arguments, Claude Code shows this hint and asks for the missing parameter. Prevents skills from running against the wrong target.
Personal Variants
Create personal skill variants in ~/.claude/skills/ with different names. They won't conflict with or affect team skills — teammates see only what's in .claude/skills/.
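A sketch of creating such a personal variant. The frontmatter keys mirror the example above; the skill name, body, and the local skills-demo/ target (standing in for ~/.claude/skills/) are all hypothetical:

```shell
# Write a personal SKILL.md variant; in real use the target would be
# ~/.claude/skills/my-arch-review/SKILL.md so teammates never see it.
mkdir -p skills-demo/my-arch-review
cat > skills-demo/my-arch-review/SKILL.md <<'EOF'
---
name: my-arch-review
description: Personal variant of the team architecture-review skill
context: fork
allowed-tools:
  - read_file
  - Grep
---
Review the module at {{args}} and return only the top 3 findings.
EOF

# Confirm the isolation and tool-restriction keys landed in the file
grep -c 'context: fork' skills-demo/my-arch-review/SKILL.md
```

Because the name differs from the team skill, both can coexist; the personal file simply never enters version control.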
Exam Traps for Task 3.2
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Use a skill without context: fork for verbose codebase exploration | All exploration output accumulates in the main conversation's context window — quickly exhausting available tokens and degrading subsequent responses | Add context: fork so verbose discovery runs in isolation; only the summary returns to the main session |
| Share a personal experimental slash command by putting it in .claude/commands/ | Anything in .claude/commands/ is committed to version control and shared with all teammates | Personal/experimental commands go in ~/.claude/commands/; team-ready commands go in .claude/commands/ |
| Omit allowed-tools from a skill that should only write files | Without tool restrictions, the skill can invoke any available tool — including destructive Bash commands | Specify allowed-tools: [Write, Edit] to restrict to safe file operations |
🔨 Implementation Task
Build 3 Skills with Different Frontmatter Configurations
Demonstrate each key frontmatter option and verify its effect.
- Create a project slash command at .claude/commands/pr-review.md; verify it appears for all team members via git
- Create a skill with context: fork for codebase analysis; run it and confirm that verbose exploration output does NOT appear in main conversation history
- Create a skill with allowed-tools: [Write, Edit]; attempt to call Bash from within the skill and confirm it is blocked
- Create a skill with argument-hint: "path/to/target"; invoke it without arguments and verify the hint prompt appears
- Create a personal variant of a team skill in ~/.claude/skills/ with a different name; confirm teammates cannot see it
Exam Simulation — Task 3.2
Question 1: A developer builds a codebase-audit skill that reads all source files, generates detailed intermediate analysis for each, and produces a final summary. After running it, they notice that all the intermediate file-by-file analysis has been added to the main conversation and subsequent Claude responses are much slower and less focused. What single frontmatter change would fix this?

Answer explanation: context: fork is the exact mechanism for isolating verbose skill output. The skill runs in a sub-agent context — all intermediate output stays there — and only the final summary is returned to the main conversation. A restricts tool access, not output verbosity. C changes sharing scope, not execution context. D is invented — argument-hint prompts for missing arguments; it has no effect on output verbosity.

Question 2: A developer builds a generate-migration skill that should write SQL migration files but must never execute them via Bash. How should they configure the skill frontmatter?

Answer explanation: allowed-tools is the frontmatter key for tool restriction. Specifying only [Write, Edit] prevents the skill from invoking Bash regardless of any instructions in the skill body. A is probabilistic — a prompt instruction can be bypassed. C is wrong: context: fork isolates context accumulation, not tool permissions — Bash would still run if allowed. D is wrong: user-scoped vs project-scoped affects sharing, not tool permissions.

Question 3: A team maintains a /deploy skill that reads infrastructure config, generates a deployment plan, and executes it via Bash. A junior developer accidentally deployed to production (the config file referenced the wrong target). The team wants to add a structural safeguard that prevents execution without explicit plan review. Which approach provides the most reliable safeguard?

Answer explanation: Splitting the workflow into two skills is the reliable safeguard: /deploy-plan physically cannot execute (no Bash access), and /deploy-execute requires an explicit second invocation after review. The developer is forced by the tool design to pause and review. A is wrong: a prompt-based confirmation is what already exists implicitly — the developer who ran /deploy presumably intended to deploy; a confirmation prompt doesn't prevent a wrong config file being used. C is wrong: context: fork isolates conversation context, but the Bash commands still execute and affect the actual infrastructure. D is wrong: an environment argument doesn't prevent the junior developer from passing the wrong environment name — the config file was already selecting the wrong target.

Apply path-specific rules for conditional convention loading
The Core Concept
Files in .claude/rules/ support YAML frontmatter with a paths field. When a paths field is present, the rules load only when Claude is editing files matching those glob patterns. This conditional loading reduces irrelevant context and token usage.
When test files are spread across src/, lib/, and packages/, a single path-specific rule with pattern **/*.test.tsx covers all of them. A subdirectory CLAUDE.md can only cover files in its own directory.

Glob Patterns and Rule Files
```
---
paths:
  - "**/*.test.tsx"   # all test files, any directory
  - "**/*.spec.ts"    # all spec files, any directory
  - "tests/**/*"      # anything in a tests/ directory
---
## Testing Standards
Always use React Testing Library, not Enzyme.
Test user behaviour, not implementation details.
Mock external API calls using msw.
Each test file must have at least one edge-case test.
Snapshot tests require an inline comment explaining intent.
```

```
---
paths:
  - "terraform/**/*"   # all terraform files
  - "**/*.tf"          # any .tf file anywhere
---
## Terraform Standards
All resources require name, environment, and team tags.
Never hardcode AWS account IDs — use data sources.
Remote state must use S3 backend with DynamoDB lock table.
```
- Use path-specific rules for conventions that apply to a file type across directories (test files, migration files, infrastructure files)
- Use directory-level CLAUDE.md for conventions that apply to a domain or service (payments service, admin panel) regardless of file type
- Path rules reduce token usage by loading only when editing matching files — infrastructure rules don't load during frontend work
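The glob semantics can be sanity-checked outside Claude Code. A sketch using bash's globstar option, which approximates the `**` patterns used in rule frontmatter (all file names here are made up):

```shell
# Bash's globstar makes ** match across directory levels, approximating
# the paths: patterns in rule frontmatter; nullglob drops non-matches.
shopt -s globstar nullglob
mkdir -p glob-demo/src/app glob-demo/terraform
touch glob-demo/src/app/button.test.tsx \
      glob-demo/src/app/button.tsx \
      glob-demo/terraform/main.tf

cd glob-demo
printf '%s\n' **/*.test.tsx   # matches src/app/button.test.tsx, not button.tsx
printf '%s\n' **/*.tf         # matches terraform/main.tf
```

Note that `**/*.test.tsx` picks up the test file at any depth while leaving the sibling source file alone, which is exactly the conditional-loading behaviour the rule relies on.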
Exam Traps for Task 3.3
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Use directory CLAUDE.md to enforce test conventions spread across multiple directories | A directory-level file only covers its own directory — test files in sibling or parent directories won't receive the rules | Use a path-specific rule with **/*.test.tsx glob — applies everywhere the pattern matches |
| Load all rules in the root CLAUDE.md regardless of what file is being edited | Infrastructure, payment, and frontend conventions all load for every file — wastes tokens and introduces irrelevant instructions | Use path-scoped rules with paths: frontmatter — each rule loads only when relevant |
🔨 Implementation Task
Create Path-Scoped Rules and Verify Conditional Loading
- Create .claude/rules/testing.md with paths: ["**/*.test.tsx", "**/*.spec.ts"] and 3 testing conventions
- Create .claude/rules/terraform.md with paths: ["terraform/**/*", "**/*.tf"] and 2 Terraform rules
- Open a regular TypeScript source file — verify neither testing nor terraform rules are loaded
- Open a test file — verify testing rules load; verify terraform rules do NOT load
- Open a .tf file — verify terraform rules load; verify testing rules do NOT load
Exam Simulation — Task 3.3
Question 1: Test conventions must apply to test files spread across src/, packages/, and lib/. Which configuration approach correctly applies the conventions to all test files regardless of location?

Question 2: A monorepo contains a TypeScript API (src/api/), a Python ML pipeline (src/ml/), and a React frontend (src/frontend/). Each service has conflicting conventions (Python uses snake_case, JS uses camelCase). A single root CLAUDE.md with all conventions causes Claude to apply Python conventions to TypeScript files. What is the cleanest solution?

Answer explanation: Path-scoped rules in .claude/rules/ are the designed mechanism for exactly this use case — each convention set loads only when Claude is editing files in the matching path, keeping shared conventions in the root CLAUDE.md and service-specific ones scoped. A would work technically but requires duplicating any shared conventions (git commit format, security guidelines) across 3 files and becomes a maintenance burden as services change. C is wrong: plain text headings in CLAUDE.md are not a real scoping syntax — Claude reads the entire file for context regardless of headings, applying all conventions to all files. D loses all shared conventions — project-wide rules that apply everywhere (security policies, commit standards) have no home without a root CLAUDE.md.

Question 3 (answer explanation): Path-scoped rules in .claude/rules/ activate exactly when editing matching files and are silent otherwise — this is the mechanism designed for conditional convention loading. A requires maintenance work: adding a new handler directory means creating another CLAUDE.md in that directory, and any shared conventions must be duplicated. B doesn't scope the rules — the security rules still load for all files; only the model is being asked to apply judgment, which was already failing (test file false alarms). C breaks discoverability — rules that require manual invocation are easily forgotten and create inconsistent application. Path-scoped rules are automatic and reliable.

Determine when to use plan mode vs direct execution
The Core Concept
Plan mode lets Claude explore a codebase and propose an approach before making any changes. It prevents costly rework on complex tasks where choosing the wrong implementation approach means undoing a lot of work. Direct execution is appropriate when the task is well-scoped, the correct approach is clear, and the change is small.
Decision Framework
- Use plan mode for exploration, then switch to direct execution once the approach is confirmed — the combination is more powerful than either alone
- Plan mode prevents "costly rework" — the exam phrase for situations where the wrong initial approach would require undoing many changes
- Direct execution is fine for well-understood, single-file, clearly-scoped tasks — plan mode would add overhead without benefit
The Explore Subagent
During a multi-phase task, the Explore subagent isolates verbose discovery output. Rather than flooding the main session with file-by-file analysis, the Explore subagent does the discovery and returns a structured summary. This preserves context window space for the implementation phase.
What It Solves
Multi-phase tasks (explore → plan → implement) generate huge amounts of discovery output. Without isolation, the exploration phase exhausts the context window before implementation begins.
What It Returns
A structured summary of findings — not the raw exploration. The implementation phase starts with a clean context containing only what matters: the summary and the implementation plan.
When to Use It
Any task with a discovery phase that generates more than a few pages of intermediate output: codebase mapping, dependency tracing, test gap analysis, legacy system exploration.
Relationship to context: fork
The Explore subagent is the same isolation concept as context: fork in skills — both run in a separate context and return summaries. Explore is the plan mode variant; fork is the skill variant.
Exam Traps for Task 3.4
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Use plan mode for a single-file bug fix with a clear stack trace | Adds unnecessary overhead — exploration and planning phases add latency for a task where the correct fix is already clear | Direct execution; plan mode is for architectural decisions and multi-file changes with multiple valid approaches |
| Use direct execution for a library migration affecting 45+ files | Without planning, Claude may choose an incompatible migration path — discovering the incompatibility after making 30 changes that must be reverted | Plan mode first: explore affected files, identify approach options, confirm the plan before any changes are made |
| Run exploration and implementation in the same session without Explore subagent | Verbose discovery output fills the context window — implementation phase has reduced capacity and may produce lower-quality output | Use Explore subagent for discovery; implementation phase starts with a clean context plus the exploration summary |
🔨 Implementation Task
Classify and Execute 4 Tasks with Appropriate Mode
- Task A: "Fix null pointer exception in payment processor — stack trace shows line 47 of payments.ts" → Direct execution. Do it. Note time taken.
- Task B: "Migrate from axios to fetch across the entire codebase" → Plan mode. Observe the exploration output and proposed approach before approving.
- Task C: "Add comprehensive integration tests to the authentication module" → Plan mode with Explore subagent. Note that context is clean for implementation.
- Task D: "Add input validation to the createUser endpoint" → Direct execution. Note that planning would add overhead with no benefit.
- Reflect: what signals in the task description indicate plan mode vs direct execution? Write 3 heuristics.
Exam Simulation — Task 3.4
Question: During a direct-execution refactor, Claude edits several files, reaches transaction.ts, then asks which error handling pattern to use. The developer realizes this foundational choice affects all previous edits. What workflow would have prevented the rework?

Apply iterative refinement techniques for progressive improvement
The Core Concept
When Claude's output is inconsistent with what you want, the problem is often underspecified expectations. Iterative refinement is about progressive specification: start with what you have, identify exactly where the gap is, and communicate the correction with enough precision that Claude can generalise it to novel cases.
Refinement Techniques
Concrete Input/Output Examples
When prose descriptions produce inconsistent transformations, provide 2–3 concrete input/output pairs. Shows the shape of the expected transformation rather than describing it in words. Most effective for format and structure.
Test-Driven Iteration
Write test suites first covering expected behaviour, edge cases, and performance. Then iterate by sharing failing tests — Claude has an unambiguous signal about what needs to change. Eliminates interpretation gaps.
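A minimal sketch of that loop, with the runner output simulated by a file; in practice you would capture your real test runner's output, and the file names and test text here are hypothetical:

```shell
# Simulate a failing-test report; a real loop would capture this
# from the project's actual test runner.
cat > failing.txt <<'EOF'
FAIL token.spec.ts: expected payload to contain tenant_id claim
EOF

# The failing output becomes the unambiguous correction signal
# in the next message to Claude.
PROMPT="These tests are failing:
$(cat failing.txt)
Update the implementation until they pass. Do not modify the tests."
printf '%s\n' "$PROMPT"
```

The key design choice is the closing constraint: the tests are the specification, so the model is told to change the implementation, never the tests.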
Interview Pattern
Ask Claude to surface design considerations before implementing. Claude asks questions about cache invalidation strategies, failure modes, and unknown edge cases. Use for unfamiliar domains where you may not know what you don't know.
Batch vs Sequential Fixes
When issues interact (fixing one changes what another should be), provide all fixes in a single message. When issues are independent, fix them sequentially so each fix can be validated before the next.
```
# Instead of: "Format the dates consistently"
# → Too vague. Claude will pick a format arbitrarily.

# Do this: provide concrete input/output pairs
"""
The transformation should work like this:

Input:  { "created": "2024-01-15T14:30:00Z" }
Output: { "created": "Jan 15, 2024 at 2:30 PM UTC" }

Input:  { "created": 1705329000 }
Output: { "created": "Jan 15, 2024 at 2:30 PM UTC" }

Input:  { "created": null }
Output: { "created": "Unknown" }

Apply this transformation to all date fields in the response.
"""
```
Exam Traps for Task 3.5
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Describe transformation requirements in prose when they produce inconsistent results | Prose descriptions are interpreted differently in different contexts — Claude generalises based on examples, not instructions | Provide 2–3 concrete input/output pairs that demonstrate the exact transformation including edge cases |
| Send interacting bugs one at a time for sequential fixing | Each fix changes the codebase state — the second bug's fix may be invalidated by the first, or the intermediate state may cause new errors | Provide all interacting issues in a single message; sequential iteration is correct only for independent issues |
| Implement first, then write tests to verify | Tests written after implementation often over-fit to the implementation rather than specifying the intended behaviour; edge cases are missed | Write test suite first covering expected behaviour and edge cases; iterate by sharing failing tests |
🔨 Implementation Task
Practice All Four Refinement Techniques
- Find a transformation task where prose produces inconsistent output; replace with 3 concrete input/output examples; measure consistency improvement
- Write a test suite for a feature before implementing; share failing tests with Claude iteratively until all pass
- Use the interview pattern for an unfamiliar domain (e.g., "implement a distributed lock"); observe which design questions Claude surfaces that you hadn't considered
- Identify 2 interacting bugs in a codebase; fix them in a single message; compare to fixing sequentially and observe the intermediate state problem
- Document your personal refinement heuristics: when would you reach for each of the four techniques?
Exam Simulation — Task 3.5
Question: A developer iterates on a generated token implementation: the first attempt is missing the tenant_id claim. By the third attempt, a different required field is missing. Each iteration fixes one issue and introduces another. What is the fundamental problem and correct fix?

Answer explanation: The fundamental problem is underspecified expectations — the complete required output was never stated, so each correction addresses only the field mentioned most recently. The fix is to provide the full specification up front, for example as concrete input/output pairs covering every required field. /compact doesn't change what you ask for, only how much history is retained.

Integrate Claude Code into CI/CD pipelines
The Core Concept
Claude Code is designed for interactive use by default — it waits for user input when it needs clarification. In CI pipelines, there is no user. Without the correct flag, the pipeline hangs indefinitely waiting for input that will never come.
The -p (or --print) flag runs Claude Code in non-interactive mode — it processes the prompt, outputs to stdout, and exits without waiting for user input. This is the exam answer whenever a CI/CD pipeline hangs. No other flag, environment variable, or Unix workaround is correct.

CI/CD CLI Flags
```
# ✓ Non-interactive: process and exit
claude -p "Analyze this pull request for security issues"

# ✓ Structured JSON output for automated parsing
claude -p "Review for security issues" \
  --output-format json \
  --json-schema review-schema.json

# Result: machine-parseable JSON for inline PR comments
# {
#   "findings": [
#     {"file": "auth.ts", "line": 42, "severity": "high", "message": "..."}
#   ]
# }

# ✗ WRONG — hangs waiting for interactive input
claude "Analyze this pull request for security issues"
```
-p / --print
Non-interactive mode. Processes the prompt, writes output to stdout, and exits. Required for all CI/CD usage. Without it, the pipeline hangs indefinitely.
--output-format json
Forces structured JSON output. Combine with --json-schema to enforce a specific schema. Enables automated posting as inline PR comments without text parsing.
Re-run Deduplication
When re-running a review after new commits, include prior review findings in context and instruct Claude to report only new or still-unaddressed issues. Prevents duplicate comments on the same PR.
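A sketch of how that context injection might look in a re-run step; the file names and the sample finding are hypothetical:

```shell
# Prior findings saved from the first review run (contents hypothetical)
cat > prior-findings.txt <<'EOF'
auth.ts:42 [high] raw token written to application log
EOF

# On re-run, prepend the prior findings and constrain the output
# to new or still-unaddressed issues only.
PROMPT="Prior review findings:
$(cat prior-findings.txt)
Review the updated diff and report ONLY new or still-unaddressed issues."

# In CI this would be passed non-interactively, e.g.:
#   claude -p "$PROMPT" --output-format json
printf '%s\n' "$PROMPT" | head -n 2
```

The explicit "ONLY new or still-unaddressed" constraint is what prevents the same finding from being posted as a fresh comment on every push.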
Test Generation Context
Provide existing test files in context so generated tests don't duplicate existing scenarios. Document available fixtures in CLAUDE.md to improve test quality and avoid low-value test output.
Batch API vs Real-Time: The Critical Trade-off
The Message Batches API offers 50% cost savings but with up to 24-hour processing time and no guaranteed latency SLA. The exam tests whether you know which workflows can tolerate async processing and which cannot.
Exam Traps for Task 3.6
| The Trap | Why It Fails | Correct Pattern |
|---|---|---|
| Pipeline hangs → set CLAUDE_HEADLESS=true | This environment variable does not exist. It's a plausible-sounding distractor. | Use -p flag: claude -p "prompt" |
| Pipeline hangs → redirect stdin from /dev/null | A Unix workaround that doesn't address Claude Code's interactive mode design — behaviour is undefined and unreliable | Use -p flag — the documented, supported non-interactive mode |
| Switch blocking pre-merge checks to Batch API for cost savings | Batch API has up to 24h processing time with no SLA — developers cannot wait for pre-merge feedback to complete | Keep pre-merge real-time; use Batch only for non-blocking workflows like overnight reports |
| Re-run code review without prior findings in context | Claude re-flags the same issues as new, creating duplicate PR comments on issues already identified | Include prior review findings in context; instruct Claude to report only new or still-unaddressed issues |
🔨 Implementation Task
Build a Complete CI/CD Review Pipeline
- Write a CI pipeline step that invokes Claude Code with the -p flag — verify the job exits automatically without hanging
- Add --output-format json --json-schema flags; parse the JSON output and post findings as inline PR comments
- Implement deduplication: include prior findings in context on re-runs; verify only new issues are posted
- Implement test generation: provide existing test files in context; verify no duplicate test scenarios are suggested
- Design the cost split: identify which pipeline steps can use Batch API (overnight) vs which must stay real-time (blocking gates)
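Putting the first two steps together, a dry sketch of a review step as it might appear in a CI job script. The schema file name, the diff range, and the guard for local dry runs are assumptions about your pipeline, not Claude Code requirements:

```shell
# CI review step sketch: build the prompt, then invoke claude -p only
# where the CLI is actually installed (real CI runner vs local dry run).
set -u

DIFF=$(git diff HEAD~1 2>/dev/null || echo "no diff available")

PROMPT="Review this diff for security issues:
$DIFF
Report findings as JSON."

if command -v claude >/dev/null 2>&1; then
  # Real CI path: non-interactive, structured output for downstream parsing
  claude -p "$PROMPT" --output-format json --json-schema review-schema.json > findings.json
else
  # Dry run without the CLI: show what would be executed
  printf 'would run: claude -p <prompt> --output-format json\n'
fi
```

Downstream, a separate script would read findings.json and post each finding as an inline PR comment at its file/line location.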
Exam Simulation — Task 3.6
Question 1: A CI step runs claude "Analyze this pull request for security issues" but the CI job hangs indefinitely. Logs indicate Claude Code is waiting for interactive input. What is the correct fix?

Answer explanation: The -p (or --print) flag is the documented, supported mechanism for non-interactive CI/CD usage. It processes the prompt, outputs to stdout, and exits cleanly. B is wrong: CLAUDE_HEADLESS is not a real environment variable — it's a plausible-sounding distractor. C is wrong: stdin redirection is a Unix workaround that doesn't properly address Claude Code's interactive mode — behaviour is unreliable. D is wrong: --batch is not a valid Claude Code flag.

Question 2 (answer explanation): The distractor claiming batch results can't be matched back to their requests is wrong — the Batch API provides custom_id fields for exactly that purpose. More importantly, its conclusion to "keep real-time for both" is unnecessarily conservative: the overnight report has no latency requirement and is a textbook batch use case. D adds unnecessary complexity — the simpler correct solution is not switching the blocking workflow to batch at all.

Question 3: A weekly job runs claude -p "Audit $(git diff HEAD~7) for security issues". After a library update introduces 14 new files, the diff exceeds 80,000 tokens and the audit begins timing out. The team wants to maintain comprehensive weekly coverage while fixing the timeout. What is the correct architectural approach?