I type /start INT-391 and walk away for thirty seconds. When I come back, Claude has fetched the ticket from Linear, read the description and all comments, detected that it belongs to the magic-platform monorepo, checked out a fresh feature branch from preview, explored the codebase to understand what needs to change, generated a detailed implementation plan, posted that plan as a comment on the Linear ticket, set the status to “In Progress,” and is now waiting for me to approve the plan before it writes any code.
One command. Fifteen steps. Across any of twelve repositories and twelve parallel worktrees.
The /start command is a markdown file. Not a shell script, not a Python program, not a GitHub Action. It’s 1,400 lines of structured documentation that Claude Code reads and executes. Every design decision in it came from a real failure.
A Markdown File Is a State Machine
The first version of /start was about fifty lines of prose. “Fetch the ticket from Linear. Read the description. Create a branch. Explore the codebase and make a plan.” It worked - sometimes. Claude would forget to create the branch before starting the plan. It would skip posting the plan to Linear. It would start writing code without waiting for approval. The instructions were clear to a human reader, but Claude treated them as suggestions.
The fix was structure. Not more words - more explicit control flow.
Session file exists?
├─ YES → Read session file
│ ├─ Steps 0-7 → Restart from Step 1
│ ├─ Step 8+ → Load Workflow Profile first, then resume
│ └─ status = "awaiting_user_test" → Skip to Step 15
└─ NO → Fresh start, continue with Step 1
Decision trees with explicit branching replaced prose paragraphs. Step numbers replaced “next, do…” transitions. Checkpoint markers told Claude exactly when to save state. The markdown became less readable to humans and more reliable for Claude.
This is the core insight: a markdown file can be a state machine. Not metaphorically - literally. Each step has a number, preconditions, actions, a decision tree for branching, and a checkpoint that persists state to disk. Claude reads the file, identifies which step it's on, and follows the branches. The structure does the work that an interpreter would do in a traditional programming language. Here's the full version of the resume tree:
Session file exists?
├─ YES → Read session file
│ ├─ Check stored targetDir value
│ │ ├─ If targetDir differs from $PWD:
│ │ │ → Display: "Session found but for different directory"
│ │ │ → Set TARGET_DIR from session's targetDir
│ │ └─ If targetDir matches $PWD:
│ │ → Set TARGET_DIR = $PWD
│ │
│ ├─ Display: "Found existing session at Step N (status: X)"
│ │
│ └─ Jump to appropriate step based on currentStep:
│ ├─ Steps 0-7 → Restart from Step 1 (no side effects yet)
│ ├─ Step 8+ → Always load Workflow Profile first,
│ │ then resume at stored step
│ └─ status = "awaiting_user_test" → Skip to Step 15
│
└─ NO → Fresh start, continue with Step 1
The key detail: steps 0-7 have no side effects (no branches created, no Linear updates), so they’re safe to restart. Steps 8+ have created branches and modified external state, so they must resume exactly where they left off - but only after loading the Workflow Profile, because later steps reference profile fields like base_branch and quality_gates.
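In code terms, the resume logic is a small dispatch function. Here's a Python sketch of that decision tree - the function and helper names are mine for illustration, not from the actual command:

```python
import json
from pathlib import Path


def load_workflow_profile(target_dir: Path) -> None:
    """Stub: the real command parses the profile out of CLAUDE.md."""


def resolve_resume_step(session_path: Path, cwd: Path) -> tuple[Path, int]:
    """Decide where to resume, mirroring the decision tree above.

    Returns (target_dir, step) - the directory to work in and the
    step number to (re)start at.
    """
    if not session_path.exists():
        return cwd, 1  # fresh start: continue with Step 1

    session = json.loads(session_path.read_text())
    # Prefer the directory recorded in the session over $PWD.
    target_dir = Path(session["targetDir"])
    workflow = session["workflow"]

    if workflow["status"] == "awaiting_user_test":
        return target_dir, 15  # skip straight to the user-testing gate

    step = workflow["currentStep"]
    if step <= 7:
        # Steps 0-7 have no side effects (no branch, no Linear update),
        # so restarting from Step 1 is always safe.
        return target_dir, 1

    # Step 8+ has created external state: load the Workflow Profile
    # first (later steps need base_branch, quality_gates), then resume.
    load_workflow_profile(target_dir)
    return target_dir, step
```

The point of the sketch is the asymmetry: before side effects, restart; after side effects, resume in place, and never resume without the profile loaded.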
Context Compaction Ate My Progress
Claude Code compresses old messages as conversations grow long. This is called context compaction, and it’s necessary - without it, long coding sessions would hit the context window limit and stop. But compaction means Claude can forget things. Important things. Like which step of a fifteen-step workflow it’s on, what the implementation plan says, and which files have already been modified.
The first time I lost an hour of work to compaction, I added session files.
Every /start invocation creates a JSON file on disk: .claude-session/TICKET-XXX.json. It tracks the ticket ID, the current step, the workflow status, the target directory, and whether the user has been asked to test. When context compacts and Claude loses its in-memory state, it re-reads the session file and picks up where it left off.
But the session file only tracks workflow state. The implementation plan is a separate file - .claude-session/TICKET-XXX-plan.md - with checkbox-style tasks:
## Implementation Tasks
- [x] Add overlay state to landing page store
- [x] Create InlineEditableText component
- [ ] Wire up save action for section headings
- [ ] Add optimistic update with rollback on error
After completing each task, Claude edits the plan file to check the box. When context compacts, Claude re-reads the plan, sees which boxes are checked, and resumes from the first unchecked task. The plan file is the canonical progress tracker - not Claude's memory. The session file itself looks like this:
{
"schemaVersion": 1,
"ticket": "INT-391",
"ticketUUID": "<uuid-from-linear>",
"title": "Overlay cleanup",
"branch": "feature/INT-391-overlay-cleanup",
"project": "magic-platform",
"targetDir": "/Users/bfeld/Code/magic7",
"stashCreated": false,
"createdAt": "2026-03-10T10:00:00Z",
"updatedAt": "2026-03-10T10:15:00Z",
"sessionRules": [],
"ticketContext": {
"description": "...",
"comments": [],
"isReopened": false,
"feedbackToAddress": [],
"previousImplementation": null
},
"workflow": {
"currentStep": 13,
"status": "implementing",
"blockedActions": [],
"nextAction": "Continue implementation"
},
"plan": {
"file": "/Users/bfeld/Code/magic7/.claude-session/INT-391-plan.md",
"postedToLinear": true
},
"progress": {
"filesModified": ["src/app/admin/landing/page.tsx"],
"testsStatus": {}
}
}
Every field exists because something went wrong without it. targetDir was added after cross-repo sessions lost track of which directory to work in. stashCreated was added after users forgot they’d stashed uncommitted changes before starting a ticket. ticketContext.isReopened was added after Claude kept ignoring feedback comments on reopened tickets.
I Kept Starting Tickets in the Wrong Repo
I have twelve repositories. Magic Platform is a monorepo with seven apps. CompanyOS is a standalone repo for business operations. Adventures in Claude is a Hugo blog. MagicEA, Freshell, Overwatch, txvotes, Techstars OS - each lives in its own directory with its own conventions.
The problem: I’d type /start COS-87 from a Magic Platform worktree and Claude would try to create a feature branch in the wrong repository, explore the wrong codebase, and generate a plan for code that didn’t exist there.
The solution is the Team Registry - a YAML block at the top of the /start file that maps every ticket prefix to its repository:
- prefix: [AUTM, MED, MYH, NEW, PLA, INT, CURE]
project_type: magic-platform
directory: (current worktree)
description: "Magic Platform monorepo apps"
- prefix: COS
project_type: companyos
directory: ~/Code/companyos-intensitymagic
description: "Company operations"
- prefix: AIC
project_type: adventuresinclaude
directory: ~/Code/content/aic
description: "Adventures in Claude blog"
When I type /start COS-87 from a Magic Platform worktree, the algorithm looks up COS in the registry, finds it maps to companyos, sees that doesn’t match the current project type, and switches. All subsequent commands use git -C "$TARGET_DIR" and absolute paths - because Claude can’t persist a cd between tool calls. Each Bash invocation starts in the original directory, so the workaround is to never rely on the working directory at all.
The interesting edge case is BAF - Brad’s Todos. It’s a heterogeneous team in Linear where tickets can route to different repositories depending on what they are. A BAF ticket might be a blog post for feld.com, a feature for CompanyOS, or content for Adventures in Claude. There’s no single correct repository, so /start asks:
- prefix: BAF
project_type: (heterogeneous)
routing: ask_user
routing_options:
- label: "feld.com blog"
target_dir: ~/Code/content/feld
- label: "Adventures in Claude"
target_dir: ~/Code/content/aic
- label: "CompanyOS"
target_dir: ~/Code/companyos-intensitymagic
The full routing algorithm:
1. Look up the ticket's team prefix in the Team Registry
2. No matching entry? → Stay in current directory (unknown team)
3. Entry has routing: ask_user? → Show options, let user pick
4. Entry's project_type matches current? → Stay (already correct)
5. MISMATCH → Set TARGET_DIR from registry entry
└─ Display: "Ticket COS-87 belongs to team CompanyOS"
"Target: ~/Code/companyos-intensitymagic"
"All operations will use absolute paths in the target directory."
After switching, /start runs a post-switch pre-flight: check for uncommitted changes in the target repo, offer to stash them, and verify the repo is in a clean state before proceeding.
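The registry lookup itself is simple enough to sketch. Here's an illustrative Python version of the routing algorithm - the registry literal mirrors the YAML entries above, and ask_user_for_target stands in for the interactive BAF prompt:

```python
from pathlib import Path

# Illustrative registry mirroring the YAML entries above.
# directory=None means "use the current worktree".
TEAM_REGISTRY = [
    {"prefix": ["AUTM", "MED", "MYH", "NEW", "PLA", "INT", "CURE"],
     "project_type": "magic-platform", "directory": None},
    {"prefix": ["COS"], "project_type": "companyos",
     "directory": "~/Code/companyos-intensitymagic"},
    {"prefix": ["BAF"], "project_type": None, "routing": "ask_user"},
]


def ask_user_for_target() -> Path:
    """Stub for the interactive prompt used by heterogeneous teams."""
    raise NotImplementedError


def resolve_target_dir(ticket: str, cwd: Path, current_project: str) -> Path:
    """Map a ticket prefix to its working directory (sketch only)."""
    prefix = ticket.split("-")[0]
    entry = next((e for e in TEAM_REGISTRY if prefix in e["prefix"]), None)
    if entry is None:
        return cwd                    # unknown team: stay put
    if entry.get("routing") == "ask_user":
        return ask_user_for_target()  # heterogeneous team: let user pick
    if entry["project_type"] == current_project or entry["directory"] is None:
        return cwd                    # already in the right repo/worktree
    # Mismatch: switch target, and use absolute paths from here on
    # (git -C "$TARGET_DIR"), since cd does not persist across tool calls.
    return Path(entry["directory"]).expanduser()
```

So resolve_target_dir("COS-87", ...) from a Magic Platform worktree would return the CompanyOS directory, while an unknown prefix falls through to the current directory.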
The Ticket Said One Thing, Reality Said Another
A ticket gets worked on, shipped, and then comes back. I found a bug, an edge case was missed, or the behavior isn’t quite right. The ticket gets reopened with feedback in the comments.
Early versions of /start would just read the ticket description and start fresh. The description says “add overlay editing to the landing page.” Claude reads that, explores the codebase, and generates a plan for adding overlay editing - ignoring the three comments that say “the overlay doesn’t close when you click outside it” and “save action fires twice on double-click.”
Now /start scans comments for feedback signals:
A ticket is "reopened" if ANY of these are true:
1. Status is "In Progress" AND comments contain implementation content
2. Comments contain keywords: "sent back", "bug", "fix needed",
"doesn't work", "regression", "not working"
3. A "Progress Update" comment exists followed by feedback comments
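As a rough sketch, the detection heuristic might look like this in Python - a deliberately simplified paraphrase of the three rules, with my own keyword-matching shortcuts:

```python
# Keyword list taken from the rules above; matching logic is my sketch.
FEEDBACK_KEYWORDS = ["sent back", "bug", "fix needed",
                     "doesn't work", "regression", "not working"]


def looks_reopened(status: str, comments: list[str]) -> bool:
    """Heuristic reopened-ticket check paraphrasing the three rules."""
    text = " ".join(comments).lower()
    # Rule 2: explicit feedback keywords anywhere in the comments
    if any(kw in text for kw in FEEDBACK_KEYWORDS):
        return True
    # Rule 3: a "Progress Update" comment with comments after it
    for i, c in enumerate(comments):
        if "progress update" in c.lower() and i + 1 < len(comments):
            return True
    # Rule 1: in progress, with implementation content already posted
    return status == "In Progress" and any(
        "implementation" in c.lower() for c in comments)
```

A false positive here is cheap (the plan just gets extra context); a false negative means repeating the original mistake, which is why the keyword net is wide.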
When a reopened ticket is detected, /start extracts the specific issues and passes them to the Plan subagent as structured input - not just “here’s a ticket” but “here’s what was built before and here’s what’s wrong with it.”
The Plan subagent itself is a design choice. It runs on Sonnet (nearly identical SWE-bench scores to Opus at a fraction of the cost) in a separate context window. The subagent explores the codebase - grepping for patterns, reading files, tracing code paths - and all that verbose search output stays in the subagent’s context, not the main conversation. The main conversation gets back a clean, structured plan. This matters because codebase exploration can easily consume half the context window, leaving less room for the actual implementation.
The Plan subagent receives a structured prompt with the full ticket context:
## Previous Work & Feedback
This ticket was previously worked on and sent back.
### Issues to Address
- Bug: overlay doesn't close on outside click
- Issue: save action fires twice on double-click
### Previous Implementation
Added overlay editing with InlineEditableText component,
section heading save action, and optimistic updates.
Focus your implementation on addressing the feedback above.
This ensures the Plan subagent searches for the right files - not just the feature files, but the specific code paths that caused issues. Without this context, the subagent would generate a plan for the original ticket, not the reopened one.
Every Project Is Different
Magic Platform uses preview as its base branch, requires user testing before commits, runs type-check, lint, and unit tests as quality gates, and ships via pull request. CompanyOS commits via PR to main with a single validation script and no manual testing. Adventures in Claude auto-deploys on push to main with no quality gates at all.
Hardcoding these differences would mean maintaining separate /start commands - or a single command full of if (project === "magic-platform") branches. Instead, each project declares a Workflow Profile in its CLAUDE.md:
# Magic Platform
workflow:
base_branch: preview
direct_to_main: false
quality_gates:
- pnpm run type-check
- pnpm run lint
user_testing: required
ship:
method: pr
target: preview
deploy_hint: "/staging"
# CompanyOS
workflow:
base_branch: main
direct_to_main: false
quality_gates: ["bash scripts/validate.sh"]
user_testing: skip
ship:
method: pr
target: main
deploy_hint: "PR created - review and merge on GitHub"
/start reads the Workflow Profile at runtime (Step 7.1) and stores the parsed fields. Every subsequent step references the profile instead of hardcoded values: git checkout -b feature/TICKET origin/[profile.base_branch], run profile.quality_gates in sequence, set Linear status to profile.ship.linear_status on commit. The command is generic. The profile makes it specific.
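To make the profile-driven approach concrete, here's a hedged Python sketch of how quality gates and branch creation might consume a parsed profile. The dict shape follows the YAML examples above; the real command does this through markdown-directed steps, not Python:

```python
import subprocess


def run_quality_gates(profile: dict, target_dir: str) -> bool:
    """Run a profile's quality_gates in sequence, stopping at the
    first failure - no gate is hardcoded into the workflow itself."""
    for gate in profile.get("quality_gates", []):
        print(f"Running gate: {gate}")
        result = subprocess.run(gate, shell=True, cwd=target_dir)
        if result.returncode != 0:
            print(f"Gate failed: {gate}")
            return False
    return True


def feature_branch_cmd(ticket: str, slug: str, profile: dict) -> list[str]:
    """Build the branch-creation command from the profile's base_branch
    (e.g. "preview" for Magic Platform, "main" for CompanyOS)."""
    base = profile["base_branch"]
    return ["git", "checkout", "-b",
            f"feature/{ticket}-{slug}", f"origin/{base}"]
```

Swap in a different profile dict and the same two functions drive a completely different project's workflow.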
Adding a new project means adding one entry to the Team Registry and writing a Workflow Profile in the project’s CLAUDE.md. No changes to /start itself.
Superpowers: The Methodology Plugin
/start doesn’t try to be a complete development methodology. It manages the lifecycle - ticket to deployment. The methodology comes from somewhere else.
Jesse Vincent built superpowers, an open-source plugin that gives coding agents a complete development workflow. The core idea is that your agent shouldn't just jump into writing code - it should brainstorm the design with you first, get your sign-off, write a plan detailed enough for an enthusiastic junior engineer to follow, then execute it with subagents while you watch. Jesse has been iterating on this relentlessly, and the result is one of the most thoughtful pieces of AI tooling I've seen - not because it's flashy, but because it encodes hard-won lessons about where agents go wrong and how to keep them on track.
Superpowers installs as a single line in settings:
{ "superpowers@superpowers-marketplace": true }
It auto-updates via the plugin marketplace and provides skills for debugging, verification, brainstorming, plan writing, code review, and TDD. The integration points with /start are specific and deliberate:
Planning (Step 8): The Plan subagent follows superpowers’ plan-writing patterns - required sections (Key Decisions, Rejected Approaches, Edge Cases, Codebase Patterns), task granularity rules (each task is one atomic action), and the principle that a plan must be approved before implementation begins.
Approval (Step 9): The “present the full plan, get explicit approval, re-present after any revision” loop mirrors superpowers’ brainstorming skill, which requires presenting designs and getting sign-off before touching code.
Verification (Step 14.5): This step invokes superpowers’ verification-before-completion skill. It exists because of a specific failure mode: context compaction would cause Claude to skip quality gates - especially unit tests - and claim “done” without evidence. The verification skill forces a final check: did all quality gates actually run? Are all plan tasks checked off? It won’t let Claude proceed until there’s evidence, not just assertions.
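The evidence check itself is mechanical. Here's a sketch of the idea, assuming the plan file's checkbox convention and the session's testsStatus field - the function is mine, not the skill's actual implementation:

```python
def verify_completion(plan_markdown: str, tests_status: dict) -> list[str]:
    """Collect evidence gaps before allowing "done".

    Returns a list of problems; an empty list is the evidence that
    every plan task is checked and every quality gate passed.
    """
    problems = []
    unchecked = [line.strip() for line in plan_markdown.splitlines()
                 if line.strip().startswith("- [ ]")]
    if unchecked:
        problems.append(f"{len(unchecked)} plan task(s) still unchecked")
    for gate, status in tests_status.items():
        if status != "passed":
            problems.append(f"quality gate '{gate}' has status '{status}'")
    return problems
```

The contract matters more than the code: "done" is a claim that must be backed by an empty problem list, not by an assertion in the conversation.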
The circuit breaker (Step 15): After implementation, /start sets the session status to awaiting_user_test and blocks git commit. Even if context compacts and Claude forgets the original instructions, the session file on disk enforces the gate. This is the same principle from the CompanyOS post - irreversible actions need explicit approval. Claude can implement, test, and prepare all day long. But the moment a commit needs to leave the working directory, a human says yes.
{
"workflow": {
"status": "awaiting_user_test",
"blockedActions": ["git commit", "git push"],
"nextAction": "User tests manually, then runs /commit"
}
}
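Because the gate is just JSON on disk, enforcing it is a one-file read. A sketch of the enforcement idea - guard_action is my name for it:

```python
import json
from pathlib import Path


def guard_action(session_path: Path, action: str) -> None:
    """Refuse a blocked action based on on-disk state, not memory.

    Because the gate lives in the session file, it survives context
    compaction: even a Claude that has forgotten its instructions
    re-reads this file before any git commit or push.
    """
    session = json.loads(session_path.read_text())
    workflow = session["workflow"]
    if action in workflow.get("blockedActions", []):
        raise PermissionError(
            f"'{action}' is blocked while status is "
            f"'{workflow['status']}'. {workflow.get('nextAction', '')}")
```

The asymmetry is deliberate: non-blocked actions pass through silently, while blocked ones fail loudly with the next step spelled out.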
The relationship between /start and superpowers is like a project manager and a methodology framework. /start knows the sequence: fetch ticket, plan, branch, implement, test, hand off. Superpowers knows the standards: how plans should be structured, when verification is required, what counts as evidence. Neither embeds the other’s logic. They compose through well-defined integration points - skill invocations and pattern conventions.
Markdown as a Programming Language for AI Behavior
There’s no interpreter executing this markdown. No runtime, no compiler, no AST. Claude reads the file, identifies which step it’s on from the session state, and follows the decision trees. The “execution engine” is Claude’s ability to read structured documentation and act on it.
This works because of specific structural choices:
Decision trees, not prose. “If the session file exists and the current step is 8 or higher, load the Workflow Profile first, then resume at the stored step” is unambiguous. “Resume where you left off” is not.
State on disk, not in memory. Everything that matters - the current step, the plan, task completion status, the target directory - is persisted to files. Claude’s memory is unreliable across long sessions. The filesystem is not.
Step numbers, not transitions. “Step 14.5: Verification Gate” is a fixed location in the workflow. “After testing, verify everything” is a suggestion that can be skipped or reinterpreted.
Integration points, not monolithic logic. /start invokes superpowers skills at specific steps. It reads Workflow Profiles from project CLAUDE.md files. It delegates codebase exploration to a Plan subagent. Each piece does one thing and communicates through structured interfaces - files, JSON schemas, skill invocations.
The broader pattern is this: if you want an AI to do something complex and do it reliably, the answer isn’t better prose instructions. It’s more structured ones. Decision trees instead of paragraphs. Checkpoints instead of assumptions. State machines encoded in markdown - because that’s the format your AI agent already knows how to read.
Subscribe via RSS to follow along. The source is always on GitHub.