Building an RTS with 5-10 Parallel Claude Sessions

I built a complete RTS game in Godot using an experimental workflow: multiple AI sessions working autonomously on separate tasks, with automatic testing and merging. Here's how it works and what I learned.

Tags: AI, Godot, Workflow, Multiplayer

I wanted to build a real-time strategy game. Not a prototype—a complete game with multiplayer, AI opponents, and a full economic loop. The kind of project that usually takes a team months.

What made this different wasn’t the game itself. It was how I built it: running 5-10 parallel Claude sessions, each working autonomously on independent tasks, with automatic testing and PR merging.

The Problem with Sequential AI Development

The standard AI-assisted workflow is conversational. You explain a task, the AI writes code, you review it, iterate, and move on to the next thing. It works, but it’s slow. You’re always waiting—either for the AI or for yourself.

I wanted something faster. If the tasks are independent, why not run them in parallel?

The obvious approach is git worktrees: multiple working directories sharing the same .git/. But worktrees cause conflicts when multiple sessions work simultaneously: index locks collide and ref updates step on each other. I needed full isolation.

The Clone-Based System

The solution was simple: give each parallel session its own complete git clone.

A .rts-clones/ directory holds independent working copies. Each clone has its own .git/, its own feature branch, its own Claude session. A marker file tracks which issue it’s linked to.
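The clone setup can be sketched in a few lines. This is an illustrative Python version, not the actual slash-command implementation; the helper name `create_clone` and the `.issue` marker filename are assumptions (the post only says "a marker file").

```python
import subprocess
from pathlib import Path

CLONES_DIR = Path(".rts-clones")

def create_clone(repo_url: str, issue: str) -> Path:
    """Create an isolated working copy for one issue: its own .git/,
    its own feature branch, and a marker file linking it to the issue."""
    clone_dir = CLONES_DIR / issue
    # Full clone: no shared .git/, so sessions never contend for index locks
    subprocess.run(["git", "clone", repo_url, str(clone_dir)], check=True)
    # Dedicated feature branch named after the issue
    subprocess.run(["git", "-C", str(clone_dir), "checkout", "-b", issue],
                   check=True)
    # Marker file ties the clone back to its GitHub issue (filename assumed)
    (clone_dir / ".issue").write_text(issue + "\n")
    return clone_dir
```

The cost is disk space and a slower initial setup than a worktree, but each session gets a fully independent repository.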

I built 15 custom slash commands to orchestrate the workflow:

  • /sync - Pull latest changes, show open issues and PR status
  • /clone issue-name --auto - Create an isolated clone and start working autonomously
  • /plan - Research the issue and post implementation notes before coding
  • /test - Run the test suite
  • /finish - If tests pass, create PR and merge

The key command is /clone with its automation flags. Four modes:

Mode       Flag          What Happens
Wait       (none)        Interactive, I guide each step
Auto       --auto        Autonomous work, I watch progress
Full-Auto  --full-auto   Fire-and-forget, auto-merges if tests pass
Quiet      --quiet       Background execution, check logs later
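The mode selection amounts to a small flag-to-behavior dispatch. A sketch in Python (the `Mode` enum and `parse_clone_args` are illustrative, not the command's real internals):

```python
from enum import Enum

class Mode(Enum):
    WAIT = "wait"              # interactive, human guides each step
    AUTO = "auto"              # autonomous, human watches progress
    FULL_AUTO = "full-auto"    # fire-and-forget, auto-merge on green tests
    QUIET = "quiet"            # background, check logs later

FLAG_TO_MODE = {
    None: Mode.WAIT,
    "--auto": Mode.AUTO,
    "--full-auto": Mode.FULL_AUTO,
    "--quiet": Mode.QUIET,
}

def parse_clone_args(args: list[str]) -> tuple[str, Mode]:
    """Parse '/clone <issue> [flag]' into an issue name and a Mode."""
    issue = args[0]
    flag = args[1] if len(args) > 1 else None
    return issue, FLAG_TO_MODE[flag]
```

No flag falls through to the interactive default, so autonomy is always opt-in.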

A Typical Session

/sync                                    # See what's open
/clone 51-extract-attack-range --auto    # Terminal 1
/clone 53-refactor-building-types --full-auto  # Terminal 2
/clone 54-movement-constants --quiet     # Terminal 3

Each clone automatically:

  1. Creates a feature branch linked to the issue
  2. Runs /plan to research the approach
  3. Implements the solution
  4. Runs tests
  5. Creates a PR
  6. If full-auto and tests pass: merges automatically
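The six-step lifecycle above can be sketched as a single pipeline function. This is a schematic, not the actual command code; the `steps` object injects plan/implement/test/PR callables so the control flow stays testable:

```python
def run_clone_pipeline(issue: str, mode: str, steps) -> str:
    """Run one clone's lifecycle and return its final state.
    `steps` supplies create_branch/plan/implement/run_tests/create_pr/merge."""
    steps.create_branch(issue)
    steps.plan(issue)                # research before writing code
    steps.implement(issue)
    passed = steps.run_tests()
    pr = steps.create_pr(issue)
    if mode == "full-auto" and passed:
        steps.merge(pr)              # green tests + full-auto: merge unattended
        return "merged"
    # Failing tests, or any other mode: the PR sits for human review
    return "pr-open"
```

The important branch is the last one: only the combination of full-auto mode and passing tests removes the human from the loop.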

I can watch the --auto session, check on --quiet later, and trust --full-auto to handle itself.

The /audit Pipeline

The most powerful workflow is systematic code review. The /audit command does a three-phase sweep:

  1. Review: /audit scripts/core/ - Analyze code, document findings
  2. Execute: /audit --execute - Convert findings into GitHub issues with labels
  3. Run: /audit --run - Launch 3-5 parallel clones on selected issues
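Phase two, turning findings into labeled GitHub issues, might look like the following. This is a sketch built on the real `gh issue create` CLI; the findings dictionary format is an assumption, and `dry_run` exists only so the logic can be inspected without hitting GitHub:

```python
import subprocess

def file_issues(findings: list[dict], dry_run: bool = True) -> list[list[str]]:
    """Build one `gh issue create` invocation per audit finding.
    With dry_run=True the commands are returned instead of executed."""
    commands = []
    for f in findings:
        cmd = [
            "gh", "issue", "create",
            "--title", f["title"],
            "--body", f["detail"],
            "--label", ",".join(f.get("labels", ["code-quality"])),
        ]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)
    return commands
```

Once the issues exist, `/audit --run` only has to hand each issue name to the clone machinery described earlier.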

One recent audit identified 12 code quality issues. All 12 became GitHub issues, all 12 became parallel clones, all 12 merged successfully. An 82-line compatibility layer got removed. Object pooling got added to the minimap. The codebase improved systematically while I worked on other things.

Safety Guardrails

Parallel autonomous AI sessions sound dangerous. A few things keep it safe:

Pre-commit hooks validate conventional commit format and scan for secrets. No API keys or passwords get committed.
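A minimal version of those two checks looks like this. It is a sketch, not the project's actual hook, and the secret patterns are deliberately crude; real scanners ship far larger rule sets:

```python
import re

# Conventional commit: type(optional scope): description
COMMIT_RE = re.compile(r"^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+")

# Crude secret patterns -- illustrative only
SECRET_RES = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key id shape
    re.compile(r"""(?i)(api[_-]?key|password)\s*[:=]\s*['"][^'"]+['"]"""),
]

def check_commit_message(msg: str) -> bool:
    """True if the message follows conventional commit format."""
    return bool(COMMIT_RE.match(msg))

def find_secrets(text: str) -> list[str]:
    """Return any strings in `text` matching a known secret pattern."""
    return [m.group(0) for p in SECRET_RES for m in p.finditer(text)]
```

A hook that rejects the commit when `check_commit_message` fails or `find_secrets` returns anything is enough to stop the worst autonomous mistakes at the boundary.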

GDScript linting runs on every file edit. Magic numbers and debug code get flagged.

Permission model whitelists allowed commands. git push --force is explicitly denied. The addons/ directory (third-party code) is protected.
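The permission model can be sketched as deny-then-allow matching, plus a path guard for third-party code. The patterns here are illustrative stand-ins, not the real configuration:

```python
import fnmatch

DENIED = ["git push --force*", "rm -rf *"]          # deny rules win
ALLOWED = ["git *", "gh pr *", "pytest*"]           # everything else must match
PROTECTED_PATHS = ["addons/"]                       # third-party code, read-only

def is_command_allowed(cmd: str) -> bool:
    """A command is allowed only if no deny pattern and some allow pattern match."""
    if any(fnmatch.fnmatch(cmd, p) for p in DENIED):
        return False
    return any(fnmatch.fnmatch(cmd, p) for p in ALLOWED)

def is_path_writable(path: str) -> bool:
    """Sessions may not edit files under protected directories."""
    return not any(path.startswith(p) for p in PROTECTED_PATHS)
```

Evaluating deny rules first means a broad allow pattern like `git *` can never re-enable `git push --force`.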

Conditional auto-merge only happens if tests pass. A failing test means the PR sits for my review.

The test suite has 8,000+ lines across 30 files. It’s the foundation that makes autonomous work possible.

What Got Built

The game itself is a complete RTS inspired by Age of Mythology:

  • 5 unit types with a rock-paper-scissors combat triangle
  • 6 building types for base construction
  • 4 resources with gathering and economy
  • AI opponent with economic and military phases
  • Deterministic LAN multiplayer with lockstep synchronization
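Deterministic lockstep means peers exchange only commands, never state: every client runs the same simulation and applies each tick's commands in an agreed order, so the simulations cannot diverge. A minimal sketch of the idea in Python (the game's actual netcode is GDScript and is not shown in the post):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Command:
    tick: int
    player: int
    action: str    # e.g. "move", "attack", "build"

@dataclass
class LockstepQueue:
    commands: dict[int, list[Command]] = field(default_factory=dict)

    def submit(self, cmd: Command) -> None:
        """Queue a command (local or received from a peer) for its tick."""
        self.commands.setdefault(cmd.tick, []).append(cmd)

    def drain(self, tick: int) -> list[Command]:
        """Return this tick's commands in a deterministic order so every
        peer applies them identically, regardless of network arrival order."""
        return sorted(self.commands.pop(tick, []),
                      key=lambda c: (c.player, c.action))
```

The sort in `drain` is the heart of it: arrival order varies between machines, execution order must not.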

13,000+ lines of code, 76 classes, 12 serializable commands, 12 unit states. All built in Godot 4.6 with GDScript.

What I Learned

Isolation beats coordination. Git worktrees share too much state. Complete clones mean sessions never conflict. The disk space is worth it.

Tests enable autonomy. Without a solid test suite, I couldn’t trust autonomous sessions. The tests are the contract that lets AI work independently.

The /plan step matters. Jumping straight to code produces worse results. Having Claude research first—find related patterns, identify files to modify, suggest approaches—leads to cleaner implementations.

Systematic beats reactive. The /audit pipeline finds issues I wouldn’t notice manually. Running it regularly keeps tech debt from accumulating.

Parallel work changes the bottleneck. The limiting factor stopped being “how fast can code get written” and became “how well can I define independent tasks.” Task decomposition became the skill that mattered.

Would I Use This Again?

Already am. The workflow isn’t specific to RTS games or even game development. Any project with good test coverage and well-defined tasks can benefit from parallel autonomous sessions.

The game was the goal. The workflow was the discovery.