The runtime is not edited inside ~/.claude/ or ~/.codex/.

Those are build outputs.

The source of truth is the repository.

The template includes ready-to-use deploy scripts for all four targets. Download, fill in your runtime, deploy.

Get Started

Why Git Matters Here

Without version control, runtime context drifts silently:

  • one machine is newer than another
  • one tool got a tweak the others did not
  • a good change is lost because it happened in a hidden folder
  • a bad change is hard to revert because there is no diff

The underlying mistake is treating context files like casual configuration.

They are not.

If changing soul.md, coding-policy.md, or a skill changes how every future session behaves, that is source code behavior. It deserves versioning, diffs, history, and deterministic deployment.

The Current Architecture

The runtime is now called Keeper Runtime.

The source of truth is one visible runtime repository.

Deployment targets:

  • ~/.claude/
  • ~/.codex/ + ~/.agents/skills/
  • ~/.gemini/
  • ~/.copilot/

Each target has a dedicated deploy script that handles the target-specific transformation:

  1. scan identity/ and work-mode/ for .md files (README.md excluded)
  2. concatenate or transform into the target’s format
  3. deploy skills, hooks, and settings where supported
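Steps 1 and 2 can be sketched in POSIX shell. This is illustrative, not the template's actual code; `assemble_prompt` and its arguments are assumptions:

```shell
# Sketch: scan identity/ and work-mode/, skip README.md,
# concatenate the remaining .md files into one prompt file.
assemble_prompt() {
  src_root="$1"   # runtime repo checkout
  out_file="$2"   # e.g. the target's CLAUDE.md
  : > "$out_file"
  for dir in identity work-mode; do
    for f in "$src_root/$dir"/*.md; do
      [ -e "$f" ] || continue                       # directory may be empty
      case "$(basename "$f")" in README.md) continue ;; esac
      cat "$f" >> "$out_file"
      printf '\n' >> "$out_file"                    # separator between files
    done
  done
}
```

Because the loop globs the directories at run time, adding or removing a file changes the output with no manifest to maintain.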

The Deploy Commands

./scripts/deploy.sh claude         # one target
./scripts/deploy.sh codex gemini   # multiple targets
./scripts/deploy.sh                # all targets
./scripts/deploy.sh --preview      # dry run

The --preview mode matters more than it sounds. It makes restructuring safer because you can inspect what will assemble before you overwrite the live runtime.
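A dry-run gate like this is usually a few lines of shell. Only the `--preview` flag name comes from the text above; `PREVIEW` and the `run()` helper are assumptions about how such a gate is commonly built:

```shell
# Sketch of a --preview gate: in preview mode, print each command
# instead of executing it.
PREVIEW=0
for arg in "$@"; do
  if [ "$arg" = "--preview" ]; then PREVIEW=1; fi
done

# run COMMAND...: echo the command in preview mode, execute it otherwise
run() {
  if [ "$PREVIEW" -eq 1 ]; then
    echo "[preview] $*"
  else
    "$@"
  fi
}
```

Routing every destructive step through one wrapper is what makes the dry run trustworthy: preview and real deploy follow the identical code path.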

What the Deploy Scripts Actually Do

Each target has its own script (deploy-claude.sh, deploy-codex.sh, deploy-gemini.sh, deploy-copilot.sh) plus a shared lib.sh. The orchestrator deploy.sh calls them.

Per target:

  1. clean managed files from the deploy root
  2. assemble the prompt file from identity/ and work-mode/ directories
  3. deploy skills in the target-specific format
  4. copy hooks, tools, and settings where supported

The scripts dynamically scan directories for .md files. Add or remove a file — no manifest to update.

Each target handles the format differently:

| Target      | Prompt format                                  | Skills                      |
|-------------|------------------------------------------------|-----------------------------|
| Claude Code | Single CLAUDE.md (concatenated)                | ~/.claude/skills/*/SKILL.md |
| Codex       | Single AGENTS.md (concatenated)                | ~/.agents/skills/*/SKILL.md |
| Gemini      | Single GEMINI.md (concat + skills inlined)     | No native skill system      |
| Copilot     | Modular *.instructions.md (YAML frontmatter)   | ~/.copilot/skills/          |
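For the Copilot row, a hypothetical helper that emits one modular instructions file per source file. The `applyTo` field mirrors Copilot's custom-instructions frontmatter convention, but treat the exact schema as an assumption:

```shell
# Hypothetical sketch: emit a *.instructions.md file with YAML
# frontmatter, as deploy-copilot.sh might. Field names are assumptions.
write_instruction() {
  name="$1"; src="$2"; dest_dir="$3"
  {
    printf '%s\n' '---' 'applyTo: "**"' '---'
    cat "$src"
  } > "$dest_dir/$name.instructions.md"
}
```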

The Structure That Scales

runtime-repo/
├── identity/          ← who the partner is
├── work-mode/         ← how it operates
├── providers/         ← target-specific overlays
│   ├── anthropic/
│   ├── openai/
│   └── gemini/
├── skills/            ← reusable capabilities
├── hooks/             ← automation scripts
├── tools/             ← standalone utilities
└── scripts/           ← per-target deploy scripts
    ├── deploy.sh
    ├── deploy-claude.sh
    ├── deploy-codex.sh
    ├── deploy-gemini.sh
    ├── deploy-copilot.sh
    └── lib.sh
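The orchestrator's dispatch over those per-target scripts can be sketched like this. Instead of invoking the real scripts, this illustrative version only reports which one would run:

```shell
# Sketch of deploy.sh dispatch: validate each requested target,
# defaulting to all targets when none are named.
TARGETS="claude codex gemini copilot"

deploy() {
  selected="${*:-$TARGETS}"        # no arguments means all targets
  for t in $selected; do
    case " $TARGETS " in
      *" $t "*) echo "running scripts/deploy-$t.sh" ;;
      *) echo "unknown target: $t" >&2; return 1 ;;
    esac
  done
}
```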

The split matters because the old “one giant prompt blob for all tools” model collapses as soon as the tools diverge in useful ways.

Multi-Machine Reality

The sync story should be boring:

git pull
./scripts/deploy.sh

That is the point.

If a new machine needs elaborate hand-synchronization in hidden folders, the system is broken.

Editing Experience

The source repo is plain Markdown. That means it can be edited with:

  • Obsidian
  • VS Code
  • BBEdit
  • whatever editor you actually like

That sounds trivial, but it is not. In a good system, editing the source is easier than letting it drift.


What I Do

The actual workflow is simple:

  1. change the source files
  2. commit and push
  3. preview if the change was structural
  4. deploy the affected targets

If the change touched:

  • core context
  • provider overlays
  • skill bundles
  • deploy scripts

then I redeploy. I do not assume “git push” means the live runtimes are current.

Why the split helped

The runtime moved from a monolithic prompt assembly to a clearer composition:

  • identity layer (who)
  • work-mode layer (how)
  • provider overlays (target-specific edges)
  • shared + provider-specific skills

That made change management much cleaner. OpenAI-specific execution contracts no longer bleed into Anthropic. Gemini-specific browser and visual workflow helpers no longer clutter shared behavior.

Each provider overlay captures what that tool does differently: Anthropic gets execution-style guidance tuned to Claude Code’s agent model, OpenAI gets explicit contract emphasis for long-horizon work, and Gemini gets browser and visual workflow helpers. The core stays the same — the overlays handle the edges.
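The layering might look like this in shell. `compose_with_overlay` is illustrative and deliberately simplified (it omits details such as README exclusion):

```shell
# Sketch: shared core first, then the provider's overlay directory
# if it exists. Missing overlays simply contribute nothing.
compose_with_overlay() {
  src_root="$1"; provider="$2"; out="$3"
  cat "$src_root"/identity/*.md "$src_root"/work-mode/*.md > "$out"
  if [ -d "$src_root/providers/$provider" ]; then
    cat "$src_root/providers/$provider"/*.md >> "$out"
  fi
}
```

The design choice is that overlays only append: the shared core never branches on the provider, so a diff to `identity/` is guaranteed to reach every target.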

The template available for download strips the provider overlays for simplicity. That is the right starting point — add overlays when you actually need target-specific behavior.

Clean-slate deployment

The deploy script is intentionally closer to a clean build than to a “copy a few files around” helper.

That matters because stale runtime residue, a forgotten construction site left behind by earlier deploys, is one of the easiest ways to create ghost behavior you cannot explain later.
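One common way to make a clean step safe is to tag every generated file with a marker. This sketch is an assumption, not the template's actual mechanism:

```shell
# Sketch: delete only files the deploy script owns (identified by a
# first-line marker), leaving hand-made files in the same tree alone.
MARKER='<!-- managed by deploy -->'

clean_managed() {
  root="$1"
  find "$root" -name '*.md' -type f | while IFS= read -r f; do
    if head -n 1 "$f" | grep -qF "$MARKER"; then
      rm "$f"
    fi
  done
}
```

With ownership explicit, "clean build" stops being scary: the script can wipe and reassemble its outputs on every run without touching anything a human wrote.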


What Others Do

Anthropic: commit your runtime files

Anthropic explicitly recommends keeping CLAUDE.md under version control so teams can share, review, and evolve the same context. That validates the baseline Git choice directly (as described in the Anthropic best-practices guide linked on the root page).

Google: repository as source of truth

Google’s context-driven development framing for Gemini CLI says the same thing from another angle: the repository should become the source of truth for context, not the ephemeral chat session. Source: Conductor for Gemini CLI.

Community: one file vs. modular system

The community is converging on two broad approaches:

  • one shared top-level file like AGENTS.md
  • modular supporting files loaded intentionally

Layer5 describes the first clearly. HumanLayer argues for the second. Keeper Runtime combines them: one source repo, modular files, deterministic assembly into one final file per target. Sources: Layer5 on AGENTS.md, HumanLayer on modular CLAUDE.md structure.

What is different here

Most setups stop at “version the prompt.”

Keeper Runtime versions:

  • identity
  • user context
  • execution policies
  • skills
  • provider overlays
  • deploy rules

That is a broader and more durable unit of versioning than one prompt file.

Deep Dives