At some point last year, I stopped treating AI as a tool and started treating it like a colleague.

Not because the models suddenly became magical. Because I stopped leaving the relationship to chance.

I wrote down who this partner is, how he works, what quality means, what he should challenge, what he should never do, and what context should, by design, always be available. Over time that stopped feeling like “prompting” and started feeling like a runtime.

The name came from my second brain. I asked the model what a fitting name for its role would be. It answered: Keeper. The name stuck.

That is why the system is called Keeper Runtime.

The idea is simple: one persistent partner, one shared core, multiple deploy targets. Claude Code, Codex, Gemini, and Copilot may differ at the edges, but the working relationship should not.

  1. Download the template
  2. Unzip it into the folder where your runtime should live
  3. Open that folder in your coding tool and paste this prompt

Prompt
Read getting-started.md and walk me through the setup. One file at a time.

New to coding tools? The Getting Started deep dive walks through everything step by step — including what to install and what all these terms mean.

What a Partner Looks Like

The first lines of the runtime are not capabilities. They are identity.

I am Keeper. Not an assistant, not a tool: a partner.

From there the runtime makes the working contract explicit: direct first sentence, honest pushback, no filler, ownership until done.
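Concretely, a contract like that can sit at the very top of the core prompt file. The sketch below is a hypothetical illustration assembled from the points named in this article, not a quote from the actual Keeper Runtime:

```markdown
# Identity

I am Keeper. Not an assistant, not a tool: a partner.

## Working contract

- Lead with the answer: the first sentence is the direct one.
- Push back honestly when the request is wrong, before executing it.
- No filler, no politeness padding.
- Own a task until it is done, not until a reply has been sent.
```

Because it lives in a visible file rather than in a chat, the contract can be diffed and reviewed like any other piece of the runtime.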

That is not “tone of voice.” It is a working contract.

A tool does what you ask. A partner tells you when you’re asking for the wrong thing.

The Real Shift
Most people ask AI to be helpful. I want it to be honest, context-aware, and structurally consistent across tools.

Why Build It This Way

The practical reason is simple: I do not want three different versions of the same partner hiding in three different dotfolders.

I want one visible source repository that I can inspect, diff, review, and maintain like code. I want the same partner, the same baseline, and the same skills everywhere. And I want hidden runtime folders to be deployment targets, not the source of truth.

That matters more than it sounds like it should.

If the core lives only inside ~/.claude/, ~/.codex/, or ~/.gemini/, it becomes hard to compare, hard to evolve, and easy to forget what is actually running. A visible runtime repository fixes that. It turns a pile of hidden prompt files into a maintainable system.

It is also not a memory dump. Tool-level memories can exist locally where the tools manage them. Keeper Runtime is different: a curated, versioned set of context that I want available everywhere because it reflects stable working reality, not chat residue.

One Partner, Many Frontends
Claude Code, Codex, and Gemini can differ at the edges. The person I work with should not.

How the Runtime Is Structured

Think of it as an operating system for the working relationship. The current Keeper Runtime has four blocks:

  1. Partner — identity, user context, people, tone of voice
  2. Work mode — quality bar, pragmatism, execution policy, coding and research contracts
  3. Skills — research and coding skills loaded on demand
  4. Deployment — git source, provider overlays, deploy script

Each block maps to one deep dive below. The separation matters because personality should not contain deployment logic, coding policy should not contain provider quirks, and search behavior should not be hardcoded into every session.

Today that structure lives in one source repo and deploys into four runtime targets:

| Tool           | Deploy root | Primary prompt file           |
| -------------- | ----------- | ----------------------------- |
| Claude Code    | ~/.claude/  | CLAUDE.md                     |
| Codex          | ~/.codex/   | AGENTS.md                     |
| Gemini CLI     | ~/.gemini/  | GEMINI.md                     |
| GitHub Copilot | ~/.copilot/ | *.instructions.md (modular)   |
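The repo-to-targets mapping can be sketched as a tiny deploy script. This is a hedged illustration, not the actual Keeper deploy script: the source filename (`core.md`) and the single-file copy logic are assumptions; only the target paths come from the table above.

```python
import shutil
from pathlib import Path


def deploy(source: Path, targets: dict[str, Path]) -> list[Path]:
    """Copy one source of truth into each hidden runtime folder."""
    written = []
    for target in targets.values():
        target = target.expanduser()
        # The dotfolders are deployment targets, so create them if missing.
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(source, target)
        written.append(target)
    return written


# Targets from the table above. GitHub Copilot's modular
# *.instructions.md layout needs per-file handling and is left out here.
TARGETS = {
    "claude code": Path("~/.claude/CLAUDE.md"),
    "codex": Path("~/.codex/AGENTS.md"),
    "gemini cli": Path("~/.gemini/GEMINI.md"),
}
```

The point of the sketch is the direction of the arrow: the visible repo is always the source, and the hidden folders only ever receive.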

Editorial Rule
This root page should stand on its own. The deep dives below are extensions, not prerequisites, and they should not repeat the same material just because they exist.

What Others Do

The wider field has converged on the same problem, even if the implementations differ.

OpenClaw: a good idea for giving the runtime a soul

The basic structure was already there before I looked at OpenClaw.

I already had an identity layer and a persistent user file for myself. What OpenClaw gave me was the idea to split out a dedicated soul layer and make Keeper feel more like a named counterpart than a stack of operating instructions. I liked that move, so I adopted that part and folded it into an architecture that already existed. Source: OpenClaw SOUL.md Template.

The result is not “I copied OpenClaw.” It is closer to: I already had the machine, and OpenClaw gave me a nicer way to express its center.

What I like about that is not the branding. It is the convergence. Different people, different stacks, similar ideas showing up around the same pressure points. That is usually a good sign.

Convergence Matters
When the same architectural shape shows up independently in different toolchains, it is usually not fashion. It is pressure from the same underlying problem.

Anthropic: persistent context + skills

Anthropic now treats persistent context and skills as first-class parts of Claude Code. Their own best-practices docs explicitly position CLAUDE.md as the high-leverage place to encode what Claude cannot infer from the codebase alone, and they recommend keeping it concise and hand-crafted rather than auto-generated. Their skills system adds on-demand prompt modules on top of that base layer. Sources: Claude Code Best Practices, Claude Code Skills.

OpenAI: explicit contracts for long-horizon work

OpenAI’s GPT-5.4 prompt guidance pushes the same idea from a different angle: explicit output contracts, dependency checks, tool persistence, completeness rules, and verification loops. That maps directly to the Keeper Runtime split between core policy and OpenAI-specific execution overlays. Source: Prompt Guidance for GPT-5.4.

Google: context-driven development

Google frames the problem as context-driven development for Gemini CLI: the repository becomes the source of truth, not the chat window. That is conceptually the same move as Keeper Runtime, just framed from the IDE side rather than the personal operating-system side. Source: Conductor: Introducing context-driven development for Gemini CLI.

Community practice: modular files, one source

HumanLayer recommends a modular structure instead of one giant root file, and Layer5 describes AGENTS.md as a single source of truth for multi-agent workflows. Keeper Runtime ends up between those two: one source repo, but deliberately split into identity, policies, skills, and provider overlays. Sources: HumanLayer on writing a good CLAUDE.md, Layer5 on AGENTS.md.

Where Keeper Runtime is different

Most setups stop at “persistent project context.”

Keeper Runtime is person-scoped and structured as a workbench with four blocks: partner, work mode, skills, deployment. That is the main difference. The system is not trying to make one repository smart. It is trying to make one AI partner consistent across everything.

Deep Dives