At some point last year, I stopped treating AI as a tool and started treating it like a colleague.
Not because the models suddenly became magical. Because I stopped leaving the relationship to chance.
I wrote down who this partner is, how he works, what quality means, what he should challenge, what he should never do, and which context should always be available by design. Over time that stopped feeling like “prompting” and started feeling like a runtime.
The name came from my second brain. I asked the model what a fitting name for its role would be. It answered: Keeper. The name stuck.
That is why the system is called Keeper Runtime.
The idea is simple: one persistent partner, one shared core, multiple deploy targets. Claude Code, Codex, Gemini, and Copilot may differ at the edges, but the working relationship should not.
> Read getting-started.md and walk me through the setup. One file at a time.

New to coding tools? The Getting Started deep dive walks through everything step by step, including what to install and what all these terms mean.
What a Partner Looks Like
The first lines of the runtime are not capabilities. They are identity.
> I am Keeper. Not an assistant, not a tool: a partner.
From there the runtime makes the working contract explicit: direct first sentence, honest pushback, no filler, ownership until done.
That is not “tone of voice.” It is a working contract.
A tool does what you ask. A partner tells you when you’re asking for the wrong thing.
Why Build It This Way
The practical reason is simple: I do not want three different versions of the same partner hiding in three different dotfolders.
I want one visible source repository that I can inspect, diff, review, and maintain like code. I want the same partner, the same baseline, and the same skills everywhere. And I want hidden runtime folders to be deployment targets, not the source of truth.
That matters more than it sounds.
If the core lives only inside ~/.claude/, ~/.codex/, or ~/.gemini/, it becomes hard to compare, hard to evolve, and easy to forget what is actually running. A visible runtime repository fixes that. It turns a pile of hidden prompt files into a maintainable system.
It is also not a memory dump. Tool-level memories can exist locally where the tools manage them. Keeper Runtime is different: a curated, versioned list of context that I want available everywhere because it reflects stable working reality, not chat residue.
How the Runtime Is Structured
Think of it as an operating system for the working relationship. The current Keeper Runtime has four blocks:
- Partner — identity, user context, people, tone of voice
- Work mode — quality bar, pragmatism, execution policy, coding and research contracts
- Skills — research and coding skills loaded on demand
- Deployment — git source, provider overlays, deploy script
Each block maps to one deep dive below. The separation matters because personality should not contain deployment logic, coding policy should not contain provider quirks, and search behavior should not be hardcoded into every session.
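Under the assumption that each block maps to a top-level folder (the article does not show the actual tree), the source repository might look like this:

```
keeper-runtime/
  partner/       # identity, user context, people, tone of voice
  work-mode/     # quality bar, execution policy, coding and research contracts
  skills/        # research and coding skills, loaded on demand
  deployment/    # provider overlays and the deploy script
```

The point of the split is visible in the layout itself: nothing under partner/ knows about providers, and nothing under deployment/ defines personality.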
Today that structure lives in one source repo and deploys into four runtime targets:
| Tool | Deploy root | Primary prompt file |
|---|---|---|
| Claude Code | ~/.claude/ | CLAUDE.md |
| Codex | ~/.codex/ | AGENTS.md |
| Gemini CLI | ~/.gemini/ | GEMINI.md |
| GitHub Copilot | ~/.copilot/ | *.instructions.md (modular) |
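The mapping in this table can be sketched as a small deploy step. This is a hypothetical Python sketch, not the actual deploy script: the source layout (`core/keeper.md`) and function names are assumptions for illustration.

```python
# Hypothetical sketch of the deploy step, assuming the shared core lives
# at core/keeper.md inside the runtime repo. The article's real deploy
# script and repo layout are not shown; names here are illustrative.
import shutil
from pathlib import Path

# Mapping from the table above: deploy root -> primary prompt file.
# Copilot is omitted because it expects a modular set of
# *.instructions.md files rather than a single prompt file.
TARGETS = {
    ".claude": "CLAUDE.md",
    ".codex": "AGENTS.md",
    ".gemini": "GEMINI.md",
}

def deploy(src: Path, home: Path) -> list[Path]:
    """Copy the shared core into each tool's hidden runtime folder."""
    core = src / "core" / "keeper.md"  # assumed location of the shared core
    written = []
    for folder, prompt_name in TARGETS.items():
        dest = home / folder / prompt_name
        dest.parent.mkdir(parents=True, exist_ok=True)
        # Always overwrite: the git repo stays the source of truth,
        # and the hidden folders remain pure deploy targets.
        shutil.copyfile(core, dest)
        written.append(dest)
    return written
```

Because the copy always overwrites, any local edit made inside a hidden runtime folder is lost on the next deploy, which is the point: changes belong in the repo, where they can be diffed and reviewed.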
What Others Do
The wider field has converged on the same problem, even if the implementations differ.
OpenClaw: a good idea for giving the runtime a soul
The basic structure was already there before I looked at OpenClaw.
I already had an identity layer and a persistent user file for myself. What OpenClaw gave me was the idea to split out a dedicated soul layer and make Keeper feel more like a named counterpart than a stack of operating instructions. I liked that move, so I adopted that part and folded it into an architecture that already existed. Source: OpenClaw SOUL.md Template.
The result is not “I copied OpenClaw.” It is closer to: I already had the machine, and OpenClaw gave me a nicer way to express its center.
What I like about that is not the branding. It is the convergence. Different people, different stacks, similar ideas showing up around the same pressure points. That is usually a good sign.
Anthropic: persistent context + skills
Anthropic now treats persistent context and skills as first-class parts of Claude Code. Their own best-practices docs explicitly position CLAUDE.md as the high-leverage place to encode what Claude cannot infer from the codebase alone, and they recommend keeping it concise and hand-crafted rather than auto-generated. Their skills system adds on-demand prompt modules on top of that base layer. Sources: Claude Code Best Practices, Claude Code Skills.
OpenAI: explicit contracts for long-horizon work
OpenAI’s GPT-5.4 prompt guidance pushes the same idea from a different angle: explicit output contracts, dependency checks, tool persistence, completeness rules, and verification loops. That maps directly to the Keeper Runtime split between core policy and OpenAI-specific execution overlays. Source: Prompt Guidance for GPT-5.4.
Google: context-driven development
Google frames the problem as context-driven development for Gemini CLI: the repository becomes the source of truth, not the chat window. That is conceptually the same move as Keeper Runtime, just framed from the IDE side rather than the personal operating-system side. Source: Conductor: Introducing context-driven development for Gemini CLI.
Community practice: modular files, one source
HumanLayer recommends a modular structure instead of one giant root file, and Layer5 describes AGENTS.md as a single source of truth for multi-agent workflows. Keeper Runtime ends up between those two: one source repo, but deliberately split into identity, policies, skills, and provider overlays. Sources: HumanLayer on writing a good CLAUDE.md, Layer5 on AGENTS.md.
Where Keeper Runtime is different
Most setups stop at “persistent project context.”
Keeper Runtime is person-scoped and structured as a workbench with four blocks: partner, work mode, skills, deployment. That is the main difference. The system is not trying to make one repository smart. It is trying to make one AI partner consistent across everything.
Deep Dives
Defining Your AI Partner
How to define the partner itself: identity, user context, people around you, and tone of voice.
Defining Your Work Mode
How to define the global contracts that shape how your AI partner works, decides, checks, and ships.
Research Skills: Brave Search and Beyond
How I separate narrow retrieval from structured synthesis, and why research needs to cover everything from one quick fact to a full deep-dive artifact.
Coding Skills: Making Repeated Work Reusable
How I turn repeated engineering guidance into reusable skills: core coding workflows, documentation writing, UI guardrails, and local tool helpers.
Deployment & Management: Git as the Source of Truth
One source repository. One deploy script. Every AI tool on every machine gets the same core context, with provider-specific overlays where needed.
Getting Started: From Download to Working Runtime
Two paths to your own AI runtime — the fast track for experienced users, and a guided walkthrough for everyone else.