Most people conflate searching with understanding. They type a query, get a result, and treat it as research. But retrieval is not research. Retrieval fetches. Research thinks — it decomposes, challenges, synthesizes, and knows when the answer is stable enough to act on.

That distinction is the reason research is not a side feature in Keeper Runtime. It is one of the main levers. The toolbox needs both a quick lookup and a structured method, and keeping them separate is what gives the system its full bandwidth.

The template includes a research skill you can customize. Download it, open it in your coding tool, and build your own.


The Two-Layer Model

One Primitive, One Method
A search primitive should fetch. A research skill should think. Mixing the two jobs makes both of them slower and sloppier.

I keep research split into two layers:

  • a narrow search primitive
  • a structured research method

In my setup, the primitive is Brave Search and the method is k-research.
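The split can be sketched as a tiny router. This is an illustrative heuristic, not the actual k-research logic; the names and thresholds are assumptions:

```python
from enum import Enum, auto

class Layer(Enum):
    PRIMITIVE = auto()   # narrow search: fetch and return
    METHOD = auto()      # structured research: decompose and synthesize

def route(query: str, needs_artifact: bool = False, subquestions: int = 1) -> Layer:
    """Crude routing heuristic (illustrative only): anything that needs an
    output artifact, or spans more than one sub-question, goes to the
    research method; everything else stays a single bounded lookup."""
    if needs_artifact or subquestions > 1:
        return Layer.METHOD
    return Layer.PRIMITIVE
```

The point of the sketch is that the decision happens before any search runs, so the primitive never gets dragged into research-shaped work.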

That split matters because it gives me the full bandwidth:

  • one quick fact check in the middle of a task
  • one doc lookup
  • one freshness check
  • or a full deep-dive artifact with synthesis, sources, and output files

The Search Primitive

Sometimes the right move is one exact lookup, not a whole research ceremony. The search primitive exists for:

  • technical searches
  • operator-heavy queries
  • documentation lookups
  • API checks
  • bounded verification in the middle of real work

It should be:

  • clean
  • low-noise
  • precise
  • easy to reuse downstream

That is why I like Brave here. It is good at operator-heavy web retrieval and works well as infrastructure instead of pretending to be the whole workflow.
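As a concrete shape, a minimal primitive can be one bounded call to the Brave Search API's web search endpoint. This is a sketch under assumptions, not the runtime's actual wiring; the environment variable name and the default result count are mine:

```python
import os
import urllib.parse
import urllib.request

BRAVE_ENDPOINT = "https://api.search.brave.com/res/v1/web/search"

def build_request(query: str, count: int = 5) -> urllib.request.Request:
    # Keep the primitive narrow: one query, a small result count, no extras.
    params = urllib.parse.urlencode({"q": query, "count": count})
    return urllib.request.Request(
        f"{BRAVE_ENDPOINT}?{params}",
        headers={
            "Accept": "application/json",
            # Assumed env var name for the API key:
            "X-Subscription-Token": os.environ.get("BRAVE_API_KEY", ""),
        },
    )

def search(query: str) -> bytes:
    # One request in, one JSON body out; no loops, no synthesis.
    with urllib.request.urlopen(build_request(query)) as resp:
        return resp.read()
```

Everything downstream (parsing, ranking, synthesis) belongs to the research layer, which is exactly why the primitive stays this small.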

The Research Method

Research Needs Shape
Good research is not just more searching. It is decomposition, challenge points, fallback logic, and stopping when the answer is stable enough.

The structured layer handles:

  • decomposition
  • quick sweeps
  • challenge points
  • fallback searches when coverage is weak
  • citations
  • local plus web synthesis
  • output artifacts when the result should survive the chat

That is what gives research depth and turns “look this up” into a usable operating model.
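The loop above can be sketched in a few lines. Every helper here is a hypothetical placeholder (a real skill would delegate decomposition and coverage judgment to the model), but the control flow is the point:

```python
from typing import Callable, List, Tuple

def decompose(topic: str) -> List[str]:
    # Placeholder decomposition: split on " and "; a real skill would
    # ask the model to break the topic into sub-questions.
    return [p.strip() for p in topic.split(" and ")]

def coverage(hits: List[str]) -> float:
    # Crude stand-in: three or more sources counts as covered.
    return min(len(hits) / 3.0, 1.0)

def broaden(q: str) -> str:
    # Fallback: drop operator-style narrowing and retry wider.
    return q.split(":")[0]

def research(topic: str, search: Callable[[str], List[str]],
             min_coverage: float = 0.7) -> List[Tuple[str, List[str]]]:
    notes = []
    for q in decompose(topic):               # decomposition
        hits = search(q)                     # quick sweep
        if coverage(hits) < min_coverage:    # fallback when coverage is weak
            hits = search(broaden(q))
        notes.append((q, hits))
    return notes                             # hand off to synthesis and citations
```

The stopping rule lives in `coverage`: the loop stops widening once the answer is stable enough, which is the difference between research and endless searching.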

The Full Bandwidth

One Skill Family, Multiple Depths
The same research stack should handle a one-line clarification and a multi-page deep dive. The difference is depth and output shape, not a totally different philosophy.

For me, the useful range is wide:

  • direct answer in chat
  • short evidence-backed synthesis
  • saved research note
  • or a full deep dive with sub-pages and durable structure
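That range can be made explicit as a depth setting on one pipeline. The labels and flags below are illustrative, not k-research's actual configuration:

```python
from enum import Enum

class Depth(Enum):
    CHAT = 1        # direct answer in chat
    SYNTHESIS = 2   # short evidence-backed synthesis
    NOTE = 3        # saved research note
    DEEP_DIVE = 4   # full deep dive with sub-pages

def output_shape(depth: Depth) -> dict:
    """Same pipeline, different output shape per depth (illustrative flags)."""
    return {
        Depth.CHAT:      {"persist": False, "citations": False, "subpages": False},
        Depth.SYNTHESIS: {"persist": False, "citations": True,  "subpages": False},
        Depth.NOTE:      {"persist": True,  "citations": True,  "subpages": False},
        Depth.DEEP_DIVE: {"persist": True,  "citations": True,  "subpages": True},
    }[depth]
```

Only the output shape changes; the decomposition and search logic stays the same across all four depths.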

That is also where documentation comes back in. Once a research result needs to survive the chat, I often hand it to doc-write or one of my S16E content pipelines.

When A Skill Becomes Worth It

I make a skill when I notice I am re-explaining the same thing again and again.

For research, that usually means I keep repeating:

  • when to search and when not to
  • how to tell a quick lookup from a real research task
  • how to decompose the topic
  • how to handle weak results
  • how to document the result so it is reusable later

At that point it stops being prompting and starts being a reusable method.

What Others Do

Brave is becoming agent infrastructure

Brave is clearly positioning its Search API as infrastructure for AI agents, not just human search. Official announcements now highlight integrations like Snowflake Cortex and broader enterprise distribution channels such as AWS Marketplace. Brave also offers an official MCP server for search. Sources: Brave x Snowflake, Brave Search API Growth.

Context engineering is replacing prompt engineering

The field keeps moving from “how do I phrase the prompt?” toward “how do I control what enters context, when, and in what shape?” Weaviate describes this directly as context engineering. Azure frames a related pattern as agentic retrieval, where the system breaks a task into smaller searches before synthesis. Sources: Weaviate on Context Engineering, Azure AI Search: Agentic Retrieval.

Retrieval is moving from static RAG to dynamic search loops

What many teams used to do with one-shot retrieval pipelines is being replaced by multi-step retrieval: decompose, search, rank, filter, synthesize. That is conceptually much closer to a research skill than to classic top-k retrieval.
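One round of such a loop might look like this. The hit format and the `score` field are assumptions for illustration, not any specific product's API:

```python
from typing import Callable, Dict, List, Set

Hit = Dict[str, object]

def search_round(queries: List[str],
                 search: Callable[[str], List[Hit]],
                 seen: Set[str]) -> List[Hit]:
    hits = [h for q in queries for h in search(q)]            # search
    hits = [h for h in hits if h["url"] not in seen]          # filter duplicates
    hits.sort(key=lambda h: h.get("score", 0), reverse=True)  # rank
    seen.update(h["url"] for h in hits)                       # remember for next round
    return hits
```

The `seen` set is what makes it a loop rather than one-shot retrieval: each round only surfaces new evidence, and the caller decides whether to synthesize or go another round.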

Download

If you want the public starter files one by one instead of the full bundle, start here:

Deep Dives