Most AI advice assumes you craft a perfect prompt, submit it, and receive a perfect output. That’s not how I work. I conduct sessions—streaming intent, correcting in real-time, iterating until it’s right.

This is the Dirigent (conductor) pattern. Not author mode where you write every word. Conductor mode where you direct a system that executes.

Dirty Input Is a Feature

I dictate. Messy. Typos. Incomplete sentences.

"T: fuck die transcription quality... nim die transcription mit etwas salz"

The AI understands intent. I correct outputs, not inputs. That’s faster.

Why this works:

  • Perfecting input is cognitive overhead you don’t need
  • AI is trained to interpret messy human language
  • Downstream correction is faster than upstream polishing
  • You iterate more cycles in the same time

The anti-pattern: Polish prompt for 10 minutes → get output → throw away → polish again. That’s author mode. Slow.

The pattern: Stream consciousness → let AI interpret → course-correct in real-time. That’s conductor mode. Fast.

Interrupt as Redirect

Don’t wait until AI finishes to say “that was wrong.” Interrupt.

Signal → Meaning
“nope” → Wrong direction, stop
“noch mal:” (“again:”) → Retry with a different approach
“müssen wir das?” (“do we have to?”) → Rhetorical; the answer is no
[interrupt mid-output] → Stop, you’re going wrong

Why this works:

  • Prevents 500 lines in the wrong direction
  • Faster than review-after-completion
  • AI adapts mid-stream

I don’t explain why something is wrong. I redirect. “That’s not it. Try X instead.” The explanation is implicit in the redirect.
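
In a scripted session, the interrupt is just a loop break. A minimal sketch, assuming a streaming interface; stream_tokens() and looks_wrong() are hypothetical stand-ins for a real streaming client and your own real-time judgment:

def stream_tokens():
    # Stand-in for a streaming completion; yields tokens as they arrive.
    for token in ["Refactoring", " the", " entire", " module", " to", " use", " classes"]:
        yield token

def looks_wrong(text):
    # Stand-in heuristic; in practice this is you, watching the stream.
    return "entire module" in text

draft = ""
for token in stream_tokens():
    draft += token
    if looks_wrong(draft):
        # Redirect, don't explain: "That's not it. Try X instead."
        print("nope. smaller diff, touch only the one function")
        break  # stop before 500 lines go in the wrong direction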

Frustration = Architectural Smell

When you’re frustrated with AI output, that’s diagnostic data.

Frustration → Debug context, not prompt

Your frustration means:

  • Wrong context loaded: information that doesn’t apply
  • Too much context: attention budget exhausted
  • Conflicting constraints: the AI “improves” things into something worse
  • No challenge points: the AI runs on autopilot

I used to get frustrated and rewrite prompts. Now I ask: “What’s wrong with my context architecture?”
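
Measuring beats guessing. A minimal sketch of a context audit, assuming context arrives as named text blocks; the names are illustrative, and the four-characters-per-token estimate is a rough heuristic, not a real tokenizer:

def audit_context(context):
    # context: dict of name -> text that will be sent to the model.
    # Sort largest first: the biggest block is the first suspect.
    for name, text in sorted(context.items(), key=lambda kv: -len(kv[1])):
        approx_tokens = len(text) // 4  # crude estimate, not a tokenizer
        print(f"{approx_tokens:>7} ~tokens  {name}")

audit_context({
    "system_prompt": "You are a careful refactoring assistant.",
    "style_guide.md": "x" * 80_000,  # huge block: does it even apply here?
    "last_error.log": "TypeError: expected str, got None",
})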

Challenge Points Break the Autopilot

Without challenge, AI takes the path of least resistance. The training data wins. You get average output.

Challenge techniques:

"Why did you do it that way?"
→ Forces a justification; often reveals there isn’t one

"What would someone who disagrees say?"
→ Surfaces counter-positions, alternative views

"Is this still true in 2025?"
→ Temporal check, training data ages

"Are we sure this isn't just repeated wisdom?"
→ Like "searing seals in the juices"—widely believed, completely wrong

"Think outside the box—what else could this be?"
→ Breaks the default interpretation
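
The same challenges work as scripted follow-up turns. A minimal sketch, assuming a generic chat-message format; ask() is a hypothetical stand-in for an actual model call:

CHALLENGES = [
    "Why did you do it that way?",
    "What would someone who disagrees say?",
    "Is this still true in 2025?",
]

def ask(messages):
    # Stand-in: swap in your real chat-completion call.
    return "(model answer)"

def challenged_answer(question):
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    for challenge in CHALLENGES:  # never accept the first confident draft
        messages += [
            {"role": "assistant", "content": answer},
            {"role": "user", "content": challenge},
        ]
        answer = ask(messages)
    return answer

print(challenged_answer("Should we cache this endpoint?"))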

When to challenge:

  • Output feels generic (autopilot detected)
  • AI confidently states something without evidence
  • You’re about to accept something because it sounds right
  • The answer came too fast for the complexity

Example-Driven, Not Rule-Driven

Don’t explain what you want. Show it.

Rule-driven (slow):

"Please categorize the sentiment as positive, negative, or neutral,
considering the overall tone, context, and potential sarcasm..."

Example-driven (fast):

"Love this!" → positive
"Worst ever" → negative
"It arrived" → neutral
"Sure, whatever" → negative

Now: "Pretty good I guess"

The example IS the specification. AI generalizes from examples better than it follows abstract rules.
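
In code, the few-shot prompt is just the examples, joined. A minimal sketch with a hypothetical prompt format; there is no rule text at all, only the pattern to continue:

EXAMPLES = [
    ("Love this!", "positive"),
    ("Worst ever", "negative"),
    ("It arrived", "neutral"),
    ("Sure, whatever", "negative"),  # sarcasm covered by example, not by rule
]

def build_prompt(text):
    # The examples ARE the specification; the model continues the pattern.
    shots = "\n".join(f'"{t}" -> {label}' for t, label in EXAMPLES)
    return f'{shots}\n\nNow: "{text}" ->'

print(build_prompt("Pretty good I guess"))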

Applied to real work:

  • Don’t explain your preferred code style—show a file that exemplifies it
  • Don’t describe your voice—paste something you wrote
  • Don’t list formatting rules—give an example of correct format

Parallel Threads

I don’t think sequentially. Multiple threads run simultaneously.

Main thread: Writing this article
├── Spin-off: "Do we follow our own guidelines?"
├── Spin-off: "What about the blog post?"
└── Spin-off: "Check the other deep dives"

AI can handle this. I track context, AI executes threads, I synthesize.

How to do it (a sketch follows the list):

  • Spawn agents for parallel work
  • Keep main thread for direction
  • Merge results when ready
  • Don’t force sequential when parallel is natural
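
A minimal sketch of the spawn-and-merge step, using asyncio; run_thread() is a hypothetical stand-in for a real agent or model call, and the spin-off names come from the tree above:

import asyncio

async def run_thread(task):
    # Stand-in: replace with a real agent/model call per spun-off thread.
    await asyncio.sleep(0.1)
    return f"result of: {task}"

async def main():
    spin_offs = [
        "Do we follow our own guidelines?",
        "What about the blog post?",
        "Check the other deep dives",
    ]
    # Spin-offs execute concurrently; the main thread directs and synthesizes.
    results = await asyncio.gather(*(run_thread(t) for t in spin_offs))
    print("\n".join(results))

asyncio.run(main())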

Emotion as Signal

My emotions are data, not noise.

Expression → Signal
“scheiße” (“shit”) → Real problem detected, not a surface issue
“ultrathink” → Think deeper, don’t rush
“GEIL!” (“AWESOME!”) → Solution found, momentum high
“nervt” (“annoying”) → Pattern problem, not a one-off

I don’t suppress frustration to be “professional.” Frustration tells me something is architecturally wrong. Excitement tells me we found the right path.

Conversation Anchors

Long sessions drift. Earlier context competes with recent context. Unique phrases cut through.

Me: “If you change things we’re not discussing, ich ziehe dir die Ohren lang.” (German: “I’ll pull your ears.”)

(An hour later)

Me: “Your ears.”

Claude: 🙉

Two words. Full context activated. No re-explanation needed.

Why this works:

  • Emotional/absurd language sticks
  • The weirder, the more unique
  • Short recall activates full context

This is the Fred Flintstone Method applied to conversations—searching for unique anchors instead of common keywords.

The Meta-Pattern: Directed Stream

My work style is a directed stream:

INPUT:      Messy (dictation, incomplete, parallel threads)
DIRECTION:  Clear (strategy, quality criteria, challenges)
CORRECTION: Constant (interrupt, redirect, iterate)
OUTPUT:     Clean (AI produces, I verify)

Not linear-sequential. Not chaotic. Directed exploration.

The conductor doesn’t play every instrument. They give direction, listen for wrong notes, adjust in real-time. The orchestra plays.

Key Takeaways

  1. Dirty input is faster. Correct outputs, not inputs.

  2. Interrupt early. Don’t wait for completion to redirect.

  3. Frustration is diagnostic. Debug context, not prompts.

  4. Challenge everything. Break the autopilot.

  5. Show, don’t explain. Examples beat rules.

  6. Think parallel. Multiple threads, synthesize later.

  7. Emotions are data. Don’t suppress signals.

  8. Anchor unique phrases. Two words can recall full context.

Deep Dives

01 (this article)

Working Iteratively: The Conductor Pattern

Stop perfecting prompts. Start conducting sessions. How to work with AI in real time.

02

Building Reliable Pipelines: Same Quality Every Time

When you need consistent results, not creative exploration. Phased execution, validation gates, and prompt chains.