For Builders
How to implement the Bottleneck Principle in your own work.
The Core Question
Before you build anything, ask:
"Where is the human sovereignty point in this system?"
Not "how do we automate this?" but "where does the human decide?"
Implementation Principles
1. Identify Your Phase 3
Every system has a compression point where infinite options become human-scale choices. Find it. Protect it. Optimize it.
Questions to ask:
- Where does the human make the final decision?
- How many options are they choosing from?
- Can they understand each option with confidence?
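One way to answer these questions is to make the sovereignty point explicit in the system's own description. A minimal sketch, assuming a hypothetical pipeline expressed as stages tagged by actor:

```python
# A hypothetical pipeline, tagged by who acts at each stage.
PIPELINE = [
    {"stage": "generate", "actor": "ai"},   # infinite options produced
    {"stage": "rank", "actor": "ai"},       # compression
    {"stage": "choose", "actor": "human"},  # Phase 3: the sovereignty point
    {"stage": "execute", "actor": "ai"},
]

def sovereignty_points(pipeline):
    """Return every stage where the human makes the final decision."""
    return [s["stage"] for s in pipeline if s["actor"] == "human"]
```

If `sovereignty_points` returns an empty list, there is no Phase 3 to protect.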
2. Compress to 2-5, Not 1000
The compression ratio matters:
- 10,000 options → human paralysis
- 2-5 options → human agency
- 1 option → no human agency (AI decided for them)
Implementation:
Input: Thousands of possibilities
AI: Analyze, rank, filter
Output: 2-5 clear options with reasoning
Human: Decides with confidence
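The pipeline above can be sketched in a few lines. This is an illustrative skeleton, not a definitive implementation; `score` and `explain` stand in for whatever ranking and explanation the AI provides:

```python
from dataclasses import dataclass

@dataclass
class Option:
    choice: object   # a single human-scale choice
    reasoning: str   # why the AI surfaced it

def compress(candidates, score, explain, k=3):
    """Rank thousands of candidates, then surface only k options (2 <= k <= 5),
    each with the reasoning a human needs to decide confidently."""
    assert 2 <= k <= 5, "1 option removes agency; more than 5 invites paralysis"
    ranked = sorted(candidates, key=score, reverse=True)
    return [Option(choice=c, reasoning=explain(c)) for c in ranked[:k]]
```

The assertion is the point: the compression ratio is enforced in code, so the system cannot quietly drift toward one option (AI decides) or a thousand (human paralyzed).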
3. Audit Trail Everything
Every AI action should trace back to a human decision.
Requirements:
- Log every human decision point
- Document AI reasoning at each step
- Make reversal possible at any point
- Let the human audit any step at any time
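These requirements fit in a small append-only structure. A minimal sketch, assuming in-memory storage (a real system would persist the log durably):

```python
import json
import time

class AuditTrail:
    """Append-only log where every AI action links back to a human decision."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, reasoning, decision_id=None):
        entry = {
            "id": len(self.entries),
            "ts": time.time(),
            "actor": actor,              # "human" or "ai"
            "action": action,
            "reasoning": reasoning,      # AI reasoning documented at each step
            "decision_id": decision_id,  # the human decision this traces to
            "reversed": False,
        }
        self.entries.append(entry)
        return entry["id"]

    def reverse(self, entry_id):
        # Reversal is possible at any point: mark, never delete.
        self.entries[entry_id]["reversed"] = True

    def audit(self):
        # The human can inspect any step at any time.
        return json.dumps(self.entries, indent=2)
```

Note that `reverse` marks rather than deletes: deleting an entry would itself be an unaudited action.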
4. Translation Over Intelligence
Stop trying to make AI smarter. Start making the human-AI interface clearer.
Bad approach:
- "Let's add more AI capabilities"
- "Let's make the AI more autonomous"
- "Let's reduce human involvement"
Good approach:
- "Let's make options clearer"
- "Let's improve explanation quality"
- "Let's increase human confidence"
Anti-Patterns
Don't: Eliminate the Bottleneck
The temptation: "Let's remove human decision points to go faster."
The reality: You're removing sovereignty, not friction.
The bottleneck is the human-AI interface. It's a feature, not a bug. It's what keeps humans sovereign.
Don't: Automate Decisions
If AI makes the decision, the human loses responsibility. And responsibility is sovereignty.
Don't: Optimize for Speed Over Understanding
Speed without understanding is just faster confusion.
AI moves at the speed of human understanding, not the other way around.
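One sketch of that principle in code: the system advances only as fast as the human confirms understanding. `confirm` is a stand-in for whatever approval interface the system uses:

```python
def run(steps, confirm):
    """Execute (label, step) pairs, but each AI step waits for human
    confirmation; the system moves at the speed of human understanding."""
    results = []
    for label, step in steps:
        if not confirm(label):  # not understood or not approved: stop here
            break
        results.append(step())
    return results
```

The loop cannot outrun the human: an unconfirmed step halts everything after it, which is exactly the bottleneck working as designed.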
Quick Implementation Checklist
- Identified the human sovereignty point (Phase 3)
- Compression outputs 2-5 options, not more
- Each option includes reasoning
- Human can understand each option with confidence
- Every AI action has an audit trail
- Decisions are reversible
- Speed is limited by human understanding, not AI capability
The Test
Ask yourself: "If this system makes a mistake, who is responsible?"
If the answer isn't clear, you've eliminated too much human sovereignty.
If the answer is "the AI," you've built the wrong system.
If the answer is "the human who made the decision," you've built it right.
Contribute implementation patterns, case studies, or failure modes.