# ADR 0015: Plausibility

| | |
|---|---|
| **Status** | Accepted |
| **Date** | 2026-01-25 |

---
## Context

Most engineering decisions are made in fear of the implausible. We add error handling for errors that won't happen. We build abstractions for flexibility we'll never need. We guard against threats that don't exist.
## Decision

**Act on the plausible. Ignore the implausible.**

1. **Probability matters.** A 0.001% risk does not deserve the same treatment as a 10% risk.
2. **Rare failures are acceptable.** A system that fails once per million operations is not broken.
3. **Don't guard against fantasy.** If you can't articulate a realistic scenario, remove the guard.
4. **Recover over prevent.** For implausible failures, recovery is cheaper than prevention.
5. **Trust reasonable assumptions.** "What if the user passes negative infinity?" is not a serious question if the user is you.
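Principle 3 in practice can be sketched as a before/after. This is a hypothetical example (the function and guards are illustrative, not taken from any real codebase): the defensive version checks for inputs that our own code never produces, while the plausible version keeps only the failure modes we can actually describe.

```python
import json

def load_config_defensive(path: str) -> dict:
    """Before: guards against failures we cannot articulate a scenario for."""
    if path is None:                      # our own code never passes None
        raise ValueError("path must not be None")
    if not isinstance(path, str):         # the type hint already says this
        raise TypeError("path must be a string")
    with open(path) as f:
        return json.load(f)

def load_config(path: str) -> dict:
    """After: only plausible failures remain. A missing or malformed file
    already raises a clear built-in exception, and recovery is cheap."""
    with open(path) as f:
        return json.load(f)
```

The two guards removed in the second version never change behavior for any caller we can name; deleting them is the ADR's rule applied literally.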
## Consequences

- Less defensive code
- Simpler error handling
- Faster development
- Occasional rare failures that we fix when they occur
## The Calculation

```
Expected Cost = P(failure) × Cost(failure) + P(success) × Cost(prevention)
```

If `P(failure)` is near zero, almost any `Cost(failure)` is acceptable. We waste more engineering time preventing implausible failures than we would spend recovering from them.
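The formula can be made concrete by plugging in numbers. The figures below (hours of engineering time, a one-in-a-million failure) are invented for this sketch, not part of the ADR:

```python
def expected_cost(p_failure: float, cost_failure: float,
                  cost_prevention: float) -> float:
    """Expected Cost = P(failure) * Cost(failure) + P(success) * Cost(prevention)."""
    return p_failure * cost_failure + (1.0 - p_failure) * cost_prevention

# Without the guard: a one-in-a-million failure costing 8 hours to fix.
no_guard = expected_cost(p_failure=1e-6, cost_failure=8.0, cost_prevention=0.0)

# With the guard: the failure is prevented, but writing and maintaining
# the guard costs 0.5 hours.
with_guard = expected_cost(p_failure=0.0, cost_failure=8.0, cost_prevention=0.5)

print(f"no guard:   {no_guard:.6f} hours")    # 0.000008 hours
print(f"with guard: {with_guard:.6f} hours")  # 0.500000 hours
```

Under these assumed numbers, skipping the guard is cheaper by four orders of magnitude: even paying the full recovery cost on every millionth run costs less than paying the prevention cost every time.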