chore: move adrs, rfcs, spikes to .blue/docs

Per RFC 0003, Blue-tracked documents live in per-repo .blue/ directories. ADRs are needed for semantic adherence checking.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

This commit is contained in:
parent eedae91178
commit 0150a5d1ed

33 changed files with 2683 additions and 0 deletions

198 .blue/docs/rfcs/0001-dialogue-sqlite-metadata.md (new file)

@@ -0,0 +1,198 @@
# RFC 0001: Dialogue SQLite Metadata

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | sqlite-storage-expansion |

---

## Summary

Dialogue files (`.dialogue.md`) are not indexed in SQLite, so they cannot be queried, linked to RFCs, or tracked in relationships. This RFC adds `DocType::Dialogue` and stores dialogue metadata in SQLite while keeping the content in markdown.

## Background

Dialogues are transcripts of conversations. Unlike RFCs and spikes, which are living documents with status transitions, a dialogue is a fixed record.

Current state:

- Dialogues exist as `.dialogue.md` files in `docs/dialogues/`
- No SQLite tracking
- No way to search or link them
## Proposal

### 1. Add DocType::Dialogue

```rust
pub enum DocType {
    Rfc,
    Spike,
    Adr,
    Decision,
    Prd,
    Postmortem,
    Runbook,
    Dialogue, // NEW
}
```

### 2. Dialogue Metadata (SQLite)

Store in the `documents` table:

- `doc_type`: "dialogue"
- `title`: Dialogue title
- `status`: "complete" (dialogues don't have status transitions)
- `file_path`: Path to the .dialogue.md file

Store in the `metadata` table:

- `date`: When the dialogue occurred
- `participants`: Who was involved (e.g., "Claude, Eric")
- `linked_rfc`: RFC this dialogue relates to (optional)
- `topic`: Short description of what was discussed
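As a sketch, the key/value rows above could be assembled like this before insertion. The function and field names here are hypothetical illustrations, not Blue's actual API; only `linked_rfc` is optional.

```rust
// Hypothetical sketch: build the `metadata` table rows for one dialogue.
// Keys mirror the list above; `linked_rfc` is omitted when absent.
fn dialogue_metadata_rows(
    date: &str,
    participants: &str,
    topic: &str,
    linked_rfc: Option<&str>,
) -> Vec<(String, String)> {
    let mut rows = vec![
        ("date".to_string(), date.to_string()),
        ("participants".to_string(), participants.to_string()),
        ("topic".to_string(), topic.to_string()),
    ];
    if let Some(rfc) = linked_rfc {
        rows.push(("linked_rfc".to_string(), rfc.to_string()));
    }
    rows
}

fn main() {
    let rows = dialogue_metadata_rows(
        "2026-01-24",
        "Claude, Eric",
        "Realm design",
        Some("cross-repo-realms"),
    );
    for (key, value) in &rows {
        println!("{key} = {value}");
    }
}
```

Each tuple maps directly to one `INSERT` into the `metadata` table keyed by the dialogue's document id.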
### 3. New Tool: `blue_dialogue_create`

```
blue_dialogue_create title="realm-design-session" linked_rfc="cross-repo-realms"
```

Creates:

- Entry in the documents table
- Metadata entries
- Skeleton .dialogue.md file

### 4. Dialogue File Format

```markdown
# Dialogue: Realm Design Session

| | |
|---|---|
| **Date** | 2026-01-24 |
| **Participants** | Claude, Eric |
| **Topic** | Designing cross-repo coordination |
| **Linked RFC** | [cross-repo-realms](../rfcs/0001-cross-repo-realms.md) |

---

## Context

[Why this dialogue happened]

## Key Decisions

- Decision 1
- Decision 2

## Transcript

[Full conversation or summary]

---

*Extracted by Blue*
```
### 5. Keep Content in Markdown

Unlike other doc types, dialogue content stays primarily in markdown:

- Full transcripts can be large
- Human-readable format preferred
- Git-diff friendly

SQLite stores metadata only, for:

- Fast searching
- Relationship tracking
- Listing/filtering

### 6. New Tool: `blue_dialogue_get`

```
blue_dialogue_get title="realm-design-session"
```

Returns dialogue metadata and the file path.

### 7. New Tool: `blue_dialogue_list`

```
blue_dialogue_list linked_rfc="cross-repo-realms"
```

Returns all dialogues, optionally filtered by linked RFC.

### 8. Integration with `blue_extract_dialogue`

The existing `blue_extract_dialogue` tool extracts text from Claude JSONL. Extend it to:

```
blue_extract_dialogue task_id="abc123" save_as="realm-design-session" linked_rfc="cross-repo-realms"
```

- Extract the dialogue from JSONL
- Create the .dialogue.md file
- Register it in SQLite with metadata

### 9. Migration of Existing Dialogues

On first run, scan `docs/dialogues/` for `.dialogue.md` files:

- Parse frontmatter for metadata
- Register in the documents table
- Preserve file locations
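The migration scan could look roughly like this. This is a minimal sketch using only the standard library; the function name is hypothetical, and real migration would also parse each file's frontmatter.

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical sketch: recursively find every `.dialogue.md` file under
// `dir` so each one can be registered in the documents table.
fn find_dialogue_files(dir: &Path) -> Vec<PathBuf> {
    let mut found = Vec::new();
    if let Ok(entries) = fs::read_dir(dir) {
        for entry in entries.flatten() {
            let path = entry.path();
            if path.is_dir() {
                found.extend(find_dialogue_files(&path));
            } else if path.to_string_lossy().ends_with(".dialogue.md") {
                found.push(path);
            }
        }
    }
    found
}

fn main() {
    for path in find_dialogue_files(Path::new("docs/dialogues")) {
        println!("would register: {}", path.display());
    }
}
```

A missing or unreadable directory simply yields an empty list, so first-run migration is a no-op on repos with no existing dialogues.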
## Security Note

Dialogues may contain sensitive information discussed during development. Before committing:

- Review for credentials, API keys, or secrets
- Use `[REDACTED]` for sensitive values
- Consider whether the full transcript is needed, or a summary suffices

## Example Transcript Section

```markdown
## Transcript

**Eric**: How should we handle authentication for the API?

**Claude**: I'd recommend JWT tokens with short expiry. Here's why:

1. Stateless - no session storage needed
2. Can include claims for authorization
3. Easy to invalidate by changing signing key

**Eric**: What about refresh tokens?

**Claude**: Store refresh tokens in httpOnly cookies. When access token expires,
use refresh endpoint to get new pair. This balances security with UX.

**Decision**: Use JWT + refresh token pattern.
```
## Implementation

1. Add `DocType::Dialogue` to the enum
2. Create `blue_dialogue_create` handler
3. Create `blue_dialogue_get` handler
4. Create `blue_dialogue_list` handler
5. Update `blue_search` to include dialogues
6. Add dialogue markdown generation

## Test Plan

- [ ] Create dialogue with metadata
- [ ] Link dialogue to RFC
- [ ] Dialogue without linked RFC works
- [ ] Search finds dialogues by title/topic
- [ ] List dialogues by RFC works
- [ ] List all dialogues works
- [ ] Get specific dialogue returns metadata
- [ ] Dialogue content stays in markdown
- [ ] Metadata stored in SQLite
- [ ] Existing dialogues migrated on first run
- [ ] Extract dialogue from JSONL creates proper entry

---

*"Right then. Let's get to it."*

— Blue
227 .blue/docs/rfcs/0002-runbook-action-lookup.md (new file)

@@ -0,0 +1,227 @@

# RFC 0002: Runbook Action Lookup

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | runbook-driven-actions |

---

## Summary

There is no way to discover and follow runbooks when performing repo actions. Claude guesses instead of following documented procedures for Docker builds, deploys, releases, and similar operations.
## Proposal

### 1. Action Tags in Runbooks

Add an `actions` field to runbook frontmatter:

```markdown
# Runbook: Docker Build

| | |
|---|---|
| **Status** | Active |
| **Actions** | docker build, build image, container build |
```

Store actions in the SQLite metadata table for fast lookup.

### 2. New Tool: `blue_runbook_lookup`

```
blue_runbook_lookup action="docker build"
```

Returns a structured response:

```json
{
  "found": true,
  "runbook": {
    "title": "Docker Build",
    "file": ".blue/docs/runbooks/docker-build.md",
    "actions": ["docker build", "build image", "container build"],
    "operations": [
      {
        "name": "Build Production Image",
        "steps": ["...", "..."],
        "verification": "docker images | grep myapp",
        "rollback": "docker rmi myapp:latest"
      }
    ]
  },
  "hint": "Follow the steps above. Use verification to confirm success."
}
```

If no match: `{ "found": false, "hint": "No runbook found. Proceed with caution." }`

### 3. New Tool: `blue_runbook_actions`

List all registered actions:

```
blue_runbook_actions
```

Returns:

```json
{
  "actions": [
    { "action": "docker build", "runbook": "Docker Build" },
    { "action": "deploy staging", "runbook": "Deployment" },
    { "action": "run tests", "runbook": "Testing" }
  ]
}
```
### 4. Matching Algorithm

Word-based matching with priority:

1. **Exact match** - "docker build" matches "docker build" (100%)
2. **All words match** - every query word appears in the action: "docker" matches "docker build" (90%)
3. **Partial words** - only some query words appear: "docker run" matches "docker build" (80%)

If multiple runbooks match, return the highest-priority match. Ties are broken by the most specific action (more words).
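The priorities above can be sketched as a scoring function plus a tie-break. This is a minimal sketch, not Blue's implementation; the exact tier boundaries and the score values are assumptions drawn from the list above.

```rust
// Sketch of the word-based matcher: exact match 100, all query words
// present in the action 90, some present 80, none 0.
fn match_score(query: &str, action: &str) -> u32 {
    let q = query.to_lowercase();
    let a = action.to_lowercase();
    if q == a {
        return 100;
    }
    let query_words: Vec<&str> = q.split_whitespace().collect();
    let action_words: Vec<&str> = a.split_whitespace().collect();
    let hits = query_words.iter().filter(|w| action_words.contains(w)).count();
    if hits == query_words.len() && hits > 0 {
        90
    } else if hits > 0 {
        80
    } else {
        0
    }
}

// Highest score wins; ties go to the more specific action (more words).
fn best_match<'a>(query: &str, actions: &[&'a str]) -> Option<&'a str> {
    actions
        .iter()
        .map(|a| (*a, match_score(query, a), a.split_whitespace().count()))
        .filter(|(_, score, _)| *score > 0)
        .max_by_key(|(_, score, words)| (*score, *words))
        .map(|(a, _, _)| a)
}

fn main() {
    let actions = ["docker build", "build image", "run tests"];
    println!("{:?}", best_match("docker build", &actions));
}
```

Scoring each registered action independently keeps lookup O(actions × words), which is trivial at runbook scale.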
### 5. Schema

```sql
-- In metadata table
INSERT INTO metadata (document_id, key, value)
VALUES (runbook_id, 'action', 'docker build');

-- Multiple actions = multiple rows
INSERT INTO metadata (document_id, key, value)
VALUES (runbook_id, 'action', 'build image');
```

### 6. Update `blue_runbook_create`

```
blue_runbook_create title="Docker Build" actions=["docker build", "build image"]
```

- Accept an `actions` array parameter
- Store each action in the metadata table
- Include the actions in the generated markdown

### 7. CLAUDE.md Guidance

Document the pattern for repos:

```markdown
## Runbooks

Before executing build, deploy, or release operations:

1. Check for a runbook: `blue_runbook_lookup action="docker build"`
2. If found, follow the documented steps
3. Use verification commands to confirm success
4. If something fails, check rollback procedures

Available actions: `blue_runbook_actions`
```

## Security Note

Runbooks should **never** contain actual credentials or secrets. Use placeholders:

```markdown
**Steps**:
1. Export credentials: `export API_KEY=$YOUR_API_KEY`
2. Run deploy: `./deploy.sh`
```

Not:

```markdown
**Steps**:
1. Run deploy: `API_KEY=abc123 ./deploy.sh` # WRONG!
```
## Example Runbook

````markdown
# Runbook: Docker Build

| | |
|---|---|
| **Status** | Active |
| **Actions** | docker build, build image, container build |
| **Owner** | Platform Team |

---

## Overview

Build and tag Docker images for the application.

## Prerequisites

- [ ] Docker installed and running
- [ ] Access to container registry
- [ ] `.env` file configured

## Common Operations

### Operation: Build Production Image

**When to use**: Preparing for deployment

**Steps**:
1. Ensure on correct branch: `git branch --show-current`
2. Pull latest: `git pull origin main`
3. Build image: `docker build -t myapp:$(git rev-parse --short HEAD) .`
4. Tag as latest: `docker tag myapp:$(git rev-parse --short HEAD) myapp:latest`

**Verification**:
```bash
docker images | grep myapp
docker run --rm myapp:latest --version
```

**Rollback**:
```bash
docker rmi myapp:latest
docker tag myapp:previous myapp:latest
```

## Troubleshooting

### Symptom: Build fails with "no space left"

**Resolution**:
1. `docker system prune -a`
2. Retry build
````
## Implementation

1. Add `actions` parameter to `blue_runbook_create`
2. Store actions in the metadata table
3. Implement `blue_runbook_lookup` with the matching algorithm
4. Implement `blue_runbook_actions` for discovery
5. Parse runbook markdown to extract operations
6. Update runbook markdown generation

## Test Plan

- [ ] Create runbook with action tags
- [ ] Lookup by exact action match
- [ ] Lookup by partial match (word subset)
- [ ] No match returns gracefully
- [ ] Multiple runbooks - highest priority wins
- [ ] List all actions works
- [ ] Actions stored in SQLite metadata
- [ ] Operations parsed from markdown correctly
- [ ] Malformed runbook returns partial data gracefully

---

*"Right then. Let's get to it."*

— Blue
155 .blue/docs/rfcs/0003-per-repo-blue-folders.md (new file)

@@ -0,0 +1,155 @@

# RFC 0003: Per-Repo Blue Folders

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | per-repo-blue-folder |

---

## Summary

Currently all docs flow to one central .blue folder. Each repo should have its own .blue folder so docs live with the code and git tracking works naturally.

## Current Behavior

```
blue/                            # Central repo
├── .blue/
│   ├── repos/
│   │   ├── blue/docs/...        # Blue's docs
│   │   └── other-repo/docs/     # Other repo's docs (wrong!)
│   └── data/
│       └── blue/blue.db
```

All repos' docs end up in the blue repo's `.blue/repos/`.

## Proposed Behavior

```
repo-a/
├── .blue/
│   ├── docs/
│   │   ├── rfcs/
│   │   ├── spikes/
│   │   └── runbooks/
│   └── blue.db
└── src/...

repo-b/
├── .blue/
│   ├── docs/...
│   └── blue.db
└── src/...
```

Each repo has its own `.blue/` with its own docs and database.

## Changes Required

### 1. Simplify BlueHome structure

```rust
pub struct BlueHome {
    pub root: PathBuf,      // Repo root
    pub blue_dir: PathBuf,  // .blue/
    pub docs_path: PathBuf, // .blue/docs/
    pub db_path: PathBuf,   // .blue/blue.db
}
```
### 2. Change detect_blue behavior

- Find the git repo root for the current directory
- Look for `.blue/` there (don't search upward beyond the repo)
- Auto-create on the first blue command (no `blue init` required)

**Edge cases:**

- No git repo: Create `.blue/` in the current directory with a warning
- Monorepo: One `.blue/` at the git root (packages share it)
- Subdirectory: Always resolve to the git root
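The detection and edge cases above can be sketched as a small walk-up. This is a hedged sketch: the function name and return shape are hypothetical, and a real implementation would also create `.blue/` and handle the monorepo case via the same git-root resolution.

```rust
use std::path::{Path, PathBuf};

// Sketch of the proposed detection: walk up from `start` looking for a
// `.git` directory (the repo root). If none is found, fall back to the
// starting directory itself, which is the "no git repo" warning case.
fn resolve_blue_root(start: &Path) -> (PathBuf, bool) {
    let mut dir = start.to_path_buf();
    loop {
        if dir.join(".git").exists() {
            return (dir, true); // git root found: .blue/ lives here
        }
        if !dir.pop() {
            return (start.to_path_buf(), false); // no repo above us
        }
    }
}

fn main() {
    let (root, in_git) = resolve_blue_root(Path::new("."));
    if !in_git {
        eprintln!("warning: not a git repo, creating .blue/ here");
    }
    println!("blue dir: {}", root.join(".blue").display());
}
```

Because the walk stops at the first `.git` it finds, a subdirectory of a monorepo always resolves to the single shared root.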
### 3. Flatten docs structure

Before: `.blue/repos/<project>/docs/rfcs/`
After: `.blue/docs/rfcs/`

No need for a project subdirectory when the folder is per-repo.

### 4. Migration

Automatic on first run:

1. Detect the old structure (`.blue/repos/` exists)
2. Find docs for the current project in `.blue/repos/<project>/docs/`
3. Move them to `.blue/docs/`
4. Migrate database entries
5. Clean up empty directories
6. Log what was migrated

**Conflict resolution:** If docs exist in both locations, prefer the newer by mtime.

## Git Tracking

Repos should commit their `.blue/` folder:

**Track:**

- `.blue/docs/**` - RFCs, spikes, runbooks, etc.
- `.blue/blue.db` - SQLite database (source of truth)
- `.blue/config.yaml` - Configuration

**Gitignore:**

- `.blue/*.db-shm` - SQLite shared memory (transient)
- `.blue/*.db-wal` - SQLite write-ahead log (transient)

Recommended `.gitignore` addition:

```
# Blue transient files
.blue/*.db-shm
.blue/*.db-wal
```

## Cross-Repo Coordination

The daemon/realm system (RFC 0001) handles cross-repo concerns:

- The central daemon tracks active sessions
- Realms coordinate contracts between repos
- Each repo remains self-contained

## FAQ

**Q: Do I need to run `blue init`?**
A: No. Blue auto-creates `.blue/` on the first command.

**Q: What about my existing docs in the central location?**
A: Auto-migrated on first run. Check git status to verify.

**Q: Should I commit `.blue/blue.db`?**
A: Yes. It's the source of truth for your project's Blue state.

**Q: What if I'm in a monorepo?**
A: One `.blue/` at the git root. All packages share it.

**Q: Can I use Blue without git?**
A: Yes, but with a warning. `.blue/` is created in the current directory.

**Q: How do I see cross-repo status?**
A: Use `blue realm_status` (requires the daemon to be running).

## Test Plan

- [ ] New repo gets `.blue/` on first blue command
- [ ] Docs created in the repo's own `.blue/docs/`
- [ ] Database at `.blue/blue.db`
- [ ] Old structure migrated automatically
- [ ] Realm/daemon still works across repos
- [ ] No git repo falls back gracefully with warning
- [ ] Monorepo uses single `.blue/` at root

---

*"Right then. Let's get to it."*

— Blue
363 .blue/docs/rfcs/0004-adr-adherence.md (new file)

@@ -0,0 +1,363 @@

# RFC 0004: ADR Adherence

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | adr-adherence |
| **ADRs** | 0004 (Evidence), 0007 (Integrity), 0008 (Honor) |

---

## Summary

There is no mechanism to surface relevant ADRs during work, track ADR citations, or verify adherence to testable architectural decisions.

## Philosophy

**Guide, don't block.** ADRs are beliefs, not bureaucracy. Blue should:

- Help you find relevant ADRs
- Make citing ADRs easy
- Verify testable ADRs optionally
- Never require ADR approval to proceed
## Proposal

### Layer 1: Awareness (Passive)

#### `blue_adr_list`

List all ADRs with summaries:

```
blue_adr_list
```

Returns:

```json
{
  "adrs": [
    { "number": 0, "title": "Never Give Up", "summary": "The only rule we need" },
    { "number": 4, "title": "Evidence", "summary": "Show, don't tell" },
    ...
  ]
}
```

#### `blue_adr_get`

Get full ADR content:

```
blue_adr_get number=4
```

Returns ADR markdown and metadata.

### Layer 2: Contextual Relevance (Active)

#### `blue_adr_relevant`

Given context, use AI to suggest relevant ADRs:

```
blue_adr_relevant context="testing strategy"
```

Returns:

```json
{
  "relevant": [
    {
      "number": 4,
      "title": "Evidence",
      "confidence": 0.95,
      "why": "Testing is the primary form of evidence that code works. This ADR's core principle 'show, don't tell' directly applies to test strategy decisions."
    },
    {
      "number": 7,
      "title": "Integrity",
      "confidence": 0.82,
      "why": "Tests verify structural wholeness - that the system holds together under various conditions."
    }
  ]
}
```

**AI-Powered Relevance:**

Keyword matching fails for philosophical ADRs. "Courage" won't match "deleting legacy code" even though ADR 0009 is highly relevant.

The AI evaluator:

1. Receives the full context (RFC title, problem, code diff, etc.)
2. Reads all ADR content (cached in the prompt)
3. Determines semantic relevance with reasoning
4. Returns confidence scores and explanations

**Prompt Structure:**

```
You are evaluating which ADRs are relevant to this work.

Context: {user_context}

ADRs:
{all_adr_summaries}

For each ADR, determine:
1. Is it relevant? (yes/no)
2. Confidence (0.0-1.0)
3. Why is it relevant? (1-2 sentences)

Only return ADRs with confidence > 0.7.
```
**Model Selection:**

- Use a fast/cheap model (Haiku) for relevance checks
- Results are suggestions, not authoritative
- The user can override or ignore them

**Graceful Degradation:**

| Condition | Behavior |
|-----------|----------|
| API key configured, API up | AI relevance (default) |
| API key configured, API down | Fallback to keywords + warning |
| No API key | Keywords only (no warning) |
| `--no-ai` flag | Keywords only (explicit) |

**Response Metadata:**

```json
{
  "method": "ai", // or "keyword"
  "cached": false,
  "latency_ms": 287,
  "relevant": [...]
}
```

**Privacy:**

- Only the context string is sent to the API (not code, not files)
- No PII should be in the context string
- The user controls what context to send

#### RFC ADR Suggestions

When creating an RFC, Blue suggests relevant ADRs based on the title/problem:

```
blue_rfc_create title="testing-framework" ...

→ "Consider these ADRs: 0004 (Evidence), 0010 (No Dead Code)"
```

#### ADR Citations in Documents

RFCs can cite ADRs in frontmatter:

```markdown
| **ADRs** | 0004, 0007, 0010 |
```

Or inline:

```markdown
Per ADR 0004 (Evidence), we require test coverage > 80%.
```
### Layer 3: Lightweight Verification (Optional)

#### `blue_adr_audit`

Scan for potential ADR violations. Only testable ADRs are checked:

```
blue_adr_audit
```

Returns:

```json
{
  "findings": [
    {
      "adr": 10,
      "title": "No Dead Code",
      "type": "warning",
      "message": "3 unused exports in src/utils.rs",
      "locations": ["src/utils.rs:45", "src/utils.rs:67", "src/utils.rs:89"]
    },
    {
      "adr": 4,
      "title": "Evidence",
      "type": "info",
      "message": "Test coverage at 72% (threshold: 80%)"
    }
  ],
  "passed": [
    { "adr": 5, "title": "Single Source", "message": "No duplicate definitions found" }
  ]
}
```

**Testable ADRs:**

| ADR | Check |
|-----|-------|
| 0004 Evidence | Test coverage, assertion ratios |
| 0005 Single Source | Duplicate definitions, copy-paste detection |
| 0010 No Dead Code | Unused exports, unreachable branches |

**Non-testable ADRs** (human judgment):

| ADR | Guidance |
|-----|----------|
| 0001 Purpose | Does this serve meaning? |
| 0002 Presence | Are we actually here? |
| 0009 Courage | Are we acting rightly? |
| 0013 Overflow | Building from fullness? |
### Layer 4: Documentation Trail

#### ADR-Document Links

Store citations in the `document_links` table:

```sql
INSERT INTO document_links (source_id, target_id, link_type)
VALUES (rfc_id, adr_doc_id, 'cites_adr');
```

#### Search by ADR

```
blue_search query="adr:0004"
```

Returns all documents citing ADR 0004.

#### ADR "Referenced By"

```
blue_adr_get number=4
```

Includes:

```json
{
  "referenced_by": [
    { "type": "rfc", "title": "testing-framework", "date": "2026-01-20" },
    { "type": "decision", "title": "require-integration-tests", "date": "2026-01-15" }
  ]
}
```

## ADR Metadata Enhancement

Add to each ADR:

```markdown
## Applies When

- Writing or modifying tests
- Reviewing pull requests
- Evaluating technical claims

## Anti-Patterns

- Claiming code works without tests
- Trusting documentation over running code
- Accepting "it works on my machine"
```

This gives the AI richer context for relevance matching. Anti-patterns are particularly useful: they help identify when work might be *violating* an ADR.
## Implementation

1. Add the ADR document type and loader
2. Implement `blue_adr_list` and `blue_adr_get`
3. **Implement the AI relevance evaluator:**
   - Load all ADRs into the prompt context
   - Send context + ADRs to the LLM (Haiku for speed/cost)
   - Parse the structured response with confidence scores
   - Cache ADR summaries to minimize token usage
4. Implement `blue_adr_relevant` using the AI evaluator
5. Add ADR citation parsing to RFC creation
6. Implement `blue_adr_audit` for testable ADRs
7. Add "referenced_by" to ADR responses
8. Extend `blue_search` for ADR queries

**AI Integration Notes:**

- The Blue MCP server needs LLM access (API key in `.blue/config.yaml`)
- Use streaming for responsiveness
- Fall back to keyword matching if AI is unavailable
- Cache relevance results per context hash (5 min TTL)

**Caching Strategy:**

```sql
CREATE TABLE adr_relevance_cache (
  context_hash TEXT PRIMARY KEY,
  adr_versions_hash TEXT, -- Invalidate if ADRs change
  result_json TEXT,
  created_at TEXT,
  expires_at TEXT
);
```
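The 5-minute TTL implied by `created_at`/`expires_at` comes down to a freshness check like the following. This is a simplified in-memory sketch; the struct and function names are assumptions, and the real cache lives in the SQLite table above.

```rust
use std::time::{Duration, SystemTime};

// Hypothetical in-memory mirror of one adr_relevance_cache row.
struct CacheEntry {
    result_json: String,
    created_at: SystemTime,
}

// The RFC's assumed TTL: 5 minutes per unique context hash.
const TTL: Duration = Duration::from_secs(5 * 60);

fn is_fresh(entry: &CacheEntry, now: SystemTime) -> bool {
    match now.duration_since(entry.created_at) {
        Ok(age) => age < TTL,
        Err(_) => true, // clock skew: entry "from the future" counts as fresh
    }
}

fn main() {
    let entry = CacheEntry {
        result_json: "{\"relevant\":[]}".into(),
        created_at: SystemTime::now(),
    };
    println!("fresh: {} ({})", is_fresh(&entry, SystemTime::now()), entry.result_json);
}
```

A stale entry is simply recomputed and overwritten, and any change to `adr_versions_hash` invalidates regardless of age.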
**Testing AI Relevance:**

- Golden test cases with expected ADRs (fuzzy match)
- Confidence thresholds: 0004 should be > 0.8 for "testing"
- Mock AI responses in unit tests
- Integration tests hit the real API (rate limited)

## Test Plan

- [ ] List all ADRs returns correct count and summaries
- [ ] Get specific ADR returns full content
- [ ] AI relevance: "testing" context suggests 0004 (Evidence)
- [ ] AI relevance: "deleting old code" suggests 0009 (Courage), 0010 (No Dead Code)
- [ ] AI relevance: confidence scores are reasonable (0.7-1.0 range)
- [ ] AI relevance: explanations are coherent
- [ ] Fallback: keyword matching works when AI unavailable
- [ ] RFC with `| **ADRs** | 0004 |` creates document link
- [ ] Search `adr:0004` finds citing documents
- [ ] Audit detects unused exports (ADR 0010)
- [ ] Audit reports test coverage (ADR 0004)
- [ ] Non-testable ADRs not included in audit findings
- [ ] Caching: repeated same context uses cached result
- [ ] Cache invalidation: ADR content change clears relevant cache
- [ ] `--no-ai` flag forces keyword matching
- [ ] Response includes method (ai/keyword), cached, latency
- [ ] Graceful degradation when API unavailable

## FAQ

**Q: Will this block my PRs?**
A: No. All ADR features are informational. Nothing blocks.

**Q: Do I have to cite ADRs in every RFC?**
A: No. Citations are optional but encouraged for significant decisions.

**Q: What if I disagree with an ADR?**
A: ADRs can be superseded. Create a new ADR documenting why.

**Q: How do I add a new ADR?**
A: `blue_adr_create` (future work) or manually add to `docs/adrs/`.

**Q: Why use AI for relevance instead of keywords?**
A: Keywords fail for philosophical ADRs. "Courage" won't match "deleting legacy code" but ADR 0009 is highly relevant. AI understands semantic meaning.

**Q: What if I don't have an API key configured?**
A: Blue falls back to keyword matching. Less accurate but still functional.

**Q: How much does the AI relevance check cost?**
A: It uses Haiku (~$0.00025 per check). Results are cached for 5 minutes per unique context.

---

*"The beliefs that guide us, made visible."*

— Blue
1044 .blue/docs/rfcs/0005-local-llm-integration.md (new file)

File diff suppressed because it is too large.

260 .blue/docs/rfcs/0006-document-deletion-tools.md (new file)

@@ -0,0 +1,260 @@

# RFC 0006: Document Deletion Tools

| | |
|---|---|
| **Status** | In-Progress |
| **Date** | 2026-01-24 |
| **Ported From** | coherence-mcp RFC 0050 |
| **Alignment** | 94% (12 experts, 5 tensions resolved) |

---

## Summary

Blue has no way to cleanly remove documents (RFCs, spikes, decisions). Users must manually delete files and hope the database syncs correctly. This creates orphaned records, broken links, and confusion about which documents exist.

## Goals

- Add a unified `blue_delete` tool with safety checks
- Add a `blue_restore` tool for recovering soft-deleted documents
- Implement soft delete with a 7-day retention before permanent deletion
- Provide a dry-run flag to preview deletions
- Block deletion of documents with ADR dependents (ADRs are permanent)
- Check realm sessions before allowing deletion
- Support CLI usage via `blue delete`
## Design

### Unified Delete Tool

#### `blue_delete`

Delete any document type with consistent behavior.

```
Parameters:
  doc_type: string (required) - "rfc" | "spike" | "decision" | "prd"
  title: string (required)    - Document title to delete
  force: boolean              - Required for non-draft documents
  permanent: boolean          - Skip soft delete, remove immediately
  dry_run: boolean            - Preview what would be deleted without acting
```

#### `blue_restore`

Recover a soft-deleted document within the retention period.

```
Parameters:
  doc_type: string (required) - "rfc" | "spike" | "decision" | "prd"
  title: string (required)    - Document title to restore
```

### Deletion Flow

```
1. Validate the document exists
2. Check status:
   - draft → proceed
   - other → require force=true
3. Check realm sessions:
   - If active sessions exist → block with a message listing repos
   - Unless force=true
4. Check dependents:
   - ADR references → BLOCK (no override, ADRs are permanent)
   - Other links → warn but allow
5. If dry_run=true:
   - Return a preview of what would be deleted
   - Exit without changes
6. Set status = 'deleting' (prevents concurrent operations)
7. Delete files:
   - Primary .md file
   - Companion files (.plan.md, .dialogue.md)
   - Worktree if it exists
8. Update the database:
   - If permanent=true: DELETE the record (cascades)
   - Else: SET deleted_at = now()
9. Return a summary
```
### Two-Phase Delete (Failure Recovery)
|
||||
|
||||
If file deletion fails mid-operation:
|
||||
1. Revert status from 'deleting' to previous value
|
||||
2. Return error with list of files that couldn't be deleted
|
||||
3. User can retry or manually clean up
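
The revert-on-failure behavior can be sketched as a small state machine. This is an illustrative sketch, not Blue's actual store API; the `Doc`, `Status`, and `delete_files` names are hypothetical, and the "locked" check only simulates a failing file deletion:

```rust
// Hypothetical sketch of the two-phase delete; names are illustrative,
// not Blue's actual API.
#[derive(Clone, Debug, PartialEq)]
enum Status { Draft, Accepted, Deleting }

struct Doc { status: Status, files: Vec<String> }

// Simulated file deletion: any path containing "locked" fails.
fn delete_files(doc: &Doc) -> Result<(), Vec<String>> {
    let failed: Vec<String> = doc.files.iter()
        .filter(|f| f.contains("locked"))
        .cloned()
        .collect();
    if failed.is_empty() { Ok(()) } else { Err(failed) }
}

fn two_phase_delete(doc: &mut Doc) -> Result<(), String> {
    let previous = doc.status.clone();
    doc.status = Status::Deleting; // phase 1: lock out concurrent operations
    match delete_files(doc) {
        Ok(()) => Ok(()),
        Err(failed) => {
            doc.status = previous; // phase 2 failed: revert the status lock
            Err(format!("could not delete: {}", failed.join(", ")))
        }
    }
}
```

On success the document stays in 'deleting' until the database row is removed or marked; on partial failure the previous status is restored so the user can retry.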

### Soft Delete Schema

```sql
ALTER TABLE documents ADD COLUMN deleted_at TIMESTAMP NULL;

-- Hide soft-deleted from normal queries
CREATE VIEW active_documents AS
SELECT * FROM documents WHERE deleted_at IS NULL;

-- Auto-cleanup after 7 days (run periodically)
DELETE FROM documents
WHERE deleted_at IS NOT NULL
  AND deleted_at < datetime('now', '-7 days');
```
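
The same 7-day retention check can also be computed in application code before issuing the purge. A minimal sketch using only the standard library (the function name is illustrative, not an existing store method):

```rust
use std::time::{Duration, SystemTime};

// 7-day retention window, matching the SQL cleanup above.
const RETENTION: Duration = Duration::from_secs(7 * 24 * 60 * 60);

// A soft-deleted document is purgeable once deleted_at is older than
// the retention window; documents that were never soft-deleted are not.
fn is_purgeable(deleted_at: Option<SystemTime>, now: SystemTime) -> bool {
    match deleted_at {
        Some(t) => now
            .duration_since(t)
            .map(|age| age > RETENTION)
            .unwrap_or(false),
        None => false,
    }
}
```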

### Safety Matrix

| Status | force | Result |
|--------|-------|--------|
| draft | - | Soft delete |
| accepted | no | "Use force=true to delete accepted RFC" |
| accepted | yes | Soft delete |
| in-progress | no | "Active work with worktree. Use force=true" |
| in-progress | yes | Soft delete + remove worktree |
| implemented | no | "Historical record. Use force=true" |
| implemented | yes | Soft delete (if no ADR) |
| any | - | **BLOCKED if ADR exists** |
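
The matrix above can be encoded as a pure decision function. A sketch with hypothetical types (the real handler also inspects worktrees and sessions, which this omits):

```rust
// Illustrative only: encodes the safety matrix as a pure function.
#[derive(Debug, PartialEq)]
enum Decision {
    SoftDelete,
    NeedsForce(&'static str),
    Blocked(&'static str),
}

fn decide(status: &str, force: bool, has_adr: bool) -> Decision {
    // ADR protection wins over everything, including force.
    if has_adr {
        return Decision::Blocked("ADRs are permanent records and cannot be cascade-deleted");
    }
    match (status, force) {
        ("draft", _) => Decision::SoftDelete,
        (_, true) => Decision::SoftDelete,
        ("accepted", false) => Decision::NeedsForce("Use force=true to delete accepted RFC"),
        ("in-progress", false) => Decision::NeedsForce("Active work with worktree. Use force=true"),
        ("implemented", false) => Decision::NeedsForce("Historical record. Use force=true"),
        (_, false) => Decision::NeedsForce("Use force=true"),
    }
}
```

Keeping the matrix in one function makes the "ADR blocks everything" rule hard to bypass accidentally.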

### ADR Protection

ADRs are permanent architectural records. They are never auto-deleted.

If a document has an ADR referencing it:

```
Cannot delete RFC 'feature-x'.

This RFC has ADR 0005 documenting its architectural decisions.
ADRs are permanent records and cannot be cascade-deleted.

To proceed:
1. Update ADR 0005 to remove the RFC reference, or
2. Mark this RFC as 'superseded' instead of deleting
```

### Realm Session Check

Before deletion, query active sessions:

```sql
SELECT realm, repo FROM active_sessions
WHERE document_id = ? AND ended_at IS NULL;
```

If found:

```
Cannot delete RFC 'feature-x'.

Active realm sessions:
- myproject/api-service (session started 2h ago)
- myproject/web-client (session started 1h ago)

End these sessions first with `blue session stop`, or use force=true.
```
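
Rendering the block message from the query's rows is straightforward. A sketch with a hypothetical `Session` row type (illustrative formatting only):

```rust
// Illustrative: turn active-session rows into the user-facing block message.
struct Session { realm: String, repo: String, started: String }

fn session_block_message(doc: &str, sessions: &[Session]) -> String {
    let mut msg = format!("Cannot delete RFC '{}'.\n\nActive realm sessions:\n", doc);
    for s in sessions {
        msg.push_str(&format!("- {}/{} (session started {} ago)\n", s.realm, s.repo, s.started));
    }
    msg.push_str("\nEnd these sessions first with `blue session stop`, or use force=true.");
    msg
}
```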

### CLI Support

```bash
# Soft delete (recoverable for 7 days)
blue delete rfc my-feature
blue delete spike investigation-x --force

# Preview what would be deleted
blue delete rfc my-feature --dry-run

# Permanent deletion (no recovery)
blue delete rfc old-draft --permanent

# Restore soft-deleted document
blue restore rfc my-feature

# List soft-deleted documents
blue deleted list
```

### Error Messages (User-Friendly)

Instead of terse errors, provide context:

```
# Bad
"Use force=true"

# Good
"Cannot delete RFC 'auth-refactor'.

Status: in-progress
Worktree: /path/to/worktrees/rfc/auth-refactor
Last activity: 3 hours ago

This RFC has active work. Use --force to delete anyway,
which will also remove the worktree."
```

## Non-Goals

- No ADR deletion - ADRs are permanent architectural records
- No bulk delete - delete one at a time for safety
- No cross-repo cascade - each repo manages its own documents

## Test Plan

- [ ] Soft delete hides document from listings
- [ ] Soft-deleted docs appear in `blue deleted list`
- [ ] Soft-deleted docs recoverable with `blue restore`
- [ ] Documents auto-purge after 7 days
- [ ] Permanent delete removes DB record and files
- [ ] Dry-run shows what would be deleted without acting
- [ ] Realm session check blocks deletion appropriately
- [ ] Force flag overrides session check
- [ ] ADR dependency permanently blocks (no override)
- [ ] Partial failure reverts status to original
- [ ] CLI `blue delete` works end-to-end
- [ ] Concurrent deletion attempts handled safely
- [ ] Companion files (.plan.md) are removed
- [ ] Worktree cleanup happens on RFC deletion

## Alignment Dialogue Summary

**12 Expert Perspectives Integrated:**

| Expert | Key Contribution |
|--------|------------------|
| Database Engineer | Two-phase delete, correct ordering |
| Security Analyst | Audit trail via soft-delete |
| UX Designer | Contextual error messages |
| Distributed Systems | Realm session coordination |
| API Designer | Unified `blue_delete` tool |
| Test Engineer | Expanded test scenarios |
| Product Manager | Soft-delete with recovery |
| DevOps Engineer | CLI support, dry-run flag |
| Data Integrity | ADR protection (no cascade) |
| Performance Engineer | (Deferred: DB-stored companion paths) |
| Documentation Writer | Resolved ADR contradiction |
| Chaos Engineer | Partial failure recovery |

**Tensions Resolved:**
1. Deletion order → Two-phase with status lock
2. ADR cascade contradiction → ADRs never auto-deleted
3. No undo capability → Soft-delete with 7-day retention
4. Realm coordination → Session check before delete
5. API inconsistency → Single unified tool

## Implementation Plan

- [x] Add deleted_at column to documents table
- [x] Add schema migration v2 -> v3
- [x] Update Document struct with deleted_at field
- [x] Update all queries to exclude soft-deleted documents
- [x] Add soft_delete_document method to store
- [x] Add restore_document method to store
- [x] Add get_deleted_document method to store
- [x] Add list_deleted_documents method to store
- [x] Add purge_old_deleted_documents method to store
- [x] Add has_adr_dependents method to store
- [x] Create handlers/delete.rs with all handlers
- [x] Register blue_delete, blue_restore, blue_deleted_list, blue_purge_deleted in MCP
- [ ] Add CLI commands (blue delete, blue restore, blue deleted list)
- [ ] Write integration tests for soft-delete flow
- [ ] Update documentation

---

*"Delete boldly. Git remembers. But so does soft-delete, for 7 days."*

— Blue
17
.blue/docs/spikes/2026-01-24-adr-adherence.md
Normal file
@ -0,0 +1,17 @@
# Spike: ADR Adherence

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

How can Blue help ensure work adheres to ADRs? What mechanisms could check, remind, or enforce architectural decisions?

---

*Investigation notes by Blue*
169
.blue/docs/spikes/2026-01-24-agentic-cli-integration.md
Normal file
@ -0,0 +1,169 @@
# Spike: Agentic CLI Integration

| | |
|---|---|
| **Status** | In Progress |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

Which commercial-compatible local agentic coding CLI (Aider, Goose, OpenCode) can be integrated into Blue CLI, and what's the best integration pattern?

---

## Findings

### Candidates Evaluated

| Tool | License | Language | MCP Support | Integration Pattern |
|------|---------|----------|-------------|---------------------|
| **Goose** | Apache-2.0 | Rust | Native | MCP client/server, subprocess |
| **Aider** | Apache-2.0 | Python | Via extensions | Subprocess, CLI flags |
| **OpenCode** | MIT | Go | Native | Go SDK, subprocess |

### Goose (Recommended)

**Why Goose wins:**

1. **Same language as Blue** - Rust-based, can share types and potentially link as library
2. **Native MCP support** - Goose is built on MCP (co-developed with Anthropic). Blue already speaks MCP.
3. **Apache-2.0** - Commercial-compatible with patent grant
4. **Block backing** - Maintained by Block (Square/Cash App), contributed to Linux Foundation's Agentic AI Foundation in Dec 2025
5. **25+ LLM providers** - Works with Ollama, OpenAI, Anthropic, local models

**Integration patterns:**

```
Option A: MCP Extension (Lowest friction)
┌─────────────────────────────────────────────┐
│ Goose CLI                                   │
│   ↓ (MCP client)                            │
│ Blue MCP Server (existing blue-mcp)         │
│   ↓                                         │
│ Blue tools: rfc_create, worktree, etc.      │
└─────────────────────────────────────────────┘

Option B: Blue as Goose Extension
┌─────────────────────────────────────────────┐
│ Blue CLI                                    │
│   ↓ (spawns)                                │
│ Goose (subprocess)                          │
│   ↓ (MCP client)                            │
│ Blue MCP Server                             │
└─────────────────────────────────────────────┘

Option C: Embedded (Future)
┌─────────────────────────────────────────────┐
│ Blue CLI                                    │
│   ↓ (links)                                 │
│ goose-core (Rust crate)                     │
│   ↓                                         │
│ Local LLM / API                             │
└─────────────────────────────────────────────┘
```

**Recommendation: Option A first**

Goose already works as an MCP client. Blue already has an MCP server (`blue mcp`). The integration is:

```bash
# User installs goose
brew install block/tap/goose

# User configures Blue as Goose extension
# In ~/.config/goose/config.yaml:
extensions:
  blue:
    type: stdio
    command: blue mcp
```

This requires **zero code changes** to Blue. Users get agentic coding with Blue's workflow tools immediately.

### Aider

**Pros:**
- Mature, battle-tested (Apache-2.0)
- Git-native with smart commits
- Strong local model support via Ollama

**Cons:**
- Python-based (foreign to Rust codebase)
- CLI scripting API is "not officially supported"
- No native MCP (would need wrapper)

**Integration pattern:** Subprocess with `--message` flag for non-interactive use.

```rust
// Hypothetical
use std::process::Command;

let output = Command::new("aider")
    .args(["--message", "implement the function", "--yes-always"])
    .output()?;
```

**Verdict:** Viable but more friction than Goose.

### OpenCode

**Pros:**
- MIT license (most permissive)
- Go SDK available
- Native MCP support
- Growing fast (45K+ GitHub stars)

**Cons:**
- Go-based (FFI overhead to call from Rust)
- Newer, less mature than Aider
- SDK is for Go clients, not embedding

**Integration pattern:** Go SDK or subprocess.

**Verdict:** Good option if Goose doesn't work out.

### Local LLM Backend

All three support Ollama for local models:

```bash
# Install Ollama
brew install ollama

# Pull a coding model (Apache-2.0 licensed)
ollama pull qwen2.5-coder:32b  # 19GB, best quality
ollama pull qwen2.5-coder:7b   # 4.4GB, faster
ollama pull deepseek-coder-v2  # Alternative
```

Goose config for local:

```yaml
# ~/.config/goose/config.yaml
provider: ollama
model: qwen2.5-coder:32b
```

## Outcome

**Recommendation: implement**, with Goose as the integration target.

### Immediate (Zero code):
1. Document Blue + Goose setup in docs/
2. Ship example `goose-extension.yaml` config

### Short-term (Minimal code):
1. Add `blue agent` subcommand that launches Goose with Blue extension pre-configured
2. Add Blue-specific prompts/instructions for Goose

### Medium-term (More code):
1. Investigate goose-core Rust crate for tighter integration
2. Consider Blue daemon serving as persistent MCP host

## Sources

- [Goose GitHub](https://github.com/block/goose)
- [Goose Architecture](https://block.github.io/goose/docs/goose-architecture/)
- [Aider Scripting](https://aider.chat/docs/scripting.html)
- [OpenCode Go SDK](https://pkg.go.dev/github.com/sst/opencode-sdk-go)
- [Goose MCP Deep Dive](https://dev.to/lymah/deep-dive-into-gooses-extension-system-and-model-context-protocol-mcp-3ehl)
182
.blue/docs/spikes/2026-01-24-dialogue-to-blue-directory.md
Normal file
@ -0,0 +1,182 @@
# Spike: Dialogue To Blue Directory

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 1 hour |

---

## Question

How can we move dialogues from docs/dialogues/ to .blue/docs/dialogues/ and what migration is needed?

---

## Findings

### Current State

**Good news:** The code already writes new dialogues to `.blue/docs/dialogues/`.

The dialogue handler (`crates/blue-mcp/src/handlers/dialogue.rs:291-293`) uses:

```rust
let file_path = PathBuf::from("dialogues").join(&file_name);
let docs_path = state.home.docs_path.clone(); // .blue/docs/
let dialogue_path = docs_path.join(&file_path); // .blue/docs/dialogues/
```

And `BlueHome.docs_path` is set to `.blue/docs/` in `crates/blue-core/src/repo.rs:44`:

```rust
docs_path: blue_dir.join("docs"),
```

**The problem:** 4 legacy dialogue files exist outside `.blue/`:

```
docs/dialogues/persephone-phalaenopsis.dialogue.md
docs/dialogues/cross-repo-realms.dialogue.md
docs/dialogues/cross-repo-realms-refinement.dialogue.md
docs/dialogues/realm-mcp-design.dialogue.md
```

These were created before RFC 0003 was implemented.

### What Needs to Happen

1. **Move files:** `docs/dialogues/*.dialogue.md` → `.blue/docs/dialogues/`
2. **Update database:** Fix `file_path` column in `documents` table
3. **Clean up:** Remove empty `docs/dialogues/` directory

### Options

#### Option A: Manual Migration (One-Time)

```bash
# Move files
mkdir -p .blue/docs/dialogues
mv docs/dialogues/*.dialogue.md .blue/docs/dialogues/

# Update database paths
sqlite3 .blue/data/blue/blue.db <<'EOF'
UPDATE documents
SET file_path = REPLACE(file_path, '../../../docs/dialogues/', 'dialogues/')
WHERE doc_type = 'dialogue';
EOF

# Clean up
rmdir docs/dialogues
```

**Pros:** Simple, done once
**Cons:** Other repos might have the same issue

#### Option B: Auto-Migration in `detect_blue()`

Add dialogue migration to the existing migration logic in `repo.rs`:

```rust
// In migrate_to_new_structure():
let old_dialogues = root.join("docs").join("dialogues");
let new_dialogues = new_docs_path.join("dialogues");
if old_dialogues.exists() && !new_dialogues.exists() {
    std::fs::rename(&old_dialogues, &new_dialogues)?;
}
```

Plus update the store to fix file_path entries after migration.

**Pros:** Handles all repos automatically
**Cons:** More code to maintain

#### Option C: Support Both Locations (Read)

Modify `handle_get()` to check both locations:

```rust
let content = if let Some(ref rel_path) = doc.file_path {
    let new_path = state.home.docs_path.join(rel_path);
    let old_path = state.home.root.join("docs").join(rel_path);

    fs::read_to_string(&new_path)
        .or_else(|_| fs::read_to_string(&old_path))
        .ok()
} else {
    None
};
```

**Pros:** No migration needed, backwards compatible
**Cons:** Technical debt, two locations forever

### Recommendation

**Option A (manual migration)** for Blue repo + **Option B (auto-migration)** as follow-up RFC.

Rationale:
- The legacy dialogues only exist in the blue repo
- Manual migration is quick and verifiable
- Auto-migration can be added properly with tests

### Database Path Investigation

**Finding:** The 4 legacy dialogues are not registered in the database at all.

```
sqlite3 .blue/blue.db "SELECT doc_type, COUNT(*) FROM documents GROUP BY doc_type"
rfc|6
spike|7
```

No dialogue entries. They exist only as markdown files.

**This simplifies migration:** Just move the files. No database updates needed.

If we want them tracked in SQLite, we could either:
1. Register them after moving with `blue_dialogue_create`
2. Import them with a script that parses the markdown headers

### Edge Cases

1. **In-flight dialogues:** If someone creates a dialogue during migration, could conflict
2. **Git history:** Moving files loses git blame (use `git mv` to preserve)
3. **Symlinks:** If `docs/dialogues` is a symlink, need to handle differently

### Migration Commands

Since dialogues aren't in the database, migration is just a file move:

```bash
# Create destination
mkdir -p .blue/docs/dialogues

# Move with git history preservation
git mv docs/dialogues/*.dialogue.md .blue/docs/dialogues/

# Clean up empty directory
rmdir docs/dialogues
rmdir docs  # if empty

# Commit
git add -A && git commit -m "chore: move dialogues to .blue/docs/dialogues"
```

### Future Consideration

To register existing dialogues in SQLite, we could add a `blue_dialogue_import` tool that:
1. Parses the markdown header for title, date, linked RFC
2. Creates document entries in SQLite
3. Sets correct file_path relative to `.blue/docs/`

This is optional - the files are still human-readable without database tracking.
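
The header parsing such an import tool would need is a small exercise in table scanning. A sketch (illustrative only; `blue_dialogue_import` does not exist yet, and the real header fields may vary):

```rust
use std::collections::HashMap;

// Illustrative: pull `| **Key** | Value |` rows out of a dialogue's
// markdown header table. A real importer would also read the title line.
fn parse_header(markdown: &str) -> HashMap<String, String> {
    let mut fields = HashMap::new();
    for line in markdown.lines() {
        let cells: Vec<&str> = line.trim().trim_matches('|').split('|').collect();
        if cells.len() == 2 {
            let key = cells[0].trim().trim_matches('*').trim();
            let value = cells[1].trim();
            // Skip empty cells and the |---|---| separator row.
            if !key.is_empty() && !value.is_empty() && !key.starts_with("--") {
                fields.insert(key.to_string(), value.to_string());
            }
        }
    }
    fields
}
```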

---

## Conclusion

**Simpler than expected.** The code already writes to `.blue/docs/dialogues/`. The 4 legacy files just need `git mv` to the new location. No database migration needed since they were never registered.

**Recommend:** Move files with `git mv`, then optionally register them with a new import tool later.
17
.blue/docs/spikes/2026-01-24-local-llm-integration.md
Normal file
@ -0,0 +1,17 @@
# Spike: Local LLM Integration

| | |
|---|---|
| **Status** | In Progress |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

Which commercial-compatible local LLM CLI tool can be integrated into Blue CLI, and what's the best integration approach?

---

*Investigation notes by Blue*
17
.blue/docs/spikes/2026-01-24-per-repo-blue-folder.md
Normal file
@ -0,0 +1,17 @@
# Spike: Per Repo Blue Folder

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 1 hour |

---

## Question

Should each repo have its own .blue folder with docs, or centralize in one location? What are the tradeoffs and what changes are needed?

---

*Investigation notes by Blue*
17
.blue/docs/spikes/2026-01-24-runbook-driven-actions.md
Normal file
@ -0,0 +1,17 @@
# Spike: Runbook Driven Actions

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

How can runbooks guide Claude Code through repo actions (docker builds, deploys, tests) so it follows the documented steps rather than guessing?

---

*Investigation notes by Blue*
17
.blue/docs/spikes/2026-01-24-sqlite-storage-expansion.md
Normal file
@ -0,0 +1,17 @@
# Spike: SQLite Storage Expansion

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

What changes are needed to store spikes and plans in SQLite like RFCs, and store dialogue metadata (but not content) in SQLite?

---

*Investigation notes by Blue*