diff --git a/.blue/data/blue/blue.db b/.blue/data/blue/blue.db deleted file mode 100644 index c1eac9a..0000000 Binary files a/.blue/data/blue/blue.db and /dev/null differ diff --git a/.blue/docs/rfcs/0007-consistent-branch-naming.md b/.blue/docs/rfcs/0007-consistent-branch-naming.md new file mode 100644 index 0000000..cc39bf1 --- /dev/null +++ b/.blue/docs/rfcs/0007-consistent-branch-naming.md @@ -0,0 +1,85 @@ +# RFC 0007: Consistent Branch Naming + +| | | +|---|---| +| **Status** | Implemented | +| **Date** | 2026-01-24 | + +--- + +## Summary + +Branch names and worktrees for RFC implementation are inconsistent. Some use the full RFC name with number prefix, others use arbitrary names. This makes it hard to correlate branches with their source RFCs and clutters the git history. + +## Problem + +Currently when implementing an RFC: +- Branch names vary: `rfc-0005`, `feature/local-llm`, `0005-local-llm-integration`, etc. +- Worktree directories follow no convention +- No clear way to find which branch implements which RFC +- PR titles don't consistently reference the RFC number + +## Proposal + +### Naming Convention + +For an RFC file named `NNNN-feature-description.md`: + +| Artifact | Name | +|----------|------| +| RFC file | `NNNN-feature-description.md` | +| Branch | `feature-description` | +| Worktree | `feature-description` | +| PR title | `RFC NNNN: Feature Description` | + +### Examples + +| RFC File | Branch | Worktree | +|----------|--------|----------| +| `0005-local-llm-integration.md` | `local-llm-integration` | `local-llm-integration` | +| `0006-document-deletion-tools.md` | `document-deletion-tools` | `document-deletion-tools` | +| `0007-consistent-branch-naming.md` | `consistent-branch-naming` | `consistent-branch-naming` | + +### Rationale + +**Why strip the number prefix?** +- Branch names stay short and readable +- The RFC number is metadata, not the feature identity +- `git branch` output is cleaner +- Tab completion is easier + +**Why keep 
feature-description?** +- Direct correlation to RFC title +- Descriptive without being verbose +- Consistent kebab-case convention + +### Implementation + +1. Update `blue_worktree_create` to derive branch name from RFC title (strip number prefix) +2. Update `blue_pr_create` to include RFC number in PR title +3. ~~Add validation to reject branches with number prefixes~~ (deferred - convention is enforced by tooling) +4. Document convention in CLAUDE.md + +### Migration + +Existing branches don't need to change. Convention applies to new work only. + +## Test Plan + +- [x] `blue worktree create` uses `feature-description` format +- [x] Branch name derived correctly from RFC title +- [x] PR title includes RFC number when `rfc` parameter provided +- [ ] ~~Validation rejects `NNNN-*` branch names with helpful message~~ (deferred) + +## Implementation Plan + +- [x] Update worktree handler to strip RFC number from branch name +- [x] Update PR handler to format title as `RFC NNNN: Title` +- [x] Add `strip_rfc_number_prefix` helper function with tests +- [ ] Update documentation (CLAUDE.md) + +--- + +*"Names matter. Make them count."* + +— Blue diff --git a/.blue/docs/spikes/2026-01-24-docs-path-resolution-bug.md b/.blue/docs/spikes/2026-01-24-docs-path-resolution-bug.md new file mode 100644 index 0000000..f7f0c48 --- /dev/null +++ b/.blue/docs/spikes/2026-01-24-docs-path-resolution-bug.md @@ -0,0 +1,58 @@ +# Spike: Docs Path Resolution Bug + +| | | +|---|---| +| **Status** | Completed | +| **Date** | 2026-01-24 | +| **Outcome** | Answered | + +--- + +## Question + +Why does blue_rfc_create write to .blue/repos/blue/docs/rfcs/ instead of .blue/docs/rfcs/? + +## Root Cause + +The bug was caused by coexistence of OLD and NEW directory structures: +- OLD: `.blue/repos/blue/docs/`, `.blue/data/blue/blue.db` +- NEW: `.blue/docs/`, `.blue/blue.db` + +When `detect_blue()` runs: +1. It sees `.blue/repos/` or `.blue/data/` exist +2. Calls `migrate_to_new_structure()` +3. 
Migration is a NO-OP because new paths already exist +4. Returns `BlueHome::new(root)` which sets correct paths + +**However**, the MCP server caches `ProjectState` in `self.state`. If the server was started when old structure was the only structure, the cached state has old paths. The state only resets when `cwd` changes. + +## Evidence + +1. RFC created at `.blue/repos/blue/docs/rfcs/` (wrong) +2. Spike created at `.blue/docs/spikes/` (correct) +3. `detect_blue()` now returns correct paths +4. Old DB (`.blue/data/blue/blue.db`) was modified at 16:28 +5. New DB (`.blue/blue.db`) was modified at 16:01 + +The RFC was stored in the old database because the MCP server had cached the old state. + +## Fix Applied + +Removed old structure directories: +```bash +rm -rf .blue/repos .blue/data +``` + +This prevents the migration code path from triggering and ensures only new paths are used. + +## Recommendations + +1. Migration should DELETE old directories after migration completes, not leave them as orphans +2. Or: `detect_blue()` should always use new paths and ignore old structure once new structure exists +3. Consider adding a version marker file (`.blue/version`) to distinguish structure versions + +--- + +*"The old paths were ghosts. We exorcised them."* + +— Blue diff --git a/.blue/repos/blue/docs/rfcs/0001-dialogue-sqlite-metadata.md b/.blue/repos/blue/docs/rfcs/0001-dialogue-sqlite-metadata.md deleted file mode 100644 index e9f60d1..0000000 --- a/.blue/repos/blue/docs/rfcs/0001-dialogue-sqlite-metadata.md +++ /dev/null @@ -1,198 +0,0 @@ -# RFC 0001: Dialogue SQLite Metadata - -| | | -|---|---| -| **Status** | Draft | -| **Date** | 2026-01-24 | -| **Source Spike** | sqlite-storage-expansion | - ---- - -## Summary - -Dialogue files (.dialogue.md) are not indexed in SQLite. Can't query them, link them to RFCs, or track relationships. Need to add DocType::Dialogue and store metadata while keeping content in markdown. 
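To make the metadata-in-SQLite, content-in-markdown split concrete, here is a minimal sketch. The type and helper names (`DialogueMeta`, `metadata_rows`) are hypothetical illustrations, not part of the proposed API:

```rust
// Sketch only: DialogueMeta and metadata_rows are hypothetical names,
// not part of Blue's actual API.
struct DialogueMeta {
    date: String,
    participants: Vec<String>,
    linked_rfc: Option<String>, // optional link to a related RFC
}

// Rows destined for the metadata table; the transcript itself
// stays in the .dialogue.md file.
fn metadata_rows(meta: &DialogueMeta) -> Vec<(String, String)> {
    let mut rows = vec![
        ("date".to_string(), meta.date.clone()),
        ("participants".to_string(), meta.participants.join(", ")),
    ];
    if let Some(rfc) = &meta.linked_rfc {
        rows.push(("linked_rfc".to_string(), rfc.clone()));
    }
    rows
}

fn main() {
    let meta = DialogueMeta {
        date: "2026-01-24".to_string(),
        participants: vec!["Claude".to_string(), "Eric".to_string()],
        linked_rfc: Some("cross-repo-realms".to_string()),
    };
    assert_eq!(metadata_rows(&meta).len(), 3);
}
```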
- -## Background - -Dialogues are transcripts of conversations - different from RFCs/spikes which are living documents with status transitions. - -Current state: -- Dialogues exist as `.dialogue.md` files in `docs/dialogues/` -- No SQLite tracking -- No way to search or link them - -## Proposal - -### 1. Add DocType::Dialogue - -```rust -pub enum DocType { - Rfc, - Spike, - Adr, - Decision, - Prd, - Postmortem, - Runbook, - Dialogue, // NEW -} -``` - -### 2. Dialogue Metadata (SQLite) - -Store in `documents` table: -- `doc_type`: "dialogue" -- `title`: Dialogue title -- `status`: "complete" (dialogues don't have status transitions) -- `file_path`: Path to .dialogue.md file - -Store in `metadata` table: -- `date`: When dialogue occurred -- `participants`: Who was involved (e.g., "Claude, Eric") -- `linked_rfc`: RFC this dialogue relates to (optional) -- `topic`: Short description of what was discussed - -### 3. New Tool: `blue_dialogue_create` - -``` -blue_dialogue_create title="realm-design-session" linked_rfc="cross-repo-realms" -``` - -Creates: -- Entry in documents table -- Metadata entries -- Skeleton .dialogue.md file - -### 4. Dialogue File Format - -```markdown -# Dialogue: Realm Design Session - -| | | -|---|---| -| **Date** | 2026-01-24 | -| **Participants** | Claude, Eric | -| **Topic** | Designing cross-repo coordination | -| **Linked RFC** | [cross-repo-realms](../rfcs/0001-cross-repo-realms.md) | - ---- - -## Context - -[Why this dialogue happened] - -## Key Decisions - -- Decision 1 -- Decision 2 - -## Transcript - -[Full conversation or summary] - ---- - -*Extracted by Blue* -``` - -### 5. Keep Content in Markdown - -Unlike other doc types, dialogue content stays primarily in markdown: -- Full transcripts can be large -- Human-readable format preferred -- Git diff friendly - -SQLite stores metadata only for: -- Fast searching -- Relationship tracking -- Listing/filtering - -### 6. 
New Tool: `blue_dialogue_get` - -``` -blue_dialogue_get title="realm-design-session" -``` - -Returns dialogue metadata and file path. - -### 7. New Tool: `blue_dialogue_list` - -``` -blue_dialogue_list linked_rfc="cross-repo-realms" -``` - -Returns all dialogues, optionally filtered by linked RFC. - -### 8. Integration with `blue_extract_dialogue` - -Existing `blue_extract_dialogue` extracts text from Claude JSONL. Extend to: - -``` -blue_extract_dialogue task_id="abc123" save_as="realm-design-session" linked_rfc="cross-repo-realms" -``` - -- Extract dialogue from JSONL -- Create .dialogue.md file -- Register in SQLite with metadata - -### 9. Migration of Existing Dialogues - -On first run, scan `docs/dialogues/` for `.dialogue.md` files: -- Parse frontmatter for metadata -- Register in documents table -- Preserve file locations - -## Security Note - -Dialogues may contain sensitive information discussed during development. Before committing: -- Review for credentials, API keys, or secrets -- Use `[REDACTED]` for sensitive values -- Consider if full transcript is needed vs summary - -## Example Transcript Section - -```markdown -## Transcript - -**Eric**: How should we handle authentication for the API? - -**Claude**: I'd recommend JWT tokens with short expiry. Here's why: -1. Stateless - no session storage needed -2. Can include claims for authorization -3. Easy to invalidate by changing signing key - -**Eric**: What about refresh tokens? - -**Claude**: Store refresh tokens in httpOnly cookies. When access token expires, -use refresh endpoint to get new pair. This balances security with UX. - -**Decision**: Use JWT + refresh token pattern. -``` - -## Implementation - -1. Add `DocType::Dialogue` to enum -2. Create `blue_dialogue_create` handler -3. Create `blue_dialogue_list` handler -4. Update `blue_search` to include dialogues -5. 
Add dialogue markdown generation - -## Test Plan - -- [ ] Create dialogue with metadata -- [ ] Link dialogue to RFC -- [ ] Dialogue without linked RFC works -- [ ] Search finds dialogues by title/topic -- [ ] List dialogues by RFC works -- [ ] List all dialogues works -- [ ] Get specific dialogue returns metadata -- [ ] Dialogue content stays in markdown -- [ ] Metadata stored in SQLite -- [ ] Existing dialogues migrated on first run -- [ ] Extract dialogue from JSONL creates proper entry - ---- - -*"Right then. Let's get to it."* - -— Blue diff --git a/.blue/repos/blue/docs/rfcs/0002-runbook-action-lookup.md b/.blue/repos/blue/docs/rfcs/0002-runbook-action-lookup.md deleted file mode 100644 index 9d2c190..0000000 --- a/.blue/repos/blue/docs/rfcs/0002-runbook-action-lookup.md +++ /dev/null @@ -1,227 +0,0 @@ -# RFC 0002: Runbook Action Lookup - -| | | -|---|---| -| **Status** | Draft | -| **Date** | 2026-01-24 | -| **Source Spike** | runbook-driven-actions | - ---- - -## Summary - -No way to discover and follow runbooks when performing repo actions. Claude guesses instead of following documented procedures for docker builds, deploys, releases, etc. - -## Proposal - -### 1. Action Tags in Runbooks - -Add `actions` field to runbook frontmatter: - -```markdown -# Runbook: Docker Build - -| | | -|---|---| -| **Status** | Active | -| **Actions** | docker build, build image, container build | -``` - -Store actions in SQLite metadata table for fast lookup. - -### 2. 
New Tool: `blue_runbook_lookup` - -``` -blue_runbook_lookup action="docker build" -``` - -Returns structured response: - -```json -{ - "found": true, - "runbook": { - "title": "Docker Build", - "file": ".blue/docs/runbooks/docker-build.md", - "actions": ["docker build", "build image", "container build"], - "operations": [ - { - "name": "Build Production Image", - "steps": ["...", "..."], - "verification": "docker images | grep myapp", - "rollback": "docker rmi myapp:latest" - } - ] - }, - "hint": "Follow the steps above. Use verification to confirm success." -} -``` - -If no match: `{ "found": false, "hint": "No runbook found. Proceed with caution." }` - -### 3. New Tool: `blue_runbook_actions` - -List all registered actions: - -``` -blue_runbook_actions -``` - -Returns: -```json -{ - "actions": [ - { "action": "docker build", "runbook": "Docker Build" }, - { "action": "deploy staging", "runbook": "Deployment" }, - { "action": "run tests", "runbook": "Testing" } - ] -} -``` - -### 4. Matching Algorithm - -Word-based matching with priority: - -1. **Exact match** - "docker build" matches "docker build" (100%) -2. **All words match** - "docker" matches "docker build" (90%) -3. **Partial words** - "build" matches "docker build" (80%) - -If multiple runbooks match, return highest priority. Ties broken by most specific (more words in action). - -### 5. Schema - -```sql --- In metadata table -INSERT INTO metadata (document_id, key, value) -VALUES (runbook_id, 'action', 'docker build'); - --- Multiple actions = multiple rows -INSERT INTO metadata (document_id, key, value) -VALUES (runbook_id, 'action', 'build image'); -``` - -### 6. Update `blue_runbook_create` - -``` -blue_runbook_create title="Docker Build" actions=["docker build", "build image"] -``` - -- Accept `actions` array parameter -- Store each action in metadata table -- Include in generated markdown - -### 7. 
CLAUDE.md Guidance - -Document the pattern for repos: - -```markdown -## Runbooks - -Before executing build, deploy, or release operations: - -1. Check for runbook: `blue_runbook_lookup action="docker build"` -2. If found, follow the documented steps -3. Use verification commands to confirm success -4. If something fails, check rollback procedures - -Available actions: `blue_runbook_actions` -``` - -## Security Note - -Runbooks should **never** contain actual credentials or secrets. Use placeholders: - -```markdown -**Steps**: -1. Export credentials: `export API_KEY=$YOUR_API_KEY` -2. Run deploy: `./deploy.sh` -``` - -Not: -```markdown -**Steps**: -1. Run deploy: `API_KEY=abc123 ./deploy.sh` # WRONG! -``` - -## Example Runbook - -```markdown -# Runbook: Docker Build - -| | | -|---|---| -| **Status** | Active | -| **Actions** | docker build, build image, container build | -| **Owner** | Platform Team | - ---- - -## Overview - -Build and tag Docker images for the application. - -## Prerequisites - -- [ ] Docker installed and running -- [ ] Access to container registry -- [ ] `.env` file configured - -## Common Operations - -### Operation: Build Production Image - -**When to use**: Preparing for deployment - -**Steps**: -1. Ensure on correct branch: `git branch --show-current` -2. Pull latest: `git pull origin main` -3. Build image: `docker build -t myapp:$(git rev-parse --short HEAD) .` -4. Tag as latest: `docker tag myapp:$(git rev-parse --short HEAD) myapp:latest` - -**Verification**: -```bash -docker images | grep myapp -docker run --rm myapp:latest --version -``` - -**Rollback**: -```bash -docker rmi myapp:latest -docker tag myapp:previous myapp:latest -``` - -## Troubleshooting - -### Symptom: Build fails with "no space left" - -**Resolution**: -1. `docker system prune -a` -2. Retry build -``` - -## Implementation - -1. Add `actions` parameter to `blue_runbook_create` -2. Store actions in metadata table -3. 
Implement `blue_runbook_lookup` with matching algorithm -4. Implement `blue_runbook_actions` for discovery -5. Parse runbook markdown to extract operations -6. Update runbook markdown generation - -## Test Plan - -- [ ] Create runbook with actions tags -- [ ] Lookup by exact action match -- [ ] Lookup by partial match (word subset) -- [ ] No match returns gracefully -- [ ] Multiple runbooks - highest priority wins -- [ ] List all actions works -- [ ] Actions stored in SQLite metadata -- [ ] Operations parsed from markdown correctly -- [ ] Malformed runbook returns partial data gracefully - ---- - -*"Right then. Let's get to it."* - -— Blue diff --git a/.blue/repos/blue/docs/rfcs/0003-per-repo-blue-folders.md b/.blue/repos/blue/docs/rfcs/0003-per-repo-blue-folders.md deleted file mode 100644 index 2981dcc..0000000 --- a/.blue/repos/blue/docs/rfcs/0003-per-repo-blue-folders.md +++ /dev/null @@ -1,155 +0,0 @@ -# RFC 0003: Per Repo Blue Folders - -| | | -|---|---| -| **Status** | Draft | -| **Date** | 2026-01-24 | -| **Source Spike** | per-repo-blue-folder | - ---- - -## Summary - -Currently all docs flow to one central .blue folder. Each repo should have its own .blue folder so docs live with code and git tracking works naturally. - -## Current Behavior - -``` -blue/ # Central repo -├── .blue/ -│ ├── repos/ -│ │ ├── blue/docs/... # Blue's docs -│ │ └── other-repo/docs/ # Other repo's docs (wrong!) -│ └── data/ -│ └── blue/blue.db -``` - -All repos' docs end up in the blue repo's `.blue/repos/`. - -## Proposed Behavior - -``` -repo-a/ -├── .blue/ -│ ├── docs/ -│ │ ├── rfcs/ -│ │ ├── spikes/ -│ │ └── runbooks/ -│ └── blue.db -└── src/... - -repo-b/ -├── .blue/ -│ ├── docs/... -│ └── blue.db -└── src/... -``` - -Each repo has its own `.blue/` with its own docs and database. - -## Changes Required - -### 1. 
Simplify BlueHome structure - -```rust -pub struct BlueHome { - pub root: PathBuf, // Repo root - pub blue_dir: PathBuf, // .blue/ - pub docs_path: PathBuf, // .blue/docs/ - pub db_path: PathBuf, // .blue/blue.db -} -``` - -### 2. Change detect_blue behavior - -- Find git repo root for current directory -- Look for `.blue/` there (don't search upward beyond repo) -- Auto-create on first blue command (no `blue init` required) - -**Edge cases:** -- No git repo: Create `.blue/` in current directory with warning -- Monorepo: One `.blue/` at git root (packages share it) -- Subdirectory: Always resolve to git root - -### 3. Flatten docs structure - -Before: `.blue/repos//docs/rfcs/` -After: `.blue/docs/rfcs/` - -No need for project subdirectory when per-repo. - -### 4. Migration - -Automatic on first run: - -1. Detect old structure (`.blue/repos/` exists) -2. Find docs for current project in `.blue/repos//docs/` -3. Move to `.blue/docs/` -4. Migrate database entries -5. Clean up empty directories -6. Log what was migrated - -**Conflict resolution:** If docs exist in both locations, prefer newer by mtime. - -## Git Tracking - -Repos should commit their `.blue/` folder: - -**Track:** -- `.blue/docs/**` - RFCs, spikes, runbooks, etc. -- `.blue/blue.db` - SQLite database (source of truth) -- `.blue/config.yaml` - Configuration - -**Gitignore:** -- `.blue/*.db-shm` - SQLite shared memory (transient) -- `.blue/*.db-wal` - SQLite write-ahead log (transient) - -Recommended `.gitignore` addition: -``` -# Blue transient files -.blue/*.db-shm -.blue/*.db-wal -``` - -## Cross-Repo Coordination - -The daemon/realm system (RFC 0001) handles cross-repo concerns: -- Central daemon tracks active sessions -- Realms coordinate contracts between repos -- Each repo remains self-contained - -## FAQ - -**Q: Do I need to run `blue init`?** -A: No. Blue auto-creates `.blue/` on first command. - -**Q: What about my existing docs in the central location?** -A: Auto-migrated on first run. 
Check git status to verify. - -**Q: Should I commit `.blue/blue.db`?** -A: Yes. It's the source of truth for your project's Blue state. - -**Q: What if I'm in a monorepo?** -A: One `.blue/` at the git root. All packages share it. - -**Q: Can I use Blue without git?** -A: Yes, but with a warning. `.blue/` created in current directory. - -**Q: How do I see cross-repo status?** -A: Use `blue realm_status` (requires daemon running). - -## Test Plan - -- [ ] New repo gets `.blue/` on first blue command -- [ ] Docs created in repo's own `.blue/docs/` -- [ ] Database at `.blue/blue.db` -- [ ] Old structure migrated automatically -- [ ] Realm/daemon still works across repos -- [ ] No git repo falls back gracefully with warning -- [ ] Monorepo uses single `.blue/` at root - ---- - -*"Right then. Let's get to it."* - -— Blue diff --git a/.blue/repos/blue/docs/rfcs/0004-adr-adherence.md b/.blue/repos/blue/docs/rfcs/0004-adr-adherence.md deleted file mode 100644 index c198fa9..0000000 --- a/.blue/repos/blue/docs/rfcs/0004-adr-adherence.md +++ /dev/null @@ -1,363 +0,0 @@ -# RFC 0004: ADR Adherence - -| | | -|---|---| -| **Status** | Draft | -| **Date** | 2026-01-24 | -| **Source Spike** | adr-adherence | -| **ADRs** | 0004 (Evidence), 0007 (Integrity), 0008 (Honor) | - ---- - -## Summary - -No mechanism to surface relevant ADRs during work, track ADR citations, or verify adherence to testable architectural decisions. - -## Philosophy - -**Guide, don't block.** ADRs are beliefs, not bureaucracy. Blue should: -- Help you find relevant ADRs -- Make citing ADRs easy -- Verify testable ADRs optionally -- Never require ADR approval to proceed - -## Proposal - -### Layer 1: Awareness (Passive) - -#### `blue_adr_list` - -List all ADRs with summaries: - -``` -blue_adr_list -``` - -Returns: -```json -{ - "adrs": [ - { "number": 0, "title": "Never Give Up", "summary": "The only rule we need" }, - { "number": 4, "title": "Evidence", "summary": "Show, don't tell" }, - ... 
- ] -} -``` - -#### `blue_adr_get` - -Get full ADR content: - -``` -blue_adr_get number=4 -``` - -Returns ADR markdown and metadata. - -### Layer 2: Contextual Relevance (Active) - -#### `blue_adr_relevant` - -Given context, use AI to suggest relevant ADRs: - -``` -blue_adr_relevant context="testing strategy" -``` - -Returns: -```json -{ - "relevant": [ - { - "number": 4, - "title": "Evidence", - "confidence": 0.95, - "why": "Testing is the primary form of evidence that code works. This ADR's core principle 'show, don't tell' directly applies to test strategy decisions." - }, - { - "number": 7, - "title": "Integrity", - "confidence": 0.82, - "why": "Tests verify structural wholeness - that the system holds together under various conditions." - } - ] -} -``` - -**AI-Powered Relevance:** - -Keyword matching fails for philosophical ADRs. "Courage" won't match "deleting legacy code" even though ADR 0009 is highly relevant. - -The AI evaluator: -1. Receives the full context (RFC title, problem, code diff, etc.) -2. Reads all ADR content (cached in prompt) -3. Determines semantic relevance with reasoning -4. Returns confidence scores and explanations - -**Prompt Structure:** - -``` -You are evaluating which ADRs are relevant to this work. - -Context: {user_context} - -ADRs: -{all_adr_summaries} - -For each ADR, determine: -1. Is it relevant? (yes/no) -2. Confidence (0.0-1.0) -3. Why is it relevant? (1-2 sentences) - -Only return ADRs with confidence > 0.7. 
-``` - -**Model Selection:** -- Use fast/cheap model (Haiku) for relevance checks -- Results are suggestions, not authoritative -- User can override or ignore - -**Graceful Degradation:** - -| Condition | Behavior | -|-----------|----------| -| API key configured, API up | AI relevance (default) | -| API key configured, API down | Fallback to keywords + warning | -| No API key | Keywords only (no warning) | -| `--no-ai` flag | Keywords only (explicit) | - -**Response Metadata:** - -```json -{ - "method": "ai", // or "keyword" - "cached": false, - "latency_ms": 287, - "relevant": [...] -} -``` - -**Privacy:** -- Only context string sent to API (not code, not files) -- No PII should be in context string -- User controls what context to send - -#### RFC ADR Suggestions - -When creating an RFC, Blue suggests relevant ADRs based on title/problem: - -``` -blue_rfc_create title="testing-framework" ... - -→ "Consider these ADRs: 0004 (Evidence), 0010 (No Dead Code)" -``` - -#### ADR Citations in Documents - -RFCs can cite ADRs in frontmatter: - -```markdown -| **ADRs** | 0004, 0007, 0010 | -``` - -Or inline: - -```markdown -Per ADR 0004 (Evidence), we require test coverage > 80%. -``` - -### Layer 3: Lightweight Verification (Optional) - -#### `blue_adr_audit` - -Scan for potential ADR violations. 
Only for testable ADRs: - -``` -blue_adr_audit -``` - -Returns: -```json -{ - "findings": [ - { - "adr": 10, - "title": "No Dead Code", - "type": "warning", - "message": "3 unused exports in src/utils.rs", - "locations": ["src/utils.rs:45", "src/utils.rs:67", "src/utils.rs:89"] - }, - { - "adr": 4, - "title": "Evidence", - "type": "info", - "message": "Test coverage at 72% (threshold: 80%)" - } - ], - "passed": [ - { "adr": 5, "title": "Single Source", "message": "No duplicate definitions found" } - ] -} -``` - -**Testable ADRs:** - -| ADR | Check | -|-----|-------| -| 0004 Evidence | Test coverage, assertion ratios | -| 0005 Single Source | Duplicate definitions, copy-paste detection | -| 0010 No Dead Code | Unused exports, unreachable branches | - -**Non-testable ADRs** (human judgment): - -| ADR | Guidance | -|-----|----------| -| 0001 Purpose | Does this serve meaning? | -| 0002 Presence | Are we actually here? | -| 0009 Courage | Are we acting rightly? | -| 0013 Overflow | Building from fullness? | - -### Layer 4: Documentation Trail - -#### ADR-Document Links - -Store citations in `document_links` table: - -```sql -INSERT INTO document_links (source_id, target_id, link_type) -VALUES (rfc_id, adr_doc_id, 'cites_adr'); -``` - -#### Search by ADR - -``` -blue_search query="adr:0004" -``` - -Returns all documents citing ADR 0004. 
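Recognizing the `adr:NNNN` query form is a small parse step; a sketch (function name hypothetical, not the actual `blue_search` implementation):

```rust
// Hypothetical sketch: detect the `adr:NNNN` search syntax described above.
// Returns the ADR number if the query is an ADR lookup, None otherwise.
fn parse_adr_query(query: &str) -> Option<u32> {
    query.strip_prefix("adr:")?.parse().ok()
}

fn main() {
    assert_eq!(parse_adr_query("adr:0004"), Some(4));
    assert_eq!(parse_adr_query("testing strategy"), None);
}
```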
- -#### ADR "Referenced By" - -``` -blue_adr_get number=4 -``` - -Includes: -```json -{ - "referenced_by": [ - { "type": "rfc", "title": "testing-framework", "date": "2026-01-20" }, - { "type": "decision", "title": "require-integration-tests", "date": "2026-01-15" } - ] -} -``` - -## ADR Metadata Enhancement - -Add to each ADR: - -```markdown -## Applies When - -- Writing or modifying tests -- Reviewing pull requests -- Evaluating technical claims - -## Anti-Patterns - -- Claiming code works without tests -- Trusting documentation over running code -- Accepting "it works on my machine" -``` - -This gives the AI richer context for relevance matching. Anti-patterns are particularly useful - they help identify when work might be *violating* an ADR. - -## Implementation - -1. Add ADR document type and loader -2. Implement `blue_adr_list` and `blue_adr_get` -3. **Implement AI relevance evaluator:** - - Load all ADRs into prompt context - - Send context + ADRs to LLM (Haiku for speed/cost) - - Parse structured response with confidence scores - - Cache ADR summaries to minimize token usage -4. Implement `blue_adr_relevant` using AI evaluator -5. Add ADR citation parsing to RFC creation -6. Implement `blue_adr_audit` for testable ADRs -7. Add "referenced_by" to ADR responses -8. 
Extend `blue_search` for ADR queries - -**AI Integration Notes:** - -- Blue MCP server needs LLM access (API key in `.blue/config.yaml`) -- Use streaming for responsiveness -- Fallback to keyword matching if AI unavailable -- Cache relevance results per context hash (5 min TTL) - -**Caching Strategy:** - -```sql -CREATE TABLE adr_relevance_cache ( - context_hash TEXT PRIMARY KEY, - adr_versions_hash TEXT, -- Invalidate if ADRs change - result_json TEXT, - created_at TEXT, - expires_at TEXT -); -``` - -**Testing AI Relevance:** - -- Golden test cases with expected ADRs (fuzzy match) -- Confidence thresholds: 0004 should be > 0.8 for "testing" -- Mock AI responses in unit tests -- Integration tests hit real API (rate limited) - -## Test Plan - -- [ ] List all ADRs returns correct count and summaries -- [ ] Get specific ADR returns full content -- [ ] AI relevance: "testing" context suggests 0004 (Evidence) -- [ ] AI relevance: "deleting old code" suggests 0009 (Courage), 0010 (No Dead Code) -- [ ] AI relevance: confidence scores are reasonable (0.7-1.0 range) -- [ ] AI relevance: explanations are coherent -- [ ] Fallback: keyword matching works when AI unavailable -- [ ] RFC with `| **ADRs** | 0004 |` creates document link -- [ ] Search `adr:0004` finds citing documents -- [ ] Audit detects unused exports (ADR 0010) -- [ ] Audit reports test coverage (ADR 0004) -- [ ] Non-testable ADRs not included in audit findings -- [ ] Caching: repeated same context uses cached result -- [ ] Cache invalidation: ADR content change clears relevant cache -- [ ] `--no-ai` flag forces keyword matching -- [ ] Response includes method (ai/keyword), cached, latency -- [ ] Graceful degradation when API unavailable - -## FAQ - -**Q: Will this block my PRs?** -A: No. All ADR features are informational. Nothing blocks. - -**Q: Do I have to cite ADRs in every RFC?** -A: No. Citations are optional but encouraged for significant decisions. 
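The context-hash caching described in the implementation notes above could be keyed as in this sketch. The hash choice and function name are illustrative only; the real implementation would pick a stable hash for persistence:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch of the relevance-cache key: the same context with
// the same ADR set hits the same row; ADR changes invalidate the entry.
fn cache_key(context: &str, adr_versions: &str) -> String {
    let mut h = DefaultHasher::new();
    context.hash(&mut h);
    adr_versions.hash(&mut h);
    format!("{:016x}", h.finish())
}

fn main() {
    let a = cache_key("testing strategy", "adrs-v1");
    let b = cache_key("testing strategy", "adrs-v1");
    let c = cache_key("testing strategy", "adrs-v2");
    assert_eq!(a, b); // stable for identical inputs
    assert_ne!(a, c); // ADR content change produces a new key
}
```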
- -**Q: What if I disagree with an ADR?** -A: ADRs can be superseded. Create a new ADR documenting why. - -**Q: How do I add a new ADR?** -A: `blue_adr_create` (future work) or manually add to `docs/adrs/`. - -**Q: Why use AI for relevance instead of keywords?** -A: Keywords fail for philosophical ADRs. "Courage" won't match "deleting legacy code" but ADR 0009 is highly relevant. AI understands semantic meaning. - -**Q: What if I don't have an API key configured?** -A: Falls back to keyword matching. Less accurate but still functional. - -**Q: How much does the AI relevance check cost?** -A: Uses Haiku (~$0.00025 per check). Cached for 5 minutes per unique context. - ---- - -*"The beliefs that guide us, made visible."* - -— Blue diff --git a/.blue/repos/blue/docs/rfcs/0005-local-llm-integration.md b/.blue/repos/blue/docs/rfcs/0005-local-llm-integration.md deleted file mode 100644 index bb736ec..0000000 --- a/.blue/repos/blue/docs/rfcs/0005-local-llm-integration.md +++ /dev/null @@ -1,1044 +0,0 @@ -# RFC 0005: Local Llm Integration - -| | | -|---|---| -| **Status** | Draft | -| **Date** | 2026-01-24 | -| **Source Spike** | local-llm-integration, agentic-cli-integration | - ---- - -## Summary - -Blue needs local LLM capabilities for: -1. **Semantic tasks** - ADR relevance, runbook matching, dialogue summarization (lightweight, fast) -2. **Agentic coding** - Full code generation via Goose integration (heavyweight, powerful) - -Unified approach: **Ollama as shared backend** + **Goose for agentic tasks** + **Blue's LlmProvider for semantic tasks**. - -Must support CUDA > MPS > CPU backend priority. 
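The CUDA > MPS > CPU priority named in the summary reduces to a simple preference order; a sketch (enum and function names hypothetical):

```rust
// Hypothetical sketch of the backend priority from the summary:
// prefer CUDA, then Metal/MPS, then fall back to CPU.
#[derive(Debug, PartialEq)]
enum Backend {
    Cuda,
    Mps,
    Cpu,
}

fn pick_backend(has_cuda: bool, has_mps: bool) -> Backend {
    if has_cuda {
        Backend::Cuda
    } else if has_mps {
        Backend::Mps
    } else {
        Backend::Cpu
    }
}

fn main() {
    assert_eq!(pick_backend(true, true), Backend::Cuda);
    assert_eq!(pick_backend(false, true), Backend::Mps);
    assert_eq!(pick_backend(false, false), Backend::Cpu);
}
```

In practice Ollama performs this detection itself; Blue only needs the ordering when the user forces a backend in config.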
- -## Background - -### Two Use Cases - -| Use Case | Latency | Complexity | Tool | -|----------|---------|------------|------| -| **Semantic tasks** | <500ms | Short prompts, structured output | Blue internal | -| **Agentic coding** | Minutes | Multi-turn, code generation | Goose | - -### Blue's Semantic Tasks - -| Feature | RFC | Need | -|---------|-----|------| -| ADR Relevance | 0004 | Match context to philosophical ADRs | -| Runbook Lookup | 0002 | Semantic action matching | -| Dialogue Summary | 0001 | Extract key decisions | - -### Why Local LLM? - -- **Privacy**: No data leaves the machine -- **Cost**: Zero per-query cost after model download -- **Speed**: Sub-second latency for short tasks -- **Offline**: Works without internet - -### Why Embed Ollama? - -| Approach | Pros | Cons | -|----------|------|------| -| llama-cpp-rs | Rust-native | Build complexity, no model management | -| Ollama (external) | Easy setup | User must install separately | -| **Ollama (embedded)** | Single install, full features | Larger binary, Go dependency | - -**Embedded Ollama wins because:** -1. **Single install** - `cargo install blue` gives you everything -2. **Model management built-in** - pull, list, remove models -3. **Goose compatibility** - Goose connects to Blue's embedded Ollama -4. **Battle-tested** - Ollama handles CUDA/MPS/CPU, quantization, context -5. **One model, all uses** - Semantic tasks + agentic coding share model - -### Ollama Version - -Blue embeds a specific, tested Ollama version: - -| Blue Version | Ollama Version | Release Date | -|--------------|----------------|--------------| -| 0.1.x | 0.5.4 | 2026-01 | - -Version pinned in `build.rs`. Updated via Blue releases, not automatically. - -## Proposal - -### 1. 
LlmProvider Trait - -```rust -#[async_trait] -pub trait LlmProvider: Send + Sync { - async fn complete(&self, prompt: &str, options: &CompletionOptions) -> Result; - fn name(&self) -> &str; -} - -pub struct CompletionOptions { - pub max_tokens: usize, - pub temperature: f32, - pub stop_sequences: Vec, -} -``` - -### 2. Implementations - -```rust -pub enum LlmBackend { - Ollama(OllamaLlm), // Embedded Ollama server - Api(ApiLlm), // External API fallback - Mock(MockLlm), // Testing -} -``` - -**OllamaLlm**: Embedded Ollama server managed by Blue -**ApiLlm**: Uses Anthropic/OpenAI APIs (fallback) -**MockLlm**: Returns predefined responses (testing) - -### 2.1 Embedded Ollama Architecture - -``` -┌─────────────────────────────────────────────────────────┐ -│ Blue CLI │ -├─────────────────────────────────────────────────────────┤ -│ blue-ollama (embedded) │ -│ ├── Ollama server (Go, compiled to lib) │ -│ ├── Model management (pull, list, remove) │ -│ └── HTTP API on localhost:11434 │ -├─────────────────────────────────────────────────────────┤ -│ Consumers: │ -│ ├── Blue semantic tasks (ADR relevance, etc.) │ -│ ├── Goose (connects to localhost:11434) │ -│ └── Any Ollama-compatible client │ -└─────────────────────────────────────────────────────────┘ -``` - -**Embedding Strategy:** - -```rust -// blue-ollama crate -pub struct EmbeddedOllama { - process: Option, - port: u16, - models_dir: PathBuf, -} - -impl EmbeddedOllama { - /// Start embedded Ollama server - pub async fn start(&mut self) -> Result<()> { - // Ollama binary bundled in Blue release - let ollama_bin = Self::bundled_binary_path(); - - self.process = Some( - Command::new(ollama_bin) - .env("OLLAMA_MODELS", &self.models_dir) - .env("OLLAMA_HOST", format!("127.0.0.1:{}", self.port)) - .spawn()? - ); - - self.wait_for_ready().await - } - - /// Stop embedded server - pub async fn stop(&mut self) -> Result<()> { - if let Some(mut proc) = self.process.take() { - proc.kill()?; - } - Ok(()) - } -} -``` - -### 3. 
Backend Priority (CUDA > MPS > CPU) - -**Ollama handles this automatically.** Ollama detects GPU at runtime: - -| Platform | Backend | Detection | -|----------|---------|-----------| -| NVIDIA GPU | CUDA | Auto-detected via driver | -| Apple Silicon | **Metal (MPS)** | Auto-detected on M1/M2/M3/M4 | -| AMD GPU | ROCm | Auto-detected on Linux | -| No GPU | CPU | Fallback | - -```bash -# Ollama auto-detects best backend -ollama run qwen2.5:7b # Uses CUDA → Metal → ROCm → CPU -``` - -**Apple Silicon (M1/M2/M3/M4):** -- Ollama uses Metal Performance Shaders (MPS) automatically -- No configuration needed - just works -- Full GPU acceleration on unified memory - -**Blue just starts Ollama and lets it choose:** - -```rust -impl EmbeddedOllama { - pub async fn start(&mut self) -> Result<()> { - let mut cmd = Command::new(Self::bundled_binary_path()); - - // Force specific backend if configured - match self.config.backend { - BackendChoice::Cuda => { - cmd.env("CUDA_VISIBLE_DEVICES", "0"); - cmd.env("OLLAMA_NO_METAL", "1"); // Prefer CUDA over Metal - } - BackendChoice::Mps => { - // Metal/MPS on Apple Silicon (default on macOS) - cmd.env("CUDA_VISIBLE_DEVICES", ""); // Disable CUDA - } - BackendChoice::Cpu => { - cmd.env("CUDA_VISIBLE_DEVICES", ""); // Disable CUDA - cmd.env("OLLAMA_NO_METAL", "1"); // Disable Metal/MPS - } - BackendChoice::Auto => { - // Let Ollama decide: CUDA → MPS → ROCm → CPU - } - } - - self.process = Some(cmd.spawn()?); - self.wait_for_ready().await - } -} -``` - -**Backend verification:** - -```rust -impl EmbeddedOllama { - pub async fn detected_backend(&self) -> Result<String> { - // Query Ollama for what it's using - let resp = self.client.get("/api/version").await?; - // Returns: {"version": "0.5.4", "gpu": "cuda"} or "metal" or "cpu" - Ok(resp.gpu) - } -} -``` - -### 4. Configuration - -**Default: API (easier setup)** - -New users get API by default - just set an env var: - -```bash -export ANTHROPIC_API_KEY=sk-... -# That's it. Blue works. 
-``` - -**Opt-in: Local (better privacy/cost)** - -```bash -blue_model_download name="qwen2.5-7b" -# Edit .blue/config.yaml to prefer local -``` - -**Full Configuration:** - -```yaml -# .blue/config.yaml -llm: - provider: auto # auto | local | api | none - - # auto (default): Use local if model exists, else API, else keywords - # local: Only use local, fail if unavailable - # api: Only use API, fail if unavailable - # none: Disable AI features entirely - - local: - model: qwen2.5-7b # Shorthand, resolves to full path - # Or explicit: model_path: ~/.blue/models/qwen2.5-7b-instruct-q4_k_m.gguf - backend: auto # cuda | mps | cpu | auto - context_size: 8192 - threads: 8 # for CPU backend - - api: - provider: anthropic # anthropic | openai - model: claude-3-haiku-20240307 - api_key_env: ANTHROPIC_API_KEY # Read from env var -``` - -**Zero-Config Experience:** - -| User State | Behavior | -|------------|----------| -| No config, no env var | Keywords only (works offline) | -| `ANTHROPIC_API_KEY` set | API (easiest) | -| Model downloaded | Local (best) | -| Both available | Local preferred | - -### 5. Model Management (via Embedded Ollama) - -Blue wraps Ollama's model commands: - -``` -blue_model_list # ollama list -blue_model_pull # ollama pull -blue_model_remove # ollama rm -blue_model_info # ollama show -``` - -Model storage: `~/.ollama/models/` (Ollama default, shared with external Ollama) - -**Recommended Models:** - -| Model | Size | Use Case | -|-------|------|----------| -| `qwen2.5:7b` | ~4.4GB | Fast, good quality | -| `qwen2.5:32b` | ~19GB | Best quality | -| `qwen2.5-coder:7b` | ~4.4GB | Code-focused | -| `qwen2.5-coder:32b` | ~19GB | Best for agentic coding | - -**Pull Example:** - -``` -blue_model_pull name="qwen2.5:7b" - -→ Pulling qwen2.5:7b... -→ [████████████████████] 100% (4.4 GB) -→ Model ready. Run: blue_model_info name="qwen2.5:7b" -``` - -**Licensing:** Qwen2.5 models are Apache 2.0 - commercial use permitted. 
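The `provider: auto` resolution order described above (local if the model exists, else API, else keywords) can be sketched as a small decision function. This is a minimal sketch; `ProviderChoice` and `resolve_provider` are illustrative names, not Blue's actual API:

```rust
// Hedged sketch of the zero-config resolution table above.
#[derive(Debug, PartialEq)]
enum ProviderChoice {
    Local,
    Api,
    Keywords,
}

fn resolve_provider(model_downloaded: bool, api_key_set: bool) -> ProviderChoice {
    if model_downloaded {
        ProviderChoice::Local // local preferred when both are available
    } else if api_key_set {
        ProviderChoice::Api // easiest setup: just an env var
    } else {
        ProviderChoice::Keywords // works offline, no AI
    }
}

fn main() {
    assert_eq!(resolve_provider(false, false), ProviderChoice::Keywords);
    assert_eq!(resolve_provider(false, true), ProviderChoice::Api);
    assert_eq!(resolve_provider(true, true), ProviderChoice::Local);
    println!("resolution order ok");
}
```

The point of keeping this a pure function is that the fallback table stays testable without a daemon, a model, or network access.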
- -### 5.1 Goose Integration - -Blue's embedded Ollama serves Goose for agentic coding: - -``` -┌─────────────────────────────────────────────────────────┐ -│ User runs: goose │ -│ ↓ │ -│ Goose connects to localhost:11434 (Blue's Ollama) │ -│ ↓ │ -│ Uses same model Blue uses for semantic tasks │ -└─────────────────────────────────────────────────────────┘ -``` - -**Setup:** - -```bash -# 1. Start Blue (starts embedded Ollama) -blue daemon start - -# 2. Configure Goose to use Blue's Ollama -# ~/.config/goose/config.yaml -provider: ollama -model: qwen2.5-coder:32b -host: http://localhost:11434 - -# 3. Run Goose with Blue's MCP tools -goose --extension "blue mcp" -``` - -**Convenience command:** - -```bash -# Start Goose with Blue pre-configured -blue agent - -# Equivalent to: -# 1. Ensure Blue daemon running (Ollama ready) -# 2. Launch Goose with Blue extension -# 3. Model auto-pulled if missing -``` - -**Shared Model Benefits:** - -| Without Blue | With Blue | -|--------------|-----------| -| Install Ollama separately | Blue bundles Ollama | -| Configure Goose manually | `blue agent` just works | -| Model loaded twice (Ollama + Goose) | One model instance | -| 40GB RAM for two 32B models | 20GB for shared model | - -### 6. Graceful Degradation - -```rust -impl BlueState { - pub async fn get_llm(&self) -> Option<&dyn LlmProvider> { - // Try local first - if let Some(local) = &self.local_llm { - if local.is_ready() { - return Some(local); - } - } - - // Fall back to API - if let Some(api) = &self.api_llm { - return Some(api); - } - - // No LLM available - None - } -} -``` - -| Condition | Behavior | -|-----------|----------| -| Local model loaded | Use local (default) | -| Local unavailable, API configured | Fall back to API + warning | -| Neither available | Keyword matching only | -| `--no-ai` flag | Skip AI entirely | - -### 7. Model Loading Strategy - -**Problem:** Model load takes 5-10 seconds. Can't block MCP calls. 
- -**Solution:** Daemon preloads model on startup. - -```rust -impl EmbeddedOllama { - pub async fn warmup(&self, model: &str) -> Result<()> { - // Send a dummy request to load model into memory - let resp = self.client - .post("/api/generate") - .json(&json!({ - "model": model, - "prompt": "Hi", - "options": { "num_predict": 1 } - })) - .send() - .await?; - - // Model now loaded and warm - Ok(()) - } -} -``` - -**Daemon Startup:** - -```bash -blue daemon start - -→ Starting embedded Ollama... -→ Ollama ready on localhost:11434 -→ Warming up qwen2.5:7b... (5-10 seconds) -→ Model ready. -``` - -**MCP Tool Response During Load:** - -```json -{ - "status": "loading", - "message": "Model loading... Try again in a few seconds.", - "retry_after_ms": 2000 -} -``` - -**Auto-Warmup:** Daemon warms up configured model on start. First MCP request is fast. - -**Manual Warmup:** - -``` -blue_model_warmup model="qwen2.5:32b" # Load specific model -``` - -### 8. Multi-Session Model Handling - -**Question:** What if user has multiple Blue MCP sessions (multiple IDE windows)? - -**Answer:** All sessions share one Ollama instance via `blue daemon`. 
- -``` -┌─────────────────────────────────────────────────────────┐ -│ blue daemon (singleton) │ -│ └── Embedded Ollama (localhost:11434) │ -│ └── Model loaded once (~20GB for 32B) │ -├─────────────────────────────────────────────────────────┤ -│ Blue MCP Session 1 ──┐ │ -│ Blue MCP Session 2 ──┼──→ HTTP to localhost:11434 │ -│ Goose ──┘ │ -└─────────────────────────────────────────────────────────┘ -``` - -**Benefits:** -- One model in memory, not per-session -- Goose shares same model instance -- Daemon manages Ollama lifecycle -- Sessions can come and go - -**Daemon Lifecycle:** - -```bash -blue daemon start # Start Ollama, keep running -blue daemon stop # Stop Ollama -blue daemon status # Check health and GPU info - -# Auto-start: first MCP connection starts daemon if not running -``` - -**Status Output:** - -``` -$ blue daemon status - -Blue Daemon: running -├── Ollama: healthy (v0.5.4) -├── Backend: Metal (MPS) - Apple M4 Max -├── Port: 11434 -├── Models loaded: qwen2.5:32b (19GB) -├── Uptime: 2h 34m -└── Requests served: 1,247 -``` - -### Daemon Health & Recovery - -**Health checks:** - -```rust -impl EmbeddedOllama { - pub async fn health_check(&self) -> Result<HealthStatus> { - match self.client.get("/api/version").await { - Ok(resp) => Ok(HealthStatus::Healthy { - version: resp.version, - gpu: resp.gpu, - }), - Err(e) => Ok(HealthStatus::Unhealthy { error: e.to_string() }), - } - } - - /// Spawned futures must be 'static, so the monitor takes an Arc'd self - pub fn start_health_monitor(self: Arc<Self>) { - tokio::spawn(async move { - loop { - tokio::time::sleep(Duration::from_secs(30)).await; - - if let Ok(HealthStatus::Unhealthy { .. }) = self.health_check().await { - log::warn!("Ollama unhealthy, attempting restart..."); - let _ = self.restart().await; - } - } - }); - } -} -``` - -**Crash recovery:** - -| Scenario | Behavior | -|----------|----------| -| Ollama crashes | Auto-restart within 5 seconds | -| Restart fails 3x | Mark as failed, fall back to API | -| User calls `daemon restart` | Force restart, reset failure count | - -**Graceful shutdown:** - -```rust -impl EmbeddedOllama { - pub async fn stop(&mut self) -> Result<()> { - // Signal Ollama to finish current requests - self.client.post("/api/shutdown").await.ok(); - - // Wait up to 10 seconds for graceful shutdown - tokio::time::timeout( - Duration::from_secs(10), - self.wait_for_exit() - ).await.ok(); - - // Force kill if still running - if let Some(mut proc) = self.process.take() { - proc.kill().ok(); - } - - Ok(()) - } -} -``` - -### Integration Points - -**ADR Relevance (RFC 0004):** -```rust -pub async fn find_relevant_adrs( - llm: &dyn LlmProvider, - context: &str, - adrs: &[AdrSummary], -) -> Result<Vec<AdrRelevance>> { - let prompt = format_relevance_prompt(context, adrs); - let response = llm.complete(&prompt, &RELEVANCE_OPTIONS).await?; - parse_relevance_response(&response) -} -``` - -**Runbook Matching (RFC 0002):** -```rust -pub async fn match_action_semantic( - llm: &dyn LlmProvider, - query: &str, - actions: &[String], -) -> Result<Option<String>> { - // Use LLM to find best semantic match - todo!() -} -``` - -### 9. 
Cargo Features & Build - -```toml -[features] -default = ["ollama"] -ollama = [] # Embeds Ollama binary - -[dependencies] -reqwest = { version = "0.12", features = ["json"] } # Ollama HTTP client -tokio = { version = "1", features = ["process"] } # Process management - -[build-dependencies] -# Download Ollama binary at build time -``` - -**Build Process:** - -```rust -// build.rs -const OLLAMA_VERSION: &str = "0.5.4"; - -fn main() { - let target = std::env::var("TARGET").unwrap(); - - let (ollama_url, sha256) = match target.as_str() { - // macOS (Universal - works on Intel and Apple Silicon) - t if t.contains("darwin") => - (format!("https://github.com/ollama/ollama/releases/download/v{}/ollama-darwin", OLLAMA_VERSION), - "abc123..."), - - // Linux x86_64 - t if t.contains("x86_64") && t.contains("linux") => - (format!("https://github.com/ollama/ollama/releases/download/v{}/ollama-linux-amd64", OLLAMA_VERSION), - "def456..."), - - // Linux ARM64 (Raspberry Pi 4/5, AWS Graviton, etc.) - t if t.contains("aarch64") && t.contains("linux") => - (format!("https://github.com/ollama/ollama/releases/download/v{}/ollama-linux-arm64", OLLAMA_VERSION), - "ghi789..."), - - // Windows x86_64 - t if t.contains("windows") => - (format!("https://github.com/ollama/ollama/releases/download/v{}/ollama-windows-amd64.exe", OLLAMA_VERSION), - "jkl012..."), - - _ => panic!("Unsupported target: {}", target), - }; - - download_and_verify(&ollama_url, sha256); - println!("cargo:rerun-if-changed=build.rs"); -} -``` - -**Supported Platforms:** - -| Platform | Architecture | Ollama Binary | -|----------|--------------|---------------| -| macOS | x86_64 + ARM64 | ollama-darwin (universal) | -| Linux | x86_64 | ollama-linux-amd64 | -| Linux | ARM64 | ollama-linux-arm64 | -| Windows | x86_64 | ollama-windows-amd64.exe | - -**ARM64 Linux Use Cases:** -- Raspberry Pi 4/5 (8GB+ recommended) -- AWS Graviton instances -- NVIDIA Jetson -- Apple Silicon Linux VMs - -**Binary Size:** - -| Component | Size 
| -|-----------|------| -| Blue CLI | ~5 MB | -| Ollama binary | ~50 MB | -| **Total** | ~55 MB | - -Models downloaded separately on first use. - -### 10. Performance Expectations - -**Apple Silicon (M4 Max, 128GB, Metal/MPS):** - -| Metric | Qwen2.5-7B | Qwen2.5-32B | -|--------|------------|-------------| -| Model load | 2-3 sec | 5-10 sec | -| Prompt processing | ~150 tok/s | ~100 tok/s | -| Generation | ~80 tok/s | ~50 tok/s | -| ADR relevance | 100-200ms | 200-400ms | - -**NVIDIA GPU (RTX 4090, CUDA):** - -| Metric | Qwen2.5-7B | Qwen2.5-32B | -|--------|------------|-------------| -| Model load | 1-2 sec | 3-5 sec | -| Prompt processing | ~200 tok/s | ~120 tok/s | -| Generation | ~100 tok/s | ~60 tok/s | -| ADR relevance | 80-150ms | 150-300ms | - -**CPU Only (fallback):** - -| Metric | Qwen2.5-7B | Qwen2.5-32B | -|--------|------------|-------------| -| Generation | ~10 tok/s | ~3 tok/s | -| ADR relevance | 1-2 sec | 5-10 sec | - -Metal/MPS on Apple Silicon is first-class - not a fallback. - -### 11. Memory Validation - -Ollama handles memory management, but Blue validates before pull: - -```rust -impl EmbeddedOllama { - pub async fn validate_can_pull(&self, model: &str) -> Result<()> { - let model_size = self.get_model_size(model).await?; - let available = sys_info::mem_info()?.avail * 1024; - let buffer = model_size / 5; // 20% buffer - - if available < model_size + buffer { - return Err(LlmError::InsufficientMemory { - model: model.to_string(), - required: model_size + buffer, - available, - suggestion: format!( - "Close some applications or use a smaller model. \ - Try: blue_model_pull name=\"qwen2.5:7b\"" - ), - }); - } - Ok(()) - } -} -``` - -**Ollama's Own Handling:** - -Ollama gracefully handles memory pressure by unloading models. Blue's validation is advisory. - -### 12. 
Build Requirements - -**Blue Build (all platforms):** -```bash -# Just Rust toolchain -cargo build --release -``` - -Blue's build.rs downloads the pre-built Ollama binary for the target platform. No C++ compiler needed. - -**Runtime GPU Support:** - -Ollama bundles GPU support. User just needs drivers: - -**macOS (Metal):** -- Works out of box on Apple Silicon (M1/M2/M3/M4) -- No additional setup needed - -**Linux (CUDA):** -```bash -# NVIDIA drivers (CUDA Toolkit not needed for inference) -nvidia-smi # Verify driver installed -``` - -**Linux (ROCm):** -```bash -# AMD GPU support -rocminfo # Verify ROCm installed -``` - -**Windows:** -- NVIDIA: Just need GPU drivers -- Works on CPU if no GPU - -**Ollama handles everything else** - users don't need to install CUDA Toolkit, cuDNN, etc. - -## Security Considerations - -1. **Ollama binary integrity**: Verify SHA256 of bundled Ollama binary at build time -2. **Model provenance**: Ollama registry handles model verification -3. **Local only by default**: Ollama binds to localhost:11434, not exposed -4. **Prompt injection**: Sanitize user input before prompts -5. **Memory**: Ollama handles memory management -6. **No secrets in prompts**: ADR relevance only sends context strings -7. **Process isolation**: Ollama runs as subprocess, not linked - -**Network Binding:** - -```rust -impl EmbeddedOllama { - pub async fn start(&mut self) -> Result<()> { - let mut cmd = Command::new(Self::bundled_binary_path()); - - // Bind to localhost only - not accessible from network - cmd.env("OLLAMA_HOST", "127.0.0.1:11434"); - - // ... - } -} -``` - -**Goose Access:** - -Goose connects to `localhost:11434` - works because it's on the same machine. Remote access requires explicit `OLLAMA_HOST=0.0.0.0:11434` override. - -### Port Conflict Handling - -**Scenario:** User already has Ollama running on port 11434. 
- -```rust -impl EmbeddedOllama { - pub async fn start(&mut self) -> Result<()> { - // Check if port 11434 is in use - if Self::port_in_use(11434) { - // Check if it's Ollama - if Self::is_ollama_running().await? { - // Use existing Ollama instance - self.mode = OllamaMode::External; - return Ok(()); - } else { - // Something else on port - use alternate - self.port = Self::find_free_port(11435..11500)?; - } - } - - // Start embedded Ollama on chosen port - self.start_embedded().await - } -} -``` - -| Situation | Behavior | -|-----------|----------| -| Port 11434 free | Start embedded Ollama | -| Ollama already running | Use existing (no duplicate) | -| Other service on port | Use alternate port (11435+) | - -**Config override:** - -```yaml -# .blue/config.yaml -llm: - local: - ollama_port: 11500 # Force specific port - use_external: true # Never start embedded, use existing -``` - -### Binary Verification - -**Build-time verification:** - -```rust -// build.rs -const OLLAMA_SHA256: &str = "abc123..."; // Per-platform hashes - -fn download_ollama() -> Vec<u8> { - // build.rs has no Result context, so failures panic (idiomatic for build scripts) - let bytes = download(OLLAMA_URL).expect("Failed to download Ollama binary"); - let hash = sha256(&bytes); - - if hash != OLLAMA_SHA256 { - panic!("Ollama binary hash mismatch! Expected {}, got {}", OLLAMA_SHA256, hash); - } - - write_binary(&bytes).expect("Failed to write Ollama binary"); - bytes -} -``` - -**Runtime verification:** - -```rust -impl EmbeddedOllama { - fn verify_binary(&self) -> Result<()> { - let expected = include_str!("ollama.sha256"); - let actual = sha256_file(Self::bundled_binary_path())?; - - if actual != expected { - return Err(LlmError::BinaryTampered { - expected: expected.to_string(), - actual, - }); - } - Ok(()) - } - - pub async fn start(&mut self) -> Result<()> { - self.verify_binary()?; // Check before every start - // ... - } -} -``` - -### Air-Gapped Builds - -For environments without internet during build: - -```bash -# 1. Download Ollama binary manually -curl -L https://github.com/ollama/ollama/releases/download/v0.5.4/ollama-darwin \ - -o vendor/ollama-darwin - -# 2. 
Build with BLUE_OLLAMA_PATH -BLUE_OLLAMA_PATH=vendor/ollama-darwin cargo build --release -``` - -```rust -// build.rs -fn get_ollama_binary() -> Vec<u8> { - if let Ok(path) = std::env::var("BLUE_OLLAMA_PATH") { - // Use pre-downloaded binary - std::fs::read(path).expect("Failed to read BLUE_OLLAMA_PATH") - } else { - // Download from GitHub - download_ollama() - } -} -``` - -## Implementation Phases - -**Phase 1: Embedded Ollama** -1. Add build.rs to download Ollama binary per platform -2. Create `blue-ollama` crate for embedded server management -3. Implement `EmbeddedOllama::start()` and `stop()` -4. Add `blue daemon start/stop` commands - -**Phase 2: LLM Provider** -5. Add `LlmProvider` trait to blue-core -6. Implement `OllamaLlm` using HTTP client -7. Add `blue_model_pull`, `blue_model_list` tools -8. Implement auto-pull on first use - -**Phase 3: Semantic Integration** -9. Integrate with ADR relevance (RFC 0004) -10. Add semantic runbook matching (RFC 0002) -11. Add fallback chain: Ollama → API → keywords - -**Phase 4: Goose Integration** -12. Add `blue agent` command to launch Goose -13. Document Goose + Blue setup -14. 
Ship example configs - -## CI/CD Matrix - -Test embedded Ollama on all platforms: - -```yaml -# .github/workflows/ci.yml -jobs: - test-ollama: - strategy: - matrix: - include: - - os: macos-latest - ollama_binary: ollama-darwin - - os: ubuntu-latest - ollama_binary: ollama-linux-amd64 - - os: windows-latest - ollama_binary: ollama-windows-amd64.exe - - runs-on: ${{ matrix.os }} - steps: - - uses: actions/checkout@v4 - - - name: Build Blue (downloads Ollama binary) - run: cargo build --release - - - name: Verify Ollama binary embedded - run: | - # Check binary exists in expected location - ls -la target/release/ollama* - - - name: Test daemon start/stop - run: | - cargo run -- daemon start - sleep 5 - curl -s http://localhost:11434/api/version - cargo run -- daemon stop - - - name: Test with mock model (no download) - run: cargo test ollama::mock - - # GPU tests run on self-hosted runners - test-gpu: - runs-on: [self-hosted, gpu] - steps: - - uses: actions/checkout@v4 - - name: Test CUDA detection - run: | - cargo build --release - cargo run -- daemon start - # Verify GPU detected - curl -s http://localhost:11434/api/version | jq .gpu - cargo run -- daemon stop -``` - -**Note:** Full model integration tests run nightly (large downloads). 
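The `cargo test ollama::mock` step above relies on a mock provider so CI never needs a multi-gigabyte model download. A minimal sketch of that idea, matching the `Mock(MockLlm)` variant from the backend enum (the struct layout and method names here are illustrative assumptions, not the actual crate API):

```rust
// Hedged sketch: a mock provider returning canned responses, so the
// fallback chain and consumers can be exercised deterministically in CI.
#[derive(Debug)]
struct MockLlm {
    canned: String, // response returned for every prompt
}

impl MockLlm {
    // Mirrors the shape of LlmProvider::complete, ignoring the prompt
    fn complete(&self, _prompt: &str) -> String {
        self.canned.clone()
    }
}

fn main() {
    let llm = MockLlm { canned: "0004".to_string() };
    // Output is independent of the prompt, which is exactly what CI wants
    assert_eq!(llm.complete("which ADR applies?"), "0004");
    println!("mock ok");
}
```

Because the mock is pure and synchronous, these tests run on every platform in the matrix, including runners with no GPU and no network access to the model registry.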
- -## Test Plan - -**Embedded Ollama:** -- [ ] `blue daemon start` launches embedded Ollama -- [ ] `blue daemon stop` cleanly shuts down -- [ ] Ollama detects CUDA when available -- [ ] Ollama detects Metal on macOS -- [ ] Falls back to CPU when no GPU -- [ ] Health check returns backend type - -**Model Management:** -- [ ] `blue_model_pull` downloads from Ollama registry -- [ ] `blue_model_list` shows pulled models -- [ ] `blue_model_remove` deletes model -- [ ] Auto-pull on first completion if model missing -- [ ] Progress indicator during pull - -**LLM Provider:** -- [ ] `OllamaLlm::complete()` returns valid response -- [ ] Fallback chain: Ollama → API → keywords -- [ ] `--no-ai` flag skips LLM entirely -- [ ] Configuration parsing from .blue/config.yaml - -**Semantic Integration:** -- [ ] ADR relevance uses embedded Ollama -- [ ] Runbook matching uses semantic search -- [ ] Response includes method used (ollama/api/keywords) - -**Goose Integration:** -- [ ] `blue agent` starts Goose with Blue extension -- [ ] Goose connects to Blue's embedded Ollama -- [ ] Goose can use Blue MCP tools -- [ ] Model shared between Blue tasks and Goose - -**Multi-Session:** -- [ ] Multiple Blue MCP sessions share one Ollama -- [ ] Concurrent completions handled correctly -- [ ] Daemon persists across shell sessions - -**Port Conflict:** -- [ ] Detects existing Ollama on port 11434 -- [ ] Uses existing Ollama instead of starting new -- [ ] Uses alternate port if non-Ollama on 11434 -- [ ] `use_external: true` config works - -**Health & Recovery:** -- [ ] Health check detects unhealthy Ollama -- [ ] Auto-restart on crash -- [ ] Falls back to API after 3 restart failures -- [ ] Graceful shutdown waits for requests - -**Binary Verification:** -- [ ] Build fails if Ollama hash mismatch -- [ ] Runtime verification before start -- [ ] Tampered binary: clear error message -- [ ] Air-gapped build with BLUE_OLLAMA_PATH works - -**CI Matrix:** -- [ ] macOS build includes darwin Ollama binary 
-- [ ] Linux x86_64 build includes amd64 binary -- [ ] Linux ARM64 build includes arm64 binary -- [ ] Windows build includes windows binary -- [ ] Integration tests with mock Ollama server - ---- - -*"Right then. Let's get to it."* - -— Blue diff --git a/.blue/repos/blue/docs/spikes/2026-01-24-adr-adherence.md b/.blue/repos/blue/docs/spikes/2026-01-24-adr-adherence.md deleted file mode 100644 index e1e6a72..0000000 --- a/.blue/repos/blue/docs/spikes/2026-01-24-adr-adherence.md +++ /dev/null @@ -1,17 +0,0 @@ -# Spike: Adr Adherence - -| | | -|---|---| -| **Status** | Complete | -| **Date** | 2026-01-24 | -| **Time Box** | 2 hours | - ---- - -## Question - -How can Blue help ensure work adheres to ADRs? What mechanisms could check, remind, or enforce architectural decisions? - ---- - -*Investigation notes by Blue* diff --git a/.blue/repos/blue/docs/spikes/2026-01-24-agentic-cli-integration.md b/.blue/repos/blue/docs/spikes/2026-01-24-agentic-cli-integration.md deleted file mode 100644 index bb87670..0000000 --- a/.blue/repos/blue/docs/spikes/2026-01-24-agentic-cli-integration.md +++ /dev/null @@ -1,169 +0,0 @@ -# Spike: Agentic Cli Integration - -| | | -|---|---| -| **Status** | In Progress | -| **Date** | 2026-01-24 | -| **Time Box** | 2 hours | - ---- - -## Question - -Which commercial-compatible local agentic coding CLI (Aider, Goose, OpenCode) can be integrated into Blue CLI, and what's the best integration pattern? - ---- - -## Findings - -### Candidates Evaluated - -| Tool | License | Language | MCP Support | Integration Pattern | -|------|---------|----------|-------------|---------------------| -| **Goose** | Apache-2.0 | Rust | Native | MCP client/server, subprocess | -| **Aider** | Apache-2.0 | Python | Via extensions | Subprocess, CLI flags | -| **OpenCode** | MIT | Go | Native | Go SDK, subprocess | - -### Goose (Recommended) - -**Why Goose wins:** - -1. **Same language as Blue** - Rust-based, can share types and potentially link as library -2. 
**Native MCP support** - Goose is built on MCP (co-developed with Anthropic). Blue already speaks MCP. -3. **Apache-2.0** - Commercial-compatible with patent grant -4. **Block backing** - Maintained by Block (Square/Cash App), contributed to Linux Foundation's Agentic AI Foundation in Dec 2025 -5. **25+ LLM providers** - Works with Ollama, OpenAI, Anthropic, local models - -**Integration patterns:** - -``` -Option A: MCP Extension (Lowest friction) -┌─────────────────────────────────────────────┐ -│ Goose CLI │ -│ ↓ (MCP client) │ -│ Blue MCP Server (existing blue-mcp) │ -│ ↓ │ -│ Blue tools: rfc_create, worktree, etc. │ -└─────────────────────────────────────────────┘ - -Option B: Blue as Goose Extension -┌─────────────────────────────────────────────┐ -│ Blue CLI │ -│ ↓ (spawns) │ -│ Goose (subprocess) │ -│ ↓ (MCP client) │ -│ Blue MCP Server │ -└─────────────────────────────────────────────┘ - -Option C: Embedded (Future) -┌─────────────────────────────────────────────┐ -│ Blue CLI │ -│ ↓ (links) │ -│ goose-core (Rust crate) │ -│ ↓ │ -│ Local LLM / API │ -└─────────────────────────────────────────────┘ -``` - -**Recommendation: Option A first** - -Goose already works as an MCP client. Blue already has an MCP server (`blue mcp`). The integration is: - -```bash -# User installs goose -brew install block/tap/goose - -# User configures Blue as Goose extension -# In ~/.config/goose/config.yaml: -extensions: - blue: - type: stdio - command: blue mcp -``` - -This requires **zero code changes** to Blue. Users get agentic coding with Blue's workflow tools immediately. - -### Aider - -**Pros:** -- Mature, battle-tested (Apache-2.0) -- Git-native with smart commits -- Strong local model support via Ollama - -**Cons:** -- Python-based (foreign to Rust codebase) -- CLI scripting API is "not officially supported" -- No native MCP (would need wrapper) - -**Integration pattern:** Subprocess with `--message` flag for non-interactive use. 
- -```rust -// Hypothetical -let output = Command::new("aider") - .args(["--message", "implement the function", "--yes-always"]) - .output()?; -``` - -**Verdict:** Viable but more friction than Goose. - -### OpenCode - -**Pros:** -- MIT license (most permissive) -- Go SDK available -- Native MCP support -- Growing fast (45K+ GitHub stars) - -**Cons:** -- Go-based (FFI overhead to call from Rust) -- Newer, less mature than Aider -- SDK is for Go clients, not embedding - -**Integration pattern:** Go SDK or subprocess. - -**Verdict:** Good option if Goose doesn't work out. - -### Local LLM Backend - -All three support Ollama for local models: - -```bash -# Install Ollama -brew install ollama - -# Pull a coding model (Apache-2.0 licensed) -ollama pull qwen2.5-coder:32b # 19GB, best quality -ollama pull qwen2.5-coder:7b # 4.4GB, faster -ollama pull deepseek-coder-v2 # Alternative -``` - -Goose config for local: -```yaml -# ~/.config/goose/config.yaml -provider: ollama -model: qwen2.5-coder:32b -``` - -## Outcome - -**Recommends implementation** with Goose as the integration target. - -### Immediate (Zero code): -1. Document Blue + Goose setup in docs/ -2. Ship example `goose-extension.yaml` config - -### Short-term (Minimal code): -1. Add `blue agent` subcommand that launches Goose with Blue extension pre-configured -2. Add Blue-specific prompts/instructions for Goose - -### Medium-term (More code): -1. Investigate goose-core Rust crate for tighter integration -2. 
Consider Blue daemon serving as persistent MCP host - -## Sources - -- [Goose GitHub](https://github.com/block/goose) -- [Goose Architecture](https://block.github.io/goose/docs/goose-architecture/) -- [Aider Scripting](https://aider.chat/docs/scripting.html) -- [OpenCode Go SDK](https://pkg.go.dev/github.com/sst/opencode-sdk-go) -- [Goose MCP Deep Dive](https://dev.to/lymah/deep-dive-into-gooses-extension-system-and-model-context-protocol-mcp-3ehl) diff --git a/.blue/repos/blue/docs/spikes/2026-01-24-local-llm-integration.md b/.blue/repos/blue/docs/spikes/2026-01-24-local-llm-integration.md deleted file mode 100644 index 6e936f7..0000000 --- a/.blue/repos/blue/docs/spikes/2026-01-24-local-llm-integration.md +++ /dev/null @@ -1,17 +0,0 @@ -# Spike: Local Llm Integration - -| | | -|---|---| -| **Status** | In Progress | -| **Date** | 2026-01-24 | -| **Time Box** | 2 hours | - ---- - -## Question - -Which commercial-compatible local LLM CLI tool can be integrated into Blue CLI, and what's the best integration approach? - ---- - -*Investigation notes by Blue* diff --git a/.blue/repos/blue/docs/spikes/2026-01-24-per-repo-blue-folder.md b/.blue/repos/blue/docs/spikes/2026-01-24-per-repo-blue-folder.md deleted file mode 100644 index 6f31a23..0000000 --- a/.blue/repos/blue/docs/spikes/2026-01-24-per-repo-blue-folder.md +++ /dev/null @@ -1,17 +0,0 @@ -# Spike: Per Repo Blue Folder - -| | | -|---|---| -| **Status** | Complete | -| **Date** | 2026-01-24 | -| **Time Box** | 1 hour | - ---- - -## Question - -Should each repo have its own .blue folder with docs, or centralize in one location? What are the tradeoffs and what changes are needed? 
- ---- - -*Investigation notes by Blue* diff --git a/.blue/repos/blue/docs/spikes/2026-01-24-runbook-driven-actions.md b/.blue/repos/blue/docs/spikes/2026-01-24-runbook-driven-actions.md deleted file mode 100644 index 8c110ef..0000000 --- a/.blue/repos/blue/docs/spikes/2026-01-24-runbook-driven-actions.md +++ /dev/null @@ -1,17 +0,0 @@ -# Spike: Runbook Driven Actions - -| | | -|---|---| -| **Status** | Complete | -| **Date** | 2026-01-24 | -| **Time Box** | 2 hours | - ---- - -## Question - -How can runbooks guide Claude Code through repo actions (docker builds, deploys, tests) so it follows the documented steps rather than guessing? - ---- - -*Investigation notes by Blue* diff --git a/.blue/repos/blue/docs/spikes/2026-01-24-sqlite-storage-expansion.md b/.blue/repos/blue/docs/spikes/2026-01-24-sqlite-storage-expansion.md deleted file mode 100644 index 1045a97..0000000 --- a/.blue/repos/blue/docs/spikes/2026-01-24-sqlite-storage-expansion.md +++ /dev/null @@ -1,17 +0,0 @@ -# Spike: Sqlite Storage Expansion - -| | | -|---|---| -| **Status** | Complete | -| **Date** | 2026-01-24 | -| **Time Box** | 2 hours | - ---- - -## Question - -What changes are needed to store spikes and plans in SQLite like RFCs, and store dialogue metadata (but not content) in SQLite? 
- ---- - -*Investigation notes by Blue* diff --git a/crates/blue-core/src/store.rs b/crates/blue-core/src/store.rs index 4660284..8c9d48d 100644 --- a/crates/blue-core/src/store.rs +++ b/crates/blue-core/src/store.rs @@ -10,7 +10,7 @@ use rusqlite::{params, Connection, OptionalExtension, Transaction, TransactionBe use tracing::{debug, info, warn}; /// Current schema version -const SCHEMA_VERSION: i32 = 2; +const SCHEMA_VERSION: i32 = 3; /// Core database schema const SCHEMA: &str = r#" @@ -27,11 +27,13 @@ const SCHEMA: &str = r#" file_path TEXT, created_at TEXT NOT NULL, updated_at TEXT NOT NULL, + deleted_at TEXT, UNIQUE(doc_type, title) ); CREATE INDEX IF NOT EXISTS idx_documents_type ON documents(doc_type); CREATE INDEX IF NOT EXISTS idx_documents_status ON documents(doc_type, status); + CREATE INDEX IF NOT EXISTS idx_documents_deleted ON documents(deleted_at) WHERE deleted_at IS NOT NULL; CREATE TABLE IF NOT EXISTS document_links ( id INTEGER PRIMARY KEY AUTOINCREMENT, @@ -266,6 +268,7 @@ pub struct Document { pub file_path: Option, pub created_at: Option, pub updated_at: Option, + pub deleted_at: Option, } impl Document { @@ -280,8 +283,14 @@ impl Document { file_path: None, created_at: None, updated_at: None, + deleted_at: None, } } + + /// Check if document is soft-deleted + pub fn is_deleted(&self) -> bool { + self.deleted_at.is_some() + } } /// A task in a document's plan @@ -604,6 +613,10 @@ impl DocumentStore { Some(v) if v == SCHEMA_VERSION => { debug!("Database is up to date (version {})", v); } + Some(v) if v < SCHEMA_VERSION => { + info!("Migrating database from version {} to {}", v, SCHEMA_VERSION); + self.run_migrations(v)?; + } Some(v) => { warn!( "Schema version {} found, expected {}. 
Migrations may be needed.", @@ -615,6 +628,39 @@ impl DocumentStore { Ok(()) } + /// Run migrations from old version to current + fn run_migrations(&self, from_version: i32) -> Result<(), StoreError> { + // Migration from v2 to v3: Add deleted_at column + if from_version < 3 { + debug!("Adding deleted_at column to documents table"); + // Check if column exists first + let has_column: bool = self.conn.query_row( + "SELECT COUNT(*) FROM pragma_table_info('documents') WHERE name = 'deleted_at'", + [], + |row| Ok(row.get::<_, i64>(0)? > 0), + )?; + + if !has_column { + self.conn.execute( + "ALTER TABLE documents ADD COLUMN deleted_at TEXT", + [], + )?; + self.conn.execute( + "CREATE INDEX IF NOT EXISTS idx_documents_deleted ON documents(deleted_at) WHERE deleted_at IS NOT NULL", + [], + )?; + } + } + + // Update schema version + self.conn.execute( + "UPDATE schema_version SET version = ?1", + params![SCHEMA_VERSION], + )?; + + Ok(()) + } + /// Execute with retry on busy fn with_retry<T, F>(&self, f: F) -> Result<T, StoreError> where @@ -669,8 +715,8 @@ impl DocumentStore { pub fn get_document(&self, doc_type: DocType, title: &str) -> Result<Document, StoreError> { self.conn .query_row( - "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at - FROM documents WHERE doc_type = ?1 AND title = ?2", + "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = ?1 AND title = ?2 AND deleted_at IS NULL", params![doc_type.as_str(), title], |row| { Ok(Document { @@ -682,6 +728,7 @@ impl DocumentStore { file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) }, ) @@ -691,11 +738,11 @@ impl DocumentStore { }) } - /// Get a document by ID + /// Get a document by ID (including soft-deleted) pub fn get_document_by_id(&self, id: i64) -> Result<Document, StoreError> { self.conn .query_row( - "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at + "SELECT id, doc_type, number, title, 
status, file_path, created_at, updated_at, deleted_at FROM documents WHERE id = ?1", params![id], |row| { @@ -708,6 +755,7 @@ impl DocumentStore { file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) }, ) @@ -727,8 +775,8 @@ impl DocumentStore { ) -> Result<Document, StoreError> { self.conn .query_row( - "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at - FROM documents WHERE doc_type = ?1 AND number = ?2", + "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = ?1 AND number = ?2 AND deleted_at IS NULL", params![doc_type.as_str(), number], |row| { Ok(Document { @@ -740,6 +788,7 @@ impl DocumentStore { file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) }, ) @@ -773,8 +822,8 @@ impl DocumentStore { // Try substring match let pattern = format!("%{}%", query.to_lowercase()); if let Ok(doc) = self.conn.query_row( - "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at - FROM documents WHERE doc_type = ?1 AND LOWER(title) LIKE ?2 + "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = ?1 AND LOWER(title) LIKE ?2 AND deleted_at IS NULL ORDER BY LENGTH(title) ASC LIMIT 1", params![doc_type.as_str(), pattern], |row| { @@ -787,6 +836,7 @@ file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) }, ) { @@ -848,11 +898,11 @@ impl DocumentStore { }) } - /// List all documents of a given type + /// List all documents of a given type (excludes soft-deleted) pub fn list_documents(&self, doc_type: DocType) -> Result<Vec<Document>, StoreError> { let mut stmt = self.conn.prepare( - "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at - FROM documents WHERE doc_type = ?1 ORDER BY number DESC, title ASC", + "SELECT id, doc_type, 
number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = ?1 AND deleted_at IS NULL ORDER BY number DESC, title ASC", )?; let rows = stmt.query_map(params![doc_type.as_str()], |row| { @@ -865,6 +915,7 @@ file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) })?; @@ -872,15 +923,15 @@ .map_err(StoreError::Database) } - /// List documents by status + /// List documents by status (excludes soft-deleted) pub fn list_documents_by_status( &self, doc_type: DocType, status: &str, ) -> Result<Vec<Document>, StoreError> { let mut stmt = self.conn.prepare( - "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at - FROM documents WHERE doc_type = ?1 AND status = ?2 ORDER BY number DESC, title ASC", + "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = ?1 AND status = ?2 AND deleted_at IS NULL ORDER BY number DESC, title ASC", )?; let rows = stmt.query_map(params![doc_type.as_str(), status], |row| { @@ -893,6 +944,7 @@ file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) })?; @@ -900,7 +952,7 @@ .map_err(StoreError::Database) } - /// Delete a document + /// Delete a document permanently pub fn delete_document(&self, doc_type: DocType, title: &str) -> Result<(), StoreError> { self.with_retry(|| { let deleted = self.conn.execute( @@ -914,6 +966,148 @@ }) } + /// Soft-delete a document (set deleted_at timestamp) + pub fn soft_delete_document(&self, doc_type: DocType, title: &str) -> Result<(), StoreError> { + self.with_retry(|| { + let now = chrono::Utc::now().to_rfc3339(); + let updated = self.conn.execute( + "UPDATE documents SET deleted_at = ?1, updated_at = ?1 + WHERE doc_type = ?2 AND title = ?3 AND deleted_at IS NULL", + params![now, 
doc_type.as_str(), title], + )?; + if updated == 0 { + return Err(StoreError::NotFound(title.to_string())); + } + Ok(()) + }) + } + + /// Restore a soft-deleted document + pub fn restore_document(&self, doc_type: DocType, title: &str) -> Result<(), StoreError> { + self.with_retry(|| { + let now = chrono::Utc::now().to_rfc3339(); + let updated = self.conn.execute( + "UPDATE documents SET deleted_at = NULL, updated_at = ?1 + WHERE doc_type = ?2 AND title = ?3 AND deleted_at IS NOT NULL", + params![now, doc_type.as_str(), title], + )?; + if updated == 0 { + return Err(StoreError::NotFound(format!( + "soft-deleted {} '{}'", + doc_type.as_str(), + title + ))); + } + Ok(()) + }) + } + + /// Get a soft-deleted document by type and title + pub fn get_deleted_document(&self, doc_type: DocType, title: &str) -> Result<Document, StoreError> { + self.conn + .query_row( + "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = ?1 AND title = ?2 AND deleted_at IS NOT NULL", + params![doc_type.as_str(), title], + |row| { + Ok(Document { + id: Some(row.get(0)?), + doc_type: DocType::from_str(row.get::<_, String>(1)?.as_str()).unwrap(), + number: row.get(2)?, + title: row.get(3)?, + status: row.get(4)?, + file_path: row.get(5)?, + created_at: row.get(6)?, + updated_at: row.get(7)?, + deleted_at: row.get(8)?, + }) + }, + ) + .map_err(|e| match e { + rusqlite::Error::QueryReturnedNoRows => StoreError::NotFound(format!( + "soft-deleted {} '{}'", + doc_type.as_str(), + title + )), + e => StoreError::Database(e), + }) + } + + /// List soft-deleted documents + pub fn list_deleted_documents(&self, doc_type: Option<DocType>) -> Result<Vec<Document>, StoreError> { + let query = match doc_type { + Some(dt) => format!( + "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE doc_type = '{}' AND deleted_at IS NOT NULL + ORDER BY deleted_at DESC", + dt.as_str() + ), + None => "SELECT id, doc_type, number, title, 
status, file_path, created_at, updated_at, deleted_at + FROM documents WHERE deleted_at IS NOT NULL + ORDER BY deleted_at DESC".to_string(), }; + + let mut stmt = self.conn.prepare(&query)?; + let rows = stmt.query_map([], |row| { + Ok(Document { + id: Some(row.get(0)?), + doc_type: DocType::from_str(row.get::<_, String>(1)?.as_str()).unwrap(), + number: row.get(2)?, + title: row.get(3)?, + status: row.get(4)?, + file_path: row.get(5)?, + created_at: row.get(6)?, + updated_at: row.get(7)?, + deleted_at: row.get(8)?, + }) + })?; + + rows.collect::<Result<Vec<_>, _>>() + .map_err(StoreError::Database) + } + + /// Permanently delete documents that have been soft-deleted for more than N days + pub fn purge_old_deleted_documents(&self, days: i64) -> Result<usize, StoreError> { + self.with_retry(|| { + let cutoff = chrono::Utc::now() - chrono::Duration::days(days); + let cutoff_str = cutoff.to_rfc3339(); + + let deleted = self.conn.execute( + "DELETE FROM documents WHERE deleted_at IS NOT NULL AND deleted_at < ?1", + params![cutoff_str], + )?; + + Ok(deleted) + }) + } + + /// Check if a document has ADR dependents (documents that reference it via rfc_to_adr link) + pub fn has_adr_dependents(&self, document_id: i64) -> Result<Vec<Document>, StoreError> { + let mut stmt = self.conn.prepare( + "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at + FROM documents d + JOIN document_links l ON l.source_id = d.id + WHERE l.target_id = ?1 AND l.link_type = 'rfc_to_adr' AND d.deleted_at IS NULL", + )?; + + let rows = stmt.query_map(params![document_id], |row| { + Ok(Document { + id: Some(row.get(0)?), + doc_type: DocType::from_str(row.get::<_, String>(1)?.as_str()).unwrap(), + number: row.get(2)?, + title: row.get(3)?, + status: row.get(4)?, + file_path: row.get(5)?, + created_at: row.get(6)?, + updated_at: row.get(7)?, + deleted_at: row.get(8)?, + }) + })?; + + rows.collect::<Result<Vec<_>, _>>() + .map_err(StoreError::Database) + } + /// Get the next document number for a type pub 
fn next_number(&self, doc_type: DocType) -> Result<i64, StoreError> { let max: Option<i64> = self.conn.query_row( @@ -944,7 +1138,7 @@ impl DocumentStore { }) } - /// Get linked documents + /// Get linked documents (excludes soft-deleted) pub fn get_linked_documents( &self, source_id: i64, link_type: Option<LinkType>, ) -> Result<Vec<Document>, StoreError> { let query = match link_type { Some(lt) => format!( - "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at + "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at FROM documents d JOIN document_links l ON l.target_id = d.id - WHERE l.source_id = ?1 AND l.link_type = '{}'", + WHERE l.source_id = ?1 AND l.link_type = '{}' AND d.deleted_at IS NULL", lt.as_str() ), - None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at + None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at FROM documents d JOIN document_links l ON l.target_id = d.id - WHERE l.source_id = ?1".to_string(), + WHERE l.source_id = ?1 AND d.deleted_at IS NULL".to_string(), }; let mut stmt = self.conn.prepare(&query)?; @@ -975,6 +1169,7 @@ file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }) })?; @@ -1140,7 +1335,7 @@ impl DocumentStore { // ==================== Search Operations ==================== - /// Search documents using FTS5 + /// Search documents using FTS5 (excludes soft-deleted) pub fn search_documents( &self, query: &str, @@ -1153,19 +1348,19 @@ let sql = match doc_type { Some(dt) => format!( "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, - d.created_at, d.updated_at, bm25(documents_fts) as score + d.created_at, d.updated_at, d.deleted_at, bm25(documents_fts) as score FROM documents_fts fts JOIN documents d ON d.id = fts.rowid - WHERE 
documents_fts MATCH ?1 AND d.doc_type = '{}' + WHERE documents_fts MATCH ?1 AND d.doc_type = '{}' AND d.deleted_at IS NULL ORDER BY score LIMIT ?2", dt.as_str() ), None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, - d.created_at, d.updated_at, bm25(documents_fts) as score + d.created_at, d.updated_at, d.deleted_at, bm25(documents_fts) as score FROM documents_fts fts JOIN documents d ON d.id = fts.rowid - WHERE documents_fts MATCH ?1 + WHERE documents_fts MATCH ?1 AND d.deleted_at IS NULL ORDER BY score LIMIT ?2" .to_string(), @@ -1183,8 +1378,9 @@ impl DocumentStore { file_path: row.get(5)?, created_at: row.get(6)?, updated_at: row.get(7)?, + deleted_at: row.get(8)?, }, - score: row.get(8)?, + score: row.get(9)?, snippet: None, }) })?; diff --git a/crates/blue-mcp/src/handlers/delete.rs b/crates/blue-mcp/src/handlers/delete.rs new file mode 100644 index 0000000..3173f35 --- /dev/null +++ b/crates/blue-mcp/src/handlers/delete.rs @@ -0,0 +1,344 @@ +//! Document deletion handlers for Blue MCP +//! +//! Implements soft-delete with 7-day retention and restore capability. 
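The soft-delete flow this module (and the store changes above) implements is an UPDATE of a `deleted_at` timestamp rather than a row DELETE: restore nulls the timestamp back out, and purge hard-deletes only rows past the retention window. A minimal Python `sqlite3` sketch of the same pattern — the schema and titles are invented for illustration, not the crate's API:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (title TEXT PRIMARY KEY, deleted_at TEXT)")
conn.execute("INSERT INTO documents (title) VALUES ('old-rfc'), ('new-rfc')")

def soft_delete(title, when=None):
    """Stamp deleted_at; only rows not already deleted match (deleted_at IS NULL)."""
    ts = (when or datetime.now(timezone.utc)).isoformat()
    return conn.execute(
        "UPDATE documents SET deleted_at = ? WHERE title = ? AND deleted_at IS NULL",
        (ts, title),
    ).rowcount

def restore(title):
    """Clear deleted_at; only soft-deleted rows match (deleted_at IS NOT NULL)."""
    return conn.execute(
        "UPDATE documents SET deleted_at = NULL WHERE title = ? AND deleted_at IS NOT NULL",
        (title,),
    ).rowcount

def purge(days=7):
    """Hard-delete rows soft-deleted before the cutoff; RFC 3339 strings compare lexicographically."""
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    return conn.execute(
        "DELETE FROM documents WHERE deleted_at IS NOT NULL AND deleted_at < ?",
        (cutoff,),
    ).rowcount

soft_delete("old-rfc", datetime.now(timezone.utc) - timedelta(days=10))  # past retention
soft_delete("new-rfc")                                                   # just deleted
purged = purge(days=7)        # removes only 'old-rfc'
restored = restore("new-rfc")
```

Because every read path filters on `deleted_at IS NULL`, soft-deleted rows vanish from normal queries without losing the data during the retention window.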
+ +use serde_json::{json, Value}; +use std::fs; +use std::path::Path; + +use blue_core::store::DocType; +use blue_core::ProjectState; + +use crate::ServerError; + +/// Check what would be deleted (dry run) +pub fn handle_delete_dry_run( + state: &ProjectState, + doc_type: DocType, + title: &str, +) -> Result<Value, ServerError> { + let doc = state + .store + .find_document(doc_type, title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + let doc_id = doc.id.unwrap(); + + // Check for ADR dependents + let adr_dependents = state + .store + .has_adr_dependents(doc_id) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + // Check for active sessions + let active_session = state + .store + .get_active_session(&doc.title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + // Check for worktree + let worktree = state + .store + .get_worktree(doc_id) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + // Find companion files + let mut companion_files = Vec::new(); + if let Some(ref file_path) = doc.file_path { + let base_path = Path::new(file_path); + if let Some(stem) = base_path.file_stem() { + if let Some(parent) = base_path.parent() { + let stem_str = stem.to_string_lossy(); + // Check for .plan.md, .dialogue.md + for suffix in &[".plan.md", ".dialogue.md", ".draft.md"] { + let companion = parent.join(format!("{}{}", stem_str, suffix)); + if companion.exists() { + companion_files.push(companion.display().to_string()); + } + } + } + } + } + + let mut warnings = Vec::new(); + let mut blockers = Vec::new(); + + // ADR dependents are permanent blockers + if !adr_dependents.is_empty() { + let adr_titles: Vec<_> = adr_dependents.iter().map(|d| d.title.clone()).collect(); + blockers.push(format!( + "Has ADR dependents: {}. ADRs are permanent records and cannot be cascade-deleted.", + adr_titles.join(", ") + )); + } + + // Non-draft status requires force + if doc.status != "draft" { + warnings.push(format!( + "Status is '{}'. 
Use force=true to delete non-draft documents.", + doc.status + )); + } + + // Active session requires force + if let Some(session) = &active_session { + warnings.push(format!( + "Has active {} session started at {}. Use force=true to override.", + session.session_type.as_str(), + session.started_at + )); + } + + Ok(json!({ + "dry_run": true, + "document": { + "type": doc_type.as_str(), + "title": doc.title, + "status": doc.status, + "file_path": doc.file_path, + }, + "would_delete": { + "primary_file": doc.file_path, + "companion_files": companion_files, + "worktree": worktree.map(|w| w.worktree_path), + }, + "blockers": blockers, + "warnings": warnings, + "can_proceed": blockers.is_empty(), + })) +} + +/// Delete a document with safety checks +pub fn handle_delete( + state: &mut ProjectState, + doc_type: DocType, + title: &str, + force: bool, + permanent: bool, +) -> Result<Value, ServerError> { + // Find the document + let doc = state + .store + .find_document(doc_type, title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + let doc_id = doc.id.unwrap(); + + // Check for ADR dependents - this is a permanent blocker + let adr_dependents = state + .store + .has_adr_dependents(doc_id) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + if !adr_dependents.is_empty() { + let adr_titles: Vec<_> = adr_dependents.iter().map(|d| d.title.clone()).collect(); + return Ok(json!({ + "status": "blocked", + "message": format!( + "Cannot delete {} '{}'.\n\nThis document has ADR dependents: {}.\nADRs are permanent architectural records and cannot be cascade-deleted.\n\nTo proceed:\n1. Update the ADR(s) to remove the reference, or\n2. 
Mark this document as 'superseded' instead of deleting", + doc_type.as_str(), + doc.title, + adr_titles.join(", ") + ), + "adr_dependents": adr_titles, + })); + } + + // Check status - non-draft requires force + if doc.status != "draft" && !force { + let status_msg = match doc.status.as_str() { + "accepted" => "This document has been accepted.", + "in-progress" => "This document has active work.", + "implemented" => "This document is a historical record.", + _ => "This document is not in draft status.", + }; + + return Ok(json!({ + "status": "requires_force", + "message": format!( + "Cannot delete {} '{}'.\n\nStatus: {}\n{}\n\nUse force=true to delete anyway.", + doc_type.as_str(), + doc.title, + doc.status, + status_msg + ), + "current_status": doc.status, + })); + } + + // Check for active session - requires force + let active_session = state + .store + .get_active_session(&doc.title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + if active_session.is_some() && !force { + let session = active_session.unwrap(); + return Ok(json!({ + "status": "requires_force", + "message": format!( + "Cannot delete {} '{}'.\n\nHas active {} session started at {}.\n\nUse force=true to delete anyway, which will end the session.", + doc_type.as_str(), + doc.title, + session.session_type.as_str(), + session.started_at + ), + "active_session": { + "type": session.session_type.as_str(), + "started_at": session.started_at, + }, + })); + } + + // End any active session + if active_session.is_some() { + let _ = state.store.end_session(&doc.title); + } + + // Remove worktree if exists + let mut worktree_removed = false; + if let Ok(Some(worktree)) = state.store.get_worktree(doc_id) { + // Remove from filesystem + let worktree_path = Path::new(&worktree.worktree_path); + if worktree_path.exists() { + // Use git worktree remove + let _ = std::process::Command::new("git") + .args(["worktree", "remove", "--force", &worktree.worktree_path]) + .output(); + } + // Remove from 
database + let _ = state.store.remove_worktree(doc_id); + worktree_removed = true; + } + + // Delete companion files + let mut files_deleted = Vec::new(); + if let Some(ref file_path) = doc.file_path { + let base_path = Path::new(file_path); + if let Some(stem) = base_path.file_stem() { + if let Some(parent) = base_path.parent() { + let stem_str = stem.to_string_lossy(); + for suffix in &[".plan.md", ".dialogue.md", ".draft.md"] { + let companion = parent.join(format!("{}{}", stem_str, suffix)); + if companion.exists() { + if fs::remove_file(&companion).is_ok() { + files_deleted.push(companion.display().to_string()); + } + } + } + } + } + + // Delete primary file + if base_path.exists() { + if fs::remove_file(base_path).is_ok() { + files_deleted.push(file_path.clone()); + } + } + } + + // Soft or permanent delete + if permanent { + state + .store + .delete_document(doc_type, &doc.title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + } else { + state + .store + .soft_delete_document(doc_type, &doc.title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + } + + let action = if permanent { + "permanently deleted" + } else { + "soft-deleted (recoverable for 7 days)" + }; + + Ok(json!({ + "status": "success", + "message": format!("{} '{}' {}.", doc_type.as_str().to_uppercase(), doc.title, action), + "doc_type": doc_type.as_str(), + "title": doc.title, + "permanent": permanent, + "files_deleted": files_deleted, + "worktree_removed": worktree_removed, + "restore_command": if !permanent { + Some(format!("blue restore {} {}", doc_type.as_str(), doc.title)) + } else { + None + }, + })) +} + +/// Restore a soft-deleted document +pub fn handle_restore( + state: &mut ProjectState, + doc_type: DocType, + title: &str, +) -> Result<Value, ServerError> { + // Check if document exists and is soft-deleted + let doc = state + .store + .get_deleted_document(doc_type, title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + // Restore the document + state + 
.store + .restore_document(doc_type, &doc.title) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + Ok(json!({ + "status": "success", + "message": format!("{} '{}' restored.", doc_type.as_str().to_uppercase(), doc.title), + "doc_type": doc_type.as_str(), + "title": doc.title, + "note": "Files were deleted and will need to be recreated if needed.", + })) +} + +/// List soft-deleted documents +pub fn handle_list_deleted( + state: &ProjectState, + doc_type: Option<DocType>, +) -> Result<Value, ServerError> { + let deleted = state + .store + .list_deleted_documents(doc_type) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + let docs: Vec<_> = deleted + .iter() + .map(|d| { + json!({ + "type": d.doc_type.as_str(), + "title": d.title, + "status": d.status, + "deleted_at": d.deleted_at, + }) + }) + .collect(); + + Ok(json!({ + "status": "success", + "count": docs.len(), + "deleted_documents": docs, + "note": "Documents are auto-purged 7 days after deletion. Use blue_restore to recover.", + })) +} + +/// Purge old soft-deleted documents +pub fn handle_purge_deleted(state: &mut ProjectState, days: i64) -> Result<Value, ServerError> { + let purged = state + .store + .purge_old_deleted_documents(days) + .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?; + + Ok(json!({ + "status": "success", + "message": format!("Purged {} documents older than {} days.", purged, days), + "purged_count": purged, + })) +} diff --git a/crates/blue-mcp/src/handlers/mod.rs b/crates/blue-mcp/src/handlers/mod.rs index 9f1b1b1..3f4fe94 100644 --- a/crates/blue-mcp/src/handlers/mod.rs +++ b/crates/blue-mcp/src/handlers/mod.rs @@ -5,6 +5,7 @@ pub mod adr; pub mod audit; pub mod decision; +pub mod delete; pub mod dialogue; pub mod dialogue_lint; pub mod env; diff --git a/crates/blue-mcp/src/handlers/postmortem.rs b/crates/blue-mcp/src/handlers/postmortem.rs index d2d1888..c8c1fe8 100644 --- a/crates/blue-mcp/src/handlers/postmortem.rs +++ b/crates/blue-mcp/src/handlers/postmortem.rs @@ -99,6 +99,7 @@ pub fn 
handle_create(state: &mut ProjectState, args: &Value) -> Result Result Result { - let title = args - .get("title") - .and_then(|v| v.as_str()) - .ok_or(ServerError::InvalidParams)?; + let rfc = args.get("rfc").and_then(|v| v.as_str()); + + // If RFC is provided, format title as "RFC NNNN: Title Case Name" + let title = if let Some(rfc_title) = rfc { + let (stripped, number) = strip_rfc_number_prefix(rfc_title); + let title_case = to_title_case(&stripped); + if let Some(n) = number { + format!("RFC {:04}: {}", n, title_case) + } else { + title_case + } + } else { + args.get("title") + .and_then(|v| v.as_str()) + .ok_or(ServerError::InvalidParams)? + .to_string() + }; let base = args .get("base") @@ -510,3 +531,19 @@ fn update_checkbox_in_body(body: &str, item_selector: &str) -> Result<(String, S ))), } } + +/// Convert kebab-case to Title Case +/// +/// Example: "consistent-branch-naming" -> "Consistent Branch Naming" +fn to_title_case(s: &str) -> String { + s.split('-') + .map(|word| { + let mut chars = word.chars(); + match chars.next() { + None => String::new(), + Some(first) => first.to_uppercase().chain(chars).collect(), + } + }) + .collect::<Vec<String>>() + .join(" ") +} diff --git a/crates/blue-mcp/src/handlers/runbook.rs b/crates/blue-mcp/src/handlers/runbook.rs index 91c4fd6..7ae74cf 100644 --- a/crates/blue-mcp/src/handlers/runbook.rs +++ b/crates/blue-mcp/src/handlers/runbook.rs @@ -85,6 +85,7 @@ pub fn handle_create(state: &mut ProjectState, args: &Value) -> Result +fn strip_rfc_number_prefix(title: &str) -> (String, Option<u32>) { + // Match pattern: NNNN-rest-of-title + if title.len() > 5 && title.chars().take(4).all(|c| c.is_ascii_digit()) && title.chars().nth(4) == Some('-') { + let number: Option<u32> = title[..4].parse().ok(); + let stripped = title[5..].to_string(); + (stripped, number) + } else { + (title.to_string(), None) + } +} + /// Handle blue_worktree_create pub fn handle_create(state: &ProjectState, 
args: &Value) -> Result Result Result crate::handlers::llm::handle_model_pull(&call.arguments.unwrap_or_default()), "blue_model_remove" => crate::handlers::llm::handle_model_remove(&call.arguments.unwrap_or_default()), "blue_model_warmup" => crate::handlers::llm::handle_model_warmup(&call.arguments.unwrap_or_default()), + // RFC 0006: Delete tools + "blue_delete" => self.handle_delete(&call.arguments), + "blue_restore" => self.handle_restore(&call.arguments), + "blue_deleted_list" => self.handle_deleted_list(&call.arguments), + "blue_purge_deleted" => self.handle_purge_deleted(&call.arguments), _ => Err(ServerError::ToolNotFound(call.name)), }?; @@ -2787,6 +2889,88 @@ impl BlueServer { .and_then(|v| v.as_str()); crate::handlers::realm::handle_notifications_list(self.cwd.as_deref(), state) } + + // RFC 0006: Delete handlers + + fn handle_delete(&mut self, args: &Option<Value>) -> Result<Value, ServerError> { + let args = args.as_ref().ok_or(ServerError::InvalidParams)?; + + let doc_type_str = args + .get("doc_type") + .and_then(|v| v.as_str()) + .ok_or(ServerError::InvalidParams)?; + let doc_type = DocType::from_str(doc_type_str) + .ok_or(ServerError::InvalidParams)?; + + let title = args + .get("title") + .and_then(|v| v.as_str()) + .ok_or(ServerError::InvalidParams)?; + + let dry_run = args + .get("dry_run") + .and_then(|v| v.as_bool()) + .unwrap_or(false); + + let force = args + .get("force") + .and_then(|v| v.as_bool()) + .unwrap_or(false); + + let permanent = args + .get("permanent") + .and_then(|v| v.as_bool()) + .unwrap_or(false); + + if dry_run { + let state = self.ensure_state()?; + crate::handlers::delete::handle_delete_dry_run(state, doc_type, title) + } else { + let state = self.ensure_state_mut()?; + crate::handlers::delete::handle_delete(state, doc_type, title, force, permanent) + } + } + + fn handle_restore(&mut self, args: &Option<Value>) -> Result<Value, ServerError> { + let args = args.as_ref().ok_or(ServerError::InvalidParams)?; + + let doc_type_str = args + .get("doc_type") + .and_then(|v| 
v.as_str()) + .ok_or(ServerError::InvalidParams)?; + let doc_type = DocType::from_str(doc_type_str) + .ok_or(ServerError::InvalidParams)?; + + let title = args + .get("title") + .and_then(|v| v.as_str()) + .ok_or(ServerError::InvalidParams)?; + + let state = self.ensure_state_mut()?; + crate::handlers::delete::handle_restore(state, doc_type, title) + } + + fn handle_deleted_list(&mut self, args: &Option<Value>) -> Result<Value, ServerError> { + let doc_type = args + .as_ref() + .and_then(|a| a.get("doc_type")) + .and_then(|v| v.as_str()) + .and_then(DocType::from_str); + + let state = self.ensure_state()?; + crate::handlers::delete::handle_list_deleted(state, doc_type) + } + + fn handle_purge_deleted(&mut self, args: &Option<Value>) -> Result<Value, ServerError> { + let days = args + .as_ref() + .and_then(|a| a.get("days")) + .and_then(|v| v.as_i64()) + .unwrap_or(7); + + let state = self.ensure_state_mut()?; + crate::handlers::delete::handle_purge_deleted(state, days) + } } impl Default for BlueServer { diff --git a/docs/cli/README.md b/docs/cli/README.md index 2cb9e8b..9d369c2 100644 --- a/docs/cli/README.md +++ b/docs/cli/README.md @@ -64,4 +64,10 @@ Run Blue as an MCP server for Claude integration: blue mcp ``` -Configure in Claude settings to enable Blue tools. +This exposes 8 realm coordination tools to Claude: +- `realm_status`, `realm_check`, `contract_get` +- `session_start`, `session_stop` +- `realm_worktree_create`, `realm_pr_status` +- `notifications_list` + +See [../mcp/README.md](../mcp/README.md) for tool reference and [../mcp/integration.md](../mcp/integration.md) for setup guide.