feat: implement RFC 0006 (soft-delete) and RFC 0007 (branch naming)
RFC 0006 - Document Deletion Tools:
- Add soft-delete with 7-day retention before permanent deletion
- Add blue_delete, blue_restore, blue_deleted_list, blue_purge_deleted tools
- Add deleted_at column to documents table (schema v3)
- Block deletion of documents with ADR dependents
- Support dry_run, force, and permanent options

RFC 0007 - Consistent Branch Naming:
- Strip RFC number prefix from branch/worktree names
- Branch format: feature-description (not rfc/NNNN-feature-description)
- PR title format: RFC NNNN: Feature Description
- Add strip_rfc_number_prefix helper with tests

Also:
- Remove orphan .blue/repos/ and .blue/data/ directories
- Fix docs path resolution bug (spike documented)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
parent 28898556cd
commit 489942cd35
23 changed files with 1018 additions and 2281 deletions

(binary file not shown)

.blue/docs/rfcs/0007-consistent-branch-naming.md (new file, 85 lines)
@@ -0,0 +1,85 @@
# RFC 0007: Consistent Branch Naming

| | |
|---|---|
| **Status** | Implemented |
| **Date** | 2026-01-24 |

---

## Summary

Branch names and worktrees for RFC implementation are inconsistent. Some use the full RFC name with number prefix, others use arbitrary names. This makes it hard to correlate branches with their source RFCs and clutters the git history.

## Problem

Currently when implementing an RFC:

- Branch names vary: `rfc-0005`, `feature/local-llm`, `0005-local-llm-integration`, etc.
- Worktree directories follow no convention
- No clear way to find which branch implements which RFC
- PR titles don't consistently reference the RFC number

## Proposal

### Naming Convention

For an RFC file named `NNNN-feature-description.md`:

| Artifact | Name |
|----------|------|
| RFC file | `NNNN-feature-description.md` |
| Branch | `feature-description` |
| Worktree | `feature-description` |
| PR title | `RFC NNNN: Feature Description` |

### Examples

| RFC File | Branch | Worktree |
|----------|--------|----------|
| `0005-local-llm-integration.md` | `local-llm-integration` | `local-llm-integration` |
| `0006-document-deletion-tools.md` | `document-deletion-tools` | `document-deletion-tools` |
| `0007-consistent-branch-naming.md` | `consistent-branch-naming` | `consistent-branch-naming` |

### Rationale

**Why strip the number prefix?**

- Branch names stay short and readable
- The RFC number is metadata, not the feature identity
- `git branch` output is cleaner
- Tab completion is easier

**Why keep feature-description?**

- Direct correlation to RFC title
- Descriptive without being verbose
- Consistent kebab-case convention
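
The commit mentions a `strip_rfc_number_prefix` helper for this. A minimal sketch of what such a helper could look like (the actual signature and location in the codebase may differ):

```rust
/// Strip a leading RFC number prefix ("NNNN-") from a name.
/// Returns the input unchanged when no all-digit prefix is present.
fn strip_rfc_number_prefix(name: &str) -> &str {
    match name.split_once('-') {
        Some((prefix, rest))
            if !prefix.is_empty() && prefix.chars().all(|c| c.is_ascii_digit()) =>
        {
            rest
        }
        _ => name,
    }
}

fn main() {
    assert_eq!(
        strip_rfc_number_prefix("0007-consistent-branch-naming"),
        "consistent-branch-naming"
    );
    // Names without a numeric prefix pass through untouched.
    assert_eq!(
        strip_rfc_number_prefix("feature-description"),
        "feature-description"
    );
    println!("ok");
}
```

Note this intentionally only strips a prefix that is entirely digits, so a branch that legitimately starts with a word is never truncated.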

### Implementation

1. Update `blue_worktree_create` to derive branch name from RFC title (strip number prefix)
2. Update `blue_pr_create` to include RFC number in PR title
3. ~~Add validation to reject branches with number prefixes~~ (deferred - convention is enforced by tooling)
4. Document convention in CLAUDE.md

### Migration

Existing branches don't need to change. Convention applies to new work only.

## Test Plan

- [x] `blue worktree create` uses `feature-description` format
- [x] Branch name derived correctly from RFC title
- [x] PR title includes RFC number when `rfc` parameter provided
- [ ] ~~Validation rejects `NNNN-*` branch names with helpful message~~ (deferred)

## Implementation Plan

- [x] Update worktree handler to strip RFC number from branch name
- [x] Update PR handler to format title as `RFC NNNN: Title`
- [x] Add `strip_rfc_number_prefix` helper function with tests
- [ ] Update documentation (CLAUDE.md)

---

*"Names matter. Make them count."*

— Blue

.blue/docs/spikes/2026-01-24-docs-path-resolution-bug.md (new file, 58 lines)
@@ -0,0 +1,58 @@
# Spike: Docs Path Resolution Bug

| | |
|---|---|
| **Status** | Completed |
| **Date** | 2026-01-24 |
| **Outcome** | Answered |

---

## Question

Why does `blue_rfc_create` write to `.blue/repos/blue/docs/rfcs/` instead of `.blue/docs/rfcs/`?

## Root Cause

The bug was caused by the coexistence of the OLD and NEW directory structures:

- OLD: `.blue/repos/blue/docs/`, `.blue/data/blue/blue.db`
- NEW: `.blue/docs/`, `.blue/blue.db`

When `detect_blue()` runs:

1. It sees that `.blue/repos/` or `.blue/data/` exists
2. It calls `migrate_to_new_structure()`
3. Migration is a no-op because the new paths already exist
4. It returns `BlueHome::new(root)`, which sets the correct paths

**However**, the MCP server caches `ProjectState` in `self.state`. If the server was started when the old structure was the only structure, the cached state has the old paths. The state only resets when `cwd` changes.

## Evidence

1. RFC created at `.blue/repos/blue/docs/rfcs/` (wrong)
2. Spike created at `.blue/docs/spikes/` (correct)
3. `detect_blue()` now returns correct paths
4. Old DB (`.blue/data/blue/blue.db`) was modified at 16:28
5. New DB (`.blue/blue.db`) was modified at 16:01

The RFC was stored in the old database because the MCP server had cached the old state.

## Fix Applied

Removed the old structure directories:

```bash
rm -rf .blue/repos .blue/data
```

This prevents the migration code path from triggering and ensures only the new paths are used.

## Recommendations

1. Migration should DELETE old directories after migration completes, not leave them as orphans
2. Or: `detect_blue()` should always use new paths and ignore the old structure once the new structure exists
3. Consider adding a version marker file (`.blue/version`) to distinguish structure versions
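
Recommendation 3 could look roughly like the sketch below. Everything here is illustrative, not the actual implementation: `BLUE_STRUCTURE_VERSION`, the plain-text file format, and the function names are all assumptions.

```rust
use std::fs;
use std::path::Path;

// Hypothetical current layout version; bump whenever .blue/ layout changes.
const BLUE_STRUCTURE_VERSION: u32 = 2;

/// Read `.blue/version`, defaulting to 1 (the old layout) when the file
/// is missing or unparsable.
fn structure_version(blue_dir: &Path) -> u32 {
    fs::read_to_string(blue_dir.join("version"))
        .ok()
        .and_then(|s| s.trim().parse().ok())
        .unwrap_or(1)
}

/// Migration runs only when the recorded version is behind the current one,
/// so orphaned old directories can no longer re-trigger the old code path.
fn needs_migration(blue_dir: &Path) -> bool {
    structure_version(blue_dir) < BLUE_STRUCTURE_VERSION
}

fn main() {
    let blue = std::env::temp_dir().join("blue-version-demo/.blue");
    fs::create_dir_all(&blue).unwrap();
    fs::write(blue.join("version"), "2\n").unwrap();
    assert!(!needs_migration(&blue));
    println!("structure version: {}", structure_version(&blue));
}
```

With a marker like this, the stale-cache failure mode becomes detectable: the cached `ProjectState` could also record the version it was built against and be invalidated on mismatch.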

---

*"The old paths were ghosts. We exorcised them."*

— Blue

@@ -1,198 +0,0 @@
# RFC 0001: Dialogue SQLite Metadata

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | sqlite-storage-expansion |

---

## Summary

Dialogue files (`.dialogue.md`) are not indexed in SQLite. Can't query them, link them to RFCs, or track relationships. Need to add `DocType::Dialogue` and store metadata while keeping content in markdown.

## Background

Dialogues are transcripts of conversations - different from RFCs/spikes, which are living documents with status transitions.

Current state:

- Dialogues exist as `.dialogue.md` files in `docs/dialogues/`
- No SQLite tracking
- No way to search or link them

## Proposal

### 1. Add DocType::Dialogue

```rust
pub enum DocType {
    Rfc,
    Spike,
    Adr,
    Decision,
    Prd,
    Postmortem,
    Runbook,
    Dialogue, // NEW
}
```

### 2. Dialogue Metadata (SQLite)

Store in `documents` table:

- `doc_type`: "dialogue"
- `title`: Dialogue title
- `status`: "complete" (dialogues don't have status transitions)
- `file_path`: Path to .dialogue.md file

Store in `metadata` table:

- `date`: When dialogue occurred
- `participants`: Who was involved (e.g., "Claude, Eric")
- `linked_rfc`: RFC this dialogue relates to (optional)
- `topic`: Short description of what was discussed

### 3. New Tool: `blue_dialogue_create`

```
blue_dialogue_create title="realm-design-session" linked_rfc="cross-repo-realms"
```

Creates:

- Entry in documents table
- Metadata entries
- Skeleton .dialogue.md file

### 4. Dialogue File Format

```markdown
# Dialogue: Realm Design Session

| | |
|---|---|
| **Date** | 2026-01-24 |
| **Participants** | Claude, Eric |
| **Topic** | Designing cross-repo coordination |
| **Linked RFC** | [cross-repo-realms](../rfcs/0001-cross-repo-realms.md) |

---

## Context

[Why this dialogue happened]

## Key Decisions

- Decision 1
- Decision 2

## Transcript

[Full conversation or summary]

---

*Extracted by Blue*
```

### 5. Keep Content in Markdown

Unlike other doc types, dialogue content stays primarily in markdown:

- Full transcripts can be large
- Human-readable format preferred
- Git diff friendly

SQLite stores metadata only, for:

- Fast searching
- Relationship tracking
- Listing/filtering

### 6. New Tool: `blue_dialogue_get`

```
blue_dialogue_get title="realm-design-session"
```

Returns dialogue metadata and file path.

### 7. New Tool: `blue_dialogue_list`

```
blue_dialogue_list linked_rfc="cross-repo-realms"
```

Returns all dialogues, optionally filtered by linked RFC.

### 8. Integration with `blue_extract_dialogue`

Existing `blue_extract_dialogue` extracts text from Claude JSONL. Extend it to:

```
blue_extract_dialogue task_id="abc123" save_as="realm-design-session" linked_rfc="cross-repo-realms"
```

- Extract dialogue from JSONL
- Create .dialogue.md file
- Register in SQLite with metadata

### 9. Migration of Existing Dialogues

On first run, scan `docs/dialogues/` for `.dialogue.md` files:

- Parse frontmatter for metadata
- Register in documents table
- Preserve file locations
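
The first step of that scan could be sketched as follows. The function name and the non-recursive scan are assumptions; SQLite registration is omitted here.

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Find all `.dialogue.md` files directly under `dir`, sorted by path.
/// Missing or unreadable directories yield an empty list rather than an error,
/// matching "migrate if present" semantics.
fn find_dialogues(dir: &Path) -> Vec<PathBuf> {
    let mut out: Vec<PathBuf> = fs::read_dir(dir)
        .into_iter() // Err -> empty iterator
        .flatten()
        .flatten() // skip unreadable entries
        .map(|entry| entry.path())
        .filter(|p| {
            p.file_name()
                .and_then(|n| n.to_str())
                .map_or(false, |n| n.ends_with(".dialogue.md"))
        })
        .collect();
    out.sort();
    out
}

fn main() {
    let dir = std::env::temp_dir().join("blue-dialogue-demo/docs/dialogues");
    fs::create_dir_all(&dir).unwrap();
    fs::write(dir.join("realm-design-session.dialogue.md"), "# Dialogue\n").unwrap();
    fs::write(dir.join("notes.md"), "not a dialogue\n").unwrap();
    let found = find_dialogues(&dir);
    // Only the .dialogue.md file matches; plain .md files are ignored.
    assert_eq!(found.len(), 1);
    println!("found {} dialogue(s)", found.len());
}
```

Each discovered path would then be parsed for frontmatter and upserted into the `documents` and `metadata` tables.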

## Security Note

Dialogues may contain sensitive information discussed during development. Before committing:

- Review for credentials, API keys, or secrets
- Use `[REDACTED]` for sensitive values
- Consider if full transcript is needed vs summary

## Example Transcript Section

```markdown
## Transcript

**Eric**: How should we handle authentication for the API?

**Claude**: I'd recommend JWT tokens with short expiry. Here's why:

1. Stateless - no session storage needed
2. Can include claims for authorization
3. Easy to invalidate by changing signing key

**Eric**: What about refresh tokens?

**Claude**: Store refresh tokens in httpOnly cookies. When access token expires,
use refresh endpoint to get new pair. This balances security with UX.

**Decision**: Use JWT + refresh token pattern.
```

## Implementation

1. Add `DocType::Dialogue` to enum
2. Create `blue_dialogue_create` handler
3. Create `blue_dialogue_list` handler
4. Update `blue_search` to include dialogues
5. Add dialogue markdown generation

## Test Plan

- [ ] Create dialogue with metadata
- [ ] Link dialogue to RFC
- [ ] Dialogue without linked RFC works
- [ ] Search finds dialogues by title/topic
- [ ] List dialogues by RFC works
- [ ] List all dialogues works
- [ ] Get specific dialogue returns metadata
- [ ] Dialogue content stays in markdown
- [ ] Metadata stored in SQLite
- [ ] Existing dialogues migrated on first run
- [ ] Extract dialogue from JSONL creates proper entry

---

*"Right then. Let's get to it."*

— Blue

@@ -1,227 +0,0 @@
# RFC 0002: Runbook Action Lookup

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | runbook-driven-actions |

---

## Summary

No way to discover and follow runbooks when performing repo actions. Claude guesses instead of following documented procedures for docker builds, deploys, releases, etc.

## Proposal

### 1. Action Tags in Runbooks

Add an `actions` field to runbook frontmatter:

```markdown
# Runbook: Docker Build

| | |
|---|---|
| **Status** | Active |
| **Actions** | docker build, build image, container build |
```

Store actions in the SQLite metadata table for fast lookup.

### 2. New Tool: `blue_runbook_lookup`

```
blue_runbook_lookup action="docker build"
```

Returns a structured response:

```json
{
  "found": true,
  "runbook": {
    "title": "Docker Build",
    "file": ".blue/docs/runbooks/docker-build.md",
    "actions": ["docker build", "build image", "container build"],
    "operations": [
      {
        "name": "Build Production Image",
        "steps": ["...", "..."],
        "verification": "docker images | grep myapp",
        "rollback": "docker rmi myapp:latest"
      }
    ]
  },
  "hint": "Follow the steps above. Use verification to confirm success."
}
```

If no match: `{ "found": false, "hint": "No runbook found. Proceed with caution." }`

### 3. New Tool: `blue_runbook_actions`

List all registered actions:

```
blue_runbook_actions
```

Returns:

```json
{
  "actions": [
    { "action": "docker build", "runbook": "Docker Build" },
    { "action": "deploy staging", "runbook": "Deployment" },
    { "action": "run tests", "runbook": "Testing" }
  ]
}
```

### 4. Matching Algorithm

Word-based matching with priority:

1. **Exact match** - "docker build" matches "docker build" (100%)
2. **All words match** - "docker" matches "docker build" (90%)
3. **Partial words** - "build" matches "docker build" (80%)

If multiple runbooks match, return the highest priority. Ties broken by most specific (more words in action).
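
The priority scheme above can be sketched as plain word comparison. This is illustrative, not the actual implementation; in particular, it reads the 90% tier as "all query words match from the start of the action", which is one plausible way to distinguish it from the 80% tier.

```rust
/// Score a query against a registered action per the priority table:
/// exact 100, leading words 90, any word overlap 80, otherwise 0.
fn match_score(query: &str, action: &str) -> u32 {
    let q: Vec<&str> = query.split_whitespace().collect();
    let a: Vec<&str> = action.split_whitespace().collect();
    if q == a {
        100 // "docker build" vs "docker build"
    } else if a.starts_with(&q) {
        90 // "docker" vs "docker build"
    } else if q.iter().any(|w| a.contains(w)) {
        80 // "build" vs "docker build"
    } else {
        0
    }
}

/// Highest score wins; ties broken by the more specific action (more words).
fn best_match<'a>(query: &str, actions: &[&'a str]) -> Option<&'a str> {
    actions
        .iter()
        .map(|&a| (match_score(query, a), a.split_whitespace().count(), a))
        .filter(|&(score, _, _)| score > 0)
        .max_by_key(|&(score, words, _)| (score, words))
        .map(|(_, _, a)| a)
}

fn main() {
    let actions = ["docker build", "build image", "deploy staging"];
    assert_eq!(best_match("docker build", &actions), Some("docker build"));
    assert_eq!(best_match("docker", &actions), Some("docker build"));
    assert_eq!(best_match("restart db", &actions), None);
    println!("ok");
}
```

A miss returns `None`, which maps directly to the `{ "found": false, ... }` response above.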

### 5. Schema

```sql
-- In metadata table
INSERT INTO metadata (document_id, key, value)
VALUES (runbook_id, 'action', 'docker build');

-- Multiple actions = multiple rows
INSERT INTO metadata (document_id, key, value)
VALUES (runbook_id, 'action', 'build image');
```

### 6. Update `blue_runbook_create`

```
blue_runbook_create title="Docker Build" actions=["docker build", "build image"]
```

- Accept `actions` array parameter
- Store each action in metadata table
- Include in generated markdown

### 7. CLAUDE.md Guidance

Document the pattern for repos:

```markdown
## Runbooks

Before executing build, deploy, or release operations:

1. Check for runbook: `blue_runbook_lookup action="docker build"`
2. If found, follow the documented steps
3. Use verification commands to confirm success
4. If something fails, check rollback procedures

Available actions: `blue_runbook_actions`
```

## Security Note

Runbooks should **never** contain actual credentials or secrets. Use placeholders:

```markdown
**Steps**:
1. Export credentials: `export API_KEY=$YOUR_API_KEY`
2. Run deploy: `./deploy.sh`
```

Not:

```markdown
**Steps**:
1. Run deploy: `API_KEY=abc123 ./deploy.sh` # WRONG!
```

## Example Runbook

````markdown
# Runbook: Docker Build

| | |
|---|---|
| **Status** | Active |
| **Actions** | docker build, build image, container build |
| **Owner** | Platform Team |

---

## Overview

Build and tag Docker images for the application.

## Prerequisites

- [ ] Docker installed and running
- [ ] Access to container registry
- [ ] `.env` file configured

## Common Operations

### Operation: Build Production Image

**When to use**: Preparing for deployment

**Steps**:
1. Ensure on correct branch: `git branch --show-current`
2. Pull latest: `git pull origin main`
3. Build image: `docker build -t myapp:$(git rev-parse --short HEAD) .`
4. Tag as latest: `docker tag myapp:$(git rev-parse --short HEAD) myapp:latest`

**Verification**:
```bash
docker images | grep myapp
docker run --rm myapp:latest --version
```

**Rollback**:
```bash
docker rmi myapp:latest
docker tag myapp:previous myapp:latest
```

## Troubleshooting

### Symptom: Build fails with "no space left"

**Resolution**:
1. `docker system prune -a`
2. Retry build
````

## Implementation

1. Add `actions` parameter to `blue_runbook_create`
2. Store actions in metadata table
3. Implement `blue_runbook_lookup` with matching algorithm
4. Implement `blue_runbook_actions` for discovery
5. Parse runbook markdown to extract operations
6. Update runbook markdown generation

## Test Plan

- [ ] Create runbook with actions tags
- [ ] Lookup by exact action match
- [ ] Lookup by partial match (word subset)
- [ ] No match returns gracefully
- [ ] Multiple runbooks - highest priority wins
- [ ] List all actions works
- [ ] Actions stored in SQLite metadata
- [ ] Operations parsed from markdown correctly
- [ ] Malformed runbook returns partial data gracefully

---

*"Right then. Let's get to it."*

— Blue

@@ -1,155 +0,0 @@
# RFC 0003: Per Repo Blue Folders

| | |
|---|---|
| **Status** | Draft |
| **Date** | 2026-01-24 |
| **Source Spike** | per-repo-blue-folder |

---

## Summary

Currently all docs flow to one central .blue folder. Each repo should have its own .blue folder so docs live with code and git tracking works naturally.

## Current Behavior

```
blue/                          # Central repo
├── .blue/
│   ├── repos/
│   │   ├── blue/docs/...      # Blue's docs
│   │   └── other-repo/docs/   # Other repo's docs (wrong!)
│   └── data/
│       └── blue/blue.db
```

All repos' docs end up in the blue repo's `.blue/repos/`.

## Proposed Behavior

```
repo-a/
├── .blue/
│   ├── docs/
│   │   ├── rfcs/
│   │   ├── spikes/
│   │   └── runbooks/
│   └── blue.db
└── src/...

repo-b/
├── .blue/
│   ├── docs/...
│   └── blue.db
└── src/...
```

Each repo has its own `.blue/` with its own docs and database.

## Changes Required

### 1. Simplify BlueHome structure

```rust
pub struct BlueHome {
    pub root: PathBuf,      // Repo root
    pub blue_dir: PathBuf,  // .blue/
    pub docs_path: PathBuf, // .blue/docs/
    pub db_path: PathBuf,   // .blue/blue.db
}
```

### 2. Change detect_blue behavior

- Find git repo root for current directory
- Look for `.blue/` there (don't search upward beyond repo)
- Auto-create on first blue command (no `blue init` required)

**Edge cases:**

- No git repo: Create `.blue/` in current directory with warning
- Monorepo: One `.blue/` at git root (packages share it)
- Subdirectory: Always resolve to git root
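
The root-resolution part of those edge cases can be sketched as an ancestor walk. This is a sketch under assumptions: the function name is illustrative, and the real `detect_blue` may shell out to `git rev-parse --show-toplevel` instead of looking for a `.git` entry.

```rust
use std::path::{Path, PathBuf};

/// Walk up from `start` looking for a `.git` entry; that directory is the
/// repo root and hosts `.blue/`. Without a git repo, fall back to `start`
/// itself (the "no git repo" edge case, where a warning would be emitted).
fn find_repo_root(start: &Path) -> PathBuf {
    let mut dir = start;
    loop {
        if dir.join(".git").exists() {
            return dir.to_path_buf();
        }
        match dir.parent() {
            Some(parent) => dir = parent,
            None => return start.to_path_buf(),
        }
    }
}

fn main() {
    let root = std::env::temp_dir().join("blue-root-demo");
    let nested = root.join("packages/app/src");
    std::fs::create_dir_all(root.join(".git")).unwrap();
    std::fs::create_dir_all(&nested).unwrap();
    // A deep subdirectory of a monorepo resolves to the single git root.
    assert_eq!(find_repo_root(&nested), root);
    println!("root: {}", find_repo_root(&nested).display());
}
```

Because the walk stops at the first `.git` found, nested packages in a monorepo all share the one `.blue/` at the root, matching the edge-case table above.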

### 3. Flatten docs structure

Before: `.blue/repos/<project>/docs/rfcs/`
After: `.blue/docs/rfcs/`

No need for a project subdirectory when per-repo.

### 4. Migration

Automatic on first run:

1. Detect old structure (`.blue/repos/` exists)
2. Find docs for current project in `.blue/repos/<project>/docs/`
3. Move to `.blue/docs/`
4. Migrate database entries
5. Clean up empty directories
6. Log what was migrated

**Conflict resolution:** If docs exist in both locations, prefer newer by mtime.

## Git Tracking

Repos should commit their `.blue/` folder:

**Track:**

- `.blue/docs/**` - RFCs, spikes, runbooks, etc.
- `.blue/blue.db` - SQLite database (source of truth)
- `.blue/config.yaml` - Configuration

**Gitignore:**

- `.blue/*.db-shm` - SQLite shared memory (transient)
- `.blue/*.db-wal` - SQLite write-ahead log (transient)

Recommended `.gitignore` addition:

```
# Blue transient files
.blue/*.db-shm
.blue/*.db-wal
```

## Cross-Repo Coordination

The daemon/realm system (RFC 0001) handles cross-repo concerns:

- Central daemon tracks active sessions
- Realms coordinate contracts between repos
- Each repo remains self-contained

## FAQ

**Q: Do I need to run `blue init`?**
A: No. Blue auto-creates `.blue/` on first command.

**Q: What about my existing docs in the central location?**
A: Auto-migrated on first run. Check git status to verify.

**Q: Should I commit `.blue/blue.db`?**
A: Yes. It's the source of truth for your project's Blue state.

**Q: What if I'm in a monorepo?**
A: One `.blue/` at the git root. All packages share it.

**Q: Can I use Blue without git?**
A: Yes, but with a warning. `.blue/` created in current directory.

**Q: How do I see cross-repo status?**
A: Use `blue realm_status` (requires daemon running).

## Test Plan

- [ ] New repo gets `.blue/` on first blue command
- [ ] Docs created in repo's own `.blue/docs/`
- [ ] Database at `.blue/blue.db`
- [ ] Old structure migrated automatically
- [ ] Realm/daemon still works across repos
- [ ] No git repo falls back gracefully with warning
- [ ] Monorepo uses single `.blue/` at root

---

*"Right then. Let's get to it."*

— Blue

@@ -1,363 +0,0 @@
# RFC 0004: ADR Adherence
|
||||
|
||||
| | |
|
||||
|---|---|
|
||||
| **Status** | Draft |
|
||||
| **Date** | 2026-01-24 |
|
||||
| **Source Spike** | adr-adherence |
|
||||
| **ADRs** | 0004 (Evidence), 0007 (Integrity), 0008 (Honor) |
|
||||
|
||||
---
|
||||
|
||||
## Summary
|
||||
|
||||
No mechanism to surface relevant ADRs during work, track ADR citations, or verify adherence to testable architectural decisions.
|
||||
|
||||
## Philosophy
|
||||
|
||||
**Guide, don't block.** ADRs are beliefs, not bureaucracy. Blue should:
|
||||
- Help you find relevant ADRs
|
||||
- Make citing ADRs easy
|
||||
- Verify testable ADRs optionally
|
||||
- Never require ADR approval to proceed
|
||||
|
||||
## Proposal
|
||||
|
||||
### Layer 1: Awareness (Passive)
|
||||
|
||||
#### `blue_adr_list`
|
||||
|
||||
List all ADRs with summaries:
|
||||
|
||||
```
|
||||
blue_adr_list
|
||||
```
|
||||
|
||||
Returns:
|
||||
```json
|
||||
{
|
||||
"adrs": [
|
||||
{ "number": 0, "title": "Never Give Up", "summary": "The only rule we need" },
|
||||
{ "number": 4, "title": "Evidence", "summary": "Show, don't tell" },
|
||||
...
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### `blue_adr_get`
|
||||
|
||||
Get full ADR content:
|
||||
|
||||
```
|
||||
blue_adr_get number=4
|
||||
```
|
||||
|
||||
Returns ADR markdown and metadata.
|
||||
|
||||
### Layer 2: Contextual Relevance (Active)
|
||||
|
||||
#### `blue_adr_relevant`
|
||||
|
||||
Given context, use AI to suggest relevant ADRs:
|
||||
|
||||
```
|
||||
blue_adr_relevant context="testing strategy"
|
||||
```
|
||||
|
||||
Returns:
|
||||
```json
|
||||
{
|
||||
"relevant": [
|
||||
{
|
||||
"number": 4,
|
||||
"title": "Evidence",
|
||||
"confidence": 0.95,
|
||||
"why": "Testing is the primary form of evidence that code works. This ADR's core principle 'show, don't tell' directly applies to test strategy decisions."
|
||||
},
|
||||
{
|
||||
"number": 7,
|
||||
"title": "Integrity",
|
||||
"confidence": 0.82,
|
||||
"why": "Tests verify structural wholeness - that the system holds together under various conditions."
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**AI-Powered Relevance:**
|
||||
|
||||
Keyword matching fails for philosophical ADRs. "Courage" won't match "deleting legacy code" even though ADR 0009 is highly relevant.
|
||||
|
||||
The AI evaluator:
|
||||
1. Receives the full context (RFC title, problem, code diff, etc.)
|
||||
2. Reads all ADR content (cached in prompt)
|
||||
3. Determines semantic relevance with reasoning
|
||||
4. Returns confidence scores and explanations
|
||||
|
||||
**Prompt Structure:**
|
||||
|
||||
```
|
||||
You are evaluating which ADRs are relevant to this work.
|
||||
|
||||
Context: {user_context}
|
||||
|
||||
ADRs:
|
||||
{all_adr_summaries}
|
||||
|
||||
For each ADR, determine:
|
||||
1. Is it relevant? (yes/no)
|
||||
2. Confidence (0.0-1.0)
|
||||
3. Why is it relevant? (1-2 sentences)
|
||||
|
||||
Only return ADRs with confidence > 0.7.
|
||||
```
|
||||
|
||||
**Model Selection:**
|
||||
- Use fast/cheap model (Haiku) for relevance checks
|
||||
- Results are suggestions, not authoritative
|
||||
- User can override or ignore
|
||||
|
||||
**Graceful Degradation:**
|
||||
|
||||
| Condition | Behavior |
|
||||
|-----------|----------|
|
||||
| API key configured, API up | AI relevance (default) |
|
||||
| API key configured, API down | Fallback to keywords + warning |
|
||||
| No API key | Keywords only (no warning) |
|
||||
| `--no-ai` flag | Keywords only (explicit) |
|
||||
|
||||
**Response Metadata:**
|
||||
|
||||
```json
|
||||
{
|
||||
"method": "ai", // or "keyword"
|
||||
"cached": false,
|
||||
"latency_ms": 287,
|
||||
"relevant": [...]
|
||||
}
|
||||
```
|
||||
|
||||
**Privacy:**
|
||||
- Only context string sent to API (not code, not files)
|
||||
- No PII should be in context string
|
||||
- User controls what context to send
|
||||
|
||||
#### RFC ADR Suggestions
|
||||
|
||||
When creating an RFC, Blue suggests relevant ADRs based on title/problem:
|
||||
|
||||
```
|
||||
blue_rfc_create title="testing-framework" ...
|
||||
|
||||
→ "Consider these ADRs: 0004 (Evidence), 0010 (No Dead Code)"
|
||||
```
|
||||
|
||||
#### ADR Citations in Documents
|
||||
|
||||
RFCs can cite ADRs in frontmatter:
|
||||
|
||||
```markdown
|
||||
| **ADRs** | 0004, 0007, 0010 |
|
||||
```
|
||||
|
||||
Or inline:
|
||||
|
||||
```markdown
|
||||
Per ADR 0004 (Evidence), we require test coverage > 80%.
|
||||
```
|
||||
|
||||
### Layer 3: Lightweight Verification (Optional)
|
||||
|
||||
#### `blue_adr_audit`
|
||||
|
||||
Scan for potential ADR violations. Only for testable ADRs:
|
||||
|
||||
```
|
||||
blue_adr_audit
|
||||
```
|
||||
|
||||
Returns:
|
||||
```json
|
||||
{
|
||||
"findings": [
|
||||
{
|
||||
"adr": 10,
|
||||
"title": "No Dead Code",
|
||||
"type": "warning",
|
||||
"message": "3 unused exports in src/utils.rs",
|
||||
"locations": ["src/utils.rs:45", "src/utils.rs:67", "src/utils.rs:89"]
|
||||
},
|
||||
{
|
||||
"adr": 4,
|
||||
"title": "Evidence",
|
||||
"type": "info",
|
||||
"message": "Test coverage at 72% (threshold: 80%)"
|
||||
}
|
||||
],
|
||||
"passed": [
|
||||
{ "adr": 5, "title": "Single Source", "message": "No duplicate definitions found" }
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Testable ADRs:**
|
||||
|
||||
| ADR | Check |
|
||||
|-----|-------|
|
||||
| 0004 Evidence | Test coverage, assertion ratios |
|
||||
| 0005 Single Source | Duplicate definitions, copy-paste detection |
|
||||
| 0010 No Dead Code | Unused exports, unreachable branches |
|
||||
|
||||
**Non-testable ADRs** (human judgment):
|
||||
|
||||
| ADR | Guidance |
|
||||
|-----|----------|
|
||||
| 0001 Purpose | Does this serve meaning? |
|
||||
| 0002 Presence | Are we actually here? |
|
||||
| 0009 Courage | Are we acting rightly? |
|
||||
| 0013 Overflow | Building from fullness? |
|
||||
|
||||
### Layer 4: Documentation Trail
|
||||
|
||||
#### ADR-Document Links
|
||||
|
||||
Store citations in `document_links` table:
|
||||
|
||||
```sql
|
||||
INSERT INTO document_links (source_id, target_id, link_type)
|
||||
VALUES (rfc_id, adr_doc_id, 'cites_adr');
|
||||
```
|
||||
|
||||
#### Search by ADR
|
||||
|
||||
```
|
||||
blue_search query="adr:0004"
|
||||
```
|
||||
|
||||
Returns all documents citing ADR 0004.
|
||||
|
||||
#### ADR "Referenced By"
|
||||
|
||||
```
|
||||
blue_adr_get number=4
|
||||
```
|
||||
|
||||
Includes:
|
||||
```json
|
||||
{
|
||||
"referenced_by": [
|
||||
{ "type": "rfc", "title": "testing-framework", "date": "2026-01-20" },
|
||||
{ "type": "decision", "title": "require-integration-tests", "date": "2026-01-15" }
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## ADR Metadata Enhancement
|
||||
|
||||
Add to each ADR:
|
||||
|
||||
```markdown
|
||||
## Applies When
|
||||
|
||||
- Writing or modifying tests
|
||||
- Reviewing pull requests
|
||||
- Evaluating technical claims
|
||||
|
||||
## Anti-Patterns
|
||||
|
||||
- Claiming code works without tests
|
||||
- Trusting documentation over running code
|
||||
- Accepting "it works on my machine"
|
||||
```
|
||||
|
||||
This gives the AI richer context for relevance matching. Anti-patterns are particularly useful - they help identify when work might be *violating* an ADR.
|
||||
|
||||
## Implementation
|
||||
|
||||
1. Add ADR document type and loader
|
||||
2. Implement `blue_adr_list` and `blue_adr_get`
|
||||
3. **Implement AI relevance evaluator:**
|
||||
- Load all ADRs into prompt context
|
||||
- Send context + ADRs to LLM (Haiku for speed/cost)
|
||||
- Parse structured response with confidence scores
|
||||
- Cache ADR summaries to minimize token usage
|
||||
4. Implement `blue_adr_relevant` using AI evaluator
|
||||
5. Add ADR citation parsing to RFC creation
|
||||
6. Implement `blue_adr_audit` for testable ADRs
|
||||
7. Add "referenced_by" to ADR responses
|
||||
8. Extend `blue_search` for ADR queries
|
||||
|
||||
**AI Integration Notes:**
|
||||
|
||||
- Blue MCP server needs LLM access (API key in `.blue/config.yaml`)
|
||||
- Use streaming for responsiveness
|
||||
- Fallback to keyword matching if AI unavailable
|
||||
- Cache relevance results per context hash (5 min TTL)
|
||||

**Caching Strategy:**

```sql
CREATE TABLE adr_relevance_cache (
  context_hash TEXT PRIMARY KEY,
  adr_versions_hash TEXT,  -- Invalidate if ADRs change
  result_json TEXT,
  created_at TEXT,
  expires_at TEXT
);
```
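Behaviorally, the table above is a TTL map keyed by a hash of (context, ADR versions). A standard-library-only sketch of the same shape (names are illustrative; the real cache persists through SQLite):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::time::{Duration, Instant};

// Illustrative in-memory analogue of adr_relevance_cache.
struct RelevanceCache {
    ttl: Duration,
    entries: HashMap<u64, (String, Instant)>, // key -> (result_json, expires_at)
}

impl RelevanceCache {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    // Hash the ADR corpus version alongside the context, so any ADR
    // change invalidates the key (mirrors adr_versions_hash).
    fn key(context: &str, adr_versions: &str) -> u64 {
        let mut h = DefaultHasher::new();
        context.hash(&mut h);
        adr_versions.hash(&mut h);
        h.finish()
    }

    fn get(&self, key: u64) -> Option<&String> {
        match self.entries.get(&key) {
            Some((json, expires)) if Instant::now() < *expires => Some(json),
            _ => None,
        }
    }

    fn put(&mut self, key: u64, result_json: String) {
        self.entries.insert(key, (result_json, Instant::now() + self.ttl));
    }
}
```

A 5-minute TTL then falls out of constructing the cache with `Duration::from_secs(300)`.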

**Testing AI Relevance:**

- Golden test cases with expected ADRs (fuzzy match)
- Confidence thresholds: 0004 should be > 0.8 for "testing"
- Mock AI responses in unit tests
- Integration tests hit real API (rate limited)
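The golden cases above can live as plain fixtures. A sketch, where the case data and the fuzzy-match rule are illustrative assumptions:

```rust
// A golden case pairs a work context with the ADR numbers the
// evaluator is expected to surface (fixtures are illustrative).
struct GoldenCase {
    context: &'static str,
    expected_adrs: &'static [u32],
}

const GOLDEN_CASES: &[GoldenCase] = &[
    GoldenCase { context: "writing integration tests", expected_adrs: &[4] },
    GoldenCase { context: "deleting old code paths", expected_adrs: &[9, 10] },
];

// Fuzzy match: every expected ADR must be suggested; extra
// suggestions are tolerated rather than failed.
fn passes(suggested: &[u32], case: &GoldenCase) -> bool {
    case.expected_adrs.iter().all(|adr| suggested.contains(adr))
}
```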

## Test Plan

- [ ] List all ADRs returns correct count and summaries
- [ ] Get specific ADR returns full content
- [ ] AI relevance: "testing" context suggests 0004 (Evidence)
- [ ] AI relevance: "deleting old code" suggests 0009 (Courage), 0010 (No Dead Code)
- [ ] AI relevance: confidence scores are reasonable (0.7-1.0 range)
- [ ] AI relevance: explanations are coherent
- [ ] Fallback: keyword matching works when AI unavailable
- [ ] RFC with `| **ADRs** | 0004 |` creates document link
- [ ] Search `adr:0004` finds citing documents
- [ ] Audit detects unused exports (ADR 0010)
- [ ] Audit reports test coverage (ADR 0004)
- [ ] Non-testable ADRs not included in audit findings
- [ ] Caching: repeated same context uses cached result
- [ ] Cache invalidation: ADR content change clears relevant cache
- [ ] `--no-ai` flag forces keyword matching
- [ ] Response includes method (ai/keyword), cached, latency
- [ ] Graceful degradation when API unavailable

## FAQ

**Q: Will this block my PRs?**
A: No. All ADR features are informational. Nothing blocks.

**Q: Do I have to cite ADRs in every RFC?**
A: No. Citations are optional but encouraged for significant decisions.

**Q: What if I disagree with an ADR?**
A: ADRs can be superseded. Create a new ADR documenting why.

**Q: How do I add a new ADR?**
A: `blue_adr_create` (future work) or manually add to `docs/adrs/`.

**Q: Why use AI for relevance instead of keywords?**
A: Keywords fail for philosophical ADRs. "Courage" won't match "deleting legacy code", but ADR 0009 is highly relevant. AI understands semantic meaning.

**Q: What if I don't have an API key configured?**
A: Falls back to keyword matching. Less accurate but still functional.

**Q: How much does the AI relevance check cost?**
A: Uses Haiku (~$0.00025 per check). Cached for 5 minutes per unique context.

---

*"The beliefs that guide us, made visible."*

— Blue

(File diff suppressed because it is too large.)

@@ -1,17 +0,0 @@
# Spike: Adr Adherence

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

How can Blue help ensure work adheres to ADRs? What mechanisms could check, remind, or enforce architectural decisions?

---

*Investigation notes by Blue*

@@ -1,169 +0,0 @@
# Spike: Agentic Cli Integration

| | |
|---|---|
| **Status** | In Progress |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

Which commercial-compatible local agentic coding CLI (Aider, Goose, OpenCode) can be integrated into Blue CLI, and what's the best integration pattern?

---

## Findings

### Candidates Evaluated

| Tool | License | Language | MCP Support | Integration Pattern |
|------|---------|----------|-------------|---------------------|
| **Goose** | Apache-2.0 | Rust | Native | MCP client/server, subprocess |
| **Aider** | Apache-2.0 | Python | Via extensions | Subprocess, CLI flags |
| **OpenCode** | MIT | Go | Native | Go SDK, subprocess |

### Goose (Recommended)

**Why Goose wins:**

1. **Same language as Blue** - Rust-based, can share types and potentially link as a library
2. **Native MCP support** - Goose is built on MCP (co-developed with Anthropic). Blue already speaks MCP.
3. **Apache-2.0** - Commercial-compatible with patent grant
4. **Block backing** - Maintained by Block (Square/Cash App), contributed to the Linux Foundation's Agentic AI Foundation in Dec 2025
5. **25+ LLM providers** - Works with Ollama, OpenAI, Anthropic, local models

**Integration patterns:**

```
Option A: MCP Extension (Lowest friction)
┌─────────────────────────────────────────────┐
│  Goose CLI                                  │
│    ↓ (MCP client)                           │
│  Blue MCP Server (existing blue-mcp)        │
│    ↓                                        │
│  Blue tools: rfc_create, worktree, etc.     │
└─────────────────────────────────────────────┘

Option B: Blue as Goose Extension
┌─────────────────────────────────────────────┐
│  Blue CLI                                   │
│    ↓ (spawns)                               │
│  Goose (subprocess)                         │
│    ↓ (MCP client)                           │
│  Blue MCP Server                            │
└─────────────────────────────────────────────┘

Option C: Embedded (Future)
┌─────────────────────────────────────────────┐
│  Blue CLI                                   │
│    ↓ (links)                                │
│  goose-core (Rust crate)                    │
│    ↓                                        │
│  Local LLM / API                            │
└─────────────────────────────────────────────┘
```

**Recommendation: Option A first**

Goose already works as an MCP client. Blue already has an MCP server (`blue mcp`). The integration is:

```bash
# User installs goose
brew install block/tap/goose

# User configures Blue as a Goose extension
# In ~/.config/goose/config.yaml:
extensions:
  blue:
    type: stdio
    command: blue mcp
```

This requires **zero code changes** to Blue. Users get agentic coding with Blue's workflow tools immediately.

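A later `blue agent` subcommand could pre-wire this instead of asking users to edit config.yaml. A hypothetical sketch: it assumes Goose's `session --with-extension <command>` flag, which should be verified against the installed Goose version:

```rust
use std::process::Command;

// Hypothetical launcher: start a Goose session with Blue's MCP server
// attached as a stdio extension, so no config edits are needed.
// The `--with-extension` flag is an assumption about the Goose CLI.
fn goose_command(blue_mcp_cmd: &str) -> Command {
    let mut cmd = Command::new("goose");
    cmd.args(["session", "--with-extension", blue_mcp_cmd]);
    cmd
}
```

`goose_command("blue mcp").spawn()` would then hand the terminal over to Goose with Blue's tools available.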
### Aider

**Pros:**
- Mature, battle-tested (Apache-2.0)
- Git-native with smart commits
- Strong local model support via Ollama

**Cons:**
- Python-based (foreign to Rust codebase)
- CLI scripting API is "not officially supported"
- No native MCP (would need wrapper)

**Integration pattern:** Subprocess with `--message` flag for non-interactive use.

```rust
// Hypothetical
use std::process::Command;

let output = Command::new("aider")
    .args(["--message", "implement the function", "--yes-always"])
    .output()?;
```

**Verdict:** Viable but more friction than Goose.

### OpenCode

**Pros:**
- MIT license (most permissive)
- Go SDK available
- Native MCP support
- Growing fast (45K+ GitHub stars)

**Cons:**
- Go-based (FFI overhead to call from Rust)
- Newer, less mature than Aider
- SDK is for Go clients, not embedding

**Integration pattern:** Go SDK or subprocess.

**Verdict:** Good option if Goose doesn't work out.

### Local LLM Backend

All three support Ollama for local models:

```bash
# Install Ollama
brew install ollama

# Pull a coding model (Apache-2.0 licensed)
ollama pull qwen2.5-coder:32b   # 19GB, best quality
ollama pull qwen2.5-coder:7b    # 4.4GB, faster
ollama pull deepseek-coder-v2   # Alternative
```

Goose config for local:

```yaml
# ~/.config/goose/config.yaml
provider: ollama
model: qwen2.5-coder:32b
```

## Outcome

The spike **recommends implementation**, with Goose as the integration target.

### Immediate (Zero code):
1. Document Blue + Goose setup in docs/
2. Ship example `goose-extension.yaml` config

### Short-term (Minimal code):
1. Add `blue agent` subcommand that launches Goose with the Blue extension pre-configured
2. Add Blue-specific prompts/instructions for Goose

### Medium-term (More code):
1. Investigate goose-core Rust crate for tighter integration
2. Consider Blue daemon serving as persistent MCP host

## Sources

- [Goose GitHub](https://github.com/block/goose)
- [Goose Architecture](https://block.github.io/goose/docs/goose-architecture/)
- [Aider Scripting](https://aider.chat/docs/scripting.html)
- [OpenCode Go SDK](https://pkg.go.dev/github.com/sst/opencode-sdk-go)
- [Goose MCP Deep Dive](https://dev.to/lymah/deep-dive-into-gooses-extension-system-and-model-context-protocol-mcp-3ehl)

@@ -1,17 +0,0 @@
# Spike: Local Llm Integration

| | |
|---|---|
| **Status** | In Progress |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

Which commercial-compatible local LLM CLI tool can be integrated into Blue CLI, and what's the best integration approach?

---

*Investigation notes by Blue*

@@ -1,17 +0,0 @@
# Spike: Per Repo Blue Folder

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 1 hour |

---

## Question

Should each repo have its own .blue folder with docs, or centralize in one location? What are the tradeoffs and what changes are needed?

---

*Investigation notes by Blue*

@@ -1,17 +0,0 @@
# Spike: Runbook Driven Actions

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

How can runbooks guide Claude Code through repo actions (docker builds, deploys, tests) so it follows the documented steps rather than guessing?

---

*Investigation notes by Blue*

@@ -1,17 +0,0 @@
# Spike: Sqlite Storage Expansion

| | |
|---|---|
| **Status** | Complete |
| **Date** | 2026-01-24 |
| **Time Box** | 2 hours |

---

## Question

What changes are needed to store spikes and plans in SQLite like RFCs, and store dialogue metadata (but not content) in SQLite?

---

*Investigation notes by Blue*

@@ -10,7 +10,7 @@ use rusqlite::{params, Connection, OptionalExtension, Transaction, TransactionBe
 use tracing::{debug, info, warn};
 
 /// Current schema version
-const SCHEMA_VERSION: i32 = 2;
+const SCHEMA_VERSION: i32 = 3;
 
 /// Core database schema
 const SCHEMA: &str = r#"
@@ -27,11 +27,13 @@ const SCHEMA: &str = r#"
     file_path TEXT,
     created_at TEXT NOT NULL,
     updated_at TEXT NOT NULL,
+    deleted_at TEXT,
     UNIQUE(doc_type, title)
 );
 
 CREATE INDEX IF NOT EXISTS idx_documents_type ON documents(doc_type);
 CREATE INDEX IF NOT EXISTS idx_documents_status ON documents(doc_type, status);
+CREATE INDEX IF NOT EXISTS idx_documents_deleted ON documents(deleted_at) WHERE deleted_at IS NOT NULL;
 
 CREATE TABLE IF NOT EXISTS document_links (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -266,6 +268,7 @@ pub struct Document {
     pub file_path: Option<String>,
     pub created_at: Option<String>,
     pub updated_at: Option<String>,
+    pub deleted_at: Option<String>,
 }
 
 impl Document {
@@ -280,8 +283,14 @@ impl Document {
             file_path: None,
             created_at: None,
             updated_at: None,
+            deleted_at: None,
         }
     }
+
+    /// Check if document is soft-deleted
+    pub fn is_deleted(&self) -> bool {
+        self.deleted_at.is_some()
+    }
 }
 
 /// A task in a document's plan
@@ -604,6 +613,10 @@
             Some(v) if v == SCHEMA_VERSION => {
                 debug!("Database is up to date (version {})", v);
             }
+            Some(v) if v < SCHEMA_VERSION => {
+                info!("Migrating database from version {} to {}", v, SCHEMA_VERSION);
+                self.run_migrations(v)?;
+            }
             Some(v) => {
                 warn!(
                     "Schema version {} found, expected {}. Migrations may be needed.",
@@ -615,6 +628,39 @@
         Ok(())
     }
 
+    /// Run migrations from old version to current
+    fn run_migrations(&self, from_version: i32) -> Result<(), StoreError> {
+        // Migration from v2 to v3: Add deleted_at column
+        if from_version < 3 {
+            debug!("Adding deleted_at column to documents table");
+            // Check if column exists first
+            let has_column: bool = self.conn.query_row(
+                "SELECT COUNT(*) FROM pragma_table_info('documents') WHERE name = 'deleted_at'",
+                [],
+                |row| Ok(row.get::<_, i64>(0)? > 0),
+            )?;
+
+            if !has_column {
+                self.conn.execute(
+                    "ALTER TABLE documents ADD COLUMN deleted_at TEXT",
+                    [],
+                )?;
+                self.conn.execute(
+                    "CREATE INDEX IF NOT EXISTS idx_documents_deleted ON documents(deleted_at) WHERE deleted_at IS NOT NULL",
+                    [],
+                )?;
+            }
+        }
+
+        // Update schema version
+        self.conn.execute(
+            "UPDATE schema_version SET version = ?1",
+            params![SCHEMA_VERSION],
+        )?;
+
+        Ok(())
+    }
+
     /// Execute with retry on busy
     fn with_retry<F, T>(&self, f: F) -> Result<T, StoreError>
     where
@@ -669,8 +715,8 @@
     pub fn get_document(&self, doc_type: DocType, title: &str) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at
-                 FROM documents WHERE doc_type = ?1 AND title = ?2",
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                 FROM documents WHERE doc_type = ?1 AND title = ?2 AND deleted_at IS NULL",
                 params![doc_type.as_str(), title],
                 |row| {
                     Ok(Document {
@@ -682,6 +728,7 @@
                         file_path: row.get(5)?,
                         created_at: row.get(6)?,
                         updated_at: row.get(7)?,
+                        deleted_at: row.get(8)?,
                     })
                 },
             )
@@ -691,11 +738,11 @@
             })
     }
 
-    /// Get a document by ID
+    /// Get a document by ID (including soft-deleted)
     pub fn get_document_by_id(&self, id: i64) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
                  FROM documents WHERE id = ?1",
                 params![id],
                 |row| {
@@ -708,6 +755,7 @@
                         file_path: row.get(5)?,
                         created_at: row.get(6)?,
                         updated_at: row.get(7)?,
+                        deleted_at: row.get(8)?,
                     })
                 },
             )
@@ -727,8 +775,8 @@
     ) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at
-                 FROM documents WHERE doc_type = ?1 AND number = ?2",
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                 FROM documents WHERE doc_type = ?1 AND number = ?2 AND deleted_at IS NULL",
                 params![doc_type.as_str(), number],
                 |row| {
                     Ok(Document {
@@ -740,6 +788,7 @@
                         file_path: row.get(5)?,
                         created_at: row.get(6)?,
                         updated_at: row.get(7)?,
+                        deleted_at: row.get(8)?,
                     })
                 },
             )
@@ -773,8 +822,8 @@
         // Try substring match
         let pattern = format!("%{}%", query.to_lowercase());
         if let Ok(doc) = self.conn.query_row(
-            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at
-             FROM documents WHERE doc_type = ?1 AND LOWER(title) LIKE ?2
+            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+             FROM documents WHERE doc_type = ?1 AND LOWER(title) LIKE ?2 AND deleted_at IS NULL
              ORDER BY LENGTH(title) ASC LIMIT 1",
             params![doc_type.as_str(), pattern],
             |row| {
@@ -787,6 +836,7 @@
                     file_path: row.get(5)?,
                     created_at: row.get(6)?,
                     updated_at: row.get(7)?,
+                    deleted_at: row.get(8)?,
                 })
             },
         ) {
@@ -848,11 +898,11 @@
         })
     }
 
-    /// List all documents of a given type
+    /// List all documents of a given type (excludes soft-deleted)
    pub fn list_documents(&self, doc_type: DocType) -> Result<Vec<Document>, StoreError> {
         let mut stmt = self.conn.prepare(
-            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at
-             FROM documents WHERE doc_type = ?1 ORDER BY number DESC, title ASC",
+            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+             FROM documents WHERE doc_type = ?1 AND deleted_at IS NULL ORDER BY number DESC, title ASC",
         )?;
 
         let rows = stmt.query_map(params![doc_type.as_str()], |row| {
@@ -865,6 +915,7 @@
                 file_path: row.get(5)?,
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
+                deleted_at: row.get(8)?,
             })
         })?;
 
@@ -872,15 +923,15 @@
             .map_err(StoreError::Database)
     }
 
-    /// List documents by status
+    /// List documents by status (excludes soft-deleted)
     pub fn list_documents_by_status(
         &self,
         doc_type: DocType,
         status: &str,
     ) -> Result<Vec<Document>, StoreError> {
         let mut stmt = self.conn.prepare(
-            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at
-             FROM documents WHERE doc_type = ?1 AND status = ?2 ORDER BY number DESC, title ASC",
+            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+             FROM documents WHERE doc_type = ?1 AND status = ?2 AND deleted_at IS NULL ORDER BY number DESC, title ASC",
         )?;
 
         let rows = stmt.query_map(params![doc_type.as_str(), status], |row| {
@@ -893,6 +944,7 @@
                 file_path: row.get(5)?,
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
+                deleted_at: row.get(8)?,
             })
         })?;
 
@@ -900,7 +952,7 @@
             .map_err(StoreError::Database)
     }
 
-    /// Delete a document
+    /// Delete a document permanently
     pub fn delete_document(&self, doc_type: DocType, title: &str) -> Result<(), StoreError> {
         self.with_retry(|| {
             let deleted = self.conn.execute(
@@ -914,6 +966,148 @@
         })
     }
 
+    /// Soft-delete a document (set deleted_at timestamp)
+    pub fn soft_delete_document(&self, doc_type: DocType, title: &str) -> Result<(), StoreError> {
+        self.with_retry(|| {
+            let now = chrono::Utc::now().to_rfc3339();
+            let updated = self.conn.execute(
+                "UPDATE documents SET deleted_at = ?1, updated_at = ?1
+                 WHERE doc_type = ?2 AND title = ?3 AND deleted_at IS NULL",
+                params![now, doc_type.as_str(), title],
+            )?;
+            if updated == 0 {
+                return Err(StoreError::NotFound(title.to_string()));
+            }
+            Ok(())
+        })
+    }
+
+    /// Restore a soft-deleted document
+    pub fn restore_document(&self, doc_type: DocType, title: &str) -> Result<(), StoreError> {
+        self.with_retry(|| {
+            let now = chrono::Utc::now().to_rfc3339();
+            let updated = self.conn.execute(
+                "UPDATE documents SET deleted_at = NULL, updated_at = ?1
+                 WHERE doc_type = ?2 AND title = ?3 AND deleted_at IS NOT NULL",
+                params![now, doc_type.as_str(), title],
+            )?;
+            if updated == 0 {
+                return Err(StoreError::NotFound(format!(
+                    "soft-deleted {} '{}'",
+                    doc_type.as_str(),
+                    title
+                )));
+            }
+            Ok(())
+        })
+    }
+
+    /// Get a soft-deleted document by type and title
+    pub fn get_deleted_document(&self, doc_type: DocType, title: &str) -> Result<Document, StoreError> {
+        self.conn
+            .query_row(
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                 FROM documents WHERE doc_type = ?1 AND title = ?2 AND deleted_at IS NOT NULL",
+                params![doc_type.as_str(), title],
+                |row| {
+                    Ok(Document {
+                        id: Some(row.get(0)?),
+                        doc_type: DocType::from_str(row.get::<_, String>(1)?.as_str()).unwrap(),
+                        number: row.get(2)?,
+                        title: row.get(3)?,
+                        status: row.get(4)?,
+                        file_path: row.get(5)?,
+                        created_at: row.get(6)?,
+                        updated_at: row.get(7)?,
+                        deleted_at: row.get(8)?,
+                    })
+                },
+            )
+            .map_err(|e| match e {
+                rusqlite::Error::QueryReturnedNoRows => StoreError::NotFound(format!(
+                    "soft-deleted {} '{}'",
+                    doc_type.as_str(),
+                    title
+                )),
+                e => StoreError::Database(e),
+            })
+    }
+
+    /// List soft-deleted documents
+    pub fn list_deleted_documents(&self, doc_type: Option<DocType>) -> Result<Vec<Document>, StoreError> {
+        let query = match doc_type {
+            Some(dt) => format!(
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                 FROM documents WHERE doc_type = '{}' AND deleted_at IS NOT NULL
+                 ORDER BY deleted_at DESC",
+                dt.as_str()
+            ),
+            None => "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                 FROM documents WHERE deleted_at IS NOT NULL
+                 ORDER BY deleted_at DESC".to_string(),
+        };
+
+        let mut stmt = self.conn.prepare(&query)?;
+        let rows = stmt.query_map([], |row| {
+            Ok(Document {
+                id: Some(row.get(0)?),
+                doc_type: DocType::from_str(row.get::<_, String>(1)?.as_str()).unwrap(),
+                number: row.get(2)?,
+                title: row.get(3)?,
+                status: row.get(4)?,
+                file_path: row.get(5)?,
+                created_at: row.get(6)?,
+                updated_at: row.get(7)?,
+                deleted_at: row.get(8)?,
+            })
+        })?;
+
+        rows.collect::<Result<Vec<_>, _>>()
+            .map_err(StoreError::Database)
+    }
+
+    /// Permanently delete documents that have been soft-deleted for more than N days
+    pub fn purge_old_deleted_documents(&self, days: i64) -> Result<usize, StoreError> {
+        self.with_retry(|| {
+            let cutoff = chrono::Utc::now() - chrono::Duration::days(days);
+            let cutoff_str = cutoff.to_rfc3339();
+
+            let deleted = self.conn.execute(
+                "DELETE FROM documents WHERE deleted_at IS NOT NULL AND deleted_at < ?1",
+                params![cutoff_str],
+            )?;
+
+            Ok(deleted)
+        })
+    }
+
+    /// Check if a document has ADR dependents (documents that reference it via rfc_to_adr link)
+    pub fn has_adr_dependents(&self, document_id: i64) -> Result<Vec<Document>, StoreError> {
+        let mut stmt = self.conn.prepare(
+            "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at
+             FROM documents d
+             JOIN document_links l ON l.source_id = d.id
+             WHERE l.target_id = ?1 AND l.link_type = 'rfc_to_adr' AND d.deleted_at IS NULL",
+        )?;
+
+        let rows = stmt.query_map(params![document_id], |row| {
+            Ok(Document {
+                id: Some(row.get(0)?),
+                doc_type: DocType::from_str(row.get::<_, String>(1)?.as_str()).unwrap(),
+                number: row.get(2)?,
+                title: row.get(3)?,
+                status: row.get(4)?,
+                file_path: row.get(5)?,
+                created_at: row.get(6)?,
+                updated_at: row.get(7)?,
+                deleted_at: row.get(8)?,
+            })
+        })?;
+
+        rows.collect::<Result<Vec<_>, _>>()
+            .map_err(StoreError::Database)
+    }
+
     /// Get the next document number for a type
     pub fn next_number(&self, doc_type: DocType) -> Result<i32, StoreError> {
         let max: Option<i32> = self.conn.query_row(
@@ -944,7 +1138,7 @@
         })
     }
 
-    /// Get linked documents
+    /// Get linked documents (excludes soft-deleted)
     pub fn get_linked_documents(
         &self,
         source_id: i64,
@@ -952,16 +1146,16 @@
     ) -> Result<Vec<Document>, StoreError> {
         let query = match link_type {
             Some(lt) => format!(
-                "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at
+                "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at
                 FROM documents d
                 JOIN document_links l ON l.target_id = d.id
-                WHERE l.source_id = ?1 AND l.link_type = '{}'",
+                WHERE l.source_id = ?1 AND l.link_type = '{}' AND d.deleted_at IS NULL",
                lt.as_str()
            ),
-            None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at
+            None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at
                 FROM documents d
                 JOIN document_links l ON l.target_id = d.id
-                WHERE l.source_id = ?1".to_string(),
+                WHERE l.source_id = ?1 AND d.deleted_at IS NULL".to_string(),
        };
 
         let mut stmt = self.conn.prepare(&query)?;
@@ -975,6 +1169,7 @@
                 file_path: row.get(5)?,
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
+                deleted_at: row.get(8)?,
             })
         })?;
 
@@ -1140,7 +1335,7 @@
 
     // ==================== Search Operations ====================
 
-    /// Search documents using FTS5
+    /// Search documents using FTS5 (excludes soft-deleted)
     pub fn search_documents(
         &self,
         query: &str,
@@ -1153,19 +1348,19 @@
         let sql = match doc_type {
             Some(dt) => format!(
                 "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path,
-                        d.created_at, d.updated_at, bm25(documents_fts) as score
+                        d.created_at, d.updated_at, d.deleted_at, bm25(documents_fts) as score
                 FROM documents_fts fts
                 JOIN documents d ON d.id = fts.rowid
-                WHERE documents_fts MATCH ?1 AND d.doc_type = '{}'
+                WHERE documents_fts MATCH ?1 AND d.doc_type = '{}' AND d.deleted_at IS NULL
                 ORDER BY score
                 LIMIT ?2",
                dt.as_str()
            ),
            None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path,
-                        d.created_at, d.updated_at, bm25(documents_fts) as score
+                        d.created_at, d.updated_at, d.deleted_at, bm25(documents_fts) as score
                 FROM documents_fts fts
                 JOIN documents d ON d.id = fts.rowid
-                WHERE documents_fts MATCH ?1
+                WHERE documents_fts MATCH ?1 AND d.deleted_at IS NULL
                 ORDER BY score
                 LIMIT ?2"
                .to_string(),
@@ -1183,8 +1378,9 @@
                     file_path: row.get(5)?,
                     created_at: row.get(6)?,
                     updated_at: row.get(7)?,
+                    deleted_at: row.get(8)?,
                 },
-                score: row.get(8)?,
+                score: row.get(9)?,
                 snippet: None,
             })
         })?;

crates/blue-mcp/src/handlers/delete.rs (new file, 344 additions)
@@ -0,0 +1,344 @@
//! Document deletion handlers for Blue MCP
|
||||
//!
|
||||
//! Implements soft-delete with 7-day retention and restore capability.
|
||||
|
||||
use serde_json::{json, Value};
|
||||
use std::fs;
|
||||
use std::path::Path;
|
||||
|
||||
use blue_core::store::DocType;
|
||||
use blue_core::ProjectState;
|
||||
|
||||
use crate::ServerError;
|
||||
|
||||
/// Check what would be deleted (dry run)
|
||||
pub fn handle_delete_dry_run(
|
||||
state: &ProjectState,
|
||||
doc_type: DocType,
|
||||
title: &str,
|
||||
) -> Result<Value, ServerError> {
|
||||
let doc = state
|
||||
.store
|
||||
.find_document(doc_type, title)
|
||||
.map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
|
||||
|
||||
let doc_id = doc.id.unwrap();
|
||||
|
||||
// Check for ADR dependents
|
||||
let adr_dependents = state
|
||||
.store
|
||||
.has_adr_dependents(doc_id)
|
||||
.map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
|
||||
|
||||
// Check for active sessions
|
||||
let active_session = state
|
||||
.store
|
||||
.get_active_session(&doc.title)
|
||||
.map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
|
||||
|
||||
// Check for worktree
|
||||
let worktree = state
|
||||
.store
|
||||
.get_worktree(doc_id)
|
||||
.map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
|
||||
|
||||
// Find companion files
|
||||
let mut companion_files = Vec::new();
|
||||
if let Some(ref file_path) = doc.file_path {
|
||||
let base_path = Path::new(file_path);
|
||||
if let Some(stem) = base_path.file_stem() {
|
||||
if let Some(parent) = base_path.parent() {
|
||||
let stem_str = stem.to_string_lossy();
|
||||
// Check for .plan.md, .dialogue.md
|
||||
for suffix in &[".plan.md", ".dialogue.md", ".draft.md"] {
|
||||
let companion = parent.join(format!("{}{}", stem_str, suffix));
|
||||
if companion.exists() {
|
||||
companion_files.push(companion.display().to_string());
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let mut warnings = Vec::new();
|
||||
let mut blockers = Vec::new();
|
||||
|
||||
// ADR dependents are permanent blockers
|
||||
if !adr_dependents.is_empty() {
|
||||
let adr_titles: Vec<_> = adr_dependents.iter().map(|d| d.title.clone()).collect();
|
||||
blockers.push(format!(
|
||||
"Has ADR dependents: {}. ADRs are permanent records and cannot be cascade-deleted.",
|
||||
adr_titles.join(", ")
|
||||
));
|
||||
}
|
||||
|
||||
// Non-draft status requires force
|
||||
if doc.status != "draft" {
|
||||
warnings.push(format!(
|
||||
"Status is '{}'. Use force=true to delete non-draft documents.",
|
||||
doc.status
|
||||
));
|
||||
}
|
||||
|
||||
// Active session requires force
|
||||
if let Some(session) = &active_session {
|
||||
warnings.push(format!(
|
||||
"Has active {} session started at {}. Use force=true to override.",
|
||||
session.session_type.as_str(),
|
||||
session.started_at
|
||||
));
|
||||
}
|
||||
|
||||
Ok(json!({
|
||||
"dry_run": true,
|
||||
"document": {
|
||||
"type": doc_type.as_str(),
|
||||
"title": doc.title,
|
||||
"status": doc.status,
|
||||
"file_path": doc.file_path,
|
||||
},
|
||||
"would_delete": {
|
||||
"primary_file": doc.file_path,
|
||||
"companion_files": companion_files,
|
||||
"worktree": worktree.map(|w| w.worktree_path),
|
||||
},
|
||||
"blockers": blockers,
|
||||
"warnings": warnings,
|
||||
"can_proceed": blockers.is_empty(),
|
||||
}))
|
||||
}

/// Delete a document with safety checks
pub fn handle_delete(
    state: &mut ProjectState,
    doc_type: DocType,
    title: &str,
    force: bool,
    permanent: bool,
) -> Result<Value, ServerError> {
    // Find the document
    let doc = state
        .store
        .find_document(doc_type, title)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    let doc_id = doc.id.unwrap();

    // Check for ADR dependents - this is a permanent blocker
    let adr_dependents = state
        .store
        .has_adr_dependents(doc_id)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    if !adr_dependents.is_empty() {
        let adr_titles: Vec<_> = adr_dependents.iter().map(|d| d.title.clone()).collect();
        return Ok(json!({
            "status": "blocked",
            "message": format!(
                "Cannot delete {} '{}'.\n\nThis document has ADR dependents: {}.\nADRs are permanent architectural records and cannot be cascade-deleted.\n\nTo proceed:\n1. Update the ADR(s) to remove the reference, or\n2. Mark this document as 'superseded' instead of deleting",
                doc_type.as_str(),
                doc.title,
                adr_titles.join(", ")
            ),
            "adr_dependents": adr_titles,
        }));
    }

    // Check status - non-draft requires force
    if doc.status != "draft" && !force {
        let status_msg = match doc.status.as_str() {
            "accepted" => "This document has been accepted.",
            "in-progress" => "This document has active work.",
            "implemented" => "This document is a historical record.",
            _ => "This document is not in draft status.",
        };

        return Ok(json!({
            "status": "requires_force",
            "message": format!(
                "Cannot delete {} '{}'.\n\nStatus: {}\n{}\n\nUse force=true to delete anyway.",
                doc_type.as_str(),
                doc.title,
                doc.status,
                status_msg
            ),
            "current_status": doc.status,
        }));
    }

    // Check for active session - requires force
    let active_session = state
        .store
        .get_active_session(&doc.title)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    if active_session.is_some() && !force {
        let session = active_session.unwrap();
        return Ok(json!({
            "status": "requires_force",
            "message": format!(
                "Cannot delete {} '{}'.\n\nHas active {} session started at {}.\n\nUse force=true to delete anyway, which will end the session.",
                doc_type.as_str(),
                doc.title,
                session.session_type.as_str(),
                session.started_at
            ),
            "active_session": {
                "type": session.session_type.as_str(),
                "started_at": session.started_at,
            },
        }));
    }

    // End any active session
    if active_session.is_some() {
        let _ = state.store.end_session(&doc.title);
    }

    // Remove worktree if exists
    let mut worktree_removed = false;
    if let Ok(Some(worktree)) = state.store.get_worktree(doc_id) {
        // Remove from filesystem
        let worktree_path = Path::new(&worktree.worktree_path);
        if worktree_path.exists() {
            // Use git worktree remove
            let _ = std::process::Command::new("git")
                .args(["worktree", "remove", "--force", &worktree.worktree_path])
                .output();
        }
        // Remove from database
        let _ = state.store.remove_worktree(doc_id);
        worktree_removed = true;
    }

    // Delete companion files
    let mut files_deleted = Vec::new();
    if let Some(ref file_path) = doc.file_path {
        let base_path = Path::new(file_path);
        if let Some(stem) = base_path.file_stem() {
            if let Some(parent) = base_path.parent() {
                let stem_str = stem.to_string_lossy();
                for suffix in &[".plan.md", ".dialogue.md", ".draft.md"] {
                    let companion = parent.join(format!("{}{}", stem_str, suffix));
                    if companion.exists() {
                        if fs::remove_file(&companion).is_ok() {
                            files_deleted.push(companion.display().to_string());
                        }
                    }
                }
            }
        }

        // Delete primary file
        if base_path.exists() {
            if fs::remove_file(base_path).is_ok() {
                files_deleted.push(file_path.clone());
            }
        }
    }

    // Soft or permanent delete
    if permanent {
        state
            .store
            .delete_document(doc_type, &doc.title)
            .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
    } else {
        state
            .store
            .soft_delete_document(doc_type, &doc.title)
            .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
    }

    let action = if permanent {
        "permanently deleted"
    } else {
        "soft-deleted (recoverable for 7 days)"
    };

    Ok(json!({
        "status": "success",
        "message": format!("{} '{}' {}.", doc_type.as_str().to_uppercase(), doc.title, action),
        "doc_type": doc_type.as_str(),
        "title": doc.title,
        "permanent": permanent,
        "files_deleted": files_deleted,
        "worktree_removed": worktree_removed,
        "restore_command": if !permanent {
            Some(format!("blue restore {} {}", doc_type.as_str(), doc.title))
        } else {
            None
        },
    }))
}

/// Restore a soft-deleted document
pub fn handle_restore(
    state: &mut ProjectState,
    doc_type: DocType,
    title: &str,
) -> Result<Value, ServerError> {
    // Check if document exists and is soft-deleted
    let doc = state
        .store
        .get_deleted_document(doc_type, title)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    // Restore the document
    state
        .store
        .restore_document(doc_type, &doc.title)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    Ok(json!({
        "status": "success",
        "message": format!("{} '{}' restored.", doc_type.as_str().to_uppercase(), doc.title),
        "doc_type": doc_type.as_str(),
        "title": doc.title,
        "note": "Files were deleted and will need to be recreated if needed.",
    }))
}

/// List soft-deleted documents
pub fn handle_list_deleted(
    state: &ProjectState,
    doc_type: Option<DocType>,
) -> Result<Value, ServerError> {
    let deleted = state
        .store
        .list_deleted_documents(doc_type)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    let docs: Vec<_> = deleted
        .iter()
        .map(|d| {
            json!({
                "type": d.doc_type.as_str(),
                "title": d.title,
                "status": d.status,
                "deleted_at": d.deleted_at,
            })
        })
        .collect();

    Ok(json!({
        "status": "success",
        "count": docs.len(),
        "deleted_documents": docs,
        "note": "Documents are auto-purged 7 days after deletion. Use blue_restore to recover.",
    }))
}

/// Purge old soft-deleted documents
pub fn handle_purge_deleted(state: &mut ProjectState, days: i64) -> Result<Value, ServerError> {
    let purged = state
        .store
        .purge_old_deleted_documents(days)
        .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;

    Ok(json!({
        "status": "success",
        "message": format!("Purged {} documents older than {} days.", purged, days),
        "purged_count": purged,
    }))
}

@@ -5,6 +5,7 @@
 pub mod adr;
 pub mod audit;
 pub mod decision;
+pub mod delete;
 pub mod dialogue;
 pub mod dialogue_lint;
 pub mod env;

@@ -99,6 +99,7 @@ pub fn handle_create(state: &mut ProjectState, args: &Value) -> Result<Value, Se
         file_path: Some(file_path.to_string_lossy().to_string()),
         created_at: None,
         updated_at: None,
+        deleted_at: None,
     };
     state
         .store

@@ -214,6 +215,7 @@ pub fn handle_action_to_rfc(state: &mut ProjectState, args: &Value) -> Result<Va
         file_path: Some(rfc_file_path.to_string_lossy().to_string()),
         created_at: None,
         updated_at: None,
+        deleted_at: None,
     };
     state
         .store

@@ -5,6 +5,10 @@
 //! - Base branch must be `develop` (not `main`)
 //! - Test plan checkboxes must be verified before merge
 //! - User must approve PR before merge
+//!
+//! PR title convention (RFC 0007):
+//! - Format: `RFC NNNN: Feature Description`
+//! - Example: `RFC 0007: Consistent Branch Naming`

 use std::process::Command;

@@ -12,6 +16,7 @@ use blue_core::ProjectState;
 use serde_json::{json, Value};

 use crate::error::ServerError;
+use crate::handlers::worktree::strip_rfc_number_prefix;

 /// Task category for test plan items
 #[derive(Debug, Clone, PartialEq, Eq)]

@@ -26,11 +31,27 @@ pub enum TaskCategory {

 /// Handle blue_pr_create
+///
+/// If `rfc` is provided (e.g., "0007-consistent-branch-naming"), the title
+/// will be formatted as "RFC 0007: Consistent Branch Naming" per RFC 0007.
 pub fn handle_create(_state: &ProjectState, args: &Value) -> Result<Value, ServerError> {
-    let title = args
-        .get("title")
-        .and_then(|v| v.as_str())
-        .ok_or(ServerError::InvalidParams)?;
+    let rfc = args.get("rfc").and_then(|v| v.as_str());
+
+    // If RFC is provided, format title as "RFC NNNN: Title Case Name"
+    let title = if let Some(rfc_title) = rfc {
+        let (stripped, number) = strip_rfc_number_prefix(rfc_title);
+        let title_case = to_title_case(&stripped);
+        if let Some(n) = number {
+            format!("RFC {:04}: {}", n, title_case)
+        } else {
+            title_case
+        }
+    } else {
+        args.get("title")
+            .and_then(|v| v.as_str())
+            .ok_or(ServerError::InvalidParams)?
+            .to_string()
+    };

     let base = args
         .get("base")

@@ -510,3 +531,19 @@ fn update_checkbox_in_body(body: &str, item_selector: &str) -> Result<(String, S
         ))),
     }
 }
+
+/// Convert kebab-case to Title Case
+///
+/// Example: "consistent-branch-naming" -> "Consistent Branch Naming"
+fn to_title_case(s: &str) -> String {
+    s.split('-')
+        .map(|word| {
+            let mut chars = word.chars();
+            match chars.next() {
+                None => String::new(),
+                Some(first) => first.to_uppercase().chain(chars).collect(),
+            }
+        })
+        .collect::<Vec<_>>()
+        .join(" ")
+}

@@ -85,6 +85,7 @@ pub fn handle_create(state: &mut ProjectState, args: &Value) -> Result<Value, Se
         file_path: Some(file_path.to_string_lossy().to_string()),
         created_at: None,
         updated_at: None,
+        deleted_at: None,
     };
     let doc_id = state
         .store

@@ -1,12 +1,32 @@
 //! Worktree tool handlers
 //!
 //! Handles git worktree operations for isolated feature development.
+//!
+//! Branch naming convention (RFC 0007):
+//! - RFC file: `NNNN-feature-description.md`
+//! - Branch: `feature-description` (number prefix stripped)
+//! - Worktree: `feature-description`

 use blue_core::{DocType, ProjectState, Worktree as StoreWorktree};
 use serde_json::{json, Value};

 use crate::error::ServerError;

+/// Strip RFC number prefix from title
+///
+/// Converts `0007-consistent-branch-naming` to `consistent-branch-naming`
+/// Returns (stripped_name, rfc_number) if pattern matches, otherwise (original, None)
+pub fn strip_rfc_number_prefix(title: &str) -> (String, Option<u32>) {
+    // Match pattern: NNNN-rest-of-title
+    if title.len() > 5 && title.chars().take(4).all(|c| c.is_ascii_digit()) && title.chars().nth(4) == Some('-') {
+        let number: Option<u32> = title[..4].parse().ok();
+        let stripped = title[5..].to_string();
+        (stripped, number)
+    } else {
+        (title.to_string(), None)
+    }
+}
+
 /// Handle blue_worktree_create
 pub fn handle_create(state: &ProjectState, args: &Value) -> Result<Value, ServerError> {
     let title = args

@@ -44,9 +64,10 @@ pub fn handle_create(state: &ProjectState, args: &Value) -> Result<Value, Server
         }
     }

-    // Create branch name and worktree path
-    let branch_name = format!("rfc/{}", title);
-    let worktree_path = state.home.worktrees_path.join(title);
+    // Create branch name and worktree path (RFC 0007: strip number prefix)
+    let (stripped_name, _rfc_number) = strip_rfc_number_prefix(title);
+    let branch_name = stripped_name.clone();
+    let worktree_path = state.home.worktrees_path.join(&stripped_name);

     // Try to create the git worktree
     let repo_path = state.home.root.clone();

@@ -149,7 +170,9 @@ pub fn handle_cleanup(state: &ProjectState, args: &Value) -> Result<Value, Serve
         .and_then(|v| v.as_str())
         .ok_or(ServerError::InvalidParams)?;

-    let branch_name = format!("rfc/{}", title);
+    // Support both old (rfc/title) and new (stripped) naming conventions
+    let (stripped_name, _) = strip_rfc_number_prefix(title);
+    let branch_name = stripped_name.clone();

     // Find the RFC to get worktree info
     let doc = state

@@ -313,3 +336,44 @@ pub fn handle_remove(state: &ProjectState, args: &Value) -> Result<Value, Server
         )
     }))
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn test_strip_rfc_number_prefix() {
+        // Standard RFC title with number
+        let (stripped, number) = strip_rfc_number_prefix("0007-consistent-branch-naming");
+        assert_eq!(stripped, "consistent-branch-naming");
+        assert_eq!(number, Some(7));
+
+        // Another example
+        let (stripped, number) = strip_rfc_number_prefix("0001-some-feature");
+        assert_eq!(stripped, "some-feature");
+        assert_eq!(number, Some(1));
+
+        // High number
+        let (stripped, number) = strip_rfc_number_prefix("9999-last-rfc");
+        assert_eq!(stripped, "last-rfc");
+        assert_eq!(number, Some(9999));
+    }
+
+    #[test]
+    fn test_strip_rfc_number_prefix_no_number() {
+        // No number prefix
+        let (stripped, number) = strip_rfc_number_prefix("some-feature");
+        assert_eq!(stripped, "some-feature");
+        assert_eq!(number, None);
+
+        // Too few digits
+        let (stripped, number) = strip_rfc_number_prefix("007-james-bond");
+        assert_eq!(stripped, "007-james-bond");
+        assert_eq!(number, None);
+
+        // No hyphen after number
+        let (stripped, number) = strip_rfc_number_prefix("0007feature");
+        assert_eq!(stripped, "0007feature");
+        assert_eq!(number, None);
+    }
+}

@@ -534,7 +534,7 @@ impl BlueServer {
         },
         {
            "name": "blue_pr_create",
-           "description": "Create a PR with enforced base branch (develop, not main).",
+           "description": "Create a PR with enforced base branch (develop, not main). If rfc is provided, title is formatted as 'RFC NNNN: Title Case Name'.",
            "inputSchema": {
                "type": "object",
                "properties": {

@@ -542,9 +542,13 @@ impl BlueServer {
                        "type": "string",
                        "description": "Current working directory"
                    },
+                   "rfc": {
+                       "type": "string",
+                       "description": "RFC title (e.g., '0007-consistent-branch-naming'). If provided, PR title is formatted as 'RFC 0007: Consistent Branch Naming'"
+                   },
                    "title": {
                        "type": "string",
-                       "description": "PR title"
+                       "description": "PR title (used if rfc not provided)"
                    },
                    "base": {
                        "type": "string",

@@ -558,8 +562,7 @@ impl BlueServer {
                        "type": "boolean",
                        "description": "Create as draft PR"
                    }
-               },
-               "required": ["title"]
+               }
            }
        },
        {

@@ -1758,6 +1761,100 @@ impl BlueServer {
                },
                "required": ["name"]
            }
        },
+       // RFC 0006: Delete tools
+       {
+           "name": "blue_delete",
+           "description": "Delete a document (RFC, spike, decision, etc.) with safety checks. Supports dry_run, force, and permanent options. Default is soft-delete with 7-day retention.",
+           "inputSchema": {
+               "type": "object",
+               "properties": {
+                   "cwd": {
+                       "type": "string",
+                       "description": "Current working directory"
+                   },
+                   "doc_type": {
+                       "type": "string",
+                       "description": "Document type",
+                       "enum": ["rfc", "spike", "adr", "decision", "prd", "postmortem", "runbook"]
+                   },
+                   "title": {
+                       "type": "string",
+                       "description": "Document title or number"
+                   },
+                   "dry_run": {
+                       "type": "boolean",
+                       "description": "Preview what would be deleted without making changes"
+                   },
+                   "force": {
+                       "type": "boolean",
+                       "description": "Skip confirmation for non-draft documents or active sessions"
+                   },
+                   "permanent": {
+                       "type": "boolean",
+                       "description": "Permanently delete (skip soft-delete retention)"
+                   }
+               },
+               "required": ["doc_type", "title"]
+           }
+       },
+       {
+           "name": "blue_restore",
+           "description": "Restore a soft-deleted document within the 7-day retention period.",
+           "inputSchema": {
+               "type": "object",
+               "properties": {
+                   "cwd": {
+                       "type": "string",
+                       "description": "Current working directory"
+                   },
+                   "doc_type": {
+                       "type": "string",
+                       "description": "Document type",
+                       "enum": ["rfc", "spike", "adr", "decision", "prd", "postmortem", "runbook"]
+                   },
+                   "title": {
+                       "type": "string",
+                       "description": "Document title to restore"
+                   }
+               },
+               "required": ["doc_type", "title"]
+           }
+       },
+       {
+           "name": "blue_deleted_list",
+           "description": "List soft-deleted documents that can be restored.",
+           "inputSchema": {
+               "type": "object",
+               "properties": {
+                   "cwd": {
+                       "type": "string",
+                       "description": "Current working directory"
+                   },
+                   "doc_type": {
+                       "type": "string",
+                       "description": "Filter by document type (optional)",
+                       "enum": ["rfc", "spike", "adr", "decision", "prd", "postmortem", "runbook"]
+                   }
+               }
+           }
+       },
+       {
+           "name": "blue_purge_deleted",
+           "description": "Permanently remove soft-deleted documents older than specified days.",
+           "inputSchema": {
+               "type": "object",
+               "properties": {
+                   "cwd": {
+                       "type": "string",
+                       "description": "Current working directory"
+                   },
+                   "days": {
+                       "type": "number",
+                       "description": "Documents deleted more than this many days ago will be purged (default: 7)"
+                   }
+               }
+           }
+       }
    ]
 }))

@@ -1875,6 +1972,11 @@ impl BlueServer {
             "blue_model_pull" => crate::handlers::llm::handle_model_pull(&call.arguments.unwrap_or_default()),
             "blue_model_remove" => crate::handlers::llm::handle_model_remove(&call.arguments.unwrap_or_default()),
             "blue_model_warmup" => crate::handlers::llm::handle_model_warmup(&call.arguments.unwrap_or_default()),
+            // RFC 0006: Delete tools
+            "blue_delete" => self.handle_delete(&call.arguments),
+            "blue_restore" => self.handle_restore(&call.arguments),
+            "blue_deleted_list" => self.handle_deleted_list(&call.arguments),
+            "blue_purge_deleted" => self.handle_purge_deleted(&call.arguments),
             _ => Err(ServerError::ToolNotFound(call.name)),
         }?;

@@ -2787,6 +2889,88 @@ impl BlueServer {
             .and_then(|v| v.as_str());
         crate::handlers::realm::handle_notifications_list(self.cwd.as_deref(), state)
     }
+
+    // RFC 0006: Delete handlers
+
+    fn handle_delete(&mut self, args: &Option<Value>) -> Result<Value, ServerError> {
+        let args = args.as_ref().ok_or(ServerError::InvalidParams)?;
+
+        let doc_type_str = args
+            .get("doc_type")
+            .and_then(|v| v.as_str())
+            .ok_or(ServerError::InvalidParams)?;
+        let doc_type = DocType::from_str(doc_type_str)
+            .ok_or(ServerError::InvalidParams)?;
+
+        let title = args
+            .get("title")
+            .and_then(|v| v.as_str())
+            .ok_or(ServerError::InvalidParams)?;
+
+        let dry_run = args
+            .get("dry_run")
+            .and_then(|v| v.as_bool())
+            .unwrap_or(false);
+
+        let force = args
+            .get("force")
+            .and_then(|v| v.as_bool())
+            .unwrap_or(false);
+
+        let permanent = args
+            .get("permanent")
+            .and_then(|v| v.as_bool())
+            .unwrap_or(false);
+
+        if dry_run {
+            let state = self.ensure_state()?;
+            crate::handlers::delete::handle_delete_dry_run(state, doc_type, title)
+        } else {
+            let state = self.ensure_state_mut()?;
+            crate::handlers::delete::handle_delete(state, doc_type, title, force, permanent)
+        }
+    }
+
+    fn handle_restore(&mut self, args: &Option<Value>) -> Result<Value, ServerError> {
+        let args = args.as_ref().ok_or(ServerError::InvalidParams)?;
+
+        let doc_type_str = args
+            .get("doc_type")
+            .and_then(|v| v.as_str())
+            .ok_or(ServerError::InvalidParams)?;
+        let doc_type = DocType::from_str(doc_type_str)
+            .ok_or(ServerError::InvalidParams)?;
+
+        let title = args
+            .get("title")
+            .and_then(|v| v.as_str())
+            .ok_or(ServerError::InvalidParams)?;
+
+        let state = self.ensure_state_mut()?;
+        crate::handlers::delete::handle_restore(state, doc_type, title)
+    }
+
+    fn handle_deleted_list(&mut self, args: &Option<Value>) -> Result<Value, ServerError> {
+        let doc_type = args
+            .as_ref()
+            .and_then(|a| a.get("doc_type"))
+            .and_then(|v| v.as_str())
+            .and_then(DocType::from_str);
+
+        let state = self.ensure_state()?;
+        crate::handlers::delete::handle_list_deleted(state, doc_type)
+    }
+
+    fn handle_purge_deleted(&mut self, args: &Option<Value>) -> Result<Value, ServerError> {
+        let days = args
+            .as_ref()
+            .and_then(|a| a.get("days"))
+            .and_then(|v| v.as_i64())
+            .unwrap_or(7);
+
+        let state = self.ensure_state_mut()?;
+        crate::handlers::delete::handle_purge_deleted(state, days)
+    }
 }

 impl Default for BlueServer {

@@ -64,4 +64,10 @@ Run Blue as an MCP server for Claude integration:
 blue mcp
 ```

-Configure in Claude settings to enable Blue tools.
+This exposes 8 realm coordination tools to Claude:
+- `realm_status`, `realm_check`, `contract_get`
+- `session_start`, `session_stop`
+- `realm_worktree_create`, `realm_pr_status`
+- `notifications_list`
+
+See [../mcp/README.md](../mcp/README.md) for tool reference and [../mcp/integration.md](../mcp/integration.md) for setup guide.