feat: implement document sync and Claude Code task integration (RFC 0018, 0019)
RFC 0018 - Document Import/Sync:
- Add content_hash and indexed_at fields to Document
- Implement find_document_with_fallback for filesystem recovery
- Add reconcile() for database/filesystem sync
- Create blue_sync MCP tool
- Update blue_status to show index drift
RFC 0019 - Claude Code Task Integration:
- Expose .plan.md as MCP resource (blue://docs/rfcs/{n}/plan)
- Enhance blue_rfc_get with claude_code_tasks array
- Add 💙 prefix for Blue-synced tasks
- Add knowledge/task-sync.md for session injection
- Automatic sync via injected instructions
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This commit is contained in: parent 9759f0e3db, commit 1ed6f15fa9

12 changed files with 1485 additions and 22 deletions

@@ -0,0 +1,358 @@
# Alignment Dialogue: Claude Code Task Integration

| | |
|---|---|
| **Topic** | RFC for integrating Blue plan files with Claude Code's task management |
| **Source** | Spike: Claude Code Task Integration (2026-01-26) |
| **Experts** | 12 |
| **Target** | 95% convergence |
| **Status** | In Progress |

---

## Round 1: Initial Perspectives

### Scoreboard

| Expert | Role | Wisdom | Consistency | Truth | Relationships | Total |
|--------|------|--------|-------------|-------|---------------|-------|
| 1 | Systems Architect | 9 | 9 | 9 | 8 | 35 |
| 2 | Developer Experience | 8 | 9 | 8 | 9 | 34 |
| 3 | File System Philosopher | 10 | 10 | 10 | 7 | 37 |
| 4 | MCP Protocol Expert | 9 | 10 | 9 | 8 | 36 |
| 5 | Workflow Automation | 8 | 8 | 8 | 8 | 32 |
| 6 | Data Sync Expert | 8 | 9 | 9 | 7 | 33 |
| 7 | UI Designer | 9 | 8 | 9 | 9 | 35 |
| 8 | Reliability Engineer | 9 | 9 | 9 | 7 | 34 |
| 9 | Simplicity Advocate | 10 | 9 | 10 | 8 | 37 |
| 10 | Security Analyst | 8 | 9 | 9 | 7 | 33 |
| 11 | Integration Architect | 9 | 8 | 8 | 8 | 33 |
| 12 | Blue ADR Guardian | 9 | 10 | 10 | 9 | 38 |

### Convergence Points (100%)

1. **File Authority**: `.plan.md` files are the single source of truth (ADR 5)
2. **Ephemeral Tasks**: Claude Code tasks are session-local mirrors, not persistent state
3. **Skill Orchestration**: Skills mediate between Blue MCP and Claude Code tasks
4. **No MCP Push**: MCP's request-response nature means Blue cannot initiate sync

### Key Perspectives

**Expert 3 (File System Philosopher):**
> "The file must win, always. When divergence is detected, the file's state is ground truth; the database state is an error to be corrected."

**Expert 9 (Simplicity Advocate):**
> "The integration isn't worth the complexity... Strip this down to read-only exposure. If a user wants to update the Blue plan after completing a Claude Code task, they run `blue_rfc_task_complete` explicitly."

**Expert 4 (MCP Protocol Expert):**
> "Pure skill orchestration is sufficient. The MCP server stays pure—it only answers queries about its documents, never tries to manage external task state."

**Expert 7 (UI Designer):**
> "Make task state transitions (not progress updates) the trigger for filesystem writes."

### Tensions

| Tension | Position A | Position B | Experts |
|---------|-----------|-----------|---------|
| Integration Scope | Full bidirectional sync | Read-only context only | 1,2,5,6,7,8,11 vs 9 |
| New Blue Tools | Add `blue_task_context` | Pure skill orchestration | 11 vs 4,9 |
| Sync Timing | Automatic on completion | Explicit user command | 2,7 vs 5,6,9 |

### Round 1 Convergence: ~75%

Strong agreement on principles, divergence on implementation scope.

---

## Round 2: Resolving Tensions

### Votes

| Expert | Tension 1 | Tension 2 | Tension 3 |
|--------|-----------|-----------|-----------|
| 1 | - | - | - |
| 2 | B | B | B |
| 3 | A | B | B |
| 4 | B | B | B |
| 5 | B | B | B |
| 6 | B | B | B |
| 7 | B | B | B |
| 8 | B | B | B |
| 9 | B | B | B |
| 10 | B | B | B |
| 11 | B | B | B |
| 12 | B | B | B |

### Results

| Tension | Position A | Position B | Winner |
|---------|-----------|-----------|--------|
| 1. Integration Scope | 1 (9%) | 10 (91%) | **B: Read-only context injection** |
| 2. New Blue Tools | 0 (0%) | 11 (100%) | **B: Pure skill orchestration** |
| 3. Sync Timing | 0 (0%) | 11 (100%) | **B: Explicit sync command** |

### Round 2 Convergence: 97%

Target of 95% achieved.

---

## Consensus

The 12 experts converged on the following RFC specification:

### Core Principles

1. **File Authority**: `.plan.md` files are the single source of truth for RFC task state
2. **Ephemeral Mirror**: Claude Code tasks are session-local projections, not persistent state
3. **Skill Orchestration**: A `/blue-plan` skill mediates using existing tools only
4. **Explicit Sync**: Users invoke `blue_rfc_task_complete` manually to persist changes

### Architecture

```
┌─────────────────┐      read      ┌─────────────────┐
│    .plan.md     │◄───────────────│   /blue-plan    │
│   (authority)   │                │     skill       │
└─────────────────┘                └────────┬────────┘
         ▲                                  │
         │                                  │ create
         │  explicit                        ▼
         │  blue_rfc_task_complete ┌─────────────────┐
         │                         │  Claude Code    │
         └─────────────────────────│     Tasks       │
               user invokes        │   (ephemeral)   │
                                   └─────────────────┘
```

### What the Skill Does

1. On `/blue-plan <rfc-title>`:
   - Calls `blue_rfc_get` to fetch RFC with plan tasks
   - Creates Claude Code tasks via `TaskCreate` for each plan task
   - Stores mapping in task metadata: `{ blue_rfc: "title", blue_task_index: N }`

2. During work:
   - User works normally, Claude marks tasks in_progress/completed
   - Claude Code UI shows progress

3. On task completion:
   - User (or skill prompt) calls `blue_rfc_task_complete` explicitly
   - Plan file updated, becomes source of truth for next session

### What We Don't Build

- No automatic writeback from Claude Code to plan files
- No new Blue MCP tools (existing tools sufficient)
- No bidirectional sync machinery
- No watcher processes or polling

### ADR Alignment

| ADR | Alignment |
|-----|-----------|
| ADR 5 (Single Source) | `.plan.md` is sole authority |
| ADR 8 (Honor) | Explicit sync = say what you do |
| ADR 10 (No Dead Code) | No new tools needed |
| ADR 11 (Constraint) | Simple one-way flow |

---

## Status

~~CONVERGED at 97%~~ - User rejected skills and explicit sync.

---

## Round 3: User Constraints

**User Requirements:**
1. No explicit sync - automatic/implicit instead
2. No skills - don't add Claude Code skills
3. Use injection - context appears automatically

### New Consensus

| Expert | Position | Key Insight |
|--------|----------|-------------|
| 1 | MCP Resources | Expose `.plan.md` as resource, inject on RFC access |
| 2 | Seamless UX | Zero onboarding, tasks appear naturally |
| 3 | Visible Sync | Automatic OK if auditable (git commits) |
| 4 | Tool-Triggered | `blue_rfc_get` returns `_plan_uri` for injection |
| 5 | Lazy Injection | Inject on-demand when RFC referenced |
| 6 | Hash Versioning | Content-hash with three-way merge on conflict |
| 7 | Audit Trail | Sync events logged, visible in status |
| 8 | Confirmation | Three-phase handshake for reliability |
| 9 | File Watcher | Session-scoped injection + file watcher |
| 10 | **DISSENT** | Automatic file writes are security risk |
| 11 | Hooks | Option B+C: tool injection + hook writeback |
| 12 | Observable | Automatic sync honors ADR 8 if transparent |

### Convergence: ~92%

Expert 10 dissents on automatic writeback security.

### Proposed Architecture

```
┌─────────────────┐                    ┌─────────────────┐
│    .plan.md     │◄───── MCP ────────│  blue_rfc_get   │
│   (authority)   │     Resource      │                 │
└────────┬────────┘                    └────────┬────────┘
         │                                      │
         │ auto-inject                          │ returns tasks +
         │ as context                           │ creates CC tasks
         ▼                                      ▼
┌─────────────────┐                    ┌─────────────────┐
│  Claude Code    │◄───────────────────│   TaskCreate    │
│    Context      │   auto-populate    │   (automatic)   │
└────────┬────────┘                    └─────────────────┘
         │
         │ on task complete
         │ (hook triggers)
         ▼
┌─────────────────┐
│  blue_rfc_task  │────────► Updates .plan.md
│   _complete     │          (automatic writeback)
└─────────────────┘
```

### Implementation Approach

1. **MCP Resource**: Expose `.plan.md` files via `blue://docs/rfcs/{id}/plan`
2. **Tool Enhancement**: `blue_rfc_get` includes `_plan_uri` and auto-creates Claude Code tasks
3. **Hook Integration**: Claude Code hook watches task state → calls `blue_rfc_task_complete`
4. **Audit Trail**: All syncs logged with timestamps, visible in `blue status`

### Security Mitigation (for Expert 10's concern)

- Writeback only for tasks with valid `blue_rfc` metadata
- Content-hash validation before write (detect external changes)
- Audit log in `.plan.md` comments for forensics
- Rate limiting on automatic writes

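The "validate before write" mitigation can be sketched as a compare-and-write guard. This is a hedged illustration, not Blue's actual code: `content_fingerprint` uses the std `DefaultHasher` as a stand-in for Blue's real SHA-256 `hash_content`, and `write_if_unchanged` is a hypothetical helper name.

```rust
use std::collections::hash_map::DefaultHasher;
use std::fs;
use std::hash::{Hash, Hasher};
use std::path::Path;

// Stand-in fingerprint; the real implementation hashes with SHA-256.
fn content_fingerprint(content: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    content.hash(&mut hasher);
    hasher.finish()
}

// Refuse an automatic write if the file changed since it was last read,
// so external edits are never silently overwritten.
fn write_if_unchanged(path: &Path, expected: u64, new_content: &str) -> std::io::Result<bool> {
    let current = fs::read_to_string(path)?;
    if content_fingerprint(&current) != expected {
        return Ok(false); // external change detected; skip automatic writeback
    }
    fs::write(path, new_content)?;
    Ok(true)
}
```

A second write against a stale fingerprint returns `Ok(false)`, which is exactly the "detect external changes" behavior the mitigation asks for.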
---

## Round 4: Security Resolution

**Question**: How to address Expert 10's security concern about automatic file writes?

| Option | Description |
|--------|-------------|
| A | Accept risk with mitigations only |
| B | First-time confirmation per RFC |
| C | Opt-in via config (disabled by default) |

### Votes

| Expert | Vote | Justification |
|--------|------|---------------|
| 1 | B | Confirmation friction only hits once per RFC |
| 2 | B | Builds confidence after first sync |
| 3 | B | Establishes implicit consent to manage companion file |
| 9 | B | Sweet spot: informed consent without ongoing friction |
| 10 | B | Hash validation + first confirmation = informed consent |
| 12 | B | ADR 8 requires transparency; confirmation makes behavior knowable |

### Result: **Option B Unanimous (100%)**

First-time confirmation per RFC satisfies security concern while preserving seamless UX.

---

## Final Consensus

**Convergence: 97%** - Target achieved.

### Architecture

```
┌─────────────────┐                    ┌─────────────────┐
│    .plan.md     │◄───── MCP ────────│  blue_rfc_get   │
│   (authority)   │     Resource      │                 │
└────────┬────────┘                    └────────┬────────┘
         │                                      │
         │ auto-inject                          │ auto-creates
         │ as context                           │ Claude Code tasks
         ▼                                      ▼
┌─────────────────┐                    ┌─────────────────┐
│  Claude Code    │◄───────────────────│   TaskCreate    │
│    Context      │                    │   (automatic)   │
└────────┬────────┘                    └─────────────────┘
         │
         │ on task complete → hook triggers
         ▼
┌─────────────────────────────────────────────────────────┐
│ First time for this RFC?                                │
│   ├─ YES → Confirm: "Enable auto-sync for RFC X?" [Y/n] │
│   └─ NO  → Automatic writeback                          │
└─────────────────────────────────────────────────────────┘
         │
         ▼
┌─────────────────┐
│  blue_rfc_task  │────────► Updates .plan.md
│   _complete     │
└─────────────────┘
```

### Key Decisions

| Decision | Rationale |
|----------|-----------|
| MCP Resource injection | Context appears automatically, no skills |
| Tool-triggered task creation | `blue_rfc_get` auto-populates Claude Code tasks |
| Hook-based writeback | Task completion triggers `blue_rfc_task_complete` |
| First-time confirmation | Balances security with seamlessness |
| Audit trail | All syncs logged, visible in git |

### ADR Alignment

| ADR | How Honored |
|-----|-------------|
| ADR 5 (Single Source) | `.plan.md` remains authoritative |
| ADR 8 (Honor) | First-time confirmation = explicit consent |
| ADR 11 (Constraint) | Automatic flow with minimal friction |

---

## Round 5: User Override

**User Decision**: Remove first-time confirmation. It adds friction for a low-risk operation.

Rationale:
- User is already in their own project
- Writes are just checkbox updates in `.plan.md`
- Git provides full audit trail and rollback
- The "security risk" is overstated for this context

**Final Architecture**: Fully automatic. No prompts, no confirmation.

---

## Round 6: Open Questions

### Q1: Visual indicator for auto-created tasks?

**User Decision**: Yes, use 💙

### Q2: Mid-session task additions?

| Expert | Vote | Rationale |
|--------|------|-----------|
| 1 | B | Honors file authority, syncs at natural interaction points |
| 2 | B | Predictable - sync at interaction, not background |
| 3 | B | File is truth, re-read ensures current state |
| 9 | B | Rebuild-on-read already exists, no new complexity |
| 11 | B | Lazy re-read aligns with `is_cache_stale()` pattern |
| 12 | B | ADR 5 requires trusting `.plan.md` as authority |

**Result: B (Poll on access) - Unanimous**

Re-read plan file on next `blue_rfc_get`, create missing tasks.

---

## Status

**CONVERGED** - All open questions resolved. RFC 0019 ready.

210 .blue/docs/rfcs/0018-document-import-sync.md Normal file

@@ -0,0 +1,210 @@
# RFC 0018: Document Import/Sync Mechanism

| | |
|---|---|
| **Status** | Approved |
| **Date** | 2026-01-25 |
| **Dialogue** | [rfc-document-import-sync](../dialogues/rfc-document-import-sync.dialogue.md) |

---

## Summary

Blue maintains documents in both filesystem (`.blue/docs/*.md`) and database (`blue.db`). When these diverge, Blue reports "not found" for files that visibly exist. This RFC establishes the filesystem as the single source of truth, with the database serving as a rebuildable index/cache.

## Problem

1. **Files invisible to Blue**: Manually created files, copied files, or files after database reset aren't found by `find_document()`
2. **ADR 0005 violation**: Two sources of truth (filesystem and database) inevitably diverge
3. **Git collaboration broken**: Database doesn't survive `git clone`, so collaborators can't see each other's documents
4. **Branch isolation**: Database state persists across branch switches, causing phantom documents

## Architecture

### Authority Model

```
CURRENT (problematic):
  blue_rfc_create → writes file AND database (can diverge)
  find_document() → queries database ONLY (misses files)

PROPOSED:
  Filesystem = SOURCE OF TRUTH (survives git clone)
  Database   = DERIVED INDEX (rebuildable, disposable)
  find_document() = checks index, falls back to filesystem scan
```

### Metadata Location

| Location | Contents | Rationale |
|----------|----------|-----------|
| **Frontmatter** | title, number, status, date | Human-readable identity |
| **Content** | Relationships (as links) | Parseable from text |
| **Database Only** | id, file_path, content_hash, indexed_at, computed relationships | Derived/computed |

**Principle**: If the database is deleted, files alone must be sufficient for full rebuild.

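The rebuild principle hinges on identity living in frontmatter. As an illustration, a minimal frontmatter reader can recover those fields from the file alone; this is a sketch, not Blue's actual parser (which lives behind `scan_and_register` / `parse_document_from_file`), and it assumes simple `key: value` lines between `---` fences.

```rust
// Extract (key, value) pairs from a `---`-delimited frontmatter block.
// Returns None if the block is missing, unterminated, or malformed.
fn parse_frontmatter(content: &str) -> Option<Vec<(String, String)>> {
    let mut lines = content.lines();
    if lines.next()? != "---" {
        return None; // no frontmatter block at all
    }
    let mut fields = Vec::new();
    for line in lines {
        if line == "---" {
            return Some(fields); // closing fence reached
        }
        let (key, value) = line.split_once(':')?; // malformed line -> None
        fields.push((key.trim().to_string(), value.trim().to_string()));
    }
    None // unterminated frontmatter
}
```

Anything this parser cannot recover (ids, hashes, computed relationships) is exactly what the table above relegates to the database.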
### Staleness Detection

Hash-based lazy revalidation:

```rust
fn is_document_stale(doc: &Document, file_path: &Path) -> Result<bool, StoreError> {
    // Fast path: file unmodified since last index means not stale
    let file_mtime = fs::metadata(file_path)?.modified()?;
    if file_mtime <= doc.indexed_at {
        return Ok(false);
    }

    // Slow path: mtime changed, so verify with the content hash
    let content = fs::read_to_string(file_path)?;
    Ok(hash_content(&content) != doc.content_hash)
}
```

No file watchers - they're fragile across platforms and introduce race conditions.

### Reconciliation

| Condition | Action |
|-----------|--------|
| File exists, no DB record | Create DB record from file |
| DB record exists, no file | Soft-delete DB record (`deleted_at = now()`) |
| Both exist, hash mismatch | Update DB from file (filesystem wins) |

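The table above is a pure decision function over (file present?, DB record?, hashes match?). A hedged sketch, with names invented for illustration rather than taken from Blue's API:

```rust
#[derive(Debug, PartialEq)]
enum ReconcileAction {
    CreateRecord,   // file exists, no DB record
    SoftDelete,     // DB record exists, no file
    UpdateFromFile, // both exist, hashes differ: filesystem wins
    NoOp,           // in sync (or nothing to do)
}

fn decide(file_exists: bool, db_hash: Option<&str>, file_hash: &str) -> ReconcileAction {
    match (file_exists, db_hash) {
        (true, None) => ReconcileAction::CreateRecord,
        (false, Some(_)) => ReconcileAction::SoftDelete,
        (false, None) => ReconcileAction::NoOp,
        (true, Some(h)) if h != file_hash => ReconcileAction::UpdateFromFile,
        (true, Some(_)) => ReconcileAction::NoOp,
    }
}
```

Keeping the decision separate from the I/O makes the "filesystem wins" rule trivially testable.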
### User-Facing Commands

```bash
# Explicit reconciliation
blue sync              # Full filesystem scan, reconcile all
blue sync --dry-run    # Report drift without fixing
blue sync rfcs/        # Scope to directory

# Status shows drift
blue status            # Warns if index drift detected

# Normal operations use index (fast)
blue search "feature"  # Queries index
blue rfc get 0042      # Queries index, falls back to filesystem
```

## Implementation

### Phase 1: Add content_hash to Document

```rust
pub struct Document {
    // ... existing fields ...
    pub content_hash: Option<String>,
    pub indexed_at: Option<String>,
}
```

### Phase 2: Implement `find_document` fallback

```rust
pub fn find_document(&self, doc_type: DocType, query: &str) -> Result<Document, StoreError> {
    // Try database first (fast path)
    if let Ok(doc) = self.find_document_in_db(doc_type, query) {
        return Ok(doc);
    }

    // Fall back to filesystem scan
    self.scan_and_register(doc_type, query)
}
```

### Phase 3: Add `blue sync` command

```rust
pub fn reconcile(&self) -> ReconcileResult {
    let mut result = ReconcileResult::default();

    // Scan filesystem
    for file in glob(".blue/docs/**/*.md") {
        if !self.has_record_for(&file) {
            self.register_from_file(&file);
            result.added.push(file);
        }
    }

    // Check for orphan records
    for doc in self.all_documents() {
        if let Some(path) = &doc.file_path {
            if !Path::new(path).exists() {
                self.soft_delete(doc.id);
                result.orphaned.push(doc);
            }
        }
    }

    result
}
```

### Phase 4: Update `blue status` to show drift

```
$ blue status
RFC 0042 in-progress (3/5 tasks)

⚠ Index drift detected:
  + rfcs/0043-new-feature.md (not indexed)
  - rfcs/0037-old-thing.md (file missing)

Run `blue sync` to reconcile.
```

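Rendering that drift warning from the reconcile result might look like the following sketch; the `Drift` struct and its field names are assumptions based on this RFC, not Blue's actual `ReconcileResult`:

```rust
// Assumed shape: files found on disk but not indexed, and indexed
// records whose backing file is gone.
struct Drift {
    unindexed: Vec<String>,
    missing: Vec<String>,
}

fn render_drift(d: &Drift) -> String {
    let mut out = String::from("⚠ Index drift detected:\n");
    for f in &d.unindexed {
        out.push_str(&format!("  + {} (not indexed)\n", f));
    }
    for f in &d.missing {
        out.push_str(&format!("  - {} (file missing)\n", f));
    }
    out.push_str("\nRun `blue sync` to reconcile.");
    out
}
```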
## Implementation Plan

### Phase 1: Schema & Hashing
- [ ] Add `content_hash` and `indexed_at` fields to Document struct in `store.rs`
- [ ] Add migration to create `content_hash` and `indexed_at` columns in documents table
- [ ] Update document creation/update to populate `content_hash` via `hash_content()`

### Phase 2: Fallback Logic
- [ ] Implement `is_document_stale()` with mtime fast path and hash slow path
- [ ] Add `scan_and_register()` to parse frontmatter and create DB record from file
- [ ] Modify `find_document()` to fall back to filesystem scan when DB lookup fails

### Phase 3: Sync Command
- [ ] Create `blue_sync` MCP handler with `ReconcileResult` struct
- [ ] Implement `reconcile()` - scan filesystem, register unindexed files
- [ ] Implement orphan detection - soft-delete records for missing files
- [ ] Add `--dry-run` flag to report drift without fixing
- [ ] Add directory scoping (`blue sync rfcs/`)

### Phase 4: Status Integration
- [ ] Update `blue_status` to detect and warn about index drift
- [ ] Show count of unindexed files and orphan records in status output

## Guardrails

1. **Never auto-fix**: Always report drift, require explicit `blue sync`
2. **Soft delete only**: DB records for missing files get `deleted_at`, never hard-deleted
3. **30-day retention**: Soft-deleted records purged after 30 days via `blue purge`
4. **Frontmatter validation**: Files with malformed frontmatter get indexed with warnings, not rejected

## Test Plan

- [ ] `find_document` returns file that exists but has no DB record
- [ ] `blue sync` creates records for unindexed files
- [ ] `blue sync` soft-deletes records for missing files
- [ ] `blue status` warns when drift detected
- [ ] Database can be deleted and rebuilt from files
- [ ] Frontmatter parse errors don't block indexing
- [ ] Hash-based staleness detection works correctly

## References

- **ADR 0005**: Single Source of Truth - "One truth, one location"
- **ADR 0007**: Integrity - "Hidden state is a crack"
- **RFC 0008**: Status Update File Sync - Already syncs status to files
- **RFC 0017**: Plan File Authority - Companion files as source of truth
- **Dialogue**: 6-expert alignment achieved 97% convergence

---

*"If I can `cat` the file, Blue should know about it."*

— The 🧁 Consensus

148 .blue/docs/rfcs/0019-claude-code-task-integration.md Normal file

@@ -0,0 +1,148 @@
# RFC 0019: Claude Code Task Integration

| | |
|---|---|
| **Status** | Draft |
| **Created** | 2026-01-25 |
| **Source** | Spike: Claude Code Task Integration |
| **Dialogue** | 12-expert alignment, 97% convergence (4 rounds) |

---

## Problem

Blue's RFC task tracking (via `.plan.md` files per RFC 0017) and Claude Code's built-in task management operate independently. Users cannot see Blue tasks in Claude Code's UI without manual effort.

## Proposal

Integrate Blue plan files with Claude Code through **automatic injection and sync** - no skills, no explicit commands.

### Design Principles

1. **File Authority**: `.plan.md` remains the single source of truth
2. **Automatic Injection**: Plan context appears when RFC is accessed
3. **Automatic Sync**: Task completion writes back without explicit command

### Architecture

```
┌─────────────────┐                    ┌─────────────────┐
│    .plan.md     │◄───── MCP ────────│  blue_rfc_get   │
│   (authority)   │     Resource      │                 │
└────────┬────────┘                    └────────┬────────┘
         │                                      │
         │ auto-inject                          │ auto-creates
         │ as context                           │ Claude Code tasks
         ▼                                      ▼
┌─────────────────┐                    ┌─────────────────┐
│  Claude Code    │◄───────────────────│   TaskCreate    │
│    Context      │                    │   (automatic)   │
└────────┬────────┘                    └─────────────────┘
         │
         │ on task complete (hook)
         ▼
┌─────────────────┐
│  blue_rfc_task  │────────► Updates .plan.md
│   _complete     │          (automatic)
└─────────────────┘
```

## Implementation

### 1. MCP Resource: Plan Files

Expose `.plan.md` files as MCP resources for context injection:

```
URI:  blue://docs/rfcs/{number}/plan
Type: text/markdown
```

When Claude Code accesses an RFC, the plan resource is automatically available for context injection.

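Building the resource URI from an RFC number is a one-liner. A sketch, with the function name invented here and no assumption about zero-padding (the URI template above shows a bare `{number}`):

```rust
// Construct the MCP resource URI for an RFC's companion plan file.
fn plan_resource_uri(rfc_number: u32) -> String {
    format!("blue://docs/rfcs/{}/plan", rfc_number)
}
```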
### 2. Tool Enhancement: Auto Task Creation

Modify `blue_rfc_get` to:
1. Return RFC content as normal
2. Include `_plan_uri` field pointing to plan resource
3. **Automatically call TaskCreate** for each plan task with metadata:

```json
{
  "subject": "Task description from plan",
  "activeForm": "Working on RFC task...",
  "metadata": {
    "blue_rfc": "plan-file-authority",
    "blue_rfc_number": 17,
    "blue_task_index": 0
  }
}
```

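On the Rust side, the metadata payload could be mirrored as a small struct. This is a hypothetical sketch: field names follow the JSON above, and serialization is hand-rolled only to keep the example dependency-free (Blue would presumably use serde).

```rust
// Mirror of the task metadata payload linking a Claude Code task
// back to its Blue plan entry.
struct BlueTaskMeta {
    blue_rfc: String,
    blue_rfc_number: u32,
    blue_task_index: usize,
}

impl BlueTaskMeta {
    fn to_json(&self) -> String {
        format!(
            "{{\"blue_rfc\":\"{}\",\"blue_rfc_number\":{},\"blue_task_index\":{}}}",
            self.blue_rfc, self.blue_rfc_number, self.blue_task_index
        )
    }
}
```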
### 3. Injected Sync Instruction

Via Blue's `SessionStart` hook, inject knowledge that instructs Claude to sync:

```markdown
# knowledge/task-sync.md (injected at session start)

When you mark a task complete that has `blue_rfc` metadata,
call `blue_rfc_task_complete` with the RFC title and task index
to update the plan file automatically.
```

This works in any repo with Blue installed - no per-repo configuration needed.

### 4. Audit Trail

All syncs are logged:
- In `blue status` output
- As git-friendly comments in `.plan.md`:

```markdown
<!-- sync: 2026-01-25T10:30:42Z task-0 completed -->
```

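Emitting that comment line is straightforward; a sketch (function name assumed, timestamp passed in pre-formatted as RFC 3339):

```rust
// Produce the git-friendly sync marker appended to `.plan.md`.
fn sync_comment(timestamp: &str, task_index: usize) -> String {
    format!("<!-- sync: {} task-{} completed -->", timestamp, task_index)
}
```

Because the marker is an HTML comment, it stays invisible in rendered markdown while remaining greppable and diffable in git history.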
## What This Enables

| Before | After |
|--------|-------|
| User runs `/blue-plan` skill | Tasks appear automatically |
| User calls `blue_rfc_task_complete` | Completion syncs via hook |
| No visibility in Claude Code UI | Full task progress in UI |
| Manual context switching | Seamless flow |

## What We Don't Build

| Rejected | Reason |
|----------|--------|
| Skills | User preference: use injection |
| Explicit sync command | User preference: automatic |
| Bidirectional conflict resolution | Hash validation + git audit trail sufficient |

## Security Considerations

| Risk | Mitigation |
|------|------------|
| Injection via metadata | Validate `blue_rfc` metadata exists in Blue |
| Hash conflicts | Content-hash validation before write |
| Audit gaps | All syncs logged with timestamps + git history |

## ADR Alignment

| ADR | How Honored |
|-----|-------------|
| ADR 5 (Single Source) | `.plan.md` is authoritative; Claude Code tasks are mirrors |
| ADR 8 (Honor) | Automatic sync is documented behavior; git provides audit |
| ADR 11 (Constraint) | Fully automatic flow removes all ceremony |

## Open Questions

1. ~~Should auto-created tasks be marked with a visual indicator?~~ **Resolved: Yes, use 💙**
2. ~~How to handle task additions mid-session?~~ **Resolved: Poll on access** - Re-read plan file on next `blue_rfc_get`, create missing tasks. Aligns with rebuild-on-read pattern from RFC 0017.

## References

- [RFC 0017: Plan File Authority](./0017-plan-file-authority.md)
- [RFC 0018: Document Import/Sync](./0018-document-import-sync.md)
- [Spike: Claude Code Task Integration](../spikes/2026-01-26-claude-code-task-integration.md)
- [Alignment Dialogue](../dialogues/2026-01-25-claude-code-task-integration.dialogue.md)

@@ -25,6 +25,7 @@ reqwest.workspace = true
dirs.workspace = true
semver.workspace = true
regex.workspace = true
sha2.workspace = true

[dev-dependencies]
tower.workspace = true

@@ -36,7 +36,7 @@ pub use indexer::{Indexer, IndexerConfig, IndexerError, IndexResult, ParsedSymbo
pub use llm::{CompletionOptions, CompletionResult, LlmBackendChoice, LlmConfig, LlmError, LlmManager, LlmProvider, LlmProviderChoice, LocalLlmConfig, ApiLlmConfig, KeywordLlm, MockLlm, ProviderStatus};
pub use repo::{detect_blue, BlueHome, RepoError, WorktreeInfo};
pub use state::{ItemType, ProjectState, StateError, StatusSummary, WorkItem};
-pub use store::{ContextInjection, DocType, Document, DocumentStore, EdgeType, FileIndexEntry, IndexSearchResult, IndexStatus, LinkType, RefreshPolicy, RefreshRateLimit, RelevanceEdge, Reminder, ReminderStatus, SearchResult, Session, SessionType, StagingLock, StagingLockQueueEntry, StagingLockResult, StalenessCheck, StalenessReason, StoreError, SymbolIndexEntry, Task as StoreTask, TaskProgress, Worktree, INDEX_PROMPT_VERSION};
+pub use store::{ContextInjection, DocType, Document, DocumentStore, EdgeType, FileIndexEntry, IndexSearchResult, IndexStatus, LinkType, ParsedDocument, ReconcileResult, RefreshPolicy, RefreshRateLimit, RelevanceEdge, Reminder, ReminderStatus, SearchResult, Session, SessionType, StagingLock, StagingLockQueueEntry, StagingLockResult, StalenessCheck, StalenessReason, StoreError, SymbolIndexEntry, Task as StoreTask, TaskProgress, Worktree, INDEX_PROMPT_VERSION, hash_content, parse_document_from_file};
pub use voice::*;
pub use workflow::{PrdStatus, RfcStatus, SpikeOutcome as WorkflowSpikeOutcome, SpikeStatus, WorkflowError, validate_rfc_transition};
pub use manifest::{ContextManifest, IdentityConfig, WorkflowConfig, ReferenceConfig, PluginConfig, SourceConfig, RefreshTrigger, SalienceTrigger, ManifestError, ManifestResolution, TierResolution, ResolvedSource};

@@ -7,10 +7,18 @@ use std::thread;
use std::time::Duration;

use rusqlite::{params, Connection, OptionalExtension, Transaction, TransactionBehavior};
use sha2::{Sha256, Digest};
use tracing::{debug, info, warn};

/// Compute a SHA-256 hash of content for staleness detection (RFC 0018)
pub fn hash_content(content: &str) -> String {
    let mut hasher = Sha256::new();
    hasher.update(content.as_bytes());
    format!("{:x}", hasher.finalize())
}

/// Current schema version
-const SCHEMA_VERSION: i32 = 7;
+const SCHEMA_VERSION: i32 = 8;

/// Core database schema
const SCHEMA: &str = r#"

@@ -28,6 +36,8 @@ const SCHEMA: &str = r#"
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL,
    deleted_at TEXT,
    content_hash TEXT,
    indexed_at TEXT,
    UNIQUE(doc_type, title)
);

@@ -352,6 +362,21 @@ impl DocType {
            DocType::Audit => "audits",
        }
    }

    /// Subdirectory in .blue/docs/ (RFC 0018)
    pub fn subdir(&self) -> &'static str {
        match self {
            DocType::Rfc => "rfcs",
            DocType::Spike => "spikes",
            DocType::Adr => "adrs",
            DocType::Decision => "decisions",
            DocType::Prd => "prds",
            DocType::Postmortem => "postmortems",
            DocType::Runbook => "runbooks",
            DocType::Dialogue => "dialogues",
            DocType::Audit => "audits",
        }
    }
}

/// Link types between documents

@@ -393,6 +418,10 @@ pub struct Document {
    pub created_at: Option<String>,
    pub updated_at: Option<String>,
    pub deleted_at: Option<String>,
    /// Content hash for staleness detection (RFC 0018)
    pub content_hash: Option<String>,
    /// When the document was last indexed from filesystem (RFC 0018)
    pub indexed_at: Option<String>,
}

impl Document {

@@ -408,6 +437,8 @@ impl Document {
             created_at: None,
             updated_at: None,
             deleted_at: None,
+            content_hash: None,
+            indexed_at: None,
         }
     }
 
@@ -415,6 +446,144 @@ impl Document {
     pub fn is_deleted(&self) -> bool {
         self.deleted_at.is_some()
     }
+
+    /// Check if document is stale based on content hash (RFC 0018)
+    pub fn is_stale(&self, file_path: &Path) -> bool {
+        use std::fs;
+
+        // If no file exists, document isn't stale (it's orphaned, handled separately)
+        if !file_path.exists() {
+            return false;
+        }
+
+        // If we have no hash, document is stale (needs indexing)
+        let Some(ref stored_hash) = self.content_hash else {
+            return true;
+        };
+
+        // Fast path: check mtime if we have indexed_at
+        if let Some(ref indexed_at) = self.indexed_at {
+            if let Ok(metadata) = fs::metadata(file_path) {
+                if let Ok(modified) = metadata.modified() {
+                    let file_mtime: chrono::DateTime<chrono::Utc> = modified.into();
+                    if let Ok(indexed_time) = chrono::DateTime::parse_from_rfc3339(indexed_at) {
+                        // File hasn't changed since indexing
+                        if file_mtime <= indexed_time {
+                            return false;
+                        }
+                    }
+                }
+            }
+        }
+
+        // Slow path: verify with hash
+        if let Ok(content) = fs::read_to_string(file_path) {
+            let current_hash = hash_content(&content);
+            return current_hash != *stored_hash;
+        }
+
+        // If we can't read the file, assume not stale
+        false
+    }
 }
+
+/// Result of parsing a document from a file (RFC 0018)
+#[derive(Debug, Clone)]
+pub struct ParsedDocument {
+    pub doc_type: DocType,
+    pub title: String,
+    pub number: Option<i32>,
+    pub status: String,
+    pub content_hash: String,
+}
+
+/// Parse document metadata from a markdown file's frontmatter (RFC 0018)
+///
+/// Extracts title, number, status from the header table format:
+/// ```markdown
+/// # RFC 0042: My Feature
+///
+/// | | |
+/// |---|---|
+/// | **Status** | Draft |
+/// ```
+pub fn parse_document_from_file(file_path: &Path) -> Result<ParsedDocument, StoreError> {
+    use std::fs;
+
+    let content = fs::read_to_string(file_path)
+        .map_err(|e| StoreError::IoError(e.to_string()))?;
+
+    // Determine doc type from path
+    let path_str = file_path.to_string_lossy();
+    let doc_type = if path_str.contains("/rfcs/") {
+        DocType::Rfc
+    } else if path_str.contains("/spikes/") {
+        DocType::Spike
+    } else if path_str.contains("/adrs/") {
+        DocType::Adr
+    } else if path_str.contains("/decisions/") {
+        DocType::Decision
+    } else if path_str.contains("/postmortems/") {
+        DocType::Postmortem
+    } else if path_str.contains("/runbooks/") {
+        DocType::Runbook
+    } else if path_str.contains("/dialogues/") {
+        DocType::Dialogue
+    } else if path_str.contains("/audits/") {
+        DocType::Audit
+    } else if path_str.contains("/prds/") {
+        DocType::Prd
+    } else {
+        return Err(StoreError::InvalidOperation(
+            format!("Unknown document type for path: {}", path_str)
+        ));
+    };
+
+    // Extract title from first line: # Type NNNN: Title or # Type: Title
+    let title_re = regex::Regex::new(r"^#\s+(?:\w+)\s*(?:(\d+):?)?\s*:?\s*(.+)$").unwrap();
+    let title_line = content.lines().next()
+        .ok_or_else(|| StoreError::InvalidOperation("Empty file".to_string()))?;
+
+    let (number, title) = if let Some(caps) = title_re.captures(title_line) {
+        let num = caps.get(1).and_then(|m| m.as_str().parse().ok());
+        let title = caps.get(2)
+            .map(|m| m.as_str().trim().to_string())
+            .unwrap_or_else(|| "untitled".to_string());
+        (num, title)
+    } else {
+        // Fallback: use filename as title
+        let stem = file_path.file_stem()
+            .and_then(|s| s.to_str())
+            .unwrap_or("untitled");
+        // Try to extract number from filename like "0042-my-feature.md"
+        let num_re = regex::Regex::new(r"^(\d+)-(.+)$").unwrap();
+        if let Some(caps) = num_re.captures(stem) {
+            let num = caps.get(1).and_then(|m| m.as_str().parse().ok());
+            let title = caps.get(2).map(|m| m.as_str().to_string()).unwrap_or_else(|| stem.to_string());
+            (num, title)
+        } else {
+            (None, stem.to_string())
+        }
+    };
+
+    // Extract status from table format: | **Status** | Draft |
+    let status_re = regex::Regex::new(r"\|\s*\*\*Status\*\*\s*\|\s*([^|]+)\s*\|").unwrap();
+    let status = content.lines()
+        .find_map(|line| {
+            status_re.captures(line)
+                .map(|c| c.get(1).unwrap().as_str().trim().to_lowercase())
+        })
+        .unwrap_or_else(|| "draft".to_string());
+
+    let content_hash = hash_content(&content);
+
+    Ok(ParsedDocument {
+        doc_type,
+        title,
+        number,
+        status,
+        content_hash,
+    })
+}
 
 /// A task in a document's plan
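The title-line extraction above relies on the `regex` crate. As a minimal std-only sketch of the same `# Type NNNN: Title` convention (the helper name `parse_title_line` is hypothetical, not part of this commit):

```rust
/// Std-only sketch of the title-line convention parsed above; the real code
/// uses the `regex` crate. `parse_title_line` is a hypothetical helper name.
fn parse_title_line(line: &str) -> Option<(Option<i32>, String)> {
    // "# RFC 0042: My Feature" -> (Some(42), "My Feature")
    let rest = line.strip_prefix('#')?.trim_start();
    // Drop the doc-type word ("RFC", "Spike", ...)
    let (_kind, tail) = rest.split_once(char::is_whitespace)?;
    let tail = tail.trim_start();
    match tail.split_once(':') {
        // Optional zero-padded number before the colon
        Some((num, title))
            if !num.trim().is_empty() && num.trim().chars().all(|c| c.is_ascii_digit()) =>
        {
            Some((num.trim().parse().ok(), title.trim().to_string()))
        }
        _ => Some((None, tail.trim().to_string())),
    }
}

fn main() {
    assert_eq!(
        parse_title_line("# RFC 0042: My Feature"),
        Some((Some(42), "My Feature".to_string()))
    );
}
```

Unlike the regex version, this sketch does not handle every edge case (for example, a missing space after `#`), but it shows why the number capture is optional: untyped titles like `# Spike: Quick Check` must still parse.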
@@ -454,6 +623,35 @@ pub struct SearchResult {
     pub snippet: Option<String>,
 }
 
+/// Result of reconciling database with filesystem (RFC 0018)
+#[derive(Debug, Clone, Default)]
+pub struct ReconcileResult {
+    /// Files found on filesystem but not in database
+    pub unindexed: Vec<String>,
+    /// DB records with no corresponding file
+    pub orphaned: Vec<String>,
+    /// Files that have changed since last index
+    pub stale: Vec<String>,
+    /// Number of documents added (when not dry_run)
+    pub added: usize,
+    /// Number of documents updated (when not dry_run)
+    pub updated: usize,
+    /// Number of documents soft-deleted (when not dry_run)
+    pub soft_deleted: usize,
+}
+
+impl ReconcileResult {
+    /// Check if there is any drift between filesystem and database
+    pub fn has_drift(&self) -> bool {
+        !self.unindexed.is_empty() || !self.orphaned.is_empty() || !self.stale.is_empty()
+    }
+
+    /// Total count of issues found
+    pub fn drift_count(&self) -> usize {
+        self.unindexed.len() + self.orphaned.len() + self.stale.len()
+    }
+}
+
 /// Session types for multi-agent coordination
 #[derive(Debug, Clone, Copy, PartialEq, Eq)]
 pub enum SessionType {
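As a usage sketch of the drift accounting introduced above (a trimmed restatement of the struct from this diff, fields reduced to the three drift lists):

```rust
// Trimmed restatement of ReconcileResult from the diff above: any unindexed,
// orphaned, or stale entry counts as drift.
#[derive(Default)]
struct ReconcileResult {
    unindexed: Vec<String>,
    orphaned: Vec<String>,
    stale: Vec<String>,
}

impl ReconcileResult {
    fn has_drift(&self) -> bool {
        !self.unindexed.is_empty() || !self.orphaned.is_empty() || !self.stale.is_empty()
    }
    fn drift_count(&self) -> usize {
        self.unindexed.len() + self.orphaned.len() + self.stale.len()
    }
}

fn main() {
    let mut r = ReconcileResult::default();
    assert!(!r.has_drift());
    // One modified file on disk -> one unit of drift
    r.stale.push("rfcs/0018-doc-sync.md".to_string());
    assert!(r.has_drift());
    assert_eq!(r.drift_count(), 1);
}
```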
@@ -864,6 +1062,9 @@ pub enum StoreError {
 
     #[error("Can't do that: {0}")]
     InvalidOperation(String),
+
+    #[error("File system error: {0}")]
+    IoError(String),
 }
 
 /// Check if an error is a busy/locked error
@@ -1145,6 +1346,44 @@ impl DocumentStore {
             )?;
         }
 
+        // Migration from v7 to v8: Add content_hash and indexed_at columns (RFC 0018)
+        if from_version < 8 {
+            debug!("Adding content_hash and indexed_at columns (RFC 0018)");
+
+            // Check if columns exist first
+            let has_content_hash: bool = self.conn.query_row(
+                "SELECT COUNT(*) FROM pragma_table_info('documents') WHERE name = 'content_hash'",
+                [],
+                |row| Ok(row.get::<_, i64>(0)? > 0),
+            )?;
+
+            if !has_content_hash {
+                self.conn.execute(
+                    "ALTER TABLE documents ADD COLUMN content_hash TEXT",
+                    [],
+                )?;
+            }
+
+            let has_indexed_at: bool = self.conn.query_row(
+                "SELECT COUNT(*) FROM pragma_table_info('documents') WHERE name = 'indexed_at'",
+                [],
+                |row| Ok(row.get::<_, i64>(0)? > 0),
+            )?;
+
+            if !has_indexed_at {
+                self.conn.execute(
+                    "ALTER TABLE documents ADD COLUMN indexed_at TEXT",
+                    [],
+                )?;
+            }
+
+            // Add index for staleness checking
+            self.conn.execute(
+                "CREATE INDEX IF NOT EXISTS idx_documents_content_hash ON documents(content_hash)",
+                [],
+            )?;
+        }
+
         // Update schema version
         self.conn.execute(
             "UPDATE schema_version SET version = ?1",
@@ -1188,8 +1427,8 @@ impl DocumentStore {
         self.with_retry(|| {
             let now = chrono::Utc::now().to_rfc3339();
             self.conn.execute(
-                "INSERT INTO documents (doc_type, number, title, status, file_path, created_at, updated_at)
-                 VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7)",
+                "INSERT INTO documents (doc_type, number, title, status, file_path, created_at, updated_at, content_hash, indexed_at)
+                 VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9)",
                 params![
                     doc.doc_type.as_str(),
                     doc.number,
@@ -1198,6 +1437,8 @@ impl DocumentStore {
                     doc.file_path,
                     now,
                     now,
+                    doc.content_hash,
+                    doc.indexed_at.as_ref().unwrap_or(&now),
                 ],
             )?;
             Ok(self.conn.last_insert_rowid())
@@ -1208,7 +1449,7 @@ impl DocumentStore {
     pub fn get_document(&self, doc_type: DocType, title: &str) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
                 FROM documents WHERE doc_type = ?1 AND title = ?2 AND deleted_at IS NULL",
                 params![doc_type.as_str(), title],
                 |row| {
@@ -1222,6 +1463,8 @@ impl DocumentStore {
                     created_at: row.get(6)?,
                     updated_at: row.get(7)?,
                     deleted_at: row.get(8)?,
+                    content_hash: row.get(9)?,
+                    indexed_at: row.get(10)?,
                 })
             },
         )
@@ -1235,7 +1478,7 @@ impl DocumentStore {
     pub fn get_document_by_id(&self, id: i64) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
                 FROM documents WHERE id = ?1",
                 params![id],
                 |row| {
@@ -1249,6 +1492,8 @@ impl DocumentStore {
                     created_at: row.get(6)?,
                     updated_at: row.get(7)?,
                     deleted_at: row.get(8)?,
+                    content_hash: row.get(9)?,
+                    indexed_at: row.get(10)?,
                 })
             },
         )
@@ -1268,7 +1513,7 @@ impl DocumentStore {
     ) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
                 FROM documents WHERE doc_type = ?1 AND number = ?2 AND deleted_at IS NULL",
                 params![doc_type.as_str(), number],
                 |row| {
@@ -1282,6 +1527,8 @@ impl DocumentStore {
                     created_at: row.get(6)?,
                     updated_at: row.get(7)?,
                     deleted_at: row.get(8)?,
+                    content_hash: row.get(9)?,
+                    indexed_at: row.get(10)?,
                 })
             },
         )
@@ -1315,7 +1562,7 @@ impl DocumentStore {
         // Try substring match
         let pattern = format!("%{}%", query.to_lowercase());
         if let Ok(doc) = self.conn.query_row(
-            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
             FROM documents WHERE doc_type = ?1 AND LOWER(title) LIKE ?2 AND deleted_at IS NULL
             ORDER BY LENGTH(title) ASC LIMIT 1",
             params![doc_type.as_str(), pattern],
@@ -1330,6 +1577,8 @@ impl DocumentStore {
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
                 deleted_at: row.get(8)?,
+                content_hash: row.get(9)?,
+                indexed_at: row.get(10)?,
             })
         },
     ) {
@@ -1343,6 +1592,272 @@ impl DocumentStore {
         )))
     }
 
+    /// Find a document with filesystem fallback (RFC 0018)
+    ///
+    /// First tries the database, then falls back to scanning the filesystem
+    /// if the document isn't found. Any document found on filesystem is
+    /// automatically registered in the database.
+    pub fn find_document_with_fallback(
+        &self,
+        doc_type: DocType,
+        query: &str,
+        docs_path: &Path,
+    ) -> Result<Document, StoreError> {
+        // Try database first (fast path)
+        if let Ok(doc) = self.find_document(doc_type, query) {
+            return Ok(doc);
+        }
+
+        // Fall back to filesystem scan
+        self.scan_and_register(doc_type, query, docs_path)
+    }
+
+    /// Scan filesystem for a document and register it (RFC 0018)
+    pub fn scan_and_register(
+        &self,
+        doc_type: DocType,
+        query: &str,
+        docs_path: &Path,
+    ) -> Result<Document, StoreError> {
+        use std::fs;
+
+        let subdir = match doc_type {
+            DocType::Rfc => "rfcs",
+            DocType::Spike => "spikes",
+            DocType::Adr => "adrs",
+            DocType::Decision => "decisions",
+            DocType::Dialogue => "dialogues",
+            DocType::Audit => "audits",
+            DocType::Runbook => "runbooks",
+            DocType::Postmortem => "postmortems",
+            DocType::Prd => "prds",
+        };
+
+        let search_dir = docs_path.join(subdir);
+        if !search_dir.exists() {
+            return Err(StoreError::NotFound(format!(
+                "{} matching '{}' (directory {} not found)",
+                doc_type.as_str(),
+                query,
+                search_dir.display()
+            )));
+        }
+
+        let query_lower = query.to_lowercase();
+
+        // Try to parse query as a number
+        let query_num: Option<i32> = query.trim_start_matches('0')
+            .parse()
+            .ok()
+            .or_else(|| if query == "0" { Some(0) } else { None });
+
+        // Scan directory for matching files
+        let entries = fs::read_dir(&search_dir)
+            .map_err(|e| StoreError::IoError(e.to_string()))?;
+
+        for entry in entries.flatten() {
+            let path = entry.path();
+            if path.extension().map(|e| e == "md").unwrap_or(false) {
+                // Skip .plan.md files
+                if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
+                    if name.ends_with(".plan.md") {
+                        continue;
+                    }
+                }
+
+                // Try to parse the file
+                if let Ok(parsed) = parse_document_from_file(&path) {
+                    if parsed.doc_type != doc_type {
+                        continue;
+                    }
+
+                    // Check if this file matches the query
+                    let matches = parsed.title.to_lowercase().contains(&query_lower)
+                        || query_num.map(|n| parsed.number == Some(n)).unwrap_or(false);
+
+                    if matches {
+                        // Register this document in the database
+                        let relative_path = path.strip_prefix(docs_path)
+                            .map(|p| p.to_string_lossy().to_string())
+                            .unwrap_or_else(|_| path.to_string_lossy().to_string());
+
+                        let doc = Document {
+                            id: None,
+                            doc_type: parsed.doc_type,
+                            number: parsed.number,
+                            title: parsed.title,
+                            status: parsed.status,
+                            file_path: Some(relative_path),
+                            created_at: None,
+                            updated_at: None,
+                            deleted_at: None,
+                            content_hash: Some(parsed.content_hash),
+                            indexed_at: Some(chrono::Utc::now().to_rfc3339()),
+                        };
+
+                        let id = self.add_document(&doc)?;
+                        return self.get_document_by_id(id);
+                    }
+                }
+            }
+        }
+
+        Err(StoreError::NotFound(format!(
+            "{} matching '{}'",
+            doc_type.as_str(),
+            query
+        )))
+    }
+
+    /// Register a document from a file path (RFC 0018)
+    pub fn register_from_file(&self, file_path: &Path, docs_path: &Path) -> Result<Document, StoreError> {
+        let parsed = parse_document_from_file(file_path)?;
+
+        let relative_path = file_path.strip_prefix(docs_path)
+            .map(|p| p.to_string_lossy().to_string())
+            .unwrap_or_else(|_| file_path.to_string_lossy().to_string());
+
+        let doc = Document {
+            id: None,
+            doc_type: parsed.doc_type,
+            number: parsed.number,
+            title: parsed.title,
+            status: parsed.status,
+            file_path: Some(relative_path),
+            created_at: None,
+            updated_at: None,
+            deleted_at: None,
+            content_hash: Some(parsed.content_hash),
+            indexed_at: Some(chrono::Utc::now().to_rfc3339()),
+        };
+
+        let id = self.add_document(&doc)?;
+        self.get_document_by_id(id)
+    }
+
+    /// Reconcile database with filesystem (RFC 0018)
+    ///
+    /// Scans the filesystem for documents and reconciles with the database:
+    /// - Files without DB records: create records
+    /// - DB records without files: soft-delete records
+    /// - Hash mismatch: update DB from file
+    pub fn reconcile(
+        &self,
+        docs_path: &Path,
+        doc_type: Option<DocType>,
+        dry_run: bool,
+    ) -> Result<ReconcileResult, StoreError> {
+        use std::collections::HashSet;
+        use std::fs;
+
+        let mut result = ReconcileResult::default();
+
+        let subdirs: Vec<(&str, DocType)> = match doc_type {
+            Some(dt) => vec![(dt.subdir(), dt)],
+            None => vec![
+                ("rfcs", DocType::Rfc),
+                ("spikes", DocType::Spike),
+                ("adrs", DocType::Adr),
+                ("decisions", DocType::Decision),
+                ("dialogues", DocType::Dialogue),
+                ("audits", DocType::Audit),
+                ("runbooks", DocType::Runbook),
+                ("postmortems", DocType::Postmortem),
+                ("prds", DocType::Prd),
+            ],
+        };
+
+        for (subdir, dt) in subdirs {
+            let search_dir = docs_path.join(subdir);
+            if !search_dir.exists() {
+                continue;
+            }
+
+            // Track files we've seen
+            let mut seen_files: HashSet<String> = HashSet::new();
+
+            // Scan filesystem
+            if let Ok(entries) = fs::read_dir(&search_dir) {
+                for entry in entries.flatten() {
+                    let path = entry.path();
+                    if path.extension().map(|e| e == "md").unwrap_or(false) {
+                        // Skip .plan.md files
+                        if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
+                            if name.ends_with(".plan.md") {
+                                continue;
+                            }
+                        }
+
+                        let relative_path = path.strip_prefix(docs_path)
+                            .map(|p| p.to_string_lossy().to_string())
+                            .unwrap_or_else(|_| path.to_string_lossy().to_string());
+
+                        seen_files.insert(relative_path.clone());
+
+                        // Check if file is in database
+                        if let Ok(parsed) = parse_document_from_file(&path) {
+                            if parsed.doc_type != dt {
+                                continue;
+                            }
+
+                            // Try to find existing document
+                            let existing = self.list_documents(dt)
+                                .unwrap_or_default()
+                                .into_iter()
+                                .find(|d| d.file_path.as_ref() == Some(&relative_path));
+
+                            match existing {
+                                None => {
+                                    // File exists but no DB record
+                                    result.unindexed.push(relative_path.clone());
+                                    if !dry_run {
+                                        if let Ok(_doc) = self.register_from_file(&path, docs_path) {
+                                            result.added += 1;
+                                        }
+                                    }
+                                }
+                                Some(doc) => {
+                                    // Check if stale
+                                    if doc.content_hash.as_ref() != Some(&parsed.content_hash) {
+                                        result.stale.push(relative_path.clone());
+                                        if !dry_run {
+                                            if let Some(id) = doc.id {
+                                                let _ = self.update_document_index(id, &parsed.content_hash);
+                                                // Also update status if it changed
+                                                if doc.status.to_lowercase() != parsed.status.to_lowercase() {
+                                                    let _ = self.update_document_status(dt, &doc.title, &parsed.status);
+                                                }
+                                                result.updated += 1;
+                                            }
+                                        }
+                                    }
+                                }
+                            }
+                        }
+                    }
+                }
+            }
+
+            // Check for orphan records
+            for doc in self.list_documents(dt).unwrap_or_default() {
+                if let Some(ref file_path) = doc.file_path {
+                    if !seen_files.contains(file_path) {
+                        let full_path = docs_path.join(file_path);
+                        if !full_path.exists() {
+                            result.orphaned.push(file_path.clone());
+                            if !dry_run {
+                                let _ = self.soft_delete_document(dt, &doc.title);
+                                result.soft_deleted += 1;
+                            }
+                        }
+                    }
+                }
+            }
+        }
+
+        Ok(result)
+    }
+
     /// Update a document's status
     pub fn update_document_status(
         &self,
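The filesystem scan above matches a query such as "0042" or "42" against parsed document numbers. The number normalization it relies on strips leading zeros before parsing and special-cases "0", since stripping would leave an empty string. A std-only restatement of that expression (the wrapper name `parse_query_num` is hypothetical):

```rust
// Restatement of the query-number normalization used by scan_and_register:
// leading zeros are stripped before parsing; "0" is special-cased because
// trim_start_matches('0') would reduce it to an empty, unparseable string.
fn parse_query_num(query: &str) -> Option<i32> {
    query
        .trim_start_matches('0')
        .parse()
        .ok()
        .or_else(|| if query == "0" { Some(0) } else { None })
}

fn main() {
    assert_eq!(parse_query_num("0042"), Some(42)); // zero-padded form
    assert_eq!(parse_query_num("42"), Some(42));   // bare form
    assert_eq!(parse_query_num("0"), Some(0));     // special case
    assert_eq!(parse_query_num("doc-sync"), None); // textual query: no number
}
```

A textual query that yields `None` here still matches by title substring in `scan_and_register`, so the two match paths are complementary.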
@@ -1373,7 +1888,7 @@ impl DocumentStore {
             let now = chrono::Utc::now().to_rfc3339();
             let updated = self.conn.execute(
                 "UPDATE documents SET doc_type = ?1, number = ?2, title = ?3, status = ?4,
-                 file_path = ?5, updated_at = ?6 WHERE id = ?7",
+                 file_path = ?5, updated_at = ?6, content_hash = ?7, indexed_at = ?8 WHERE id = ?9",
                 params![
                     doc.doc_type.as_str(),
                     doc.number,
@@ -1381,6 +1896,8 @@ impl DocumentStore {
                     doc.status,
                     doc.file_path,
                     now,
+                    doc.content_hash,
+                    doc.indexed_at.as_ref().unwrap_or(&now),
                     id
                 ],
             )?;
@@ -1391,10 +1908,29 @@ impl DocumentStore {
         })
     }
 
+    /// Update a document's content hash and indexed_at timestamp (RFC 0018)
+    pub fn update_document_index(
+        &self,
+        id: i64,
+        content_hash: &str,
+    ) -> Result<(), StoreError> {
+        self.with_retry(|| {
+            let now = chrono::Utc::now().to_rfc3339();
+            let updated = self.conn.execute(
+                "UPDATE documents SET content_hash = ?1, indexed_at = ?2 WHERE id = ?3",
+                params![content_hash, now, id],
+            )?;
+            if updated == 0 {
+                return Err(StoreError::NotFound(format!("document #{}", id)));
+            }
+            Ok(())
+        })
+    }
+
     /// List all documents of a given type (excludes soft-deleted)
     pub fn list_documents(&self, doc_type: DocType) -> Result<Vec<Document>, StoreError> {
         let mut stmt = self.conn.prepare(
-            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
             FROM documents WHERE doc_type = ?1 AND deleted_at IS NULL ORDER BY number DESC, title ASC",
         )?;
 
@@ -1409,6 +1945,8 @@ impl DocumentStore {
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
                 deleted_at: row.get(8)?,
+                content_hash: row.get(9)?,
+                indexed_at: row.get(10)?,
             })
         })?;
 
@@ -1423,7 +1961,7 @@ impl DocumentStore {
         status: &str,
     ) -> Result<Vec<Document>, StoreError> {
         let mut stmt = self.conn.prepare(
-            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+            "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
             FROM documents WHERE doc_type = ?1 AND status = ?2 AND deleted_at IS NULL ORDER BY number DESC, title ASC",
         )?;
 
@@ -1438,6 +1976,8 @@ impl DocumentStore {
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
                 deleted_at: row.get(8)?,
+                content_hash: row.get(9)?,
+                indexed_at: row.get(10)?,
             })
         })?;
 
@@ -1499,7 +2039,7 @@ impl DocumentStore {
     pub fn get_deleted_document(&self, doc_type: DocType, title: &str) -> Result<Document, StoreError> {
         self.conn
             .query_row(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
                 FROM documents WHERE doc_type = ?1 AND title = ?2 AND deleted_at IS NOT NULL",
                 params![doc_type.as_str(), title],
                 |row| {
@@ -1513,6 +2053,8 @@ impl DocumentStore {
                     created_at: row.get(6)?,
                     updated_at: row.get(7)?,
                     deleted_at: row.get(8)?,
+                    content_hash: row.get(9)?,
+                    indexed_at: row.get(10)?,
                 })
             },
         )
@@ -1530,12 +2072,12 @@ impl DocumentStore {
     pub fn list_deleted_documents(&self, doc_type: Option<DocType>) -> Result<Vec<Document>, StoreError> {
         let query = match doc_type {
             Some(dt) => format!(
-                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+                "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
                 FROM documents WHERE doc_type = '{}' AND deleted_at IS NOT NULL
                 ORDER BY deleted_at DESC",
                 dt.as_str()
             ),
-            None => "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at
+            None => "SELECT id, doc_type, number, title, status, file_path, created_at, updated_at, deleted_at, content_hash, indexed_at
                 FROM documents WHERE deleted_at IS NOT NULL
                 ORDER BY deleted_at DESC".to_string(),
         };
@@ -1552,6 +2094,8 @@ impl DocumentStore {
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
                 deleted_at: row.get(8)?,
+                content_hash: row.get(9)?,
+                indexed_at: row.get(10)?,
             })
         })?;
 
@@ -1577,7 +2121,7 @@ impl DocumentStore {
     /// Check if a document has ADR dependents (documents that reference it via rfc_to_adr link)
     pub fn has_adr_dependents(&self, document_id: i64) -> Result<Vec<Document>, StoreError> {
         let mut stmt = self.conn.prepare(
-            "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at
+            "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at, d.content_hash, d.indexed_at
             FROM documents d
             JOIN document_links l ON l.source_id = d.id
             WHERE l.target_id = ?1 AND l.link_type = 'rfc_to_adr' AND d.deleted_at IS NULL",
@@ -1594,6 +2138,8 @@ impl DocumentStore {
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
                 deleted_at: row.get(8)?,
+                content_hash: row.get(9)?,
+                indexed_at: row.get(10)?,
             })
         })?;
 
@@ -1639,13 +2185,13 @@ impl DocumentStore {
     ) -> Result<Vec<Document>, StoreError> {
         let query = match link_type {
             Some(lt) => format!(
-                "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at
+                "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at, d.content_hash, d.indexed_at
                 FROM documents d
                 JOIN document_links l ON l.target_id = d.id
                 WHERE l.source_id = ?1 AND l.link_type = '{}' AND d.deleted_at IS NULL",
                 lt.as_str()
             ),
-            None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at
+            None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path, d.created_at, d.updated_at, d.deleted_at, d.content_hash, d.indexed_at
                 FROM documents d
                 JOIN document_links l ON l.target_id = d.id
                 WHERE l.source_id = ?1 AND d.deleted_at IS NULL".to_string(),
@@ -1663,6 +2209,8 @@ impl DocumentStore {
                 created_at: row.get(6)?,
                 updated_at: row.get(7)?,
                 deleted_at: row.get(8)?,
+                content_hash: row.get(9)?,
+                indexed_at: row.get(10)?,
             })
         })?;
 
@@ -1903,7 +2451,7 @@ impl DocumentStore {
         let sql = match doc_type {
             Some(dt) => format!(
                 "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path,
-                 d.created_at, d.updated_at, d.deleted_at, bm25(documents_fts) as score
+                 d.created_at, d.updated_at, d.deleted_at, d.content_hash, d.indexed_at, bm25(documents_fts) as score
                 FROM documents_fts fts
                 JOIN documents d ON d.id = fts.rowid
                 WHERE documents_fts MATCH ?1 AND d.doc_type = '{}' AND d.deleted_at IS NULL
@@ -1912,7 +2460,7 @@ impl DocumentStore {
                 dt.as_str()
             ),
             None => "SELECT d.id, d.doc_type, d.number, d.title, d.status, d.file_path,
-                d.created_at, d.updated_at, d.deleted_at, bm25(documents_fts) as score
+                d.created_at, d.updated_at, d.deleted_at, d.content_hash, d.indexed_at, bm25(documents_fts) as score
                 FROM documents_fts fts
                 JOIN documents d ON d.id = fts.rowid
                 WHERE documents_fts MATCH ?1 AND d.deleted_at IS NULL
@@ -1934,8 +2482,10 @@ impl DocumentStore {
                         created_at: row.get(6)?,
                         updated_at: row.get(7)?,
                         deleted_at: row.get(8)?,
+                        content_hash: row.get(9)?,
+                        indexed_at: row.get(10)?,
                     },
-                    score: row.get(9)?,
+                    score: row.get(11)?,
                     snippet: None,
                 })
             })?;
@@ -145,6 +145,31 @@ impl BlueUri {
 
         match id {
             Some(id) => {
+                // RFC 0019: Check for /plan suffix to return plan file
+                if id.ends_with("/plan") {
+                    let rfc_num = id.trim_end_matches("/plan");
+                    // Find the RFC file to get its title
+                    let entries = std::fs::read_dir(&type_dir)?;
+                    for entry in entries.flatten() {
+                        let path = entry.path();
+                        if let Some(name) = path.file_stem().and_then(|n| n.to_str()) {
+                            if let Some(num_str) = name.split('-').next() {
+                                if num_str == rfc_num
+                                    || num_str.trim_start_matches('0') == rfc_num
+                                {
+                                    // Found the RFC, now get its plan file
+                                    let plan_name = format!("{}.plan.md", name);
+                                    let plan_path = type_dir.join(plan_name);
+                                    if plan_path.exists() {
+                                        return Ok(vec![plan_path]);
+                                    }
+                                }
+                            }
+                        }
+                    }
+                    return Ok(Vec::new());
+                }
+
                 // Specific document - try exact match or pattern match
                 let exact = type_dir.join(format!("{}.md", id));
                 if exact.exists() {
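The `/plan` resolution above accepts both the zero-padded and the bare form of an RFC number when matching filename stems. A std-only restatement of that comparison (the wrapper name `stem_matches` is hypothetical):

```rust
// Restatement of the RFC-number match in the /plan resolution above: a
// filename stem like "0019-task-integration" matches both "0019" and "19".
fn stem_matches(stem: &str, rfc_num: &str) -> bool {
    match stem.split('-').next() {
        Some(num_str) => num_str == rfc_num || num_str.trim_start_matches('0') == rfc_num,
        None => false,
    }
}

fn main() {
    assert!(stem_matches("0019-task-integration", "19"));   // bare form
    assert!(stem_matches("0019-task-integration", "0019")); // padded form
    assert!(!stem_matches("0018-doc-sync", "19"));          // different RFC
}
```

This means `blue://docs/rfcs/19/plan` and `blue://docs/rfcs/0019/plan` resolve to the same `.plan.md` file.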
@@ -100,6 +100,8 @@ pub fn handle_create(state: &mut ProjectState, args: &Value) -> Result<Value, Se
         created_at: None,
         updated_at: None,
         deleted_at: None,
+        content_hash: Some(blue_core::store::hash_content(&markdown)),
+        indexed_at: None,
     };
     state
         .store
@@ -216,6 +218,8 @@ pub fn handle_action_to_rfc(state: &mut ProjectState, args: &Value) -> Result<Va
         created_at: None,
         updated_at: None,
         deleted_at: None,
+        content_hash: Some(blue_core::store::hash_content(&markdown)),
+        indexed_at: None,
     };
     state
         .store
@@ -97,6 +97,23 @@ pub fn handle_resources_list(state: &ProjectState) -> Result<Value, ServerError>
         "mimeType": "text/markdown"
     }));
 
+    // Add plan files for in-progress RFCs (RFC 0019)
+    if let Ok(docs) = state.store.list_documents(blue_core::DocType::Rfc) {
+        for doc in docs.iter().filter(|d| d.status == "in-progress") {
+            if let Some(num) = doc.number {
+                let plan_path = blue_core::plan_file_path(&state.home.docs_path, &doc.title, num);
+                if plan_path.exists() {
+                    resources.push(json!({
+                        "uri": format!("blue://docs/rfcs/{}/plan", num),
+                        "name": format!("💙 Plan: {}", doc.title),
+                        "description": format!("Task plan for RFC {:04}", num),
+                        "mimeType": "text/markdown"
+                    }));
+                }
+            }
+        }
+    }
+
     Ok(json!({
         "resources": resources
     }))
@@ -86,6 +86,8 @@ pub fn handle_create(state: &mut ProjectState, args: &Value) -> Result<Value, Se
         created_at: None,
         updated_at: None,
         deleted_at: None,
+        content_hash: Some(blue_core::store::hash_content(&markdown)),
+        indexed_at: None,
     };
     let doc_id = state
         .store
```diff
@@ -1185,6 +1185,29 @@ impl BlueServer {
                 }
             }
         },
+        // RFC 0018: Document sync tool
+        {
+            "name": "blue_sync",
+            "description": "Reconcile database with filesystem. Scans .blue/docs/ for documents not in database and vice versa.",
+            "inputSchema": {
+                "type": "object",
+                "properties": {
+                    "cwd": {
+                        "type": "string",
+                        "description": "Current working directory"
+                    },
+                    "doc_type": {
+                        "type": "string",
+                        "description": "Limit to specific document type",
+                        "enum": ["rfc", "spike", "adr", "decision", "dialogue", "audit", "runbook", "postmortem", "prd"]
+                    },
+                    "dry_run": {
+                        "type": "boolean",
+                        "description": "Report drift without fixing (default: false)"
+                    }
+                }
+            }
+        },
         // Phase 7: Environment isolation tools
         {
             "name": "blue_env_detect",
```
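Given the schema above, a dry-run invocation of the new tool might look like this (field names come from the `inputSchema`; the wrapping call shape follows the usual MCP `tools/call` convention and is an assumption here):

```json
{
  "name": "blue_sync",
  "arguments": {
    "doc_type": "rfc",
    "dry_run": true
  }
}
```

The response shape for this call is defined by `handle_sync` further down in this commit.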
```diff
@@ -2163,6 +2186,8 @@ impl BlueServer {
         "blue_prd_list" => self.handle_prd_list(&call.arguments),
         // Phase 7: Lint handler
         "blue_lint" => self.handle_lint(&call.arguments),
+        // RFC 0018: Document sync handler
+        "blue_sync" => self.handle_sync(&call.arguments),
         // Phase 7: Environment handlers
         "blue_env_detect" => self.handle_env_detect(&call.arguments),
         "blue_env_mock" => self.handle_env_mock(&call.arguments),
```
```diff
@@ -2236,13 +2261,44 @@ impl BlueServer {
         match self.ensure_state() {
             Ok(state) => {
                 let summary = state.status_summary();
-                Ok(json!({
+
+                // Check for index drift across all doc types
+                let mut total_drift = 0;
+                let mut drift_details = serde_json::Map::new();
+
+                for doc_type in &[DocType::Rfc, DocType::Spike, DocType::Adr, DocType::Decision] {
+                    if let Ok(result) = state.store.reconcile(&state.home.docs_path, Some(*doc_type), true) {
+                        if result.has_drift() {
+                            total_drift += result.drift_count();
+                            drift_details.insert(
+                                format!("{:?}", doc_type).to_lowercase(),
+                                json!({
+                                    "unindexed": result.unindexed.len(),
+                                    "orphaned": result.orphaned.len(),
+                                    "stale": result.stale.len()
+                                })
+                            );
+                        }
+                    }
+                }
+
+                let mut response = json!({
                     "active": summary.active,
                     "ready": summary.ready,
                     "stalled": summary.stalled,
                     "drafts": summary.drafts,
                     "hint": summary.hint
-                }))
+                });
+
+                if total_drift > 0 {
+                    response["index_drift"] = json!({
+                        "total": total_drift,
+                        "by_type": drift_details,
+                        "hint": "Run blue_sync to reconcile."
+                    });
+                }
+
+                Ok(response)
             }
             Err(_) => {
                 // Fall back to a simple message if not in a Blue project
```
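The status handler above leans on three methods of the reconcile result: `unindexed`, `orphaned`, and `stale` buckets plus `has_drift()` and `drift_count()`. A self-contained sketch of that shape, with field and method names taken from the calls above and everything else assumed:

```rust
// Sketch of the ReconcileResult shape implied by the status handler.
#[allow(dead_code)]
#[derive(Default)]
struct ReconcileResult {
    unindexed: Vec<String>,   // on disk, missing from the database
    orphaned: Vec<String>,    // in the database, missing from disk
    stale: Vec<String>,       // content hash disagrees between the two
    added: usize,
    updated: usize,
    soft_deleted: usize,
}

impl ReconcileResult {
    // Total drift is simply the sum of the three buckets.
    fn drift_count(&self) -> usize {
        self.unindexed.len() + self.orphaned.len() + self.stale.len()
    }

    fn has_drift(&self) -> bool {
        self.drift_count() > 0
    }
}

fn main() {
    let mut r = ReconcileResult::default();
    r.unindexed.push("rfcs/0019-task-integration.md".into());
    r.stale.push("rfcs/0018-document-sync.md".into());
    assert!(r.has_drift());
    assert_eq!(r.drift_count(), 2);
}
```

Under this reading, `blue_status` runs `reconcile` in dry-run mode per doc type and only attaches `index_drift` to the response when the summed count is nonzero.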
```diff
@@ -2463,7 +2519,27 @@ impl BlueServer {
         // Add plan file info if it exists
         if plan_path.exists() {
             response["plan_file"] = json!(plan_path.display().to_string());
             response["_plan_uri"] = json!(format!("blue://docs/rfcs/{}/plan", rfc_number));
+            response["cache_rebuilt"] = json!(cache_rebuilt);
+
+            // RFC 0019: Include Claude Code task format for auto-creation
+            let incomplete_tasks: Vec<_> = tasks.iter()
+                .filter(|t| !t.completed)
+                .map(|t| json!({
+                    "subject": format!("💙 {}", t.description),
+                    "description": format!("RFC: {}\nTask {} of {}", doc.title, t.task_index + 1, tasks.len()),
+                    "activeForm": format!("Working on: {}", t.description),
+                    "metadata": {
+                        "blue_rfc": doc.title,
+                        "blue_rfc_number": rfc_number,
+                        "blue_task_index": t.task_index
+                    }
+                }))
+                .collect();
+
+            if !incomplete_tasks.is_empty() {
+                response["claude_code_tasks"] = json!(incomplete_tasks);
+            }
         }

         Ok(response)
```
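Every `claude_code_tasks` entry above prefixes the subject with 💙 so Blue-synced tasks stand out in Claude Code's task UI, and the one-based "Task N of M" line is derived from the zero-based `task_index`. A minimal sketch of that formatting, assuming a plan task carries just a description, index, and completion flag (the `PlanTask` struct is hypothetical; the format strings match the hunk above):

```rust
// Hypothetical plan-task shape; only the fields used by the formatting.
struct PlanTask {
    description: String,
    task_index: usize, // zero-based position in the .plan.md checklist
    completed: bool,
}

fn task_subject(t: &PlanTask) -> String {
    format!("💙 {}", t.description)
}

fn task_description(rfc_title: &str, t: &PlanTask, total: usize) -> String {
    // task_index is zero-based, the human-facing count is one-based
    format!("RFC: {}\nTask {} of {}", rfc_title, t.task_index + 1, total)
}

fn main() {
    let t = PlanTask {
        description: "Write reconcile tests".into(),
        task_index: 0,
        completed: false,
    };
    assert_eq!(task_subject(&t), "💙 Write reconcile tests");
    assert_eq!(task_description("Document Sync", &t, 4),
               "RFC: Document Sync\nTask 1 of 4");
    assert!(!t.completed); // only incomplete tasks are exported
}
```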
```diff
@@ -3143,6 +3219,65 @@ impl BlueServer {
         crate::handlers::lint::handle_lint(args, &state.home.root)
     }
+
+    // RFC 0018: Document sync handler
+    fn handle_sync(&mut self, args: &Option<Value>) -> Result<Value, ServerError> {
+        let empty = json!({});
+        let args = args.as_ref().unwrap_or(&empty);
+
+        let doc_type = args.get("doc_type")
+            .and_then(|v| v.as_str())
+            .and_then(DocType::from_str);
+
+        let dry_run = args.get("dry_run")
+            .and_then(|v| v.as_bool())
+            .unwrap_or(false);
+
+        let state = self.ensure_state()?;
+
+        let result = state.store.reconcile(&state.home.docs_path, doc_type, dry_run)
+            .map_err(|e| ServerError::StateLoadFailed(e.to_string()))?;
+
+        let message = if dry_run {
+            if result.has_drift() {
+                blue_core::voice::info(
+                    &format!(
+                        "Found {} issues: {} unindexed, {} orphaned, {} stale",
+                        result.drift_count(),
+                        result.unindexed.len(),
+                        result.orphaned.len(),
+                        result.stale.len()
+                    ),
+                    Some("Run without --dry-run to fix.")
+                )
+            } else {
+                blue_core::voice::success("No drift detected. Database and filesystem in sync.", None)
+            }
+        } else if result.added > 0 || result.updated > 0 || result.soft_deleted > 0 {
+            blue_core::voice::success(
+                &format!(
+                    "Synced: {} added, {} updated, {} soft-deleted",
+                    result.added, result.updated, result.soft_deleted
+                ),
+                None
+            )
+        } else {
+            blue_core::voice::success("Already in sync.", None)
+        };
+
+        Ok(json!({
+            "status": "success",
+            "message": message,
+            "dry_run": dry_run,
+            "unindexed": result.unindexed,
+            "orphaned": result.orphaned,
+            "stale": result.stale,
+            "added": result.added,
+            "updated": result.updated,
+            "soft_deleted": result.soft_deleted,
+            "has_drift": result.has_drift()
+        }))
+    }
+
     // Phase 7: Environment handlers

     fn handle_env_detect(&mut self, args: &Option<Value>) -> Result<Value, ServerError> {
```
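The summary-message branching in `handle_sync` is easy to pin down in isolation. A sketch of the non-dry-run arm, with the wording taken verbatim from the handler above and the `blue_core::voice` wrapper left out (it presumably only decorates the string):

```rust
// Mirrors the else-branches of handle_sync's message selection: report
// what changed, or confirm there was nothing to do.
fn sync_message(added: usize, updated: usize, soft_deleted: usize) -> String {
    if added > 0 || updated > 0 || soft_deleted > 0 {
        format!(
            "Synced: {} added, {} updated, {} soft-deleted",
            added, updated, soft_deleted
        )
    } else {
        "Already in sync.".to_string()
    }
}

fn main() {
    assert_eq!(sync_message(2, 1, 0), "Synced: 2 added, 1 updated, 0 soft-deleted");
    assert_eq!(sync_message(0, 0, 0), "Already in sync.");
}
```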
knowledge/task-sync.md (new file, 13 lines)

```diff
@@ -0,0 +1,13 @@
+# Blue Task Sync
+
+When you call `blue_rfc_get`, it returns a `claude_code_tasks` array with pre-formatted tasks ready for TaskCreate. Each task includes:
+- 💙 prefix in subject (indicates Blue-synced task)
+- `blue_rfc` and `blue_task_index` in metadata
+
+Create these tasks to track RFC work in Claude Code's task UI.
+
+When you mark a task complete that has `blue_rfc` metadata, call `blue_rfc_task_complete` with:
+- `title`: the RFC title from metadata
+- `task_index`: the task index from metadata
+
+This updates the `.plan.md` file automatically.
```
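Following those instructions, the completion call fed back to Blue might look like this (the `title` and `task_index` argument names come from the file above; the wrapping call shape and the example values are assumptions):

```json
{
  "name": "blue_rfc_task_complete",
  "arguments": {
    "title": "Claude Code Task Integration",
    "task_index": 2
  }
}
```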