Wake Intelligence: 3-Layer Temporal Intelligence for AI Agents
A production-ready Model Context Protocol (MCP) server implementing a temporal intelligence "brain" with three layers: Past (causality tracking), Present (memory management), and Future (predictive pre-fetching).
Reference implementation of Semantic Intent as Single Source of Truth patterns with hexagonal architecture.
- Wake Intelligence Brain Architecture
- What Makes This Different
- Quick Start
- Architecture
- Features
- Testing
- Database Setup
- Contributing
- Security
- License
Wake Intelligence implements a 3-layer temporal intelligence system that learns from the past, manages the present, and predicts the future:
Tracks WHY contexts were created and their causal relationships.
Features:
- ✅ Causal chain tracking (what led to what)
- ✅ Dependency auto-detection from temporal proximity
- ✅ Reasoning reconstruction ("Why did I do this?")
- ✅ Action type taxonomy (decision, implementation, refactor, etc.)
Use Cases:
- Trace decision history backwards through time
- Understand why a context was created
- Identify context dependencies automatically
- Reconstruct reasoning from past sessions
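The dependency auto-detection idea can be sketched as follows. This is a minimal illustration: the 10-minute window, the `Snapshot` shape, and the function name are assumptions, not the project's actual API.

```typescript
// Hypothetical sketch: infer dependencies from temporal proximity.
// A context is assumed to depend on contexts created shortly before it.
interface Snapshot {
  id: string;
  createdAt: number; // epoch milliseconds
}

function detectDependencies(
  current: Snapshot,
  history: Snapshot[],
  windowMs: number = 10 * 60 * 1000, // illustrative 10-minute window
): string[] {
  return history
    .filter((s) => s.id !== current.id)
    .filter((s) => {
      const delta = current.createdAt - s.createdAt;
      return delta > 0 && delta <= windowMs; // strictly earlier, within window
    })
    .map((s) => s.id);
}
```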
Manages HOW relevant contexts are right now based on temporal patterns.
Features:
- ✅ 4-tier memory classification (ACTIVE, RECENT, ARCHIVED, EXPIRED)
- ✅ LRU tracking (last access time + access count)
- ✅ Automatic tier recalculation based on age
- ✅ Expired context pruning
Memory Tiers:
- ACTIVE: Last accessed < 1 hour ago
- RECENT: Last accessed 1-24 hours ago
- ARCHIVED: Last accessed 1-30 days ago
- EXPIRED: Last accessed > 30 days ago
Use Cases:
- Prioritize recent contexts in search results
- Automatically archive old contexts
- Prune expired contexts to save storage
- Track context access patterns
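The tier thresholds above translate directly into a classification function. A minimal sketch, assuming hours-since-access as input; the names are illustrative, not the project's actual API.

```typescript
// Illustrative tier classification from hours since last access.
// Thresholds mirror the four tiers described above.
type MemoryTier = "ACTIVE" | "RECENT" | "ARCHIVED" | "EXPIRED";

function classifyTier(hoursSinceAccess: number): MemoryTier {
  if (hoursSinceAccess < 1) return "ACTIVE";
  if (hoursSinceAccess < 24) return "RECENT";
  if (hoursSinceAccess < 24 * 30) return "ARCHIVED"; // up to 30 days
  return "EXPIRED";
}
```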
Predicts WHAT contexts will be needed next for proactive optimization.
Features:
- ✅ Composite prediction scoring (40% temporal + 30% causal + 30% frequency)
- ✅ Pattern-based next access estimation
- ✅ Observable prediction reasoning
- ✅ Staleness management with lazy refresh
Prediction Algorithm:
- Temporal Score (40%): Exponential decay based on last access time
- Causal Score (30%): Position in causal chains (roots score higher)
- Frequency Score (30%): Logarithmic scaling of access count
Use Cases:
- Pre-fetch high-value contexts for faster retrieval
- Cache frequently accessed contexts in memory
- Prioritize contexts by prediction score
- Identify patterns in context usage
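The composite score can be sketched like this. The 40/30/30 weights, exponential temporal decay, and logarithmic frequency scaling come from the description above; the decay constant, the depth formula, and the saturation point are illustrative assumptions.

```typescript
// Sketch of the composite prediction score (40% temporal + 30% causal
// + 30% frequency). Constants are assumptions, not the project's values.
interface ContextStats {
  hoursSinceAccess: number; // drives temporal decay
  causalDepth: number;      // 0 = root of a causal chain
  accessCount: number;      // recorded accesses
}

function predictionScore(s: ContextStats): number {
  const temporal = Math.exp(-s.hoursSinceAccess / 24);  // decays over ~1 day
  const causal = 1 / (1 + s.causalDepth);               // roots score higher
  const frequency = Math.min(
    Math.log1p(s.accessCount) / Math.log1p(100),        // saturates at 100 accesses
    1,
  );
  return 0.4 * temporal + 0.3 * causal + 0.3 * frequency;
}
```

A just-accessed root context with a long access history scores near 1.0; a stale, deeply nested, rarely used one scores near 0.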
┌─────────────────────────────────────────────────────────────┐
│                   WAKE INTELLIGENCE BRAIN                    │
├─────────────────────────────────────────────────────────────┤
│                                                               │
│  LAYER 3: PROPAGATION ENGINE (Future - WHAT)                │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Predicts WHAT will be needed next                 │    │
│  │ • Composite scoring (temporal + causal + frequency) │    │
│  │ • Pre-fetching optimization                         │    │
│  └─────────────────────────────────────────────────────┘    │
│                            ▲                                  │
│  LAYER 2: MEMORY MANAGER (Present - HOW)                    │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Tracks HOW relevant contexts are NOW              │    │
│  │ • 4-tier memory classification                      │    │
│  │ • LRU tracking + automatic tier updates             │    │
│  └─────────────────────────────────────────────────────┘    │
│                            ▲                                  │
│  LAYER 1: CAUSALITY ENGINE (Past - WHY)                     │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ • Tracks WHY contexts were created                  │    │
│  │ • Causal chain tracking                             │    │
│  │ • Dependency auto-detection                         │    │
│  └─────────────────────────────────────────────────────┘    │
│                                                               │
└─────────────────────────────────────────────────────────────┘
Benefits:
- 🎯 Learn from the past: Understand causal relationships
- 🎯 Optimize the present: Manage memory intelligently
- 🎯 Predict the future: Pre-fetch what's needed next
- 🎯 Observable reasoning: Every decision is explainable
- 🎯 Deterministic algorithms: No black-box predictions
This isn't just another MCP server—it's a reference implementation of proven semantic intent patterns:
- ✅ Semantic Anchoring: Decisions based on meaning, not technical characteristics
- ✅ Intent Preservation: Semantic contracts maintained through all transformations
- ✅ Observable Properties: Behavior anchored to directly observable semantic markers
- ✅ Domain Boundaries: Clear semantic ownership across layers
Built on research from Semantic Intent as Single Source of Truth, this implementation demonstrates how to build maintainable, AI-friendly codebases that preserve intent.
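As a toy illustration of the contrast, semantic anchoring keys behavior off a declared, observable marker rather than an incidental technical trait. The types and names here are hypothetical, not from the codebase.

```typescript
// Toy contrast of semantic vs. structural anchoring.
type ActionType = "decision" | "implementation" | "refactor";

interface Context {
  actionType: ActionType; // observable semantic marker
  content: string;
}

// ✅ Semantic anchoring: behavior based on declared meaning
const isDecision = (c: Context): boolean => c.actionType === "decision";

// ❌ Structural anchoring: behavior based on an incidental characteristic
const looksLikeDecision = (c: Context): boolean => c.content.length > 200;
```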
- Node.js 20.x or higher
- Cloudflare account (free tier works)
- Wrangler CLI: npm install -g wrangler
1. Clone the repository:

   ```bash
   git clone https://github.com/semanticintent/semantic-wake-intelligence-mcp.git
   cd semantic-wake-intelligence-mcp
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Configure Wrangler. Copy the example configuration:

   ```bash
   cp wrangler.jsonc.example wrangler.jsonc
   ```

   Create a D1 database:

   ```bash
   wrangler d1 create mcp-context
   ```

   Update wrangler.jsonc with your database ID:

   ```jsonc
   { "d1_databases": [{ "database_id": "your-database-id-from-above-command" }] }
   ```

4. Run database migrations:

   ```bash
   # Local development
   wrangler d1 execute mcp-context --local --file=./migrations/0001_initial_schema.sql

   # Production
   wrangler d1 execute mcp-context --file=./migrations/0001_initial_schema.sql
   ```

5. Start the development server:

   ```bash
   npm run dev
   ```

To deploy to Cloudflare:

```bash
npm run deploy
```

Your MCP server will be available at: semantic-wake-intelligence-mcp.<your-account>.workers.dev
This codebase demonstrates semantic intent patterns throughout:
- src/index.ts - Dependency injection composition root (74 lines)
- src/domain/ - Business logic layer (ContextSnapshot, ContextService)
- src/application/ - Orchestration layer (handlers and protocol)
- src/infrastructure/ - Technical adapters (D1, AI, CORS)
- src/presentation/ - HTTP routing layer (MCPRouter)
- migrations/0001_initial_schema.sql - Schema with semantic intent documentation
- src/types.ts - Type-safe semantic contracts
- SEMANTIC_ANCHORING_GOVERNANCE.md - Governance rules and patterns
- REFACTORING_PLAN.md - Complete refactoring documentation
Each file includes comprehensive comments explaining WHY decisions preserve semantic intent, not just WHAT the code does.
You can connect to your MCP server from the Cloudflare AI Playground, which is a remote MCP client:
- Go to https://playground.ai.cloudflare.com/
- Enter your deployed MCP server URL (semantic-wake-intelligence-mcp.<your-account>.workers.dev/sse)
- You can now use your MCP tools directly from the playground!
You can also connect to your remote MCP server from local MCP clients, by using the mcp-remote proxy.
To connect to your MCP server from Claude Desktop, follow Anthropic's Quickstart and within Claude Desktop go to Settings > Developer > Edit Config.
Update with this configuration:
```json
{
  "mcpServers": {
    "semantic-context": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "http://localhost:8787/sse"
      ]
    }
  }
}
```

For a deployed server, replace the URL with `https://semantic-wake-intelligence-mcp.<your-account>.workers.dev/sse`. (Claude Desktop's config is strict JSON, so keep comments out of the file.)

Restart Claude Desktop and you should see the tools become available.
This project demonstrates Domain-Driven Hexagonal Architecture with clean separation of concerns:
┌─────────────────────────────────────────────────────────┐
│                   Presentation Layer                     │
│              (MCPRouter - HTTP routing)                  │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                  Application Layer                       │
│     (ToolExecutionHandler, MCPProtocolHandler)          │
│              MCP Protocol & Orchestration                │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                    Domain Layer                          │
│         (ContextService, ContextSnapshot)                │
│                 Business Logic                           │
└────────────────────┬────────────────────────────────────┘
                     │
┌────────────────────▼────────────────────────────────────┐
│                Infrastructure Layer                      │
│    (D1ContextRepository, CloudflareAIProvider)          │
│           Technical Adapters (Ports & Adapters)         │
└─────────────────────────────────────────────────────────┘
Domain Layer (src/domain/):
- Pure business logic independent of infrastructure
- ContextSnapshot: Entity with validation rules
- ContextService: Core business operations
Application Layer (src/application/):
- Orchestrates domain operations
- ToolExecutionHandler: Translates MCP tools to domain operations
- MCPProtocolHandler: Manages JSON-RPC protocol
Infrastructure Layer (src/infrastructure/):
- Technical adapters implementing ports (interfaces)
- D1ContextRepository: Cloudflare D1 persistence
- CloudflareAIProvider: Workers AI integration
- CORSMiddleware: Cross-cutting concerns
Presentation Layer (src/presentation/):
- HTTP routing and request handling
- MCPRouter: Routes requests to appropriate handlers
Composition Root (src/index.ts):
- Dependency injection
- Wires all layers together
- 74 lines (down from 483, an ~85% reduction)
- ✅ Testability: Each layer independently testable
- ✅ Maintainability: Clear responsibilities per layer
- ✅ Flexibility: Swap infrastructure (D1 → Postgres) without touching domain
- ✅ Semantic Intent: Comprehensive documentation of WHY
- ✅ Type Safety: Strong TypeScript contracts throughout
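The ports-and-adapters flow above can be sketched in a few lines: the domain depends on an interface (port), infrastructure supplies an adapter, and the composition root wires them together. Names here are illustrative, not the project's actual classes.

```typescript
// Port owned by the domain: persistence is an interface, not a dependency.
interface ContextRepository {
  save(project: string, content: string): Promise<void>;
}

// Adapter: swap for D1, Postgres, etc. without touching the domain.
class InMemoryContextRepository implements ContextRepository {
  store: Array<{ project: string; content: string }> = [];
  async save(project: string, content: string): Promise<void> {
    this.store.push({ project, content });
  }
}

// Infrastructure-free domain logic.
class ContextService {
  constructor(private readonly repo: ContextRepository) {}

  async saveContext(project: string, content: string): Promise<void> {
    if (!content.trim()) throw new Error("content must not be empty");
    await this.repo.save(project, content);
  }
}

// Composition root: the only place that knows concrete adapters.
const service = new ContextService(new InMemoryContextRepository());
```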
- save_context: Save conversation context with AI-powered summarization and auto-tagging
- load_context: Retrieve relevant context for a project (with Layer 2 LRU tracking)
- search_context: Search contexts using keyword matching (with Layer 2 access tracking)
- reconstruct_reasoning: Understand WHY a context was created
- build_causal_chain: Trace decision history backwards through time
- get_causality_stats: Analytics on causal relationships and action types
- get_memory_stats: View memory tier distribution and access patterns
- recalculate_memory_tiers: Update tier classifications based on current time
- prune_expired_contexts: Automatic cleanup of old, unused contexts
- update_predictions: Refresh prediction scores for a project
- get_high_value_contexts: Retrieve contexts most likely to be accessed next
- get_propagation_stats: Analytics on prediction quality and patterns
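Tools are invoked over JSON-RPC via MCP's standard `tools/call` method. The payload below is illustrative; the argument names are assumptions about this server's input schema, not a documented contract.

```typescript
// Illustrative JSON-RPC 2.0 request for the save_context tool.
const saveContextRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "save_context",
    arguments: {
      project: "my-project", // hypothetical argument names
      content: "Chose hexagonal architecture for the router refactor",
    },
  },
};

// The payload travels as plain JSON on the wire.
const wireBody = JSON.stringify(saveContextRequest);
```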
This project includes comprehensive unit tests with 70 tests covering all architectural layers.
```bash
# Run all tests
npm test

# Run tests in watch mode
npm run test:watch

# Run tests with UI
npm run test:ui

# Run tests with coverage report
npm run test:coverage
```

Test coverage by layer:

- ✅ Domain Layer: 15 tests (ContextSnapshot validation, ContextService orchestration)
- ✅ Application Layer: 10 tests (ToolExecutionHandler, MCP tool dispatch)
- ✅ Infrastructure Layer: 20 tests (D1Repository, CloudflareAIProvider with fallbacks)
- ✅ Presentation Layer: 12 tests (MCPRouter, CORS, error handling)
- ✅ Integration: 13 tests (End-to-end service flows)
Tests are co-located with source files using the .test.ts suffix:
src/
├── domain/
│   ├── models/
│   │   ├── ContextSnapshot.ts
│   │   └── ContextSnapshot.test.ts
│   └── services/
│       ├── ContextService.ts
│       └── ContextService.test.ts
├── application/
│   └── handlers/
│       ├── ToolExecutionHandler.ts
│       └── ToolExecutionHandler.test.ts
└── ...
All tests use Vitest with mocking for external dependencies (D1, AI services).
This project uses GitHub Actions for automated testing and quality checks.
Automated Checks on Every Push/PR:
- ✅ TypeScript compilation (npm run type-check)
- ✅ Unit tests (npm test)
- ✅ Test coverage reports
- ✅ Code formatting (Biome)
- ✅ Linting (Biome)
Status Badges:
- CI status displayed at top of README
- Automatically updates on each commit
- Shows passing/failing state
Workflow Configuration: .github/workflows/ci.yml
The CI pipeline runs on Node.js 20.x and ensures code quality before merging.
This project uses Cloudflare D1 for persistent context storage.
1. Create the D1 database:

   ```bash
   wrangler d1 create mcp-context
   ```

2. Update wrangler.jsonc with your database ID:

   ```jsonc
   {
     "d1_databases": [
       {
         "binding": "DB",
         "database_name": "mcp-context",
         "database_id": "your-database-id-here"
       }
     ]
   }
   ```

3. Run the initial migration:

   ```bash
   wrangler d1 execute mcp-context --file=./migrations/0001_initial_schema.sql
   ```

For local testing, initialize the local D1 database:

```bash
wrangler d1 execute mcp-context --local --file=./migrations/0001_initial_schema.sql
```

Check that the tables were created successfully:

```bash
# Production
wrangler d1 execute mcp-context --command="SELECT name FROM sqlite_master WHERE type='table'"

# Local
wrangler d1 execute mcp-context --local --command="SELECT name FROM sqlite_master WHERE type='table'"
```

All database schema changes are managed through versioned migration files in migrations/:
- 0001_initial_schema.sql - Initial context snapshots table with semantic indexes
See migrations/README.md for detailed migration management guide.
This project is licensed under the MIT License - see the LICENSE file for details.
This implementation is based on the research paper "Semantic Intent as Single Source of Truth: Immutable Governance for AI-Assisted Development".
- Semantic Over Structural - Use meaning, not technical characteristics
- Intent Preservation - Maintain semantic contracts through transformations
- Observable Anchoring - Base behavior on directly observable properties
- Immutable Governance - Protect semantic integrity at runtime
- Research Paper (coming soon)
- Semantic Anchoring Governance
- semanticintent.dev (coming soon)
We welcome contributions! This is a reference implementation, so contributions should maintain semantic intent principles.
- Read the guidelines: CONTRIBUTING.md
- Check existing issues: Avoid duplicates
- Follow the architecture: Maintain layer boundaries
- Add tests: All changes need test coverage
- Document intent: Explain WHY, not just WHAT
- ✅ Follow semantic intent patterns
- ✅ Maintain hexagonal architecture
- ✅ Add comprehensive tests
- ✅ Include semantic documentation
- ✅ Pass all CI checks
Quick Links:
- Contributing Guide - Detailed guidelines
- Code of Conduct - Community standards
- Architecture Guide - Design principles
- Security Policy - Report vulnerabilities
- 💬 Discussions - Ask questions
- 🐛 Issues - Report bugs
- 🔒 Security - Report vulnerabilities privately
Security is a top priority. Please review our Security Policy for:
- Secrets management best practices
- What to commit / what to exclude
- Reporting security vulnerabilities
- Security checklist for deployment
Found a vulnerability? Email: [email protected]