A framework for defining persistent AI personas with memory, file ownership, and cross-team protocols.
How AI teammates persist knowledge across sessions.
Every AI session starts with a blank slate. The model has no memory of prior conversations, decisions, or context. Without a persistence layer, users must re-explain context every time, and agents repeat the same mistakes.
Teammates solves this with a file-based memory system. Files are the only persistence layer — there is no RAM between sessions.
Memory operates on two independent axes:
- **Episodic (timeline):** Daily Logs (raw, days) → Weekly Summaries (compacted, 52 weeks) → Monthly Summaries (compacted, permanent)
- **Semantic (knowledge):** Typed Memories (durable facts) → WISDOM (distilled principles, permanent)
Daily logs feed both axes. Episodic compaction produces weekly and monthly summaries. Durable facts and lessons extracted during compaction become typed memories, which eventually distill into wisdom.
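As a rough data model, the two axes and their compaction steps might look like this (a hypothetical TypeScript sketch; none of these names are part of the framework):

```typescript
// Illustrative types only -- not part of the Teammates framework.
type EpisodicTier = "daily" | "weekly" | "monthly";
type MemoryType = "user" | "feedback" | "project" | "reference";

// Episodic compaction moves content down the timeline axis; monthlies
// are permanent, so they compact no further.
function nextEpisodicTier(tier: EpisodicTier): EpisodicTier {
  switch (tier) {
    case "daily":   return "weekly";
    case "weekly":  return "monthly";
    case "monthly": return "monthly";
  }
}

// Semantic distillation: a durable fact lives in a typed memory until
// it is merged into WISDOM. Represented here as a location tag.
type SemanticHome = { kind: "typed"; type: MemoryType } | { kind: "wisdom" };
```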
Each teammate has its own memory under .teammates/<name>/:
```
.teammates/<name>/
├── SOUL.md               # Identity, principles, boundaries
├── WISDOM.md             # Distilled principles (Tier 3)
└── memory/
    ├── YYYY-MM-DD.md     # Daily logs (Tier 1)
    ├── <type>_<topic>.md # Typed memories (Tier 2)
    ├── weekly/
    │   └── YYYY-Wnn.md   # Weekly summaries (Tier 1b)
    └── monthly/
        └── YYYY-MM.md    # Monthly summaries (Tier 1b)
```
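The naming conventions above are mechanical enough to compute. A sketch of hypothetical path helpers (the ISO-8601 week math matches the `YYYY-Wnn` names; nothing here is the CLI's actual API):

```typescript
// Hypothetical path helpers encoding the layout above (not the CLI's API).
const pad = (n: number) => String(n).padStart(2, "0");

// ISO-8601 week number: the week containing the year's first Thursday is W01.
function isoWeek(d: Date): { year: number; week: number } {
  const t = new Date(Date.UTC(d.getUTCFullYear(), d.getUTCMonth(), d.getUTCDate()));
  const day = t.getUTCDay() || 7;          // Mon=1 .. Sun=7
  t.setUTCDate(t.getUTCDate() + 4 - day);  // shift to this week's Thursday
  const yearStart = Date.UTC(t.getUTCFullYear(), 0, 1);
  const week = Math.ceil(((t.getTime() - yearStart) / 86400000 + 1) / 7);
  return { year: t.getUTCFullYear(), week };
}

function dailyPath(name: string, d: Date): string {
  return `.teammates/${name}/memory/${d.getUTCFullYear()}-${pad(d.getUTCMonth() + 1)}-${pad(d.getUTCDate())}.md`;
}

function weeklyPath(name: string, d: Date): string {
  const { year, week } = isoWeek(d);
  return `.teammates/${name}/memory/weekly/${year}-W${pad(week)}.md`;
}
```

For example, `weeklyPath("beacon", ...)` for 2026-03-14 resolves to `.teammates/beacon/memory/weekly/2026-W11.md`, consistent with the weekly summary shown later in this document.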
The CLI automatically builds each teammate's context before every task. The prompt stack, in order:

1. `SOUL.md` — identity, principles, boundaries
2. `WISDOM.md` — distilled principles
3. Recall results — relevant memories retrieved by semantic search
4. Daily logs — the last 7 days of raw session notes

Teammates do not need to manually read these files or run recall searches — the CLI handles all context assembly automatically.
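Conceptually, the assembly is ordered concatenation. A minimal sketch, assuming a hypothetical `buildContext` helper (the real CLI's internals are not documented here):

```typescript
// Hypothetical sketch of context assembly. The ordering follows the
// documented prompt stack: SOUL, then WISDOM, then recall, then dailies.
interface ContextParts {
  soul: string;        // SOUL.md: identity, principles, boundaries
  wisdom: string;      // WISDOM.md: distilled principles
  recall: string[];    // relevant memories returned by recall search
  dailyLogs: string[]; // raw daily logs from the last 7 days
}

function buildContext(p: ContextParts): string {
  // Recall results are injected between WISDOM.md and the daily logs.
  return [p.soul, p.wisdom, ...p.recall, ...p.dailyLogs].join("\n\n---\n\n");
}
```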
`memory/YYYY-MM-DD.md` — Append-only session notes: what was worked on, what was decided, and what to pick up next. A new file is created each day. No frontmatter needed. These are raw scratch notes.
```markdown
# 2026-03-14

## Refactored auth middleware
- Moved session validation into its own module
- Decided to keep cookie-based auth for now (JWT migration deferred)
- Need to update tests tomorrow
```
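Since daily logs are append-only plain text, writing an entry is trivial. A hypothetical helper, not part of the CLI:

```typescript
import { appendFileSync, existsSync } from "node:fs";

// Hypothetical helpers (not the CLI's API) for append-only daily logs.
// Format one "## heading" entry with its bullet points.
function formatDailyEntry(heading: string, bullets: string[]): string {
  return [`## ${heading}`, ...bullets.map((b) => `- ${b}`)].join("\n") + "\n";
}

// Create the day's file with its `# YYYY-MM-DD` title on first write,
// then append; existing content is never rewritten.
function appendToDailyLog(path: string, date: string, entry: string): void {
  if (!existsSync(path)) appendFileSync(path, `# ${date}\n\n`);
  appendFileSync(path, entry);
}
```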
Compacted from daily logs by the /compact command:
- `memory/weekly/YYYY-Wnn.md` — Summary of a full week's daily logs. Kept for 52 weeks.
- `memory/monthly/YYYY-MM.md` — Summary of weekly summaries older than 52 weeks. Kept permanently.

Both include frontmatter for searchability:
```markdown
---
type: weekly
week: 2026-W11
teammate: beacon
period: 2026-03-09 to 2026-03-15
---

Implemented hybrid search merging for recall. Fixed chunking overlap
issue that caused duplicate results. Shipped v0.4.0.
```
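Because the frontmatter is flat `key: value` pairs, tooling can read it with a few lines of code. A minimal sketch, assuming no nested YAML:

```typescript
// Minimal frontmatter splitter for flat `key: value` headers.
// A sketch only -- real YAML (nesting, lists) needs a proper parser.
function parseFrontmatter(md: string): { meta: Record<string, string>; body: string } {
  const m = md.match(/^---\n([\s\S]*?)\n---\n?/);
  if (!m) return { meta: {}, body: md };
  const meta: Record<string, string> = {};
  for (const line of m[1].split("\n")) {
    const i = line.indexOf(":");
    if (i > 0) meta[line.slice(0, i).trim()] = line.slice(i + 1).trim();
  }
  return { meta, body: md.slice(m[0].length) };
}
```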
memory/<type>_<topic>.md — Individual files capturing durable knowledge. Each has frontmatter for search and relevance matching.
| Type | When to save | Body structure |
|---|---|---|
| `user` | User's role, goals, preferences, knowledge level | Free-form description |
| `feedback` | Corrections or guidance from the user | Rule, then **Why:** and **How to apply:** |
| `project` | Ongoing work, goals, deadlines, decisions | Fact/decision, then **Why:** and **How to apply:** |
| `reference` | Pointers to external resources | Resource location and when to use it |
`memory/feedback_no_mocks.md`:

```markdown
---
name: No mocks in integration tests
description: Integration tests must use real services, not mocks
type: feedback
---

Integration tests must hit a real database, not mocks.

**Why:** Last quarter, mocked tests passed but the prod migration
failed because mocks diverged from actual behavior.

**How to apply:** When writing integration tests, always use the
staging environment. Only use mocks for unit tests of pure logic.
```
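The `<type>_<topic>.md` convention is easy to parse mechanically, which is part of what makes typed memories machine-searchable. A hypothetical sketch:

```typescript
type MemoryType = "user" | "feedback" | "project" | "reference";
const MEMORY_TYPES: MemoryType[] = ["user", "feedback", "project", "reference"];

// Parse "feedback_no_mocks.md" into { type: "feedback", topic: "no_mocks" }.
// Returns null for files that don't follow the convention (e.g. daily logs).
function parseMemoryFilename(file: string): { type: MemoryType; topic: string } | null {
  const m = file.match(/^([a-z]+)_(.+)\.md$/);
  if (!m) return null;
  const type = m[1] as MemoryType;
  return MEMORY_TYPES.includes(type) ? { type, topic: m[2] } : null;
}
```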
WISDOM.md — Distilled, high-signal principles derived from compacting multiple typed memories. Compact, stable, rarely changes. Read second after SOUL.md.
A good wisdom entry is compact, stable, and high-signal. For example, three memories:
- `feedback_no_mocks.md` — "Don't mock the database in tests"
- `feedback_real_api.md` — "Use real API calls in integration tests"
- `project_staging_env.md` — "Staging environment was set up for realistic testing"

These become one wisdom entry:
> **Test against reality** — Integration tests use real services, not mocks. Mock/prod divergence has caused incidents. Prefer the staging environment over in-process fakes.
The three source memories are deleted. The wisdom entry persists.
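The bookkeeping of that distillation step can be modeled as a pure transformation. A hypothetical sketch, where `distill` stands in for the model-driven summarization:

```typescript
// Hypothetical model of the distillation bookkeeping (not the CLI's API).
interface MemoryStore {
  typed: Map<string, string>; // filename -> body
  wisdom: string[];           // entries in WISDOM.md
}

// Replace the given source memories with a single distilled wisdom entry.
// `distill` stands in for the model-driven summarization step.
function compactToWisdom(
  store: MemoryStore,
  sources: string[],
  distill: (bodies: string[]) => string
): void {
  const bodies = sources
    .map((f) => store.typed.get(f))
    .filter((b): b is string => b !== undefined);
  if (bodies.length === 0) return;
  store.wisdom.push(distill(bodies));             // the wisdom entry persists
  for (const f of sources) store.typed.delete(f); // the sources are deleted
}
```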
The /compact command runs two independent pipelines:
- **Episodic:** daily logs compact into `memory/weekly/YYYY-Wnn.md`; weekly summaries older than 52 weeks compact into `memory/monthly/YYYY-MM.md`.
- **Semantic:** durable facts in `memory/` typed-memory files distill into `WISDOM.md`. (Code history is not duplicated into memory; it remains in `git log` / `git blame`.)

`@teammates/recall` is bundled as a direct dependency of the CLI. It provides a local vector database with semantic search across all teammate memory files, queried automatically before every task. It uses Vectra as the vector store and transformers.js for on-device embeddings (Xenova/all-MiniLM-L6-v2, 384 dimensions). Zero cloud dependencies — everything runs locally.
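To make the retrieval idea concrete without downloading the real model, here is a toy stand-in: bag-of-words vectors and cosine similarity in place of MiniLM's 384-dimensional embeddings. Illustrative only; the actual package uses transformers.js and Vectra.

```typescript
// Toy stand-in for semantic recall: word-count vectors + cosine similarity.
// The real system uses 384-dim MiniLM embeddings, but the shape of the
// "embed -> score -> rank" loop is the same.
function embed(text: string): Map<string, number> {
  const v = new Map<string, number>();
  for (const w of text.toLowerCase().match(/[a-z]+/g) ?? []) v.set(w, (v.get(w) ?? 0) + 1);
  return v;
}

function cosine(a: Map<string, number>, b: Map<string, number>): number {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
}

function rank(query: string, docs: string[]): string[] {
  const q = embed(query);
  return [...docs].sort((d1, d2) => cosine(q, embed(d2)) - cosine(q, embed(d1)));
}
```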
The CLI queries the recall index before every task, using the task prompt as the search query. Relevant episodic summaries and typed memories are injected into the teammate’s context window automatically — no manual search needed.
- Before each task, the CLI calls `queryRecallContext()` with the task prompt as the query. Results are injected into the prompt between `WISDOM.md` and the daily logs.
- After `/compact`, the CLI calls `syncRecallIndex()` to pick up any new or changed files.
- The index lives at `.teammates/<name>/.index/` (gitignored, rebuildable).

| Content | Indexed | Reason |
|---|---|---|
| Weekly summaries | Yes | Primary episodic search surface (52-week window) |
| Monthly summaries | Yes | Long-term episodic context (permanent) |
| Typed memories | Yes | Searchable semantic knowledge |
| Raw daily logs | No | Already in prompt context (last 7 days), too noisy for search |
| SOUL.md / WISDOM.md | No | Always loaded directly into prompt |
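The table reduces to a small predicate over paths relative to `.teammates/<name>/`. A hypothetical sketch, not the recall package's actual API:

```typescript
// Decide whether a memory-relative path belongs in the recall index,
// following the indexing table above. Paths are relative to .teammates/<name>/.
function shouldIndex(relPath: string): boolean {
  if (relPath === "SOUL.md" || relPath === "WISDOM.md") return false;   // always in prompt
  if (/^memory\/weekly\/\d{4}-W\d{2}\.md$/.test(relPath)) return true;  // weekly summaries
  if (/^memory\/monthly\/\d{4}-\d{2}\.md$/.test(relPath)) return true;  // monthly summaries
  if (/^memory\/\d{4}-\d{2}-\d{2}\.md$/.test(relPath)) return false;    // raw daily logs
  if (/^memory\/[a-z]+_[^/]+\.md$/.test(relPath)) return true;          // typed memories
  return false;
}
```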
- Per-teammate `memory/` for domain-specific knowledge
- `CROSS-TEAM.md` — notes tagged with the affected teammates
- Files under `.teammates/<name>/` can be shared by adding a pointer in `CROSS-TEAM.md`

Both Teammates and OpenClaw use plain Markdown files as the source of truth for memory. Both systems solve the same fundamental problem: AI agents lose context between sessions. The approaches differ significantly in complexity and design philosophy.
| Aspect | Teammates | OpenClaw |
|---|---|---|
| Daily logs | `memory/YYYY-MM-DD.md` | `memory/YYYY-MM-DD.md` |
| Long-term memory | `WISDOM.md` (distilled) + typed memories | `MEMORY.md` (curated, flat) |
| Structure | 3-tier hierarchy with compaction | 2-layer (daily + curated) |
| Episodic summaries | Weekly + monthly compacted files | None |
| Per-entity scope | Per-teammate (each has own `memory/`) | Per-agent |
Both systems use daily logs in the same memory/YYYY-MM-DD.md format with append-only semantics. The key structural difference is that Teammates introduces typed memories and wisdom as intermediate layers, while OpenClaw uses a single curated MEMORY.md file for all long-term knowledge.
| Aspect | Teammates | OpenClaw |
|---|---|---|
| Compaction | Two pipelines: episodic (daily → weekly → monthly) and semantic (typed → wisdom) | No built-in compaction; `MEMORY.md` is manually curated |
| Memory flush | Teammate writes at end of session by convention | Automatic pre-compaction flush (silent agentic turn before context window compaction) |
| Retention | Dailies deleted after weekly compaction; weeklies kept 52 weeks; monthlies permanent | Dailies accumulate indefinitely |
| Knowledge distillation | Typed memories → `WISDOM.md` via `/compact` | User manually curates `MEMORY.md` |
OpenClaw has an automatic memory flush that triggers when the session approaches context window compaction — a silent agentic turn reminds the model to write durable memories before context is lost. Teammates relies on convention (teammates write session notes before returning results).
| Aspect | Teammates (with recall) | OpenClaw |
|---|---|---|
| Search engine | teammates-recall — Vectra vector DB + transformers.js embeddings | Built-in vector index (default) or QMD sidecar |
| Embedding model | Local only — Xenova/all-MiniLM-L6-v2 (384-dim, ~23 MB) | Auto-selected: local, OpenAI, Gemini, Voyage, Mistral, Ollama |
| Storage | Per-teammate Vectra index at `.teammates/<name>/.index/` | Per-agent SQLite at `~/.openclaw/memory/<agentId>.sqlite` |
| Hybrid search | BM25 + vector (Vectra native) + recency pass, merged and deduped | BM25 + vector with configurable weights |
| Post-processing | Type-based priority boost (typed memories rank above episodic summaries) | MMR (diversity re-ranking) + temporal decay (recency boost) |
| Auto-sync | Yes — indexes new/changed files before every search | N/A — indexes on write |
| Session indexing | Not supported | Optional — indexes session transcripts for recall |
Both systems provide hybrid BM25+vector search with embeddings, combining keyword exactness with semantic understanding. Teammates leverages Vectra's native BM25+vector hybrid scoring, layered with a recency pass (the most recent weekly summaries are always included), then merges and deduplicates the results. OpenClaw similarly combines BM25 keyword matching with vector similarity using configurable weights. OpenClaw supports more embedding providers and adds MMR diversity re-ranking, while Teammates adds type-aware priority boosting (pre-extracted knowledge scores higher than episodic narrative) and runs fully offline with zero configuration.
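The Teammates merge described above, hybrid hits plus a recency pass, deduplicated by path, with typed memories boosted over episodic summaries, can be sketched as follows (the scores and the boost factor are invented for illustration):

```typescript
// Hypothetical sketch of result merging; the 1.25 boost is illustrative,
// not the recall package's actual value.
interface Hit { path: string; score: number; kind: "typed" | "episodic" }

// Merge hybrid-search hits with a guaranteed-recency pass, dedupe by path
// (keeping the higher score), and boost typed memories over summaries.
function mergeHits(hybrid: Hit[], recency: Hit[], boost = 1.25): Hit[] {
  const byPath = new Map<string, Hit>();
  for (const h of [...hybrid, ...recency]) {
    const prev = byPath.get(h.path);
    if (!prev || h.score > prev.score) byPath.set(h.path, h);
  }
  return [...byPath.values()]
    .map((h) => ({ ...h, score: h.kind === "typed" ? h.score * boost : h.score }))
    .sort((a, b) => b.score - a.score);
}
```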
| Aspect | Teammates | OpenClaw |
|---|---|---|
| Philosophy | Structure through convention | Power through infrastructure |
| Complexity | Low — plain Markdown, no config | High — SQLite, embeddings, dual backends, 98+ source files |
| Dependencies | Recall bundled (Vectra + transformers.js, all local) | SQLite, embedding providers, optional QMD sidecar |
| Cloud requirements | None | Optional (remote embedding providers) |
| Configuration surface | Minimal (file naming conventions) | Extensive (chunking, weights, batch indexing, caching, scoping) |
| Multi-agent coordination | Built-in (CROSS-TEAM.md, handoffs, ownership) | Per-agent isolation |
Teammates prioritizes simplicity and multi-agent coordination. The memory system is plain Markdown with naming conventions — any AI tool that can read files can participate. The framework is designed for teams of specialized agents that share context through explicit handoffs and cross-team notes.
OpenClaw prioritizes search quality and retrieval sophistication. It invests heavily in making memory_search return the best possible results through hybrid scoring, diversity re-ranking, and temporal decay. The tradeoff is a much larger configuration surface and infrastructure footprint.
Both systems store Markdown as the source of truth and both provide hybrid BM25+vector search with embeddings. Teammates adds semantic structure on top (types, tiers, compaction rules) with automatic recall queries bundled into the CLI — zero configuration, fully local. OpenClaw adds more search infrastructure (MMR, temporal decay, multiple providers) and offers finer-grained tuning at the cost of a larger configuration surface.