Self-Hosted Memory for Claude

The question

Can Claude Code keep memory across sessions without a hosted service?

Where it started (March 2026)

Plain markdown files in three folders. No database, no embeddings, no index.

~/.claude-memory/
├── memories/          (general facts and preferences I want kept)
├── lessons-learned/   (patterns that worked, decisions and why)
└── errors-made/       (mistakes Claude made, so it does not make them again)

A small skill exposed read, write, and list over those folders. At session start, the file names plus first lines were dropped into the system prompt as a one-screen index. When Claude wanted detail, it asked for the file by name.
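
The skill itself stayed small. A sketch of the index step, assuming the folder layout above (build_index and the names in it are my reconstruction, not the original code):

    import pathlib

    MEMORY_ROOT = pathlib.Path.home() / ".claude-memory"
    FOLDERS = ["memories", "lessons-learned", "errors-made"]

    def build_index() -> str:
        """One-screen index: each file's name plus its first line."""
        lines = []
        for folder in FOLDERS:
            for path in sorted((MEMORY_ROOT / folder).glob("*.md")):
                text = path.read_text(encoding="utf-8")
                first = text.splitlines()[0] if text.strip() else "(empty)"
                lines.append(f"{folder}/{path.name}: {first}")
        return "\n".join(lines)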

It worked because the file system is already a database. Grep is the search engine. A markdown file is the row. Editing memory was just opening a .md and saving.
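
"Grep is the search engine" was literal. Something like this, with search_memory as my stand-in name:

    import pathlib
    import subprocess

    MEMORY_ROOT = pathlib.Path.home() / ".claude-memory"

    def search_memory(pattern: str) -> str:
        """Recursive, case-insensitive grep over all memory files."""
        result = subprocess.run(
            ["grep", "-rin", "--include=*.md", pattern, str(MEMORY_ROOT)],
            capture_output=True,
            text=True,
        )
        return result.stdout  # grep exits 1 on no match, leaving stdout empty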

Where it is now (May 2026)

The file-based prototype was a great proof of concept. It also scaled badly. After a few hundred sessions the folders got noisy and the manual curation got tiring. So I switched to claude-mem — a maintained SQLite + Chroma stack that handles indexing, embeddings, and retrieval automatically. Plus I kept a small file-based memory layer for cross-conversation rules — same idea as the original, just smaller scope.

Two layers now:

Layer 1 — claude-mem (SQLite + Chroma). Auto-captures session content. Embeds with a local model. Retrieves by semantic search.
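
For a feel of what that retrieval looks like, here is a minimal query against a local Chroma store. The path and the collection name are my assumptions; claude-mem's internals may differ:

    import pathlib

    import chromadb

    # "sessions" is a stand-in collection name; claude-mem's schema is its own.
    client = chromadb.PersistentClient(
        path=str(pathlib.Path.home() / ".claude-mem" / "chroma")
    )
    collection = client.get_or_create_collection("sessions")

    # Chroma embeds the query text and returns the nearest stored chunks.
    hits = collection.query(query_texts=["how did we set up retries?"], n_results=5)
    for doc in hits["documents"][0]:
        print(doc[:100])

By default Chroma embeds with a small model that runs locally, so queries stay on the machine.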

Layer 2 — file-based feedback memory. A flat directory of small markdown files for “rules” that should always apply (formatting preferences, project context, do-this-not-that). Auto-loaded into every conversation.
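
Loading that layer is deliberately dumb: read every file, concatenate, prepend. A sketch, with load_rules as a hypothetical name:

    import pathlib

    def load_rules(rules_dir: pathlib.Path) -> str:
        """Concatenate every rule file in full; at ~70 KB total this is cheap."""
        parts = []
        for path in sorted(rules_dir.glob("*.md")):
            parts.append(f"## {path.name}\n{path.read_text(encoding='utf-8').strip()}")
        return "\n\n".join(parts)

    # In my setup this points at the per-project memory directory
    # (~/.claude/projects/<project>/memory/, per the numbers below).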

The numbers

After roughly six weeks of daily use:

What                                                 Size
~/.claude-mem/claude-mem.db                          17.4 GB
~/.claude-mem/chroma/chroma.sqlite3                  143 MB
~/.claude-mem/ total (incl. backups, embeddings)     33 GB
~/.claude/projects/<project>/memory/ (file-based)    ~70 KB across 9 markdown files

The big SQLite file is conversation history. The Chroma file is the vector index over that history. The file-based layer is tiny because it only holds rules, not transcripts.

What works

  • Auto-capture beats manual save. I do not remember to write things down. claude-mem does it for me.
  • Two-layer split. Rules live in tiny markdown files I can read with my eyes. History lives in the SQLite that Claude reads with semantic search.
  • No hosted service. Everything runs on my machine. Survives airplane mode, ISP outages, vendor pivots.

What does not work

  • The DB grows fast. 17 GB after six weeks. I will need a retention policy soon — keep the last 90 days at full fidelity, archive the rest as one-line summaries (sketched after this list).
  • Cross-project leakage. claude-mem indexes everything globally. Sometimes a result from project A surfaces in project B. Acceptable for now; would not survive a paid-client setup.
  • Backups are big. A pre-upgrade backup is the same 17 GB again. The ~/.claude-mem/backups/ folder needs pruning every few weeks.
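
For the retention item above, a sketch of what 90-days-hot could look like. The table and column names are stand-ins (claude-mem's real schema is not documented here), but the VACUUM step is the part that matters: deleting or shrinking rows alone never returns disk space from a SQLite file.

    import pathlib
    import sqlite3
    import time

    DB = str(pathlib.Path.home() / ".claude-mem" / "claude-mem.db")
    CUTOFF = time.time() - 90 * 24 * 3600  # unix timestamp, 90 days back

    # Hypothetical schema: sessions(created_at, content, summary).
    # Replace old full transcripts with their one-line summaries.
    con = sqlite3.connect(DB)
    con.execute(
        "UPDATE sessions SET content = summary "
        "WHERE created_at < ? AND summary IS NOT NULL",
        (CUTOFF,),
    )
    con.commit()
    con.execute("VACUUM")  # reclaim freed pages; this is what shrinks the file
    con.close()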

Verdict

Daily driver. The two-layer split (auto-history + manual rules) works better than either alone. The file-based rules cost almost nothing and prevent the same correction landing twice. The big SQLite costs disk but pays back every session.

Next move: the retention policy. 90 days hot, older summarised, and the disk footprint back under 5 GB.