TENET’s memory system ensures nothing is forgotten between sessions. Every journal entry, code change, and decision is indexed, embedded, and searchable.

What Gets Remembered

Source             What's Indexed                                    How Often
Journal entries    Features, fixes, decisions, discoveries, pivots   Every 60 seconds
Code headers       @purpose annotations from source files            Every 5 minutes
Manual memories    Insights, notes, decisions via jfl_memory_add     On demand

How Search Works

TENET uses hybrid search — combining lexical and semantic approaches for best results:
Query: "CLI startup optimization"
         |
    +----+----+
    |         |
  BM25+    Embedding
  (lexical) (semantic)
    |         |
    +----+----+
         |
   Reciprocal Rank
      Fusion (RRF)
         |
    Ranked Results

BM25+ (Always Available)

Term-frequency scoring with:
  • Stopword removal and phrase detection
  • Adaptive document length normalization
  • Query term weighting based on IDF
  • Positive IDF floor (BM25+ variant) — common terms still contribute
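To make the BM25+ bullets concrete, here is a minimal scoring sketch. The parameter names (k1, b, delta) are the standard ones from the BM25 literature, and the document/interface shapes are illustrative assumptions, not TENET's actual implementation:

```typescript
interface Doc {
  id: string;
  terms: string[]; // tokenized document text
}

// Minimal BM25+ scorer sketch (illustrative, not TENET's code).
function bm25Plus(
  query: string[],
  docs: Doc[],
  k1 = 1.2,
  b = 0.75,
  delta = 1.0
): { id: string; score: number }[] {
  const N = docs.length;
  const avgLen = docs.reduce((s, d) => s + d.terms.length, 0) / N;

  // Document frequency per query term, for IDF.
  const df = new Map<string, number>();
  for (const term of new Set(query)) {
    df.set(term, docs.filter((d) => d.terms.includes(term)).length);
  }

  return docs
    .map((d) => {
      let score = 0;
      for (const term of query) {
        const n = df.get(term) ?? 0;
        // Math.max(..., 0) keeps IDF non-negative, so very common
        // terms still contribute (the "positive IDF floor").
        const idf = Math.max(Math.log((N - n + 0.5) / (n + 0.5) + 1), 0);
        const tf = d.terms.filter((t) => t === term).length;
        const norm =
          (tf * (k1 + 1)) /
          (tf + k1 * (1 - b + (b * d.terms.length) / avgLen));
        // BM25+ adds delta as a lower bound for any matching term,
        // so long documents aren't unfairly penalized.
        score += idf * (norm + (tf > 0 ? delta : 0));
      }
      return { id: d.id, score };
    })
    .sort((x, y) => y.score - x.score);
}

const docs: Doc[] = [
  { id: "a", terms: ["cli", "startup", "optimization"] },
  { id: "b", terms: ["memory", "graph", "edges"] },
];
const ranked = bm25Plus(["startup", "optimization"], docs);
// ranked[0].id is "a": it contains both query terms.
```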

Semantic Search (When Embeddings Available)

Cosine similarity on text-embedding-3-small vectors:
  • 1536 dimensions
  • OpenAI or OpenRouter fallback
  • Auto-backfill: if key was missing when indexed, embeddings are added later
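The semantic half reduces to cosine similarity over the 1536-dimension vectors. A generic sketch (this is the standard formula, not TENET-specific code):

```typescript
// Cosine similarity between two embedding vectors:
// dot(a, b) / (|a| * |b|), in [-1, 1] for real-valued vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const same = cosine([1, 0, 0], [1, 0, 0]); // identical direction → 1
const orthogonal = cosine([1, 0, 0], [0, 1, 0]); // unrelated → 0
```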

Reciprocal Rank Fusion

Merges BM25+ and embedding results by rank position rather than raw scores. Because the two scorers produce scores on incompatible scales, rank-based fusion is more robust than linear interpolation: it needs no score normalization.
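The rank-position fusion described above can be sketched in a few lines. The constant k = 60 is the conventional value from the original RRF formulation; whether TENET uses the same constant is an assumption:

```typescript
// Reciprocal Rank Fusion sketch: each ranked list contributes
// 1 / (k + rank) per result, and totals are sorted descending.
function rrfMerge(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, i) => {
      // i is 0-based, so the top result contributes 1 / (k + 1).
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((x, y) => y[1] - x[1])
    .map(([id]) => id);
}

// "a" is 1st lexically and 2nd semantically, so it wins overall.
const merged = rrfMerge([
  ["a", "b", "c"], // BM25+ ranking
  ["c", "a", "b"], // embedding ranking
]);
```

Note that a document missing from one list simply receives no contribution from it, so partial overlap between the two rankings is handled without special cases.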

Current Stats

jfl memory status
total_memories: 349
by_type: {feature: 96, milestone: 96, decision: 55, fix: 40, ...}
embeddings: {available: true, count: 349, model: "openrouter/text-embedding-3-small"}
349/349 memories embedded — zero gaps.

Graph Edges

Memories aren’t isolated. They connect to each other:
Edge Type      Meaning                        Example
updates        New info supersedes old        "CLI speed now 98ms" updates "CLI speed was 6.7s"
contradicts    New finding invalidates old    "Connection pooling helps" contradicts "Keep connections short"
related_to     Topically connected            Memory about eval system → related to agent config
caused_by      Causal relationship            "Test failures" caused_by "Dependency upgrade"
part_of        Hierarchical grouping          Session memories → part_of project milestone
# Add a link via API
curl -X POST http://localhost:4360/api/memory/link \
  -H "Content-Type: application/json" \
  -d '{"from": 42, "to": 17, "type": "updates"}'
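One useful consequence of typed edges is that an `updates` chain can be followed to the newest version of a fact. This is a hypothetical in-memory sketch; the edge direction follows the API example above (`from` is the newer memory), but TENET's actual storage and traversal code may differ:

```typescript
interface Edge {
  from: number; // newer memory id
  to: number;   // superseded memory id
  type: string;
}

// Follow "updates" edges until no newer memory exists.
function latestVersion(id: number, edges: Edge[]): number {
  const newer = edges.find((e) => e.type === "updates" && e.to === id);
  return newer ? latestVersion(newer.from, edges) : id;
}

const edges: Edge[] = [
  { from: 42, to: 17, type: "updates" },
  { from: 99, to: 42, type: "updates" },
];
// Chain: 17 → 42 → 99, so memory 99 is the current version.
const current = latestVersion(17, edges);
```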

Code Header Indexing

Files with @purpose annotations are automatically indexed:
/**
 * Memory Indexer Module
 *
 * @purpose Automatic indexing of journal entries and code headers
 */
This creates a searchable memory entry:
  • Source: file
  • Type: code-header
  • Content: src/lib/memory-indexer.ts: Automatic indexing of journal entries and code headers
Scans src/, packages/, scripts/, and eval/. Entries are updated when the @purpose text changes and deduplicated by file path.
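Extracting the annotation amounts to a small header parse. The regex and return shape here are illustrative assumptions, not TENET's exact implementation:

```typescript
// Pull the @purpose line out of a file's header comment.
// Returns null when the file has no annotation (and is skipped).
function extractPurpose(source: string): string | null {
  const match = source.match(/@purpose\s+(.+)/);
  return match ? match[1].trim() : null;
}

const header = `/**
 * Memory Indexer Module
 *
 * @purpose Automatic indexing of journal entries and code headers
 */`;

const purpose = extractPurpose(header);
// purpose → "Automatic indexing of journal entries and code headers"
```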

Knowledge Doc Lifecycle

Knowledge docs (VISION.md, THESIS.md, etc.) are audited for staleness:
jfl organize
  Doc Health

  ✓  VISION     0d old  drift:   20%  mentions: 221
  ✓  THESIS     0d old  drift:   20%  mentions: 223
  ✓  NARRATIVE  0d old  drift:   20%  mentions: 206

  0 of 5 docs need attention
When docs drift from journal evidence, jfl organize generates a PENDING.md with proposed updates and open questions for human review.
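The doc says nothing about how the drift percentage is computed, so the following is purely an illustrative heuristic, not TENET's metric: one plausible approach scores a doc by the share of recent journal topics it fails to mention.

```typescript
// Illustrative drift heuristic (NOT TENET's actual metric):
// the fraction of recent journal topics absent from the doc text.
function driftPercent(docText: string, recentTopics: string[]): number {
  const text = docText.toLowerCase();
  const missing = recentTopics.filter(
    (t) => !text.includes(t.toLowerCase())
  );
  return Math.round((missing.length / recentTopics.length) * 100);
}

const drift = driftPercent("This doc covers memory and search.", [
  "memory",
  "search",
  "graph",
  "edges",
  "embeddings",
]);
// 3 of 5 topics are missing → 60% drift.
```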

Search

BM25+ hybrid search internals and query optimization.

Graph Edges

Structured relationships between memories.

Embeddings

Auto-backfill, model selection, and fallback behavior.

Code Headers

Indexing @purpose annotations from source files.