TENET uses text-embedding-3-small (1536 dimensions) for semantic search. Embeddings are computed at index time and backfilled automatically for entries that were indexed while API keys were missing.

Provider Fallback

1. Try OPENAI_API_KEY → text-embedding-3-small
2. If that fails → try OPENROUTER_API_KEY → openai/text-embedding-3-small
3. If both fail → the memory is stored without an embedding (BM25 search only)
4. Periodic auto-backfill later fills in the missing embeddings
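The fallback order above can be sketched as a small shell helper. The function name and the provider strings it returns are illustrative, not part of the TENET codebase:

```shell
# Sketch of the provider fallback chain (names are hypothetical).
# Returns which embedding provider would be used given the current env.
pick_embedding_provider() {
  if [ -n "${OPENAI_API_KEY:-}" ]; then
    echo "openai:text-embedding-3-small"
  elif [ -n "${OPENROUTER_API_KEY:-}" ]; then
    echo "openrouter:openai/text-embedding-3-small"
  else
    # No keys: memory is stored without an embedding, BM25 only
    echo "none:bm25-only"
  fi
}
```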

Auto-Backfill

The periodic indexer automatically backfills missing embeddings:
  • Runs on the first tick after hub startup
  • Runs whenever new entries are added
  • Truncates content longer than 28K characters (the model accepts ~8K tokens)
  • Stops after 3 consecutive null embedding responses (a consecutive-null counter, 3 strikes)
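The truncation step can be sketched in bash. The helper name is hypothetical; only the 28K-character limit comes from the list above:

```shell
# Illustrative pre-embedding truncation (hypothetical helper, not TENET code).
# Content beyond 28,000 characters is cut off before being sent to the model.
truncate_for_embedding() {
  local content="$1"
  printf '%s' "${content:0:28000}"
}
```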

Manual Backfill

curl -X POST http://localhost:4360/api/memory/index \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"backfill": true}'

Stats

jfl memory status
# embeddings: {available: true, count: 349, model: "openrouter/text-embedding-3-small"}