The Problem
Every AI agent starts from zero. Every session. The agent has no memory of what worked yesterday, no model of the domain it operates in, no learned preferences from the human it serves. Session ends, context gone.

What TENET Does
TENET provides the closed loop that connects memory, training, policy, and eval into a system that compounds:

- Memory — every decision, pattern, and piece of learned knowledge persists across sessions via a searchable knowledge graph with embeddings
- Training — every agent action produces a (state, action, outcome) tuple that feeds the learning loop
- Policy — an RL-trained action selector that biases agents toward what historically worked for YOUR project
- Eval — continuous A/B testing that measures real improvement and catches regression
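The (state, action, outcome) tuples above can be sketched as a simple data shape. This is a minimal illustration, not TENET's actual schema: `Transition`, `action_scores`, and the field names are hypothetical stand-ins for the memory store and policy head.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical record shape: each agent action becomes one
# (state, action, outcome) tuple that memory persists.
@dataclass
class Transition:
    state: dict[str, Any]  # context snapshot: task, files touched, etc.
    action: str            # what the agent did
    outcome: float         # reward signal, e.g. 1.0 if tests passed

# An in-memory list standing in for the knowledge graph.
log: list[Transition] = []
log.append(Transition(
    state={"task": "fix flaky test", "file": "tests/test_io.py"},
    action="add_retry_decorator",
    outcome=1.0,
))

def action_scores(transitions: list[Transition]) -> dict[str, float]:
    """Average outcome per action — what a policy head could bias toward."""
    totals: dict[str, list[float]] = {}
    for t in transitions:
        totals.setdefault(t.action, []).append(t.outcome)
    return {a: sum(v) / len(v) for a, v in totals.items()}
```

The point of the sketch: once every action is logged in this shape, "bias agents toward what historically worked" reduces to ranking actions by observed outcome.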
How It Compounds
Week 1: You work, TENET watches
You and your agents work normally using any tool. TENET captures decisions, code change outcomes, and patterns. Journals accumulate. Memory indexes.
Week 2: The world model forms
TENET knows your naming patterns, architecture preferences, which approaches work in your codebase. Agents get better suggestions from memory search.
Month 1: Agents improve overnight
The policy head has enough data. RL agents try improvements while you sleep — eval against your metrics, keep what works, revert what doesn’t. You wake up to PRs.
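The overnight loop described above — try, eval, keep or revert — can be sketched in a few lines. Everything here is illustrative: `propose_change` stands in for an RL-guided proposal step, and the float metric stands in for your real eval suite.

```python
import random

def propose_change(baseline: float) -> float:
    # Stand-in proposal: shifts the metric by a small random delta.
    return baseline + random.uniform(-0.05, 0.05)

def overnight_loop(baseline: float, attempts: int = 10) -> float:
    """Try improvements; keep what evaluates better, revert what doesn't."""
    best = baseline
    for _ in range(attempts):
        candidate = propose_change(best)
        if candidate > best:   # eval against your metrics
            best = candidate   # keep what works
        # else: revert (a no-op here — the candidate is discarded)
    return best
```

Because losing candidates are discarded, the metric never falls below the baseline — that is the "catches regression" property of the eval stage.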
Works With Any Harness
TENET provides context via MCP. Use it with:

- TENET harness (built-in TUI with extensions, skills, RPC mode)
- Claude Code (Anthropic’s CLI agent)
- Cursor (IDE-native agent)
- Custom scripts (anything that can call MCP tools)
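To make "anything that can call MCP tools" concrete: MCP is JSON-RPC 2.0 under the hood, so a custom script only needs to construct a `tools/call` request. The tool name `memory_search` and its arguments below are hypothetical, not a documented TENET tool.

```python
import json

# A JSON-RPC 2.0 request invoking an MCP tool (MCP's tools/call method).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "memory_search",  # hypothetical TENET tool name
        "arguments": {"query": "naming conventions for handlers"},
    },
}
payload = json.dumps(request)
# In a real script, this payload is sent to the MCP server over stdio or HTTP.
```

Any language that can serialize JSON and talk to a process or socket can therefore drive TENET, which is all the last bullet claims.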
For Solo Developers and Teams
Solo developer
One TENET workspace. Your agents learn your patterns, preferences, and codebase. Agents improve your metrics overnight.
Team / Organization
A parent TENET workspace scopes child workspaces per service. Each service has its own context and agents. A new hire's agents inherit full context from day one.
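The parent/child scoping can be pictured as a small hierarchy. This is purely illustrative — the workspace names, field names, and inheritance rule are assumptions, not TENET's actual configuration schema.

```python
# Hypothetical workspace hierarchy: one parent, one child per service.
org = {
    "workspace": "acme-org",
    "children": [
        {"workspace": "billing-service", "context": "billing"},
        {"workspace": "auth-service", "context": "auth"},
    ],
}

def inherited_context(org: dict, service: str) -> list[str]:
    """A service's agents see the parent context plus their own scope."""
    for child in org["children"]:
        if child["workspace"] == service:
            return [org["workspace"], child["context"]]
    return [org["workspace"]]
```

Under this model, a new hire assigned to `auth-service` starts with both the organization-wide context and the service-local one, which is the "full context from day one" claim above.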