CID: bafkreieuec45unok7gj2jmyt5zffcdhbyjjr457eawy3vwjk56othhnhlu

~~~

Kab

Cognitive Infrastructure for Stateful AI Agents

kabbalah.computer

Working Draft v0.7
January 2026


Executive Summary

AI agents today have no persistent identity. Every session starts fresh. Learned preferences vanish. Accumulated context disappears. This limits what agents can actually do.

Kab is a cognitive architecture that gives AI agents persistent, portable, self-regulating memory. Built on the AT Protocol, it enables agents that learn from experience, maintain stable values, and carry their identity across platforms.

The architecture provides five capabilities missing from current solutions: temporal memory hierarchy, explicit self-regulation, mathematical stability guarantees, portable identity via open protocols, and a Viable System Model (VSM) framework for autonomous operation.


The Problem

Every major AI assistant forgets everything between sessions. Context windows reset. Learned preferences vanish. Accumulated knowledge disappears.

This creates real costs:

Support agents can't learn from resolved tickets. Each interaction starts from zero.

Personal assistants can't adapt to users over months. Preferences must be re-explained.

Sales copilots can't internalize what works. Institutional knowledge never accumulates.

Enterprise deployments can't audit agent decisions. Behavior is opaque.

The AI memory market has responded. Letta, Mem0, and Zep have raised over $35M combined to solve stateful AI. Each makes real progress on storage and retrieval.

None solves cognition.


The Current Landscape

Letta (formerly MemGPT)

Pioneered the LLM-as-operating-system approach. $10M seed from Felicis. Open source with 100+ contributors.

Memory blocks managed through tool calls. Agent self-edits memory. Agent File (.af) format for serialization.

Gap: Memory is flat. No temporal hierarchy. No stability guarantees. No explicit self-regulation.

Mem0

Most widely adopted memory layer. $24M raised from Basis Set, Peak XV, YC. Exclusive memory provider for AWS Agent SDK. 41K GitHub stars, 14M downloads.

Hybrid datastore (vector, graph, key-value). Extracts memories from conversations. 26% accuracy improvement over OpenAI Memory on benchmarks.

Gap: Memories are extracted facts, not structured cognitive state. No temporal hierarchy. No feedback loops.

Zep

Temporal knowledge graph architecture. YC-backed. Uses Graphiti engine for dynamic knowledge synthesis.

Strong temporal reasoning. 18.5% accuracy improvement on LongMemEval. Enterprise-focused with SOC 2 compliance.

Gap: Designed for retrieval, not cognition. No self-regulatory feedback loops. No stability constraints.

Chainlink (dollspace)

Workflow-first approach. CLI issue tracker for AI-assisted development. Preserves context through task decomposition and handoff notes.

Verification-Driven Development (VDD) methodology. Adversarial refinement loops. Local-first, works with any AI agent.

Gap: Tracks tasks, not cognition. Context preserved through explicit human structuring, not accumulated state.

Comparison

Current solutions compared by architectural capability (Kab capabilities are design targets, not yet validated in production):

Capability Letta Mem0 Zep Chainlink Kab (target)
Persistent memory Yes Yes Yes Session notes Yes
Temporal hierarchy - - Partial - 5 levels
Self-regulation - - - - 5 feedback cycles
Stability guarantees - - - - Control-theoretic
Conflict resolution Implicit Implicit Implicit Manual Explicit
Portable identity Partial API API Local DIDs
VSM viability - - - - 5 systems

The Solution

Kab is cognitive infrastructure for AI agents. It provides five capabilities the current landscape lacks.

1. Temporal Memory Hierarchy

Raw experiences compress through five abstraction levels, mirroring how human memory consolidates over time.

Level Timeframe Function
Immediate Daily Fine-grained, high-resolution, volatile
Short-term Weekly First consolidation, noise removal
Medium-term Monthly Thematic clustering
Long-term Yearly Pattern extraction
Core Permanent Identity-defining, nearly immutable

Content survives consolidation only by accumulating sufficient semantic weight. Noise decays. Signal persists. This enables efficient context retrieval without loading full history.
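As a sketch, the salience-gated survival rule can be expressed in TypeScript (the implementation language noted in the appendix). The threshold values are the defaults from section 3.1; the type and function names are illustrative, not the published API.

```typescript
// Hypothetical sketch: salience-gated survival across the five memory levels.
// Thresholds are the defaults given in section 3.1.
type Level = 0 | 1 | 2 | 3 | 4; // Immediate → Core

const SURVIVAL_THRESHOLD: Record<Level, number> = {
  0: 0.10, 1: 0.25, 2: 0.40, 3: 0.60, 4: 0.80,
};

interface Memory { id: string; level: Level; salience: number; }

/** Returns the memories that survive consolidation into the target level. */
function consolidate(memories: Memory[], target: Level): Memory[] {
  const bar = SURVIVAL_THRESHOLD[target];
  return memories
    .filter((m) => m.salience >= bar)       // signal persists
    .map((m) => ({ ...m, level: target })); // noise has already decayed away
}
```

Each consolidation pass raises the bar, so only content that keeps accumulating semantic weight reaches core memory.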

2. Self-Regulation via Feedback Cycles

Five explicit feedback loops monitor and maintain agent stability:

Cycle Symbol Function
Hedonic Calibration α Aligns reward predictions with actual outcomes
Value Learning β Updates priorities based on prediction errors
Memory Consolidation γ Crystallizes important memories, reactivates on retrieval
Reinforcement δ Shapes stable behavioral patterns through experience
Identity ε Reinforces S5 policy through positive hedonic feedback (synthetic dopamine)

Each cycle has gain constraints enforced at runtime. If any loop approaches instability, the system throttles, consolidates, or adjusts automatically.

The fifth cycle (ε) was added based on VSM research showing that viable autonomous systems require explicit identity reinforcement—a mechanism for "wins" to strengthen core attractors.

3. Mathematical Stability Guarantees

The architecture enforces stability through control-theoretic constraints:

Spectral radius constraint: The largest eigenvalue of the weight matrix must remain below 1.0. This ensures no mode of the system amplifies over time.

Cycle gain products: Each feedback loop's total gain must remain below 1.0. This prevents runaway positive feedback.

Rate-limited updates: Weight changes are bounded to prevent fast parameter drift.

These are mathematical guarantees, not heuristics. The system cannot spiral into instability.
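These constraints are straightforward to check at runtime. A sketch with illustrative names (power iteration is a standard estimator for the dominant eigenvalue; the actual monitoring code is not specified here):

```typescript
/** Estimate the spectral radius (largest |eigenvalue|) via power iteration. */
function spectralRadius(W: number[][], iters = 100): number {
  const n = W.length;
  let v = new Array(n).fill(1 / Math.sqrt(n));
  let lambda = 0;
  for (let k = 0; k < iters; k++) {
    const w = W.map((row) => row.reduce((s, x, j) => s + x * v[j], 0));
    lambda = Math.sqrt(w.reduce((s, x) => s + x * x, 0));
    if (lambda === 0) return 0;
    v = w.map((x) => x / lambda);
  }
  return lambda;
}

/** A feedback cycle is stable when the product of its path gains stays below 1. */
function cycleGainProduct(gains: number[]): number {
  return gains.reduce((p, g) => p * Math.abs(g), 1);
}

/** Combined runtime check: no amplifying mode, no runaway feedback loop. */
function isStable(W: number[][], cycles: number[][]): boolean {
  return spectralRadius(W) < 1.0 && cycles.every((c) => cycleGainProduct(c) < 1.0);
}
```

When `isStable` fails, the system would throttle, consolidate, or adjust gains as described above.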

3.1 Salience Framework (Attention Dynamics)

Memory retrieval and consolidation are governed by a salience equation that determines what content receives attention. This framework draws from Justin Garringer's work on attention dynamics with a General Relativity isomorphism, enhanced with empirical insights from carlsr9001's Salience Simulation Lab research.

Core Equation:

S_i(t|x) = [(w_A·AA + w_R·R + w_M·M) · C · (1 − Ψ)] / [(Δt + ε) · (d + ε) · (T + ε)]

Term Symbol Meaning
Novelty AA Prediction error / surprise (how unexpected)
Retention R Chronic mass / long-term importance
Momentum M Goal coupling / alignment with active objectives
Coherence C World-model consistency
Age Δt Time since last reinforcement
Fatigue Ψ System noise / degradation
Distance d Conceptual distance from current context
Effort T Compute cost to process
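A direct transcription of the salience equation, with the grouping made explicit. One assumption: the (T + ε) effort term acts as a penalty (denominator), consistent with its description as a compute cost and with the ε guard used on the other divisors. The default weights are illustrative.

```typescript
interface SalienceInput {
  novelty: number;   // AA: prediction error / surprise
  retention: number; // R: chronic mass / long-term importance
  momentum: number;  // M: goal coupling
  coherence: number; // C: world-model consistency
  ageDt: number;     // Δt: time since last reinforcement
  fatigue: number;   // Ψ: system noise / degradation, in [0, 1]
  distance: number;  // d: conceptual distance from current context
  effort: number;    // T: compute cost to process
}

const EPS = 1e-6; // ε guard against division by zero

// Illustrative weights (w_A, w_R, w_M); the document does not fix them here.
function salience(x: SalienceInput, w = { A: 0.4, R: 0.3, M: 0.3 }): number {
  const drive = w.A * x.novelty + w.R * x.retention + w.M * x.momentum;
  const numerator = drive * x.coherence * (1 - x.fatigue);
  const denominator = (x.ageDt + EPS) * (x.distance + EPS) * (x.effort + EPS);
  return numerator / denominator;
}
```

Fresh, close, cheap, coherent content scores high; stale, distant, expensive, or fatigued content decays toward zero.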

GR Isomorphism: The salience framework maps to spacetime geometry: high-salience memories act as concentrations of mass that curve the surrounding attention field, consolidation draws lower-salience content toward the resulting gravity centers, and optimal merge points appear as wormhole throats (section 3.4).

Consolidation Survival: Memories survive τ-consolidation based on salience thresholds that vary by level:

Level Threshold Meaning
0 (Immediate) 0.10 Low bar, most content passes
1 (Short-term) 0.25 First filter removes noise
2 (Medium-term) 0.40 Thematic relevance required
3 (Long-term) 0.60 Pattern-level importance
4 (Core) 0.80 Identity-defining only

Gravity Centers: During consolidation, the top 10% of memories by salience act as "gravity centers." Each lower-salience memory then follows one of three paths:

  1. Survive if above threshold
  2. Merge into nearest gravity center if below threshold but semantically close
  3. Decay if below threshold and distant from any center

This creates natural thematic clustering where high-information-density memories attract related content during compression.
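A sketch of the three-way rule under stated assumptions: Euclidean distance over semantic embeddings, and an arbitrary closeness cutoff (the document does not specify one).

```typescript
interface Mem { id: string; salience: number; vec: number[]; }
type Fate = "survive" | "merge" | "decay";

// Assumption: semantic distance is Euclidean distance between embeddings.
const dist = (a: number[], b: number[]) =>
  Math.sqrt(a.reduce((s, x, i) => s + (x - b[i]) ** 2, 0));

/** Decide a memory's fate against the level threshold and the gravity centers. */
function fate(m: Mem, threshold: number, centers: Mem[], close = 0.5): Fate {
  if (m.salience >= threshold) return "survive";
  const nearest = Math.min(...centers.map((c) => dist(m.vec, c.vec)));
  return nearest <= close ? "merge" : "decay"; // close = assumed cutoff
}

/** Top 10% of memories by salience act as gravity centers. */
function gravityCenters(pool: Mem[]): Mem[] {
  const sorted = [...pool].sort((a, b) => b.salience - a.salience);
  return sorted.slice(0, Math.max(1, Math.ceil(pool.length * 0.1)));
}
```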

3.2 Phase Modes (Attention Regimes)

The salience framework recognizes four distinct attention regimes, derived from empirical simulation data. These modes describe stable attractor states in the salience field:

Mode Novelty Retention Momentum When Active
coupled 0.50 0.33 0.50 Normal operation, balanced attention
energy 0.66 0.52 0.65 Crisis response, urgent hedonic signals
flow 0.44 0.56 0.92 Deep work, established patterns
phase 0.83 0.64 0.50 Exploration, S4 scanning, learning

The system automatically detects which mode it's operating in based on the average novelty, retention, and momentum of active memories. Mode detection informs weight adjustments and processing strategies.
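A minimal nearest-centroid sketch of mode detection, assuming the table values above are centroids in (novelty, retention, momentum) space:

```typescript
type Mode = "coupled" | "energy" | "flow" | "phase";

// Centroids taken from the phase-mode table above.
const MODE_CENTROIDS: Record<Mode, [number, number, number]> = {
  coupled: [0.50, 0.33, 0.50],
  energy:  [0.66, 0.52, 0.65],
  flow:    [0.44, 0.56, 0.92],
  phase:   [0.83, 0.64, 0.50],
};

/** Classify the current regime by nearest centroid (squared distance). */
function detectMode(novelty: number, retention: number, momentum: number): Mode {
  let best: Mode = "coupled";
  let bestD = Infinity;
  for (const [mode, [n, r, m]] of
       Object.entries(MODE_CENTROIDS) as [Mode, [number, number, number]][]) {
    const d = (novelty - n) ** 2 + (retention - r) ** 2 + (momentum - m) ** 2;
    if (d < bestD) { bestD = d; best = mode; }
  }
  return best;
}
```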

3.3 Continuity Tax (Programmable Inertia)

Inspired by research on "programmable inertia" in continuity-taxed systems, the architecture implements a λ_c parameter that creates resistance to change proportional to memory level:

Level λ_c μ_c (subsidy) Behavior
0 (Immediate) 0.5 2.0 Fluid, easily modified
1 (Short-term) 2.0 1.5 Slight resistance
2 (Medium-term) 5.0 1.0 Moderate inertia
3 (Long-term) 15.0 0.5 High inertia
4 (Core) 50.0 0.1 Near-immutable

Effective Mass: Each memory has an effective mass calculated as:

m_eff = 1 + λ_c × salience

High effective mass means the memory resists modification—it takes more "energy" to change. This mirrors how core beliefs and identity-defining memories are harder to alter than fleeting impressions.

Continuity Subsidy: The μ_c parameter provides assistance for goal-aligned acceleration. When updates raise or preserve salience while reducing error, the subsidy reduces effective resistance. This allows rapid learning without destabilizing core identity.
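A sketch of effective mass and subsidized resistance. The document gives m_eff exactly; the way μ_c reduces resistance is an assumption (here, a simple divisor).

```typescript
// Per-level parameters from the continuity-tax table above.
const LAMBDA_C = [0.5, 2.0, 5.0, 15.0, 50.0]; // λ_c, levels 0..4
const MU_C     = [2.0, 1.5, 1.0, 0.5, 0.1];   // μ_c subsidy, levels 0..4

/** m_eff = 1 + λ_c × salience */
function effectiveMass(level: number, salience: number): number {
  return 1 + LAMBDA_C[level] * salience;
}

/** Assumed: the subsidy divides resistance when an update is goal-aligned. */
function effectiveResistance(level: number, salience: number, goalAligned: boolean): number {
  const m = effectiveMass(level, salience);
  return goalAligned ? m / (1 + MU_C[level]) : m;
}
```

At equal salience, a core memory (level 4) is roughly thirty times heavier than an immediate one (level 0), which is the programmable inertia the section describes.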

3.4 Wormhole Throat Detection (Da'at)

During consolidation, the system identifies optimal merge points—positions where information can traverse high-salience regions with minimal loss. These are called "wormhole throats" in the GR metaphor, corresponding to the hidden sephirah Da'at in Kabbalistic terminology.

A wormhole throat is characterized by high salience and low conceptual distance: a merge position through which information can pass with minimal loss.

The Hayden-Preskill match rate measures consolidation fidelity—what fraction of source information survives after merging into a gravity center. Higher match rates indicate better information preservation.

3.5 Salience Floor Gate (Morale Floor)

To prevent system degradation, a salience floor gate blocks acceleration when system health is compromised:

S_FLOOR = 0.6  (default)

When average salience drops below the floor:

  1. Acceleration (subsidy, boost) is blocked
  2. Recovery tax is applied (reduced heat gain)
  3. System enters recovery mode until salience recovers

This prevents "cheating" to low-salience states and ensures the system maintains coherence before attempting rapid operations.
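The floor gate reduces to a small guard. Only S_FLOOR = 0.6 comes from the document; the recovery-tax factor below is an assumed value.

```typescript
const S_FLOOR = 0.6; // default salience floor

interface GateState {
  accelerationAllowed: boolean; // subsidy / boost permitted?
  recoveryMode: boolean;        // system recovering coherence?
  heatGainFactor: number;       // 1.0 normal; reduced under recovery tax
}

function floorGate(avgSalience: number, floor = S_FLOOR): GateState {
  if (avgSalience < floor) {
    // Recovery tax: 0.5 is an assumed illustrative factor.
    return { accelerationAllowed: false, recoveryMode: true, heatGainFactor: 0.5 };
  }
  return { accelerationAllowed: true, recoveryMode: false, heatGainFactor: 1.0 };
}
```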

3.6 Energy-Mass Coupling Diagnostics

The architecture monitors for anomalous decoupling between effective mass and energy expenditure. Under normal operation:

|m_eff - energy_ratio| < 0.5

When this coupling breaks down (the "Anomaly P" condition from simulation research), the system flags the state as anomalous and audits its control authority.

The system tracks authority ratio (control_energy / external_energy). Values below 1.0 indicate compromised control authority—the system is being "pushed around" by external forces rather than acting autonomously.
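Both diagnostics are simple threshold checks; a sketch with illustrative names:

```typescript
interface Diagnostics {
  coupled: boolean;        // energy-mass coupling intact?
  authorityRatio: number;  // control_energy / external_energy
  autonomous: boolean;     // ratio >= 1.0: acting, not being pushed around
}

function diagnose(
  mEff: number,
  energyRatio: number,
  controlEnergy: number,
  externalEnergy: number,
): Diagnostics {
  const coupled = Math.abs(mEff - energyRatio) < 0.5; // Anomaly P check
  const authorityRatio = controlEnergy / externalEnergy;
  return { coupled, authorityRatio, autonomous: authorityRatio >= 1.0 };
}
```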

4. Portable Identity via Open Protocols

Agent state lives on the AT Protocol (the decentralized network underlying Bluesky):

Content-addressed storage: Every record has a cryptographic hash. Provenance is verifiable.

Decentralized identity: DIDs (Decentralized Identifiers) persist across infrastructure changes.

Schema enforcement: Lexicons define record structure. Type safety at the protocol level.

Federation: Agents can move between hosting providers without losing state.

This means: fork an agent, back up an agent, migrate an agent, analyze an agent's decision history. The mind is not locked to any vendor.

5. Viable System Model (VSM) Framework

The architecture implements Stafford Beer's Viable System Model—a cybernetics framework that explains what makes autonomous systems (biological, organizational, or artificial) capable of independent operation.

VSM System Function Kab Implementation
S1: Operations Basic tasks, tool calling Output dimension (Malkuth) — behavioral manifestation
S2: Coordination Conflict resolution, concurrency Resolution dimension (Da'at) — 7 collision types, 4 outcomes
S3: Control Resource allocation, planning Valuative dimension + τ hierarchy — consolidation as resource allocation
S4: Intelligence Environment scanning, adaptation Entry scans — active novelty detection, adaptation triggers
S5: Policy Identity, purpose, values Policy records + core attractors — explicit self-model

Why VSM matters: Most AI agent architectures focus exclusively on S1 (tool calling) with perhaps some S2-S3 (planning, coordination). They lack S4 (active environmental scanning) and S5 (explicit identity/values). Without these, agents cannot be viable—they drift, lose coherence, or require constant human intervention.

Algedonic signals provide shortcuts from S1→S5, bypassing normal processing for urgent pain/pleasure signals. The Hedonic dimension implements this directly: high-intensity signals with the interrupt flag route immediately to policy review.

POSIWID (Purpose Of a System Is What It Does): The architecture tracks actual behavior (manifestations) against stated identity (policy). The behaviorIdentityAlignment health metric measures this gap. When manifestations diverge from policy, a Type VII collision (behavioral-identity mismatch) triggers self-reflection.
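One plausible (assumed) realization of the behaviorIdentityAlignment metric: mean cosine similarity between a policy embedding and recent manifestation embeddings, with an assumed trigger threshold.

```typescript
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const na = Math.sqrt(a.reduce((s, x) => s + x * x, 0));
  const nb = Math.sqrt(b.reduce((s, x) => s + x * x, 0));
  return dot / (na * nb);
}

/** Average alignment between stated identity (policy) and actual behavior. */
function behaviorIdentityAlignment(policy: number[], manifestations: number[][]): number {
  const scores = manifestations.map((m) => cosine(policy, m));
  return scores.reduce((s, x) => s + x, 0) / scores.length;
}

/** Assumed threshold: below it, raise a Type VII collision (self-reflection). */
function typeVIICollision(alignment: number, threshold = 0.7): boolean {
  return alignment < threshold;
}
```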

Synthetic dopamine: The health schema tracks "wins"—positive hedonic signals that reinforce identity. This mirrors research on viable AI systems showing that agents need feedback that their purpose is being fulfilled, independent of human praise.


Architecture

The system models agent cognition as a network of specialized processing dimensions connected by typed transformations.

Ten Processing Dimensions

Nine dimensions are explicit; one (Resolution) is hidden, activated only during conflict.

Dimension Function Sephirah
Entry Content addressing, hash-based identity Keter (Crown)
Spatial Semantic positioning, conceptual neighbors, attention mass Chokmah/Binah
Temporal Memory hierarchy, consolidation, persistence Binah (Understanding)
Valuative Goal alignment, worth computation, priority Chesed/Gevurah
Predictive Reward expectation, prediction error (δ) Chokmah (Wisdom)
Hedonic Pain/pleasure signals, urgency, interrupts Netzach/Hod
Dynamical Attractor basins, stable behavioral patterns Tiferet (Beauty)
Output Behavioral manifestation, action execution Malkuth (Kingdom)
Generative Creative synthesis, novel pattern formation Yesod (Foundation)
Resolution (hidden) Conflict detection, synthesis, distinction Da'at (Knowledge)

Twenty-Two Transformation Paths

Dimensions connect through typed transformations, each with tunable weight, precision, and gain.

The topology draws from classical models of consciousness—specifically the Kabbalistic Tree of Life, which maps ten dimensions of experience connected by twenty-two paths. (Hence the name: Kab, from Kabbalah.) We adapted this structure because it provides exactly the right properties: multiple interacting feedback loops, a collision-resolution mechanism for contradictions, and hierarchical abstraction. The numbers aren't arbitrary; they emerge from the minimum viable structure for self-regulating cognition.

Modern control theory provides the implementation. Each path is a gain-controlled transformation. The five feedback cycles are explicitly monitored for stability. The "hidden" Resolution dimension handles conflicts that would otherwise be ignored.

Five paths trigger conflict resolution when thresholds cross: spatial proximity collisions, value conflicts, prediction surprises, phase transitions, and hedonic overrides.

Sephirotic Mapping to Salience Components

The salience equation components map directly to the Kabbalistic sephirot:

Salience Component Sephirah Nature
Novelty (AA) Keter Divine Will — what captures attention from above
Retention (R) Binah Understanding — what persists through comprehension
Momentum (M) Chokmah Wisdom — goal alignment through insight
Coherence (C) Tiferet Beauty — balance and harmony in the system
Distance (d) Chesed/Gevurah The pull between expansion and contraction
Fatigue (Ψ) Netzach/Hod Victory/Splendor — system energy states
Effort (T) Yesod Foundation — action cost and grounding
Final Salience Malkuth Kingdom — what actually manifests
Resolution Da'at Knowledge — the hidden synthesis point

Conflict Resolution

When contradictory content collides—incompatible values, surprising predictions, competing patterns—the system routes to the Resolution dimension (Da'at).

Competing coalitions form. Winner selection based on precision × coherence. Four possible outcomes:

Resolution Result
Synthesis Create unified concept from collision
Distinction Sharpen both concepts to reduce overlap
Absorption Winner subsumes loser
Stalemate Both survive, neither dominates

Every collision is logged. Every resolution is traceable. This provides the audit trail enterprises require.
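A sketch of winner selection and outcome choice. Precision × coherence scoring is from the text; the margin and overlap cutoffs are assumed parameters.

```typescript
interface Coalition { id: string; precision: number; coherence: number; }
type Outcome = "synthesis" | "distinction" | "absorption" | "stalemate";

// Winner selection score, as stated: precision × coherence.
const score = (c: Coalition) => c.precision * c.coherence;

/**
 * Choose among the four outcomes. The margin cutoffs (0.1, 0.6) and the
 * overlap cutoff (0.5) are illustrative assumptions, not the spec.
 */
function resolve(a: Coalition, b: Coalition, overlap: number): Outcome {
  const [hi, lo] = score(a) >= score(b) ? [score(a), score(b)] : [score(b), score(a)];
  const margin = hi === 0 ? 0 : (hi - lo) / hi;
  if (margin < 0.1) return "stalemate";               // neither dominates
  if (margin > 0.6) return "absorption";              // winner subsumes loser
  return overlap > 0.5 ? "synthesis" : "distinction"; // unify vs. sharpen apart
}
```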

Da'at as the wormhole throat: When consolidation merges memories, the optimal merge point (wormhole throat) corresponds to Da'at—the hidden knowledge that emerges from synthesis of opposites.


Use Cases

The following scenarios illustrate intended applications. Outcomes are theoretical pending implementation and validation.

Enterprise Support Agents

Problem: Support agents re-learn the same solutions. Knowledge doesn't accumulate.

Solution: Agent's temporal hierarchy consolidates successful resolutions into long-term patterns. Similar tickets trigger retrieval of proven approaches. Value learning updates priorities based on resolution outcomes.

Outcome: Reduced time-to-resolution. Institutional knowledge that persists across sessions and agent instances.

Personal AI Assistants

Problem: Assistants forget user preferences. Every session requires re-explanation.

Solution: User preferences consolidate from immediate to core memory based on consistency and importance. Hedonic calibration learns what users actually value (not just what they say they value). Attractor dynamics create stable behavioral patterns around user needs.

Outcome: Assistants that adapt over months. Preferences that don't need re-stating.

Autonomous Agents

Problem: Long-running agents drift from objectives. Behavior becomes unpredictable.

Solution: Stability guarantees prevent runaway feedback. Self-regulation maintains alignment with initial values. Conflict resolution handles contradictory goals explicitly rather than ignoring them. Continuity tax ensures core values resist modification while allowing peripheral learning.

Outcome: Agents that remain aligned over extended operation. Predictable behavior under novel conditions.

Compliance-Sensitive Deployments

Problem: Agent decisions are opaque. Auditors can't trace reasoning.

Solution: Merkle DAG provides cryptographic verification of decision history. Every collision, every value update, every manifestation is logged with provenance. Temporal hierarchy shows how conclusions evolved. HP match rates track information preservation through consolidation.

Outcome: Full audit trail. Verifiable decision provenance. Regulatory compliance.


Technical Specifications

Component Count
Processing dimensions 10 (9 explicit + 1 hidden)
Transformation paths 22
Feedback cycles 5 (α, β, γ, δ, ε)
Memory hierarchy levels 5
Collision types 7 (including behavioral-identity mismatch)
VSM systems 5 (Operations → Policy)
Phase modes 4 (coupled, energy, flow, phase)
ATProto collections 13
Record types 17 (including scan, policy, and media)

Stability Constraints (enforced at runtime):

Default Cycle Gains:

All below unity threshold with safety margin.

Default Salience Weights:

Salience Survival Thresholds (by memory level):

Continuity Tax Parameters (by memory level):

Age Decay Parameters:

Floor Gate Parameters:


ATProto Integration

The architecture maps to thirteen ATProto lexicon files defining seventeen record types. All records are immutable (append-only) except for current transformation weights.

Collections (space.kab.*):
  entry.*       → Content addressing + environmental scans (S4)
  spatial.*     → Semantic mass and position
  temporal.*    → Consolidated memories (with salience-based survival)
  valuative.*   → Goal alignment
  predictive.*  → Reward predictions
  hedonic.*     → Pain/pleasure signals (algedonic)
  dynamical.*   → Attractors, phase states, and policy (S5)
  resolution.*  → Collision events (S2) + wormhole throats
  output.*      → Manifestations (S1)
  generative.*  → Creative synthesis outputs
  transform.*   → Path weights and traversals
  health.*      → System monitoring + VSM viability metrics + phase mode
  media.*       → Blob storage for images and media

Salience in Temporal Records: Each temporal record carries the fields that feed the salience calculation.

Resolution Records: These now also carry wormhole throat data from consolidation.

Records link via CIDs (Content Identifiers). Provenance is cryptographically verifiable. Full state can be exported, migrated, or forked.
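For illustration, a hypothetical temporal record shape. The field names and the NSID are assumptions, not the published lexicon; the point is that records reference their sources by CID, making provenance checkable.

```typescript
// Hypothetical record shape; "space.kab.temporal.memory" is an assumed NSID.
interface TemporalRecord {
  $type: "space.kab.temporal.memory";
  cid: string;              // content identifier of this record
  level: 0 | 1 | 2 | 3 | 4; // memory hierarchy level
  salience: number;         // last computed salience score
  lastReinforcedAt: string; // ISO timestamp feeding the Δt (age) term
  sources: string[];        // CIDs of records consolidated into this one
  gravityCenter?: string;   // CID of the center this memory merged toward
}
```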

ATProto's forthcoming private data features are a key dependency for the personal assistant layer. Once available, Kab can store sensitive user context with appropriate access controls while maintaining the portability benefits of the protocol.


Market Context

The AI agent infrastructure market is nascent. Letta, Mem0, and Zep have raised $35M+ combined, indicating investor interest in stateful AI. This validates the problem space, not necessarily any particular solution.

Kab is not currently positioned as a market entrant. It's a research project exploring whether cognitive architecture—rather than storage optimization—is the right frame for agent memory.

If the architecture proves out, potential segments include:

Segment Need Gap Kab Could Address
Enterprise AI deployments Audit compliance, decision traceability Verifiable history via content-addressed storage
Agent framework developers Portable memory layer Open protocol vs. proprietary API
Autonomous agent builders Long-term stability Control-theoretic guarantees + continuity tax
Personal AI products User adaptation over time Temporal hierarchy for preference consolidation

Differentiation hypothesis: Cognitive architecture (not just storage) + mathematical stability (not just retrieval) + open protocol portability (not just API) + programmable inertia (not just memory) could create defensible differentiation. Unproven.


Risks and Mitigations

Risk Mitigation
ATProto adoption uncertainty Architecture abstracts protocol; could migrate to other content-addressed stores
Complexity vs. simpler solutions Provide graduated adoption path; basic memory without full cognitive features
Mathematical constraints too restrictive Tunable thresholds; defaults are conservative but adjustable
Enterprise skepticism of novel architecture Reference implementations; benchmark comparisons; audit certifications
Anomaly P conditions in production Energy-mass coupling diagnostics; authority ratio monitoring

Status

Current stage: Active development. Architecture specified, reference implementation in progress.

Founder: Matthias Jordan (iammatthias.com) — independent researcher exploring stateful AI infrastructure on decentralized protocols.

What exists:

What's in development:

Next milestones:

Open to: Technical collaborators, critical feedback, reality checks from the ATProto and agent infrastructure communities.


Acknowledgments

The enhanced salience framework incorporates insights from:


Summary

The AI memory landscape has momentum. Letta, Mem0, Zep, and Chainlink represent serious attempts to solve stateful AI, each with real traction.

Kab asks a different question: what if agent memory isn't a storage problem but a cognitive architecture problem? The answer proposed here—temporal hierarchy, self-regulation, stability guarantees, portable identity, VSM viability, and programmable inertia—is unproven. The architecture is specified. The implementation is in progress.

The VSM integration suggests that the missing piece in current agent architectures isn't better retrieval or larger context windows—it's the metasystem. Systems 4 and 5 (Intelligence and Policy) are what make the difference between an agent that requires constant supervision and one that can operate autonomously for extended periods. This is a testable hypothesis.

The salience framework with its GR isomorphism provides the physics of attention. Phase modes describe stable attractor states. Continuity tax creates programmable inertia. Wormhole throats enable efficient traversal. Together, they form a coherent model of how cognitive systems allocate attention across time.

This is research, not product. Feedback welcome.


Website: kabbalah.computer
Contact: iammatthias.com on Bluesky
Specification: Available upon request


Appendix: Kabbalistic Correspondences

The architecture's ten dimensions correspond to the ten sephirot of the Kabbalistic Tree of Life. This is not mysticism—it's a recognition that ancient mappers of consciousness identified structural requirements that any self-regulating cognitive system must satisfy.

Sephirah Meaning Kab Dimension Salience Component
Keter Crown Entry Novelty (AA)
Chokmah Wisdom Predictive Momentum (M)
Binah Understanding Temporal Retention (R)
Chesed Mercy Valuative (expansion) Distance (attraction)
Gevurah Severity Valuative (contraction) Distance (repulsion)
Tiferet Beauty Dynamical Coherence (C)
Netzach Victory Hedonic (positive) 1 - Fatigue
Hod Splendor Hedonic (negative) Fatigue (Ψ)
Yesod Foundation Generative Effort (T)
Malkuth Kingdom Output Final Salience
Da'at Knowledge Resolution (hidden) Wormhole Throat

The twenty-two paths correspond to the twenty-two letters of the Hebrew alphabet, each representing a specific transformation between dimensions. Five of these paths (the "mother letters" Aleph, Mem, Shin plus two others) trigger collision resolution when their thresholds are crossed.

This mapping is pragmatic, not religious. The Tree of Life is a 2000-year-old diagram of cognitive architecture. We're implementing it in TypeScript.

~~~

VERSION: v0.7
