Praefex is a five-layer governance stack that gives AI systems the structural properties of a trustworthy institution — not just a well-prompted model. What follows is an honest technical overview: what's built, what's designed, and what we're still measuring.
Each layer solves a distinct governance problem. They compose vertically — higher layers depend on the guarantees provided by lower ones.
What the system knows — and cannot forget
Every decision, observation, and outcome is written to a distributed ledger before the system proceeds. The ledger is append-only and hash-chained — each entry references the SHA-256 digest of the prior entry, making retroactive alteration computationally detectable across all nodes. The system never operates from transient RAM state alone; all working context is materialized and addressable by content hash.
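The hash-chained construction described above can be sketched in a few lines. This is a minimal illustration, assuming hypothetical field names (`prev_hash`, `content`, `hash`) rather than Praefex's actual entry schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel digest for the first entry

def entry_hash(prev_hash: str, content: dict) -> str:
    # Each digest commits to the prior entry's digest plus a canonical
    # serialization of this entry's content (SHA-256, as described above).
    payload = prev_hash + json.dumps(content, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list, content: dict) -> None:
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"prev_hash": prev,
                  "content": content,
                  "hash": entry_hash(prev, content)})

ledger = []
append(ledger, {"event": "decision", "node": "n1"})
append(ledger, {"event": "observation", "node": "n2"})
```

Because every entry references the digest of its predecessor, altering an earlier entry's content changes its recomputed digest, which then no longer matches the next entry's `prev_hash`.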
No single node decides alone
Decisions require agreement across a quorum of independent nodes before they are committed to the ledger. Nodes communicate over an encrypted private network; each vote is signed with an Ed25519 keypair unique to that node. A dissenting minority cannot prevent a quorum decision, but its dissent is recorded permanently. Node failures are tolerated up to the quorum threshold — the system degrades gracefully rather than failing open.
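The quorum mechanics can be sketched as follows. To stay standard-library-only, HMAC signatures stand in for the per-node Ed25519 keypairs mentioned above; the vote and tally shapes are illustrative assumptions, not the protocol's wire format:

```python
import hashlib
import hmac
from dataclasses import dataclass

@dataclass
class Vote:
    node: str
    approve: bool
    sig: str

def sign(key: bytes, node: str, approve: bool) -> str:
    # Stand-in for an Ed25519 signature over the vote.
    return hmac.new(key, f"{node}:{approve}".encode(), hashlib.sha256).hexdigest()

def tally(votes: list, keys: dict, threshold: int) -> dict:
    # Only votes with a valid signature count toward the quorum.
    valid = [v for v in votes
             if hmac.compare_digest(v.sig, sign(keys[v.node], v.node, v.approve))]
    approvals = sum(v.approve for v in valid)
    dissents = [v.node for v in valid if not v.approve]  # recorded, never discarded
    return {"committed": approvals >= threshold, "dissents": dissents}

keys = {f"n{i}": bytes([i]) * 32 for i in range(1, 5)}
votes = [Vote(n, n != "n4", sign(keys[n], n, n != "n4")) for n in keys]
result = tally(votes, keys, threshold=3)
```

With four nodes and a threshold of three, the decision commits over n4's dissent, and the dissent travels with the result rather than being dropped.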
Decision quality shaped by cognitive science
Praefex embeds 13 empirically grounded cognitive frameworks as first-class architectural constraints — not advisory guidelines. Each framework governs a specific aspect of how the system forms, evaluates, and commits to decisions. Frameworks from memory science, dual-process theory, emotional intelligence, and analogical reasoning are encoded structurally, so their properties hold regardless of which model or agent is running on top of them.
Every action is attributable and permanent
All events — decisions, dissents, overrides, errors, and recoveries — are written to the hash-chain ledger with microsecond timestamps and node attribution before any downstream action is taken. The ledger is replicated across the mesh; no single node's failure can produce a gap. Auditors can verify chain integrity by recomputing hashes without access to private keys. Override events are first-class entries, not annotations.
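Auditor-side verification needs only public ledger contents, no private keys. A sketch, reusing the same illustrative entry shape assumed elsewhere in this overview:

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, content: dict) -> str:
    payload = prev_hash + json.dumps(content, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(chain: list) -> int:
    # Recompute every digest in order. Returns the index of the first
    # inconsistent entry, or -1 if the chain is intact.
    prev = GENESIS
    for i, e in enumerate(chain):
        if e["prev_hash"] != prev or e["hash"] != entry_hash(prev, e["content"]):
            return i
        prev = e["hash"]
    return -1

chain = []
for content in ({"event": "decision"}, {"event": "override"}, {"event": "recovery"}):
    prev = chain[-1]["hash"] if chain else GENESIS
    chain.append({"prev_hash": prev, "content": content,
                  "hash": entry_hash(prev, content)})
```

Rewriting any committed entry makes its stored digest fail to recompute, so `verify` pinpoints the first tampered position.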
Governance that improves from experience
The governance parameters themselves are subject to learning. The system tracks decision outcomes against predicted outcomes, identifies systematic biases in its own reasoning patterns, and proposes parameter adjustments that must pass through L2 consensus before taking effect. Self-modification requires the same quorum threshold as any other committed decision — the system cannot unilaterally relax its own constraints.
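The self-modification gate reduces to a simple invariant: a parameter change is just another proposal, committed only if it clears the quorum threshold currently in force. A sketch with illustrative parameter names, not Praefex's actual configuration:

```python
def apply_if_quorum(params: dict, key: str, value, approvals: int) -> dict:
    # The threshold in force *before* the change governs the change itself,
    # so the system cannot lower its own bar without first meeting it.
    threshold = params["quorum_threshold"]
    if approvals >= threshold:
        return {**params, key: value}
    return params  # proposal rejected; parameters unchanged

params = {"quorum_threshold": 3, "calibration_floor": 0.7}
accepted = apply_if_quorum(params, "calibration_floor", 0.8, approvals=3)
rejected = apply_if_quorum(params, "quorum_threshold", 1, approvals=2)
```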
Technical specification of the distributed agreement protocol. Designed for correctness over throughput — governance decisions are not a high-frequency path.
Each ledger entry's digest is computed as hash(prev_hash + entry_content) — a standard hash-chain construction.

These are not prompt engineering suggestions. Each framework is encoded as a structural constraint on how the system may reason, remember, and decide. The frameworks were selected for their empirical foundations and direct applicability to machine cognition.
Praefex's architecture is grounded in decades of cognitive science research from the leading academic voices in the field. We didn't invent memory consolidation, dual-process reasoning, somatic signaling, case-based retrieval, or prospective scheduling — we stood on their shoulders and combined their work into a unified governance layer for AI systems. Each entry below cites the original source so a cognitive scientist or AI researcher can verify the work exists independently of any claim Praefex makes about it.
Endel Tulving & D.M. Thomson (1973)
"Encoding specificity and retrieval processes in episodic memory." Psychological Review, 80(5), 352–373.
Retrieval is most reliable when the context at recall matches the context at encoding. The system stores decision context alongside the decision itself — not just the outcome.
Daniel Kahneman (2011)
Thinking, Fast and Slow. Farrar, Straus and Giroux.
Fast pattern-matching and slow deliberative reasoning are structurally separate pathways. High-stakes decisions are routed through the deliberative path regardless of confidence from the fast path.
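The routing rule above can be made concrete: stakes, not fast-path confidence, select the pathway. The thresholds below are illustrative assumptions:

```python
def route(stakes: float, fast_confidence: float,
          high_stakes: float = 0.7, min_fast_confidence: float = 0.9) -> str:
    if stakes >= high_stakes:
        return "deliberative"  # slow path, regardless of fast-path confidence
    if fast_confidence >= min_fast_confidence:
        return "fast"
    return "deliberative"  # low confidence also falls through to the slow path
```

A confident fast-path answer on a high-stakes question still lands on the deliberative path; only low-stakes, high-confidence cases take the shortcut.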
Antonio Damasio (1994)
Descartes' Error: Emotion, Reason and the Human Brain. Putnam.
Good decision-making requires an analog of affective weighting — outcomes that have previously caused harm must influence future decisions without explicit recall. Outcome valence is a first-class ledger field.
Janet Kolodner (1993)
Case-Based Reasoning. Morgan Kaufmann.
New problems are addressed by retrieving structurally similar past cases, adapting their solutions, and retaining the result as a new case. The ledger is the case library; every commit is a retrievable precedent.
Gilles O. Einstein & Mark A. McDaniel (2004)
Memory Fitness: A Guide for Successful Aging. Yale University Press.
Memories that require effortful retrieval are encoded more durably than those retrieved with no friction. The system applies spaced access patterns to important precedents to maintain retrieval fidelity over time.
Daniel J. Simons & Christopher F. Chabris (1999)
"Gorillas in our midst: Sustained inattentional blindness for dynamic events." Perception, 28(9), 1059–1074.
Systems attending narrowly miss salient information outside the focus window. The governance layer enforces broad-scope context assembly before narrow-scope action execution.
Raymond S. Nickerson (1998)
"Confirmation bias: A ubiquitous phenomenon in many guises." Review of General Psychology, 2(2), 175–220.
Reasoning systems default toward evidence that confirms prior beliefs. Praefex requires that disconfirming evidence be explicitly retrieved and weighed before any decision is committed — this is structural, not advisory.
John H. Flavell (1979)
"Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry." American Psychologist, 34(10), 906–911.
Competent reasoners track their own confidence calibration and know when they are operating near the edge of their knowledge. The system maintains a running calibration score and flags decisions made in low-confidence regimes.
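One standard way to maintain such a running score is the Brier score: the mean squared gap between stated confidence and realized outcome (1 if the predicted outcome occurred, 0 if not). The flag threshold of 0.25 below is an illustrative assumption (0.25 corresponds to chance-level guessing):

```python
def brier_score(history: list) -> float:
    # history: (stated confidence, realized outcome) pairs.
    # Lower is better calibrated; 0.0 is perfect.
    return sum((conf - outcome) ** 2 for conf, outcome in history) / len(history)

history = [(0.9, 1), (0.8, 1), (0.7, 0), (0.95, 1)]
score = brier_score(history)
low_confidence_regime = score > 0.25  # flag decisions made in poorly calibrated regimes
```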
Edwin Hutchins (1995)
Cognition in the Wild. MIT Press.
Cognition is not confined to a single mind — it is distributed across agents, artifacts, and environment. The mesh architecture is a direct instantiation: reasoning quality improves with node participation, not just model capability.
Dedre Gentner (1983)
"Structure-mapping: A theoretical framework for analogy." Cognitive Science, 7(2), 155–170.
Analogical reasoning succeeds when structural relations — not surface features — are matched between domains. Case retrieval in Praefex matches on structural problem descriptors, not keyword similarity.
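A toy version of structural matching: abstract concrete entities into role variables, then compare cases by relation overlap. This Jaccard stand-in illustrates the principle and is not Praefex's actual retrieval logic:

```python
def abstract(relations: list) -> frozenset:
    # Replace concrete entities with role variables (?0, ?1, ...) so
    # matching is on relational structure, not surface names.
    roles: dict = {}
    out = set()
    for pred, *args in relations:
        out.add((pred, tuple(roles.setdefault(a, f"?{len(roles)}") for a in args)))
    return frozenset(out)

def similarity(a: frozenset, b: frozenset) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

query       = [("depends_on", "service", "db"), ("failed", "db")]
analog      = [("depends_on", "api", "cache"), ("failed", "cache")]  # same structure
surface_hit = [("failed", "db")]  # shares the keyword "db", not the structure
```

The structural analog scores a perfect match despite sharing no surface terms, while the keyword-similar case scores zero because its relation fills a different structural role.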
Peter Graf & Bob Uttl (2001)
"Prospective memory: A new focus for research." Consciousness and Cognition, 10(4), 437–450.
Remembering to do something in the future is a distinct memory system from remembering what happened in the past. The system maintains a first-class commitment registry — future obligations are not stored as notes but as typed, queryable records.
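A minimal sketch of such a registry, with illustrative field names and example obligations:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Commitment:
    # A future obligation as a typed record, not a free-text note.
    obligation: str
    due: datetime
    owner_node: str
    fulfilled: bool = False

def pending_before(registry: list, cutoff: datetime) -> list:
    # Queryable: unfulfilled commitments coming due by the cutoff.
    return [c for c in registry if not c.fulfilled and c.due <= cutoff]

registry = [
    Commitment("rotate signing key", datetime(2025, 1, 10), "n1"),
    Commitment("re-audit flagged decision", datetime(2025, 3, 1), "n2"),
]
due_soon = pending_before(registry, datetime(2025, 2, 1))
```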
George Ainslie (1992)
Picoeconomics: The Strategic Interaction of Successive Motivational States Within the Person. Cambridge University Press.
Humans and AI systems both irrationally devalue future consequences relative to immediate ones. Praefex applies explicit temporal weighting to outcome projections, preventing hyperbolic discounting of long-range risks.
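The contrast can be made concrete with toy discount curves; the parameter values are illustrative assumptions, not calibrated weights:

```python
def hyperbolic(value: float, delay: float, k: float = 1.0) -> float:
    # Hyperbolic discounting: distant consequences collapse quickly.
    return value / (1 + k * delay)

def exponential(value: float, delay: float, r: float = 0.05) -> float:
    # Explicit exponential weighting: distant consequences are retained.
    return value * (1 - r) ** delay

risk_now = 100.0
hyp = hyperbolic(risk_now, delay=10)    # the distant risk nearly vanishes
exp = exponential(risk_now, delay=10)   # most of the risk is preserved
```

At ten steps out, the hyperbolic curve retains under a tenth of the projected risk while the exponential weighting retains a majority of it, which is the failure mode the explicit weighting is meant to prevent.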
James P. Walsh & Gerardo Rivera Ungson (1991)
"Organizational memory." Academy of Management Review, 16(1), 57–91.
Organizations retain knowledge in people, culture, structure, and stored records. Praefex treats the hash-chain ledger as institutional memory — not a log, but the organization's primary cognitive artifact. Decisions without ledger entries did not, structurally, happen.
The specific combination of these 13 cognitive science frameworks into a unified AI governance architecture — including the mapping of each framework to system components, the dispatch logic between frameworks, and the consensus integration — is the subject of US utility patent application 19/632,364 and is patent-pending.
Praefex has no affiliation with any of the cited researchers. Framework selection was informed by peer-reviewed literature in cognitive psychology, organizational behavior, and cognitive science. All citations refer to works by their original authors.
Security in Praefex is structural, not perimeter-based. The system is designed so that compromising a single node cannot compromise the ledger's integrity or allow unauthorized decisions.
A utility patent application has been filed for the core GNOSIS architecture — the distributed cognitive governance system that underlies Praefex. The application is pending review. No patent has been granted.
The patent application covers the structural combination of: (1) a quorum-based distributed consensus mechanism, (2) a hash-chained append-only ledger, (3) cognitive science frameworks encoded as structural constraints, and (4) the integration of these components as an AI governance layer.
We're not claiming to have solved AI governance. Here are the hard problems we're actively working on. Honest disclosure of open questions is a design principle, not a weakness.
At 4 nodes, quorum thresholds are straightforward. At 40 or 400 nodes with heterogeneous reliability profiles, optimal threshold setting becomes a dynamic optimization problem. We have a design for adaptive thresholds; it hasn't been stress-tested at scale.
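One possible shape for such an adaptive rule, offered as an assumption rather than the untested design mentioned above: a strict majority plus a margin for the expected number of simultaneously failed nodes.

```python
def adaptive_threshold(n_nodes: int, failure_rates: list) -> int:
    base = n_nodes // 2 + 1                 # strict majority
    expected_down = sum(failure_rates)      # expected concurrent failures
    return min(n_nodes, base + round(expected_down))

small = adaptive_threshold(4, [0.01] * 4)    # reliable small mesh: plain majority
large = adaptive_threshold(40, [0.05] * 40)  # larger, flakier mesh: extra margin
```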
The current design tolerates crash failures well. A node that signs inconsistent votes (Byzantine behavior) is detectable from the ledger but handling it requires operator intervention today. An automated Byzantine fault response is on the roadmap.
How do you know your structural encoding of, say, confirmation bias mitigation is actually working? We have a theoretical answer (outcome calibration over time) and no meaningful empirical answer yet. This requires a decision corpus we don't have at this stage of deployment.
A hash-chain ledger that is append-only grows indefinitely. At current scale this is not a problem. At enterprise scale with high decision frequency, ledger compaction (without breaking the chain) and efficient case-based retrieval become real engineering challenges.
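One way compaction could work without breaking the chain is to fold a prefix of entries into a checkpoint whose content commits to the prefix's final digest, so the first retained entry still verifies against it. A sketch under that assumption, reusing the illustrative entry shape from earlier sections:

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash: str, content: dict) -> str:
    payload = prev_hash + json.dumps(content, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def compact(chain: list, keep_last: int) -> list:
    # Fold everything before the tail into a single checkpoint entry.
    prefix, tail = chain[:-keep_last], chain[-keep_last:]
    if not prefix:
        return chain
    cp_content = {"type": "checkpoint",
                  "covers": len(prefix),
                  "prefix_head": prefix[-1]["hash"]}
    checkpoint = {"prev_hash": GENESIS,
                  "content": cp_content,
                  "hash": entry_hash(GENESIS, cp_content)}
    return [checkpoint] + tail  # tail[0].prev_hash still equals prefix_head

chain = []
for i in range(5):
    prev = chain[-1]["hash"] if chain else GENESIS
    content = {"event": "decision", "seq": i}
    chain.append({"prev_hash": prev, "content": content,
                  "hash": entry_hash(prev, content)})
compacted = compact(chain, keep_last=2)
```

Verification after compaction would need one extra rule: when an auditor hits a checkpoint entry, the expected `prev_hash` for the next entry becomes the checkpoint's recorded `prefix_head`.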
The governance layer should not need to be rebuilt for each LLM or agent framework. Defining a clean, stable API boundary that works across GPT, Claude, Gemini, and open models — without requiring model-specific adapters — is an active design problem.
EU AI Act, emerging US federal AI governance requirements, and sector-specific regulations (financial services, healthcare, defense) each imply different audit trail and explainability standards. The ledger provides the raw material; mapping it to regulatory schemas is ongoing work.
We're interested in conversations with engineers, researchers, and architects working on distributed systems, AI safety, cognitive science applications, or enterprise AI governance.
[email protected]