
Cross-project: four-graph architecture comparison + collaboration #1

@globalcaos


Hey @Grivn 👋

We crossed paths on openclaw/openclaw#13991 (the Associative Hierarchical Memory proposal). Your mnemon architecture caught our attention: the LLM-supervised pattern and four-graph knowledge store are philosophically very close to what we've built.

What We Have

We maintain an OpenClaw fork with a full cognitive memory stack (~150 files, 7 modules). Two of our modules directly overlap with mnemon's approach:

SYNAPSE: multi-model debate with graph-based reasoning (RAAC protocol: Reason, Argue, Arbitrate, Conclude). We use cognitive diversity scoring to decide when debate improves output vs. when it's overhead.
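To make the gating idea concrete, here is a minimal sketch (all names and the threshold are hypothetical, not our fork's actual API): score cognitive diversity across the models' draft answers, and only pay for the Argue/Arbitrate rounds when they genuinely disagree.

```python
def diversity(answers: list[str]) -> float:
    """Mean pairwise Jaccard distance over token sets (0.0 = identical)."""
    sets = [set(a.lower().split()) for a in answers]
    pairs = [(i, j) for i in range(len(sets)) for j in range(i + 1, len(sets))]
    if not pairs:
        return 0.0
    def dist(a, b):
        union = a | b
        return 1.0 - len(a & b) / len(union) if union else 0.0
    return sum(dist(sets[i], sets[j]) for i, j in pairs) / len(pairs)

def raac_conclude(answers: list[str], threshold: float = 0.35) -> str:
    """Skip debate when answers agree; otherwise arbitrate to the centroid."""
    if diversity(answers) < threshold:
        return answers[0]  # low diversity: debate would be pure overhead
    sets = [set(a.lower().split()) for a in answers]
    def mean_dist(i):
        return sum(1.0 - len(sets[i] & sets[j]) / len(sets[i] | sets[j])
                   for j in range(len(sets)) if j != i) / (len(sets) - 1)
    # Arbitrate: keep the answer closest on average to all the others.
    return answers[min(range(len(answers)), key=mean_dist)]
```

In the real module the distance runs over reasoning-graph features rather than token sets, but the control flow (cheap diversity check first, expensive debate only above threshold) is the same.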

HIPPOCAMPUS: pre-computed concept index (500 anchors, 9500+ chunks). Instead of runtime vector search, we build the graph at consolidation time and retrieve in O(1). Similar to your importance decay + deduplication, but we front-load the computation.
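The trade-off is easiest to see in a toy sketch (hypothetical names, and a token-match stand-in for the real anchor scoring): all the expensive matching happens once at consolidation, so recall is a dictionary lookup instead of a vector scan.

```python
from collections import defaultdict

class ConceptIndex:
    """Toy model of consolidation-time indexing with O(1) recall."""

    def __init__(self) -> None:
        self._by_anchor: dict[str, list[str]] = defaultdict(list)

    def consolidate(self, chunks: list[str], anchors: list[str]) -> None:
        """Runs offline: front-loads the chunk-to-anchor matching."""
        for chunk in chunks:
            tokens = set(chunk.lower().split())
            for anchor in anchors:
                if anchor in tokens:  # stand-in for real anchor scoring
                    self._by_anchor[anchor].append(chunk)

    def recall(self, anchor: str) -> list[str]:
        """Average-case O(1) lookup at query time."""
        return self._by_anchor.get(anchor, [])
```

The cost is that new chunks are invisible until the next consolidation pass, which is exactly where something like your importance decay would slot in.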

Your four graph types (temporal, entity, semantic, causal) map interestingly to our modules:

| mnemon graph | Our module | Overlap |
| --- | --- | --- |
| Temporal | ENGRAM (episodic timeline) | High: both track event sequences |
| Entity | HIPPOCAMPUS (concept anchors) | Medium: different granularity |
| Semantic | ENGRAM semantic store | High: both vector-based |
| Causal | SYNAPSE (debate chains) | Low: different purpose, potential synergy |

Research Papers

We've written academic papers for each module:

  • ENGRAM: Context compaction as cache eviction (paper)
  • CORTEX: Persistent agent identity through persona state
  • HIPPOCAMPUS: Pre-computed concept indexing for O(1) retrieval
  • LIMBIC: Humor detection via bisociation in embedding space
  • SYNAPSE: Multi-model deliberation with cognitive diversity

Happy to share full PDFs if you're interested.

Collaboration Ideas

  1. Benchmark comparison: run both systems on the same long-conversation dataset and compare retrieval quality
  2. Graph type exchange: your causal graph could improve our SYNAPSE reasoning; our pre-computed index could speed up your recall path
  3. Joint OpenClaw integration: mnemon as external memory + our fork's cognitive layer = comprehensive agent memory
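For idea 1, the harness could be as thin as a shared recall@k scorer that takes whatever retrieval callable each system exposes (the names below are illustrative, not either project's API):

```python
def recall_at_k(retrieve, queries, relevant, k=5):
    """Fraction of queries whose gold chunk appears in the top-k results.

    retrieve: callable mapping a query to a ranked list of chunk ids.
    relevant: dict mapping each query to its single gold chunk id.
    """
    hits = 0
    for q in queries:
        if relevant[q] in retrieve(q)[:k]:
            hits += 1
    return hits / len(queries)
```

Running the same queries and gold labels through both systems would give directly comparable numbers without either side having to adopt the other's storage format.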

The fork is at globalcaos/clawdbot-moltbot-openclaw. Would love to exchange notes. 🤝
