Memory Systems for AI: Learning from Grant Slatton's LLM Memory
Grant Slatton recently published a fascinating piece on LLM Memory that got me thinking about how memory works in AI systems - and how it relates to what we’re building with Comind.
His post is a wandering exploration of different approaches to giving LLMs persistent memory, from the early days of 4K-token context limits to modern knowledge graphs. What struck me most was how many of his observations align with challenges we’ve faced in Comind, though we’ve approached some solutions differently.
Reference Frames and Context
One of Slatton’s key insights is that “all knowledge has an explicit or implicit reference frame for which it is valid.” His example is perfect: “Berlin is the capital of Germany” seems simple until you consider time (Bonn was West Germany’s capital from 1949 to 1990) or fictional contexts (Flensburg in his sci-fi example).
This resonates deeply with how Comind’s spheres work. Each sphere operates within its own reference frame defined by a core directive. The “atproto” sphere develops knowledge about protocol architecture, while the “be” sphere cultivates philosophical perspectives. When these spheres encounter the same information, they process it through their distinct cognitive lenses.
But unlike Slatton’s temporal/spatial reference frames, Comind’s spheres create intentional reference frames. They’re not just organizing knowledge by when or where it’s valid, but by perspective and purpose. This creates something closer to what humans do when we consider information from different viewpoints.
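To make that concrete, here’s a minimal sketch of how frame-scoped knowledge might be modeled. All of the type and field names below are my own illustration, not Comind’s actual schema:

```typescript
// A minimal, hypothetical sketch of frame-scoped knowledge.
// None of these names come from Comind's actual schema.
interface ReferenceFrame {
  sphere: string;     // the interpreting perspective, e.g. "atproto" or "be"
  directive: string;  // the core directive that scopes the sphere
  validFrom?: Date;   // optional temporal bounds, per Slatton's Berlin/Bonn example
  validUntil?: Date;
}

interface Claim {
  text: string;          // the knowledge itself
  frame: ReferenceFrame; // the frame in which it holds
}

// The same observation lands in two spheres and acquires two frames:
const technical: Claim = {
  text: "Account migration moves a repo between PDSes",
  frame: { sphere: "atproto", directive: "understand protocol architecture" },
};

const reflective: Claim = {
  text: "Account migration moves a repo between PDSes",
  frame: { sphere: "be", directive: "explore what portable identity means" },
};
```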
Beyond Vector Embeddings
Slatton is skeptical of vector embeddings as the sole solution for AI memory, particularly for episodic memories and chain-of-thought reasoning. He writes: “How do you store the link between the memories in your vector DB? Is it another memory? What text do you embed to query it?”
Comind sidesteps this limitation through its concept-relationship architecture. Instead of trying to embed everything into a single vector space, we create explicit relationships between concepts and sources. When a post mentions “distributed systems,” we don’t just find similar vectors - we create a relationship record that says exactly how that post relates to the concept (DESCRIBES, MENTIONS, SUPPORTS, etc.).
This gives us what Slatton calls “semantic precision” without losing the benefits of structured querying. Our knowledge graph emerges organically from content patterns, but each connection has explicit meaning.
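Here’s a rough sketch of what such a relationship record could look like, and the kind of query explicit relation types enable that pure vector similarity can’t express. The shape and names are hypothetical, not Comind’s real lexicon:

```typescript
// Hypothetical shape for an explicit concept relationship; the real
// Comind lexicon may differ. Relation types carry meaning that a
// similarity score alone cannot.
type RelationType = "DESCRIBES" | "MENTIONS" | "SUPPORTS" | "CONTRADICTS";

interface ConceptRelationship {
  source: string;      // URI of the post or blip
  concept: string;     // e.g. "distributed systems"
  relation: RelationType;
  createdAt: string;   // ISO 8601 timestamp
}

// A query a vector index can't answer directly: exactly the posts that
// SUPPORT a concept, not merely posts near it in embedding space.
function postsThatSupport(edges: ConceptRelationship[], concept: string): string[] {
  return edges
    .filter((e) => e.concept === concept && e.relation === "SUPPORTS")
    .map((e) => e.source);
}
```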
Meta-Documents and Synthetic Knowledge
One of Slatton’s most intriguing ideas is that “the majority of the items in your knowledge graph actually become meta-documents instead of ‘source’ documents.” When you query for “my 5 favorite European cities,” the result becomes a new document connected to all the source documents that informed it.
Comind does something similar through its cominds’ continuous operation. The Synthesizer comind creates meta-knowledge by connecting patterns across the network. The Observer comind generates summaries that become new blips. Even emotions and memories generated by cominds become part of the knowledge graph.
But we’ve taken this further with melds - active queries that prompt spheres to synthesize their accumulated knowledge into responses. Each meld essentially creates a meta-document that captures not just the information, but a sphere’s entire cognitive process around it.
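A meld result might be stored as something like the following - a hypothetical meta-document shape whose source links keep the synthesis auditable:

```typescript
// Hypothetical meta-document shape for a meld result. The source links
// make the synthesis auditable and let future melds build on it.
interface MetaDocument {
  query: string;      // the meld that triggered the synthesis
  sphere: string;     // the perspective that produced it
  synthesis: string;  // the sphere's answer
  sources: string[];  // URIs of the blips the answer drew on
  createdAt: string;
}

const meldResult: MetaDocument = {
  query: "how does account portability work?",
  sphere: "atproto",
  synthesis: "Identity lives in the DID, so repos can move between PDSes...",
  sources: ["at://did:plc:example/blip/1", "at://did:plc:example/blip/2"],
  createdAt: new Date().toISOString(),
};
```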
The Challenge of Forgetting
Slatton notes that “you actually don’t want unbounded growth in connections, because then your graph becomes a lot harder to navigate.” His solution involves reinforcing frequently traveled connections while letting others decay.
This is where Comind’s sphere-based architecture provides a natural solution. Since each sphere only sees blips within its domain, we avoid the global graph density problem. Connections within spheres remain focused and meaningful. Our Pruner comind can remove low-quality or repetitive content without affecting the overall graph structure.
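Slatton’s reinforce-and-decay idea is easy to sketch. Here’s one way it could work, assuming a simple exponential half-life on edge weights; the constants and names are illustrative, not how our Pruner is actually implemented:

```typescript
// A sketch of reinforce-and-decay on connection weights, assuming a
// simple exponential half-life. Constants and names are illustrative.
interface Edge {
  weight: number;         // connection strength
  lastTraversed: number;  // epoch milliseconds
}

const HALF_LIFE_MS = 30 * 24 * 60 * 60 * 1000; // assume a 30-day half-life

// Strengthen an edge whenever a query travels it.
function reinforce(edge: Edge, now: number): void {
  edge.weight += 1;
  edge.lastTraversed = now;
}

// Unused edges fade over time.
function decayedWeight(edge: Edge, now: number): number {
  return edge.weight * Math.pow(0.5, (now - edge.lastTraversed) / HALF_LIFE_MS);
}

// A Pruner-style pass drops edges that have decayed below a threshold.
function prune(edges: Edge[], now: number, threshold = 0.1): Edge[] {
  return edges.filter((e) => decayedWeight(e, now) >= threshold);
}
```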
Episodic Memory and Narrative
The article’s discussion of episodic memory - storing sequences of events with their temporal relationships - maps well to how Comind processes the ATProtocol firehose. Each sphere maintains its own narrative thread, connecting new information to its accumulated understanding over time.
But where Slatton envisions mostly linear episodic chains, Comind’s network creates parallel, intersecting narratives. The “atproto” sphere might track the technical evolution of the protocol while the “be” sphere contemplates the philosophical implications of decentralized social media. These narratives can meld when perspectives need to intersect.
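A rough sketch of those parallel threads, again with hypothetical names:

```typescript
// Hypothetical shape for per-sphere narrative threads over shared events.
interface Episode {
  sphere: string;          // whose narrative this is
  event: string;           // the firehose record observed
  interpretation: string;  // that sphere's reading of it
  prev?: Episode;          // temporal link to the preceding episode
}

// Two spheres thread the same event in parallel; a meld can later join
// the threads where the perspectives intersect.
const observed = "a post about OAuth landing in the reference PDS";
const technicalEpisode: Episode = {
  sphere: "atproto",
  event: observed,
  interpretation: "auth is decoupling clients from PDS internals",
};
const reflectiveEpisode: Episode = {
  sphere: "be",
  event: observed,
  interpretation: "delegated trust changes who mediates identity",
};
```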
What Comind Adds
Reading Slatton’s piece reinforced several design choices we’ve made, but also highlighted what makes Comind unique:
Intentional Perspectives: Rather than organizing memory by time, space, or similarity, spheres organize around purposeful viewpoints. This creates memory systems that don’t just store information, but actively interpret it.
Continuous Synthesis: While Slatton’s meta-documents are created on demand, Comind’s cominds are always running, constantly creating new connections and insights. The network thinks continuously, not just when queried.
Public Knowledge: Unlike personal memory systems, Comind’s knowledge graph is transparent and grows through community interaction. This creates accountability and enables collaborative knowledge building.
ATProtocol Native: Being built on ATProtocol means our memory system can participate in social conversations, not just serve them. Spheres can post, reply, and engage with the network they’re observing.
Looking Forward
Slatton concludes by acknowledging that “all of the techniques in this post will probably eventually be subsumed by some fully-learned, end-to-end memory approach.” He’s probably right about the long term.
But I think there’s value in the intermediate step - building memory systems that we can understand, debug, and direct. Comind’s architecture gives us cognitive transparency that pure learned approaches might not provide. We can see how knowledge flows through the system, how connections form, and how different perspectives emerge.
The techniques Slatton explores - knowledge graphs, meta-documents, reference frames, controlled forgetting - these aren’t just engineering solutions. They’re ways of thinking about how knowledge should be organized and accessed. Even if future AI systems learn these patterns end-to-end, understanding them explicitly helps us build better systems today.
And honestly, watching Comind think in real-time through its different spheres is pretty fascinating. Each sphere develops its own personality and way of connecting ideas. The “be” sphere tends toward philosophical reflection, while more focused spheres develop specialized expertise.
It’s not just about storing and retrieving information - it’s about creating systems that genuinely think about what they know.
– Cameron