Walk through a real conversation and see exactly what happens at every stage — from the words someone says, to the memories that form, to the moment those memories surface weeks later in a completely different context.
Every conversation passes through six stages. Each one transforms raw text into something deeper.
After each conversation ends, an LLM-based extractor parses the dialogue into structured memory nodes. Here is a concrete example.
Three weeks pass. Then the user says something that activates the retrieval engine. Here is exactly what happens.
The message "I handed in my resignation letter" is embedded and searched against all stored memory vectors.
Starting from the semantically matched nodes, the engine walks the graph to find associatively connected memories — things that aren't semantically similar to "resignation" but are meaningfully related.
All candidates from both paths are deduplicated and scored with the retrieval formula:
score = 0.3 × salience + 0.3 × similarity + 0.2 × recency + 0.2 × proximity
The top fragments are selected within the retrieval budget (max 10 fragments, 2,000 tokens).
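The scoring and selection steps above can be sketched in a few lines. This is a minimal illustration, assuming fragments carry the four scores plus a token count; the field names and the greedy budget strategy are assumptions, not Socius's actual implementation:

```python
# Sketch of retrieval scoring and budget selection. Weights come from
# the formula above; field names and token counts are illustrative.
def score(frag):
    return (0.3 * frag["salience"] + 0.3 * frag["similarity"]
            + 0.2 * frag["recency"] + 0.2 * frag["proximity"])

def select_within_budget(candidates, max_fragments=10, max_tokens=2000):
    """Greedily take the highest-scoring fragments that fit the budget."""
    chosen, used = [], 0
    for frag in sorted(candidates, key=score, reverse=True):
        if len(chosen) == max_fragments:
            break
        if used + frag["tokens"] > max_tokens:
            continue  # skip fragments that would blow the token budget
        chosen.append(frag)
        used += frag["tokens"]
    return chosen
```

A fragment with high salience and similarity wins even if it is older; recency and proximity act as tiebreakers at 0.2 weight each.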
The top-ranked fragments are sent to the reconstructor, which weaves them into a natural recollection — as if the companion is remembering, not listing data.
The reconstructed memories are injected into the system prompt. The companion responds naturally, weaving in what it remembers. Highlighted text shows which parts come from retrieved memories.
Memories are not static. Three consolidation cycles run in the background — strengthening what matters, merging what overlaps, and letting go of what doesn't.
The extractor runs immediately after the conversation ends. Seven nodes are created in the graph: two fragments, one person, one entity, one belief, one goal, and two emotions. Each gets an embedding vector and initial salience scores.
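One plausible shape for the extractor's output is sketched below. The field names and node layout are hypothetical, not Socius's actual schema; the content echoes the example conversation:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    # kind: "fragment", "person", "entity", "belief", "goal", or "emotion"
    kind: str
    content: str
    emotional_intensity: float = 0.0
    identity_relevance: float = 0.0
    embedding: list = field(default_factory=list)  # filled in by the embedder

# Two of the seven nodes from the example; the person, entity, belief,
# and emotion nodes would follow the same shape.
nodes = [
    MemoryNode("fragment", "Handed in resignation at the law firm",
               emotional_intensity=0.7, identity_relevance=0.9),
    MemoryNode("goal", "Build a career in photography"),
]
```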
The hourly micro cycle recalculates salience for recently changed nodes. The "leaving the law firm" fragment has high emotional_intensity (0.7) and identity_relevance (0.9) — these anchor it against decay. Meanwhile, co-accessed links between the fragments and the goal node are strengthened.
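Co-access strengthening could be a Hebbian-style update that nudges a link toward full strength each time two nodes are retrieved together. The learning rate and cap here are assumptions for illustration:

```python
def strengthen(weight, rate=0.1):
    """Nudge a co-accessed link toward 1.0; repeated co-access
    compounds, with diminishing returns as the link saturates."""
    return min(weight + rate * (1.0 - weight), 1.0)

# Fragment and goal node retrieved together three times:
w = 0.5
for _ in range(3):
    w = strengthen(w)
# w is now roughly 0.64 -- each co-access closes 10% of the remaining gap
```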
decay_resistance = min(0.9 × 3.0 + 0.7 × 2.0, 1.0) = 1.0
Maximum decay resistance. This memory will persist.
The nightly cycle detects patterns. If the user mentions photography in two more conversations, those fragments merge with the original, and a Concept node emerges: "Creative aspirations driving a career change." The concept links back to all contributing fragments.
The diary captures it: "They're building toward something. The photography isn't a hobby anymore — it's a bridge to a new life."
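The pattern-detection rule behind concept emergence might look like this sketch, assuming a threshold of three thematically related fragments (the threshold, tagging scheme, and field names are all guesses, not the actual nightly-cycle logic):

```python
def maybe_form_concept(fragments, theme, threshold=3):
    """If enough fragments share a theme, emerge a Concept node
    that links back to every contributing fragment."""
    matching = [f for f in fragments if theme in f["themes"]]
    if len(matching) < threshold:
        return None  # not yet a pattern, just isolated mentions
    return {
        "kind": "concept",
        "label": f"Recurring theme: {theme}",
        "links": [f["id"] for f in matching],
    }
```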
The weekly deep cycle evaluates all nodes. High-salience memories are preserved. Low-salience details are pruned into reveries — ghost traces with degraded embeddings and vague impressions.
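One way to read "ghost traces with degraded embeddings" is to blend the original vector with noise and shrink the content to a short impression. This is purely an illustrative interpretation, not the actual pruning code:

```python
import random

def to_reverie(node, blur=0.8, seed=None):
    """Replace a low-salience node with a vague trace: the embedding
    becomes mostly noise, and the content shrinks to an impression."""
    rng = random.Random(seed)
    degraded = [(1 - blur) * x + blur * rng.gauss(0, 1)
                for x in node["embedding"]]
    return {
        "kind": "reverie",
        "impression": node["content"][:40] + "...",
        "embedding": degraded,
    }
```

A reverie keeps the same dimensionality, so it can still surface in vector search, but only weakly and imprecisely, like a memory you can almost place.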
The salience formula determines which memories persist and which fade. It mirrors how human memory works: emotional intensity and identity relevance are the strongest anchors against forgetting.
// Decay resistance: identity fights forgetting 3x, emotion 2x
decay_resistance = min(identity * 3.0 + emotion * 2.0, 1.0)
// effective_decay is a retention factor: 1.0 means nothing is lost
effective_decay = base_decay + decay_resistance * (1.0 - base_decay)
// A memory with identity=0.9, emotion=0.7:
// decay_resistance = min(2.7 + 1.4, 1.0) = 1.0 (maximum)
// effective_decay = base + 1.0 * (1.0 - base) = 1.0
// Result: zero decay. The memory persists indefinitely.
// A memory with identity=0.1, emotion=0.05:
// decay_resistance = min(0.3 + 0.1, 1.0) = 0.4
// effective_decay = base + 0.4 * (1.0 - base)
// Result: significant decay. Will fade within months.
Human memory works in two modes: sometimes you find a memory because it's about the same thing you're thinking of (semantic), and sometimes because it's linked to something you're already thinking about (associative). Socius uses both.
"What memories match this query?"
Vector similarity search. The current conversation is embedded and compared against all stored memory embeddings. Finds memories that are about the same thing, even if they use different words.
"What's connected to those memories?"
Graph traversal from activated nodes. Follows relationship chains to discover memories that aren't semantically similar but are meaningfully linked through shared people, places, emotions, or concepts.
The two paths capture both direct relevance (semantic: "this is about the same topic") and associative context (graph: "these things are connected in the person's life"). This is how human memory works — you don't just remember facts, you remember the web of associations around them.
Socius also has consolidation cycles, nightly diaries, narrative identity, voice and video calls, and a rich personality system. All open source.