Retrieval is solved. Adaptation is missing.
geDIG adds an autonomic nervous system for "when to learn and when to forget"
to RAG, governed by a single gauge:
$\mathcal{F} = \Delta EPC - \lambda \Delta IG$
Plain English: a single scalar that answers one question: "Does this new
fact deserve a permanent place in the graph?"
$\Delta EPC \approx$ "Rewiring Cost", $\Delta IG \approx$ "Path Shortening"
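As a minimal sketch of how the gauge reads (the function names, the default $\lambda$, and the zero threshold are illustrative assumptions, not taken from the repo):

```python
def gauge(delta_epc: float, delta_ig: float, lam: float = 1.0) -> float:
    """F = ΔEPC - λ·ΔIG: structural rewiring cost minus weighted path shortening."""
    return delta_epc - lam * delta_ig

def should_integrate(delta_epc: float, delta_ig: float,
                     lam: float = 1.0, threshold: float = 0.0) -> bool:
    # Negative F: the information gain outweighs the structural edit cost,
    # so the new fact earns a permanent place in the graph.
    return gauge(delta_epc, delta_ig, lam) < threshold

# A cheap edit (ΔEPC = 0.2) that shortens many paths (ΔIG = 0.9) is accepted:
print(should_integrate(0.2, 0.9))  # True
```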
Benchmark Result: On 25x25 Maze + RAG, geDIG reduced redundant exploration by
40% while keeping FMR < 2%.
Real-time simulation of the gauge $\mathcal{F}$ as the graph grows.
Click anywhere in the graph to inject a new query node and see whether it triggers an Insight (DG)
or an Ambiguity (AG).
AG (Ambiguity Gauge): 0-hop error (High Cost).
DG (Discovery Gauge): Multi-hop shortcut (Insight Found).
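A hedged sketch of how the two gauge events could be told apart on a toy graph (the classification rule, helper names, and example graph below are illustrative, not the repo's implementation):

```python
from collections import deque

def hop_distance(graph: dict, src: str, dst: str):
    """BFS hop count between two nodes; None if dst is unreachable."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def classify_event(graph: dict, query: str, target: str) -> str:
    d = hop_distance(graph, query, target)
    if d is None:
        return "AG"  # no usable path: immediate prediction error, high cost
    if d >= 2:
        return "DG"  # multi-hop route exists: a shortcut edge is an insight
    return "neutral"

# Toy graph: A-B-C chain. A query at A reaching C via B is a multi-hop insight;
# a query for an unknown node Z is an immediate prediction error.
g = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
print(classify_event(g, "A", "C"))  # DG
print(classify_event(g, "A", "Z"))  # AG
```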
Most RAG systems only optimize "what to retrieve". geDIG optimizes "when to integrate".
This adds a metacognitive layer to the AI's memory system.
Static RAG systems lack a criterion for rejecting new content. They accumulate noise, redundancies, and contradictions, degrading performance over time (the "context window" trap).
We unified Structural Cost (Graph Edit Distance) and Information Gain (Entropy + Shortcuts) into a single scalar.
geDIG provides an operational correspondence between the Free Energy Principle (minimizing surprise) and Minimum Description Length (maximizing compression).
0-hop detects immediate prediction error (FEP), while Multi-hop validates global compression (MDL).
*For readers familiar with FEP/MDL: This is an operational analogy, not a formal proof of equivalence.
Draft papers, code entrypoints, and maze reproduction commands.
"geDIG: Gauge what Knowledge Graph needs"
v5.0 Draftmiyauchikazuyoshi/InsightSpike-AI
Brain-Inspired Multi-Agent Architecture for “Spike of Insight”
CLI / Quick Start
Quick start: python examples/public_quick_start.py
CLI: python -m insightspike.cli.spike --help
Smoke tests: make codex-smoke
Reproduce (Maze 25×25, 500 steps)
Phase-1 PoC: Maze + RAG under equal resources.
L3 batch (60 seeds):
python scripts/run_maze_batch_and_update.py --mode l3 --seeds 60 --workers 4 --update-tex
Eval batch (60 seeds):
python scripts/run_maze_batch_and_update.py --mode eval --seeds 60 --workers 4 --update-tex
Aggregates land in docs/paper/data/;
the 25×25 table updates automatically.
Call for arXiv Endorsement / Reviewers
We are seeking an arXiv endorsement for cs.AI or cs.LG to publish our v5 paper.
We’re also looking for collaborators on:
How to engage: open an Issue with a "Review" label, PR small fixes, or DM on X (Twitter): @kazuyoshim5436.
See also: geDIG spec, Phase‑1 (maze & RAG), Trace a spike.
Citation (BibTeX)