Brain-Inspired Multi-Agent Architecture
for "Spike of Insight"

Retrieval is solved. Adaptation is missing.
geDIG gives RAG an autonomic nervous system for "when to learn and when to forget", controlled by a single gauge:
$\mathcal{F} = \Delta EPC - \lambda\,\Delta IG$

Plain English: a single scalar that decides, "Does this new fact deserve a permanent place in the graph?"
$\Delta EPC \approx$ "Rewiring Cost", $\Delta IG \approx$ "Path Shortening"
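Read operationally, the gauge is just a scalar test: compute $\mathcal{F}$ for a candidate update and accept it only when it falls below a threshold $\theta$. A minimal sketch (function names and the default $\lambda$, $\theta$ values are illustrative assumptions, not the repository's API; $\theta = 0.30$ matches the demo threshold further down):

```python
# Minimal sketch of the gauge decision (illustrative; names and defaults are
# assumptions, not the InsightSpike-AI API).

def gauge(delta_epc: float, delta_ig: float, lam: float = 1.0) -> float:
    """F = dEPC - lambda * dIG: structural cost minus weighted information gain."""
    return delta_epc - lam * delta_ig

def should_integrate(delta_epc: float, delta_ig: float,
                     lam: float = 1.0, theta: float = 0.30) -> bool:
    """Accept a candidate fact only when the gauge falls below the threshold."""
    return gauge(delta_epc, delta_ig, lam) < theta

# Cheap rewiring with a large path-shortening gain -> accept.
print(should_integrate(delta_epc=0.2, delta_ig=0.5))   # True
# Expensive rewiring with little gain -> reject.
print(should_integrate(delta_epc=0.8, delta_ig=0.1))   # False
```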
Benchmark Result: On the 25×25 maze + RAG benchmark, geDIG reduced redundant exploration by 40% while keeping FMR below 2%.

Key quantities: $\Delta EPC$ (structure cost), $\Delta H$ (entropy), $\Delta SP$ (shortcuts), and the FEP-MDL bridge.

Visualizing "The Pulse"

Real-time simulation of the gauge $\mathcal{F}$ as the graph grows.
Click anywhere in the graph to inject a new query node and see if it triggers an Insight (DG) or Ambiguity (AG).
AG (Ambiguity Gauge): 0-hop error (High Cost).
DG (Discovery Gauge): Multi-hop shortcut (Insight Found).

Observation Guide:
  • Click near a cluster: $\Delta EPC$ is low, but $\Delta IG$ is also low. $\mathcal{F}$ stays high (Reject).
  • Click between clusters: if a shortcut is found, $\Delta SP$ spikes (negative cost), driving $\mathcal{F} < \theta$ (Insight!); a toy sketch of this effect follows the list.
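A minimal networkx sketch of that shortcut effect (purely illustrative; the demo and the real gauge use their own graph representation and normalization):

```python
# Toy sketch of the "shortcut" effect: bridging two distant regions of a graph
# shortens average path length, which is the dSP-style gain rewarded above.
import networkx as nx

def avg_shortest_path(g: nx.Graph) -> float:
    """Mean shortest-path length over all connected node pairs."""
    total, pairs = 0, 0
    for src, dists in nx.all_pairs_shortest_path_length(g):
        for dst, d in dists.items():
            if src != dst:
                total += d
                pairs += 1
    return total / pairs

# A chain of base nodes (two "clusters" joined by a long path).
g = nx.path_graph(8)
before = avg_shortest_path(g)

# Inject a query node that bridges the two distant ends of the chain.
g.add_node("q")
g.add_edge("q", 0)
g.add_edge("q", 7)
after = avg_shortest_path(g)

delta_sp = before - after  # positive: paths got shorter (insight-like shortcut)
print(f"dSP = {delta_sp:.2f}")
```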
Demo legend: Base Node, Query, 1-hop (Ego), 2-hop+ (Insight). The gauge starts in IDLE until you click to add a node.

Gauge Telemetry

The panel tracks Structural Cost ($\Delta EPC$, normalized wiring cost), Information Gain ($\Delta IG$), Entropy ($\Delta H$), Shortcuts ($\Delta SP$), and the Total Gauge ($\mathcal{F}$) against the acceptance threshold ($\mathcal{F} < 0.30$), plus a manual override control.

Conceptual Architecture

Most RAG systems only optimize "what to retrieve". geDIG optimizes "when to integrate".
This adds a metacognitive layer to the AI's memory system.

Static RAG: Always Retrieve $\to$ Always Append $\to$ Infinite Growth / Pollution.
geDIG: LLM $\to$ Retriever $\to$ Gauge $\mathcal{F}$ $\to$ [Accept / Reject].
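In code terms the difference is a single gate between retrieval and the memory write. A minimal sketch, assuming each retrieved candidate carries cost/gain estimates (all names below are placeholders rather than the InsightSpike-AI API):

```python
# Sketch of the geDIG control loop: retrieve as usual, but gate integration
# through the gauge instead of appending everything.
from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    delta_epc: float  # estimated rewiring cost of adding this fact
    delta_ig: float   # estimated information gain (entropy drop + shortcuts)

def integrate_step(candidates: list[Candidate], graph: list[str],
                   lam: float = 1.0, theta: float = 0.30) -> None:
    """Static RAG appends every candidate; geDIG appends only what passes the gauge."""
    for c in candidates:
        f = c.delta_epc - lam * c.delta_ig
        if f < theta:
            graph.append(c.text)      # Accept: permanent place in the graph
        # else: Reject -- the fact may be used transiently but is never stored

graph: list[str] = []
integrate_step([Candidate("bridges two topics", 0.2, 0.6),
                Candidate("near-duplicate fact", 0.4, 0.05)], graph)
print(graph)  # only the bridging fact survives
```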
01. THE PROBLEM

Blind Accumulation

Static RAG systems lack a criterion for rejection. They accumulate noise, redundancy, and contradictions, degrading performance over time (the "context window" trap).

02. THE SOLUTION

One-Gauge Control ($\mathcal{F}$)

We unified Structural Cost (Graph Edit Distance) and Information Gain (Entropy + Shortcuts) into a single scalar.

  • $\Delta EPC$: Cost of wiring. Penalizes complexity.
  • $\Delta H$: Entropy reduction. Rewards order.
  • $\Delta SP$: Path shortening. Rewards insight (shortcuts).
$\mathcal{F} < \theta \Rightarrow$ Accept; otherwise Reject (one expanded form is sketched just below).
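Reading the bullets together, the gauge can be written in an expanded form. The split of $\Delta IG$ into an entropy term and a shortcut term, and the weights $\alpha, \beta$, are assumptions for illustration rather than the paper's exact definition:

$$\mathcal{F} \;=\; \underbrace{\Delta EPC}_{\text{wiring cost}} \;-\; \lambda\,\Delta IG, \qquad \Delta IG \;\approx\; \alpha\,\Delta H + \beta\,\Delta SP, \qquad \mathcal{F} < \theta \;\Rightarrow\; \text{Accept}.$$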

FEP-MDL Bridge
0-hop (AG): ambiguity / error → FEP
Multi-hop (DG): compression / insight → MDL
"Operational Correspondence"
03. THEORETICAL BACKBONE

Bridging FEP & MDL

geDIG provides an operational correspondence between the Free Energy Principle (minimizing surprise) and Minimum Description Length (maximizing compression).

0-hop detects immediate prediction error (FEP), while Multi-hop validates global compression (MDL).

*For readers familiar with FEP/MDL: This is an operational analogy, not a formal proof of equivalence.
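As a toy illustration of that split (names, thresholds, and the graph representation are assumptions, not the repository's implementation), the two checks might look like:

```python
# Toy illustration of the two gauges; names and thresholds are assumptions.
import networkx as nx

def ambiguity_gauge(graph: nx.Graph, query: str) -> bool:
    """AG (0-hop): flags an immediate prediction error, i.e. the query has no
    direct attachment point in the existing graph (FEP reading)."""
    return query not in graph

def discovery_gauge(path_len_before: float, path_len_after: float,
                    min_gain: float = 1.0) -> bool:
    """DG (multi-hop): flags global compression, i.e. integrating the query
    shortens multi-hop paths by at least `min_gain` (MDL reading)."""
    return (path_len_before - path_len_after) >= min_gain
```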

Downloads, Code & Reproducibility

Draft papers, code entrypoints, and maze reproduction commands.

For Researchers

Research Paper: "geDIG: Gauge what Knowledge Graph needs" (v5.0 draft).

For Engineers

Source Code: miyauchikazuyoshi/InsightSpike-AI (Brain-Inspired Multi-Agent Architecture for "Spike of Insight").

Links: View Repository · Browse Experiments & Logs.

CLI / Quick Start

Quick start: python examples/public_quick_start.py

CLI: python -m insightspike.cli.spike --help

Smoke tests: make codex-smoke

Reproduce (Maze 25×25, 500 steps)

Phase-1 PoC: Maze + RAG under equal resources.

L3 batch (60 seeds):

python scripts/run_maze_batch_and_update.py --mode l3 --seeds 60 --workers 4 --update-tex

Eval batch (60 seeds):

python scripts/run_maze_batch_and_update.py --mode eval --seeds 60 --workers 4 --update-tex

Aggregates land in docs/paper/data/; the 25×25 table updates automatically.

Call for arXiv Endorsement / Reviewers

We are seeking an arXiv endorsement for cs.AI or cs.LG to publish our v5 paper.

We’re also looking for collaborators on:

  • Information thermodynamics / FEP:
    Help formalize the free-energy mapping and check for missing assumptions.
  • Graph RAG / Multi-hop:
    Stress-test geDIG on your existing GraphRAG benchmarks.
  • Phase-2 (Offline Rewiring):
    Co-design offline "sleep phase" rewiring experiments.

How to engage: open an Issue with the "Review" label, submit small-fix PRs, or DM on X (Twitter): @kazuyoshim5436.

See also: geDIG spec, Phase‑1 (maze & RAG), Trace a spike.

Citation (BibTeX)

@article{miyauchi2025gedig,
  title   = {geDIG: Gauge what Knowledge Graph needs},
  author  = {Miyauchi, Kazuyoshi},
  year    = {2025},
  journal = {GitHub repository},
  url     = {https://github.com/miyauchikazuyoshi/InsightSpike-AI}
}