Show HN: Hippo, biologically inspired memory for AI agents

Original link: https://github.com/kitfunso/hippo-memory

## Hippo: shared memory for AI agents

Hippo is a memory system designed to overcome the limitation of current AI agent "memory", which is essentially just archiving and searching everything. Inspired by the human hippocampus, Hippo focuses on *knowing what to forget*, producing long-term memory that is more efficient and more effective. It works with tools like Claude, Codex, and Cursor, solving the loss of context when switching between them. Hippo stores memories in a Git-trackable SQLite database with Markdown/YAML mirrors, and needs only Node.js 22.5+ (optional embeddings via `@xenova/transformers`).

**Key features include:** decay mechanics (errors stick, noise fades), confidence levels, conflict detection and resolution, multi-agent sharing with transfer scoring, and a working memory layer. Hippo learns from git commits, conversation transcripts, and user input, and offers commands for recall, import, and consolidation ("sleep"). It integrates with existing workflows via auto-installed hooks and native plugins for OpenClaw and Claude Code. Hippo does more than store; it *models* memory, improving agent performance by prioritizing relevant information and preventing context overload.

## Hippo: biologically inspired memory for AI

A new project called "Hippo" (github.com/kitfunso) aims to give AI agents biologically inspired memory. The author shared it on Hacker News, which sparked discussion about its implementation.

One commenter's key point was that Hippo should operate as a background skill of the agent, activated when the agent gets stuck, rather than as a separate process. They advocated a memory-decay system based on the agent's *runtime* rather than wall-clock time (suggesting a monotonic-clock approach).

The commenter also stressed the importance of **triggers** for memory recall, in particular tying memories to physical locations (such as file paths in a project) and using "compression" to generalize memories as the agent moves between locations. The author responded that they are actively adding these features.

Original post

The secret to good memory isn't remembering more. It's knowing what to forget.


Works with:  Claude Code, Codex, Cursor, OpenClaw, any CLI agent
Imports from: ChatGPT, Claude (CLAUDE.md), Cursor (.cursorrules), any markdown
Storage:     SQLite backbone + markdown/YAML mirrors. Git-trackable and human-readable.
Dependencies: Zero runtime deps. Requires Node.js 22.5+. Optional embeddings via @xenova/transformers.

AI agents forget everything between sessions. Existing solutions just save everything and search later. That's a filing cabinet, not a brain.

Your memories are also trapped. ChatGPT knows things Claude doesn't. Cursor rules don't travel to Codex. Switch tools and you start from zero.


  • Multi-tool developers. You use Claude Code on Monday, Cursor on Tuesday, Codex on Wednesday. Context doesn't carry over. Hippo is the shared memory layer across all of them.
  • Teams where agents repeat mistakes. The agent hit the same deployment bug last week. And the week before. Hippo's error memories and decay mechanics mean hard lessons stick and noise fades.
  • Anyone whose CLAUDE.md is a mess. Your instruction file grew to 400 lines of mixed rules, preferences, and stale workarounds. Hippo gives that structure: tags, confidence levels, automatic decay of outdated info.
  • People who want portable AI memory. No vendor lock-in. Markdown files in your repo. Import from ChatGPT, Claude, Cursor. Export by copying a folder.

npm install -g hippo-memory

hippo init
hippo remember "FRED cache silently dropped the tips_10y series" --tag error
hippo recall "data pipeline issues" --budget 2000

That's it. You have a memory system.

  • Auto-sleep on session exit. hippo hook install claude-code now installs a Stop hook in ~/.claude/settings.json so hippo sleep runs automatically when Claude Code exits. hippo init does this too when Claude Code is detected. No cron needed, no manual sleep.
  • Working memory layer (hippo wm push/read/clear/flush). Bounded buffer (max 20 per scope) with importance-based eviction. Current-state notes live separately from long-term memory.
  • Session handoffs (hippo handoff create/latest/show). Persist session summaries, next actions, and artifacts so successor sessions can resume without transcript archaeology.
  • Session lifecycle with explicit start/end events, fallback session IDs, and hippo session resume for continuity.
  • Explainable recall (hippo recall --why). See which terms matched, whether BM25 or embedding contributed, and the source bucket (layer, confidence, local/global).
  • hippo current show for compact current-state display (active task + recent session events), ready for agent injection.
  • SQLite lock hardening: busy_timeout=5000, synchronous=NORMAL, wal_autocheckpoint=100. Concurrent plugin calls no longer hit SQLITE_BUSY.
  • Consolidation batching: all writes/deletes happen in a single transaction instead of N open/close cycles.
  • --limit flag on hippo recall and hippo context to cap result count independently of token budget.
  • Plugin injection dedup guard prevents double context injection on reconnect.
  • Hybrid search blends BM25 keywords with cosine embedding similarity. Install @xenova/transformers, run hippo embed, recall quality jumps. Falls back to BM25 otherwise.
  • Schema acceleration auto-computes how well new memories fit existing patterns. Familiar memories consolidate faster; novel ones decay faster if unused.
  • Multi-agent shared memory with hippo share, hippo peers, and transfer scoring. Universal lessons travel between projects; project-specific config stays local.
  • Conflict resolution via hippo resolve <id> --keep <mem_id>. Closes the detect-inspect-resolve loop.
  • Agent eval benchmark validates the learning hypothesis: hippo agents drop from 78% trap rate to 14% over a 50-task sequence.

Zero-config agent integration

hippo init auto-detects your agent framework and wires itself in:

cd my-project
hippo init

# Initialized Hippo at /my-project
#    Directories: buffer/ episodic/ semantic/ conflicts/
#    Auto-installed claude-code hook in CLAUDE.md

If you have a CLAUDE.md, it patches it. AGENTS.md for Codex/OpenClaw. .cursorrules for Cursor. No manual hook install needed. Your agent starts using Hippo on its next session.

It also sets up a daily cron job (6:15am) that runs hippo learn --git and hippo sleep automatically. Memories get captured from your commits and consolidated every day without you thinking about it.

To skip: hippo init --no-hooks --no-schedule


Your memories shouldn't be locked inside one tool. Hippo pulls them in from anywhere.

# ChatGPT memory export
hippo import --chatgpt memories.json

# Claude's CLAUDE.md (skips existing hippo hook blocks)
hippo import --claude CLAUDE.md

# Cursor rules
hippo import --cursor .cursorrules

# Any markdown file (headings become tags)
hippo import --markdown MEMORY.md

# Any text file
hippo import --file notes.txt

All import commands support --dry-run (preview without writing), --global (write to ~/.hippo/), and --tag (add extra tags). Duplicates are detected and skipped automatically.

Extract memories from raw conversation text. No LLM needed: pattern-based heuristics find decisions, rules, errors, and preferences.

# Pipe a conversation in
cat session.log | hippo capture --stdin

# Or point at a file
hippo capture --file conversation.md

# Preview first
hippo capture --file conversation.md --dry-run
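
Under the hood, "pattern-based heuristics" can be as simple as a handful of regexes run over the transcript. A minimal TypeScript sketch, assuming hypothetical patterns and category names (not Hippo's actual rules):

// capture-sketch.ts: hypothetical extraction heuristics, not Hippo's real ones
type Category = "decision" | "rule" | "error" | "preference";

const PATTERNS: Array<[Category, RegExp]> = [
  ["decision",   /\b(decided to|we will|going with)\b/i],
  ["rule",       /\b(always|never|must)\b/i],
  ["error",      /\b(failed|error|broke|regression)\b/i],
  ["preference", /\b(prefer|rather than|instead of)\b/i],
];

// Scan a conversation line by line and return candidate memories with a tag.
export function extractCandidates(transcript: string): Array<{ tag: Category; text: string }> {
  const found: Array<{ tag: Category; text: string }> = [];
  for (const line of transcript.split("\n")) {
    const text = line.trim();
    if (text.length < 12) continue; // skip noise
    for (const [tag, re] of PATTERNS) {
      if (re.test(text)) {
        found.push({ tag, text });
        break; // first matching category wins
      }
    }
  }
  return found;
}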

Long-running work needs short-term continuity, not just long-term memory. Hippo can persist the current in-flight task so a later continue has something concrete to recover.

hippo snapshot save \
  --task "Ship SQLite backbone" \
  --summary "Tests/build/smoke are green, next slice is active-session recovery" \
  --next-step "Implement active snapshot retrieval in context output"

hippo snapshot show
hippo context --auto --budget 1500
hippo snapshot clear

hippo context --auto includes the active task snapshot before long-term memories, so agents get both the immediate thread and the deeper lessons.

Manual snapshots are useful, but real work also needs a breadcrumb trail. Hippo can now store short session events and link them to the active snapshot so context output shows the latest steps, not just the last summary.

hippo session log \
  --id sess_20260326 \
  --task "Ship continuity" \
  --type progress \
  --content "Schema migration is done, next step is CLI wiring"

hippo snapshot save \
  --task "Ship continuity" \
  --summary "Structured session events are flowing" \
  --next-step "Surface them in framework hooks" \
  --session sess_20260326

hippo session show --id sess_20260326
hippo context --auto --budget 1500

Hippo mirrors the latest trail to .hippo/buffer/recent-session.md so you can inspect the short-term thread without opening SQLite.

When you're done for the day (or switching to another agent), create a handoff so the next session knows exactly where to pick up:

hippo handoff create \
  --summary "Finished schema migration, tests green" \
  --next "Wire handoff injection into context output" \
  --session sess_20260403 \
  --artifact src/db.ts

hippo handoff latest              # show the most recent handoff
hippo handoff show 3              # show a specific handoff by ID
hippo session resume              # re-inject latest handoff as context

Working memory is a bounded scratchpad for current-state notes. It's separate from long-term memory and gets cleared between sessions.

hippo wm push --scope repo \
  --content "Investigating flaky test in store.test.ts, line 42" \
  --importance 0.9

hippo wm read --scope repo        # show current working notes
hippo wm clear --scope repo       # wipe the scratchpad
hippo wm flush --scope repo       # flush on session end

The buffer holds a maximum of 20 entries per scope. When full, the lowest-importance entry is evicted.
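
A minimal sketch of a bounded, importance-evicting buffer like the one described above (TypeScript; the names and in-memory storage are assumptions, not Hippo's SQLite-backed implementation):

// wm-sketch.ts: per-scope working memory capped at 20 entries
interface WmEntry { content: string; importance: number; createdAt: number; }

const MAX_ENTRIES = 20;
const buffers = new Map<string, WmEntry[]>(); // scope -> entries

export function wmPush(scope: string, content: string, importance: number): void {
  const buf = buffers.get(scope) ?? [];
  buf.push({ content, importance, createdAt: Date.now() });
  if (buf.length > MAX_ENTRIES) {
    // When full, evict the entry with the lowest importance.
    const victim = buf.reduce((min, e) => (e.importance < min.importance ? e : min));
    buf.splice(buf.indexOf(victim), 1);
  }
  buffers.set(scope, buf);
}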

See why a memory was returned:

hippo recall "data pipeline" --why --limit 5

# --- mem_a1b2c3 [episodic] [observed] [local] score=0.847
#     BM25: matched [data, pipeline]; cosine: 0.82
#     ...memory content...
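
How the two signals combine isn't spelled out here; a plausible sketch of a hybrid score, assuming a simple 50/50 blend of a normalized BM25 score and cosine similarity (the real weighting may differ):

// hybrid-score-sketch.ts: assumed blend of keyword and embedding relevance
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// bm25 is assumed to be normalized to [0, 1] before blending.
export function hybridScore(bm25: number, queryVec?: number[], memVec?: number[]): number {
  if (!queryVec || !memVec) return bm25; // no embeddings installed: BM25 only
  return 0.5 * bm25 + 0.5 * cosine(queryVec, memVec);
}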

Input enters the buffer. Important things get encoded into episodic memory. During "sleep," repeated episodes compress into semantic patterns. Weak memories decay and disappear.

New information
      |
      v
+-----------+
|  Buffer   |  Working memory. Current session only. No decay.
| (session) |
+-----+-----+
      |  encoded (tags, strength, half-life assigned)
      v
+-----------+
|  Episodic |  Timestamped memories. Decay by default.
|   Store   |  Retrieval strengthens. Errors stick longer.
+-----+-----+
      |  consolidation (hippo sleep)
      v
+-----------+
|  Semantic |  Compressed patterns. Stable. Schema-aware.
|   Store   |  Extracted from repeated episodes.
+-----------+

         hippo sleep: decay + replay + merge

Every memory has a half-life. 7 days by default. Persistence is earned.

hippo remember "always check cache contents after refresh"
# stored with half_life: 7d, strength: 1.0

# 14 days later with no retrieval:
hippo inspect mem_a1b2c3
# strength: 0.25  (decayed by 2 half-lives)
# at risk of removal on next sleep

Use it or lose it. Each recall boosts the half-life by 2 days.

hippo recall "cache issues"
# finds mem_a1b2c3, retrieval_count: 1 -> 2
# half_life extended: 7d -> 9d
# strength recalculated from retrieval timestamp

hippo recall "cache issues"   # again next week
# retrieval_count: 2 -> 3
# half_life: 9d -> 11d
# this memory is learning to survive
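
The numbers above are consistent with plain exponential decay plus a fixed boost per recall. A sketch of that arithmetic (assumed formula; field names are illustrative):

// decay-sketch.ts: exponential decay with retrieval strengthening
const DAY_MS = 24 * 60 * 60 * 1000;

// Strength halves every halfLifeDays since the last retrieval (or creation).
export function strength(lastRetrievedMs: number, halfLifeDays: number, nowMs = Date.now()): number {
  const elapsedDays = (nowMs - lastRetrievedMs) / DAY_MS;
  return Math.pow(0.5, elapsedDays / halfLifeDays);
}

// Each recall extends the half-life by 2 days and resets the decay clock.
export function onRecall(mem: { halfLifeDays: number; lastRetrievedMs: number }): void {
  mem.halfLifeDays += 2;
  mem.lastRetrievedMs = Date.now();
}

// 14 idle days at the default 7-day half-life: 0.5^(14/7) = 0.25, matching the example above.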

Tag a memory as an error and it gets 2x the half-life automatically.

hippo remember "deployment failed: forgot to run migrations" --error
# half_life: 14d instead of 7d
# emotional_valence: negative
# strength formula applies 1.5x multiplier

# production incidents don't fade quietly

Every memory carries a confidence level: verified, observed, inferred, or stale. This tells agents how much to trust what they're reading.

hippo remember "API rate limit is 100/min" --verified
hippo remember "deploy usually takes ~3 min" --observed
hippo remember "the flaky test might be a race condition" --inferred

When context is generated, confidence is shown inline:

[verified] API rate limit is 100/min per the docs
[observed] Deploy usually takes ~3 min
[inferred] The flaky test might be a race condition

Agents can see at a glance what's established fact vs. a pattern worth questioning.

Memories unretrieved for 30+ days are automatically marked stale during the next hippo sleep. If one gets recalled again, Hippo wakes it back up to observed so it can earn trust again instead of staying permanently stale.
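
That lifecycle rule is small enough to state directly; a sketch (the 30-day threshold is from the text, field names are assumptions):

// stale-sketch.ts: mark long-unretrieved memories stale; recall restores them to "observed"
const STALE_AFTER_DAYS = 30;
const DAY_MS = 24 * 60 * 60 * 1000;

function markStaleDuringSleep(mem: { confidence: string; lastRetrievedMs: number }): void {
  const idleDays = (Date.now() - mem.lastRetrievedMs) / DAY_MS;
  if (idleDays >= STALE_AFTER_DAYS) mem.confidence = "stale";
}

function onRecallWake(mem: { confidence: string }): void {
  if (mem.confidence === "stale") mem.confidence = "observed"; // earns trust again
}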

Hippo now detects obvious contradictions between overlapping memories and keeps them visible instead of silently letting both masquerade as truth.

hippo sleep       # refreshes open conflicts
hippo conflicts   # inspect them

Open conflicts are stored in SQLite, mirrored under .hippo/conflicts/, and linked back into each memory's conflicts_with field.


Memories aren't presented as bare assertions. By default, Hippo frames them as observations with dates, so agents treat them as context rather than commands.

hippo context --framing observe   # default
# Output: "Previously observed (2026-03-10): deploy takes ~3 min"

hippo context --framing suggest
# Output: "Consider: deploy takes ~3 min"

hippo context --framing assert
# Output: "Deploy takes ~3 min"

Three modes: observe (default), suggest, assert. Choose based on how directive you want the memory to be.


Run hippo sleep and episodes compress into patterns.

hippo sleep

# Running consolidation...
#
# Results:
#    Active memories:    23
#    Removed (decayed):   4
#    Merged episodic:     6
#    New semantic:        2

Three or more related episodes get merged into a single semantic memory. The originals decay. The pattern survives.
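
A rough sketch of that merge rule; grouping by shared tag is an assumption standing in for whatever "related" means internally:

// consolidate-sketch.ts: merge groups of 3+ related episodes into one semantic memory
interface Episode { id: string; text: string; tags: string[]; }

export function consolidate(episodes: Episode[]): Array<{ pattern: string; sources: string[] }> {
  const groups = new Map<string, Episode[]>();
  for (const ep of episodes) {
    const key = ep.tags[0] ?? "untagged"; // group by primary tag (assumed notion of "related")
    const group = groups.get(key) ?? [];
    group.push(ep);
    groups.set(key, group);
  }
  const semantic: Array<{ pattern: string; sources: string[] }> = [];
  for (const [tag, group] of groups) {
    if (group.length < 3) continue; // fewer than three episodes: leave them episodic
    semantic.push({
      pattern: `[${tag}] ${group.map(e => e.text).join(" / ")}`,
      sources: group.map(e => e.id),
    });
  }
  return semantic;
}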


Did the recalled memories actually help? Tell Hippo. It tightens the feedback loop.

hippo recall "why is the gold model broken"
# ... you read the memories and fix the bug ...

hippo outcome --good
# Applied positive outcome to 3 memories
# half_life +5d on each

hippo outcome --bad
# Applied negative outcome to 3 memories
# half_life -3d on each
# irrelevant memories decay faster

Recall only what fits. No context stuffing.

# fits within Claude's 2K token window for task context
hippo recall "deployment checklist" --budget 2000

# need more for a big task
hippo recall "full project history" --budget 8000

# machine-readable for programmatic use
hippo recall "api errors" --budget 1000 --json

Results are ranked by relevance * strength * recency. The highest-signal memories fill the budget first.
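
A sketch of that ranking and budget fill (scoring terms taken from the line above; token counting and field names are assumptions):

// recall-rank-sketch.ts: rank by relevance * strength * recency, fill a token budget greedily
interface Scored { text: string; relevance: number; strength: number; recency: number; tokens: number; }

export function fillBudget(candidates: Scored[], budgetTokens: number): Scored[] {
  const ranked = [...candidates].sort(
    (a, b) => b.relevance * b.strength * b.recency - a.relevance * a.strength * a.recency
  );
  const picked: Scored[] = [];
  let used = 0;
  for (const c of ranked) {
    if (used + c.tokens > budgetTokens) continue; // skip anything that would blow the budget
    picked.push(c);
    used += c.tokens;
  }
  return picked;
}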


Hippo can scan your commit history and extract lessons from fix/revert/bug commits automatically.

# Learn from the last 7 days of commits
hippo learn --git

# Learn from the last 30 days
hippo learn --git --days 30

# Scan multiple repos in one pass
hippo learn --git --repos "~/project-a,~/project-b,~/project-c"

The --repos flag accepts comma-separated paths. Hippo scans each repo's git log, extracts fix/revert/bug lessons, deduplicates against existing memories, and stores new ones. Pair with hippo sleep afterwards to consolidate.

Ideal for a weekly cron:

hippo learn --git --repos "~/repo1,~/repo2" --days 7
hippo sleep
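
As a concrete crontab entry this might look like the line below; the schedule, working directory, and PATH handling are up to you (cron needs to find the hippo binary):

# every Monday at 06:15; adjust paths to taste
15 6 * * 1 cd ~/repo1 && hippo learn --git --repos "~/repo1,~/repo2" --days 7 && hippo sleep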

Wrap any command with hippo watch to auto-learn from failures:

hippo watch "npm run build"
# if it fails, Hippo captures the error automatically
# next time an agent asks about build issues, the memory is there
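
A sketch of how a wrapper like this can work, spawning the command and piping a failure back through hippo remember --error; everything beyond those two CLI calls is an assumption:

// watch-sketch.ts: run a command, remember the failure if it exits non-zero
import { spawnSync } from "node:child_process";

export function watch(command: string): number {
  const run = spawnSync(command, { shell: true, encoding: "utf8" });
  if (run.status !== 0) {
    const lesson = `command failed: ${command}\n${(run.stderr ?? "").slice(0, 500)}`;
    // Store the failure as an error memory via the CLI itself.
    spawnSync("hippo", ["remember", lesson, "--error"], { stdio: "inherit" });
  }
  return run.status ?? 1;
}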

| Command | What it does |
|---|---|
| `hippo init` | Create .hippo/ + auto-install agent hooks |
| `hippo init --global` | Create global store at ~/.hippo/ |
| `hippo init --no-hooks` | Create .hippo/ without auto-installing hooks |
| `hippo remember "<text>"` | Store a memory |
| `hippo remember "<text>" --tag <t>` | Store with tag (repeatable) |
| `hippo remember "<text>" --error` | Store as error (2x half-life) |
| `hippo remember "<text>" --pin` | Store with no decay |
| `hippo remember "<text>" --verified` | Set confidence: verified (default) |
| `hippo remember "<text>" --observed` | Set confidence: observed |
| `hippo remember "<text>" --inferred` | Set confidence: inferred |
| `hippo remember "<text>" --global` | Store in global ~/.hippo/ store |
| `hippo recall "<query>"` | Retrieve relevant memories (local + global) |
| `hippo recall "<query>" --budget <n>` | Recall within token limit (default: 4000) |
| `hippo recall "<query>" --limit <n>` | Cap result count |
| `hippo recall "<query>" --why` | Show match reasons and source buckets |
| `hippo recall "<query>" --json` | Output as JSON |
| `hippo context --auto` | Smart context injection (auto-detects task from git) |
| `hippo context "<query>" --budget <n>` | Context injection with explicit query (default: 1500) |
| `hippo context --limit <n>` | Cap memory count in context |
| `hippo context --budget 0` | Skip entirely (zero token cost) |
| `hippo context --framing <mode>` | Framing: observe (default), suggest, assert |
| `hippo context --format <fmt>` | Output format: markdown (default) or json |
| `hippo import --chatgpt <path>` | Import from ChatGPT memory export (JSON or txt) |
| `hippo import --claude <path>` | Import from CLAUDE.md or Claude memory.json |
| `hippo import --cursor <path>` | Import from .cursorrules or .cursor/rules |
| `hippo import --markdown <path>` | Import from structured markdown (headings -> tags) |
| `hippo import --file <path>` | Import from any text file |
| `hippo import --dry-run` | Preview import without writing |
| `hippo import --global` | Write imported memories to ~/.hippo/ |
| `hippo capture --stdin` | Extract memories from piped conversation text |
| `hippo capture --file <path>` | Extract memories from a file |
| `hippo capture --dry-run` | Preview extraction without writing |
| `hippo sleep` | Run consolidation (decay + merge + compress) |
| `hippo sleep --dry-run` | Preview consolidation without writing |
| `hippo status` | Memory health: counts, strengths, last sleep |
| `hippo outcome --good` | Strengthen last recalled memories |
| `hippo outcome --bad` | Weaken last recalled memories |
| `hippo outcome --id <id> --good` | Target a specific memory |
| `hippo inspect <id>` | Full detail on one memory |
| `hippo forget <id>` | Force remove a memory |
| `hippo embed` | Embed all memories for semantic search |
| `hippo embed --status` | Show embedding coverage |
| `hippo watch "<command>"` | Run command, auto-learn from failures |
| `hippo learn --git` | Scan recent git commits for lessons |
| `hippo learn --git --days <n>` | Scan N days back (default: 7) |
| `hippo learn --git --repos <paths>` | Scan multiple repos (comma-separated) |
| `hippo conflicts` | List detected open memory conflicts |
| `hippo conflicts --json` | Output conflicts as JSON |
| `hippo resolve <id>` | Show both conflicting memories for comparison |
| `hippo resolve <id> --keep <mem_id>` | Resolve: keep winner, weaken loser |
| `hippo resolve <id> --keep <mem_id> --forget` | Resolve: keep winner, delete loser |
| `hippo promote <id>` | Copy a local memory to the global store |
| `hippo share <id>` | Share with attribution + transfer scoring |
| `hippo share <id> --force` | Share even if transfer score is low |
| `hippo share --auto` | Auto-share all high-scoring memories |
| `hippo share --auto --dry-run` | Preview what would be shared |
| `hippo peers` | List projects contributing to global store |
| `hippo sync` | Pull global memories into local project |
| `hippo hook list` | Show available framework hooks |
| `hippo hook install <target>` | Install hook (claude-code also adds Stop hook for auto-sleep) |
| `hippo hook uninstall <target>` | Remove hook |
| `hippo handoff create --summary "..."` | Create a session handoff |
| `hippo handoff latest` | Show the most recent handoff |
| `hippo handoff show <id>` | Show a specific handoff by ID |
| `hippo session latest` | Show latest task snapshot + events |
| `hippo session resume` | Re-inject latest handoff as context |
| `hippo current show` | Compact current state (task + session events) |
| `hippo wm push --scope <s> --content "..."` | Push to working memory |
| `hippo wm read --scope <s>` | Read working memory entries |
| `hippo wm clear --scope <s>` | Clear working memory |
| `hippo wm flush --scope <s>` | Flush working memory (session end) |
| `hippo dashboard` | Open web dashboard at localhost:3333 |
| `hippo dashboard --port <n>` | Use custom port |
| `hippo mcp` | Start MCP server (stdio transport) |

Auto-install (recommended)

hippo init detects your agent framework and patches the right config file automatically:

| Framework | Detected by | Patches |
|---|---|---|
| Claude Code | CLAUDE.md or .claude/settings.json | CLAUDE.md + Stop hook in settings.json |
| Codex | AGENTS.md or .codex | AGENTS.md |
| Cursor | .cursorrules or .cursor/rules | .cursorrules |
| OpenClaw | .openclaw or AGENTS.md | AGENTS.md |

No extra commands needed. Just hippo init and your agent knows about Hippo.

If you prefer explicit control:

hippo hook install claude-code   # patches CLAUDE.md + adds Stop hook to settings.json
hippo hook install codex         # patches AGENTS.md
hippo hook install cursor        # patches .cursorrules
hippo hook install openclaw      # patches AGENTS.md

This adds a <!-- hippo:start --> ... <!-- hippo:end --> block that tells the agent to:

  1. Run hippo context --auto --budget 1500 at session start
  2. Run hippo remember "<lesson>" --error on errors
  3. Run hippo outcome --good on completion

For Claude Code, it also adds a Stop hook to ~/.claude/settings.json so hippo sleep runs automatically when the session exits.
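
The resulting entry in ~/.claude/settings.json looks roughly like this (a sketch of Claude Code's hook format; the exact block Hippo writes may differ):

{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "hippo sleep" }
        ]
      }
    ]
  }
}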

To remove: hippo hook uninstall claude-code

What the hook adds (Claude Code example)

## Project Memory (Hippo)

Before starting work, load relevant context:
hippo context --auto --budget 1500

When you hit an error or discover a gotcha:
hippo remember "<what went wrong and why>" --error

After completing work successfully:
hippo outcome --good

For any MCP-compatible client (Cursor, Windsurf, Cline, Claude Desktop):

hippo mcp   # starts MCP server over stdio

Add to your MCP config (e.g. .cursor/mcp.json or claude_desktop_config.json):

{
  "mcpServers": {
    "hippo-memory": {
      "command": "hippo",
      "args": ["mcp"]
    }
  }
}

Exposes tools: hippo_recall, hippo_remember, hippo_outcome, hippo_context, hippo_status, hippo_learn, hippo_wm_push.

OpenClaw: native plugin with auto-context injection, workspace-aware memory lookup, and tool hooks for auto-learn / auto-sleep.

openclaw plugins install hippo-memory
openclaw plugins enable hippo-memory

Plugin docs: extensions/openclaw-plugin/. Integration guide: integrations/openclaw.md.

Claude Code: plugin with SessionStart/Stop hooks and error auto-capture. See extensions/claude-code-plugin/.

Full integration details: integrations/


Hippo is modeled on seven properties of the human hippocampus. Not metaphorically. Literally.

Why two stores? The brain uses a fast hippocampal buffer + a slow neocortical store (Complementary Learning Systems theory, McClelland et al. 1995). If the neocortex learned fast, new information would overwrite old knowledge. The buffer absorbs new episodes; the neocortex extracts patterns over time.

Why does decay help? New neurons born in the dentate gyrus actively disrupt old memory traces (Frankland et al. 2013). This is adaptive: it reduces interference from outdated information. Forgetting isn't failure. It's maintenance.

Why do errors stick? The amygdala modulates hippocampal consolidation based on emotional significance. Fear and error signals boost encoding. Your first production incident is burned into memory. Your 200th uneventful deploy isn't.

Why does retrieval strengthen? Recalled memories undergo "reconsolidation" (Nader et al. 2000). The act of retrieval destabilizes the trace, then re-encodes it stronger. This is the testing effect. Hippo implements it mechanically via the half-life extension on recall.

Why does sleep consolidate? During sleep, the hippocampus replays compressed versions of recent episodes and "teaches" the neocortex by repeatedly activating the same patterns. Hippo's sleep command runs this as a deliberate consolidation pass.

The 7 mechanisms in full: PLAN.md#core-principles

For how these mechanisms connect to LLM training, continual learning, and open research problems: RESEARCH.md


| Feature | Hippo | Mem0 | Basic Memory | Claude-Mem |
|---|---|---|---|---|
| Decay by default | Yes | No | No | No |
| Retrieval strengthening | Yes | No | No | No |
| Hybrid search (BM25 + embeddings) | Yes | Embeddings only | No | No |
| Schema acceleration | Yes | No | No | No |
| Conflict detection + resolution | Yes | No | No | No |
| Multi-agent shared memory | Yes | No | No | No |
| Transfer scoring | Yes | No | No | No |
| Outcome tracking | Yes | No | No | No |
| Confidence tiers | Yes | No | No | No |
| Cross-tool import | Yes | No | No | No |
| Conversation capture | Yes | No | No | No |
| Auto-hook install | Yes | No | No | No |
| MCP server | Yes | No | No | No |
| Native plugins | OpenClaw + Claude Code | No | No | No |
| Multi-repo git learn | Yes | No | No | No |
| Zero dependencies | Yes | No | No | No |
| Git-friendly | Yes | No | Yes | No |
| Framework agnostic | Yes | Partial | Yes | No |

Mem0, Basic Memory, and Claude-Mem all implement "save everything, search later." Hippo implements all 7 hippocampal mechanisms: two-speed storage, decay, retrieval strengthening, schema acceleration, conflict detection, multi-agent transfer, and explicit working memory. It's the only tool that models which memories are worth keeping.


Issues and PRs welcome. Before contributing, run hippo status in the repo root to see the project's own memory.

The interesting problems:

  • Better consolidation heuristics (LLM-powered merge vs current text overlap)
  • Web UI / dashboard for visualizing decay curves and memory health
  • Optimal decay parameter tuning from real usage data
  • Cross-agent transfer learning evaluation

MIT
