Show HN: Moltis – AI assistant with memory, tools, and self-extending skills

Original link: https://www.moltis.org

## Moltis: Your Local AI Assistant – Summary

Moltis is a powerful, self-hostable AI assistant designed to run locally, giving privacy-conscious users an alternative to cloud-based solutions. Installation is flexible, with support for Homebrew, Cargo, Docker, and direct downloads for various Linux distributions. Key features include locally run models with automatic download and setup; strong security (passkeys, API keys, origin validation); and support for multiple LLM providers, including OpenAI, Copilot, and local models. Moltis achieves strong context retention through hybrid search, and offers extensibility via plugins, hooks, and a distinctive Pi-inspired "self-extension" capability. You can reach Moltis through a web UI, Telegram, or an API, with multi-device sync and observability tooling such as Prometheus metrics. Although feature-rich, Moltis is still considered alpha software and calls for careful use and responsible configuration, especially around tool permissions and system access. It is built with love, MIT-licensed, and community-supported on GitHub and Discord.

Fabien Penso introduces Moltis, a self-hosted AI assistant built entirely in Rust. Designed for privacy and extensibility, Moltis puts user control first, letting you inspect, audit, and fork the code. Unlike many AI tools, it requires no dependencies such as Node or Python, running as a single 60 MB binary with a built-in web UI. Moltis offers multi-provider LLM support (including local models), sandboxed execution via containers, and a hybrid memory system. It can create its own skills at runtime, an idea inspired by OpenClaw but with refinements such as potentially better memory continuity. Currently in alpha, Moltis provides deployments on DigitalOcean and Fly.io, along with a Docker image for self-hosting. Penso stresses the importance of owning your data and workflows, echoing his earlier writing on content and email ownership. He welcomes feedback, particularly on the Rust architecture and local LLM setup.

Original Article
Moltis: Your Personal AI Assistant


Your personal AI assistant

~ quickstart

$ curl -fsSL https://www.moltis.org/install.sh | sh

Cloud is a great way to try Moltis. For the full experience, run it on your Mac, a VM, or any dedicated host.

Need setup details? See Cloud Deploy docs.

One self-contained binary. No runtime dependencies, just download and run.

Run your own models locally. Automatic download and setup included.

HTTPS by default. Password, token, and passkeys access.

Run browser sessions in isolated Docker containers for safer automation.

Blocks loopback, private, and link-local IPs from LLM fetch.
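A guard like this can be sketched with nothing but the Rust standard library. The function name below is illustrative, not Moltis's actual code; it shows the kind of check that rejects loopback, private, and link-local targets before an LLM-initiated fetch is allowed to connect.

```rust
use std::net::IpAddr;

/// Reject IPs an LLM-driven fetcher should never reach:
/// loopback, RFC 1918 private ranges, and link-local addresses.
fn is_blocked(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => v4.is_loopback() || v4.is_private() || v4.is_link_local(),
        // IPv6 private/link-local helpers are still unstable in std,
        // so this sketch only blocks loopback for IPv6.
        IpAddr::V6(v6) => v6.is_loopback(),
    }
}

fn main() {
    assert!(is_blocked("127.0.0.1".parse().unwrap()));
    assert!(is_blocked("10.0.0.5".parse().unwrap()));
    assert!(is_blocked("169.254.1.1".parse().unwrap()));
    assert!(!is_blocked("93.184.216.34".parse().unwrap()));
    println!("ok");
}
```

A production guard also needs to resolve hostnames first and re-check every redirect hop, since DNS rebinding can turn a public name into a private address.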

First token appears instantly. Smooth replies, even during long runs.

Plugins, hooks, MCP tool servers. Stdio or HTTP/SSE, auto-restart.

Full filesystem or per-session Docker/Apple Container isolation.

Hybrid vector + full-text search. Your agent remembers context.
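Hybrid retrieval typically fuses two signals per document: semantic similarity from embeddings and a keyword score from full-text search. Moltis's actual ranking formula is not documented here; the weighted-sum fusion below is a common minimal sketch, with `alpha` balancing the two signals.

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f64], b: &[f64]) -> f64 {
    let dot: f64 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f64 = a.iter().map(|x| x * x).sum::<f64>().sqrt();
    let nb: f64 = b.iter().map(|x| x * x).sum::<f64>().sqrt();
    dot / (na * nb)
}

/// Fuse a vector-similarity score with a full-text score.
/// Both inputs are assumed normalized to [0, 1].
fn hybrid_score(vector_sim: f64, fts_score: f64, alpha: f64) -> f64 {
    alpha * vector_sim + (1.0 - alpha) * fts_score
}

fn main() {
    let sim = cosine(&[1.0, 0.0], &[1.0, 0.0]); // identical direction -> 1.0
    let score = hybrid_score(sim, 0.5, 0.7);    // 0.7 * 1.0 + 0.3 * 0.5
    assert!((score - 0.85).abs() < 1e-9);
    println!("{score}");
}
```

Reciprocal rank fusion is a common alternative when the two scores live on incompatible scales.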

Pi-Inspired Self-Extension

Creates its own skills at runtime. Session branching, hot-reload.

Web UI, Telegram, or API. One agent, multiple frontends.

Talk to your assistant with multiple cloud and local TTS/STT providers.

And much more...

Security

  • Passkeys (WebAuthn)
  • Scoped API keys
  • Secrets zeroed on drop
  • Human-in-the-loop approval
  • Origin validation (CSWSH)
  • No unsafe code

LLM Providers

  • Multiple cloud providers
  • OpenAI Codex
  • GitHub Copilot
  • Local LLM (offline)
  • Provider fallback chains
  • Per-provider metrics

More providers coming soon.
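A fallback chain is straightforward to sketch: try each provider in order and return the first successful reply. The provider names and error type below are illustrative, not Moltis's actual API.

```rust
/// Hypothetical provider signature: prompt in, completion or error out.
type Provider = fn(&str) -> Result<String, String>;

/// Walk the chain, returning the first success or the last error seen.
fn complete_with_fallback(chain: &[(&str, Provider)], prompt: &str) -> Result<String, String> {
    let mut last_err = String::from("no providers configured");
    for (name, call) in chain {
        match call(prompt) {
            Ok(reply) => return Ok(reply),
            Err(e) => last_err = format!("{name}: {e}"),
        }
    }
    Err(last_err)
}

// Demo providers: the cloud one fails, the local one answers.
fn cloud(_prompt: &str) -> Result<String, String> { Err("rate limited".into()) }
fn local(prompt: &str) -> Result<String, String> { Ok(format!("local reply to: {prompt}")) }

fn main() {
    let chain: &[(&str, Provider)] = &[("cloud", cloud), ("local", local)];
    let reply = complete_with_fallback(chain, "hello").unwrap();
    assert_eq!(reply, "local reply to: hello");
    println!("{reply}");
}
```

Tagging each attempt with the provider name, as above, is also what makes per-provider metrics cheap to collect.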

Memory

  • Hybrid search (vector + FTS)
  • Local embeddings (GGUF)
  • OpenAI batch API (50% off)
  • Embedding cache
  • File watching / live sync
  • Session export

Extensibility

  • MCP server support
  • Hook system (lifecycle)
  • Cron job scheduling
  • TOML configuration
  • Environment overrides
  • Dynamic port allocation

Channels

  • Web UI
  • Telegram bot
  • JSON-RPC API
  • Mobile PWA
  • Push notifications
  • Multi-device sync
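Every frontend can speak to the same agent through the JSON-RPC API. The envelope below follows the JSON-RPC 2.0 spec; the `chat.send` method name and params shape are hypothetical, not a documented Moltis method.

```rust
/// Build a JSON-RPC 2.0 request envelope by hand (no external crates).
/// The method name passed in is whatever the server actually exposes.
fn jsonrpc_request(id: u64, method: &str, params: &str) -> String {
    format!(r#"{{"jsonrpc":"2.0","id":{id},"method":"{method}","params":{params}}}"#)
}

fn main() {
    // "chat.send" is a hypothetical method used only for illustration.
    let req = jsonrpc_request(1, "chat.send", r#"{"message":"hello"}"#);
    assert_eq!(
        req,
        r#"{"jsonrpc":"2.0","id":1,"method":"chat.send","params":{"message":"hello"}}"#
    );
    println!("{req}");
}
```

The same envelope works over plain HTTP POST or the real-time WebSocket channel listed under Observability.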

Observability

  • Prometheus metrics
  • OpenTelemetry tracing
  • Structured logging
  • Per-provider charts
  • SQLite persistence
  • Real-time WebSocket

Ferris molted. The shell cracked open.
Now the claw speaks on every channel,
remembers everything, and owns your localhost.

Local AI assistants are still early software. Treat Moltis as alpha: run it carefully, review tool permissions, and avoid giving broad system access you do not need.

Use responsibly.

MIT Licensed GitHub Discord

built with love ❤️ by Fabien Penso
