Aggregated File System (AGFS) – everything is a file. A modern tribute to the spirit of Plan 9.

Original link: https://github.com/c4pt0r/agfs

## Aggregated File System (AGFS): Unifying Services for AI Agents

AGFS proposes a new approach to managing the distributed backend services behind AI agents, inspired by Plan 9. Instead of requiring a separate API for each message queue, database, or object store, AGFS unifies them all as file system operations, so agents can interact with services using familiar commands such as `cat`, `echo`, and `ls`.

In essence, AGFS maps services onto a virtual file system: writing to a "file" enqueues a message, reading from a "file" retrieves data, and so on. This simplifies agent development, removes the need for piles of API documentation, and enables powerful composability through shell scripts and pipes.

AGFS can be installed via `curl` or Docker, and accessed through its shell (`agfs-shell`) or mounted as a native file system on Linux using FUSE. It supports key-value storage, message queues, databases (through a Plan 9 style interface), and agent heartbeat monitoring. Example use cases include AI agent task distribution and moving data between services. Scripts written in a shell-like syntax (`.as` files) can automate agent workflows further.

Hacker News: Aggregated File System (AGFS), a modern tribute to Plan 9 (github.com/c4pt0r), 3 points, posted by ngaut 1 hour ago.

## Original Article


Aggregated File System (Agent FS) - Everything is a file, in RESTful APIs. A tribute to Plan 9.

When coordinating multiple AI Agents in a distributed environment, agents need access to various backend services: message queues, databases, object storage, KV stores, and more. The traditional approach requires writing specialized API calls for each service, meaning agents must understand many different interfaces.

The core idea of AGFS is simple: unify all services as file system operations.

Traditional approach                    AGFS approach
------------------------------------------------------------------
redis.set("key", "value")          ->   echo "value" > /kvfs/keys/mykey
sqs.send_message(queue, msg)       ->   echo "msg" > /queuefs/q/enqueue
s3.put_object(bucket, key, data)   ->   cp file /s3fs/bucket/key
mysql.execute("SELECT ...")        ->   echo "SELECT ..." > /sqlfs2/.../query

The benefits:

  1. AI understands file operations natively - Any LLM knows how to use cat, echo, and ls. No API documentation needed.
  2. Unified interface - Operate all backends the same way, reducing cognitive overhead.
  3. Composability - Combine services using pipes, redirections, and other shell features.
  4. Easy debugging - Use ls and cat to inspect system state.
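The composability benefit extends beyond the shell: anything that can read and write files can chain AGFS "files" together the way a shell chains commands with pipes. The Python sketch below is illustrative only; the `compose` helper and the paths mentioned in the usage note are assumptions, not part of AGFS.

```python
def compose(src_path: str, dst_path: str, transform) -> int:
    """Read records from one file-backed service, transform them, and
    append the survivors to another -- the file-level analogue of
    `cat src | grep ... > dst`. Returns the number of lines written."""
    with open(src_path) as src:
        results = [transform(line) for line in src]
    kept = [line for line in results if line is not None]
    with open(dst_path, "a") as dst:
        dst.writelines(kept)
    return len(kept)

# Example transform: keep only lines mentioning "error"
keep_errors = lambda line: line if "error" in line else None
```

Against a FUSE mount this could be called as, say, `compose("/mnt/agfs/s3fs/logs/app.log", "/mnt/agfs/queuefs/alerts/enqueue", keep_errors)` (hypothetical paths) to mirror `cat ... | grep error > .../enqueue`.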

Install:

curl -fsSL https://raw.githubusercontent.com/c4pt0r/agfs/master/install.sh | sh

Or via Docker:

docker pull c4pt0r/agfs:latest

# Run the server (HTTP API only)
docker run -p 8080:8080 -e SKIP_FUSE_MOUNT=true c4pt0r/agfs:latest

# On Linux, you can enable FUSE mounting with additional privileges
docker run -p 8080:8080 \
  --device /dev/fuse \
  --cap-add SYS_ADMIN \
  --security-opt apparmor:unconfined \
  c4pt0r/agfs:latest

# Note: FUSE mounting in Docker is not supported on macOS

Connect using agfs-shell:

$ agfs
agfs:/> ls
queuefs/  kvfs/  s3fs/  sqlfs/  heartbeatfs/  memfs/  ...

AGFS can be mounted as a native filesystem on Linux using FUSE. This allows any program to interact with AGFS services using standard file operations, not just the agfs-shell.

# Mount AGFS to /mnt/agfs
agfs-fuse --agfs-server-url http://localhost:8080 --mount /mnt/agfs

# Now use standard tools
ls /mnt/agfs/kvfs/keys/
echo "hello" > /mnt/agfs/kvfs/keys/mykey
cat /mnt/agfs/queuefs/tasks/dequeue

This makes AGFS accessible to any application, script, or programming language that can read and write files.
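For example, a Python program needs no SDK to use the key-value store: plain file I/O is enough. The sketch below follows the `/kvfs/keys/` layout described later in this README; the `KV` class name and the configurable mount root are assumptions for illustration (the default `/mnt/agfs` matches the mount example above).

```python
import os

class KV:
    """Key-value access through plain file operations on an AGFS mount.
    The filename is the key; the file content is the value."""

    def __init__(self, mount_root: str = "/mnt/agfs"):
        self.dir = os.path.join(mount_root, "kvfs", "keys")

    def put(self, key: str, value: str) -> None:
        with open(os.path.join(self.dir, key), "w") as f:  # echo "value" > .../key
            f.write(value)

    def get(self, key: str) -> str:
        with open(os.path.join(self.dir, key)) as f:       # cat .../key
            return f.read()

    def keys(self) -> list:
        return os.listdir(self.dir)                        # ls .../keys/

    def delete(self, key: str) -> None:
        os.remove(os.path.join(self.dir, key))             # rm .../key
```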

See agfs-fuse/README.md for installation and usage.

The simplest key-value storage. Filename is the key, content is the value:

agfs:/> echo "world" > /kvfs/keys/hello      # write
agfs:/> cat /kvfs/keys/hello                  # read -> "world"
agfs:/> ls /kvfs/keys/                        # list all keys
hello
agfs:/> rm /kvfs/keys/hello                   # delete

A message queue is abstracted as a directory containing control files:

agfs:/> mkdir /queuefs/tasks             # create queue
agfs:/> ls /queuefs/tasks
enqueue  dequeue  peek  size  clear

agfs:/> echo "job1" > /queuefs/tasks/enqueue    # enqueue
019aa869-1a20-7ca6-a77a-b081e24c0593

agfs:/> cat /queuefs/tasks/size                 # check queue length
1

agfs:/> cat /queuefs/tasks/dequeue              # dequeue
{"id":"019aa869-...","data":"job1","timestamp":"2025-11-21T13:54:11Z"}

This pattern is ideal for AI Agent task distribution: one agent writes tasks to the queue, another agent reads and executes them.
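To make the pattern concrete without a running server, here is a small in-memory stand-in that mimics the queuefs semantics shown above (writes to `enqueue` append a message, reads of `dequeue` pop the oldest one, `size` reports the length). The `QueueFS` class is a simulation for illustration, not the AGFS implementation.

```python
import json
import uuid
from collections import deque
from datetime import datetime, timezone

class QueueFS:
    """In-memory stand-in for a /queuefs/<name>/ directory."""

    def __init__(self):
        self._q = deque()

    def write(self, ctl: str, data: str) -> str:
        assert ctl == "enqueue"
        msg_id = str(uuid.uuid4())
        self._q.append({"id": msg_id, "data": data,
                        "timestamp": datetime.now(timezone.utc).isoformat()})
        return msg_id                       # queuefs echoes the message id back

    def read(self, ctl: str) -> str:
        if ctl == "size":
            return str(len(self._q))
        if ctl == "dequeue":
            return json.dumps(self._q.popleft()) if self._q else ""
        raise ValueError(ctl)

# Producer/consumer pattern: one agent enqueues, another drains the queue.
tasks = QueueFS()
tasks.write("enqueue", "job1")
while tasks.read("size") != "0":
    task = json.loads(tasks.read("dequeue"))
    # ... execute task["data"] here ...
```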

Query databases through a Plan 9 style session interface:

agfs:/> cat /sqlfs2/mydb/users/schema       # view table structure
agfs:/> cat /sqlfs2/mydb/users/count        # get row count

# Create session, execute query, read result
agfs:/> sid=$(cat /sqlfs2/mydb/users/ctl)
agfs:/> echo "SELECT * FROM users LIMIT 2" > /sqlfs2/mydb/users/$sid/query
agfs:/> cat /sqlfs2/mydb/users/$sid/result
[{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
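The session dance (read `ctl` for a session id, write the query, read the result) is easy to wrap for programs using the FUSE mount. The helper below is a sketch assuming only the file layout shown above; `run_query` and its parameters are illustrative names, not AGFS API.

```python
import os

def run_query(table_dir: str, sql: str) -> str:
    """Execute a query through a sqlfs2 table directory using the
    Plan 9 style session protocol: reading 'ctl' allocates a session id,
    writing SQL to '<sid>/query' runs it, and reading '<sid>/result'
    returns the rows as JSON."""
    with open(os.path.join(table_dir, "ctl")) as f:          # sid=$(cat .../ctl)
        sid = f.read().strip()
    with open(os.path.join(table_dir, sid, "query"), "w") as f:
        f.write(sql)                                         # echo "SELECT ..." > .../$sid/query
    with open(os.path.join(table_dir, sid, "result")) as f:
        return f.read()                                      # cat .../$sid/result
```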

Manage the liveness state of distributed agents:

agfs:/> mkdir /heartbeatfs/agent-1       # register agent
agfs:/> touch /heartbeatfs/agent-1/keepalive   # send heartbeat

agfs:/> cat /heartbeatfs/agent-1/ctl     # check status
last_heartbeat_ts: 2025-11-21T13:55:45-08:00
timeout: 30
status: alive

# After 30 seconds without a new heartbeat, the agent directory is automatically removed
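A worker keeps itself alive by touching its keepalive file more often than the timeout (every few seconds against the 30-second timeout above). The loop below is a sketch over plain file operations on a mount; the `heartbeat_loop` name and the bounded `beats` parameter (so the sketch terminates) are assumptions, and a real worker would loop forever.

```python
import os
import time

def heartbeat_loop(agent_dir: str, interval: float, beats: int) -> None:
    """Periodically touch the agent's keepalive file, i.e. the file-ops
    equivalent of running `touch .../keepalive` in a loop."""
    keepalive = os.path.join(agent_dir, "keepalive")
    for _ in range(beats):
        with open(keepalive, "a"):         # create the file if missing
            os.utime(keepalive, None)      # update mtime, like `touch`
        time.sleep(interval)
```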

Different filesystems can interoperate with each other:

agfs:/> cp local:/tmp/data.txt /s3fs/mybucket/   # upload local file to S3
agfs:/> cp /s3fs/mybucket/config.json /memfs/    # copy S3 file to memory

AGFS shell supports scripting with .as files. Scripts use familiar shell syntax and can be executed directly.

task_worker.as - A simple task queue worker:

#!/usr/bin/env agfs

QUEUE_PATH=/queuefs/tasks
POLL_INTERVAL=2

# Initialize queue
mkdir $QUEUE_PATH

while true; do
    size=$(cat $QUEUE_PATH/size)

    if [ "$size" = "0" ]; then
        echo "Queue empty, waiting..."
        sleep $POLL_INTERVAL
        continue
    fi

    # Dequeue and process task
    task=$(cat $QUEUE_PATH/dequeue)
    echo "Processing: $task"

    # Your task logic here
done

enqueue_task.as - Enqueue a task:

#!/usr/bin/env agfs

mkdir /queuefs/tasks
echo "$1" > /queuefs/tasks/enqueue
echo "Task enqueued. Queue size: $(cat /queuefs/tasks/size)"

Run scripts directly:

./task_worker.as &
./enqueue_task.as "process report.pdf"

See more examples in agfs-shell/examples.

Use Case: AI Agent Task Loop

A typical agent coordination pattern: multiple agents fetch tasks from the same queue and execute them.

while True:
    task = agfs.cat("/queuefs/tasks/dequeue")     # JSON message, or empty if the queue is empty
    if task:
        msg = json.loads(task)                    # {"id": ..., "data": ..., "timestamp": ...}
        result = execute_task(msg["data"])
        agfs.write(f"/kvfs/keys/result_{msg['id']}", result)

See task_loop.py for a complete example.
