Show HN: Pollen – distributed WASM runtime, no control plane, single binary

Original link: https://github.com/sambigeara/pollen

## Pollen: A Self-Organising WASM Runtime

Pollen is a novel, decentralised mesh network and WASM runtime built in Go, designed to turn a collection of machines (even resource-constrained ones like Raspberry Pis) into a unified compute resource. It needs no central coordinator; instead it relies on deterministic, local decisions over consistent shared state, managed through a gossiped CRDT runtime. Workloads are "seeded" into the network and scale organically with demand, with nodes intelligently routing traffic and applying backpressure. Pollen runs WASM modules written in many languages (Go, Rust, JS, Python, and more), enabling secure, policy-driven applications. It also supports peer-to-peer file sharing and static-site hosting via content addressing and QUIC. Key features include zero-trust security via mTLS, automatic NAT traversal, and simple bootstrapping over SSH or with tokens. Pollen prioritises simplicity, with an ergonomic CLI and opinionated defaults that make it easy to deploy and manage a resilient, partition-tolerant mesh. Installation is streamlined through native package managers. More information and a demo are available at [pln.sh](https://pln.sh).

Pollen is a new, single-binary Go program designed to simplify workload orchestration. Created by Sam Bigeara, it lets users form a distributed cluster by installing the binary on each machine; the system then organises itself. Pollen uses a deterministic topology driven by gossiped state to place workloads dynamically according to capacity and demand, and recovers automatically from failures. It has built-in mTLS security for TCP/UDP services and prefers direct peer-to-peer connections. Bigeara built Pollen out of a desire for local-first, convergent systems and frustration with the complexity of modern cloud infrastructure. He aims for minimal operational overhead and a simple command-line interface. While still early-stage, Pollen already runs live, globally distributed clusters. The project is open to discussion and feedback.


Pollen


Pollen is a self-organising mesh and WASM runtime written in pure Go. Workloads are "seeded" into the cluster and organically scale and follow load. There is no central coordinator; decisions are made deterministically, locally, using a gossiped CRDT runtime state as their source of truth. Same view of the world; same workload placement and routing.
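Pollen's CRDT internals aren't shown here, but the convergence property it relies on can be sketched with a last-writer-wins map in Go (types and field names are illustrative, not Pollen's actual state document):

```go
package main

import "fmt"

// Entry is a last-writer-wins register: the highest (clock, node) pair
// wins, so every node merging the same entries converges to the same value.
type Entry struct {
	Value string
	Clock uint64 // Lamport timestamp
	Node  string // tie-breaker for equal clocks
}

func newer(a, b Entry) bool {
	if a.Clock != b.Clock {
		return a.Clock > b.Clock
	}
	return a.Node > b.Node // deterministic tie-break
}

// Merge folds remote state into local state. It is commutative,
// associative, and idempotent, so gossip order doesn't matter.
func Merge(local, remote map[string]Entry) map[string]Entry {
	out := make(map[string]Entry, len(local))
	for k, v := range local {
		out[k] = v
	}
	for k, r := range remote {
		if l, ok := out[k]; !ok || newer(r, l) {
			out[k] = r
		}
	}
	return out
}

func main() {
	a := map[string]Entry{"svc/api": {"node-1", 3, "n1"}}
	b := map[string]Entry{"svc/api": {"node-2", 4, "n2"}}
	// Both merge orders yield the same winner.
	fmt.Println(Merge(a, b)["svc/api"].Value, Merge(b, a)["svc/api"].Value) // node-2 node-2
}
```

Because merges commute, nodes can exchange state in any order and still converge, which is what lets each node make placement and routing decisions locally from the same view of the world.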

The goal is for Pollen to turn a collection of heterogeneous machines into a blob of generic compute that can run absolutely anywhere. Think: a Raspberry Pi acting as though it has the power of a server-farm.

Pollen demo

This demo shows a simple processing pipeline: two chained workloads and a single "sink" egress server running on my home laptop (all requests end up there). Ten freshly provisioned global nodes are bootstrapped into the cluster, workloads are seeded, and ~5,000 req/s is driven from 5 locations simultaneously. Scale-up and workload placement happen organically. Nodes gate, apply backpressure, and gossip saturation across the cluster so other nodes know where to direct traffic. Pausable video at pln.sh.

  • WASM seeds. pln seed ./hello.wasm here, pln call hello greet there; artifacts distribute peer-to-peer by hash. One host call invokes another seed by name (pln://seed/<name>/<fn>), so authz, routing, and policy can live inside WASM. Authored in Go, Rust, JS, Python, C#, Zig via Extism.
  • Mesh services. pln serve 8080 api here, pln connect api there (or pln://service/<name> from a seed). TCP and UDP, end-to-end mTLS.
  • Static sites & blobs. pln seed ./public publishes a site; pln seed ./file shares a file. Same verb across workloads, sites, and blobs; kind is autodetected from what you point at. Content-addressed, gossiped, streamed peer-to-peer over QUIC.
  • Self-organising. No scheduler, no leader, no coordinator. Topology, placement, and routing emerge from local state; calls go to the nearest, least-loaded replica, and replicas migrate toward demand.
  • CRDT-native. A converging document on every node; changes gossip, conflicts resolve.
  • Partition-tolerant. Both sides of a split keep running; state converges on rejoin; survivors rehost workloads from failed nodes.
  • QUIC transport. One multiplexed, encrypted, UDP-based connection per peer carries gossip, services, and seeds. Connections punch direct between peers; otherwise they relay through any cluster node both peers can reach.
  • Cryptographic admission. No shared secrets, no firewall rules. Every link is mTLS.
  • Edge-ready. Pure Go, no CGO. Raspberry Pi to cloud host.
  • Ergonomic. Opinionated defaults, opt-in configuration.
curl -fsSL https://pln.sh/install.sh | bash

A thin wrapper around your platform's package manager (Homebrew on macOS, apt or yum on Linux), so upgrades, uninstalls, and service files are managed natively. On macOS, see the FAQ for a first-connect permissions note.

Two commands to a cluster

pln init                                # creates a new cluster rooted here
pln bootstrap ssh user@host [--admin]   # requires passwordless SSH + sudo

You have a zero-trust mesh, a peer-to-peer artifact store, and a WASM runtime. Public nodes automatically become relays, so the mesh handles NAT traversal without configuration. Pass --admin to delegate admin authority to the new node, so your root machine doesn't need to stay online.

With SSH. From any admin node:

pln bootstrap ssh user@host [--admin]

# Or pipe labelled targets from stdin or a file:
echo "name=web1 user@host" | pln bootstrap ssh -

Installs Pollen, enrols in the cluster, and starts. Linux targets only; needs SSH as root or passwordless sudo. --admin delegates admin authority; prefix a target with name= to label the node. Run pln bootstrap ssh --help for the full flag set.

Out-of-band. Mint a token on an admin node, ship it to the joiner:

# Admin node:
pln invite [--subject foo]   # subject key can be retrieved with `pln id` on the subject node

# New node:
pln join <token>

The token is self-contained: signed admission credentials, the cluster's root key, and every public relay address the cluster has organically learned. Any public node you've bootstrapped is already acting as a relay, and its address is woven into new invites automatically, so a joiner behind NAT has a route in without you plumbing anything. Ship the token over any channel; it's signed and valid until its TTL expires.

# Machine A:
pln serve 8080 api

# Machine B:
pln connect api
curl localhost:8080           # served from A, over the mesh

TCP and UDP. Connections punch directly if both peers can reach each other, and relay over the shortest mesh path otherwise. No ingress controller, no DNS, no port forwarding.

pln seed ./hello.wasm
pln call hello greet '{"name":"world"}'

pln seed publishes a WASM binary into the cluster. Nodes decide locally whether to claim a replica, scoring themselves on available capacity, cached artifacts, and proximity to traffic. There is no central scheduler. When a node goes down, survivors pick up the slack.
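Pollen's actual scoring function isn't shown in the README; a rough Go sketch of how a node might decide locally whether to claim a replica, with made-up weights for capacity, cached artifacts, and traffic proximity:

```go
package main

import "fmt"

// NodeView is a node's gossiped view of itself; fields are illustrative.
type NodeView struct {
	FreeCPU     float64 // 0..1 share of idle capacity
	HasArtifact bool    // WASM blob already in the local store
	TrafficHops int     // distance to observed demand
}

// score rates how good a replica placement this node would be.
// Higher is better; the weights are arbitrary, for illustration only.
func score(v NodeView) float64 {
	s := v.FreeCPU
	if v.HasArtifact {
		s += 0.5 // no artifact fetch needed
	}
	s -= 0.1 * float64(v.TrafficHops) // prefer nodes near the traffic
	return s
}

// claim: with identical gossiped views, every node computes the same
// ranking, so the top-k nodes claim replicas without a coordinator.
func claim(self string, views map[string]NodeView, replicas int) bool {
	better := 0
	for name, v := range views {
		if name == self {
			continue
		}
		mine := score(views[self])
		if score(v) > mine || (score(v) == mine && name < self) {
			better++ // deterministic tie-break by name
		}
	}
	return better < replicas
}

func main() {
	views := map[string]NodeView{
		"pi-1":    {FreeCPU: 0.9, HasArtifact: true, TrafficHops: 1},
		"cloud-1": {FreeCPU: 0.4, HasArtifact: false, TrafficHops: 3},
	}
	fmt.Println(claim("pi-1", views, 1), claim("cloud-1", views, 1)) // true false
}
```

Since every node evaluates the same function over the same gossiped views, the top-scoring nodes claim replicas deterministically, with no scheduler in the loop.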

Example modules live in examples/. Run pln --help for the full CLI reference.
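For the `greet` example above, the guest logic could be as simple as the following Go sketch; a real seed would be wired through the Extism PDK's input/output host functions and compiled to WASM (e.g. with TinyGo), which is omitted here:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Greet mirrors the seed's logic: parse the call payload from
// `pln call hello greet '{"name":"world"}'` and return a greeting.
func Greet(payload []byte) (string, error) {
	var req struct {
		Name string `json:"name"`
	}
	if err := json.Unmarshal(payload, &req); err != nil {
		return "", err
	}
	return fmt.Sprintf("Hello, %s!", req.Name), nil
}

func main() {
	out, _ := Greet([]byte(`{"name":"world"}`))
	fmt.Println(out) // Hello, world!
}
```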

# Delegate admin authority to an existing peer; handy for keeping
# the mesh operable (admissions, cert re-issues, etc.) with the root
# node offline:
pln grant <peer-id> --admin

# Bake arbitrary key/value metadata into a peer's cert. Seeds see
# the caller's peer key and properties on every invocation, so auth,
# routing, and policy decisions can live inside the workload:
pln grant <peer-id> --prop role=lead --prop team=backend

# Or bake them in at join time:
pln invite --prop role=engineer --prop team=backend

# Pipe a JSON payload from a file:
cat props.json | pln grant <peer-id> --prop -

# Set the root node's own properties at init time (or later by
# editing `properties:` in config.yaml and restarting):
pln init --prop role=primary --prop region=eu
# On each node that should serve HTTP. Port is optional;
# defaults to :8080. `restart` to apply.
pln set static-http               # or `pln set static-http 9000`

# From any node:
pln seed ./public my-site

# Fetch via any serving node:
curl -H "Host: my-site" http://<node-addr>:8080/

pln seed on a directory hashes every file into the local content-addressed store and publishes the site under <name>. Other nodes replicate the files and serve the site themselves. Each node's HTTP listener routes requests by Host header to the matching site.
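The routing step amounts to a Host-header lookup; a minimal Go sketch, with an in-memory map standing in for the content-addressed store:

```go
package main

import (
	"fmt"
	"net/http"
)

// resolve picks the file for a request: sites are keyed by Host header,
// files by URL path.
func resolve(sites map[string]map[string][]byte, host, path string) ([]byte, bool) {
	files, ok := sites[host]
	if !ok {
		return nil, false
	}
	body, ok := files[path]
	return body, ok
}

// siteMux is roughly how a node's static-http listener could route
// requests to whichever replicated site the Host header names.
func siteMux(sites map[string]map[string][]byte) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		body, ok := resolve(sites, r.Host, r.URL.Path)
		if !ok {
			http.NotFound(w, r)
			return
		}
		w.Write(body)
	})
}

func main() {
	sites := map[string]map[string][]byte{
		"my-site": {"/": []byte("<h1>hi</h1>")},
	}
	body, _ := resolve(sites, "my-site", "/")
	fmt.Printf("%s\n", body)
	_ = siteMux(sites) // would be passed to http.ListenAndServe(":8080", …)
}
```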

# From any node:
pln seed ./big-file.bin           # prints sha-256 digest
pln seed ./big-file.bin payload   # …or publish under a name

# From any other node:
pln fetch <digest|name>           # pulls peer-to-peer over QUIC into the local store

Blobs are the primitive behind static sites: content-addressed, gossip-advertised, streamed peer-to-peer over QUIC. Receivers verify the digest on arrival.

  • macOS: sendmsg: no route to host on LAN dials

    Most likely macOS Local Network Privacy. Grant pln access in System Settings → Privacy & Security → Local Network. The prompt appears the first time pln tries to reach a LAN peer; if you miss it, or the binary's signature changes after an upgrade, LAN dials silently fail while WAN traffic keeps working. Re-granting access fixes it.

Licensed under the Apache License, Version 2.0.
