Durable queues, streams, pub/sub, and a cron scheduler – inside your SQLite file

Original link: https://honker.dev/

## Honker: Durable Pub/Sub and Queues for SQLite

Honker adds durable pub/sub, task queues, and event streaming to SQLite, with no separate broker such as Redis. It achieves very low latency (~0.7 ms p50) without client polling or a dedicated daemon. As a plain SQLite extension, Honker works with any language that supports SQLite's `load_extension()` function, and its shared on-disk format gives consistent access from Python, Node, Rust, Go, Ruby, Bun, and Elixir. SQLite already backs real-world systems such as Bluesky's PDS and Turso. Honker's key advantage is transactional integrity: a queue operation (`enqueue`) commits in the same SQLite transaction as the business logic, so the pair is atomic and a rollback cancels both. It uses SQLite's `PRAGMA data_version` as an efficient wake signal, keeping overhead minimal. In essence, Honker brings the reliability of a dedicated queue system *into* your SQLite database, simplifying both architecture and operations.

Hacker News: 19 points, posted by ferriswil 1 hour ago (honker.dev)
Related Articles

Original Article

honker adds Postgres-style NOTIFY/LISTEN semantics to SQLite, with durable pub/sub, a task queue, and event streams on the side, with no client polling and no daemon or broker. Cross-process wake latency is ~0.7 ms p50 on an M-series laptop.

In its basic form it’s a plain SQLite loadable extension, so any language that can SELECT load_extension('honker_ext') gets the same queue, streams, and notifications on the same file. Bindings for Python, Node, Rust, Go, Ruby, Bun, and Elixir share one on-disk format.

SQLite is backing real work now — Bluesky’s PDS, Fly’s LiteFS, Turso, weekend projects that somehow ended up in production. Once real work flows through a SQLite-backed app, you need a queue. The usual answer is “add Redis + Celery.” That works, but introduces a second datastore with its own backup story, a dual-write problem between your business table and the queue, and the operational overhead of running a broker.

honker takes the approach that if SQLite is the primary datastore, the queue should live in the same file. That means INSERT INTO orders and queue.enqueue(...) commit in the same transaction. Rollback drops both. The queue is just rows in a table with a partial index.
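As a sketch of that last claim, here is what "just rows in a table with a partial index" can look like in plain `sqlite3`. The schema, column names, and claim query are illustrative assumptions, not honker's actual on-disk format:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total INTEGER);
    CREATE TABLE jobs (
        id         INTEGER PRIMARY KEY,
        payload    TEXT NOT NULL,
        state      TEXT NOT NULL DEFAULT 'ready',  -- ready | claimed | done
        claimed_by TEXT
    );
    -- Partial index: covers only 'ready' rows, so claiming scans the
    -- pending jobs, not the whole job history.
    CREATE INDEX jobs_ready ON jobs (id) WHERE state = 'ready';
""")

# Business write and enqueue share one transaction: a rollback drops both.
with conn:
    conn.execute("INSERT INTO orders (id, total) VALUES (?, ?)", (42, 99))
    conn.execute("INSERT INTO jobs (payload) VALUES (?)", ('{"order_id": 42}',))

# Claim the oldest ready job. (A real implementation would likely use
# UPDATE ... RETURNING, SQLite >= 3.35, to select and claim in one statement.)
with conn:
    job = conn.execute(
        "SELECT id, payload FROM jobs WHERE state = 'ready' ORDER BY id LIMIT 1"
    ).fetchone()
    conn.execute(
        "UPDATE jobs SET state = 'claimed', claimed_by = ? WHERE id = ?",
        ("worker-1", job[0]),
    )
```

Because both INSERTs go through the same connection and transaction, there is no dual-write window: either the order and its job exist together, or neither does.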

Enqueue atomically with a business write, then consume. Same .db file, same on-disk format, seven languages.

```python
import honker

db = honker.open("app.db")
q = db.queue("emails")  # queue handle; constructor name assumed here

# Enqueue in the same transaction as the business write.
with db.transaction() as tx:
    tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99])
    q.enqueue({"order_id": 42}, tx=tx)  # commits or rolls back with the INSERT

# In an async worker: wakes on any commit to the db, no polling.
async for job in q.claim("worker-1"):
    await send_email(job.payload)
```

Or with Huey-style decorators:

```python
import time

@q.task(retries=3, timeout_s=30)
def send_email(to, subject):
    return {"sent_at": time.time()}

# Huey-style: calling the task enqueues it and returns a result handle.
# (This call is assumed; the original snippet omitted it.)
r = send_email("a@example.com", "Welcome")
print(r.get(timeout=10))  # blocks until a worker runs it
```

honker polls SQLite’s PRAGMA data_version every millisecond. That’s a monotonic counter SQLite increments on every commit from any connection, journal mode, or process — a ~3 µs read for a precise wake signal. A background thread fans the tick out to every subscriber, which runs SELECT ... WHERE id > last_seen and yields new rows. One poller thread per database regardless of subscriber count.

Idle cost is that one lightweight SELECT per millisecond per database — no page-cache pressure, no writer-lock contention, no kernel file watcher in the mix. Listener count scales for free because the wake signal is one shared poll, not one query per listener.
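The wake loop described above can be sketched with the stdlib alone (class and method names are mine, not honker's API): one poller connection reads `PRAGMA data_version`, and when the value changes because some other connection committed, the tick fans out to every subscriber.

```python
import sqlite3
import time

class Waker:
    """One poller per database file fans a commit tick out to subscribers.
    Sketch only; names are illustrative, not honker's API."""

    def __init__(self, path):
        # data_version is per-connection: it changes when a *different*
        # connection commits, so the poller keeps its own read connection.
        self.conn = sqlite3.connect(path, check_same_thread=False)
        self.subscribers = []
        self.last = self._version()

    def _version(self):
        return self.conn.execute("PRAGMA data_version").fetchone()[0]

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def poll_once(self):
        v = self._version()          # cheap read; no table pages touched
        if v == self.last:
            return False
        self.last = v
        for cb in self.subscribers:  # one shared poll wakes every listener
            cb()
        return True

    def run(self, interval_s=0.001):
        # In practice this loop lives on one background thread per database.
        while True:
            self.poll_once()
            time.sleep(interval_s)
```

A subscriber's callback would then run its `SELECT ... WHERE id > last_seen` catch-up query; the poll itself never reads table data, which is why adding listeners costs nothing at the polling layer.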

The queue, stream, and pub/sub primitives are all INSERTs into tables managed by the extension. Calling queue.enqueue(payload, tx=tx) inside your business transaction means the job row is ACID with the INSERT INTO orders that preceded it. Rollback drops the job along with everything else.

pg_notify gives you fast cross-process triggers but no retry or visibility. Huey is the SQLite-backed Python task queue honker draws the most from. pg-boss and Oban are the Postgres-side gold standards. If you already run Postgres, use those.
