honker adds Postgres-style NOTIFY/LISTEN semantics to SQLite, with a durable pub/sub, task queue, and event streams on the side, without client polling or a daemon/broker. Cross-process wake latency is ~0.7 ms p50 on an M-series laptop.
In its basic form it’s a plain SQLite loadable extension, so any language that can SELECT load_extension('honker_ext') gets the same queue, streams, and notifications on the same file. Bindings for Python, Node, Rust, Go, Ruby, Bun, and Elixir share one on-disk format.
SQLite is backing real work now — Bluesky’s PDS, Fly’s LiteFS, Turso, weekend projects that somehow ended up in production. Once real work flows through a SQLite-backed app, you need a queue. The usual answer is “add Redis + Celery.” That works, but introduces a second datastore with its own backup story, a dual-write problem between your business table and the queue, and the operational overhead of running a broker.
honker takes the approach that if SQLite is the primary datastore, the queue should live in the same file. That means INSERT INTO orders and queue.enqueue(...) commit in the same transaction. Rollback drops both. The queue is just rows in a table with a partial index.
Enqueue atomically with a business write, then consume. Same .db file, same on-disk format, seven languages.
import honker

db = honker.open("app.db")
q = db.queue("emails")

# Enqueue in the same transaction as the business write.
with db.transaction() as tx:
    tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99])
    q.enqueue({"to": "alice", "order_id": 42}, tx=tx)

# Worker wakes on any commit to the db, no polling.
async for job in q.claim("worker-1"):
    await send_email(job.payload)
Or with Huey-style decorators:
@q.task(retries=3, timeout_s=30)
def send_email(to, subject):
    return {"sent_at": time.time()}

# Huey-style: calling the decorated function enqueues the job
# and returns a result handle.
r = send_email("alice@example.com", "hello")
print(r.get(timeout=10))  # blocks until worker runs it
const { open } = require('@russellthehippo/honker-node');

const db = open('app.db');
const q = db.queue('emails');

// Enqueue in the same transaction as the business write.
const tx = db.transaction();
tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99]);
q.enqueue({ to: 'alice', order_id: 42 }, { tx });
tx.commit();

// Worker wakes on any commit to the db, no polling.
const waker = q.claimWaker();
const job = await waker.next('worker-1');
await sendEmail(job.payload);
use honker::{Database, EnqueueOpts, QueueOpts};

let db = Database::open("app.db")?;
let q = db.queue("emails", QueueOpts::default());

// Enqueue in the same transaction as the business write.
let tx = db.transaction()?;
tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)",
           rusqlite::params![42, 99])?;
q.enqueue_tx(&tx, r#"{"to":"alice","order_id":42}"#,
             EnqueueOpts::default())?;
tx.commit()?;

if let Some(job) = q.claim_one("worker-1")? {
    send_email(&job.payload)?;
}
import honker "github.com/russellromney/honker-go"

db, _ := honker.Open("app.db", "./libhonker_ext.dylib")
q := db.Queue("emails", honker.QueueOptions{})

// Enqueue in the same transaction as the business write.
tx, _ := db.Begin()
tx.Exec("INSERT INTO orders (id, total) VALUES (?, ?)", 42, 99)
q.EnqueueTx(tx, map[string]any{
    "to": "alice", "order_id": 42,
}, honker.EnqueueOptions{})
tx.Commit()

if job, _ := q.ClaimOne("worker-1"); job != nil {
    sendEmail(job.Payload)
}
db = Honker::Database.new("app.db", extension_path: "./libhonker_ext.dylib")
q = db.queue("emails")

# Enqueue in the same transaction as the business write.
db.transaction do |tx|
  tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99])
  q.enqueue({ to: "alice", order_id: 42 }, tx: tx)
end

if (job = q.claim_one("worker-1"))
  send_email(job.payload)
end
import { open } from "@russellthehippo/honker-bun";

const db = open("app.db", "./libhonker_ext.dylib");
const q = db.queue("emails");

// Enqueue in the same transaction as the business write.
const tx = db.transaction();
tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99]);
q.enqueue({ to: "alice", order_id: 42 }, { tx });
tx.commit();

const job = q.claimOne("worker-1");
await sendEmail(job.payload as { to: string });
{:ok, db} = Honker.open("app.db", extension_path: "./libhonker_ext.dylib")
q = Honker.queue(db, "emails")

# Enqueue in the same transaction as the business write.
Honker.transaction(db, fn tx ->
  Honker.execute(tx, "INSERT INTO orders (id, total) VALUES (?, ?)", [42, 99])
  Honker.Queue.enqueue(q, %{to: "alice", order_id: 42}, tx: tx)
end)

case Honker.Queue.claim_one(q, "worker-1") do
  # Return shapes assumed for illustration.
  {:ok, job} -> send_email(job.payload)
  :empty -> :ok
end
honker::Database db{"app.db", "./libhonker_ext.dylib"};
auto q = db.queue("emails");

// Enqueue in the same transaction as the business write.
honker::Transaction tx{db.raw()};
tx.execute("INSERT INTO orders (id, total) VALUES (?, ?)", {42, 99});
q.enqueue_tx(tx, R"({"to":"alice","order_id":42})");
tx.commit();

if (auto job = q.claim_one("worker-1")) {
    send_email(job->payload());
}
SELECT honker_bootstrap();

BEGIN;
INSERT INTO orders (id, total) VALUES (42, 99);
SELECT honker_enqueue('emails', '{"to":"alice","order_id":42}');
COMMIT;

SELECT honker_claim_batch('emails', 'worker-1', 32, 300);
SELECT honker_ack_batch('[1,2,3]', 'worker-1');
honker polls SQLite’s PRAGMA data_version every millisecond. That’s a monotonic counter that advances whenever any other connection commits, in any process and under any journal mode, and reading it costs ~3 µs, which makes it a precise, cheap wake signal. A background thread fans the tick out to every subscriber; each one runs SELECT ... WHERE id > last_seen and yields the new rows. There is one poller thread per database regardless of subscriber count.
Idle cost is that one lightweight SELECT per millisecond per database — no page-cache pressure, no writer-lock contention, no kernel file watcher in the mix. Listener count scales for free because the wake signal is one shared poll, not one query per listener.
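The tick loop is easy to reproduce with the stdlib sqlite3 module. This is a sketch of the mechanism described above, not honker's implementation (the function name and callback shape here are made up for illustration):

```python
import sqlite3
import threading
import time

def watch(path, on_commit, interval_s=0.001):
    """Poll PRAGMA data_version and invoke on_commit after any commit.

    data_version only moves when a *different* connection commits,
    which is exactly what a waker wants to observe.
    """
    started = threading.Event()

    def loop():
        conn = sqlite3.connect(path)
        last = conn.execute("PRAGMA data_version").fetchone()[0]
        started.set()
        while True:
            v = conn.execute("PRAGMA data_version").fetchone()[0]
            if v != last:  # some other connection committed since the last tick
                last = v
                on_commit()
            time.sleep(interval_s)

    threading.Thread(target=loop, daemon=True).start()
    started.wait()
```

Each tick is one PRAGMA read on a dedicated read connection, so it never takes the writer lock and never touches table pages.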
The queue, stream, and pub/sub primitives are all INSERTs into tables managed by the extension. Calling queue.enqueue(payload, tx=tx) inside your business transaction means the job row is ACID with the INSERT INTO orders that preceded it. Rollback drops the job along with everything else.
pg_notify gives you fast cross-process triggers but no durability, retries, or visibility timeouts. Huey is the SQLite-backed Python task queue honker draws the most from. pg-boss and Oban are the Postgres-side gold standards. If you already run Postgres, use those.
npm install @russellthehippo/honker-node
go get github.com/russellromney/honker-go
bun add @russellthehippo/honker-bun
git clone https://github.com/russellromney/honker-cpp.git
# Build from source — it's one crate
cargo build --release -p honker-extension
# → target/release/libhonker_ext.{dylib,so}