OpenBSD: PF queues break the 4 Gbps barrier

Original link: https://undeadly.org/cgi?action=article;sid=20260319125859

## Bandwidth limit removed from OpenBSD's PF packet filter

A recent patch to OpenBSD's PF packet filter removes a long-standing bandwidth configuration limit. Previously, a 32-bit integer cap in the HFSC scheduler silently limited queue bandwidth to about 4.29 Gbps, causing problems with modern 10G, 25G, and 100G network interfaces. The update widens the bandwidth fields to 64-bit integers, allowing values up to 999G to be configured exactly. This resolves unpredictable scheduling behaviour at higher speeds and also fixes a display bug in `pftop(1)` that misreported bandwidth values above 4 Gbps. Existing configurations using bandwidths below 4G are unaffected. The change ensures that PF queue bandwidth configuration works as expected with current and future high-speed network hardware. The patch is scheduled to be committed on March 20th, 2026.

## OpenBSD PF filtering update

OpenBSD's packet filter, PF, received an update removing a 32-bit limit that capped bandwidth values at roughly 4.29 Gbps. This limit previously caused bandwidth configurations to wrap around silently when 10G or faster network interfaces were used, producing unpredictable behaviour. The update widens the bandwidth fields in the kernel's HFSC scheduler to 64-bit integers, now supporting values up to roughly 1 Tbps. Although OpenBSD has traditionally not focused on raw performance, the growing availability of faster network hardware and ongoing kernel improvements motivated the change. Discussion highlighted OpenBSD's history of prioritizing security over speed, occasionally hard-to-follow bug reports, and driver support compared with FreeBSD and Linux. Some users questioned the need to shape traffic at such high speeds, while others stressed its importance for managing diverse network traffic in data centers. The update removes a bottleneck, but broader questions remain about OpenBSD's role in high-bandwidth environments.

## Original article

Contributed by Peter N. M. Hansteen from the queueing for Terabitia dept.

OpenBSD's PF packet filter has long supported HFSC traffic shaping via queue rules in pf.conf(5). However, an internal 32-bit limitation in the HFSC service curve structure (struct hfsc_sc) meant that bandwidth values were silently capped at approximately 4.29 Gbps, the maximum value of a u_int.

With 10G, 25G, and 100G network interfaces now commonplace, OpenBSD developers making huge progress unlocking the kernel for SMP, and drivers being added for cards supporting some of these speeds, this limitation started to get in the way. Configuring `bandwidth 10G` on a queue would silently wrap around, producing incorrect and unpredictable scheduling behaviour.

A new patch widens the bandwidth fields in the kernel's HFSC scheduler from 32-bit to 64-bit integers, removing this bottleneck entirely. The diff also fixes a pre-existing display bug in pftop(1) where bandwidth values above 4 Gbps would be shown incorrectly.

For end users, the practical impact is: PF queue bandwidth configuration now works correctly for modern high-speed interfaces. The familiar syntax just does what you'd expect:


```
queue rootq on em0 bandwidth 10G
queue defq parent rootq bandwidth 8G default
```

Values up to 999G are supported, more than enough for today's interfaces and those of the foreseeable future. Existing configurations using values below 4G continue to work; no changes are needed.
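For a sense of what the widened fields enable, here is a hedged sketch of a multi-gigabit queue hierarchy using the same pf.conf(5) syntax. The interface name and the particular queue subdivisions are illustrative assumptions, not taken from the patch or its discussion:

```
# Hypothetical 25G uplink with a small guaranteed-bandwidth queue.
queue rootq on ix0 bandwidth 25G max 25G
queue bulk parent rootq bandwidth 20G default
queue prio parent rootq bandwidth 5G min 1G
```

Before the patch, every bandwidth value above roughly 4.29G in a hierarchy like this would have been silently truncated; with the 64-bit fields, the configured values take effect as written.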

As always, testing of -current snapshots and donations to the OpenBSD Foundation are encouraged.

The editors note that the thread titled PF Queue bandwidth now 64bit for >4Gbps queues on tech@ has the patch and a brief discussion with the conclusion that the code is ready to commit by Friday, March 20th, 2026.
