Zlib-rs is faster than C

Original link: https://trifectatech.org/blog/zlib-rs-is-faster-than-c/

The zlib-rs 0.4.2 release brings substantial performance improvements; it is now, by its authors' account, the fastest zlib-API-compatible implementation for decompression, with top-tier performance in the key compression cases. A new performance dashboard tracks comparisons against competitors such as zlib-ng and zlib-chromium, catching regressions and visualizing progress. The main improvements are runtime multiversioning, which selects the best implementation for the available CPU instruction set, and the `-Cllvm-args=-enable-dfa-jump-thread` compiler flag, which recovers performance by optimizing the deterministic-finite-automaton structure of the decompression logic. Benchmarks show zlib-rs clearly faster than zlib-ng for decompression at most relevant chunk sizes (over 10% faster at a 1 KB chunk size), and it also beats zlib-chromium at decompression. Compression results are more mixed, though zlib-rs stands out at compression levels 6 (6% faster) and 9 (13% faster). zlib-rs can be used from both C and Rust projects. The developers are seeking €95,000 in funding to complete the implementation, further improve performance, and improve packaging.

The Hacker News thread centers on the article's claim that zlib-rs is faster than its C counterparts, focusing on performance optimization and Rust's safety features. zlib-rs contributors highlighted Rust's advantages, including compile-time checks, aggressive inlining, and better cache locality. Discussion revolved around the use of `unsafe` blocks for SIMD operations and low-level optimizations, weighing performance against memory safety. Commenters debated whether `unsafe` negates Rust's safety guarantees, with some viewing it as a controlled escape hatch and others stressing the need for careful auditing. Rust's performance gains were attributed not only to the language itself but also to design choices, algorithm selection, and better use of modern hardware features. The thread also touched on the evolution of the Rust ecosystem, the trade-off between stability and innovation, and comparisons with C and other languages.

Original article

We've released version 0.4.2 of zlib-rs, featuring a number of substantial performance improvements. We are now (to our knowledge) the fastest API-compatible zlib implementation for decompression, and we beat the competition in the most important compression cases too.

We've built a dashboard that shows the performance of the current main branch compared to other implementations, and tracks our performance over time to catch any regressions and visualize our progress.

This post compares zlib-rs to the latest zlib-ng and, for decompression, also to zlib-chromium; these are the leading C zlib implementations that focus on performance. We'll soon publish a blog post with more technical details; here we only briefly cover the most impactful changes.

Decompression

Last time, we benchmarked using the target-cpu=native flag. That gave the best results for our implementation, but was not entirely fair, because our Rust implementation could assume that certain SIMD capabilities would be available, while zlib-ng had to check for them at runtime.

We have now made some changes so that we can efficiently select the most optimal implementation at runtime too.

Multiversioning

Picking the best version of a function is known as multiversioning. We have a baseline implementation that works on all CPUs, and then some number of specialized versions that use SIMD instructions or other features that may or may not be available on a particular CPU. The challenge is to always pick the optimal implementation, but with minimal runtime cost. That means we want to do the runtime check as few times as possible, and then perform a large chunk of work.

Today, multiversioning is not natively supported in Rust. There are proposals for adding it (which we're very excited about!), but for now we have to implement it manually, which unfortunately involves some unsafe code. We'll write more about this soon (for the impatient, the relevant code is here).

DFA optimizations

The C code is able to use implicit `switch` fallthrough to generate very efficient code. Rust has no equivalent mechanism, and this really slowed us down when data comes in small chunks.

Nikita Popov suggested we try the -Cllvm-args=-enable-dfa-jump-thread option, which recovers most of the performance here. It performs a kind of jump threading for deterministic finite automata, and our decompression logic matches this pattern.

LLVM does not currently enable this flag by default, but that is the plan eventually. We're also looking into supporting this optimization in rustc itself, and making it more fine-grained than just blindly applying it to a whole project and hoping for the best.
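For readers who want to experiment with the flag in their own project (an assumption about a typical Cargo setup, not an official recommendation), it can be passed to every rustc invocation via `.cargo/config.toml`:

```toml
# .cargo/config.toml — apply the LLVM flag project-wide.
# As noted above, this blindly applies the optimization to the whole
# project; a more fine-grained mechanism is still being explored.
[build]
rustflags = ["-Cllvm-args=-enable-dfa-jump-thread"]
```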

These efforts are a part of a proposed project goal and Trifecta Tech Foundation's code generation initiative.

Benchmarks

As far as we know, we're the fastest API-compatible zlib implementation for decompression today. Not only do we beat zlib-ng by a fair margin, we're also faster than the implementation used in Chromium.

Like before, our benchmark decompresses a compressed version of silesia-small.tar, feeding the state machine the input in power-of-2-sized chunks. Small chunk sizes simulate the streaming use case; larger chunk sizes model cases where the full input is available.

versus zlib-ng

[chart: decompression (ng vs. rs)]

We're now significantly faster than zlib-ng for all but the smallest chunk size. A chunk size of 2^4 = 16 bytes is very unlikely to be relevant for performance in practice because the input can just be buffered and then decompressed in larger chunks.

We are, however, significantly faster than zlib-ng at the more relevant chunk sizes: well over 10% for 1 KB chunks, and over 6% for 64 KB chunks.

chunk size (2^n) | zlib-ng | zlib-rs | Δ
4  | 255.77M ± 179.04K | 259.40M ± 492.87K | 💩 +1.40%
5  | 203.64M ± 305.47K | 190.91M ± 343.64K | 🚀 -6.67%
6  | 164.30M ± 131.44K | 148.51M ± 193.07K | 🚀 -10.63%
7  | 142.62M ± 156.88K | 126.24M ± 113.62K | 🚀 -12.98%
8  | 131.87M ± 210.99K | 116.36M ± 116.36K | 🚀 -13.33%
9  | 126.19M ± 227.14K | 111.99M ± 100.79K | 🚀 -12.68%
10 | 125.58M ± 150.70K | 111.18M ± 111.18K | 🚀 -12.95%
11 | 123.94M ± 136.34K | 112.16M ± 201.89K | 🚀 -10.50%
12 | 121.81M ± 109.63K | 111.82M ± 89.45K  | 🚀 -8.94%
13 | 114.27M ± 114.27K | 106.27M ± 138.15K | 🚀 -7.53%
14 | 102.34M ± 133.04K |  95.13M ± 95.13K  | 🚀 -7.57%
15 |  94.35M ± 132.09K |  87.72M ± 96.49K  | 🚀 -7.56%
16 |  90.40M ± 108.48K |  84.53M ± 84.53K  | 🚀 -6.94%

versus chromium

For decompression, the zlib implementation used in the chromium project (found here, which we use via a modified version of libz-sys) is often faster than zlib-ng. However, we also beat it at this benchmark for the most relevant chunk sizes.

[chart: decompression (chromium vs. rs)]

Interestingly, zlib-chromium is mostly faster for the smaller chunk sizes, while for larger chunk sizes performance is fairly comparable to zlib-ng.

chunk size (2^n) | zlib-chromium | zlib-rs | Δ
4  | 227.39M ± 363.82K | 259.40M ± 492.87K | 💩 +12.34%
5  | 181.29M ± 471.36K | 190.91M ± 343.64K | 💩 +5.04%
6  | 146.09M ± 160.70K | 148.51M ± 193.07K | 💩 +1.63%
7  | 126.91M ± 164.98K | 126.24M ± 113.62K | 🚀 -0.53%
8  | 118.13M ± 94.51K  | 116.36M ± 116.36K | 🚀 -1.53%
9  | 114.83M ± 91.86K  | 111.99M ± 100.79K | 🚀 -2.53%
10 | 113.20M ± 90.56K  | 111.18M ± 111.18K | 🚀 -1.82%
11 | 114.20M ± 102.78K | 112.16M ± 201.89K | 🚀 -1.81%
12 | 114.55M ± 103.10K | 111.82M ± 89.45K  | 🚀 -2.44%
13 | 108.87M ± 87.09K  | 106.27M ± 138.15K | 🚀 -2.44%
14 |  99.55M ± 129.41K |  95.13M ± 95.13K  | 🚀 -4.64%
15 |  92.35M ± 157.00K |  87.72M ± 96.49K  | 🚀 -5.28%
16 |  90.01M ± 180.02K |  84.53M ± 84.53K  | 🚀 -6.48%

Compression

We've been chipping away at compression too (shoutout to Brian Pane, who contributed numerous PRs in this area), but see more mixed results.

[chart: compression (ng vs. rs)]

On x86_64 linux, we are faster for some of the compression levels that matter most, about 6% at the default level of 6, and over 10% at the "best compression" level 9. But we're still slightly slower for most of the other levels when comparing to zlib-ng.

compression level | ng | rs | Δ
0 |  15.07M ± 272.75K |  14.83M ± 260.97K | 🚀 -1.63%
1 | 250.09M ± 300.11K | 258.71M ± 388.06K | 💩 +3.33%
2 | 436.59M ± 698.54K | 465.33M ± 418.80K | 💩 +6.18%
3 | 523.10M ± 156.93K | 542.28M ± 325.37K | 💩 +3.54%
4 | 623.40M ± 436.38K | 648.43M ± 324.22K | 💩 +3.86%
5 | 773.30M ± 463.98K | 711.81M ± 427.09K | 🚀 -8.64%
6 | 939.52M ± 469.76K | 884.79M ± 442.39K | 🚀 -6.19%
7 |   1.23G ± 1.48M   |   1.24G ± 617.75K | 💩 +0.38%
8 |   1.59G ± 159.22K |   1.60G ± 1.92M   | 💩 +0.48%
9 |   1.94G ± 970.95K |   1.71G ± 512.66K | 🚀 -13.64%

For most users, decompression is the most relevant operation, and even for compression we're a lot faster than stock zlib. Nevertheless, we'll continue to try to improve compression performance.

Conclusion

zlib-rs can be used both in C projects and as a Rust crate in Rust projects. For Rust projects, we recommend using the 1.1.0 release of the flate2 crate with the zlib-rs feature flag. For use in C projects, zlib-rs can be built as a C dynamic library (see instructions) and used in any project that uses zlib today.
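In `Cargo.toml`, opting into the zlib-rs backend looks like this (a sketch following the flate2 crate's feature-flag convention; double-check the flate2 documentation for your version):

```toml
[dependencies]
# Disable the default backend and select zlib-rs instead.
flate2 = { version = "1.1.0", default-features = false, features = ["zlib-rs"] }
```

Existing code using flate2's reader/writer types then picks up the zlib-rs backend without source changes.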

Our implementation is mostly done, and clearly performs extremely well. However, we're missing some less commonly used API functions related to gzip files that would make us a complete drop-in replacement in all cases.

To complete the work, further improve performance, and improve packaging, among other things, we're seeking €95,000 in funding. See the workplan for details.

Please contact us if you are interested in financially supporting zlib-rs.


