(comments)
Original link: https://news.ycombinator.com/item?id=39812969
Your analysis seems spot on. While fibers and async/await both provide similar functionality at a high level, the difference lies in their implementation details. Async/await relies on a compile-time transformation to generate a lightweight state machine, whereas fibers rely on runtime primitives and manual control-flow manipulation. Both have advantages and drawbacks, and the choice ultimately comes down to the use case and the properties you need.
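To make that distinction concrete, here is a minimal Rust sketch (the names `double` and `DoubleFuture` are purely illustrative, not what rustc actually emits): the `async fn` is what the programmer writes, and the hand-written Future below it approximates the state machine the compiler generates for it, whereas a fiber would instead get its own stack and switch contexts at runtime.

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // What the programmer writes; the compiler rewrites it into a state
    // machine instead of giving the task its own stack as a fiber would.
    async fn double(x: u32) -> u32 {
        x * 2
    }

    // A rough, hand-written approximation of that generated state machine.
    enum DoubleFuture {
        Start { x: u32 },
        Done,
    }

    impl Future for DoubleFuture {
        type Output = u32;

        fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
            let this = self.get_mut();
            // Advance the state machine by one step on each poll.
            match std::mem::replace(this, DoubleFuture::Done) {
                DoubleFuture::Start { x } => Poll::Ready(x * 2),
                DoubleFuture::Done => panic!("polled after completion"),
            }
        }
    }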
An interesting observation is that, despite the potential advantages of low-overhead, manually controlled fibers, they still seem largely absent from the industry. Beyond the technical challenges of implementing fibers, are there other reasons holding back their wider adoption? Or is the industry increasingly dominated by managed environments, making fine-grained, low-overhead concurrency less important?
Regarding the comparison between fibers and async/await in the context of Rust, Rust's offering in this space (namely async Rust) seems to adopt elements of both models. By supporting fiber-style coroutines as well as async/await, Rust gives developers both flexibility and simplicity, letting them pick whichever model best suits their needs in a given context. Whether this hybrid approach becomes the de facto standard, or one approach eventually wins out, remains to be seen.
Finally, it is worth noting that, regardless of which particular fiber or async/await implementation is chosen, carefully weighing the trade-off between throughput and latency remains essential when designing high-performance concurrent systems. The ultimate goal is to minimize waste, use resources efficiently throughout the computation, and keep the system as a whole at the best balance of speed and responsiveness.
Multi-threaded async/await gets ugly. If you have serious compute-bound sections, the model tends to break down, because you're effectively blocking a thread that you share with others.
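A minimal sketch of that failure mode and the usual workaround, assuming a tokio multi-threaded runtime (heavy_sum and the constants are just illustrative): running compute-bound work directly inside an async task ties up one of the runtime's worker threads, while handing it to the blocking pool keeps the workers free for other tasks.

    use std::time::Duration;

    // Compute-bound work that, if run directly inside an async task, pins one
    // of the runtime's worker threads until it finishes.
    fn heavy_sum() -> u64 {
        (0..200_000_000u64).sum()
    }

    #[tokio::main]
    async fn main() {
        // Problematic: `let total = heavy_sum();` here would block this worker
        // thread, stalling every other task scheduled on it.

        // Better: hand the compute-bound section to the blocking pool so the
        // async workers stay available for I/O-bound tasks.
        let handle = tokio::task::spawn_blocking(heavy_sum);

        // This timer still fires promptly because no async worker is blocked.
        tokio::time::sleep(Duration::from_millis(10)).await;

        println!("sum = {}", handle.await.unwrap());
    }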
Compute-bound multi-threaded code does not work as well in Rust as it should. Problems include:
- Futex congestion collapse. This tends to be a problem with some storage allocators: many threads hit the same locks. In particular, growing a buffer can get very expensive in allocators where the recopying takes place with the entire allocator locked. I've mentioned before that Wine's library allocator, in a .DLL that emulates a Microsoft library, is badly prone to this problem: performance drops by two orders of magnitude, with all the CPU time going into spinlocks. Microsoft's own implementation does not have this problem. (A small stress sketch of the buffer-growth pattern follows this list.)
- Starvation of unfair mutexes. Both the standard Mutex and crossbeam-channel channels are unfair. If you have multiple threads locking a resource, doing something, unlocking it, and repeating that cycle, one thread will win repeatedly and the others will get locked out.[1] If you need fair mutexes, there's parking_lot, but you don't get the poisoning safety on thread panic that the standard mutexes give you. (See the starvation sketch after this list.)
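On the futex/allocator point, a small stress sketch (thread and iteration counts are arbitrary) of the buffer-growth pattern that triggers it: every thread repeatedly grows a buffer from empty, funnelling many reallocations through the global allocator. Under an allocator that recopies while holding a single lock this degenerates into lock contention; under a well-sharded allocator it scales roughly with the core count.

    use std::thread;

    fn main() {
        let workers: Vec<_> = (0..16)
            .map(|_| {
                thread::spawn(|| {
                    for _ in 0..10_000 {
                        // Growing from empty forces several reallocations,
                        // each of which goes through the global allocator.
                        let mut buf: Vec<u8> = Vec::new();
                        for i in 0..4096u32 {
                            buf.push(i as u8);
                        }
                        std::hint::black_box(&buf);
                    }
                })
            })
            .collect();

        for w in workers {
            w.join().unwrap();
        }
    }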
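And a minimal sketch of the starvation cycle described in [1] (thread count and duration are arbitrary): each thread locks, does a bit of work, unlocks, and immediately re-locks. With std's unfair Mutex, the releasing thread usually reacquires the lock before the others wake, so one thread ends up with the overwhelming majority of acquisitions.

    use std::sync::{Arc, Mutex};
    use std::thread;
    use std::time::{Duration, Instant};

    fn main() {
        let lock = Arc::new(Mutex::new(()));
        let handles: Vec<_> = (0..4)
            .map(|id| {
                let lock = Arc::clone(&lock);
                thread::spawn(move || {
                    let mut acquisitions = 0u64;
                    let start = Instant::now();
                    while start.elapsed() < Duration::from_secs(2) {
                        let _guard = lock.lock().unwrap();
                        // Simulate a little work while holding the lock.
                        std::hint::black_box(());
                        acquisitions += 1;
                        // No pause before re-locking: the releasing thread
                        // tends to win the lock again immediately.
                    }
                    (id, acquisitions)
                })
            })
            .collect();

        for h in handles {
            let (id, n) = h.join().unwrap();
            // Expect one thread to report far more acquisitions than the rest.
            println!("thread {id}: {n} acquisitions");
        }
    }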
If you're not I/O bound, this gets much more complicated.
[1] https://users.rust-lang.org/t/mutex-starvation/89080