Because It Doesn't Have To

Original link: https://blog.computationalcomplexity.org/2026/04/because-it-doesnt-have-to.html

The surprising success of both the Internet and modern artificial intelligence stems from a shared principle: they are designed to *tolerate* failure rather than prevent it. Just as the Internet's base layers make no delivery guarantees, accepting failure as part of the process, machine learning relies on probabilistic outputs. Neural networks use functions such as softmax that never entirely rule out any possibility, assigning even wrong answers a small probability. This lets them handle complex scenarios gracefully, offering a distribution over potential outcomes when no deterministic solution can be found. By embracing the possibility of being "wrong," these systems gain flexibility and achieve better overall performance. Rather than rigidly demanding the correct answer, they exploit probabilistic guesses to cope with complexity, ultimately solving problems that other approaches cannot.


Original text

My favorite quote about networking came from Jim Kurose.

The Internet works so well because it doesn't have to.

The IP and lower layers of the Internet stack make no promise of delivery; complete failure still fulfills the protocol. This allows for simpler and more powerful protocols, without the extra complexity needed to guarantee success. TCP achieves delivery essentially by retransmitting when the underlying IP communication fails, and even TCP can report failure to the layers above.
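The layering described above can be sketched as a toy simulation: an unreliable send that is allowed to drop packets, with a retry loop on top that eventually gives up and reports failure upward. The function names (`ip_send`, `tcp_send`) and parameters are illustrative assumptions, not the actual protocols.

```python
import random

def ip_send(packet, drop_prob, rng):
    # Unreliable delivery: losing the packet still "fulfills" the contract.
    return None if rng.random() < drop_prob else packet

def tcp_send(packet, drop_prob, rng, max_retries=5):
    # Reliability layered on top: retransmit on loss, and even this layer
    # may finally report failure to the layer above.
    for _ in range(max_retries):
        delivered = ip_send(packet, drop_prob, rng)
        if delivered is not None:
            return delivered
    raise TimeoutError("delivery failed after retries")

rng = random.Random(0)
print(tcp_send("hello", drop_prob=0.5, rng=rng))
```

The lower layer stays trivially simple because it promises nothing; all the complexity of reliability lives in the retry loop, which itself is free to fail.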

We can say the same about modern artificial intelligence.

Machine learning works so well because it doesn't have to.

With the softmax function that neural nets use to turn scores into output probabilities, a neural net never completely rules out a possibility; every outcome keeps at least some tiny probability. In cases where the complexity is just too difficult, neural nets spread nontrivial probability across several possibilities, as I described in my recent post, where a machine learning model would generate a uniform distribution to capture the output of a pseudorandom generator. Instead of rigidly forcing the model to give us a specific answer, by looking at distributions we allow the models to make mistakes.
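Both properties mentioned above fall directly out of the softmax formula: since `exp()` is strictly positive, no output ever gets probability zero, and equal scores yield the uniform distribution. A minimal sketch:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability; exp() is strictly positive,
    # so every class keeps a nonzero probability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# A confident prediction still leaves tiny mass on the "wrong" answers.
probs = softmax([10.0, 0.0, -5.0])
assert all(p > 0 for p in probs)

# When nothing distinguishes the outputs, e.g. modeling a pseudorandom bit,
# equal logits yield the uniform distribution.
print(softmax([0.0, 0.0]))  # [0.5, 0.5]
```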

Thus a machine learning model can be correct when it makes probabilistic guesses in situations too complicated to solve directly, which allows it to achieve its best possible performance. Because we allow the models to make mistakes, they have the flexibility to solve complex problems far more frequently.
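One way to make "correct when it makes probabilistic guesses" concrete is through expected log loss. For outcomes that are genuinely unpredictable, like a fair coin (a stand-in for the pseudorandom-generator example), hedging with the uniform distribution achieves the best possible score of ln 2, while a forced near-certain answer does far worse. This is an illustrative calculation, not something from the post itself:

```python
import math

def expected_log_loss(p_true, q_pred):
    # Expected cross-entropy when outcomes follow p_true but the model
    # predicts q_pred; lower is better.
    return -sum(p * math.log(q) for p, q in zip(p_true, q_pred) if p > 0)

fair_coin = [0.5, 0.5]

# Hedged uniform guess: loss is exactly ln 2, the optimum for a fair coin.
print(expected_log_loss(fair_coin, [0.5, 0.5]))

# A near-certain guess is wrong half the time and scores much worse.
print(expected_log_loss(fair_coin, [0.99, 0.01]))
```

Allowing the model to spread its bets is not a concession; on inherently uncertain inputs it is the optimal strategy.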
