[flagged] One Third of the Web Will Stop Working in 4 Days (lowendbox.com)
58 points by Bender 4 months ago | 26 comments


> Hi, I’m the author of this research. It’s great to see interest and I can promise some quality research and a strong argument to kill HTTP/1.1 but the headline of this article goes a bit too far. The specific CDN vulnerabilities have been disclosed to the vendors and patched (hence the past tense in the abstract) – I wouldn’t drop zero day on a CDN!

From the comment section. In other words, click-bait title.


Killing HTTP/1.1 is killing the open web, because HTTP/2 and HTTP/3 depend on CA infrastructure.


I think I figured out a way to do secure registration in the presence of a MITM without certificates, so there might be a way out of this mess.

But you still need to transfer the client and check its hash for it to work, and that's hard to implement in practice.

But you could bootstrap the thing over HTTPS (the download of the client) and then never need it again, which is neat. Especially if you use TCP/HTTP now.
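
Roughly something like this, I think (a Python sketch; the URL and expected digest are hypothetical placeholders, and the digest would have to be published out of band):

    import hashlib
    import urllib.request

    CLIENT_URL = "https://example.com/client.bin"   # hypothetical bootstrap URL
    EXPECTED_SHA256 = "..."                          # digest published out of band

    def fetch_and_verify(url: str, expected_hex: str) -> bytes:
        data = urllib.request.urlopen(url).read()    # one-time fetch over HTTPS
        digest = hashlib.sha256(data).hexdigest()
        if digest != expected_hex:
            raise ValueError("hash mismatch: " + digest)
        return data                                  # safe to install and trust from here on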


Appreciate you pointing that out. HTTP/1.1 may be relatively long in the tooth, but this particular vulnerability seems straightforward to mitigate to me, especially at the CDN level.

Following through the links referenced in the article, this appears to be the actual underlying research: https://portswigger.net/research/http-desync-attacks-request...


Thanks for commenting here! Much appreciated.


Forgive my optimism here, but this seems overblown and trivial to detect and reject in firewalls/CDNs.

Cloudflare recently blocked a vulnerability affecting some PHP websites where a zip file upload contained a reverse shell. This seems plain in comparison (probably because it is).

This sensationalist headline and the doomsday-style clock (that another poster shared) make me question the motives of these researchers. Have they shorted any CDN stocks?


The underlying flaw is a parser differential. To detect that generically you'd need a model of both (or all) parsers involved and a way to detect when they diverge. That is non-trivial.


You can have the CDN normalize requests so that it always outputs well-formed requests. That way only one parser ever deals with untrusted/ambiguous input.
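
A minimal sketch of that idea in Python (illustrative only, not any particular CDN's behavior; it assumes the edge has already parsed the request and decoded the body):

    def normalize(method: str, target: str, headers: dict[str, str], body: bytes) -> bytes:
        """Re-emit a canonical HTTP/1.1 request with a single, unambiguous framing."""
        framing = {"transfer-encoding", "content-length", "connection"}
        lines = [f"{method} {target} HTTP/1.1"]
        for name, value in headers.items():
            if name.lower() in framing:
                continue                              # drop anything that affects framing
            if "\r" in value or "\n" in value:
                raise ValueError("header injection attempt")
            lines.append(f"{name}: {value}")
        lines.append(f"Content-Length: {len(body)}")  # one authoritative body length
        return ("\r\n".join(lines) + "\r\n\r\n").encode() + body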


I have been working on this :)

https://github.com/narfindustries/http-garden


"Stop working" here apparently means scheduled release of an exploit of a vulnerability in proxies that do access control and incorrectly handle combinations of headers Content-Length and Transfer-Encoding: chunked.


> incorrectly handle combinations of the Content-Length and Transfer-Encoding: chunked headers.

I think the article uses this as an example of the concept of Request Smuggling in general. This broad approach has been known for a long time. I assume the new research uses conceptually similar but concretely different approaches to trigger parser desyncs.
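
The classic, long-public CL.TE case looks roughly like this (illustrative bytes only):

    raw = (
        b"POST / HTTP/1.1\r\n"
        b"Host: example.com\r\n"
        b"Content-Length: 6\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n"
        b"\r\n"
        b"G"   # leftover byte that becomes the start of the "next" request
    )

    # A front end honoring Content-Length reads a 6-byte body (b"0\r\n\r\nG") and
    # forwards this as one complete request. A back end honoring Transfer-Encoding
    # sees a terminating zero-length chunk, so the trailing b"G" is treated as the
    # first byte of the next request on the reused upstream connection.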


I might just be stupid but I'm not quite seeing the full issue I think.

From my reading this is a problem if:

1. Your CDN/Load balancer allows for HTTP/1.1 connections

2. You do some filtering/firewalling/auth/etc at your CDN/Load balancer for specific endpoints

I'm sure it's more than that, and I'm just missing it.

If you do all your filtering/auth/etc on your backend servers, this doesn't matter, right? Obviously people DO do filtering/auth/etc at the edge, so they would be affected, but 1/3rd seems high. Maybe 1/3rd of traffic is HTTP/1.1, but those sites would also have to be doing filtering/auth at the edge to be hit by this, right?

Again, for the 3rd time, I'm probably missing something, just trying to better understand the issue.


If it's similar to a vulnerability reported a couple of years back, the gist of it would be:

Some load balancers front multiple application connections by multiplexing requests onto a single HTTP/1.1 connection, and bugs occur in the handling (generally the handling of request boundaries).

For example, you can have an HTTP/1.1 front connection that behind the scenes operates a separate HTTP/1.0, 1.1, 2, or 3 connection.

By smuggling additional data through the request, you trip up the handler at the load balancer and inject a response that shouldn't be there, which then gets served to one of the clients (possibly as the reply to the wrong client's request).

Similar to HTTP response splitting attacks of the past.

E.g. three requests come into the load balancer, and request 2 smuggles in an extra response that could be served as the response to request 1 or 3.

That's how I understood the last such attack.
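
A toy simulation of that off-by-one, as I understand it (not real proxy code; the request names are made up):

    # The frontend forwards three clients' requests over one reused upstream
    # connection; the smuggled bytes in request 2 make the backend see an extra
    # request, so the backend's response queue runs one ahead of the frontend's.
    frontend_view = ["client-1 request",
                     "client-2 request (with smuggled prefix)",
                     "client-3 request"]
    backend_view = ["client-1 request", "client-2 request",
                    "smuggled request", "client-3 request"]

    responses = [f"response to <{req}>" for req in backend_view]

    # The frontend hands responses back strictly in order, one per request it forwarded:
    for client_req, resp in zip(frontend_view, responses):
        print(f"{client_req:45} got {resp}")

    # client-3 receives the response to the smuggled request; the real response to
    # client-3 stays queued and is served to whoever reuses the connection next.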


Cache poisoning is also possible.

See https://youtu.be/aKPAX00ft5s?feature=shared&t=8730 for a relevant demo.

You can also (in principle) steal responses intended for other clients, and control responses that get delivered to other clients.


Reading this and following links, eventually you'll get to this page https://portswigger.net/research/talks?talkId=32 which hypes a conference presentation at DefCon on Aug 8.

So, I can't tell if it's real(ish) or advertising.


Is it really impossible for the CDNs to mitigate the vulnerability without disabling the site altogether? I'm skeptical that is the case. I'm sure there is a way to properly separate different requests.


If you'd like even more melodrama, they have a doomsday-style countdown timer here: https://http1mustdie.com/


From the authors of "HTTP/2: The Sequel is Always Worse": https://portswigger.net/research/http2


Someone get the President on the line!


What is the purpose of this deliberate clickbait headline when you know it's wrong to do so? It only diminishes trust in future security disclosures.


This is just a paraphrase of another blog post [0], but with a clickbait title that isn't true.

[0]: https://flak.tedunangst.com/post/polarizing-parsers


This title is super misleading; it's not going to stop working, unless PortSwigger plans on using this to DDoS all HTTP/1.1 servers?


There are numerous HTTP header vulnerabilities that CDNs already fix and block; how is this different?


Doesn't compare to the mayhem and destruction we experienced during Y2K.


There was a mad dash to patch stuff, but there wasn't much destruction. It became mostly a nothingburger event because of preparation to fix things ahead of time and triage for the bits that did break.


"Upstream HTTP/1.1 is inherently insecure and consistently exposes millions of websites to hostile takeover."

Perhaps the word "upstream" is significant

Not all proxies, i.e., authors of proxies, are created equal

Some might be incorrect

This may be the fault of the proxy authors, not the fault of the protocol designer^1

This blog post makes a claim that HTTP/1.1 only "power[s] a third of the web"

I have rarely found a website that will not accept HTTP/1.1

https://http1mustdie.com accepts HTTP/1.0 and 1.1

I have used HTTP/1.1 pipelining outside the browser for 17 years^2

It works beautifully for me outside the browser

Today I use 1.1 pipelining literally every day. Almost all websites I encounter still support and enable it
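
What that looks like outside a browser, roughly (a minimal Python sketch; example.com is just a placeholder host, and reading until a short timeout is a crude simplification):

    import socket

    host = "example.com"
    reqs = b"".join(
        f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: keep-alive\r\n\r\n".encode()
        for path in ("/", "/robots.txt")
    )

    with socket.create_connection((host, 80)) as s:
        s.sendall(reqs)          # both requests leave before any response arrives
        s.settimeout(2)
        chunks = []
        try:
            while data := s.recv(4096):
                chunks.append(data)
        except socket.timeout:
            pass

    # The responses come back on the same connection, in request order.
    print(b"".join(chunks)[:200])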

Would love to see some examples of sites that do not accept HTTP/1.1 and only accept HTTP/[23]

1. Unlike the designers of HTTP/1.1, the designers of HTTP/2 have made mistakes, bad enough to warrant an immediate replacement

"This head-of-line blocking in HTTP/2 is now widely regarded as a design flaw, and much of the effort behind QUIC and HTTP/3 has been devoted to reduce head-of-line blocking issues.^[58]^[59]"

58. ^ Huston, Geoff (March 4, 2019). "A Quick Look at QUIC". www.circleid.com. Retrieved August 2, 2019.

59. ^ Gal, Shauli (June 22, 2017). "The Full Picture on HTTP/2 and HOL Blocking". Medium. Retrieved August 3, 2019.

This is interesting to me since this is allegedly the "problem" with HTTP/1.1 (cf. problem with advertising-sponsored web browsers) that HTTP/2 was supposed to "solve"^3

2. To retrieve multiple resources from the same host in a single TCP connection. (Unlike browsers that routinely open up dozens of TCP connections to multiple hosts, usually for telemetry/advertising/tracking purposes.) It is desirable for me to receive the responses in the same order the requests were sent. As such, "HOL" is a non-issue for me.

3. See "Introduction" https://www.ietf.org/rfc/rfc7540.txt

HTTP/3 reminds me of CurveCP that was published years before QUIC^4

Except the HTTP/3 RFC is written by an Akamai employee, whereas CurveCP is from an academic, like HTTP/1.1

Also, CurveCP needs no "SNI" that leaks domain names in plaintext like TLS, and hence needs no complicated Band-Aid like ECH. Still waiting for ECH to be available outside of maybe a limited percentage of sites on a single CDN (Cloudflare).^5 It has been years

4. https://curvecp.org/addressing.html

5. https://test.defo.ie



