I've kind of had enough of unnecessary policy ratcheting. It's a problem in every industry where a real solution is not possible or practical, so the one knob that can be tweaked is always turned. Same issue with corporate compliance: I'm still rotating passwords, with 2FA, sometimes three or four factors for an environment, and no one can really justify it except the fear that doing less will create liability.
A bit off-topic, but I find this crazy. In basically every ecosystem now, you have to specifically go out of your way to turn on mandatory rotation.
It's been almost a decade since it's been explicitly advised against in every cybersec standard, and almost two since the research showed how ill-advised mandatory rotations are.
PCI still recommends 90-day password changes. Luckily they've softened their stance to allow zero-trust to be used instead. They're not really equivalent controls, but they're clearly laid out as 'OR' in 8.3.9 regardless.
I think it's only a requirement if passwords are the sole factor, correct? Any other factor, or zero-trust, or risk-based authentication exempts you from the rotation. It's been a while since I've looked at anything PCI.
But that would mean doing less, and that's by default bad. We must take action! Think of the children!
I tried at my workplace to get them to stop mandatory rotation when that research came out. My request was shot down without any attempt at justification. I don't know if it's fear of liability or if the cyber insurers are requiring it, but by gum we're going to rotate passwords until the sun burns out.
This was stated as a long-term goal long ago. The idea is that you should automate away certificate issuance and stop caring, and to eventually get lifetimes short enough that revocation is not necessary, because that's easier than trying to fix how broken revocation is.
The problem is when the automation fails, you're back to manual. And decreasing the period between updates means more chances for failure. I've been flamed by HN for admitting this, but I've never gotten automated L.E. certificate renewal to work reliably. Something always fails. Fortunately I just host a handful of hobby and club domains and personal E-mail, and don't rely on my domains for income. Now, I know it's been 90 days because one of my web sites fails or E-mail starts to complain about the certificate being bad, and I have to ssh into my VPS to muck around. This news seems to indicate that I get to babysit certbot even more frequently in the future.
Really? I've never had it fail. I simply ran the script provided by LE, it set everything up, and it renewed every time until I took the site down for unrelated (financial) reasons. Out of curiosity, when did you last use LE? Did you use the script they provided or a third-party package?
I set it up ages ago, maybe before they even had a script. My setup is dead simple: A crontab that runs monthly:
0 2 1 * * /usr/local/bin/letsencrypt-renew
And the script:
#!/bin/sh
certbot renew
service lighttpd restart
service exim4 restart
service dovecot restart
... and so on for all my services
That's it. It should be bulletproof, but every few renewals I find that one of my processes never picked up the new certificates and manually re-running the script fixes it. Shrug-emoji.
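For what it's worth, a less fragile variant of a script like this (a sketch, assuming certbot's `--deploy-hook` option, which runs its command only when a certificate was actually renewed) would be:

```shell
#!/bin/sh
# Sketch: the deploy hook fires only on an actual renewal, so services
# aren't bounced on no-op runs and can't silently miss a new cert.
# The service names are just the examples from the script above.
command -v certbot >/dev/null || { echo "certbot not installed (sketch only)"; exit 0; }
certbot renew --quiet \
  --deploy-hook 'service lighttpd restart; service exim4 restart; service dovecot restart'
```

The hook also keeps the restarts tied to the renewal itself, rather than hoping a monthly cron run lines up with a renewal window.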
I don't know how old "letsencrypt-renew" is or what it does, but modern ACME clients run daily. The actual renewal process starts with 30 days left, so if something doesn't work it retries at least 29 times.
I haven't touched my OpenBSD (HTTP-01) acme-client in five years:
acme-client -v website && rcctl reload httpd
My (DNS-01) LEGO client sometimes has DNS problems. But as I said, it will retry daily and work eventually.
I wasn't making fun of you. It wasn't obvious that's what you meant at all, because you said you didn't know "what it does". I'm sure you know what certbot does, so I thought you misinterpreted the post.
Yes, same for me. Every few months some kind internet denizen points out to me that my certificate has lapsed, and running it manually usually fixes it. LE software is pretty low quality; I've had multiple issues over the years, some of which culminated in entire systems being overwritten by LE's broken Python environment code.
If it's happening regularly wouldn't it make sense to add monitoring for it? E.g. my daily SSL renew check sanity-checks the validity of the certificates actually used by the affected services using openssl s_client after each run.
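A minimal version of such a check might look like this (a sketch; the hostname is a placeholder, and `openssl x509 -checkend` exits non-zero if the certificate expires within the given number of seconds):

```shell
#!/bin/sh
# Warn if the certificate actually being served on port 443 expires
# within 14 days. s_client fetches the live cert, so this catches the
# "renewed on disk but never picked up by the daemon" failure mode.
HOST=example.com
echo | openssl s_client -servername "$HOST" -connect "$HOST:443" 2>/dev/null \
  | openssl x509 -noout -checkend $((14 * 24 * 3600)) >/dev/null \
  || echo "certificate for $HOST expires soon (or the check failed)"
```

Run it from cron after each renewal attempt, and point the same check at SMTP/IMAP ports (with `-starttls smtp` etc.) for the mail daemons.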
I did manage to set it up and it has been working OK, but it has been a PITA. Also, for some reason they contact my server over HTTP, so I must open port 80 just to do the renewal.
Enforcing an arbitrary mitigation to a problem the industry does not know how to solve doesn't make it a good solution. It's just a solution the corporate world prefers.
Except this isn't really viable for any kind of internal certs, where random internal teams don't have access to modify the corporate DNS. TLS is already a horrible system to deal with for internal software, and browsers keep making it worse and worse.
Not to mention that the WEBPKI has made it completely unviable to deliver any kind of consumer software as an offline personal web server, since people are not going to be buying their own DNS domains just to get their browser to stop complaining that accessing local software is insecure. So, you either teach your users to ignore insecure browser warnings, or you tie the server to some kind of online subscription that you manage and generate fake certificates for your customer's private IPs just to get the browsers to shut up.
This doesn't help that much, since you still have to fiddle with installing the private CA on all devices. Not much of a problem in corporate environments, perhaps, but a pretty big annoyance for any personal network (especially if you want friends to join).
It also ignores the real world: the CA/Browser Forum admits they don't understand how certificates are actually used in practice. They're just breaking shit to make the world a worse place.
They are calibrated for organizations/users that have higher consequences for mis-issuance and revocation delay than someone’s holiday blog, but I don’t think they’re behaving selfishly or irrationally in this instance. There are meaningful security benefits to users if certificate lifetimes are short and revocation lists are short, and for the most part public PKI is only as strong as the weakest CA.
OCSP (with stapling) was an attempt to get these benefits with less disruption, but it failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.
> OCSP failed for the same reason this change is painful: server operators don’t want to have to configure anything for any reason ever.
OCSP is going end-of-life because it makes it too easy to track users.
From Lets Encrypt[1]:
We ended support for OCSP primarily because it represents a considerable risk to privacy on the Internet. When someone visits a website using a browser or other software that checks for certificate revocation via OCSP, the Certificate Authority (CA) operating the OCSP responder immediately becomes aware of which website is being visited from that visitor’s particular IP address. Even when a CA intentionally does not retain this information, as is the case with Let’s Encrypt, it could accidentally be retained or CAs could be legally compelled to collect it. CRLs do not have this issue.
Client-side OCSP makes it too easy to track users. OCSP stapling largely solves that (plus the latency/fail-open issues) by having the server staple a recent OCSP response to the certificate during TLS negotiation. If OCSP stapling had succeeded, the privacy issues would have mostly disappeared (you could track that a server was serving traffic for a domain, but not the users).
OCSP stapling adds two more signatures to the TLS handshake. Bad enough with RSA keys but post-quantum signatures are much larger. OCSP stapling was always a band-aid.
If the server must automatically reach out to retrieve a new OCSP response for stapling every 7 days, why not just automatically get a whole new certificate, which is simpler and results in a lot less data on the wire for every TLS connection?
You hit on a good point... a better solution would be a special class of certificates, sort of like EV certs, where the lifetime is extremely short, specifically for the sorts of enterprises, like banks, that need that level of care. Granted, most banks can't get their SPF, DKIM, and DMARC correct for years at a time, so they would definitely find a way to screw that up.
The problem with that solution is that EV already showed that a two-class system of certificates that only really differ in slight UI hints is not useful. Normies never had any idea what the green bar meant, and even unusually savvy users are not likely to remember whether a particular website had an EV certificate or not last time they visited.
> They're just breaking shit to make the world a worse place.
Well, it's the people who want to MITM that started it, a lot of effort has been spent on a red queen's race ever since. If you humans would coordinate to stay in high-trust equilibria instead of slipping into lower ones you could avoid spending a lot on security.
That’s why the HTTP-01 challenge exists - it’s perfect for public single-server deployments. If you’re doing something substantial enough to need a load balancer, arranging the DNS updates (or centralizing HTTP-01 handling) is going to be the least of your worries.
Holding public PKI advancements hostage so that businesses can be lazy about their intranet services is a bad tradeoff for the vast majority of people that rely on public TLS.
and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?
There are more things on the internet than web servers.
You might say "use DNS-01", but that's reductive: I'm letting any node control my entire domain (and many of my registrars don't even allow API access to records, let alone an API key that's limited to a single record; even cloud providers don't have that).
I don't even think mail servers work well with the Let's Encrypt model unless it's a single server for everything without redundancies.
I guess nobody runs those anymore though, and, I can see why.
I've operated things on the web that didn't use HTTP but used public PKI (most recently, WebTransport). But those services are ultimately guests in the house of public PKI, which is mostly attacked by people trying to skim financial information going over public HTTP. Nobody made IRC use public PKI for server verification, and I don't know why we'd except what is now an effectively free CA service to hold itself back for any edge case that piggybacks on it.
> and my IRC servers that don’t have any HTTP daemon (and thus have the port blocked) while being balanced by anycast geo-fenced DNS?
The certificate you get for the domain can be used for whatever the client accepts it for - the HTTP part only matters for the ACME provider. So you could point port 80 to an ACME daemon and serve only the challenge from there. But this is not necessarily a great solution, depending on what your routing looks like, because you need to serve the same challenge response for any request to that port.
> You might say “use DNS-01”; but thats reductive- I’m letting any node control my entire domain (and many of my registrars don’t even allow API access to records- let alone an API key thats limited to a single record; even cloud providers dont have that).
The server using the certificate doesn't have to be the one going through the ACME flow, and once you have multiple nodes it's often better that it isn't. It's very rare for even highly sophisticated users of ACME to actually provision one certificate per server.
Are we pretending browsers aren’t a universal app delivery platform, fueling internal corporate tools and hobby projects alike?
Or that TLS and HTTPS are unrelated, when HTTPS is just HTTP over TLS; and TLS secures far more, from APIs and email to VPNs, IoT, and non-browser endpoints? Both are bunk; take your pick.
Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).
Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
> Or opt for door three: Ignore how CA/B Forum’s relentless ratcheting burdens ops into forking browsers, hacking root stores, or splintering ecosystems with exploitable kludges (they won’t: they’ll go back to “this cert is invalid, proceed anyway?” for all internal users).
It does none of these. Putting more elbow grease into your ACME setup with existing, open source tools solves this for basically any use case where you control the server. If you're operating something from a vendor you may be screwed, but if I had a vote I'd vote that we shouldn't ossify public PKI forever to support the business models of vendors that don't like to update things (and refuse to provide an API to set the server certificate programmatically, which also solves this problem).
> Nothing screams “sound security” like 45-day cert churn for systems outside the public browser fray.
Yes, but unironically. If rotating certs is a once a year process and the guy who knew how to do it has since quit, how quickly is your org going to rotate those certs in the event of a compromise? Most likely some random service everyone forgot about will still be using the compromised certificate until it expires.
> And hey, remember back in the day when all the SMTP submission servers just blindly accepted any certificate they were handed because doing domain validation broke email… yeah
Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.
> Everyone likes to meme on this, but TLS without verification is actually substantially stronger than nothing for server-to-server SMTP (though verification is even better). It's much easier to snoop on a TCP connection than it is to MITM it when you're communicating between two different datacenters (unlike a coffeeshop). And most mail is between major providers in practice, so they were able to negotiate how to establish trust amongst themselves and protect the vast majority of email from MITM too.
No, it's literally nothing, since you can just create whatever TLS cert you want and just MITM anyway.
What do you think you're protecting from? Passive snooping via port-mirroring?
Taps are generally more sophisticated than that.
How do I establish trust with Google? How do they establish trust with me? I mean, we're not using the system designed for it, so clearly it's not possible; otherwise they would have enabled this option at the minimum.
> No, it's literally nothing, since you can just create whatever TLS cert you want and just MITM anyway.
> What do you think you're protecting from? Passive snooping via port-mirroring?
Yes, exactly. For datacenter to datacenter traffic, passive snooping is much easier for small-time criminals to achieve than a MITM. You can do it just by having a device on the same L2 switch domain and spoofing the MAC table (MAC spoofing/port security being un- or mis-configured is typical in those environments). No need to compromise routing at all.
> How do I establish trust with Google? How do they establish trust with me: I mean, we're not using the system designed for it, so clearly it's not possible- otherwise they would have enabled this option at the minimum.
Establishing trust with Google specifically is super-simple: their SMTP servers all have valid public PKI certificates and have for a long time. Even if they didn’t, they could give you an internal CA root to verify them. This doesn’t scale to lots of orgs, but almost all legitimate email traffic is between Google, Microsoft, Yahoo, and the top 10 marketing/transactional email services.
That’s why nobody was in a rush to solve the SMTP MITM problem. Plus, since SMTP for delivery is not authenticated at the application level, you only have to really worry about snooping/preventing delivery. If you want to send fake emails, the certificates provided by the server are irrelevant - there’s no password that you need to steal.
Because the service needs to be usable from non-managed devices, whether that be on the internet or on an isolated wifi network.
Very common in mobile command centres for emergency management, inflight entertainment systems and other systems of that nature.
I personally have a media server on my home LAN that I let my relatives use when they’re staying at our place. It has a publicly trusted certificate I manually renew every year, because I am not going to make visitors to my home install my PKI root CA. That box has absolutely no reason to be reachable from the Internet, and even less reason to be allowed to modify my public DNS zones.
It might never 'touch' the internet, but the certificates can be easily automated. They don't have to be reachable on the internet, they don't have to have access to modify DNS - but if you want any machine in the world to trust it by default, then yes - there'll need to be some effort to get a certificate there (which is an attestation that you control that FQDN at a point-in-time).
and we're back to: How do I create an API token that only enables a single record to be changed on any major cloud provider?
Or.. any registrar for that matter (Namecheap, Gandi, Godaddy)?
The answer seems to be: "Bro, you want security so the way you do that is to give every device that needs TLS entire access to modify any DNS record, or put it on the public internet; that's the secure way".
(PS: the way this was answered before was: "Well then don't use LE and just buy a certificate from a major provider", but, well, now that's over).
There are ways to do this as pointed out below - CNAME all your domains to one target domain and make the changes there.
There’s also a new DCV method that only needs a single, static record. Expect CA support widely in the coming weeks and months. That might help?
One answer I've seen to this (very legitimate) concern is using CNAME delegation to point _acme-challenge.$domain to another domain (or a subdomain) that has its own NS records and dedicated API credentials.
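Concretely, the delegation looks something like this (a sketch; the zone names are placeholders, and the ACME client then needs API credentials only for the delegated zone, not for your main domain):

```
; In the main zone (set once, by hand; no API access needed here):
_acme-challenge.example.com.  IN  CNAME  example-com.acme.example.net.

; The ACME client writes its TXT challenge records only into the
; delegated acme.example.net zone, which has its own NS and API key.
```

A compromised node can then at worst obtain certificates, not rewrite arbitrary records in the parent zone.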
It’s a stupid policy. To solve the non-existent problem with certificates, we are pushing the problem to demonstrating that we have access to a DNS registrar’s service portal.
Yeah, the best/worst part of this is that nobody was stopping the 'enlightened' CA/Browser Forum from issuing shorter certificates for THEIR OWN fleets, but no, we couldn't be allowed to make our own decisions about how we best saw to the security of the communications channel between ourselves and our users. We just weren't allowed to be 'adult' enough.
The ignorance about browser lock-in too, is rad.
I guess we could always, as they say, create a whole browser, from scratch to obviate the issue, one with sane limitations on certificate lifetimes.
First, one of the purposes of shorter certificates is to make revocation easier in the case of misissuance. Just having certificates issued to you be shorter-lived doesn't address this, because the attacker can ask for a longer-lived certificate.
Second, creating a new browser wouldn't address the issue because sites need to have their certificates be acceptable to basically every browser, and so as long as a big fraction of the browser market (e.g., Chrome) insists on certificates being shorter-lived and will reject certificates with longer lifetimes, sites will need to get short-lived certificates, even if some other browser would accept longer lifetimes.
I always felt like #1 would have better been served by something like RPKI in the BGP world. I.e. rather than say "some people have a need to handle ${CASE} so that is the baseline security requirement for everyone" you say "here is a common infrastructure for specifying exactly how you want your internet resources to be able to be used". In the case of BGP that turned into things like "AS 42 can originate 1.0.0.0/22 with maxlength of /23" and now if you get hijacked/spoofed/your BGP peering password leaks/etc it can result in nothing bad happening because of your RPKI config.
The same in web certs that could have been something like "domain.xyz can request non-wildcard certs for up to 10 days validity". Where I think certs fell apart with it is they placed all the eggs in client side revocation lists and then that failure fell to the admins to deal with collectively while the issuers sat back.
For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
> "domain.xyz can request non-wildcard certs for up to 10 days validity"?
You could be proposing two things here:
(1) Something like CAA that told CAs how to behave.
(2) Some set of constraints that would be enforced at the client.
CAA does help some, but if you're concerned about misissuance you need to be concerned about compromise of the CA (this is also an issue for certificates issued by the CA the site actually uses, btw). The problem with constraints at the browser is that they need to be delivered to the browser in some trustworthy fashion, but the root of trust in this case is the CA. The situation with RPKI is different because it's a more centralized trust infrastructure.
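For reference, option (1) already exists in a limited form as the CAA record (a sketch; the domain and contact address are placeholders):

```
; Only the named CA may issue for this domain; violations get reported.
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

As noted, though, CAA is advisory to CAs and enforced at issuance time, so it offers no protection once the CA itself is compromised.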
> For the second note, I think that friction is part of their point. Technically you can, practically that doesn't really do much.
I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
The RPKI-alike is more akin to #1, but avoids the step of trying to bother trusting compromised CAs. I.e., if a CA is compromised you revoke and regenerate CA's root keys and that's what gets distributed rather than rely on individual revocation checks for each known questionable key or just sitting back for 45 days (or whatever period) to wait for anything bad to expire.
> I'm not following. Say you managed to start a new browser and had 30% market share (I agree, a huge lift). It still wouldn't matter because the standard is set by the strictest major browser.
Same reasoning between us I think, just a difference in interpreting what it was saying. Kind of like sarcasm - a "yes, you can do it just as they say" which in reality highlights "no, you can't actually do _it_ though" type point. You read it as solely the former, I read it as highlighting the latter. Maybe GP meant something else entirely :).
That said, I'm not sure I 100% agree it's really related to the strictest major browser does alone though. E.g. if Firefox set the limit to 7 days then I'd bet people started using other browsers vs all sites began rotating certs every 7 days. If some browsers did and some didn't it'd depend who and how much share etc. That's one of the (many) reasons the browser makers are all involved - to make sure they don't get stuck as the odd one out about a policy change.
Thanks for Let's Encrypt btw. Irks about the renewal squeeze aside, I still think it was a net positive move for the web.
I don't feel the problem of a rogue CA misissuing is addressed by the shorter lifetimes either, though; the tradeoff isn't worth it.
The whole CA problem is best summed up by Moxie:
https://moxie.org/2011/04/11/ssl-and-the-future-of-authentic...
And, well, the create-a-browser thing was a joke; it's what I've seen suggested for those who don't like the new rules.
I just post the password semi-publicly on some scratchpad (like a secret gist that's always open in the browser, or for 2FA a custom web page with the generator built in) if any of those policies get too annoying. Brings the number of factors back to one and bypasses the 'can't use previous 300000 passwords' BS. Works every time.