
I believe some programs used to let you even drag menu items to the toolbar.

Many KDE apps (Dolphin, Kate, Okular, etc.) let you configure their tool bars (or get rid of them entirely) and set them to show just icons, text, or both (with the text to the side or below). It's the kind of thing most people won't bother with, but for frequently used applications it's nice to be able to customize it to suit your needs. It's done via a config option though, not by dragging menu items to the toolbar (which strikes me as something you could initiate by mistake).

MS Office’s fully customisable toolbars, complete with built-in icon editor.

…ripped out when the Office Ribbon was introduced in 2007; the now-limited customisation is considered an improvement because of the IT support problems caused by users messing up their own toolbars.

I mean, yes; but that’s what Group Policy is for! And the removal of the icon editor is just being downright mean to bored school kids.


You made me feel old by saying "I believe".

Most users do not own two computers.

A huge percentage do. Whether it's most or not would be a stupid metric by which to design a cloud storage product in a way that causes issues when someone does sign in on multiple devices.

Also, signing in on multiple devices is the entire point of a file sync app.

Well, OneDrive is the MS answer to Dropbox, whose main purpose is syncing between computers.

No, but the biggest users who pay the most do.

Aren't some of these government regulations for cloud, etc.?


SOC2/FIPS/HIPAA/etc don't mandate zero-CVE, but a zero-CVE posture is an easy way to dodge all the paperwork that would be involved in exhaustively documenting exactly why each flagged CVE doesn't actually apply to your specific scenario (and then potentially re-litigating it all again in your annual audit).

So it's more of a cost-cutting/cover-your-ass measure than an actual requirement.


There are several layers of translation between public regulations, a company's internal security policy and the processes used to enforce those policies.

Let's say the reg says you're liable for damages caused by software defects you ship due to negligence, giving you broad leeway in how to mitigate risks. The corporate policy then says "CVEs with score X must be fixed in Y days; OWASP best practices; infrastructure audits; MFA; yadda yadda". Finally, enforcement is done by automated tooling like SonarQube, Prisma, Dependabot, Burp Suite, ... and any finding must be fixed with little nuance, because the people doing the scans lack the time or expertise to assess whether any particular finding is actually security-relevant.

On the ground the automated, inflexible enforcement and friction then leads to devs choosing approaches that won't show up in scans, not necessarily secure ones.

As an example I witnessed recently: a cloud infra scanning tool highlighted that an AppGateway was used as a TLS-terminating reverse proxy, meaning it used HTTP internally. The tool says "HTTP bad", even when it's on an isolated private subnet. But the tool didn't understand Kubernetes clusters, so a public unencrypted ingress, i.e. public HTTP, didn't show up. The former was treated as a critical issue that had to be fixed asap or it would get escalated up the management chain. The latter? Nobody cares.

Another time I got pressure to downgrade from Argon2 to SHA2 for password hashing because Argon2 wasn't on their whitelist. I resisted that change but it was a stressful bureaucratic process with some leadership being very unhelpful and suggesting "can't you just do the compliant thing and stop spending time on this?".
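
For context on what was at stake: Argon2 is a deliberately slow, memory-hard password hashing function, while bare SHA-2 is fast and cheap to brute-force at scale. A minimal sketch of the Argon2 side, using the npm "argon2" package (the package choice and tuning parameters here are illustrative, not what we actually shipped):

    import argon2 from "argon2";

    // Hash at registration time. Argon2id is memory-hard, so each guess costs
    // an attacker real RAM and CPU time, unlike a bare SHA-256 digest.
    async function hashPassword(password: string): Promise<string> {
      return argon2.hash(password, {
        type: argon2.argon2id,
        memoryCost: 64 * 1024, // in KiB, i.e. 64 MiB -- illustrative tuning
        timeCost: 3,
        parallelism: 1,
      });
    }

    // Verify at login time; the salt and parameters are encoded in the hash string.
    async function verifyPassword(hash: string, password: string): Promise<boolean> {
      return argon2.verify(hash, password);
    }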

So I agree with GP that some security teams barely correlate with security, sometimes going into the negative. A better approach would be to integrate software security engineers into dev teams, but that'd be more expensive and less measurable than "tool says zero CVEs".


Did you forget a link?


Sometimes you can't get every staff engineer to agree to the plan, but you can't sit around forever waiting for someone to cave.


No, but especially of staff+ engineers I'd expect that they know how to disagree and commit after a reasonable (not too long!) time.


I don't understand the bicycle density numbers in the article.

At high speeds, bicycles also have to spread out. Add the bike trailers mentioned, and it seems even more unlikely.


Hi, author of the article. I'm assuming urban traffic speeds, which is what I observe all the time myself, but you can look at the video of those kids, and count, and look at the seconds. 125 bikes in 45 seconds, between 0:02 and 0:47. Understanding it is another issue, but it's a fact. (This is one of those things that I do myself and would not claim that I exactly understand the details, I just do it.)

There have been more academic studies. e.g. https://nacto.org/wp-content/uploads/5_Zhou-Xu-Wang-and-Shen... estimates 2512 bicycles per hour per meter of road width, or 7536 bikes per hour on a 3-meter (10 feet) wide lane. That's only 4.2x car throughput, versus those kids who managed 5.5x.
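
Spelling the arithmetic out (the implied car throughput below is my own back-calculation from the 4.2x figure, not a number from the study):

    // Observed: 125 bikes passed in 45 seconds.
    const kidsPerHour = (125 / 45) * 3600;          // = 10,000 bikes/hour

    // Study estimate: 2512 bikes/hour per meter of width, on a 3 m lane.
    const studyLanePerHour = 2512 * 3;              // = 7,536 bikes/hour

    // If 7,536 bikes/hour is 4.2x car throughput, cars manage roughly:
    const carsPerHour = studyLanePerHour / 4.2;     // ~ 1,794 cars/hour

    // The kids' observed rate against that same car baseline:
    const kidsMultiple = kidsPerHour / carsPerHour; // ~ 5.6, i.e. the "5.5x" above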

You are right about the trailers, but at least where I ride, they are not common-case for carrying things, lots more cargo bikes instead, and those are "better" than trailers -- it's possible to ride two cargo bikes side-by-side even in a US protected lane (specifically on Garden Street in Cambridge, MA), though this of course assumes competent riders.


They had a deal with AT&T with a data plan that no one else could get. It made every other phone useless in comparison.


A passkey is a private key stored on your device, with the corresponding public key registered with the server.

Servers should allow multiple passkeys per user (so you can register multiple devices), but many don't.
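
Under the hood this is the browser WebAuthn API. A minimal client-side sketch of both halves (the challenge, user id, and relying-party values below are placeholders that a real server would supply and verify):

    // Registration: the authenticator generates a keypair; only the public key
    // and a credential ID are sent to the server. Every call creates a new
    // passkey, which is why servers should accept more than one per account.
    const challenge = new Uint8Array(32);                // placeholder; use random bytes from the server
    const userId = new TextEncoder().encode("user-123"); // placeholder user handle

    const credential = await navigator.credentials.create({
      publicKey: {
        challenge,
        rp: { name: "Example", id: "example.com" },
        user: { id: userId, name: "alice", displayName: "Alice" },
        pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      },
    });

    // Login: the authenticator signs the server's challenge with the private
    // key, which never leaves the device.
    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge: new Uint8Array(32),                   // placeholder; fresh challenge from the server
        rpId: "example.com",
      },
    });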


X.509 already does that, and in a better way. It also makes it unnecessary to register multiple devices if you allow certificate chains: the server checks the chain; one certificate was issued by the service and contains information about which account it is associated with; the others you can issue to yourself, optionally with more restricted permissions, and they can be revoked or expire. That would also allow you to have passworded private keys, and/or to store one private key on a separate computer that is not connected to the internet and use it to issue the other one to yourself, to mitigate security issues (you can revoke the certificate and make a new one if it is compromised or expires). X.509 is also not limited to the WWW, so it can be used with other protocols too.
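
A minimal sketch of the chain check described above, using Node's built-in X509Certificate class (the file names and the account/device split are hypothetical, and revocation checking via CRL/OCSP is omitted):

    import { X509Certificate } from "node:crypto";
    import { readFileSync } from "node:fs";

    // account.pem: issued by the service, binds the chain to an account.
    // device.pem: issued by the user to themselves for one device, possibly
    // with narrower permissions and a shorter expiry.
    const accountCert = new X509Certificate(readFileSync("account.pem"));
    const deviceCert = new X509Certificate(readFileSync("device.pem"));

    // Was the device certificate really issued under the account certificate?
    const issuedByAccount =
      deviceCert.checkIssued(accountCert) &&
      deviceCert.verify(accountCert.publicKey);

    // Basic validity-window check; a compromised device cert would instead be
    // revoked and reissued, as described above.
    const notExpired = new Date(deviceCert.validTo) > new Date();

    console.log(issuedByAccount && notExpired ? "chain ok" : "reject");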


That's an implementation detail users should not care about.

The bigger question is... why don't we replace the login/password combination with just a string of randomly generated characters and call it a day?

Why protect these strings of random characters from users, call them passkeys and advertise them on all street corners?

Feels like a devil's plot to strip us of all rights to our devices.


Public/private keypairs (and therefore passkeys) provide cryptographically secure anti-phishing properties that passwords cannot match.


More complete astronomical data from telescopes showed that epicycles needed to be even more complicated than they were.

If we manage to find better tools for QM where we don't need to perform as much post-selection of experimental data, perhaps we'll also find a simpler model.


"Who wants to live without horses for transportation?"

Isn't the point that we don't even consider alternatives?

