fwlr's comments | Hacker News

15MB of JavaScript is 15MB of code that your browser is trying to execute. It’s the same principle as “compiling a million lines of code takes a lot longer than compiling a thousand lines”.


It's a lot more complicated than that. If I have a 15MB .js file and it's just a collection of functions that get called on-demand (later), that's going to have a very, very low overhead because modern JS engines JIT compile on-the-fly (as functions get used) with optimization happening for "hot" stuff (even later).

If there's 15MB of JS that gets run immediately after page load, that's a different story. Especially if there's lots of nested calls. Ever drill down deep into a series of function calls inside the performance report for the JS on a web page? The more layers of nesting you have, the greater the overhead.

DRY as a concept is great from a code readability standpoint, but it's not ideal for performance when it comes to things like JS execution (haha). I'm actually disappointed that modern bundlers don't normally inline calls at the JS layer. IMHO, they rely too much on the JIT to optimize hot call sites when that could've been done by the bundler. Instead, bundlers tend to optimize for file size, which is becoming less and less of a concern as bandwidth has far outpaced JS bundle sizes.

The entire JS ecosystem is a giant mess of "tiny package does one thing well" that is dependent on n layers of "other tiny package does one thing well." This results in LOADS of unnecessary nesting, when the "tiny package that does one thing well" could've just shipped its own implementation of the simple thing it relies on.

Don't think of it from the perspective of, "tree shaking is supposed to take care of that." Think of it from the perspective of, "tree shaking is only going to remove dead/duplicated code to save file size." It's not going to take that 10-line function that handles <whatever> and put that logic right where it's used (in order to shorten the call tree).
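
To make the inlining point concrete, here's a hypothetical sketch (the helper functions are made-up stand-ins for "tiny packages", not real npm modules) of what flattening the call tree at the bundle layer could look like:

    // Stand-ins for two "tiny packages that do one thing well":
    function isString(x) { return typeof x === 'string'; }
    function padStart(s, len, ch) { return s.length >= len ? s : ch.repeat(len - s.length) + s; }

    // Before: every call goes through extra frames that the JIT has to
    // discover, profile, and eventually inline at runtime.
    function formatId(id) {
      return padStart(isString(id) ? id : String(id), 8, '0');
    }

    // After: the same logic flattened ahead of time by the bundler,
    // shortening the call tree before the JIT ever sees it.
    function formatIdInlined(id) {
      const s = typeof id === 'string' ? id : String(id);
      return s.length >= 8 ? s : '0'.repeat(8 - s.length) + s;
    }

    console.log(formatId(42), formatIdInlined(42)); // "00000042" "00000042"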


That 15MB still needs to be parsed on every page load, even if it runs in interpreted mode. And on low-end devices there's very little cache, so the working set is likely to be far bigger than the available cache, which causes performance to crater.


Ah, that's the thing: "on page load". A one-time expense! If you're using modern page routing, "loading a new URL" isn't actually loading a new page... The client is just simulating it via your router/framework by updating the page URL and adding an entry to the history.
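
For anyone unfamiliar with what that simulation looks like, here is a minimal sketch using the browser's History API (it assumes an element with id "app" exists; real routers add route matching, data loading, scroll restoration, and so on):

    // Swap page content in place instead of doing a full page load.
    function render(path) {
      document.querySelector('#app').textContent = 'You are now on ' + path;
    }

    function navigate(url) {
      history.pushState({}, '', url); // update the address bar and add a history entry
      render(url);
    }

    // Handle the browser's back/forward buttons.
    window.addEventListener('popstate', () => render(location.pathname));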

Also, 15MB of JS is nothing on modern "low end devices". Even an old, $5 Raspberry Pi Zero won't flinch at that, and anything slower than that... isn't my problem! Haha =)

There comes a point where supporting 10yo devices isn't worth it when what you're offering/"selling" is the latest & greatest technology.

It shouldn't be, "this is why we can't have nice things!" It should be, "this is why YOU can't have nice things!"


When you write code with this mentality, it makes my modern CPU with 16 cores at 4GHz and 64GB of RAM feel like a Pentium 3 running at 900MHz with 512MB of RAM.

Please don't.


THANK YOU


This really is a very wrong take. My iPhone 11 isn't that old, but it struggles to render some websites that are Chrome-optimised. Heck, even my M1 Air has a hard time sometimes. It's almost 2026; we can certainly stop blaming the client for our shitty web development practices.


>There comes a point where supporting 10yo devices isn't worth it

Ten years isn't what it used to be in terms of hardware performance. Hell, even back in 2015 you could probably still make do with a computer from 2005 (although it might have been on its last legs). If your software doesn't run properly (or at all) on ten-year-old hardware, it's likely people on five-year-old hardware, or with a lower budget, are getting a pretty shitty experience.

I'll agree that resources are finite and there's a point beyond which further optimizations aren't worthwhile in a business sense, but where that point lies should be considered carefully, not picked arbitrarily with the consequences casually handwaved away as "eh, not my problem".


Tangentially related: one of my favourite things about JavaScript is that it has so many different ways for the computer to “say no” (in the sense of “computer says no”): false, null, undefined, NaN, boolean coercion of 0/“”, throwing errors, ...

It's common to see groaning about double-equal vs. triple-equal comparison and eye-rolling directed at absurdly large tables like the one in https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guid..., but I think it's genuinely great that we have the ability to distinguish between concepts like "explicitly not present" and "absent".
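
A few of those distinctions in one runnable snippet:

    const obj = { a: null };              // "explicitly not present"
    console.log('a' in obj, obj.a);       // true null - the key exists, the value says no
    console.log('b' in obj, obj.b);       // false undefined - "absent"

    console.log(null == undefined);       // true - loose equality lumps them together
    console.log(null === undefined);      // false - strict equality keeps them apart

    console.log(Number.isNaN(0 / 0));     // true - "the computation said no"
    console.log(Boolean(0), Boolean('')); // false false - falsy coercion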


From quickly messing around in the playground, it seems (in math mode) Typst treats multiple spaces identically to single spaces. A simple, consistent, flexible, and probably-not-majorly-breaking-old-documents rule would be "anything with no spaces binds tighter than anything with one space, anything with one space binds tighter than anything with two spaces", etc., and then - only within each spacing category - you apply one of the precedence rulesets described in the article. Any confusion or surprise can be solved intuitively and without thought by mashing spacebar.


I agree with you conceptually, and am also laughing a bit thinking about how many people get angry about significant whitespace in Python and how much deeper down that rabbit hole "operator precedence changes based on whitespace" this proposal is :D


Scrolling up and down the list, just how onerous is this reporting regulation? It seems almost cartoonishly excessive, even for critical safety applications.


Literally no amount of incident reporting is excessive when it comes to nuclear power. Not just because of the safety of the plant itself, but because so much is reliant on it.

It's important to identify even small defects or incidents so that patterns can be noticed before they turn into larger issues. You see the same breaker tripping at 3x the rate of other ones, and even though maybe nothing was damaged you now know there's something to investigate.


Aaaand it’s this alarmist attitude which is why we don’t have abundant cheap nuclear energy.

Sea-drilling rigs (oil) have far more potential for environmental damage than modern nuclear plants.

Yet they have no federal public register for when a worker falls overboard (an incident far more likely to result in death).


> Sea-drilling rigs (oil) have far more potential for environmental damage than modern nuclear plants

Key word: "modern". A key aspect of a modern nuclear plant, that supports its high level of safety, is the required incident reporting and followup.

The relevant issue is not really about a single worker being injured or dying. It's about detecting safety issues which could lead to a catastrophe far beyond what a sea oil drilling rig can, at least when it comes to human life and habitability of the surrounding area.

For example, after Chernobyl, much of Europe had to deal with contamination from cesium-137.

The entire planet's geological history shows when the nuclear age started, because humans are irresponsible in aggregate. (See also global warming.)

> Aaaand it’s this alarmist attitude ...

You're providing an object lesson in why humans can't really be trusted to operate systems like this over the long term.


> You're providing an object lesson in why humans can't really be trusted to operate systems like this over the long term.

Ironically so are you. The coal we burn puts far more radioactivity into the environment than nuclear plants do. Yet we make sure nuclear isn't viable and burn coal like crazy. We do this only because of the type of risk telescoping you are doing. If you do a rational risk assessment, you will see that even operating nuclear plants as shown in the Simpsons would have less risk than what we are doing now. There is a risk to doing nothing. You are missing that part in your assessment.


> It's about detecting safety issues which could lead to a catastrophe far beyond what a sea oil drilling rig can

A worker falling into a reactor pool (which is just room temp water with very little risk) is not a catastrophe, yet due to the absurd safetyism surrounding nuclear it requires a federal report.

We don’t require this level of cost insanity for far more deadly worker events at oil, gas, solar or wind facilities.

There is no systemic risk from worker falls. MAYBE the plant in question should address hand railing heights from pre-ADA construction. It certainly shouldn’t require multiple federal government employees to create a report on it and be publicly listed in federal register and reported on by hundreds of news outlets.

You’re making my point.


> We don’t require this level of cost insanity for far more deadly worker events at oil, gas, solar or wind facilities.

This is not saying what you think it's saying.


> Not just because of the safety of the plant itself, but because so much is reliant on it.

When an oil rig has an incident, cities and hospitals and food storage and logistics aren't disrupted.


Having the infrastructure for reporting incidents is the expensive part.

Doing it often doesn’t really add to the cost. More reporting is helpful because it explicitly makes it clear even operational issues can have lessons to be learned from. It also keeps the reporting system running and operationally well maintained.

WebPKI does this as well.


I believe this is a case of “developers who went into the wallet business”, actually.


In code, the semantic difference is pretty small between "select one at random" and "select two at random and perform a trivial comparison" - roughly the same as the difference between that and "select three at random and perform two trivial comparisons". That is, they are all just specific instances of the "best-of-x" algorithm: "select x at random and perform x-1 comparisons". It's natural to wonder why going from "best-of-1" to "best-of-2" makes such a big difference while going from "best-of-2" to "best-of-3" doesn't.
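
You can see the jump empirically with a quick simulation (this assumes the usual balls-into-bins / load-balancing setting, which may not match the article exactly):

    // Throw n balls into n bins; each ball goes into the least loaded of x
    // randomly chosen bins ("best-of-x"). Report the maximum bin load.
    function maxLoad(n, x) {
      const bins = new Array(n).fill(0);
      for (let ball = 0; ball < n; ball++) {
        let best = Math.floor(Math.random() * n);
        for (let c = 1; c < x; c++) { // the x-1 trivial comparisons
          const alt = Math.floor(Math.random() * n);
          if (bins[alt] < bins[best]) best = alt;
        }
        bins[best]++;
      }
      return bins.reduce((m, v) => Math.max(m, v), 0);
    }

    const n = 100000;
    console.log('best-of-1:', maxLoad(n, 1)); // grows like ln n / ln ln n
    console.log('best-of-2:', maxLoad(n, 2)); // drops to about ln ln n / ln 2, a huge improvement
    console.log('best-of-3:', maxLoad(n, 3)); // only slightly better than best-of-2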

In complexity analysis, however, it is the presence or absence of "comparisons" that makes all the difference. "Best-of-1" does not have comparisons, while "best-of-2", "best-of-3", etc., do. There's a weaker "selections" class, and a more powerful "selections+comparisons" class. Doing more comparisons might move you around within the internal rankings of the "selections+comparisons" class, but the differences within the class are small compared to the differences between the classes.

An alternative, less rigorous intuition: behind door number 1 is a Lamborghini, behind door number 2 is a Toyota, and behind door number 3 is cancer. Upgrading to “best of 2” ensures you will never get cancer, while upgrading again to “best of 3” merely gets you a sweeter ride.


JavaScript promises are objects with a resolver function and an internal asynchronous computation. At some point in the future, the asynchronous computation will complete, and at that point the promise will call its resolver function with the return value of the computation.

`prom.then(fn)` creates a new promise. The new promise's resolver function is the `fn` inside `then(fn)`, and the new promise's asynchronous computation is the original promise completing (i.e., calling its own resolver).
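
A small runnable illustration of that chaining, using the real Promise API (the timing and values are arbitrary):

    // p's asynchronous computation: a timer that completes with the value 1.
    const p = new Promise(resolve => setTimeout(() => resolve(1), 100));

    // p2 is a new promise: its "computation" is p settling, and its value
    // comes from the fn passed to .then.
    const p2 = p.then(n => n + 1);

    p2.then(n => console.log(n)); // logs 2 after ~100ms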


Seems very likely this will lead to “professional repackagers” whose business model is “for a fee you may install our fork of curl and we will promptly reply to emails like this”, unfortunately.


Red Hat would be smart to get in on this


The FTC was warned at the time that they were flouting required procedures and that their rule would therefore not survive legal scrutiny. Lo and behold it did not.


Please point to an example of these warnings.


Another example from the article

> At the time of the vote, Holyoak's dissenting statement accused the majority of hurrying to finalize the rule before the November 2024 election and warned that the new regulation "may not survive legal challenge."


> The FTC is required to conduct a preliminary regulatory analysis when a rule has an estimated annual economic effect of $100 million or more. The FTC estimated in a Notice of Proposed Rulemaking (NPRM) that the rule would not have a $100 million effect.

> But an administrative law judge later found that the rule's impact surpassed the threshold, observing that compliance costs would exceed $100 million "unless each business used fewer than twenty-three hours of professional services at the lowest end of the spectrum of estimated hourly rates," the 8th Circuit ruling said. Despite the administrative law judge's finding, the FTC did not conduct a preliminary regulatory analysis and instead "proceeded to issue only the final regulatory analysis alongside the final Rule," the judges' panel said.

It says it in the article


Why are you pasting the article when it doesn't include any warnings that were given to the FTC at the time?


It literally says they were warned by the administrative judge that a preliminary regulatory analysis was required to make such a rule.

> Despite the administrative law judge's finding, the FTC did not conduct a preliminary regulatory analysis and instead "proceeded to issue only the final regulatory analysis alongside the final Rule,"


The fact that it takes more than 24 hours to add a one-click cancel button is alien to me.


You must not work on these sorts of systems. It can easily take more than 24 hours. In case you’re genuinely interested in learning more, here’s how it works.

There are good reasons for it working this way, BTW. The needs of a company with hundreds or thousands of people are different than the needs of hobbyists and early-stage startups.

1. A user experience designer analyzes the user flow and decides where to put the cancellation button. They make decisions about style, layout, and wording. This isn't a ton of work, but something so critical to the company's business and retention numbers will probably involve a lot of review, discussion, and bikeshedding. This could easily take 24 people-hours of work on its own.

2. Somebody programs the front-end change. They probably have to put it behind a feature flag so it’s not visible until the back end is ready.

3. Somebody programs the back-end. They think about security, authentication, authorization, and CSRF. That's probably already handled, but again, this is a critical feature and deserves extra care (a rough sketch follows this list).

4. Somebody programs the interface to the company’s internal systems. They’re usually kind of a pain to work with. Billing, marketing, support, customer success. Something probably sends an email to the user. Maybe there’s a follow up flow to try to get them back with a special offer a month later. Etc.

5. The change is tested. Preferably with automated tests, but a feature like this has tendrils into systems throughout the company, and a lot of moving parts, so manual testing is also important. If it goes wrong, it’s a big deal, involving the potential for chargebacks and lawsuits, both of which are expensive at scale.
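
To make steps 2-4 concrete, here's a rough hypothetical sketch; Express is assumed, and the auth, CSRF, feature-flag, and billing pieces are stand-in stubs, not anyone's real system:

    const express = require('express');
    const app = express();
    app.use(express.json());

    // Placeholder stubs; a real system has proper implementations of these.
    const requireAuth = (req, res, next) => { req.user = { id: 'demo' }; next(); };
    const verifyCsrfToken = (req) => true;
    const flags = { isEnabled: (name, user) => true };
    const billing = { cancelSubscription: async (userId) => {} };

    app.post('/account/cancel', requireAuth, async (req, res) => {
      // Step 2: hidden behind a feature flag until the back end is ready.
      if (!flags.isEnabled('one-click-cancel', req.user)) {
        return res.status(404).end();
      }
      // Step 3: CSRF / authorization checks on a business-critical action.
      if (!verifyCsrfToken(req)) {
        return res.status(403).json({ error: 'invalid CSRF token' });
      }
      // Step 4: hand off to billing, which kicks off emails, proration,
      // win-back offers, and everything else downstream.
      await billing.cancelSubscription(req.user.id);
      res.json({ status: 'cancelled' });
    });

    app.listen(3000);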

Throughout all this, you're dealing with legacy code, because billing is one of the oldest systems the company has and one of the riskiest to change, so the code is nasty and doesn't follow current conventions. Every change is painful and tedious.

It’s alien to you that this could take more than 24 hours? At any company of size, I have trouble imagining it taking less.


Of course now that the FTC rule is well known, anyone designing a new system would require click-to-cancel. The new burden is low, but at the time the big companies probably spent millions to fix it.

A more extreme example would be the US Clean Air Act and how the EPA extended the rules to regulate carbon dioxide emissions. Obviously going to cost a lot of money, but a necessary change to dodge climate disaster. That rule had to wait for Congress to pass the Inflation Reduction Act to become legal. Hopefully this minor consumer protection rule will be supported by Congress as well.


We are assuming the calculation for the number of companies affected is correct. If they are using a provider like Shopify or a WordPress plugin, the cost will only be to upgrade the plugin.

I don't know that the backend is necessarily needed. If the button only opened a support ticket/sent an email then the rest can be done by the employees who already processed cancellations on the phone. They just don't need to be on the phone with the customer to do it.


Exactly. Currently, somewhere on page 10 of click-throughs is the "real" cancel button; essentially you just need to make that the "first" cancel button and link straight to it.


How many companies of "size" do you know of? Because that process looks HORRIBLY inefficient and primed only to extract as much money as possible from the consumer. You just need to put it in the account screen. A big red button. Your _workflow_ is there to make excuses. If the move were the other way, you would gladly pay the cost, but because it actually hurts your "business model", it is suddenly a problem. No buddy, I call BS on all that, and call BS on the law itself.


So, you're holding a strong opinion about something that you're completely uneducated about and have no experience with?

ANY software change in a non-hobby business goes through a change process.

One as significant as an entirely new account cancelation flow requires extensive planning, design and testing.

What if you have equipment like a set-top box? What if a shipping label needs to be mailed out? What if there are state-by-state regulations that must be complied with? What if you have to issue prorated refunds of prepaid subscription fees? What if different accounts have different cancelation terms because of bulk pricing? And a million other things that you have to think about, design for, and test.

Of course you can solve all this. But it's certainly not "BS" that it'll take more than 24 hours.

The FTC knew this. They cheated their process to ram through a rule. But you like the rule they tried to cheat to implement, so it's ok then, I guess.


For the types of issues we’re discussing here, we’re talking about companies making more than $50mm yearly, which is about 75-100 employees. So successful small businesses and larger. I don’t have exact numbers, but this size business is very common. Most professional programmers will have seen the issues I’m talking about.


Even a company of 100 people should have a change process. I work in infra mostly and to even shut down a VM that has already been decommissioned I have to go through a change process. You can't have a dozen IT/dev cowboys just doing things they think are fine on a whim because you have to take into account what all the other teams in the company are doing. We've got 30 people in our IT dept and change processes are absolutely crucial to not fucking things up on a regular basis.


Well, after you factor in that some of these companies are probably large corps with layers of middle management, it will probably require at least 3 months of pre-meetings.


Which explains the issue with the law neatly:

1. Not pegged to inflation, so the threshold is continually moving downward.

2. All it takes is a couple of bad actor companies to blow out the threshold. If you take the companies at their word, then you will never get under this threshold. Why trust them?


Yes, Congress should probably revisit the threshold. But all thresholds are arbitrary, and for this purpose a hundred million is arguably as good as any. It’s not the end of the world if they only do that every few decades. In fact I would argue that instead of being inflation adjusted, the threshold should be adjusted so that the FTC always has to do a review on some percentage of their new rules. But simplicity has a lot going for it too.

> All it takes is a couple of bad actor companies…

Keep in mind that this threshold is not about bad actors at all. It's about the impact on the legitimate companies that are not defrauding people at all. The FTC estimated that there were over a hundred thousand companies that would be affected by these new rules, and possibly 5× that. As the judge noted, the implementation cost of these new rules would have to be less than $1000 _per company_ in order for it to be below the threshold. That's two days of an average engineer's salary, or even less if they're getting paid well instead of just average. And since the rules involve more than just adding a button to your webpage, the work involved would need to be done by more than just an engineer.

And hitting that threshold is no bad thing, since it just means that the FTC has to allow an extra public comment period with the specific purpose of coming up with alternative rules. If any of those alternative rules would be effective but cheaper to implement then the FTC is supposed to drop their own rules and adopt the alternative rules instead. That keeps the cost down for the legitimate companies while still allowing the FTC to go after the illegitimate ones that aren’t going to bother following the rules anyway.


Don't forget that the FTC's new rules were about more than just adding a button to your website. You should go read them, but be warned: the pdf is 230 pages long.


Because systematic corruption presumably?


More that they mistakenly thought that doing the right thing meant they didn't have to do the thing right.


But, if you want to make it look like you are doing the right thing but don't want to be remembered as having done that right thing, maybe this was the right thing to do given that now it won't be done.


Right, if they were screwing over customers, we'd call it disruption and give them a medal, if not $1 billion. Since they're trying to help people, we wag our fingers at them.


>they were flouting required procedures


If you are sniffing out corruption, aren’t the ones flouting required procedures likely the corrupt ones?


Almost never.

Whistleblowers are almost always revealing information that they are legally prevented from revealing, otherwise you wouldn’t need a whistleblower. A simple FOIA request would suffice.


Kinda, but corruption in my favor is unlikely to see me complain about it.


That's obviously no justification; all corruption is in someone's favour. Society functions by rules. Break those founding principles and you break everything.


What if the “required procedures” are held in place by corruption?


Who warned them?


A then-commissioner who is now the head of the FTC.


Ah so a Trump appointee and 2 judges appointed by Trump. Now that's a group I would never trust to follow the law.


That commissioner also hated the fact that consumers were going to stop being robbed by big corps.


I find it unproductive to assign emotion to such blatant corruption. I'd rather frame it as "That commissioner sees it to be in his personal best interest not to stop consumers being robbed by big corps."


“The commissioner sees it as important that his government department actually follows the rules as written down by Congress” seems like a fairer interpretation.

Congress literally wrote “You must do X when Y”, and the FTC said “Well, probably Y isn’t true anyway, so we can skip doing X”. It’s true that Y involves an estimate, since there’s no way to calculate the exact number, but their estimate was clearly cooked with the specific purpose of letting them rush.

The new rules seem like good rules, too, so it’s really a shame that they decided that it was more important to rush than to do it right. That makes the old commissioner a bungler at best.


“Tradition”, perhaps, although that word is also weighed down with the kind of baggage that is likely to trigger the same negative responses that “ritual” did here.

