jasonpeacock's comments | Hacker News

Growing up (born in late 70s), all I heard was “OMG OVER POPULATION” and how the planet can’t support the projected N billion people who will be living on it.

Now that the birth rate is actually slowing down to correct itself and we're not all breeding like rabbits, that's a bad thing?

This feels like a capitalist concern, “we won’t have enough workers to produce goods and then consume them!”


The system at large hasn't been great at forward planning so the whole pyramid shape might collapse.

Elderly care is basically going to wipe generational savings from the 20th century off the map, and all that wealth will be reallocated to private equity.


Is AI going to take all the jobs, or isn't it?

> Elderly care is basically going to wipe generational savings from the 20th century off the map

Probably for the best.

Currently most of that wealth is being hoarded by the top 0.1%, at the expense of 8 billion people having to deal with global warming for the foreseeable future (i.e., centuries).

If that's the best humanity can do with wealth, then burn it all down. As long as we keep some advances from medicine (vaccines, dentistry) and technology which aren't as energy intensive, it should all work itself out in the end.


I mean, elder care is unlikely to wipe out billionaires as much as low-single-digit millionaires.

What's going to wipe out billionaires is lack of a highly-educated workforce, because no one is having babies.

And no, you can't completely solve this by immigration (because the demographic crisis is global).

They might still stay billionaires in absolute terms, but a lot of their wealth will be wiped out as companies struggle to sell their goods to a population with reduced purchasing power (since we're too busy taking care of elderly folks)


Overpopulation is still a concern when considering biodiversity, groundwater loss, etc.

The latest UNEP report includes it - see page 37 from https://www.unep.org/resources/global-environment-outlook-7 -> https://wedocs.unep.org/rest/api/core/bitstreams/902187bf-ea...

"Among the major global environmental crises – climate change, biodiversity loss and land degradation, and pollution and waste – population growth is most evidently a key factor in biodiversity decline. This is largely due to increased demand for food production, which leads to agricultural expansion and land degradation (Cafaro, Hansson and Götmark 2022). As the population grows and consumption rises, fewer resources and less habitat are available for non-human species (Crist 2019). Overpopulation occurs when the total human population multiplied by per capita consumption surpasses the capacity of sustainable ecosystems and resources. Although the global human population continues to grow, per capita consumption is increasing at a faster rate. To the extent that people are disrupting natural habitats and degrading ecosystem services for future generations, despite regional heterogeneity, some research suggests that most of the world’s nations may be considered overpopulated (Lianos and Pseiridis 2016; Tucker 2019)"

Specifically going back to the 70s overpopulation concerns, things shifted with the Green Revolution / Norman Borlaug, but it came at the cost of reduced groundwater supply and reduced agricultural diversity. See 'The Globalization of Wheat' and https://climatewaterproject.substack.com/p/groundwater-and-c...


I see slowing birth rates as a net positive.

People in these comments are considering enslaving women, The Handmaid's Tale-style, before even asking if it's a problem.


It's possible to have both overpopulation (too large a population for a given metric like water, energy, pollution, etc.) and demographic collapse (too many old people, not enough young workers). It's not intuitive, but they are separate phenomena.

The reaction to overpopulation concerns probably discouraged people from having kids but it's unlikely to be the main cause.


Without enough children, who will be taking care of you when you are older?

All of society and industrial functions require young people.


Less consumer demand means fewer jobs. When people can't find good, well-paying jobs, they become pretty unhappy, and they won't be magically enlightened and lifted out of misery by being told it's the capitalist wheel turning.

Capitalist concern is human concern.


One is a problem of humanity, the other of capitalism.

Capitalism needs constant growth


Not even capitalism. Every economic system has pensions and healthcare costs rising with age, coupled with decreased productivity.

You mention the decreased productivity of the elderly, but as a society we are getting more and more productive. We create more and more billionaires.

Productivity isn’t our problem, distribution is.


I see these as separate issues.

On a macro scale you want to see country-wide economic statistics go up, regardless of who the money gets to in the end. When your population's age isn't evenly distributed, it causes spikes in productivity and in costs associated with the elderly, which makes the metrics go down. Combined with short-term politics that are not incentivized to prepare for it, but rather to play hot potato with it, it makes for interesting situations. If, in the worst case, the country is functioning paycheck to paycheck, you have every member of the workforce supporting multiple elderly people and children via taxes, since their taxes were already spent on X or stolen long ago during the productivity boom.


Capitalism needs private property and free markets. Everything beyond that is cultural

Capitalism does not need and has never had free markets, though some arguments for capitalism being ideal rest on the assumption of free markets, along with a stack of other idealized assumptions, like human behavior conforming to rational choice theory.

Then explain why capitalism made China its workbench.

China encourages exports and has no recent history of confiscating property owned by foreigners. Combined with cheap labor, this makes it a great place to set up sweatshops. If you are selling a good that can be made in a low-cost-labor area but you use high-cost labor, you will be outcompeted and the market won't buy your expensive products, so over time all the successful firms make their low-skill products in sweatshop zones.

OTECs are amazing, and step 1 of "The Millennial Project: Colonizing the galaxy in eight easy steps"[0]

[0]: https://en.wikipedia.org/wiki/The_Millennial_Project

There's a shore-based research OTEC in Hawaii, but the best is a floating, closed-loop OTEC in the ocean.


Interesting link. I would think step 7 would come before step 6 though. I thought about this for a few minutes and can't come up with a reason otherwise.

The timelines are increasing powers of 2. It’ll take much longer to colonize all asteroids than to settle Mars.

The wiki article states "Up to 10,000 TWh/yr of power could be generated from OTEC without affecting the ocean's thermal structure", which converts to about 500 GW, which... isn't that much.

10,000 TWh/yr is one third of the current total electric energy generation of the whole planet, which is not a small amount.

Source, page 39 of the full report:

https://www.iea.org/reports/global-energy-review-2025/electr...


This can't be correct.

10,000 TWh/y = 1e+7 GWh/y, divide it by 365.25 days/y to produce daily output of 27,379 GWh/day, then by 24 h/day to get pure power of 1,141 GW. It's still more than a terawatt, three orders of magnitude larger than the largest nuclear reactors.
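
For anyone who wants to re-run the arithmetic, a throwaway C snippet along these lines gives the same number (this is only the unit conversion, nothing from the report itself):

    #include <stdio.h>

    int main(void) {
        double twh_per_year = 10000.0;                /* the 10,000 TWh/yr figure            */
        double gwh_per_year = twh_per_year * 1000.0;  /* 1 TWh = 1,000 GWh -> 1e7 GWh/yr     */
        double gwh_per_day  = gwh_per_year / 365.25;  /* ~27,379 GWh/day                     */
        double avg_gw       = gwh_per_day / 24.0;     /* ~1,141 GW of continuous average power */
        printf("%.0f GW\n", avg_gw);
        return 0;
    }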


oops. yes. still not that much though. i mean it's a lot but it's "one more large industrialized country" a lot not "kardashev 2" a lot

Those goalposts of yours are on an FTL ship...

Kardashev 2 has a Dyson sphere. Of course anything on a single planet can never have that much.

This is neat, but it's not zettelkasten - it's building a browse-able knowledge DB from content.

Zettelkasten is about writing down your ideas in response to content, with a link to that content, and then linking to other ideas that you already logged. It's not an extraction of ideas from that content. This is a common misunderstanding of zettelkasten.


> rampant monkeypatching that made the standard library hard to count on

That was very frustrating when doing regular Ruby development after using Rails: all the "built-ins" were actually patched into the stdlib by Rails and not available without it.


> What I want is a dispassionate discussion of how different language features impact code quality

This can be difficult because code quality, productivity, and safety are hard to objectively define and measure, so we always fall back on differences in interpretation and experience.


I would be interested in serious attempts to argue for this, even if they can't reasonably be backed up by data.

For example, I think there's a pretty strong argument that immutability makes it easier to write multithreaded code (perhaps at some performance cost), because it entirely prevents common types of bugs.

Similarly there's a good argument that making methods open to extension (like Kotlin or Julia) makes it easier for an ecosystem to adopt unified APIs without explicit coordination.

There's obviously a very strong argument that Garbage Collection prevents a lot of memory safety bugs, at costs to interoperability and performance.
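
To make the immutability argument concrete: data that is fully initialized before any threads start and never written afterwards can be shared across threads without locks, because there are no writes to race with. A rough C/pthreads sketch (an illustration only, not a claim about any particular language's design):

    #include <pthread.h>
    #include <stdio.h>

    /* Effectively immutable: written once here, only read afterwards. */
    typedef struct { double rate; int year; } config_t;
    static const config_t cfg = { .rate = 0.05, .year = 2025 };

    static void *worker(void *arg) {
        (void)arg;
        /* No mutex needed: concurrent reads of never-modified data cannot race. */
        printf("rate=%.2f year=%d\n", cfg.rate, cfg.year);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }

A language that enforces immutability just makes it impossible to forget the "never written afterwards" part.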


Perforce’s binary support is basically equivalent to Git LFS; it does the same thing.

What does Perforce binary support have that Git LFS doesn’t?

AFAIK, the base issue is that Perforce is already in use and it has enterprise support.


I am the last person to ever promote Perforce, but as of last year-ish it has the option for binary delta transfer using FastCDC.

Even without that, it is a just straight up a lot faster than git lfs. I know this because I benchmark it against git pretty frequently as I am creating my own large file capable VCS.


What do you mean by this? It's hardly equivalent to LFS. The binary files aren't replaced with a text pointer with actual content stored on a server elsewhere. Binary files are stored in the same place as text files.


From the user's perspective, when set up correctly, Git LFS is transparent and they don't see the text pointers - the binary files are replaced on push and pull to the server.

It's the same user experience as Perforce?

Yes, Git is more low-level and it's possible to see those text pointers if you want to.


This is what you want to believe but it's not true.

I’m really sorry, git lfs is an ugly hack, and it’s always painful when you discover that some gamedev team has been forced into it by “better knowing” software developers.

It reminds me a lot of “features” of software that are clearly a box-ticking exercise. Like, technically MS Teams has a whiteboard feature, yet it lacks any depth: it’s not persistent so it’s gone after the call, and it’s clunky to use and to save.

… but technically the feature exists, so it’s harder to argue for better software that’s fit for purpose, like Miro or Mural.


Not a belief, but my experience. Maybe I've had a blessed experience with LFS? It's always "just worked" for me.


But I’d make a guess that the majority of the files you’re working on are text based.

If the primary filetype you use is binary, you’ll start to feel the jank.


1. Perforce checkout requests always do the round trip to the server.

2. Artists can actually understand Perforce.


> Perforce checkout requests always do the round trip to the server

That's literally the antithesis of Git. If that's a requirement, then yeah - Git's the wrong thing.

It's like complaining that bicycles don't have motors like motorcycles. If it had a motor, it wouldn't be a bicycle.


The question as I recall was what Perforce does that Git LFS doesn't, so I'm sorry to disappoint but my hands were tied.

Anyway, I dunno, man. If you want binary files to work, some form of per-file mutex is indeed a requirement. And for this to work well, without being a lot of hassle (and regarding that, see point 2, which I note has been accepted without comment - not that I expected anything else, the argument that Git is the artist-friendly choice would be a difficult one to make), any modification of the mutex's state has to involve a round trip to ensure the info is up to date. You can't rely on something local, that only gets updated sometimes, because then the info can be out of date! Worst case, N people find out too late that they've all been making changes simultaneously, and now N-1 of them will almost certainly lose work.

(You might be inclined to moan at people for not going through the full process, but: we have computers now! They can do the full process for us!)


"If productivity can be measured by throughput then it shouldn't be done by humans."

I forget the author, or the exact quote, but basically this. Brainless jobs should be automated, nobody should be an automaton.

This doesn't mean we give up on craftsmanship, but mass production and busy work should be eliminated from human roles.


I agree but the problem is there aren't enough other jobs/ways for people to survive in our current system.


There's also the simpler/smaller git-worktree-switcher (`wt`):

https://github.com/yankeexe/git-worktree-switcher

It does what it says on the tin.


I mean, the same thing would happen if Bash stopped writing to `~/.bash_history` and its last item was `rm`, right?


when was the last time this happened to you?


> produce memory safe software with a bit of discipline

"a bit of discipline" is doing a lot of work here.

"Just don't write (memory) bugs!" hasn't produced (memory) safe C, and they've been trying for 50yrs. The best practices have been to bolt on analyzers and strict "best practice" standards to enforce what should be part of the language.

You're either writing in Rust, or you're writing in something else + using extra tools to try and achieve the same result as Rust.


Like Rust, Zig has type-safe enums/sum types. That alone eliminates a lot of problems with C. Plus sane error handling with good defaults that are better than Rust's also contributes to code with fewer bugs.

Sure, there is no borrow checker, but a lot of memory-safety issues in C and C++ come from the lack of good containers with sane interfaces (std::* in C++ is just bad from a memory safety point of view).

If C++ had gained proper sum types, error handling, and templates in Zig's style 15 years ago, instead of the insanity that is in modern C++, Rust might not exist or would be much more niche at this point.


> If C++ gained the proper sum types

AFAIK "P2688 R5 Pattern Matching: match Expression" exists and is due C++29 (what actually matters is when it's accepted and implemented by compilers anyway)

Also, cheap bounds checks (in Rust) are contingent on Rust's aliasing model.


Buffer overruns are the most common memory-related RCEs. So bounds-checking arrays/strings BY DEFAULT is needed.


I actively dislike Zig's memory safety story, but this isn't a real argument until you can start showing real vulnerabilities --- not models --- that exploit the gap in rigor between the two languages. Both Zig and Rust are a step function in safety past C; it is not a given that Rust is that from Zig, or that that next step matters in practice the way the one from C does.


I like Zig, although the Bun GitHub tracker is full of segfaults in Zig that are presumably quite exploitable. Unclear what to draw from this, though.

[1]: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...


Wasn't Bun the project where the creator once tweeted something along the lines of "if you're not willing to work 50+ hours a week don't bother applying to my team"? Because if so then I'm not surprised and also don't think Zig is really to blame for that.


Not clear to me there's a correlation between hours worked and number of memory safety vulnerabilities


I think the implication is something like "overwork / fraying morale from long hours means shipping more bugs".


The point of memory-safe languages is to foreclose on a set of particularly nasty bugs, regardless of how frayed engineer morale is.


I'm pretty sure that in an overworked environment the engineers would reach for Rust's unsafe mode pretty quickly because they're too tired to make sense of the borrow checker.


I'm no expert, but I've been hacking in Rust for several years now, and the only unsafe I've written was required as part of building a safe interface over some hardware peripherals. Exactly as intended.

The borrow checker is something new Rust devs struggle with for a couple months, as they learn, then the rules are internalized and the code gets written just like any other language. I think new devs only struggle with the borrow checker because everyone has internalized the C memory model for the last 50 years. In another 50, everyone will be unlearning Rust for whatever replaces it.


Web browsers and operating systems are full of memory safety bugs, and are not written by engineers in crunch these days.


>I actively dislike Zig's memory safety story

Why? Interested to know.

Just for background, I have not tried out either Zig or Rust yet, although I have been reading about both of them with interest for a while now, on HN and other places, and also in videos, and have read some of the overview and docs of both. But I have a long background in C dev earlier. And I have been checking out C-like languages for a while, such as Odin, Hare, C3, etc.


Modula-2 was already a step function in safety past C, but people did not care because it wasn't given away alongside UNIX.


> "Just don't write (memory) bugs!" hasn't produced (memory) safe C

Yes it did, of course. Maybe it takes years of practice, the assistance of tools (there are many, most very good), but it's always been possible to write memory safe large C programs.

Sure, it's easier to write a robust program in almost every other language. But to state that nobody ever produced a memory safe C program is just wrong. Maybe it was just rhetoric for you, but I'm afraid some may read that and think it's a well established fact.


>Yes it did, of course. Maybe it takes years of practice, the assistance of tools (there are many, most very good), but it's always been possible to write memory safe large C programs.

Can you provide examples for it? Because it honestly doesn't seem like it has ever been done.


I don't understand where you stand. Surely, you don't mean that all C programs have memory bugs. But on my side, I'm not claiming that discipline makes C a memory safe language either. This discussion has taken a weird turn.


> you don't mean that all C programs have memory bugs

Well all of them "potentially" do, which is enough from a security standpoint

There have been enough zero-days exploiting memory-safety bugs that we know the percentage is also non-trivial.

So yes, if programmers can write bugs, they will. Google SREs were the first to famously measure bugs per release as a metric instead of the old-fashioned (and naive) "we aren't gonna write any more bugs".


postfix

sqlite

billions of installations and relatively few incidents


Few incidents != memory safe

Few incidents != not badly exploitable

Few incidents != no more undiscovered safety bugs/issues

I don't think your examples quite cut it.


[flagged]


Haven't written C in a while, but I think this program has an integer overflow error when you input 2 really large integers such that the sum is more than a 32-bit signed integer can hold.

Also, I believe entering null values will lead to undefined behaviour.


Memory safe doesn't mean protection from integer overflow unless you use that integer to index into some array.

I'm not sure how you'd enter NULL given scanf.


I'm not sure how showing that the GP can't even write a dozen lines of memory-safe C proves that doing so for the exponentially harder 100k+ LoC projects is feasible.

The program contains potential use-of-uninitialized-memory UB, because the scanf error return is not checked and num1 and num2 are not default initialized. And a + b can invoke signed integer overflow UB. A program with more than zero UB cannot be considered memory safe.

For example, if the program runs in a context where stdin can't be read, scanf will return error codes and leave the memory uninitialized.
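
Since the original snippet is flagged, here is only a hypothetical reconstruction of what such a dozen-line "read two ints and add them" program looks like with those particular issues addressed (a checked scanf return, initialized variables, and an overflow guard):

    #include <limits.h>
    #include <stdio.h>

    int main(void) {
        int num1 = 0, num2 = 0;        /* initialized, so no read of indeterminate values */

        if (scanf("%d %d", &num1, &num2) != 2) {   /* check scanf's return value */
            fprintf(stderr, "expected two integers\n");
            return 1;
        }

        /* guard against signed integer overflow before adding */
        if ((num2 > 0 && num1 > INT_MAX - num2) ||
            (num2 < 0 && num1 < INT_MIN - num2)) {
            fprintf(stderr, "sum would overflow\n");
            return 1;
        }

        printf("%d\n", num1 + num2);
        return 0;
    }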


> num1 and num2 are not default initialized

num1 and num2 are declared on the stack and not the heap. The lifetimes of the variables are scoped to the function and so they are initialized. Their actual values are implementation-specific ("undefined behavior") but there is no uninitialized memory.

> And a + b can invoke signed integer overflow UB. A program with more than zero UB cannot be considered memory safe.

No, memory safety is not undefined behavior. In fact Rust also silently allows signed integer overflow.

Remember, the reason memory safety is important is that memory unsafety allows for untrusted code execution. Importantly here, even if you ignore scanf errors and integer overflow, this program accesses no memory that is not stack local. Now if one of these variables were cast into a pointer and used to index into a non-bounds-checked array, then yes, that would be memory unsafety. But the bigger code smell there is to cast an index into a pointer without doing any bounds checking.

That's sort of what storing indexes separately from references in a lot of Rust structures is doing inadvertently. It's validating accesses into a structure.


Regarding initialization, if one wants portable code that works for more than one machine+compiler version, it's advisable to program against the abstract machine specified in the C++ standard. This abstract machine does not contain a stack or a heap.

Generally your comment strikes me as assuming that UB is some kind of error. In practice UB is more a promise the programmer made to never do certain things, allowing the compiler to assume that these things never happen.

How UB manifests is undefined. A program that has more than zero UB cannot be assumed to be memory safe, because we can't make any general assumptions about its behavior. UB is not specified to be localized; it can manifest in any way, rendering all assumptions about the program moot. In practice, when focusing on specific compilers and machines, we can make reasonable localized assumptions, but these are always subject to change with every new compiler version.

Memory safety is certainly critical when it comes to exploits, but even in a setting without adversaries it's absolutely crucial for reliability and portability.

> In fact Rust also silently allows signed integer overflow.

Silently for release builds, and a panic in debug builds. The behavior is implementation defined and not undefined; in practice this is a subtle but crucial difference.

Take this example (https://cpp.godbolt.org/z/58hnsM3Ge): the only kind of UB, AFAICT, is signed integer overflow, and yet we get an out-of-bounds access. If instead the behavior were implementation defined, the check for overflow would not have been elided.


har har... have my upvote!


I wasn't trying to be a dick; I am saying that my experience is that no big C program is ever safe. You replied that it is possible, and I asked for an example. Providing a small script to prove that big C programs are safe isn't enough.


Making a broad statement like there has never been a memory safe C program is a bit of a dickish thing to say.

especially when you phrase it as

> Can you provide examples for it? Because it honestly doesn't seem like it has ever been done.

it comes off as pedantic and arrogant.

It obviously is possible to write memory safe software in C and obviously it has been done before otherwise we would not be currently communicating over the goddamn internet.

Asking for evidence of something this obvious is akin to asking for a source on if water is in fact wet.


I think pretty much any non trivial C example has memory safety issues. It doesn't mean that they aren't useful and can't be used. But time and time again we have seen security reports that point to memory issues. So no, I don't think I'm asking for something obvious, quite the contrary. I think the claim that it's possible to write big C programs that are memory safe is really strong and I heavily disagree with it.


It's not dickish, and it's weird you seem to feel attacked/offended by that. It is a realistic conclusion, that we have come to over the course of decades of C usage. One could call it wisdom or collective learning.


>> Can you provide examples for it? Because it honestly doesn't seem like it has ever been done.

>it comes off as pedantic and arrogant.

Interesting the way this was perceived. I thought he was just asking an honest question.

Again, it shows online discussion and communication is hard.

