A small nitpick that doesn't take away from the rest of your comment: staying alive and fed was not necessarily a laborious activity for hunter-gatherers living in good climates [0]. It's our expansion into less hospitable environments that made it so.
> Woodburn offers this “very rough approximation” of subsistence-labor requirements: “Over the year as a whole, probably an average of less than two hours a day is spent obtaining food.”
> Reports on hunters and gatherers of the ethnological present--specifically on those in marginal environments--suggest a mean of three to five hours per adult worker per day in food production.
The "original affluent society" theory is based on several false premises and is fundamentally outdated, but people keep it alive because it fits certain Rousseauean assumptions we have. I recommend reading this:
I just read 'The Original Affluent Society' and (most of) your linked essay, and I kind of agree with you. That said, Kaplan's conclusions lead to estimates of 35-60 hours a week (excluding some, depending on the group), and that surprised me a lot. That's very different from the image I got from some other comments in this thread talking about extremely long days of constant back-breaking work. Would you agree?
Constant, backbreaking work was not a feature of hunter-gatherer societies in the way it was of early agricultural societies, yes. At the same time, they still worked hours equal to or longer than ours, at things we would likely consider quite grueling and boring (mostly food processing), and what they got out of it was a level of nutrition even they regularly considered inadequate. Moreover, as the paper covers briefly, a lot of the reason the average per-day work estimate is so low is that there were very often times, especially during the winter, when food simply wasn't accessible, or during the summer, when it was so hot it was dangerous to work. That was enforced idleness, which is not the same thing as leisure.
It's a detailed, complicated anthropological argument made by an expert, and he also makes it in a very well-written way. I could attempt to lay out the argument myself, but ultimately everyone would be better served by just... reading the primary source, because I doubt I could do it sufficient justice. I recommend you actually just do the reading. But a general TL;DR of the points made is:
- the estimates of how much time hunter-gatherers spent "working" were based on studies that either (a) watched hunter-gatherers in extremely atypical situations (no children, a tiny band, a few weeks during the most plentiful time of the year, and they were cajoled into traditional living from their usual mission-based lifestyle) or (b) didn't count all the work of processing food so it could even be cooked as time spent providing for subsistence; when those hours are included, it comes to 35-60 hours a week of work, even with times of enforced idleness pulling down the average
- the time estimates also counted enforced idleness from heat making it dangerous to work, or from lack of availability of food, or from diminishing returns, or from various "egalitarian" cultural cul de sacs, as "leisure" but at the same time...
- ... even the hunter-gatherers themselves considered their diet insufficiently nutritious and often complained of being underfed, let alone the objective metrics showing that they were
The anthropological research that came up with 2-3 hours of work per day only looked at time spent away from camp gathering, hunting, and fishing. When you account for food processing, cooking, water collection, firewood gathering, tool making, shelter maintenance, and textile production the numbers go way up.
Yes, pretty much this. If they had worked in the fields 12 hours per day as in a Victorian industrial setting, they would have perished from exposure, having no time to attend to the obligatory work around the house and to process the food and the materials used to make food. Basically, peasants worked all the time to maintain a level of "comfort" like in the article's picture: https://i0.wp.com/juliawise.net/wp-content/uploads/2025/12/S...
Also, idealization of rural life, present and past, tends to come almost exclusively from city dwellers: basically, people who never set foot in a rural area, let alone grew up or lived in one.
I grew up in rural Romania, and even though the conditions were (and are) exponentially better than what the non-industrial, non-mechanized, non-chemical (herbicides, pesticides, and fertilizers) past offered, all I thought growing up was: get the funk out of here. Agriculture (and its relative, animal husbandry) sucks and I hate it! :)
And without mechanization it's incredibly labor-intensive to tend a farm. Just to keep the animals alive over winter you have to dry and store a lot of hay, but before that you gotta scythe it. Scything is no walk in the park, and you basically gotta do a lot of it every day to cover enough area to keep the cattle fed. Then plowing without a tractor, using animals: not just dangerous but backbreaking work. Then hoeing the weeds, which you funking need to do all the time, because without herbicides the weeds grow everywhere, and by the time you've "finished" going once over all the crops, they've grown back where you first started. At some point my father had this fantasy of what is now called "organic" crops, in fact cheaping out on paying the price for herbicides, so I did so much hoeing that I was sick to death of it. I don't recall saying it, but my mother told me that at some point, in the middle of a potato-hoeing session, I said that I'd rather solve 1000 math problems than do even just one more row of potatoes. A defining moment in my career choice, which is a lot closer to solving math problems now than to hoeing organic potatoes :)
Not necessarily back, but to the right environments. As quoted above, we see the same today in isolated tribes that live off hunting and foraging. All of this also doesn't account for the lack of every other modern convenience, such as medicine, hygiene, etc. So the claim isn't that it was chill and romantic; it's about the time commitment specifically.
Without modern entertainment devices, or even books, what else are they going to do? Some "work" could have a lot of crossover with hobby. Some people enjoy cooking, making tools, spending time with kids, etc. They need to do something to pass the time. The work also serves a clear purpose: making a tool to solve a problem right in front of you feels different from performing a seemingly arbitrary task every day because a boss says so.
The Bush People, previously called the Pygmies, are modern humans who eat the diet of earlier hominids and are stunted by the caloric deficits. The only thing they plant is hemp, which doesn't scale to actual agriculture.
There is a simple solution to this problem, but it's not very popular: do the same thing Workers do, require using a separate file. All the tooling works out of the box, you have no issues with lexical scoping, etc. The only downside is it's (currently) clunky to work with, but that can be fixed with better interfaces.
UDP gives you practically no guarantees about anything. Forget exactly-once processing; UDP doesn't give you any kind of guarantee about delivery to begin with: whether delivery will happen at all, order of delivery, absence of duplicates, nothing. The two are so far from comparable that this idea makes no sense even after trying real hard to steelman it.
UDP plus an incrementing sequence number means that the client can detect loss and request a snapshot to be re-sent. This mechanism is used in financial exchanges and works amazingly well.
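For the curious, here's a minimal sketch of the idea in Python (my own illustration, not any exchange's actual wire format; the snapshot side channel is assumed):

    import struct

    SEQ_FMT = "!Q"                 # assumed header: 8-byte big-endian sequence number
    HDR = struct.calcsize(SEQ_FMT)

    def feed_handler(datagrams, request_snapshot):
        """Yield payloads in arrival order; on a sequence gap, ask for a snapshot."""
        expected = None
        for data in datagrams:
            (seq,) = struct.unpack_from(SEQ_FMT, data)
            if expected is not None and seq != expected:
                request_snapshot()  # packets lost or reordered: resync from a snapshot
            expected = seq + 1
            yield data[HDR:]

Real market-data feeds layer snapshot and retransmission channels on top of exactly this kind of numbering.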
This illustrates that the webdevs who write articles on "distributed systems" don't really understand what is already out there. These are all solved problems.
You are 100% correct. UDP can be used to solve this problem; in fact, UDP can be used to solve any (software) networking problem, because it's kind of what networking is.
The thing that webdevs want to solve is related but different, and whether the forest is missed for the trees is sometimes hard to tell.
What webdevs want to solve is data replication in a distributed system of transactions where availability is guaranteed, performance is evaluated horizontally, change is frequent and easy, barrier to entry is low, tooling is widely available, tech is heterogeneous, and the domain is complex relational objects.
Those requirements give you a different set of tradeoffs vs financial exchanges, which despite having their own enormous challenges, certainly have different goals to the above.
So does that mean this article is a good solution to the problem? I'm not sure; it's hard to tell sometimes whether all the distributed air castles invented for webdev really pay off versus just having a tightly integrated low-level solution. But regardless of the hypothetical optimum, it's hard to argue against the proposed solution being a better fit for webdev culture than UDP, which unfortunately is something very important to take into account if you want to get stuff done.
> in a distributed system of transactions where availability is guaranteed, performance is evaluated horizontally, change is frequent and easy,
Isn't that the situation inside a CPU across its multiple cores? Data is replicated (into caches) in a distributed system of transactions, because each core interacts with its own L2 cache, and data has to be sent back to main memory for consistency. Works amazingly well.
Another, even more complex system: a multi-CPU motherboard supporting NUMA access, where two CPUs coordinate their multiple cores to ship over data from the RAM attached to the other CPU. I have one of these "distributed systems" at home; works amazingly well.
Indeed, again you are right. I've gone through the same motions as you trying to understand why the webdev people make this so complicated.
For your specific question here: NUMA & cpu cores don't suffer from the P in CAP: network partitions. If one of your CPU cores randomly stops responding, your system crashes, and that's fine because it never happens. If one of your web servers stops responding, which may happen for very common reasons and so is something you should absolutely design for, your system should keep working because otherwise you cannot build a reliable system out of many disconnected components (and I do mean many).
Also note that there is no way to really check whether systems are available, only that you cannot reach them, which is significantly different.
Then we've not even reached the point that the CPU die makes communication extremely fast, whereas in a datacenter you're talking milliseconds, and if you are syncing with a different system across datacenters, or even with clients, the story becomes wildly different.
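To make the reachability point above concrete, a tiny Python sketch (host and port are placeholders): all a timeout tells you is that you failed to reach the peer in time, never why.

    import socket

    def probe(host: str, port: int, timeout: float = 1.0) -> str:
        """Try to open a TCP connection; report only what we can actually know."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "reachable"
        except socket.timeout:
            # Crashed? Overloaded? Partitioned away? Indistinguishable from here.
            return "unreachable: timed out"
        except OSError as e:
            return f"unreachable: {e}"  # refused, no route, etc. Still not proof it's down.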
I don't think I'm qualified to answer the question, and I also think it depends on terminology where maybe 'core' is the wrong thing to say, but regardless: my general point is that the assumptions that hold for CPUs don't hold for webservices, and that's where the design ethos between them splits.
I'd tell you a joke about UDP, but you might not get it.
More seriously, you are confident and very incorrect in your understanding of distributed systems. The easiest fix: you can stop being very incorrect (or at least appearing that way) by simply phrasing your statements as questions.
Personally, I recommend studying. Start with the Two Generals problem. Read Designing Data-Intensive Applications; it is a great intro to real problems and real solutions. Very smart and very experienced people think there is something to distributed systems. They might be on to something.
I'm not sure which endpoint GP meant, but as I understood it: as an example, imagine a three-way handshake that's only available to enterprise users. Instead of failing a regular user at the first step, they allow steps one and two, but then do the check at step three and fail there.
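A toy Python sketch of that pattern (all names here are hypothetical), with the entitlement check deliberately deferred to the final step:

    class User:
        def __init__(self, is_enterprise: bool):
            self.is_enterprise = is_enterprise

    def handshake(user: User) -> list:
        steps = ["step 1 ok", "step 2 ok"]  # the first two steps succeed for everyone
        if not user.is_enterprise:
            # Only here, at step three, does the plan check happen.
            raise PermissionError("enterprise plan required")
        steps.append("step 3 ok")
        return steps

    # handshake(User(is_enterprise=False)) gets through two steps, then raises.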
It tracks with my experience in software quality engineering: asked to find problems with something already working well in the field, you dutifully find bugs, etc., and then get told that it's working, though, so nobody will change anything. In dysfunctional companies, which is probably most of them, quality engineering exists to cover asses, not to actually guide development.
It is not dysfunctional to ignore unreachable "bugs". A memory leak on a missile which won't be reached because it will explode long before that amount of time has passed is not a bug.
It's a debt, though. People will forget it's there, and then at some point someone changes a counter from milliseconds to microseconds and the issue happens 1000 times sooner.
It's never right to leave structural issues even if "they don't happen under normal conditions".
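Back-of-the-envelope on that hazard, assuming an unsigned 32-bit counter:

    WRAP = 2 ** 32                     # an unsigned 32-bit counter wraps after this many ticks

    print(WRAP / 1_000 / 86_400)       # counting milliseconds: ~49.7 days until overflow
    print(WRAP / 1_000_000 / 3_600)    # counting microseconds: ~1.2 hours until overflow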
In hard real-time software, you have a performance budget; otherwise the missile fails.
It might be more maintainable to have leaks instead of elaborate destruction routines, because then you only have to consider the costs of allocations.
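A conceptual sketch of that tradeoff (Python standing in for what would really be C or Ada): allocate out of a fixed arena sized to the mission, never free, and treat exhaustion as a hard, auditable failure.

    class MissionArena:
        """Bump allocator: no per-object cleanup, one fixed budget for the whole run."""
        def __init__(self, budget_bytes: int):
            self._buf = bytearray(budget_bytes)
            self._used = 0

        def alloc(self, n: int) -> memoryview:
            if self._used + n > len(self._buf):
                raise MemoryError("mission allocation budget exceeded")
            view = memoryview(self._buf)[self._used:self._used + n]
            self._used += n            # nothing is ever freed; the arena dies with the run
            return view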
Java has a no-op garbage collector (Epsilon GC) for the same reason. If your financial application really needs good performance at any cost and you don't want to rewrite it, you can throw money at the problem to make it go away.
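For reference, enabling it looks like this (the application name and heap size are placeholders; Epsilon allocates but never reclaims, so the "money" here is the oversized heap):

    java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx64g TradingApp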
I don't think this argument makes sense. You wouldn't provision a 100GB server for a service where 1GB would do, just in case unexpected conditions come up. If the requirements change, then the setup can change; doing it "just because" is wasteful. "What if we forget" is not a valid argument for over-engineering and over-provisioning.
If a fix is relatively low cost and improves the software in a way that makes it easier to modify in the future, it makes it easier to change the requirements. In aggregate these pay off.
If a missile passes the long hurdles and hoops built into modern Defence T&E procurement it will only ever be considered out of spec once it fails.
For a good portion of platforms, they will go into service, be used for a decade or longer, and not once will the design be modified before going end-of-life and being replaced.
If you wanted to progressively iterate or improve on these platforms, then yes continual updates and investing in the eradication of tech debt is well worth the cost.
If you're strapping explosives attached to a rocket engine to your vehicle and pointing it at someone, there is merit in knowing it will behave exactly the same way it has done the past 1000 times.
Neither ethos in modifying a system is necessarily wrong, but you do have to choose which you're going with, and what the merits and drawbacks of that are.
Again, when you're building a missile nobody should "forget" a detail.
You have it very clearly in the specification: "this missile SHALL NOT have a run time before reboot of greater than 36 hours, ref. donut_count.c:423 integer counter overflows"
Seriously, there's a military standard for Pop-Tarts, and they'd get rejected if they had an out-of-spec amount of frosting on top. It is not the software world you live in.
It's not that they don't ever make mistakes; it's that an extraordinary amount of effort is put into not making mistakes, and oftentimes things are done "wrong" on purpose because of tradeoffs ordinary Silicon Valley software engineers have no context for.
The way it always seemed to go for me, when I was in that role, is that the product is already complete, development is done, you're handed whatever tests the uninterested developers care to give you, and you're told to make those tests presentable and robust and to increase test coverage. The process of doing that inevitably uncovers issues, but nobody cares, because the thing is already done and working, so what was the point of any of it? The point was just to check off a box. At companies like this, the role is bullshit work.
There are many contexts where this comment would apply, but border crossing is not one of them. If you're a foreigner trying to enter another country, then by definition you have fewer rights than natives.
The essay was a thought experiment based around the popularity of the so-called "sharing economy" at the time, not a WEF strategy document and certainly no government's policy.
Even the author of the piece said it was not a description of her vision of the future, but intended to start a discussion about technology.
But it's been picked up by wackaloons around the world as part of some overarching conspiracy theory.
It's because it's so easy to simply blame the ills of society on some illusory few pulling the strings behind the scenes. It used to be the migrants, or Black people, or the Chinese (still is, apparently), or the Japanese... and now it's the rich/shadowy figures, etc.
The actual truth is that the collective actions of everybody lead to certain outcomes - today's outcomes. It can't really have happened any other way.
Please don't dilute the argument by comparing racial groups with the ultra-rich.
The (ultra-)rich form a class in the classical Marxist sense - a group whose interests naturally align, and they work together to further their interests.
There is deliberate government policy behind what's going on with housing - free money for the rich, which they can in turn invest into speculative assets to make yet even more free money.
Then they ensure that their money has weight by putting said money into housing, pricing out common folk, and building new units to serve as price control to preserve the value of their assets.
Then it's a good thing that the World Economic Forum is not a government and does not have lawmaking powers. It's essentially a lobbying firm. I wouldn't worry too much about random slop they publish.
But you see that everywhere.
You buy the hardware, but the company may block your property totally or partially.
You buy a car but pay extra to use the heated seats? Who buys the idea that you haven't already paid for the seats? The costs are just sunk into other parts.
You buy a movie but it can be revoked.
You buy a smart TV that can have features revoked.
You can barely pay your rent, because now everyone needs to rent, because no one has money to buy except equity firms.
There is no need for "they" to be a centralized being. It is just happening. It doesn't matter who "they" are, or whether there is a "they" at all.
We live in a world where people think the homeless man is the enemy, because people are too simple-minded to understand why there are incentives to keep making it worse and worse.
They are the proponents of The Great Reset. Here’s an excerpt from a book I read:
‘As Hitler declared in 1934, “The German revolution will be concluded only when the entire German Volk has been totally created anew, reorganized and reconstructed” (cited in Koonz, 2003, p. 87). The “Great Reset,” announced by World Economic Forum (WEF) director Klaus Schwab, son of Nazi industrialist Eugen Schwab, attempts the same thing on a global scale, promising to “revamp all aspects of our societies and economies, from education to social contracts and working conditions. Every country [ . . . ] must participate, and every industry [ . . . ] must be transformed” (Schwab, 2020).’
The book is:
Wall Street, the Nazis, and the Crimes of the Deep State
I can see where you might get that sentiment, but where do you plan to go when new tech rolls around, the docs don't cut it and your LLM of choice hallucinates APIs that don't exist? This was always Stackoverflow's bread and butter, and people who only use it as noob search tend to miss that fact. SO can be a tough crowd, yes, but mostly it's people who didn't read the rules before posting who get burnt. That aside, it still has a very high concentration of experts that you'll struggle to find anywhere else.
>I can see where you might get that sentiment, but where do you plan to go when new tech rolls around, the docs don't cut it and your LLM of choice hallucinates APIs that don't exist?
Not Stackoverflow, because all my questions are either ignored or closed, even when extremely detailed and unique.
Well, my last attempt turned out to be a vendor issue. It was officially acknowledged by a vendor support rep. It had to do with their unique method of populating a whitelist with DNS entries, and a weird response chain to blocks in that state.
I read through the backlog, ensured I had completely exhausted every other avenue (short of 3 more weeks of yelling at one of the involved vendors for information about how they were performing their whitelisting), and had captures available for network and application, full reproduction steps, and details of everything that had been interrogated. I even remember linking similar issues and explaining how I ruled out their causes.
In fact, I am still relatively confident that the way mobile browsers were responding to the bug probably constitutes a bug in itself, but honestly, I can't be assed to pursue it.
When enough time has passed I will probably write a blog post about the issue, so it can be digested by the next iteration of the troubleshooting machine. But I really don't feel compelled to provide further data to the Stack Overflow community directly, considering their complete lack of response. Even closing it as a duplicate of an existing issue would have been helpful, but it wasn't a duplicate, so it was just ignored.
Unfortunately, the answer to that is the Discord server of whatever technology I'm working with. Communities are now separated, each in its own silo on Discord, far away from the public internet, where nothing can be indexed.
Yet. By the time Stack Overflow shuts down, AIs will be powerful enough to take data from docs or just from the source code alone. I mean, the new version of Opus is pretty good at understanding my frontend source code. I think that should be the goal for AIs: to be so advanced that they don't need to read code examples from a third-party website like Stack Overflow.
This is a pretty narrow view. Most APIs out there are not source-available; frontend libraries are the odd ones out in that regard. Likewise for docs: it doesn't matter whether the LLM can read the docs if the docs literally don't include the things you need to figure out. I suspect this is a generational divide: people who grew up with SO can't imagine what life was like before it, but right now we're on a straight course back to isolationist communities, if we aren't already there.
While LLMs may have used Stack Overflow data to get their start, I think it's reasonable to assume that this source of training data will no longer continue to be useful.
Therefore, as both a data source and a QA website, Stack Overflow has lost its relevance.
If an LLM can read the source of the library you're trying to use - or examples of others using the library on GitHub, or official documentation - then there is less of a need for a fellow SOer to put the pieces together to debug issues and answer questions.
Setting up a server with SSH and GitLab is more work than setting up a server with SSH. Dropbox is great and I use it but only because I can’t get the same functionality out of rsync without major additional orchestration. But if I am the only one working on my own project why would I need a second read-only UI for my own code?
If you're working alone you can also send raw IP packets down the wire by way of telegraph key if you'd like. What you do alone behind closed doors isn't really anyone's business and is up to you. For everyone else, the benefit of using Gitlab is that once it's set up, a wide range of users of varying skill levels and backgrounds can use it to collaborate.
Which is why I asked if there is any benefit to that type of setup for personal projects. Your answer so far has been the least helpful or informative.
[0] https://fifthestate.anarchistlibraries.net/library/370-fall-...