Whether people are interchangeable with each other isn't the point. The point is whether AI is interchangeable with the jobs currently done by humans. Unless and until AI training requires 1000 different domain experts, the current projection is that at some point AI will be interchangeable with all kinds of humans...
Doesn't matter. The countries with the most chaos and internal strife get a lot of practice fighting wars (civil wars). Then the winner of the civil war, who is used to grabbing resources by force and has perfected war skills through survival of the fittest, goes around looking for other countries to invade.

Historically, advanced civilizations with better production capabilities don't necessarily do better in war if they lack "practice". Sad but true. Maybe not in the 21st century, but who knows.
Yeah, none of that fever dream is real. There's no "after" a civil war; conflicts persist for decades (Iraq, Afghanistan, Syria, Myanmar, Colombia, Sudan).
Check this out - https://data.worldhappiness.report/chart. The US is increasingly a miserable place to live, and the worse it gets, the more its people double down on being shitty.
Fun fact: fit two lines to that data and you can extrapolate that by ~2030 China will be a better place to live. That's really not that far off. Set a reminder on your phone: Chinese dream.
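To make the "fit two lines" extrapolation concrete, here's a minimal sketch in Python. The score arrays are hypothetical placeholders, NOT the actual World Happiness Report figures; substitute the real series from the chart above before drawing any conclusions.

```python
import numpy as np

years = np.arange(2015, 2025)                  # survey years
us_scores = np.linspace(7.1, 6.7, len(years))  # placeholder: gently declining
cn_scores = np.linspace(5.1, 5.9, len(years))  # placeholder: gently rising

# Least-squares line (slope, intercept) for each country.
us_m, us_b = np.polyfit(years, us_scores, 1)
cn_m, cn_b = np.polyfit(years, cn_scores, 1)

# The lines cross where us_m*x + us_b == cn_m*x + cn_b.
crossover = (cn_b - us_b) / (us_m - cn_m)
print(f"Lines cross around the year {crossover:.0f}")  # ~2030 with these placeholders
```

Of course, a two-line extrapolation assumes both trends stay linear, which is the weakest part of the joke.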
I'm not seeing any horse heavens. Do you have any reason to believe humans (i.e. those not among the ruling class) are going to meet a different fate from the horses?
I agree we can kinda make the argument that abundance is soon upon us, and that humanity as a whole will embrace ideas of equality, harmony, etc. But there's still an uncanny dissociation in happily talking about horses disappearing, and humans being next, while you work on the very product that makes your prediction come true, and come true earlier...
You decide you need a web server. Ask management chain for approval. Ask IT dept for approval. Ask finance for approval for the expense. Contact Microsoft sales. Buy it.
Now you can start developing on it…
With open source it’s not just the cost of the software you save, but potentially all the other bureaucracy you avoid by not having to pay money for something. You also get a lot of transparency on the technical side about the products you might choose to use.
There is no flow per se, as it's a subjective experience. It's just that our current environment (specifically the human environment) is predominantly structured in a way that lets the past make an imprint on the present (in the form of biological memory, historical records, etc.), while the future is generally unpredictable, partly because we don't want it to be too predictable (you don't want to be too predictable when a tiger is chasing you, for example).

But in other environments (in the same universe!) the future is more predictable than it is here on Earth. The motions of planetary bodies, for example, can be predicted far in advance, within error bars, just like the past. In that kind of relatively symmetrical system, any subjective experiencer would be much less inclined to feel that time flows one way or the other. (Of course, the only subjective experiencers we know of are made of biological stuff that structurally remembers the past and leaves the future open, as an evolved ability, so this experience of timelessness may be harder for us to imagine.)
If you consider a person, their brain in particular, flowing backwards in time, the brain becomes a prediction tool. Event-memories (neural connections) appear out of nowhere (from the state of being 'forgotten' in the forward time flow), then completely disappear when the corresponding event happens, annihilating the 'predicting' memory.
The brain does want to predict (in fact, I think that's what it's supposed to do); the problem is that the Earth is rather chaotic and unpredictable.
If you subject a brain-like neural network to watching planets and stars orbit around each other, rewarding it for getting things right, it will likely do quite well.
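Here's a minimal sketch of that idea. It substitutes a plain least-squares predictor for the "brain-like neural network"; the orbital dynamics are regular enough that even a linear model learns them almost exactly, which is the point.

```python
import numpy as np

# Generate a circular orbit sampled at fixed time steps.
t = np.linspace(0, 4 * np.pi, 400)
orbit = np.column_stack([np.cos(t), np.sin(t)])   # (x, y) positions

X, Y = orbit[:-1], orbit[1:]                      # current position -> next position

# Fit the one-step transition by least squares. Here W converges to the
# 2x2 rotation matrix that advances the orbit by one time step.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the learned model forward 100 steps past the observed data and
# compare against the true continuation of the orbit.
dt = t[1] - t[0]
pred = orbit[-1]
preds = []
for _ in range(100):
    pred = pred @ W
    preds.append(pred)
true_t = np.linspace(4 * np.pi, 4 * np.pi + 100 * dt, 101)[1:]
true = np.column_stack([np.cos(true_t), np.sin(true_t)])
print("mean prediction error:", np.abs(np.array(preds) - true).mean())
```

The error comes out near machine precision, because unlike a tiger, a planet doesn't try to be unpredictable.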
The past that we have yet to subject to our inspection is also, effectively, the future.
Supplication for unknown outcomes that are surely already determined in the objective past (with respect to the present time) still makes sense. In Open Theism, the Divine Successive Relaxation, with physical laws as the substrate, and with human free will, human petition and desire, and Divine Will and Intention as boundary conditions, solidifies objective reality into a coherent whole.
Generally the law allows people to make mistakes, as long as a reasonable level of care is taken to avoid them (and you can even get away with carelessness if you don't owe a duty of care to the other party). The law on what level of care is needed to verify genAI output is probably not very well defined yet, but it definitely isn't going to be strict liability.
The emotionally driven hate for AI, even in a tech-centric forum, to the extent that so many commenters seem to be off-balance in their rational thinking, is kinda wild to me.
Computer code is highly deterministic, which allows it to be tested fairly easily. Unfortunately, code production is not the only use case for AI.
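As a toy illustration of why that determinism matters: a pure function always returns the same output for the same input, so a single assertion pins down its behavior. (slugify here is a made-up example function, not from any particular library.)

```python
def slugify(title: str) -> str:
    """Lowercase a title and replace runs of whitespace with hyphens."""
    return "-".join(title.lower().split())

def test_slugify():
    # Deterministic: this assertion either always passes or always fails.
    assert slugify("Hello World") == "hello-world"

test_slugify()
```

There's no equivalent one-line assertion for a judgment call; you can only measure statistical accuracy over many labeled cases.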
Most things in life are not as well defined; they are a matter of judgment.

AI is being applied in lots of real-world cases where judgment is required to interpret results, for example, "Does this patient have cancer?" And it is fairly easy to show that AI's judgment can be highly suspect. Poor judgment often has legal implications, i.e. medical malpractice.

Maybe you can argue that this is a misapplication of AI, and I don't necessarily disagree, but the point is: once the legal system makes this abundantly clear, the practical business case for AI will be severely reduced if humans still have to vet the results in every case.
Why do you think AI is inherently worse than humans at judging whether a patient has cancer, assuming it is given the same information as the human doctor? Is there some fundamental assumption that makes AI worse, or are you simply projecting your personal belief in (trust of) human doctors? (Note that given the speed of AI progress, and that we're talking about what the law ought to be rather than what it was in the past, AI's past performance on cancer cases doesn't have much relevance unless a fundamental issue with AI is identified.)

Note that whether a person has cancer is generally well defined, although it may not be obvious at first. If you just let the patient go untreated, you'll know the answer quite definitively within a couple of years.
You are totally correct that hallucinated citations do not invalidate the paper. The paper sans citations might be great too (I mean, the LLM could generate great stuff; it's possible).

But the author(s) of the paper are almost by definition bad scientists (or whatever field they are in). When researchers write a paper for publication, even if they're not expected to write the whole thing themselves, they should at least be responsible for checking the accuracy of its contents, and citations are part of the paper...
IMHO what should change is that we stop putting "peer reviewed" articles on a pedestal.

Even if peer review were as rigorous as code review (and the former is usually unpaid), we all know that reviewed code still has bugs, and a programmer would be nuts to go around saying "this code was reviewed by experts, so we can assume it's bug-free, right?"

But too many people just assume that peer-reviewed articles are somehow automatically correct.
I agree in principle, and I think this is mostly what's happening. But IMHO the public perception of a peer-reviewed paper as somehow "more trustworthy" is also kind of... bad.

I mean, being peer reviewed is a signal of a paper's quality, but in the hands of an expert in that domain it's not a very valuable signal, because they can just read the paper themselves and figure out whether it's legit. So instead of having "experts" explain a paper while noting whether it's peer reviewed, I think the better practice is to have said expert say "I read the paper and it's legit" or "I read the paper and it's nonsense".

IMHO the reason they note whether it's peer reviewed is that they don't know enough to make the judgement themselves, and the fallback is to trust that a couple of anonymous reviewers attested to the paper's quality. If you think of it that way, using this signal to vouch for the quality of a publication to the lay public isn't really a good idea.
If there were a law against buying up the supply of materials and letting it rot in a storehouse just to deprive competitors of it, your argument is the one OpenAI would try to make in court...
I think you're misreading GP's point, which I think is probably also your point.
The comment they replied to was apparently suggesting that everyone struggles in some areas, so it's meaningless to highlight one group's struggle over another's, given that they've all reached a certain stable career.

GP's main point (as I understand it) is: no, some people actually do struggle harder than others to make it.

I don't know why people get triggered and start comparing traumas, but let's avoid doing that here.
Yes, I understood both their points. I added another data point for a struggle that I assume is not common for HNers, and demonstrated that people can overcome such a struggle.
I certainly don't want to compare traumas, and I don't consider my family to be traumatised. I'm certain that some HNers could describe further personal experiences that they've overcome. Have we any Sudanese? People from eastern Ukraine? Yemenis?