Hacker News

I can't express how disappointed I am in the societal backlash to AI. It used to rightfully be something we looked forward to. I've been fascinated by it for as long as I've known what a computer was, from watching Cyberchase as a kid in the early 2000s, to reading the Asimov books, to making my own silly sentence-mixing chatbot with a cult following on IRC.

I never thought a computer would pass the Turing test in our lifetime (my bot did by accident sometimes, which was always amusing). I spoke to an AI professor who's been at this since the '80s, and he never thought a computer would pass the Turing test in our lifetime either. For it to happen and the reaction to be anything short of thunderous applause betrays a society bankrupt of imagination, forward thinking, and wonder.

We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it, instead of it being the coolest thing in the fucking world, which it actually is. We have chatbots that can do extraordinary feats of research and pattern-matching, but all we can do is cluck over some idiot giving himself bromide poisoning. The future is here and it's absolutely amazing, and I'm tired of pretending it isn't. I can't wait for this "AI users DNI", "This video proudly made less efficiently than it could have been because I'm afraid of AI" social zeitgeist to die off.





> instead of it being the coolest thing in the fucking world

Some people think an M-16 is the coolest thing in the world. Nobody thinks we should be handing them out to school children. The reaction is because most people think AI will compound our current problems. Look at video generation. Not only does it put a lot of people out of work, it also breaks people's ability to post a video as proof of something. Now we have to try to determine whether a very real-looking video is from life or from a neural net. That is very dangerous, and the tech firms released it without any real thought or discussion about the effect it would have. They make illegal arms dealers look thoughtful by comparison. Ignoring this (and other effects) is just childish.


Even the rationale looks childish to me. "Coolest thing in the fucking world"? Come on, we're adults here, adults who actually care about the world, not to wow the world but to keep it and pass it to our kids in the best shape possible. Or at least in a better shape, and I believe right now we're failing at it. We are failing our children by throwing them a flaming Molotov cocktail instead of a ball to play with.

I think AI capabilities are now really impressive. Nevertheless, my point is not about it being "cool" (it is), but rather about what kind of society we are going to produce with it and how it impacts people's lives.

> It used to rightfully be something we looked forward to

This is rather unimportant, but I would say that media has usually portrayed AI as a dangerous thing. 2001: A Space Odyssey, Terminator, Mass Effect, Her, Alien, The Matrix, Ex Machina, you name it.


Although it might be more broadly applied to 'computers', Asimov's stories around AI and tangents like robots often carry the underlying message that it's not so much the technology itself as how humans use it, react to it, and interpret what it's doing.

The AI in Her was dangerous rather than depressing?

AI isn't solving the problems that our society needs to solve, and it's potentially making some of them worse. If you can't understand why people feel that way by now, then you are out of touch with their struggle. Instead of being disappointed in your fellow humans, who contain the same capacity for wonder as you do, perhaps you should wonder why you are so quick to dismiss them as Luddites. BTW, you might want to read more about those guys: they didn't actually hate technology just because it was progress. They hated the intentional disruption of their hard-earned stability in service of enriching the wealthy.

>It used to rightfully be something we looked forward to

Science fiction has always been mixed. In Star Trek, the cool technology and AGI-like computer are accompanied by a post-scarcity society where fundamental needs are taken care of. There are countless other stories where technology and AI are used as tools to enrich some at the expense of others.

>We let pearl-clutching loom smashers hijack the narrative to the point where a computer making a drawing based on natural language is "slop" and you're a bad person if you like it

I don't strongly hold one opinion or the other, but I think the root of people's backlash is fundamentally that it jeopardizes their livelihood. Not in some abstract "now the beauty and humanity of art is lost" sort of way, but much more concretely: because of LLM adoption (or at least hype), they are out of a job and cannot make money, which hurts their quality of life much more than access to LLMs improves it. Then those people see the "easy money" pouring into this bubble, and it would be hard not to get demoralized. You can claim that people just need to find a different job, but that ignores the reality that over the past century the skill floor has steadily risen and the ladder has been pulled up; and perhaps even worse, trying to reach for that higher bar still results in "treading water" without any commensurate growth in earnings.


> In Star Trek the cool technology and AGI-like computer is accompanied by a post-scarcity society where fundamental needs are taken care of.

The Star Trek computer doesn't even attempt to show AGI, and Commander Data is the exception, not the rule. Star Trek has largely been anti-AGI for its entire run, for a variety of reasons: dehumanization, systems going unstable or unsafe, etc.


I think you're confusing AGI with ASI or sentience? The Enterprise's computer clearly meets the definition of AGI, in that it can do basically any task the humans require of it (limited only by data, which humans need to go out and gather). Especially consider that it also runs the holodeck.

Unlike modern LLMs, it also correctly handles uncertainty, stating when there is insufficient information. However, the writers seem to have made a deliberate effort to restrict the extent of its use for planning and command (no "long-running agentic tasks" in modern parlance), requiring human input and intervention in the loop. This is likely because, as you mentioned, there is a theme of "losing humanity when you entrust too much to the machine".


> I never thought a computer would pass the turing test in our lifetime

Are we talking about the single non-peer-reviewed study showing that a random person might only guess correctly about 1/3 of the time that a GPT-4.5 text came from a computer and not a human?

Learning to recognize the artifacts, style, and logical nonsense of an LLM is a skill. People are slowly learning it, and as they do, the Turing-test results will naturally drop, which strongly implies a major fault in how we measure Turing completeness.


1/3 is impressive if you thought it would be 0, and future models will probably improve. Also, Turing completeness is not related to the Turing test.

> We let pearl-clutching loom smashers hijack…

“let” nothing.

There is pushback, and not being able to enjoin it effectively doesn't invalidate it.

As a concrete example: Here on HN, there are always debates on what the hell people mean when they say LLMs helped them code.

I’ve seen it happen enough that I now have a boilerplate request for posters: share your level of seniority, experience, domain familiarity, language familiarity, and project result, alongside how the LLM helped.

I am a nerd through and through, and I still read copious amounts of science fiction on a weekly basis. I lack no wonder or love for tech.

To make that future, the jagged edges of AI output need to be mapped and tamed. That needs these kinds of precise conversations so that we have a shared reality to work on.

Doing that job badly is the root cause of people talking past each other. Dismissing it as doomerism is to essentially miss market and customer feedback.


Is there a single AI corporation working for the public good? “Open”AI just shed the last vestiges of its non-profitdom, and every single AI CEO sounds like a deranged cultist.

Wake me up when we have the FSM equivalent for AI. What we have now is a whole lot of corporate wank.



