
I have an “enabling suicidal ideation” concern for starters.

To be honest I’m kind of surprised I need to explain what this means, so my guess is you’re just baiting/being opaque, but I’ll give you the benefit of the doubt and answer your question taken at face value: there have been plenty of high-profile incidents in the news over the past year or two, as well as multiple behavioral health studies showing that we need to think critically about how these systems are deployed. If you are unable to find them I’ll locate and link them for you, but I don’t want to get bogged down in “source wars.” So please look first (search “AI psychosis” to start) and then hit me up if you really can’t find anything.

I am not against the use of LLMs, but as with social media and other technologies before it, we need to actually think about the societal implications. We make this mistake time and time again.





> To be honest I’m kind of surprised I need to explain what this means so my guess is you’re just baiting/being opaque

Search for health and safety and see how many results are about work.


You're being needlessly prescriptive with language here. I am talking about health and safety writ large. I don't appreciate the game you're playing, and it's why these discussions rarely go anywhere. It can't all be flippant retorts and needling words. I am clearly saying that we, as a society, need to be willing to discuss the possible issues with LLMs and make informed decisions about how we want this technology to exist in our lives.

If you don't care about that, so be it - just say it out loud then. But I do not feel like getting bogged down in justifying why we should even discuss it as we circle what this is really about.


All the AI companies are taking those concerns seriously, though. Every major chat service has guardrails in place that shut down sessions which appear to be violating such content restrictions.

If your concerns are things like AI psychosis, then I think it is fair to say that the tradeoffs are not yet clear enough to make that call. There are benefits and bad consequences to every new technology. Some are a net positive on balance, others are not. If we outlawed every new technology because someone, somewhere was hurt, nothing would ever be approved for general use.


> All the AI companies are taking those concerns seriously, though.

I do not feel they are, but I was also primarily talking about the AI evangelists who shout down the people asking these questions as Luddites.


That's literally what the Luddites were doing though. It's a reasonable comparison.

“Luddite” is usually used as an insult based on a misunderstanding of the Luddites. That’s the definition I’m responding to here.

I would disagree. “Luddite,” to me, is a negative and pejorative label because history has shown Ned Ludd and his followers to have been a short-sighted, self-sabotaging reactionary movement.

I think the same thing of the precautionary movements today, including the AI skeptic position you are advocating for here. The comparison is valid, and it is negative and pejorative because history is on the side of advancing technology.



