
> EVEN though the accomplishment is through the fucking roof.

I agree with this, but the output is almost entirely worthless if you can’t vet it with your own knowledge and experience, because it routinely gives you large swaths of incorrect info. You can’t really use the output unless you can find the inevitable issues yourself. If I had to put a number on it, I’d say 30% of what an LLM spits out at any given time is complete bullshit or at best irrelevant. The other 70% is very impressive, but that remaining 30% presents major issues. That’s not boredom, that’s just acknowledging the limitations.

It’s like designing an engine or power source that has incredible efficiency but doesn’t actually move or affect anything (not saying LLMs are worthless, but bear with me). It just outputs with no productive result. I can be impressed with the achievement while also acknowledging it has severe limitations.



Not all content needs to be real. A huge portion of what humans appreciate is fiction. There’s an enormous amount of that kind of content, and in those contexts hallucination is the name of the game.


> Not all content needs to be real. A huge portion of what humans appreciate is fiction.

Yes, but that’s deliberately fiction, and I know I am reading fiction. AI hallucinations are not comparable to deliberately created works of fiction. I don’t “appreciate” the fiction LLMs serve me. They are explicitly sold as a utility, as a source of accurate information. They are literally being used to replace Google search.

Put another way: when someone claims something is true but speaks fiction, it’s called a lie. If they think it’s true but are wrong, it’s called a mistake. If I ask someone what is wrong with my car, I don’t want 30% fiction. If I ask an LLM for a recipe, I don’t want 70% accuracy.



