
>> One of the well-known limitations with ChatGPT is that it doesn’t tell you what the relevant sources are that it looked at to generate the text it gives you.

This isn't just a limitation, it's critically dangerous. Commercial AI means a centralized, controlled, biased LLM. At what point will someone train it to say whatever they want people to believe? How can it be trusted?

Consensus-based information is still best, and I don't feel LLMs will give us that.



This is the thing I specifically use LLMs for when I'm taking history courses. I'll remember some vague quote or event and ask for the primary sources, and the latest ChatGPT models are excellent at getting the right reference, which I can then look up and check myself. Maybe this works better for Latin and Greek texts because it's gobbled up all the Loebs out there, but it works well for me.


Consensus-based history has similar problems. It's extremely easy for the consensus to be distorted by contemporary politics.


On the contrary. The heart of an LLM is a next-word predictor based on statistics. They do much the same with concepts, making them essentially consensus-distillation devices. They are zeitgeisters. They get weird mainly when their training data is too sparse to find actual consensus, so they instead tell you to stick cheese to your pizza with glue.
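A minimal sketch of what "next-word predictor based on statistics" means in practice, using GPT-2 via the Hugging Face transformers library (the model choice and prompt are illustrative assumptions, not anything from this thread): the model assigns a probability to every candidate next token, and the high-probability candidates are, roughly, the consensus of its training data.

    # Toy next-token probe with GPT-2 (illustrative model and prompt).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits   # shape: (1, seq_len, vocab_size)

    # Distribution over the next token: the "statistics" distilled from training data.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, token_id in zip(top.values, top.indices):
        print(f"{tokenizer.decode(token_id)!r}: {p.item():.3f}")

Sampling from a sparse or conflicting distribution is where the weirdness comes from; when the data strongly agrees, the top tokens dominate.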


> They get weird mainly when their training data is too sparse to find actual consensus, so they instead tell you to stick cheese to your pizza with glue.

That's exactly not how it happened. It happened because Google's AI summaries are generated from its search results, and one of those results contained that suggestion.
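For context, a search-grounded summary is conditioned on whatever snippets the retrieval step returns, so a single joke post in the results can surface in the answer. A minimal sketch of that prompt-building step (the function name and snippets are made up for illustration; this is not Google's actual pipeline):

    # Hypothetical prompt construction for a search-grounded summarizer.
    def build_summary_prompt(query: str, retrieved_snippets: list[str]) -> str:
        context = "\n".join(f"- {s}" for s in retrieved_snippets)
        return (
            "Answer the question using only the sources below.\n"
            f"Sources:\n{context}\n\n"
            f"Question: {query}\nAnswer:"
        )

    snippets = [
        "Add about 1/8 cup of non-toxic glue to the sauce so the cheese sticks.",  # joke post
        "Let the pizza rest a few minutes so the cheese can set.",
    ]
    print(build_summary_prompt("How do I keep cheese from sliding off pizza?", snippets))
    # Whatever the retriever returns, good or bad, is what the model is told to trust.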


This is only useful if you know what data was used to train the model.



