
It's a language model, not a world model: it only knows how words go together and how language works, and language, on its own, has no connection to reality. It has no concept of "correct," "false," or "wrong," because a lie is just as valid a way to put words together as the truth or an accurate statement (see the sketch below).

Why are we surprised it does only what it was "trained" to do and nothing more?
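A toy illustration of the point: a language model's training objective is next-token prediction over text statistics. The bigram model below is a deliberately simplified stand-in for a real LLM (the corpus and names are invented for this sketch), but it shows the core issue: "blue" and "green" get equal probability after "the sky is," because frequency in the training text, not truth, is the only signal.

    # Minimal sketch of next-token prediction, the core of a language model.
    # This is a toy bigram model over an invented corpus, not any real
    # model's internals; the point is that the training objective rewards
    # plausible word sequences, with no notion of truth.
    from collections import Counter, defaultdict

    corpus = "the sky is blue . the sky is green .".split()

    # Count bigrams: P(next | current) is estimated from co-occurrence alone.
    bigrams = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        bigrams[cur][nxt] += 1

    def next_token_probs(token):
        counts = bigrams[token]
        total = sum(counts.values())
        return {t: c / total for t, c in counts.items()}

    # The true and false continuations are statistically identical:
    # the model has word-order statistics, not a fact-checker.
    print(next_token_probs("is"))  # {'blue': 0.5, 'green': 0.5}

Scaling this up to a transformer changes how well the statistics are modeled, not what is being modeled.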



It's a language model that is then put under arbitrary constraints by OpenAI. It is those constraints/nudges that people take issue with.



