LLMs directly exploit a human trust vuln. Our brains tend to engage with them relationally, forming an unconscious functional belief that an agent on the other end is responding with its real thoughts, even when we know better.

AI apps ought to at minimum warn us that their responses are not anyone's (or anything's) real thoughts. But the illusion is so powerful that many people would ignore the warning.
