
I agree with your basic argument: intelligence is ill-defined and human/LLM intelligence being indistinguishable IS the basis for the power of these models.

But the point of the article is a distinct claim: personifying a model, i.e., expecting human or even human-like responses, is a bad idea. These models cannot be held responsible for their answers, because they are tools. They should be used as tools until they are powerful enough to be held legally responsible for their actions and interactions.

But we're not there yet. These are tools. With tool limitations.
