
I don't think you need consciousness to reason. I don't see why the repeated application of rewrite rules to derive logical conclusions from antecedents shouldn't be considered reasoning. LLMs are perfectly able to match and apply rewrite rules, while using fuzzy concepts rather than being bound to the crisp ontologies that make symbolic reasoning impractical to scale up. And for better or worse, LLMs can also apply simplified heuristics and rules of thumb, and end up making the same mistakes that humans make.
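As a concrete (if toy) picture of what "repeated application of rewrite rules" means here, the sketch below forward-chains crisp if-then rules over a set of facts until nothing new can be derived. The rule set and names like forward_chain are purely illustrative, not anything from the comment, and the point of the comment is precisely that an LLM would match premises fuzzily rather than by exact string.

    # Minimal sketch: reasoning as repeated rule application (forward chaining).
    # Facts and rules here are illustrative toy examples.
    from typing import FrozenSet, List, Set, Tuple

    Rule = Tuple[FrozenSet[str], str]  # (premises, conclusion)

    def forward_chain(facts: Set[str], rules: List[Rule]) -> Set[str]:
        """Keep applying every rule whose premises hold until no new facts appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    rules = [
        (frozenset({"it_rains"}), "grass_is_wet"),
        (frozenset({"grass_is_wet"}), "shoes_get_muddy"),
    ]
    print(forward_chain({"it_rains"}, rules))
    # {'it_rains', 'grass_is_wet', 'shoes_get_muddy'}

A symbolic system needs the premise strings to match exactly, which is where crisp ontologies become a scaling bottleneck; the claim above is that an LLM can play the same derivation game with fuzzy concept matching instead.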

