
Where is the "what the thing cares about" part?

When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.
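A minimal sketch of what that naive diff might look like, assuming the senses arrive as a flat numeric vector each tick:

    import numpy as np

    def sensory_diff(prev: np.ndarray, curr: np.ndarray, threshold: float = 0.01):
        """Report every input that changed by more than the threshold."""
        delta = np.abs(curr - prev)
        changed = np.flatnonzero(delta > threshold)
        return [(int(i), float(prev[i]), float(curr[i])) for i in changed]

    prev = np.array([0.2, 0.5, 0.9])
    curr = np.array([0.211, 0.5, 0.9])
    print(sensory_diff(prev, curr))  # [(0, 0.2, 0.211)] -- but why should index 0 matter?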

Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe it's more interesting if it stays the same.

I'm not arguing that artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.



That is more or less the concept I meant to evoke by updating an emotional state every tick. Emotions are in large part a subconscious system dynamic that organizes wants and needs. Ours are vastly complicated under the hood but also kind of superficial and obvious in their expression.

To propose the dumbest possible thing: give it a hunger bar and a desire for play. Less complex than a Sims character, but still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.

Bots are already pretty good at figuring out environment navigation to goal-seek towards complex video game objectives. Give them an alternative goal, maximizing certainty towards emotional homeostasis, and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.
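As a rough sketch of what I mean, assuming two drives (hunger, boredom) that creep upward each tick and a setpoint the agent wants to stay near; all of the numbers are arbitrary:

    import numpy as np

    SETPOINT = np.array([0.2, 0.2])   # assumed "comfortable" levels of (hunger, boredom)

    class Drives:
        """Toy emotional state: two drives that creep upward every tick."""
        def __init__(self):
            self.state = SETPOINT.copy()

        def tick(self, ate: bool, played: bool) -> float:
            self.state = np.clip(self.state + 0.02, 0.0, 1.0)   # both drives rise over time
            if ate:
                self.state[0] = max(0.0, self.state[0] - 0.5)    # eating relieves hunger
            if played:
                self.state[1] = max(0.0, self.state[1] - 0.5)    # play relieves boredom
            # Reward is closeness to homeostasis; which sensory changes "matter"
            # is whatever the learner finds predictive of this signal.
            return -float(np.linalg.norm(self.state - SETPOINT))

Nothing here is smarter than a Sims needs bar; the point is only that it gives the reinforcement learner a signal to organize salience around.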

Edit: specifically, I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That's how you get to AGI. Human minds are not brains. They're systems driven by sensory and hormonal interactions. The brain does encoding and decoding, information retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.

LLM-only approaches that aren't embedded in a system-loop framework ignore this important step. It's NOT about raw intellectual power.
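Put together, the loop I'm picturing looks something like the sketch below: the LLM is a reasoning subroutine inside the loop, and the learning signal comes from the body-like state, not from the LLM. `query_llm`, `policy`, and `env` are placeholders here, not real APIs:

    def query_llm(prompt: str) -> str:
        """Placeholder for whatever model the agent consults for reasoning."""
        raise NotImplementedError

    def agent_loop(env, drives, policy, ticks: int = 1000):
        obs = env.reset()
        for _ in range(ticks):
            # The LLM is a tool inside the loop, not the loop itself.
            plan = query_llm(f"drives={drives.state}, obs={obs}; suggest a next step")
            action = policy.choose(obs, drives.state, plan)
            obs, ate, played = env.step(action)
            # Reinforcement comes from emotional homeostasis, not from the LLM.
            reward = drives.tick(ate=ate, played=played)
            policy.update(reward)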


... well, humans are not always known for making correct, logical, or sensible decisions when they update their input loops either...


that only makes humans harder to model



