Hacker News

There are no recurrent paths besides tokens. How can I introspect something if it is not an input? I can't.


The recurrence comes from replaying tokens during autoregression.

It's as if you have a variable in a deterministic programming language, only you have to replay the entire history of the program's computation and input to get the next state of the machine (program counter + memory + registers).

Producing a token for an LLM is analogous to a tick of the clock for a CPU. It's the crank handle that drives the process.
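That crank-handle view can be sketched in a few lines. This is a toy, not a real model: `next_token` is a hypothetical stand-in for a full feed-forward pass over the whole history, and the arithmetic inside it is arbitrary. The point it illustrates is that the only state surviving between steps is the growing token sequence itself.

```python
# Toy sketch of autoregressive "recurrence": the only state that survives
# between ticks is the token sequence. `next_token` stands in for a full
# forward pass recomputed over the entire history each step.
def next_token(history):
    # Arbitrary deterministic rule standing in for the model's forward pass.
    return sum(history) % 10

def generate(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):                 # one "tick of the clock" per token
        tokens.append(next_token(tokens))  # replay whole history every tick
    return tokens

print(generate([1, 2, 3], 4))  # [1, 2, 3, 6, 2, 4, 8]
```

Like the program-replay analogy above: to advance one step, you rerun the deterministic function over everything produced so far.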


Important attention heads or layers within an LLM can be repeated, giving you an "unrolled" recursion.


An unrolled loop in a feed-forward network is just that: unrolled. The computation is a DAG.


But an unrolled recursion computes the same function as a recursive function with bounded depth, as long as the number of unrolled steps matches the depth. The point is that whatever function recursion is supposed to provide can plausibly be present in LLMs.
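The equivalence claim is easy to demonstrate with a toy step function (hypothetical, standing in for one repeated layer): unrolling a recursion to a fixed depth yields a straight-line DAG that computes exactly the same function as the bounded recursion.

```python
# One "repeated layer" of computation; the rule itself is arbitrary.
def step(x):
    return x * 2 + 1

def recursive(x, depth):
    # Bounded-depth recursion.
    if depth == 0:
        return x
    return recursive(step(x), depth - 1)

def unrolled(x):
    # The same function unrolled to a fixed depth of 3 -- a pure DAG,
    # no loop or recursion left in the control flow.
    x = step(x)
    x = step(x)
    x = step(x)
    return x

assert unrolled(5) == recursive(5, 3)  # identical when the depths match
```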


And then during the next token, all of that bounded depth is thrown away except for the token of output.

You're fixating on the pseudo-computation within a single token pass. That is very limited compared to genuine hidden-state retention, and the introspection it would enable if we already knew how to train for it and do online learning.

The "reasoning" hack would not be a realistic implementation choice if the models had hidden state and could ruminate on it without showing us output.
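The contrast can be made concrete with a toy sketch. Both generators below are hypothetical stand-ins: `forward` fakes a full pass whose intermediate activations are discarded every step (only the emitted token survives), while the second loop imagines a model that carries a private `hidden` value between steps without ever emitting it.

```python
# Toy contrast: stateless generation vs. hypothetical hidden-state carryover.
def forward(tokens):
    # Stand-in for a full pass: (intermediate activations, output token).
    acts = [t + 1 for t in tokens]      # "deep" per-step computation
    return acts, sum(acts) % 10

def generate_stateless(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        acts, tok = forward(tokens)     # acts are thrown away every step;
        tokens.append(tok)              # only the token is fed back
    return tokens

def generate_stateful(prompt, steps):
    # Hypothetical: a hidden value survives between steps without being
    # shown as output -- "ruminating" in private.
    tokens = list(prompt)
    hidden = 0
    for _ in range(steps):
        hidden = (hidden + sum(tokens)) % 97     # private state update
        tokens.append((hidden + len(tokens)) % 10)
    return tokens
```

In the stateless version, everything the model "thought" while producing a token is unrecoverable afterward except through the token itself, which is why chain-of-thought output is the workaround.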


Sure. But notice "ruminate" is different from "introspect", which was what your original comment was about.


Introspection doesn't have to be recurrent. It can happen during the generation of a single token.



