Nah, as a person who knows in detail how LLMs work, with what is probably a unique alternative perspective on top of the commonplace one, I find any claim that they lack emergent behaviors to rest on the same fallacy as claiming that crows can't be black because they have the DNA of a bird.
It doesn't have a name, but I have repeatedly noticed arguments of the form "X cannot have Y, because <explains in detail the mechanism by which X has Y>". I wanna call it the "fallacy of reduction", maybe: the idea that because a trait can be explained by a mechanism, the trait must therefore be absent.
(I.e., in this case: "LLMs cannot think, because they just predict tokens." Yes, insofar as they think, they do so by predicting tokens. You still have to show why predicting tokens is insufficient to produce thought.)
It's much simpler than that. What's being said is "X is in B, therefore X is not in A", and that statement simply doesn't make sense unless you have a separate proof that A and B don't intersect.
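The disjointness point above can be sketched with sets. The sets and element below are hypothetical labels, purely for illustration:

```python
# Claim under scrutiny: "x is in B, therefore x is not in A."
# This inference is valid only if A and B are disjoint.
A = {1, 2, 3}   # hypothetical set A, e.g. "things that think"
B = {3, 4, 5}   # hypothetical set B, e.g. "things that predict tokens"

x = 3
assert x in B          # the premise holds: x is in B...
assert x in A          # ...yet x is also in A, so the inference fails
assert A & B != set()  # because A and B intersect
```

Without a separate proof that `A & B` is empty, membership in B tells you nothing about membership in A.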
Good catch. No such named fallacy exists. Contextually, the implied (though faulty) reasoning relies on the fallacy of denying the antecedent. The conditional - if A, then B - does NOT imply "if not A, then not B". So seeing B doesn't establish A, any more than not seeing A establishes not B. It's the difference between a necessary and a sufficient condition: A is a sufficient condition for B, but the conditional alone doesn't determine whether either A or B is a necessary condition of the other.
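The invalidity of denying the antecedent can be checked mechanically with a truth table. A minimal sketch, enumerating all assignments and searching for a counterexample:

```python
from itertools import product

def implies(a, b):
    """Material conditional: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# Does "if A then B" entail "if not A then not B" (denying the antecedent)?
# If the inference were valid, no assignment could satisfy the first
# conditional while violating the second.
counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if implies(a, b) and not implies(not a, not b)
]
print(counterexamples)  # [(False, True)]: A false, B true satisfies A->B but not ~A->~B
```

One counterexample is enough: the premise "if they think, they predict tokens" is compatible with token prediction occurring alongside thought.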