The Turing test was never a test of thinking: Turing said that thinking was difficult to define and so he decided to "replace the question by another, which is closely related to it" (I disagree with him there) "and is expressed in relatively unambiguous words," i.e. the question of whether a chatbot can fool a text-only observer into thinking it's human.
Clearly, current LLMs have passed the Turing test, as witnessed by the difficulty many schools have in enforcing "do not use LLMs to do your homework" rules. But even Turing didn't say his test was a test of intelligence, just a test "closely related" to intelligence. And if he had seen today's LLMs, I think he would have revised that opinion, because today's LLMs generate text with no underlying fact model, no fundamental understanding of the truth behind the words they're saying. (No understanding, even, of the concepts of truth or falsehood.) I think today's LLMs have demonstrated that being able to string words together in coherent sentences is not "closely related" to intelligence at all.