All you need to make your case is an intelligible definition of thought as an activity.
So far your claim doesn't get beyond the observation that when an AI produces an output, it looks like thought to you.
In the vein of Searle's arguments about the appearance of cognition, and given your premise, consider the mechanics of consulting a book with respect to the mechanics (so to speak) of solicited thought:
There's something you want to know, so you pick up a book, prompt the TOC or index, and it returns a page of stored thought. Whether the retrieved thought is appropriate and useful depends entirely on your judgment.
No one argues that books think.
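To make that mechanic concrete, here is a minimal sketch (in Python, with made-up index entries, not a claim about any particular book or model) of consultation as pure lookup: the stored "thought" is fixed in advance, and all judgment about relevance sits with the reader.

```python
# Consultation as pure retrieval: a static index mapping topics to
# stored "thought". The lookup does no judging; the reader does.
book_index = {
    "cognition": "Page 112: the author's stored reasoning about cognition.",
    "retrieval": "Page 47: the author's stored reasoning about retrieval.",
}

def consult(index: dict[str, str], query: str) -> str:
    """Return the stored page for a query, or a shrug if it isn't indexed."""
    return index.get(query, "No entry; the reader must judge where else to look.")

print(consult(book_index, "cognition"))
```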
Explain how interacting with an LLM to retrieve thought stored in its matrix is distinct from consulting a book, such that the former manifests thought and the latter does not.
If the distinction lies only in the complexity of the device's internal retrieval mechanism, then explain precisely what about the LLM's mechanism brings its functioning into the realm of thought when a book's does not.
To do that you'll first need to formulate a definition of thinking that's about more than retrieval of stored thoughts.
Or are you truly saying that your 'knowing thinking when you see it' is sufficient for a scientific discourse on the matter?