So why is it not intelligence when I ask an LLM to solve a problem, and it proceeds to review code, search external documentation, write a solution, test it, and iterate until the problem is solved? It seems very much like what we do. It seems like the difference, to you guys, is “hard-to-define magic that we can’t explain / it just is different”.
He's not talking about intelligence, though; he's saying it has no knowledge or understanding, whereas something like a decision tree or a neural-net object-recognition model does.
It's brilliant at recapitulating the data it's trained on. It can be extremely useful. But it's still nowhere close to the capability of the human brain, not that I expect it to be.
Don't get me wrong, I think they are remarkable, but I still prefer to call it an LLM rather than AI.
Some of the things we consider prerequisites of general intelligence (which is what we usually mean when we talk about intelligence in these contexts), like creativity or actual reasoning, are not present at all in LLMs.
An LLM is a very clever implementation of autocomplete. The truly vast amount of information we've fed it provides a wealth of material to search against, the language abstraction allows for autocompleting at a semantic level, and we've added enough randomness to allow some variation in responses, but it is still autocomplete.
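For what it's worth, the "added randomness" part is essentially temperature sampling over the model's next-token distribution. A minimal sketch of the idea (the logits here are made-up numbers, not from any real model):

```python
import numpy as np

def sample_next_token(logits, temperature=1.0):
    # Temperature scales the logits before the softmax: low values make
    # the distribution peakier (closer to plain argmax autocomplete),
    # high values flatten it, producing the variation in responses.
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Hypothetical logits over a tiny 5-token vocabulary:
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
print(sample_next_token(logits, temperature=0.7))
```

At temperature 0 you'd always get the single most likely continuation; everything else is a controlled amount of noise on top of that.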
Anyone who has used an LLM enough in an uncommon domain they are very familiar with has no doubt seen evidence of the machine behind the curtain, from faulty "reasoning" where it sometimes just plays madlibs, to a complete lack of actual creativity.
> I call it a "bullshit generator" because it generates output "with indifference to the truth".
And if we follow the link we find he's referring to LLMs:
> “Bullshit generators” is a suitable term for large language models (“LLMs”) such as ChatGPT, that generate smooth-sounding verbiage that appears to assert things about the world, without understanding that verbiage semantically. This conclusion has received support from the paper titled ChatGPT is bullshit by Hicks et al. (2024).
No one thinks the database, orchestration, tooling, etc. portions of ChatGPT are intelligent, and frankly, I don't think anyone is confused by using "LLM" as shorthand not just for the trained model but also for all the support tools around it.
I wasn't thinking about their data store or other infrastructure. I was thinking about the layers added for reasoning and other functions that modify or guide the output of the LLM.
Sure, but how do I know you in particular are intelligent?
Any test you could devise for this, ChatGPT would reliably pass if the medium were text, while a good fraction of humans might actually fail. It does a pretty good job if the medium is audio.
Video and in-person interaction remain slightly out of reach for now, but I have little doubt we'll get there eventually.
> Any test you could devise for this, ChatGPT would reliably pass if the medium were text, while a good fraction of humans might actually fail.
That's clearly untrue unless you qualify "test" as "objective automated test." Otherwise, "convince Stallman you have intelligence according to his definition" is a test that ChatGPT hasn't passed and which every human probably would pass:
> I define "intelligence" as being capable of knowing or understanding, at least within some domain. ChatGPT cannot know or understand anything, so it is not intelligence. It does not know what its output means. It has no idea that words can mean anything.
And it's certainly not a boon for freedom and openness.