
Considering Stallman worked in the MIT AI Lab in the era of symbolic AI, and wrote GCC (an optimizing compiler is a kind of symbolic reasoner, imo), I think he has a deeper understanding of the question than most famous people in tech.
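
A minimal sketch of what "symbolic reasoner" means here, assuming nothing beyond the Python standard library (my own illustration, not something from the thread): constant folding, one of the simplest compiler optimizations, rewrites an expression tree purely by reasoning over its symbols.

    # Hypothetical toy constant folder: expressions are (op, left, right) tuples,
    # leaves are numbers or variable names. Fold bottom-up, collapsing any
    # subtree whose operands are already known constants.
    def fold(expr):
        if not isinstance(expr, tuple):
            return expr                               # leaf: constant or variable
        op, left, right = expr
        left, right = fold(left), fold(right)         # reason about children first
        if isinstance(left, (int, float)) and isinstance(right, (int, float)):
            return {"+": left + right, "*": left * right}[op]
        return (op, left, right)                      # can't fold: keep symbolic form

    # (x * (2 * 3)) + 4 simplifies to ('+', ('*', 'x', 6), 4)
    print(fold(("+", ("*", "x", ("*", 2, 3)), 4)))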




Symbolic AI (GOFAI) and neural networks are very different techniques to solve the same problems. An analogy is someone who specializes in Photoshop making broad claims about painting (or vice versa) because they both create art.

I have not claimed the techniques are similar - going by your example, there's a large set of overlapping skills for both Photoshop and using a paintbrush (color theory, anatomy, perspective, etc.), and from the PoV of the end user, the merits of the piece are what's important - not the process.

I'm sure that while the AI lab folks didn't have the techniques and compute to do ML like we do now, they thought a lot about the definition of intelligence and what it takes to achieve it, from narrow task-specific AIs all the way to truly general intelligence.

ML/RL allowed us to create systems that, given a train/test set, can learn underlying patterns and discover connections in the data without a developer having to program them explicitly.
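
A minimal sketch of that idea, assuming scikit-learn and its bundled digits dataset (my own illustration, not part of the original comment): nobody writes rules for recognizing digits; the model picks up the patterns from the training split, and the held-out test split checks that they generalize.

    # Illustrative only: learn patterns from a train set, check them on a test set.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)

    # Hold out a quarter of the data so we can see whether the learned
    # patterns generalize beyond the examples the model was fit on.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # No hand-written rules for "what a 7 looks like"; the model infers them.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)

    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))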

Transformers/LLMs are a scaled-up version of the same approach (I don't want to get into the weeds, as it's beside the point).

Stallman asserts LLMs fall short of general intelligence, and I think he has a much better understanding of what that entails than most people give him credit for. Considering his AI past, I'd be surprised if he hadn't kept up with the advancements and techniques in the field at least to some degree (or if understanding said techniques were beyond his ability).

Because of this, I'm running with the theory that he knows what he's talking about even if he doesn't elaborate on it here.


No, modern approaches have zero overlap with Chomsky's deterministic methodology.

What does Chomsky’s work have to do with AI?

My understanding is prior work in NLP and symbolic AI was strongly influenced by Chomsky's philosophies, e.g. https://chomsky.info/20121101/ and https://norvig.com/chomsky.html

Computational linguistics was the subfield of computer science that all the smart kids gravitated towards in the 80s and 90s (RMS came up in CSAIL during this time). While proving various results about formalisms and contrived grammars wasn't bringing in venture capital, most of the guys who would eventually bring linear algebra and stochastic methods to the field would at least have had to familiarize themselves with Chomsky's work.

I think your comment actually mostly makes sense, except the part about the neural network guys needing to familiarize themselves with Chomsky, which is not the case at all.

Ergo, my initial claim that "modern approaches have zero overlap with Chomsky's deterministic methodology." Statistical token prediction began with the Dragon folks, the CMU guys, and Yorktown Heights, many of whom encountered Chomsky formalism as undergrads.

Yeah, your parent comment appears to be just nonsensical name-dropping. That happens a lot here. A different type of annoying comment is the "X is just Y" comment, like "all of AI is just curve fitting", which the commenter wants readers to think is some kind of profound insight.

[flagged]


> (((leadership)))

Sorry, what do these triple parentheses mean?



Oh. Gross

It has nothing to do with Jews though (triple paren).

No, he's missed the mark here. The retreat to imprecise non-statements like "ChatGPT cannot know or understand anything, so it is not intelligence" is a flag that you're reading ideology and not analysis. I mean, it's true as far as it goes. But it's no more provable than it is for you or me; it's just a dance around the semantics of the words "understand" and "know".

In particular, the quip that it's really just a "bullshit generator" is 100% correct. But it's also true for, y'know, all the intelligent humans on the planet.

At the end of the day, AI gets stuff wrong, as far as we can tell, for basically the same reasons that we get stuff wrong. We both infer from intuition to make our statements about life, and bolt "reasoning" and "logic" on as after-the-fact optimizations that need to be trained as skills.

(I'm a lot more sympathetic to the free software angle, btw. The fact that all these models live and grow only within extremely-well-funded private enclosures is for sure going to have some very bad externalities as the technology matures.)


Richard Stallman, pushing pure ideology ... I'm shocked ... truly shocked I tell ya.

I'd argue that his point is worse than this.

He's not comparing them to humans; he attributes knowledge/understanding (enough to pass his bar for "is AI") to YOLOv5 models, XGBoost-trained trees, and, as far as I can tell, closed-source transformer-based models too. But not to ChatGPT.


> But also true for, y'know, all the intelligent humans on the planet.

That's not true.

> At the end of the day AI gets stuff wrong, as far as we can tell, for basically the same reasons that we get stuff wrong.

Also not true.

> We both infer from intuition [...]

Also not true.


Can you prove those aren't true?

I don't need to. Those are well-known established facts.

If you need more information, you can search them or ask an LLM.

Here you go:

https://chatgpt.com/share/69384dce-c688-8000-97f7-23d0628cd5...


Just as people once hated being told that Earth isn't the center of the universe, so goes the ego when it comes down to discovering the origin of our thoughts.

People still hate being told they're not actually the new Copernicus, no matter how much they think they are.

That never stopped any know-it-all from dunning-krugering.

Or he’s just shaking his fist at the clouds again.

why not both?


