
I don't think Gary Marcus has had anything of value to say about AI at any point in, what, the past two decades? You could replace him with a sign that says "the current AI approach is DOOMED" at no loss of function.

Symbolic AI has died a miserable death, and he never recovered from it.



Ok, but please don't cross into personal attack and please don't post unsubstantive comments to HN. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.


That sign would be more valuable than the one Sam Altman’s been holding saying “AGI in 2025”


Sam Altman never said that, except as a joke. An interviewer asked “what are you excited for in 2025” and he said “AGI”, then said “having a kid” as his real answer.

It’s also worth noting that even when he does talk about AGI, he makes a strong distinction between AGI (human level) and ASI (super human level) intelligence. Many people in these kind of discussions seem to conflate those two as being the same.


He didn’t say it was a joke and has benefited to the tune of hundreds of billions of dollars from the prevailing belief that AGI is imminent, so it seems terribly convenient and charitable to interpret it as a joke. Should we have given the same charity to Elizabeth Holmes and Sam Bankman-Fried when they reported their technological capabilities and cash balances? “Oh it’s not fraud that they materially benefited from, it’s just a joke.”


Did he say that?


https://youtu.be/xXCBz_8hM9w?si=KjaolnjTJd2Lz82k

46:12

> What are you excited about in 2025? What's to come?

> AGI. Excited for that.


I took that as he's excited to be working towards AGI. The one word answer is fairly ambiguous. For context in the same interview he says:

>... from here to building an AGI will still take a huge amount of work there are some known unknowns but I think we basically know what to go what to go do and it'll take a while, it'll be hard but that's tremendously exciting... (38:59)

which to me doesn't sound like we'll be done within a year.


That's a more realistic defense of the remark than "it was a joke". A CEO ought to be a little more circumspect when talking about the future performance of the $150bn company he runs, though.

But we live in the era of Elon Musk, so what the hell. I'm going to turn my belly button fluff into infinite gold next year. Can I have a few billion?


Sure, I'll see what I can do. Though Musk and Altman only really get away with it because they had past successes: zero to market leader in EVs/rockets for Musk, and 100 million users in two months for Altman.


That was a joke, how do people not understand that?


Because it wasn't. It was said firmly, with a straight face in response to a simple, serious question, after an interview in which he claimed OpenAI know exactly how to get to AGI and it's just a matter of execution.

Of all the possible defenses of this remark, I didn't expect that one.


If you know how sarcasm works, that’s exactly what it looks like. Afterwards he immediately pondered the question and then gave his real answer (about looking forward to having a kid).

Besides all that, think about it: if this wasn’t a joke, then why has he never said the same thing again?


He has, many times. And when he doesn’t say it directly he insinuates it.

> OpenAI CEO Sam Altman wrote in January: “We are now confident we know how to build AGI.” This is after he told a Y Combinator vodcast in late 2024 that AGI might be achieved in 2025 and tweeted in 2024 that OpenAI had “AGI achieved internally.”

https://fortune.com/2025/08/25/tech-agi-hype-vibe-shift-supe...

Oh it’s all just jokes though, jokes that Sam has profited enormously from.


Calm down. In some cultures (American), sarcasm must be clearly telegraphed; in others (British), that would ruin the joke.


Symbolic AI didn’t die, though. It was merged with deep learning, either as a complementary component from the get-go (e.g. AlphaGo, which uses symbolic search to feed a deep neural network) or, more recently, as a post-processing / intervention technique for guiding and optimizing LLM outputs; human-in-the-loop and MoR are very much symbolic AI techniques.


Exactly this, well said. Symbolic AI works so well that we don’t really think of it as AI anymore!

I know I for one was shocked to take my first AI course in undergrad and discover that it was mostly graph search algorithms… To say the least, those are still helpful in systems built around LLMs.
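To make that concrete, here's roughly the kind of thing I mean (a toy sketch of my own, not from any real course or codebase): plain best-first/A* search, which still turns up when you search over chains of tool calls or candidate plans around an LLM.

    import heapq, itertools

    def a_star(start, goal, neighbors, heuristic):
        # neighbors(n) yields (next_node, step_cost); heuristic(n) estimates
        # the remaining cost to the goal. Returns a path or None.
        counter = itertools.count()  # tie-breaker so nodes are never compared
        frontier = [(heuristic(start), next(counter), 0, start, [start])]
        best_cost = {start: 0}
        while frontier:
            _, _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            for nxt, step in neighbors(node):
                new_cost = cost + step
                if new_cost < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = new_cost
                    heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                              next(counter), new_cost, nxt,
                                              path + [nxt]))
        return None

    graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
    print(a_star("a", "c", lambda n: graph[n], lambda n: 0))  # ['a', 'b', 'c']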

Which, of course, is what makes Mr. Marcus so painfully wrong!


I'm not sure it's true that symbolic AI is dead, but I think Gary Marcus-style symbolic AI is. His "next decade in AI" paper doesn't even mention symbolic regression. For those who don't know: linear regression tries to find a linear fit to a dataset, quadratic regression a quadratic fit, and so on. Symbolic regression tries to find a symbolic expression that gives you an accurate fit to the data.

Symbolic regression has an extremely obvious advantage over neural networks, which is that it learns parameters and architecture simultaneously. Having the correct architecture means that the generalization power is greater and that the cost of evaluation due to redundant parameters is lower. The crippling downside is that the search space is so vast that it is only applicable to toy problems.
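A deliberately tiny sketch of the idea (my own toy, not how any real symbolic regression system works): enumerate a handful of expression templates, fit each one's constants, and keep the best fit. Real systems search a vastly larger space of expression trees, which is exactly the scaling problem.

    import numpy as np

    # Brute-force "symbolic regression": try a few expression templates,
    # fit each template's constants by least squares, keep the best fit.
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, 200)
    y = 3 * x**2 + 2 + rng.normal(0, 0.05, 200)  # hidden ground truth

    templates = {
        "a*x + b":      lambda x: x,
        "a*x**2 + b":   lambda x: x**2,
        "a*sin(x) + b": lambda x: np.sin(x),
        "a*exp(x) + b": lambda x: np.exp(x),
    }

    best = None
    for name, basis in templates.items():
        A = np.column_stack([basis(x), np.ones_like(x)])
        (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.mean((A @ np.array([a, b]) - y) ** 2)
        if best is None or err < best[0]:
            best = (err, name, a, b)

    print(best)  # the quadratic template wins, with a ~ 3 and b ~ 2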

But Gary Marcus is in favour of hybrid architectures, so what would that look like? In the SymbolNet paper, they essentially keep the overall neural network architecture but replace the activation functions with functions that take multiple inputs, i.e. symbolic primitives. The network can then be pruned down into a symbolic expression.
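My loose reading of that, as a sketch (not the SymbolNet authors' code, just the shape of the idea): each unit gets a small bank of candidate primitives, including binary ones like products, and sparsity/pruning later selects one primitive per unit.

    import torch
    import torch.nn as nn

    class SymbolicLayer(nn.Module):
        # Each "unit" gets two linear pre-activations so that binary
        # primitives (like a*b) have two arguments; the layer emits a small
        # bank of candidate primitives per unit. An L1/pruning penalty on
        # downstream weights would then pick one primitive per unit,
        # leaving a readable symbolic expression.
        def __init__(self, d_in, d_units):
            super().__init__()
            self.lin = nn.Linear(d_in, 2 * d_units)
            self.d_units = d_units

        def forward(self, x):
            z = self.lin(x)
            a, b = z[..., :self.d_units], z[..., self.d_units:]
            return torch.cat([a, torch.sin(a), a * b], dim=-1)

    layer = SymbolicLayer(4, 8)
    out = layer(torch.randn(2, 4))  # shape (2, 24): three primitives per unit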

That in itself is actually a pretty damning blow to Gary Marcus, because now you have most of the benefits of symbolic AI with only a tiny vestige of investment into it.

What this tells us is that fixed activation functions with a single input appear to be a no-go, but nobody ever said that biological neurons implement sigmoid, ReLU, etc. in the first place. It's possible that the spike encoding already acts as a mini symbolic regressor and gives each neuron its own equivalent of an activation function.

The "Neuronal arithmetic" paper has shown that biological neurons can not only calculate sums (ANNs can do this), but also multiply their inputs, which is something the activation functions in artificial neural networks cannot do. LLMs use gating in their MLPs and attention to explicitly model multiplication.

There is also the fact that biological neurons form loops. The memory cells in LSTMs perform a similar function, but in the brain there can be memory cells everywhere whereas in a fixed architecture like LSTMs they are only where the designer put them.
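Just to spell out what "memory cell" means here (the standard textbook LSTM update, written out by hand): the recurrence lives in one hand-placed state variable, c, and nowhere else.

    import torch

    def lstm_cell_step(x, h, c, Wx, Wh, b):
        # Standard LSTM cell update. The persistent state c is the one
        # place in the architecture where information loops across steps,
        # and it is there only because the designer put it there.
        gates = x @ Wx + h @ Wh + b          # input/forget/cell/output, stacked
        i, f, g, o = gates.chunk(4, dim=-1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c_new = f * c + i * torch.tanh(g)    # the recurrent "memory cell"
        h_new = o * torch.tanh(c_new)
        return h_new, c_new

    d_in, d_h = 8, 16
    h, c = torch.zeros(d_h), torch.zeros(d_h)
    Wx, Wh, bias = torch.randn(d_in, 4 * d_h), torch.randn(d_h, 4 * d_h), torch.zeros(4 * d_h)
    h, c = lstm_cell_step(torch.randn(d_in), h, c, Wx, Wh, bias)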

It seems as if the problem with neural networks is that they're too static and inflexible and contain too much influence from human designers.


You also need something on Gary Marcus's sign about how wonderful Gary Marcus is, like:

>Gary Marcus tried to tell you for years that this moment would come for years. Do consider reading his 2020 article...

I can't say I agree with the sentiment "Game over". The game of trying to develop AI is very much on.



