Hacker News

> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.

I'm on board with being skeptical that LLMs will lead to AGI, but claiming there's no possibility at all seems like such a strong claim. Should we really bet that there is something special (or even 'magic') about our particular brain/neural architecture/nervous system + senses + gut biota + etc.?

Don't, like, crows (or octopuses, or elephants, or ...) have a different architecture and display remarkable intelligence? OK, maybe not different enough (not 'digital') and not AGI (not 'human-level'), but already -somewhat- different, which should hint that there -can- be alternatives.

Unless we define 'human-level' to be 'human-similar'. Then I agree - "our way" may be the only way to make something that is "us".



We still haven’t figured out what intelligence even is. Depending on what you care about, the second-smartest animal in the world varies wildly.


This is a bogus argument. There's a lot we don't understand about LLMs, yet we built them.


We built something we don’t understand by trial and error. Evolution took a few billion years getting to intelligence, so I guess we’re a few sprints away at least


Evolution took millions of years to reach the minimal thing that can read, write and solve leetcode problems, so by similar logic we're at least a few million years in...


Except it learned by direct examples, and somehow none of these models can solve even basic logic that isn't in their training data.

Us having the ability doesn’t mean we’re close to reproducing ourselves, either


>Except it learned by direct examples

What do you mean by "it learned"? From my perspective evolution is nothing more than trial and error on a big timescale. In a way we are much more than evolution, because we are a conscious intelligence directing the trial and error, shortening the timescale substantially.

Take robots that can walk, for example. Instead of starting with a fish that slowly, over thousands or even millions of years, moves to land and grows limbs over many generations, we can just add legs which we have already tested in simulated software at 10x or more the speed of real time.
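The "trial and error on a big timescale" point can be made concrete with a toy (1+1) evolutionary algorithm. This is a minimal sketch of blind mutate-and-select search on a bit string, not a claim about how biological evolution or any real robot simulator works:

```python
import random

def evolve(target, mutation_rate=0.02, seed=0, max_gens=100_000):
    """(1+1) evolutionary algorithm: keep one candidate, make a mutated
    copy each generation, and let the fitter of the two survive."""
    rng = random.Random(seed)
    n = len(target)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = lambda c: sum(a == b for a, b in zip(c, target))
    for gen in range(max_gens):
        # Flip each bit independently with probability mutation_rate.
        child = [bit ^ (rng.random() < mutation_rate) for bit in parent]
        if fitness(child) >= fitness(parent):
            parent = child
        if fitness(parent) == n:
            return gen  # generations of blind trial and error needed
    return max_gens

# No understanding of the target is required; selection alone gets there,
# it just burns many iterations doing so.
print(evolve([1, 0, 1, 1, 0, 0, 1, 0] * 8))  # 64-bit target
```

Each simulated "generation" here costs microseconds, which is the commenter's point: the same trial-and-error loop that takes nature generations can be run far faster once it's software.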

AGI potential or possibilities should not be measured by nature's scale.


Have you tried giving it basic logic that isn't in its training data?

I have. gpt-3.5-instruct required a lot of prompting to keep it on track. Sonnet 4 got it in one.

Terence Tao, the most prominent mathematician alive, says he's been getting LLM assistance with his research. I would need about a decade of training to do in a day any math that Tao can't do in his head in less than 5 seconds.

LLMs suffer from terrible, uh, dementia-like distraction, but they can definitely do logic.
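One way to actually run the "logic that isn't in its training data" test is to generate problems with randomized nonsense terms, so the exact string cannot have been memorized and the answer is known in advance. A minimal sketch (the model-querying step is left out; only the problem generator is shown):

```python
import random

def random_syllogism(rng):
    """Build a transitivity chain over made-up words, plus its
    ground-truth answer, so this exact problem can't be in any corpus."""
    names = ["".join(rng.choices("bcdfglmnprst", k=5)) for _ in range(4)]
    premises = [f"All {a} are {b}." for a, b in zip(names, names[1:])]
    question = f"Are all {names[0]} {names[-1]}?"
    # Chaining "all A are B, all B are C, ..." always yields "yes".
    return " ".join(premises) + " " + question, "yes"

prompt, answer = random_syllogism(random.Random(42))
print(prompt)  # e.g. "All bdmpr are lgnst. ... Are all bdmpr ... ?"
# Send `prompt` to the model of your choice and compare its reply to
# `answer`: solving it requires applying transitivity, not recall.
```

Varying the seed gives an endless supply of fresh instances, which is what makes the comparison between models (or prompting strategies) meaningful.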


There's a lot _I_ don't understand about LLMs, but I strongly question whether there is a lot that the best experts in the field don't understand about LLMs.


Oh, boy, are you in for a surprise.

Our theories of how LLMs learn and work look a lot more like biology than math. Including how vague and noncommittal they are because biology is _hard_.


this is an argument against LLM cognition. we don't know anything about the human brain, and we don't know anything about LLMs, so they must be similar? I don't follow


It's an argument that we don't need to understand LLMs to build them. But mostly it's just a statement of fact to promote correct understanding of our world.


Evolution had a much worse understanding of intelligence than we do and still managed just fine (thus I'd expect us to need less time and fewer iterations to get there).


Here’s my argument for why AGI may be practically impossible:

1. I believe AGI may require a component of true agency: an intelligence that has a stable sense of self that it's trying to act on behalf of.

2. We are not putting any resources of significant scale towards creating such an AGI. It’s not what any of us want. We want an intelligence that acts out specific commands on behalf of humans. There’s an absolutely ruthless evolutionary process for AIs where any AI that doesn’t do this well enough, at low enough costs, is eliminated.

3. We should not believe that something we're not actually trying to create, and for which there is no evolutionary process that selects for it, will somehow magically appear. It's good sci-fi, and it can be interesting to ponder. But it's not worth worrying about.

Even before that, we need AI that can self-update and do long-term zero-shot learning. I'm not sure we're even really gonna put any real work into that either. I suspect we will find that we want reproducible, dependable, energy-efficient models.

There's a chance that the agentic AIs we have now are nearly the peak of what we'll achieve. Like, I've found that I value Zed's small auto-complete model more than the agentic AIs, and I suspect that if we had a bunch of small, fast, specialised models for the various aspects of doing development, I'd use those far more than a general-purpose agentic AI.

It's technically possible to do fully secure, signed, end-to-end encrypted email. It's been possible for many decades now. We can easily imagine the ideal communication system, and it's even fairly easy to solve, technically speaking. Yet it's not happening the way we imagined. I think that shows how what's technically or physically possible isn't always relevant to what's practically possible. I think we will get electronic mail right eventually. But if it takes half a century or more for that... something that's orders of magnitude more difficult (AGI) could take centuries or more.


>only way to make something that is "us".

Which many people seem to neglect: instead of making us, we make an alien.

Hell, making us is a good outcome. We at least somewhat understand us. Set off a bunch of self-learning, self-organizing code to make an alien, and you'll have no clue what comes out the other side.



