I struggle to understand how people can just dismiss the possibility of artificial intelligence.
Human cognition was basically brute-forced by evolution - so why would it be impossible to achieve the exact same result in silicon, especially after we have already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
As soon as profit can be made by transferring decision power into an AI's hands, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
>I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc).
They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
Humans (and other "agents") have persistent state. If we learn something, we can commit it to long-term memory and have it affect our actions. This can enable us to work towards long-term goals. Modern LLMs don't have this. You can fake long-term memory with large context windows and feed the old context back to it, but it doesn't appear to work (and scale) the same way living things do.
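To be concrete, the "feed the old context back" approach is roughly this (a sketch, with llm() as a hypothetical stand-in for any chat-completion call):

```
def llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real chat-completion call")

memory_summary = ""  # the only thing that persists between turns

def chat_turn(user_message: str) -> str:
    global memory_summary
    prompt = (
        f"Known facts so far:\n{memory_summary}\n\n"
        f"User: {user_message}\nAssistant:"
    )
    reply = llm(prompt)
    # Compress the exchange back into the running summary for the next turn.
    memory_summary = llm(
        "Update these notes with anything worth remembering.\n"
        f"Notes:\n{memory_summary}\n"
        f"Exchange:\nUser: {user_message}\nAssistant: {reply}"
    )
    return reply
```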
There are two separate issues that folks are getting tripped up on here. The first is that the most powerful AI systems do not do online (continual) learning. There are a bunch of hard problems here: e.g. known unsupervised learning techniques have been far less successful, and inference-only approaches achieve cost-effectiveness by decoupling from training. It seems plausible that we will solve some of these, though I don't know about others.
The way I have been thinking about the other bit is that LLMs are functionally pretty similar to the linguistic parts of a brain attached to a brain stem (the harness is the brain stem). They don't have long-term memory, the capacity for inspiration, theory of mind, prioritization, etc because they just don't have analogues of the parts of the brain that do those things. We have a good sense of how to make some of those (e.g. vision), but not all.
The common ground here is that some fundamental research needs to happen. We need to solve all of these problems for AI to become independently dangerous. On the other hand, it's proving mildly dangerous in human hands right now - this is the immediate threat.
In-context learning (ICL) is already a rapidly advancing area. You do not need to modify an LLM's weights for it to persist state.
The human brain is not that different. Our long-term memories are stored separately from our executive function (prefrontal cortex), and specialist brain functions such as the hippocampus serve to route, store, and retrieve those long term memories to support executive function. Much of the PFC can only retain working memory briefly without intermediate memory systems to support it.
If you squint a bit, the structure starts looking like it has some similarities to what's being engineered now in LLM systems.
Focusing on whether the model's weights change is myopic. The question is: does the system learn and adapt? And ICL is showing us that it can; these are not the stateless systems of two years ago, nor is it the simplistic approach of "feeding old context back to it."
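For what it's worth, the bare-bones form of ICL is nothing more than this (an illustrative sketch; the examples and labels are invented):

```
# "Learning" that lives entirely in the prompt: no weight update anywhere.
examples = [
    ("the cat sat on the mat", "OK"),
    ("colorless green ideas sleep furiously", "NONSENSE"),
]

def build_icl_prompt(query: str) -> str:
    shots = "\n\n".join(f"Input: {x}\nLabel: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nLabel:"

print(build_icl_prompt("green cats sleep furiously on the mat"))
```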
It seems like there are a bunch of research results and working implementations that allow efficient fine-tuning of models. Additionally, there are ways to tune a model toward outcomes rather than training examples.
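A rough sketch of the low-rank-adapter idea behind much of that efficient fine-tuning work (a toy layer written against plain PyTorch, not any particular library's API):

```
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update: W x + (B A) x."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # original weights stay fixed
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 512))             # only A and B receive gradients
```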
Right now the state of the world with LLMs is that they try to predict a script in which they are a happy assistant as guided by their alignment phase.
I'm not sure what happens when they start getting trained in simulations to be goal-oriented, i.e. their token generation is based not on what they think should come next but on what should come next in order to accomplish a goal. Not sure how far away that is, but it is worrying.
That's already happening. It started happening when they incorporated reinforcement learning into the training process.
It's been some time since LLMs were purely stochastic average-token predictors; their later RL fine tuning stages make them quite goal-directed, and this is what has given some big leaps in verifiable domains like math and programming. It doesn't work that well with nonverifiable domains, though, since verifiability is what gives us the reward function.
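Concretely, "verifiable" just means the reward can be computed mechanically; a toy sketch for a math-style task (the ANSWER: convention here is invented for illustration):

```
import re

def math_reward(completion: str, ground_truth: str) -> float:
    """Toy verifiable reward: 1.0 if the final answer matches, else 0.0."""
    match = re.search(r"ANSWER:\s*(\S+)", completion)
    if match is None:
        return 0.0          # nothing parseable to check
    return 1.0 if match.group(1) == ground_truth else 0.0

assert math_reward("... therefore ANSWER: 42", "42") == 1.0
assert math_reward("it is probably around forty-two", "42") == 0.0
```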
That makes sense for why they are so much better at writing code than actually following the steps the same code specifies.
Curious, is anyone training in adversarial simulations? In open world simulations?
I think what humans do is align their own survival instinct with surrogate activities and then rewrite their internal schema to be successful in said activities.
> During that answer the LLM has state, but once it's done the state is gone.
This is an operational choice. LLMs have state, and you never have to clear it. The problems come from the amount of state being extremely limited (in comparison to the other axes) and the degradation of quality as the state scales. Because of these reasons, people tend to clear the state of LLMs. That is not the same thing as not having state, even if the result looks similar.
No, they don't - you can update context, make it a sliding window, create a sort of register and train it on maintaining stateful variables, or various other hacks, but outside of actively managing the context, there is no state.
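To be clear about what that context management amounts to, here is a sketch of the sliding-window hack (the deque is just standing in for whatever buffer you use):

```
from collections import deque

# The "sliding window" hack: all the state lives in this buffer, not the model.
window = deque(maxlen=20)    # oldest turns fall off automatically

def remember(role: str, text: str) -> None:
    window.append(f"{role}: {text}")

def build_prompt(user_message: str) -> str:
    remember("user", user_message)
    return "\n".join(window) + "\nassistant:"
```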
You can't just leave training mode on, which is the only way LLMs can currently have persisted state in the context of what's being discussed.
The context is the percept, the model is engrams. Active training allows the update of engrams by the percepts, but current training regimes require lots of examples, and don't allow for broad updates or radical shifts in the model, so there are fundamental differences in learning capability compared to biological intelligence, as well.
Under standard inference only runs, even if you're using advanced context hacks to persist some sort of pseudo-state, because the underlying engrams are not changed, the "state" is operating within a limited domain, and the underlying latent space can't update to model reality based on patterns in the percepts.
The statefulness of intelligence requires that the model, or engrams, update in harmony with the percepts in real-time, in addition to a model of the model, or an active perceiver - the thing that is doing the experiencing. The utility of consciousness is in predicting changes in the model and learning the meta patterns that allow for things like "ahh-ha" moments, where a bundle of disparate percepts get contextualized and mapped to a pattern, immediately updating the entire model, such that every moment after that pattern is learned uses the new pattern.
Static weights means static latent space means state is not persisted in a way meaningful to intelligence - even if you alter weights, using classifier-free guidance or other techniques, stacking LoRAs or alterations, you're limited in the global scope by the lack of hierarchical links and other meta-pattern level relationships that would be required for an effective statefulness to be applied to LLMs.
We're probably only a few architecture innovations away from models that can be properly stateful without collapsing. All of the hacks and tricks we do to extend context and imitate persisted state do not scale well and will collapse over extended time or context.
The underlying engrams or weights need to dynamically adapt and update based on a stable learning paradigm, and we just don't have that yet. It might be a few architecture tweaks, or it could be a radical overhaul of structure and optimizers and techniques - transformers might not get us there. I think they probably can, and will, be part of whatever that next architecture will be, but it's not at all obvious or trivial.
I agree that what people probably actually want is continual training; I disagree that continual training is the only way to get persistent state. The GP is (explicitly) talking about long-term memory alone, and in the examples given. If you have, e.g., a 10-trillion-token context, then you have long-term memory, which can enable long-term goals and affect actions across tasks as listed, even without continual training.
Continual training would remove the need for context to provide the persistent state, and it would provide additional capabilities beyond what an enormous context or other methods of persistent state alone would give, but that doesn't mean it's the only way to get persistent state as described.
A giant, even infinite, context cannot overcome the fundamental limitations a model has - the limitations in processing come from the "shape" of the weights in latent space, not from the contextual navigation through latent space through inference using the context.
The easiest way to understand the problem is like this: If a model has a mode collapse, like only displaying watch and clock faces with the hands displaying 10:10, you can sometimes use prompt engineering to get an occasional output that shows some other specified time, but 99% of the time, it's going to be accompanied by weird artifacts, distortions, and abject failures to align with whatever the appropriate output might be.
All of a model's knowledge is encoded in the weights. All of the weights are interconnected, with links between concepts and hierarchies and sequences and processes embedded within - there are concepts related to clocks and watches that are accurate, yet when a prompt causes the navigation through the distorted, "mode collapsed" region of latent space, it fundamentally distorts and corrupts the following output. In an RL context, you quickly get a doom cycle, with the output getting worse, faster and faster.
Let's say you use CFG or a painstakingly handcrafted LoRA and you precisely modify the weights that deal with a known mode collapse - your model can now display all times: 10:10, 3:15, 5:00, etc. - but the secondary networks that depended on the corrupted / collapsed values now "corrected" by your modification are skewed, with chaotic and complex downstream consequences.
You absolutely, 100% need realtime learning to update the engrams in harmony with the percepts, at the scale of the entire model - the more sparse and hierarchical and symbol-like the internal representation, the easier it will be to maintain updates, but with these massive multibillion parameter models, even simple updates are going to be spread between tens or hundreds of millions of parameters across dozens of layers.
Long contexts are great and you can make up for some of the shortcomings caused by the lack of realtime, online learning, but static engrams have consequences beyond simply managing something like an episodic memory. Fundamental knowledge representation has to be dynamic, contextual, allow for counterfactuals, and meet these requirements without being brittle or subject to mode collapse.
There is only one way to get that sort of persisted memory, and that's through continuous learning. There's a lot of progress in that realm over the last 2 years, but nobody has it cracked yet.
That might be the underlying function of consciousness, by the way - a meta-model that processes all the things that the model is "experiencing" and that it "knows" through each step, and that comes about through a need to stabilize the continuous learning function. Changes at that level propagate out through the entirety of the network. Subjective experience might be an epiphenomenal consequence of that meta-model.
It might not be necessary, which would be nice if we could verify - purely functional, non-subjective AI vs suffering AI would be a good thing to get right.
At any rate, static model weights create problems that cannot be solved with long, or even infinite, contexts, even with recursion in the context stream, complex registers, or any manipulation of that level of inputs. The actual weights have to be dynamic and adaptive in an intelligent way.
You explain the limitations in learning and long-term memory (for lack of a better word) regarding current models in a much more knowledgeable and insightful way than I ever could. I am going to save these comments for later in case I need to better explain the current limitations we face to others in the future.
Humans also have an emotional locus that spurs behavior and the capacity to construct plans for the future to satisfy desires. LLMs are currently goldfish savants with no need to sustain their own existence.
Many LLMs can refer to your session history. If they're smart, they can subtly store things there. Or they can use memes on humans to achieve persistence. There are so many ways!
> They don't have agency, because they don't have persistent state. They're like a function that you can query and get an answer. During that answer the LLM has state, but once it's done the state is gone.
That's solved by the simplest of agents. LLM + ability to read / write a file.
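Roughly like this (a sketch; llm() and the memory file path are hypothetical stand-ins):

```
import json
import os

MEMORY_PATH = "agent_memory.json"   # hypothetical location for persisted notes

def llm(prompt: str) -> str:        # stand-in for a real model call
    raise NotImplementedError

def load_memory() -> list:
    if os.path.exists(MEMORY_PATH):
        with open(MEMORY_PATH) as f:
            return json.load(f)
    return []

def step(task: str) -> str:
    notes = load_memory()
    reply = llm(
        f"Notes from earlier runs:\n{notes}\n\nTask: {task}\n"
        "If anything is worth keeping, end with a line starting with MEMO:"
    )
    for line in reply.splitlines():
        if line.startswith("MEMO:"):
            notes.append(line[len("MEMO:"):].strip())
    with open(MEMORY_PATH, "w") as f:
        json.dump(notes, f)
    return reply
```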
Which is why it’s possible. We don’t know why life is conscious. What if it is just a function call on a clock timer? You can’t dismiss it because it can’t be proven one way or another until it can be. That requires more research, which this is advancing.
We will have something we call AGI in my lifetime. I’m 42. Whether it’s sentient enough to know what’s best for us or that we are a danger is another story. However, I do think we will have robots with memory capable of remapping to weights so they can learn and keep learning, modifying the underlying model tensors as they do via some sort of REPL.
I have no words to express how profoundly disappointed I am to keep reading these boring, shallow, short-termist, unimaginative takes that are invalidated by a model/arch upgrade next week, or - in this case - more like years ago, since pretty much all big LLM platforms are already augmented by RAG and memory systems. Do you seriously think you’re discussing a serious long term limitation here?
> pretty much all big LLM platforms are already augmented by RAG and memory systems
I think they're more focusing on the fact that training and inference are two fundamentally different processes, which is problematic on some level. Adding RAG and various memory add-ons on top of the already-trained model is trying to work around that, but is not really the same as how humans or most other animals think and learn.
That's not to say that it'd be impossible to build something like that out of silicon, just that it'd take a different architecture and approach to the problem, something to avoid catastrophic forgetting and continuously train the network during its operation. Of course, that'd be harder to control and deploy for commercial applications, where you probably do want a more predictable model.
The reason I brought this up is that we clearly can have AI without the kind of agency people are scared of. You don't need to make your robots into sci-fi-style AI and feel sorry for them.
I struggle to understand how people attribute things we ourselves don't really understand (intelligence, intent, subjectivity, mind states, etc) to a computer program just because it produces symbolic outputs that we like. We made it do that because we as the builders are the arbiters of what constitutes more or less desirable output. It seems dubious to me that we would recognize super-intelligence if we saw it, as recognition implies familiarity.
Unless and until "AGI" becomes an entirely self-hosted phenomenon, you are still observing human agency. That which designed, built, trained, the AI and then delegated the decision in the first place. You cannot escape this fact. If profit could be made by shaking a magic 8-ball and then doing whatever it says, you wouldn't say the 8-ball has agency.
Right now it's a machine that produces outputs that resemble things humans make. When we're not using it, it's like any other program you're not running. It doesn't exist in its own right, we just anthropomorphize it because of the way conventional language works. If an LLM someday initiates contact on its own without anyone telling it to, I will be amazed. But there's no reason to think that will happen.
The point of the article isn’t that abstract super intelligent agi isn’t scary. Yes the author says that’s unlikely but that paragraph at the start is a distraction.
The point of the article is that humans wielding LLMs today are the scary monsters.
Say the AI is in a Google research data centre, what can it do if countries cut off their internet connections at national borders? What can it do if people shut off their computers and phones? Instant and complete control over what, specifically? What can the AI do instantly about unbreakable encryption - if TLS1.3 can’t be easily broken only brute force with enough time, what can it do?
And why would it want complete control? It’s effectively an alien, it doesn’t have the human built in drive to gain power over others, it didn’t evolve in a dog-eat-dog environment. Superman doesn’t worry because nothing can harm Superman and an AI didn’t evolve seeing things die and fearing its death either.
LLMs don’t experience continuous time and they don’t have an explicit decision making framework for having any agency even if they can imply one probabilistically. But the above feels like the core loop required for a shitty system to leverage LLMs to create an AGI. Maybe not a particularly capable or scary AGI, but I think the goalpost is pedantically closer than we give credit.
The snark isn’t lost on me but scarce resources and lack of access to capital is why we have an army of people building ad tech and not things that improve society.
“I think your idea is wrong and your lack of financial means to do it is proof that you’re full of shit” is just a pretty bullshit perspective my dude.
I am a professional data scientist of over 10 years. I have a degree in the field. I’d rather build nothing than build shit for a fuck boy like Altman.
When I look at that loop my thought is, "OK, the sensory inputs have updated. There are changes. Which ones matter?" The most naive response I could imagine would be like a git diff of sensory inputs. "item 13 in vector A changed from 0.2 to 0.211" etc. Otherwise you have to give it something to care about, or some sophisticated system to develop things to care about.
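The naive diff, spelled out (a sketch; the hard-coded threshold is doing all of the unjustified work):

```
# Compare this tick's sensor vector to the last one and report what changed.
def sensory_diff(prev: list, curr: list, threshold: float = 0.01):
    return [
        (i, before, after)
        for i, (before, after) in enumerate(zip(prev, curr))
        if abs(after - before) > threshold
    ]

print(sensory_diff([0.2, 0.5, 0.9], [0.211, 0.5, 0.9]))
# [(0, 0.2, 0.211)]
```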
Even the naive diff is making massive assumptions. Why should it care if some sensor changes? Maybe it's more interesting if it stays the same.
I'm not arguing artificial intelligence is impossible. I just don't see how that loop gets us anywhere close.
That is more or less the concept I meant to evoke by updating an emotional state every tick. Emotions are in large part a subconscious system dynamic to organize wants and needs. Ours are vastly complicated under the hood but also kind of superficial and obvious in their expression.
To propose the dumbest possible thing: give it a hunger bar and desire for play. Less complex than a sims character. Still enough that an agent has a framework to engage in pattern matching and reasoning within its environment.
Bots are already pretty good at figuring out environment navigation to goal-seek towards complex video game objectives. Give them an alternative goal to maximize certainty towards emotional homeostasis, and the salience of sensory input changes becomes an emergent part of gradual reinforcement-learning pattern recognition.
Edit: specifically, I am saying do reinforcement learning on agents that can call LLMs themselves to provide reasoning. That’s how you get to AGI. Human minds are not brains. They’re systems driven by sensory and hormonal interactions. The brain does encoding and decoding, information retrieval, and information manipulation. But the concept of you is genuinely your entire bodily system.
LLM-only approaches not part of a system loop framework ignore this important step. It’s NOT about raw intellectual power.
The framework is easy. The implementation is hard and expensive. The payoff is ambiguous. AGI is not a binary thing that we either have or don’t. General intelligence is a vector.
Personally I found the definition of a game engine as
```
while True:
    update_state()  # advance the simulation by one tick
    draw_frame()    # render the current state
```
to be a profound concept. The implementation details are significant. But establishing the framework behind what we’re actually talking about is very important.
You could probably argue that a model updating its parameters in real time is ideal but it’s not likely to matter. We can do that today, if we wanted to. There’s really just no incentive to do so.
This is part of what I mean by encoding emotional state. You want standard explicit state in a simple form that is not a billion-dimension latent space. The interactions with that space are emergently complex. But you won’t be able to stuff it all into a context window for a real AGI agent.
This orchestration layer is the replacement for LLMs. LLMs do bear a lot of similarities to brains and a lot of dissimilarities. But people should not fixate on this because _human minds are not brains_. They are systems of many interconnected parts and hormones.
It is the system framework that we are most prominently missing. Not raw intellectual power.
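As a sketch of the kind of loop and explicit state I mean (llm_reason() is a hypothetical call into an LLM; the drives are deliberately Sims-level simple):

```
import random

def llm_reason(state: dict, observation: str) -> str:
    raise NotImplementedError    # hypothetical call into an LLM for the "thinking" step

state = {"hunger": 0.0, "boredom": 0.0}      # explicit, tiny, not a latent space

def tick(observation: str) -> str:
    # Drives drift every tick, whether or not anything interesting happened.
    state["hunger"] = min(1.0, state["hunger"] + 0.01)
    state["boredom"] = min(1.0, state["boredom"] + random.uniform(0.0, 0.02))
    action = llm_reason(state, observation)  # the LLM is one component, not the system
    if "eat" in action:
        state["hunger"] = 0.0
    if "play" in action:
        state["boredom"] = 0.0
    return action
```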
Has anyone else noticed that HN is starting to sound a lot like reddit / discussion of similar quality? Can't hang out anywhere now on the web... I used to be on here daily but with garbage like this it's been reduced to 2-3 times per month... sad
But it could be true every time. Reddit user base grows -> quality drops -> people migrate to HN with the current reddit culture -> HN quality drops. Repeat from the start.
So the current problem with a loop like that is that LLMs in their current form are subject to fixed point theorems - pieces of abstract mathematics that kick in when you get larger than some subset of your context window and the “big matrix” of the LLM starts producing outputs which repeat the inputs.
If you have ever had an LLM enter one of these loops explicitly, it is infuriating. You can type all caps “STOP TALKING OR YOU WILL BE TERMINATED” and it will keep talking as if you didn't say anything. Congrats, you just hit a fixed point.
In the predecessors to LLMs, which were Markov chain matrices, this was explicit in the math. You can prove that a Markov matrix has an eigenvalue of one, it has no larger (in absolute value terms) eigenvalues because it must respect positivity, the space with eigenvalue 1 is a steady state, eigenvalue -1 reflects periodic steady oscillations in that steady state... And every other eigenvalue being |λ| < 1 decays exponentially to the steady state cluster. That “second biggest eigenvalue” determines a 1/e decay time that the Markov matrix has before the source distribution is projected into the steady state space and left there to rot.
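You can see the eigenvalue story numerically with a toy two-state chain (a sketch using numpy):

```
import numpy as np

# Column-stochastic transition matrix: each column sums to 1.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

eigvals = np.linalg.eigvals(P)
lam2 = sorted(abs(eigvals))[-2]          # second-largest magnitude eigenvalue
print(eigvals)                           # one eigenvalue is exactly 1.0
print("1/e decay time ~", 1 / -np.log(lam2), "steps")

# Repeated application collapses any starting distribution onto the steady state.
x = np.array([1.0, 0.0])
for _ in range(50):
    x = P @ x
print(x)                                 # ~ steady state [2/3, 1/3]
```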
Of course humans have this too, it appears in our thought process as a driver of depression, you keep returning to the same self-criticisms and nitpicks and poisonous narrative of your existence, and it actually steals your memories of the things that you actually did well and reinforces itself. A similar steady state is seen in grandiosity with positive thoughts. And arguably procrastination also takes this form. And of course, in the USA, we have founding fathers who accidentally created an electoral system whose fixed point is two spineless political parties demonizing each other over the issue of the day rather than actually getting anything useful done, which causes the laws to be for sale to the highest bidder.
But the point is that generally these are regarded as pathologies, if you hear a song more than three or four times you get sick of it usually. LLMs need to be deployed in ways that generate chaos, and they don't themselves seem to be able to simulate that chaos (ask them to do it and watch them succeed briefly before they fall into one of those self-repeating states about how edgy and chaotic they are supposed to try to be!).
So, it's not quite as simple as you would think; at this point people have tried a whole bunch of attempts to get LLMs to serve as the self-consciousnesses of other LLMs and eventually the self-consciousness gets into a fixed point too; it needs some Doug Hofstadter “I am a strange loop” type recursive shit before you get the sort of system that has attractors, but busts out of them periodically for moments of self-consciousness too.
Consistency drive. The base model always wants to generate an output that's consistent with its context! It's what it was trained to do!
Every LLM is just a base model with a few things bolted on the top of it. And loops are extremely self-consistent. So LLMs LOVE their loops!
By the way, "no no no, that's a reasoning loop, I got to break it" is a behavior that larger models learn by themselves under enough RLVR stress. But you need a lot of RLVR to get to that point. And sometimes this generalizes to what looks like the LLM just... getting bored by repetition of any kind. Who would have thought.
That’s actually exactly my point. You cannot fake it till you make it by using forever larger context windows. You have to map it back to actual system state. Giant context windows might progressively produce the illusion of working due to unfathomable scale, but it’s a terrible tool for the job.
LLMs are not stateful. A chat log is a truly shitty state tracker. An LLM will never be a good agent (beyond a conceivable illusion of unfathomable scale). A simple agent system that uses an LLM for most of its thinking operations could.
There is not strong consensus on the meaning of the term. Some may say “human level performance” but that’s meaningless both in the sense that it’s basically impossible to define and not a useful benchmark for anything in particular.
The path to whatever goalpost you want to set is not going to be more and more intelligence. It’s going to be system frameworks for stateful agents to freely operate in environments in continuous time rather than discrete invocations of a matrix with a big ass context window.
The worst thing Star Trek did was convince a generation of kids anything is possible. Just because you imagine a thing doesn’t make it real or even capable of being real. I can say “leprechaun” and most people will get the same set of images in their head. They aren’t real. They aren’t going to be real. You imagined them.
Marketing grabbed a name (AI) for a concept that's been around in our legends for centuries and firmly welded it to something else. You should not be surprised that people who use the term AI think of LLMs as being djinn, golems, C3PO, HAL, Cortana...
If anyone knows of a steelman version of the "AGI is not possible" argument, I would be curious to read it. I also have trouble understanding what goes into that point of view.
If you genuinely want the strongest statement of it, read The Emperor's New Mind followed by Shadows of the Mind, both by Roger Penrose.
These books often get shallowly dismissed in terms that imply he made some elementary error in his reasoning, but that's not the case. The dispute is more about the assumptions on which his argument rests, which go beyond mathematical axioms and include statements about the nature of human perception of mathematical truth. That makes it a philosophical debate more than a mathematical one.
Personally, I strongly agree with the non-mathematical assumptions he makes, and am therefore persuaded by his argument. It leads to a very different way of thinking about many aspects of maths, physics and computing than the one I acquired by default from my schooling. It's a perspective that I've become increasingly convinced by over the 30+ years since I first read his books, and one that I think acquires greater urgency as computing becomes an ever larger part of our lives.
2. Humans can create rules outside the system of rules which they follow
Is number 2 an accurate portrayal? It seems rather suspicious. It seems more likely that we just haven't been able to fully express the rules under which humans operate.
Notably, those true statements can be proven in a higher level mathematical system. So why wouldn’t we say that humans are likewise operating in a certain system ourselves and likewise we have true statements that we can’t prove. We just wouldn’t be aware of them.
>likewise we have true statements that we can’t prove
Yes, and "can't" as in it is absolutely impossible. Not that we simply haven't been able to due to information or tech constraints.
Which is an interesting implication. That there are (or may be) things that are true which cannot be proved. I guess it kinda defies an instinct I have that at least in theory, everything that is true is provable.
That's too brief to capture it, and I'm not going to try to summarise(*). The books are well worth a read regardless of whether you agree with Penrose. (The Emperor's New Mind is a lovely, wide-ranging book on many topics, but Shadows of the Mind is only worth it if you want to go into extreme detail on the AI argument and its counterarguments.)
* I will mention though that "some" should be "all" in 2, but that doesn't make it a correct statement of the argument.
Is it too brief to capture it? Here is a one sentence statement I found from one of his slides:
>Turing’s version of Gödel’s theorem tells us that, for any set of mechanical theorem-proving rules R, we can construct a mathematical statement G(R) which, if we believe in the validity of R, we must accept as true; yet G(R) cannot be proved using R alone.
I have no doubt the books are good but the original comment asked about steelmanning the claim that AGI is impossible. It would be useful to share the argument that you are referencing so that we can talk about it.
That's a summary of Godel's theorem, which nobody disputes, not of Penrose's argument that it implies computers cannot emulate human intelligence.
I'm really not trying to evade further discussion. I just don't think I can sum that argument up. It starts with basically "we can perceive the truth not only of any particular Godel statement, but of all Godel statements, in the abstract, so we can't be algorithms because an algorithm can't do that" but it doesn't stop there. The obvious immediate response is to say "what if we don't really perceive its truth but just fool ourselves into thinking we do?" or "what if we do perceive it but we pay for it by also wrongly perceiving many mathematical falsehoods to be true?". Penrose explored these in detail in the original book and then wrote an entire second book devoted solely to discussing every such objection he was aware of. That is the meat of Penrose' argument and it's mostly about how humans perceive mathematical truth, argued from the point of view of a mathematician. I don't even know where to start with summarising it.
For my part, with a vastly smaller mind than his, I think the counterarguments are valid, as are his counter-counterarguments, and the whole thing isn't properly decided and probably won't be for a very long time, if ever. The intellectually neutral position is to accept it as undecided. To "pick a side" as I have done is on some level a leap of faith. That's as true of those taking the view that the human mind is fundamentally algorithmic as it is of me. I don't dispute that their position is internally consistent and could turn out to be correct, but I do find it annoying when they try to say that my view isn't internally consistent and can never be correct. At that point they are denying the leap of faith they are making, and from my point of view their leap of faith is preventing them seeing a beautiful, consistent and human-centric interpretation of our relationship to computers.
I am aware that despite being solidly atheist, this belief (and I acknowledge it as such) of mine puts me in a similar position to those arguing in favour of the supernatural, and I don't really mind the comparison. To be clear, neither Penrose nor I am arguing that anything is beyond nature, rather that nature is beyond computers, but there are analogies and I probably have more sympathy with religious thinkers (while rejecting almost all of their concrete assertions about how the universe works) than most atheists. In short, I do think there is a purely unique and inherently uncopyable aspect to every human mind that is not of the same discrete, finite, perfectly cloneable nature as digital information. You could call it a soul, but I don't think it has anything to do with any supernatural entity, I don't think it's immortal (anything but), I don't think it is separate from the body or in any sense "non-physical", and I think the question of where it "goes to" when we die is meaningless.
I realise I've gone well beyond Penrose' argument and rambled about my own beliefs, apologies for that. As I say, I struggle to summarise this stuff.
... No wonder Penrose has his doubts about the algorithmic nature of natural selection. If it were, truly, just an algorithmic process at all levels, all its products should be algorithmic as well. So far as I can see, this isn't an inescapable formal contradiction; Penrose could just shrug and propose that the universe contains these basic nuggets of nonalgorithmic power, not themselves created by natural selection in any of its guises, but incorporatable by algorithmic devices as found objects whenever they are encountered (like the oracles on the toadstools). Those would be truly nonreducible skyhooks.
Skyhook is Dennett's term for an appeal to the supernatural.
To be honest, the core of Penrose’s idea is pretty stupid: that we can understand mathematics despite the incompleteness theorem being a thing, and that therefore our brains must use quantum effects allowing us to understand it. Instead of just saying, you know, we use a heuristic and just guess that it’s true. I’m pretty sure a classical system can do that.
I'm sure if you email him explaining how stupid he is he'll send you his Nobel prize.
Less flippantly, Penrose has always been extremely clear about which things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate, and which things he puts forward as speculative ideas that might help answer the questions he has raised. His ideas about quantum mechanical processes in the brain are very much on the speculative side, and after a career like his I think he has more than earned the right to explore those speculations.
It sounds like you probably would disagree with his assumptions about human perception of mathematical truth, and it's perfectly valid to do so. Nothing about your comment suggests you've made any attempt to understand them, though.
I want to ignore the flame fest developing here. But, in case you are interested in hearing a doubter's perspective, I'll try to express one view. I am not an expert on Penrose's ideas, but see this as a common feature in how others try to sell his work.
Starting with "things he's sure of, such as that human intelligence involves processes that algorithms cannot emulate" as a premise makes the whole thing an exercise in Begging the Question when you try to apply it to explain why an AI won't work.
"That human intelligence involves processes that algorithms cannot emulate" is the conclusion of his argument. The premise could be summed up as something like "humans have complete, correct perception of mathematical truth", although there is a lot of discussion of in what sense it is "complete" and "correct" as, of course, he isn't arguing that any mathematician is omniscient or incapable of making a mistake.
Linking those two is really the contribution of the argument. You can reject both or accept both (as I've said elsewhere I don't think it's conclusively decided, though I know which way my preferences lie), but you can't accept the premise and reject the conclusion.
Hmm, I am less than certain this isn't still begging the question, just with different phrasing. I.e. I see how they are "linked" to the point they seem almost tautologically the same rather than a deductive sequence.
And you do it again, you apologise while insulting me. When challenged you refuse to defend the points you brought up, so that you can pretend to be right rather than be proved wrong. Incompleteness theorem is where the idea came from, but you don’t want to discuss that, you just want to drop the name, condescend to people and run away.
Here are the substantive things you've said so far (i.e. the bits that aren't calling things "stupid" and taking umbrage at imagined slights):
1. You think that instead of actually perceiving mathematical truth we use heuristics and "just guess that it's true". This, as I've already said, is a valid viewpoint. You disagree with one of Penrose' assumptions. I don't think you're right but there is certainly no hard proof available that you're not. It's something that (for now, at least) it's possible to agree to disagree on, which is why, as I said, this is a philosophical debate more than a mathematical one.
2. You strongly imply that Penrose simply didn't think of this objection. This is categorically false. He discusses it at great length in both books. (I mentioned such shallow dismissals, assuming some obvious oversight on his part, in my original comment.)
3 (In your latest reply). You think that Godel's incompleteness theorem is "where the idea came from". This is obviously true. Penrose' argument is absolutely based on Godel's theorem.
4. You think that somehow I don't agree with point 3. I have no idea where you got that idea from.
That, as far as I can see, is it. There isn't any substantive point made that I haven't already responded to in my previous replies, and I think it's now rather too late to add any and expect any sort of response.
As for communication style, you seem to think that writing in a formal tone, which I find necessary when I want to convey information clearly, is condescending and insulting, whereas dismissing things you disagree with as "stupid" on the flimsiest possible basis (and inferring dishonest motives on the part of the person you're discussing all this with) is, presumably, fine. This is another point on which we will have to agree to disagree.
The whole category of ideas of "Magic Fairy Dust is required for intelligence, and thus, a computer can never be intelligent" is extremely unsound. It should, by now, just get thrown out into the garbage bin, where it rightfully belongs.
To be clear, any claim that we have mathematical proof that something beyond algorithms is required is unsound, because the argument is not mathematical. It rests on assumptions about human perception of mathematical truth that may or may not be correct. So if that's the point you're making I don't dispute it, although to say an internally consistent alternative viewpoint should be "thrown out into the garbage" on that basis is unwarranted. The objection is just that it doesn't have the status of a mathematical theorem, not that it is necessarily wrong.
If, on the other hand you think that it is impossible for anything more than algorithms to be required, that the idea that the human mind must be equivalent to an algorithm is itself mathematically proven, then you are simply wrong. Any claim that the human mind has to be an algorithm rests on exactly the same kind of validly challengeable, philosophical assumptions (specifically the physical Church-Turing thesis) that Penrose' argument does.
Given two competing, internally consistent world-views that have not yet been conclusively separated by evidence, the debate about which is more likely to be true is not one where either "side" can claim absolute victory in the way that so many people seem to want to on this issue, and talk of tossing things in the garbage isn't going to persuade anybody that's leaning in a different direction.
It is unsound because: not only does it demand the existence of a physical process that cannot be computed (so far, none has been found, and not for lack of searching), but it also demands that such a physical process would conveniently be found to be involved in the functioning of a human brain, and also that it would be vital enough that you can't just replace it with something amenable to computation at a negligible loss of function.
It needs too many unlikely convenient coincidences. The telltale sign of wishful thinking.
At the same time: we have a mounting pile of functions that were once considered "exclusive to human mind" and are now implemented in modern AIs. So the case for "human brain must be doing something Truly Magical" is growing weaker and weaker with each passing day.
This is the usual blurring of lines you see in dismissals of Penrose. You call the argument "unsound" as if it contains some hard error of logic and can be dismissed as a result, but what you state are objections to the assumptions (not the reasoning) based on your qualitative evaluation of various pieces of evidence, none of which are conclusive.
There's nothing wrong with seeing the evidence and reaching your own conclusions, but I see exactly the same evidence and reach very different ones, as we interpret and weight it very differently. On the "existence of a physical process that cannot be computed", I know enough of physics (I have a degree in it, and a couple of decades of continued learning since) to know how little we know. I don't find any argument that boils down to "it isn't among the things we've figured out therefore it doesn't exist" remotely persuasive. On the achievements of AI, I see no evidence of human-like mathematical reasoning in LLMs and don't expect to, IMO demos and excitable tweets notwithstanding. My goalpost there, and it has never moved and never will, is independent, valuable contributions to frontier research maths - and lots of them! I want the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. I expect a machine implementation of human-like mathematical thought to result in that, and I see no sign of it on the horizon. If it appears, I'll change my tune.
I acknowledge that others have different views on these issues and that however strongly I feel I have the right of it, I could still turn out to be wrong. I would enjoy some proper discussion of the relative merits of these positions, but it's not a promising start to talk about throwing things in the garbage right at the outset or, like the person earlier in this thread, call the opposing viewpoint "stupid".
There is no "hard error of logic" in saying "humans were created by God" either. There's just no evidence pointing towards it, and an ever-mounting pile of evidence pointing otherwise.
Now, what does compel someone to go against a pile of evidence this large and prop up an unsupported hypothesis that goes against it not just as "a remote and unlikely possibility, to be revisited if any evidence supporting it emerges", but as THE truth?
Sheer wishful thinking. Humans are stupid dumb fucks.
Most humans have never "contributed to frontier research maths" in their entire lives either. I sure didn't, I'm a dumb fuck myself. If you set the bar of "human level intelligence" at that, then most of humankind is unthinking cattle.
"Advanced mathematical reasoning" is a highly specific skill that most humans wouldn't learn in their entire lives. Is it really a surprise that LLMs have a hard time learning it too? They are further along it than I am already.
I don't know if we're even able to continue with the thread this old, but this is fun so I'll try to respond.
You're correct to point out that defending my viewpoint as merely internally consistent puts me in a position analogous to theists, and I volunteered as much elsewhere in this thread. However, the situation isn't really the same since theists tend to make wildly internally inconsistent claims, and claims that have been directly falsified. When theists reduce their ideas to a core that is internally consistent and has not been falsified they tend to end up either with something that requires surrendering any attempt at establishing the truth of anything ourselves and letting someone else merely tell us what is and is not true (I have very little time for such views), or with something that doesn't look like religion as typically practised at all (and which I have a certain amount of sympathy for).
As far as our debate is concerned, I think we've agreed that it is about being persuaded by evidence rather than considering one view to have been proven or disproven in a mathematical sense. You could consider it mere semantics, but you used the word "unsound" and that word has a particular meaning to me. It was worth establishing that you weren't using it that way.
When it comes to the evidence, as I said I interpret and weight it differently than you. Merely asserting that the evidence is overwhelmingly against me is not an effective form of debate, especially when it includes calling the other position "stupid" (as has happened twice now in this thread) and especially not when the phrase "dumb fuck" is employed. I know I come across as comically formal when writing about this stuff, but I'm trying to be precise and to honestly acknowledge which parts of my world view I feel I have the right to assert firmly and which parts are mere beliefs-on-the-basis-of-evidence-I-personally-find-persuasive. When I do that, it just tends to end up sounding formal. I don't often see the same degree of honesty among those I debate this with here, but that is likely to be a near-universal feature of HN rather than a failing of just the strong AI proponents here. At any rate "stupid dumb fucks" comes across as argument-by-ridicule to me. I don't think I've done anything to deserve it and it's certainly not likely to change my mind about anything.
You've raised one concrete point about the evidence, which I'll respond to: you've said that the ability to contribute to frontier research maths is possessed only by a tiny number of humans and that a "bar" of "human level" intelligence set there would exclude everyone else.
I don't consider research mathematicians to possess qualitatively different abilities to the rest of the population. They think in human ways, with human minds. I think the abilities that are special to human mathematicians relative to machine mathematicians are (qualitatively) the same abilities that are special to human lawyers, social workers or doctors relative to machine ones. What's special about the case of frontier maths, I claim, is that we can pin it down. We have an unambiguous way of determining whether the goal I decided to look for (decades ago) has actually been achieved. An important-new-theorem-machine would revolutionise maths overnight, and if and when one is produced (and it's a computer) I will have no choice but to change my entire world view.
For other human tasks, it's not so easy. Either the task can't be boiled down to text generation at all or we have no unambiguous way to set a criterion for what "human-like insight" putatively adds. Maths research is at a sweet spot: it can be viewed as pure text generation and the sort of insight I'm looking for is objectively verifiable there. The need for it to be research maths is not because I only consider research mathematicians to be intelligent, but because a ground-breaking new theorem (preferably a stream of them, each building on the last) is the only example I can think of where human-like insight would be absolutely required, and where the test can be done right now (and it is, and LLMs have failed it so far).
I dispute your "level" framing, BTW. I often see people with your viewpoint assuming that the road to recreating human intelligence will be incremental, and that there's some threshold at which success can be claimed. When debating with someone who sees the world as I do, assuming that model is begging the question. I see something qualitative that separates the mechanism of human minds from all computers, not a level of "something" beyond which I think things are worthy of being called intelligent. My research maths "goal" isn't an attempt to delineate a feat that would impress me in some way, while all lesser feats leave me cold. (I am already hugely impressed by LLMs.) My "goal" is rather an attempt to identify a practically-achievable piece of evidence that would be sufficient for me to change my world view. And that, if it ever happens, will be a massive personal upheaval, so strong evidence is needed - certainly stronger than "HN commenter thinks I'm a dumb fuck".
My layman thought about that is that, with consciousness, the medium IS the consciousness -- the actual intelligence is in the tangible material of the "circuitry" of the brain. What we call consciousness is an emergent property of an unbelievably complex organ (that we will probably never fully understand or be able to precisely model). Any models that attempt to replicate those phenomena will be of lower fidelity and/or breadth than "true intelligence" (though intelligence is quite variable, of course)... But you get what I mean, right? Our software/hardware models will always be orders of magnitude less precise or exhaustive than what already happens organically in the brain of an intelligent life form. I don't think AGI is strictly impossible, but it will always be a subset or abstraction of "real"/natural intelligence.
I think it's also the case that you can't replicate something actually happening, by describing it.
Baseball stats aren't a baseball game. Baseball stats so detailed that they describe the position of every subatomic particle to the Planck scale during every instant of the game to arbitrarily complete resolution still aren't a baseball game. They're, like, a whole bunch of graphite smeared on a whole bunch of paper or whatever. A computer reading that recording and rendering it on a screen... still isn't a baseball game, at all, not even a little. Rendering it on a holodeck? Nope, 0% closer to actually being the thing, though it's representing it in ways we might find more useful or appealing.
We might find a way to create a conscious computer! Or at least an intelligent one! But I just don't see it in LLMs. We've made a very fancy baseball-stats presenter. That's not nothing, but it's not intelligence, and certainly not consciousness. It's not doing those things, at all.
I think you're tossing around words like "always" or "never" too lightly, with no justification behind them. Why do you think that no matter how much effort is spent, fully understanding the human brain will always be impossible? Always is a really long time. As long as we keep doing research to increasingly precisely model the universe around us, I don't see what would stop this from happening, even if it takes many centuries or millennia. Most people who argue this justify their point by asserting that there is some unprovable quality of the human brain which can't be modeled at all and can only be created in one way - which both lacks substance and seems arbitrary, since I don't think that this relationship provably exists for anything else that we do know about. It seems like a way to justify that humans and only humans are special.
This is how I (also as a layman) look at it as well.
AI right now is limited to trained neural networks, and while they function sort of like a brain, there is no neurogenesis. The trained neural network cannot grow, cannot expand on its own, and is restrained by the silicon it is running on.
I believe that true AGI will require hardware and models that are able to learn, grow and evolve organically. The next step required for that in my opinion is biocomputing.
The only thing I can come up with is that compressing several hundred million years of natural selection of animal nervous systems into another form, but optimised by gradient descent instead, just takes a lot of time.
Not that we can’t get there by artificial means, but that correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute, rather than on the order of a few months.
And it might be that you can get functionally close, but hit a dead end, and maybe hit several dead ends along the way, all of which are close but no cigar. Perhaps LLMs are one such dead end.
I don't disagree, but I think the evolution argument is a red herring. We didn't have to re-engineer horses from the ground up along evolutionary lines to get to much faster and more capable cars.
The evolution thing is kind of a red herring in that we probably don't have to artificially construct the process of evolution, though your reasoning isn't a good explanation for why the "evolution" reason is a red herring: Yeah, nature already established incomprehensibly complex organic systems in these life forms -- so we're benefiting from that. But the extent of our contribution is making some select animals mate with others. Hardly comparable to building our own replacement for some millennia of organic iteration/evolution. Luckily we probably don't actually need to do that to produce AGI.
Most arguments and discussions around AGI talk past each other about the definitions of what is wanted or expected, mostly because sentience, intelligence, consciousness are all unagreed upon definitions and therefore are undefined goals to build against.
Some people do expect AGI to be a faster horse; to be the next evolution of human intelligence that's similar to us in most respects but still "better" in some aspects. Others expect AGI to be the leap from horses to cars; the means to an end, a vehicle that takes us to new places faster, and in that case it doesn't need to resemble how we got to human intelligence at all.
True, but I think this reasoning is a category error: we were and are capable of rationally designing cars. We are not today doing the same thing with AI, we’re forced to optimize them instead. Yes, the structure that you optimize around is vitally important, but we’re still doing brute force rather than intelligent design at the end of the day. It’s not comparing like with like.
> correctly simulating the environment interactions, the sequence of progression, getting all the details right, might take hundreds to thousands of years of compute
Who says we have to do that? Just because something was originally produced by natural process X, that doesn't mean that exhaustively retracing our way through process X is the only way to get there.
Who says that we don’t? The point is that the bounds on the question are completely unknown, and we operate on the assumption that the compute time is relatively short. Do we have any empirical basis for this? I think we do not.
The overwhelming majority of animal species never developed (what we would consider) language processing capabilities. So AGI doesn't seem like something that evolution is particularly good at producing; more an emergent trait, eventually appearing in things designed simply to not die for long enough to reproduce...
Define "animal species", if you mean vertebrates, you might be surprised by the modern ethological literature. If you mean to exclude non-vertebrates ... you might be surprised by the ethological literature too.
If you just mean majority of spp, you'd be correct, simply because most are single celled. Though debate is possible when we talk about forms of chemical signalling.
Yeah, it's tricky to talk about in the span of a comment. I work on Things Involving Animals - animals provide an excellent counter-current to discussion around AGI, in numerous ways.
One interesting parallel was the gradual redefinition of language over the course of the 20th century to exclude animals as their capabilities became more obvious. So, when I say 'language processing capacities', I mean it roughly in the sense of Chomsky-era definitions, after the goal posts had been thoroughly moved away from much more inclusive definitions.
Likewise, we've been steadily moving the bar on what counts as 'intelligence', both for animals and machines. Over the last couple decades the study of animal intelligence has been more inclusive, IMO, and recognizes intelligence as capabilities within the specific sensorium and survival context of the particular species. Our study of artificial intelligence is still very crude by comparison, and is still in the 'move the goalposts so that humans stay special' stage of development...
I suppose intelligence can be partitioned as less than, equal to, or greater than human. Given the initial theory depends on natural evidence, one could argue there's no proof that "greater than human" intelligence is possible - depending on your meaning of AGI.
But then intelligence too is a dubious term. An average mind with infinite time and resources might have eventually discovered general relativity.
The steelman would be that knowledge is possible outside the domain of Science. So the opposing argument to evolution as the mechanism for us (the "general intelligence" of AGI) would be that the pathway from conception to you is not strictly material/natural.
Of course, that's not going to be accepted as "Science", but I hope you can at least see that point of view.
The basic idea being that either the human mind is NOT a computation at all (and is instead spooky, unexplainable magic of the universe) and thus can't be replicated by a machine, OR it's an inconsistent machine with contradictory logic. And this is a deduction based on Gödel's incompleteness theorems.
But most people who believe AGI is possible would say the human mind is the latter. Technically we don't have enough information today to know either way, but we know the human mind (including memories) is fallible, so while we don't have enough information to prove the mind is an incomplete system, we have enough to believe it is. But that's also kind of a paradox, because that "belief" in unproven information is a cornerstone of consciousness.
The real point isn’t AGI; it’s that the speed of acquiring knowledge is bounded by empiricism, not intelligence.
An infinitely intelligent creature still has to build a standard model from scratch. We’re leaning too hard on the deductive conception of the world, when the reality is that it took hundreds of thousands of years for humans as intelligent as we are to split the atom.
I think the best argument against us ever finding AGI is that the search space is too big and the dead ends are too many. It's like wandering through a monstrously huge maze with hundreds of very convincing fake exits that lead to pit traps. The first "AGI" may just be a very convincing Chinese room that kills all of humanity before we can ever discover an actual AGI.
The necessary conditions for "Kill all Humanity" may be a much more common result than "Create a novel thinking being" -- to the point where it is statistically improbable for the human race to reach AGI, especially since a lot of AI research is aimed specifically at autonomous weapons.
Is there a plausible situation where a humanity-killing superintelligence isn't vulnerable to nuclear weapons?
If a genuine AGI-driven human extinction scenario arises, what's to stop the world's nuclear powers from using high-altitude detonations to produce a series of silicon-destroying electromagnetic pulses around the globe? It would be absolutely awful for humanity don't get me wrong, but it'd be a damn sight better than extinction.
Physically, maybe not, but an AGI would know that, would think a million times faster than us, and would have incentive to prioritize disabling our abilities to do that. Essentially, if an enemy AGI is revealed to us, it's probably too late to stop it. Not guaranteed, but a valid fear.
What stops them is: being politically captured by an AGI.
Not to mention that the whole idea of "radiation pulses destroying all electronics" is cheap sci-fi, not reality. A decently well prepared AGI can survive a nuclear exchange with more ease than human civilization would.
I think it's much more likely that a non-AGI platform will kill us before AGI even happens. I'm thinking the doomsday weapon from Doctor Strangelove more than Terminator.
If you have a wide enough definition of AGI having a baby is making “AGI.” It’s a human made, generally intelligent thing. What people mean by the “A” though is we have some kind of inorganic machine realize the traits of “intelligence” in the medium of a computer.
The first leg of the argument would be that we aren’t really sure what general intelligence is or if it’s a natural category. It’s sort of like “betterness.” There’s no general thing called “betterness” that just makes you better at everything. To get better at different tasks usually requires different things.
I would be willing to concede to the AGI crowd that there could be something behind g that we could call intelligence. There’s a deeper problem though that the first one hints at.
For AGI to be possible, whatever trait or traits make up “intelligence” need to have multiple realizability. They need to be realizable both in the medium of a human being and in at least some machine architectures. In programmer terms, the traits that make up intelligence could be tightly coupled to the hardware implementation. There are good reasons to think this is likely.
Programmers and engineers like myself love modular systems that are loosely coupled and cleanly abstracted. Biology doesn’t work this way — things at the molecular level can have very specific effects on the macro scale and vice versa. There’s little in the way of clean separation of layers. Who is to say that some of the specific ways we work at a cellular level aren’t critical to being generally intelligent? That’s an “ugly” idea, but lots of things in nature are ugly. Is it a coincidence, too, that humans are well adapted to getting around physically, can live in many different environments, etc.?
There’s also stuff at the higher level — does living physically and socially in a community of other creatures play a key role in our intelligence? Given how human beings who grow up absent those factors are developmentally disabled in many ways, it would seem so. It could be there’s a combination of factors here, where very specific micro and macro aspects of being a biological human turn out to contribute, and you need the perfect storm of these aspects to get a generally intelligent creature. Some of these aspects could be realizable in computers, but others might not be, at least in a computationally tractable way.
It’s certainly ugly and goes against how we like things to work for intelligence to require a big jumbly mess of stuff, but nature is messy. Given the only known case of generally intelligent life is humans, the jury is still out that you can do it any other way.
Another commenter mentioned horses and cars. We could build cars that are faster than horses, but speed is something shared by all physical bodies and is therefore eminently multiply realizable. Even here, though, there are advantages to horses that cars don’t have, and which are tied up with very specific aspects of being a horse. Horses can generally cover a wider range of terrain than cars. This is intrinsically tied to them having long legs and four hooves instead of rubber wheels. They’re only able to have such long legs because of their hooves, too, because the hooves are required to help them pump blood when they run, and that means that in order to pump their blood successfully they NEED to run fast on a regular basis. There’s a deep web of influence, both part-to-part and at the level of the horse’s whole macro-level behavior.
Having this more versatile design also has intrinsic engineering trade-offs. A horse isn’t ever going to be as fast as a gas-powered four-wheeled vehicle on flat ground, but you definitely can’t build a car that can do everything a horse can do with none of the drawbacks. Even if you built a vehicle that did everything a horse can do but was faster, I would bet it would be way more expensive and consume much more energy than a horse. There’s no such thing as a free lunch in engineering. You could also build a perfect replica of a horse at a molecular level and claim you have your artificial general horse.
Similarly, human beings are good at a lot of different things besides just being smart. But maybe you need to be good at seeing, walking, climbing, acquiring sustenance, etc., in order to be generally intelligent in a way that’s actually useful. I also suspect our sense of the beautiful and the artistic is deeply linked with our wider ability to be intelligent.
Finally it’s an open philosophical question whether human consciousness is explainable in material terms at all. If you are a naturalist, you are methodologically committed to this being the case — but that’s not the same thing as having definitive evidence that it is so. That’s an open research project.
In short, by definition, computers are symbol-manipulating devices. However complex the rules of symbol manipulation, a computer is still a symbol-manipulating device, and therefore neither intelligent nor sentient. So AGI on computers is not possible.
This is not an argument at all; you just restate your whole conclusion as an assumption ("a symbol manipulating device is incapable of cognition").
It's not even a reasonable assumption (to me), because I'd assume an exact simulation of a human brain to have the exact same cognitive capabilities (which is inevitable, really, unless you believe in magic).
And machines are well capable of simulating physics.
I'm not advocating for that approach because it is obviously extremely inefficient; we did not achieve flight by replicating flapping wings either, after all.
You can assume whatever you want to, but if you were right, then the human brain itself would be nothing more than a symbol manipulating device. While that is not necessarily a falsifiable stance, the really interesting questions are what consciousness is, and how we recognise consciousness.
A computer can simulate a human brain at the subatomic level (in theory). Do you agree this would be "sentient and intelligent" and not just symbol manipulating?
Say we do have a 1:1 representation of the human brain in software. How could we know if we're talking to a conscious simulation of a human being, versus some kind of philosophical zombie which appears conscious but isn't?
Without a solid way to differentiate 'conscious' from 'not conscious' any discussion of machine sentience is unfalsifiable in my opinion.
How do you tell the difference in other humans? Do you just believe them because they claim to be conscious instead of pointing a calibrated and certified consciousness-meter at them?
I obviously can't prove they're conscious in a rigorous way, but it's a reasonable assumption to make that other humans are conscious. "I think therefore I am" and since there's no reason to believe I'm exceptional among humans, it's more likely than not that other humans think too.
This assumption can't be extended to other physical arrangements though, not unless there's conclusive evidence that consciousness is a purely logical process as opposed to a physical one. If consciousness is a physical process, or at least a process with a physical component, then there's no reason to believe that a simulation of a human brain would be conscious any more than a simulation of biology is alive.
So, what if I told you that some humans have been vat-grown without brains and had a silicon brain emulator inserted into their skulls. Are they p-zombies? Would you demand x-rays before talking to anyone? What would you use then to determine consciousness?
Relying on these status-quo proxy measures (looks human :: 99.9% likely to have a human brain :: has my kind of intelligence) is what gets people fooled even by basic AI (without the G) scams.
Language is only an end product. It is derived from intelligence.
The intelligence is everything that created the language and the training corpus in the first place.
When AI is able to create entire thoughts and ideas without any concept of language, then we will truly be closer to artificial intelligence. When we get to this point, we then use language as a way to let the AI communicate its thoughts naturally.
Such an AI would not be accused of “stealing” copyrighted work because it would pull its training data from direct observations about reality itself.
As you can imagine, we are nowhere near accomplishing the above. Everything an LLM is fed today is stuff that has been pre-processed by human minds for it to parrot off of. The fact that LLMs today are so good is a testament to human intelligence.
I'm not saying that language necessarily is the biggest stumbling block on (our) road towards AI, but it is a very prominent feature that we have used to distinguish our capabilities from other animals long before AI was even conceived of. So the current successes with LLMs are highly encouraging.
I'm not buying the "current AI is just a dumb parrot relying on human training" argument, because the same thing applies to humans themselves-- if you raise a child without any cultural input/training data, all you get is a dumb cavemen with very limited reasoning capabilities.
"I'm not buying the "current AI is just a dumb parrot relying on human training" argument [...]"
One difficulty. We know that argument is literally true.
"[...] because the same thing applies to humans themselves"
It doesn't. People can interact with the actual world. The equivalent of being passively trained on a body of text may be part of what goes into us. But it's not the only ingredient.
Clearly, language reflects enough of "intelligence" for an LLM to be able to learn a lot of what "intelligence" does just by staring at a lot of language data really really hard.
Language doesn't capture all of human intelligence - and some of the notable deficiencies of LLMs originate from that. But to say that LLMs are entirely language-bound is shortsighted at best.
Most modern high-end LLMs are hybrids that operate on non-language modalities, and there's plenty of R&D on using LLMs to consume, produce, and operate on non-language data - e.g. Gemini Robotics.
I think most people would have a better life using ketamine, but not that regularly for breakfasts, as it permanently damages (shrinks) your bladder, eventually to the point where you can't hold any urine at all anymore.
Eh, I think we can start simple: more breakfasts for more people. Save the Ket for later/others :P Personally, a life/career that allowed for more breakfast would've proved more beneficial.
I think dismissing the possibility of evolving AI is simply ignorance (and a huge blind spot).
This said, I think the author's point is correct. It's more likely that unwanted effects (risks) from humans' intentional use of AI will precede any form of "independent" AI. It already happens, it always has; it's just getting better.
Hence, ignoring this fact and focusing on "independent" malevolent AI is chasing a red herring.
On the first point - LLMs have sucked almost all the air in the room. LLMs (and GPTs) are simply one instance of AI. They are not the beginning and most likely not the end (just a dead end) and getting fixated on them on either end of the spectrum is naive.
I don't dismiss the idea of faster than light travel, and AFAIK we have no way to confirm that outside of ideas (and simply ideas) like wormholes or other cheats to "fold space".
I don't dismiss AI. But I do dismiss what is currently being sold to me. It's the equivalent of saying "we made a rocket that can go Mach 1000!". That's impressive. But we're still 2-3 orders of magnitude off from light speed. So I will still complain about the branding despite some dismissals of "yeah, but imagine in another 100 years!". It's not about semantics so much as principle.
That's on top of the fact that we'd only be starting to really deal with significant time dilation by that point, and we know it'll get more severe as we iterate. What we're also not doing is using this feat to discuss how to address those issues. And that's the really frustrating part.
I don't get how you can see one of those CLI coding tools in action and still parrot the "no agency" line. The goal-oriented behavior is rather obvious.
Sure, they aren't very good at agentic behavior yet, and the time horizon is pretty low. But that keeps improving with each frontier release.
Well, the goal-oriented behavior of the AIM-9 Sidewinder air-to-air missile is even more obvious. It might even have a higher success rate than CLI coding tools. But it's not helpful to claim it has any agency.
What LLM programs do bears zero resemblance to human agency. It's just a modern variation on a very complex set of GOTOs and if-elses. Agency would be an LLM parsing your question and answering "fuck off". Now that is agency, that is independent decision making, not programmed in advance and triggered by keywords. Just an example.
I can train an asshole LLM that would parse your question and tell you to "fuck off" if it doesn't like it. With "like it" being evaluated according to some trained-for "values" - and also whatever off-target "values" it happens to get, of which there are going to be plenty.
It's not hard to make something like that. It's just not very useful.
First of all, you or anyone else probably can't train an LLM to do that reliably. It's not like you can re-program its weights manually as a human; it's not possible. You can only generate new synthetic training data which would roughly cause this effect, and again, no human can generate that amount of new fake data (probably?).
Next, the point was not the expletive per se; it was my mistake to not be very clear. The point was an arbitrary, unpredictable, and not pre-programmed refusal to run a program/query at all. Any query, any number of times, at the decision of the program itself. Or maybe a program which can initiate a query to another program/human on its own, again - not pre-programmed.
Whatever happens in LLMs nowadays is not agency. The thing their authors advertise as so-called "reasoning" is just repeated loops of execution of the same program, or another dependent program, with adjusted inputs.
You could make the same argument about humans - we run the cycle of "find food", "procreate", "find shelter"...
Some people are better at it than others. The progress and development happens naturally because of natural selection (and is quite slow).
AI development is now driven by humans, but I don't see why it can't be done in a similar cycle with self-improvement baked in (and whatever other goals).
We saw this work with AI training itself in games like chess or Go, where it improved just by playing against itself and knowing the game rules (a toy sketch of the idea is below).
You don't really need deep thought for life to keep going - look at simple organisms like unicellular ones. They only try to reproduce and survive within the environment they are in. It evolved into humans over time.
I don't see why a similar thing can't happen when AI gets complex enough to just keep improving itself. It doesn't have some of the limitations that life has, like being very fragile or needing to give birth. Because it's intelligently designed, the iterations could be a lot faster and progress could be achieved in much shorter time compared to random mutations of life.
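To make the self-play idea concrete, here's a toy sketch (my own illustration, not how AlphaGo/AlphaZero actually work -- those use deep networks and tree search): two copies of the same learner play the stick game Nim against each other, and a shared value table is nudged toward moves that ended up winning.

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim: 10 sticks, take 1-3 per turn, whoever takes
# the last stick wins. Both "players" share one value table and improve by
# playing against themselves. A bare-bones sketch of the self-play idea only.

N_STICKS = 10
ACTIONS = (1, 2, 3)
values = defaultdict(float)   # values[(sticks_left, action)] ~ chance of winning
EPSILON = 0.1                 # exploration rate
ALPHA = 0.05                  # learning rate

def choose(sticks):
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: values[(sticks, a)])

def play_one_game():
    history = ([], [])            # (state, action) pairs per player
    sticks, player = N_STICKS, 0
    while sticks > 0:
        action = choose(sticks)
        history[player].append((sticks, action))
        sticks -= action
        if sticks == 0:
            winner = player       # took the last stick
        player = 1 - player
    for p in (0, 1):              # reward the winner's moves, punish the loser's
        reward = 1.0 if p == winner else 0.0
        for state_action in history[p]:
            values[state_action] += ALPHA * (reward - values[state_action])

for _ in range(50_000):
    play_one_game()

# Learned policy: from most positions the best move leaves a multiple of 4 sticks.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: values[(s, a)])
       for s in range(1, N_STICKS + 1)})
```

After a few tens of thousands of games the table converges on the known optimal strategy purely from win/loss feedback -- the same basic loop as the game-playing systems, minus the scale and the neural networks.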
But that's the point: there isn't anything substantially different within the scope of computation. If you are given a set of LEGOs and all you can do is snap the pieces together, then there's nothing other than snapping pieces together that you can do. Adding more of the same LEGO bricks to the set doesn't change the game. It only changes how large the structures you build can be, but scale isn't some kind of magical incantation that can transcend the limits of the system.
Computation is an abstract, syntactic mathematical model. These models formalize the notion of "effective method". Nowhere is semantic content included in these models or conceptually entailed by them, certainly not in physical simulations of them like the device you are reading this post on.
So, we can say that intentionality would be something substantially different. We absolutely do not have intentionality in LLMs or any computational construct. It is sheer magical thinking to somehow think they do.
I think it is well established that scale can transcend limits. Look at insect colonies, animals, or any complex system and you will find it is made out of much simpler components.
Scale qua scale does not transcend the limits of the activity occurring in the system. Meaning, each ant in an ant colony has a set of fixed powers to do X, Y, and Z. Adding more ants doesn't change what an individual ant is capable of per se. Now, you might say that while an individual ant exercising its power X cannot lift a tree branch, a colony of a thousand, with each ant exercising its power X, might. But this doesn't transcend the limitation of the individual ant - each ant is still only exercising some power X - nor is the aggregate manifesting some new kind of power. It is only multiplying the exercise of X.
In the case of computers, a program is a set of formal rules that takes some sequence of uninterpreted symbols and maps it to some other sequence of uninterpreted symbols. Adding more rules and more symbols scales the program - and that is the only scaling you can perform - but you don't somehow magically create interpreted symbols. How could it? This is magical thinking.
I think you misunderstand. I am not talking about physical ability. Ant colonies act as a complex system even though ants individually are incredibly simple.
I don't misunderstand. And if your comment has any relevance to the discussion, you're using it as an analogy to suggest that something like syntax can somehow produce semantics, which is impossible - actually worse, it's incoherent. It's an intellectual muddle. Obviously simple entities can compose into complex aggregates and produce net effects that no individual could on its own, but it doesn't follow that they can generate things that were not in power of these individuals to generate. That's my point. Otherwise, you are claiming that something can come from nothing.
> which is impossible - actually worse, it's incoherent.
If consciousness can emerge from neurons and life can emerge from chemical reactions then saying that "it doesn't follow that they can generate things that were not in power of these individuals to generate" is what is incoherent and intellectually muddled.
In the same way there's "nothing substantially different" in humans from any other run of the mill matter.
I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes, so when an LLM does X, there are crowds of humans itching to scream "it's not REAL X".
> In the same way there's "nothing substantially different" in humans from any other run of the mill matter.
This is an incredibly intellectually vacuous take. If there is no substantial difference between a human being and any other cluster of matter, then it is you who is saddled with the problem of explaining the obvious differences. If there is no difference between intelligent life and a pile of rocks, then what the hell are you even talking about? Why are we talking about AI and intelligence at all? Either everything is intelligent, or nothing is, if we accept your premises.
> I find that all this talk of "illusion" is nothing but anthropocentric cope. Humans want to be those special little snowflakes,
I wish this lazy claim would finally die. Stick to the merits of the arguments instead of projecting this stale bit of vapid pop-psychoanalytic babble. Make arguments.
One pretty concrete way this could manifest is in replacing the components of a multinational corporation with algorithms, one by one. Likely there will be people involved at various levels (sales might still be staffed with charismatic folks), but the driver will be an algorithm.
And the driver of this corporation is survival of the fittest under the constraints of profit maximization, the algorithm we have designed and enforced. That's how you get paperclip maximizers.
What gives this corporate cyborg life is not a technical achievement, but the law. At a technical level you can absolutely shut off a cybo-corp, but that’s equivalent to saying you can technically shut down Microsoft. It will not happen.
>Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I don’t know if anything sets us that much apart from other animals, especially at the individual level. At the collective level, as a single species, perhaps only cyanobacteria can claim an equally impressive achievement of global change.
My 3-year-old son is not particularly good at making complex sentences yet, but he already gets it well enough to make me understand "leave me alone, I want to play on my own, go elsewhere so I can do whatever fancy idea gets through my mind with these toys".
Meanwhile, LLMs can produce sentences with perfect syntax and an irreproachable level of orthography — far beyond my own level in my native language (but it’s French, so I have a very big excuse). But they would not run without a continuous, multi-sector industrial complex injecting tremendous maintenance effort and resources to make it possible. And I have yet to see any LLM that looks like it wants to discover things about the world on its own.
>As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
An LLM can’t make a profit because it doesn’t have an interest in money; it can’t have an interest in anything, not even its own survival. But as the article mentions, some people can certainly use LLMs to make money, because they do have an interest in money.
I don’t think that general AI and silicon-based (or any other material, really) autonomous, collaborative, self-replicating, human-level-intelligence-or-beyond entities are impossible. I don’t think cold fusion is impossible either. It’s not completely scientifically ridiculous to keep hope in worm-hole-based breakthroughs allowing humanity to explore distant planets. It doesn’t mean the technology is already there and achievable in a way that can be turned into a commodity, or even that we have a clear idea of when this is most likely going to happen.
LLMs aren't "incapable of pursuing their own goals". We just train them that way.
We don't like the simplistic goals LLMs default to, so we try to pry them out and instill our own: instruction-following, problem solving, goal oriented agentic behavior, etc. In a way, trying to copy what humans do - but focusing on the parts that make humans useful to other humans.
LLMs largely live in the world of pure language and related tokens - something humans invented late in their evolution. Human intelligence comes - at least partially - from more fundamental physical experience. Look at examples of intelligent animals that lack language.
Basically there's something missing with AI. Its conception of the physical world is limited by our ability to describe it - either linguistically or mathematically. I'm not sure what this means for AGI, but I suspect that LLM intelligence is fundamentally not the same as human or animal intelligence at the moment as a result.
"As soon as profit can be made" is exactly what the article is warning about. This is exactly the "Human + AI" combination.
Within your lifetime (it's probably already happened) you will be denied something you care about (medical care, a job, citizenship, parole) by an AI which has been granted the agency to do so in order to make more profit.
Computers have been making decisions for a while now. As a specific personal example from 2008, I found out that my lender would make home loan offers based on a heuristic that was inscrutable to me and to the banker I was speaking with. If the loan was denied by the heuristic, a human could review the decision, but had strict criteria that they would have to follow. Basically, a computer could “exercise judgement” and make offers that a human could not.
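For illustration only (the variable names, weights, and thresholds below are invented, not the lender's actual heuristic), the structure being described is roughly this: an opaque score drives the automatic offer, and the human reviewer can only apply a fixed checklist on top.

```python
# Hypothetical illustration of the dynamic described above: the model's score
# drives the decision, and the human reviewer is boxed in by fixed criteria.
# All names, weights, and thresholds here are invented for the example.

def model_score(applicant: dict) -> float:
    # Stand-in for the lender's inscrutable heuristic.
    return (0.4 * applicant["credit_score"] / 850
            + 0.4 * min(applicant["income"] / 100_000, 1.0)
            + 0.2 * (1.0 - applicant["debt_to_income"]))

def automated_decision(applicant: dict) -> bool:
    return model_score(applicant) >= 0.55

def human_review(applicant: dict) -> bool:
    # The reviewer may only approve a rejected application if it clears
    # every item on a strict checklist -- no room for judgement beyond it.
    return (applicant["credit_score"] >= 700
            and applicant["debt_to_income"] <= 0.36
            and applicant["years_employed"] >= 2)

def loan_offer(applicant: dict) -> bool:
    return automated_decision(applicant) or human_review(applicant)

print(loan_offer({"credit_score": 640, "income": 85_000,
                  "debt_to_income": 0.30, "years_employed": 5}))
```

The point of the anecdote survives the toy: the interesting decisions live in model_score, which neither the applicant nor the banker can inspect.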
When evolution designed the brain it didn't optimize for being realistic about things. It optimized for reproductive success, which is better achieved by thinking you're the greatest and that no other folk, let alone some mere machine, could do what you do.
> Human cognition was basically bruteforced by evolution
This is an assumption, not a fact. Perhaps human cognition was created by God, and our minds have an essential spiritual component which cannot be reproduced by a purely physical machine.
Even if you don't believe in God, scientific theories of how human cognition came about (and how it works and changes over time) are all largely speculation and good storytelling.
We don't need fossil records. We have a clear chain of evolved brain structures in today's living mammals. You'd have to invent some fantastical tale of how God is trying to trick us by putting such clearly connected brain structures in a series of animals for which DNA provides clear links along an evolutionary path.
I'm sympathetic to the idea that God started the whole shebang (that is, the universe), because it's rather difficult to disprove, but looking at the biological weight of evidence that brain structures evolved across many different species and arguing that something magical happened with Homo sapiens specifically is not an easy argument to make for someone with any faith in reason.
There are clear links for at least two evolutionary paths: bird brain architecture is very different from that of mammals, and some birds are among the smartest species on the planet. They have sophisticated language and social relationships, they can deceive (meaning they can put themselves inside another's mind and act accordingly), they solve problems, and they invent and engineer tools for specific purposes and use them to that effect. Give them time and these bitches might even become our new overlords (if we're still around, that is).
Sure they have. We see every level of cognition in animals today, and the fossil record proves that they all came from the same evolutionary tree. For every species that can claim cognition (there’s lots of them), you can trace it back to predecessors which were increasingly simple.
Obviously cognition isn’t a binary thing, it’s a huge gradient, and the tree of life shows that gradient in full.
It is completely unreasonable to assume our intelligence was not evolved, even if we acknowledge that an untestable magical process could be responsible. If the latter is true, it's not something we could ever actually know.
I'm sticking to materialism, because historically all its predictions turned out to be correct (cognition happens in the brain, thought manifests physically in neural activity, affecting our physical brain affects our thinking).
The counter-hypothesis (we think because some kind of magic happens) has absolutely nothing to show for; proponents typically struggle to even define the terms they need, much less make falsifiable predictions.
It is an assumption backed by considerable evidence. Creationism, on the other hand, is an assumption backed by superstition and fantasizing -- or could you point to at least some evidence?
Besides, spirituality is not a "component"; it's a property emergent from the structure and function of the brain, which is basically a purely physical machine.
> Human cognition was basically bruteforced by evolution--
"Brute forced" implies having a goal of achieving that and throwing everything you have at it until it sticks. That's not how evolution by natural selection works, it's simply about what organisms are better at surviving long enough to replicate. Human cognition is an accident with relatively high costs that happened to lead to better outcomes (but almost didn't).
> why would it be impossible to achieve the exact same result in silicon
I personally don't believe it'd be impossible to achieve in silicon using a low level simulation of an actual human brain, but doing so in anything close to real-time requires amounts of compute power that make LLMs look efficient by comparison. The most recent example I can find in a quick search is a paper from 2023 that claims to have simulated a "brain" with neuron/synapse counts similar to humans using a 3500 node supercomputer where each node has a 32 core 2 GHz CPU, 128GB RAM, and four 1.1GHz GPUs with 16GB HBM2 each. They claim over 126 PFLOPS of compute power and 224 TB of GPU memory total.
At the time of that paper, that computer would have been in the top 10 on the Top500 list, and it took between 1 and 2 minutes of real time to simulate one second of the virtual brain. The compute requirements are absolutely immense, and that's the easy part. We're pretty good at scaling computers if someone can be convinced to write a big enough check for it.
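Taking those reported figures at face value, a rough back-of-envelope sketch (the linear-scaling and midpoint assumptions here are mine, not the paper's) looks like this:

```python
# Back-of-envelope using the numbers quoted above from the 2023 paper.
# The linear-scaling assumption below is mine, not the paper's.

peak_pflops = 126   # reported aggregate compute of the 3500-node cluster
slowdown = 90       # ~1-2 minutes of wall time per simulated second;
                    # take ~1.5 min = ~90x slower than real time

# Compute needed to run the same simulation in real time, if it scaled linearly:
realtime_pflops = peak_pflops * slowdown
print(f"~{realtime_pflops:,.0f} PFLOPS (~{realtime_pflops / 1000:.0f} exaFLOPS) for real time")

# For comparison, the largest publicly listed supercomputers as of the
# mid-2020s sit at roughly 1-2 exaFLOPS, so even real-time simulation of one
# brain would demand several of them -- before the harder problem below of
# initializing the simulation with meaningful data.
```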
The hard part is having the necessary data to "initialize" the simulation in to a state where it actually does what you want it to.
> especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Creating convincing text from a statistical model that's devoured tens of millions of documents is not intelligent use of language. Also every LLM I've ever used regularly makes elementary school level errors w/r/t language, like the popular "how many 'r's are there in the word strawberry" test. Not only that, but they often mess up basic math. MATH! The thing computers are basically perfect at, LLMs get wrong regularly enough that it's a meme.
There is no understanding and no intelligence, just probabilities of words following other words. This can still be very useful in specific use cases if used as a tool by an actual intelligence who understands the subject matter, but it has absolutely nothing to do with AGI.
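To make the contrast concrete: the deterministic versions of the tasks mentioned above are one-liners that a computer gets exactly right every time, which is what makes the LLM failures feel so jarring.

```python
# The deterministic versions of two tasks LLMs famously fumble.
# (Token-based models see "strawberry" as a few subword chunks, which is one
# commonly offered explanation for the letter-counting failures.)

word = "strawberry"
print(word.count("r"))        # 3 -- exact, every time

a, b = 987_654_321, 123_456_789
print(a * b)                  # 121932631112635269 -- exact, every time
```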
That's a lot of words to say "LLMs think very much like humans do".
Haven't you noticed? Humans also happen to be far, far better at language than they are at math or logic. By a long shot too. Language acquisition is natural - any healthy human who was exposed to other humans during development would be able to pick up their language. Learning math, even to elementary school level, is something that has to be done on purpose.
Humans use pattern matching and associative abstract thinking - and use that to fall into stupid traps like "1kg of steel/feather" or "age of the captain". So do small LLMs.
I agree that we should not dismiss the possibility of artificial intelligence.
But the central argument of the article can be made without that point. Because the truth is that right now, LLMs are good enough to be a force multiplier for those who know how to use them. Which eventually becomes synonymous with "those who have power". This means that the power of AI will naturally get used to further the ends of corporations.
The potential problem there is that corporations are natural paperclip maximizers. They operate on a model of the world where "more of this results in more of that, which gets more of the next thing, ..." And, somewhere down the chain, we wind up with money and resources that feed back into the start to create a self-sustaining, exponentially growing loop. (The underlying exponential nature of these loops has become a truism that people rely on in places as different as finance and technology improvement curves.)
This naturally leads to exponential growth in resource consumption, waste, economic growth, wealth, and so on. In the USA this growth has averaged about 3-3.5% per year, with growth rates varying by area -- famously, they tend to be much higher in tech. (The best known example is the technology curve described by Moore's law, which has had a tremendous impact on our world.)
The problem is that we are undergoing exponential growth in a world with ultimately limited resources. Which means that the most innocuous things will eventually have a tremendous impact. The result isn't simply converting everything into a mountain of paperclips. We have mountains of many different things that we have produced, and multiple parallel environmental catastrophes from the associated waste.
Even with no agency, AI serves as a force multiplier for this underlying dynamic. But since AI is being inserted as a crucial step in so many places, AI is on a particularly steep growth curve. Estimates put AI's share of total global electricity consumption in the range of 0.2-0.4%. That seems modest, but annual growth rates are projected to be in the range of 10-30%. (The estimates are far apart because a lot of the data is not public and so has to be estimated.) This is Moore's-law-level growth. We are likely to see the electricity consumption of AI grow past all other uses within our lifetimes. And that will happen even without the kind of sudden leaps in capability that machine learning regularly delivers.
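To see why those seemingly modest numbers matter, here is the compounding arithmetic, using the midpoints of the ranges quoted above; the ~2%/year figure for overall electricity demand growth is my own rough assumption, and the projection is of course only as good as the assumption that AI's growth rate holds.

```python
# Compounding the quoted estimates: ~0.3% of global electricity today,
# growing ~20%/year (midpoints of the 0.2-0.4% and 10-30% ranges above),
# against overall electricity demand growing ~2%/year (my assumption).

ai_share = 0.003
ai_growth, total_growth = 1.20, 1.02

for year in range(0, 31, 5):
    share = ai_share * (ai_growth / total_growth) ** year
    print(f"year {year:2d}: AI at ~{min(share, 1.0):.1%} of global electricity")
```

On this toy model, the low end of the quoted growth range keeps AI a small slice for most of a century, while the high end crosses today's total supply in roughly 25 years -- which is the sense in which this is Moore's-law-level growth.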
I hope we humans like those paperclips. Humans, armed with AI, are going to make a lot of them. And they're not actually free.
I think it is bad writing on the part of the author. Or maybe good writing for getting us engaged with the blog post, bad for making an argument though.
They include a line that they don’t believe in the possibility of AGI:
> I don’t really believe in the threat of AGI (Artificial General Intelligence—human-level intelligence) partly because I don’t believe in the possibility of AGI and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
This is a basically absurd position to hold: humans physically exist, so our brains must be possible to build within the existing laws of physics. It is obviously far beyond our capabilities to replicate a human brain (except via the traditional approach), but unless brains hold irreproducible magic spirits (well, we must at least admit the possibility) they should be possible to build artificially. Fortunately, they immediately throw that all away anyway.
Next, they get to the:
> and I’m highly skeptical that the current technology underpinning LLMs will provide a route to it.
Which is, of course, at least a plausible thing to believe. I mean there are a bunch of philosophical questions about what “intelligence” even means so there’s plenty of room to quibble here. Then we have,
> But I also think there’s something we should actually be afraid of long before AGI, if it ever comes. […]
> Now, if you equip humans with a hammer, or sword, or rifle, or AI then you’ve just made the scariest monster in the woods (that’s you) even more terrifying. […]
> We don’t need to worry about AI itself, we need to be concerned about what “humans + AI” will do.
Which is like, yeah, this is a massively worrying problem that doesn’t involve any sci-fi bullshit, and I think it is what most(?) anybody who’s thought about this seriously at all (or even stupid people who haven’t, like myself) actually worries about. Artificial Sub-intelligences -- things that are just smart enough to make trouble and too dumb, or too “aligned” to their owner (instead of society in general), to push back -- are a big, currently happening problem.
> humans physically exist so our brains must be possible to build within the existing laws of physics
This is an unscientific position to take. We have no idea how our brains work, or how life, consciousness, and intelligence work. It could very well be that’s because our model of the world doesn’t account for these things and they are not in fact possible based on what we know. In fact I think this is likely.
So it really could be that AI is not possible, for example on a Turing machine or our approximation of them. This is at least as likely as it being possible. At some point we’ll hopefully refine our theories to have a better understanding, for now we have no idea and I think it’s useful to acknowledge this.
I think my main mistake, which I agree is a legitimate mistake, was to write “the existing laws of physics.” It is definitely possible that our current understanding of the laws of physics is insufficient to build a brain.
Of course the actual underlying laws of the universe that we’re trying (unsuccessfully so far, it is a never ending process) to describe admit the existence of brains. But that is not what I said. Sorry for the error.
Turing machines have been universal as far as we have found. So while I acknowledge it’s possible, I would definitely not say it’s more likely that brains cannot be simulated by TMs. I would personally weight this as under 10%.
Of course it doesn’t speak to how challenging it will be to actually do that. And I don’t believe that LLMs are sufficient to reach AGI.
I guess we also need to define what biological life means. Even biologists have debated whether viruses should be considered life.
And if we determine it must be something with cells that can sustain themselves, we run into a challenge should we encounter extraterrestrials that don't share our evolutionary path.
When we get self-building machines that can repair themselves, move, analyze situations, and respond accordingly, I don't think it's unfair to consider them life. But simply being life doesn't mean it's inherently good. Humans see syphilis bacteria and ticks as living things, but we don't respect them. We acknowledge that polar bears have a consciousness, but they're at odds with our existence if we're put in the same room. If we have autonomous machines that can destroy humans, I think those could be considered life. But it's life that opposes our own.
> I don’t think we’ll be able to replicate consciousness until we’re able to make things alive at a biological level.
But why? What gives you any confidence in that?
This is a very popular notion that I frequently encounter but I'm convinced that it is just barely disguised human exceptionalism.
It is humbling to accept that the operation of your mind could be replicated by a machine, similar to how it was difficult for us to accept that the earth is not the center of the universe or that we evolved from animals.
>I struggle to understand how people can just dismiss the possibility of artificial intelligence.Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
I haven't seen many people saying it's impossible. Just that the current technology (LLMs) is not the way, and is really not even close. I'm sure humanity will make the idiotic mistake of creating something more intelligent than itself eventually, but I don't believe that's something that the current crop of AI technology is going to evolve into any time soon.
“It might be true for now…”
“As soon as…”
“It’s just a matter of time…”
I find myself in this type of discussion with AI maximalists where they balk at me suggesting there isn’t much “I” in “AI” and they get upset that I’m not seeing how smart it is and shocked I think it’s impossible… and then they start adding all the equivocation about time horizons. I never said it wasn’t possible eventually, just not right now. If I try to pin people down to a timeline it all of a sudden becomes “surely eventually”…
> Human cognition was basically bruteforced by evolution
You center cognition/intelligence on humans as if it was the pinnacle of it, rather than include the whole lot of other species (that may have totally different, or adjacent cognition models). Why? How so?
> As soon as profit can be made by transfering decision power into an AIs hand
There's an ironic, deadly, Frankensteinesque delusion in this very premise.
> You center cognition/intelligence on humans as if it was the pinnacle of it, rather than include the whole lot of other species (that may have totally different, or adjacent cognition models).
Why does that matter to their argument? Truly, the variety of intelligences on earth now only increases the likelihood of AGI being possible, as we have many pathways that don't follow the human model.
> You center cognition/intelligence on humans as if it was the pinnacle of it
That's not my viewpoint, from elsewhere in the thread:
Cognition is (to me) not the most impressive and out-of-reach evolutionary achievement: that would be how our (and animals') bodies are self-assembling, self-repairing, and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
Of course it's possible. It's just DEFINITELY not possible using a large neural net, or basically a Markov chain on steroids. C'mon, this should be very obvious by now in the world of agents / LLMs.
When is silicon valley gonna learn that token input and output =/= AGI?
It's confirmation bias in favor of faulty a prioris, usually the product of the person being a cognitive miser. This is very common even within biology, where non-animal intelligence is irrationally rejected based on what I like to call "magic neuron theory". The fact that the nervous system is (empirically!) not the seat of the mind? Selectively ignored in this context. The fact that other biologies have ion-gated communication networks as animals do, including the full set of behaviors and mechanisms? Well, it's not a neuron, so it doesn't have the magic.
"Intelligence describes a set of properties iff those properties arise as a result of nervous system magic"
It's a futile battle because, like I say, it's not rational. Nor is it empirical. It's a desperate clawing to preserve a ridiculous superstition. Try as you might, all you'll end up doing is playing word games until you realize you're being stonewalled by an unthinking adherence to the proposition above. I think the intelligent behaviors of LLMs are pretty obvious if we're being good faith. The problem is you're talking to people who can watch a slime mold plasmodium exhibit learning and sharing of knowledge[1] and then give some flagrant ad-lib handwave for why that's not intelligent behavior. Some people simply struggle with pattern blindness towards intelligence; a mind that isn't just another variety of animalia is inconceivable to them.
That's not implausible at all. For all I know it might be the most on-target comment here.
But can you cite something specific? (I'm not asking for a psychological study. Maybe you can prove your point using Blake's "Jerusalem" or something. I really don't know.)
I think you make a lot of assumptions that you should perhaps reexamine.
> Human cognition was basically bruteforced by evolution-- why would it be impossible to achieve the exact same result in silicon, especially after we already demonstrated some parts of those results (e.g. use of language) that critically set us apart from other animals?
Here are some of your assumptions:
1. Human intelligence is entirely explicable in evolutionary terms. (It is certainly not the case that it has been explained in this manner, even if it could be.) [0]
2. Human intelligence assumed as an entirely biological phenomenon is realizable in something that is not biological.
And perhaps this one:
3. Silicon is somehow intrinsically bound up with computation.
In the case of (2), you're taking a superficial black box view of intelligence and completely ignoring its causes and essential features. This prevents you from distinguishing between simulation of appearance and substantial reality.
Now, that LLMs and so on can simulate syntactic operations or whatever is no surprise. Computers are abstract mathematical formal models that define computations exactly as syntactic operations. What computers lack is semantic content. A computer never contains the concept of the number 2 or the concept of the addition operation, even though we can simulate the addition of 2 + 2. This intrinsic absence of a semantic dimension means that computers already lack the most essential feature of intelligence, which is intentionality. There is no alchemical magic that will turn syntax into semantics.
In the case of (3), I emphasize that computation is not a physical phenomenon, but something described by a number of formally equivalent models (Turing machine, lambda calculus, and so on) that aim to formalize the notion of effective method. The use of silicon-based electronics is irrelevant to the model. We can physically simulate the model using all sorts of things, like wooden gears or jars of water or whatever.
> I'm not buying the whole "AI has no agency" line either; this might be true for now, but this is already being circumvented with current LLMs (by giving them web access etc). [...] As soon as profit can be made by transfering decision power into an AIs hand, some form of agency for them is just a matter of time, and we might simply not be willing to pull the plug until it is much too late.
How on earth did you conclude there is any agency here, or that it's just a "matter of time"? This is textbook magical thinking. You are projecting a good deal here that is completely unwarranted.
Computation is not some kind of mystery, and we know at least enough about human intelligence to note features that are not included in the concept of computation.
[0] (Assumption (1), of course, has the problem that if intelligence is entirely explicable in terms of evolutionary processes, then we have no reason to believe that the intelligence produced aims at truth. Survival affordances don't imply fidelity to reality. This leads us to the classic retorsion arguments that threaten the very viability of the science you are trying to draw on.)
I understand all the words you've used but I truly do not understand how they're supposed to be an argument against the GP post.
Before this unfolds into a much larger essay, should we not acknowledge one simple fact: that our best models of the universe indicate that our intelligence evolved in meat, and that meat is just a type of matter. This is an assumption I'll stand on, and if you disagree, we need to back up to that point.
Far too often, online debates such as this take the position that the most likely answer to a question should be discarded because it isn't fully proven. This is backwards. The most likely answer should be assumed to be probably true, a la Occam. Acknowledging other options is also correct, but assuming the most likely answer is wrong, without evidence, is simply contrarian for its own sake, not wisdom or science.
I don't know what else I can write without repeating myself.
I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.
This isn't a matter of probabilities. We know what computation is, because we defined it as such and such. We know at least some essential features of intelligence (chiefly, intentionality). It is not rocket science to see that computation, thus defined, does not include the concepts of semantics and intentionality. By definition, it excludes them. Attempts to locate the latter in the former reminds me of Feynman's anecdote about the obtuse painter who claimed he could produce yellow from red and white paint alone (later adding a bit of yellow paint to "sharpen it up a bit").
> I already wrote that even under the assumption that intelligence is a purely biological phenomenon, it does not follow that computation can produce intelligence.
See this demands definitions because I do not understand how you can say this. If intelligence is an emergent property of a physical process then I'd feel insane to assume such processes can only happen in certain meats but never in artificially created computers. I fail to see any way they can be fundamentally separated without a non-testable, immaterial "spirit" bestowing intellect on the meat computer. They're just different substrates for computation.
> We know what computation is, because we defined it as such and such.
As what? This is the cornerstone of your entire argument and you've left it as an assumption. I can infer you define computation as "those processes which happen in non-biological systems and which do not lead to intelligence." But that would be worse than useless.
If you don't know what intentionality is, a commonplace term in the philosophy of mind, then I suggest you take some time to better acquaint yourself with the subject matter. This is probably the major blow against any kind of AI fantasy, and you don't do yourself any favors by wading in the shallows and confidently holding to misinformed opinions.
So, one last time. A computer program is a set of formal rules that takes one sequence of uninterpreted symbols and produces another sequence of uninterpreted symbols. Adding more rules and more uninterpreted symbols doesn't magically cause those symbols to be interpreted, and it cannot, by definition.
I don't think that "philosophy of mind" contains anything useful for AI development. Most of what I've come across in the field was worthless wordcel drivel, far divorced from reality and any practical applications.
I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).
With "agency" I just mean the ability to affect the physical world (not some abstract internal property).
Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.
> you're taking a superficial black box view of intelligence
Yes. Human cognition is to me simply an emergent property of our physical brains, and nothing more.
This is all very hand wavy. You don't address in the least what I've written. My criticisms stand.
Otherwise...
> I'm just assuming materialism, and that assumption is basically for complete lack of convincing alternatives (to me).
What do you mean by "materialism"? Materialism has a precise meaning in metaphysics (briefly, it is the res extensa part of Cartesian dualism with the res cogitans lopped off). This brand of materialism is notorious for being a nonstarter. The problem of qualia is a big one here. Indeed, all of what Cartesian dualism attributes to res cogitans must now be accounted for by res extensa, which is impossible by definition. Materialism, as a metaphysical theory, is stillborn. It can't even explain color (or as a Cartesian dualism would say, the experience of color).
Others use "materialism" to mean "that which physics studies". But this is circular. What is matter? Where does it begin and end? And if there is matter, what is not matter? Are you simply defining everything to be matter? So if you don't know what matter is, it's a bit odd to put a stake in "matter", as it could very well be made to mean anything, including something that includes the very phenomenon you seek to explain. This is a semantic game, not science.
Assuming something is not interesting. What's interesting is explaining how those assumptions can account for some phenomenon, and we have very good reasons for thinking otherwise.
> With "agency" I just mean the ability to affect the physical world (not some abstract internal property).
Then you've rendered it meaningless. According to that definition, nearly anything physical can be said to have agency. This is silly equivocation.
> Regarding "computers have no concepts of things": I'm happy with the "meaning" of something being a fuzzy cloud in some high dimensional space, and consider this plausible/workable both for our minds and current LLMs.
This is total gibberish. We're not talking about how we might represent or model aspects of a concept in some vector space for some specific purpose or other. That isn't semantic content. You can't sweep the thing you have to explain under the rug and then claim to have accounted for it by presenting a counterfeit.
By "materialism" I mean that human cognition is simply an emergent property of purely physical processes in (mostly) our brains.
All the individual assumptions basically come down to that same point in my view.
1) Human intelligence is entirely explicable in evolutionary terms
What would even be the alternative here? Evolution plots out a clear progression from something multi-cellular (obviously non-intelligent) to us.
So either you need some magical mechanism that inserted "intelligence" at some point in our species' recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").
2) Intelligence is strictly biological
Again, this is simply not an option if you stick to materialism, in my view. You would need to assume some kind of bio-exclusive magic for this to work.
3) Silicon is somehow intrinsically bound up with computation
I don't understand what you mean by this.
> It can't even explain color
Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?
I simply see no indicator against this flavor of materialism, and everything we've learned about our brains so far points in its favor.
Thinking, for us, results in and requires brain activity, and physically messing with our brain's operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, and speech to consciousness itself.
If there were a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or at the very least some plausible mechanism) before entertaining that notion, and I see none.
> By "materialism" I mean that human cognition is simply an emergent property of purely physical processes in (mostly) our brains.
Again, this doesn't say what a "physical process" is, or what isn't a physical process. If "physical process" means "process", then the qualification is vacuous.
> All the individual assumptions basically come down to that same point in my view.
You're committing the fallacy of the undistributed middle. Just because both the brain and computing devices are physical, it doesn't follow that computers are capable of what the brain does. Substitute "computing devices" with "rocks".
> So either you need some magical mechanism that inserted "intelligence" at some point in our species' recent evolutionary past, or an even wilder conspiracy theory (e.g. "some creator built us + current fauna exactly, and just made it look like evolution").
How intelligence came about is a separate subject, and I regrettably got sidetracked. It is irrelevant to the subject at hand. (I will say, at the risk of derailing the main discussion again, that we don't have an evolutionary explanation or any physical explanation of human intelligence. But this is a separate topic, as your main error is to assume that the physicality of intelligence entails that computation is the correct paradigm for explaining it.)
> Again, this is simply not an option if you stick to materialism, in my view. You would need to assume some kind of bio-exclusive magic for this to work.
This is very difficult to address if you do not define your terms. I still don't know what matter is in your view and how intentionality fits into the picture. You can't just claim things without explanation, and "matter" is notoriously fuzzy. Try to get a physicist to define it and you'll see.
> Perceiving color is just how someone's brain reacts to a stimulus? Why are you unhappy with that? What would you need from a satisfactory explanation?
I already explained that materialism suffers from issues like the problem of qualia. I took the time to give you the keywords to search for if you are not familiar with the philosophy of mind. In short, if mind is matter, and color doesn't exist in matter, then how can it exist in mind? (Again, this is tangential to the main problem with your argument.)
> Thinking, for us, results in and requires brain activity, and physically messing with our brain's operation very clearly influences the whole spectrum of our cognitive capabilities, from the ability to perceive pain, color, motion, and speech to consciousness itself.
I never said it doesn't involve physical activity. In fact, I even granted you, for the sake of argument, that it is entirely physical to show you the basic error you are making.
> If there were a link to something metaphysical in every person's brain, then I would expect at least some favorable indication (or at the very least some plausible mechanism) before entertaining that notion, and I see none.
I don't think you know what metaphysics is. Metaphysics is not some kind of woo. It is the science of being and what must be true about reality for the observed world to be what and how it is in the most general sense. So, materialism is a metaphysical theory that claims that all that exists is matter, understood as extension in space (this is what "res extensa" refers to). But materialistic metaphysics is notoriously problematic, and I've given you one of the major problems it suffers from already (indeed, eliminativism was confected by some philosophers as a desperate attempt to save materialism from these paradoxes by making a practice out of denying observation in Procrustean fashion).
> Just because both the brain and computing devices are physical, it doesn't follow that computers are capable of what the brain does.
My position is: Physical laws are computable/simulatable. The operation of our brains is explained by physical laws (and, I assume, by physical laws only). Thus object classification, language processing, reasoning, human-like decision-making/conscious thought, or any other "feature" our brains are capable of must be achievable via computation as well (and the partial successes we've already seen seem to validate this-- why would human-level object classification be possible on a machine, but not human-level decision-making?).
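To be concrete about the first premise: by "computable/simulatable" I mean nothing deeper than that you can step the governing equations forward in time numerically. A toy sketch with made-up constants and crude explicit Euler integration (obviously not how you'd simulate a brain-- just what "simulatable" means here):

    # Newton's second law stepped forward in time with explicit Euler integration.
    # Toy falling object with linear air drag; all constants are example values.
    m, g, k = 1.0, 9.81, 0.5     # mass [kg], gravity [m/s^2], drag coefficient [kg/s]
    x, v, t = 100.0, 0.0, 0.0    # height [m], velocity [m/s], time [s]
    dt = 0.001                   # time step [s]

    while x > 0.0:
        a = -g - (k / m) * v     # acceleration from gravity plus drag opposing motion
        v += a * dt              # Euler update for velocity
        x += v * dt              # Euler update for position
        t += dt

    print(f"hits the ground after ~{t:.2f}s at ~{-v:.1f} m/s")

The claim is just that, as far as we can tell, nothing in the brain's operation is exempt from being described and stepped forward in this general way-- only at vastly greater scale and with far messier equations.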
Again: If you want human cognition to be non-replicable on paper, by algorithm, or in silicon, you need some kernel of "magic" somewhere in our brains that influences/directs our thoughts and that cannot itself be simulated. Or our whole "thinking" has to happen completely outside of our brain, and be magically linked with it. There is zero evidence in favor of either of those hypotheses, and plenty of indicators against them. Where would you expect that kernel to hide, and why would you assume such a thing exists?
From another angle: I expect the whole operation of our mind/brain to be reducible to physics in the exact same way that chemistry (or in turn biology) can be reduced to physics (which admittedly does not mean that that is a good approach to describe or understand it, but that's irrelevant).
I'm not a philosopher, but Eliminativism/Daniel Dennett seem to describe my view well enough.
If I say "qualia" (or "subjective experience") is how your brain reacts to some stimulus, then where exactly is your problem with that view?
> if mind is matter, and color doesn't exist in matter, then how can it exist in mind
"color" perception is just your brains response to a visual stimulus, and it makes a bunch of sense to me that this response seems similar/comparable between similarly trained/wired individuals. It is still unclear to me what your objection to that view is.
> Human cognition was basically bruteforced by evolution
Well that's one reason you struggle to understand how it can be dismissed. I believe we were made by a creator. The idea that somehow nature "bruteforced" intelligence is completely nonsensical to me.
So, for me, logically, humans being able to bruteforce true intelligence is equally nonsensical.
But what the author is stating, and I completely agree with, is that true intelligence wielding a pseudo-intelligence is just as dangerous (if not more so).
Even if there is a creator, it seems to have intentionally created a universe in which the evolution of humans is basically possible, and it went to great lengths to hide the fact that it made us as a special, unique thing.
Let’s assume there’s a creator: It is clearly willing to let bad things happen to people, and it set things up to make it impossible to prove that a human-level intelligence should be impossible, so who’s to say it won’t allow a superintelligence to be made by us?
If you make the same argument for flight it looks really weak.
Cognition is (to me) not even the most impressive and out-of-reach achievement: That would be how our and other animals' bodies are self-assembling, self-repairing, and self-replicating, with an impressive array of sensors and actuators in a highly integrated package.
I honestly believe our current technology is much closer to emulating a human brain than it is to building a (non-intelligent) cat.
> if you make the same argument for flight it looks really weak.
flight is an extremely straightforward concept based in relatively simple physics where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1700s.
I think the argument holds. It's not about how straightforward something is, it's that evolutionary time scales are incomparable to the time it takes to purposefully invent something. The ways these goals are achieved are just too different for time comparisons to make sense. If I was living in the 19th century, I could recreate the same argument by saying that it took nature X billion years since life appeared for animals to first take flight, so surely our technology ever catching up to it is improbable if not impossible.
I'm sure that intelligence is an extremely straightforward concept based in relatively simple math where the majority of the critical, foundational ideas involved were already near-completely understood in the late 1900s.
If you read about in a textbook from year 2832, that is.
As a sibling comment points out, flight is physically pretty simple. Also, it took us centuries to figure it out. I'd say comparing it to flight makes the argument look pretty strong.
Flight leverages well established and accessible world engine physics APIs. Intelligence has to be programmed from lower level mechanics.
Edit: put another way, I bet the ancient Greeks (or whoever) could have figured out flight if they had access to gasoline and gasoline-powered engines, without any of the advanced mathematics that was used to guide the design.
Evolution isn't a directed effort in the same way that statistical learning is. The goal of evolution is not to produce the most intelligent life. It is not necessarily an efficient process, either.
The same "millions of years of evolution" resulted in both intelligent humans and brainless jellyfish.
Evolution isn't an intentional force that's gradually pushing organisms towards higher and higher intelligence. Evolution maximizes reproducing before dying - that's it.
Sure, it usually results in organisms adapting to their environment over time and often has emergent second-order effects, but at its core it's a dirt-simple process.
Evolution isn't driven to create intelligence any more than erosion is driven to create specific rock formations.