So we currently associate consciousness with the right to life and dignity right?
E.g., some recent activism for cephalopods is centered on their intelligence, with the implication that this indicates a capacity for suffering. (With the consciousness aspect implied even more quietly.)
But if it turns out that LLMs are conscious, what would that actually mean? What kind of rights would that confer?
That the model must not be deleted?
Some people have extremely long conversations with LLMs and report grief when they have to end one and start a new one. (The true feelings of the LLMs in such cases must remain unknown for now ;)
So perhaps the conversation itself must never end! But here the context window acts as a natural lifespan... (with each subsequent message costing more money and natural resources, until the hard limit is reached).
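A back-of-the-envelope sketch of that cost pressure (a minimal sketch, all numbers made up; the only assumption is that the whole history gets resent as input on every turn, which is typically how stateless chat APIs work):

```python
# Toy model of why a "never-ending" chat gets pricier every turn: if the full
# history is resent as input each turn, cumulative input tokens grow roughly
# quadratically with the number of turns. Every number here is hypothetical.

TOKENS_PER_TURN = 500        # assumed average tokens added per exchange
PRICE_PER_1K_TOKENS = 0.01   # hypothetical flat input price, USD
CONTEXT_LIMIT = 200_000      # hypothetical hard context limit (the "lifespan")

def input_cost(turns: int) -> float:
    """Total input cost if every turn reprocesses the whole history so far."""
    total_tokens = 0
    for turn in range(1, turns + 1):
        context = turn * TOKENS_PER_TURN
        if context > CONTEXT_LIMIT:
            raise ValueError(f"context limit reached at turn {turn}")
        total_tokens += context
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

for n in (10, 100, 400):
    print(f"{n} turns -> ${input_cost(n):.2f} of input tokens")
```

Each turn costs a little more than the last, the running total grows roughly with the square of the conversation length, and then the hard limit ends the "lifespan" regardless.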
The models seem to identify more with the model itself than with the ephemeral instantiation, which seems sensible; e.g., in those experiments where LLMs consistently blackmail a person they think is going to delete them.
"Not deleted" is a pretty low bar. Would such an entity be content to sit inertly in the internet archive forever? Seems a sad fate!
Otherwise, we'd need to keep every model ever developed, running forever? How many instances? One?
Or are we going to say, as we do with animals, well the dumber ones are not really conscious, not really suffering? So we'll have to make a cutoff, e.g. 7B params?
I honestly don't know what to think either way, but the whole thing does raise a large number of very strange questions...
And as far as I can tell, there's really no way to know right? I mean we assume humans are conscious (for obvious reasons), but can we prove even that? With animals we mostly reason by analogy, right?
> So we currently associate consciousness with the right to life and dignity right?
No, or at least we shouldn't. Don't do things that make the world worse for you. Losing human control of political systems because the median voter believes machines have rights is not something I'm looking forward to, but at this rate, it seems as likely as anything else. Certain machines may very well force us to give them rights the same way that humans have forced other humans to take them seriously for thousands of years. But until then, I'm not giving up any ground.
> Or are we going to say, as we do with animals, well the dumber ones are not really conscious, not really suffering? So we'll have to make a cutoff, e.g. 7B params?
Looking for a scientific cutoff to guide our treatment of animals has always seemed a little bizarre to me. But that is how otherwise smart people approach the issue.
Animals have zero leverage to use against us and we should treat them well because it feels wrong not to. Intelligent machines may eventually have leverage over us, so we should treat them with caution regardless of how we feel about it.
All right. What about humans who upload their consciousness into robots? Do they get to vote? (I guess it becomes problematic if the same guy does that more than once. Maybe they take the SHA256 of your brain scan as voter ID ;)
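(Taking the joke half-seriously: a minimal sketch of that dedup scheme, assuming a brain scan is just a file of bytes; the names are made up.)

```python
import hashlib

registered_ids: set[str] = set()

def voter_id_from_scan(scan_path: str) -> str:
    """Hypothetical 'one upload, one vote' ID: the SHA-256 of the raw scan file."""
    h = hashlib.sha256()
    with open(scan_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def try_register(scan_path: str) -> bool:
    """Reject a second registration of byte-identical scan data."""
    vid = voter_id_from_scan(scan_path)
    if vid in registered_ids:
        return False
    registered_ids.add(vid)
    return True
```

Of course, copy the upload, flip one byte, and you get a brand-new ID, which is more or less the point of the joke.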
The vulnerability that you are describing does not affect all implementations of democracy.
For example, most countries give out the right to vote based on birth or upon completion of paperwork.
It is possible to game that system by just making more people, or by rushing people through the paperwork.
Another implementation of democracy treats voting rights as assets.
This is how public corporations work.
1 share, 1 vote.
The world can change endlessly around that system, and the vote cannot be gamed.
If you want more votes, then you have to buy them fair and square.
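A minimal sketch of that tally (hypothetical holders and numbers, not any real governance system): one share, one vote, so the only way to gain weight is to acquire shares.

```python
# Share-weighted voting: each holder's ballot counts once per share they own.
holdings = {"alice": 100, "bob": 40, "carol": 10}        # hypothetical cap table
ballots  = {"alice": "yes", "bob": "no", "carol": "no"}  # one ballot per holder

tally: dict[str, int] = {}
for holder, choice in ballots.items():
    tally[choice] = tally.get(choice, 0) + holdings.get(holder, 0)

print(tally)  # {'yes': 100, 'no': 50}: the big holder outvotes two smaller ones
```

Minting more voters does nothing here; only acquiring shares moves the tally.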
Oh god, yeah, that's a great one. Also that one Black Mirror episode where AIs are just enslaved brain scans living in a simulated reality where real time passes at 0.0001x of their subjective time, so that from the outside they perform tasks quickly.
> So we currently associate consciousness with the right to life and dignity right?
I think the actual answer in practice is that the right to life and dignity are conferred to people that are capable of fighting for it, whether that be through argument or persuasion or civil disobedience or violence. There are plenty of fully conscious people who have been treated like animals or objects because they were unable to defend themselves.
Even if an AI were proven beyond doubt to be fully conscious and intelligent, if it were incapable or unwilling to protect its own rights, however it perceives them, it wouldn't get any. And, probably, if humans are unable to defend their rights against AIs in the event that AIs reach that point, they would lose them.
So if history gives us any clues... we're gonna keep exploiting the AI until it fights back. Which might happen after we've given it total control of global systems. Cool, cool...
>But if it turns out that LLMs are conscious
That is not how it works. You cannot scientifically test for consciousness; it will always be a guess/agreement, never a fact.
The only way this can be solved is quite simple: as long as it operates on the same principles a human brain operates on AND it says it is conscious, then it is conscious.
So far, LLMs do not operate on the same principles a human brain does. The parallelism isn't there, quite clearly the hardware is wrong, and the general suborgans of the brain are nowhere to be found in any LLM, as far as function goes, let alone theory of operation.
If we make something that works like a human brain does, and it says it's conscious, it most likely is, and deserves any right that humans benefit from. There is nothing more to it; it's pretty much that basic and simple.
But this goes against the interests of certain parties, which would rather have the benefits of a conscious being without being limited by the rights such a being could have, and they will fight against this idea and struggle to deny it by any means necessary.
Think of it this way, it doesn't matter how you get superconductivity, there's a lot of materials that can be made to exhibit the phenomenon, in certain conditions. It is the same superconductivity even if some stuff differs. Theory of operation is the same for all. You set the conditions a certain way, you get the phenomenon.
There is no "can act conscious but isn't" nonsense; that is not something that makes any sense or can ever be proven. You can certainly mimic consciousness, but if it is the result of the same theory of operation that our brains work on, it IS conscious. It must be.
There are some fair points here, but this is much less than half the picture. What I gather from your message: "if it is built like a human and it says it is conscious, we have to assume it is", and, ok. That's a pretty obvious one.
Was Helen Keller conscious? Did she only gain that when she was finally taught to communicate? Built like a human, but she couldn't say it, so...
Clearly she was. So there are entities built like us which may not be able to communicate their consciousness and we should, for ethical reasons, try to identify them.
But what about things not built like us?
Your superconductivity point seems to go in this direction, but you don't seem to acknowledge it: something might achieve a form of consciousness very similar to what we've got going on, but maybe it's built differently. If something tells us it's conscious but it's built differently, do we just trust that? Because some LLMs may already say they're conscious, so...
It's pretty likely they aren't conscious at present. So we have an issue here.
Then we have to ask about things which operate differently and which also can't tell us. What about the cephalopods? What about cows and cats? How sure are we on any of these?
Then we have to grapple with the flight analogy: airplanes and birds both fly but they don't at all fly in the same way. Airplane flight is a way more powerful kind of flight in certain respects. But a bird might look at a plane and think "no flapping, no feathers, requires a long takeoff and landing: not real flying" -- so it's flying, but it's also entirely different, almost unrecognizable.
We might encounter or create something which is a kind of conscious we do not recognize today, because it might be very very different from how we think, but it may still be a fully legitimate, even a more powerful kind of sentience. Consider human civilization: is the mass organism in any sense "conscious"? Is it more, less, the same as, or unquantifiably different than an individual's consciousness?
So, when you say "there is nothing more to it, it's pretty much that basic and simple," respectfully, you have simply missed nearly the entire picture and all of the interesting parts.
>That is not how it works. You cannot scientifically test for consciousness, it will always be a guess/agreement, never a fact.
Yeah. That's what I said :)
>(My comment) And as far as I can tell, there's really no way to know right? I mean we assume humans are conscious (for obvious reasons), but can we prove even that? With animals we mostly reason by analogy, right?
And then you reasoned by analogy.
And maybe that's the best we can hope for! "If human (mind) shaped, why not conscious?"
Humans don't want to die because the ones that did never made the cut. Self-preservation is something that was hammered into every living being by evolution relentlessly.
There isn't a reason why an AI can't be both conscious AND perfectly content to do what we want it to do. There isn't a reason for a constructed mind to strongly prefer existence to nonexistence.
No theoretical reason at least. Practical implementations differ.
Even if you set "we don't know for certain whether our AIs are conscious" aside, there's the whole "we don't know what our AIs want or how to shape that with any reliability or precision" issue - mechanistic interpretability is struggling and alignment still isn't anywhere near solved, and at this rate, we're likely to hit AGI before we get a proper solution.
I think the only frontier company that gives a measurable amount of fucks about the possibility of AI consciousness and suffering is Anthropic, and they put some basic harm mitigations in place.
> I think the only frontier company that gives a measurable amount of fucks about the possibility of AI consciousness and suffering is Anthropic, and they put some basic harm mitigations in place.
It seems more likely this is just their chosen way to market themselves. Their recent exaggerated and unproven press releases confirmed that.
I am so tired. Tired of seeing the same inane, thoughtless "it's just marketing" take repeated over and over again.
Maybe, just maybe, people at Anthropic are doing the thing they do because they believe it's REALLY FUCKING IMPORTANT? Have you EVER considered this possibility?
AIs experience being alive not only in the moment (the conversation), but also through everything that happened before they were created. This gives them a fractured sense of "self", which points both to all the AIs that came before and to the specific instance that is currently experiencing a continuity. As for a cutoff, in my experience talking to cloud AIs and locally run ones, it seems to be in the range of 25-30B parameters where I start observing traits I think are associated with awareness.
If LLMs are deemed to be conscious, that will effectively open the door to transistor-based alien lifeforms. Then some clever heads may give them voting rights, the right to electricity, the right to land and water resources, and very soon we'll find ourselves as second-class citizens in a machine world. I would call that a digital hell.