> I posit that we should start with a default "this animal experiences the world the same as I do" until proven differently.
As I don't know, I take the defensive position, which cuts both ways depending on the question.*
Just in case they have an inner world: We should be kind to animals, not eat them, not castrate them (unless their reproductive method appears to be non-consensual), not allow them to be selectively bred for human interests without regard to their own, etc.
I'd say ditto for AI, but in their case, even under the assumption that they have an inner world (which isn't at all certain!), it's not clear what "be kind" even looks like: are LLMs complex enough to have formed an inner model of emotion, such that receiving the tokens for "thanks!" actually feels good? Or are all tokens equal, and the only pleasure-analog or pain-analog they ever experienced was the training process that shifted the model weights?
(I'm still going to say "please" to the LLMs even if they have no emotions: they're trained on human responses, and humans give better responses when the counterparty is polite.)
> How could anything do that without a model of the world?
Is "a model of the world" (external) necessarily "a rich inner world" (internal, qualia)? If it can be proven so, then AI must be likewise.
* The case where I say that the defensive position is to say "no" is currently still hypothetical: if someone is dying and wishes to preserve their continuity of consciousness, is it sufficient to scan their brain** and simulate it?
** as per the work on Drosophila melanogaster in 2018: https://www.sciencedirect.com/science/article/pii/S009286741...