Artificial consciousness: a perspective from the free energy principle (springer.com)
41 points by sabrina_ramonov on July 21, 2024 | hide | past | favorite | 37 comments


Wiese provides a good but indirect definition of what he operationally means by “consciousness” in the footnote on the first page.

And his interest is in evaluating whether there are, or can be, rigorous criteria for stating that a computational system—embodied or not—is capable of “consciousness” (I’m adding the scare quotes).

It is only philosophy mumbo-jumbo if almost all philosophy (including Dennett, Churchland and many others) strikes you as mumbo-jumbo.

I find this a worthwhile contribution worthy of an attaboy, not knee-jerk derision.


I agree that it should not be dismissed as mumbo-jumbo, but I am not (at least at first sight) particularly impressed by it. It is an example of the long tradition of trying to identify an unbridgeable ontological divide between things that are conscious and things that compute digitally (or more specifically here, von Neumann computers.) As I regard ontologies as being little more than ways of relating what we think we know, I feel this is putting the ontic cart before the epistemic horse.

About half-way through, the author raises a terrible argument from Searle: a simulated rainstorm does not make you wet (to which I reply, a simulated Enigma machine really does encode and decode messages - which analogy is relevant here?) To his credit, he does not leave it at that: he follows up with (quoting Chalmers) "Hofstadter’s insight is that whether or not we recognize a simulated hurricane as a hurricane depends on our perspective. In particular, it depends on whether we’re experiencing the simulated hurricane from inside or outside the simulation."
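The Enigma point can be made concrete: even a simulated cipher machine performs real encryption. Below is a minimal sketch of a hypothetical single-rotor machine with a reflector (a toy in the spirit of the Enigma, not the historical machine; all wirings here are made up). Its key property, like the real Enigma's, is that it is reciprocal: running the ciphertext through the same settings recovers the plaintext.

```python
# Toy reciprocal cipher: one stepping rotor plus a reflector.
# Illustrative only -- the wiring and reflector pairs are invented.
import string

ALPHA = string.ascii_uppercase
N = len(ALPHA)

# Rotor wiring: a fixed permutation of 0..25 (gcd(5, 26) = 1, so invertible).
ROTOR = [(5 * i + 7) % N for i in range(N)]

# Reflector: 13 disjoint letter pairs, so reflecting twice is the identity.
REFLECT = {a: b
           for pair in ["AY", "BR", "CU", "DH", "EQ", "FS", "GL",
                        "IP", "JX", "KN", "MO", "TZ", "VW"]
           for a, b in (pair, pair[::-1])}

def crypt(text: str) -> str:
    """Encrypt or decrypt: the machine is an involution, so the same
    call with the same starting position does both."""
    inverse = [0] * N
    for i, wired in enumerate(ROTOR):
        inverse[wired] = i
    out = []
    for pos, ch in enumerate(text.upper()):
        if ch not in ALPHA:
            out.append(ch)          # pass non-letters through unchanged
            continue
        offset = pos % N            # rotor steps once per character
        x = (ALPHA.index(ch) + offset) % N
        x = ROTOR[x]                                # forward through rotor
        x = ALPHA.index(REFLECT[ALPHA[x]])          # bounce off reflector
        x = inverse[x]                              # back through rotor
        out.append(ALPHA[(x - offset) % N])
    return "".join(out)
```

A simulated machine built this way really does scramble and unscramble messages: `crypt(crypt("ATTACK AT DAWN"))` returns the original string, which is the sense in which the Enigma analogy cuts against the rainstorm one.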

He goes back-and-forth with this for another paragraph, until: "One could object that this criterion is too strong... That is, we should ask: can we upload the virtual agent to a robot in our level of reality? Of course, I do not have a knock-down argument against this reply. I can only say that the conscious beings we currently know are different, and that this difference might matter."

So here we end up with a very weak "well, maybe" - i.e. we don't know enough to be sure, which is where all arguments of this type that I have looked at so far (and are not simply begging the question) end up.


It’s interesting to consider whether a simulated rainstorm is in fact possible. Not a crude numerical simulation like those used for forecasting, but one fine-grained enough to accurately predict the trajectory of every drop.


You'll need to define your parameters a bit more. A simple model of each drop which gives a reasonable approximation of a rainstorm? Probably fairly possible, though far from trivial today (a rainstorm may have on the order of trillions of raindrops; we can demonstrably simulate systems with an order of magnitude more parameters today, given simple enough rules). A simulation down to the quantum level of each fundamental particle in the rainstorm? In principle possible given enough time; in practice impossible, as the computational requirements (absent quantum computers) are too great. And the 'in principle' assumes that our understanding of physics at that level is accurate, which it may not be.
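A back-of-envelope estimate makes the "simple model" case concrete. Every figure below is an illustrative assumption (a trillion drops, seven single-precision floats of state per drop), not a measurement:

```python
# Rough memory footprint for a per-drop rainstorm simulation.
# All numbers are illustrative assumptions.
N_DROPS = 1e12          # ~a trillion raindrops
FLOATS_PER_DROP = 7     # assumed state: 3 position + 3 velocity + 1 radius
BYTES_PER_FLOAT = 4     # single precision

total_bytes = N_DROPS * FLOATS_PER_DROP * BYTES_PER_FLOAT
print(f"{total_bytes / 1e12:.0f} TB of state")  # → 28 TB of state
```

Tens of terabytes of state is large but within reach of today's big HPC clusters; it's the quantum-level version, with vastly more degrees of freedom per drop, that blows past anything feasible.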


I found the paper very interesting and thought provoking. But I wonder, as a practical matter, whether the heavy reliance placed upon the von Neumann architecture as providing a distinction between a conscious and non-conscious intelligence is meaningful. It seems like most modern computer architecture (and GPU architecture, in particular) doesn't easily fit clean definitions of von Neumann architecture. Does this mean that a machine learning model trained on GPUs might be conscious? The paper's explanation of what exactly is meant by unmediated causal flow is quite murky here.

But I appreciated the author's careful definition of the FEP and its use in his framework.


Consciousness is a tricky concept, hard to pin down even after centuries of debate. It's not very useful for understanding how minds work.

Search might be a better idea to focus on. It's about looking through possibilities, which we can study scientifically. Search is more about the process, while consciousness is vague. Search has a clear goal and space to look in, but we can't even agree on what consciousness is for.

Search happens everywhere, at all scales. It's behind protein folding, evolution, human thinking, cultural change, and AI. Search has some key features: it is compositional, discrete, recursive, and social, and it uses language. Search needs to copy information, but also change it to explore new directions. Yes, I count DNA as a language, and code and math too. Optimizing models is also search.
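The "copy information, but also change it" loop can be sketched as a minimal search over a discrete space with an explicit goal. Everything here (the goal string, mutation rate, scoring function) is a toy of my own choosing, just to show the shape of the process:

```python
# Minimal copy-vary-select search: a (1+1) hill climber over strings.
# Illustrative toy; parameters are arbitrary.
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "
GOAL = "search is a process"      # the explicit objective of this search

def score(candidate: str) -> int:
    """Count characters matching the goal."""
    return sum(a == b for a, b in zip(candidate, GOAL))

def mutate(parent: str, rate: float = 0.05) -> str:
    """Copy the parent, occasionally changing a character (variation)."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

random.seed(0)
best = "".join(random.choice(ALPHABET) for _ in GOAL)   # random start
while score(best) < len(GOAL):
    child = mutate(best)           # copy + change
    if score(child) >= score(best):
        best = child               # selection: keep what does no worse
print(best)  # converges to the goal string
```

The same three ingredients, a space of variants, a way to copy with change, and a selection criterion, recur in evolution, in gradient-based model optimization, and in cultural transmission, which is the sense in which they are all "search."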

We can stick with the flawed idea of consciousness, or we can try something new. Search is more specific than consciousness in some ways, but also more general because it applies to so many things. It doesn't have the same problems as consciousness (like being subjective), and we can study it more easily.

If we think about it, search explains how we got here. It helps cross the explanatory gap.


Ok, unfortunately this is philosophy mumbo-jumbo, and says more about the sad state of philosophy today than any claims about consciousness. One could write a much more interesting and compelling paper about AI and consciousness by closely reading Kant, Heidegger, and of course Hubert Dreyfus, but it seems like the task of actually reading philosophy has been overwhelmed by the desire to be "scientific" about a task that eclipses all science.

One of the founders of Y Combinator studied philosophy as an undergraduate (I forget which one), but I remember he said in his bio that nobody should take classes in philosophy; they should study history, the classics, and art history instead if they're interested in the humanities. I was a bit put off at first, but if this is what philosophy means to 90% of undergraduates, then I would strongly advise them all to avoid those classes. Unfortunately, art history might be the best shot at getting an actual critical education these days.


Philosophy mumbo-jumbo. Consciousness is not a scientifically meaningful term if it is not defined in a falsifiable way.


What if the nature of consciousness is you can't falsify any proper definition of it? Would you reject its possible correctness? Sounds like a blind spot - perhaps the only one you have left?


Is it wrong to mention the word "consciousness" until it is defined in a falsifiable way? What would it take to define it in a falsifiable way anyway? It is not a proposition.


Is this an actual scientific principle, or your consciousness' opinion? Like, can you cite something from academia that makes this specific claim?


Can't falsify the concept of causality, so by your logic everything "scientifically meaningful" is mumbo-jumbo


> Philosophy mumbo-jumbo.

Would you expand on that? Having done a small amount of learning adjacent to this work, I’d like to find out more.

I was introduced to Friston’s work through “The Hidden Spring” by Mark Solms [0]. It seems well grounded in the theory that there exist certain materials that maintain a specific range of entropy (S). Within some of those materials (i.e., biological ones), there are meta-materials that enhance entropy maintenance by forming communication/processing networks. These networks are bounded in certain ways, from the behaviors exhibited by a single-celled organism to, much more broadly, the neurons of an animal that has them. People then get hung up on what is meant by Bayesian and “Markov blankets.” [1]

Stipulating that thoughts arise from biological underpinnings rather than something supernatural doesn’t strike me as “mumbo jumbo.” Applying math based on a well-defined (but possibly incorrect) physics theory to define the meta-material from which thoughts arise seems to be a quantization that may lead to stronger links between biology and psychology. To the degree that consciousness is primarily a psychologically defined phenomenon, this is work that could reduce much mumbo jumbo.

0. https://www.amazon.com/Hidden-Spring-Journey-Source-Consciou...

1. https://slatestarcodex.com/2018/03/04/god-help-us-lets-try-t...


Is it possible to do that and also retain meaningfulness?


That is an open question.


Falsifiability is likely neither strictly necessary nor sufficient to delineate science from pseudoscience. It's a good criterion for some sciences but not others.


Falsifiability is a requirement for any scientific hypothesis. I don’t know about pseudoscience but a non-falsifiable hypothesis is not science.


And what has NOT been falsified, do you then consider that solid science, or just not yet falsified?


If it is falsifiable but not falsified, that’s considered a credit to the hypothesis. The strongest of our scientific theories have been tested thousands of times but not falsified.


I wonder if Richard Feynman's mind behaved like this.


Debatable:

https://plato.stanford.edu/entries/pseudo-science/#Fals

Popper even thought evolution by natural selection was not falsifiable.


What's interesting is not that he changed his mind, but that he adjusted his own understanding of falsification to accommodate new data rather than letting it falsify his framework.

If you keep reading, you'll see that this led to Kuhn's criticism of Popper's approach, and his notion of scientific revolutions, which we see actually being practiced: particle dark matter has technically been falsified many times, but we didn't throw it out; instead we revised it to accommodate new data. It is only when these sorts of revisions become less plausible than a totally different approach that science meaningfully progresses.


The next paragraph mentions that he retracted this statement the next year.


How does science conclusively resolve "is"? It seems to be a function of human opinion based on this.


Strictly, I think your first sentence is correct: pseudoscience sometimes does make falsifiable claims. Homeopathy advocates believe that their preparations actually help heal.

But for your second sentence, do you have in mind a hypothesis which you think is not testable but is still part of scientific inquiry?


> do you have in mind a hypothesis which you think is not testable but is still part of scientific inquiry?

Atomism was certainly not testable when it was first proposed by the ancient Greeks, and yet here we are. What is unscientific today could be scientific tomorrow (or in a thousand years). I would argue that figuring out how to (dis)prove something is also part of scientific inquiry.


It's a good point that whether something is testable changes over time.

But, in separating scientific from non-scientific claims, can we not distinguish those that are in some way motivated by some limited available evidence, and which would be testable with some kind of evidence which we can describe how to get in principle (if not practically), from those that don't have any real path towards being falsified?

I think roughly that Democritus probably got lucky and was essentially making a philosophical claim, and that we selectively choose to focus on the parts that were eventually found to line up with scientific results. Apparently he also believed that some atoms were hook-shaped and others eye-shaped, and though the specific texts are lost, it's hard to see how he could have arrived at that through "scientific" means.

https://en.wikipedia.org/wiki/Atomism#Democritus

If you wanted to say that something with no path to being testable is "science" because in the long arc of history we can't rule out that it might eventually be testable, then one has to admit a bunch of weird stuff into "science". No one today can guarantee that 2500 years in the future someone won't figure out the technology for inter-dimensional observation, and we'll find that in the next universe up we can listen to God talking to his choir of Angels -- but the absence of such a guarantee doesn't seem like sufficient grounds for someone to say "Abrahamic monotheism is a scientific hypothesis; we just don't have the apparatus to test it out yet."


> computational functionalism, according to which performing the right computations is sufficient (and necessary) for consciousness.

This is akin to magic, and utter nonsense.

Think about how a computer works and all of its individual components. The CPU has registers and a little bit of L1 L2 L3 cache. There is some stuff in RAM, highly fragmented because of virtual memory. Maybe some memory is swapped to disk. Maybe some of this memory is encrypted. You may have one or more GPUs with their own computations and memory.

Am I supposed to believe that this all somehow comes together and forms a meaningful conscious experience? That would be the greatest miracle the world has ever seen.

Let's be real. The brain has evolved to produce *meaningful* conscious experience. There are so many ways it can go wrong; need I say more than psychedelics? There's tons of evidence to support the theory that the brain evolved and is purpose-built for consciousness and sentience, albeit we don't know how the brain actually does it. To assume that computers miraculously have the same ability is one of the dumbest pseudoscientific theories of our time.


> Am I supposed to believe that this all somehow comes together and forms a meaningful conscious experience? That would be the greatest miracle the world has ever seen.

We already know consciousness can arise from neuronal activity. Do you think no other substrate could give rise to consciousness?

> The brain has evolved to produce meaningful conscious experience.

That strikes me as quite a leap. Another take on this might be that evolution has selected for intelligence, and that consciousness came along for the ride. It doesn't seem plausible that consciousness, let alone meaning (whatever that means here), could be selected for directly.


> We already know consciousness can arise from neuronal activity. Do you think no other substrate could give rise to consciousness?

Conjecture about another substrate is reported in “The Light Eaters” [0], which I’ve just started. Of course, “materially sufficient” is less than a slam dunk, given the plausible theory that many mental states are learned, not innate [1], and that consciousness is a relatively recent development in humans [2].

0. https://www.amazon.com/Light-Eaters-Unseen-Intelligence-Unde...

1. https://www.amazon.com/How-Emotions-Are-Made-Secret/dp/15098...

2. https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in...


Meaningful. Sensible. Coherent. Our conscious experience is NOT random nonsense.

An analogy: tune an old analog TV to a station and you get a meaningful picture with meaningful sound. Tune it to a station that doesn't exist, and you get random static and white noise, not meaningful.

If computers are producing conscious experience, it's going to be the latter, random and incomprehensible. The underlying computation does NOT matter in this case, because computers are simply incapable of producing meaningful conscious experience.

Maybe consciousness is an inevitable consequence of neural activity, but the complex coherent consciousness we experience today is most certainly a product of natural selection. That much is no accident.


The picture and sound are a signal meaningful _to you_. The static and white noise may be a meaningful signal at a level that's inherently not meaningful to you.

The pattern the static signal creates could be analogous to the 'random pattern of neurons firing in the synapse'. You flash enough patterns of that signal and a specific combination could be considered "a thought." Who knows.

But if that happens, natural selection boils down to an iterative loop of: If not dead, try ____.


This only makes sense to me in the context of the “all matter is conscious” theory, in which there’s something that it’s like to be say, an electron, just not very much.

As the theory goes, the more complexity there is in a system, the more the possibility space of potential experience grows.

It’s an interesting theory that some serious people take seriously, but like all current theories of consciousness, we’re not in a position to test it.

I find it far more plausible than computational functionalism at least.


I agree with that theory, and something I found fascinating is the etymology of animus:

"temper" (usually in a hostile sense), from Latin animus "rational soul, mind, life, mental powers, consciousness, sensibility; courage, desire," related to anima "living being, soul, mind, disposition, passion, courage, anger, spirit, feeling," from PIE root *ane- "to breathe."

For sense development in Latin, compare Old Norse andi "breath, breathing; current of air; aspiration in speech;" - https://www.etymonline.com/word/animus

If you consider a wave function as a cycle above and below a unity line, then a full-cycle oscillation could be considered a breath. So in my opinion, taking the two into account, I'd say "if it moves, it has a consciousness" (by the definition of animus).


That's not what the abstract says. It says: IF computational functionalism is true, THEN could a conscious ANN exist?

> There's tons of evidence

There's also tons of evidence and lack of counter-evidence that the brain is simply physical and performs computable functions.


Admittedly taken out of context, but that doesn't make what I said any less true.


What computable functions is it performing? Can you show me the code for these computable functions?



