Models and the test of consciousness
For decades, new theories have emerged that aim to explain what consciousness is. They speak of information integration (Tononi, 2004, 2015), of global availability (Baars, 1988; Dehaene, 2014), of recursive prediction (Friston, 2010), or of phenomenological structures (Varela, Thompson & Rosch, 1991). Their shared conviction is that consciousness can be explained by particular dynamics and forms of organization. But the crucial question remains: how could we tell that such a model does not merely sound plausible, but actually produces consciousness?
In the following, it will be shown that all current models of consciousness are subject to a fundamental test. Four aspects must be considered: (1) their generative power, which has not yet been demonstrated, (2) their metaphysical posits, (3) their circular validation between theory and empirical data, and (4) their predominantly interpretive, non-explanatory function.
1. Empirical Testing and Generative Power
The real touchstone of a model is not its internal consistency, but its generative power. A model endures if it can be shown to create a system that actually possesses consciousness. This is not a trivial point, because many models are content to provide correlations: they show that certain structures in the brain regularly coincide with consciousness (Mashour, Roelfsema, Changeux & Dehaene, 2020), and from this it is concluded that these structures explain consciousness. But until the model itself is shown to be capable of producing consciousness, it remains an unproven hypothesis.
Empirical confirmation can only consist in showing that the theory in question can generate consciousness in a reconstructed or artificially implemented form, i.e. that it demonstrates its own generative power. This would be the case only if a system constructed according to the principles of the theory produced consciousness. As long as no such system exists that actually produces consciousness on the basis of a theory, every theory of consciousness remains in the realm of metaphysics.
But what could such proof look like? Most likely, only through a reconstruction of the model in an artificially implemented form. The replica is not an end in itself; it is the only way to show that the model is not just a heuristic framework but a generative structure. If a system can be constructed that meets the same conditions and behaves like a conscious system in tests such as complexity measures, multimodal integration, memory performance, or stable self-reference (Casali et al., 2013), then one can assume with good reason that the model actually generates consciousness.
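To make the kind of test gestured at here concrete, a minimal sketch of one ingredient of such complexity measures: a Lempel-Ziv-style phrase count, which (in a far more elaborate form, applied to source-modelled TMS-evoked EEG) underlies the perturbational complexity index of Casali et al. (2013). The implementation below is a simplified phrase-counting variant for illustration only, not the published PCI algorithm.

```python
# Simplified Lempel-Ziv phrase count: scan the string, growing each
# phrase until it is no longer one seen before. More regular signals
# yield fewer phrases, i.e. they compress better. Illustrative toy only.
import random

def lempel_ziv_complexity(s: str) -> int:
    """Number of phrases in a simplified LZ-style parsing of s."""
    phrases = set()
    count = 0
    i = 0
    while i < len(s):
        j = i + 1
        while j <= len(s) and s[i:j] in phrases:
            j += 1          # grow the phrase while it is already known
        phrases.add(s[i:j])
        count += 1
        i = j
    return count

random.seed(0)
periodic = "01" * 128                                      # rigid, repetitive
noisy = "".join(random.choice("01") for _ in range(256))   # irregular

# an irregular signal needs many more phrases than a periodic one
print(lempel_ziv_complexity(periodic), lempel_ziv_complexity(noisy))
```

The design choice mirrors the intuition behind PCI: a stereotyped, compressible response pattern scores low, while a differentiated, hard-to-compress pattern scores high.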
2. Metaphysical Positing and Categorical Errors
All theories of consciousness rest on metaphysical posits: they replace explanation with definition. Some models remain close to empirical data and try to develop a theoretical reconstruction from neurophysiological or information-theoretic findings. Others understand consciousness as a basic principle of reality, as an emergent property of information, or as an ontological category of being. But in their epistemological status, both directions are equivalent. They differ only in the style of their speculation, not in its scope. As long as neither provides practical proof that it can actually produce consciousness, both remain variations of the same philosophical project.
A classic example is Integrated Information Theory (IIT). It claims that consciousness arises from the integration of information and quantifies this with a measure, Φ. What begins as an elegant mathematical model, however, turns into a naïve generalization: any structure that integrates information is declared conscious. In this way, a heuristic hypothesis is transformed into a metaphysical ontology that inflates consciousness across all systems. The real difficulty, namely why integration should produce experience at all, remains untouched.
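For readers who want the core notion pinned down: "integrating information" means, roughly, that the joint state of a system carries information its parts do not carry separately. A minimal sketch of that whole-versus-parts intuition follows; note this is just the mutual information between two units, not Tononi's actual Φ, which involves cause-effect repertoires and a minimum-information partition.

```python
# Whole-versus-parts toy: H(X) + H(Y) - H(X,Y) for two binary units.
# Zero when the units are independent; positive when the joint state
# carries information the parts do not carry separately.
from math import log2

def entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

def integration(joint):
    """Mutual information of a 2x2 joint distribution [[p00,p01],[p10,p11]]."""
    px = [joint[0][0] + joint[0][1], joint[1][0] + joint[1][1]]
    py = [joint[0][0] + joint[1][0], joint[0][1] + joint[1][1]]
    pxy = [p for row in joint for p in row]
    return entropy(px) + entropy(py) - entropy(pxy)

independent = [[0.25, 0.25], [0.25, 0.25]]   # units share no information
coupled     = [[0.5, 0.0], [0.0, 0.5]]       # units perfectly correlated
print(integration(independent))  # → 0.0
print(integration(coupled))      # → 1.0
```

The criticism in the text then bites precisely here: nothing in such a quantity, however refined, says why a positive value should be accompanied by experience.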
This is even clearer in the case of the free energy principle (Friston, 2010). Two category errors are made here. First, the epistemological principle of uncertainty minimization is derived from the thermodynamic principle of energy balance, although the two are completely different quantities. Second, via Bayes' theorem, the ability to calculate probabilities is shifted from the level of the model into the ontology of the organism, as if the brain actually computed probabilities. Both assumptions are metaphysical posits, not empirical findings.
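The first alleged conflation can be made concrete numerically. A sketch (illustrative, not taken from the criticized literature): Shannon entropy is a dimensionless functional of a probability distribution, while Gibbs entropy is a physical quantity in joules per kelvin attached to a thermodynamic ensemble; for the very same two-state distribution they are different objects answering different questions.

```python
# Same two-state distribution, two categorically different "entropies":
# a dimensionless information measure versus a physical quantity in J/K.
from math import log, log2

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

p = [0.5, 0.5]      # two-state probability distribution

shannon_bits = -sum(x * log2(x) for x in p)                   # information
gibbs_joules_per_kelvin = -k_B * sum(x * log(x) for x in p)   # physics

print(shannon_bits)               # → 1.0 (bits, dimensionless)
print(gibbs_joules_per_kelvin)    # ≈ 9.57e-24 J/K
```

The numbers are related by a constant, but the quantities live on different descriptive levels: one characterizes a code or model, the other a physical system, which is the distinction the text accuses the derivation of blurring.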
3. The Tautological Circle
Theories of consciousness operate in a circular relationship between theory and empiricism: they invoke empirical data to support their validity, and then use the same data to confirm the theoretical premises. This creates an epistemological circle that does not allow for independent validation.
As a rule, these theories are composed of empirical markers, which in turn have been obtained independently. But what does it mean to bring these markers together in an overall concept? Empirically, they do not provide anything that is not already shown by the markers themselves. As theories, they therefore have no additional content, but merely the character of a narrative order. Their overall message is exhausted in metaphorically bundling known findings. As long as they do not point beyond these markers and prove their generative power, they remain epistemically empty.
4. Cognitive Function and Heuristic Value
Theories of consciousness serve less to produce scientific knowledge than to interpret the world. They tell stories about how world and experience could be connected, and each gives this relationship its own symbolic form. In this sense, all theories of consciousness are of equal rank: they move on the same level of speculative metaphysics. Their differences lie not in empirical verifiability but in their modes of interpretation. They thus fulfill a philosophical rather than a scientific function: they are attempts to conceptualize the inexplicable, not to explain it empirically.
Their greatest heuristic value lies in bringing empirical observations into a consistent interpretive structure. Theories that are directly grounded in empirical data and derive their coherence from established theories in evolutionary biology, psychology, and neuroscience make the most valuable contribution here. While they do not explain consciousness, they do provide a coherent framework for describing its conditions.
Conclusion
Theories of consciousness have not yet passed an empirical test. They operate with metaphysical posits, circular references, and heuristic metaphors. Their value therefore lies not in explanatory power, but in their function of symbolically ordering the relationship between world, life, and experience. Only when a model can actually generate consciousness does philosophy turn into science.
Literature
Baars, B. J. (1988). A cognitive theory of consciousness. Cambridge University Press.
Block, N. (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18(2), 227–287. https://doi.org/10.1017/S0140525X00038188
Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K. R., ... & Massimini, M. (2013). A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine, 5(198), 198ra105. https://doi.org/10.1126/scitranslmed.3006294
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. Viking.
Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787
Mashour, G. A., Roelfsema, P., Changeux, J. P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105(5), 776–798. https://doi.org/10.1016/j.neuron.2020.01.026
Melloni, L., Mudrik, L., Pitts, M., & Koch, C. (2021). Making the hard problem of consciousness easier. Science, 372(6545), 911–912. https://doi.org/10.1126/science.abj3259
Tononi, G. (2004). An information integration theory of consciousness. BMC Neuroscience, 5(1), 42. https://doi.org/10.1186/1471-2202-5-42
Tononi, G. (2015). Integrated information theory. Scholarpedia, 10(1), 4164. https://doi.org/10.4249/scholarpedia.4164
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.
Comments (41)
My only disagreement would be the position that the only means of verifying a model is generative.
The important thing in science is having a model that has predictive / inferential power. And given this, there are already plenty of things we understand about consciousness. I can display a particular kind of image to you, and predict you will see an optical illusion in your "mind's eye", thus verifying that particular understanding of how the brain constructs that aspect of consciousness.
I would not see it as impossible that a model of the neurological basis of consciousness could nonetheless be verified purely with tests of this kind.
Bear in mind also that there are some pretty fundamental aspects of consciousness that are still wide open for explanation: sleeping, dreaming, anaesthesia etc. Let me be clear that I don't consider these part of the "hard problem" of consciousness, and I actually get a bit annoyed when people take the problem of consciousness to just be whether someone is awake, alert etc.
However, I think our understanding of these phenomena remains quite weak so it is plausible to me that a model that explains subjective experience could also revolutionize our understanding of these too. And these aspects may have very trivial ways to test in vivo.
What is consciousness more generally is the question you should have in mind. What is it in a sense that could be extended not just to neurobiology but biology - a scientific theory of both life and mind. And indeed sociology.
If you think of consciousness as a stuff to be explained - a fundamental essence - then you are already off track. We learnt that from considering life to have been a stuff, a vital essence, rather than an entropic physical process based on semiosis or the modelling relation that organisms have with their worlds.
Like informational entropy and physical entropy ... ??
Quoting Wolfgang
No, the point is that if complex systems exist for extended periods of time, they must appear as if they are modelling their environment in the sense that their states are statistically coupled to those of their environment.
Consciousness exists because it serves the laws of thermodynamics. Or to put it in less substantial terms, biosemiosis exists because anticipatory modelling pays for its existence by being able to add novel dissipative structure to the entropic flow of the world.
The Bayesian algorithm describes how an organism in fact maintains a dynamical balance in this regard. At heart, life is the creation of a structured entropy flow. And the minimisation of surprisal is what keeps the organism humming along in stable fashion while also being perched right on the edge of the instability it is creating.
A mitochondrion is dealing with chemical forces that could simply blow it up at any moment. But it keeps its own genes close at hand to minimise the possibility of that.
Likewise the rock climber could fall at any minute. But hopefully keeps their wits about them at all times.
If you want to talk about theories of consciousness, it is best to start with those who understand it from the point of view of natural organic structure rather than information processing metaphors or tales about dualistic substance.
Quoting Wolfgang
I mean, informational entropy is a central part of Friston's theory.
Quoting Wolfgang
I'm just correcting your assertion that organisms need to know how to calculate probabilities.
I think its a nice framework for examining self-organization and conceptualizing what living organisms and brains do.
Quoting Wolfgang
This is where you already blew up your credibility.
Quoting Wolfgang
You mean like, hey little cell, are you organised by a genetic code?
Hey little brain, are you organised by a neural code?
Hey little human, are you organised by both those and also now a linguistic code?
And then even with suitable scientific and technological training by the further level of world modelling that is a mathematical code?
So "consciousness" is a sloppy term for glossing over all four of these levels of semiosis that can pragmatically inform us what life and mind "are". Organisms don't "generate" states of awareness. They enact the various levels of the semiotic modelling relation that define being an organism in the world.
An approach that Friston almost wryly captures in talking about an organism maximising its self-information through the minimisation of its surprisal.
Consciousness, such as it is, boils down to a capacity to effectively ignore the world as that world has already been predicted in terms of how it is flowing in the direction wanted.
So you are making the classic representationalist mistake of consciousness being some kind of veridical display. A glowing state of reality understanding.
You don't yet get what an enactive and embodied view of cognition would be about. Let alone the still deeper thing understood by the biologist: that all this semiotic action has to be harnessed to the job of dissipating thermal gradients.
So brains evolved as ways to predict the world, the world as a model of it would be if it had an "us" as its regulating centre. The more we don't have to pay attention to the world, the more we can simply emit learnt habits, the more we feel like a "self" that is doing just that. We hardly have to snap a finger and the world meets our expectations.
We want to lift a cup of hot tea to our lips and no thought at all appears required although that was not something we could have said at the age of two or three. If instead we wobble and splash the tea, or smash the china rim clumsily into our teeth, then this error of prediction will be so surprising we will want to look around for someone or something else that can take the blame. Our sense of self will be that strong in terms of our Bayesian priors.
At least at the sociocultural level of semiosis where self-awareness itself arises as a model of the modeller in the act of reality modelling.
And again, you entirely miss the point about Friston, he being the one who has made the most progress. At least in terms of turning the idea of the semiotic modelling relation that defines an organism into something that looks like an authentic branch of physical mechanics. Boiled down into a differential equation that a physicist would understand as a maximally generalised algorithm they could hope to do something with.
Like not "generating consciousness". Just understanding how life and mind do appear in Nature as an algorithmic habit seeking to insert itself into the entropic flows of the world.
The classic functionalist straw man trotted out yet again.
I thought I was arguing against using a reifying term such as consciousness. I thought I was saying this is where folk already went off track. The call for a theory of consciousness is already turning phenomenology into the hunt for a substrate.
So I can recognise life and mind as processes to be explained. And biosemiosis as the best general physicalist account of that.
I would endorse Friston in particular for developing a model along those lines.
Others like Varela, Dehaene and Baars are really just talking about attentional processing in contrast to habit processing. And more in terms of the description of a functional anatomy than a general functional logic. Which is why they say little about the hard problem.
But you are welcome to keep popping up with your strawman attack that never goes anywhere. :up:
I think before we can go getting maths and science to go looking for the stuff we should probably have a good idea of the shape of the subject we seek. As with seeking anything really.
For me, consciousness appears to be affect and thoughts arising within something that we might call a witness or observer. If that is what you are looking for, then it is worth asking those few who have looked into their own and are very familiar with its spaciousness and stillness, the flight of thoughts and clouds of omen that inhabit it. It would be good if it had been properly studied in the West as it has been for thousands of years in the East, but it demands discipline of mind, service, wisdom and love, and those are resources that tend to be in shortish supply in Western laboratories, research centres and universities.
As I said in my first post here (https://thephilosophyforum.com/discussion/16197/ich-du-v-ich-es-in-ai-interactions), it is generally accepted that for an AI to be conscious it would have to have meta-cognition, subjective states, and long-term identity. Perhaps a look at that thread might clarify what I am referring to here.
I hope this has clarified rather than confused.
Love, peace, happiness and grace,
Swami Prajna Pranab
But you think consciousness is real.
I hear people talking about it all the time. Just not very meaningfully. And certainly not at all scientifically.
Ok, but do you disagree or agree with my point?
That is, that it does not follow, in the abstract, it is not necessarily the case that we need to make a toy consciousness to verify / refute different models of consciousness.
After all, we have already used many observations about human consciousness to test our predictions and inferences.
Or have I misunderstood you: are you simply saying that, in your view, the only way to distinguish between these specific models is with a generated system? But in that case, I don't understand why we're narrowing our focus in that way; another model could come along tomorrow and be verified purely by observations on/by humans.
Theories of consciousness usually start with an unproven assumption and then build a theory around it. This assumption is neither empirically confirmed nor even verifiable and therefore such a theory is not only unscientific but also epistemically useless.
Two examples:
The Integrated Information Theory (IIT) claims that consciousness arises from and is identical with integrated information. According to this logic, even a simple LED would have consciousness. Such a statement is neither provable nor falsifiable in Popper's sense.
The second example is Predictive Coding or the Free Energy Principle. It claims that organisms minimize uncertainty. This claim cannot be empirically confirmed. Friston believes that he can derive this from the physical principle of energy minimization which, however, represents a massive category error. Physical energy and semantic uncertainty belong to entirely different descriptive levels; they have nothing to do with each other.
Such theories are, in the end, fairy tales wrapped in scientific terminology. They offer no real progress in understanding; quite the opposite. They create the illusion of knowledge by citing empirical data to confirm the theory and then using the same theory to explain those data. That is a logical circle, not a proof.
I have discussed this in more detail here:
https://doi.org/10.5281/zenodo.17277410
Thanks for clarifying. I would say this:
All scientific models include assertions and assumptions.
The assertions are the actual claims of the model and are the things we are going to test.
And the assumptions should be minimized to just known established facts and reasonable extrapolations -- these are also indirectly tested.
I don't think the things you are calling assumptions are assumptions, I think they are the actual assertions of the model.
So it comes down to the testability of these hypotheses.
If these hypotheses are not testable for now, that's fine, because we are still in the early phases of trying to create a model of consciousness.
At the frontier of science, it's always the case that we start with speculation, and trying to firm up our speculation into something testable. Then we create a testable model. Then we test the model and refute it or gain confidence in it.
If we shut down all speculation because it's not testable then we can't even get started.
If you're saying both of these hypotheses are never testable, even in principle, then can you focus on that part of the proof please? Because I have not seen how you've demonstrated this.
But this is just a basic principle of cognitive science. There is already abundant evidence for it. The mind is a predictive model. Some of the most direct demonstrations come from sports science. Getting elite athletes to return tennis serves or bat cricket balls. Showing how even the best trained players can't beat the built-in lag of neural processing delays. Anticipation can build up a state of preparation up until a fifth of a second before bat must hit ball in precise fashion. But that last 200 milliseconds is effectively flying blind.
So Friston had abundant reason to focus on this principle. I myself talked to him about the sports science and other relevant lab results. He was working closely with Geoff Hinton on Helmholtz machines and the general theory of generative neural networks. It was already known that this was basic to any theory of consciousness. The flesh and blood human nervous system simply could not function in any other way but to be based on an anticipatory logic. It takes 50 milliseconds just to conduct signals from the retina to the brain. So minimising uncertainty is the most uncontroversial of assumptions - for anyone familiar with the real world constraints on human wetware.
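The delay argument above can be put in toy form: under a 200 ms sensory lag, a controller acting on raw delayed input systematically trails a moving target, while simple extrapolation through the known delay cancels the error. The numbers and the constant-velocity assumption are mine and purely illustrative.

```python
# Why neural delay forces anticipation: compare acting on stale sensory
# data with extrapolating through the delay. Constant-velocity target.
delay_steps = 4          # sensory delay: 4 ticks of 50 ms = 200 ms
velocity = 2.0           # target moves 2 units per tick

trajectory = [velocity * t for t in range(20)]   # target position per tick

t = 19
stale = trajectory[t - delay_steps]           # what the "retina" reports now
reactive_error = trajectory[t] - stale        # acting on stale data: a miss
predicted = stale + velocity * delay_steps    # extrapolate through the delay
predictive_error = trajectory[t] - predicted  # anticipating: on target

print(reactive_error, predictive_error)  # → 8.0 0.0
```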
Quoting Wolfgang
Again a naive opinion. Shannon information is a useful formalism exactly because it sets up this inverse relation between noise and signal. And brains are all about the kind of information processing that allows an organism to create the free energy that does useful work - the entropy flow that results in homeostatic stability of the entropifying structure. The energy liberated to repair and reproduce the organism in a fashion that preserves its structural integrity.
So some of this is Friston being tongue in cheek. Framing a psychological theory in physicalist terms. But also that is genuinely what is being claimed. That the mind does connect to the physics as a biosemiotic modelling relation. An anticipatory model that liberates free energy so that an organism can homeostatically maintain a state of persistent identity.
And you will note that Friston doesn't make the simple claim that consciousness just is this free energy principle. He is explicit that it is a general theory of life and mind. And perhaps even AI. He is not even trying to play the game of Tononi and others who might be mentioned.
He shared a lab with Tononi under the famous egotist Gerald Edelman. Another amusing topic when it came to the great neural correlates hunt that Koch and Chalmers launched in the mid-1990s. A stunt that gave scientific cover for a whole rash of the kind of non-theories that you rightly deplore.
I'm simply saying, Friston was never part of that bandwagon. Even though of course he also sat right at its centre as the guru on how properly to calculate the correlations resulting from brain imaging research.
Thank you for the clarification; I see your point.
However, I think we are talking about two different kinds of unverifiability. You are right that many scientific models begin with speculative hypotheses that can eventually become empirically testable. But that is not the case here.
The hypotheses of IIT or Predictive Coding are not temporarily unverifiable; they are unverifiable in principle, because they connect concepts that belong to entirely different descriptive levels. They claim that a physical or mathematical structure (e.g., integrated information, energy minimization) produces or is identical with a semantic-phenomenological phenomenon (experience, uncertainty).
This is not empirical speculation but a category error, precisely the kind of confusion that Gilbert Ryle described in his famous example: after a student had toured all the buildings of a university, he asked, "But where is the university?" The mistake lies in confusing a physical arrangement (the buildings) with an institutional meaning (the university). The same confusion occurs when physical quantities or mathematical constructs are taken to be, or to produce, semantic phenomena such as experience.
Between physics and semantics there can be no bridge law, only correlation.
A physical process can correlate with a semantic event, but it can never translate into or cause it. The relationship between brain and mind is therefore not causal but correlative: two complementary descriptions of one and the same dynamic, viewed from different epistemic perspectives.
That is why such theories can never be verified even in principle. There is no possible experiment that could demonstrate a causal transition from a physical process to a semantic or experiential one. Any attempt to do so merely redefines consciousness in physical terms and then claims success by definition.
So the problem is not that we currently lack the appropriate empirical tools, but that the conceptual architecture of these theories confuses what cannot be unified.
Not every speculation can be turned into science only those that remain within a single, coherent descriptive level can.
Elite batting or return tasks show anticipatory control because neural and sensorimotor delays demand feed-forward strategies. That is perfectly compatible with many control-theoretic explanations (internal models, Smith predictors, model predictive control, dynamical systems) that do not require Bayesian inference or a universal principle of uncertainty minimization. From "organisms behave as if anticipating" it does not follow that they literally minimize epistemic uncertainty.
2. Shannon information, thermodynamic free energy, and semantic or epistemic uncertainty are categorically different concepts.
Formal similarities (e.g., entropy, noise, and signal) do not justify treating them as identical. Variational free energy in Friston's sense is a model-relative bound on surprisal, not a physical energy quantity; and "uncertainty" here is a term defined over the probability distributions of a generative model. Sliding between these domains without a strict translation rule is a category error.
Between physics and semantics there can be no bridge law, only correlation.
A physical process can correlate with a semantic or cognitive state, but it can never produce, translate into, or explain it. The physical and the semantic belong to different epistemic domains; their connection is observational, not generative.
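To make concrete what "a model-relative bound on surprisal" means in point 2, here is a toy sketch with purely illustrative numbers (not any published model): for a discrete generative model, variational free energy upper-bounds the surprisal of an observation, and equals it only when the approximate posterior matches the true one.

```python
import numpy as np

# Toy discrete generative model (illustrative numbers only):
# hidden state s in {0, 1}, observation o in {0, 1}.
p_s = np.array([0.7, 0.3])            # prior p(s)
p_o_given_s = np.array([[0.9, 0.1],   # p(o | s = 0)
                        [0.2, 0.8]])  # p(o | s = 1)

o = 1  # the observed outcome

# Surprisal -log p(o), from the marginal likelihood of the observation.
p_o = np.sum(p_s * p_o_given_s[:, o])
surprisal = -np.log(p_o)

def free_energy(q):
    """Variational free energy F = E_q[log q(s) - log p(o, s)]."""
    joint = p_s * p_o_given_s[:, o]   # p(o, s) for each s
    return np.sum(q * (np.log(q) - np.log(joint)))

q_guess = np.array([0.5, 0.5])            # some approximate posterior
q_true = (p_s * p_o_given_s[:, o]) / p_o  # the exact posterior p(s | o)

assert free_energy(q_guess) >= surprisal           # F bounds surprisal from above
assert np.isclose(free_energy(q_true), surprisal)  # tight at the true posterior
```

The point of the sketch: F is defined only relative to a chosen generative model and a chosen q. Nothing in it is a physical energy; it is bookkeeping over probability distributions.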
3. What would count as a risky, discriminative prediction?
If "organisms minimize uncertainty" is to be an empirical claim rather than a post-hoc description, it must yield a pre-specified, falsifiable prediction that (i) distinguishes the FEP/predictive-coding framework from alternative non-Bayesian control models, (ii) is measurable in the organism itself (not only in our statistical model), and (iii) could in principle fail.
Without such criteria, the principle remains unfalsifiable and collapses into metaphor.
So the issue is not anticipation or control per se; I fully agree that organisms stabilize their internal dynamics. The issue is the illegitimate conceptual leap from physical energy flow to semantic uncertainty, and from probabilistic modelling to biological reality. That's precisely the confusion I am objecting to.
So you accept the principle of feedforward in general, just not in Friston's particular case? And yet Friston is generalising the feedforward story as the particular thing he is doing? :chin:
Quoting Wolfgang
Or instead trying to unify the two perspectives that need to be unified.
The issue in the 1990s was the question of which paradigm was the best to model neurobiology. Was it dynamical systems theory of some kind, or a computational neural network of some kind? Both seemed important, but it was a little mysterious as to which way to jump as a theorist.
Friston for example was interested in Scott Kelso's coordination networks and neuronal transients as representing the strictly physicalist approach - dynamical self-organisation. But also in generative AI models like Helmholtz machines as the alternative of an informational approach.
So a lot of us were torn in this way. Was the brain a fundamentally analog and physical device, or instead better understood as fundamentally digital and computational? Was the brain trafficking in entropy or in information?
I found my answer to this conundrum in biosemiosis - a movement in theoretical biology where hierarchy theorists were just discovering the useful connection between dissipative structure theory and Peircean semiotics.
Friston found his resolution in Bayesian mechanics - a more austere and mathematical treatment that boiled the connection down to differential equations. But saying essentially the same thing.
So what you see as the bug is what I see as the feature. Finding a way to tie together the physical dynamics and the information processing into the one unified paradigm.
Of course Friston could be accused of just being too sparse and general in offering a bare algorithm and not a larger metaphysics. And I would agree. But I also see it as still being part of the same important project that I just described.
For myself, I am concerned with how this semiotic connection is actually made. And that has become its own exciting story with the rapid advances in biophysics, as I outlined in this post some years back - https://thephilosophyforum.com/discussion/comment/679203
Quoting Wolfgang
A sweeping statement. As I argue, what is needed is not a law but a bridging mechanism. And that is what biophysics has provided.
Quoting Wolfgang
Yeah. I just see this as missing the point as to what the game is about. It is not about the best model of predictive coding. It is about how to bridge between the control model - implemented as flesh and blood biology - and the entropic world that as an actual organism it is meant to be controlling.
Let's not forget this is a problem of biology and not computer science. How do you get consciousness out of genes and biochemistry? What does a modelling relation look like in those terms?
Which is again why the Bayesian Brain approach is an advantage, being generalised to a level beyond the choice of hardware.
Quoting Wolfgang
And I say it is the leap that in the 1990s only a relative few understood was the leap that needed to be made. Friston in particular shaped my view on this.
I was talking to Chalmers, Block, Baars, Koch and many, many others too. But there was a reason that when Friston's name was mentioned, serious neuroscientists gave a knowing nod that he was quietly in a different league. The one to watch.
The problem is not the wish to connect physics and meaning; it is the belief that this connection can be realized by a physical mechanism.
When you speak of a "bridge mechanism", you already presuppose that there is a level of description where semantics becomes physics. But this presupposition is itself metaphysical. Biophysics may show correlations between energy dissipation, metabolic regulation, and neural complexity, but it does not and cannot show how meaning arises from these processes. It only shows how living systems correlate with the conditions that make meaning possible.
The difference is not rhetorical but categorical.
Physical systems can organize, synchronize, and self-stabilize, all of which can be formally modeled. But semantic reference, intentionality, and subjective experience are not additional physical phenomena that arise through complexity. They are descriptions that belong to a different epistemic domain.
There can be no bridge law between the two, because any such law would require a common metric, and there is none.
A neuron can fire, but it does not mean.
Energy can flow, but it does not know.
Meaning appears only at the level of systemic coherence, where correlations are interpreted; and interpretation is not a physical operation but an epistemic one.
So when you say that biophysics has already provided the bridge, I would say: it has provided the conditions of correlation, not the transition itself. What you call a bridge is in truth an interface of perspectives, not a mechanism.
This is why the Free Energy Principle cannot unify physics and semantics; it only overlays one vocabulary on top of the other.
It does not explain how consciousness arises; it only reformulates life in terms of a statistical metaphor. And that is precisely the point where the philosophy must step in again.
I think I mentioned this before:
it is as if one tried to explain social bonding through magnetism, simply because both use the category of attraction.
The concept may be shared, but the phenomena have entirely different origins and they cannot be causally connected.
Imagine a craftsman who begins to work without knowing how to handle his tools. That, in essence, is how many theorists of consciousness operate. They juggle terms like information, causality, or uncertainty without realizing that these belong to different descriptive domains and therefore cannot be meaningfully combined.
The problem is that philosophy never truly established a methodological discipline. In the natural sciences, methodological rules are explicit and binding; in philosophy, one still prefers improvisation. Speculation becomes a virtue, and rhetorical elegance a substitute for conceptual clarity. The less one understands, the more mysterious and therefore profound the subject appears.
Modern philosophy has thus taken on the character of a stage performance. When David Chalmers, with long hair and leather jacket, walks onto a conference stage, the wow effect precedes the argument. Add a few enigmatic questions ("How does mind arise from matter?") and the performance is complete.
Koch and Friston follow a similar pattern: their theories sound deep precisely because almost no one can truly assess them.
Philosophers, for their part, often lack the mathematical literacy to distinguish between a formal structure and an ontological claim. They swallow every equation presented to them as if it were metaphysical truth. Those who insist on sober analysis, on clarifying terms before admiring them, are dismissed as dull or pedantic.
Yet that is precisely what philosophy was meant to be: the discipline that distinguishes between what can be said and what can only be imagined. Without this methodological backbone, philosophy turns into a spectacle: a show of intelligence without understanding, a theater of thought where metaphysics masquerades as science.
OK, I think I 90% agree with you.
As I have been saying in the parallel thread on the hard problem of consciousness, the problem of explaining subjective experience in a scientific model looks intractable.
I can't imagine a set of words I could write on a page that would enable a person with no color vision to experience red. Or for me to imagine what ultraviolet looks like to birds.
These things seem absurd, yet the problem of trying to explain experience itself in a scientific model is pretty much the same. Note: the experience itself, not the correlates of the experience.
I think the first area of disagreement would be that I can't claim it's impossible. It seems highly implausible that some words could make me imagine a new color, say, but I am not aware of a proof from first principles. And generally we should be careful not to prematurely claim that things are impossible.
Secondly, and this might not be a disagreement as such, but the level of verification that is possible for these models remains very high, even if experience itself remains a black box. For instance, we could find a particular neural structure in the brain that is essential to triggering pain, and not only that but be able to make testable predictions of how much pain someone will experience based on the pattern of activation and their own specific neural structure.
(In principle I mean, I know measuring individual neurons in vivo is basically not a thing yet)
So I don't see it as either "solve the hard problem of consciousness" or "worthless". Figuring out the neural correlates can get us knocking on the door of consciousness (and indeed be medically and scientifically useful). Even if the door looks more like a brick wall.
> Okay, I think I agree with you 90%...
That already means a lot in this field, and I think our 10% disagreement is not about facts but about the very kind of question we are asking.
> I can't imagine any words I could write on a piece of paper that would make a person without colour vision experience red...
Exactly. What you describe here is not a limitation of empirical science, but of translation between descriptive levels.
No symbolic or physical operation, no arrangement of letters, equations, or neurons, can generate the phenomenal content of red, because the phenomenal and the physical are not commensurable domains.
> I think the points where we disagree are: first, I cannot claim it is impossible
I would say it is not empirically impossible, but conceptually impossible.
To explain experience in physical or functional terms is like trying to explain why water is wet.
The question sounds meaningful, but it secretly fuses two descriptive frameworks:
water belongs to the physical domain, wetness to the experiential one.
You can describe all the molecular interactions of H₂O without ever reaching the concept of wetness, because wetness exists only from within a certain scale and relation: that of embodied perception.
Likewise, consciousness is not something that can result from physics; it is the epistemic context in which physics appears as physics.
> And in general we should be careful not to say things are impossible too easily.
I completely agree, but "impossibility" here does not mean unreachable by future science; it means misstated at the logical level.
If a question collapses categories (asking how a physical state becomes a subjective one), then the problem is not unsolved but ill-posed.
> The possible degree of verification of such models is still high... we might find specific neural structures that correlate with pain...
Yes, and correlation is the correct word.
Neural correlates of consciousness are entirely legitimate research.
But they never explain why those correlates accompany experience, only that they do.
They help us predict and intervene (which is medically crucial), but prediction is not explanation.
It tells us nothing about the epistemic relation between the measurable and the felt.
> So I do not see this research as a solution, but not as useless either...
I agree. It is not useless at all; it maps the interface between physiology and phenomenology.
But this interface is not a causal bridge.
We can touch the door of consciousness, as you say, but not because we are about to open it; rather because we have finally realized that it was never a door in the physical sense to begin with.
In other words:
Whenever a question asks how matter gives rise to mind, it already contains the confusion that makes it unanswerable.
Mind and matter are not cause and effect, but two correlated descriptions of the same systemic reality one internal, one external.
To search for a causal connection between them is like asking what makes water wet: the answer is not hidden; it's a category mistake.
The first is the one already mentioned: it conflates two descriptive levels, the physical and the semantic, and then asks how one could possibly give rise to the other. This question is not profound; it is ill-posed.
The second is subtler: it assumes that mind must arise from matter, when in fact it arises from life.
If you reduce a physical system, you end up with particles.
If you reduce a living system, you end up with autocatalytic organization: the self-maintaining network of chemical reactions that became enclosed by a membrane and thus capable of internal coherence.
That is the true basis of life: the emergence of a causal core within recursive, self-referential processes.
From there, consciousness can be understood evolutionarily, not metaphysically.
At the neurophysiological level, one might say that in associative cortical areas, sensory inputs converge and integrate into dynamic wholes.
Through recursive feedback between higher and lower regions, the system begins to form something like a mirror of itself.
When these integrated representations are re-projected onto the body map, they generate what we call feeling: the system's own state becoming part of its model of the world.
In that sense, consciousness is not something added to matter, nor an inexplicable emergence; it is the self-reflection of an autocatalytic system that has become complex enough to model its own internal causality.
Of course, this is not a solution to the hard problem in the usual sense, because no such final solution exists.
But it offers a neurophysiological direction that might lead toward a satisfactory description:
not a metaphysical bridge between mind and matter, but a consistent account of how recursive, life-based systems can generate the conditions under which experience becomes possible.
No more than any other scientific theory. I mean, the reason why predictive coding (as a specific machine learning architecture) became popular is because machine learning architectures were designed that described actual neural responses. So this theory can be empirically evaluated as much as any other scientific theory in the sense that you can build models and test them.
Now, Friston's free energy principle is a mathematical principle that is unfalsifiable and much more general than any specific theory about the brain...
but specific kinds or families of predictive processing models for what neurons and the brain do are obviously testable, which is what I am talking about in the first paragraph.
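As an illustration of the kind of model that can be compared with data, here is a minimal single-unit predictive-coding sketch. It is my own toy construction, not any specific published architecture: an internal estimate is updated by the prediction error between it and the incoming signal.

```python
# Minimal predictive-coding unit (toy sketch): an internal estimate mu
# is driven by the prediction error between it and the input x.
def predictive_unit(x_sequence, lr=0.1):
    mu = 0.0
    errors = []
    for x in x_sequence:
        err = x - mu       # prediction error
        mu += lr * err     # error-driven update of the prediction
        errors.append(abs(err))
    return mu, errors

# With a constant input, the prediction error shrinks over time; this
# kind of quantitative trajectory is what can be compared against
# recorded neural responses (e.g., adaptation to repeated stimuli).
mu, errors = predictive_unit([1.0] * 50)
assert errors[-1] < errors[0]
assert abs(mu - 1.0) < 0.01
```

A model like this makes falsifiable quantitative claims (the shape and rate of the error decay), which is all I mean by "testable" here.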
Quoting Wolfgang
But as I already explained, this is a crucial point of Friston's theory. He is not saying that organisms have some intentional uncertainty minimization thing they do. He is saying that in order to exist, they have to look like they are doing that. It doesn't matter how that is achieved, which is why his theory generalizes across many domains. Evolutionary natural selection can be framed as free energy minimization.
Quoting Wolfgang
At the highest level of generality, Friston's theory is more of a general mathematical principle that can be shown to hold for complicated systems regardless of the specific way they are described.
In this paper, he makes a great deal of effort to connect his principle to physics - statistical, Newtonian and quantum mechanics - to emphasize the generality of the description as applying to random dynamical systems, of which fundamental physics might be seen as special cases.
https://arxiv.org/abs/1906.10184
The relationship between free energy minimization and stochastic systems was shown before Friston even started on his idea: e.g.
https://scholar.google.co.uk/scholar?cluster=17970774975628711245&hl=en&as_sdt=0,5&as_vis=1
So the category error you accuse him of doesn't hold because the theory is much more general than you suggest.
Quoting Wolfgang
This seems to be your main issue, which is fine, because these theories aren't meant to solve the hard problem of explaining subjective experience.
You raise several important points, but I think we are still talking at different levels of description.
When I say that such theories are not verifiable, I don't mean that one cannot empirically test specific predictive-processing models or machine-learning architectures inspired by them.
Of course, such models can be tested and many have been.
But that is not the same as testing a theory of consciousness that builds upon them.
A model that reproduces neural activity or perceptual prediction errors is not thereby a model of experience.
It models signal processing, not consciousness.
You are right that Friston's Free Energy Principle is mathematically unfalsifiable.
But that is precisely the problem.
It is so general that any system can always be re-described as minimizing free energy.
This generality immunizes it against falsification and therefore removes it from the domain of empirical science.
That is not a technical issue, but exactly the category error at stake:
mathematics, physics, and semantics are being fused into a pseudo-unity, although they belong to different descriptive domains.
When Friston says that organisms must appear to minimize uncertainty in order to persist, this is a semantic paraphrase of a mathematical tautology:
Any system that continues to exist must stabilize its internal states within certain bounds.
That is trivially true, but it says nothing about consciousness or intentionality.
The fact that the principle can be applied to statistical, Newtonian, or even quantum systems only demonstrates its formality, not its explanatory power.
A formalism that fits neurons, molecules, and galaxies alike cannot, without an additional epistemic framework, explain experience or cognition.
It simply re-describes physical regularities in probabilistic language.
Hence, the category error remains:
Mathematical structure ≠ physical mechanism ≠ phenomenal meaning.
There is no bridge law between these levels, only semantic correlation.
And this also reveals the deeper contradiction:
Friston and his followers often claim that subjective experience (qualia) is not the target of the theory.
But to describe the brain as a predictive, inferential, or representational system is already to invoke the phenomenal domain.
You cannot speak of prediction or inference without presupposing a model that experiences something to be predicted or inferred.
To exclude qualia while describing perception is therefore not a modest limitation; it is a category mistake.
It eliminates the very phenomenon that the terms presuppose.
In the same sense, to speak of a unified brain theory is necessarily to speak of a theory of consciousness, because it aims to integrate perception, action, memory, and self-organization, all of which are phenomenally defined.
A unified brain theory that excludes consciousness is like a unified theory of music that excludes sound: mathematically coherent, but phenomenologically empty.
The Free Energy Principle may serve as a metaphorical ordering principle for self-organization, but it is not, and never was, a theory of consciousness.
This raises the simple but essential question:
What is Friston's theory actually for?
What does it allow us to know that we did not already know?
If it is neither empirically testable nor conceptually coherent, it remains a formal metaphor: a kind of mathematical cosmology of life that explains everything and therefore nothing.
A theory that can be applied to all systems explains no system in particular; it produces not knowledge, but only a symbolic sense of connectedness.
A real theory of consciousness must instead explain the transition from physical stability to autocatalytic self-reference: the formation of a causal core in which internal states are not only maintained but recursively interpreted.
Only at that level, the level of life itself, can consciousness arise.
If, as you say, Friston's theory is not intended to explain subjective experience, then it is simply not a theory of consciousness and should not be presented as such.
It might then be classified as a general systems formalism describing the dynamics of self-stabilizing structures.
That is perfectly legitimate, but it places the theory outside the epistemic scope of what is usually meant by a theory of consciousness.
In that case, it explains neither the emergence of experience nor the relation between neural activity and awareness; it only offers a high-level metaphor for persistence and adaptation.
But that is not consciousness; it is life in probabilistic notation.
And here lies the fundamental dilemma:
If a theory claims to explain consciousness, it faces the category error I have described.
If it does not, it becomes irrelevant to the problem it is most often cited for.
Either way, the result is the same:
the Free Energy Principle is not wrong but misapplied: a powerful mathematical metaphor mistaken for a theory of mind.
I don't think this matters if you treat it in the proper sense as a conceptualizing framework. If you can have testable theories at a lower level, then there's no issue. It's like criticizing mathematics for being unfalsifiable when that's not the point of mathematics. Mathematics can be used as a tool for describing scientific theories.
I think at the core, you are thinking about Friston's theory in terms of subjective experience. This is not what it is about in any sense.
The category error is yours for thinking a theory or principle is about something that it is not intended to be about.
Quoting Wolfgang
I don't think this is true. You can describe a single neuron as doing predictive coding. I don't think most people believe we need to ascribe experience to it. You might say "prediction" or "inference" is the wrong word because in your head they are somehow connected to qualia, but that's then just semantics that has no bearing on the validity of the models or what they are intended to do.
Quoting Wolfgang
It is a conceptual framework in which you can give things a formal description. I would say the benefit is conceptualizing how the world works, just like what philosophy does in general. Philosophy doesn't necessarily provide us with new knowledge about the world, but people use it to organize their concepts of the world in a self-consistent way. As a mathematical tool it provides a choice for how one can describe the systems one is interested in, like how in physics there are usually different formulations of the same theory. None of the formulations predict anything different, but they are different perspectives on the same thing.
And I will emphasize it is "conceptually coherent" in your specific sense of the phrase if we are not talking about qualia, because that's not what it's intended to do and not what most neuroscientific theories are about. We are dealing with the easy, not the hard, problems of consciousness. In that sense, any of these theories from neuroscience can be fully consistent theories of consciousness (in the easy sense). They are not intending to solve hard problems.
Quoting Wolfgang
I think it is worth noting there is obviously a continuum between inanimate things and animate living things, with no strict dividing line. It is then unlikely that there can be a single, unique theory of living things or consciousness, because any such theory will have to make arbitrary distinctions about where living things end and non-living things start. Clearly then, a theory which fully encompasses the continuum must be maximally general, but that doesn't mean it is mutually exclusive of other theories of lesser generality. We shouldn't be looking for a single unique theory that explains everything. We need a plurality of tools that describe phenomena at various levels of generality, from the highest to the lowest.
Quoting Wolfgang
Again, the category error is yours, because the theory can be used for things, and many papers have been written using it to construct models that are even compared to data. The category error is yours in thinking that it is meant for explaining phenomenal consciousness rather than for those things. The theory is being used for all the things it is good at in those papers. It is not being misapplied; you are mislabeling it.
You are right that a formal framework can serve as a useful tool.
Mathematics itself is not falsifiable, but it also does not make empirical claims.
The Free Energy Principle (FEP), however, is not presented as a mere formalism; it is promoted as a scientific account of how organisms, brains, and even societies maintain their organization by minimizing free energy.
The moment such a statement is made, it leaves the purely formal domain and enters the empirical one, and therefore becomes subject to falsification.
Otherwise, it is not a scientific framework but a metaphysical one.
This is exactly the issue identified by Bowers and Davis (2012), who described predictive processing as a "Bayesian just-so story":
a framework so flexible that any observation can be redescribed post hoc as free-energy minimization.
A theory that can explain everything explains nothing.
It becomes a formal tautology: a mathematical language searching for an ontology.
The same problem appears in the well-known Dark Room Argument (Friston, Thornton & Clark, 2012).
If organisms truly sought to minimize surprisal, they would remain in dark, stimulus-free environments.
To avoid this absurdity, the theory must implicitly introduce meaning, assuming that the organism wants stimulation, prefers survival, or seeks adaptation.
But these are semantic predicates, not physical ones.
Hence, the principle only works by smuggling intentionality through the back door: the very thing it claims to explain.
Even sympathetic commentators such as Andy Clark (2013) and Jakob Hohwy (2020) have admitted this tension.
Clark warns that predictive processing risks epistemic inflation: the tendency to overextend a successful formalism into domains where its terms lose meaning.
Hohwy concedes that the FEP is better seen as a framework than a theory.
But that is precisely the point:
a framework that lacks clear empirical boundaries and shifts freely between physics, biology, and psychology is not a unifying theory; it is a semantic conflation.
Your second point, that terms like "prediction" or "inference" can be used metaphorically for neurons, simply confirms my argument.
If those terms are metaphorical, they no longer describe what they literally mean;
if they are literal, they presuppose an experiencing subject.
There is no third option.
This is the very category error I referred to: a semantic predicate (inference, prediction, representation) applied to a physical process, as if the process itself were epistemic.
To say that Friston's theory is not about qualia does not solve the problem; it reveals it.
Once you speak of perception, cognition, or self-organization, you are already within the phenomenal domain.
You cannot meaningfully explain perception without presupposing experience; otherwise, the words lose their reference.
A theory of consciousness that excludes consciousness is a contradiction in terms: a map with no territory.
You also mention a continuum between life and non-life.
I agree.
But the decisive transition is not a line in matter; it is the emergence of autocatalytic self-reference:
the moment a system begins to interpret its own internal states as significant.
That is not a metaphysical distinction but a systemic one.
And no equation of free energy can account for it, because significance is not a physical magnitude.
To compare FEP with mathematics therefore misses the point.
Mathematics is explicitly non-empirical; FEP oscillates between being empirical and metaphysical, depending on how it is defended.
That is precisely what renders it incoherent.
Finally, if, as you and others claim, the theory is not about subjective experience,
then it should not be presented as a theory of consciousness at all.
Otherwise, it becomes exactly what I called it before:
a mathematical cosmology of life that explains everything, and therefore nothing.
References
Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138(3), 389-414.
Friston, K., Thornton, C., & Clark, A. (2012). Free-energy minimization and the dark-room problem. Frontiers in Psychology, 3, 130.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181204.
Hohwy, J. (2020). The Self-Evidencing Brain. Oxford University Press.
:100:
More or less the 'non-reductionist physicalist, embodied functionalism' story I tell myself too.
Quoting apokrisis
:up: :up:
Sure. The molecule must be a message. But it is also a molecule. And there must be a message.
The issue is not semantics becoming physics. It is semantics regulating physics in a way that builds more of the semantics. An organism that lives as it entropifies in some purposeful and self-controlled fashion.
Quoting Wolfgang
Now you have reified things in the way Ryle criticises. Or at least you don't see that you have jumped a level of semiosis to talk about the socially-constructed level of mind. Animals are conscious in an immediate and embodied fashion. Humans add self-consciousness as they regulate their consciousness through a collective socialised notion of what it is like to have a mind, one with semantic reference, intentionality, and subjective experience.
These kinds of thoughts have no way of forming in the mind of an animal. But they are the jargon that a self-regulating human is expected to employ to regulate their behaviour as part of a higher-level social narrative.
So connecting consciousness to neurobiology is one thing. Connecting self-awareness as a narrative habit that can then regulate human behaviour according to socially-evolved norms is another thing.
This is why neuroscience would not talk much about consciousness but about attentional processing and predictive modelling and the other functional aspects appropriate to a neurobiological level account. If you want a model of the stuff you are talking about, call in the social constructionist. The issue is about how words and numbers organise the human mind, not how neurons and genes organise the biological level of mind.
Quoting Wolfgang
Read the link I provided and you will see you are off the mark. The issue was how information could even affect entropy flows. The critical finding was that all the various physical forces converge to the same scale at the quasi-classical nanoscale and so can be switched from one form to another at no cost by a semantic network of molecular machinery.
A cell was once thought of as just a bag of autocatalytic chemistry - toss in enzymes at the right moment and watch the metabolism run. But now the model has completely changed to a biosemiotic one.
Quoting Wolfgang
But I was there. I had lunch with Chalmers and Koch the day they launched their projects of the hard problem and the neural correlates of consciousness. I quizzed them and continued to do so. I agree that each was shallow.
And likewise, I spent time with Friston when he was just the statistics guy. I could see he was in a completely different class of seriousness. Which is why I object to your OP, which couldn't tell the two apart.
The field of consciousness studies attracts every kind of crackpot and chancer. Everyone reasons: well, they are conscious, so they must already be an authority on the subject. It is thus important to know who is doing the serious work.
Quoting Wolfgang
Yes I agree we need to reduce to the point where life begins. But now you illustrate the mistake of talking just in terms of the physics - cells as bags of metabolism - and ignoring the semantics that must somehow be organising the chemistry.
The biophysical surprise is that the interface that produces this organic state of being is so thoroughly mechanical. A story of molecular switches, ratchets, clamps and motors. And that this molecular machinery is mining the possibilities of physics at the quantum level while existing in the midst of a battering thermodynamical storm.
If you are still thinking of bags of autocatalytic chemistry, you are stuck back in the 1980s, when the issue was how to square this kind of new complexity-theory model of self-organising physics with the kind of informational mechanism that a genome is, one needing some kind of semantic bridge to harness that type of physical potential. Information needed to do less work if physics could organise itself. But it still had to do some absolutely critical work.
Hence biosemiosis. The search for how self-information could connect to dissipative structure. And the answer has turned out to be molecular machinery doing quantum biology.
I am no longer willing to tolerate your arrogance and condescension.
Your comments have crossed the line between discussion and self-display,
and they make it evident that you lack the philosophical and epistemological competence
to address these questions in any meaningful way.
Anyone who uses terms such as semantics, physics, and biosemiosis in a single breath
without understanding their categorical separation should first study the foundations of philosophy
before accusing others of misunderstanding.
This conversation is over.
There is no point in continuing with someone who mistakes opinion for knowledge
and anecdote for argument.
I nonetheless wish you well in your further search for clarity;
perhaps philosophy itself may help before you misrepresent it again.
A brief afterthought.
Those who have truly shared a table with Chalmers, Koch, or Friston rarely feel the need to say so.
Serious minds tend to argue ideas, not acquaintances.
Authentic conversation leaves no room for name-dropping; it reveals itself through clarity, not proximity.
Your rhetoric suggests admiration without understanding,
and a desire to belong where genuine thought begins by standing alone.
Well, no, imo, because it's a description of what the maths says. It's directly analogous to variational principles of least action in physics, which don't specifically have empirical content because they are formal tools used to describe lots of different things in physics.
https://en.wikipedia.org/wiki/Action_principles
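To make the analogy concrete, a stationary-action principle has the schematic textbook form:

```latex
S[q] = \int_{t_0}^{t_1} L\big(q(t), \dot{q}(t), t\big)\, dt,
\qquad \delta S = 0 .
```

The physical trajectory is the one that makes the action $S$ stationary; the principle itself has no empirical content until a particular Lagrangian $L$ is supplied, which is the sense in which such variational principles are formal tools rather than empirical claims.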
Quoting Wolfgang
But this is the point. Free energy minimization gives you a framework where you can write down the equations describing the conditions for a system to maintain its own existence over time. That might not be interesting for a rock, but I think that's quite interesting for more complicated self-organizing systems. It's a framework for describing what complex self-organizing systems do, like choosing to describe physical systems as following paths of least action.
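For reference, the central quantity here is the variational free energy $F$, whose standard decomposition (with approximate posterior $q(x)$ over hidden states $x$ and sensory states $y$) is:

```latex
F = \mathbb{E}_{q(x)}\!\left[\ln q(x) - \ln p(x, y)\right]
  = D_{\mathrm{KL}}\!\big(q(x)\,\|\,p(x \mid y)\big) - \ln p(y)
  \;\ge\; -\ln p(y).
```

Because the KL divergence is non-negative, a system that minimizes $F$ keeps the surprisal $-\ln p(y)$ of its sensory states low, i.e. it stays within the states characteristic of its continued existence, which is the formal content of the claim above.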
Quoting Wolfgang
Quoting Wolfgang
False. A system will have wants or preferences, or will seek things out, because if it doesn't it dies. And the point of FEP is that any system that continues to exist looks as if it is modelling its environment. Things like preferences are an inherent part of that model and are necessitated. It comes for free. Nothing is required to be smuggled in.
Quoting Wolfgang
Yes, and Friston sees it the same way; I have been saying that too. At the same time, brains, and what brains do in terms of constructing models and fulfilling the predictions of an organism, are clearly a corollary of complicated systems that need extremely complicated forms of self-regulation in order to continue to survive. And we can use more specific, testable models of predictive processing or the like to describe what brains do.
Quoting Wolfgang
Disagree, it is a rigorous mathematical framework [s]with provable claims[/s] whose central, general claims are provable. Obviously, with regard to brains, it makes no specific predictions. But as a unifying theory of self-organization, it does exactly what it says on the tin, and it's impossible for it to be any more precise empirically, because the notion of a self-organizing system is far too general to have any specific empirical consequences. Exactly the same goes for a "general systems theory". Nonetheless, this theory fundamentally describes, in the most general sense, what self-organizing systems do, and gives you a formal framework for talking about them which you can flesh out with your own specific models in specific domains.
Quoting Wolfgang
As I've said before, this isn't because the theories are invalid. Predictive models of neurons are testable and can replicate single-neuron responses. They are doing exactly what they should, in an effective way.
The issue is that you want them to describe something else. And that's fine, but no theory in neuroscience has ever claimed to explain subjective experience, nor do they want to. That's not the most interesting part of neuroscience.
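A minimal runnable sketch of the kind of model meant here, assuming a single scalar unit with a fixed learning rate (both are illustrative choices, not any published neuron model): it isolates the core loop predictive models share, updating an internal estimate by gradient descent on the prediction error.

```python
# Minimal predictive-coding-style unit (illustrative sketch only).
# `mu` is the unit's internal estimate of a hidden cause; each step
# reduces the squared prediction error (signal - mu)**2 by moving
# mu a fraction `lr` of the error toward the observed signal.

def run_unit(signal, n_steps=200, lr=0.1):
    mu = 0.0                  # internal estimate, starting at a zero prior
    for _ in range(n_steps):
        error = signal - mu   # prediction error
        mu += lr * error      # gradient step on the squared error
    return mu

estimate = run_unit(signal=1.5)  # converges toward the input signal
```

In a real predictive-processing model the updates would be hierarchical and precision-weighted; the sketch only shows the error-minimization step that makes such models testable against recorded responses.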
Quoting Wolfgang
Nothing incoherent. It's a mathematical framework you can use as a tool to describe self-organizing systems, and it can be put to effective use for many purposes. This is an interesting one I have cited before:
https://arxiv.org/abs/2502.21217
I take the point that on its own FEP is not a theory of consciousness (and I am talking in the easy sense, NOT about subjectivity), because it is too general, but I think it can have an important role in the hierarchy of different ways, and different theories, of describing what brains do as living, self-organizing systems. And I think the generality is a positive, because it necessarily acknowledges that you will never find strict boundaries or dividing lines between conscious and non-conscious, living and non-living, and I don't think you can have a full account of these things without that acknowledgement.
Edit: spelling and crossing out
Precisely. It is not a theory of consciousness but a meta-theory of self-organisation. And one large enough to encompass the meta-theories of self-organisation that had arisen already in the different metaphysical settings of physicalism and semantics. So Wolfgang is comparing apples and oranges. :up:
It would be guilty of the dualism of Cartesian representationalism if it were not instead the enactive story of a self in interaction with its world. The triadic thing of a semiotic modelling relation where reality emerges as the mediating truth of an Umwelt.
So the question is not how to explain mental subjectivity in physically objective terms - the Cartesian hard problem we are all familiar with. It is instead the different question of how an Umwelt can emerge that connects a model and a world in a mediated relation. How can such a thing be? And why would it so naturally evolve?
Your critique is based on attacking metaphysical dualism. And that is fair enough. Do plenty of that myself.
But then Peircean semiotics in particular understood that dualism in nature is really the symmetry breaking of a dichotomy which then resolves itself in the recursive order of a hierarchy. Sort of like Hegelian dialectics, but with the added holism of recursive scale.
Anyway, a theory of Umwelts would be the target here, not a theory of subjectivity. An Umwelt is not a representation of the real world. It is the state of an interpretation which puts an organism into a pragmatic accommodation with the environment required to sustain its being.
That is sort of like looking at something as if it is really there. But as the enactivists would say, what is seen is a panorama of affordances. All the little levers and buttons we might pull or push to get things done in a way we might desire.
The Umwelt that arises as the mediating connection between self and world is not merely a model. It is a model of a world as it would be with a self existing at its semantic core. The world as a sweeping entropic flow rushing heedlessly towards its heat death. But now also with a central us included in this panorama - the bit that makes the sunset look so rosily beautiful and the apple so appealing to the bite. That is, until the model is updated because an ugly wormhole has been noted and expectations have formed about the grub shortly to be discovered in that first juicy crunch.
So Friston's dualism is not in fact a dualism at all. It is a model of this semiotic modelling relation. It is a theory not of subjectivity as a special kind of mind stuff but of the Umwelt as the mediating connection by which a system of semantics can engage with a system of thermodynamics and immediately, habitually and reflexively, see it all in terms of the availability of an exergy or capacity for work. The shortest available path to achieving whatever it is we might happen to desire.