On delusions and the intuitional gap
Most philosophical stances w.r.t. the nature of consciousness are derived from faulty intuition. What we call the explanatory gap is only an intuitional gap.
At least, that's an argument that I want to test the waters on.
I have a materialistic theory of consciousness that I believe provides a good in-road into explaining phenomenal consciousness. I'll give the details of that theory in another post soon. For now, I wanted to understand how such a claim would be taken - its strength. Let me explain...
I take a materialistic stance w.r.t. the nature of both access and phenomenal consciousness. I believe they can be explained through physical processes that we already understand. The usual argument against such a stance is that it leaves an explanatory gap - that consciousness "feels" a certain way that cannot be explained mechanistically, representationally, reductively, or any other variation on the theme.
Point number 1:
- Our intuition is the source of that complaint. We have no logical grounding to say yea or nay to whether computational physical processes could yield conscious feels. I think it was Ned Block in "The Harder Problem of Consciousness" who argued strongly that we have no grounding for such discussions. So why do we hold so strongly to such views? Clearly it's our intuition. It just "seems wrong".
There's nothing inherently wrong with using our intuitions. Most of philosophy and science is driven by it. I'm a software engineer by trade, and my intuition is usually the first thing I rely on to identify the cause of a bug. But intuition is only good as a starting point, and must be followed up by analysis, empiricism, or both.
It's worth asking why our intuition suggests what it does w.r.t. conscious feels and mechanistic explanations. And to what extent should we trust it? We have only one source of information about conscious experience - our own. Not even yours, or theirs, just my own. A data point of one. We have no knowledge of any sort w.r.t. the experience, or lack thereof, of any other process. Thus, it's natural that our intuition should suggest that the only other kinds of things that experience anything like what we experience are those things that are very closely the same as ourselves. Our intuitions should be critiqued in the same way that we critique the ChatGPTs of the world - they're just extrapolating and mimicking what they've already seen. When the IntuitionGPT has only ever seen one example, it'll assume that that's the only outcome.
How trustworthy is that intuition? It's not. Kant's explanation that we cannot objectively know anything about the world is proving ever more prescient. Neuroscience has confirmed that our perceptions are hallucinations designed to approximate certain aspects of the outside world - they're optimized to feed us the information we need for our survival - they're not optimized to accurately represent the world (see Hoffman's Interface Theory of Perception for one particular explanation).
Point number 2:
- Our perception of consciousness is equally subject to the same perceptual hallucinations as all other perceptions.
This second point will likely not be accepted by all. I am biased here by my own theories of the mechanisms involved. But you must at least accept that this point is both possible and likely, given our more recent understanding of the sheer fallibility of almost everything that goes on in the brain.
In brief, I hold that the content of consciousness is a high-level summary of the general "goings on" within the brain. A Higher Order Thought (HOT), or a Higher Order Perception (HOP), if you will, but without most of the prematurely decided-upon details of the variants of those theories. Exteroceptive and interoceptive senses provide perceptual state to the brain; the brain processes that in certain ways, resulting in both brain state (at any given moment in time) and brain behavior (brain dynamics over time). I believe that the content of consciousness is a dimensionally reduced summary of all of that. And that conscious content is "for" the same kind of thing that externally-focused perceptions are "for" - providing sufficient information about the (external or internal) world in order to aid survival. Thus, just as for other perceptions, perception of consciousness is a hallucinated approximation of what's going on in the brain. The brain thinks it knows what it's doing, but in reality it has hardly any clue.
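Being a software engineer, the easiest way for me to convey the flavour of "dimensionally reduced summary" is with a toy sketch in Python. To be clear, this is purely illustrative: the linear projection (PCA), the NumPy machinery, and every number in it are arbitrary choices of mine, not claims about how the brain actually compresses anything.

```python
# Toy illustration of a "dimensionally reduced summary": a high-dimensional
# "state" is compressed into a few summary dimensions, and any reconstruction
# from that summary is only an approximation of the original.
import numpy as np

rng = np.random.default_rng(42)

# Fake "brain state": 500 snapshots of 2000 state variables, generated from
# 20 underlying latent factors plus noise. All numbers are arbitrary.
latent = rng.normal(size=(500, 20))
mixing = rng.normal(size=(20, 2000))
state = latent @ mixing + 0.5 * rng.normal(size=(500, 2000))

# PCA via SVD: keep only the top-k principal components as the "summary".
k = 20
centered = state - state.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
summary = centered @ Vt[:k].T        # the low-dimensional "report"

# Reconstructing the full state from the summary discards the fine detail:
reconstructed = summary @ Vt[:k]
err = np.linalg.norm(centered - reconstructed) / np.linalg.norm(centered)
print(f"summary keeps {k} of 2000 dimensions; relative error: {err:.1%}")
```

The point of the toy: whatever consumes the 20-dimensional summary has no way of telling what the compression threw away. On my view, that gap is where the hallucinated approximation lives.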
I could well be wrong, but I'm probably not. Not given everything else we know.
There's another point in favor of such a view. If you ignore any practical details for a moment, you could easily argue that the brain must surely have direct access to its own state. But that's impractical. The state of the brain is huge. Seriously huge. And most of it is already there for the purpose of considering the outside world. It's too huge for the brain to process for the purpose of considering itself. Any attempt by the brain to consider 100% of its own state would lead to an infinite regress in the size of the brain. Sorry Lucy, I loved that movie, but it's totally unrealistic.
Point number 3:
- We are delusional when it comes to our perception of consciousness.
Kant, the Interface Theory of Perception (a la Hoffman), and the Predictive Perception theory (a la Friston and others) all say the same thing: our perceptions of the external world are only guesses. But there's one significant thing that differentiates perception of the outside world from perception of consciousness. The outside world gives a reality check. If our perceptual approximation is too different from the actual outside world, we won't survive. We'll stub our toes against rocks that we didn't perceive. We'll fall into holes where we perceived flat land. We'll waste energy running away from imaginary dragons.
Conscious perception (i.e. perception of consciousness) doesn't have the same kind of reality check. Everything in the brain is states flowing backwards and forwards across populations of neurons. There are no solid rocks. If a conscious perception hallucinates the existence of a particular brain state, and leads to a certain brain behavior as a consequence, this just leads to different state flows. Sometimes those state flows will be so bad that they harm survival, but many others will just flow away. Brain states are soft. They're more lenient than the outside world. There's just more room for variation in brain states and brain behaviors. I might also be wrong about this point. I'm in seriously tenuous water. But any argument otherwise is equally tenuous.
The most we can say for certain is that our perception of consciousness may be completely delusional.
What does this mean?
It means that the way it "feels" to be conscious is a result of our delusional conscious perception. How accurately such a feeling conveys what is truly happening inside is highly questionable. I.e. the accuracy of conscious perception should be viewed through the same Kantian spectacles as all other perceptions. This is the complaint against introspection, on steroids. Not only is introspection highly suspect for judgements about brain function in general, it is also highly suspect for judgements about introspection itself.
Point #1 was that our discomfort with some theories of consciousness is driven by our intuition, but that our intuition is guided by a data sample of 1. Point #3 says that, not only is the data size 1, but that even that 1 sample is not to be trusted.
Again, what does this mean?
Well, there's the solipsistic takeaway - we should stop talking about this because we can't know anything.
My personal takeaway is somewhat more selfish - I want to use this as an argument for why materialism can be accepted as an explanation for consciousness.
In any case, what do you think about the argument overall?
Comments
Quoting Malcolm Lett
Kant recognized that the fundamental organizing principles making the material world intelligible to science are not located in materiality itself but are given beforehand. One can apply a Kantian approach to questions concerning the embodied nature of cognition and the organizing role of affectivity and subjective point of view in formulating empirical concepts about the world. Doing so leads to the recognition that empirical knowledge of materiality is inextricably tied to what matters to an embodied organism relative to its ways of interacting pragmatically in its physical and social environment. Subjective valuation and point of view cannot be split off from material facts; such feeling-based frames of reference define the qualitative meaning of our concepts. A fact, like a tool, is meaningless outside of what we want to do with it, what larger purposes and goals we are using it for. Every fact (the definition of a point) can be understood within an indefinite array of potentially incommensurable accounts. Which account is true depends on what we are using the account for.
I'm not saying there are no real facts in the world. I'm saying that embodied human practices are a crucial part of what it means to know the real world.
Overall, not too bad, except for the false attributions of Kantian metaphysics. It would have been better to go your own way and leave him out of it.
Which is merely a friendly way of saying my opinions would have been happier.
I think consciousness is constituted by physical processes, but then I also think the explanatory gap is reputable. I do not see why these two views are at odds. I think Chalmers believes consciousness is constituted by physical processes but (to my knowledge) he also proposed the explanatory gap. So again I am not sure what is at stake in denying an explanatory gap.
Can you say how physical processes would explain consciousness? That is, can you bridge the explanatory gap?
Thoughts can be delusional too.
I am not particularly convinced by eliminativist lines of argument. They seem to be a sort of bait and switch, or a fundamental misdiagnosis of the problem. They show all the ways in which consciousness is not what folk psychology takes it to be, and provide a lot of information about current thinking in neuroscience, but I don't think any of this actually gets at the fundamental question of "why does subjective experience exist?"
My response would be: "ok, my thoughts are not what they seem. Ok, there are lots of plausible theories in neuroscience, global workspaces make sense, recursion and "high level summary," make sense. That's all good. But how does this explain how something mechanistically produces first person subjective experience? That my experience might be different from how I describe it doesn't really say anything about why it necessarily exists given x, y, z, etc."
So then we see the next move: "well, because conciousness is so different from what it seems to be, it turns out that your need for an explanation in terms of necessity is just a bad hunch. There is no reason for you to trust that what you think is an incomplete explanation is actually incomplete."
But then you could literally apply this to any explanation of any phenomena. "Actually, the explanation is perfect, it just seems bad because your thoughts don't work the way you think they do," undermines all claims about the world.
If our core intuitions can be this wrong, and there is "nothing to explain," then I have no idea why we should be referring to neuroscience for explanations in the first place. We only have a good reason to think science tells us anything about the world if our basic intuitions have some sort of merit.
Epiphenomenalism adds another wrinkle. If mechanism is understood in current terms then it follows that mental life can have no causal powers. But then, if what we experience and think has absolutely no effect on how we behave then there is no reason for us to think our perceptions and thoughts have anything to do with the real world. Why would natural selection ever select for accuracy? What we think or experience is completely irrelevant to survival given the causal closure principle, mental events never determine physical outcomes and so the accuracy of mental experience can never be something selected for.
Hoffman, who you mention, doesn't touch on this problem, but it's particularly acute. He just assumes that the way things "seem to us" on our "dashboard" plays a causal role in survival. The causal closure principle denies this. Of course, Hoffman ends up rejecting mechanistic explanations for other reasons, but he could have just stopped here with this disconnect.
If epiphenomenalism is true, then we have no grounds for our faith in science, mathematics, etc., and no good grounds for the mechanistic view that leads to epiphenomenalism in the first place. Epiphenomenalism is self-defeating.
Now if we don't assume epiphenomenalism, then we appear to have something like strong emergence. But if we have strong emergence, then we need to explain how it works. Yet Kim's work suggests this will likely be impossible in the current mechanism-substance framework, so there does seem to be an explanatory gap here, in that some sort of paradigm shift seems needed to resolve this issue.
It's a good start to a good argument. My only quibble is the brain/body dualism.
It's common to situate consciousness in the brain, but because we're not brains, nor are we disembodied, consciousness cannot be reduced to states of the brain. Consciousness would be fundamentally different without bones, for instance, and the phenomena available to those who are standing are markedly different from those available to people lying down. Most of the body is required in order to live, let alone be conscious, so all of it needs to be included in a materialist conception of consciousness, lest he fall victim to the same dualism he accuses dualists of.
Well this is the fundamental difficulty of such arguments: "How come you know so much about how deluded we all are?" If the world we see is not the world, how can you talk about the world? It looks like some esoteric wisdom you have to claim there.
Now me, I claim that I am real and the world is real, and I don't know everything, but I know how many beans make five and that shit smells.
Then, you can take a lightbulb out of your pocket, hold it in your hand, and begin walking. As you walk, you can explain that all of these things you just described in such detail also produce electricity, as the lightbulb starts to glow.
I have a problem with this. Everything you said about the mechanics of walking accounts for the walking. I'm going to need a heck of an explanation as to how all of those things are doing the double duty of producing walking and generating electricity. Do we notice any of the events that you said are part of walking doing other things at the same time they are helping to produce walking, but that have nothing to do with producing walking? From my understanding of electricity, nothing you described accounts for it. I'm no electrician, but I know the general idea. And I can Google all day long, but I'm not finding anything that suggests the mechanics that produce walking also explain the production of electricity.
I'm going to start looking for another explanation. Maybe you're wearing something under your clothing that generates electricity with the movement of your body. Maybe there's a wire coming out of the back of your shirt that's plugged in somewhere, and goes to your hand where you hold the bulb. Maybe it turns out you're a robot, and have batteries inside your body.
You can also explain the workings of the brain in as much detail as you want. And that can be pretty extensive. This is from Darwin's Black Box, by Michael Behe:
That's the beginning of the beginning of the beginning of the beginning of the series of events and processes that explains/describes how we detect a certain range of the electromagnetic spectrum. Add some other events, and we can distinguish different frequencies within that range. More events explain how patterns of our perceptions are stored in our brain. And still more explain how what is stored becomes part of the algorithm that chooses which action we take when stored patterns are perceived again.
Then you explain that all of these things you just described in such detail also produce my subjective experience of red. I have the same problem I had with your lightbulb. Everything you said about the mechanics of vision accounts for vision. How are all of those things doing the double duty of giving us vision and generating subjective experience?
This problem is, in fact, more difficult than the walking/electricity problem. Walking is a physical process, ultimately dependent on the physical properties of particles and laws of physics. Electricity is electrons. Physical things. Particles. The defining property of these particular particles, their negative charge, accounts for different things in different circumstances. In some circumstances, it accounts for electricity.
Consciousness, on the other hand, is not an obviously physical process, clearly built up from the physical properties of particles and the laws of physics. We can't point to any property or event in the whole process of vision and say, "There! That is redness." Whatever we point to will be a physical thing that plays a part in the explanation of perceiving part of the spectrum, distinguishing wavelengths, remembering what we've seen, using memories to help make decisions...
An explanation comprises explanans and explanandum. The explanandum is what it is that needs to be explained, and the explanans is that which provides the explanation. But then, any act of explanation, including the explanation of consciousness, is a conscious act. This means that consciousness is also a part of the explanans. When we articulate a theory or a model to explain consciousness, we are doing so using our conscious understanding, reasoning, and cognitive faculties.
This dual role of consciousness leads to a sort of circularity: we use consciousness (as part of the explanans) to explain consciousness itself (the explanandum). It's akin to trying to illuminate a light bulb with its own light - it's both the source and the object of the inquiry.
Eliminative materialism doesn't address this problem. Instead, it ignores it, which is why Daniel Dennett's first book on the subject, Consciousness Explained, was parodied by many of his peers as Consciousness Ignored.
[quote=Thomas Nagel, Review of From Bacteria to Bach and Back]Dennett asks us to turn our backs on what is glaringly obvious - that in consciousness we are immediately aware of real subjective experiences of color, flavor, sound, touch, etc. that cannot be fully described in neural terms even though they have a neural cause (or perhaps have neural as well as experiential aspects). And he asks us to do this because the reality of such phenomena is incompatible with the scientific materialism that in his view sets the outer bounds of reality. He is, in Aristotle's words, maintaining a thesis at all costs.[/quote]
The problems of phenomenal consciousness are to begin with the result of tension between different intuitions. It's like you have a bunch of witnesses and their testimonies don't add up to a coherent story, one of them has to be wrong. It's no good saying "if you doubt one, you have to doubt them all, so let's just not".
Not sure what you mean here. For most of the history of philosophy it wasn't really much of an issue. There are things. Of these, some are living. Of the living things, some are animals and have sensation. That's just part of their essence.
The Hard Problem only slowly comes into focus with the attempt to reduce all things to extension in space and motion. It even sort of goes back under the radar with Newton, because now you have fundamental forces that can act at a distance, which led people to posit a similarly sui generis "life force" to explain consciousness.
But "things are only extension in space and motion," or "all that exists can be explained in terms of mathematics and computation," are not basic intuitions.
Neither is, "how does things being very 'complex' or involving lots of integrated information processing result in first person perspective?" a question of a violation of a basic intuition, it's a question of the explanation being extremely murky with no specific causal mechanism identified.
I'm not sure exactly how you make the distinction between "basic/core" and "regular" (historical popularity maybe?), but those ideas of space and motion are certainly products of intuition.
It's obvious that if you frame something as "intuition vs X", then X will always lose. But the neuromaniac eliminativist perspective is also the product of intuition, intuition isn't a big happy family to be collectively dismissed or embraced.
I'm using "intuitive" the way it is generally used throughout philosophy. Something is intuitive, a noetic "first principle," if we cannot conceive of it being otherwise. 2+2 is intuitively 4. It is intuitive that a straight line cannot also be a curved line, that a triangle cannot have four sides, etc.
There is nothing intuitive about the statement "when lots of information gets processed in a very complex way the result produces first person subjective experience." This is not intuitive in the way 2+2=4 is, so it requires demonstration, showing how the claim follows from first principles or empirical observations based on these same intuitive inference rules.
To say, "well I can't demonstrate it in a way that makes sense, but this is just because your intuition is broken," undermines virtually all truth claims, because now we can no longer feel certain about the principle of non-contradiction, inference rules, mathematics, etc. The work around of claiming "x is true, it just seems to not be because your reason is broken," can be applied equally to any claim, e.g. that we are actually light from the Pleroma trapped in a material prison, that 2+2=7, etc.
That everything is extension and motion is not an intuition. It is not intuitive that "color isn't real," for instance. People don't say, "color isn't real, this is obvious and could not possibly be otherwise." It's rather an inference from atomism/corpuscularism, which itself is created as a solution to the apparent unity of the universe and its equally apparent multiplicity and change (The One and the Many problems).
I didn't realize the bar was set so high, so then all it takes is for someone to claim that they can conceive of something being false, and it ceases to be intuitive? Presumably the eliminativist has already done this, so are the claims they deny then dethroned? Or are they not included in this "we"?
* I'd foolishly argued that we can't know anything.
But I do see a hope. As I see it, there are roughly three things at play:
* perception
* intuition
* analysis
Classical philosophers and neuroscience have claimed that our perception is flawed. We all know that our intuitions are a good start, but should never be relied upon without verification. That leaves analysis, and all the wonderful legacy of arguments to and fro about the power of analysis to overcome the limitations within our perception and our intuition. I'm not even going to attempt to address any of that.
The takeaway for me is that, should a suitable new analysis come to light, it can supplant our (likely faulty) intuition about the feels of consciousness.
I suppose that is exactly what Dennett was attempting to do (and apparently failed at) in his critique of conscious feels and the hard problem.
I am not really sure what you're trying to get at here. What counts as intuitive might be debated, but certain statements like "a line of points cannot be simultaneously continuous and discrete," or "2+2=4," can largely be agreed upon. Are you claiming we lack good warrant for believing these sorts of things?
Eliminativism, in its most extreme form, does violate these sorts of intuitions. This would be the claim that "you don't actually experience anything, see blue, hear sounds, etc." But does anyone actually advocate this? Dennett himself calls this type of eliminativism "ridiculous" in Consciousness Explained.
The problem with the claims of more plausible forms of eliminativism isn't necessarily that they are counterintuitive; it's that they claim that consciousness has been adequately explained when it hasn't been.
Ok, well we might debate what counts as adequate explanation here. But what is not a good response is to say, "yes, it does seem inadequate, but that's only because human reason is ultimately deficient." This essentially amounts to saying "I do not need to offer a convincing explanation or demonstration, because such a thing is not possible, but you should still accept the truth of what I'm saying."
This is like Luther's response to Erasmus. Erasmus says "a God who predestines forces man to sin, and then punishes him for it seems evil."
To which Luther responds: "yes, but it only seems evil because our reason is deficient due to the fall." This is not an explanation though.
Killed two birds with one stone there. :snicker:
There is a difference, it seems to me. Perceptual hallucinations are complex, we construct a model which turns out to be contradicted by further data. Loads of stuff going on, plenty of room for error. But consciousness of consciousness is maximally simple, no? It doesn't specify any particular experience. We might be wrong in perceiving a lion in the grass, it might just be a patch of grass. But we can't be wrong that we have experienced something-or-other, i.e. a world. And to go one step further, when we turn consciousness on itself, in experience of experience, where the subject is the object, there is no gap for a mistake to exist in.
What is "these sorts" referring to here? Eliminativists do not reject 2+2=4 or other mathematical a priori stuff, that sort of thing is not in doubt here. It seems you are bunching some intuitions together into a group, but I don't understand the criteria for membership.
Quoting Count Timothy von Icarus
In my opinion, any eliminativist worth the name would of course advocate this. And why not?
Quoting Count Timothy von Icarus
I don't know that Dennett is an eliminativist, if so I think he is in the closet about it. I've always found him to be strangely diplomatic and "soft-selling" in expressing his views, it makes sense to me that he would disavow what you describe as "extreme". Maybe this partly explains his success, his books do seem to sell.
Good deal! That's another way of saying what I mean by: "Consciousness*1 is the function of brain activity". In math or physics, a Function*2 is a relationship between Input and Output. But a relationship is not a material thing, it's a mental inference. Also, according to Hume, correlation is not proof of causation.
So, simply noting the correlation between low-level "goings-on" and high-level awareness-of-what's-happening is still a leap over the Intuitive Gap. The "Hard Problem" remains: physically, how do you get from neural Inputs (energy) to mental Outputs (awareness)?
Because of that Causal Gap, some have dismissed Consciousness as a "delusion", in the sense that there is nothing physical in the output. However, as you noted, we could say that we get from IN to OUT by intuition*3, in the sense of metaphysical In-sight or Inner-vision. But that's not a material explanation of the steps between Input and Output.
Intuition is not physical vision --- traceable step by step from outer senses to inner sensations --- but a mysterious metaphysical way of knowing what's "going-on" inside the brain, without EEG or MRI. Unfortunately, that still doesn't suit your preference for a "materialistic theory". Do you have any ideas about how to fill the particular-to-holistic Intuition Gap? What's the "rule" for correlating impersonal sensory inputs to personal meaningful outputs? I'm still working on that ellipsis myself. :smile:
*1. Content of Consciousness:
Consciousness, at its simplest, is awareness of internal and external existence. ___Wikipedia
https://en.wikipedia.org/wiki/Consciousness
Note --- Where is Awareness or Meaning or Cognition in the material substrate? Could those functional features exist, potentially, within the Energy that transforms into Mass: E=mc^2? If so, that would provide a Physical, not Material, agency to explain the high-level manifestation of the power of Intuition to "summarize" (from concrete matter to abstract ideas) what can't otherwise be seen.
*2. Function:
A function (f) consists of a set of inputs, a set of outputs, and a rule for assigning each input to exactly one output.
https://www.utrgv.edu/cstem/utrgv-calculus/functions/definition-of-function/index.htm
*3. Intuition:
Often referred to as gut feelings, intuition tends to arise holistically and quickly, without awareness of the underlying mental processing of information. ___Psychology Today
https://www.psychologytoday.com/basics/intuition
I understand this view. But I think it's an oversimplification. On the one hand, given that the brain is itself, it seems it should have no trouble knowing itself. In practice, there are a number of problems with that notion.
1) Firstly, there's strong neurological and behavioral evidence that our access consciousness doesn't have access to everything that goes on in the brain. So, even if it were possible for the brain to observe everything about its own activity, the brain doesn't do that - at least not to the extent that we have conscious access to it.
2) Take a hypothetical brain, and imagine that every neuron of its, say, 1 billion neurons is devoted to some form of 1st-order behavioral control in relation to the environment or the body. Now, imagine that this brain is going to develop the ability to observe itself. Its full state at any given moment is determined by the interactions between its 1 billion neurons, via its, say, 100 billion synapses. That's a large data space. The world is already pretty complex, and its existing 1 billion neurons are all needed just to understand that. So how many more neurons does the brain need to understand its own activity? Even if we assume conservatively a 1:1 relationship - that 1 billion additional neurons are enough to understand the activity of the first 1 billion neurons - now the brain is twice the size. Oh, and also, now the data space that needs to be monitored is twice the size. So the brain needs to double again, and so on, to infinity. (There's a toy calculation of this after this list.)
Well, that's obviously intractable. What is feasible? Instead of observing at such a low level, we capture just some sample of brain activity. This is likely achieved by 1) limiting the scope of which parts of brain activity are observed, and 2) capturing a dimensionally reduced abstraction. The rest has to be inferred, which opens the door to hallucinations.
3) There are also problems with simple connectivity. Imagine a section of the brain that is devoted to some 1st-order body function. In order for some other section of the brain to monitor this first section, there need to be additional connections going out from the first. If we assume naively that there is one brain region devoted to meta-management, then it needs to receive connections from all the brain regions that it cares about, which puts a strong limit on how much data it can collect about the rest of the brain's activity. And again, the rest has to be inferred = hallucinations. Now, the naive assumption of a single 2nd-order data collection center is almost certainly wrong. But some degree of differentiation certainly does occur in the brain, so the problem still exists to the degree that differentiation occurs.
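To make the arithmetic in point 2 concrete, here's a back-of-envelope sketch. The neuron count and monitoring ratios are just the illustrative numbers from above, nothing more. The total is the geometric series B + rB + r^2·B + ..., which only stays finite when each layer of monitoring compresses (r < 1); at a 1:1 ratio it never settles.

```python
# Back-of-envelope version of point 2: if every neuron needs `ratio` monitor
# neurons, and the monitors must themselves be monitored, the total is the
# geometric series B * (1 + r + r^2 + ...), finite only when r < 1.
def total_neurons(base: float, ratio: float, max_rounds: int = 100) -> float:
    total, layer = base, base
    for _ in range(max_rounds):
        layer *= ratio        # monitors needed for the previous layer
        total += layer
        if layer < 1:         # the regress has effectively bottomed out
            break
    return total

BASE = 1e9                    # illustrative neuron count from above
print(f"{total_neurons(BASE, 1.0):.3e}")    # 1:1: keeps growing (capped at 100 rounds here)
print(f"{total_neurons(BASE, 0.001):.3e}")  # 1000x compression: ~1.001e9, tractable
```

Which is just point 2's conclusion restated: the only tractable self-monitor is a heavily compressed one, and compression is exactly what leaves room for hallucination.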
Quoting Gnomon
If I understand you correctly, I think this is the non-reductive thesis - that the whole of consciousness is more than the sum of its parts, and thus that it cannot be fully explained by its parts alone. My apologies if I've misunderstood you, but I'll speak to that point anyway.
As I understand it, the non-reductive thesis about something, paraphrased as "more than the sum of its parts", says that the thing cannot be entirely explained by its parts and their interactions, because it has some additional qualities that are not explained by those parts and/or their interactions. Thus, consciousness being an example of such a thing, consciousness cannot be explained via the existing reductive methods of science.
I'm yet to see an argument that proves the non-reductive thesis - though I probably just haven't read enough.
What I have seen is this:
1) Convincing arguments that consciousness might be more than the sum of its parts. (Note: not arguments that it is.)
2) Lots of people saying in various ways that they cannot conceive of how a reductive explanation could explain consciousness.
3) #2 being used as a logical leap to conclude that consciousness definitely is non-reductive.
Some take #2 to conclude that consciousness isn't even physical in the traditional sense. Others accept that everything is still physical in nature, but instead suggest in one way or another that our science is incomplete - that we need non-reductive ways of theorising about things. Those discussions usually then trail off into meaninglessness - they eliminate the rationalisation mechanisms in science for understanding how first principles lead to bigger things, the particular-to-holistic process that you mentioned. And so the arguments conclude, self-gratifyingly, that consciousness cannot be explained mechanistically. The non-reductivity thesis creates the explanatory gap by refusing to accept explanations.
My approach is to eschew the debates and to just provide such an explanation. I've started a discussion in https://thephilosophyforum.com/discussion/15091/the-meta-management-theory-of-consciousness if you're interested. There I've provided details of just such an explanation. The ellipsis to which you refer.
The problem I see with reductive materialism is really pretty simple. It is that the scientific approach that it assumes is defined entirely in terms of objectivity. It is what I describe as 'objective consciousness'. It is, of course, fantastically successful in an objective sense, but not necessarily in an existential sense. There is a vast scope of issues which are amenable to objective analysis, but the problems of philosophy, which are essentially existential in nature, may not be among them.
This goes back to the founding paradigm of modernity, which is Galilean objectivity and the universal reach of physical laws, combined with Cartesian geometry. That forms the basic paradigm of the materialism you're advocating. But as I explained elsewhere, it is analogous to a two-dimensional description of a three-dimensional shape, in that there is a dimension missing. By assigning reality to what is objectively material, the role of the perceiving subject, which synthesises and combines the information about the objective to generate what we understand as 'reality', is omitted or overlooked. But then, as the only criteria that are deemed acceptable are objective in nature, there is no way to demonstrate what, exactly, has been omitted or left out, which is a hard problem.
If consciousness was a brain process, then I would agree with you, the brain knowing itself would be riddled with opportunities for mistakes, illusions etc. I'm just pretty sure consciousness is not a brain activity.
After centuries of debates on the provenance of Consciousness, I doubt that you will find a slam-dunk argument either way. In most such discussions, the debater tends to end up at his own starting point. Materialism begins from the assumption that Matter is all there is, hence Mind must be a kind of matter. Idealism assumes that Mind is all that exists, so Matter must be a form of Mind. But my non-authoritative hypothesis, as an amateur philosopher, is that both Mind and Matter are forms of primordial Energy/Information (the power to transform). In other words, Consciousness is caused by Causation, not Substance.
What you call "Non-Reductive", I call "Holism" or "Systems Thinking". And your linked thread has a diagram showing a Feedback Loop, which is a major factor in multi-part Systems operation. Self-recursive flows of Information/Energy are the key to novel features & functions of a complex System, that emerge from inter-operation of parts that do not have that never-before-seen characteristic. A common example is Water, an inter-operative system of atomic oxygen & hydrogen, neither of which display the molecular properties of fluidity and wetness. But, working together, those atoms undergo Phase Transitions (transformations) from Gas to Liquid to Solid, due to energy inputs & outputs.
If you are interested in reading more along the lines of non-reductive Holism, I'll suggest two ground-breaking books: A & B below. They will not prove anything empirically, but then Mind is not an empirical topic, it's a philosophical subject. :smile:
Note --- "Emergence" is a dirty word for Reductionist thinkers. They seem to think it means "magic". But it simply refers to physical transformation, such as a new species, with different physical & behavioral features, stemming from the lineage of an older species.
A. Holism and Evolution (1925), by naturalist Jan Smuts, is mostly about how Life (a novel property) emerged from eons of evolutionary transformations of Matter. His technical term "holism" was quickly adopted by New Agers, so a different term, "Systems Theory", was coined by scientists, to avoid the "woo" factor of meditating hippies.
B. I Am a Strange Loop (2007), by Douglas Hofstadter -- cognitive & computer scientist -- is about how feedback loops (self-reference) in a dynamic cyclic structure may eventually produce the novel quality of Self-consciousness. It's a strange, but compelling and multi-disciplinary, exploration of Mind/Matter, by the author of the profound but bizarre Goedel, Escher, Bach. Your own term, "Meta-Management", may be an unintentional reference to a feedback loop.
Embracing Systems Thinking:
A Holistic Approach to Problem Solving
https://www.linkedin.com/pulse/embracing-systems-thinking-holistic-approach-problem-solving-brewton/
I Am a Strange Loop:
One of our greatest philosophers and scientists of the mind asks where the self comes from and how our selves can exist in the minds of others. Can thought arise out of matter?
https://valsec.barnesandnoble.com/w/i-am-a-strange-loop-douglas-r-hofstadter/1100299015?ean=9780465030798
EVOLUTIONARY EMERGENCE due to sequential trans-form-actions
Consciousness is a very different situation. H2O and skyscrapers both have physical properties, and neither has any non-physical properties that present a mystery and need explanation. Even processes like flight, metabolism, and vision can be seen to come from purely physical foundations. Subjective experience cannot. The properties of matter that we know of, and have measured to an amazing degree, do not suggest subjective experience.
The argument for reductionism I hear most often is, just because we haven't figured it out with our sciences yet, doesn't mean we won't. My opinion is the fact that we haven't should not be considered evidence that we will. Nor is it evidence that the things we are aware of because of our sciences are the only things that exist, or the only things involved. The different nature of subjective experience, on the other hand, suggests something different is involved.
I don't think there is anything like proof for either case. However, I do think there are very strong arguments for not assuming the reductionist view is true until decisively proven otherwise. For the following reasons:
I would add that Jaegwon Kim's arguments against the possibility of strong emergence, given current reductionist accounts of physicalism, make it extremely difficult for anything like strong emergence to exist. But, assuming we are conscious, and assuming panpsychism isn't true, I would take this to suggest that even if something like smallism is true, it will nonetheless require some sort of major paradigm shift that allows for some sort of "emergence-like" phenomena to occur to resolve this impasse. That is, something like what Einstein did for physics: reshaping our fundamental conceptions instead of trying to make the world fit into them.
Smallism I think is probably false for this reason, and others.
But, assuming panpsychism isn't true, what other ideas being suggested do?
This is the thing. The thing. It simply isn't needed, until we can assess why. At what point would a being need phenomenal consciousness? It's an accident, surely. Emergence, in whatever way, on the current 'facts' we know.
Quoting Patterner
I think it might present one, in the Nagel sense. I don't quite think it's anything new, generally. Panpsychism as a concept has been around for millennia.
Kastrup's Analytical Idealism is a contender, isn't it? The OP mentions Donald Hoffman, who's on the board of Kastrup's Essentia Foundation, although Hoffman appears to draw the opposite conclusion to the OP.
Quoting Essentia Foundation
-----
Quoting Malcolm Lett
Very poor. Relies on conjecture and tendentious arguments.
Really? Kastrup's arguments, or have I missed something?
That description is given in the other discussion I mentioned:
https://thephilosophyforum.com/discussion/15091/the-meta-management-theory-of-consciousness
I think of this description as being reductive, but then I also think of the explanation of H2O producing the wetness of water as being reductive. So it sounds like it's just a matter of definitional differences, as is often the case. In any case, the theory I present there is grounded in materialism, but yet I am able to offer very clear explanations for a number of phenomenological descriptions of consciousness.
Unless someone can find major holes in my argument there, it makes the case for the need for alternate explanations much weaker.
Quoting Gnomon
Far from unintentional. The theory is based around the need for a feedback loop. The theory very much creates a Strange Loop.
But regardless
Quoting Malcolm Lett
So if you don't understand the criticisms, how do you know there are no major holes?
Let me point to a couple:
Quoting Malcolm Lett
Objection: the argument appeals to an indubitable fact, not a questionable intuition. The explanatory gap you summarily dismiss was the substance of an article published by Joseph Levine in 1983[sup]1[/sup], in which he points out that no amount of knowledge of the physiology and physical characteristics of pain as the firing of C fibers actually amounts to - is equal to, is the same as - the feeling of pain.
The basic point is that knowledge of physical particulars is objective in nature, whereas the experience of pain is clearly subjective and so of a different order to any objective description. This point was elaborated in Chalmers' now-famous Facing Up to the Problem of Consciousness[sup]2[/sup], which your argument does nothing to rebut.
Quoting Malcolm Lett
Objection: the fact of one's own conscious experience is not a data point. It might be a data point to someone else - a demographer or a statistician - but the reality of first-person experience cannot be explained away as a data point.
You might know that when Daniel Dennett published his book Consciousness Explained, it was parodied - not by the popular media, but by peers including John Searle and Galen Strawson - as Consciousness Ignored. I say that's what any eliminative approach must do, regardless of what mechanisms it proposes to explain consciousness.
1. Levine, J. 1983. Materialism and qualia: the explanatory gap. Pacific Philosophical Quarterly, 64: 354-361.
2. Chalmers, Facing Up to the Problem of Consciousness
We do not know that consciousness is a physical characteristic. We do not know how it comes about. Therefore, we cannot reduce it to the properties of its constituents.
Levine acknowledges that his argument is not proof. And Chalmers' view is based on his intuition about whether he can conceive of something or not.
Quoting Patterner
Precisely. There are so many arguments claiming that materialism can never explain consciousness that anyone who proposes a materialistic explanation is summarily dismissed. And yet the fact is that we don't know what consciousness is. So we can't be certain about the correctness of those arguments.
In relation to reductive explanations, @Count Timothy von Icarus earlier commented that there isn't proof either way. I think that's a far better stance than claiming that reductive explanations are definitely false, or that materialism is definitely false.
I'm also not trying to prove that materialism and reductive explanations are absolutely true. But I'm trying to show that a reductive materialistic explanation can go much further in explaining conscious phenomenology than is generally accepted by those who dismiss reductive materialism. I'm certain that there are gaps in my explanation, but I think if you read the full blog article you'll find that there's a lot less remaining than you expect.
It was my mistake to start a conversation about intuition/delusion without the background that my argument was actually based on.
Totally agree. Just adding more complexity to a computational process doesn't mysteriously make consciousness happen. In my blog post I argue that there is a very specific evolutionary need for which consciousness evolved (well, technically, meta-management), and a very specific kind of structure that leads to conscious phenomenology. There is a very valid argument about whether the meta-management processes I describe truly do lead to phenomenal consciousness, but if correct, it offers an explanation of why consciousness emerges.
:up:
Not so. The distinction between the feeling of pain and the objective description of pain is a factual distinction.
We do know what it is. It is the capacity to experience.
"Illusionists deny that experiences have phenomenal properties and focus on explaining why they seem to have them. They typically allow that we are introspectively aware of our sensory states burt argue that this awareness is partial and distorted, leading us to misrepresent the states as having phenomenal properties. Of course, it is essential to this approach that the posited introspective representations are not themselves phenomenally conscious ones. It would be self-defeating to explain illusory phenomenal properties of experience in terms of real phenomenal properties of introspective states. Illusionists may hold that introspection issues directly in dispositions to make phenomenal judgements judgements about the phenomenal character of particular experiences and about phenomenal consciouisness in general. Or they may hold that introspection generates intermediate representations of sensory states, perhaps of a quasi-perceptual knind, which ground our phenomenal judgementts. Whatever the details, they must explain the content of the relevant states in broadly function terms, and the challenge is to provide an account that explains how real and vivid phenomenal consciousness seems. This is the illusion problem. "
Indeed. And the problems with trying to explain how it comes about leads to the idea that maybe it didn't come about at all, but was always there in the first place.
We may have a conceptual disagreement, I'm not sure. I think you are suggesting some kind of phenomenality/proto-consciousness as a precursor to consciousness which isn't full-on consciousness, whereas I don't think such a thing is conceptually distinguishable from full-on consciousness.
There doesn't seem to be any intermediate stage between having an experience and not having one.
Goff and Antony have written about it, and Eric Schwitzgebel I think. The non-vagueness of consciousness.
Trying to understand the terminology. If full-on consciousness can be of not very much experience/very little content, is our consciousness also full-on, but with much more experience/greater content? Or is our consciousness called something other than full-on? I realize this is your term, not one found in books about panpsychism. But I want to understand your thinking.
Quoting bert1
My thought is that there isn't any not having an experience.
Yes, that's my view. Experience is consciousness of something, whether that something is simple and uninteresting, or complex and interesting. In either case it's still experience. The content is different, but the consciousness is no different at all.
Quoting Patterner
Yes, I pretty much agree with you. Just because I can form the idea of an object which doesn't experience anything, doesn't entail that I think there actually are any objects which don't experience anything.
Gotcha.