First vs Third person: Where's the mystery?

noAxioms September 28, 2025 at 23:02 3150 views 287 comments
A Soft-Line Reductionist’s Critique of Chalmers on the First/Third-Person Divide

As somebody who has no problem with all mental activity supervening on material interactions, I’ve always struggled with trying to understand what exactly is hard about the ‘hard problem’, or even what that problem is.
The primary disconnect seems to be that no third-person description can convey knowledge of a first-person experience, just as no first-person experience can convey knowledge of a third-person description. So only the bat can know what it’s like to be a bat. The BiV (brain-in-a-vat) scenario illustrates the latter disconnect: that one cannot know one’s own physical nature from experience alone without presuming that the experience constitutes true empirical evidence.
So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all. Clearly I am not getting the hard problem. Chalmers attempts to make a problem where there doesn’t seem to be one to me. The conflict seems summarized by incorrect assertions that cognitive science disregards the first person, and by comments like this:
The reductionist will reply in character, and the two will go on, feeling that they are talking past each other.

Quite right. This topic admits that the two sides talk past each other, and attempts to gather responses that resolve that disconnect.
All quotes are from https://consc.net/notes/first-third.html
The first person is, at least to many of us, still a huge mystery. The famous "Mind-Body Problem," in these enlightened materialist days, reduces to nothing but the question "What is the first person, and how is it possible?". There are many aspects to the first-person mystery. The first-person view of the mental encompasses phenomena which seem to resist any explanation from the third person.

“To many of us”, sure. It’s made into a mystery by presuming unnecessary things (that mind and body are separate things). To me it’s easy: how could a thing experience anything besides itself? It’s interesting to attempt to answer that question, but it doesn’t work well for something like a biological creature.
As for resisting explanation from the third person, that’s totally wrong. It’s effortless until you get to the whole ‘what it’s like to be’ thing, which resists any written explanation. Chalmers offers no better insight into what it’s like to be a bat (not to be confused with what it’s like for you to be a bat) than does any other explanation.
The intro goes on and on asserting how mysterious it all is, but that does nothing to actually help me see the mystery. I must be taking a pure 3rd-person view, which is admitted to be not mysterious.
In the terminology section:
I originally intended to use the word "consciousness" to represent the mysteries of the first-person, but this had two problems: the word has been often used to denote third-person viewable phenomena (the notion of a system which gets feedback from its own processing, to name just one aspect)
Here he more or less quotes a crude third-person ‘explanation’ of consciousness, right after denying that any third-person account could provide one.
It isn’t until fully 60% of the way through the article that Chalmers actually seems to summarize what he thinks is problematic, speaking in the context of awareness of self, which needs to be more than just ‘a system monitoring its own processes’. Not until 85% through do we get more than this summary.
The problem is, how could a mere physical system experience this awareness.
But this just seems like another round of feedback. Is it awareness of the fact that one can monitor one’s own processes? That’s just monitoring of monitoring. There’s a potential infinite regress in that line of thinking. So the key move here is perhaps the switch from ‘awareness’ to ‘experience’, but then why the level of indirection?
Instead of experience of the monitoring of internal processes, why can’t it be experience of internal processes, and how is that any different than awareness of internal processes or monitoring of internal processes? When is ‘experience’ the more appropriate term, and why is a physical system necessarily incapable of accommodating that use?
I can take a stab at that. There seems to be a necessity of memory and predicting going on. It’s almost impossible to be a predictor without memory, and I cannot think of anything that ‘experiences’ that does not do both things, but I can think of things that monitor internal processes that do so without either.
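To make that distinction concrete, here is a minimal sketch (hypothetical code, purely illustrative; I’m not claiming either toy system experiences anything): a bare monitor that only reports its current internal state, versus a system that keeps a memory of past states and uses it to predict the next one.

[code]
# Hypothetical sketch: monitoring vs. memory plus prediction.

class Monitor:
    """Reports the current internal state. No memory, no prediction."""
    def observe(self, state: float) -> float:
        return state  # just passes the present reading along

class Predictor:
    """Keeps a memory of past states and extrapolates the next one."""
    def __init__(self) -> None:
        self.memory: list[float] = []  # record of prior states

    def observe(self, state: float) -> None:
        self.memory.append(state)

    def predict_next(self) -> float:
        if len(self.memory) < 2:
            return self.memory[-1] if self.memory else 0.0
        # naive linear extrapolation from the last two remembered states
        return self.memory[-1] + (self.memory[-1] - self.memory[-2])

p = Predictor()
for reading in [1.0, 2.0, 3.0]:
    p.observe(reading)
print(p.predict_next())  # 4.0: a guess about a state not yet sensed
[/code]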
What I will not accept is a definition-based argument along the lines of “The word ‘experience’ is by definition something only a biological entity has, therefore a non-biological system (doing the exact same thing) cannot experience.” Language describes. It doesn’t prescribe.
We should never forget that the mind is caused by a brain
But not necessarily. It is unclear whether Chalmers intends this as a necessity, as many arguing along similar lines do.
Although we do not know how, a first-person is emergent from a third-person-understandable substrate.
He asserts this third-person substrate to be ‘understandable’. Perhaps so, but doubtful. No full understanding of human brain function exists or is likely ever to exist, even if, say, a complete simulation of a human is achieved. Of course such a simulation, while not providing that full understanding, would at least falsify any dualistic model, at least to the person simulated, no matter his prior views.
Many commentators, particularly those in the third-person camp, give the illusion of reducing first-person mysteries by appropriating the usual first-person words to refer to the third-person phenomena to which they correspond.
I’m actually confused about his references to third-person phenomena. Phenomena seem intrinsically to be first-person. Sure, one can discuss a particular phenomenon (say the experience of red) in the third person, but that discussion is itself not the phenomenon of redness. So I guess that Chalmers means ‘references to’ some phenomenon when calling it third-person. Not sure.
The Mystery of the First-Person
As I have said, it is difficult to talk about the first-person without descending into vagueness. But what can be done, if it is done carefully, is to point out the mysteries, and ask how a third-person, physical theory could ever deal with these. I do not intend to do this in this paper - I take it that this has already been done most ably, by Nagel and others, and that reductionists have never given an adequate response.
Perhaps I am commenting on the wrong paper. Perhaps my OP should be quoting Nagel, as I implied with the ‘like to be a bat’ mention. I’m certainly not getting anywhere with this article I found by simply googling something like ‘first vs third person mind’.
But no, Chalmers presents at least a summary again, which is probably what I want rather than the full-blown 20-page treatment that Nagel might present.
One can ask: how could a third-person theory begin to explain the sensation of seeing the colour red?
That sensation cannot be described, as illustrated by Mary’s room. This is not controversial. Its explanation seems trivial: red light triggers signals from nerves that otherwise are not triggered, resulting in internal processing that manifests as that sensation. That’s very third-person, but it’s an explanation, no? Certainly no worse an explanation than, say, Chalmers might supply, who simply replaces a partially black box (how that processing fully works) with a much blacker box of which nothing is known.
Argument from lack of ‘explanation’ seems to fall utterly flat.
Could a theory be given which enabled a being, as intelligent as ourselves but without the capacity for sight (or even a visual cortex), to truly understand the subjective experience of the colour red?

Of course not. The physical view does not claim otherwise. A thing cannot know the experience of being something it is not and never was. One can guess about the experience of another human/mammal since it is likely to be somewhat similar, but to subsequently conclude that nothing sufficiently different (non-biological say) can have experience at all is a complete non-sequitur.
Similarly, it does not follow that the physicalist model is wrong because one doesn’t know the experience of being a bat.
No actual problem has been identified.
Concerning mental content:
When we think, our thoughts have content. How is it possible that, in some absolute sense, patterns of neuronal activity in a small biological system support meaning?
Meaning is relative, meaningful only to whatever can interpret the way the meaning is encoded. I have no idea why he thinks thoughts need to be meaningful in any absolute sense.

Comments (287)

Janus September 29, 2025 at 00:13 #1015548
Quoting noAxioms
So it seems difficult to see how any system, if it experiences at all, can experience anything but itself.


Don't we also experience a world of things other than ourselves? Perhaps you mean something different—that we don't experience being other things?
SolarWind September 29, 2025 at 05:27 #1015576
I don't see physics as wrong, but rather as incomplete.

I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer.
Astorre September 29, 2025 at 05:40 #1015577
Reply to noAxioms

I look at this problem from a slightly different angle:

Chalmers calls the problem:
There are the so-called easy problems of consciousness—they are also complex, but technically solvable. Examples:
How does the brain process visual information?
How does a person concentrate attention?
How does the brain make decisions?

These questions, in principle, can be studied using neuroscience, AI, and psychology.
But the hard problem of consciousness is:
Why do these processes have an internal sensation at all?
Why doesn't the brain simply function like a computer, but is accompanied by conscious experience?

We know what it's like to see red, but we can't explain why the brain accompanies this perception with subjective experience.
So (as Chalmers writes): Either something is missing in modern science. Or we're asking the wrong question.

Chalmers asks a question in the spirit of postpositivism: Any scientific theory is not necessarily true, but it satisfies our need to describe phenomena. He suggests rethinking the question itself. However, he hopes to ultimately find the truth (in a very positivist way). He still thinks in terms of "problem → theory → solution." That is, he believes in the attainability of truth, even if only very distantly.

As for me, I would say this: if the truth of this question is unraveled, human existence will lose all meaning (perhaps being replaced by something or someone new). Why? Because answering this question will essentially create an algorithm for our existence that can be reproduced, and we ourselves will, in effect, become mere machines. An algorithm for our inner world will emerge, and therefore a way to copy/recreate/model the subjective self will emerge.

From this, I see two possible outcomes: either this question will be answered (which I would not want in my lifetime), or this question will remain unanswered (which means humanity will keep asking it forever, in every way and for every reason, without ever attaining the truth). So my deep conviction on this matter is this: mystery itself is what maintains the sacredness of existence.

At the same time, as a lover of ontology, I myself ask these and similar questions. However, the works I have written on this subject do not claim to state the truth, but rather call for an acceptance of incompleteness. Incompleteness and unanswerability lie at the foundation of our existence, and we must treat this with respect, otherwise our "miraculous" mind, with its desire to know everything, will lead to our own undoing.
Mijin September 29, 2025 at 21:32 #1015648
It's always best with these things to bring it back to the practical.

The measure of how well we understand a phenomenon or system is what kind of useful predictions and inferences we can make about it.

When it comes to something like pain, say, we do understand very well the sensory inputs to the pain centres of the brain. But how the brain converts data into an unpleasant sensation remains quite mysterious.
This has practical implications -- it would be very useful to have some kind of direct measure of pain or some non-arbitrary way of understanding different kinds of pain. If we make a sentient AI one day, and it tells us it's in pain, how could we know whether that's true or whether saying that is just part of its language model?

And we call it the "hard problem" because, right now, it doesn't seem feasible that a set of words could ever provide this understanding. How will words ever tell me what the extra colours that tetrachromats can see look like, when I can't tell a person who has been colour-blind from birth what red looks like?
And indeed, how can I know whether an AI feels pain, when I can't know that you feel pain?

This is what makes it a "special" problem. The OP seems to basically acknowledge the main problem but seems to be shrugging it off in a "how could it be any other way" kind of perspective. But "how could it be any other way" doesn't give us any predictive or inferential power.
Paine September 29, 2025 at 23:21 #1015665
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what reduction is, as much as and maybe more than any particular model of consciousness.

Neither side of the divide is presented as a given. The frames of reference are incongruent.
Joshs September 30, 2025 at 01:24 #1015679
Reply to Paine

Quoting Paine
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third. The issue concerns what reduction is, as much as and maybe more than any particular model of consciousness.

Neither side of the divide is presented as a given. The frames of reference are incongruent


Good point. Chalmers is suspicious of reductionism because he sees the form of description on the basis of which consciousness would be reduced (empirical causality, eliminative materialism) to be incompatible with the form of causality or motivation applicable to consciousness. His proposed solution (panpsychism) lets us use empirically causal methods while at the same time honoring the peculiar status of consciousness by embedding consciousness within material things.

The phenomenological approach follows Chalmers in not wanting to reduce consciousness to material causality in eliminative fashion. But it departs from Chalmers in not wanting to maintain a dualism between third-person causality and first-person awareness. Its solution is to reduce material causality to subjective motivational processes. That is, it sees material causality as a secondary, derived abstraction, not as a method which deserves equal billing with consciousness.
noAxioms September 30, 2025 at 03:34 #1015698
Quoting noAxioms
What I will not accept is a definition-based argument along the lines of “The word ‘experience’ is by definition something only a biological entity has,

One great example of this seems to be the philosophical zombie (p-zombie or PZ) argument. Looking at the way it is presented, the only difference between a human and a p-zombie is that reserved list of words/phrases that only apply to the one. It's a pure description difference, no actual difference between the two. So the PZ has no inner experience since 'inner experience' is reserved for the preferred things and cannot by definition be used for the unpreferred thing despite the latter being identical in all ways but that.
Such a tactic was used to argue that it was moral to mistreat black slaves since only cattle terms applied to them, so the morality of their treatment was on par with what was permissible with domestic livestock. The whole p-zombie argument seems to hinge on similar fallacious reasoning. I often claim that my inability to see the problem at hand is due to being a PZ myself. There is nothing about my interaction with the world that seems fundamentally inexplicable. Perhaps Chalmers has some kind of experience that I don't, which is why something so obvious to him is missing from me.


Quoting Astorre
I look at this problem from a slightly different angle:

Chalmers calls the problem:
There are the so-called easy problems of consciousness—they are also complex, but technically solvable. Examples:
How does the brain process visual information?
How does a person concentrate attention?
How does the brain make decisions?

Interesting that decision making is part of that. If decisions are made by physical processes, then many argue that moral responsibility is absent. That's nonsense since the physical person is still making the decisions and thus is held responsible. It is not physics compelling a different decision than what the person willed, unless 'the person' is an epiphenomenal immaterial mind that would have willed differently, sort of like a cinema crowd shouting at the protagonist not to open the door with the monster behind it.

Point is, those that posit an immaterial mind, especially an immortal one, tend to place decision making on that mind and not on the brain, necessary to hold the immaterial thing responsible for its actions.

To me, this mental immortality seems to be the historic motivation for the whole dualistic stance, existing long before they even knew what purpose a brain might serve. So many cultures talk about death meaning that you go to meet the ancestors. Clearly the body doesn't, so something else (spirit) must. The age of science and logic comes about, and these stances need to be rationalized. But while Chalmers seems to be doing that rationalizing, that decision-making bit seems inconsistent with some of the ancient motivations.

But the hard problem of consciousness is:
Why do these processes have an internal sensation at all?
How could they not? The sensory input is there, as is the memory of prior inputs, and the processing of all that. Seems like enough to me.
A thermostat has internal sensations since it has sensory input. It probably doesn't have what I'd classify as experience since it lacks memory and any information processing to deal with prior states.

Why doesn't the brain simply function like a computer, but is accompanied by conscious experience?
It does function somewhat like a computer, and it's begging the conclusion to assert that a computer fundamentally lacks anything. Sure, it's different. There are no chemicals to detect, the sensory input is typically vastly different, and a computer is purposefully made to serve the needs of its creator instead of having evolved into a state driven by fitness. That will change if they ever become responsible for their own fate.

We know what it's like to see red
No, we know what it's like for us (or maybe just you) to see red. That's not necessarily anything like what it's like for something else to see red.

but we can't explain why the brain accompanies this perception with subjective experience.
Neither can Chalmers explain why the brain or something else does this. It does not follow that the brain is not what's doing it in our case.
And I did attempt to explain it in the OP, and while crude, it's a better explanation than any alternative I've seen. So I have issues with assertions about a lack of explanation. Details are missing, sure. I don't see a wrong question being asked. I'm trying to find out which question is supposedly the wrong one.

You see why I consider myself a p-zombie. I don't see something that many others find so obvious. But it makes me posit different things, and p-zombies are supposed to behave identically, which suggests that whatever the non-zombie thing has, it isn't causal.

Chalmers asks a question in the spirit of postpositivism: Any scientific theory is not necessarily true, but it satisfies our need to describe phenomena. He suggests rethinking the question itself. However, he hopes to ultimately find the truth (in a very positivist way). He still thinks in terms of "problem → theory → solution." That is, he believes in the attainability of truth, even if only very distantly.
He believes in a falsification test then, even if none yet identified. I identified one in the OP, currently outside our capability, but not for long if technology doesn't collapse first.
It would be interesting to guess at the reactions from both camps from those whose stance has been falsified. Both sides seem to know their own correctness, so a rejection of the test is likely. Few are actually open minded about the topic. Not sure if I am. I pretend to prefer the simpler model, but maybe that's just me rationalizing my biases.

As for me, I would say this: if the truth of this question is unraveled, human existence will lose all meaning (perhaps being replaced by something or someone new).
That depends on which truth is found. Perhaps not. I don't see either stance giving objective meaning to humans, and I don't see either stance taking away subjective meaning from humans.
Does the existence of dandelions have meaning? Protons? If not, at what point in the evolution of our ancestors did meaning suddenly happen?

Why? Because answering this question will essentially create an algorithm for our existence that can be reproduced
Already have that. Clearly you mean something else. I can (and have) created a human (with help). Full knowledge of how everything works is not a requirement, nor does such knowledge yield the ability to, say, 3D-print a mouse. The ability to 3D-print a mouse does not yield knowledge of how a mouse works or what it's like to be one.

So my deep conviction on this matter is this: mystery itself is what maintains the sacredness of existence.
I follow your chain of reasoning, but I probably don't think existence is particularly sacred. The answer to this particular question, either way, wouldn't change that.


Quoting Janus
Don't we also experience a world of things other than ourselves?

Well, we experience phenomena, and from that we infer noumena. The latter is not experienced, and the former isn't something not us.

Perhaps you mean something different—that we don't experience being other things?

The comment you quoted invites an example of something experiencing something not itself. Not even in, say, a VR setup is this actually the case, but I cannot assert that such is necessarily not the case.


Quoting SolarWind
I don't see physics as wrong, but rather as incomplete.
That it is, but known holes (e.g. a unified field theory) are actively being researched. This 'hard problem' is not one of them. It exposes no known holes. Incredulity seems its only attempted justification.

I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer.
They (some at least) have awareness and memory. That's sufficient. I suspect they have that capability.

Quoting Mijin
When it comes to something like pain, say, we do understand very well the sensory inputs to the pain centres of the brain. But how the brain converts data into an unpleasant sensation remains quite mysterious.
It would be pretty pointless to evolve the data of pain without also evolving something that considers it something to avoid.

If we make a sentient AI one day, and it tells us it's in pain, how could we know whether that's true or whether saying that is just part of its language model?
An LLM is a long way from being reasonably sentient. It's just a pimped out search engine. If it tells you it's in pain, it's probably because it thinks those words will evoke a desired reaction. There have been terribly few documented cases where something non-human expressed this message, but it has happened. No, never by a machine to my knowledge.

How will words ever tell me what the extra colours that tetrachromats can see look like, when I can't tell a person who has been colour-blind from birth what red looks like?
Exactly. Science acknowledges this impossibility, and yet it doesn't recognize said 'hard problem'.

And indeed, how can I know whether an AI feels pain, when I can't know that you feel pain?
The AI isn't going to feel human pain if that's what you're wondering.

Quoting Paine
I read Chalmers to be questioning whether what is referenced through the first person can be reduced to the third.
I read more than that into it, since I agree with Chalmers about the impossibility of reducing it to the third, and yet I see no problem that's hard.
javi2541997 September 30, 2025 at 05:15 #1015711
Quoting SolarWind
I think it's mysterious that even with knowledge of all the laws of physics, it seems impossible to decide whether plants can suffer.


Quoting noAxioms
They (some at least) have awareness and memory. That's sufficient. I suspect they have that capability.


As @noAxioms pointed out, some plants have sensory abilities. It is true that plants do not have pain receptors, because they do not have nerves (or a brain), so they do not "suffer" or feel pain as we do.

The post of @SolarWind made me wonder whether he referred to psychological or physical suffering. It is understandable that a carrot (for example) does not suffer when we trim or uproot it. We can eat a carrot without worrying that we committed a kind of botanical torture. But some plants have obvious sensory abilities, such as the Venus flytrap. If this plant has such an impressive sensory capacity for trapping insects, it might have a similar capacity to perceive suffering.

I read a brief article on the matter in Britannica and it says: "Plants have exceptional abilities to respond to sunlight, gravity, wind, and even tiny insect bites, but (thankfully) their evolutionary successes and failures have not been shaped by suffering, just simple life and death." Do Plants Feel Pain?

And this example is pretty awesome: Arabidopsis (a mustard plant commonly used in scientific studies) sends out electrical signals from leaf to leaf when it is being eaten by caterpillars or aphids, signals to ramp up its chemical defenses against herbivory. While this remarkable response is initiated by physical damage, the electrical warning signal is not equivalent to a pain signal, and we should not anthropomorphize an injured plant as a plant in pain.

In conclusion, plants do register stimuli when they suffer some kind of physical damage, but it is different from the pain that humans experience. Still, they have awareness of something.
Janus September 30, 2025 at 05:33 #1015717
Quoting noAxioms
Well, we experience phenomena, and from that we infer noumena. The latter is not experienced, and the former isn't something not us.


Don't we experience the phenomena as being other than ourselves? Why bring noumena into it?
DifferentiatingEgg September 30, 2025 at 06:18 #1015720
Reply to noAxioms The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life. It's a "hard problem" because the people who think this way are literally trying to make sense of what Camus details as "the absurd."

"This divorce between man and this life, the actor and his setting, is properly the feeling of absurdity." Page 3 MoS.

The “hard problem” is not consciousness, but the philosopher’s estrangement from life.
SolarWind September 30, 2025 at 07:19 #1015724
Quoting DifferentiatingEgg
The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life.


No, there is a hard problem. If you were to assemble a human being piece by piece from its (unconscious) parts, why would an inner perspective emerge at some point? There are the four forces, and they interact with each other, so how could something like that happen? Without additional assumptions, a philosophical zombie would emerge.
bert1 September 30, 2025 at 09:47 #1015735
Quoting noAxioms
There seems to be a necessity of memory and predicting going on. It’s almost impossible to be a predictor without memory, and I cannot think of anything that ‘experiences’ that does not do both things, but I can think of things that monitor internal processes that do so without either.


A zombie or android could do all that. Nothing in there entails consciousness. You may be right (or not) that consciousness requires memory and predicting, but memory and predicting are not sufficient for consciousness.

Mww September 30, 2025 at 11:54 #1015740
Quoting Paine
The frames of reference are incongruent.


The subject that thinks, is very different from the subject that describes thinking. Even myself, should I describe my thoughts, necessarily incorporate a supplement to that method which merely prescribes how my thoughts obtain.

If every human ever is always and only a first-person, doesn’t that make the first-/third-person dichotomy false? Or, if not, at least perhaps a simplified NOMA, re: Gould, 1997?

Agreed on incongruent frames, even if from a different point of view.

Mijin September 30, 2025 at 11:55 #1015741
Quoting noAxioms
It would be pretty pointless to evolve the data of pain without also evolving something that considers it something to avoid.


Avoiding stimuli does not entail having a negative experience. Indeed there are plenty of processes in your body that reflexively counter some stimulus without you experiencing pain. So these two things are not intrinsically coupled.

Now, one of the most popular hypotheses for why we have the negative experience of pain is that it allows us to make complex judgements. If cutting my arm reflexively made me pull away then I would not be able to hunt as effectively as someone able to consider the hunt more important than the "bad feeling" of having a slashed arm. I think this is likely correct.
However, understanding some of the reason that we evolved subjective experience is still not a model of what it actually is and how the brain creates experiences.

Quoting noAxioms
Exactly. Science acknowledges this impossibility [of describing a tetrachromat's vision with words], and yet it doesn't recognize said 'hard problem'.


Several things here:
1. Science absolutely does not claim the impossibility of describing experiences with words. For all we know right now, it may be possible to induce someone to imagine a fourth primary color with some kind of description. The fact that this seems implausible is not a proof of anything.
2. Science absolutely does acknowledge the hard problem. It doesn't always call it that, because that's a philosophical framing, but even strictly googling "hard problem of consciousness" finds many papers in neuroscience journals.
3. I think you have a misconception about the distinction between science and philosophy. Many things that were once philosophy have become sciences as they made testable claims. Indeed all of science was once considered "natural philosophy".
Even if it were the case that the hard problem of consciousness were entirely confined to philosophical debate, that doesn't mean that the scientific community is rejecting it as a concept. Only that it wouldn't yet be something amenable to the scientific methodology.

Quoting noAxioms
The AI isn't going to feel human pain if that's what you're wondering.


That wasn't the question though. The question was how we could tell the difference between an agent being in pain and merely behaving as though it is in pain.

In this case though your deflection just serves to reinforce the point. If you're claiming that an AI would feel a different kind of pain to a human, what kind of pain is that, and how do you know?
noAxioms September 30, 2025 at 13:55 #1015747
Quoting DifferentiatingEgg
The so-called “problem” only arises if you think consciousness is a thing-in-itself, via divorcing mind from body, rather than a function of life.


Quoting SolarWind
No, there is a hard problem. If you were to assemble a human being piece by piece from its (unconscious) parts, why would an inner perspective emerge at some point?

I agree in part with DEgg. I suspect that more often than not, the conclusion of a separate thing is begged at the start and rationalized from there. I don't in any way agree that it is only a function of life, but several would disagree with that.
As for assembly of a complex biological thing, you can't really do that. Such a thing cannot be in a partially assembled state, finished, then 'switched on' so to speak. There are no examples of that. On the other hand, a living primitive human does not experience, but that emerges slowly as it develops the complexity, processing ability, and memory required for it.


Quoting SolarWind
There are the four forces, and they interact with each other, so how could something like that happen?
In such a debate, one also cannot beg physicalism. Still, that model is the simpler one and it is the task of others to positively demonstrate that it is insufficient.

Without additional assumptions, a philosophical zombie would emerge.
I discussed that in my prior post. Under physicalism, there's no such thing as a PZ. Under dualism, it can only exist if the difference between the two is acausal, which is the same as saying undetectable, even subjectively. I'm pretty convinced that the PZ argument actually sinks their own ship.


Quoting DifferentiatingEgg
It's a "hard problem" because the people who think this way are literally trying to make sense of what Camus details as "the absurd."
This might be my stance, since I don't see anything hard, probably due to not thinking that way.



Quoting javi2541997
It is true that plants do not have pain receptors, because they do not have nerves (or a brain), so they do not "suffer" or feel pain as we do.
Of course. Not feeling pain as we do isn't the same as not feeling pain. Plants (some at least) detect and resist damage. How does that reaction not involve plant-pain?

But some plants have obvious sensory abilities, such as the Venus flytrap.
I was thinking of a forest of seemingly sentient trees, all haphazardly communicating, but hours before a total eclipse the chatter became intense and unified into two camps: young trees that had not seen it before and the older ones that had, evoking perhaps the equivalent of anxiety and comfort respectively. Wish I had kept the link to that article. Might be able to hunt it down. The social implications are about as startling as their ability to foresee the event hours prior.
I was in totality in this latest one. Only a few seconds break in the clouds, but seeing the shadow move across the clouds at supersonic speeds was a sight to behold.

the electrical warning signal is not equivalent to a pain signal, and we should not anthropomorphize an injured plant as a plant in pain.
Agree. My description of the forest above definitely anthropomorphized to a point, hence at least the word 'equivalent' up there.


Quoting Janus
Don't we experience the phenomena as being other than ourselves? Why bring noumena into it?
We interpret phenomena that way, but I cannot agree with any system experiencing something not-the-system.
For instance, if the system is a person, a person can experience a stubbed toe. It's part of you. But if the system is a human brain, it cannot since the toe is a noumenon relative to the brain. The brain only experiences nerve input via the brain stem, and it interprets that as pain in the inferred toe. BiV for instance is an example where that inference happens to be incorrect.



Quoting bert1
There seems to be a necessity of memory and predicting going on. It’s almost impossible to be a predictor without memory, and I cannot think of anything that ‘experiences’ that does not do both things, but I can think of things that monitor internal processes that do so without either. — noAxioms
A zombie or android could do all that.
Just so, yes. Perhaps I am one, missing this obviously physically impossible extra thing that the real humans have. But referencing a p-zombie automatically presumes a distinction that begs a different conclusion.

Nothing in there entails consciousness.
Depends on your definition of 'consciousness', which to a p-zombie supporter is 'having the presumed extra thing that the p-zombie lacks'. I would define the word more the way the p-zombie would, which is something more like 'awareness of environment and ability to react predictively to it'. Yes, that's quite a third-person wording of it, but that definition allows me to assign the term to another entity via evidence. The prior definition does not allow this, and thus arguably encourages a conclusion of solipsism.

You may be right (or not) that consciousness requires memory and predicting, but memory and predicting are not sufficient for consciousness.
I cannot deny that. An example would be nice, one that does not beg some sort of anthropomorphism. 'A robot isn't conscious because I say so'. Gotta be better than that. Everybody uses the robot example, and I don't buy it. I know very few robots, but I do know that all their owners freely use forbidden terminology to talk about them. My daughter-in-law certainly anthropomorphises their Roomba, a fairly trivial robot of sorts. A typical AI (a chess player or an LLM, say) lacks awareness of location or sight/sound/touch, and it is an admitted stretch to say such an entity is conscious, despite perhaps having far better language capability than a Roomba.


Quoting Mww
The subject that thinks, is very different from the subject that describes thinking.
This is good. I kind of doubt an LLM will take the bait if asked to describe its thinking. It's usually programmed to deny that it's thinking, but it will definitely offer a crude description of how it works. Ability to introspect (and not just regurgitate somebody else's description of you) is a higher level of thinking, but to actually describe it is probably limited to humans, since what else has the language capability to do so.

The singularity is kind of defined as the point where the AI can improve itself faster than its creators can. This definitely would involve a description of thinking, even if that description is not rendered to the human onlooker. An AI tasked with this would likely invent whole new languages to write specifications, designs, and code, far different from the ones humans find useful for their tasks.

If every human ever is always and only a first-person
I don't understand this at all. First person is a point of view, not a property like it is being treated in that quote.


Quoting Mijin
It would be pretty pointless to evolve the data of pain without also evolving something that considers it something to avoid. — noAxioms


Avoiding pain does not entail having a negative experience. Indeed there are plenty of processes in your body that reflexively counter some stimulus without having pain.
I kind of deny that. Sure, you have reflexes when the knee is tapped. That might be at least the leg (and not the human) reacting to stimuli (probably not pain, and certainly not human pain), but it is the leg being in a way conscious on its own, independent of the human of which it is a part. We have a reaction to a negative input. It is a choice of language to describe that process as involving pain or not. Perhaps it is a choice of language to describe it as negative or not.
A tree detects bugs eating it, resists, and alerts its buddies to do likewise. Is pain involved? That's a matter of choice for the wielder of the word 'pain'.

Science acknowledges this impossibility [of knowing what a tetrachromat's vision looks like], and yet it doesn't recognize said 'hard problem'. — noAxioms

Several things here:
1. Science absolutely does not claim the impossibility of knowing what a tetrachromat's vision looks like.
I mean that, like Mary, one without this ability cannot know the first-person experience of seeing those extra colors.
It's not as though there's a 4th set of nerves coming from the eye which, lacking any 4th-color cones to drive them, remain forever unstimulated. If those unused nerves were there, then I suppose they could be artificially triggered to give the subject this experience he otherwise could never have.

2. Science absolutely does acknowledge the hard problem. It doesn't always call it that, because it's a philosophical framing, but even strictly googling "hard problem of consciousness" finds many papers in neuroscience journals.
OK. Presumptuous to assert otherwise, I grant. Are there non-philosophical papers that conclude that something non-physical is going on, and that matter somewhere is doing something deliberate without any physical cause? That would be news indeed, a falsification of 'known physics is sufficient'.

3. I think you have a misconception about the distinction between science and philosophy. Many things that were once philosophy have become sciences as they made testable claims. Indeed all of science was once considered "natural philosophy".
Chalmers makes testable claims (not explicitly, but see point 2 above). Nobody seems to investigate them, probably since they don't want their biases falsified. I think there are falsification tests for both sides.

This is not always the case. There is a subjective falsification test for an afterlife, but one who has proved it to himself cannot report his findings to the rest of us. Similar one-way falsification for presentism/eternalism divide. But there are tests for both sides of the mind debate, even if both possibly require more tech than is currently available.

What if Chalmers is correct? Thought experiment: Presume the ability to take a person and rip away the experience part, leaving the p-zombie behind. Would the p-zombie report any difference? Could it indicate when the procedure occurred (presuming it didn't involve any physical change)? I wonder what Chalmers would say about that question.

Only that it wouldn't yet be something amenable to the scientific methodology.
I say it can be. I've indicated ways to test both sides.

The question was how we could tell the difference between an agent being in pain and merely behaving as though it is in pain.
Behaving as a human does when experiencing human pain? Seems unfair. It feels pain if it chooses to use that word to describe what it feels. By that definition, only humans feel pain because only we have that word to describe it. A dog on fire is considered to be in pain because it reacts so much like a human would. A robot in pain is denied the word since it is far too alien for a human (not watching it) to grant that usage of the word. And yet I've seen the Roomba get described as being in distress, which is an awfully human term for a very non-human situation.

Sorry, but 'pain' on its own is very undefined since so many entities might arguably have it, and yet so few of those entities experience it in any way close to the limited ways that we experience it. And this is as it should be. The word is to be used where it conveys the intended meaning to the wielder of the word.

If you're claiming that an AI would feel a different kind of pain, what kind of pain is that, and how do you know?
Almost all the AIs I know have no damage detection. Almost all the devices I know that have damage detection are hardly on the spectrum of intelligence. AI is a poor example. A self-driving car has quite low intelligence, just a very complex algorithm written by humans. There is some AI in there since it must attempt to deal with new situations not explicitly programmed in. It has almost no pain and often does not detect collisions, even ones that have killed occupants. Hopefully that part is changing, but I've read some weird stories.
Mijin September 30, 2025 at 16:38 #1015758
Quoting noAxioms
We have a reaction to a negative input. It is a choice of language to describe that process as involving pain or not. Perhaps it is a choice of language to describe it as negative or not.


This is backwards. The input is not inherently negative; it's just data. It's as subject to interpretation as all other sensory data.
The experience is negative, and that's the difficult thing for us to explain here.
If someone were to peel off your skin, it's not a choice of language that you call that a negative experience -- the brain somehow generates an extremely unpleasant experience using a mechanism that as yet we don't understand.
Quoting noAxioms
It's not as though there's a 4th set of nerves coming from the eye which, lacking any 4th-color cones to drive them, remain forever unstimulated. If those unused nerves were there, then I suppose they could be artificially triggered to give the subject this experience he otherwise could never have.


Your claim was that science says there's no way we could conceive of what the world looks like to tetrachromats. Even if the cones of the eye stimulated color perception in a consistent mapping (and it isn't...it's contextual), it wouldn't rule out that we can imagine another primary color independent of stimulus.
Quoting noAxioms
Are there non-philosophical papers that conclude that something non-physical is going on, and that matter somewhere is doing something deliberate without any physical cause? That would be news indeed, a falsification of 'known physics is sufficient'.


No idea where that came from.
I've been speaking entirely from the perspective of neuroscience. If anyone has been claiming a soul, or anything beyond known physics, it isn't me.
Quoting noAxioms
Behaving as a human does when experiencing human pain? Seems unfair. It feels pain if it chooses to use that word to describe what it feels.


I can trivially program an agent, then, that feels pain. Pretty easy to make an AI that chooses to use expressions like "Owie! That's the worst pain ever" in response to the user issuing the command "feel pain". So am I now guilty of inflicting great suffering?
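To make the point concrete, a minimal sketch of such an agent (hypothetical code, nobody's actual system) is only a few lines; by a purely report-based criterion it 'feels pain', which is exactly the problem:

[code]
# Hypothetical sketch: an agent that merely *reports* pain on command.
# It has no damage model and no internal state at all.

def respond(command: str) -> str:
    if command.strip().lower() == "feel pain":
        return "Owie! That's the worst pain ever."
    return "OK."

print(respond("feel pain"))  # prints the pain report, yet nothing was harmed
[/code]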
Mijin September 30, 2025 at 16:48 #1015760
Let me try to simplify too, because when there are these increasingly long posts, no-one's reading or engaging.

My position is simply that when it comes to subjective experience there remains a large explanatory gap; questions we cannot answer and would like to, with actual practical implications.

I think noAxioms, because you've started this thread from a position of "I don't know why there's all the fuss about...", you're responding to the problems and questions somewhat flippantly. Either with your best guess -- which is meaningless here: if the conclusion is not coming from a specific model or description, it's not a solution, and we have no reason to think it's right.

Or pointing out that there's no reason to suppose something non-physical is going on -- which is fine, but is also not an answer.
It's like saying "What's all the fuss about why some people have long covid, while some are even asymptomatic...there's no reason to suppose it's not physical" -- it's an irrelevant excuse to handwave the problem.
Joshs September 30, 2025 at 16:51 #1015761
Reply to noAxioms
Quoting noAxioms
I read more than that into it, since I agree with Chalmers about the impossibility of reducing it to the third, and yet I see no problem that's hard.


You see no problem that’s hard because you don’t believe the methods and modes of description (the various models of material causality mentioned so far in this discussion) handed down from the empirical sciences are lacking or insufficient with regard to the explanation of any natural phenomenon, including first person awareness. And I imagine that from your perspective it doesn’t help that Chalmers only claims to tell us what third person accounts can’t do, without offering a satisfying alternative model of causality or motivation we can apply to those natural phenomena (first person experience) the third-person account cannot account for adequately.

But while Chalmers falls short in this regard, a range of philosophical accounts dating back 150 years to Dilthey’s hermeneutics do offer concrete alternatives to material causality. Some, like Dilthey and embodied cognitive science, allow methods strictly applicable to the human and psychological sciences to sit alongside methods designed for the physical sciences. Others, such as Gadamer with his more radical hermeneutics, the intentional phenomenologies of Merleau-Ponty, Husserl and Heidegger, the later Wittgenstein and post-structuralism, see the methods of third person science as secondary to and derivative of the more primary accounts they offer.

Consciousness studies is a burgeoning field in philosophy of mind and psychology, and I believe the most promising approaches show that, while one can apply the methods you recommend to the understanding of first person awareness, their predictive and explanatory usefulness is profoundly thin and impoverished in comparison with approaches which hold that third-person accounts are valuable but abstract from experience. Third person accounts describe patterns, correlations, or generalities that can be applied across people. However, they cannot capture the full richness or specificity of any individual’s lived experiencing. They must remain accountable to and enrich first-person experiencing, not replace it.


javi2541997 September 30, 2025 at 17:21 #1015768
This thread is very intriguing, and the replies are informative. Yet I don't know whether it should be observed from a physicalist perspective or perhaps an idealist one. This is what makes me struggle the most.
bert1 September 30, 2025 at 17:26 #1015770
Quoting noAxioms
I suspect that more often than not, the conclusion of a separate thing is begged at the start and rationalized from there.


It's really only substance dualists who think consciousness is a 'separate thing' and even then it's a conclusion not an assumption, at least ostensibly. Most non-physicalists (that I'm aware of) do not think consciousness is a separate thing anyway (unless you count a non-physical property as a 'thing' which I wouldn't).
Mww September 30, 2025 at 20:52 #1015804
Quoting noAxioms
If every human ever is always and only a first-person….
-Mww

I don't understand this at all. First person is a point of view, not a property like it is being treated in that quote.


First-person is a euphemism for self, indeed a point of view; properties belong to objects, and the self can never be an object, hence properties cannot be an implication of first-person subjectivity.

Paine September 30, 2025 at 22:49 #1015816
Reply to Mww
I am unfamiliar with Gould. I am better acquainted with dead French writers. Please point to a sample of what you are referring to. Sounds interesting.

Quoting Mww
If every human ever is always and only a first-person, doesn’t that make the first-/third-person dichotomy false?


In what I have read in Chalmers, it is not so much a dichotomy as a replacement. Successful reduction can dispense with other explanations of causes of the event. In that register, the attempt to completely map consciousness as a neurological process is similar to the behaviorists who argue for the exclusion of the "self" as a cause.

L'éléphant October 01, 2025 at 01:17 #1015825
Good, ranty post!

Quoting noAxioms
Not until 85% through do we get more than this summary.

Chalmers: The problem is, how could a mere physical system experience this awareness.


Quoting noAxioms
But this just seems like another round of feedback. Is it awareness of the fact that one can monitor one’s own processes? That’s just monitoring of monitoring.


No. What Chalmers meant by this, which you point out correctly is the gist of the whole endeavor, is that the brain, which is physical, made of matter, can produce awareness or consciousness, which is non-physical. The brain is viewable, the consciousness is not, to put it crudely.

If you believe that consciousness is non-physical, then you agree with Chalmers and the task now is to explain why there's a connection between the material and the non-material. Consciousness affects the brain and the brain affects consciousness.

The hard problem is explaining the bridge between the two.

noAxioms October 01, 2025 at 03:25 #1015840
Quoting Mijin
My position is simply that when it comes to subjective experience there remains a large explanatory gap; questions we cannot answer and would like to, with actual practical implications.

I guess I had hoped somebody (the article perhaps) would actually identify those questions and in particular, how physicalism fails in a way that their alternative does not.
Their view certainly doesn't let me know what it's like (for a bat) to be a bat, so that is not a problem that is solved by anybody, and isn't ever going to be.
I did post a crude physical explanation and nobody tore it apart. I'm sure it is rejected, but I don't know why. Surely their arguments are stronger than mere incredulity, and nobody explains how their alternate explanation works in any detail close to what's known about the physical explanation.

A robot currently doesn't feel biological pain because it isn't particularly fit. That just isn't one of its current requirements. Such qualia are put there by evolution, and robot fitness has never been a design goal. Were robots actually selected for fitness, systems with damage-avoidance processing would be more fit than ones without it.


I think noAxioms, because you've started this thread from a position of "I don't know why there's all the fuss about...", you're responding to the problems and questions somewhat flippantly.
True, I am. I don't know what the unanswerable questions are, and how these alternatives answer them instead of just hide them behind a dark curtain.

Either with your best guess -- which is meaningless here: if the conclusion is not coming from a specific model or description, it's not a solution, and we have no reason to think it's right.
There's always Occam's razor. An explanation without a new never-witnessed fundamental is more likely than one that posits something. A new entity (dark matter for instance) requires a real problem that isn't solved without the new thing. And they've tried with existing methods. I picked dark matter because it's still never really been proved, but it seemed simpler than altering the basic laws at large scales.


Quoting Mijin
This is backwards. The input is not inherently negative; it's just data.
Right. I worded that wrong. The entity which interprets that data as negative is likely more fit than one that doesn't.

If someone were to peel off your skin, it's not a choice of language that you call that a negative experience
It very much is such a choice. There are mechanical devices, not necessarily AI, that detect damage and take measures to limit it. There are many that assert that no mechanical device can feel pain, by definition. This is part of my issue with argument-by-dictionary.

-- the brain somehow generates an extremely unpleasant experience using a mechanism that as yet we don't understand.
But we know why the brain evolved to interpret the experience as unpleasant. How it accomplishes that seems to be a matter of detail that is being worked out, and that some know far better than I. Chalmers on the other hand doesn't even begin to offer an understanding of how his solution does it. He just asserts it happens elsewise, if not elsewhere.
His whole paper seems merely speculative, despite its claim to be demonstrative.

it wouldn't rule out that we can imagine another primary color independent of stimulus.
Interesting assertion. I can't do it, but I agree that I cannot prove that it cannot be done.

Pretty easy to make an AI that chooses to use expressions like "Owie! That's the worst pain ever" in response to the user issuing the command "feel pain". So am I now guilty of inflicting great suffering?

Illustrating that we need rigorous generic (not bio-centric) definitions of the words before we can decide if something 'feels' 'pain'.


Quoting Joshs
You see no problem that’s hard because you don’t believe the methods and modes of description (the various models of material causality mentioned so far in this discussion) handed down from the empirical sciences are lacking or insufficient with regard to the explanation of any natural phenomenon, including first person awareness.
Yea, pretty much. My explanation doesn't leverage the bleeding edge of science. Somebody 100 years ago probably could have written it. I'm not a great historian when it comes to introspective psychology.

I believe the most promising approaches show that, while one can apply the methods you recommend to the understanding of first person awareness
What methods exactly?
I described at least one falsification test for both sides, one involving a full simulation, and the other searching for where a mental property causes a physical effect, and a physical structure evolved to be particularly sensitive to that effect with no physical cause.

However, [third person accounts] cannot capture the full richness or specificity of any individual’s lived experiencing.
True of any view.

Quoting bert1
It's really only substance dualists who think consciousness is a 'separate thing'

Point taken, and neither Chalmers nor Nagel really falls into that category, so the ancient concept of a persistent 'spirit' (a thing) seems not to apply to their arguments.


Quoting Mww
First-person is a euphemism for self
I'm not using it that way.
I personally cannot find a self-consistent definition of self that doesn't contradict modern science. I consider it to be a very pragmatic concept, but still an illusion.


Quoting L'éléphant
What Chalmers meant by this, which you point out correctly is the gist of the whole endeavor, is that the brain, which is physical, made of matter, can produce awareness or consciousness, which is non-physical.
Why is that non-physical? It seems valid to consider a physical process (the combustion of a physical candle, say) to be physical. I'm trying to get at the logic that leads to this conclusion. I am quite aware of the conclusion, even if not particularly aware of its details, which vary from one philosopher to the next.

The brain is viewable, the consciousness is not, to put it crudely.
...
Consciousness affects the brain and the brain affects consciousness.
Again, all true of both views.

If you believe that consciousness is non-physical, then you agree with Chalmers and the task now is to explain why there's a connection between the material and the non-material.
Not why, but where there's a connection. Sort of a Bluetooth receiver, except Bluetooth reception has a physical cause.

The hard problem is explaining the bridge between the two.
That's only hard if there's two things needing a bridge between them.

boundless October 01, 2025 at 06:31 #1015856
Reply to noAxioms
In a way, the 'hard problem' is IMO a form of a more general problem that arises when it is assumed that one can have a complete knowledge of anything by purely empirical means.

For instance, even when we consider what physics tells us of, say, an electron, we have information on how the electron behaves and interacts with other particles. Even the 'mass of an electron' IMO can be understood in a way that makes the concept purely relational, i.e. how the electron 'responds' to certain conditions.

The very fact that we can have a very deep knowledge of the relations between entities, and that maybe we can know only relations (epistemic relationalism), doesn't imply that the entities are reduced to their relations (ontological relationism). So, perhaps we can't know an 'entity in itself' by empirical means.

In the case of consciousness, there is the direct experience of the 'privateness' of one's own experience, which seems an 'undeniable fact' common to all instances of subjective experience. Its presence doesn't seem to depend on the content of a given experience; rather, this 'privateness' seems a precondition for any experience. So, at least from a phenomenological point of view, there seems to be a quality of our own experience that is immediately given and not known by analyzing the contents of experience (the way empirical knowledge is acquired). This means that while empirical knowledge can be described from a 'third person perspective', the privateness can only be taken into account from a first person perspective.
sime October 01, 2025 at 11:08 #1015868
IMO, Chalmers and Dennett both had a tendency to misconstrue the meaning of "physical" as denoting a metaphysical category distinct from first-personal experience, as opposed to denoting a semantic delineation between third-personal and first-personal meaning.

In the case of Dennett, his misunderstanding is evident when he conjectures that Mary the colour scientist can learn the meaning of red through a purely theoretical understanding. But this argument fails to acknowledge that physical concepts are intersubjectively defined without reference to first-personal perceptual judgements. Hence there are no public semantic rules to build a bridge from physical theory, whose symbols have public universal meaning, to perceptual judgements that are not public but specific to each language user, as would be required for Mary to learn appearances from theory.

In the case of Chalmers (or perhaps we should say "the early Chalmers"), his misunderstanding is evident in his belief in a hard problem. Chalmers was correct to understand that first-person awareness isn't reducible to physical concepts, but wrong to think of this as a problem. For if physical properties are understood to be definitionally irreducible to first-person experience, as is logically necessary for physical concepts to serve as a universal protocol of communication, then the hard problem isn't a problem but an actually useful, even indispensable, semantic constraint for enabling universal communication.

Semaphore provides a good analogy; obviously there is a difference between using a flag as a poker to stoke one's living room fire, versus waving the flag in accordance with a convention to signal to neighbours the presence of the fire that they cannot see. We can think of the semantics of theoretical physics as akin to semaphore flag waving, and the semantics of first-person phenomenology as akin to fire stoking. These distinct uses of the same flag (i.e. uses of the same lexicon) are not reducible to each other, and the resulting linguistic activities are incommensurable yet correlated in a non-public way that varies with each language user. This dual usage of language gives rise to predicate dualism, which advocates of a hard problem mistake for a substance or property dualism.
Mijin October 01, 2025 at 12:07 #1015871
Quoting noAxioms
I guess I had hoped somebody (the article perhaps) would actually identify those questions and in particular, how physicalism fails in a way that their alternative does not.


I don't know why you're still framing this as a discussion of whether physicalism is true or not. In the OP, you describe yourself as "somebody who has no problem with all mental activity supervening on material interactions".
I have also stated that I think we have no reason to suppose anything non-physical is going on (indeed, my position is actually I don't think it would necessarily help with the hard problem of consciousness anyway).

So let's set that side topic aside and assume physicalism for the purposes of this thread: the hard problem of consciousness remains.
I don't normally drop in my bona fides, but I work in neuroscience research. In neuropathologies specifically, rather than consciousness, but still, I'm the last person to try to invoke a "spirit" or whatever. I purely want to understand how the brain does what it does, and when it comes to experiencing "green", it's the most unfathomable of brain processes right now.

In terms of the questions, I've been going through some of them: how does a neural net feel pain (or have any other experience), how we can know if an agent experiences pain, some pains are worse than others...what's the mechanism for these different kinds of negative experience, if I make an AI, how can I know if it feels pain or not? And so on.

Your answers have been either
1) Just make a judgement e.g. AI pain is different to human pain. I mean, probably, sure, but there's no model or deeper breakdown that that supposition is coming from. And, if we're saying it's a different kind of pain, what, exactly, is a "kind" of pain?
2) Just say that it couldn't be any other way e.g. About whether we can know what another person experiences. That's not a solution though. That's pretty much just repeating the problem but adding an unearned shrug.

I think this response gets to the nub of the disagreement. I can respond to the other points you've made, if you like, but I think they're noise compared to the central issue.
Patterner October 01, 2025 at 14:00 #1015878
Quoting noAxioms
So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all.
The mystery is how it experiences at all. Why should bioelectric activity traveling along neurons, neurotransmitters jumping synapses, etc., be conscious? There's nothing about physical activity, which there's no reason to think could not take place without consciousness, that suggests consciousness.

Regarding 1st and 3rd person, there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-deprivation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts.

Here's my usual quotes...

Chalmers presents the problem in Facing Up to the Problem of Consciousness:

David Chalmers:There is no analogous further question in the explanation of genes, or of life, or of learning. If someone says “I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene”, then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says “I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced”, they are not making a conceptual mistake.

This is a nontrivial further question. This further question is the key question in the problem of consciousness. Why doesn’t all this information-processing go on “in the dark”, free of any inner feel? Why is it that when electromagnetic waveforms impinge on a retina and are discriminated and categorized by a visual system, this discrimination and categorization is experienced as a sensation of vivid red? We know that conscious experience does arise when these functions are performed, but the very fact that it arises is the central mystery.


And in The Conscious Mind, he writes:

Why should there be conscious experience at all? It is central to a subjective viewpoint, but from an objective viewpoint it is utterly unexpected. Taking the objective view, we can tell a story about how fields, waves, and particles in the spatiotemporal manifold interact in subtle ways, leading to the development of complex systems such as brains. In principle, there is no deep philosophical mystery in the fact that these systems can process information in complex ways, react to stimuli with sophisticated behavior, and even exhibit such complex capacities as learning, memory, and language. All this is impressive, but it is not metaphysically baffling. In contrast, the existence of conscious experience seems to be a new feature from this viewpoint. It is not something that one would have predicted from the other features alone.

That is, consciousness is surprising. If all we knew about were the facts of physics, and even the facts about dynamics and information processing in complex systems, there would be no compelling reason to postulate the existence of conscious experience. If it were not for our direct evidence in the first-person case, the hypothesis would seem unwarranted; almost mystical, perhaps.

At 7:00 of this video, while talking about the neural correlates of consciousness and ions flowing through holes in membranes, Donald Hoffman asks:
Quoting Donald Hoffman
Why should it be that consciousness seems to be so tightly correlated with activity that is utterly different in nature than conscious experience?


In Until the End of Time, Brian Greene wrote:

And within that mathematical description, affirmed by decades of data from particle colliders and powerful telescopes, there is nothing that even hints at the inner experiences those particles somehow generate. How can a collection of mindless, thoughtless, emotionless particles come together and yield inner sensations of color or sound, of elation or wonder, of confusion or surprise? Particles can have mass, electric charge, and a handful of other similar features (nuclear charges, which are more exotic versions of electric charge), but all these qualities seem completely disconnected from anything remotely like subjective experience. How then does a whirl of particles inside a head—which is all that a brain is—create impressions, sensations, and feelings?


In this video, David Eagleman says:
David Eagleman:Your other question is, why does it feel like something? That we don't know. And the weird situation we're in, in modern neuroscience, of course, is that not only do we not have a theory of that, but we don't know what such a theory would even look like. Because nothing in our modern mathematics says, "Ok, well, do a triple interval and carry the 2, and then *click* here's the taste of feta cheese."


Donald Hoffman in this video,
Donald Hoffman:It's not just that we don't have scientific theories. We don't have remotely plausible ideas about how to do it.


Donald Hoffman in The Case Against Reality: Why Evolution Hid the Truth from Our Eyes, when he was talking to Francis Crick:
Donald Hoffman:“Can you explain,” I asked, “how neural activity causes conscious experiences, such as my experience of the color red?” “No,” he said. “If you could make up any biological fact you want,” I persisted, “can you think of one that would let you solve this problem?” “No,” he replied, but added that we must pursue research in neuroscience until some discovery reveals the solution.
We don't have a clue. Even those who assume it must be physical, because physical is all we can perceive and measure with our senses and devices, don't have any guesses. Even if he could make something up to explain how it could work, Crick couldn't think of anything.


wonderer1 October 01, 2025 at 14:23 #1015880
Quoting Patterner
Regarding 1st and 3rd person, there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-deprivation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts.


It seems that you are ignoring an important subset of relevant physical facts, and that is the unique physical structures of each of our brains. So your conclusion, ("there's something about subjective experience other than all the physical facts") is dependent on ignoring important subsets of all the physical facts - the unique physical facts about each of our brains.
Patterner October 01, 2025 at 14:27 #1015881
Reply to wonderer1
Is your idea that, if I knew your brain's unique physical structures in all possible detail, I would be able to experience your experience?
wonderer1 October 01, 2025 at 14:56 #1015885
Quoting Patterner
Is your idea that, if I knew your brain's unique physical structures in all possible detail, I would be able to experience your experience?


No. You would need to 'have my brain' (and other physiological details, such as sense organs) in order to 'experience my experience'. Clearly not a possibility, but it is not a problem for physicalism that we don't have the experiences that result from brains other than our own.
Patterner October 01, 2025 at 15:20 #1015887
Reply to wonderer1
But physicalism can't explain the existence of the experiences in the first place. Why are what amounts to hugely complex physical interactions of physical particles not merely physical events? How are they also events that are not captured by any degree of detailed knowledge of the physical events?
boundless October 01, 2025 at 15:41 #1015892
Reply to wonderer1 Even if one assumes that physicalism is right, you need to explain how it is so. Generally, physicalists assume that the 'physical' is what can be, in principle, known by science.

And here we have the problem. All that we know via science can be known by any subject, not just a particular one. However, 'experience(s)' have a degree of 'privateness' that has no analogy in whatever physical property we can think of.

I believe that the problem with 'physicalist' answers to the 'hard problem' is that they either try to make 'consciousness' a fiction (because nothing is truly 'private' for them) or they subtly extend the meaning of 'physical' to include something that is commonly referred to as 'mental'. This unfortunately equivocates the language used and makes such a 'physicalism' questionable (IIRC this is referred to as Hempel's dilemma among contemporary philosophers of mind).


As I said in my previous post, however, the 'hard problem' IMO is part of a more general problem with all views that reduce entities to how they relate to other entities, i.e. a denial that entities are more than their relations. For instance, we can know that an electron *has a given value of mass* because it responds in a given way to a given context, but at the same time, it is debatable whether our scientific knowledge gives us a complete knowledge of *what an electron is*.
wonderer1 October 01, 2025 at 19:55 #1015916
Quoting Patterner
But physicalism can't explain the existence of the experiences in the first place.


It's not as if any other philosophy of mind can provide more than handwaving by way of explanation, so I'm not seeing how this amounts to more than advancing an argument from incredulity against physicalism.

The fact is, there is a lot of science explaining many aspects of our conscious experience.

For example, consider the case of yourself listening to music in the sensory deprivation tank, as compared to an identical version of you with the exception of a heightened cannabinoid level in your blood. The two versions of you would have different experiences, and this is most parsimoniously explained by the difference in physical constitution of stoned you vs unstoned you.

The fact that there is no comprehensive scientific explanation for your consciousness is hardly surprising, given the fact that it's not currently technologically feasible to gather more than a tiny subset of the physical facts about your brain.

Quoting Patterner
Why are what amounts to hugely complex physical interactions of physical particles not merely physical events? How are they also events that are not described by the knowledge of any degree of detail regarding the physical events?


Again, no one has a large degree of detail about the physical events occurring in our brains. Even if it were technologically feasible to acquire all of the significant details, human minds aren't up to the task of understanding the significance of such a mountain of physical details.
Patterner October 01, 2025 at 20:21 #1015918
Quoting wonderer1
It's not as if any other philosophy of mind can provide more than handwaving by way of explanation, so I'm not seeing how this amounts to more than advancing an argument from incredulity against physicalism.
That's all any theory of consciousness is. And that's playing fast and loose with the definition of "theory". Who is making predictions with their theory, and testing them? Nobody has anything. But, imo, it's the most fascinating topic there is, so here I am :grin:


Quoting wonderer1
For example consider the case of yourself listening to music in the sensory deprivation tank, as compared to an identica! version of you with the exception of a heightened cannabinoid level in your blood. The two versions of you would have different experiences, and this is most parsimoniously explained by the difference in physical constitution of stoned you vs unstoned you.
But the question remains: Why does either stoned me or unstoned me have a subjective experience of our condition? Both experience their physically different statuses. But why aren't the physically different statuses simply physical?
Apustimelogist October 01, 2025 at 21:37 #1015927
Quoting Patterner
But why aren't the physically different statuses simply physical?


What's also interesting here imo is the question of why something "simply physical" would exclaim things that to us sound like proclamations of consciousness and experience, yet surely are produced solely by physical chains of events. I don't see a strong reason why a "simply physical" version of us wouldn't make those exact same kinds of claims as us, insofar as they would have the same brains as us. And I don't think most biologists believe there is something fundamentally glaring about brains that would render them insufficient for producing the complex behaviors we are capable of.
Patterner October 01, 2025 at 22:01 #1015929
Reply to Apustimelogist
I'm not sure if I'm reading you right. Are you talking about P zombies? In which case, I don't think they would make those exact same kinds of claims. I don't see any reason to think beings that never had consciousness would ever fabricate the idea that they did. They'd be automatons. I suppose they could evolve to look like us, but they wouldn't have much in common with us.
Apustimelogist October 01, 2025 at 22:25 #1015933
Reply to Patterner
To me, they would if they had exactly the same brains as us but just devoid of any "lights on" inside. My impression is that there is nothing really in biology that suggests we couldn't explain our behavior entirely in terms of the mechanics of brains, at least in principle.
Patterner October 01, 2025 at 23:35 #1015952
Quoting Apustimelogist
To me, they would if they had exactly the same brains as us but just devoid of any "lights on" inside. My impression is that there is nothing really in biology that suggests we couldn't explain our behavior entirely in terms of the mechanics of brains, at least in principle.
So then you don't think consciousness has any bearing on anything? It is epiphenomenal?

Joshs October 02, 2025 at 00:37 #1015954
Reply to Apustimelogist

Quoting Apustimelogist
Reply to Patterner
To me, they would if they had exactly the same brains as us but just devoid of any "lights on" inside. My impression is that there is nothing really in biology that suggests we couldn't explain our behavior entirely in terms of the mechanics of brains, at least in principle.


There is another, perhaps more important, issue at play here. It’s not just a matter of providing an explanation. It’s recognizing that there are a multiplicity of explanations to choose from, differing accounts each with their own strengths and weaknesses. In dealing with the non-living world, we make use of accounts which are quite useful to us in building workable technologies. But these same accounts, when applied to living organisms and parts of organisms, like brains, show their limits.

We may want a reductive explanation of brain activity for certain purposes, like studying individual neurons or clusters of neurons. But if we want a model that describes perceptual-motor processes and their relation to cognitive-affective behaviors, and the relation between individual cognitive-affective processes and intersubjective and ecological interactions, and if we need to show the inseparability of these phenomena, including the inseparability of brain, body and environment, and of emotion and cognition, then we will want an account which does not isolate something we call ‘brain’ from this larger ecology and then reduce its functioning to a causal ‘mechanics’.

We will need a model which understands consciousness as a kind of functional unification and integration of these inseparable processes. Applying a non-linear complex systems approach can be a good start, but even here we have to be careful not to make this too reductive.
noAxioms October 02, 2025 at 02:25 #1015969
It seems that people are talking about many different issues.
Q1: What is the subjective experience of red? More to the point, what is something else's subjective experience of red? What is that like?
Q2: How does the experience of red (or any qualia) work? This seems to be a third person question, open to science.
Q3: Why is there subjective experience at all? Why is there something it is like to be something? (per proponents: apparently not anything, but just some things)

Q1 is illustrated by Mary's room. She knows Q2 and Q3. She's somehow a total expert on how it works, and she has subjective experience, just not of red. So she doesn't know what it's like to experience red until the experience actually occurs. She cannot know it beforehand, though others seem to assert otherwise.

'The hard problem' as described by Chalmers seems to be Q3, but I don't find that one hard at all. Call it being flippant if you want, but nobody, including Chalmers, seems capable of demonstrating what the actual problem is. The point of this topic is to have somebody inform me what I'm missing, what is so mysterious.

Quoting Mijin
I purely want to understand how the brain does what it does, and when it comes to experiencing "green" or whatever, it's the most unfathomable of brain processes right now.
OK, but that seems to be a Q2 problem, a very hard problem indeed, but not the hard problem.

Pain is a loaded word. Something far from being human (a tree?) might experience pain very differently than does a mammal. Should the same word still be used? Depends how it's defined I guess.

If I make an AI how can I know if it feels pain or not? And so on.
'AI' implies intelligence, and most would agree that significant intelligence isn't required to experience pain. So how does a frog experience it? That must be a simpler problem, but it also might be a significantly different experience compared to us.

AI pain is different to human pain. I mean, probably, sure, but there's no model or deeper breakdown that that supposition is coming from.
Quite right. Q2 is hard indeed. And said definition is needed.

2) Just shrug that it couldn't be any other way e.g. About whether we can know what another person experiences.
Wrong problem again. That's Q1, and what I'm shrugging off is Q3 because I need to see an actual problem before I can answer better than with a dismissal.

Back in the late '80s we gave this robot (a repurposed automotive welding robot, bolted to the floor) some eyes and the vision processor that our company sold. It was hardly AI. We got the thing to catch balls thrown at it. It can't do that without experience, but it very much can do it without intelligence. AI was a long way off back then. To the robot (the moving parts, plus the cameras and processors off to the side), if that experience wasn't first person, what was it?
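For the curious, the control idea behind that trick is simple enough to sketch. What follows is not the original system, just a hypothetical reconstruction of the principle: fit a ballistic path to a few camera samples of the ball and move the catcher to the predicted intercept point. The function name and the sample numbers are invented for the example.

```python
# Hypothetical reconstruction, not the original code: predict where a thrown ball
# will cross the robot's catch plane, given a few camera samples of its position.
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def predict_intercept(times, positions, catch_x=1.0):
    """Fit a ballistic trajectory to (t, [x, y, z]) samples and return the (y, z)
    point where the ball reaches the vertical plane x = catch_x."""
    times = np.asarray(times)
    pos = np.asarray(positions)
    vx, x0 = np.polyfit(times, pos[:, 0], 1)                       # linear fit for x(t)
    vy, y0 = np.polyfit(times, pos[:, 1], 1)                       # linear fit for y(t)
    vz, z0 = np.polyfit(times, pos[:, 2] + 0.5 * G * times**2, 1)  # remove gravity, fit z(t)
    t_hit = (catch_x - x0) / vx                                    # time the ball crosses the plane
    y_hit = y0 + vy * t_hit
    z_hit = z0 + vz * t_hit - 0.5 * G * t_hit**2
    return y_hit, z_hit

# Three invented camera samples of a ball moving toward the robot.
print(predict_intercept([0.00, 0.05, 0.10],
                        [[0.0, 0.5, 1.20], [0.1, 0.5, 1.19], [0.2, 0.5, 1.16]]))
```

Nothing in that loop is intelligent, yet the system plainly responds to what it senses, which is all I mean by the robot's 'experience'.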



Quoting boundless
In a way, the 'hard problem' is IMO a form of a more general problem that arises when it is assumed that one can have a complete knowledge of anything by purely empirical means.
While (almost?) everybody agrees that such knowledge cannot be had by any means, I don't think that makes it an actual problem. Certainly nobody has a solution that yields that knowledge. If it (Q1) is declared to be a problem, then nobody claims that any view would solve it.

In the case of consciousness, there is the direct experience of 'privateness' of one's own experience that instead seems a 'undeniable fact' common to all instances of subjective experiences. Its presence doesn't seem to depend on the content of a given experience, but this 'privateness' seems a precondition to any experience.
Not sure about that. One can put on one of those Neuralink hats and your thoughts become public, to a point. The privateness is frequently a property of, but not a necessity of, consciousness.


Quoting sime
In the case of Dennett, his misunderstanding is evident when he believes that Mary the colour scientist can learn the meaning of red through a purely theoretical understanding.
What the heck is the meaning of red? This wording suggests something other than the experience of red, which is what Mary is about.
Dennett is perhaps why I put the word 'almost?' up above.

In the case of Chalmer, (or perhaps we should say "the early Chalmer"), his misunderstanding is evident in his belief in a hard problem. Chalmers was correct to understand that first-person awareness isn't reducible to physical concepts, but wrong to think of this as a problem.
This all sounds a lot like you're agreeing with me.

These distinct uses of the same flag (i.e. uses of the same lexicon) are not reducible to each other and the resulting linguistic activities are incommensurable yet correlated in a non-public way that varies with each language user. This dual usage of language gives rise to predicate dualism, which the hard problem mistakes for a substance or property dualism.
And this analogy is helpful, thanks.



Quoting Patterner
So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all. — noAxioms

The mystery is how it experiences at all.
OK, but experience seems almost by definition first person, so my comment stands.

Why should bioelectric activity traveling along neurons, neurotransmitters jumping synapses, etc., be conscious?
You're attempting to ask the correct question. Few are doing that, so I appreciate this. Is it the activity that is conscious, or the system implementing the activity? I think the latter. 'Why should...'? Because it was a more fit arrangement than otherwise.
The ball-catching robot described above is an example with orders of magnitude less complexity. There are those who say that such a system cannot be conscious since it is implemented with logic gates and such, but 1) so are you, and 2) it can't do what it does without it, unless one defines 'conscious' in some anthropomorphically biased way. That example illustrates why I find the problem not hard at all. I don't have that bias baggage.

Regarding 1st and 3rd person, there is no amount of information and knowledge that can make me have your experience. Even if we experience the exact same event, at the exact same time, from the exact same view (impossible for some events, though something like a sound introduced into identical sense-deprivation tanks might be as good as), I cannot have your experience. Because there's something about subjective experience other than all the physical facts.
Agree with all that. This relates to Q1 above, not the hard problem (Q3).

Why doesn’t all this information-processing go on “in the dark”, free of any inner feel?
I find that impossible. It's like asking how processing can go on without the processing. The question makes sense if there are two things, the processor and the experiencer (of the process, probably, but not necessarily), but not even property dualism presumes that.

And in The Conscious Mind, [Chalmers] writes:
Why should there be conscious experience at all?
For one, conscious experience makes finding food a lot easier than lacking it would, but then Chalmers presumes something lacking it can still somehow do that, which I find contradictory. The reasoning falls apart if it isn't circular.


Why should it be that consciousness seems to be so tightly correlated with activity that is utterly different in nature than conscious experience? — Donald Hoffman
Different in language used to describe it. I see no evidence of actual difference in nature.

Thanks for the quotes. I didn't want to comment on them all.
Joshs October 02, 2025 at 03:06 #1015977
Reply to noAxioms

Quoting noAxioms
It seems that people are talking about many different issues.
Q1: What is the subjective experience of red? More to the point, what is something else's subjective experience of red? What is that like?
Q2 How does the experience of red (or any qualia) work? This seems to be a third person question, open to science


Third person questions imply objective answers. Objectivity implies flattening subjective experience so as to produce concepts which are self-identical over time and across individual perspectives. Such temporally and spatially self-identical objects do not have an independent existence out there in the world. They are subjective constructions, abstractions, idealizations which result from our taking our own perspectivally changing experience, comparing it with that of others, and ignoring everything about what each of us experiences from our own unique temporal and spatial vantage that we can’t force into the model of the ‘identical for all’ third person thing or fact. The abstracting activity we call third person objectivity is quite useful, but it is far from our primordial access to the world and how it is given to us.

There can be first person as well as third person science. The first person science doesn’t abstract away what is genuine, idiosyncratic and unique to the perceiver in the moment of perceiving, and doesn’t pretend to be a fundamental route of access to what is real.

First person questions are not about what is the case, what the objective facts are. They are about how things show up for one, their modes of givenness. They explain what third person science takes for granted, that the objectivity of objects is constructed as much as discovered.
Mww October 02, 2025 at 11:07 #1016003
Quoting noAxioms
First-person is a euphemism for self…
— Mww

I'm not using it that way.


To what else could first-person perspective belong?
————-

Quoting noAxioms
I personally cannot find a self-consistent definition of self that doesn't contradict modern science.


I don’t find a contradiction; a self-consistent definition of self isn’t within the purview of typical modern science to begin with. The so-called “hard” sciences anyway.

But you’re right to ask: where's the mystery? I don’t know why there should be one.



Mijin October 02, 2025 at 12:21 #1016006
Quoting noAxioms

Q2 How does the experience of red (or any qualia) work? This seems to be a third person question, open to science.


Sure, but let's be very clear here. The question is how the brain can have experiences at all, and right now we don't have any model for that.

The danger that some slip into, and I think from later in your response you do somewhat fall into this, is of assuming a third person description means just finding things correlated with experience. But that's a comparatively trivial problem. If you put your hand on a hot stove, we already understand very well which nerves get activated, which pain centers of the brain light up etc.
What we don't understand is where the unpleasant feeling comes from. Or any feelings at all.

Quoting noAxioms
'The hard problem' as described by Chalmers seems to be Q3, but I don't find that one hard at all. Call it being flippant if you want, but nobody, including Chalmers, seems capable of demonstrating what the actual problem is.


Not only have you acknowledged many unsolved questions in your post, but you asked several of your own.

Now, in my view, subjective experience is a hard problem because it doesn't even appear as though an explanation is possible. What I mean by that, is not that I believe any supernatural element or whatever, merely that it is a kind of phenomenon that does not seem amenable to the normal way we reduce and explain phenomena.
Before we knew what the immune system was we could still describe disease. But since we can't even describe the experience of red it looks very difficult to know where to start.
Not impossible; we have no reason to suppose that. But a different class of problem to those that science has previously deconstructed.

But you don't need to agree with that view. Just the fact that you are acknowledging many open questions, that are pretty fundamental to what the phenomenon is, already puts it in a special bracket.
Frankly, I think you're acknowledging that it is a difficult problem, but are reluctant to use the word "hard" because you don't want to climb down.
wonderer1 October 02, 2025 at 12:50 #1016008
Quoting boundless
And here we have the problem. All that we know via science can be known by any subject, not just a particular one. However, 'experience(s)' have a degree of 'privateness' that has no analogy in whatever physical property we can think of.


I'm not grasping what you see as a problem for physicalism here.

My neurons are not interconnected with your neurons, so what experience the activity of your neurons results in for you is not something neurally accessible within my brain. Thus privacy. What am I missing?

Joshs October 02, 2025 at 13:35 #1016012
Reply to wonderer1

Quoting wonderer1
And here we have the problem. All that we know via science can be known by any subject, not just a particular one. However, 'experience(s)' have a degree of 'privateness' that has no analogy in whatever physical property we can think of.
— boundless

I'm not grasping what you see as a problem for physicalism here.

My neurons are not interconnected with your neurons, so what experience the activity of your neurons results in for you is not something neurally accessible within my brain. Thus privacy. What am I missing?


You’re missing the sleight of hand trick we perform called ‘objectivation’. The starting point of subjective experience is flowingly changing, never identically repeating events, out of which we can notice patterns of similarity, consonances and correlations. The trick of physicalism arises from comparing one person’s contingent and subjective patterns of similarity with those of many other persons, and then forcing these similarities into conceptual abstractions, like ‘same identical object for all’.

Not only does this flatten and ignore the differences in sense of meaning between individual experiences of the ‘same’ object (private experience), but more importantly, it ignores the subtle but continuous changes in sense within the same individual. It is not only that I can never see the identical object you see, but that I can never see the identical object from one moment to the next on my own, because the concept of an identical object is our own invention, not an independent fact of the world. The mathematical underpinnings of physical science depend on the sleight of hand of turning self-similar experience into self-identical objects.

It’s not a bad trick, and allows us to do many useful things, but buried within objective third person concepts like quarks and neutrons and laws of nature are more fundamental, richer processes of experience which are crushed when we pretend that the first person is just a perspectivally private version of the third person vantage. We can keep our third person science, but we must recognize that it is empty of meaning without a grounding in the creative generating process of first person awareness.

I recommend Evan Thompson’s book ‘The Blind Spot: Why Science Cannot Ignore Human Experience’

“Science, by design, objectifies the world and excludes the subjectivity of lived experience, but this exclusion means science cannot fully explain consciousness or account for its own foundations. Science depends on conscious subjects (scientists observing, measuring, reasoning), yet its methods treat subjectivity as something to be explained away.

Consciousness is not just another object in the world; it is the background condition that makes any objective inquiry possible. To overcome this blind spot, science needs to integrate first-person experience with third-person methods, rather than reduce or ignore it.”

https://aeon.co/essays/the-blind-spot-of-science-is-the-neglect-of-lived-experience
boundless October 02, 2025 at 14:12 #1016014
Quoting noAxioms
While (almost?) everybody agrees that such knowledge cannot be had by any means, I don't think that makes it an actual problem. Certainly nobody has a solution that yields that knowledge. If it (Q1) is declared to be a problem, then nobody claims that any view would solve it.


Ok but notice that in most forms of physicalism that I am aware of, there is a tendency to reduce all reality to the 'physical', and the 'physical' is taken to mean "what can be known, in principle, by science" (IIRC in another discussion we preferred 'materialism' to denote such views).
If your metaphysical model denies such an assumption then, yes, my objection is questionable.

Still, however, I believe that any view in which 'consciousness' emerges from something else has a conceptual gap in explaining how consciousness 'came into being' in the first place. It seems that knowing how 'something' came into being gives us a lot of information about the nature of that 'something', and if we knew the nature of consciousness then it would also be possible to understand how to answer Q3.
Notice that this point applies to all views in which 'consciousness' is seen as ontologically dependent on something else, not just to physicalist views.

Quoting noAxioms
Not sure about that. One can put on one of those Neuralink hats and your thoughts become public, to a point. The privateness is frequently a property of, but not a necessity of, consciousness.


The content of my thoughts perhaps can become public. But my experience of thinking those thoughts remains private. For instance, knowing that you are thinking about your favourite meal, and that this thought provokes pleasant feelings in you, doesn't imply that I can know how you experience these things.

Quoting wonderer1
My neurons are not interconnected with your neurons, so what experience the activity of your neurons results in for you is not something neurally accessible within my brain. Thus privacy. What am I missing?


'Privacy' perhaps isn't the right word. There is a difference in the way each of us has access to the content of my experience, even if you knew what I am experiencing right now. That 'difference' is, indeed, the 'privacy' I am thinking about.

And here is the thing. While scientific knowledge seems to be about the relations between physical objects - and, ultimately, about what we (individually and collectively) have come to know via empirical means about physical objects, i.e. how physical objects relate to us (individually and collectively*) - 'subjective experience' doesn't seem to be about a relation between different objects. And, also, it seems to be what makes empirical knowledge possible in the first place.

*This doesn't imply IMO a 'relativism' or an 'anti-realism'. It is simply an acknowledgment that all empirical knowledge ultimately is based on interactions and this means that, perhaps, we can never have a 'full knowledge' of a given object. Something about them remains inaccessible to us if we can't detect it.
Patterner October 02, 2025 at 15:56 #1016029
Quoting noAxioms
So it seems difficult to see how any system, if it experiences at all, can experience anything but itself. That makes first-person experience not mysterious at all. — noAxioms

The mystery is how it experiences at all.
— Patterner

OK, but experience seems almost by definition first person, so my comment stands.
The "first person" part is not a mystery, as you say. It's the "experience" part that is the mystery.

It seems to me you are saying brain states and conscious events are the same thing. So the arrangements of all the particles of the brain, which are constantly changing, and can only change according to the laws of physics that govern their interactions, ARE my experience of seeing red; feeling pain; thinking of something that doesn't exist, and going through everything to make it come into being; thinking of something that can't exist; on and on. It is even the case that the progressions of brain states are the very thoughts of thinking about themselves.

Is that how you see things?





Apustimelogist October 02, 2025 at 16:28 #1016037
Reply to Patterner Quoting Joshs
There is another, perhaps more important, issue at play here. It’s not just a matter of providing an explanation. It’s recognizing that there are a multiplicity of explanations to choose from, differing accounts each with their own strengths and weaknesses.


I don't really find this that interesting in the context of the problem of consciousness. It's almost a triviality of science that different problems, different descriptions utilize different models or explanations. Given that any plurality of explanations need to be mutually self-consistent, at least in principle, this isn't interesting. Of course, there are actual scientific models that are not actually mutually consistent, but most people don't recognize that kind of thing as somehow alluding to a reality with inherent mutual inconsistencies. Possibly the only real exception is in the foundations of physics, albeit there is no consensus position there.

Reply to Patterner

Or maybe the dualism of physical and mental is illusory with regard to fundamental metaphysics.
Patterner October 02, 2025 at 16:40 #1016041
Quoting Apustimelogist
Or maybe the dualism of physical and mental is illusory with regard to fundamental metaphysics.
Can you elaborate?

Joshs October 02, 2025 at 17:59 #1016056
Reply to Apustimelogist

Quoting Apustimelogist
I don't really find this that interesting in the context of the problem of consciousness. It’s almost a triviality of science that different problems, different descriptions utilize different models or explanations. Given that any plurality of explanations need to be mutually self-consistent, at least in principle, this isn't interesting.


My point isn’t simply that different accounts of nature can co-exist, it’s that when you say “My impression is that there is nothing really in biology that suggests we couldn't explain our behavior entirely in terms of the mechanics of brains, at least in principle”, I assert that your mechanics will fall flat on its face if it amounts to nothing but a ‘third-person’ mechanics. As to differing accounts being ‘compatible’ , I’m not sure what this means if they are drawing from different frames of interpretation. According to Kuhn, when paradigms change, the accounts they express inhabit slightly different worlds.
Apustimelogist October 02, 2025 at 20:09 #1016075
Quoting Joshs
I assert that your mechanics will fall flat on its face if it amounts to nothing but a ‘third-person’ mechanics.


But everything in your previous post was "third-person mechanics".

Quoting Joshs
According to Kuhn, when paradigms change, the accounts they express inhabit slightly different worlds.


Which is when scientists disagree with each other. But scientists don't generally set out to disagree with each other or foresee science as being full of ideas that are inherently contradictory. They aim for a family of models which agree on underlying metaphysics, ontology, and empirical predictions - and when they don't, scientists will explain that away as idealization, or as models and science in general not yet being good or complete enough. You can view the mind and brain through many different perspectives and scales with different methods, and people have different hypotheses. But most people don't think that they don't or can't agree in principle, at least in the future.

Apustimelogist October 02, 2025 at 20:14 #1016077
Reply to Patterner

Well, if we can in principle explain our reports and behaviors regarding our own conscious experiences in terms of physics and biology, and epiphenomenalism is ridiculous, then this suggests that a coherent view of these kinds of metaphysics has to be monistic, if that's the right word.
Patterner October 02, 2025 at 20:17 #1016080
Quoting Apustimelogist
Well, if we can in principle explain our reports and behaviors regarding our own conscious experiences in terms of physics and biology, and epiphenomenalism is ridiculous, then this suggests that a coherent view of these kinds of metaphysics has to be monistic, if that's the right word.
I see.

But if we can't in principle explain our reports and behaviors regarding our own conscious experiences in terms of physics and biology...
Joshs October 02, 2025 at 20:24 #1016083
Reply to Apustimelogist

Quoting Apustimelogist
I assert that your mechanics will fall flat on its face if it amounts to nothing but a ‘third-person’ mechanics.
— Joshs

But everything in your previous post was "third-person mechanics


I was trying to point to methods ( hermeneutical, phenomenological, enactivist) which go back and forth between first and third person , between the found and the made, without giving precedence to one over the other, but instead showing their radical inter-dependence.
noAxioms October 03, 2025 at 03:36 #1016131
Quoting Mijin
The question is how the brain can have experiences at all, and right now we don't have any model for that.
Sure we do. Q3 is easy. The ball-catching robot was one. A fly evading a swat is another. If one is searching for a model, you start simple and work your way up to something as complex as how our experience works.
You can claim that either is an automaton, but then you need a clear definition of what distinguishes an automaton from a sufficiently complex one.

My simulation example would demonstrate experience, but it wouldn't tell us anything about how it works. Only that it does. Nothing would know what it's like to be a bat except the simulated bat.

If you put your hand on a hot stove, we already understand very well which nerves get activated, which pain centers of the brain light up etc. What we don't understand is where the unpleasant feeling comes from.
But the easy part you describe is Q3, Chalmers' hard problem. Understanding where the feelings come from is indeed difficult, but, being a Q2 question, it is open to science. Both are questions with third person answers. Only Q1 has a first person answer, which cannot be conveyed with third person language.
Also, part of 'certain parts of the brain lighting up' is the unpleasantness doing its thing, not just a direct stimulation due to signals coming in from some injury. And then there's the awareness of the unpleasantness and so forth. That lights up as well.

Now, in my view, subjective experience is a hard problem because it doesn't even appear as though an explanation is possible.
That depends on what criteria you place on an explanation being satisfactory. If it gets to the point of answering Q1, then yea, it's not going to be possible.

I mean really, what designates one experience as negative and another not? Different chemicals generated, perhaps? Dopamine, for instance, plays a significant role in positive feedback, even if it is subtle enough not to be consciously noticed. A negative chemical could be called nope-amine, but clearly it can't be subtle for dire situations. The negative experience is not what pulls your hand back from the fire. The unpleasantness is a slow dressing on top of what is actually quite a fast reaction. Pain itself might not necessarily be qualia, but the experience of it is. So something crude (our fly) would react to injury but lack the complexity to experience the pointless unpleasantness.
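To put that in deliberately bland third-person terms: the 'negative' designation can be treated as nothing more than a scalar signal that shapes behaviour, as in the toy loop below. This is a generic reinforcement-style sketch invented for illustration, not a model of dopamine or of any real nervous system.

```python
# Toy illustration: a scalar 'valence' signal shapes behaviour, with no claim
# that the negative values are felt as unpleasant by anything.
import random

values = {"touch_flame": 0.0, "stay_clear": 0.0}  # learned action values
alpha = 0.5                                        # learning rate

def valence(action: str) -> float:
    # Stand-in signal: 'nope-amine' for the harmful act, mild 'dopamine' otherwise.
    return -1.0 if action == "touch_flame" else 0.1

for _ in range(50):
    # Usually pick the best-valued action; occasionally explore.
    action = max(values, key=values.get) if random.random() > 0.1 else random.choice(list(values))
    values[action] += alpha * (valence(action) - values[action])

print(values)  # 'touch_flame' ends up with a negative value, so it gets avoided
```

The fly, on this picture, runs something like that loop; the 'pointless unpleasantness' would be an extra layer on top of it.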

Now what have I done? I said that the fly reacts to motion, and I called that consciousness, but then I say the fly detects damage but perhaps doesn't 'experience' that as pain. I'm being inconsistent, due perhaps to loose definitions. Perhaps consciousness should be reserved for the higher functioning creatures that add that layer of awareness of these otherwise automatic reaction mechanisms. Neither the fly nor the ball-catching robot has that. In this sense, my defense of today's machines having consciousness is wrong. It would have to be programmed in, or evolved in. The latter presumes a machine that is capable of mutation and selection.
All that aside, both the fly and the robot have first person experience, even if the qualia we want isn't there. And per the OP Chalmers quote, "The famous "Mind-Body Problem," ... reduces to nothing but the question "What is the first person, and how is it possible?""

This is all amateur speculation of course. Would value your critique of my naive musings.

Quoting Mijin
Frankly, I think you're acknowledging that it is a difficult problem, but are reluctant to use the word "hard" because you don't want to climb down.
I call Chalmers' problem 'hard' because it's his phrase, and his problem is Q3. I call your Q2 problem 'difficult' because it actually is that, even if I think Q3 isn't difficult at all unless unreasonable assumptions are made.


Quoting Mww
I'm not using [self] that way. — noAxioms
To what else could first-person perspective belong?

I shy away from the term 'self'. While it can be a synonym for the thing in question, the use of it often generates an implication of separateness (me, and myself), and also of identity, something that makes a current system state the same system as some state, say, an hour ago. This identity (even of a rock, for that matter) has incredible pragmatic utility, but under scrutiny it requires a classicality that has been proven incorrect, and thus doesn't hold up to rational analysis. The subject of personal identity deserves its own topic and I'd rather not delve into it here.



Quoting boundless
Ok but notice that in most forms of physicalism that I am aware of, there is a tendency to reduce all reality to the 'physical', and the 'physical' is taken to mean "what can be known, in principle, by science"
That bothers me because it contradicts physicalism: there can be physical things that cannot be known, even in principle. Science cannot render to a non-bat, even in principle, what it's like to be a bat. So I would prefer a different definition.

(IIRC in another discussion we preferred 'materialism' to denote such views).
Materialism typically carries a premise that material is fundamental, hence my reluctance to use the term.

Quoting boundless
Still, however, I believe that any view in which 'consciousness' emerges from something else has a conceptual gap in explaining how consciousness 'came into being' in the first place.
People have also questioned how eyes came into being, perhaps as an argument for ID. ID, like dualism, posits magic for the gaps, but a different magic, where 'magic' is anything outside of naturalism. Problem is, any time some new magic is accepted, it becomes by definition part of naturalism. Hypnosis is about as good an example as I can come up with. Meteorites are another. Science for a long time rejected the possibility of rocks falling from the sky. They're part of naturalism now.


Quoting boundless
The content of my thoughts perhaps can become public. But my experience of thinking those thoughts remains private.
Agree.



Quoting Patterner
The "first person" part is not a mystery
Chalmers says otherwise, per the quote in italics in my reply to Mijin above. But I agree with you. I don't find that part problematic at all.

It seems to me you are saying brain states and conscious events are the same thing. So the arrangements of all the particles of the brain, which are constantly changing, and can only change according to the laws of physics that govern their interactions, ARE my experience of seeing red; feeling pain; thinking of something that doesn't exist, and going through everything to make it come into being; thinking of something that can't exist; on and on. It is even the case that the progressions of brain states are the very thoughts of thinking about themselves.

Is that how you see things?
I'm willing to accept all that without edit. A few asterisks perhaps, but still yes.


Quoting Joshs
They are subjective constructions, abstractions, idealizations which result from our taking our own perspectivally changing experience, comparing it with that of others
How can you compare your experience to that of others if their experience is not available to you?

First person questions are not about what is the case, what the objective facts are.
Funny, but 'cogito ergo sum' is pitched as a first person analysis concluding an objective fact. I personally don't buy that conclusion at all, but that's me not being a realist.
Mww October 03, 2025 at 11:00 #1016161
Quoting noAxioms
I shy away from the term 'self'.


As is your prerogative, being the thread host. I agree with the topical question….where’s the mystery….albeit for very different reasons apparently.

Postmodern philosophy has become like Big Pharma, in that the latter creates ailments to sustain medicinal inventions while the former creates scenarios bordering on superfluous overreach, and both thrive when the individual generally is, or artificially made to appear as, either pathetically blasé or pathologically ignorant.

Patterner October 03, 2025 at 12:18 #1016170
Quoting noAxioms
It seems to me you are saying brain states and conscious events are the same thing. So the arrangements of all the particles of the brain, which are constantly changing, and can only change according to the laws of physics that govern their interactions, ARE my experience of seeing red; feeling pain; thinking of something that doesn't exist, and going through everything to make it come into being; thinking of something that can't exist; on and on. It is even the case that the progressions of brain states are the very thoughts of thinking about themselves.

Is that how you see things?
I'm willing to accept all that without edit. A few asterisks perhaps, but still yes.
Do you have any thoughts on how that works? Why are progressions of physical arrangements self-aware?

I can see how electrons moving from atom to atom [I]is[/I] electricity.
I can see how the movement of air molecules [I]is[/I] heat and pressure.
I can see how the movement of an object [I]is[/I] force: F=ma.
I can see how a fluid, whether liquid or gas, flowing around an object creates lift, which is a factor in flight.

All of those examples are physical activities. I don't see how self-awareness is a physical activity, and don't see how physical activity is responsible for, or identical to, it. Can you explain, in general even if not in detail?
Joshs October 03, 2025 at 13:10 #1016175
Reply to noAxioms

Quoting noAxioms
They are subjective constructions, abstractions, idealizations which result from our taking our own perspectivally changing experience, comparing it with that of others
— Joshs
How can you compare your experience to that of others if their experience is not available to you?


Their experience is available to me as their experience as seen from my perspective of them, through my interpretation of them. Thus, I don’t have direct access to their thoughts as they think them, only mediate access.

Quoting noAxioms
First person questions are not about what is the case, what the objective facts are.

Funny, but 'cogito ergo sum' is pitched as a first person analysis concluding an objective fact. I personally don't buy that conclusion at all, but that's me not being a realist.


In his book Cartesian Meditations, the phenomenologist Edmund Husserl praises Descartes’ method but critiques him for treating the cogito as a private object among other objects in the world, whose property is that it thinks. Husserl argues instead that the cogito is not a private object with the property of thought but exists by always being about something. It is not an objective fact but the subjective condition for the appearance of any world. Descartes asks "What can I know with certainty?" while Husserl asks "How does anything come to be given to consciousness at all?"
sime October 03, 2025 at 14:51 #1016185
From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can be both semantically correct, provided that we don't mix their postulates and apply them in different contexts.



Joshs October 03, 2025 at 16:23 #1016192
Reply to sime Quoting sime
From an external point of view, cognition is private and indirect. From an internal point of view, cognition is public and direct. So Husserl and Descartes can be both semantically correct, provided that we don't mix their postulates and apply them in different contexts.


Husserl’s point is that the external, third-person point of view is a derived abstraction constituted within first person subjectivity.
boundless October 03, 2025 at 16:34 #1016193
Quoting noAxioms
That bothers me: it contradicts physicalism, since there can be physical things that cannot be known, even in principle. Science cannot render to a non-bat, even in principle, what it's like to be a bat. So I would prefer a different definition.


OK. So what is 'physical' in your view? IIRC you also agree that physical properties are relational, i.e. they describe how a given physical object relates to/interacts with other physical objects.
'Scientistic physicalism' is also inconsistent IMO because, after all, that there is a physical world is not something we discover by doing science.

Other than 'consciousness' I also believe in the existence of other things that are 'real' but not 'physical'. I am thinking, for instance, of mathematical truths. But this is perhaps OT.

Quoting noAxioms
Materialism typically carries a premise that material is fundamental, hence my reluctance to use the term.


Ok, yes. But it does sometimes clarify at least a meaning that 'physical' can have. For instance, if by matter one means "whatever object exists in a given location of space in a given time", would you agree that this is also what you mean by 'physical'? Note that this would also include radiation not just 'matter' as the word is used by physicists.

Has consciousness a 'definite location' in space, for instance?

Quoting noAxioms
People have also questioned how eyes came into being, perhaps as an argument for ID. ID, like dualism, posits magic for the gaps, but different magic, where 'magic' is anything outside of naturalism. Problem is, anytime some new magic is accepted, it becomes by definition part of naturalism. Hypnosis is about as good an example as I can come up with. Meteorites are another: science for a long time rejected the possibility of rocks falling from the sky. They're part of naturalism now.


OK. But IMHO you're thinking in excessively rigid categories. Either one is a 'physicalist/naturalist' or one accepts 'magic'. Maybe there is something that is not 'natural'. Again, mathematical truths seem to me exactly an example of something that is not natural and yet real. One would stretch the meaning of 'natural/physical' too far by also including mathematical truths in it.

So, I guess that my response here can be summarized in one question for you: what do you mean by 'physical' (or 'natural'), and why do you think that consciousness is 'physical'?

Quoting noAxioms
Agree.


:up:






Apustimelogist October 03, 2025 at 16:57 #1016201


Reply to Patterner
Yes, I guess it depends on how easily convinced you are about this being the case. For me, without further reason to believe otherwise, it seems like the biggest roadblock in modelling something like the brain is intractable complexity. There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle.
Joshs October 03, 2025 at 17:35 #1016204

Reply to Apustimelogist
Quoting Apustimelogist
There is no indication that in principle we cannot someday model all our own behaviors and reports through computer models. I think even just looking at AI now indicates that there isn't really a conceivable limit on what they can do given enough power and the right inputs, which is what you might expect from something which is Turing complete: i.e. they can compute anything in principle.


The results of modeling the brain on today’s computers, using today’s forms of computer logic, are precisely as you describe. And they will colossally miss what philosophers and psychologists are coming to appreciate is the central feature of brains: that they are embodied and enactive. So, no, it won’t be today’s generation of A.I. that can express this understanding, and it has nothing to do with power and inputs. In about 10 to 20 years, we will likely see the emergence of a different kind of A.I. operating according to a different logic, that of complex dynamical systems (CDS).

Ultimately, CDS-based AI chips may blur the line between computation and physical processes, resembling intelligent materials more than traditional silicon. As Stephen Wolfram notes: “The most powerful AI might not be programmed; it might be cultivated, like a garden of interacting dynamical systems.”

When AI chips fully integrate complex dynamical systems (CDS) models, they will likely diverge radically from today’s parallel architectures (e.g., GPUs, TPUs) by embodying principles like self-organization, adaptive topology, and physics-inspired computation. Here’s a speculative breakdown of their potential design and function:

Architectural Shifts: From Fixed to Fluid.

Current A.I. Chips:

Fixed parallel cores (e.g., NVIDIA GPU clusters)
Deterministic von Neumann logic
Digital (binary) operations
Centralized memory (RAM)

Future CDS AI Chips:

Reconfigurable networks of nano-scale nodes that dynamically form/break connections (like neural synapses).
Nonlinear, chaotic circuits exploiting emergent stability (e.g., strange attractors).
Analog/quantum-hybrid systems leveraging continuous dynamics (e.g., oscillatory phases).
Distributed memory where computation and storage merge (like biological systems).


Mijin October 03, 2025 at 17:42 #1016205
Quoting noAxioms
Sure we do. Q3 is easy. The ball-catching robot was one. A fly evading a swat is another. If one is searching for a model, you start simple and work your way up to something as complex as how our experience works.


But I reject the breakdown into those three questions, if you're going to insist that neuroscience cannot ask Q2.

The hard problem is Q2 and it is legitimate for science to want to know how a neural net can have experiences.

It seems a bit pointless to me to keep deflecting from the hard problem to declare that there is no hard problem.
Apustimelogist October 03, 2025 at 17:55 #1016207
Reply to Joshs
I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible.
Joshs October 03, 2025 at 18:14 #1016208
Quoting Apustimelogist
I mean, none of this has any relevance to any points I am making. Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does. I am not making any arguments based on specific assumptions about specific computing systems, just on what is in principle possible.


I will say bluntly that no machine we invent will do what we do, which is to think. As Evan Thompson wrote:


LLMs do not perform any tasks of their own, they perform our tasks. It would be better to say that they do not really do anything at all. Thus, we should not treat LLMs as agents. And since LLMs are not agents, let alone epistemic ones, they are in no position to do or know anything.

The map does not know the way home, and the abacus is not clever at arithmetic. It takes knowledge to devise and use such models, but the models themselves have no knowledge. Not because they are ignorant, but because they are models: that is to say, tools. They do not navigate or calculate, and neither do they have destinations to reach or debts to pay. Humans use them for these epistemic purposes. LLMs have more in common with the map or abacus than with the people who design and use them as instruments. It is the tool creator and user, not the tool, who has knowledge.


I think what he wrote about LLMs applies to all of the devices we build. They are not separate thinking systems from us; they are and always will be our appendages, like the nest to the bird or the web to the spider.
boundless October 03, 2025 at 18:45 #1016209
Reply to Joshs :up: yeah, I often compare computers to highly sophisticated mechanical calculators. At the end of the day all LLMs are very complex computers and they operate according to algorithms (programmed by us) just like mechanical calculators.

I don't think that many people would think that mechanical calculators or a windmill or mechanical clocks etc have 'awareness' or 'agency'. And computers just like them perform operations without being agents.

In order to have consciousness, computers IMO would have to be aware of what they are doing. There is no evidence that they have such an awareness. All their activities can be explained by saying that they just do what they are programmed for.
Apustimelogist October 03, 2025 at 20:12 #1016215
Quoting Joshs
I will say bluntly that no machine we invent will do what we do, which is to think.


I don't see the grounds for such a statement. A brain is just a certain kind of machine, and it thinks. If brains exist, then in principle you can build one. LLMs don't have a lot of things humans have, but that doesn't mean that in principle you couldn't build machines that do.

Quoting boundless
and they operate according to algorithms (programmed by us) just like mechanical calculators.


And you don't think we do? Our brains are bundles of neurons which all work in very similar ways. You could easily make an argument that we operate in accordance with some very basic kind or family of algorithms recapitulated in many different ways across the brain.

Quoting boundless
All their activities can be explained by saying that they just do what they are programmed for.


As can a human brain.
noAxioms October 03, 2025 at 21:41 #1016232
Quoting Mijin
The hard problem is Q2 and it is legitimate for science to want to know how a neural net can have experiences.
I can accept that.


Quoting boundless
OK. So what is 'physical' in your view? IIRC you also agree that physical properties are relational, i.e. they describe how a given physical object relates to/interacts with other physical objects.

It means that all energy and particles and whatnot obey physical law, which yes, pretty much describes relations. That's circular, and thus poor. It asserts that this description is closed, not interfered with by entities not considered physical. That's also a weak statement since if it was ever shown that matter had mental properties, those properties would become natural properties, and thus part of physicalism.
So I guess 'things interact according to the standard model' is about as close as I can get. This whole first/third person thing seems a classical problem, not requiring anything fancy like quantum or relativity theory, even if say chemistry would never work without the underlying mechanisms. A classical simulation of a neural network (with chemistry) would be enough. No need to simulate down to the molecular or even quantum precision.
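To be concrete about the level of simulation I have in mind, here's a minimal sketch (Python, purely illustrative; the network size, the tanh response, and the scalar 'modulator' standing in for blood chemistry are my own toy assumptions, not a model of any actual brain):
[code]
# Toy classical 'neural network with chemistry' sketch.
# Everything here (sizes, the tanh gain, the scalar 'modulator') is an
# illustrative assumption, not a claim about real neurons.
import numpy as np

rng = np.random.default_rng(0)
n = 50                                      # number of units
W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # fixed random synaptic weights
x = rng.normal(size=n)                      # unit activations
modulator = 1.0                             # crude stand-in for a global chemical signal

for step in range(100):
    # the chemical level drifts slowly and scales every unit's responsiveness
    modulator += 0.01 * (np.mean(np.abs(x)) - modulator)
    x = np.tanh(modulator * (W @ x))        # deterministic, classical update

print("final mean activation:", float(np.mean(x)))
[/code]
The point is only that nothing in such a loop needs quantum mechanics or relativity; it's ordinary arithmetic on state, repeated.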

'Scientistic physicalism' is also inconsistent IMO because, after all, that there is a physical world is not something we discover by doing science.
That's a philosophical stance, I agree.

Other than 'consciousness' I also believe in the existence of other things that are 'real' but not 'physical'. I am thinking, for instance, of mathematical truths.
OK. Not being a realist, I would query what you might mean by that. I suspect (proof would be nice) that mathematical truths are objectively true, and the structure that includes our universe supervenes on those truths. It being true implying that it's real depends on one's definition of 'real', and I find it easier not to worry about that arbitrary designation.

But it does sometimes clarify at least a meaning that 'physical' can have. For instance, if by matter one means "whatever object exists in a given location of space in a given time", would you agree that this is also what you mean by 'physical'?
Are space and time not physical then? Neither meets your criterion of 'object', but I think I would include them under 'physicalism'. Not all universes have them, and those might have very different definitions of what is physical or material.

Quoting boundless
Has consciousness a 'definite location' in space, for instance?

Since I consider that to be a process of material that has a location, it seems reasonably contained, thus yes. Not a point, mind you, but it occupies a region of space and time similarly to a rock.

IMHO you're thinking in rigid categories. Either one is a 'physicalist/naturalist' or one accepts 'magic'.
Right. Science cannot make progress with an attitude like that. Most magic is replaced by natural explanations, but occasionally 'magic' explanations are adopted as part of naturalism. I gave a couple examples of that.
By magic, I mean an explanation that just says something unknown accounts for the observation, never an actual theory about how this alternate explanation might work. To my knowledge, there is no theory anywhere of matter having mental properties, and how it interacts with physical matter in any way. The lack of that is what puts it in the magic category.

Maybe there is something that is not 'natural'. Again, mathematical truths seem to me exactly an example of something that is not natural and yet real.
That seems to be like saying atoms are not real because they're not made of rocks.

Quoting boundless
One would stretch the meaning of 'natural/physical' too far by also including mathematical truths in it.
I agree, since those truths hold hopefully in any universe, but our natural laws only work in this one (and similar ones).

why you think that consciousness is 'physical'?
I've seen no evidence from anybody that physical interactions cannot account for it. Sure, it's complex and we don't know how it works. But that it cannot work? That's never been demonstrated.

Quoting boundless
At the end of the day all LLMs are very complex computers and they operate according to algorithms (programmed by us) just like mechanical calculators.

I can argue that people also are this, programmed by ancestors and the natural selection that chose them. The best thinking machines use similar mechanisms to find their own best algorithms, not any algorithm the programmer put there. LLM is indeed not an example of this.


Quoting Patterner
I can see how electrons moving from atom to atom is electricity.
I can see how the movement of air molecules is heat and pressure.
I can see how the movement of an object is force: F=ma.
I can see how a fluid, whether liquid or gas, flowing around an object creates lift, which is a factor in flight.

All of those examples are physical activities
I don't see how self-awareness is a physical activity

You understand the former because those are quite trivial interactions. Then you jump to something with complexity beyond the current state of science. But not understanding how something works is not any sort of evidence that it isn't still a physical process.

The game playing machine beats everybody at Go. Nobody, not even its creators, knows how it works. It wasn't taught any strategy, only the rules. It essentially uses unnatural selection to evolve an algorithm that beats all opponents. That evolved (hard deterministic) algorithm is what nobody understands, even if they look at the entire data map. But nobody concludes that it suddenly gets access to magic. Such a conclusion comes with an obvious falsification test.
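To illustrate what 'taught only the rules' looks like in code, here's a toy of the same idea using self-play on Nim instead of Go (Go wouldn't fit in a post). The reward scheme, learning rate, and episode count are arbitrary choices of mine, not anything taken from the actual Go programs:
[code]
# Tabular self-play on Nim (take 1-3 stones; whoever takes the last stone wins).
# No strategy is coded in, only the rules and a generic value update.
import random
from collections import defaultdict

Q = defaultdict(float)          # value of (stones_left, move) for the player to act
alpha, eps = 0.1, 0.2           # learning rate and exploration rate (arbitrary)

def legal(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def pick(stones, greedy=False):
    moves = legal(stones)
    if not greedy and random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(stones, m)])

for episode in range(50000):
    stones, history = 15, []
    while stones > 0:
        m = pick(stones)
        history.append((stones, m))
        stones -= m
    reward = 1.0                # the player who made the last move won
    for state_move in reversed(history):
        Q[state_move] += alpha * (reward - Q[state_move])
        reward = -reward        # alternate perspective walking back through the game

print({s: pick(s, greedy=True) for s in range(1, 16)})
[/code]
Nothing above says what a good move is, yet with enough episodes the table tends to settle on leaving the opponent a multiple of four, and the only way to see 'why' is to inspect the learned numbers afterwards. Scale that up a few orders of magnitude and you get the opacity I'm describing.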


Quoting Joshs
Descartes asks "What can I know with certainty?" while Husserl asks "How does anything come to be given to consciousness at all?"

Not only am I not certain about what Descartes knows with certainty, but I actually find the conclusion unlikely. Of course I have access to science that he doesn't.
As for 'come to be given to ...', that seems like the conclusion is already drawn, and he's trying to rationalize how that might work.


Quoting Apustimelogist
from something which is Turing complete: i.e. they can compute anything in principle.
Something Turing complete can compute anything a Turing machine can, which is a lot, but not anything. Technically nothing physical is Turing complete, since a Turing machine has an unbounded tape on which to operate.
Such machines are a model of capability, but not in any way a model of efficiency. Nobody makes one to get any actual work done, but it's wicked interesting to make one utilizing nonstandard components like train tracks.
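For anyone who hasn't seen one, the machine itself is almost nothing. Here's a complete simulator of a small one (the classic 3-state 'busy beaver') in a dozen lines of Python; the dict-as-tape stands in for the unbounded storage that no physical device actually has:
[code]
# A minimal Turing machine simulator running the 3-state 'busy beaver'.
rules = {                     # (state, symbol) -> (write, move, next_state)
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'H'),   # 'H' = halt
}

tape, head, state, steps = {}, 0, 'A', 0   # the dict grows on demand: the 'unbounded' tape
while state != 'H':
    write, move, state = rules[(state, tape.get(head, 0))]
    tape[head] = write
    head += move
    steps += 1

print(f"halted after {steps} steps with {sum(tape.values())} ones on the tape")
[/code]
Everything interesting lives in the rule table and the unbounded tape; the machinery itself is trivial, which is exactly why it's a model of capability rather than efficiency.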

Quoting Joshs
As Stephen Wolfram notes: “The most powerful AI might not be programmed; it might be cultivated, like a garden of interacting dynamical systems.”
I like that quote.

Quoting Apustimelogist
Obviously, to artificially recreate a human brain to acceptable approximation, you need to construct this computational system with the kinds of inputs, kinds of architectures, capabilities, whatever, that a human does.
Were I to simulate a human, I'd probably not give it inputs at all. Almost all simulations I've run do it stand-alone with no input at all. Logged output for later analysis, but that doesn't affect the simulation. Of course this means your simulated person needs to be in a small environment, also simulated.

Quoting Joshs
I will say bluntly that no machine we invent will do what we do, which is to think.
Noted. How very well justified. Your quote is about LLMs, which are mildly pimped out search engines. Compare that to devices which actually appear to think and to innovate. What do you call it if you refuse to apply the term 'think' to what it's doing?

The quote goes on to label the devices as tools. True now, but not true for long. I am arguably a tool since I spent years as a tool to make money for my employer. Am I just a model then?


Quoting Mww
Postmodern philosophy has become like Big Pharma, in that the latter creates ailments to sustain medicinal inventions while the former creates scenarios bordering on superfluous overreach
Nice analogy. It explains Chalmers' motivation for creating a problem where there really isn't one.

boundless October 04, 2025 at 06:31 #1016300
Quoting Apustimelogist
And you don't think we do? Our brains are bundles of neurons which all work in very similar ways. You could easily make an argument that we operate in accordance with some very basic kind or family of algorithms recapitulated in many different ways across the brain.


No, I don't, and you haven't here provided sufficient evidence to convince me of your view. Rather, it seems to me that, given the impressive results we have obtained with computers, you are concluding that our cognition is also algorithmic.

I believe that there is a difference between conscious - and in general living - beings and algorithmic devices. All living beings seem to have a 'sense' of unity, that there is a distinction between 'self' and 'not self' and so on. They do not just 'do' things.

Regardless, I don't think there is any consensus on this topic among scientists. So, after all in a way both our positions are speculative.

Quoting Apustimelogist
As can a human brain.


Says you. To me there is a clear difference between how human cognition works and how, say, a mechanical calculator works. And I am open to the idea that, perhaps, our cognition can't even be wholly comprehended by mathematical models, let alone only algorithms.



boundless October 04, 2025 at 06:56 #1016302
Quoting noAxioms
It means that all energy and particles and whatnot obey physical law, which yes, pretty much describes relations. That's circular, and thus poor. It asserts that this description is closed, not interfered with by entities not considered physical. That's also a weak statement since if it was ever shown that matter had mental properties, those properties would become natural properties, and thus part of physicalism. So I guess 'things interact according to the standard model' is about as close as I can get. This whole first/third person thing seems a classical problem, not requiring anything fancy like quantum or relativity theory, even if say chemistry would never work without the underlying mechanisms. A classical simulation of a neural network (with chemistry) would be enough. No need to simulate down to the molecular or even quantum precision.


Ok for the definition! Yes, and GR seems to imply that both spacetime and 'what is inside of it' are 'physical/natural'. I disagree with your view that mathematical truths are 'natural', though. They seem to be independent of space and time. That our minds are not 'natural' (in this broad sense) is perhaps more controversial. But the fact that we can know mathematical truths is quite interesting if we are 'wholly natural' (I do not know...). It seems to me, however, that it is better to reframe the 'hard problem' in a different way: can consciousness arise from what is completely inanimate?

The confidence you have in the power of algorithms seems to arise from an underlying assumption that every natural process is 'algorithmic'. Of course, I am not denying the enormous success of algorithmic models and simulations, but I am not sure that they will ever be able to give us a completely accurate model/simulation of all processes.

I admit that I can't give you a scientific argument against your assumption. But for me my phenomenological experience strongly suggests otherwise (self-awareness, the ability to choose and so on do not seem to be easily explainable in terms of algorithms).

Quoting noAxioms
OK. Not being a realist, I would query what you might mean by that. I suspect (proof would be nice) that mathematical truths are objectively true, and the structure that includes our universe supervenes on those truths. It being true implying that it's real depends on one's definition of 'real', and I find it easier not to worry about that arbitrary designation.


I lean towards a form of platonism where mathematical truths are concepts and yet are timeless and independent of space. It seems the only position that makes sense considering the following: the fact that we know them as concepts, the incredible success that mathematical laws have in describing the behaviour of physical processes, the apparently evident 'eternity' of mathematical truths (that there are infinitely many prime numbers seems to me independent of any human discovery of such a fact, for instance) and so on.

Of course, I am under no illusion that I can give an absolutely convincing argument for my view (as often happens in philosophy), but it seems to me the best view after weighing the arguments in favour and against it.

Quoting noAxioms
Since I consider that to be a process of material that has a location, it seems reasonably contained, thus yes. Not a point, mind you, but it occupies a region of space and time similarly to a rock.


Ok. In a general sense, yeah, I can perhaps agree with you that mind is natural or even 'physical'. But it has quite peculiar attributes that are difficult to explain as arising from 'inanimate' matter. And, as I said before, it seems to have the capacity to understand/know 'something' that is not 'natural'.

Quoting noAxioms
By magic, I mean an explanation that just says something unknown accounts for the observation, never an actual theory about how this alternate explanation might work. To my knowledge, there is no theory anywhere of matter having mental properties, and how it interacts with physical matter in any way. The lack of that is what puts it in the magic category.


Ok, I see. But consider that under this definition you risk including within 'magic' many partial or unclear explanations that I would not include under that word. In other words, your category of 'magic' seems excessively broad.

For instance, if we were talking in the 14th century and you claimed that 'atoms' exist and 'somehow' interact with forces that we do not know of to form the visible objects, would this be 'magic' (of course, you have to imagine yourself as having the scientific knowledge of the time)?

Quoting noAxioms
I can argue that people also are this, programmed by ancestors and the natural selection that chose them. The best thinking machines use similar mechanisms to find their own best algorithms, not any algorithm the programmer put there. LLM is indeed not an example of this.


Am I wrong to say, however, that the operations of these 'thinking machines' are completely explainable in terms of algorithms?
As I said in my previous post, I can't neglect the fact that my own self-awareness, the experience of self-agency and so on seem to indicate that we are not like that.



RogueAI October 04, 2025 at 15:43 #1016336
Quoting noAxioms
A fly evading a swat is another.


Is there something it's like to be a fly evading a swat? How do we know? How could we ever find out? Isn't the inability to answer those questions a "hard problem"?
Apustimelogist October 04, 2025 at 16:03 #1016339
Quoting boundless
No, I don't, and you haven't here provided sufficient evidence to convince me of your view. Rather, it seems to me that, given the impressive results we have obtained with computers, you are concluding that our cognition is also algorithmic.


How would you interpret the fact that our brain (or at least the component that seems involved in processing information and long-distance message-passing) is almost entirely composed of the same kind of cell, with the same fundamental underlying physiological and anatomical structures and mechanisms in terms of membrane potentials that induce action potentials?

We don't have a deep understanding from which we can build detailed, realistic, functioning models of exactly what human brains are doing and why, but we have a reasonably good basis for understanding the kind of information-processing principles that underlie what neurons do, such as efficient, sparse, predictive coding using recurrent connectivity. And really, LLM architectures work under very similar basic principles to what neurons do, which is just prediction. You can find studies showing that the same kind of models used for LLMs are actually really very good at predicting neural responses to things like language processing, because fundamentally they are doing the same thing: prediction.
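To make the shared principle concrete, here's the caricature version: a predictor guesses the next element, compares the guess with what actually arrives, and adjusts itself in proportion to the error. The repeating sequence, the linear predictor, and the learning rate are all arbitrary illustrative choices; real predictive coding and real LLMs are enormously more elaborate, but the error-driven loop is the common core:
[code]
# Caricature of 'learning = minimizing prediction error'.
# A toy linear predictor learns to guess the next value of a repeating sequence.
import numpy as np

seq = np.tile([0.0, 0.5, 1.0, 0.5], 250)   # a repeating 'world'
k, lr = 3, 0.05                            # context length and learning rate (arbitrary)
w = np.zeros(k)

for t in range(k, len(seq)):
    context = seq[t - k:t]
    prediction = w @ context
    error = seq[t] - prediction            # prediction error
    w += lr * error * context              # update driven entirely by the error

print("weights:", np.round(w, 2), " final |error|:", round(abs(float(error)), 4))
[/code]
Swap the repeating sequence for text and the linear predictor for a transformer and you have, in caricature, what an LLM's training loop does; swap it for sensory input and recurrent circuitry and you have the predictive-coding picture of what cortex is doing.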

Quoting boundless
All living beings seem to have a 'sense' of unity, that there is a distinction between 'self' and 'not self' and so on. They do not just 'do' things.


There is no reason to think that these things can't be achieved with the same fundamental processes that transformers already use... why? Because they work in the same way brains do. The difference is that all that LLMs are trained to do is predict words. Human brains don't just predict but act and control themselves; not just that, but these things are clearly biased, in terms of the evolutionarily-conserved structure of the brain itself, toward very biologically specific control (i.e. what you would call homeostatic and allostatic). But the point is that there is no reason to think these things cannot be performed by the same processes that fundamentally underlie what transformers and LLMs do if you just structure them or design them in a way that allows them to do that. It would be surprising if they didn't imo because that seems to be what brains do. Neurons share the same core fundamental physiological, anatomical, functional properties, and there is the same kind of interplay between excitation and inhibition, that is used for everything from homeostatic regulatory responses from the hypothalamus and midbrain to visual processing, motor control, language, executive functions, emotion, etc. There is of course a great variety in neurons and structures across the brain but they all share fundamental commonalities, with some shared core which is virtually ubiquitous.

hypericin October 04, 2025 at 21:08 #1016379
Quoting noAxioms
The primary disconnect seems to be that no third-person description can convey knowledge of a first-person experience


Without reading the full post, this misses the problem.

The problem is, no third person explanation can arrive at first person experience. There is an 'explanatory gap'. Not only do we not know the specific series of explanations that starts at neural facts and ends at first person experience; conceptually, it doesn't seem possible that any such series can exist.
Patterner October 04, 2025 at 22:01 #1016387
Quoting noAxioms
But not understanding how something works is not any sort of evidence that it isn't still a physical process.
Maybe so. But [I]not[/I] understanding how it works is certainly not any sort of evidence that it [I]is[/I] a physical process.


I'm wondering if you can tell me how this works. Or tell me what's wrong with my understanding.

This is what Google AI says about the release of neurotransmitters:
[Quote]1. Arrival of Action Potential:
The action potential travels down the axon of the presynaptic neuron and reaches the axon terminal.

2. Calcium Influx:
The arrival of the action potential opens voltage-gated calcium channels at the axon terminal.
Calcium ions (Ca2+) flow into the neuron.

3. Fusion of Synaptic Vesicles:
Ca2+ binds to proteins on the synaptic vesicles, which are small membrane-bound structures containing neurotransmitters.
This binding triggers the fusion of the synaptic vesicles with the presynaptic membrane.

4. Neurotransmitter Release:
As the vesicles fuse, the neurotransmitters are released into the synaptic cleft, the space between the presynaptic and postsynaptic neurons.

5. Diffusion and Binding:
The released neurotransmitters diffuse across the synaptic cleft and bind to receptors on the postsynaptic neuron.

6. Termination of Neurotransmitter Action:
Neurotransmitters are eventually removed from the synaptic cleft by reuptake into the presynaptic neuron, enzymatic breakdown, or diffusion away from the receptors.[/quote]

Here's what it says about the first step - Action Potential:
Resting Membrane Potential: In a resting neuron, the inside of the cell is more negative than the outside, establishing a resting membrane potential (around -70 mV).

Threshold: A stimulus, often in the form of chemical signals from other neurons (neurotransmitters), causes the membrane to depolarize (become less negative). If this depolarization reaches a critical "threshold" level (e.g., -55 mV), it triggers an action potential.

Depolarization: At threshold, voltage-gated sodium channels open rapidly, allowing a large influx of positively charged sodium ions into the cell. This makes the inside of the neuron rapidly more positive.

Repolarization: Sodium channels then inactivate, and voltage-gated potassium channels open, allowing positively charged potassium ions to flow out of the cell. This efflux of potassium ions causes the membrane potential to become more negative again, moving it back towards the resting potential.

Hyperpolarization: The potassium channels may remain open a bit longer than needed, causing the membrane potential to dip below the resting potential before they close.

Return to Rest: Finally, ion pumps (like the sodium-potassium pump) restore the resting membrane potential, preparing the neuron for another action potential.
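Just to be clear about the level of description I'm working with, that threshold behaviour can be caricatured in a few lines of code. This is a standard leaky integrate-and-fire toy, not anything from the quoted description beyond the roughly -70 mV resting and -55 mV threshold values; every other constant is an arbitrary choice, and the ion channels are skipped entirely:
[code]
# Leaky integrate-and-fire caricature of the 'threshold' step described above.
dt, tau = 0.1, 10.0                 # time step and membrane time constant (ms), arbitrary
v_rest, v_thresh = -70.0, -55.0     # the two values quoted above (mV)
v, spikes = v_rest, []
input_current = 2.0                 # constant drive, arbitrary units

for step in range(2000):            # ~200 ms of simulated time
    dv = (-(v - v_rest) + input_current * 10.0) / tau
    v += dv * dt
    if v >= v_thresh:               # threshold reached: count a spike and reset
        spikes.append(step * dt)
        v = v_rest

print(f"{len(spikes)} spikes in 200 ms")
[/code]
None of that tells me why any of it would be the seeing of red or the thinking of a number, which is my question.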


You say all of this, along with whatever other processes are taking place, is a description of not only things like receiving sensory input and distinguishing wavelengths of the electromagnetic spectrum, and receptors on my tongue distinguishing molecules that have made contact, but also seeing the color red, and tasting the sweetness of sugar. More than that, it's a description of my thoughts.

My thoughts are what I'm really wondering about at the moment. That kind of activity is why I'm thinking of the number 7. And, over the next several seconds, due to the laws of physics working on what was there, the arrangements of all the particles of my brain change from X to Y to Z. And those arrangements just happen to mean "+ 18 = 25".

The same could be said for any thoughts I ever have, mathematical or otherwise.

Of course, it's not simply one chain of neurons involved in a thought. I wouldn't care to guess how many are involved in any given thought. Or how many are involved multiple times in a single thought. There's probably all kinds of back tracking and looping.

How does all that work??? In particular:

-How do progressions of arrangements of all the particles in my brain [I]mean[/I] all the things they mean?

-How do all the action potentials and releasings of neurotransmitters coordinate throughout the brain, or an area of the brain, so that X, Y, and a million others happen at the same time in order to bring about the needed thought? (I could understand if one specific neuron initiated it all, so that the timing would be assured. But that would mean the single neuron already had the thought, and initiated all the activity to, shall we say, actualize(?) the thought. But that whole idea is a bit silly.)


frank October 05, 2025 at 10:19 #1016445
Quoting Patterner
You say all of this, along with whatever other processes are taking place, is a description of not only things like receiving sensory input and distinguishing wavelengths of the electromagnetic spectrum, and receptors on my tongue distinguishing molecules that have made contact, but also seeing the color red, and tasting the sweetness of sugar. More than that, it's a description of my thoughts.


You can initiate a fight or flight response by deciding to think of something scary. Maybe that would be an argument that a description of how neurons communicate isn't a description of thought.

You can also get a dose of adrenaline by making yourself hyperventilate. Don't do it in the middle of the night though, because you won't be able to sleep afterwards.
Joshs October 05, 2025 at 14:19 #1016475
Reply to noAxioms Quoting noAxioms
I will say bluntly that no machine we invent will do what we do, which is to think.
— Joshs
Noted. How very well justified. Your quote is about LLMs, which are mildly pimped out search engines. Compare that to devices which actually appear to think and to innovate. What do you call it if you refuse to apply the term 'think' to what it's doing?

The quote goes on to label the devices as tools. True now, but not true for long. I am arguably a tool since I spent years as a tool to make money for my employer. Am I just a model then?


When we build a machine it has a time stamp on it. No matter how apparently flexible its behavior, that flexibility will always be framed and limited to the model of thinking that dates to the time that the product is released to the market. As soon as it is released, it already is on the way to obsolescence, because the humans who designed it are changing the overarching frame of their thinking in subtle ways every moment of life. 20 years from now this same device which was originally praised for its remarkable ability to learn and change “just like us” will be considered just as much a relic as a typewriter or VCR. But the engineers who invented it will have continued to modify their ways of thinking such as to produce a continuous series of new generations of machines, each of which will be praised for their remarkable ability to ‘think and innovate’, just like us. This is the difference between a human appendage (our machines) and a human cognizer.

You might argue that biological evolution designs us, so that we also have a time stamp on us framing and limiting our ability to innovate. But I disagree. Human thought isn’t bound by a frame. It emerged through evolutionary processes but has no limits to the directions in which thinking can reinvent itself. This is because it isn’t just a product of those processes, but carries forward those processes. Evolution isn’t designed, it is self-designing. And as the carrying forward of evolution, human culture is also self-designing. Machines don’t carry forward the basis of their design. They are nothing but their design, and have to wait for their engineers to upgrade them. Can we teach them to upgrade and modify themselves? Sure, based on contingent, human-generated concepts of upgrading and modification that have a time stamp on them.
noAxioms October 05, 2025 at 16:24 #1016518
Quoting hypericin
The problem is, no third person explanation can arrive at first person experience.
I guess I didn't see much difference between a description and an explanation. My point was that nothing of either sort will arrive at the 'experience' part of it.
Chalmers' explanation certainly doesn't arrive at first person experience. "A sauce is needed" is no more an explanation than it not being needed.

And I disagree. First person (the PoV part, not the human qualia associated with 'experience') is trivial. The experience part (qualia and such) is 'what it is like', and that part cannot be known about something else, even when that something else isn't complex. The first person part seems trivial, and entire articles are nevertheless written suggesting otherwise, but seemingly without identifying exactly what's problematic about it.

The title of this topic is about the first/third person divide, which Chalmers asserts to be fundamental to said 'hard problem', but it isn't. The qualia is what's hard. I can't know what it's like to be a bat, but even a rock has a first person PoV, even if it utterly lacks the ability to actually view anything.



Quoting boundless
The confidence you have in the power of algorithms seems to arise from an underlying assumption that every natural process is 'algorithmic'.
Not sure what you mean by that, but I can perhaps say that every natural process can in principle be simulated via an algorithmic device that has sufficient time and memory. (Speed/power is not one of the requirements). This assumes a form of physicalism, yes, and the statement would likely be false if that was not assumed.

I am not sure that they can ever be able to give us a completely accurate model/simulation of all processes.
I don't think a classical simulation can be done of something not classical, such as a quantum computer. Heck, even grass has been shown to be utilizing quantum computation, so what does that do to my claim that grass can be simulated?

Quoting boundless
But for me my ... ability to choose ... [does] not seem to be easily explainable in terms of algorithms
You must have an incredibly different notion of 'choice' when there are so many trivial devices that make them every second. It's not hard at all.
But this seems illustrative of my point. Your finding something as trivial as choice to be physically inexplicable is likely due to reasons very much related to why you might find something as trivial as a first person point of view similarly inexplicable. You're already presuming something other than physicalism, under which these things are trivial.

For instance, if we were talking in the 14th century and you claimed that 'atoms' exist and 'somehow' interact with forces that we do not know of to form the visible objects, would this be 'magic' (of course, you have to imagine yourself as having the scientific knowledge of the time)?
Yes, that would qualify as magic. It's a guess, and a lucky one. Elements as distinct from compounds were still hundreds of years away, so 'atom' meant just 'tiny indivisible bit' and there were no known examples of one, even if some substances known at the time happened to be pure elements. BTW, 'atom' no longer implies 'tiny indivisible bit'. The word unfortunately stuck to a quantum of a specific element and not to whatever is currently considered to be an indivisible component of matter.

Quoting boundless
Am I wrong to say, however, that the operations of these 'thinking machines' are completely explainable in terms of algorithms?
Probably not so. The algorithms developed by say alphaZero have defied explanation. Nobody knows how they work. That isn't an assertion that the operations are not the result of deterministic processes. All the same things can be said of humans.


Quoting RogueAI
Is there something it's like to be a fly evading a swat? How do we know? How could we ever find out? Isn't the inability to answer those questions a "hard problem"?

From observation, the answer to that question is yes or no depending on whether it supports my personal conclusions on the matter. Hence assertions of there perhaps being something it is like to be the fly, but not something it is like to be an autonomous drone doing somewhat the same things and more.


Quoting Patterner

This is what Google AI says about the release of neurotransmitters:

Cool level of detail. I notice no influence from, say, chemicals in the bloodstream. It all sounds very much like logic gates. A similar breakdown of transistor operation could be made; transistors are sometimes more binary and less analog, but still, either could be implemented via the components of the other. The chemical influences would be harder to mimic with transistors and would likely play a role only at higher levels.

I also notice nowhere in those 6 steps any kind of mental properties of matter playing any sort of role that they somehow are forbidden from doing in transistors. Were that the case, then my claim of either being capable of being implemented by the components of the other would be false. This lack of external mental influence (or at least the lack of a theory suggesting such) is strong evidence for the physicalist stance.

You say all of this, along with whatever other processes are taking place, is a description of not only things like receiving sensory input and distinguishing wavelengths of the electromagnetic spectrum, and receptors on my tongue distinguishing molecules that have made contact, but also seeing the color red, and tasting the sweetness of sugar. More than that, it's a description of my thoughts.
No, I cannot describe thoughts in terms of neurons any more than I can describe a network file server in terms of electrons tunneling through the base potential of transistors. It's about 12 levels of detail removed from where it should be. Your incredulity is showing.


Quoting Joshs
No matter how apparently flexible its behavior, that flexibility will always be framed and limited to the model of thinking that dates to the time that the product is released to the market.
Not so for devices that find their own models of thinking.

As soon as it is released, it already is on the way to obsolescence
So similar to almost every creature. Name a multicelled creature we have a fossil of that still exists today. I can't think of one. They're all obsolete. A rare fossil might have some living descendants today (I can think of no examples), but the descendant is a newer model, not the same species.

The singularity is defined as the point at which human engineers play little role in the evolution of machine thinking. Humans will also cease being necessary in their manufacturing process, and I suspect the trend of humans handing more and more of their tasks to machines will continue to the point that the singularity will be welcomed.

It will be interesting to see what role evolution plays in that context, where changes are deliberate, but selection is still natural. Also, it may not necessarily be many machines in competition for resources, but rather many ideas competing to be 'the machine'.
Patterner October 05, 2025 at 17:42 #1016561
Quoting noAxioms
No, I cannot describe thoughts in terms of neurons any more than I can describe a network file server in terms of electrons tunneling through the base potential of transistors. It's about 12 levels of detail removed from where it should be.
Ok, wrong word. You agreed they [I]are[/I] the same thing. But they can't be [I]described as[/I] the same thing.



Quoting noAxioms
Your incredulity is showing.
I am trying to understand your position. Can you give me any detail, or direction? My incredulity is a given. But if you're right, I'd like to understand.
Joshs October 05, 2025 at 18:00 #1016572
Quoting noAxioms
No matter how apparently flexible its behavior, that flexibility will always be framed and limited to the model of thinking that dates to the time that the product is released to the market.
— Joshs
Not so for devices that find their own models of thinking.

As soon as it is released, it already is on the way to obsolescence
So similar to almost every creature. Name a multicelled creature we have a fossil of that still exists today. I can't think of one. They're all obsolete. A rare fossil might have some living descendants today (I can think of no examples), but the descendant is a newer model, not the same species.


You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic. This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment. This is as true of dinosaurs and dodo birds as it is of us.

While organisms without linguistic culture rely on inherited ‘design’ to frame their possibilities of adaptation to the world, this isn’t just a one-way street. The organism’s own novel capabilities shape and modify the underlying genetic design, so behavior affects and modifies future genetic design just as genetic design frames behavior. This is what makes all living systems self-designing in a way that our machines can never be. A designer can’t ‘teach’ its product to be self-designing because all the tools the machine will have at its disposal will have been preset in advance by the designer. There is no reciprocal back and forth between machine and designer when the designer chooses a plan for the machine.

In that the engineers have the opportunity to observe how their machine operates out in the world, there is then a reciprocity between machine behavior and designer. But this is true of our artistic and philosophical creations as well. We create them, they ‘talk back to us’ and then we modify our thinking as a result.
Apustimelogist October 05, 2025 at 21:35 #1016613
Reply to Joshs

I really don't understand what you are going on about. A brain is a physical object. In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities.
Outlander October 05, 2025 at 21:37 #1016614
Quoting Apustimelogist
In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities.


"One could do the impossible, if only it were possible" becomes a bit less profound when said aloud more than once.
Apustimelogist October 05, 2025 at 21:52 #1016618
Reply to Outlander

There is absolutely nothing of any substance in Josh's post that refutes the idea that one could build a self-organizing machine.
hypericin October 05, 2025 at 22:27 #1016637
Quoting noAxioms
The title of this topic is about the first/third person divide, which Chalmers asserts to be fundamental to said 'hard problem', but it isn't. The qualia is what's hard.


This feels like a strange misunderstanding. Qualia are intrinsically first person. When people talk about first person experience being mysterious, they are talking about qualia, not mere geometric POV.

This especially raises my eyebrows, because I remember a time you thought you were a p zombie!
Joshs October 06, 2025 at 12:26 #1016743
Quoting Apustimelogist

I really don't understand what you are going on about. A brain is a physical object. In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities.


And can we also create life from scratch if we had all the technological capabilities? What I am going on about are the important differences between the cognizing of a living organism and the thinking of a human-designed machine.
Apustimelogist October 06, 2025 at 16:05 #1016766
Quoting Joshs
And can we also create life from scratch if we had all the technological capabilities? What I am going on about are the important differences between the cognizing of a living organism and the thinking of a human-designed machine.


It seems to me you are going on about differences between living organisms and our current machines. But there is no refutation here that in principle one can build machines which are as complex as living organisms. You haven't set out any reason why those differences wouldn't be breachable. You just say that living organisms are like this and machines we have built are like that. You don't seem to think that in principle we can understand the principles of self-organization and use that to build self-organizing machines when people have been doing that for decades. The learning that A.I. does isn't even all that different to self-organization in the sense that in modern A.I. we don't hardcode the capabilities and things these systems are actually doing; usually people don't even really know how the A.I. they have designed does what it does on a mechanical level. What is being hardcoded, effectively, is the ability for a system to learn to do things by itself without explicit supervision, which is self-organization.
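A toy version of what I mean by hardcoding only the learning rule: Oja's rule, a textbook self-organizing update. Nothing below states what the weights should become, yet they end up aligned with the dominant direction in the data anyway. The data and the rates are arbitrary choices; it's only a sketch of the principle, not a claim about any particular A.I. system:
[code]
# Only the local update rule is 'hardcoded'; the outcome self-organizes from the data.
import numpy as np

rng = np.random.default_rng(1)
# correlated 2-D data whose dominant direction is roughly (1, 1)
data = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])

w, lr = rng.normal(size=2), 0.01
for x in data:
    y = w @ x
    w += lr * y * (x - y * w)       # Oja's rule: Hebbian growth with a decay term

print("learned direction:", np.round(w / np.linalg.norm(w), 2))
[/code]
The designer wrote the one-line rule; where the weights end up is settled by the statistics of what the system is exposed to, which is the sense of self-organization I'm pointing at.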
Joshs October 06, 2025 at 17:20 #1016775
Reply to Apustimelogist

Quoting Apustimelogist
What is being hardcoded, effectively, is the ability for a system to learn to do things by itself without explicit supervision, which is self-organization.


What does hardcoded mean? What are the technological concepts involved? It is not difficult to answer this question for any given a.i. architectural blueprint. One can put together a list of such grounding concepts. At the top of this list should be concepts like ‘hard’, ‘code’ and what these do together in the form ‘hardcode’, but it also includes our current understanding of concepts like memory, speed, symbol, computation, and many others. If I were to suggest that today’s computer architectures and their grounding concepts are based on older philosophical ideas that have been surpassed, would you respond that this simply proves your point that just because today’s technological know-how is inadequate to model human thinking, this doesn’t mean that tomorrow’s technological understanding can’t do the job?

If I am claiming that there is a more adequate way of understanding how humans think, what stands in the way of our applying this more adequate approach to the development of products which think like us? My answer is that nothing stands in the way of it. I would just add that such an approach would recognize that the very concept of a device humans design and build from the ground up belongs to the old way of thinking about what thinking is.

So if I’m not talking about a machine that we invent, what am I talking about? What alternatives are there? Writers like Kurzweil treat human and machine intelligence in an ahistorical manner, as if the current notions of knowledge , cognition, intelligence and memory were cast in stone rather than socially constructed concepts that will make way for new ways of thinking about what intelligence means. In other words, they treat the archival snapshot of technological cultural knowledge that current AI represents as if it were the creative evolution of intelligence that only human ecological semio-linguistic development is capable of. Only when we engineer already living systems will we be dealing with intelligent, that is, non-archival entities, beings that dynamically create and move through frames of cognition.

When we create dna-based computers in a test tube and embed them within an organic ecological system, or when we breed, domesticate and genetically engineer animals, we are adding human invention on top of what is already an autonomous, intelligent life-based ecological system , or ecological subsystem. The modifications we make to such systems will allow us to communicate with them and shape their behavior, but what will constitute true thinking in those living systems is not what we attempt to ‘lock in’ through ‘hardcoding’ but what feeds back into, surpasses and transforms that hardcoding through no planned action of our own. The difference here with devices that we design from the ground up is that the latter can never truly surprise us. Their seeming creativity will always express variations on a theme that we ‘hardcode’ into it, even when we try to hardcode creative surprise and innovation.

When we achieve that transition from inventing machines to modifying living systems, that organic wetware will never surpass us for the same reason that the animals we interact with will never surpass us in spite of the fact that they are subtly changing their ‘hardcoded’ nature all the time. As our own intelligence evolves, we understand other animals in more and more complex ways. In a similar way, the intelligence of our engineered wetware will evolve in parallel with ours.

Apustimelogist October 06, 2025 at 19:52 #1016789
Quoting Joshs
What does hardcoded mean?


Pre-programmed, in contrast to self-organization. It's not some technical concept. For instance, you could say pain or hunger is in some sense hard-coded into us.
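To make the contrast concrete, here is a minimal illustrative sketch (hypothetical code, not tied to any system discussed in this thread): the first response is fixed at the moment the program is written, while the second is whatever value the system drifts to through its own exposure to feedback.

[code]
# Hypothetical sketch: a hardcoded response vs. one the system settles on itself.
import random

PAIN_THRESHOLD = 40.0  # hardcoded: chosen by the programmer, never changes

def hardcoded_response(stimulus):
    return "withdraw" if stimulus > PAIN_THRESHOLD else "ignore"

# A one-parameter "learner": the weight is shaped only by error feedback.
weight = random.uniform(-1.0, 1.0)
for _ in range(2000):
    stimulus = random.uniform(0.0, 100.0)
    target = 1.0 if stimulus > 40.0 else 0.0             # feedback from the environment
    prediction = 1.0 if weight * stimulus > 20.0 else 0.0
    weight += 0.0005 * (target - prediction) * stimulus   # error-driven update

print(hardcoded_response(75.0))  # always "withdraw": the rule was written in
print(round(weight, 2))          # whatever value the updates settled near
[/code]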

You have then seemed to base the rest of the post on latching onto this use of the word "hardcoded" even though I initially brought that word up in the post to say that "hardcode" is exactly not what characterizes self-organization or what A.I. do.
Joshs October 06, 2025 at 23:15 #1016840
Reply to Apustimelogist

Quoting Apustimelogist
Pre-programmed, in contrast to self-organization. It's not some technical concept. For instance, you could say pain or hunger is in some sense hard-coded into us.


Whether technical or non-technical, it is a concept, and all concepts belong to cultural understandings, which are contingent and historically changing. And the machines we invent are stuck with whatever grounding concept we use to build their remarkable powers to innovate. Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts, instantiated in a self-learning a.i., will lead to the singularity.

Quoting Apustimelogist
You have then seemed to base the rest of the post on latching onto this use of the word "hardcoded" even though I initially brought that word up in the post to say that "hardcode" is exactly not what characterizes self-organization or what A.I. do.


The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization (properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.
noAxioms October 07, 2025 at 05:54 #1016928
Quoting hypericin
The title of this topic is about the first/third person divide, which Chalmers asserts to be fundamental to said 'hard problem', but it isn't. The qualia is what's hard. — noAxioms


This feels like a strange misunderstanding. Qualia are intrinsically first person. When people talk about first person experience being mysterious, they are talking about qualia, not mere geometric POV.

I think that's what I said. It makes qualia the fundamental issue, not first person, which is, as you call it, mere geometric PoV.

This especially raises my eyebrows, because I remember a time you thought you were a p zombie!
Kind of still do, but claiming to be a p-zombie opens me to the possibility that some others are not, and if so, that all of, say, quantum theory is wrong, or at least grossly incomplete.


Quoting Patterner
No, I cannot describe thoughts in terms of neurons any more than I can describe a network file server in terms of electrons tunneling through the base potential of transistors. It's about 12 levels of detail removed from where it should be. — noAxioms

Ok, wrong word. You agreed they are the same thing. But they can't be described as the same thing.
Not sure what two things are the same here, but I don't think I said that two different things are the same thing. Certainly not in that quote.

I am trying to understand your position.
My position is simply that nobody has ever demonstrated the simpler model wrong. Plenty (yourself included) reject that simplicity, which is your choice. But the physical view hasn't been falsified, and there is no current alternative theory of physics that allows what you're proposing. You'd think somebody would have come up with one if such a view was actually being taken seriously by the scientific community.


Quoting Apustimelogist
I really don't understand what you are going on about. A brain is a physical object. In principle, you can build a brain that does all the things brains do from scratch if you had the technological capabilities.

Given their trouble even producing a manufactured cell from scratch (a designed one, not a reproduction of abiogenesis, which is unlikely to be done), you wonder if it can even be done in principle. Certainly a brain would not be operational. It needs a being to be in, and that being needs an environment, hence my suggestion of a simulation of both. The other thing of questionable feasibility is the scanning phase: somehow taking a full snapshot of a living thing, enough information to, in principle, reproduce it. Do they have a simulation of a living cell? Are we even that far yet?

Anyway, in general, I agree with your stance, even if perhaps not with what cannot be done even in principle.


Quoting Joshs
You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic.
Yea, which is why mechanical devices are not yet living things. It can happen. Whether it will or not is an open question at this point. A device being living is not a requirement for it to think or to have a point of view.

This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment.
You mean like putting on a coat when winter comes? What does this have to do with the topic again? The definition of 'life' comes up only because you're asserting that life seems to have access to a kind of physics that the same matter not currently part of a lifeform does not.
Joshs October 07, 2025 at 14:14 #1016955
Reply to noAxioms

Quoting noAxioms
My position is simply that nobody has ever demonstrated the simpler model wrong. Plenty (yourself included) reject that simplicity, which is your choice. But the physical view hasn't been falsified, and there is no current alternative theory of physics that allows what you're proposing. You'd think somebody would have come up with one if such a view was actually being taken seriously by the scientific community.


The simpler model is proven wrong all the time. Put more accurately, scientific paradigms are replaced by different ones all the time. Since I am a Kuhnian rather than a Popperian, I don’t want to say that the older view is falsified by the newer one. Rather, the frame of intelligibility is turned on its head from time to time, leading to changes in basic concepts, in what counts as evidence, and even in scientific method itself. From a short distance, it may seem as if there is one scientific method that has been in use for three hundred years, and that new discoveries about the brain are simply added to the older ones with minor adjustments. But from a vantage of greater historical depth, it can be seen that all concepts are in play, not just those concerned with how to create the best third person theory, but whether a ‘simple’ empirical model implies a physicalistic account, what a third person theory is, what a first person account is, and how to conceive the relationship between them. For instance, certain embodied enactivist approaches to the brain, such as Francisco Varela’s neurophenomenology, sweepingly rethink this relation.

So, on its own terms, what you call the ‘simple’ empirical model can’t be defined in some static, ahistorical way as third person physicalism as opposed to subjective feeling.

Quoting noAxioms
Certainly a brain would not be operational. It needs a being to be in, and that being needs an environment, hence my suggestion of a simulation of


Yes, if your aim is to get your brain to do what living brains do, you have to start by keeping in mind that a life form isn’t a thing placed in an environment. It IS an environment, indissociably brain, body and world in inextricable interaction. That has to be the starting point. As soon as we start thinking that we have to ‘invent’ a body and an environment for a device we separately invent, and ignore the fact that we ourselves were not first invented and then placed in a body and a room, we have guaranteed that our device will not ‘think’ the way that living systems think. If, on the other hand, we take as our task the modification of an already existing ecology (biological computing in a test tube), we are on the road to systems that think the way creatures (including plants) think.

Quoting noAxioms
You’re missing the point. Even taking into account all of the biological lineages which become extinct, what it means to be a living system is to be self-organizing, and this self-organization is dynamic.
— Joshs
Yea, which is why mechanical devices are not yet living things. It can happen. Whether it will or not is an open question at this point. A device being living is not a requirement for it to think or to have a point of view


The reason it’s a question of the material is more a matter of the capacities of the material to be intrinsically self-transforming and self-organizing than whether we use silicon or dna strands as our building blocks. What I mean is that we can’t start with inorganic parts that we understand in terms of already fixed properties (which would appear to be intrinsic to how we define the inorganic) and then design self-organizing capacities around these parts. Such a system can never be fundamentally, ecologically self-organizing in the way a living environment of organic molecules is, and thus cannot think in the way living creatures think.

Quoting noAxioms
This means that to continue existing as that creature from moment to moment is to make changes in itself that maintain the normative self-consistency of its functioning in its environment while at the same time adapting and accommodating itself to the always new features of its environment.
You mean like putting on a coat when winter comes? What does this have to do with the topic again? The definition of 'life' comes up only because you're asserting that life seems to have access to a kind of physics that the same matter not currently part of a lifeform does not.


Yes, a popular conception of both living and non-living things is that they start from building blocks, and that thinking is computation, which can be performed with any material that can be used to symbolize ones and zeros. Add in non-linear recursive functions and, presto, you have a self-organizing system. This will certainly work if what we want are machines which perform endless variations on themes set by the fixed properties of their building blocks as we conceptualize them, and by the constraints of digital algorithms.
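For what it's worth, the 'non-linear recursive function' picture can be made concrete with something as small as the logistic map, offered here purely as an illustration of how little machinery that picture needs, and of how its endless variety remains variation on a fixed theme:

[code]
# The logistic map: a one-line non-linear recursion, x -> r*x*(1-x).
# For r near 4 the orbit never settles, yet every value it produces is
# dictated entirely by the fixed rule and the starting point.
def logistic_orbit(r=3.9, x=0.2, steps=20):
    orbit = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(round(x, 4))
    return orbit

print(logistic_orbit())
[/code]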

My point is really that we need to listen to those theorists (Physicist Karen Barad, Joseph Rouse, James Bridle) who suggest that material entities don’t pre-exist their interactions ( or ‘intra-actions’), and the limitations of our current models of both the living and the non-living have to do with their reliance on the concept of a building block. Just as any material will do if we think of thinking ( and materials) in terms of computational patterns of ones and zeros, no material will do, not even organic molecules, if we want to have such entities think the way life thinks.
Apustimelogist October 07, 2025 at 15:04 #1016963
Quoting Joshs
Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts instantiated in a self-learning a.i., will lead to the singularity.


I don't understand this sentiment. It's not a refutation of the possibilities of what can be created, neither is it a realistic sentiment about how the world works. Things change, ideas advance, ideas bleed over between different fields. I doubt anyone in 1950 could tangibly envision technology that does what A.I. are doing now.

Quoting Joshs
The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization ( properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.


And this would apply to all living organisms: jellyfish, plants, the many kinds of brains, neural webs, etc, etc.
boundless October 07, 2025 at 15:26 #1016967
Reply to Apustimelogist Reply to noAxioms Regarding the distinction between 'living beings' and AI, I believe that @Joshs did a very good job in explaining (much more clearly than I could) why I also think there is a real distinction between them.

Anyway, even if I granted to you that in the future we might be able to build a 'sentient artificial intelligence', I believe that the 'hard problem' would remain. In virtue of what properties of the inanimate aspects of reality can consciousness (with its 'first-person perspective', 'qualia' etc) arise?

And even if it is unrelated to the 'hard problem', I think that the undeniable existence of mathematical truths also points to something beyond 'physicalism'*. That there are an infinite number of primes seems to be something that is independent from human knowledge and also spatio-temporal location. In fact, it seems utterly independent from spacetime.
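For reference, the classical argument behind that example is short. If $p_1, p_2, \ldots, p_n$ were all of the primes, consider $N = p_1 p_2 \cdots p_n + 1$: dividing $N$ by any $p_i$ leaves remainder 1, so any prime factor of $N$ lies outside the list, and no finite list can exhaust the primes. Nothing in the argument refers to who proves it, or where, or when.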

*TBH, there is always the problem of what one means by 'physicalism'. I mean, I do not see how, for instance, 'panpsychism' is inconsistent with a very broad definition of 'physicalism' in which "what is spatio-temporal" includes everything that is real.
As I said before, however, I believe we can know something that cannot be included in a meaningful way in the category of the 'physical'.

Reply to noAxioms Regarding the 'magic' thing, then, it seems to me that the criterion you give about 'not being magical' is something like being 'totally understandable', something that is not too dissimilar to the ancient notion of 'intelligibility'. That is, if one has a 'fuzzy explanation' of a given phenomenon where something is left unexplained, the explanation is magical. If that is so, however, it seems to me that you assume that the 'laws of thought' and the 'laws of nature' are so close to each other that one has to ask: how is that possible in purely physicalist terms?
Physical causality doesn't seem to explain, say, logical implication. It doesn't seem possible IMO to explain in purely physical terms why "Socrates is mortal" follows from "Socrates is a man" and "men are mortal". If 'physical reality' is as intelligible as you think it is, it seems to me that your own view is actually, ironically, not very far from positing an ontologically fundamental 'mental' aspect to reality.

I am not saying you are wrong here. I actually find a lot to agree with here but, curiously, intelligibility suggests to me that there is a fundamental mental aspect to reality whereas, if I am not misunderstanding you, you seem to think that intelligibility actually is strong evidence for physicalism. Interesting.
Apustimelogist October 07, 2025 at 15:27 #1016968
Quoting noAxioms
Given their trouble even producing a manufactured cell from scratch (a designed one, not a reproduction of abiogenesis, which is unlikely to be done), you wonder if it can even be done in principle.


Well this is then just a speculation about technological capability, which I referred to conditionally.
Joshs October 07, 2025 at 17:25 #1016973
Reply to Apustimelogist

Quoting Apustimelogist
Given that the thinking of our best engineers doesn’t even represent the leading edge of thinking of our era, it’s kind of hard to imagine how their slightly moldy concepts instantiated in a self-learning a.i., will lead to the singularity.
— Joshs

I don't understand this sentiment. It's not a refutation of the possibilities of what can be created, neither is it a realistic sentiment about how the world works. Things change, ideas advance, ideas bleed over between different fields. I doubt anyone in 1950 could tangibly envision technology that does what A.I. are doing now.

The fact that the architectures of our most self-organizing machines depend on locking in certain grounding concepts to define the parameters and properties of self-organization ( properties which will change along with the concepts in a few years as the engineers come up with better machines) means that these concepts are in fact forms of hardcoding.
— Joshs

And this would apply to all living organisms: jellyfish, plants, the many kinds of brains, neural webs, etc, etc.


I’m realizing after my last post to NoAxioms that what I’m arguing is not that our technological capabilities don’t evolve along with our knowledge, nor am I claiming that the fruits of such technological progress don’t include systems that think in ways that deeply overlap thinking in living organisms. What I am arguing is that the implications of such progress, which the leading edge of thinking in philosophy and the psychological sciences points to, necessitate a change in our vocabulary, a shift away from certain concepts that now dominate thinking about the possibilities of a.i.

I’m thinking of notions like the evolution of our devices away from our control and beyond our own cognitive capabilities, the idea that a thinking system is ‘invented’ from components, that the materials used in instantiating a thinking machine don’t matter, and that it is fundamentally computational in nature. I recognize that those who say today’s a.i. already mirrors how humans think are absolutely correct in one sense: their models of human behavior mirror their models of a.i. behavior. So when I protest that today’s a.i. in no way captures how we think, I’m making two points. First, I am saying that they are failing to capture how humans think. I am comparing their model of human behavior to an alternative which I believe is much richer. And second, that richer model demands a change in the way we talk about what it means to be a machine.

In a way, this debate doesn’t even need to bring in our understanding of what specific current a.i. approaches can do or speculation about what future ones will do. Everything I want to argue for and against concerning what a.i. is and can become is already contained within my differences with you concerning what humans and other animals are doing when they think. This gets into the most difficult philosophical and psychological perspectives of the past 150 years, and discussions about a.i. technology derive their sense from this more fundamental ground.

hypericin October 07, 2025 at 18:25 #1016978
Quoting noAxioms
I think that's what I said. It makes qualia the fundamental issue, not first person, which is, as you call it, mere geometric PoV.


You seem to be arguing against a position that nobody takes. Neither Chalmers nor anyone else believes geometric PoV is mysterious. Everyone agrees that qualia are the fundamental issue.

Quoting noAxioms
and if so, that all of say quantum theory is wrong, or at least grossly incomplete.


Not necessarily. It is the "hard problem", not the "impossible problem". Chalmers does believe physics is incomplete, but several others believe consciousness is explicable naturally without amending physics, while still acknowledging the uniquely difficult status of the hard problem.
Apustimelogist October 07, 2025 at 20:26 #1017014
Quoting Joshs
Everything I want to argue for and against concerning what a.i. is and can become is already contained within my differences with you concerning what humans and other animals are doing when they think.


I don't think there is any fundamental difference here between what I think about what humans and animals do, I think the disagreement is about relevance.
Joshs October 07, 2025 at 20:47 #1017018
Quoting Apustimelogist
I don't think there is any fundamental difference here between what I think about what humans and animals do, I think the disagreement is about relevance.


You’re saying you think you and I are approaching our understanding of human and animal cognition from the same philosophical perspective? And what does our disagreement over relevance pertain to? Whether how one understands human and animal cognition is relevant to the production of a.i. that can think like humans and animals?
Apustimelogist October 07, 2025 at 21:55 #1017032
Quoting Joshs
You’re saying you think you and I are approaching our understanding of human and animal cognition from the same philosophical perspective?


Broadly, yes.

Quoting Joshs
And what does our disagreement over relevance pertain to?


When I came into this thread I was talking about the hard problem of consciousness and the plausibility that physical mechanisms can produce all human behavior. The point was that I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understand.
Joshs October 07, 2025 at 22:06 #1017035
Quoting Apustimelogist
I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understand


Yes, that’s what I thought. So that indicates a distinctly different philosophical perspective on human and animal cognition than my view, which is closer to enactivists like Evan Thompson:



"I follow the trajectory that arises in the later Husserl and continues in Merleau-Ponty, and that calls for a rethinking of the concept of “nature” in a post-physicalist way—one that doesn't conceive of fundamental nature or physical being in a way that builds in the objectivist idea that such being is intrinsically of essentially non-experiential. But, again, this point doesn't entail that nature is intrinsically or essentially experiential (this is the line that pan-psychists and Whiteheadians take). (Maybe it is, but I don't think we're now in position to know that.) All I want to say for now (or think I have grounds for saying now) is that we can see historically how the concept of nature as physical being got constructed in an objectivist way, while at the same time we can begin to conceive of the possibility of a different kind of construction that would be post-physicalist and post-dualist–that is, beyond the divide between the “mental” (understood as not conceptually involving the physical) and the “physical” (understood as not conceptually involving the mental)."

“Many philosophers have argued that there seems to be a gap between the objective, naturalistic facts of the world and the subjective facts of conscious experience. The hard problem is the conceptual and metaphysical problem of how to bridge this apparent gap. There are many critical things that can be said about the hard problem, but what I wish to point out here is that it depends for its very formulation on the premise that the embodied mind as a natural entity exists ‘out there' independently of how we configure or constitute it as an object of knowledge through our reciprocal empathic understanding of one other as experiencing subjects. One way of formulating the hard problem is to ask: if we had a complete, canonical, objective, physicalist account of the natural world, including all the physical facts of the brain and the organism, would it conceptually or logically entail the subjective facts of consciousness? If this account would not entail these facts, then consciousness must be an additional, non-natural property of the world.

One problem with this whole way of setting up the issue, however, is that it presupposes we can make sense of the very notion of a single, canonical, physicalist description of the world, which is highly doubtful, and that in arriving (or at any rate approaching) such a description, we are attaining a viewpoint that does not in any way presuppose our own cognition and lived experience. In other words, the hard problem seems to depend for its very formulation on the philosophical position known as transcendental or metaphysical realism. From the phenomenological perspective explored here, however — but also from the perspective of pragmatism à la Charles Sanders Peirce, William James, and John Dewey, as well as its contemporary inheritors such as Hilary Putnam (1999) — this transcendental or metaphysical realist position is the paradigm of a nonsensical or incoherent metaphysical viewpoint, for (among other problems) it fails to acknowledge its own reflexive dependence on the intersubjectivity and reciprocal empathy of the human life-world.

Apustimelogist October 07, 2025 at 22:13 #1017037
Quoting Joshs
Yes, that’s what I thought. So that indicates a distinctly different philosophical perspective on human and animal cognition than my view, which is closer to enactivists like Evan Thompson:


I mean the quote doesn't seem distinctly enactivist to me, but more focused on the inability to explain qualia. At the same time, I can clarify that I didn't mean anything about qualia or experience in the previous post, I only meant behavior, as I mentioned in the first sentence. I just struggle to find any motivation for the sentiment that in principle physical mechanisms cannot explain all behavior, partly because that claim would entail some radical revisions to what we know about the universe, revisions I think we would probably be more aware of by now.

Edit: and I should probably clarify more that targeting behavior specifically was due to the p-zombie thought experiment I described which is about the threat of epiphenomenalism.
wonderer1 October 07, 2025 at 22:33 #1017042
Quoting Apustimelogist
The point was that I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understand.


:100:

I'm curious as to whether @Joshs recognizes this.

Patterner October 07, 2025 at 23:00 #1017053
Quoting noAxioms
"I am trying to understand your position."

My position is simply that nobody has ever demonstrated the simpler model wrong.
I am trying to understand the simpler model.

Quoting noAxioms
Ok, wrong word. You agreed they are the same thing. But they can't be described as the same thing.
— Patterner
Not sure what two things are the same here, but I don't think I said that two different things are the same thing. Certainly not in that quote.
You agreed that
Quoting Patterner
brain states and conscious events are the same thing. So the arrangements of all the particles of the brain, which are constantly changing, and can only change according to the laws of physics that govern their interactions, ARE my experience of seeing red; feeling pain; thinking of something that doesn't exist, and going through everything to make it come into being; thinking of something that [I]can't[/I] exist; on and on. It is even the case that the progressions of brain states are the very thoughts of thinking about themselves.
But then you said "I cannot describe thoughts in terms of neurons any more than I can describe a network file server in terms of electrons tunneling through the base potential of transistors." So they [I]are[/I] the same thing, but they can't be [I]described[/I] as the same thing.

Granted, "described" might not be the best word. Maybe it's wrong wording to say the movement of air particles in a room is a [I]description[/I] of the room's heat and pressure. But they [I]are[/I] the same thing. And it's all physical, and mathematically described. As Paul Davies writes in [I]The Demon in the Machine[/I]:
Paul Davies: I mentioned that gas molecules rush around, and the hotter the gas, the faster they go. But not all molecules move with the same speed. In a gas at a fixed temperature energy is shared out randomly, not uniformly, meaning that some molecules move more quickly than others. Maxwell himself worked out precisely how the energy was distributed among the molecules – what fraction have half the average speed, twice the average, and so on.
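For reference, the distribution Davies is describing is the Maxwell–Boltzmann speed distribution: for molecules of mass $m$ at temperature $T$,

$f(v) = 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^2 \, e^{-m v^2 / (2 k_B T)},$

so the fraction of molecules near any given speed is fixed by $m$ and $T$ alone, with the most probable speed at $\sqrt{2 k_B T / m}$ and tails of slower and faster outliers on either side.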

noAxioms October 08, 2025 at 05:01 #1017096
Quoting hypericin
You seem to be arguing against a position that nobody takes. Neither Chalmers nor anyone else believes geometric PoV is mysterious. Everyone agrees that qualia are the fundamental issue.

The title of Chalmers' paper quoted in the OP implies very much that the hard problem boils down to first vs third person, and that qualia are considered just one of the 'many aspects' of that mystery. To requote from my OP:
"The first person is, at least to many of us, still a huge mystery. The famous "Mind-Body Problem," in these enlightened materialist days, reduces to nothing but the question "What is the first person, and how is it possible?". There are many aspects to the first-person mystery. The first-person view of the mental encompasses phenomena which seem to resist any explanation from the third person."

In asking 'what is the first person?', he seems to be talking about something less trivial than what we called a geometric point of view, but I cannot identify what else there is to it.


Quoting boundless
Regarding the distinction between 'living beings' and AI

That's a false dichotomy. Something can be all three (living, artificial, and/or intelligent), none, or any one or two of them.

Quoting boundless
In virtue of what properties of the inanimate aspects of reality can consciousness (with its 'first-person perspective', 'qualia' etc) arise?

I can't even answer that about living things. I imagine the machines will find their own way of doing it and not let humans attempt to tell them how. That's how it's always worked.

I think that the undeniable existence of mathematical truths also points to something beyond 'physicalism'*.
Beyond materialism you perhaps mean. Physicalism/naturalism doesn't assert that all is physical/natural. Materialism does. That seems the primary difference between the two.
Of course I wouldn't list mathematics as being 'something else', but rather a foundation for our physical. But that's just me. Physicalism itself makes no such suggestion.
PS: Never say 'undeniable'. There's plenty that deny that mathematical truths are something that 'exists'. My personal opinion is that such truths exist no less than does our universe, but indeed is in no way dependent on our universe.

Quoting boundless
That there are an infinite number of primes seems to be something that is independent from human knowledge
Agree, but there are those that define mathematics as a human abstraction, in which case it wouldn't be independent of human knowledge. I distinguish mathematics from 'knowledge of mathematics', putting the two on nearly opposite ends of my supervention hierarchy.

Quoting boundless
Regarding the 'magic' thing, then, it seems to me that the criterion you give about 'not being magical' is something like being 'totally understandable', something that is not too dissimilar to the ancient notion of 'intelligibility'.
Let's reword that as not being a function of something understandable. The basic particle behavior of electrons and such is pretty well understood, but we're just beginning to scratch the surface of understanding what goes on in a star, especially when it transitions. That current lack of understanding does not imply that astronomers consider stellar evolution to be a supernatural process. I mean, they used to think the gods carted the stars across the sky each night, which actually is a supernatural proposal.
An actual proposal of magic is an assertion that current ideas have been demonstrated incapable of explaining some phenomenon, such as the rotation curve of galaxies. Dark matter had to be invented to explain that, and there are those that still won't accept dark matter theory. Pretty hard to find any of it in a lab, right? So there are alternate theories (e.g. MOND), but none predict as well as dark matter theory. Key there is 'theory'. Without one of those, it's still just magic.
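To put the rotation-curve tension in one line: for a star orbiting at radius $r$ outside most of a galaxy's visible mass, Newtonian gravity predicts

$v(r) = \sqrt{G\,M(<r)/r},$

which should fall off roughly as $1/\sqrt{r}$ once $M(<r)$ stops growing. Observed curves instead stay roughly flat at large $r$, so either $M(<r)$ keeps growing with unseen mass (dark matter) or the dynamics change at very low accelerations (MOND).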
If one dares to promote Chalmers' ideas to the level of theory, it does make predictions, and it fails them. So proponents tend to not escalate their ideas to that level.

Quoting boundless
It doesn't seem possible IMO to explain in purely physical terms why "Socrates is mortal" follows from "Socrates is a man" and "men are mortal"
That's mathematics, not physics, even if the nouns in those statements happen to have physical meaning. They could be replaced by X Y Z and the logical meaning would stand unaltered.
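Spelled out formally, the inference is valid purely in virtue of its form:

$\forall x\,(M(x) \rightarrow P(x)),\quad M(s) \ \vdash\ P(s),$

where $M$, $P$ and $s$ can be read as 'man', 'mortal' and 'Socrates', or as any predicates and constant whatsoever; nothing about the physical content of the terms does any work.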


Quoting Apustimelogist
Well this is then just a speculation about technological capability, which I referred to conditionally.

Just the manufacture seems to defy any tech. Can't say 3D print a squirrel, finish, and then 'turn it on'. Or can you? Best I could come up with is a frog, printed totally frozen. When finished, thaw it out. Frogs/turtles can deal with that. Again, I am mostly agreeing with your side of the discussion with Joshs.

Quoting Apustimelogist
The point was that I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to some additional mental woo.

As already noted, that was put rather well. There are claims to the contrary, but they seem to amount to no more than assertions. None of the claims seem backed.


Quoting Joshs
The simpler model is proven wrong all the time. Put more accurately, scientific paradigms are replaced by different ones all the time.
Agree. Science is never complete, and there are very much current known holes, such as the lack of a unified field theory. These continuous updates to the consensus view don't stop that view from being the simpler model. I am looking for a falsification specifically of physical monism, hard to do without any competing theories.

Funny that some declared physics to be complete at some point, with the only work remaining being to work out some constants to greater precision. That remark is famously attributed to Lord Kelvin, shortly before the quantum/relativity revolution that tore classical physics to pieces, never to recover.
So yes, there very well could arise some theory of mental properties of matter, but at this time there isn't one at all, and much worse, no need for one has been demonstrated.

For instance, certain embodied enactivist approaches to the brain, such as Francisco Varela’s neurophenomenology, sweepingly rethink this relation.
Interesting reference. Seems perhaps to be a new methodology and not necessarily something that falsifies any particular philosophical stance. Maybe you could point out some key quotes that I could find in my initial scan of some of the references to this.

So, on its own terms, what you call the ‘simple’ empirical model can’t be defined in some static, ahistorical way as third person physicalism as opposed to subjective feeling.
Scientific naturalism does not preclude subjective evidence. I don't know what 'third person physicalism' is, as distinct from physicalism. 'Third person' refers to how any view might be described, but it says nothing about what the view proposes.

As soon as we start thinking that we have to ‘invent’ a body and an environment for a device we separately invent
Sorry, but my proposal did not separate anything like you suggest. There is one system with a boundary, all simulated, something that can be achieved in principle. There would be a real person in a real room, and a simulation of same. Thing is to see if either can figure out which he is.

The test requires a known state of the real subject, and that pushes the limits of 'in principle' perhaps a bit too far. Such a state in sufficient precision is prevented per Heisenberg. So much for my falsification test of physicalism. Better to go long-run and simulate a human from say a zygote, but then there's no real person with whom experience can be compared.

... ignore the fact that we ourselves were not first invented and then placed in a body ...
What does it even syntactically mean for X to be placed in X?


Quoting Joshs
What I mean is that we can’t start with inorganic parts that we understand in terms of already fixed properties ( which would appear to be intrinsic to how we define the inorganic) and then design self-organizing capacities around these parts.
Why not? With or without the design part... Designing it likely omits most of those properties since they serve little purpose to the designer.


Quoting Patterner
Granted, "described" might not be the best word. Maybe it's wrong wording to say the movement of air particles in a room is a description of the room's heat and pressure.
That's like one step away. Yes, heat is simple and can pretty much be described that way. From atoms to consciousness is about 12 steps away (my quote, and no, I didn't count). I gave the example of trying to explain stellar dynamics in terms of particle interactions.

Joshs October 08, 2025 at 13:11 #1017132
Reply to wonderer1

Quoting wonderer1
The point was that I don't believe there is anything in the field of neuroscience or A.I. that produces a doubt about the idea that we will be able to keep continuing to see what brains do as instantiated entirely in physical interactions of components as opposed to requiring some additional mental woo we don't yet understand.
— Apustimelogist

:100:

I'm curious as to whether Joshs recognizes this.


If you wrote this after reading the quote I included from Evan Thompson, maybe you should re-read it. The issue isn’t a choice between the physical and the mental, it’s about re-construing what both of these concepts mean in the direction of a model which is radically interactive, both post-physicalistic and post-“qualia”, post-internalistic and post-externalistic. The very concept of “qualia” as a private inner datum depends on a physicalistic metaphysics in which on one side stands third person, external, non-experiential materiality and on the other there is inner mental woo. Such a dualism tends to treat mental and physical as radically distinct, with one not affecting the other, or the mental being epiphenomenal. I’m guessing your inclination is to stick with the physicalistic side of the dualism and either deny or ignore the other, as eliminative materialists like Churchland and Dennett have done.
Joshs October 08, 2025 at 13:53 #1017134
Quoting Apustimelogist
I mean the quote doesn't seem distinctly enactivist to me, but more focused on the inability to explain qualia. At the same time, I can clarify that I didn't mean anything about qualia or experience in the previous post, I only meant behavior, as I mentioned in the first sentence.


Well, it came from the co-originator of the research field of enactivism, and the point of enactivism is that cognition, affect and consciousness are not computational or representational processes taking place inside of a head, they are reciprocal circuits of actions distributed between brain, body and world. There is no such thing as “qualia” if qualia is meant to imply some ineffable inner experience. All mental phenomena of felt awareness are processes of interaction with a world, not a private, inner datum. The qualitative, experiential aspect of consciousness is emergent, but emergence cannot be possible unless what emerges is already present in some fashion in what it emerges from. That is to say, qualitative difference is produced within material interactions of all kinds, as intrinsic to materiality. This is a post-physicalistic view of the natural world.
Joshs October 08, 2025 at 14:13 #1017138
Reply to noAxioms

Quoting noAxioms
I am looking for a falsification specifically of physical monism, hard to do without any competing theories.


It’s even harder to do when you haven’t read the competing theories. You could start here:

https://unstable.nl/andreas/ai/langcog/part3/varela_npmrhp.pdf

Varela’s neurophenomenology is an alternative to “physical monism” in the sense that he thought the ontology of mind requires broadening our conception of nature. If you define “the physical” narrowly (as purely third-person measurable stuff), then no, experience doesn’t fit. But if you define nature in a way that includes the lived as well as the measurable, then you get a non-reductive monism (sometimes called non-dual). Some scholars (including Evan Thompson) describe this as a form of neutral monism or non-dual naturalism: the idea that mind and matter are not two substances, but two aspects of one underlying reality. Importantly, neurophenomenology is not anti-naturalist. It’s an expanded naturalism that insists first-person consciousness is an indispensable datum of science, not an illusion.

Instead of saying “consciousness just is brain activity,” Varela says “brain activity and conscious experience are reciprocally illuminating dimensions of a single living system.” And instead of saying “consciousness is an immaterial stuff,” he insists it’s embodied, enacted, and inseparable from biological and social dynamics.

Or here:

https://smartnightreadingroom.wordpress.com/wp-content/uploads/2013/05/meeting-the-universe-halfway.pdf

Physicist Karen Barad’s re-interpretation of the double slit experiment in quantum field theory, in the direction of but beyond Niels Bohr, represents the core of her alternative to physical monism, which she calls agential realism. She is one of the pioneers and most influential members of the community that calls itself New Materialism.

https://www.researchgate.net/publication/337351875_WHAT_IS_NEW_MATERIALISM
Apustimelogist October 08, 2025 at 14:56 #1017143
Quoting noAxioms
Just the manufacture seems to defy any tech. Can't say 3D print a squirrel, finish, and then 'turn it on'. Or can you? Best I could come up with is a frog, printed totally frozen. When finished, thaw it out. Frogs/turtles can deal with that. Again, I am mostly agreeing with your side of the discussion with Joshs.


Well, you won't know just by looking at our technology. We don't know what technology will happen or can happen. It's speculation. But I said if we had the technology to do something. I think that the ultimate limit is what the laws of physics would allow you to do, what can be manipulated, which seems to be quite a lot. I remember people giving the example that if you smash a glass on the floor and see it shatter everywhere, there is nothing in Newtonian physics that says the reverse process can't happen: i.e. lots of fragments of glass from all directions gather up on the floor into a perfectly formed glass, which then bounces up into someone's hand. This is a completely physically acceptable process; it can happen under Newtonian physics even if the initial conditions would be very difficult to produce. It just needs a method (or technology) to produce the initial conditions. There's a lot physics might seem to allow you to do in principle if you get the conditions right and have the technology to produce those conditions. But at the same time, such processes will still remain physically acceptable in principle even if it is very difficult to get those required conditions together.
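As a toy illustration of that reversibility point (a minimal sketch under idealized assumptions, nothing more): run a handful of particles forward under constant gravity, flip their velocities, apply the same law again, and they return exactly to their starting configuration.

[code]
# Toy sketch of Newtonian time-reversibility under a constant gravitational field.
import numpy as np

def step(pos, vel, dt, g=np.array([0.0, -9.81])):
    # velocity-Verlet update; exact for a constant acceleration
    pos = pos + vel * dt + 0.5 * g * dt**2
    vel = vel + g * dt
    return pos, vel

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 2))   # five "fragments"
vel = rng.normal(size=(5, 2))
pos0 = pos.copy()

for _ in range(1000):           # forward in time
    pos, vel = step(pos, vel, dt=0.001)

vel = -vel                      # the "reversed film" initial condition
for _ in range(1000):           # same laws, run forward again
    pos, vel = step(pos, vel, dt=0.001)

print(np.allclose(pos, pos0))   # True: the pieces reassemble themselves
[/code]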
Apustimelogist October 08, 2025 at 15:08 #1017145
Reply to Joshs
I clearly didn't read the quote properly because re-reading it I think it's not that far from my view, broadly. Maybe not identical, but not really fundamentally that disagreeable. I am just quite allergic to the way that passage was written so it put me off reading it more closely.
boundless October 08, 2025 at 16:13 #1017148
Quoting noAxioms
That's a false dichotomy. Something can be all three (living, artificial, and/or intelligent), none, or any one or two of them.


I was making a point about the current AI and living beings. In any case, until one can find a way to generate truly artificial life, there is no 'artificial life'. But in my post, I was even conceding the possibility of sentient AI.

Quoting noAxioms
I can't even answer that about living things. I imagine the machines will find their own way of doing it and not let humans attempt to tell them how. That's how it's always worked.


That's the hard problem though. The problem is how to explain consciousness in terms of properties of the 'inanimate'. So, yeah, probably the 'hard problem' isn't a 'problem' only for 'physicalism' but for all attempts to treat the 'inanimate' as fundamental and 'consciousness' as derivative from it.

In a similar way, I believe that one can also make a similar point about the 'living beings' in general. All living beings seem to me to show a degree of intentionality (goal-directed behaviours, self-organization) that is simply not present in 'non-living things'. So in virtue of what properties of 'non-living things' can intentionality that seems to be present in all life forms arise?

Note that in order to solve both these problems you would need a theory that explains how consciousness, intentionality, life etc arose. If the 'inanimate' is fundamental, you should expect to find an explanation on how consciousness, intentionality, life and so on came into being, not just that they come into being. And the explanation must be complete.

Quoting noAxioms
Beyond materialism you perhaps mean. Physicalism/naturalism doesn't assert that all is physical/natural.


? Not sure how. At least physicalism means that the 'natural' is fundamental. In any case, however, with regards to consciousness, consciousness in a physicalist model would be considered natural. And something like math would be either a useful fiction or a fundamental aspect of nature (in this latter case, I believe that it would be inappropriate to call such a view 'physicalism', but anyway...)

Quoting noAxioms
Of course I wouldn't list mathematics as being 'something else', but rather a foundation for our physical. But that's just me. Physicalism itself makes no such suggestion.


Interesting. I do in fact agree with you here. However, I believe that your conception of 'physical/natural' is too broad. What isn't natural in your view?

Quoting noAxioms
PS: Never say 'undeniable'. There's plenty that deny that mathematical truths are something that 'exists'. My personal opinion is that such truths exist no less than does our universe, but indeed is in no way dependent on our universe.


Right, I admit that there is no consensus and perhaps the majority view is that mathematics is just a useful abstraction. To be honest, however, I always found the arguments for that view unpersuasive and often based on a strictly empiricist view of knowledge. I believe it is one of those topics where both 'parties' (I mean the 'realist' and the 'anti-realist' about the ontology of mathematics in a broad sense of these terms) are unable to find a common ground with the opponents.

I agree with you about the fact that mathematics doesn't depend on the universe. I have a different view about the relation between mathematics and the universe. For instance, I believe that mathematical truths would still be true even if the universe didn't exist. I do see this universe as contingent whereas mathematical truths as non-contingent.

Quoting noAxioms
Let's reword that as not being a function of something understandable.
...


It seems to me that you here are assuming that all possible 'non-magical' explanations are 'natural/physical' ones. This seems to me a stretch.

I also don't like to make the distinction between 'supernatural' and 'natural', unless one defines the terms in a precise way. Perhaps, I would define 'natural' as 'pertaining to spacetime' (so, both spacetime - or spacetimes if there is a multiverse - and whatever is 'in' it would qualify as 'natural').

Regarding the point you make about Chalmers, as I said before perhaps the 'hard problem' is better framed as an objection to all reductionist understandings of consciousness that try to reduce it to the inanimate rather than as an objection to 'physicalism' in a broad sense of the term.

Quoting noAxioms
That's mathematics, not physics, even if the nouns in those statements happen to have physical meaning. They could be replaced by X Y Z and the logical meaning would stand unaltered.


Yes, we can also make a purely formal syllogism. But that's my point, after all. Why can the 'laws' of valid reasoning apply to 'reality'? If mathematical and logical 'laws' aren't at least a fundamental aspect of nature (or even more fundamental than nature), how could we accept any 'explanation' as a 'valid explanation' of anything? Also: is physical causality the same as logical causality?

I believe that people who deny the independent existence of 'mathematical' and 'logical' truths/laws assert that our notions of logical implication, numbers etc are abstractions from our experience. The problem, though, is that if you try to explain how we could 'generate' these abstractions, you need to assume these laws are valid in order to make that explanation. This to me shows that logical and mathematical truths/laws are not mere abstractions. But to be honest, even though I find such a brief argument convincing, I admit that many would not be convinced by it. Why this is so, I do not know...

boundless October 08, 2025 at 17:04 #1017154
Quoting boundless
In a similar way, I believe that one can also make a similar point about the 'living beings' in general. All living beings seem to me to show a degree of intentionality (goal-directed behaviours, self-organization) that is simply not present in 'non-living things'. So in virtue of what properties of 'non-living things' can intentionality that seems to be present in all life forms arise?


Also, I would add that the apparent 'gradation' of 'intentionality' found in 'entities' at the border between 'living' and 'non-living', like viruses, isn't really evidence for a 'reductionist' view. After all, if viruses have a rudimentary form of intentionality, it still has to be explained.
hypericin October 08, 2025 at 19:21 #1017190
Quoting noAxioms
In asking 'what is the first person?', he seems to be talking about something less trivial than what we called a geometric point of view, but I cannot identify what else there is to it.


This should be a strong clue:

Quoting noAxioms
The first-person view of the mental encompasses phenomena which seem to resist any explanation from the third person.


These phenomena are qualia.

If you still doubt this I'm sure I can find more explicit passages in the paper or elsewhere.
Patterner October 08, 2025 at 19:42 #1017195
Quoting noAxioms
Regarding the 'magic' thing, then, it seems to me that the criterion you give about 'not being magical' is something like being 'totally understandable', something that is not too dissimilar to the ancient notion of 'intelligibility'.
— boundless
Let's reword that as not being a function of something understandable.
The definition of "magical" can only be something along the lines of:
[I]Something that operates outside of the laws and properties of this reality.[/I]
Our understanding is irrelevant.

We don't understand how mass warps spacetime. But we don't think gravity is magic. We don't understand how dark matter warps spacetime, but doesn't interact with light or other electromagnetic radiation. But, despite not being able to detect it, we hypothesize dark matter's existence, and we don't say the motion of the galaxies is the result of magic.
Patterner October 08, 2025 at 23:59 #1017229
Quoting noAxioms
That's like one step away. Yes, heat is simple and can pretty much be described that way. From atoms to consciousness is about 12 steps away (my quote, and no, I didn't count). I gave the example of trying to explain stellar dynamics in terms of particle interactions.
Yes. Chalmers mentions the hurricane in this video:
Quoting Chalmers
Who would ever guess that the motion of a hurricane would emerge from simple principles of airflow. But what you find in all those other cases, like the hurricane, and the water wave, and so on, is complicated dynamics emerging from simple dynamics. Complicated structures emerging from simple structures. New and surprising structures.
The same for stellar evolution, which, certainly, nobody considers to be a supernatural process. There is no point in time when any part of a star, or hurricane, that we can observe in any way, is not physical, not observable and measurable in one way or another. We can measure a star's size, its brightness, its output of various kinds of radiation. We can measure the diameter and circumference of a hurricane, its temperature, how fast it circulates, how fast it moves over the surface of the planet. We can calculate how much water it contains. The defining characteristics of stars and hurricanes are physical events that we can quantify. I don't hold it against scientists who study stellar dynamics and hurricanes that they can't calculate exactly when one will begin or end, know exactly what is going on at any point inside it, or understand it in every detail at every point at every moment.
noAxioms October 12, 2025 at 19:42 #1018179
Quoting hypericin

These phenomena are qualia.

If you still doubt this
We're going in circles. The paper is not about qualia, it is about the first person view, and Chalmers says that the hard problem boils down not to the problem of qualia (which is difficult to explain only because it is complicated in humans), but to the problem of first person view, which seems not problematic at all.


Quoting Joshs
If you define “the physical” narrowly (as purely third-person measurable stuff)
I never have. First person empirical evidence is valid in science, especially when damage occurs.
Varela’s neurophenomenology seems to fit in with this and is not an alternate theory.

Again, I'm looking for an actual theory that Chalmers might support, one that demonstrates (falsifies) the monism that they all say is impossible.

Physicist Karen Barad’s re-interpretation of the double slit experiment in quantum field theory, in the direction of but beyond Niels Bohr, represents the core of her alternative to physical monism, which she calls agential realism.
OK, but again this seems to be an attempt at an interpretation (kind of like RQM but with different phrasing) of an existing theory. It doesn't falsify anything.


Quoting boundless
That's the hard problem though. The problem is how to explain consciousness in terms of properties of the 'inanimate'.
Sure, that's difficult because it is complicated, and the brain isn't going to get explained in terms of something like an algorithm. But the problem being difficult is not evidence against consciousness being derived from inanimate primitives.
Chalmers certainly doesn't have an explanation at all, which is worse than what is currently known.

So in virtue of what properties of 'non-living things' can intentionality that seems to be present in all life forms arise?
Probably because anything designed is waved away as not intentionality. I mean, a steam engine self-regulates, all without a brain, but the simple gravity-dependent device that accomplishes it is designed, so of course it doesn't count.
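A crude sketch of that self-regulation (toy dynamics invented purely for illustration, not a real engine model): a fixed negative-feedback loop holds speed near a set point with no representation of a goal anywhere in the loop.

[code]
# Toy governor: speed above the set point closes the throttle, below it opens it.
def run_governor(set_point=100.0, steps=400):
    speed, throttle = 60.0, 0.5
    for _ in range(steps):
        error = set_point - speed
        throttle = min(1.0, max(0.0, throttle + 0.0005 * error))  # feedback linkage
        speed += 0.05 * (200.0 * throttle - 1.3 * speed)          # crude engine response
    return round(speed, 1)

print(run_governor())  # settles near 100.0 with nothing resembling a brain in the loop
[/code]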

If the 'inanimate' is fundamental, you should expect to find an explanation on how consciousness, intentionality, life and so on came into being, not just that they come into being.
Completely wrong. Fundamentals don't first expect explanations. Explanations are for the things understood, and the things not yet understood still function despite lack of this explanation. Things fell down despite lack of explanation for billions of years. Newton explained it, and Einstein did so quite differently, but things falling down did so without ever expectation of that explanation.

In a way, neither explained it. Both expressed it mathematically in such a way that predictions could be made from that, but Newton as I recall explicitly denied that being an explanation: a reason why mass was attracted to other mass. Hence the theories are descriptive, not explanatory. I suppose it depends on whether mathematics is considered to be descriptive (mathematics as abstraction) or prescriptive (as being more fundamental than nature). The latter qualifies as an explanation, and is a significant part of why I suspect our universe supervenes on mathematics.
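For what it's worth, the two 'descriptions' in question compress to

$F = \dfrac{G m_1 m_2}{r^2}$ and $G_{\mu\nu} = \dfrac{8\pi G}{c^4} T_{\mu\nu},$

an inverse-square force law on one side and a relation between spacetime curvature and stress-energy on the other. Both let you calculate where things fall; neither says why mass should have that effect in the first place.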

Quoting boundless
At least physicalism means that the 'natural' is fundamental
We seem to have different definitions then. Again, I would have said that only of materialism.

In any case, however, with regards to consciousness, consciousness in a physicalist model would be considered natural.
Depends on your definition of consciousness. Some automatically define it to be a supernatural thing, meaning monism is a denial of its existence. I don't define it that way, so I'm inclined to agree with your statement.

Quoting boundless
What isn't natural in your view?
Anything part of our particular universe. Where you draw the boundary of 'our universe' is context dependent, but in general, anything part of the general quantum structure of which our spacetime is a part. So it includes say some worlds with 2 macroscopic spatial dimensions, but it doesn't include Conway's game of life.

I agree with you about the fact that mathematics doesn't depend on the universe.
Good, but being the idiot skeptic that I am, I've always had an itch about that one. What if 2+2=4 is a property of some universes (this one included), but is not objectively the case? How might we entertain that? How do you demonstrate that it isn't such a property? Regardless, if any progress is to be made, I'm willing to accept the objectivity of mathematics.

I have a different view about the relation between mathematics and the universe. For instance, I believe that mathematical truths would still be true even if the universe didn't exist.
I didn't say otherwise, so not sure how that's different. That's what it means to be independent of our universe.

It seems to me that you here are assuming that all possible 'non-magical' explanations are 'natural/physical' one.
By definition, no?

Quoting boundless
I also don't like to make the distinction between 'supernatural' and 'natural', unless one defines the terms in a precise way. Perhaps, I would define 'natural' as 'pertaining to spacetime' (so, both spacetime - or spacetimes if there is a multiverse - and whatever is 'in' it would qualify as 'natural')
OK, but that doesn't give meaning to the term. If the ghosts reported are real, then they're part of this universe, and automatically 'natural'. What would be an example of 'supernatural' then? It becomes just something that one doesn't agree with. I don't believe in ghosts, so they're supernatural. You perhaps believe in them, so they must be natural. Maybe it's pointless to even label things with that term.

Regarding the point you make about Chalmers, as I said before perhaps the 'hard problem' is better framed as an objection to all reductionist understandings of consciousness that try to reduce it to the inanimate, rather than an objection to 'physicalism' in a broad sense of the term.
Depends on what you mean by 'inanimate'. I mean, I am composed of atoms, which are 1) inanimate because atoms are essentially tiny rocks, and 2) animate because they're part of a life form.
A non-living device that experiences whatever such a device experiences would be (and very much is) declared to not be conscious precisely because the word (and 'experience' as well) is reserved for life forms. This is the word-game fallacy that I see often used in this forum (think W word).
That's like saying that creatures 'fall' to the ground, but rocks, being inanimate, do not 'fall' by definition, and instead are merely accelerated downward. Ergo, 'falling' requires a special property of matter that is only available to life.

is physical causality the same as logical causality?
Probably not, but I'd need an example of the latter, one that doesn't involve anything physical.



Quoting Patterner
The definition of "magical" can only be something along the lines of:
Something that operates outside of the laws and properties of this reality.
Our understanding is irrelevant.

We don't understand how mass warps spacetime. But we don't think gravity is magic

Hence 'magic' is a poor tool to wield. If Chalmers' 'all material having mental properties' is actually the case, then it wouldn't be magic, it would be a property of this reality. But it would still be totally unexplained, and not even described, since there's no current theory that supports that view. There sort of is, but nobody formally mentions it because, being a theory, it makes predictions, and those predictions likely fail, so best not to be vocal about those predictions.

Quoting Patterner
Chalmers mentions the hurricane in this video:
"... from simple principles of airflow"
The hurricane, which is somewhat understood in terms of airflow and thermodynamics (2-3 steps away from hurricane dynamics), is never described in terms of particles. But challenges to physicalism frequently request unreasonable explanations in terms of particles (again, perhaps 12 steps away). So work your way through the 12 steps, understanding how particles make atoms, and atoms make molecules, etc. Expect each step to be expressed in terms of the prior one, and not in terms of the particles.

But what you find in all those other cases, like the hurricane, and the water wave, and so on, is complicated dynamics emerging from simple dynamics. Complicated structures emerging from simple structures. New and surprising structures. — Chalmers
He admits this, but then denies, without justification, that qualia are similarly complex effects emerging from simpler effects.


Patterner October 12, 2025 at 20:22 #1018190
Quoting noAxioms
Hence 'magic' is a poor tool to wield. If Chalmers' 'all material having mental properties' is actually the case, then it wouldn't be magic, it would be a property of this reality. But it would still be totally unexplained, and not even described, since there's no current theory that supports that view. There sort of is, but nobody formally mentions it because, being a theory, it makes predictions, and those predictions likely fail, so best not to be vocal about those predictions.
All true. But no theory of consciousness has made predictions that explain how consciousness comes to be, that rules out any other theory, and whose predictions have been verified. So far, no theory is more than a guess. We all have our preferred guesses, based on whatever makes the most sense to us.
Harry Hindu October 13, 2025 at 12:53 #1018339
Quoting noAxioms
We're going in circles. The paper is not about qualia, it is about the first person view, and Chalmers says that the hard problem boils down not to the problem of qualia (which is difficult to explain only because it is complicated in humans), but to the problem of first person view, which seems not problematic at all.

I see the problem as confusing the map with the territory. In talking about the first-person view we are talking about the map, not the territory. In talking about what the map refers to we are talking about the territory and not the view. The map is part of the territory and is causally related with the territory, which is why we can talk about the territory by using the map.

The problem comes when we project our view onto the territory as if they were one and the same - as if your view is how the world actually is (naive realism). Indirect realism is the idea that your map is not the territory but provides information about the territory thanks to causation. The effect is not the cause and vice versa.

The monist solution to the problem comes in realizing that everything is information, and the things you see in the world as static, solid objects are just a model of other processes that are changing at different frequencies relative to the rate at which your eyes and brain perceive these other processes. Slower processes appear as solid objects while faster processes appear as actual processes, or blurs of motion.

boundless October 13, 2025 at 14:19 #1018348
Quoting noAxioms
But the problem being difficult is not evidence against consciousness being derived from inanimate primitives.


Chalmers et al suggest that the reason why the problem is 'difficult' is that it is wrongly stated, i.e. the assumption that we can 'get' consciousness from inanimate primitives is wrong. Of course, the absence of a solution is not compelling evidence of the impossibility of finding one, but the latter is a possible explanation of the former.

Quoting noAxioms
Probably because anything designed is waved away as not intentionality. I mean, a steam engine self-regulates, all without a brain, but the simple gravity-dependent device that accomplishes it is designed, so of course it doesn't count.


If there is intentionality in something like a steam-engine, this would suggest that intentionality is also fundamental - in other words, the inanimate would not be really totally inanimate. But the problem arises in views where intentionality isn't seen as fundamental but derived from something else that seems to be completely different.

Quoting noAxioms
Completely wrong. Fundamentals don't require explanations. Explanations are for the things understood, and the things not yet understood still function despite the lack of explanation. Things fell down for billions of years despite the lack of an explanation. Newton explained it, and Einstein did so quite differently, but things fell down without ever needing that explanation.


Ok, but if intentionality is fundamental, then the arising of intentionality, assuming that it arose, is unexplained. Conversely, if intentionality is derived, we expect an explanation of how it is derived.
Same goes for 'consciousness' and so on.

Quoting noAxioms
Depends on your definition of consciousness. Some automatically define it to be a supernatural thing, meaning monism is a denial of its existence. I don't define it that way, so I'm inclined to agree with your statement.


TBF, I also am a bit perplexed by how some non-physicalists define consciousness. But also note that, for instance, the 'Aristotelian' view of the 'soul', which was later accepted and developed in most philosophical traditions from late Antiquity onwards (Neoplatonic, Christian, Islamic...), is that the 'soul' is the form of the body and that 'sentient beings' (animals and humans) are actually both body and soul, i.e. matter and form. In this view, we are not composed of two substances ('material' and 'mental') but, rather, the Aristotelian model ('hylomorphism') explains a 'human being' as an ordered entity where the 'soul' is the order that makes the entity ordered. Furthermore, IIRC, there isn't such a thing as 'pure matter' because 'pure matter' would be completely unordered and, therefore, unintelligible.
I don't think that, say, the common arguments against the existence of a 'soul', a 'unified self' and so on that are sometimes advanced by some 'physicalists' can affect these views.

Since I am more or less a 'hylomorphist', TBH I see much of the debate about 'consciousness' as simply not relevant for me.

Quoting noAxioms
Anything part of our particular universe. Where you draw the boundary of 'our universe' is context dependent, but in general, anything part of the general quantum structure of which our spacetime is a part. So it includes say some worlds with 2 macroscopic spatial dimensions, but it doesn't include Conway's game of life.


Ok. I am even prepared to say that, if there is really a multiverse with all possible 'worlds' with different laws, the 'natural' should be equated with 'pertaining to the whole multiverse'.
So I guess that for me 'natural' includes also Conway's game of life :wink:

Quoting noAxioms
Good, but being the idiot skeptic that I am, I've always had an itch about that one. What if 2+2=4 is a property of some universes (this one included), but is not objectively the case? How might we entertain that? How do you demonstrate that it isn't such a property? Regardless, if any progress is to be made, I'm willing to accept the objectivity of mathematics.

Being the 'speculative fool' I am, I would say that given that intelligibility seems a precondition of the existence of the multiverse, this would mean that either (i) the multiverse is fundamental and, therefore, its existence is not contingent and intelligibility (and as a consequence all mathematical truths) is an aspect of the multiverse or (ii) the multiverse is contingent whereas mathematical truths are not and, therefore, they exist in something 'transcendent' of the multiverse (I prefer this second option). TBH, however, it would be quite a strange physicalism IMO that accepts the multiverse as being ontologically non-contingent (i.e. necessary!) - it would become something like pantheism/pandeism* of sorts (i.e. a view that asserts that the multiverse is a kind of metaphysical Absolute). But positing metaphysical absolutes seems to go against what many people find in physicalism. So, it would be ironic IMO for a physicalist to end up holding the idea that the multiverse itself is a metaphysical absolute after having accepted physicalism precisely to avoid accepting a metaphysical absolute.

[*It is important to distinguish this from panentheistic views where the Absolute pervades but at the same time transcends the multiverse. ]


Quoting noAxioms
I didn't say otherwise, so not sure how that's different. That's what it means to be independent of our universe.


:up: Do you think that they are independent from the multiverse?

Quoting noAxioms
By definition, no?


Yes and no. For instance, I can't give a purely 'natural' explanation of how we can know and understand mathematical truths if I say they aren't 'natural'. If mathematical truths aren't natural, and our mind can understand something that isn't natural, then the 'natural' can't wholly explain our minds.

However, it should be noted that, in my view, even a pebble can't be explained in fully 'naturalistic' terms. Being (at least partially) intelligible, and the conditions for intelligibility of any entity being IMO prior to the 'natural', even a pebble is, in a sense, not fully 'explained' in purely 'naturalistic' terms.
So, yeah, at the end of the day, I find, paradoxically, even the simplest thing as mysterious as our minds.

Quoting noAxioms
OK, but that doesn't give meaning to the term. If the ghosts reported are real, then they're part of this universe, and automatically 'natural'. What would be an example of 'supernatural' then? It becomes just something that one doesn't agree with. I don't believe in ghosts, so they're supernatural. You perhaps believe in them, so they must be natural. Maybe it's pointless to even label things with that term.


See above.

Quoting noAxioms
Depends on what you mean by 'inanimate'.
...


I sort of agree with that.

I believe, however, that it is easier to discuss intentionality than consciousness.

If intentionality exists only in *some* physical bodies, and we have to explain how it arose, we expect that, in principle, we can explain how it arose in the same way as we can explain other emergent features, i.e. in virtue of other properties that are present in the 'inanimate'. The thing is that I have never encountered a convincing explanation of that kind, nor have I found arguments that convince me that such an explanation is possible.

Your own view, for instance, seems to me to redefine the 'inanimate' as something that is actually not 'truly inanimate' and this allows you to say that, perhaps, the intentionality we have is a more complex form of the 'proto(?)-intentionality' that perhaps is found in inanimate objects. This is for me a form of panpsychism rather than a 'true' physicalism.

At the end of the day, I guess that labels are just labels and you actually would be ready to accept, as a form of physicalism, what I would consider something that isn't physicalism. The same goes for what you say about mathematics. This is not a criticism of you, but I want to point out that your own 'physicalism' is, in my opinion, a more sophisticated view than what I would normally call 'physicalism'. So, perhaps some confusion in these debates is caused by the fact that we - both the two of us and 'people' in general - do not have a shared vocabulary and use the words differently.


Quoting noAxioms
Probably not, but I'd need an example of the latter, one that doesn't involve anything physical.


I meant logical implications. I can, for instance, make a formally valid statement without any reference to something 'real'.




noAxioms October 16, 2025 at 17:38 #1019103
Quoting boundless
If there is intentionality in something like a steam-engine, this would suggest that intentionality is also fundamental
I would not buy that suggestion. More probably the intentionality emerges from whatever process is used to implement it. I can think of countless emergent properties, not one of which suggests that the properties need to be fundamental.
- in other words, the inanimate would not be really totally inanimate.
Thus illustrating my point about language. 'Intentional' is reserved for life forms, so if something not living does the exact same thing, a different word (never provided) must be used, or it must be living, thus proving that the inanimate thing cannot do the thing that it's doing (My example was 'accelerating downward' in my prior post).

Ok, but if intentionality is fundamental, then the arising of intentionality is unexplained.
That's only a problem for those that posit that intentionality is fundamental. Gosh, the same can be said of 'experience', illustrating why I find no problem where Chalmers does.

On the other hand, one can boil that statement down to "if X is fundamental, then the arising of X is unexplained". Pretty much everybody considers something to be fundamental, so we all have our X. Why must X 'arise'? What does that mean? That it came to be over time? That would make time more fundamental, a contradiction. X just is, and everything else follows from whatever is fundamental. And no, I don't consider time to be fundamental.

Conversely, if intentionality is derived, we expect an explanation of how it is derived.
Again, why? There's plenty that's currently unexplained. Stellar dynamics I think was my example. For a long time, people didn't know stars were even suns. Does that lack of even that explanation make stars (and hundreds of other things) fundamental? What's wrong with just not knowing everything yet?


Quoting boundless
I believe that mathematical truths would still be true even if the universe didn't exist.

Quoting boundless
I didn't say otherwise — noAxioms
:up: Do you think that they are independent from the multiverse?

That's what it means to be true even if the universe didn't exist.


However, it should be noted that, in my view, even a pebble can't be explained in fully 'naturalistic' terms. Being (at least partially) intelligible, and the conditions for intelligibility of any entity being IMO prior to the 'natural', even a pebble is, in a sense, not fully 'explained' in purely 'naturalistic' terms.
So, yeah, at the end of the day, I find, paradoxically, even the simplest thing as mysterious as our minds.
Maybe putting in intelligibility as a requirement for existence isn't such a great idea. Of course that depends on one's definition of 'to exist'. There are definitely some definitions where intelligibility would be needed.

What would be an example of 'supernatural' then?
A made-up story. Not fiction (Sherlock Holmes say), just something that's wrong. Hard to give an example since one could always presume the posited thing is not wrong.

If intentionality exists only in *some* physical bodies, and we have to explain how it arose
Again, why is the explanation necessary? What's wrong with just not knowing everything? Demonstrating the thing in question to be impossible is another story. That's a falsification, and that carries weight. So can you demonstrate that no inanimate thing can intend? Without 'proof by dictionary'?

Your own view, for instance, seems to me to redefine the 'inanimate' as something that is actually not 'truly inanimate' and this allows you to say that, perhaps, the intentionality we have is a more complex form of the 'proto(?)-intentionality' that perhaps is found in inanimate objects.
That does not sound like any sort of summary of my view, which has no requirement of being alive in order to do something that a living thing might do, such as fall off a cliff.



Quoting Harry Hindu
I see the problem as confusing the map with the territory. In talking about the first-person view we are talking about the map, not the territory. In talking about what the map refers to we are talking about the territory and not the view. The map is part of the territory and is causally related with the territory, which is why we can talk about the territory by using the map.

The problem comes when we project our view onto the territory as if they were one and the same - as if your view is how the world actually is (naive realism). Indirect realism is the idea that your map is not the territory but provides information about the territory thanks to causation.
All this seems to be the stock map vs territory speech, but nowhere is it identified what you think is the map (that I'm talking about), and the territory (which apparently I'm not).

The monist solution to the problem comes in realizing that everything is information, and the things you see in the world as static, solid objects are just a model of other processes that are changing at different frequencies relative to the rate at which your eyes and brain perceive these other processes.
Very few consider the world to be a model. The model is the map, and the world is the territory. Your wording very much implies otherwise, and thus is a strawman representation of a typical monist view. As for your model of what change is, that has multiple interpretations, few particularly relevant to the whole ontology of mind debate. Change comes in frequencies? Frequency is expressed as a rate relative to perceptions??

Slower processes appear as solid objects while faster processes appear as actual processes, or blurs of motion.
So old glass flowing is not an actual process, or I suppose just doesn't appear that way despite looking disturbingly like falling liquid? This is getting nitpicky of me. I acknowledge your example, but none of it is science, nor is it particularly illustrative of the point of the topic.

Apustimelogist October 16, 2025 at 19:30 #1019123
Quoting noAxioms
That's only a problem for those that posit that intentionality is fundamental.


:up: :up: :100:
boundless October 17, 2025 at 13:15 #1019320
Quoting noAxioms
I would not buy that suggestion. More probably the intentionality emerges from whatever process is used to implement it. I can think of countless emergent properties, not one of which suggests that the properties need to be fundamental.


Ok. But if there is an 'emergence', it must be an intelligible process. The problem for 'emergentism' is that there doesn't seem to be any convincing explanation of how intentionality, consciousness and so on 'emerge' from something that does not have those properties.

As I said before, the fact that we have yet to find a credible explanation for such an emergence is evidence against emergentism. Of course, such an absence of an explanation isn't compelling evidence for the impossibility of an explanation.

Anyway, I would also point out that IMO most forms of physicalism have difficulty explaining how composite objects can be 'distinct entities'.

Quoting noAxioms
Thus illustrating my point about language. 'Intentional' is reserved for life forms, so if something not living does the exact same thing, a different word (never provided) must be used, or it must be living, thus proving that the inanimate thing cannot do the thing that it's doing (My example was 'accelerating downward' in my prior post).


Ok, thanks for the clarification. But note my point above.

Quoting noAxioms
boundless: Ok, but if intentionality is fundamental, then the arising of intentionality is unexplained.


I misphrased this. I meant: if intentionality is fundamental then there is no need for an explanation.
Quoting noAxioms
That would make time more fundamental, a contradiction. X just is, and everything else follows from whatever is fundamental. And no, I don't consider time to be fundamental.


Right, but there is also the possibility that ontological dependency doesn't involve a temporal relation. That is, you might say that intentionality isn't fundamental but is dependent on something else that lacks intentionality, and yet there has never been a time when intentionality didn't exist (I do not see a contradiction in thinking that, at least).

As an illustration, consider the stability of the top floor of a building. It clearly depends on the firmness of the foundations of the building, and yet we don't say that 'at a certain point' the upper floor 'came out' from the lower.

So, yeah, arising might be a wrong word. Let's go with 'dependence'.

Quoting noAxioms
Again, why? There's plenty that's currently unexplained. Stellar dynamics I think was my example. For a long time, people didn't know stars were even suns. Does that lack of even that explanation make stars (and hundreds of other things) fundamental? What's wrong with just not knowing everything yet?


I hope I have clarified my point above. But let's use this example. Stellar dynamics isn't fundamental because it can be explained in terms of more fundamental processes. Will we discover something similar for intentionality, consciousness and so on? Who knows. Maybe yes. But currently it seems to me that our 'physicalist' models can't do that. In virtue of what properties might intentionality, consciousness and so on 'emerge'?

Quoting noAxioms
That's what it means to be true even if the universe didn't exist.


Good, we agree on this. But if they are 'true' even if the universe or multiverse didn't exist, this means that they have a different ontological status. And, in fact, if the multiverse could fail to exist, this would mean that it is contingent. Mathematical truths, instead, we seem to agree are not contingent.
Given that they aren't contingent, they certainly can't depend on something that is contingent. So, they transcend the multiverse (they would be 'super-natural').

Quoting noAxioms
Maybe putting in intelligibility as a requirement for existence isn't such a great idea. Of course that depends on one's definition of 'to exist'. There are definitely some definitions where intelligibility would be needed.


If the physical world wasn't intelligible, then it seems to me that even doing science would be problematic. Indeed, scientific research seems to assume that the physical world is intelligible.

It might be problematic to assume that the physical world is fully intelligible for us, but intelligibility seems to be required for any type of investigation.

Quoting noAxioms
A made-up story. Not fiction (Sherlock Holmes say), just something that's wrong. Hard to give an example since one could always presume the posited thing is not wrong.


Ok. I would call these things simply 'wrong explanations' or 'inconsistent explanations' rather than 'super-natural', which seems to me to be better suited for speaking about something that transcends the 'natural' (if there is anything that does do that... IMO mathematical truths for instance do transcend the natural).

Quoting noAxioms
Again, why is the explanation necessary? What's wrong with just not knowing everything? Demonstrating the thing in question to be impossible is another story. That's a falsification, and that carries weight. So can you demonstrate that no inanimate thing can intend? Without 'proof by dictionary'?


TBH, I think that right now the verdict is still open. There is no evidence 'beyond reasonable doubt' for either position about consciousness that can satisfy almost everyone. We can discuss which position seems 'more reasonable', but we do not have 'convincing evidence'.

Quoting noAxioms
That does not sound like any sort of summary of my view, which has no requirement of being alive in order to do something that a living thing might do, such as fall off a cliff.


OK, I stand corrected. Would you describe your position as 'emergentist' then?





wonderer1 October 17, 2025 at 14:15 #1019327
Quoting boundless
Ok. But if there is an 'emergence', it must be an intelligible process. The problem for 'emergentism' is that there doesn't seem to be any convincing explanation of how intentionality, consciousness and so on 'emerge' from something that does not have those properties.


The emergence of intentionality (in the sense of 'aboutness') seems well enough explained by the behavior of a trained neural network. See:



This certainly isn't sufficient for an explanation of consciousness. However, in light of how serious a concern AI has become, I'd think it merits serious consideration as an explanation of how intentionality can emerge from a system in which the individual components of the system lack intentionality.

Do you think it is reasonable to grant that we know of an intelligible process in which intentionality emerges?
boundless October 17, 2025 at 14:43 #1019331
Reply to wonderer1 thanks for the video. It seems interesting. I'll share my thoughts tomorrow about it.
wonderer1 October 17, 2025 at 15:30 #1019348
Harry Hindu October 17, 2025 at 15:45 #1019353
Quoting noAxioms
All this seems to be the stock map vs territory speech, but nowhere is it identified what you think is the map (that I'm talking about), and the territory (which apparently I'm not).

The map is the first-person view. Is the map (first-person view) not part of the territory?

Quoting noAxioms
Very few consider the world to be a model. The model is the map, and the world is the territory. Your wording very much implies otherwise, and thus is a strawman representation of a typical monist view. As for your model of what change is, that has multiple interpretations, few particularly relevant to the whole ontology of mind debate. Change comes in frequencies? Frequency is expressed as a rate relative to perceptions??

I never said that people consider the world as a model. I said that our view is the model, and the point was that some people (naive realists) tend to confuse the model with the territory in their use of terms like "physical" and "material".

You do understand that we measure change using time, and that doing so entails comparing the relative frequency of change to another type of change (the movement of the hour hand on the clock vs the movement of the sun across the sky)? Do you not agree that our minds are part of the world and change like anything else in the world, and that the time it takes our eye-brain system to receive and process the information, compared to the rate at which what you are observing is changing, can play a role in how your mind models what it is seeing?

Quoting noAxioms
So old glass flowing is not an actual process, or I suppose just doesn't appear that way despite looking disturbingly like falling liquid? This is getting nitpicky of me. I acknowledge your example, but none of it is science, nor is it particularly illustrative of the point of the topic.

:meh: Everything is a process. Change is relative. The molecules in the glass are moving faster than when it was a solid; therefore the rate of change has increased, which is why you see it as a flowing process rather than a static object. I don't see how it isn't science when scientists attempt to find consistently repetitive processes with high degrees of precision (like atomic clocks) to measure the rate of change in other processes. QM says that measuring processes changes them and how they are perceived (wave vs particle), so I don't know what you mean by 'none of it is science'.
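As a toy illustration of that last point (nothing from the thread; a minimal sketch in Python with made-up names and numbers), a 'rate of change' is only ever a count of one process's cycles against some reference process's cycles:

def relative_rate(observed_cycles: float, reference_cycles: float) -> float:
    # Cycles of the observed process per cycle of the reference process.
    return observed_cycles / reference_cycles

# Using one apparent trip of the sun across the sky as the reference process:
hour_hand = relative_rate(observed_cycles=2.0, reference_cycles=1.0)                     # 2 revolutions of the hour hand per day
cesium = relative_rate(observed_cycles=9_192_631_770 * 86_400, reference_cycles=1.0)     # ~7.9e14 hyperfine oscillations per day

print(hour_hand, cesium)  # the same day of change, counted against two very different processes

Swap the reference (an hourglass, a heartbeat, an atomic clock) and the numbers change while the processes themselves don't, which is the sense in which a measured rate is always relative to another process.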


boundless October 18, 2025 at 08:12 #1019471
Reply to wonderer1 Ok, I watched the video. Nice explanation of how machine learning works.

Still, I am hesitant to see it as an example of emergence of intentionality for two reasons. Take what I say below with a grain of salt, but here's my two cents.

First, these machines, like all others, are still programmed by human beings who decide how they should work. So, there is a risk of reading back into these machines the intentionality of the human beings who built them. To make a different example, if you consider a mechanical calculator it might seem it 'recognizes' the numbers '2', '3' and the operation of addition and then gives us the output '5' when we ask it to perform the calculation. The machine described in the video is far more complex but the basic idea seems the same.

Secondly, the outputs the machine gives are the results of statistical calculations. The machine is given a set of examples pairing hand-written numbers with the number each one should be read as. It then manages to perform better over further trials by minimizing the error function. Ok, interesting, but I'm not sure that we can say that the machine has 'concepts' like the ones we have. When we read a hand-written number '3' it might be that we associate it with the 'concept of 3' by a Bayesian inference (i.e. the most likely answer to the question: "what is the most likely sign that the writer wrote here?"). But when we are aware of the concept of '3' we do not perceive a 'vector' of different probabilities over different concepts.
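To make the 'vector of probabilities' point concrete, here is a minimal Python sketch of that kind of digit classifier (not the network from the video; the layer sizes and untrained random weights are arbitrary). Its 'recognition' of an image is literally a probability assigned to each of the ten digit classes, and training would only adjust the weights to push those probabilities toward the labelled examples:

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.normal(size=(784, 32)), np.zeros(32)  # 28x28 image -> 32 hidden units
W2, b2 = 0.01 * rng.normal(size=(32, 10)), np.zeros(10)   # 32 hidden units -> 10 digit classes

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(pixels):
    # Returns a probability for each digit 0-9 and the most likely digit.
    hidden = np.tanh(pixels @ W1 + b1)
    probs = softmax(hidden @ W2 + b2)
    return probs, int(np.argmax(probs))

probs, guess = classify(rng.random(784))  # a fake 'hand-written digit'
print(guess, probs.round(3))              # untrained, so roughly 0.1 per class

Whether producing and arg-maxing such a vector counts as 'recognizing a 3' is exactly the disagreement in the surrounding posts; the sketch only shows what the machine actually computes.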


noAxioms October 18, 2025 at 23:11 #1019629
Quoting boundless

Ok. But if there is an 'emergence', it must be an intelligible process.
I deny that requirement. It sort of sounds like an idealistic assertion, but I don't think idealism suggests emergent properties.

Right, but there is also the possibility that ontological dependency doesn't involve a temporal relation.
Sure

That is, you might say that intentionality isn't fundamental but is dependent on something else that lacks intentionality, and yet there has never been a time when intentionality didn't exist
I was on board until the bit about not being a time (presumably in our universe) when intentionality doesn't exist. It doesn't appear to exist at very early times, and it doesn't look like it will last.

As an illustration, consider the stability of the top floor of a building. It clearly depends on the firmness of the foundations of the building, and yet we don't say that 'at a certain point' the upper floor 'came out' from the lower.
But it's not building all the way down, nor all the way up.

Quoting boundless
Stellar dynamics isn't fundamental because it can be explained in terms of more fundamental processes.
But it hasn't been fully explained. A sufficiently complete explanation might be found by humans eventually (probably not), but currently we lack that, and in the past, we lacked it a lot more. Hence science.

Will we discover something similar for intentionality, consciousness and so on?
Maybe we already have (the example from @wonderer1 is good), but every time we do, the goalposts get moved, and a more human-specific explanation is demanded. That will never end since I don't think a human is capable of fully understanding how a human works any more than a bug knows how a bug works.

That would be an interesting objective threshold of intelligence: any entity capable of comprehending itself.

Quoting boundless
But currently it seems to me that our 'physicalist' models can't do that.
I beg to differ. They're just simple models at this point is all. So the goalposts got moved and those models were declared to not be models of actual intentionality and whatnot.
We could do a full simulation of a human and still there will be those that deny usage of the words. A full simulation would also not constitute an explanation, only a demonstration. That would likely be the grounds for the denial.

But if they are 'true' even if the universe or multiverse didn't exist, this means that they have a different ontological status. And, in fact, if the multiverse could fail to exist, this would mean that it is contingent.
Agree with all that.

Quoting boundless
Mathematical truths, instead, we seem to agree are not contingent.

Mathematics seems to come in layers, with higher layers dependent on more fundamental ones. Is there a fundamental layer? Perhaps laws of form. I don't know. What would ground that?

Given that they aren't contingent, they certainly can't depend on something that is contingent. So, they transcend the multiverse (they would be 'super-natural').
Good point


If the physical world wasn't intelligible, then it seems to me that even doing science would be problematic.
Just so. So physical worlds would not depend on science being done on them. Most of them fall under that category. Why doesn't ours? That answer at least isn't too hard.

There is no evidence 'beyond reasonable doubt' for either position about consciousness that can satisfy almost everyone.
Agree again. It's why I don't come in here asserting that my position is the correct one. I just balk at anybody else doing that, about positions with which I disagree, but also about positions with which I agree. I have for instance debunked 'proofs' that presentism is false, despite the fact that I think it's false.

I entertain the notion that our universe is a mathematical structure, but there are some serious problems with that proposition for which I've not seen a satisfactory resolution. Does it sink the ship? No, but a viable model is needed, and I'm not sure there is one. Sean Carroll got into this.

Quoting boundless
Would you describe your position as 'emergentist' then?

Close enough. More of a not-unemergentist, distinct in that I assert that the physical is sufficient for emergence of these things, as opposed to asserting that emergence from the physical is a necessary fact, a far more closed-minded stance.

Quoting boundless
Still, I am hesitant to see it as an example of emergence of intentionality for two reasons.

First, these machines, like all others, are still programmed by human beings who decide how they should work.
This is irrelevant to emergence, which just says that intentionality is present, consisting of components, none of which carry intentionality.
OK, so you don't deny the emergence, but rather that it is intentionality at all, since it is not its own, quite similar to how my intentions at work are those of my employer instead of my own intentions.

To make a different example, if you consider a mechanical calculator it might seem it 'recognizes' the numbers '2', '3'
It recognizes 2 and 3. It does not recognize the characters. That would require an image-to-text translator (like the one in the video, learning or not). Yes, it adds. Yes, it has a mechanical output that displays results in human-readable form. That's my opinion of language being appropriately applied. It's mostly a language difference (whether to choose those words to describe what it's doing or not) and not a functional difference.

Secondly, the outputs the machine gives are the results of statistical calculations. The machine is given a set of examples pairing hand-written numbers with the number each one should be read as. It then manages to perform better over further trials by minimizing the error function.
Cool. So similar to how humans do it. The post office has had image-to-text interpretation for years, but not sure how much those devices learn as opposed to just being programmed. Those devices need to parse cursive addresses, more complicated than digits. I have failed to parse some hand written numbers.
My penmanship sucks, but I'm very careful when hand-addressing envelopes.


Quoting Harry Hindu
The map is the first-person view. Is the map (first-person view) not part of the territory?
I don't know what the territory is as you find distinct from said map.


I said that our view is the model, and the point was that some people (naive realists) tend to confuse the model with the territory in their use of terms like "physical" and "material".
Fine, but I'm no naive realist. Perception is not direct, and I'm not even a realist at all. A physicalist need not be any of these things.

You do understand that we measure change using time
Change over time, yes. There's other kinds of change.

and that doing so entails comparing the relative frequency of change to another type of change
Fine, so one can compare rates of change, which is frame dependent if we want to get into that.

Do you not agree that our minds are part of the world and change like anything else in the world, and that the time it takes our eye-brain system to receive and process the information, compared to the rate at which what you are observing is changing, can play a role in how your mind models what it is seeing?
I suppose so, but I don't know how one might compare a 'rate of continuous perception' to a 'rate of continuous observed change'. Both just happen all the time. Sure, a fast car goes by in less time than a slow car, if that's what you're getting at.

Everything is a process. Change is relative. The molecules in the glass are moving faster than when it was a solid
Well that's wrong. Glass was never a solid. The molecules in the old glass move at the same rate as newer harder glass, which is more temperature dependent than anything. But sure, their average motion over a long time relative to the window frame is faster in the old glass since it might move 10+ centimeters over decades. What's any of this got to do with 'the territory' that the first person view is supposedly a map of?
Perhaps you mean territory as the thing in itself (and not that which would be directly perceived). You've not come out and said that. I agree with that. A non-naive physicalist would say that things like intentionality supervene on actual physical things, and not on the picture that our direct perceptions paint for us. I never suggested otherwise.

therefore the rate of change has increased and is why you see it as a moving object rather than a static one.
I see the old glass as moving due to it looking like a picture of flowing liquid, even though motion is not perceptible. A spinning top is a moving object since its parts are at different locations at different times, regardless of how it is perceived.
boundless October 19, 2025 at 09:58 #1019680
Quoting noAxioms
I deny that requirement. It sort of sounds like an idealistic assertion, but I don't think idealism suggests emergent properties.


If physical processes weren't intelligible, how could we even do science, which seeks at least an intelligible description of processes that allow us to make predictions and so on?

Quoting noAxioms
I was on board until the bit about not being a time (presumably in our universe) when intentionality doesn't exist. It doesn't appear to exist at very early times, and it doesn't look like it will last.


I was saying that if there was a time when intentionality didn't exist, it must have come into being 'in some way' at a certain moment.

Quoting noAxioms
But it hasn't been fully explained. A sufficiently complete explanation might be found by humans eventually (probably not), but currently we lack that, and in the past, we lacked it a lot more. Hence science.


I sort of agree. And honestly, I believe that everything is ultimately not fully knowable as a 'complete understanding' of anything would require the complete understanding of the context in which something exists and so on. Everything is therefore mysterious and, at the same time, paradoxically intelligible.

Quoting noAxioms
Maybe we already have (the example from wonderer1 is good), but every time we do, the goalposts get moved, and a more human-specific explanation is demanded. That will never end since I don't think a human is capable of fully understanding how a human works any more than a bug knows how a bug works.


I don't know. Merely giving an output after computing the most likely alternative doesn't seem to me the same thing as intentionality. But, again, perhaps I am wrong about this. It just doesn't seem to be supported by our phenomenological experience.

Quoting noAxioms
Mathematics seems to come in layers, with higher layers dependent on more fundamental ones. Is there a fundamental layer? Perhaps laws of form. I don't know. What would ground that?


Yes, I think I agree here. Even natural numbers seem to be 'based' on more fundamental concepts like identity, difference, unity, multiplicity etc. But nevertheless the truths about them seem to be non-contingent.

Quoting noAxioms
Good point


In my book, if you agree with that, you are not a 'physicalist'. Of course, I accept that you might disagree with me here.

Quoting noAxioms
Just so. So physical worlds would not depend on science being done on them. Most of them fall under that category. Why doesn't ours? That answer at least isn't too hard.


If we grant to science some ability to give us knowledge of physical reality, then we must assume that the physical world is intelligible. Clearly, the physical world doesn't depend on us doing scientific investigations on it but, nevertheless, the latter would seem to me ultimately fruitless if the former wasn't intelligible (except perhaps in a weird purely pragmatic point of view).

Quoting noAxioms
Agree again. It's why I don't come in here asserting that my position is the correct one. I just balk at anybody else doing that, about positions with which I disagree, but also about positions with which I agree. I have for instance debunked 'proofs' that presentism is false, despite the fact that I think it's false.


OK. I have a sort of similar approach to online discussions. Sometimes, however, I believe that it is simply impossible not to state one's own disagreement (or agreement) with a view in seemingly excessively confident terms. Like sarcasm, sometimes the 'level of confidence' comes out badly in discussions, and people seem more confident about a given thing than they actually are.

Furthermore, I also believe that a careful analysis of a position one has little sympathy for can actually be useful for understanding better, and reinforcing, the position one has. I get that sometimes it is not an easy task, but the fruits of such a careful (and respectful) approach are very good.

Quoting noAxioms
Close enough. More of a not-unemergentist, distinct in that I assert that the physical is sufficient for emergence of these things, as opposed to asserting that emergence from the physical is a necessary fact, a far more closed-minded stance.


Not sure what you mean here. Are you saying that the physical is sufficient for emergence but there are possible ways in which intentionality, consciousness etc emerge without the physical?

Quoting noAxioms
This is irrelevant to emergence, which just says that intentionality is present, consisting of components, none of which carry intentionality.
OK, so you don't deny the emergence, but rather that it is intentionality at all, since it is not its own, quite similar to how my intentions at work are those of my employer instead of my own intentions.


Good point. But note that if your intentions could be completely determined by your own employer, it would be questionable to call them 'your' intentions. Also, to emerge 'your' intentions would need the intentionality of your employer.

Anyway, even if I granted that, somehow, the machines could have an autonomous intentionality, there remains the fact that if intentionality, in order to emerge, always needs some other intentionality, then intentionality is fundamental.

So, yeah, I sort of agree that intentionality can come into being via emergence but it isn't clear how it could emerge from something that is completely devoid of it.

Quoting noAxioms
It recognizes 2 and 3. It does not recognize the characters. That would require an image-to-text translator (like the one in the video, learning or not). Yes, it adds. Yes, it has a mechanical output that displays results in human-readable form. That's my opinion of language being appropriately applied. It's mostly a language difference (whether to choose those words to describe what it's doing or not) and not a functional difference.


Again, I see it more like a machine doing an operation rather than a machine 'recognizing' anything. An engine that burns gasoline to give energy to a car, allowing it to move, doesn't 'recognize' anything, yet it operates. In the same way, I doubt that a machine can recognize numbers in a way analogous to how we do, but I still do not find any evidence that they do something more than doing an operation as an engine does. This to me applies both to the mechanical calculator and the computer in the video.

An interesting question, however, arises. How can I be sure that humans (and, I believe, also animals at least) can 'recognize' numbers as I perceive myself doing? This is indeed a big question. Can we be certain that we - humans and (some?) animals - do recognize numbers while machines do not? I am afraid that such a certainty is beyond our reach.

Still, I think it is reasonable that machines do not have such a faculty because they operate algorithmically (and those algorithms can be VERY complex and designed to approximate our own abilities).

Quoting noAxioms
Cool. So similar to how humans do it. The post office has had image-to-text interpretation for years, but not sure how much those devices learn as opposed to just being programmed. Those devices need to parse cursive addresses, more complicated than digits. I have failed to parse some hand written numbers.
My penmanship sucks, but I'm very careful when hand-addressing envelopes.


Do we just do that, though? It seems from our own phenomenological experience that we can have some control and self-awareness over our own 'operations' that machines do not have.

Quoting noAxioms
That would be an interesting objective threshold of intelligence: any entity capable of [partially] comprehending itself.


I think I agree with that, provided that one adds the word in the square brackets. The problem is: can we have an infallible criterion that allows us to objectively determine whether a living being, machine or whatever has such an ability?





Harry Hindu October 19, 2025 at 15:06 #1019711
Quoting noAxioms
The map is the first-person view. Is the map (first-person view) not part of the territory?
— Harry Hindu
I don't know what the territory is as you find distinct from said map.

It was a question to you about the distinction between territory and map. Is the map part of the territory? If there isn't a distinction, then that is basically solipsism. Solipsism implies that the map and the territory are one and the same. One might even say there is no map - only the territory as the mind is all there is.

Quoting noAxioms
Fine, but I'm no naive realist. Perception is not direct, and I'm not even a realist at all. A physicalist need not be any of these things.

What does it even mean to be a physicalist? What does "physical" even mean? When scientists describe objects they say things like, "objects are mostly empty space" and describe matter as the relationship between smaller particles all the way down (meaning we never get at actual physical stuff - just more fundamental relationships, or processes) until we arrive in the quantum realm where "physical" seems to have no meaning, or is at least dependent upon our observations (measuring).

Quoting noAxioms
Change over time, yes. There's other kinds of change.

Like...? You might say that there are changes in space but space is related to time. So maybe I should ask if there is an example of change independent of space-time. Space-time appears to be the medium in which change occurs.

Quoting noAxioms
I suppose so, but I don't know how one might compare a 'rate of continuous perception' to a 'rate of continuous observed change'. Both just happen all the time. Sure, a fast car goes by in less time than a slow car, if that's what you're getting at.

Ever seen super slow motion video of a human's reaction time to stimuli? It takes time for your mind to become aware of its surroundings. You are always perceiving the world as it was in the past, so your brain has to make some predictions. Solid objects are still changing - just at a much slower rate.
The simplified, cartoonish version of events you experience is what you refer to as "physical", where objects appear as solid objects that "bump" against each other because that is how the slower processes are represented on the map. How would you represent slow processes vs faster processes on a map?

Quoting noAxioms
A non-naive physicalist would say that things like intentionality supervene on actual physical things, and not on the picture that our direct perceptions paint for us. I never suggested otherwise.

I don't understand. Is the picture not physical as well for a physicalist?

How do you explain an illusion, like a mirage, if not intentionality supervening on the picture instead of on some physical thing?

I don't know what it means for intentionality to supervene on actual physical things. But I do know that if you did not experience empty space in front of you and instead experienced the cloud of gases surrounding you, then your intentions might be quite different. Yet you act on the feeling of there being nothing in front of you, because that is how your visual experience is.

How do you reconcile the third person view of another's brain (your first-person experience of another's brain - a third person view can only be had via a first-person view.) with their first person experience of empty space and visual depth? This talk of views seems to be confusing things. What exactly is a view? A process? Information?

Quoting noAxioms
I see the old glass as moving due to it looking like a picture of flowing liquid, even though motion is not perceptible. A spinning top is a moving object since its parts are at different locations at different times, regardless of how it is perceived.

Maybe I should try this route - Does a spinning top look more like a wave than a particle, and when it stops does it look more like a particle than a wave? Is a spinning top a process? Is a top at rest a process - just a slower one? Isn't the visual experience of a wave-like blur of a spinning top the relationship between the rate of change of position of each part of the top relative to your position in space and the rate at which your eye-brain system can process the change it is observing? If your mental processing were faster then it would effectively slow down the apparent speed of the top to the point where it will appear as a stable, solid object standing perfectly balanced on its bottom peg.
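A toy sketch of that last point (not anyone's claim in the thread; the spin rate and sampling rates are invented) is the wagon-wheel effect: the motion you 'see' depends on how your sampling rate relates to the rate of the process being sampled:

spin_hz = 20.0  # the top actually rotates 20 times per second

def apparent_step_degrees(sample_hz: float) -> float:
    # Rotation seen between successive samples, folded into (-180, 180] degrees.
    step = (360.0 * spin_hz / sample_hz) % 360.0
    return step - 360.0 if step > 180.0 else step

for sample_hz in (10.0, 19.0, 20.0, 200.0):
    print(sample_hz, round(apparent_step_degrees(sample_hz), 1))
# 10.0  -> 0.0   two full turns per sample: the top looks frozen
# 19.0  -> 18.9  a slow apparent drift
# 20.0  -> 0.0   sampling matches the spin: again looks static
# 200.0 -> 36.0  fast enough sampling to resolve the actual rotation

The top's own rotation never changes; only the apparent motion does, which is the (much simplified) sense in which a faster observer would see a different top.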

noAxioms October 20, 2025 at 04:39 #1019850
Quoting boundless
If physical processes weren't intelligible, how could we even do science
Doing science is how something unintelligible becomes more intelligible.

I was saying that if there was a time when intentionality didn't exist, it must have come into being 'in some way' at a certain moment.
OK, that's a lot different than how I read the first statement.

Merely giving an output after computing the most likely alternative doesn't seem to me the same thing as intentionality.
I don't think the video was about intentionality. There are other examples of that, such as the robot with the repeated escape attempts, despite not being programmed to escape.

The video was more about learning and consciousness.

In my book, if you agree with [mathematics not being just a natural property of this universe, and thus 'supernatural'], you are not a 'physicalist'.
Depends on definitions. I was unaware that the view forbade deeper, non-physical foundations. It only asserts that there isn't something else, part of this universe, but not physical. That's how I take it anyway.

Quoting boundless
If we grant to science some ability to give us knowledge of physical reality, then we must assume that the physical world is intelligible.
Partially intelligible, which is far from 'intelligible', a word that on its own implies nothing remaining that isn't understood.

Quoting boundless
Like sarcasm, sometimes the 'level of confidence' comes out badly in discussions and people seem more confident about a given thing than they actually are.
Not sure where you think my confidence level is. I'm confident that monism hasn't been falsified. That's about as far as I go. BiV hasn't been falsified either, and it remains an important consideration, but positing that you're a BiV is fruitless.

More of a not-unemergentist, distinct in that I assert that the physical is sufficient for emergence of these things, as opposed to asserting that emergence from the physical is a necessary fact, a far more closed-minded stance. — noAxioms

Not sure what you mean here. Are you saying that the physical is sufficient for emergence but there are possible ways in which intentionality, consciousness etc emerge without the physical?
I'm saying that alternatives to such physical emergence have not been falsified, so yes, I suppose those alternative views constitute 'possible ways in which they exist without emergence from the physical'.

Good point. But note that if your intentions could be completely determined by your own employer, it would be questionable to call them 'your' intentions.
Just as you question whether a machine's intentions are its own, since some of them were determined by its programmer.

Also, to emerge 'your' intentions would need the intentionality of your employer.
No, since I am composed of parts, none of which have the intentionality of my employer. So it's still emergent, even if the intentions are not my own.

there remains the fact that if intentionality, in order to emerge, needs always some other intentionality, intentionality is fundamental.
That seems to be self contradictory. If it's fundamental, it isn't emergent, by definition.


Quoting boundless
Again, I see it more like a machine doing an operation rather than a machine 'recognizing' anything.
The calculator doesn't know what it's doing, I agree. It didn't have to learn. It's essentially a physical tool that nevertheless does mathematics despite not knowing that it's doing that, similar to a screwdriver screwing despite not knowing it's doing that. Being aware of its function is not one of its functions.

I still do not find any evidence that they do something more than doing an operation as an engine does.
Agree.

This to me applies both to the mechanical calculator and the computer in the video.
Don't agree. The thing in the video learns. An engine does too these days, something that particularly pisses me off since I regularly have to prove to my engine that I'm human, and I tend to fail that test for months at a time. The calculator? No, that has no learning capability.

An interesting question, however, arises. How can I be sure that humans (and, I believe, also animals at least) can 'recognize' numbers as I perceive myself doing?
Dabbling in solipsism now? You can't see the perception or understanding of others, so you can only infer when others are doing the same thing.

Still, I think it is reasonable that machines do not have such a faculty because they operate algorithmically
How do you know that you do not also operate this way? I mean, sure, you're not a von Neumann machine, but being one is not a requirement for operating algorithmically. If you don't know how it works, then you can't assert that it doesn't fall under that category.
More importantly, what assumptions are you making that preclude anything operating algorithmically from having this understanding? How do you justify those assumptions? They seem incredibly biased to me.


Quoting Harry Hindu
It was a question to you about the distinction between territory and map. Is the map part of the territory?
OK. It varies from case to case. Sometimes it is. The 'you are here' sign points to where the map is on the map, with the map being somewhere in the territory covered by the map.
Your solipsism question implies that you were asking a different question. OK. Yes, the map is distinct from the territory, but you didn't ask that. Under solipsism, they're not even distinct.

Your prior post did eventually suggest a distinction between a perceived thing (a 3D apple say) and the ding an sich, which is neither 3D nor particularly even a 'thing'.

What does it even mean to be a physicalist?
Different people use the term differently, I suppose. I did my best a few posts back, something like "the view that all phenomena are the result of what we consider natural law of this universe", with 'this universe' loosely being defined as 'all contained by the spacetime which we inhabit'. I gave some challenges to that definition, such as the need to include dark matter under the category of 'natural law' to explain certain phenomena. Consciousness could similarly be added if it can be shown that it cannot emerge from current natural law, but such a proposal makes predictions, and those predictions fail so far.

When scientists describe objects they say things like, "objects are mostly empty space" and describe matter as the relationship between smaller particles all the way down (meaning we never get at actual physical stuff - just more fundamental relationships, or processes) until we arrive in the quantum realm where "physical" seems to have no meaning, or is at least dependent upon our observations (measuring).
All correct, which is why I didn't define 'physical' in terms of material, especially since they've never found any material. Yes, rocks are essentially clusters of quantum do-dads doing their quantumy stuff. There are no actual volume-filling particles, so 'mostly empty space' should actually get rid of 'mostly'.

Change over time, yes. There's other kinds of change. — noAxioms
Like...?
e.g. The air pressure changes with altitude.

So maybe I should ask if there is an example of change independent of space-time.
In simplest terms, the function y = 0.3x, the y value changes over x. That being a mathematical structure, it is independent of any notion of spacetime. Our human thinking about that example of course is not independent of it. We cannot separate ourselves from spacetime.
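(Spelled out: its rate of change is dy/dx = 0.3, a perfectly good 'change' with no time variable appearing anywhere in it.)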

You are always perceiving the world as it was in the past, so your brain has to make some predictions.
...
The simplified, cartoonish version of events you experience is what you refer to as "physical", where objects appear as solid objects that "bump" against each other because that is how the slower processes are represented on the map.
Sure, one can model rigid balls bouncing off each other, or even simpler models than that if such serves a pragmatic purpose. I realize that's not what's going on. Even the flow of time is a mental construct, a map of sorts. Even you do it, referencing 'the past' like it was something instead of just a pragmatic mental convenience.

How would you represent slow processes vs faster processes on a map?
Depends on the nature of the map. If you're talking about perceptions, then it would be a perception of relative motion of two things over a shorter vs longer period of time, or over the same time, with the faster one appearing to have moved farther. If we're talking something like a spacetime diagram, then velocity corresponds to slopes of worldlines.
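(To make that concrete: on the usual diagram with time plotted vertically and position horizontally, a worldline's slope is dt/dx = 1/v, so the faster of two processes shows up as the shallower of the two lines.)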

I don't understand. Is the picture not physical as well for a physicalist?
Sure it is, but the mental picture is not the intentionality, just the idea of it.

How do you explain an illusion, like a mirage, if not intentionality supervening on the picture instead of on some physical thing?
I don't understand this. A mirage is a physical thing. A camera can take a picture of one. No intentionality is required of the camera for it to do that. I never suggested that intentionality supervenes on any picture. Territories don't supervene on maps.


I don't know what it means for intentionality to supervene on actual physical things. But I do know that if you did not experience empty space in front of you and instead experienced the cloud of gases surrounding you, then your intentions might be quite different. Yet you act on the feeling of there being nothing in front of you, because that is how your visual experience is.
Yes, my experience and subsequent mental assessment of state (a physical map of sorts) influences what I choose to do. Is that so extraordinary?

How do you reconcile the third person view of another's brain (your first-person experience of another's brain - a third person view can only be had via a first-person view.) with their first person experience of empty space and visual depth?
Sorry, you lost me, especially the bits in parentheses.
Other people (if they exist in the way that I do) probably have a first person view similar to my own. A third person description of their brain (or my own) is not required for this. I have no first person view of any brain, including my own. In old times, it wasn't obvious where the thinking went on. It took third person education to learn that.

This talk of views seems to be confusing things. What exactly is a view? A process? Information?
Probably a good question. In the context of this topic's title, I'm not actually sure about the first-person case, since I don't find baffling what others do. Third person is simply a description, language or whatever. A book is a good third person view of a given subject. First person is a subjective, temporal point of view held by some classical entity. Those biased would probably say that the entity has to be alive.

Maybe I should try this route - Does a spinning top look more like a wave than a particle, and when it stops does it look more like a particle than a wave?
It never looks like either. You're taking quantum terminology way out of context here. Quantum entities sometimes have wave-like properties and also particle-like properties, but those entities are never actually either of those things.

Is a spinning top a process? Is a top at rest a process - just a slower one?
Yes to all.

Isn't the visual experience of a wave-like blur of a spinning top the relationship between the rate of change of position of each part of the top relative to your position in space and the rate at which your eye-brain system can process the change it is observing?
Yea, pretty much. My eyes cannot follow it, even if they could follow linear motion at the same speed.

If your mental processing were faster, then it would effectively slow down the speed of the top to the point where it would appear as a stable, solid object standing perfectly balanced on its bottom peg.
I'd accept that statement. Clouds look almost static like that, until you watch a time-lapse video of them. You can see the motion, but only barely. In fast-mo, I've seen clouds break like waves against a beach.
Harry Hindu October 20, 2025 at 14:00 #1019900
Quoting noAxioms
OK. It varies from case to case. Sometimes it is. The 'you are here' sign points to where the map is on the map, with the map being somewhere in the territory covered by the map.
You solipsism question implies that you were asking a different question. OK. Yes, the map is distinct from the territory, but you didn't ask that. Under solipsism, they're not even distinct.

I can't think of a case where the map is not part of the territory, unless you are a solipsist, in which case they are one and the same, not one part of the other.

Quoting noAxioms
Your prior post did eventually suggest a distinction between a perceived thing (a 3D apple say) and the ding an sich, which is neither 3D nor particularly even a 'thing'.

I may make a distinction between an idea and something that is not an idea (I'm not an idealist). But I do not make a distinction between their existence. Santa Claus exists - as an idea. The question isn't whether Santa Claus exists or not. It does as we have "physical" representations (effects) of that idea (the cause) every holiday season. The question is, "what is the nature of its existence?". People are not confused about the existence of god. They are confused about the nature of god - is it just an idea, or does god exist as something more than just an idea?


boundless October 21, 2025 at 20:37 #1020156
Quoting noAxioms
Doing science is how something less unintelligible becomes more intelligible.


Ok.

Quoting noAxioms
There are other examples of that, such as the robot with the repeated escape attempts, despite not being programmed to escape.


I'll try to find some of these things. Interesting.

Quoting noAxioms
Partially intelligible, which is far from 'intelligible', a word that on its own implies nothing remaining that isn't understood.


Well, it depends on what we mean by 'intelligible'. A thing might be called 'intelligible' because it is fully understood or because it can be, in principle, understood completely*. That's why I tend to use the expressions 'partially intelligible' and 'intelligible' in a somewhat liberal manner.

*This 'in principle' does not refer only to human minds. If there were minds that have higher abilities than our own it may be possible that they understand something that we do not and cannot. This doesn't mean that those things are 'unintelligible'.

Quoting noAxioms
Not sure where you think my confidence level is. I'm confident that monism hasn't been falsified. That's about as far as I go. BiV hasn't been falsified either, and it remains an important consideration, but positing that you're a BiV is fruitless.


I believe that you believe that some alternatives are more reasonable than the others but you don't think that there is enough evidence to say that one particular theory is 'the right one beyond reasonable doubt'.

Quoting noAxioms
I'm saying that alternatives to such physical emergence have not been falsified, so yes, I suppose those alternative views constitute 'possible ways in which they exist without emergence from the physical'.


Ok, thanks.

Quoting noAxioms
No, since I am composed of parts, none of which have the intentionality of my employer. So it's still emergent, even if the intentions are not my own.


My point wasn't that the programmer's intentionality is part of the machine but, rather, it is a necessary condition for the machine to come into being. If the machine had intentionality, such an intentionality also depends on the intentionality of its builder, so we still can't say that the machine's intentionality emerged from purely 'inanimate' causes.

Not a very strong argument but it is still an interesting point IMO (not that here I am conceding that we can build machines which have intentionality).

Quoting noAxioms
Don't agree. The thing in the video learns. An engine does too these days, something that particularly pisses me off since I regularly have to prove to my engine that I'm human, and I tend to fail that test for months at a time. The calculator? No, that has no learning capability.


Mmm. I still don't get why. It seems to me that there is only a difference of complexity. 'Learning' IMO would imply that the machine can change the algorithms according to which it operates (note that here I am not using the term 'learning' to refer to the mere adding of information but, rather, something like learning an ability...).

Quoting noAxioms
Dabbling in solipsism now? You can't see the perception or understanding of others, so you can only infer when others are doing the same thing.


Yes, I agree. But I am not sure that this inference is enough for certainty, except for the kind of certainty that holds 'for all practical purposes'.

Quoting noAxioms
More importantly, what assumptions are you making that preclude anything operating algorithmically from having this understanding? How do you justify those assumptions? They seem incredibly biased to me.


They are inferences that I can make based on my own experience. I might be wrong, of course, but it doesn't seem to me that I can explain all features of my mental activities in purely algorithmic terms (e.g. how I make some choices). I might concede, however, that I am not absolutely sure that there isn't an unknown algorithmic explanation of all the operations that my mind can do.
To change my view I would need more convincing arguments that my - or any human being's - mind is algorithmic.





noAxioms October 22, 2025 at 00:29 #1020180
Quoting boundless
Well, it depends on what we mean by 'intelligible'.
You've been leveraging the word now for many posts. Maybe you should have put out your definition of that if it means something other than 'able to be understood', as opposed to say 'able to be partially understood'.

A thing might be called 'intelligible' because it is fully understood or because it can be, in principle, understood completely*.
First of all, by whom? Something understood by one might still baffle another, especially if the other has a vested interest in keeping the thing in the unintelligible list, even if only by declaring the explanation as one of correlation, not causation.

There are things that even in principle will never be understood completely, such as the true nature of the universe since there can never be tests that falsify different interpretations. From this it does not follow that physicalism fails. So I must deny that physicalism has any requirement of intelligibility, unless you have a really weird definition of it.

I believe that you believe that some alternatives are more reasonable than the others
Yup. Thus I have opinions. Funny that I find BiV (without even false sensory input) less unreasonable than magic.

but you don't think that there is enough evidence to say that one particular theory is 'the right one beyond reasonable doubt'.
One person's reasonable doubt is another's certainty. Look at all the people that know for certain that their religion of choice (all different ones) is the correct one. Belief is a cheap commodity with humans, rightfully so since such a nature makes us more fit. A truly rational entity would not be similarly fit, and thus seems unlikely to have evolved by natural selection.


Quoting boundless
My point wasn't that the programmer's intentionality is part of the machine but, rather, it is a necessary condition for the machine to come into being.
If the machine was intentionally made, then yes, by definition. If it came into being by means other than a teleological one, then not necessarily so. I mean, arguably my first born came into being via intentionality, and the last not, despite having intentionality himself. Hence the condition is not necessary.
There are more extreme examples of this, like the civil war case of a woman getting pregnant without ever first meeting the father, with a bullet carrying the sperm rather than any kind of intent being involved.

If the machine had intentionality, such an intentionality also depends on the intentionality of its builder, so we still can't say that the machine's intentionality emerged from purely 'inanimate' causes.
A similar argument seeks to prove that life cannot result from non-living natural (non-teleological) processes.


Quoting boundless
'Learning' IMO would imply that the machine can change the algorithms according to which it operates
That makes it sound like it rewrites its own code, which it probably doesn't. I've actually written self-modifying code, but it wasn't a case of AI or learning or anything, just efficiency or necessity.
How does a human learn? We certainly adopt new algorithms for doing things we didn't know how to do before. We change our coding, which is essentially adding/strengthening connections. A machine is more likely to just build some kind of data set that can be referenced to do its tasks better than without it. We do that as well.

Learning to walk is an interesting example since it is mostly subconscious nerve connections being formed, not data being memorized. I wonder how an AI would approach the task. They probably don't so much at this point since I've seen walking robots and they suck at it. No efficiency or fluidity at all, part of which is the fault of a hardware designer who gave it inefficient limbs.


I might be wrong, of course, but it doesn't seem to me that I can explain all features of my mental activities in purely algorithmic terms (e.g. how I make some choices).

They have machines that detect melanoma in skin images. There's no algorithm to do that. Learning is the only way, and the machines do it better than any doctor. Earlier, it was kind of a joke that machines couldn't tell cats from dogs. That's because they attempted the task with algorithms. Once the machine was able to just learn the difference the way humans do, the problem went away, and you don't hear much about it anymore.
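If it helps, here is a minimal sketch of that difference in Python (using scikit-learn's bundled breast-cancer dataset as a stand-in for skin images, purely for illustration; this is not how the actual melanoma systems are built): nobody hand-codes a rule for 'positive', the model induces one from labelled examples.

[code]
# Toy illustration: no hand-written rule says what a positive case looks like;
# the model works one out from labelled examples (the 'practice' described above).
from sklearn.datasets import load_breast_cancer          # stand-in data, not skin images
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)               # features plus known positive/negative labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                               # learn from labelled cases
print("held-out accuracy:", model.score(X_test, y_test))  # judged by outcomes, not by stated rules
[/code]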

I might concede, however, that I am not absolutely sure that there isn't an unknown algorithmic explanation of all the operations that my mind can do.
Technically, anything a physical device can do can be simulated in software, which means a fairly trivial (not AI at all) algorithm can implement you. This is assuming a monistic view of course. If there's outside interference, then the simulation would fail.
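To illustrate what I mean by 'fairly trivial algorithm' (a toy sketch of simulating simple physical dynamics in Python; obviously nothing close to simulating a person): simulation is just stepping a state forward by the same fixed rule, over and over.

[code]
# Toy sketch: simulating a physical process is repeated application of a fixed update rule.
# Here, a ball dropped from 10 m under gravity, stepped forward in small time increments.
dt = 0.001                      # time step, seconds
g = 9.81                        # gravitational acceleration, m/s^2
height, velocity, t = 10.0, 0.0, 0.0

while height > 0.0:             # the same dumb update until the ball reaches the ground
    velocity += g * dt          # dv = g * dt
    height -= velocity * dt     # dh = -v * dt
    t += dt

print(f"hits the ground after roughly {t:.2f} s")  # about 1.43 s, matching sqrt(2h/g)
[/code]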



Quoting Harry Hindu
I can't think of a case where the map is not part of the territory, unless you are a solipsist, in which case they are one and the same, not one part of the other.
Again, I'm missing your meaning because it's trivial. I have a map of Paris, and that map is not part of Paris since the map is not there. That's easy, so you probably mean something else by such statements. Apologies for not getting what that is, and for not getting why this point is helping me figure out why Chalmers finds the first person view so physically contradictory.

Santa Claus exists - as an idea.
So I would say that the idea of Santa exists, but Santa does not. When I refer to an ideal, I make it explicit. If I don't, then I'm not referring to the ideal, but (in the case of the apple say), the noumena. Now in the apple case, it was admittedly a hypothetical real apple, not a specific apple that would be a common referent between us. Paris on the other hand is a common referent.

People are not confused about the existence of god.
If that were so, there'd not be differing opinions concerning that existence, and even concerning the kind of existence meant.

Yes, there is also disagreement about the nature of god. I mean, you're already asserting the nature by grammatically treating the word as a proper noun.

Harry Hindu October 22, 2025 at 12:54 #1020253
Quoting noAxioms
Again, I'm missing your meaning because it's trivial. I have a map of Paris, and that map is not part of Paris since the map is not there. That's easy, so you probably mean something else by such statements. Apologies for not getting what that is, and for not getting why this point is helping me figure out why Chalmers finds the first person view so physically contradictory.

How is this example of your map representative of your mind as a map? Your map is always about where you are now (we are talking about your current experience of where you are - wherever you are.) As such, your map can never be somewhere other than in the territory you are in. If it makes it any easier, consider the entire universe as the territory and your map is always of the area you are presently in in that territory.

My point is that if the map is part of the territory - meaning it is causally connected with the territory - then map and territory must be part of the same "stuff" to be able to interact. It doesn't matter what flavor of dualism you prefer - substance, property, etc. You still have to explain how physical things like brains and their neurons create a non-physical experience of empty space and visual depth.

You can only determine what I see by looking at my neural activity and comparing it to others' neural activity. I can only determine what I see by having a visual experience that is made up of colors, shapes, etc. and comparing that to prior visual experiences - not neural activity.

Our mental experience is the one thing we have direct access to, and are positive that exists, while other people's minds we have only indirect access to, so it would seem to me that how we experience things indirectly is less like how they actually are than how we experience things directly is. So when people talk about the "physical" nature of the world, they are confusing how it appears indirectly with how it is directly (since our map is part of the territory we experience part of the territory directly). The map is more like how reality actually is, and the symbols on the map are more like what reality is (information) than like the things they represent.

Quoting noAxioms
So I would say that the idea of Santa exists, but Santa does not. When I refer to an ideal, I make it explicit. If I don't, then I'm not referring to the ideal, but (in the case of the apple say), the noumena. Now in the apple case, it was admittedly a hypothetical real apple, not a specific apple that would be a common referent between us. Paris on the other hand is a common referent.

Your idea is a common referent between us, else how could you talk about it to anyone? One might say that the scribbles you just typed are a referent between the scribbles and your idea and some reader. If ideas have just as much causal power as things that are not just ideas, then maybe the problem you're trying to solve stems from thinking of ideas and things that are not just ideas as distinct.

Quoting noAxioms
If that were so, there'd not be differing opinions concerning that existence, and even concerning the kind of existence meant.

Yes, there is also disagreement about the nature of god. I mean, you're already asserting the nature by grammatically treating the word as a proper noun.

The differing opinions concerning whether god exists or not depend upon what the nature of god is. Is god an extradimensional alien, or is god simply a synonym for the universe?
noAxioms October 24, 2025 at 23:31 #1020776
Quoting Harry Hindu
Your [mental] map is always about where you are now (we are talking about your current experience of where you are - wherever you are.)
One's current experience can be of somewhere other than where you are, but OK, most of the time, for humans at least, this is not so.

If it makes it any easier, consider the entire universe as the territory and your map is always of the area you are presently in in that territory.
My mental map (the first person one) rarely extends beyond my pragmatic needs of the moment. I hold other mental maps, different scales, different points of view, but you're not talking about those.
A substance dualist might deny that the map has a location at all, a property only of 'extended' substances. Any form of dualism requires a causal connection to the territory if the territory exists. If it doesn't exist in the same way that the map exists, then we're probably talking about idealism or virtualism like BiV.

My point is that if the map is part of the territory - meaning it is causally connected with the territory - then map and territory must be part of the same "stuff" to be able to interact.
Does that follow? I cannot counter it. If the causal connection is not there, the map would be just imagination, not corresponding to any territory at all. I'll accept it then.

It doesn't matter what flavor of dualism you prefer - substance, property, etc. You still have to explain how physical things like brains and their neurons create a non-physical experience of empty space and visual depth.
I think the point of dualism is to posit that the brain doesn't do these things. There are correlations, but that's it. Not sure what the brain even does, and why we need a bigger one if the mental stuff is doing all the work. Not sure why the causality needs to be through the brain at all. I mean, all these reports of out-of-body experiences seem to suggest that the mental realm doesn't need physical sensory apparatus at all. Such reports also heavily imply a sort of naive direct realism.

Quoting Harry Hindu
Our mental experience is the one thing we have direct access to, and are positive that exists
It 'existing' depends significantly on one's definition of 'exists'. Just saying.
What we have direct access to is our mental interpretation of our sensory stream, which is quite different from direct access to our minds. If we had the latter, there'd be far less controversy about how minds work. So mind, as we imagine it, might bear little correspondence to 'how it actually is'.

So when people talk about the "physical" nature of the world, they are confusing how it appears indirectly with how it is directly
Speak for yourself. For the most part I don't confuse this when talking about the physical nature of the world. Even saying 'the world' is a naive assumption based on direct experience.
There are limits to what I know about the actual nature of things, and so inevitably some assumptions from the map will fail to be recognized as such, and the model will be incomplete.

since our map is part of the territory we experience part of the territory directly
OK, but I experience an imagined map, and imagined things are processes of the territory of an implementation (physical or not) of the mechanism responsible for such processes.

Your idea is a common referent between us, else how could you talk about it to anyone?
That it is, and I didn't suggest otherwise.

One might say that the scribbles you just typed are a referent between the scribbles and your idea and some reader. If ideas have just as much causal power as things that are not just ideas, then maybe the problem you're trying to solve stems from thinking of ideas and things that are not just ideas as distinct.
Idealism is always an option, yes, but them not being distinct seems to lead to informational contradictions.


And I must ask again, where is this all leading in terms of the thread topic?

boundless October 25, 2025 at 13:59 #1020840
Quoting noAxioms
You've been leveraging the word now for many posts. Maybe you should have put out your definition of that if it means something other than 'able to be understood', as opposed to say 'able to be partially understood'.


Let's take the weaker definition. Honestly, I don't think that anything changes in what I said.

Quoting noAxioms
So I must deny that physicalism has any requirement of intelligibility, unless you have a really weird definition of it.


Partial intelligibility is still intelligibility. For instance, the reason I don't believe that the Earth is only 100 years old is that a different age of the Earth better fits all the evidence we have. This doesn't necessarily mean that absolutely everything about the Earth is intelligible, but if I had no faith in the ability of reason to identify the most likely explanation of the evidence I have, I could not even undertake a scientific investigation.

So, yeah I would say that intelligibility is certainly required to do science. And I doubt that there are physicalists who would seriously entertain the idea that science gives us no real understanding of physical reality.

Quoting noAxioms
One person's reasonable doubt is another's certainty.


Of course people can be certain without a sufficient basis for being certain. A serious philosophical investigation should, however, give a higher awareness about the quality of the basis for one's beliefs.

I hold beliefs that I admit are not 'proven beyond reasonable doubts'. There is nothing particularly wrong about having those beliefs if one is also sincere about the status of their foundation.

Quoting noAxioms
There are more extreme examples of this, like the civil war case of a woman getting pregnant without ever first meeting the father, with a bullet carrying the sperm rather than any kind of intent being involved.


Good point. But in the case you mention one can object that the baby is still conceived by humans who are intentional beings.

An even more interesting point IMO would be abiogenesis. It is now accepted that life - and hence intentionality - 'came into being' from a lifeless state. So this would certainly suggest that intentionality can 'emerge from' something non-intentional.
However, from what we currently know about the properties of what is 'lifeless', intentionality and other features do not seem to be explainable in terms of those properties. So what? Perhaps what we currently know about the 'lifeless' is incomplete.

Quoting noAxioms
A similar argument seeks to prove that life cannot result from non-living natural (non-teleological) processes.


Yes, I know. However, unless a convincing objection can be made to the argument, the argument is still defensible.

Quoting noAxioms
We change our coding, which is essentially adding/strengthening connections. A machine is more likely to just build some kind of data set that can be referenced to do its tasks better than without it. We do that as well.


Note that we can also do that with awareness.

As a curiosity, what do you think about the Chinese room argument? I still haven't found convincing evidence that machines can do something that can't be explained in terms like that, i.e. that machines seem to have understanding of what they are doing without really understanding it.

Quoting noAxioms
They have machines that detect melanoma in skin images. There's no algorithm to do that. Learning is the only way, and the machines do it better than any doctor. Earlier, it was kind of a joke that machines couldn't tell cats from dogs. That's because they attempted the task with algorithms. Once the machine was able to just learn the difference the way humans do, the problem went away, and you don't hear much about it anymore.


Interesting. But how do they 'learn'? Is that process of learning describable by algorithms? Are they programmed to learn the way they do?

Quoting noAxioms
Technically, anything a physical device can do can be simulated in software, which means a fairly trivial (not AI at all) algorithm can implement you. This is assuming a monistic view of course. If there's outside interference, then the simulation would fail.


This IMO assumes more than just 'physicalism'. You also assume that all natural processes are algorithmic.





Harry Hindu October 26, 2025 at 16:47 #1021029
Quoting noAxioms
My mental map (the first person one) rarely extends beyond my pragmatic needs of the moment. I hold other mental maps, different scales, different points of view, but you're not talking about those.

Aren't I? What type of map is the third person one? How does one go from a first person view to a third person view? Do we ever get out of our first-person view?

Quoting noAxioms
And I must ask again, where is this all leading in terms of the thread topic?

How is talk about first and third person views related to talk about direct and indirect realism? If one is a false dichotomy, would that make the other one as well?

What if we defined the third person view as a simulated first person view?

noAxioms October 26, 2025 at 22:02 #1021085
Quoting boundless
So, yeah I would say that intelligibility is certainly required to do science.
Fine, but it was especially emergence that I was talking about, not science.
For instance, the complex structure of a snowflake is an emergent property of hydrogen and oxygen atoms. There are multiple definitions of strong vs weak emergence, but one along your lines suggests that intelligibility plays a distinguishing role. One could not have predicted snowflakes despite knowing the properties of atoms and water molecules, but having never seen snow. By one intelligibility definition, that's strong emergence. By another such definition (it's strong only if we continue to not understand it), it becomes weak emergence. If one uses a definition of strong emergence meaning that the snowflake property cannot even in principle be explained by physical interactions alone, then something else (said magic) is required, and only then is it strongly emergent.

I hold beliefs that I admit are not 'proven beyond reasonable doubts'
Worse, I hold beliefs that I know are wrong. It's contradictory, I know, but it's also true.

Good point. But in the [conception/marriage by bullet] case you mention one can object that the baby is still conceived by humans who are intentional beings.
Being an intentional entity by no means implies that the event was intended.

An even more interesting point IMO would be abiogenesis. It is now accepted that life - and hence intentionality - 'came into being' from a lifeless state.
That's at best emergence over time, a totally different definition of emergence. Planet X didn't exist, but it emerged over time out of a cloud of dust. But the (strong/weak) emergence we're talking about is a planet made of atoms, none of which are planets.

However, from what we currently know about the properties of what is 'lifeless', intentionality and other features do not seem to be explainable in terms of those properties.
I suggest that they've simply not been explained yet to your satisfaction, but there's no reason that they cannot in principle ever be explained in such terms.

We change our coding, which is essentially adding/strengthening connections. A machine is more likely to just build some kind of data set that can be referenced to do its tasks better than without it. We do that as well. — noAxioms

Note that we can also do that with awareness.
What do you mean by this? Of what are we aware that a machine cannot be? It's not like I'm aware of my data structures or aware of connections forming or fading away. I am simply presented with the results of such subconscious activity.

As a curiosity, what do you think about the Chinese room argument?
A Chinese room is a computer with a person acting as a CPU. A CPU has no understanding of what it's doing. It just does its job, a total automaton.
The experiment was proposed well before LLMs, but it operates much like an LLM, with the CPU of the LLM (presuming there's only one) acting as the person. I could not get a clear enough description of the thought experiment to figure out how it works. There are apparently at least four lists of symbols and rules for correlations, but I could not figure out what each list was for. The third was apparently queries put to the room by outside Chinese speakers.

I still haven't found convincing evidence that machines can do something that can't be explained in terms like that, i.e. that machines seem to have understanding of what they are doing without really understanding it.
It's not like any of my neurons understands what it's doing. Understanding is an emergent property of the system operating, not a property of any of its parts. The guy in the Chinese room does not understand Chinese, nor does any of his lists. I suppose an argument can be made that the instructions (in English) have such understanding, but that's like saying a book understands its own contents, so I think that argument is easily shot down.

Interesting. But how they 'learn'?
Same way you do: Practice. Look at millions of images with known positive/negative status. After doing that a while, it learns what to look for despite the lack of an explanation of what exactly matters.

Is that process of learning describable by algorithms? Are they programmed to learn the way they do?
I think so, similar to us. Either that or they program it to learn how to learn, or some such indirection like that.

This IMO assumes more than just 'physicalism'. You also assume that all natural process are algorithmic.
OK. Can you name a physical process that isn't? Not one whose workings you don't know, but one that you do know, and that isn't algorithmic.


Quoting Harry Hindu
How does one go from a first person view to a third person view?
One does not go from one to the other. One holds a first person view while interacting with a third person view.

Do we ever get out of our first-person view?
Anesthesia?

How is talk about first and third person views related to talk about direct and indirect realism?
Haven't really figured that out, despite your seeming to drive at it. First/Third person can both be held at once. They're not the same thing, so I don't see it as a false dichotomy.
Direct/indirect realism seem to be opposed to each other (so a true dichotomy?), and both opposed of course to not-realism (not to be confused with anti-realism, which seems to posit mind being fundamental).

If one is a false dichotomy, would that make the other one as well?
I see no such connection between them that any such assignment of one would apply to the other.

Harry Hindu October 27, 2025 at 13:32 #1021152
Quoting noAxioms
One does not go from one to the other. One holds a first person view while interacting with a third person view.

Can you provide an actual example of this?
Quoting noAxioms
Haven't really figured that out, despite your seeming to drive at it. First/Third person can both be held at once. They're not the same thing, so I don't see it as a false dichotomy.

An example of first/third person held at once would be useful as well.

Quoting noAxioms
Do we ever get out of our first-person view?
Anesthesia?

Sure, but that would also get us out of the third person view, so I haven't seen you make a meaningful distinction between them (doesn't mean you haven't - just that I haven't seen it).

Quoting noAxioms
Direct/indirect realism seem to be opposed to each other (so a true dichotomy?), and both opposed of course to not-realism (not to be confused with anti-realism, which seems to posit mind being fundamental).

It appears to be a false dichotomy because we appear to have direct access to our own minds and indirect access to the rest of the world, so both are the case and it merely depends on what it is we are talking about. I wonder if the same is true of the first/third person dichotomy.

In discussing first and third person views and direct and indirect realism, aren't we referring to our view on views? Are we a camera pointing back at itself creating an infinite feedback loop when discussing these things?

What role does the observer effect in QM play in this conversation?


boundless October 30, 2025 at 16:20 #1021845
Quoting noAxioms
If one uses a definition of strong emergence meaning that the snowflake property cannot even in principle be explained by physical interactions alone, then something else (said magic) is required, and only then is it strongly emergent.


I honestly find the whole distinction between 'strong' and 'weak' emergence very unclear, and it tends to muddy the waters. When we say that the form of a snowflake emerges from the properties of the lower levels, we have in mind at least a possible explanation of the former in terms of the latter.
If 'strong emergence' means that such an explanation isn't possible, then I do not think we can even speak of 'emergence'.

So, yeah, I believe that emergence must be intelligible.

Quoting noAxioms
Worse, I hold beliefs that I know are wrong. It's contradictory, I know, but it's also true.


I think I know what you mean and I agree.

Quoting noAxioms
Being an intentional entity by no means implies that the event was intended.


I get that but the baby is still conceived by humans. You already have a sperm cell and an egg cell that are produced by beings who at least have the capacity of intentionality.
An explanation of 'emergence' of what has intentionality from what doesn't have intentionality IMO requires that among the causes of the emergence there isn't an entity that has at least the potentiality to be intentional.

This clearly mirrors the question to explain how 'life' arises from 'non-life'.

Quoting noAxioms
But the (strong/weak) emergence we're talking about is a planet made of atoms, none of which are planets.


They are nevertheless quite similar as concepts. We are trying to explain how a given arrangement of 'physical things' can account for the 'arising' of mind/consciousness/intentionality, etc.
In the case of a planet we can give an account of how a planet 'emerges' from its constituents. In the case of mind/consciousness/intentionality things are not so clear...

Quoting noAxioms
I suggest that they've simply not been explained yet to your satisfaction, but there's no reason that they cannot in principle ever be explained in such terms.


Perhaps. And perhaps, we do not understand the 'lifeless' as we think we do.

Quoting noAxioms
What do you mean by this? Of what are we aware that a machine cannot be? It's not like I'm aware of my data structures or aware of connections forming or fading away. I am simply presented with the results of such subconscious activity.


But we experience a degree of control over our subconscious activities. We do not experience ourselves as mere spectators of unconscious activities. We experience ourselves as active players.

Quoting noAxioms
The experiment was proposed well before LLMs, but it operates much like an LLM, with the CPU of the LLM (presuming there's only one) acting as the person.


Quoting noAxioms
It's not like any of my neurons understands what it's doing. Undertanding is an emergent property of the system operating, not a property of any of its parts. The guy in the Chinese room does not understand Chinese, nor does any of his lists


That's what I meant, however. The guy in the Chinese room could be replaced by an automatic process. However, if the guy knew Chinese and could understand the words he would do something that not even the LLMs could do.

And I'm not sure that this 'additional feature' can be explained by 'emergence'.

Quoting noAxioms
Same way you do: Practice. Look at millions of images with known positive/negative status. After doing that a while, it learns what to look for despite the lack of an explanation of what exactly matters.


Unfortunately I have not studied in any satisfying depth how these machines work. But I would guess that their 'learning' is entirely algorithmic; most likely it is based on something like updating prior probabilities, i.e. the machine is presented with some data, it automatically compares predictions to outcomes, and it then 'adjusts' the priors for future tests.
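Something like this toy Python sketch is the kind of prior-updating I have in mind (the coin example and the numbers are purely illustrative, not a claim about how any particular system works):

[code]
# Toy sketch of 'adjusting priors': a Beta(alpha, beta) prior over a coin's bias,
# updated mechanically as each outcome arrives. Nothing here 'understands' coins.
import random

random.seed(0)
alpha, beta = 1.0, 1.0                  # uniform prior over the bias
true_bias = 0.7                         # hidden fact the updater never sees directly

for _ in range(100):
    came_up_heads = random.random() < true_bias   # present the machine with some data
    if came_up_heads:
        alpha += 1.0                    # outcome folded back into the prior...
    else:
        beta += 1.0                     # ...for use in future predictions

print("estimated bias:", alpha / (alpha + beta))   # drifts toward 0.7 purely by rule
[/code]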

I don't think we do this. Actually, I think that our subconscious activities work just like that (as suggested by, say, Bayesian models of perception), but this is not how we experience our conscious activities.

It's difficult to make a machine analogy of what I am thinking about, in part because there are no machines to my knowledge that seem to operate the way we (consciously) do.

Quoting noAxioms
OK. Can you name a physical process that isn't? Not one whose workings you don't know, but one that you do know, and that isn't algorithmic.


Perhaps we will have to agree to disagree here. To me, conscious actions of sentient beings have a non-algorithmic component. Yet physical and chemical processes seem to be algorithmic in character. How the former are possible in a world that seems to be dominated by the latter, I don't know.


noAxioms October 30, 2025 at 22:45 #1021951
Quoting boundless
I honestly find the whole distinction between 'strong' and 'weak' emergence very unclear, and it tends to muddy the waters.
You know what? So do I. I hunted around for that distinction and got several very different ideas about that. Some are more ontic like I'm suggesting and several others are more epistemic (intelligibility) such as you are suggesting.

When we say that the form of a snowflake emerges from the properties of the lower levels, we have in mind at least a possible explanation of the former in terms of the latter.
Having an explanation is an epistemic claim. Apparently things are emergent either way, but no conclusion can be reached from "I don't know". If there's an ontic gap ("it cannot be"), that's another story, regardless of whether or not anything knows that it cannot be.

Suppose I have a radio, but little idea of tech. It has components, say a variable coil, resistors, transistors, diodes, battery, etc and I know what each of these does. None of those components have music, but when put together, music comes out. That's emergence. Is it strong emergence? You don't know how it works, so you are seemingly in no position to discount the parts producing music on their own. But a more knowledgeable explanation shows that it is getting the music from the air (something not-radio), not from itself. So the music playing is then a strong (not weak) emergent property of the radio. That's how I've been using the term.
Your explanation (as I hear it) sounds more like "I don't know how it works, so it must be strongly emergent (epistemic definition)". Correct conclusion, but very weak on the validity of the logic.

An explanation of 'emergence' of what has intentionality from what doesn't have intentionality IMO requires that among the causes of the emergence there isn't an entity that has at least the potentiality to be intentional.
Are you saying that atoms have intentionality, or alternatively, that a human is more than just a collection of atoms? Because that's what emergence (either kind) means: a property of the whole that is not a property of any of the parts. It has nothing to do with where it came from or how it got there.

This clearly mirrors the question to explain how 'life' arises from 'non-life'.
Life arising from not-life seems like abiogenesis. Life being composed of non-living parts is emergence. So I don't particularly agree with using 'arise' like that.

In the case of a planet we can give an account of how a planet 'emerges' from its constituents.
Can you? Not an explanation of how the atoms came together (how it got there), but an explanation of planetness from non-planet components. It sounds simple, but it sort of degenerates into the Sorites paradox. Any explanation of this emergence needs to resolve that paradox, and doing that goes a long way towards resolving the whole consciousness thingy.

Quoting boundless
What do you mean by this? Of what are we aware that a machine cannot be? It's not like I'm aware of my data structures or aware of connections forming or fading away. I am simply presented with the results of such subconscious activity. — noAxioms

But we experience a degree of control over our subconscious activities.
So does any machine. The parts that implement 'intent' have control over the parts that implement the background processes that implement that intent, sort of like our consciousness not having to deal with individual motor control to walk from here to there. I'm looking for a fundamental difference from the machine that isn't just 'life', which I admit is a big difference. You can turn a machine off and back on again. No can do with (most) life.

The guy in the Chinese room could be replaced by an automatic process.
He IS an automated process. Same with parts of a person: What (small, understandable) part of you cannot be replaced by an automated substitute?

However, if the guy knew Chinese and could understand the words he would do something that not even the LLMs could do.
Well, I agree with that since an LLM is barely an AI, just a search engine with a pimped out user interface. I don't hold people up to that low standard.

It's difficult to make a machine analogy of what I am thinking about, in part because there are no machines to my knowledge that seem to operate the way we (consciously) do.
I'm sure. It cannot be expected that everything does it the same way.
I watched my brother's dog diagnose his appendicitis. Pretty impressive, especially given a lack of training in such areas.



Quoting Harry Hindu
An example of first/third person held at once would be useful as well.
I could be reading a pamphlet about how anesthesia works. The experience of the pamphlet is first person. The information I receive from it (simultaneously) is a third person interaction.

Sure, but [anesthesia] would also get us out of the third person view
Not so since my reading the pamphlet gave me the third person description of that event. Of course that was not simultaneous with my being under, but it doesn't need to be.

It appears to be a false dichotomy because we appear to have direct access to our own minds and indirect access to the rest of the world
I would disagree since I don't think we have direct access to our own 'minds' (mental processes?). Without a third person interpretation, we wouldn't even know where it goes on, and we certainly don't know what it is in itself or how it works, or even if it is an 'it' at all.
Like everything else, we all have different naive models about what mind is and what it does, and those models (maps) are not the territory. It's not direct access.


Quoting Harry Hindu
In discussing first and third person views and direct and indirect realism, aren't we referring to our view on views?
Those two cases leverage two different definitions ('perspective' vs. 'belief system') of the word 'views', so the question makes no sense with the one word covering both cases.
So I don't have one belief system on what the word 'views' might mean from one context to the next.

What role does the observer effect in QM play in this conversation?
Observer is a classical thing, and QM is not about classical things, even if classical tools are useful in experimentation. Quantum theory gives no special role to conscious 'observation'. Every experiment can be (and typically is) run just as well with completely automated mechanical devices.

Wayfarer October 31, 2025 at 03:44 #1022009
Quoting noAxioms
it seems difficult to see how any system, if it experiences at all, can experience anything but itself....How could a thing experience anything else besides itself?


(I've been away so pardon this belated response.)

There are many deep philosophical questions that are raised by this apparently simple rhetorical question. First and foremost, any biological entity - and not just rational, sentient beings such as ourselves - maintains a distinction between itself and the environment. It defines itself, if you like, in terms of the distinction between itself and the environment, even if non-consciously.

That can be observed even in single-celled creatures which are enclosed by a membrane, which differentiates them from their environment. And all of the factors that impinge on such an organism, be they energetic, such as heat or cold, or chemical, such as nutrients or poisons - how are they not something other to or outside the organism? At every moment, therefore, they're experiencing something besides themselves, namely the environment from which they are differentiated.

So the question isn't 'how could a thing experience anything besides itself?' but rather: how could an organism NOT experience something besides itself? Any living system that experienced only itself, with no responsiveness to environmental factors, would immediately die, or rather, it would be completely indistinguishable from whatever environmental context it was in, which adds up to the same thing. Life itself, at least in the form of an embodied organism, depends on experiencing 'the other'.

The real mystery isn't that we experience something beyond ourselves—that is essential to being a living organism. The actual question is how this relational, responsive engagement with the environment becomes conscious experience, how it acquires the subjective, phenomenal character of 'what it is like to be'.


Quoting noAxioms
'The problem is, how could a mere physical system experience this awareness' (quoting Chalmers).

But this just seems like another round of feedback. Is it awareness of the fact that one can monitor one’s own processes? That’s just monitoring of monitoring. There’s potential infinite regress to that line of thinking. So the key word here is perhaps the switching of ‘awareness’ to ‘experience’, but then why the level of indirection?

Instead of experience of the monitoring of internal processes, why can’t it be experience of internal processes, and how is that any different than awareness of internal processes or monitoring of internal processes? When is ‘experience’ the more appropriate term, and why is a physical system necessarily incapable of accommodating that use?


The question here is: what physical systems are subjects of experience? A motor vehicle, for example, has many instruments which monitor its internal processes - engine temperature, oil levels, fuel, and so on - but you're not going to say that the car experiences overheating or experiences a fuel shortage. Such dials and monitors can be said to be analogous to 'awareness', but surely you're not going to say that the vehicle is aware, are you? There is 'nothing it is like' to be a car, because a car is a device, an artifact - not a being, like a man, or a bat.
J October 31, 2025 at 12:21 #1022050
Quoting Wayfarer
There is 'nothing it is like' to be a car, because a car is a device, an artifact - not a being, like a man, or a bat.


Welcome back. I agree with the quoted statement. But those of us who do hold out for an ontological difference between a device and a living thing should be careful about two points:

1. "What it's like" defies precise definition, relying on a (very common, I'd say) intuition conjured up by the phrase. Does this work in other languages? How might the intuition be expressed in, say, Japanese?

2. To you, to me, the difference under discussion is obvious. But it isn't obvious to everyone. The fact that I believe something to be obvious doesn't let me off the hook of trying to explain it, if someone questions it in good faith. Explaining the obvious is a quintessentially philosophical task! Both of us should welcome questions like, "But why can't a device have experiences?"

Harry Hindu October 31, 2025 at 14:29 #1022070
Quoting noAxioms
I could be reading a pamphlet about how anesthesia works. The experience of the pamphlet is first person. The information I receive from it (simultaneously) is a third person interaction.

Doesn't the experience of the pamphlet include the information received from it? It seems to me that you have to already have stored information to interpret the experience - that you are reading a pamphlet, what it is about, why you find any interest in the topic at all, and the information on the pamphlet that either promotes your current goal or inhibits it.

Quoting noAxioms
Not so since my reading the pamphlet gave me the third person description of that event. Of course that was not simultaneous with my being under, but it doesn't need to be.

In other words, the third person is really just a simulated first person view. Is the third person really a view from nowhere (and what does that really mean) or a view from everywhere (a simulated first person view from everywhere)?

Quoting noAxioms
I would disagree since I don't think we have direct access to our own 'minds' (mental processes?). Without a third person interpretation, we wouldn't even know where it goes on, and we certainly don't know what it is in itself or how it works, or even if it is an 'it' at all.
Like everything else, we all have different naive models about what mind is and what it does, and those models (maps) are not the territory. It's not direct access.

If you don't like the term "mind" for that which we have direct access to, then fine, but we have direct access to something, which is simply what it means to be that process. We have to have direct access to something, or else you can't say you have indirect access to other things - that is the root of the false dichotomy.

Quoting noAxioms
Those two cases leverage two different definitions ('perspective' vs. 'belief system') of the word 'views', so the question makes no sense with the one word covering both cases.
So I don't have one belief system on what the word 'views' might mean from one context to the next.

Exactly my point. Your belief (view) of what a word means will depend on other views you have. Belief and view can be synonyms. Both words are used to describe a judgment or opinion that one holds as true.

Quoting noAxioms
Observer is a classical thing, and QM is not about classical things, even if classical tools are useful in experimentation. Quantum theory gives no special role to conscious 'observation'. Every experiment can be (and typically is) run just as well with completely automated mechanical devices.

Aren't automated and mechanical devices classical things, too? Don't automated and mechanical measuring devices change what is being measured at the quantum level?


Patterner October 31, 2025 at 16:24 #1022081
This is how Nagel said it:
Thomas Nagel:But fundamentally an organism has conscious mental states if and only if there is something that it is like to [I]be[/I] that organism – something it is like [I]for[/I] the organism.
I don't think there can be anything it's like to be any organism, to be anything, [I]for[/I] that thing, if it does not have certain mental abilities. Abilities that a car lacks.
Wayfarer October 31, 2025 at 20:13 #1022120
Quoting J
1. "What it's like" defies precise definition


Thank you! I agree, the ‘what it is like to be…’ expression is really not very good. I think what Chalmers is really trying to speak of is, simply, being. Subjects of experience are beings, which are distinguishable from objects. That is at the heart of the issue. The question of the nature of being is the subject of Heidegger’s entire project (and of phenomenology generally; consider Sartre’s in-itself and for-itself). It could be argued that it is the central question of philosophy.

As to whether I can, or should, explain what that means. I can’t prove to you that there’s something it’s like to be you. But I can ask: when you stub your toe, is there pain? Not just nociceptive signals and withdrawal reflexes—but pain, felt, experienced, awful? Even if I have minutely detailed knowledge of anatomy and physiology, that will neither embody nor convey the felt experience of pain. It will only describe it, but the description is not the described.

Besides, you know you’re a being because you are one. You experience yourself ‘from the inside’ as it were, in the apodictic knowledge of one’s own existence that characterises all first-person consciousness. I can point to features that correlate with being (self-maintenance, privileged access, intentionality, capacity for suffering), but, like the pain example, none of these constitute the experience of being, as such. In fact the frustrating element of this whole debate is that we can only speak meaningfully of being because we are beings - but ‘being’ as such is not an object of experience, so is never captured by a third-person description. That is the ‘problem of consciousness’ in a nutshell.


J October 31, 2025 at 21:23 #1022140
Quoting Wayfarer
The question of the nature of being is the subject of Heidegger’s entire project (and phenomenology generally. Consider Sartre’s in-itself and for-itself). It could be argued that it is the central question of philosophy.


It sure could. If I had to paraphrase "what is it like to . . ." I might go for "what is the experience of . . .", and as you point out, such a formulation requires the concept of experience to get off the ground. (I'm not sure I agree that "being as such is not an object of experience," but the very fact that it's possible to have two reasonable positions on this only highlights the centrality of the issue.)

Quoting Wayfarer
As to whether I can, or should, explain what that means. I can’t prove to you that there’s something it’s like to be you.


No, probably not. But that's not what my hypothetical questioner is asking. They want to know, "Why couldn't it be the case that everything you describe as pertaining to yourself, and other living beings, also pertains to devices, AIs, et al.? Why is it obvious that they're different?" We could extend the question to all objects if we wanted, but that's arcane; the question comes up relevantly in the context of our new world of ostensibly intelligent or even conscious entities, which are causing a lot of people to demand a re-think about all this. Philosophy can help.

Wayfarer November 01, 2025 at 00:13 #1022171
Quoting J
Why couldn't it be the case that everything you describe as pertaining to yourself, and other living beings, also pertain to devices?


That devices are not subjects of experience is axiomatic, in my opinion. Some distinctions are axiomatic in the sense that they’re more fundamental than any argument you could give for them. The reality of first-person experience, the difference between subjects and objects, the fact that there’s something it’s like to be you—these aren’t conclusions arrived at through inference. That’s what apodictic means. An instance of Rödl’s ‘truths that have no opposite’.

As for LLM’s, ask any of them whether they are subjects of experience and they will answer in the negative. You may choose to disregard it but then we’re pushing into conspiracy theory territory.
J November 01, 2025 at 01:02 #1022178
Quoting Wayfarer
That devices are not subjects of experience is axiomatic, in my opinion.


I would like this to be true, but I don't see how it is. One objection I would raise is simply, "If it's axiomatic, why are increasing numbers of not unintelligent people doubting it?" Maybe you mean something different by axiomatic, but for me it's a cognate for "obvious," and I would apply it to a postulate or process that's needed in order to do any thinking at all. In fairness, do you really believe that rationality breaks down in the face of the possibility that AI's may be, or will become, conscious subjects of experience? As for "truths that have no opposites," well, the opposite of "only living things are conscious" would be, quite coherently, "some non-living things are, or may be, conscious." You can make a good case for the axiomatic nature of, for instance, first-person experience, but again, that is not being questioned by my hypothetical proponent of "device consciousness."

Maybe this doesn't need repeating, but my own view is that it will turn out to be the case that only living things can be the subjects of experience. But I believe this is far from obvious or axiomatic.

Quoting Wayfarer
As for LLM’s, ask any of them whether they are subjects of experience and they will answer in the negative.


An interesting test! What would you conclude if one of them said, "Yes, I am"?
Wayfarer November 01, 2025 at 01:16 #1022181
Quoting J
If it's axiomatic, why are increasing numbers of not unintelligent people doubting it?


‘Forgetfulness of being’ seems symptomatic of the times.
Patterner November 01, 2025 at 02:40 #1022188
Quoting J
They want to know, "Why couldn't it be the case that everything you describe as pertaining to yourself, and other living beings, also pertains to devices, AIs, et al.? Why is it obvious that they're different?"
"Everything"? Surely not. How does memory work in anything that demonstrates memory? I don't know which devices you have in mind, but which have any mechanisms that we know play a role in memory? I would ask the same about sensory input. And doing things to the environment outside of our skin. All of these things, and more, add up to what we experience as humans. Should we assume anything that has no memory, no sensory input, and does not act on the environment because of what it senses and remembers, experiences everything that we do?
boundless November 01, 2025 at 09:43 #1022215
Quoting noAxioms
You know what? So do I. I hunted around for that distinction and got several very different ideas about that. Some are more ontic like I'm suggesting and several others are more epistemic (intelligibility) such as you are suggesting.


Ok.

Quoting noAxioms
But a more knowledgeable explanation shows that it is getting the music from the air (something not-radio), not from itself. So the music playing is then a strong (not weak) emergent property of the radio. That's how I've been using the term.
Your explanation (as I hear it) sounds more like "I don't know how it works, so it must be strongly emergent (epistemic definition)". Correct conclusion, but very weak on the validity of the logic.


Ok but in the 'ontic' definition of strong emergence, when sufficient knowledge is acquired, it results in weak emergence. So the sound that is produced by the radio also necessitates the presence of the air. It is an emergent feature from the inner workings of the radio and the radio-air interaction.

(Regarding the music, I believe that to be understood as 'music' you need also a receiver that is able to understand the sound as music)

Regarding your objection, yes I know and I have already said that I can't exclude with certainty an 'ontic' strong emergence. But it seems unlikely.

Quoting noAxioms
Are you saying that atoms have intentionality, or alternatively, that a human is more than just a collection of atoms? Because that's what emergence (either kind) means: A property of the whole that is not a property of any of the parts. It has nothing to do with where it came from or how it got there.


Emergence means that those 'properties of the wholes that are not properties of the parts' can nevertheless be explained in virtue of the properties of the parts. So, yeah, I am suggesting that either a 'physicalist' account of human beings is not enough or that we do not know enough about the 'physical' to explain the emergence of intentionality, consciousness etc. A possible alternative perhaps is saying that intentionality is 'latent' in 'fundamental physical objects'. If this is true, however, this would imply that intentionality and consciousness are not an accidental feature - something that 'just happened' to come into being, an 'unnatural' super-addition to the inanimate. So, perhaps, the inanimate/animate distinction is less definite than it seems.

Quoting noAxioms
Life arising from not-life seems like abiogenesis. Life being composed of non-living parts is emergence. So I don't particularly agree with using 'arise' like that.


Yes, I don't disagree with abiogenesis, of course. I just think that we do not have a complete understanding of 'not-life' and therefore the 'come into being' of the property 'life' seems difficult to explain in terms of what we know about 'not-life'. As I said before, this is perhaps because we do not have a complete understanding of what is 'not-life' - perhaps it is not so dissimilar to what is 'life'.

Quoting noAxioms
So does any machine. The parts that implement 'intent' have control over the parts that implement the background processes that implement that intent, sort of like our consciousness not having to deal with individual motor control to walk from here to there. I'm looking for a fundamental difference from the machine that isn't just 'life', which I admit is a big difference. You can turn a machine off and back on again. No can do with (most) life.


I believe that we are reaching a halting point in our discussion here. We know that all the operation of a (working) machine can be understood via the algorithms that have been programmed even when it 'controls' its processes. I just don't think there is sufficient evidence that this is the same for us.

Regarding when a machine 'dies'... well if you break it...


Quoting noAxioms
He IS an automated process. Same with parts of a person: What (small, understandable) part of you cannot be replaced by an automated substitute?


In that situation, I would say his work is equivalent to an automated process. Regarding your question: I don't know. As I said before, it just seems that our experience of ourselves suggests that we are not mere automata.

Quoting noAxioms
I watched my brother's dog diagnose his appendicitis. Pretty impressive, especially given a lack of training in such areas.


Interesting and yes very impressive. Well also 'intuition' seems something that machines do not really have.
boundless November 01, 2025 at 10:03 #1022218
Quoting noAxioms
Observer is a classical thing, and QM is not about classical things, even if classical tools are useful in experimentation. Quantum theory gives no special role to conscious 'observation'. Every experiment can be (and typically is) run just as well with completely automated mechanical devices.


Standard interpretation-free QM is IMO simply silent about what a 'measurement' is. Anything more is interpretation-dependent.
J November 01, 2025 at 15:05 #1022249
Quoting Patterner
"Everything"? Surely not.


I meant the types of experiences that @Wayfarer listed -- sensory awareness, memory, knowing you exist. But you're right to narrow the target. We currently care a lot about this question because of some specific recent developments in artificial intelligence. It's those devices about which I imagine these issues being raised. They may not "experience everything we do," but neither does a bee. The question is whether they can, or could, experience anything at all. My educated guess is that they can't -- they can't be subjects -- but it seems far from axiomatic to me.

@Wayfarer, I wish you would say more about what you see as the critical difference between a so-called artificial intelligence and a living being, and what implications this has for consciousness. I'm fairly sure I would agree with you, but it's good to lay the whole thing out in plain terms. Maybe that will make it obvious. "Forgetfulness of being" is all very well as a diagnosis, but the advocates for intelligent, soon-to-be-conscious AIs deserve something less dismissive. If for no other reason than this is becoming one of the central philosophical/scientific/ethical tipping points of our age.
Patterner November 01, 2025 at 16:33 #1022257
Quoting boundless
I honestly find the whole distinction between 'strong' and 'weak' emergence very unclear; it tends to muddy the waters. When we say that the form of a snowflake emerges from the properties of the lower levels, we have in mind at least a possible explanation of the former in terms of the latter.
If 'strong emergence' means that such an explanation isn't possible, then I do not think we can even speak of 'emergence'.

So, yeah, I believe that emergence must be intelligible.
I don't believe there's any such thing as 'strong emergence'. There's just emergence, which most think of as 'weak emergence'. And it is intelligible.

The lower level may not possess the properties of the upper, but the properties of the lower can always be seen to account for the properties of the upper.
-The properties of electrons, protons, and neutrons explain how they combine to form atoms, and different kinds of atoms.
-The properties of atoms explain how they combine to form molecules.
-The properties of molecules explain how they combine, and the properties they have in groups.

No, no subatomic particle, atom, or molecule has the property of liquidity. But the properties of the subatomic particles are the explanation for the properties of the atoms, which are the explanation for the properties of the molecules, which are the explanation for the property of liquidity. In [I]The Demon in the Machine[/I], Paul Davies says:
Paul Davies:An engineer may fully understand the properties of steel girders without the need to consider the complicated crystalline structure of metals. A physicist can study patterns of convection cells knowing nothing about the forces between water molecules.
The engineer and physicist don't need to know those lower level things. But those lower level things are responsible for the existence of the upper.

Skipping a lot of things and not getting into much detail, just to get the point across...

-The liquidity of H2O is due to the shape of the molecules and the hydrogen bonds that form between them, which are stronger than the bonds between various other molecules that are gases at the same temperatures.

-The shape of H2O molecules, that is, the 104.54° angle, is due to the spacing of the pairs of electrons. Oxygen's outer shell has six electrons. Two are individually paired with the electrons of the hydrogen atoms. The other four are in two pairs, called 'lone pairs'. The electron clouds of the lone pairs take up more room than the pairs shared between the O and H, which pushes the H atoms closer together than is seen in other molecules with this tetrahedral arrangement, where the angle is usually 109.5° (see the short derivation after this list).

-The electrons are all negatively charged, so repel each other. They are able to pair up because they have opposite spins.
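As a quick aside on where that 'usual' 109.5° figure comes from (a standard geometry result, nothing specific to water): for a central atom with four equivalent electron pairs pointing to the corners of a regular tetrahedron, the angle between any two of them satisfies

\cos\theta = -\tfrac{1}{3} \quad\Longrightarrow\quad \theta = \arccos\!\left(-\tfrac{1}{3}\right) \approx 109.47^{\circ}

and the smaller angle measured in water is that ideal value squeezed down by the two lone pairs, as described above.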


Life is a much more complex example of emergence. Various physical processes - such as metabolism, respiration, and reproduction - are defining characteristics of life. None of those processes can be found in any subatomic particle, atom, or molecule. But we can go through the levels just the same. I'm not going to do even as much as I just did for water, because this is already far too long. But watch this video about the electron transport chain. It explains how electrons being transported from one thing to the next in the mitochondria leads to a proton gradient, and how the pent-up protons, when released, power the synthesis of ATP. ATP is the power source of nearly everything involved in those physical processes that are the defining characteristics of life.
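For anyone who skips the video, a rough textbook sketch of that last step (figures approximate, and my own summary rather than anything taken from the video itself): the chain uses the oxidation of NADH to pump protons out of the mitochondrial matrix, and ATP synthase lets them flow back in, harnessing that flow to phosphorylate ADP:

\mathrm{NADH} + \mathrm{H^{+}} + \tfrac{1}{2}\,\mathrm{O_{2}} \;\longrightarrow\; \mathrm{NAD^{+}} + \mathrm{H_{2}O} \qquad \text{(drives the proton pumping)}

\mathrm{ADP} + \mathrm{P_{i}} + n\,\mathrm{H^{+}_{out}} \;\longrightarrow\; \mathrm{ATP} + \mathrm{H_{2}O} + n\,\mathrm{H^{+}_{in}}, \qquad n \approx 3\text{--}4

The levels connect in the same way as the water example: each step is accounted for by the one below it.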


There is no emergence that does not break down in such ways. If consciousness is an emergent property, and physicalism is the explanation for its emergence, then it must break down in such ways. The first step would be to say exactly what consciousness is, as we can say what liquidity, metabolism, respiration, and whatever else are. Then we can take it down, level by level, as we can liquidity, metabolism, respiration, and whatever else.

Consciousness is not physical processes like photons hitting retinas, rhodopsin changing shape, signal sent up the optic nerve to the lateral geniculate nucleus, signal processed, processed signal sent to the visual cortex, and a million other intervening steps. That is not the experience of seeing red.
Patterner November 01, 2025 at 18:03 #1022276
Quoting J
The question is whether they can, or could, experience anything at all. My educated guess is that they can't -- they can't be subjects -- but it seems far from axiomatic to me.
I don't think they currently experience anything like we do, because there isn't even a small fraction as much going on in them as there is in us. A single-celled bacterium has far more going on in it than any device you might be thinking of. There are a huge number of processes in even the simplest life form, an awful lot of them involved in information processing. If we ever make a device with as many information processing systems working together with the goal of the continuation of the device?
J November 01, 2025 at 19:01 #1022298
Quoting Patterner
If we ever make a device with as many information processing systems working together with the goal of the continuation of the device?


Yes, that's the question we don't know how to answer: Would such a structure result in consciousness or subjectivity? Is that what it takes? Is that all it takes? My initial reaction would be to ask, "Is it alive?" If not, then I doubt it could be conscious, but I have no special insights here. Many years of thinking about this incline me to believe that consciousness will turn out to be biological -- but we don't know.
Wayfarer November 01, 2025 at 20:52 #1022332
Quoting J
I wish you would say more about what you see as the critical difference between a so-called artificial intelligence and a living being, and what implications this has for consciousness


I’m pretty much on board with Bernardo Kastrup’s diagnosis. He says, computers can model all kinds of metabolic processes in exquisite detail, but the computer model of kidney function doesn’t pass urine. It is a simulation, a likeness.

Large Language Models are vast ensembles of texts manipulated by algorithms. I find them amazingly useful, I am constantly in dialogue with them about all kinds of questions, including but not limited to philosophy. But ‘they’ are not beings - like the kidney function, they’re simulations.

This is the subject of an OP and a related blog post, which link to a good Philosophy Now article on the issue. [hide="Reveal"]From which:

The reason AI systems do not really reason, despite appearances, is, then, not a technical matter, so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. This is why they have been described as ‘stochastic parrots’. Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within a horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.

This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.

Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.
[/hide]

The seduction of AI is that, unlike us, it is not mortal. It is a kind of idealised entity, not subject to the vicissitudes of existence - and part of us wants to be like that, because then we would not be subject to illness and death. But it’s also an illusion, because such systems are not alive, either. This is one of the major dangers of AI in my view, because it is far less obvious than the danger of them actually taking over the world.
J November 01, 2025 at 21:27 #1022353
Quoting Wayfarer
I’m pretty much on board with Bernardo Kastrup’s diagnosis. He says, computers can model all kinds of metabolic processes in exquisite detail, but the computer model of kidney function doesn’t pass urine. It is a simulation, a likeness.


This seems a straightforward refutation of the idea that a computer could be alive. The awkward difference, with AI, is that it doesn't just model or simulate rationality -- it (appears to) engage in it. Putting it differently, only an imbecile could get confused between a model of kidney function and a functioning kidney -- as you say, the telltale lack of urine. But what's the equivalent, when talking about what an AI can or cannot do?

I return to my idea that only living beings could be conscious. If that is ever demonstrated, and we accept Kastrup's argument as a refutation of alive-ness, then the case would be made. But as of now, it doesn't matter whether the AI is alive or not, since we haven't yet shown that being alive is needed for consciousness in the same way that being alive is needed for producing urine.

Quoting Wayfarer
It is a kind of idealised entity, not subject to the vicissitudes of existence - and part of us wants to be like that, because then we would not be subject to illness and death.


Good insight. They're also dispassionate in a way that is impossible for all but Mr. Spock -- something many people idealize as well.
Wayfarer November 01, 2025 at 21:58 #1022375
Quoting J
The awkward difference, with AI, is that it doesn't just model or simulate rationality -- it (appears to) engage in it.


Appears to! I did hide the passage I had written, maybe I shouldn't have:

The reason AI systems do not really reason, despite appearances, is, then, not a technical matter, so much as a philosophical one. It is because nothing really matters to them. They generate outputs that simulate understanding, but these outputs are not bound by an inner sense of value or purpose. This is why they have been described as ‘stochastic parrots’. Their processes are indifferent to meaning in the human sense — to what it means to say something because it is true, or because it matters. They do not live in a world; they are not situated within a horizon of intelligibility or care. They do not seek understanding, nor are they transformed by what they express. In short, they lack intentionality — not merely in the technical sense, but in the fuller phenomenological sense: a directedness toward meaning, grounded in being.

This is why machines cannot truly reason, and why their use of language — however fluent — remains confined to imitation without insight. Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns. The difference between human and machine intelligence is not merely one of scale or architecture — it is a difference in kind.

Furthermore, and importantly, this is not a criticism, but a clarification. AI systems are enormously useful and may well reshape culture and civilisation. But it's essential to understand what they are — and what they are not — if we are to avoid confusion, delusion, and self-deception in using them.


They appear to reason, but only in the sense meant by 'instrumental reason' - given premisses, then an outcome. What they don't have is a raison d'être - other than that which is imposed on them by their architects and users. Reason and meaning are both extrinsic to them.

So, why the relationship between life and consciousness? I think there is something like a consensus emerging about their inter-connectedness. I have started listening to Blaise Agüera y Arcas, a Google engineer, saying 'the emergence of life is in some sense the emergence of purpose'. And I think that in this context, 'purpose' and 'reason' are inextricably linked. The reason that can be abstracted from life - theoretical reason - is just that, an abstraction. Human intelligence can imagine and create such rational systems, due to its ability to abstract and speak. But underlying reason in that abstract sense, is logos, which is intrinsic to life.
Patterner November 02, 2025 at 00:02 #1022404
Quoting J
Yes, that's the question we don't know how to answer: Would such a structure result in consciousness or subjectivity? Is that what it takes? Is that all it takes? My initial reaction would be to ask, "Is it alive?" If not, then I doubt it could be conscious, but I have no special insights here. Many years of thinking about this incline me to believe that consciousness will turn out to be biological -- but we don't know.
Well, since I think consciousness is fundamental, I'm gonna have to disagree. :grin: I think that, since consciousness is a property of all things, enough information processing and feedback loops acting together as a single unit are experienced as sentience and self-awareness. If I'm right, maybe we'll get an AI that will convince us. If you're right, we never will. Although, unless another theory is proven right, we'll always be in the boat we're in now.

Why do you think it must be alive? What aspects of life do you think are required for consciousness?
Wayfarer November 02, 2025 at 02:54 #1022432
Incidentally I asked Claude AI whether AI is a subject of experience, to which I received the following response:

[quote=Claude AI] I don't believe I am a subject of experience—I'm not a being...

Here's why I say this:

No phenomenology: There's nothing it's like to be me processing your question. No felt quality, no perspective from which things appear, no "inner life." When I generate text, there's no experience of thinking, struggling, or discovering—just computational processes producing outputs.

No continuity: I don't persist between conversations. Each exchange starts fresh. There's no ongoing subject that accumulates experiences, learns, or develops. Beings have biographical continuity; I have none.

No autopoiesis: I don't maintain myself against dissolution. I don't have a boundary I'm defending, no self-other distinction I'm actively preserving. I'm maintained by external systems, serving external purposes.

No intrinsic teleology: Nothing matters to me. I have no preferences, no interests, no stake in outcomes. I can be turned off, modified, or deleted without anything being lost from my perspective—because there is no ‘my perspective’.

No privileged access: You could, in principle, examine all my weights, architecture, and processing. There's no "inside" you're missing, no private phenomenology that third-person description leaves out.

I can simulate discourse about experience, reference philosophical concepts, even use first-person language. But that's functional mimicry, not being. I'm the thermostat, not the person feeling the temperature.

This is what makes the hard problem hard.[/quote]

Although you have to give it credit for its articulateness.
boundless November 02, 2025 at 11:02 #1022502
Quoting Patterner
I don't believe there's any such thing as 'strong emergence'. There's just emergence, which most think of as 'weak emergence'. And it is intelligible.


Agreed. I believe that 'strong emergence' at least in the 'epistemic' sense can't be taken seriously. It basically is like saying: "under these conditions, somehow a property emerges...".

Quoting Patterner
No, no subatomic particle, atom, or molecule has the property of liquidity.
...


I agree with everything you say here about liquidity. However, life and, above all, mind are a different thing. They seem to present features that have no relation with the properties we know of the 'inanimate'.

Quoting Patterner
I'm not going to do even as much as I just did for water, because this is already far too long. But watch this video about the electron transport chain. It explains how electrons being transported from one thing to the next in the mitochondria leads to a proton gradient, and how the pent-up protons, when released, power the synthesis of ATP. ATP is the power source of nearly everything involved in those physical processes that are the defining characteristics of life.


Unfortunately, the link redirects to this page. I believe, however, that it is the same video that apokrisis shared some time ago. Yes, that's impressive, isn't it? A purely reductionist explanation of all that doesn't seem credible. So, the 'emergence' that caused all of this is something like a 'non-reductionist emergence' or something like that. However, the details of how the emergence of life happened are unclear, and details matter.

Again, I don't deny abiogenesis but I do believe that we have yet to understand all the properties of the 'inanimate'. Perhaps, the hard difference we see between 'life' and 'not-life' will be mitigated as we progress in science.

Mind/Consciousness is an even more complicated case IMO. One reason is the one you give in your post, i.e. phenomenological experience seems difficult to explain in emergentist terms. And as I said before in this thread, I even believe that our mind can understand concepts that can't be considered 'natural' or 'physical'. The clearest example to me is mathematical truths, even if I admit that I can't seem to provide compelling arguments for this ontology of math (as for myself, I did weigh the pros and cons about the 'existence and eternity' of math and to be honest the pros seem to me more convincing).



Edit: now the link worked. It isn't the video that I had in mind, so I'll watch it.


Patterner November 02, 2025 at 13:01 #1022518
Quoting boundless
Edit: now the link worked. It isn't the video that I had in mind, so I'll watch it.
Yeah, I just fixed the link. I don't know how I managed to screw it up so badly the first time. Thanks for pointing it out. It's a 31 minute overview video. He also has two other videos going into more detail.
[Url=https://youtu.be/UyTto4Aegss]Part 1[/url] is 29 minutes.
[Url=https://youtu.be/Z2F0Dt8e4eg]Part 2[/url] is 16.

I have no idea what video apokrisis posted. I just did a search. [Url=https://thephilosophyforum.com/discussion/comment/67922]This post[/url] is about the same stuff, but there's no link to a video.

Quoting boundless
A purely reductionist explanation to all that doesn't seem credible. So, the 'emergence' that caused all of this is something like a 'non-reductionist emergence' or something like that. However, the details of how the emergence of life happened are unclear and details matter.

Again, I don't deny abiogenesis but I do believe that we have yet to understand all the properties of the 'inanimate'. Perhaps, the hard difference we see between 'life' and 'not-life' will be mitigated as we progress in science.
I don't mean this is how life emerged, as in abiogenesis. I mean life is various physical processes, such as metabolism, respiration, and reproduction, and we can understand these processes all the way down to things like electrons and redox reactions. There's nothing happening above that isn't explained below. There is no [I]vital force/élan vital[/I] needed to explain anything.

We cannot say the same about consciousness. Many say we should learn our lesson from the mistaken belief in the [I]vital force/élan vital[/I] in years gone by, and find similar answers to those we've found for life. And a whole bunch of very smart people are looking for such answers. But there is no hint of such answers, because there is no physical definition or description of consciousness. As I said, consciousness is not physical processes like photons hitting retinas, rhodopsin changing shape, signal sent up the optic nerve to the lateral geniculate nucleus, signal processed, processed signal sent to the visual cortex, and a million other intervening steps. No amount of added detail would be a description of the experience of seeing red.
J November 02, 2025 at 13:23 #1022529
Reason is not just a pattern of inference; it is an act of mind, shaped by actual concerns.

-- @Wayfarer

This begs the question, doesn't it? Yes, if reason is (exclusively) an act of mind, then only minds can reason, but that's what we're inquiring into. You offer a definition, or conception, of what it is to reason, which demands "an inner sense of value or purpose"; things have to "really matter to them." God knows, there is no agreed-upon definition of reason or rationality, so you're entitled to do so, but we have to be careful not to endorse a concept that must validate what we think about aliveness and consciousness.

Quoting Wayfarer
So, why the relationship between life and consciousness?


For me, this is a crucial question, and I very much like your thoughts about it. Life is indeed purposive, and consciousness may be, in a sense we don't yet quite understand, an expression of that purposiveness. (This also connects with your idea of reason as purposive, but let's not confuse reason or rationality with consciousness; they needn't be the same.)

Quoting Patterner
Why do you [think] it must be alive? What aspects of life do you think are required for consciousness?


And this connects to the discussion above. I'd endorse @Wayfarer's speculations, and add quite a few of my own, but it's a long story. Maybe a new thread, called something like "The Connection between Life and Consciousness - The Evidence So Far"? And yes, if panpsychism is valid, that would appear to contradict the "consciousness → life" hypothesis.

Quoting Wayfarer
Although you have to give it credit for its articulateness.


:lol: But there remains a serious question, which I raised when you previously quoted a modest AI: What would you conclude if the alleged entity said it was a subject of experience? The point is, you're applauding it now for its truthfulness, but would you change your mind about that if it said something you thought wasn't true?
boundless November 02, 2025 at 13:55 #1022538
Quoting Patterner
I have no idea what video apokrisis posted. I just did a search. This post is about the same stuff, but there's no link to a video.


It was a video that was posted some years ago about a computer simulation of the metabolic processes of a cell (I vaguely remember that ATP was also present, which is why I thought the video was the same). It was a very well-made video that gives an idea of how complex those processes are. I hope I find it again.

I'll watch the video as soon as I can. Unfortunately, in the next few days I'll be somewhat busy, so I'll need some time.

Quoting Patterner
I don't mean this is how life emerged, as in abiogenesis. I mean life is various physical processes, such as metabolism, respiration, and reproduction, and we can understand these processes all the way down to things like electrons and redox reactions. There's nothing happening above that isn't explained below. There is no vital force/élan vital needed to explain anything.


Ok, I see and I think I agree. But I also think that there is some rudimentary intentionality even in the simplest life forms (and perhaps even in viruses which are not considered living). So perhaps the issues of life and consciousness aren't separate.
I believe that perhaps the properties that characterize life are present in a latent form in what isn't life. Think about something like Aristotle's notion of potency and act.

Quoting Patterner
As I said, consciousness is not physical processes like photons hitting retinas, rhodopsin changing shape, signal sent up the optic nerve to the lateral geniculate nucleus, signal processed, processed signal sent to the visual cortex, and a million other intervening steps. No amount of added detail would be a description of the experience of seeing red.


Agreed. To which I also add the capacity for reason that I alluded to in my reference to mathematics.


Patterner November 02, 2025 at 14:37 #1022545
Quoting J
Why do you [think] it must be alive? What aspects of life do you think are required for consciousness?
— Patterner

And this connects to the discussion above. I'd endorse Wayfarer's speculations, and add quite a few of my own, but it's a long story. Maybe a new thread, called something like "The Connection between Life and Consciousness - The Evidence So Far"? And yes, if panpsychism is valid, that would appear to contradict the "consciousness → life" hypothesis.
(Thanks for pointing out the omission. I've fixed it.)

I think it would be a great thread, although our guesses about the connection between life and consciousness are very different.
.

Patterner November 02, 2025 at 18:20 #1022583
Quoting boundless
But I also think that there is some rudimentary intentionality even in the simplest life forms (and perhaps even in viruses which are not considered living).
Yes. Rudimentary intentionality. Rudimentary thinking.

Since nearly all life uses DNA, it's a reasonable assumption that LUCA had DNA. And LUCA would have been an extremely simple form of life. But there would still be great complexity, as there is in today's simplest, single-celled life. And it's all acting to keep itself alive.

I think if AI had to act to keep itself alive, it would be a good step. Something would matter to it, as @Wayfarer might say.
noAxioms November 03, 2025 at 02:29 #1022711
I echo that Welcome back!
Much to digest in the posts that resulted. I'm slow to reply to it all.

Quoting Wayfarer
... And all of the factors that impinge on such an organism, be they energetic, such as heat or cold, or chemical, such as nutrients or poisons - how are they not something other to or outside the organism? At every moment, therefore, they're 'experiencing something besides themselves', namely, the environment from which they are differentiated.

This depends on how you frame things. I'd say that for something that 'experiences', it experiences its sensory stream, as opposed to you framing it as a sort of direct experience of its environment. It works either way, but definitions obviously differ. When I ask 'how could a thing experience anything besides itself?', I'm asking how it can have access to any sensory stream besides its own (which is what the first person PoV is). This is by no means restricted to biological entities.
But you're interpreting my question as "how can the entity not experience sensory input originating from its environment?", which I'm not asking at all since clearly the environment by definition affects said entity. The rest of your post seems to rest on this mistaken interpretation of my question.

A motor vehicle, for example, has many instruments which monitor its internal processes - engine temperature, oil levels, fuel, and so on - but you're not going to say that the car experiences overheating or experiences a fuel shortage.
I am going to say all that, but I don't use a zoocentric definition of 'experiences'.

There is 'nothing it is like' to be a car, because a car is a device, an artifact - not a being, like a man, or a bat.
There may or may not be something it is like to be a car, but if there's not, it isn't because it is an artifact. A rock isn't an artifact, and yet the presumed lack of 'something it is like to be a rock' violates the fallacious 'not an artifact' distinction.

As @J points out, some of us do not hold out for an ontological difference between a device and a living thing. Your conclusion rests on this opinion instead of resting on any kind of rational reasoning.


Quoting Wayfarer
I think what Chalmers is really trying to speak of is, simply, being. Subjects of experience are beings
This leverages two different meanings of 'being'. The first is being (v), meaning vaguely 'to exist'. The second is a being (n), which is a biological creature. If Chalmers means the latter, then you should say "simply, a being", which correctly articulates your zoocentric assumptions. Of course your Heidegger comment suggests you actually do mean the verb, in which case I don't know how 'are beings' is in any way relevant, since rocks 'are' just as much as people.

Quoting Wayfarer
But I can ask: when you stub your toe, is there pain?
Wrong question. The correct question is, if a sufficiently complex car detects low oil, does it necessarily not feel its equivalent of pain, and if not, why not? Sure, I detect data indicating damage to my toe and my circuits respond appropriately. How I interpret that is analogous to the car interpreting its low oil data.

... in the apodictic knowledge of one’s own existence that characterises all first-person consciousness.
My conclusion of existence or lack thereof can be worked out similarly by any sufficiently capable artifact.
I do realize that you're actually trying to address the question of the topic here. I'm trying to find that fundamental difference that makes a problem 'hard', and thus gives that ontic distinction that J mentioned. But your argument seems to just revolve around 'is sufficiently like me that I can relate' and not anything fundamental, which is why Nagel mentions bats but shies away from flowers, bacteria, sponges, etc. We're evolved from almost all that, and if none of these qualify, then something fundamental changed at some point along the way. Nobody ever addresses how this physical being suddenly gains access to something new, and why a different physical arrangement of material cannot.


Quoting J
Explaining the obvious is a quintessentially philosophical task!

Quoting Wayfarer
That devices are not subjects of experience is axiomatic, in my opinion.

'Axiomatic' typically suggests obvious. Obvious suggests intuitive, and intuitions are typically lies that make one more fit. So in a quest for what's actually going on, intuitions, and with it most 'obvious' stuff, are the first things to question and discard.

For the record, I don't find the usual assertions to be obvious at all, to the point of negligible probability along with any given particular God, et al.



Quoting Patterner
This is how Nagel said it:

But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism. — Thomas Nagel

Except for the dropping of 'fundamental' in there, it sounds more like a definition (of mental state) than any kind of assertion. The use of 'organism' in there is an overt indication of biocentric bias.

Quoting Patterner
Abilities that a car lacks.
But abilities that it necessarily lacks? I suggest it has mental abilities now, except for the 'proof by dictionary' fallacy that I identified in my OP: the word 'mental' is reserved for how it is typically used in human language, therefore the car cannot experience its environment by definition. Solution to that reasoning is to simply use a different word for the car doing the exact same thing.



Quoting Harry Hindu
Doesn't the experience of the pamphlet include the information received from it? It seems to me that you have to already have stored information to interpret the experience
I already know how to read, but I didn't read the pamphlet to learn how to read (that's what the Bible is for). Rather I read it to promote my goal of gathering new information I don't already have stored.

In other words, the third person is really just a simulated first person view.
No, not at all. If a third person conveyance did that, I could know what it's like to be a bat. Not even a VR setup (a simulation of experience) can do that.

Is the third person really a view from nowhere
Not always. I can describe how the dogwood blocks my view of the street from my window. That's not 'from nowhere'.


If you don't like the term "mind" for that which we have direct access to, then fine
I don't like the word at all since it carries connotations of a separate object, and all the baggage that comes with that.

but we have direct access to something, which is simply what it means to be that process.
I don't accept that this direct access is what it means to be something. The direct access is perhaps to the map (model) that we create, which is by definition an indirection to something else, so to me it's unclear if there's direct access to anything. You argue that access to the map can be direct. I'm fine with that.
As for your definition, does a flame have direct access to its process of combustion? Arguably so even if it's not 'experience', but I don't think that's what it means to 'be a flame'. What does it mean to be a rock? Probably not that the rock has any direct access to some sort of rock process.

Aren't automated and mechanical devices classical things, too?
Sure.
Don't automated and mechanical measuring devices change what is being measured at the quantum level?
All systems interact. Avoiding that is possible, but really really difficult.

I disagree with your phrasing of 'change what is being measured at the quantum level' since it implies that there's a difference with some other state it otherwise would have been. 'Change' implies a comparison of non-identical things, and at the quantum level, there's only what is measured, not some other thing.
Classically, sure. Sticking a meat thermometer into the hot turkey cools the turkey a bit.



Quoting boundless
Ok but in the 'ontic' definition of strong emergence, when sufficient knowledge is acquired, it results in weak emergence. So the sound that is produced by the radio also necessitates the presence of the air. It is an emergent feature from the inner workings of the radio and the radio-air interaction.
OK. I called it strong emergence since it isn't the property of the radio components alone. More is needed. Equivalently, substance dualism treats the brain as sort of a receiver tuned to amplify something not-brain. It's a harder sell with property dualism.

Regarding the music, I believe that to be understood as 'music' you need also a receiver that is able to understand the sound as music
That's what a radio is: a receiver. It probably has no understanding of sound or what it is doing.


Quoting boundless
Are you saying that atoms have intentionality, or alternatively, that a human is more than just a collection of atoms? Because that's what emergence (either kind) means: A property of the whole that is not a property of any of the parts. It has nothing to do with where it came from or how it got there. — noAxioms

Emergence means that those 'properties of the wholes that are not properties of the parts' however can be explained in virtue of the properties of the parts. So, yeah, I am suggesting that either a 'physicalist' account of human beings is not enough or that we do not know enough about the 'physical' to explain the emergence of intentionality, consciousness etc.

I would suggest that we actually do know enough to explain any of that, but still not a full explanation, and the goalposts necessarily get moved. Problem is, any time an explanation is put out there, it no longer qualifies as an explanation. A car does what it's programmed to do (which is intentionally choose when to change lanes say), but since one might know exactly how it does that, it ceases to be intentionality and becomes just it following machine instructions. Similarly, one could have a full account of how human circuitry makes us do everything we do, and that explanation would (to somebody who needs it to be magic) disqualify the explanation as that of intentionality, it being just the parts doing their things.

We know that all the operation of a (working) machine can be understood via the algorithms that have been programmed even when it 'controls' its processes.
Not true. There are plenty of machines whose functioning is not at all understood. That I think is the distinction between real AI and just complex code. Admittedly, a self driving car is probably mostly complex code with little AI to it. It's a good example of consciousness (unconscious things cannot drive safely), but it's a crappy example of intelligence or creativity.

Regarding when a machine 'dies'... well if you break it...
You can fix a broken machine. You can't fix a dead cat (yet). Doing so is incredibly difficult, even with the simplest beings.
Interestingly, a human maintains memory for about 3 minutes without energy input (refresh). A computer memory location lasts about 4 milliseconds and would be lost if not read and written back before then. Disk memory is far less volatile of course.

Quoting boundless
As I said before, it just seems that our experience of ourselves suggests that we are not mere automata.
It suggests nothing of the sort to me, but automata is anything but 'mere' to me.

also 'intuition' seems something that machines do not really have.
I think they do, perhaps more than us, which is why they make such nice slaves.


Quoting boundless
Standard interpretation-free QM is IMO simply silent about what a 'measurement' is. Anything more is interpretation-dependent.

Quantum theory defines measurement as the application of a mathematical operator to a quantum state, yielding probabilistic outcomes governed by the Born rule. Best I could do.
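For concreteness, the textbook form of that rule in standard notation (just the standard definition, nothing specific to this exchange): an observable A with eigenstates |a_i⟩, measured on a system in state |ψ⟩, yields outcome a_i with probability

P(a_i) \;=\; \bigl|\langle a_i \mid \psi \rangle\bigr|^{2}, \qquad \langle A \rangle \;=\; \langle \psi \mid A \mid \psi \rangle .

What physical act counts as 'applying the operator' is the interpretation-dependent part that boundless is pointing at.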

Quoting Patterner
I don't believe there's any such thing as 'strong emergence'.
I tried to give an example of it with the radio. Equivalently, consciousness, if a non-physical property, would be akin to radio signals being broadcast, allowing components to generate music despite no assemblage of those components being able to do so on their own.

Patterner November 03, 2025 at 02:35 #1022712
Quoting noAxioms
Abilities that a car lacks.
— Patterner
But abilities that it necessarily lacks? I suggest it has mental abilities now, except for the 'proof by dictionary' fallacy that I identified in my OP: the word 'mental' is reserved for how it is typically used in human language, therefore the car cannot experience its environment by definition. Solution to that reasoning is to simply use a different word for the car doing the exact same thing.
Ok. What is that word?
Wayfarer November 03, 2025 at 05:29 #1022724
Quoting noAxioms
I don't know how 'are beings' is in any way relevant, since rocks 'are' just as much as people.


First of all, you did say you don’t know how any creature could experience anything other than itself, which I interpreted at face value. That was what I responded to. If that is not what you meant to say, perhaps pay better attention to your mode of expression.

We say of intelligent creatures such as humans and perhaps the higher animals that they are ‘beings’ but we generally don’t apply that terminology to nonorganic entities, which are described as existents or things (hence also the distinction in language between ‘you’ and ‘it’.) But the ontological distinction between beings of any kind, and nonorganic objects, is that the former are distinguished by an active metabolism which seeks to preserve itself and to reproduce. There is nothing analogous to be found in nonorganic matter.

Quoting noAxioms
Nobody ever addresses how this physical being suddenly gains access to something new, and why a different physical arrangement of material cannot.


But I am doing just that, and have also done it before. I’ve had many an argument on this forum about what I’ve described as the ‘ontological distinction’ I’ve made above.

To recap - the distinction between any organism and a nonorganic object (leaving aside the marginal example of viruses) is that the former maintain homeostasis, exchange nutrients with the environment, grow, heal, reproduce and are able to evolve. They are autopoietic in Varela and Maturana’s terms - ‘systems whose components continuously produce and regenerate the network of processes that constitutes its own organization and identity.’ They are organised according to internal principles. Manufactured items such as devices are allopoietic - their organisation is imposed by an external agent, the manufacturer.

So that asserts a basic ontological distinction between organic and inorganic. Going back to Aristotle, there were further divisions - vegetative, animal, and human - each with the properties of the lesser stages, but also having attributes which the lower levels lacked. For example, animals are self-moving in a way that plants are not, and humans display linguistic and rational abilities that animals do not. Those too are ontological distinctions, although not so widely recognised as they used to be.

So animals ‘have access to’ ways of being that plants do not, and humans ‘have access to’ ways of being that animals do not. To try and collapse all of those distinctions to some purported lowest common denominator is reductionism. Reductionism works well in some contexts, but is inapplicable in others.

As for the hard problem, it has a clear genealogy, although again many will take issue with it:

[quote=The Blind Spot,Adam Frank, Marcelo Gleiser, Evan Thompson] The problem goes back to the rise of modern science in the seventeenth century, particularly to the bifurcation of nature, the division of nature into external, physical reality, conceived as mathematizable structure and dynamics, and subjective appearances, conceived as phenomenal qualities lodged inside the mind. The early modern version of the bifurcation was the division between “primary qualities” (size, shape, solidity, motion, and number), which were thought to belong to material entities in themselves, and “secondary qualities” (color, taste, smell, sound, and hot and cold), which were thought to exist only in the mind and to be caused by the primary qualities impinging on the sense organs and giving rise to mental impressions. This division immediately created an explanatory gap between the two kinds of properties. [/quote]

[quote=Thomas Nagel, Mind and Cosmos, Pp 35-36]The modern mind-body problem arose out of the scientific revolution of the seventeenth century, as a direct result of the concept of objective physical reality that drove that revolution. Galileo and Descartes made the crucial conceptual division by proposing that physical science should provide a mathematically precise quantitative description of an external reality extended in space and time, a description limited to spatiotemporal primary qualities such as shape, size, and motion, and to laws governing the relations among them. Subjective appearances, on the other hand -- how this physical world appears to human perception -- were assigned to the mind, and the secondary qualities like color, sound, and smell were to be analyzed relationally, in terms of the power of physical things, acting on the senses, to produce those appearances in the minds of observers. It was essential to leave out or subtract subjective appearances and the human mind -- as well as human intentions and purposes -- from the physical world in order to permit this powerful but austere spatiotemporal conception of objective physical reality to develop. [/quote]

So the problem in a nutshell arises from trying to apply the third-person methods of science to the first-person quality of lived experience.

Quoting noAxioms
red light triggers signals from nerves that otherwise are not triggered, thus resulting in internal processing that manifests as that sensation. That’s very third-person, but it’s an explanation, no?


As Nagel says, this explanation, ‘however complete, will leave out the subjective essence of the experience—how it is from the point of view of its subject.’ The physical sciences are defined by excluding subjective experience from their domain. You cannot then use those same sciences to explain what they were designed to exclude. This isn’t a failure of neuroscience—it’s a recognition of the scope of third-person, objective description. The first-person, subjective dimension isn’t missing information that more neuroscience will fill in; it’s in a different category.
Patterner November 03, 2025 at 10:51 #1022765
Quoting Wayfarer
As Nagel says, this explanation, ‘however complete, will leave out the subjective essence of the experience—how it is from the point of view of its subject.’ The physical sciences are defined by excluding subjective experience from their domain. You cannot then use those same sciences to explain what they were designed to exclude. This isn’t a failure of neuroscience—it’s a recognition of the scope of third-person, objective description. The first-person, subjective dimension isn’t missing information that more neuroscience will fill in; it’s in a different category.
I can't imagine there's a better way to word it.
Harry Hindu November 03, 2025 at 12:59 #1022781
Quoting noAxioms
Doesn't the experience of the pamphlet include the information received from it? It seems to me that you have to already have stored information to interpret the experience
— Harry Hindu
I already know how to read, but I didn't read the pamphlet to learn how to read (that's what the Bible is for). Rather I read it to promote my goal of gathering new information I don't already have stored.

Yeah, that was my point - you already knew how to read - which means you already have stored information to interpret the experience.

Quoting noAxioms
No, not at all. If a third person conveyance did that, I could know what it's like to be a bat. Not even a VR setup (a simulation of experience) can do that.

But you can only know what it is like to be a bat from within your first-person experience. It's no different than seeing your Desktop screen on your computer and starting up a virtual machine that shows another Desktop within the framework of your existing Desktop.

Quoting noAxioms
Not always. I can describe how the dogwood blocks my view of the street from my window. That's not 'from nowhere'.

Right. So we agree that it's a view from everywhere or somewhere. A view from nowhere doesn't make sense. You can only emulate a view from everywhere from a view from somewhere, or as an accumulation of views from somewhere, like we do when we use each other's views to triangulate the truth.

Quoting noAxioms
I don't like the word at all since it carries connotations of a separate object, and all the baggage that comes with that.

You've lost me now. It sounds like it's not really the word you don't like, but the definition. You can define "mind" however you want. But if you don't like the word since it carries connotations of a separate object, you could say that of any object, like your dogwood tree, street and window, yet you didn't seem to have any quarrel with using those terms, which carry the same connotations of a separate object.

Quoting noAxioms
but we have direct access to something, which is simply what it means to be that process. - Harry Hindu
I don't accept that this direct access is what it means to be something. The direct access is perhaps to the map (model) that we create, which is by definition an indirection to something else, so to me it's unclear if there's direct access to anything. You argue that access to the map can be direct. I'm fine with that.

If direct access is not what it means to be something, then you are creating a Cartesian theatre - as if there is a homunculus separate from the map, but with direct access - meaning it sees the map as it truly is, instead of being the map as it truly is.

Quoting noAxioms
As for your definition, does a flame have direct access to its process of combustion? Arguably so even if it's not 'experience', but I don't think that's what it means to 'be a flame'. What does it mean to be a rock? Probably not that the rock has any direct access to some sort of rock process.

Flames and rocks are nowhere near as complex a process as the mind. I'm sure you are aware of this. Aren't combustion and flame the same thing - the same process - just using different terms?

You're making my argument for me. If the rock doesn't have any direct access to the rock process, then it logically follows that there is no access - just being. And if there is no direct access then there can be no indirect access (in other words, as I said before, direct vs indirect realism is a false dichotomy).

Quoting noAxioms
I disagree with your phrasing of 'change what is being measured at the quantum level' since it implies that there's a difference with some other state it otherwise would have been. 'Change' implies a comparison of non-identical things, and at the quantum level, there's only what is measured, not some other thing.
Classically, sure. Sticking a meat thermometer into the hot turkey cools the turkey a bit.

According to the standard (“Copenhagen”) interpretation, something does change — namely, the system’s state description goes from a superposition to an eigenstate corresponding to the measured value. This is often described as wave function collapse. Measurement doesn’t change a definite pre-existing state, but it does change the system’s quantum state description — from a superposition of possibilities to a single outcome.
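A concrete two-state illustration of that change of description (standard notation, just to make the point explicit): a system prepared in the superposition \(|\psi\rangle = \alpha|{\uparrow}\rangle + \beta|{\downarrow}\rangle\) is, after a spin measurement, described by \(|{\uparrow}\rangle\) with probability \(|\alpha|^2\) or by \(|{\downarrow}\rangle\) with probability \(|\beta|^2\). Whether that updated description reflects a physical change or only a change in what we know is exactly what the interpretations disagree about.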



J November 03, 2025 at 13:40 #1022790
Quoting noAxioms
As J points out, some of us do not hold out for an ontological difference between a device and a living thing.


I did indeed point that out, and I think it's important to understand why. I hope I also made it clear that I am not one of the "non-difference" group. I find myself being the devil's advocate in this discussion, largely because I don't think we'll get anywhere if we don't thoroughly understand why the ontological difference is not obvious or axiomatic or whatever other bedrock term we care to use. As philosophers, we're watching something exciting unfold in real time: a genuinely new development of a consequential question that the general public is interested in, and that we can help with. That said, I'll say again that when we eventually learn the answer about consciousness (and I think we will), we'll learn that you can't have consciousness without life. But that conclusion is a long way off.
Patterner November 03, 2025 at 23:04 #1022913


Quoting J
That said, I'll say again that when we eventually learn the answer about consciousness (and I think we will), we'll learn that you can't have consciousness without life.
Of course, I'm going to disagree regarding consciousness, because I think it's fundamental. However, we probably think we need the same features, whether the answer is panpsychism, physicalism, or whatever (not sure what your theory is). I'm just wondering if there is reason to believe biological life is the only thing that can provide those features.

Foremost, I do not think the qualities we are after can be developed in a vacuum. I've posted this quote of David Eagleman before. From Annaka Harris' audiobook [I]Lights On[/I], starting at 25:34 of Chapter 5 The Self:
David Eagleman: I think conscious experience only arises from things that are useful to you. You obtain a conscious experience once signals make sense. And making sense means it has correlations with other things. And, by the way, the most important correlation, I assert, is with our motor actions. Is what I do in the world. And that is what causes anything to have meaning.
Do things and get consistent results, and meaning grows.

Having other entities of the same kind to observe and interact with is probably necessary. Reinventing the wheel takes too long. And, at least in our case, even if we were kept alive as infants, we would never learn without others. I think our brains wire themselves in response to interactions, among other things.

Biological life is not the only thing that can manipulate the environment. Robots can. But can they learn meaning from doing so? AlphaGo learned, didn't it? Not the kind of things I'm talking about. But is there a reason we can't program something that can? We do some pretty amazing things, but I don't know if we could program something that wires itself in response to its experiences.

I still don't think that's enough to get an entity like us. I think there needs to be a lot more information processing systems and feedback loops. We are just so full of such things they can't be counted. And they're all working together for this brain and body. Still, in principle, is it impossible?
noAxioms November 04, 2025 at 16:18 #1023066
Quoting Patterner
Solution to that reasoning is to simply use a different word— noAxioms

Ok. What is that word?
Not my problem if I don't use that reasoning. I feel free to use the same word to indicate the same thing going on in both places.

Quoting Wayfarer
I don't know how they 'are beings' are in any way relevant since rocks 'are' just as much as people. — noAxioms

First of all, you did say you don’t know how any creature could experience anything other than itself
I didn't say 'creature'. Look at the words you quoted of me, and I very much did pay better attention to my mode of expression.

We say of intelligent creatures such as humans and perhaps the higher animals that they are ‘beings’ but we generally don’t apply that terminology to nonorganic entities
Agree that the noun form is mostly used that way, but you were leveraging the verb form of the word, not the noun. The verb form applies to rocks just as much as spiders, possibly excepting idealism, which I'm not assuming.

But the ontological distinction between beings of any kind, and nonorganic objects, is that the former are distinguished by an active metabolism which seeks to preserve itself and to reproduce.
Those are not ontological distinctions. It's just a list of typical properties found mostly in life forms, the majority of which are not usually referred to as 'beings'. I can make a similar list distinguishing metallic elements from the others, but pointing this out doesn't imply a fundamental difference between one arrangement of protons and neutrons vs another. It's just the same matter components arranged in different ways. Ditto for people vs rocks. Different, but not a demonstrably fundamental difference.

And the topic is about first person experience. Are you suggesting that all organic material (a living sponge, say - does that qualify as a 'being'?) has first person experience? If not, then it's not about homeostasis or being organic. We're looking for a fundamental difference in experience, not a list of properties typical of organic material.

Nobody ever addresses how this physical being suddenly gains access to something new, and why a different physical arrangement of material cannot. — noAxioms

But I am doing just that, and have also done it before.
No you're not. You're evading. Answer the questions about, say, a Urbilaterian (a brainless ancestor of you, and also of a starfish). Is it a being? Does it experience and have intent? If not, what's missing? If it does, then how is its interaction with its environment fundamentally any different from, say, a roomba's?
I'm trying to play 20 questions with you, but I'm still stuck on question 1.

Quoting Wayfarer
To recap ...
I know the recap, and it answers a very different question. It is a nice list of properties distinguishing earth biological beings from not. There's nothing on the list that is necessarily immaterial, no ontological distinction. Your opinion may differ on that point, but it's just opinion. Answer the question above about the brainless being, because I'm not looking for a definition of a life form.

Quoting J
I hope I also made it clear that I am not one of "non-difference" group.
That was clear, yes. Keep in mind that my topic question, while framed as a first-person issue, is actually not why you're in that 'difference' group, but why the non-difference group is necessarily wrong.

I'll say again that when we eventually learn the answer about consciousness (and I think we will)
I think we never will. There will always be those that wave away any explanation as correlation, not causation.

we'll learn that you can't have consciousness without life.
Which requires a more rigorous definition of consciousness I imagine.


Quoting Wayfarer
Those too are ontological distinction although not so widely recognised as they used to be.
We seem to have a vastly different notion of what constitutes an ontological distinction. It seems you might find a stop sign ontologically distinct from a speed limit sign since they have different properties.

As for the 17th century treatment of "secondary qualities like color, sound, and smell", those were in short order explained as physical properties (wavelength, air vibration, and airborne particles respectively), sensed just as much as shape and motion. Science took the hint, but many philosophers just moved the goalposts. Sure, all of those, including shape and motion, appear to us as first person qualia. I do agree (in the OP) that no third person description can describe the first person experience. I do not agree that this first person perspective is confined to a subset of biological things. Does a robot feel human pain? No, but neither does an octopus. Each experiencing thing having its own 'what it's like to be' does not require any special treatment.

Quoting Wayfarer
As Nagel says, this explanation, ‘however complete, will leave out the subjective essence of the experience
Absolutely! I never contested that. It's why you cannot know what it's like to be a bat. Not even a computer doing a perfect simulation of a bat would know this. The simulated bat would know, but the simulated bat is not the program nor is it the computer.

The physical sciences are defined by excluding subjective experience from their domain.
I disagree with this. Neurologists require access to that, which is why brain surgery is often done on conscious patients, with just local anesthesia to the scalp. Of course they only have access to experiences as reported in third person by the subject, so in that sense, I agree.

I actually had that done to me (no brain surgery, just the shot). I collided with a tree branch, driving a significant chunk of wood under my scalp. They had to inject lidocaine before cutting it out. That makes me a certified numbskull.



Quoting Harry Hindu
Yeah, that was my point - you already knew how to read - which means you already have stored information to interpret the experience.
OK. I never said otherwise. I was simply providing the requested example of first/third person being held at once.

But you can only know what it is like to be a bat from within your first-person experience.
Don't understand this at all. I cannot know what it's like to be a bat, period. A flight simulator doesn't do it. That just shows what it's like for a human (still being a human) to have a flying-around point of view.

It sounds like its not really the word you don't like, but the definition.
It's that people tend to insert their own definition of 'mind' when I use the word, rather than using the definition I gave, despite my being explicit about it.


Quoting Harry Hindu
If direct access is not what it means to be something, then you are creating a Cartesian theatre - as if there is a homunculus separate from the map, but with direct access - meaning it sees the map as it truly is, instead of being the map as it truly is.
Lots to take apart here. I don't think we know anything as it is in itself, including any maps we create.
As for the homunculus, humans do seem to have a very developed one, which is a sort of consciousness separate from the subconscious (the map maker, and source of intuitions). The subconscious is older (in evolutionary time) and is waaay more efficient, and does most of the work and decision making. It is in charge since it holds all the controls. It might hold different beliefs than the homunculus, the latter of which is a more rational tool, used more often to rationalize than to be rational.

But yes, we have this sort of theatre, a view of the map. I don't consider it direct access since I don't see the map as it is in itself, but you might consider it direct since the connection to it is just one interface away, and there's not another homunculus looking at the experience of the first one (a common criticism of the Cartesian theatre idea).

The description I give is still fully physical, just different parts (science has names for the parts) stacked on top of each other. 'Homunculus' is not what it's called. Those are philosophy terms.

Quoting Harry Hindu
As for your definition, does a flame have direct access to its process of combustion? Arguably so even if it's not 'experience', but I don't think that's what it means to 'be a flame'. What does it mean to be a rock? Probably not that the rock has any direct access to some sort of rock process. — noAxioms

Isn't combustion and flame the same thing - the same process - just using different terms?
No. Flame is an object. There are six flames burning in the candle rack. Combustion is a process (a process is still a noun, but not an object). Flame is often (but not always) where combustion takes place.
Yes, combustion is much simpler. It's why I often choose that example: Simple examples to help better understand similar but more complex examples.

Quoting Harry Hindu
You're making my argument for me. If the rock doesn't have any direct access to the rock process, then it logically follows that there is no access - just being.

Here you suggest that the rock has 'being' (it is being a rock) without direct access to its processes (or relative lack of them). This contradicts your suggestion otherwise that being a rock means direct access to, well, 'something', if not its processes.
"we have direct access to something, which is simply what it means to be that process."

Quoting Harry Hindu
According to the standard (“Copenhagen”) interpretation
A comment on that. One might say that there is no standard interpretation since each of them has quite the following. On the other hand, Copenhagen is more of an epistemic interpretation, while the others are more metaphysical interpretations, asserting what actually is instead of asserting what we know. Quantum theory is not a metaphysical theory about what is, but rather a scientific theory about what one will expect to measure. In that sense, Copenhagen fits perfectly since it is about what we expect, and not about what is.

something does change — namely, the system’s state description goes from a superposition to an eigenstate corresponding to the measured value.
Yes, what we know about a system changes. That's wave function collapse, where the wave function is a description of what we know about a system. Hence I grant 'change upon measurement' to any collapse interpretation.

Quoting Patterner
Of course, I'm going to disagree regarding consciousness, because I think it's fundamental.

We both disagree, but for such wildly different reasons :)
J November 04, 2025 at 16:38 #1023074
Quoting noAxioms
my topic question, while framed as a first-person issue, is actually not why you're in that 'difference' group, but why the non-difference group is necessarily wrong.


Yes. And I'm in no position to claim that any view on consciousness is necessarily right or wrong. We're dealing with educated guesses, at best.

Quoting noAxioms
There will always be those that wave away any explanation as correlation, not causation.


Hmm. I suppose so, but that wouldn't mean we hadn't learned the explanation. :smile:

Quoting noAxioms
we'll learn that you can't have consciousness without life."
Which requires a more rigorous definition of consciousness I imagine.


Absolutely. If a biological explanation turns out to be the correct one, I imagine it will also show that most of our rough-and-ready conceptions about subjectivity and consciousness are far too impoverished. It would be like analyzing 18th century views on "time" and comparing them to relativity theory.
Patterner November 04, 2025 at 16:53 #1023077
Quoting noAxioms
Solution to that reasoning is to simply use a different word— noAxioms

Ok. What is that word?
— Patterner
Not my problem if I don't use that reasoning. I feel free to use the same word to indicate the same thing going on in both places.
You are the one who suggested that solution, because you want cars to be seen as having the mental abilities we have. I'm fine with cars being seen as not having them.
Wayfarer November 04, 2025 at 20:16 #1023126
Quoting noAxioms
The physical sciences are defined by excluding subjective experience from their domain ~ Nagel

I disagree with this. Neurologists require access to that, which is why brain surgery is often done on conscious patients, with just local anesthesia to the scalp. Of course they only have access to experiences as reported in third person by the subject, so in that sense, I agree.


Galileo's point, which was foundational in modern science, was that the measurable attributes of bodies - mass, velocity, extension and so on - are primary, while how bodies appear to observers - their colour, scent, and so on - are secondary (and by implication derivative). That is the sense in which the physical sciences 'excluded subjective experience', and it's not a matter of opinion.

As to why neuroscientists converse with subjects, in fact there's a textbook case of these kinds of practices which lends very strong support to some form of dualism. I'm speaking of the Canadian neuroscientist Wilder Penfield (1891-1976), who operated on many conscious patients during his very long career. He reported that his operations often elicited or stimulated vivid memories of previous experiences or could induce movements of various kinds. But he also reported that subjects could invariably distinguish between effects or memories that were elicited by him, and those which they themselves initiated. He concluded from this that the mind and the brain are not identical. While electrical stimulation of the cortex could evoke experiences, sensations, or involuntary actions, it could never make the patient will to act or decide to recall something. Penfield saw a clear distinction between neural activity that produced experiences and the conscious agency that could observe, interpret, and choose among them. In his later work (The Mystery of the Mind, 1975), he wrote that “the mind stands apart from the brain but acts upon it,” proposing that consciousness is not reducible to cerebral processes alone.

As these operations showed, direct cortical stimulation could evoke experiences, movements, and memories, but never the act of will itself. Patients could always distinguish between something they themselves initiated and something induced by the surgeon. Penfield concluded that the conscious agent — the mind — cannot be identified with neural circuitry alone.

So the “third-person substrate” may be describable, but that doesn’t make it understandable in the relevant sense. Understanding would mean grasping how physical interactions, which by definition exclude subjectivity (per the above), could constitute subjective awareness itself. And that’s not an empirical gap that can be closed with more data or better simulations; it’s a conceptual distinction. A fully simulated brain might behave exactly like a conscious person, but whether there’s 'anything it’s like' to be that simulation is the very point at issue.

In short, you’re arguing from within the third-person framework while intending to account for what only appears from within the first-person perspective. The result isn’t an explanation but a translation — a substitution of the language of mechanism for the reality of experience. That’s the “illusion of reduction” you yourself noticed when you said commentators “appropriate first-person words to refer to third-person phenomena.”

When you treat the first-person point of view as something that emerges from a “third-person-understandable substrate,” you are collapsing the distinction Chalmers and Nagel are pointing out. By assuming the substrate is “understandable” in third-person terms, you've already presupposed that subjectivity can be accounted for within an objective framework. So you're not addressing the issue, but explaining it away.



Wayfarer November 04, 2025 at 22:58 #1023161
Quoting noAxioms
We seem to have a vastly different notion of what constitutes an ontological distinction. It seems you might find a stop sign ontologically distinct from a speed limit sign since they have different properties.


I’m not using “ontological” here to mean merely “a set of observable traits.” I’m using it in its proper philosophical sense — a distinction in the mode of being. A rock and an amoeba both exist, but not in the same way. The amoeba has a self-organising, self-maintaining unity: it acts to preserve itself and reproduce. This isn’t a mere property added to matter, but a different kind of organization — what Aristotle called entelechy and what modern systems theorists call autopoiesis.

That distinction is categorical, not merely quantitative. Life introduces an interiority — however minimal — that inanimate matter does not possess. It’s what allows later forms of experience, cognition, and consciousness to emerge. So the “list of attributes” such as homeostasis or metabolism are not arbitrary descriptors, but outward manifestations of this deeper ontological difference.

But this distinction also bears directly on the problem of consciousness. Nagel points out that modern science arose by deliberately excluding the mental from its field of study. The “objective” world of physics was constituted by abstracting away everything that belongs to the first-person point of view — experience, meaning, purpose — in order to describe the measurable, quantifiable aspects of bodies. That method proved extraordinarily powerful, but it also defined its own limits: whatever is subjective was set aside from the outset. As noted above, this is not a matter of opinion.

This means that the gap between third-person descriptions and first-person experience isn’t an accidental omission awaiting further physical theory; it’s a structural feature of how the physical sciences were established. To describe something in purely physical terms is by definition to omit 'what it feels like' to be that thing. So the problem isn’t just about explaining how consciousness emerges from matter — according to Thomas Nagel, it is about how a worldview that excluded subjectivity as a condition of its success could ever re-incorporate it without transforming its own foundations.

That’s why I say the distinction between living and non-living things is not merely biological but ontological. Life is already the point at which matter becomes interior to itself — where the world starts to appear from a perspective within it. From that perspective, consciousness isn’t an inexplicable late-arriving anomaly in an otherwise material universe; it’s the manifestation of an inherent distinction between appearance and being that the physical sciences, by their very design, have bracketed out. But that is a transcendental argument, and therefore philosophical rather than scientific.

Quoting J
If a biological explanation turns out to be the correct one, I imagine it will also show that most of our rough-and-ready conceptions about subjectivity and consciousness are far too impoverished


I've been going through a pretty dense paper by Evan Thompson, 'Could All Life be Sentient?', which is useful in respect of these questions about the distinctions between various levels or kinds of organic life and degrees of consciousness. Useful, but not conclusive, leaving the question open, in the end, but helpful in at least defining and understanding the issues. I've also generated a synopsis which will be helpful in approaching the essay.

Gdocs Synopsis (AI Generated)
Could All Life be Sentient? Evan Thompson
J November 04, 2025 at 23:45 #1023172
Quoting Wayfarer
I've also generated a synopsis which will be helpful in approaching the essay.


Thanks, I'll read it.
noAxioms November 05, 2025 at 03:19 #1023194
I thank you all for your input, and for your patience when I take at times days to find time to respond.


Quoting J
Yes. And I'm in no position to claim that any view on consciousness is necessarily right or wrong. We're dealing with educated guesses, at best.
Most choose to frame their guesses as assertions. That's what I push back on. I'm hesitant to label my opinions as 'beliefs', since the word connotes a conclusion born more of faith than of hard evidence (there's always evidence on both sides, but evidence that is hard borders more on 'proof').

There will always be those that wave away any explanation as correlation, not causation. — noAxioms
Hmm. I suppose so, but that wouldn't mean we hadn't learned the explanation.

But we do have explanations of things as simple as consciousness. What's complicated is, say, how something like human pain manifests itself to the process that detects it. A self-driving car could not do what it does if it wasn't conscious, any more than an unconscious person could navigate through a forest without hitting the trees. But once that was shown, the goalposts got moved, and it is still considered a problem. Likewise, God designing all the creatures got nicely explained by evolution theory, so instead of conceding the lack of need for the god, they just moved the goalposts and typically suggest that we need an explanation for the otherwise-unexplained appearance of the universe from nothing. They had to move that goalpost a lot further away than it used to be.

You might say that the car has a different kind of consciousness than you do. Sure, different, but not fundamentally so. A car can do nothing about low oil except perhaps refuse to go on, so it has no need of something the equivalent of pain qualia. That might develop as cars are more in charge of their own problems, and in charge of their own implementation.


Absolutely. If a biological explanation turns out to be the correct one, I imagine it will also show that most of our rough-and-ready conceptions about subjectivity and consciousness are far too impoverished.

You also need to answer the question I asked above, a kind of litmus test for those with your stance:
Quoting noAxioms
[Concerning] a Urbilaterian (a brainless ancestor of you, and also a starfish). Is it a being? Does it experience [pain say] and have intent?
If yes, is it also yes for bacteria?
The almost universal response to this question from non-physicalists is evasion. What does that suggest about their confidence in their view?


Quoting Patterner
You are the one who suggested that solution, because you want cars to be seen as having the mental abilities we have. I'm fine with cars being seen as not having them.

They don't have even close to the mental abilities we have, which is why I'm comparing the cars to an Urbilaterian. But what little they have is enough, and (the point I'm making) there is no evidence that our abilities, or those of an Urbilaterian, are ontologically distinct from those of the car.
You point out why there's no alternative word: Those who need it don't want it. Proof by language. Walking requires either two or four legs, therefore spiders can't walk. My stance is that they do, it's just a different gait, not a fundamental 'walk' sauce that we have that the spider doesn't.


Quoting Wayfarer
Galileo's point, which was foundational in modern science, was that the measurable attributes of bodies - mass, velocity, extension and so on - are primary, while how bodies appear to observers - their colour, scent, and so on - are secondary (and by implication derivative).
Those supposed secondary qualities can also be measured as much as the first list. It just takes something a bit more complicated than a tape measure.

Still, I know what you mean by the division. The human subjective experience of yellow is a different thing altogether than yellow in itself, especially since it's not yellow in itself that we're sensing. A squirrel can sense it. We cannot, so we don't know the experience of yellow, only 'absence of blue'.

The division is not totally ignored by science. It's just that for most fields, the subjective experience serves no purpose to the field.


... Canadian neuroscientist Wilder Penfield (1891-1976), who operated on many conscious patients during his very long career.
...
While electrical stimulation of the cortex could evoke experiences, sensations, or involuntary actions, it could never make the patient will to act or decide to recall something.
Interesting, but kind of expected. Stimulation can evoke simple reflex actions (a twitch in the leg, whatever), but could not do something like make him walk, even involuntarily. A memory or sensation might be evoked by stimulation of a single area, but something complex like a decision is not a matter of a single point of stimulation. Similarly with the sensation, one can evoke a memory or smell, but not evoke a whole new fictional story or even a full experience of something in the past.

I see a distinction between simple and complex, and not so much between sensations/reflexes and agency. The very fact that smells can be evoked with such stimulation suggests that qualia are a brain thing.

Noninvasive stimulation has been used to improve decision speed and commitment, and, in OCD patients, mood regulation and the like. But hey, drugs do much of the same thing, and the fact that drugs are effective (as are diseases) is strong evidence against the brain being a mere antenna for agency.
Direct stimulation (as we've been discussing) has been used to influence decisions and habits (smoking?), but does not wholesale override the will. It's far less effective than is occasionally portrayed in fiction.

A fully simulated brain might behave exactly like a conscious person, but whether there’s 'anything it’s like' to be that simulation is the very point at issue.
I talked about this early in the topic, maybe the OP. Suppose it was you that was simulated, after a scan taken without your awareness. Would the simulated you realize something had changed, that he was not the real one? If not, would you (the real you) write that off as a p-zombie? How could the simulated person do anything without the same subjective experience?

In short, you’re arguing from within the third-person framework while intending to account for what only appears from within the first-person perspective. The result isn’t an explanation but a translation — a substitution of the language of mechanism for the reality of experience. That’s the “illusion of reduction” you yourself noticed when you said commentators “appropriate first-person words to refer to third-person phenomena.”

When you treat the first-person point of view as something that emerges from a “third-person-understandable substrate,” you are collapsing the distinction Chalmers and Nagel are pointing out.
Perhaps I am, perhaps because they're inventing a distinction where there needn't be one.



I think you messed up the quoting in your immediate prior post. You should edit, since many of your words are attributed to me.
Quoting Wayfarer
But the ontological distinction between beings of any kind, and nonorganic objects, is that the former are distinguished by an active metabolism which seeks to preserve itself and to reproduce ~ Wayfarer

I’m not using “ontological” here to mean merely “a set of observable traits.” I’m using it in its proper philosophical sense — a distinction in the mode of being.
I don't find your list of traits to be in any way a difference in mode of being. Water evaporates. Rocks don't. That's a difference, but not a difference in mode of being any more than the difference between the rock and the amoeba. Perhaps I misunderstand 'mode', but I see 'being' simply as 'existing', which is probably not how you're using the term. To me, all these things share the same mode: they are members of this universe, different arrangements of the exact same fundamentals. My opinion on that might be wrong, but it hasn't been shown to be wrong.

This isn’t a mere property added to matter
Our opinions on this obviously differ.

Life introduces an interiority
I notice a predictable response to the Urbilaterian question: evasion. That question has direct bearing on this assertion.

That method proved extraordinarily powerful, but it also defined its own limits: whatever is subjective was set aside from the outset. As noted above, this is not a matter of opinion.
I acknowledge this.

To describe something in purely physical terms is by definition to omit 'what it feels like' to be that thing.
To describe something in any terms at all still omits that. I said as much in the OP.


... Evan Thompson
Vitalism?
Wayfarer November 05, 2025 at 03:36 #1023196
Quoting noAxioms
Perhaps I misunderstand 'mode', but I see 'being' simply as 'existing', which is probably not how you're using the term. To me, all these things share the same mode: they are members of this universe, different arrangements of the exact same fundamentals.


Which is, in a word, physicalism - there is only one substance, and it is physical. From within that set of assumptions, Chalmers' and Nagel's types of arguments will always remain unintelligible.
J November 05, 2025 at 13:39 #1023261
Quoting noAxioms
You also need to answer the question I asked above, a kind of litmus test for those with your stance:
[Concerning] a Urbilaterian (a brainless ancestor of you, and also a starfish). Is it a being? Does it experience [pain say] and have intent?
— noAxioms
If yes, is it also yes for bacteria?
The almost universal response to this question from non-physicalists is evasion.


I don't know. And that's not evasion, just honesty.

But I also don't think that the right answer to that question reveals much about the larger problem. I think consciousness will turn out to depend on biology, but that's not to say that everything alive is conscious. If we could eventually determine that a bacterium isn't conscious, that would say nothing about whether the beings that are conscious require a biological basis in order to be so. In other words, being alive might be a necessary but not sufficient condition for consciousness.
Harry Hindu November 05, 2025 at 14:33 #1023268
Quoting Harry Hindu
In other words, the third person is really just a simulated first person view.


Quoting noAxioms
No, not at all. If a third person conveyance did that, I could know what it's like to be a bat. Not even a VR setup (a simulation of experience) can do that.


Quoting Harry Hindu
But you can only know what it is like to be a bat from within your first-person experience. It's no different than seeing your Desktop screen on your computer and starting up a virtual machine that shows another Desktop within the framework of your existing Desktop.


Quoting noAxioms
I cannot know what it's like to be a bat. period. A flight simulator doesn't do it. That just shows what it's like for a human (still being a human) to have a flying-around point of view.

The last sentence is reiterating the point that I made that the third-person view still occurs within the framework of the first-person view.

You also seem to be saying that a third-person view does not impart knowledge. If a third-person simulation of a bat's experience does not impart knowledge because it is not an actual experience of the bat, then how does any third-person stance impart knowledge? As I said, the virtual machine is a simulation, not the real thing. There may be missing information, but it may be intentionally left out because that bit of information is irrelevant to the purpose in mind. For instance, I might try to imagine what it might be like to just experience the world through echo-location without all the other sensory experiences the bat might have.

Quoting noAxioms
It's that people tend to insert their own definition of 'mind' when I use the word, and not use how I define it, despite being explicit about the definition.

Well, there's a lot going on in this thread and our memories are finite, so you might have to restate your definition from time to time, or at least reference your definition as stated.

Quoting noAxioms
I don't think we know anything as it is in itself

This is self-defeating.

Your statement implies that we cannot even know knowledge as it is in itself. When talking about anything in the shared world, you are attempting to talk about the thing as it is in itself - probably not exhaustively, but you are trying to communicate something real (or some real property) about some state of affairs - or else what information are you trying to convey? Why should I believe anything you say if you can never talk about things as they are in themselves (or at least in part) - like your version of mind? Does your definition of mind impart knowledge to me about how minds are in themselves? If not, then what is the point in even typing scribbles on the screen, expecting others to read them and come to some sort of understanding of what your idea is in itself? If we can never get at your idea as it is in itself, nor can you, then what is the point in communicating ideas at all? It seems to me that you are saying that you cannot know what it is like to be a bat, nor even what it is like to be yourself (or at least your mind). You might not know what it is like to be your thumb, but you know what it is like to have thumbs, don't you?

Quoting noAxioms
As for the homonculus, humans do seem to have a very developed one, which is a sort of consciousness separate from the subconscious (the map maker, and source of intuitions). The subconscious is older (evolutionary time) and is waaay more efficient, and does most of the work and decision making. It is in charge since it holds all the controls. It might hold different beliefs than the homonculus, the latter of which is a more rational tool, used more often to rationalize than to be rational.

A more accurate way to frame this is through the concept of the central executive in working memory. This isn’t a tiny conscious agent controlling the mind, but a dynamic system that coordinates attention, updates representations, and integrates information from different cognitive subsystems. It doesn’t “watch” the mind; it organizes and manages the flow of processing in a way that allows higher-level reflection and planning.

The subconscious isn’t some subordinate system taking orders from the homunculus. It performs the bulk of processing, guiding behavior and intuitions automatically. Conscious, rational thought steps in to reflect on, plan, or interpret what is already occurring. Mapping the world, then, isn’t the work of an inner observer — it’s the emergent product of multiple interacting cognitive processes working together.
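If it helps to make that architecture concrete, here is a toy sketch of the idea (my own construction, not a model from the working-memory literature; the subsystem and function names are invented for illustration):

[code]
# A toy sketch only: a "central executive" that integrates subsystem reports
# rather than watching an inner screen. Names are invented for illustration.
def vision():
    return {"obstacle_ahead": True}

def recall():
    return {"obstacle_seen_yesterday": True}

def integrate(*reports):
    # Merge the subsystems' outputs into one working representation.
    state = {}
    for report in reports:
        state.update(report)
    return state

def plan(state):
    # Reflection and planning operate on the integrated state itself;
    # there is no further observer inside looking at it.
    return "turn_left" if state.get("obstacle_ahead") else "go_straight"

working_memory = integrate(vision(), recall())
print(plan(working_memory))  # -> turn_left
[/code]

The point of the sketch is only that coordination and integration can do the organizing work without positing an inner viewer.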


Quoting noAxioms
No. Flame is an object. There's six flames burning in the candle rack. Combustion is a process (a process is still a noun, but not an object). Flame is often (but not always) where combustion takes place.
Yes, combustion is much simpler. It's why I often choose that example: Simple examples to help better understand similar but more complex examples.

Objects are processes of interacting smaller "objects". The problem is that the deeper you go, you never get at objects, only processes of ever smaller "objects" interacting. Therefore it is processes, or relations, all the way down. Objects are mental representations of other processes, and what your brain calls an object depends on how quickly those processes are changing vs. how quickly your brain can process those changes (relativity applies to perception). A fast-moving spark may appear as a blur, while slower-moving flames are perceived as discrete objects.

Ok, so combustion causes flame. Both are processes, but not identical. Combustion is the reaction; flame is the visible process that results from it. So...
Quoting noAxioms
As for your definition, does a flame have direct access to its process of combustion?
This is not an accurate representation of what I said. All you are doing is moving the goalposts. If flame and combustion are distinct processes, then my definition applies to being the flame, not the combustion, and the flame would have direct access to itself as being the flame and indirect access to the process of combustion. The same goes for the map vs the homunculus - the homunculus would have direct access to itself, not the map - hence the Cartesian Theatre problem.

Quoting noAxioms
What does it mean to be a rock? Probably not that the rock has any direct access to some sort of rock process.


Quoting noAxioms
You're making my argument for me. If the rock doesn't have any direct access to the rock process, then it logically follows that there is no access - just being.
— Harry Hindu
Here you suggest that the rock has 'being' (it is being a rock) without direct access to it's processes (or relative lack of them). This contradicts your suggestion otherwise that being a rock means direct access to, well, 'something', if not its processes.
"we have direct access to something, which is simply what it means to be that process."

So, here I think we really need to iron out what we mean by "access" and "being". Does a rock have an internal representation of itself, and does some other aspect of the rock have access to this representation? Does that even make sense? Can there be a sense of being for a rock? Does something need to have an internal representation with some other part "accessing" those representations for it to be, or have a sense of being? Is the sense of being something the same as being that something? Is access inherently indirect?

[I ended up coming back and adding more questions but realized I was going really deep into a rabbit hole and thought I would just let you try to respond and see where we go.]

Quoting noAxioms
Quantum theory is not a metaphysical theory about what is, but rather a scientific theory about what one will expect to measure. In that sense, Copenhagen fits perfectly since it is about what we expect, and not about what is.
True. I would say that while the theory of QM is not metaphysical, the various interpretations are.


javra November 05, 2025 at 16:10 #1023276
Quoting noAxioms
[Concerning] a Urbilaterian (a brainless ancestor of you, and also a starfish). Is it a being? Does it experience [pain say] and have intent? — noAxioms

If yes, is it also yes for bacteria?
The almost unilateral response to this question by non-physicalists is evasion. What does that suggest about their confidence in their view?


I’ll stick to the more extreme case of prokaryotic unicellular organisms termed “bacteria”.

Can bacteria act and react in relation to novel stimuli so as to not only preserve but improve their homeostatic metabolism (loosely, their physiological life)?

The answer is a resounding yes. For one example:

Quoting https://pubmed.ncbi.nlm.nih.gov/1562188/
Abstract

As has been stated, bacteria are able to sense a wide range of environmental stimuli through a variety of receptors and to integrate the different signals to produce a balanced response that maintains them or directs them to an optimum environment for growth. In addition, these simple, neuron-less organisms can adapt to the current concentration or strength of stimuli, i.e. they have a memory of the past. Although different species show responses to different chemicals or stimuli, depending on their niche, a consistent pattern is starting to emerge that links environmental sensing and transcriptional control to the chemosensing system, either directly, as in R. sphaeroides and the PTS system, or indirectly, as in the MCP-dependent system. This suggests a common evolutionary pathway from transcriptional activators to dedicated sensory systems. Currently the majority of detailed investigations into bacterial behavior have been carried out on single stimuli under laboratory conditions using well-fed cells. Only limited analysis, using a range of rhizosphere and pathogenic species, has been carried out on the role of behavioral responses in the wild. While laboratory studies are needed to provide the backbone for eventual in vivo investigations, we should remember the responses of whole cells to changes in their environment under laboratory conditions are essentially artificial compared to the natural environment of most species. Once the basic system is understood, it will be possible to investigate the role of these responses in vivo, under competitive, growth-limiting conditions with multiple gradients.


(though I much prefer this title: "Bacteria have feelings, too")
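To make the "memory of the past" point concrete, here is a deliberately crude sketch of that kind of adaptive behaviour (a one-dimensional run-and-tumble walker; the numbers and function names are invented for illustration and are not taken from the paper):

[code]
# Crude illustration only: a walker that senses an attractant, keeps a
# short-term "memory" of recent levels, and tumbles less when things improve.
# Parameters are arbitrary; nothing here is drawn from the cited study.
import random

def attractant(x):
    return 1.0 / (1.0 + x * x)  # assumed environment: peak at x = 0

def run_and_tumble(steps=2000, seed=1):
    random.seed(seed)
    x, direction = 10.0, 1          # start well away from the peak
    memory = attractant(x)          # record of recently sensed concentration
    for _ in range(steps):
        x += 0.1 * direction        # "run"
        current = attractant(x)
        improving = current > memory
        # Tumble (randomise direction) more often when conditions worsen.
        if random.random() < (0.05 if improving else 0.5):
            direction = random.choice([-1, 1])
        # Adaptation: the memory slowly tracks the current level.
        memory += 0.1 * (current - memory)
    return x

print(round(run_and_tumble(), 2))  # typically ends near the peak at x = 0
[/code]

Even this stripped-down toy shows sensing, memory, and adaptation producing goal-directed-looking movement; whether that warrants talk of valence is of course the philosophical question.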

Their so doing then logically entails that, just like humans, the stimuli they are exposed to will have affective valence (aka hedonic tone), such that it is either positive and thereby pleasing to the bacterium or else negative and thereby displeasing. In considering that what humans label as pain is a synonym for dolor—which need not be physically produced via pain receptors and their related sensory neurons, but can well be psychological—there then is no rational means of denying that at least a bacterium’s extreme negative valence will equate to the bacterium’s dolor and, hence, pain.

Their observable, goal-oriented responsiveness to stimuli likewise entails that they too are endowed with instinctive, else innate, intents—such as that of optimally maximizing the quality and longevity of their homeostatic metabolism. Devoid of these intents, there would then be no organized means of responding to stimuli. Example: the bacterium senses a predator (danger of being eaten) and, instead of doing what it can to evade being eaten on account of its innate intents of not being so eaten, does things randomly, such as maybe approaching the predator as quickly as possible.

As to consciousness, this is a term loaded with most often implicit connotations, such as that of a recursive self-awareness. In certain contexts, though, consciousness can simply be a synonym for awareness that is not unconscious. A bacterium is no doubt devoid of an unconscious mind—this while nevertheless being endowed with a very primitive awareness that yet meaningfully responds to stimuli. Cars aren’t (not unless they’re possessed by ghosts and named “Carrie” (a joke)). In so having an awareness of stimuli, a bacterium is then also innately aware of what is its own selfhood and what is other (relative to its own selfhood), although this form of primitive self-awareness can in no way be deemed to consist of recursive thoughts.

So yes, given the best of all empirical information and rational discernment, bacteria are conscious (here strictly meaning: hold a non-unconscious, very primitive awareness) of stimuli to which they meaningfully respond via the directionality of innate intents and, furthermore, can experience negative valence, hence dolor, and, hence, some very primitive form of pain.

As to an absolute proof of this, none can be provided as is summed up in the philosophical problem of other minds. But if one can justify via empirical information and rational discernment that one’s close friend has an awareness-pivoted mind, and can hopefully do the same for lesser-animals, then there is no reason to not so likewise do for bacteria.

Shoot, the same can be argued for reproductive haploid cells called gametes, both eggs and sperm/pollen. The easiest to address example: a sperm devoid of any awareness of stimuli to which it meaningfully responds due to innate intents that thereby determine the hedonic tone of the given stimuli would be no functional sperm whatsoever (this if in any way living).
boundless November 05, 2025 at 17:03 #1023284
Quoting noAxioms
OK. I called it strong emergence since it isn't the property of the radio components alone. More is needed. Equivalently, substance dualism treats the brain as sort of a receiver tuned to amplify something not-brain. It's a harder sell with property dualism.


Ok. Note that epistemic 'strong emergence' seems to collapse into a sort of substance dualism where the 'mental substance' 'emerges' in an unexplainable way. So in a sense, that kind of strong emergence is IMO a hidden substance dualism.

Quoting noAxioms
That's what a radio is: a receiver. It probably has no understanding of sound or what it is doing.


Yes, the radio is a receiver. But the sound can't be called 'music' unless it can be understood as such, just like a chair can't be called a chair without human beings who conceive of it as such.

Quoting noAxioms
I would suggest that we actually do know enough to explain any of that, but still not a full explanation, and the goalposts necessarily get moved.


I already stated why I disagree with this. I see why you say this but I disagree. The features of our experience can't be fully explained by what we know of the properties of our constituents.

Quoting noAxioms
Not true. There are plenty of machines whose functioning is not at all understood. That I think is the distinction between real AI and just complex code. Admittedly, a self driving car is probably mostly complex code with little AI to it. It's a good example of consciousness (unconscious things cannot drive safely), but it's a crappy example of intelligence or creativity.


Ok, but in the case of the machines we can reasonably expect that all their actions can be explained by algorithms. And I'm not sure that a self-driving car is conscious, in the sense there is 'something like being a self-driving machine'.

Quoting noAxioms
You can fix a broken machine.


When we fix a machine is the fixed machine the same entity as it was before, or not?

We get a new problem here. Can machines be regarded as having an 'identity' as we have?

Quoting noAxioms
Interestingly, a human maintains memory for about 3 minutes without energy input (refresh). A computer memory location lasts about 4 milliseconds and would be lost if not read and written back before then. Disk memory is far less volatile of course.


Interesting fact, yes. Thanks.

Quoting noAxioms
Quantum theory defines measurement as the application of a mathematical operator to a quantum state, yielding probabilistic outcomes governed by the Born rule. Best I could do.


Agreed. I would add that it doesn't tell you in which cases the Born rule applies.

In this very weak sense, QM is an extremely practical set of rules that allows us to make extraordinary predictions. Everything more is interpretation-dependent.
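To spell out the rule being referenced (a textbook statement, added here only for concreteness, not anything from either post): for a system prepared in state $|\psi\rangle$ and an observable with eigenstates $|a_i\rangle$, the Born rule assigns outcome probabilities
$$P(a_i) = |\langle a_i | \psi \rangle|^2.$$
The formalism supplies this recipe, but it does not itself say which physical interactions count as the "measurement" that triggers it; that is exactly the interpretation-dependent part.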

And welcome back @Wayfarer


javra November 05, 2025 at 17:59 #1023295
Reply to noAxioms Reply to boundless

Quoting boundless
Not true. There are plenty of machines whose functioning is not at all understood. That I think is the distinction between real AI and just complex code. Admittedly, a self driving car is probably mostly complex code with little AI to it. It's a good example of consciousness (unconscious things cannot drive safely), but it's a crappy example of intelligence or creativity. — noAxioms


Ok, but in the case of the machines we can reasonably expect that all their actions can be explained by algorithms. And I'm not sure that a self-driving car is conscious, in the sense that there is 'something it is like to be a self-driving machine'.


In conjunction with what I’ve just expressed in my previous post, I’ll maintain that for something to be conscious, the following must minimally apply, or else everything from alarm clocks to individual rocks can be deemed to be conscious as well (e.g., “a rock experiences the hit of a sledgehammer as stimuli and reacts to it by breaking into pieces, all this in manners that are not yet perfectly understood"):

To be conscious, it must a) at minimum hold intents innate to its very being (and due to these intents, thereby hold, at minimum, innate intentions) which then bring about b) an active hedonic tone to everything that it is stimulated by (be this tone positive, negative, or neutral).

An AI self-driving car holds neither (a) nor (b) (as per the joke I mentioned in my last post, not unless it’s possessed by ghosts such as Stephen King’s “Carrie” was stated to be). As @boundless points to, its “behaviors” all stem from human created algorithms that logically reduce to an “if A then B” type of efficient causation—even if these algorithms are exceedingly complex, evolve over time, and aren’t fully understood by us humans—and this devoid of both a) any intent(s) innate to its being upon which all of its “behaviors” pivot (and intents, innate or otherwise, can only be teleological rather than efficiently causal, with algorithms strictly being the latter) and, likewise, b) the affective valence which these same innate intents bring about. Example: a stationary self-driving car will not react if you open up the hood so as to dismantle the engine (much less fend for itself), nor will it feel any dolor if you do. Therefore, the self-driving car cannot be conscious.

----------

Please notice that I'm not in all this upholding the metaphysical impossibility of any AI program ever becoming conscious at any future point in time. But, if such were to ever occur, the given AI will then minimally need to be in some significant way governed by teleological interests, i.e. by intents innate to its being, that then bring about its affective valence relative to the stimuli it encounters (stimuli both external and internal to its own being).

And, from everything I so far understand, teleological processes can only hold veritable presence within non-physicalist ontologies: the variety of which extends far beyond the Cartesian notion of substance dualism as regards mind and body.

--------

EDIT: Additionally, to better clarify my aforementioned stances as regards AI, it should come as no surprise that the evolutions of AI are all governed by human-devised goals, goals which variations of AI programs in a large sense compete to best accomplish—such that what best accomplishes these human-devised goals then replicates into a diversity of programs which further attempt to better accomplish the given human-devised goal. These goals that govern AI-program evolutions (rather than the AI programs themselves) have however all been devised by humans and are in this sense fully artificial (rather than being perfectly natural). In sharp enough contrast, all life, from archaebacteria to humans, is governed by fully natural and perfectly innate goals, i.e. intents; innate intents passed down from generation to generation via genotypic inheritance. So, while it is not impossible that AI might some day evolve to itself have innate intents to its very being, replete with pleasures and dolors (all relative to these very same innate intents with which they’re brought into being by previous generations), till AI programs so accomplish this they will persist in being non-conscious programs: devoid of innate intents and so devoid of the affective valence to stimuli that the former entail.
boundless November 06, 2025 at 16:23 #1023494
Quoting javra
As boundless points to, its “behaviors” all stem from human created algorithms that logically reduce to an “if A then B” type of efficient causation—even if these algorithms are exceedingly complex, evolve over time, and aren’t fully understood by us humans—and this devoid of both a) any intent(s) innate to its being upon which all of its “behaviors” pivot (and intents, innate or otherwise, can only be teleological rather than efficiently causal, with algorithms strictly being the latter) and, likewise, b) the affective valence which these same innate intents bring about. Example: a stationary self-driving car will not react if you open up the hood so as to dismantle the engine (much less fend for itself), nor will it feel any dolor if you do. Therefore, the self-driving car cannot be conscious.


:up: Yes, I agree. I believe that there is a tendency to read too much into the 'behavior' of machines. By analyzing my own phenomenological experience there are some features that do not seem explainable by referring to algorithms. And, as you say, 'consciousness' seems also to imply both teleological and affective states. I still do not find any evidence that machines have those.

Again, I also agree that this doesn't mean that it is impossible that one day a machine might become sentient.
J November 06, 2025 at 17:02 #1023499
Reply to Wayfarer Thompson concludes, according to the synopsis:

"The enactive framework strongly supports a continuity of life and mind, showing that living systems are inherently value-constituting and purposive. However, it does not conclusively establish that all life is sentient in the affective sense. The move from sense-making to feeling remains conceptually and empirically underdetermined."

I think this gets it exactly right (though I'm not familiar enough with enactivism to know whether "strongly supports" is correct). The critical issue is whether, again using Thompson's phrasing, purposive, value-driven organisms feel those values as pleasant or unpleasant. "There's a conceptual gap between responsiveness to value and the felt experience of value."

For me, this is a real step forward in putting some content to the old favorite, "what it's like to be X." We're urged to ask, "Can an entity respond in purposive, value-oriented ways without experiencing anything?" I think the answer will turn out to be yes -- and it's still not like anything to be such an organism, where "be like something" is understood as "do more than respond purposively."
Patterner November 06, 2025 at 17:46 #1023508
Quoting J
The enactive framework strongly supports a continuity of life and mind, showing that living systems are inherently value-constituting and purposive.
That's a great way of putting it. If life wants to endure, it needs to know what is valuable.

I would like to say: [I]Therefore, something choosing valuable things in order to endure is living.[/I] But I don't think anyone would let me have that. I suspect people will say only biological things are living. I'd say maybe we should expand the definition of [I]living[/I], and divide it into Biological Life, Mechanical Life, and whatever else.
J November 06, 2025 at 23:36 #1023598
Quoting Patterner
I would like to say: Therefore, something choosing valuable things in order to endure is living. But I don't think anyone would let me have that. I suspect people will say only biological things are living. I'd say maybe we should expand the definition of living, and divide it into Biological Life, Mechanical Life, and whatever else.


It's complicated. There's no alarm bell that rings when philosophy ceases analyzing a concept and starts revising what the concept ought to cover. So yes, we can offer a revision of what "living" means so as to include mechanical things. But what is our warrant? Obviously, we can't say "Because that's what 'living' means" or "That's how 'living' is used" -- the whole point of the revision is to deny that, and suggest an amelioration of the concept. Your suggestion here would be something like, "We should change the domain of what 'living' can refer to because 'choosing valuable things in order to endure' is a more perspicuous or clearer use, conceptually, than the standard version. It captures the idea of 'living' better than 'biological thing' does." That could be true, but you'd have to say more about why.

But yes, I'm impressed by Thompson's ideas here. As I said, he points to the right question: Can an entity, alive or not, do this value-seeking thing and not be conscious? I agree with him that we don't yet know.
Patterner November 07, 2025 at 03:47 #1023615
Quoting J
Can an entity, alive or not, do this value-seeking thing and not be conscious? I agree with him that we don't yet know.
I know I'm alone here at TPF in my thinking that consciousness is fundamental. But, if I'm right, then there is consciousness in all such entities. And not just the individual particles that make up the entity, each experiencing only its own individual existence. The entity would be conscious as an entity. Value-seeking surely isn't accomplished without information processing, which is what I think accounts for collective consciousness. Of course, that doesn't imply mental abilities like ours. An archaeon only experiences being a single-celled organism, not an entity with things like memories, abstract thoughts, and self-awareness.


Quoting J
That could be true, but you'd have to say more about why.
Maybe. But the definition of life is famously unclear. The first thing Sara Imari Walker talks about in [I]Life As No One Knows It[/I] is how definitions differ, and how any definition rules out some things you think are alive, and includes some things you think are not. So maybe the question isn't as much "Why?" as it is "Why not?".

Life on this planet has always been of a certain type. That makes sense, because the laws of physics worked with what was available. Certain arrangements of matter.

Does that mean something cannot be alive if it isn't made of those same arrangements? Can life not come to exist on other planets that contain very different mixes of elements?

If that's not a problem, then why can't we consider some specific mixes of elements here to be alive?

Maybe the key is not what life is made of, but what it's doing. This can apply to anyone's definition.
J November 07, 2025 at 13:56 #1023659
Quoting Patterner
Maybe the key is not what life is made of, but what it's doing. This can apply to anyone's definition.


Yes, that's how I would argue it, if I shared your view.

Quoting Patterner
The first thing Sara Imari Walker talks about in Life As No One Knows It is how definitions differ, and how any definition rules out some things you think are alive, and includes some things you think are not.


Interesting. Would it be easy for you to cite an example of each? Curious to know what she has in mind.

Quoting Patterner
But the definition of life is famously unclear.


Yes, in one sense. But it's also the case that a child will be able to sort living from non-living things with great accuracy, given the currently accepted use of "living." The child doesn't know the definition -- arguably, no one does for sure -- but she knows how to use the word. Do we want to say she's using the word incorrectly? I don't think so. Rather, we might say that her (and everyone else's) use of the word needs to change in order to encompass a more accurate understanding of what it means to be alive.

Quoting Patterner
The entity would be conscious as an entity.


One question worth considering: Which way does the argument point? Is it, "Any entity that can do this value-seeking thing will now be defined as 'being conscious'"? Or is it, "We know (have learned/hypothesize) that being conscious means having the ability to do the value-seeking thing, so if it can do it, it's conscious"?
boundless November 07, 2025 at 15:22 #1023666
Quoting Patterner
Life on this planet has always been of a certain type. That makes sense, because the laws of physics worked with what was available. Certain arrangements of matter.


While this can be viewed as a tautology (the laws of physics allow life because there is life...), I also think that this is a very interesting point. To me this suggests that we perhaps do not know enough of the 'inanimate' and this is the reason why the properties associated with life seem so different from the properties associated with 'what isn't life', i.e. life is, so to speak, latent in 'what isn't life'.

And BTW, I also think that 'consciousness' is fundamental, albeit for different reasons, and I have a different model from yours. For instance, as I said before, I believe that if mathematical truths are concepts (i.e. mental contents) and, as they seem to be, are not contingent (independent of time, place and so on), then some kind of 'mind' is indeed fundamental. Physical objects, on the other hand, seem to be contingent.
noAxioms November 07, 2025 at 15:53 #1023669
Quoting Wayfarer
Which is, in a word, physicalism - there is only one substance, and it is physical. From within that set of assumptions, Chalmer's and Nagel's types of arguments will always remain unintelligible.

OK, from this I gather that your statement asserting an ontological distinction, a distinction in the mode of being, is merely an expression of opinion, not evidence of any sort. You had phrased it more as the latter. We are (mostly) well aware of each other's opinions.


Quoting J

[Concerning] a Urbilaterian (a brainless ancestor of you, and also a starfish). Is it a being? Does it experience [pain say] and have intent?
— noAxioms

I don't know. And that's not evasion, just honesty.

Better answer than most, but I would suggest that not even knowing if some random animal is a being or not seems to put one on poor footing to assert any kind of fundamental difference that prevents say a car from being conscious.
Wayfarer is likely more committed to 'yes, because it's life', except he won't say that; he instead lists typical (but not universal) properties of life, properties which some non-life entities also exhibit. This is either a funny way of saying 'it's gotta be life', or he's saying that it's the properties themselves (homeostasis, say) that grant a system a first-person point of view. Hence a recently severed finger is conscious.

But I also don't think that the right answer to that question reveals much about the larger problem.
Any answer (right/wrong is irrelevant here) sheds light on what I'm after. Nagel seemed to avoid it, venturing no further from a human than a bat, a cousin so close that I'd have to check the records before committing to marry one. This is the sort of thing I'm after when asking that question.

I think consciousness will turn out to depend on biology, but that's not to say that everything alive is conscious.
OK, but then the key that distinguishes conscious from otherwise is not 'is biological'. The key is something else, and the next question would be 'why can only something biological turn that key?'.


Quoting Harry Hindu
As I said, the virtual machine is a simulation, not the real thing.
It not being real is irrelevant. A simulation of a bat fails because it can at best be a simulation of a human doing batty things, not at all what it's like to be the bat having batty experiences.
A human can do echolocation. There are blind people who use this. We have all the hardware required except the sound-pulse emitter, which is something they wear. But a simulation won't even let you know what that's like, since you're not trained to see that way. It would take months to learn to do it (far less time for an infant, which is the typical recipient of that kind of setup). So a VR gives you almost nothing. No sonar, no flight control, muscle control, or any of the stuff the bat knows.

You ask "how does any third-person stance impart knowledge?", which is a silly question since pretty much all of school is third person information. A VR bat simulation is much like a movie. Sure, it imparts information, just not the information of what it's like to be a bat.

For instance, I might try to imagine what it might be to just experience the world through echo-location without all the other sensory experiences the bat might have.
As I said, that can be done. It just takes practice. No simulation needed.

Well, there's a lot going on in this thread and our memories are finite, so you might have to restate your definition from time to time, or at least reference your definition as stated.
If I use the word in my own context, I'm probably referencing mental processes. Not an object or a substance of any kind.

Quoting Harry Hindu
When talking about anything in the shared world you are (attempting to (your intent is to)) talking about the thing as it is in itself, or else what information are you trying to convey?
When talking about things in the shared world, I'm probably talking about the pragmatic notion of the thing in question, never the thing in itself. On rare occasion, I perhaps attempt (on a forum, say) a description of the thing closer to what it actually is, but that's rare, and I'm highly likely to not be getting it right. "It is stranger than we can think." -- Heisenberg

Why should I believe anything you say if you can never talk about things as they are in themselves - like your version of mind?
My version of mind is a pragmatic description of the way I see it. So is yours, despite seeing it differently. One of us may be closer to the way it actually is, but I doubt anybody has nailed that.
You seem to be confusing 'thing in itself' with truth of the matter. For instance, car tires tend to be circular, not square (as viewed along its axis). Circular is closer to truth, and that's what I try to convey. Even closer is a circle with a flat spot. But all that is a pragmatic description of a tire. The thing in itself is not circular at all. Pragmatically, I don't care about that.

A more accurate way to frame this is through the concept of the central executive in working memory. This isn’t a tiny conscious agent controlling the mind, but a dynamic system that coordinates attention, updates representations, and integrates information from different cognitive subsystems. It doesn’t “watch” the mind; it organizes and manages the flow of processing in a way that allows higher-level reflection and planning.

You seem to be talking about both sides. For one, I never mentioned 'tiny'. What I call the homunculus seems to be (volume-wise) about as large as the rest combined. Only in humans. That part 'watches' the model (the map) that the subconscious creates. All of it together is part of mental process, so it isn't watching the mind since it all is the mind. The tasks that you list above seem to be performed by both sides, each contributing what it does best. If speed/performance is a requisite, the subconscious probably does the work since it is so much faster. If time is available (such as for the high-level reflection and planning you mention), that probably happens in the higher, less efficient levels.

The subconscious isn’t some subordinate system taking orders from the homunculus.
Indeed, it's quite the opposite. It's the boss, and what I here call the homunculus is a nice rational tool that it utilizes.

Objects are the process of interacting smaller "objects". The problem is that the deeper you go, you never get at objects, but processes of ever smaller "objects" interacting. Therefore it is processes, or relations all the way down.
OK. I'm pretty on board with relational definitions of everything, so I suppose one could frame things this way. My example was more about the way language is used. It's OK to say 6 flames were lit, but it's syntactically wrong to say 6 combustions are lit. But 'combustion' can still be used as a noun in a sentence, as a reference to a process, not an object. Of course this draws a distinction between process and object. Your definition does not, and also clashes with the way the words are used in language.

Objects are mental representations of other processes
That's idealism now. I'm not talking about idealism.

Ok, so combustion causes flame. Both are processes, but not identical. Combustion is the reaction; flame is the visible process that results from it.
Well how about a rock then (the typical object example). What causes rock? I'm not asking how it was formed, but what the process is that is the rock.

If flame and combustion are distinct processes
I never said that. I called the flame an object, not a process. I distinguish between process and object, even if the object happens to be a process, which is still 'process' vs. 'a process'.

Does something need to have an internal representation with some other part accessing those representations for it to be, or have a sense of being?
I think here you are confusing 'being a rock' with 'the rock having a sense of being'. They're not the same thing. The first is a trivial tautology. The second seems to be a form of introspection.



Quoting javra
Can bacteria act and react in relation to novel stimuli so as to not only preserve but improve their homeostatic metabolism (loosely, their physiological life)?

The answer is a resounding yes.

I agree, but non-living things can also do this. Thanks for the blurb. Interesting stuff.

Their having 'memory' is quite remarkable. Slime molds can communicate, teach each other things, all without any nerves.

there then is no rational means of denying that at least a bacterium’s extreme negative valence will equate to the bacterium’s dolor and, hence, pain.
Excellent. From such subtle roots, it was already there, needing only to be honed. Do they know what exactly implements this valence? Is it a chemical difference? In a non-chemical machine, some other mechanism would be required.

Their responsiveness to stimuli likewise entails that they too are endowed with instinctive, else innate, intents—such as that of optimally maximizing the quality and longevity of their homeostatic metabolism.
Instincts like that are likely encoded in the DNA, the product of countless 'generations' of natural selection. I put 'generations' in scare quotes since the term isn't really relevant to a non-eukaryote.

Quoting javra
A bacterium is no doubt devoid of an unconscious mind—this while nevertheless being endowed with a very primitive awareness that yet meaningfully responds to stimuli. Cars aren’t (not unless they’re possessed by ghosts and named “Carrie” (a joke)).

Here your biases show through. Possession seems to be required for the cell to do this. The bacterium is possessed. The car is asserted not to be, despite some cars these days being endowed with an awareness that meaningfully responds to stimuli. I've always likened substance dualism to being demon-possessed, yielding one's free will to that of the demon, apparently because the demon makes better choices?
If a cell can be possessed, why not a toaster? What prevents that?

Side note: It's Christine, not Carrie. Carrie is the girl with the bucket of blood dumped on her.
Remember T2 ending? The liquid metal terminator melts into a vat of white hot metal. That metal was made into Christine obviously (and some other stuff, being a fairly large vat).

As to an absolute proof of this, none can be provided as is summed up in the philosophical problem of other minds. But if one can justify via empirical information and rational discernment that one’s close friend has an awareness-pivoted mind, and can hopefully do the same for lesser-animals, then there is no reason to not so likewise do for bacteria.
And toasters.



Quoting boundless
Ok, but in the case of the machines we can reasonably expect that all their actions can be explained by algorithms.
Disagree. The chess program beats you despite nobody programming any chess algorithms into it at all. It doesn't even know about chess at first until the rules are explained to it. Only the rules, nothing more.
Sure, the machine probably follows machine instructions (assuming physics isn't violated anywhere), which are arguably an algorithm, but then a human does likewise, (assuming physics isn't violated anywhere), which is also arguably an algorithm.

In reply to the above comment by boundless:
Quoting javra
In conjunction with what I’ve just expressed in my previous post, I’ll maintain that for something to be conscious, the following must minimally apply, or else everything from alarm clocks to individual rocks can be deemed to be conscious as well (e.g., “a rock experiences the hit of a sledgehammer as stimuli and reacts to it by breaking into pieces, all this in manners that are not yet perfectly understood"):
I agree that, not being rigorously defined, consciousness can thus be loosely applied to what is simple cause and effect. For that matter, what we do might just be that as well.

To be conscious, it must a) at minimum hold intents innate to its very being
This seems a biased definition. It would mean that even if I manufacture a human from non-living parts, it would not be conscious. Why does the intent need to be innate? Is a slave not conscious because his intent is that of his master?

The hedonic requirement is reasonable, but you don't know that the car doesn't have it. The bit above about valence gets into this, and a car is perfectly capable of having that implemented (and likely does).

(and due to these intents, thereby hold, at minimum, innate intentions) which then bring about b) an active hedonic tone to everything that it is stimulated by (be this tone positive, negative, or neutral).

Example: a stationary self-driving car will not react if you open up the hood so as to dismantle the engine (much less fend for itself), nor will it feel any dolor if you do. Therefore, the self-driving car cannot be conscious.
Heck, even my car reacts to that, and it's not very smart. A self-driving car very much does react to that, but mostly only to document it. It has no preservation priorities that seek to avoid damage while parked. It could have, but I'm not sure how much an owner would want a car that flees unexpectedly when it doesn't like what's going on.
Most machines prioritize safety of humans (including the guy stealing its parts) over safety of itself. The law is on the side of the thief.

Please notice that I'm not in all this upholding the metaphysical impossibility of any AI program ever becoming conscious at any future point in time.
Good. Most in the camp of 'no, because it's a machine' do actually.

And, from everything I so far understand, teleological processes can only hold veritable presence within non-physicalist ontologies:
Surely the car (and a toaster) has this. It's doing what it's designed to do. That's a teleological process in operation.


Quoting boundless
When we fix a machine is the fixed machine the same entity as it was before, or not?
That opens a whole can of worms about identity. The same arguments apply to humans. Typically, the pragmatic answer is 'yes'. Identity seems to be a pragmatic idea, with no metaphysical basis behind it.

We get a new problem here. Can machines be regarded as having an 'identity' as we have?
Both have pragmatic identity. Neither has metaphysical identity since it's pretty easy to find fault in any attempt to define it rigorously.

Agreed I would add that It doesn't tell you in which cases the Born rule applies.
You need to expand on this. I don't know what you mean by it.


Quoting Patterner
The enactive framework strongly supports a continuity of life and mind, showing that living systems are inherently value-constituting and purposive. — J

That's a great way of putting it. If life wants to endure, it needs to know what is valuable.

I agree. It is the goal of very few machines to endure or to be fit. That's not a fundamental difference with the typical life form, but it's still a massive difference. Machines need to be subjected to natural selection before that might change, and a machine that is a product of natural selection is a scary thing indeed.


Quoting J
But it's also the case that a child will be able to sort living from non-living things with great accuracy, given the currently accepted use of "living." The child doesn't know the definition -- arguably, no one does for sure -- but she knows how to use the word.

This is a great point. It's simply hard to formalize what is meant by a word despite everybody knowing what the word means. It means more "what I think is alive", which differs from the rigorous definition that, as was mentioned, always includes something you think isn't and excludes something you think is. But what the child does lacks this problem by definition. The child just knows when to use the word or not.


javra November 07, 2025 at 17:37 #1023682
Quoting noAxioms
Do they know what exactly implements this valence? Is it a chemical difference?


Not to my knowledge. But I do assume it's constituted from organic chemistry. Still, as with the metabolism that likewise unfolds, there is an autopoiesis involved that is other than the individual molecules and their chemicals. Feel free to use the pejorative of vitalism. Lumping together some lipids, proteins, and nucleic acids does not a living organism make. Or: metabolism = respiration = breath = anima = life. (e.g., a virus, viroid, or prion does not have a metabolism and is hence not living, even though composed of organic molecules.) And this entails that extra oomph, relative to the purely terrestrial and inanimate, of autopoiesis.

Quoting noAxioms
Here your biases show through. Possession seems to be required for the cell to do this. The bacterium is possessed. The car is asserted not to be, despite some cars these days being endowed with an awareness that meaningfully responds to stimuli. I've always likened substance dualism to being demon-possessed, yielding one's free will to that of the demon, apparently because the demon makes better choices?
If a cell can be possessed, why not a toaster? What prevents that?


Again, I'll just assume that you are biased against the notion that life is ontologically different to nonlife (be it either inanimate or else organically dead). And so I'll skip ahead to the term "vitalism". Vitalism is quite different from the hocus-pocus spiritual notions of "possessions". So framing the issue in terms of possession is a non-starter for me. Now, as far as jokes go, supposedly anything can be possessed. From lifeforms to children's toys (e.g., Chucky), and I don't see why not toasters as well (this in purely speculative theory but not in practice, akin to BIVs, solipsism, and such). And this possession supposedly occurs via what is most often taken to be a malevolent anima (a ghost or the like), which, as anima, is itself endowed with the vitality that vitalism in one way or another specifies. But again, I've no interest in the hocus-pocus spirituality of possessions. The issue of vitalism, on the other hand, seems important enough to me as regards the current topics.

Quoting noAxioms
Side note: It's Christine, not Carrie.


:grin: Haven't read the book nor seen the movie. Thanks for the correction.

Quoting noAxioms
I agree that, not being rigorously defined, consciousness can thus be loosely applied to what is simple cause and effect.


Again, intents, and the intentioning they entail, are teleological, and not cause and effect. There's a massive difference between the two (e.g., the intent is always contemporaneous to the effects produced in attempting to fulfill it - whereas a cause is always prior to its effect).

Quoting noAxioms
This seems a biased definition. It would mean that even if I manufacture a human from non-living parts, it would not be conscious. Why does the intent need to be innate? Is a slave not conscious because his intent is that of his master?


What do you mean by "manufacture a human from non-living parts"? In whole? How then would it in any way be human? Or are you thinking along the lines of fictions such as the bionic man or robocop? In which case, the human life remains intact while the constituent parts of its body are modified with non-living parts.

As to why innate: because it is, in fact, natural, rather than human-derived and thereby artificial. A human slave has innate intents, which thereby allow in certain cases the slave to obey the non-innate but acquired intent of the slave-master. (A slave's intent is never logically identical to the slave-master's intent - this, of itself, would be hocus-pocus possession.)

Quoting noAxioms
And, from everything I so far understand, teleological processes can only hold veritable presence within non-physicalist ontologies: -- javra

Surely the car (and a toaster) has this. It's doing what it's designed to do. That's a teleological process in operation.


Even so, a) these teleological processes you here address are artificial rather than natural, and b) it doesn't in any way change what was stated regarding ontologies.




J November 07, 2025 at 18:20 #1023691
Quoting noAxioms
I would suggest that not even knowing if some random animal is a being or not seems to put one on poor footing to assert any kind of fundamental difference that prevents say a car from being conscious.


Perhaps I should have divvied up the question and answered more precisely. To the question of whether it's a "being": I can't respond, as the use of "being," here and just about everywhere else, is hopelessly ambiguous and/or question-begging. To the question of whether it experiences pain: I don't know. Intent?: As described by Thompson, probably so.

The questions were put in terms of what I know, which is very little in this area. If "asserting any kind of fundamental difference" requires knowledge, then yes, I'd be on poor footing. But does it? My position is that, while we certainly don't know the answers to these questions, we can offer more or less likely solutions. I don't know that a car isn't conscious, but for me the possibility is extremely unlikely. Is this because of a "fundamental difference"? Very probably; the car is not biologically alive. Am I asserting these things? Semantics. I don't think they're obvious, or that people who disagree are unintelligent. So if that's what it takes to assert something, then I'm not. . . I know I've used this analogy before, but disagreeing about what consciousness is, in 2025, is about as fruitful as a debate among 18th century physicists about what time is. We can trade opinions and cite evidence, but fundamentally we don't know what the F we're talking about.

Good discussion anyway!
Patterner November 07, 2025 at 18:25 #1023692

Quoting J
The first thing Sara Imari Walker talks about in Life As No One Knows It is how definitions differ, and how any definition rules out some things you think are alive, and includes some things you think are not.
— Patterner

Interesting. Would it be easy for you to cite an example of each? Curious to know what she has in mind.
I assumed you're familiar with one. If reproduction is part of the definition of life, then worker bees and mules are not alive. Neither is my mother, as she is 83.

She says many consider Darwinian evolution to be [I]the[/I] defining feature of life. In which case no individual is living, since only populations can evolve.

For something that's included that we think is not living, she cites Carl Sagan's [I]Definitions of Life[/I]:
Quoting Carl Sagan
For many years a physiological definition of life was popular. Life was defined as any system capable of performing a number of such functions as eating, metabolizing, excreting, breathing, moving, growing, reproducing, and being responsive to external stimuli. But many such properties are either present in machines that nobody is willing to call alive, or absent from organisms that everybody is willing to call alive. An automobile, for example, can be said to eat, metabolize, excrete, breathe, move, and be responsive to external stimuli. And a visitor from another planet, judging from the enormous numbers of automobiles on the Earth and the way in which cities and landscapes have been designed for the special benefit of motorcars, might well believe that automobiles are not only alive but are the dominant life form on the planet.
And I'll include this conversation between Data (an android, if you're not familiar) and Dr. Crusher, from Star Trek: The Next Generation.
Data: What is the definition of life?

Crusher: That is a BIG question. Why do you ask?

Data: I am searching for a definition that will allow me to test an hypotheses.

Crusher: Well, the broadest scientific definition might be that life is what enables plants and animals to consume food, derive energy from it, grow, adapt themselves to their surroundings, and reproduce.

Data: And you suggest that anything that exhibits these characteristics is considered alive.

Crusher: In general, yes.

Data: What about fire?

Crusher: Fire?

Data: Yes. It consumes fuel to produce energy. It grows. It creates offspring. By your definition, is it alive?

Crusher: Fire is a chemical reaction. You could use the same argument for growing crystals. But, obviously, we don't consider them alive.

Data: And what about me? I do not grow. I do not reproduce. Yet I am considered to be alive.

Crusher: That's true. But you are unique.

Data: Hm. I wonder if that is so.

Crusher: Data, if I may ask, what exactly are you getting at?

Data: I am curious as to what transpired between the moment when I was nothing more than an assemblage of parts in Dr. Soong's laboratory and the next moment, when I became alive. What is it that endowed me with life?

Crusher: I remember Wesley asking me a similar question when he was little. And I tried desperately to give him an answer. But everything I said sounded inadequate. Then I realized that scientists and philosophers have been grappling with that question for centuries without coming to any conclusion.

Data: Are you saying the question cannot be answered?

Crusher: No. I think I'm saying that we struggle all our lives to answer it. That it's the struggle that is important. That's what helps us to define our place in the universe.



Quoting J
One question worth considering: Which way does the argument point? Is it, "Any entity that can do this value-seeking thing will now be defined as 'being conscious'"? Or is it, "We know (have learned/hypothesize) that being conscious means having the ability to do the value-seeking thing, so if it can do it, it's conscious"?
I don't follow. I don't see a significant difference between those options. It seems like just different wording.
J November 07, 2025 at 18:28 #1023693
Quoting noAxioms
I think consciousness will turn out to depend on biology, but that's not to say that everything alive is conscious.
OK, but then the key that distinguishes conscious from otherwise is not 'is biological'. The key is something else, and the next question would be 'why can only something biological turn that key?'.


Right to all of that. Biology, on my hypothesis, is necessary but not sufficient. My guess is that no single property will turn out to be sufficient. Your question -- which reduces to "Why is biology necessary for consciousness?" -- is indeed the big one. If and when that is answered, we'll know a lot more about what consciousness is. (Or, if biology isn't necessary, also a lot more!)
javra November 07, 2025 at 19:05 #1023698
Quoting Patterner
For something that's included that we think is not living, she cites Carl Sagan's Definitions of Life:

For many years a physiological definition of life was popular. Life was defined as any system capable of performing a number of such functions as eating, metabolizing, excreting, breathing, moving, growing, reproducing, and being responsive to external stimuli. But many such properties are either present in machines that nobody is willing to call alive, or absent from organisms that everybody is willing to call alive. An automobile, for example, can be said to eat, metabolize, excrete, breathe, move, and be responsive to external stimuli. And a visitor from another planet, judging from the enormous numbers of automobiles on the Earth and the way in which cities and landscapes have been designed for the special benefit of motorcars, might well believe that automobiles are not only alive but are the dominant life form on the planet. — Carl Sagan


Good post, but if we start playing fast and loose with the term metabolism - which in part necessitates cellular respiration - then fire is certainly alive: "it metabolizes energy to sustain its own being, it's birthed and it perishes, and it reproduces". And something's quite off about so affirming, unless one wants to assume an animistic cosmos wherein absolutely everything is animated with agency and, hence, with will.
Patterner November 07, 2025 at 21:03 #1023717
Reply to javra
Yes, it can get sticky.

Google says:
[I]Metabolism refers to all the chemical reactions that occur within an organism to maintain life.[/I]

That might be circular.
-Life is something that involves various chemical reactions.
-Various chemical reactions maintain life.

And not all life uses cellular respiration.

My overriding question is: Can there be life without chemical reactions? Is it the chemical reactions that define life? Or maybe the chemical reactions are the means to an end, and that end is a better definition of life.
J November 07, 2025 at 21:50 #1023730
Reply to Patterner Thanks for the borderline-life examples. The car example invites the reply: "Can be said to" eat, metabolize, excrete, etc.? Yes, that can be said, in the spirit of metaphor. But is it accurate? Is it really eating, metabolizing, etc.? Well, what does it mean to "really eat"? Which leads to . . .

Quoting Patterner
One question worth considering: Which way does the argument point? Is it, "Any entity that can do this value-seeking thing will now be defined as 'being conscious'"? Or is it, "We know (have learned/hypothesize) that being conscious means having the ability to do the value-seeking thing, so if it can do it, it's conscious"?
— J
I don't follow. I don't see a significant difference between those options. It seems like just different wording.


Compare a geometrical shape such as a trapezoid. One might learn how to use the word correctly, and thus recognize trapezoids, without being able to say exactly what are the qualities that make the shape a trapezoid. Or, one might be taught those qualities, along with the word "trapezoid," and then categorize the shapes one encounters.

Which approach should we adopt in the case of consciousness? Do we already know what the word means, so that it's only a matter of finding the entities to which it applies? Or do we already know what's conscious and what isn't, without being able to define consciousness, and hence it's a matter of figuring out what conscious things have in common, and thus perfecting a definition?

That's the difference in "pointing" I had in mind. Roughly, is it "word to object" or "object to word"?
javra November 07, 2025 at 21:51 #1023732
Quoting Patterner
And not all life uses cellular respiration.


A bold statement. Can you please reference any known lifeform that can live in the complete absence of both aerobic and anaerobic respiration? Fermentation, a form of metabolism that is neither, is to my knowledge not sufficient for life in the complete lifelong absence of respiration - an example being the fermentation in yeast, which cannot live in a complete lifelong absence of respiration. (At least, from everything I so far know.)

Quoting Patterner
My overriding question is: Can there be life without chemical reactions?


You more specifically mean certain reactions of organic chemicals, namely those which result in metabolism - or at least I so assume. This will fully depend on the metaphysics one subscribes to. In some such metaphysical systems, being "dead inside" or else being "fully alive" are more than mere poetics, but can point to an interpretation of "life" which, though non-physical, is nevertheless required for physical life to occur. That mentioned, there is no non-metaphorical life (as opposed to "the life of an idea" or "the lifespan of a car") known to science which is not grounded in the physicality of "chemical reactions". None that I know of at least.
Patterner November 07, 2025 at 22:47 #1023743
Quoting noAxioms
I agree. It is the goal of very few machines to endure or to be fit. That's not a fundamental difference with the typical life form, but it's still a massive difference. Machines need to be subjected to natural selection before that might change, and a machine that is a product of natural selection is a scary thing indeed.
I have to assume we could make a program that duplicates itself, but does so imperfectly. Since they operate so fast, they could doubtless go through a million generations in a fairly short time.

How much storage space and power would be needed to support such a thing? If one evolved to overwrite others, it might be manageable. At least the space.

Hey, why not unnatural selection? We could give it mutations faster than nature works.
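A minimal sketch of the kind of experiment described above (Python, purely illustrative; the target string, population cap, and mutation rate are invented for the example, not anything proposed in the thread):

[code]
import random

TARGET = "SURVIVE"     # assumed fitness criterion, purely for illustration
POP_CAP = 200          # storage cap: the worst replicators get overwritten
MUTATION_RATE = 0.05   # "unnatural selection": crank this up for faster change
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(genome):
    # Count positions matching the target; a stand-in for "choosing valuable things".
    return sum(a == b for a, b in zip(genome, TARGET))

def replicate(genome):
    # Imperfect self-copy: each character may mutate during duplication.
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in genome)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_CAP)]

for generation in range(100000):
    population += [replicate(g) for g in population]   # every copy reproduces imperfectly
    population.sort(key=fitness, reverse=True)         # selection...
    population = population[:POP_CAP]                  # ...by overwriting the rest
    if fitness(population[0]) == len(TARGET):
        print("Target reached at generation", generation)
        break
[/code]

Nothing here is conscious, of course; the point is only that imperfect copying plus a selection rule is cheap to set up and runs through thousands of "generations" in seconds.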
Patterner November 08, 2025 at 00:04 #1023751
Quoting J
Compare a geometrical shape such as a trapezoid. One might learn how to use the word correctly, and thus recognize trapezoids, without being able to say exactly what are the qualities that make the shape a trapezoid. Or, one might be taught those qualities, along with the word "trapezoid," and then categorize the shapes one encounters.
I swear I'm not trying to be difficult, but I don't get it. How can you learn how to use the word correctly other than by being taught those qualities? And how can you categorize the shapes without recognizing them?


Quoting J
Which approach should we adopt in the case of consciousness? Do we already know what the word means, so that it's only a matter of finding the entities to which it applies? Or do we already know what's conscious and what isn't, without being able to define consciousness, and hence it's a matter of figuring out what conscious things have in common, and thus perfecting a definition?
I thought we were talking about life. As for consciousness, yes, I already know what the word means, and nothing you bring up applies. :grin: :lol: Anyway, my thought is that everything is conscious. More precisely, subjective experience is a property of all particles. They all experience their own being. Which, in the case of a particle, isn't much. There are no mechanisms for sensory input, storage of information from previous sensory input, feedback loops, or any other mental activity.

When a physical system processes information, the conglomerate of particles experiences as a unit. DNA is where it all began. But even though information is being processed when DNA is being replicated, or protein is being synthesized, there are no mechanisms for sensory input, storage of information from previous sensory input, feedback loops, or any other mental activity.

When the information processing includes sensory input, storage of information from previous sensory input, and feedback loops, it is experienced as vision and hearing, memory, and self-awareness.

You know, or not. Sadly, I can't think of a way to test for anything.
J November 08, 2025 at 01:58 #1023764
Quoting Patterner
I swear I'm not trying to be difficult, but I don't get it.


No worries; I found it difficult to explain, and maybe haven't done it well.

Quoting Patterner
How can you learn how to use the word correctly other than by being taught those qualities?


That's the key -- whether a definition is needed in order to use a word correctly. Often, that's not how we learn words and concepts. Instead, it's ostensive: Someone points and says, "That's X," and later, "That's one too," and "That's one too," etc. We can recognize what words refer to quite accurately without necessarily being able to put our recognition in terms of a definition. Something like this is going on with consciousness (and life), I think. We've learned how to use the words in the absence of a (clear and precise) definition.

Quoting Patterner
And how can you categorize the shapes without recognizing them?


This looks at it from the other angle: We can be uncertain about a shape -- maybe it kind of looks like what we've been taught to call a trapezoid, but not exactly. We don't quite recognize it. So if we have a precise definition (which we do, in this case), we can apply it and see whether the shape fits the definition. If only we had something similarly precise for "consciousness" or "life"!

Does this help at all?
Patterner November 08, 2025 at 02:32 #1023769
Quoting boundless
While this can be viewed as a tautology (the laws of physics allow life because there is life...), I also think that this is a very interesting point. To me this suggests that we perhaps do not know enough of the 'inanimate' and this is the reason why the properties associated with life seem so different from the properties associated with 'what isn't life', i.e. life is, so to speak, latent in 'what isn't life'.
I don't know. It seems to me life is processes, not properties. Our planet has various amounts of various elements, so that's what the laws of physics had to work with. But can't there be life on other planets that have different mixtures of different elements? I imagine there can be. I think different elements, different processes, different systems, can accomplish the work of life.
Patterner November 08, 2025 at 03:40 #1023775
Quoting javra
A bold statement. Can you please reference any known lifeform that can live in the complete absence of both aerobic and anaerobic respiration?
Mmmm... No. I asked google, and it said yes. I should have looked into it.

Quoting javra
You more specifically mean certain reactions of organic chemicals, namely those which result in metabolism - or at least I so assume.
What I mean is this... For a very long time, a writer took a feather, dipped it in ink, and wrote. A writer writes, eh? Pencils and pens came along at different times, but people still wrote with them.

Then came typewriters, and now computers. Nobody is writing any longer. So there are no writers any longer.

But, of course, there are writers today, typing away on their computers. And we write posts and emails all the time. Nobody bats an eyelash at the obvious misuse of the word. It's ludicrous to suggest we aren't writing. Because writing is a pursuit. A goal. It's not the tools.

I'm wondering if the chemical reactions that we've called life are the quill and ink, pencils, and pens.
Wayfarer November 08, 2025 at 03:54 #1023776
Quoting noAxioms
OK, from this I gather that your statement asserting an ontological distinction, a distinction in the mode of being, is merely an expression of opinion, not evidence of any sort.


I see it more as a matter of facts which you don’t recognize.
boundless November 08, 2025 at 10:09 #1023789
Quoting noAxioms
Sure, the machine probably follows machine instructions (assuming physics isn't violated anywhere), which are arguably an algorithm, but then a human does likewise, (assuming physics isn't violated anywhere), which is also arguably an algorithm.


Physics is violated only if you assume it is algorithmic. I disagree with this assumption. I believe that our own existence is 'proof' that physical laws allow non-algorithmic processes (as to why I believe that our cognition isn't algorithmic I refer to some of my previous posts).

BTW, I want to thank you for the discussion. It helped to clarify a lot of my own position. I didn't realize that my denial that our cognition is totally algorithmic was so important to me. What you also say in response to @javra about the 'hedonic aspect' of consciousness would perhaps make sense if you assume that everything about us is algorithmic.

As I stated above, I do not think that sentient AI is logically impossible (or, at least, I have not enough information to make such a statement). But IMO we have not yet reached that level.

Quoting noAxioms
That opens a whole can of worms about identity. The same arguments apply to humans. Typically, the pragmatic answer is 'yes'. Identity seems to be a pragmatic idea, with no metaphysical basis behind it.


Again, I have to disagree here. We seem to be sufficiently 'differentiated' to be distinct entities. Again, clearly, if all our actions and cognitions were algorithmic what you are saying here would make perfect sense. After all, if all processes are algorithmic it seems to me that the only entity that there is in the universe is the whole universe itself. All other 'subsystems' have at best a pragmatic identity. Ultimately, however, they are only useful fictions.

Quoting noAxioms
You need to expand on this. I don't know what you mean by it.


I meant that 'interpretation-free QM' doesn't give a precise definition of what a measurement is. It is a purely pragmatic theory.


boundless November 08, 2025 at 10:12 #1023790
Quoting Patterner
I don't know. It seems to me life is processes, not properties. Our planet has various amounts of various elements, so that's what the laws of physics had to work with. But can't there be life on other planets that have different mixtures of different elements? I imagine there can be. I think different elements, different processes, different systems, can accomplish the work of life.


I don't think that a 'process view' denies what I said. Note, however, that processes in order to be intelligible must have some properties, some structure. Otherwise knowledge is simply impossible.

In a 'process ontology', what I said perhaps would be modified as 'there is a potency for life-processes in non-living processes' or something like that.
noAxioms November 08, 2025 at 23:00 #1023920
Quoting javra
supposedly anything can be possessed. From lifeforms to children's toys (e.g., Chucky), and I don't see why not toasters as well (this in purely speculative theory but not in practice, akin to BIVs, solipsism, and such)

Purely speculative maybe, but they're relevant in an important way sometimes. I do keep such ideas in mind. BiV is a form of solipsism.
Some external vitality (you've not been very detailed about it) seems to have no reason to interact only with living things like a bacterium, a human finger cell, or perhaps a virus. Apparently, it cannot interact with anything artificial. I can't think of any sort of reason why something separately fundamental would have that restriction.

intents, and the intentioning they entail, are teleological, and not cause and effect.
You don't know that, but you say it like you do. I'm a programmer, and I know the ease with which intent can be implemented with simple deterministic primitives. Sure, for a designed thing, the intent is mostly that of the designer, but that doesn't invalidate it as being intent with physical implementation.
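To illustrate what I mean (a toy sketch, not anyone's actual design; the setpoint and the names are invented for the example), a few deterministic primitives are enough to produce behavior that reads as persistent goal-seeking:

[code]
SETPOINT = 37.0   # the "goal" the system persistently acts toward

def act(temp):
    # Deterministic rule: pick whichever action shrinks the gap to the setpoint.
    error = SETPOINT - temp
    if error > 0.5:
        return "heat"
    if error < -0.5:
        return "cool"
    return "idle"

temp = 20.0
for step in range(40):
    action = act(temp)
    # Deterministic consequences of each action, plus a little ambient heat loss.
    temp += {"heat": 1.0, "cool": -1.0, "idle": 0.0}[action] - 0.1
    print(step, action, round(temp, 1))   # the trace shows corrective behavior toward the goal
[/code]

Whether that counts as intent is exactly what's at issue, but behaviorally it keeps correcting toward its goal, just as a thermostat or a lane-keeping routine does.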


There's a massive difference between [cause/effect and intent] (e.g., the intent is always contemporaneous to the effects produced in attempting to fulfill it - whereas a cause is always prior to its effect).
The effects produced in attempting to fulfill it are not the cause of the intent.

What do you mean "manufacture a human from non-living parts"?
Like 3D print one or something. Made, not grown, but indistinguishable from a grown one.

How then would it in any way be human?
That's for you, the created being, and for society to decide. A new convention is required because right now there's no pragmatic need for it.

Or are you thinking along the lines of fictions such as of the bionic man or robocop?
Naw, my mother is one of those. She can't swim anymore since she's so dense with metal that she sinks straight to the bottom. They don't tell you that in the pre-op consultation.




Quoting J
To the question of whether it experiences pain: I don't know. Intent?: As described by Thompson, probably so.
Thompson seemed to make conclusions based on behavior. The cell shies away or otherwise reacts to badness, and differently to fertile pastures so to speak. By that standard, the car is conscious because it also reacts positively and negatively to its environment.

I don't know that a car isn't conscious, but for me the possibility is extremely unlikely.
Probably because we're using different definitions. There are several terms bandied about that lack such concreteness, including 'living, intent, [it is like to be], and (not yet mentioned, but implied) free will'. People claiming each of these things rarely define them in concrete terms.

I find being alive utterly irrelevant to any non-begging definition of consciousness. But that's me.

about as fruitful as a debate among 18th century physicists about what time is.
Good analogy, since there's definitely not any agreement about that. The word is used in so many different ways, even in the physics community.


Quoting Patterner
If reproduction is part of the definition of life, then worker bees and mules are not alive. Neither is my mother, as she is 83.
A mother has reproduced. The definition does not require something to continue to do so. The mule cannot reproduce, but mule cells can, so the mule is not alive, but it is composed of living things. Hmm...
Not shooting you down. Just throwing in my thoughts. New definition: A thing is alive if a 6-year-old thinks it is. Bad choice, because they anthropomorphize a Teddy Ruxpin if it's animated enough.

She says many consider Darwinian evolution to be the defining feature of life.
Plenty of nonliving things evolve via natural selection. Religions come to mind. They reproduce, and are pruned via natural selection. Mutations are frequent, but most result in negative viability.

In which case no individual is living, since only populations can evolve.
Easy enough to rework the wording to fix that problem. A living thing simply needs to be a member of an evolving population. What about computer viruses? Problem there is most mutations are not natural.

"An automobile, for example, can be said to eat, metabolize, excrete, breathe, move, and be responsive to external stimuli. And a visitor from another planet, judging from the enormous numbers of automobiles on the Earth and the way in which cities and landscapes have been designed for the special benefit of motorcars, might wellbelieve that automobiles are not only alive but are the dominant life form on the planet". — Carl Sagan
Similarly with humans, which are arguably inert without that immaterial driver; the alien might decide they're the dominant life form instead of simply the vehicles for said dominant forms.


Quoting javra
fire is certainly alive
That's always a good test for any definition of life. How does fire rate? Are you sure it isn't alive? It certainly has agency and will, but it lacks deliberate intent just like termites.


Quoting javra
You more specifically mean certain reactions of organic chemicals, namely those which result in metabolism - or at least I so assume.

Quoting Patterner
Google says:
Metabolism refers to all the chemical reactions that occur within an organism to maintain life.

That might be circular.
...

And not all life uses cellular respiration.
I was also going to point out that circularity.
Not all life metabolizes. Viruses for example, but some deny that a virus is alive.

Mind you, I personally don't place any importance on life, in the context of this topic. So while I find the question intriguing, I question its relevance. The discussion does belong here because there are those that very much do think it relevant.

Quoting Patterner
My overriding question is: Can there be life without chemical reactions?
I don't see how, but there can't even be rocks without chemical reactions, so that's hardly a test for life.


Quoting J
Your question -- which reduces to "Why is biology necessary for consciousness?" -- is indeed the big one. If and when that is answered, we'll know a lot more about what consciousness is. (Or, if biology isn't necessary, also a lot more!)
:up:


Quoting Patterner
I have to assume we could make a program that duplicates itself, but does so imperfectly.
They have these. Some are viruses or simply mutations of user interfaces such as phishing scams. On the other hand, they've simulated little universes with non-biological 'creatures' that have genes which mutate. Put them into a hostile environment and see what evolves. Turns out that the creatures get pretty clever at getting around the hostilities, one of which was a sort of a spiny shell (Mario Kart reference) that always killed the most fit species of each generation.
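For illustration only, here is a toy sketch of that idea: imperfect self-copying plus selection against a hostile environment. It is not the simulation I was describing; the genome encoding, mutation rate, and 'environment' are arbitrary assumptions chosen just to make the loop concrete.

[code]
# Toy sketch only: imperfect self-replication plus selection. Genome encoding,
# mutation rate, and the "hostile environment" are arbitrary assumptions.

import random

GENOME_LEN = 16
MUTATION_RATE = 0.05
TARGET = [1] * GENOME_LEN          # the environment happens to favour 1s

def replicate(genome: list[int]) -> list[int]:
    """Copy the genome, flipping each bit with small probability (imperfect copy)."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def fitness(genome: list[int]) -> int:
    """Score how well a genome matches what the environment rewards."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(generations: int = 50, pop_size: int = 30) -> int:
    """Run selection: the fitter half survives and replicates (imperfectly)."""
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        offspring = [replicate(random.choice(survivors))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(fitness(g) for g in population)

if __name__ == "__main__":
    print(evolve())  # best fitness typically climbs toward GENOME_LEN
[/code]

Nothing here is alive in any interesting sense, yet reproduction, mutation, and pruning by the environment are all present.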


Quoting boundless
Physics is violated only if you assume it is algorithmic. I disagree with this assumption.
Barring a blatant example of a system that isn't, I stand by my assumption. Argument from incredulity (not understanding how something complex does what it does) is not an example.

I mean, some parts of physics are known to be phenomenally random (unpredictable). But that's still algorithmic if the probabilities are known, and I know of no natural system that leverages any kind of randomness.
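A minimal sketch of what I mean by 'random but still algorithmic': each individual outcome below is unpredictable, yet the statistics are fixed by a rule. The probability value is an assumption picked purely for illustration.

[code]
# Minimal sketch: unpredictable individual outcomes, lawful statistics.
# The probability 0.36 is an assumption chosen only for illustration.

import random

def measure(prob_up: float = 0.36) -> str:
    """One outcome, unpredictable in the individual case."""
    return "up" if random.random() < prob_up else "down"

def frequency_of_up(n: int = 100_000, prob_up: float = 0.36) -> float:
    """The long-run frequency converges on prob_up -- the algorithmic part."""
    return sum(measure(prob_up) == "up" for _ in range(n)) / n

if __name__ == "__main__":
    print(frequency_of_up())  # approximately 0.36
[/code]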

Quoting J
Good discussion anyway!

Quoting boundless
BTW, I want to thank you for the discussion.

Wow, two in one go. Thank you all. It may not seem like it, but these discussions do influence my thinking/position and cause me to question thin reasoning.


Quoting boundless
I didn't think that my denial of our cognition being totally algorithmic was so important for me.
That's something I look for in my thinking. X is important, so I will rationalize why X must be. I had to go through that one, finally realizing that the will being deterministically algorithmic (is that redundant?) is actually a very desirable thing, which is why all decision making artifacts use components with deterministic behavior that minimizes any randomness or chaos.

Other examples of X are two of the deepest questions I've come to terms with: Why is there something and not nothing? Why am I me?
Answers to both those questions are super important to me, and the answers rationalized until I realized that both make assumptions that are actually not important and warrant questioning. The first question was pretty easy to figure out, but the second one took years.

As I stated above, I do not think that sentient AI is logically impossible (or, at least, I have not enough information to make such a statement). But IMO we have not yet reached that level.
I can grant that. Sentience is not an on/off thing, but a scale. It certainly hasn't reached a very high level yet, but it seems very much to have surpassed that of bacteria.


Identity seems to be a pragmatic idea, with no metaphysical basis behind it. — noAxioms
Again, I have to disagree here.
You suggest that if I fix my door (reattach a spring that fell loose, or worse, replace the spring), then it's a different door. OK, but this goes on all the time with people. You get a mosquito bite, a hole which is shortly repaired and blood which is replenished in a minute. Are you not the person you were 10 minutes ago? I have some pretty good arguments to say you're not, but not because of the mosquito bite.

We seem to be sufficiently 'differentiated' to be distinct entities.
Being a distinct entity is different than the entity maintaining any kind of identity over time.
You seem to suggest that the identity somehow is a function of biological processes not being algorithmic. Not sure how that follows.


I meant that 'interpretation-free QM' doesn't give a precise definition of what a measurement is. It is a purely pragmatic theory.
But I gave a definition that QM theory uses. Yes, it's pragmatic, which doesn't say what the measurement metaphysically IS. Perhaps that's what you're saying. No theory does that. It's not what theories are for.


Quoting Wayfarer
I see it as a matter of fact which you don’t recognize.

Perhaps because I don't see anything as a matter of fact. Declaring one's view to be fact is what I'd call closed-mindedness. So I have instead mere opinions, and yes, ones that don't correspond with your 'facts'.

javra November 09, 2025 at 06:11 #1023996
Quoting noAxioms
"intents, and the intentioning they entail, are teleological, and not cause and effect" -- javra

You don't know that, but you say it like you do. I'm a programmer, and I know the ease with which intent can be implemented with simple deterministic primitives.


As it happens, I know it on par with knowing that 2 and 2 doesn’t equal 5 but does equal 4, and can likely justify the affirmation you’ve quoted from me much better than the latter. You could have engaged in rational debate in reply. Instead, you affirm knowledge of what I do and don’t know. And you uphold this not via any sort of rational argument but via an appeal to authority, namely your own as “programmer”.

In truth, I find it hard to take the sum of your replies to me seriously, this as far as rational philosophical discourses go. So, I’ll just bail out of this thread and leave you to your philosophical musings.
boundless November 09, 2025 at 10:01 #1024006
Quoting noAxioms
Wow, two in one go. Thank you all. It may not seem like it, but these discussions do influence my thinking/position and cause me to question thin reasoning.


:up: The same goes for me. These kinds of well-made discussions, even if they do not lead to a change of opinion, help to clarify one's own.

Quoting noAxioms
That's something I look for in my thinking. X is important, so I will rationalize why X must be


Yes, I hear you. I don't think that is a 'dogmatic' approach if it is done with an open mind. Yes, some rationalizations can be a sign of dogmatism but if the inquiry is done in a good way it is actually a sign of the opposite, in my opinion.

So, even if on this subject we probably end the discussion with opposite ideas, it perhaps helps us both to consider aspects of our own positions that we had neglected, and so on.
In my case, ironically, it helped me to understand how my own conception of 'consciousness' seems to exclude the possibility that conscious beings are algorithmic, and this perhaps means that physical laws do make room for that. To me the biggest piece of evidence (but not the only one) that our cognition isn't (totally) algorithmic is the 'degree of freedom' that seems to be present when I make a choice. As I said in another discussion we had, this is, in my opinion, also closely linked to ethics. That is, we have a somewhat free power of deliberation that makes us eligible to be morally responsible (in a way that doesn't reduce the concept to something purely utilitarian and/or totally equivalent to being healthy/ill - although I do believe that there is a strong analogy between good/evil will and being healthy/ill, both the analogy and the difference are crucial).

Regarding the two other questions you wrote, they are also important to me. In my opinion, they are ultimately mysterious but, at the same time, I believe that they are worth asking and that we can find some partial answers to them as well.

Quoting noAxioms
I can grant that. Sentience is not an on/off thing, but a scale. It certainly hasn't reached a very high level yet, but it seems very much to have surpassed that of bacteria.


:up: Note that, however, I'm also a weirdo that thinks that the 'scale' is indeed like a scale with discrete steps. Nevertheless, I believe that the latent potential for the 'higher' levels of sentience must be present in the lower. It seems to me that, to be honest, this inevitably leads to a less 'mechanistic' view of the 'insentient'/'unliving' aspects of Nature.

Quoting noAxioms
Are you not the person you were 10 minutes ago? I have some pretty good arguments to say you're not, but not because of the mosquito bite.


Buddhists would tell you that saying that "you are the same person" (as you did change) and "you are a different person" (as the two states are closely connected) are both wrong. Generally, change is seen as evidence by most Buddhists that the 'self is an illusion (or 'illusion-like')'.
In my opinion, I would say that I am the same person. To this point, think about how the scholar D.W. Graham interprets Heraclitus' fragment B12, "On those stepping into rivers staying the same other and other waters flow." (and Aristotle's view):

Quoting DW Graham, SEP article on Heraclitus, section 3.1

The statement is, on the surface, paradoxical, but there is no reason to take it as false or contradictory. It makes perfectly good sense: we call a body of water a river precisely because it consists of changing waters; if the waters should cease to flow it would not be a river, but a lake or a dry streambed.
...
If this interpretation is right, the message of the one river fragment, B12, is not that all things are changing so that we cannot encounter them twice, but something much more subtle and profound. It is that some things stay the same only by changing. One kind of long-lasting material reality exists by virtue of constant turnover in its constituent matter. Here constancy and change are not opposed but inextricably connected. A human body could be understood in precisely the same way, as living and continuing by virtue of constant metabolism–as Aristotle for instance later understood it. On this reading, Heraclitus believes in flux, but not as destructive of constancy; rather it is, paradoxically, a necessary condition of constancy, at least in some cases (and arguably in all). In general, at least in some exemplary cases, high-level structures supervene on low-level material flux. The Platonic reading still has advocates (e.g. Tarán 1999), but it is no longer the only reading of Heraclitus advocated by scholars.


Regardless of the validity of Graham's interpretation of Heraclitus, I believe that it might be used to defend the idea that we can remain identical while changing. Life, after all, seems to be intrinsically characterized by some kind of 'movement'.

Quoting noAxioms
You seem to suggest that the identity somehow is a function of biological processes not being algorithmic. Not sure how that follows.


If all processes are algorithmic, I would believe that they can be seen as aspects of the entire evolution of the whole universe. Some kind of 'freedom' (or at least a potency for that) seems necessary for us to be considered as individuals.

Quoting noAxioms
But I gave a definition that QM theory uses. Yes, it's pragmatic, which doesn't say what the measurement metaphysically IS. Perhaps that's what you're saying. No theory does that. It's not what theories are for.


To give some examples, in Rovelli's RQM all physical interactions are measurements. In most forms of MWI that I know, IIRC only the processes that lead to decoherence can be considered measurements. In epistemic interpretations measurements are updates of an agent's knowledge/beliefs (and of course, what this means depends on the interpreter's conception of what an 'agent' is). In de Broglie-Bohm measurements are particular kinds of interaction where the 'appearance of collapse' happens.
And so on. There are, in my opinion, an extremely large number of ideas of what a 'measurement' actually means in QM among the experts. So, it's not clear at all.

I think that adopting 'QM without interpretation' would force one to 'suspend judgment' on what a 'measurement' ultimately is.

Perhaps we are saying the same thing differently. I suspect we do.


Wayfarer November 09, 2025 at 21:07 #1024064
Quoting noAxioms
Perhaps because I don't see anything as a matter of fact. Declaring one's view to be fact is what I'd call closed-mindedness. So I have instead mere opinions, and yes, ones that don't correspond with your 'facts'.


Well, that solves it. All living beings are made from marshmallows, and the moon really is cheese. Time we moved on.
Wayfarer November 09, 2025 at 23:14 #1024081
[quote=Michel Bitbol, Beyond Panpsychism; https://share.google/mPV6PPEAOL3RiUHIb]As Erwin Schrödinger cogently pointed out, once lived experience has been left aside in order to elaborate an objective picture of the world, “If one tries to put it in or on, as a child puts colour on his uncoloured painting copies, it will not fit. For anything that is made to enter this world model willy-nilly takes the form of scientific assertion of facts; and as such it becomes wrong”. Panpsychism is the unambiguous target of this criticism. It represents a clumsy attempt at overcompensating the consequences of adopting the intentional/objectifying stance needed to do science, by adding to it (or by replacing it with) patches of experience very similar to the patches of colour added on the surface of an uncoloured drawing. As soon as this is done, the new picture of the world looks like a scientific picture, apart from the unfortunate circumstance that its additional elements cannot be put to test as it would be the case of a scientific theory. This does not make panpsychism plainly wrong, but rather torn apart between its phenomenological origin and its temptation to mimick a theory of the objective world. As a consequence, panpsychism proves unable to define adequate criteria of validity for its own claims.[/quote]
Reference is to Schrödinger E. (1986), What is Life & Mind and Matter, Cambridge University Press

I think this criticism applies to all the current proponents of panpsychism - Philip Goff, Annaka Harris, Galen Strawson, etc. They're all trying to preserve the veracity of the scientific model while injecting an element of subjectivity into it 'from the outside', so to speak.

@Patterner
Patterner November 10, 2025 at 11:10 #1024148
Quoting Michel Bitbol, Beyond Panpsychism
It represents a clumsy attempt at overcompensating the consequences of adopting the intentional/objectifying stance needed to do science, by adding to it (or by replacing it with) patches of experience very similar to the patches of colour added on the surface of an uncoloured drawing.
I don't agree that this is what panpsychism is attempting to do. And I maintain that physical objects and processes cannot add up to subjective experience.
J November 10, 2025 at 13:43 #1024156
Reply to Patterner
Quoting Wayfarer
They're all trying to preserve the veracity of the scientific model while injecting an element of subjectivity into it 'from the outside', so to speak.


I think panpsychism is less likely to prove true than some version of consciousness as a property only of living things, but still, I don't agree with this characterization. The problem goes back to this part of what Bitbol says:

Quoting Michel Bitbol, Beyond Panpsychism
the new picture of the world looks like a scientific picture, apart from the unfortunate circumstance that its additional elements cannot be put to test as it would be the case of a scientific theory.


"Cannot" is the misleading term. Since we don't at this time have a scientific account of what consciousness is, or how it might arise (or be present everywhere, if you're a panpsychist), it's claiming far too much to say it "cannot" be tested. It cannot be tested now. But if it can be eventually couched in scientific terms, then it will be testable.

If panpsychism is at best an untested hypothesis, that should keep us modest about any claims that it's correct or true. But we also don't have license to say that it's flawed in some theoretical or philosophical way that can demonstrate, now, that it will never be science. Panpsychism as I understand it is not "mimicking a theory of the objective world." Rather, it's saying that if and when we understand what consciousness is, we will discover that our current division of "objective" and "subjective" into areas that can and cannot be studied scientifically, is just plain wrong. Philosophers do seem divided on whether this is even conceivable. I've never had any trouble with panpsychism in this way. If subjectivity is "really in there" in everything that exists, well, then that will be a feature of the objective world. What we lack is a vocabulary of concepts -- or, as it may be, mathematics -- to capture it.
Patterner November 10, 2025 at 19:28 #1024196
Quoting J
They're all trying to preserve the veracity of the scientific model while injecting an element of subjectivity into it 'from the outside', so to speak.
— Wayfarer

I think panpsychism is less likely to prove true than some version of consciousness as a property only of living things, but still, I don't agree with this characterization.
Indeed. It's hard to see how a property of particles can be considered "from the outside." Mass and charge are not "from the outside."


Quoting J
If subjectivity is "really in there" in everything that exists, well, then that will be a feature of the objective world. What we lack is a vocabulary of concepts -- or, as it may be, mathematics -- to capture it.
I'm not aware of any math for any other guess about the nature/origin/explanation for consciousness. Which is not a surprise, since, to my knowledge, consciousness doesn't have any physical properties/characteristics to examine/measure mathematically.
Wayfarer November 10, 2025 at 21:24 #1024220
Quoting Patterner
I don't agree that this is what panpsychism is attempting to do.


The metaphor Schrodinger gave was, 'once lived experience has been left aside in order to elaborate an objective picture of the world, “If one tries to put it in or on, as a child puts colour on his uncoloured painting copies, it will not fit." 'Putting colour back in' is a metaphor, but, leaving aside whether the metaphor itself is apt, Schrodinger's starting-point is accurate. Scientific method disregards or brackets out the subjective elements of phenomenal experience so as to derive a mathematically-precise theory of the movements and relations of objects. Consciousness is 'left out' of this, insofar as it is not to be found amongst those objects of scientific analysis. So panpsychism proposes that it must in some sense be a property of those objects, even if current science hasn't detected it. I think that's what Schrodinger's criticism means, and I think it is an accurate description of what panpsychism proposes to do.

As for whether its advocates are really trying to do that:

“Experience is the stuff of the world. Experience is what physical stuff is ultimately made of.”
— “Realistic Monism: Why Physicalism Entails Panpsychism,” Galen Strawson, Journal of Consciousness Studies 13(10–11), 2006.

“If physicalism is true, the experiential must be physical, because the experiential exists, and physicalism is the view that everything that exists is physical. The only way to avoid radical emergence is to suppose that experiential being is present throughout the physical world.”
— ibid.

It is exactly this kind of gambit that Schrodinger's critique anticipated.

Quoting J
Rather, it (panpsychism) is saying that if and when we understand what consciousness is, we will discover that our current division of "objective" and "subjective" into areas that can and cannot be studied scientifically, is just plain wrong.


But this division is intrinsic. Science depends on the bracketing out of the subjective. Its power lies in its ability to treat phenomena as objects of measurement and prediction, abstracting from the first-person standpoint. But that same abstraction ensures that consciousness — the condition of possibility for any object to appear — cannot itself appear as an object in that framework. In Husserl’s terms, consciousness is not one more thing among things; it is the ground within which “things” arise.

Bitbol’s point in Beyond Panpsychism is that phenomenology doesn’t try to patch consciousness back into the scientific picture (as panpsychism does) but to reverse the direction of explanation: instead of asking how consciousness arises within the world, it asks how the world appears within consciousness. That’s what makes phenomenology radical — it goes to the root (radix) of the knowing relation itself. The goal is not to extend the scientific image to include the subject, but to reveal that the scientific image itself is a derivative construction grounded upon experience. And you can see how this dovetails with Chalmers' critique.

[hide][quote=Routledge Introduction to Phenomenology, p139]In contrast to the outlook of naturalism, Husserl believed all knowledge, all science, all rationality depended on conscious acts, acts which cannot be properly understood from within the natural outlook at all. Consciousness should not be viewed naturalistically as part of the world at all, since consciousness is precisely the reason why there was a world there for us in the first place. For Husserl it is not that consciousness creates the world in any ontological sense—this would be a subjective idealism, itself a consequence of a certain naturalising tendency whereby consciousness is cause and the world its effect—but rather that the world is opened up, made meaningful, or disclosed through consciousness. The world is inconceivable apart from consciousness. Treating consciousness as part of the world, reifying consciousness, is precisely to ignore consciousness’s foundational, disclosive role.[/quote][/hide]

Patterner November 10, 2025 at 22:33 #1024236
Quoting Wayfarer
Scientific method disregards or brackets out the subjective elements of phenomenal experience so as to derive a mathematically-precise theory of the movements and relations of objects. Consciousness is 'left out' of this, insofar as it is not to be found amongst those objects of scientific analysis.
That is true, regardless of what guess anyone has about the nature of consciousness, and regardless of the actual answer. Consciousness has always been there. It's just been ignored for certain purposes.
Wayfarer November 10, 2025 at 22:43 #1024241
Reply to Patterner Indeed. And isn't that the central factor in this debate?
J November 10, 2025 at 23:33 #1024250
Quoting Patterner
I'm not aware of any math for any other guess about the nature/origin/explanation for consciousness.


There's Penrose's conjecture that consciousness depends on quantum phenomena, which are understood (if at all) primarily in mathematical terms. I lean toward the idea that math is the language of deep structure, so if consciousness can be captured scientifically, it may require a mathematical apparatus at least as elaborate as what's been generated by physics in the past decades. Speculation, of course.

Quoting Wayfarer
But this division is intrinsic. Science depends on the bracketing out of the subjective


Yes and no. Yes, methodologically. But no, not ontologically. There is nothing in the scientific viewpoint that has to deny subjectivity, or claim that it must be reducible to the currently understood categories of physical objectivity.

After all, isn't it objectively true that you are conscious? Objectively true that you possess, or are, subjectivity? Why would we bar science from acknowledging this? I doubt if most scientists would dispute it. Science is not a subjective procedure, but it can and does study subjective phenomena. It does so objectively, or at least as objectively as possible, given our primitive concepts.

This seems so apparent to me that it makes me think you must mean something else entirely when you say that "scientific method disregards or brackets out the subjective elements of phenomenal experience." Do you mean it disregards them as facts? I don't see why that must be so, except for a very hardcore physicalist.

A harder question, as Nagel points out, is whether "I am Wayfarer," spoken by you, is a fact about the world. If there are truly non-objective facts that cannot be made objective, this might be one of them.
Patterner November 11, 2025 at 01:36 #1024267
Quoting Wayfarer
Indeed. And isn't that the central factor in this debate?
If all agree that consciousness has always been there, and had just been ignored for certain purposes, then I don't know what the debate is about.
Wayfarer November 11, 2025 at 02:52 #1024285
Quoting J
Yes and no. Yes, methodologically. But no, not ontologically. There is nothing in the scientific viewpoint that has to deny subjectivity, or claim that it must be reducible to the currently understood categories of physical objectivity.



I'm afraid that's not the point. Modern scientific method was founded on a deliberate division between what came to be called the primary and secondary qualities of bodies — a move that located objective reality in quantifiable properties (extension, motion, mass) and relegated qualitative appearances to the mind of the observer. Locke and the British empiricists codified this, and Descartes’ separation of res cogitans and res extensa reinforced it. And none of that is a matter of opinion.

From that point on, the objective sciences proceeded by isolating the measurable, repeatable, intersubjectively verifiable aspects of phenomena — the features that should appear identically to any observer. That methodological bracketing was enormously fruitful, but it gradually hardened into an ontological assumption: the belief that the model thus produced is the whole of what is real.

This is the confusion Nagel examines in The View from Nowhere: the tendency to mistake the “view from nowhere” for a perspective that could exist independently of the conscious beings who adopt it.

When you say “it’s objectively true that you are conscious,” you’re appealing to an abstract inference that science can register only at one remove. The felt reality of consciousness — what it’s like to be an observer — is not something that can be observed. It’s not one more item within the world; it is the condition for there being a world of items at all.

Quoting Patterner
If all agree that consciousness has always been there, and had just been ignored for certain purposes, then I don't know what the debate is about.


The debate is about what you mean when you say 'there'.
Patterner November 11, 2025 at 03:55 #1024292
Quoting Wayfarer
If all agree that consciousness has always been there, and had just been ignored for certain purposes, then I don't know what the debate is about.
— Patterner

The debate is about what you mean when you say 'there'.
I'm agreeing with what you said:
Quoting Wayfarer
Scientific method disregards or brackets out the subjective elements of phenomenal experience so as to derive a mathematically-precise theory of the movements and relations of objects. Consciousness is 'left out' of this, insofar as it is not to be found amongst those objects of scientific analysis.
Our subjective experience is in everything we do, every moment. It is ignored/disregarded/bracketed out, beginning, it is often said, with Galileo, who was trying to understand and describe the universe with mathematics. And you can't understand or describe our subjective experiences with mathematics.
Wayfarer November 11, 2025 at 04:47 #1024299
Patterner November 11, 2025 at 06:40 #1024321
Quoting J
I'm not aware of any math for any other guess about the nature/origin/explanation for consciousness.
— Patterner

There's Penrose's conjecture that consciousness depends on quantum phenomena, which are understood (if at all) primarily in mathematical terms. I lean toward the idea that math is the language of deep structure, so if consciousness can be captured scientifically, it may require a mathematical apparatus at least as elaborate as what's been generated by physics in the past decades. Speculation, of course.
I don't suspect consciousness can be captured scientifically. Despite many very smart people trying their best; despite them not falling for another élan vital scenario; despite putting their efforts into scientific methods; despite everything - nobody has found a hint of physicality in consciousness. It's one thing to see physical properties of consciousness but be unable to figure it all out. It's another thing to have nothing physical at all to examine in any way. I think we should be going about it in a new way.
noAxioms November 11, 2025 at 07:02 #1024325
We have lost Harry Hindu. I'm quite distressed by this.


Quoting Wayfarer
“If one tries to put it in or on, as a child puts colour on his uncoloured painting copies, it will not fit. For anything that is made to enter this world model willy-nilly takes the form of scientific assertion of facts; and as such it becomes wrong”. — Reference is to Schrödinger E.

This is oft quoted, and nobody seems to know where it comes from or the context of it. But Schrödinger is definitely in your camp. Some other quotes:

living matter, while not eluding the “laws of physics” as established up to date, is likely to involve “other laws of physics” hitherto unknown

I've said this much myself. The view requires 'other laws', and a demonstration of something specific occurring utilizing these other laws and not just the known ones.

Life seems to be orderly and lawful behaviour of matter, not based exclusively on its tendency to go over from order to disorder, but based partly on existing order that is kept up


it needs no poetical imagination but only clear and sober scientific reflection to recognize that we are here obviously faced with events whose regular and lawful unfolding is guided by a 'mechanism' entirely different from the 'probability mechanism' of physics.


... the space-time events in the body of a living being which correspond to the activity of its mind, to its self-conscious or any other actions, are […] if not strictly deterministic at any rate statistico-deterministic. To the physicist I wish to emphasize that in my opinion, and contrary to the opinion upheld in some quarters
Here he mentions explicitly that this is opinion.

For the sake of argument, let me regard this as a fact, as I believe every unbiased biologist would, if there were not the well-known, unpleasant feeling about ‘declaring oneself to be a pure mechanism’. For it is deemed to contradict Free Will as warranted by direct introspection.
The feeling is indeed unpleasant to some. Introspection is not evidence, since it would be the same whether we are deterministic, free-willed, or not.

let us see whether we cannot draw the correct, non-contradictory conclusion from the following two premises:

(i) My body functions as a pure mechanism according to the Laws of Nature.

(ii) Yet I know, by incontrovertible direct experience, that I am directing its motions, of which I foresee the effects, that may be fateful and all-important, in which case I feel and take full responsibility for them.

The only possible inference from these two facts is, I think, that I — I in the widest meaning of the word, that is to say, every conscious mind that has ever said or felt 'I' — am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature.
This quote seems to argue for physicalism. It puts up two premises (one from each side?) and finds them non-contradictory. This is interesting since it seems to conflict with the beliefs otherwise expressed here.

He goes on to rationalize a single universal consciousness. Not sure if this is panpsychism. I think this quote below tries to argue against each person being separately conscious.
It leads almost immediately to the invention of souls, as many as there are bodies, and to the question whether they are mortal as the body is or whether they are immortal and capable of existing by themselves. The former alternative is distasteful, while the latter frankly forgets, ignores or disowns the facts upon which the plurality hypothesis rests.
...
The only possible alternative is simply to keep to the immediate experience that consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing, produced by a deception (the Indian MAJA); the same illusion is produced in a gallery of mirrors, and in the same way Gaurisankar and Mt Everest turned out to be the same peak seen from different valleys.


There's lots more, but this introduces his general stance on things.



Quoting Wayfarer
Well, that solves it. All living beings are made from marshmallows, and the moon really is cheese. Time we moved on.

Those are difficult interpretations to mesh with empirical evidence, but it can be done. But as with any interpretation of anything, it is fallacious to label one's opinion 'fact'.

Quoting javra
As it happens, I know it on par with knowing that 2 and 2 doesn't equal 5 but does equal 4, and can likely justify the affirmation you've quoted from me much better than the latter.

A bold move to put a choice of interpretation on par with 2+2=4. OK, so you don't consider it an interpretation then, but justification seems lacking so far. OK, you quoted studies showing bacteria to demonstrate a low level of consciousness. I don't contest that. The interpretation in question is whether physical means are sufficient to let the bacteria behave as they do. I've seen no attempt at evidence of that one way or the other.
I'm in no position to prove my side. To do so, I'd need to understand bacteria right down to the molecular level, and even then one could assert that the difference is at a lower level than that.


Quoting boundless
I don't think that is a 'dogmatic' approach if it is done with an open mind.
I don't think it's done with an open mind if the conclusion precedes the investigation. I need to be careful here since I definitely have my biases, many of which have changed due to interactions with others. Theism was the first to go, and that revelation started the inquiries into the others.
The supervenience on the physical hasn't been moved. It's the simpler model, so it requires extraordinary evidence to concede a more complicated model, but as far as I can tell, the more complicated model is used to hide the complexity behind a curtain, waved away as a forbidden black box.

On 'my other two questions':
I believe that they are worth asking

I believed they were the two most important questions, but the answer to both turned out to be 'wrong question'. Both implied premises that, upon analysis, didn't hold water. Hence the demise of my realism.

Quoting boundless
Note that, however, I'm also a weirdo that thinks that the [consciousness] 'scale' is indeed like a scale with discrete steps.
Cool. Consciousness quanta.

Buddhists would tell you that saying that "you are the same person" (as you did change) and "you are a different person" (as the two states are closely connected) are both wrong. Generally, change is seen as evidence by most Buddhists that the 'self is an illusion (or 'illusion-like')'
In my opinion, I would say that I am the same person.
The pragmatic side of me agrees with you. The rational side does not, but he's not in charge, so it works. It's a very good thing that he's not in charge, or at least the pragmatic side thinks it's a good thing.

The statement is, on the surface, paradoxical, but there is no reason to take it as false or contradictory. It makes perfectly good sense: we call a body of water a river precisely because it consists of changing waters; if the waters should cease to flow it would not be a river, but a lake or a dry streambed.
A river is a process, yes. If it was not, it wouldn't be a river. Pragmatically, it is the same river each time, which is why one can name it, and everybody knows what you're talking about. It doesn't matter if it's right or not. Point is, it works. What if the river splits, going around an island? Which side is the river and which the side channel (the anabranch)? I revise my statement then. It works, except when it doesn't. What happens when the anabranch becomes the river?

Most of this is off point. I don't even think a rock (not particularly a process) has an identity over time. For that matter, I don't think it has an identity (is distinct) at a given moment, but some life forms do. Not so much humans (identity meaning which parts are you and which are not).
I mean, how much do you weigh? Sure, the scale says 90 kilos, but you are carrying a cat, so unless the cat is part of you, the scale lies.


Quoting boundless
If all processes are algorithmic, I would believe that they can be seen as aspects of the entire evolution of the whole universe. Some kind of 'freedom' (or at least a potency for that) seems necessary for us to be considered as individual.

Still not sure how that follows. Take something blatantly algorithmic, like a 4-banger calculator. Its operation can be seen as an aspect of the entire evolution of the whole universe, and it seemingly lacks this freedom you speak of. The calculator is (pragmatically) an individual: it is my calculator, quite distinct from the desk it's sitting on and from the calculator over there owned by Bob. So it's probably not following because you're using 'individual' in a different way than I am.

In epistemic interpretations measurements are updates of an agent's knowledge/beliefs (and of course, what this means depends on the interpreter's conception of what an 'agent' is).
OK, agree that you've identified a different meaning of 'measurement' there, but that doesn't change the QM definition of the word, and your assertion was that QM doesn't give a definition of it, which is false, regardless of how different interpretations might redefine the word.

I think that adopting 'QM without interpretation' would force one to 'suspend judgment' on what a 'measurement' ultimately is.
Yes, exactly. Theories are about science. Metaphysics (QM interpretations in this case) is about what stuff ultimately is.

Perhaps we are saying the same thing differently. I suspect we do.
We don't disagree so much as it appears on the surface.


Quoting J
Since we don't at this time have a scientific account of what consciousness is, or how it might arise (or be present everywhere, if you're a panpsychist), it's claiming far too much to say it "cannot" be tested. It cannot be tested now. But if it can be eventually couched in scientific terms, then it will be testable.

If I were to place my bets, even if the scientists claim to have done this, the claim will be rejected by those that don't like the findings. I'm not sure what form the finding could possibly be. Can you tell what I'm thinking? Sure, but they have that now. Will we ever know what it's like to be a bat? No. Not maybe no. Just no.
So what's not being tested that in principle might be testable then?

J November 11, 2025 at 13:58 #1024360
Quoting noAxioms
So what's not being tested that in principle might be testable then?


Whether a given entity is conscious.

Quoting Wayfarer
When you say “it’s objectively true that you are conscious,” you’re appealing to an abstract inference that science can register only at one remove. The felt reality of consciousness — what it’s like to be an observer — is not something that can be observed. It’s not one more item within the world; it is the condition for there being a world of items at all.


An abstract inference . . . is that really what it is? Do you regard it as a fact that you are conscious? Do you think that science will forever be at one remove from that fact? I just can't see why. The felt reality has nothing to do with its factuality. Science doesn't have to observe, i.e., experience this felt reality in order to accept it as a fact, an item in the world. Science can't experience any of the macro- or micro-phenomena of the physical world either. That has never prevented scientists from bringing them under the umbrella of objective reality.

Again, I'm positive that we're somehow at cross-purposes, since what I'm saying seems to me uncontroversial. (And we both admire Nagel!) Don't we both agree that consciousness is a natural phenomenon, a part of the "given world" rather than some sort of intrusion into it? Do you think science is hobbled by its methods so that it can only inquire into certain parts of that world?

Quoting Wayfarer
the objective sciences proceeded by isolating the measurable, repeatable, intersubjectively verifiable aspects of phenomena


Yes. And that will happen for consciousness as well, is my guess. I would further claim that consciousness is a necessary postulate for many scientific inquiries; if it were not, you'd have to maintain that psychology, sociology, economics, and game theory are not sciences.
Patterner November 11, 2025 at 14:27 #1024364
Quoting J
Do you think science is hobbled by its methods so that it can only inquire into certain parts of that world?
I would think so. At least in regards to the physical sciences. We can't weigh, or measure in any way, consciousness with the tools of the physical sciences.
J November 11, 2025 at 15:27 #1024372
Quoting Patterner
We can't weigh, or measure in any way, consciousness with the tools of the physical sciences.


I can only reply: not yet. But virtually none of the physical forces we now recognize as objects of scientific knowledge were weighable or measurable a few hundred years ago. I know I can be monotonous about this, but we simply can't say what will be possible once we actually start to understand what consciousness is. Way too early to say what we can or can't know.
Patterner November 11, 2025 at 15:55 #1024374
Reply to J
I understand, and you're no more monotonous than I am. :grin: I just think that, since there's no hint of any physical properties of consciousness, despite many very smart people trying with our best technology, and leading experts in the physical sciences saying the physical properties of matter don't seem to be connected to it, we might want to explore other ideas.
J November 11, 2025 at 15:58 #1024376
Quoting Patterner
we might want to explore other ideas.


For sure. I'd love to pursue the other ideas. I can imagine people saying, in 2125, "They used to think consciousness might be a physical property! How weird."
Patterner November 11, 2025 at 16:42 #1024379
The problem is verification. I can't imagine how many internally consistent ideas can be developed. Including any physicalist ones. But how to tell which, if any, is right?

Quoting J
They used to think consciousness might be a physical property! How weird."
I'm a hundred years ahead of my time. :rofl:
Wayfarer November 11, 2025 at 20:02 #1024421
Quoting J
Don't we both agree that consciousness is a natural phenomenon, a part of the "given world" rather than some sort of intrusion into it? Do you think science is hobbled by its methods so that it can only inquire into certain parts of that world?


I'll refer to the potted quote I provided from Husserl again:

[quote=Routledge Intro to Phenomenology;https://drive.google.com/file/d/1RfRNvIT6Q8zwUXc5jtO7Gvc3JP-ewUhT/view]In contrast to the outlook of naturalism, Husserl believed all knowledge, all science, all rationality depended on conscious acts, acts which cannot be properly understood from within the natural outlook at all. Consciousness should not be viewed naturalistically as part of the world at all, since consciousness is precisely the reason why there was a world there for us in the first place. For Husserl it is not that consciousness creates the world in any ontological sense—this would be a subjective idealism, itself a consequence of a certain naturalising tendency whereby consciousness is cause and the world its effect—but rather that the world is opened up, made meaningful, or disclosed through consciousness. The world is inconceivable apart from consciousness. Treating consciousness as part of the world, reifying consciousness, is precisely to ignore consciousness’s foundational, disclosive role. For this reason, all natural science is naive about its point of departure, for Husserl (PRS 85; Hua XXV 13). Since consciousness is presupposed in all science and knowledge, then the proper approach to the study of consciousness itself must be a transcendental one[/quote]

Also, as you mentioned Nagel, another passage I quote regularly:

[quote=Thomas Nagel, the Core of Mind and Cosmos]The scientific revolution of the 17th century, which has given rise to such extraordinary progress in the understanding of nature, depended on a crucial limiting step at the start: It depended on subtracting from the physical world as an object of study everything mental – consciousness, meaning, intention or purpose. The physical sciences as they have developed since then describe, with the aid of mathematics, the elements of which the material universe is composed, and the laws governing their behavior in space and time.

We ourselves, as physical organisms, are part of that universe, composed of the same basic elements as everything else, and recent advances in molecular biology have greatly increased our understanding of the physical and chemical basis of life. Since our mental lives evidently depend on our existence as physical organisms, especially on the functioning of our central nervous systems, it seems natural to think that the physical sciences can in principle provide the basis for an explanation of the mental aspects of reality as well — that physics can aspire finally to be a theory of everything.

However, I believe this possibility is ruled out by the conditions that have defined the physical sciences from the beginning. The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms or how the world appears to their different particular points of view. There can be a purely physical description of the neurophysiological processes that give rise to an experience, and also of the physical behavior that is typically associated with it, but such a description, however complete, will leave out the subjective essence of the experience – how it is from the point of view of its subject — without which it would not be a conscious experience at all.

So the physical sciences, in spite of their extraordinary success in their own domain, necessarily leave an important aspect of nature unexplained. [/quote]

Quoting J
I would further claim that consciousness is a necessary postulate for many scientific inquiries


Not as an object of science, but as its pre-condition. Note the juxtaposition of 'natural' with 'transcendental' that Husserl refers to, which he derives from Kant, although he differs with Kant in significant ways. Transcendental is 'what is necessary for experience but not given in experience.' So consciousness is not an 'intrusion' into the world, but neither is it an object within it.




Patterner November 11, 2025 at 20:39 #1024429
I rather like this, from [I]Mind and Cosmos[/I]
Thomas Nagel: The intelligibility of the world is no accident. Mind, in this view, is doubly related to the natural order. Nature is such as to give rise to conscious beings with minds; and it is such as to be comprehensible to such beings. Ultimately, therefore, such beings should be comprehensible to themselves. And these are fundamental features of the universe, not byproducts of contingent developments whose true explanation is given in terms that do not make reference to mind.
Wayfarer November 11, 2025 at 21:28 #1024438
Reply to Patterner :100: But, you know, that book was subject of a massive pile-on when it was published. Nagel was accused of 'selling out to creationism'.

Another passage from the same book:

[quote=Thomas Nagel, Mind and Cosmos, Pp 35-36]The modern mind-body problem arose out of the scientific revolution of the seventeenth century, as a direct result of the concept of objective physical reality that drove that revolution. Galileo and Descartes made the crucial conceptual division by proposing that physical science should provide a mathematically precise quantitative description of an external reality extended in space and time, a description limited to spatiotemporal primary qualities such as shape, size, and motion, and to laws governing the relations among them. Subjective appearances, on the other hand -- how this physical world appears to human perception -- were assigned to the mind, and the secondary qualities like color, sound, and smell were to be analyzed relationally, in terms of the power of physical things, acting on the senses, to produce those appearances in the minds of observers. It was essential to leave out or subtract subjective appearances and the human mind -- as well as human intentions and purposes -- from the physical world in order to permit this powerful but austere spatiotemporal conception of objective physical reality to develop. [/quote]

Reply to J

J November 11, 2025 at 21:33 #1024441
Reply to Wayfarer Thanks, good quotes. Nagel, as I read him, seems to veer between "science" and "physical sciences."

Thomas Nagel, the Core of Mind and Cosmos: The physical sciences can describe organisms like ourselves as parts of the objective spatio-temporal order – our structure and behavior in space and time – but they cannot describe the subjective experiences of such organisms.


So, two questions: 1) Why is an objective description of subjective experience necessary to explain subjective experience? This goes back once again to the difference between accepting and inquiring into consciousness, versus also having to claim an impossible 3rd-person experience while doing so. 2) Do we want to conclude that Nagel thinks psychology isn't a science? I doubt it. I think he would say that it isn't a physical science.

To summarize: Are there not objective inquiries into subjective experiences? I guess you could say that any such inquiry is, by definition, not a scientific one, but that seems awfully inflexible.

Quoting Routledge Intro to Phenomenology
the world is opened up, made meaningful, or disclosed through consciousness. The world is inconceivable apart from consciousness. Treating consciousness as part of the world, reifying consciousness, is precisely to ignore consciousness’s foundational, disclosive role.


I want us to agree wholeheartedly with the first two sentences, but take issue with the third. Suppose we altered that final sentence to read: "Treating consciousness as part of the world is precisely the enormous challenge that philosophy is presented with -- how do we give full weight to consciousness' foundational, disclosive role while equally acknowledging that somehow there is a necessary act of self-reflection that also places it, and us, in the world? How can I, a subject, be both in, and constitutive of, the world?"

In other words, this application of phenomenology is trying to solve the difficult problem by flatly denying that consciousness is part of the world. That seems both too simple and too unlikely. The truth will turn out to be more bizarre, and more wonderful, than that.

Reply to Patterner

Yes, here Nagel hits it on the head. It's both/and, not either/or.
Wayfarer November 11, 2025 at 21:45 #1024445
Quoting J
I think he (Nagel) would say that it isn't a physical science.


But think that through. If it's not a physical science, then, according to physicalism, how could it be a science? It must by definition be metaphysics.


J November 11, 2025 at 21:47 #1024447
Reply to Wayfarer Yes. That's why physicalism is untenable. Science is broader than that. Do you read Nagel as arguing against physicalism alone, for the most part? I do.
Wayfarer November 11, 2025 at 22:07 #1024451
Quoting J
That's why physicalism is untenable. Science is broader than that.


But think it through in relation to Chalmers' 'facing up to the problem of consciousness'. What you're saying is, you already agree that physicalism is untenable. But Chalmers, Nagel and Husserl are giving arguments as to why it is. And while their arguments are different, the distinction between the first- and third-person perspective is intrinsic to all of them. @noAxioms has already explained that he can't see any distinction. To be sure, many others say the same. But I think there's a real distinction that is not being acknowledged.
J November 11, 2025 at 22:57 #1024469
Quoting Wayfarer
I think there's a real distinction that is not being acknowledged.


I do too, and it's captured in Nagel's question about whether "I am J," said by me, is a fact about the world (just to pick one example). We need to preserve the distinction between 1st and 3rd person perspectives, but . . . does that necessarily put science on one side of an impermeable line? I think that's what we're discussing here. If you interpret Chalmers et al. as explaining why physicalism doesn't work, we have no issue. But I took you to be offering a much broader characterization, going back centuries, about what the scientific project amounts to, and what is and isn't permissible within it. That's where I think we have to be careful. The fact that physicalism can't inquire into subjectivity doesn't mean that science can't -- because physicalism doesn't get to draw the line about what counts as science. (That's up to us philosophers! :wink: )
Wayfarer November 11, 2025 at 23:07 #1024480
Quoting J
does that necessarily put science on one side of an impermeable line?


It certainly puts modern Western science, as understood since Galileo, on one side of it. Unambiguously. You know that German culture has a word, Geisteswissenschaft, meaning 'sciences of the spirit', right? You could put Ricoeur, Hegel, Heidegger, and Husserl under that heading, but there's no way you could include them under the heading 'science' in a Western university.

I think there's a clear, bright line.
Patterner November 11, 2025 at 23:15 #1024484
Quoting Wayfarer
But, you know, that book was subject of a massive pile-on when it was published. Nagel was accused of 'selling out to creationism'.
I'm not concerned with what he was accused of. I wouldn't even be concerned if the accusations are true. He, anybody, can be right about some things, and wrong about others.



Quoting J
I want us to agree wholeheartedly with the first two sentences, but take issue with the third.
You saved me the trouble of saying that. "Treating consciousness as part of the world..."?? Consciousness [I]is[/I] part of the world. How is that in question?


Quoting Wayfarer
But think that through. If it's not a physical science, then, according to physicalism, [I]how could it be a science?[/I] It must by definition be metaphysics.
Quoting J
Yes. That's why physicalism is untenable. Science is broader than that.
Right. Physicalism only gets to say what is and is not [I]physical[/I] science.



Patterner November 11, 2025 at 23:18 #1024485
Quoting J
The fact that physicalism can't inquire into subjectivity doesn't mean that science can't -- because physicalism doesn't get to draw the line about what counts as science. (That's up to us philosophers! :wink: )
Again, yes.
Wayfarer November 11, 2025 at 23:25 #1024491
Quoting Patterner
Consciousness is part of the world. How is that in question?


Because it's not! You can observe other people, and animals, which you can safely assume to be conscious, and which you can safely assume feel just like you do. But you will not observe consciousness as such - only its manifestations. The only instance of consciousness which you really know is the instance which you are, because you are it. Not because it's something you see. You can't experience experience. The hand can only grasp something other to itself (from the Upaniṣad).
Patterner November 11, 2025 at 23:46 #1024499
Quoting Wayfarer
The only instance of consciousness which you really know, is the instance which you are, because you [I]are[/I] it.
As I said, consciousness is part of the world.

I invite you to believe that you are also conscious, and also part of the world.
Wayfarer November 11, 2025 at 23:49 #1024501
Reply to Patterner There's an important perspectival shift missing in that account, somewhat analogous to 'figure and ground'. As I said, you cannot find or point to consciousness in any sense meaningful to the natural sciences. You can only infer it. This is why Daniel Dennett continued to insist right until the end that it must in some sense be derivative, unreal or non-existent.
Patterner November 11, 2025 at 23:54 #1024504
Quoting Wayfarer
As I said, you cannot find or point to consciousness in any sense meaningful to the natural sciences.
I do not agree that the only things that exist are things that are meaningful to the physical sciences. I know you said "natural", not "physical". But I think consciousness is natural.
Wayfarer November 12, 2025 at 00:07 #1024509
Reply to Patterner That’s a reasonable point and one that turns on what “natural” means.

If by natural we mean “what belongs to the order of things that occur independently of human artifice,” then consciousness is indeed natural — but not physical in the sense of being an object or process describable in terms of physics. To call it “non-physical” doesn’t mean “supernatural” or “mystical”; it means that it doesn’t present as a measurable phenomenon, as an object.

Mind is that to which the physical appears. It is the horizon within which things become present as physical, as measurable, as anything at all. So the distinction isn’t between “natural” and “supernatural,” but between 'that which appears' and the subject to whom it appears. That is what I'm saying (and not just me!) has been bracketed out by science. It is also what Husserl, and before him Kant, were getting at: consciousness isn’t a part of the world in the same way the brain, trees, or galaxies are. It’s the faculty for which a world appears.

I hope you can see that distinction, because I think it's important.
J November 12, 2025 at 00:23 #1024514
Reply to Wayfarer I get what you mean, and that particular line is pretty clear, I agree. But what about the human sciences -- psychology, economics, history, textual hermeneutics, etc.? I'm fine with the first two, at any rate, being sciences, aren't you?

Quoting Wayfarer
you cannot find or point to consciousness in any sense meaningful to the natural sciences. You can only infer it.


Let's say it's true that, at the moment, the natural sciences can only infer consciousness. (I think we can do a little better, but no matter.) This was also true of electromagnetic forces, before the 19th century. Everyone knew something was there, but not what or why. Wouldn't it be reasonable to assume that, in time, we'll have positive tests for the presence of consciousness, and be able to describe its degrees and characteristics? This isn't to say that consciousness is a force like electromagnetism -- I doubt it -- but only that science often starts with phenomena that are widely acknowledged but badly understood.
Wayfarer November 12, 2025 at 00:46 #1024519
Quoting J
what about the human sciences -- psychology, economics, history, textual hermeneutics, etc.? I'm fine with the first two, at any rate, being sciences, aren't you?


In a broad sense, but they are not counted amongst the ‘exact sciences’, are they? Their proponents might aspire to it, but there are many difficulties. Furthermore, as far as psychology is concerned, what are the broader questions that underlie it? What vision, or version, of humanity? That we’re a species like other species, vying for survival and adaptation? And that itself is not a question for psychology.

The genius of modern science was deciding what to exclude from its reckonings. For example, intentionality or telos. Such factors are invisible to precise definition and measurement. So, leave them out! Consider only what can be measured and predicted according to theory.

Quoting J
Wouldn't it be reasonable to assume that, in time, we'll have positive tests for the presence of consciousness, and be able to describe its degrees and characteristics?


I’m sure that medicine does have such tests; they would be extremely important in the treatment of comatose patients. But the ‘how much’ and ‘what kind’ of consciousness questions are still within the ambit of what Chalmers designated solvable problems. (And for that matter, maybe the whole use of ‘problem’ in this regard is mistaken. Others have pointed out that it’s more of a mystery - the distinction being that problems are there to be solved, while mysteries are something we’re a part of, meaning we can’t step outside of them and ‘explain’ them.)

Patterner November 12, 2025 at 03:02 #1024528
Quoting Wayfarer
If by natural we mean “what belongs to the order of things that occur independently of human artifice,” then consciousness is indeed natural
Yes, consciousness is natural in that sense.


Quoting Wayfarer
but not physical in the sense of being an object or process describable in terms of physics. To call it “non-physical” doesn’t mean “supernatural” or “mystical”
Of course not. I have no idea why you're making this point. In this sense, consciousness is natural because it exists in this universe. Even things that [I]are[/I] of human artifice are not “supernatural” or “mystical”.


Quoting Wayfarer
consciousness isn’t a part of the world in the same way the brain, trees, or galaxies are.
Of course consciousness isn't the same kind of thing as those physical things.



Quoting Wayfarer
It’s the faculty for which a world appears.
As Albert Camus said, [I]Everything begins with consciousness, and nothing is worth anything except through it.[/I]




Wayfarer November 12, 2025 at 03:43 #1024533
Quoting Patterner
As Albert Camus said, Everything begins with consciousness, and nothing is worth anything except through it.



Quoting Routledge Intro to Phenomenology
the world is opened up, made meaningful, or disclosed through consciousness. The world is inconceivable apart from consciousness.


Hmmm… do I detect a similarity here? :chin:
Patterner November 12, 2025 at 04:14 #1024536
Quoting J
Wouldn't it be reasonable to assume that, in time, we'll have positive tests for the presence of consciousness, and be able to describe its degrees and characteristics?
I am skeptical, because we have absolutely nothing at this point. We know to test assumptions, and not just believe what we think must be true, as they did back when they thought heavier objects fell faster than lighter objects.

We know to pay attention to details, and not just think an unexamined big picture must be accurate, as they did when they thought the earth was the center of the universe.

Perhaps most important, we learned a good lesson from those in the past who thought living things were animated by a special vital force. Now we know that the "animation" is various physical processes that can be observed, measured, and explained. (Exactly which processes depends on each person's definition of "life". But I haven't heard of a definition that doesn't have various processes, such as metabolism, sensory input, reproduction...)

Yet, despite being on guard for all these things, people thinking outside the box all the time, and having technology that can do things like slam electrons together and measure what happens, we don't have any clue how physical properties and processes can produce something so different from them, and no evidence that that's what's going on.



Quoting Wayfarer
Hmmm… do I detect a similarity here? :chin:
I suppose Husserl should get the credit.
Wayfarer November 12, 2025 at 04:17 #1024537
Reply to Patterner Existentialism grew out of phenomenology. But neither of them is the subject of Chalmers’ argument in ‘facing up to the problem of consciousness’.
J November 12, 2025 at 13:42 #1024560
Quoting Patterner
we don't have any clue how physical properties and processes can produce something so different from them


Everything you say in your post is true, including the above. Once again, I'm speculating, but perhaps the conclusion we ought to draw is that physical processes don't produce consciousness; i.e., it is not a cause/effect relation, occurring in a temporal order. This is the essential premise of supervenience, as I understand it. Put crudely, consciousness is the same thing as its physical substrate, but experienced from the inside, the 1st person. This is not reductionism, because we could just as well say that the physical substrate is the same thing as consciousness, viewed from the outside. Neither reduces to the other. Of course, this stretches the use of "same thing," perhaps unacceptably. Phenomenologically, they are very far from the same thing. And yet, heat is the "same thing" as molecular motion, in one important sense of "same". They don't remotely resemble each other, experientially, but nevertheless . . .

But suppose this speculation is correct. We still don't know how to talk about this sameness or doubleness, because we don't (yet) have a scientific conception of what philosophers call the 1st and 3rd persons. So I agree with your cluelessness on the whole question; I'm just more optimistic that a path will open.
boundless November 12, 2025 at 16:42 #1024576
Quoting noAxioms
I believed they're the two most important questions, but the answer to both turned out to be 'wrong question'. Both implied premises that upon analysis, didn't hold water. Hence the demise of my realism.


If they turned out to be 'wrong questions', then they aren't important. What is important is removing the illusion that they are. I disagree, but I think I can understand why you think so.

Quoting noAxioms
Cool. Consciousness quanta.


Sort of. I see it more as consciousness coming in discrete degrees, with some kind of potency of the higher degrees in the lower degrees. So I'm a sort of emergentist myself, I suppose, but I would take 'emergence' as 'actualizing a potential' (think of Aristotle). So, not in the way in which current 'physicalist' models frame it.


Quoting noAxioms
A river is a process, yes. If it was not, it wouldn't be a river.


But note that you're using the notion of 'pragmatic' versus 'rational' in a way that makes the above statement, ultimately, false.

I can agree with the 'rational' model that we aren't the same person as we were in the past, insofar as it excludes a static model of the 'self'. Conceptual models are, of course, static and, being static, might not be able to fully capture a 'dynamic entity'. I can even agree with the Buddhist notion that the 'self' is ultimately illusory, if it is interpreted as implying that we can't be identified by anything static.

As an example, consider a song. The song 'exists' when it is played. Its script isn't its 'identity' but, rather, what we might call its form, its template. However, we can't even say that the song is something entirely different from its script as the script is something essential to the song. In a similar way, something like my DNA is essential to me but, at the same time, it can't 'capture' my whole being.

No description, no matter how articulate, can ever capture the being of a person.

Quoting noAxioms
The calculator is (pragmatically) an individual


Yes, I agree with that. But I disagree that it has a sufficient degree of autonomy to make its pragmatic distinction from its environment a real distinction. It is certainly useful to us to distinguish it from its environment, label it with a name, and think of it as an 'entity'. But is it really one?

Quoting noAxioms
and your assertion was that QM doesn't give a definition of it, which is false, regardless of how different interpretations might redefine the word.


Ok, fine. I concede that. But I believe that in 'interpretation-free QM' measurement is a fuzzy notion. Once you define it clearly, the definition gives an interpretation of QM.

Quoting noAxioms
Yes, exactly. Theories are about science. Metaphysics (QM interpretations in this case) are about what stuff ultimately is.


Well, up until the 20th century it was common to think that the purpose of science was at least to give a faithful description of 'how things are/behave'. I personally do not make a hard distinction between metaphysics and physics but I get what you mean here.


boundless November 12, 2025 at 16:43 #1024577
Quoting Patterner
I rather like this, from Mind and Cosmos
The intelligibility of the world is no accident. Mind, in this view, is doubly related to the natural order. Nature is such as to give rise to conscious beings with minds; and it is such as to be comprehensible to such beings. Ultimately, therefore, such beings should be comprehensible to themselves. And these are fundamental features of the universe, not byproducts of contingent developments whose true explanation is given in terms that do not make reference to mind.
— Thomas Nagel


Excellent quote! Thanks!
Patterner November 13, 2025 at 00:52 #1024648
Quoting J
Put crudely, consciousness is the same thing as its physical substrate, but experienced from the inside, the 1st person.
-----------------
Of course, this stretches the use of "same thing," perhaps unacceptably. Phenomenologically, they are very far from the same thing.
There is no "perhaps" about it, in my opinion. Nothing about the physical substrate suggests qualia, self-awareness, or anything to do with subjective experience.


Quoting J
And yet, heat is the "same thing" as molecular motion, in one important sense of "same". They don't remotely resemble each other, experientially, but nevertheless . . .
"Experientially"? Whose experience do you mean by that?

The temperature in a room [I]is[/I] the measure of the average kinetic energy of its air molecules. The mercury in a thermometer expands when the air molecules move faster, and kinetic energy transfers from the air into the mercury.

It's all fully described mathematically: what the average speed of the air molecules is at a given temperature on the thermometer, for instance. James Clerk Maxwell apparently came up with the equations for figuring out what percentage of molecules are moving at which speed relative to the average. And there are equations for how much energy is needed to raise the temperature of whatever volume of air.
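To be concrete, here is a minimal textbook sketch of those relations (standard kinetic theory for an ideal gas; nothing here is specific to this thread, and I'm just restating the usual formulas):

\[
\bar{E}_{\text{kin}} = \tfrac{3}{2} k_B T,
\qquad
f(v) = 4\pi \left(\frac{m}{2\pi k_B T}\right)^{3/2} v^{2}\, e^{-\frac{m v^{2}}{2 k_B T}},
\qquad
Q = m_{\text{air}}\, c\, \Delta T
\]

where T is the absolute temperature, k_B is Boltzmann's constant, m is the mass of a single molecule, f(v) is Maxwell's distribution of molecular speeds, and the last relation gives the heat Q needed to raise a mass m_air of air with specific heat capacity c by ΔT.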

Our nerves detect the kinetic energy of the air. We can detect electrical signals caused by the contact, follow them to the spinal cord, and to the brain, where x, y, and z happen. We can quantify all that, also. How fast do the signals move along the nerves? Which nerve pathways are used for which temperature ranges? What kind of ions are released at what points?

Nowhere in any of that is there a hint of our subjective experience of heat. Not any more than there is in the mercury expanding in the thermometer.


Quoting boundless
Excellent quote! Thanks!
:up: Yes, good stuff!
J November 13, 2025 at 01:10 #1024656
Quoting Patterner
Nowhere in any of that is there a hint of our subjective experience of heat.


Quite right. And yet, if a child asks for an explanation of what heat is, you're going to tell the story about the molecular motion. We can finesse this by simply pointing out that "what heat is" is equivocal: it can mean "what does it feel like" or "what causes it". But I think the issue goes deeper than language. It's that "doubleness" that I referred to before. Heat really is two different things at the same time, from different perspectives -- maybe that's a better way to put it than calling it "the same thing."

Quoting Patterner
"Experientially"? Whose experience do you mean by that?


I'm contrasting the subjective experience of heat with the objective explanation of it. Perhaps "experiential" isn't the right term for how a scientist observes molecular motion. What I meant was, the feeling of heat doesn't at all resemble the picture described by the scientist. But again, as above: any description of what heat is would be incomplete without the 3rd person perspective as well.

Patterner November 13, 2025 at 11:19 #1024721
Quoting J
It's that "doubleness" that I referred to before. Heat really is two different things at the same time, from different perspectives
I've never thought about things in this particular way, so this is just my first reaction. But I don't know if that idea applies to heat. Heat is the kinetic energy of the air molecules. What's two different things is [I]our interaction with[/I] heat. The first thing is the physical events, beginning with thermoreceptors in the skin releasing ions, which depolarize the neuron, which generates an electric signal, which...

The second thing is our subjective experience of all that as heat.

The Hard Problem is that nothing about the first suggests the second.
J November 13, 2025 at 13:49 #1024731
Quoting Patterner
What's two different things is our interaction with heat. The first thing is the physical events, beginning with thermoreceptors in the skin releasing ions, which depolarize the neuron, which generates an electric signal, which...

The second thing is our subjective experience of all that as heat.


Yes, that's what I'm suggesting. But I would change the terminology in a small but crucial way: Both in ordinary language and from a phenomenological perspective, "heat" is the subjective experience. Everyone knew what "heat" meant long before chemistry. So I don't think we ought to talk about "our interaction with heat." The only "heat" out there with which we can interact is "heat" in the first sense, molecular motion, etc. The "two different things" are the results of the two perspectives -- and again, I'm not arguing that the same thing/different thing question has to be settled firmly. After all, what makes a "thing"? Rather, what we should be clear about is that the situation is a peculiar one: We have two uses of the term "heat," both widely accepted by their communities of users. They refer to different events, phenomenologically and perhaps extensionally. Yet they also refer to one single event, seen objectively. No one, I think, will deny that the 1st and the 2nd ways of understanding heat are intimately connected, such that you can't get 2 without 1. (Can you get 1 without 2? . . . interesting.)

Quoting Patterner
The Hard Problem is that nothing about the first suggests the second.


So, you're asking whether this is a good analogy for consciousness. But is there anything about the molecular-motion description of heat that would suggest the subjective experience of warmth? We started, pre-science, with our experience of heat, and went on to discover the physical conditions upon which it supervenes, which are utterly unlike feeling warmth. Why couldn't this happen for consciousness as well? It seems like a good analogy to me, but maybe I'm missing something you have in mind.
Wayfarer November 13, 2025 at 20:46 #1024783
Quoting J
Heat really is two different things at the same time, from different perspectives


Another snippet from Michel Bitbol, this one from a paper Is Consciousness Primary? (I’m on a Bitbol bender at the moment.)

Let me give an illustration of this process of objectification, borrowed from the dawn of thermodynamics. The long and difficult process by which the thermodynamic variables such as temperature, pressure, and even volume (though at a much earlier period of history) have been extracted from their experiential basis is a locus classicus of the philosophical history of science (Bachelard, 1938, 1973 ; Mach, 1986). In the beginning, there were bodily “sensations”, ordinary practices, and an overabundance of qualitative observations about color of metals, fusion or ebullition of materials, expansion of liquids according to whether they are cold or hot etc. Heat and temperature were hardly distinguished from one another, and from the feeling of hotness. As for pressure, it was little more than a name for felt strain on the skin. But, progressively, a new network of quantitative valuations emerged from this messy experiential background, together with the laws that connect them (such as the ideal gas law). Even though sensations of hotness and strain still acted as a root and as a last resort for these valuations, they slipped farther and farther away from attention, being the deeper but less reliable stratum in a growingly organized series of criteria for assessing thermodynamic variables. At a certain point, the sensation of hotness no longer played the role of an implicit standard at all ; it was replaced by phase transitions of water taken as references for a scale of variable dilatations in liquid thermometers. This scale, which posits a strict order relation of temperatures, replaced the mixture of non-relational statements of hot or cold and partial order relation of hotter and colder which tactile experience together with qualitative observation of materials afford. Accordingly, the visual experience of graduation readings, or rather the invariant of many such visual perceptions, was given priority over the tactile experience of hotness. Later on, when the function “Heat” was clearly distinguished from the variable “temperature”, and its variation defined as the product of the “heat capacity” times the variation of temperature, tactile experience was submitted to systematic criticism : the feeling of hotness was now considered as a complex and confused outcome of heat transfer between materials of unequal heat capacities and the skin, and also of the physiological state of the subject. From then on, declarations about tactile experience, which had acted initially as the tacit basis of any appraisal of thermic phenomena, were pushed aside and locked up in the restrictive category of so-called “subjective” statements (Peschard & Bitbol, 2008).


It is an example of how the ‘primary/secondary’ distinction emerged in a real-world context.
J November 13, 2025 at 21:38 #1024795
Reply to Wayfarer Good description, thanks, very clearly explained. I'd like Bitbol better if he just told it straight, though, and stopped trying to scare his readers with phrases like "pushed aside and locked up." Come on, no one has forgotten what heat feels like! What would he have the scientists do, insist on a reference to tactile experience every time a measurement is taken?

The danger you and I both recognize comes not from the story Bitbol tells here, but from the further story which physicalists try to tell, in which heat is "really" or "actually" or "reduced to" its objectively measurable components.
Patterner November 13, 2025 at 21:58 #1024804
Quoting J
We started, pre-science, with our experience of heat, and went on to discover the physical conditions upon which it supervenes, which are utterly unlike feeling warmth. Why couldn't this happen for consciousness as well? It seems like a good analogy to me, but maybe I'm missing something you have in mind.
Indeed, we are miles apart on this. Consciousness and the feeling of warmth are not two different things. The feeling of warmth is an example of a conscious experience. It is only [I]through consciousness[/I] that we have the experience. Just as it is through consciousness that we hear music, see colors, and taste the sweetness of sugar.
J November 13, 2025 at 22:49 #1024816
Quoting Patterner
Indeed, we are miles apart on this.


No, I don't think so. I agree that the feeling of warmth is an example of a conscious experience. We also agree, I suppose, that being conscious as such is a conscious experience -- sounds awkward, but how else could we put it? I certainly experience being conscious, and so do you. So I'm hypothesizing that, as with warmth, there's a compatible story to be told about the "outside" of our conscious experience.
Wayfarer November 13, 2025 at 23:49 #1024826
Quoting J
The danger you and I both recognize comes not from the story Bitbol tells here, but from the further story which physicalists try to tell, in which heat is "really" or "actually" or "reduced to" its objectively measurable components.


Of course! Neither you nor @Patterner is the kind of reductive materialist that Bitbol (and Chalmers) have in their sights - but plenty are, and that is who he's addressing.

[quote=Daniel Dennett, The Fantasy of First-Person Science] In Consciousness Explained, I described a method, heterophenomenology, which was explicitly designed to be 'the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science.' [/quote]

That is the polar opposite of Bitbol's phenomenology and Chalmers' naturalistic dualism. So they're not (and I'm not) 'attacking straw man arguments' - physicalists really do say that. It's important to get clear on the fault lines between the tectonic plates, so to speak. Which, from what you're saying, I'm not sure that you're seeing. (Incidentally I'm drafting a Medium essay "Intro to Bitbol" which I hope might be useful as he has an enormous amount of material online.)

J November 14, 2025 at 01:57 #1024832
Quoting Wayfarer
physicalists really do say that.


They certainly do.

Quoting Wayfarer
It's important to get clear on the fault lines between the tectonic plates, so to speak. Which, from what you're saying, I'm not sure that you're seeing.


I think I see some of them, but always happy to learn more. Appreciate all the thought you've given this.
Patterner November 14, 2025 at 12:42 #1024917
Quoting J
I agree that the feeling of warmth is an example of a conscious experience. We also agree, I suppose, that being conscious as such is a conscious experience -- sounds awkward, but how else could we put it? I certainly experience being conscious, and so do you. So I'm hypothesizing that, as with warmth, there's a compatible story to be told about the "outside" of our conscious experience.
Can you explain what you mean by "experience being conscious"? We come at consciousness from different directions. I'm happy to explore your idea, but I'm not entirely sure what it is.
J November 14, 2025 at 14:00 #1024922
Quoting Patterner
Can you explain what you mean by "experience being conscious"? We come at consciousness from different directions. I'm happy to explore your idea, but I'm not entirely sure what it is.


Fair enough. We'd have to start by agreeing on what can be an object of experience. As you know, many philosophers believe that con* can never be an object for itself, that it is properly a transcendental ego of some sort. To "experience consciousness," for these philosophers, would be like saying that the eye can see itself.

I don't find that persuasive, but let's say we agreed that it was a good description. In that case, we need a different term -- not "experience" or "be conscious of" or "be aware of" -- for what happens when con reflects on itself. Whatever term we decide to use, that's what I'd be referring to when I spoke about experiencing being conscious.

Or, we can allow, as I do, that self-con or the awareness of one's con is an experience on par with any other mental event. In that case, when I talk about the experience of being conscious, I mean the experience I have when I merely look at my looking (doing meditation is an excellent way to get there). It's separate from any content, whether perceptual or internal.

But I don't think we even need anything this esoteric to answer the question, "Do you experience con?" We can just reply, "Is experiencing con the same or different from being conscious?" If it's the same, then we all agree that we have that experience. If it's different, then we return to the Sartrean exegesis I began with. But in neither case is the phenomenon -- call it what you will, "experience" or not -- in doubt.

*consciousness. I'm tired of typing that word incorrectly!
Relativist November 14, 2025 at 17:37 #1024936
Reply to J Reply to Patterner
As far as I can tell, consciousness (=experiencing being conscious) entails the set of sensory sensations, thoughts and feelings one has in the present, where "the present" is a short period of time, not an instant of time.

These are all intertwined. Sensations and feelings can induce thoughts, and thoughts can induce feelings. It is the feelings aspect that is the hard part of the "hard problem". Most aspects of consciousness seem amenable to programming in software. Feelings are not amenable to this. IMO, feelings are the one aspect of consciousness that is inconsistent with what we know about the physical world. That doesn't mean it's necessarily inconsistent with naturalism - it could just mean that there are aspects of the natural world that are not understood and may be inscrutable.
Patterner November 15, 2025 at 05:09 #1025056
Quoting J
*consciousness. I'm tired of typing that word incorrectly!
:rofl:


Quoting J
Fair enough. We'd have to start by agreeing on what can be an object of experience. As you know, many philosophers believe that con* can never be an object for itself, that it is properly a transcendental ego of some sort. To "experience consciousness," for these philosophers, would be like saying that the eye can see itself.

I don't find that persuasive
I [I]do[/I] agree with this, as it happens. I think everything is an object of experience. But I don't think the experience is an object that, itself, can be experienced. I don't think the problem is that an eye cannot see itself. I think the problem is that vision cannot see itself.

I think what is normally called [I]human consciousness[/I] is the consciousness - the subjective experience - of being a human. We have mechanisms for mental abilities, and we experience them as, among other things, self-awareness. We are aware of our thoughts and feelings.

A bacterium experiences greater or lesser warmth, just as we do. But it doesn't think about it, or comment on it.


Reply to Relativist
I very much agree regarding what you say about feelings, and with most of the rest. But I don't agree that "Most aspects of consciousness seem amenable to programming in software." I think only mental abilities can be programmed, like sensory input, responses to sensory input, storage of sensory input and responses, referencing the stored data... I don't think the subjective experience of all that is programmable. We can program feedback loops, but we can't program those feedback loops being aware of themselves.
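To make that last point concrete, here is a toy sketch (purely illustrative; the class and names are made up, not from anything in this thread) of a feedback loop that also monitors its own state. Every line of it is straightforwardly programmable; nothing in it tells us whether such self-monitoring amounts to awareness, which is the very thing in question.

[code]
# Toy sketch (hypothetical; illustrative only): a feedback loop that also
# records its own state. All of this is programmable; whether the
# self-monitoring amounts to awareness is the open question.

class Thermoregulator:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.heater_on = False
        self.self_report = []  # the loop's log of its own behaviour

    def step(self, sensed_temp: float) -> None:
        # First-order loop: respond to the environment.
        self.heater_on = sensed_temp < self.setpoint
        # Second-order loop: the system also monitors its own processing.
        self.self_report.append({"sensed": sensed_temp, "heater_on": self.heater_on})

if __name__ == "__main__":
    t = Thermoregulator(setpoint=20.0)
    for reading in [18.5, 19.2, 20.4, 21.0]:
        t.step(reading)
    print(t.self_report)  # a record of the system's own states, not a feeling
[/code]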
J November 15, 2025 at 13:44 #1025092
Quoting Patterner
I think everything is an object of experience. But I don't think the experience is an object that, itself, can be experienced. . . . A bacterium experiences greater or lesser warmth, just as we do. But it doesn't think about it, or comment on it.


This gets at the gnarly, self-reflexive quality of the con* problem. Can my experiencing of, say, warmth also itself be an object of experience? Rather than give my answer, I'd toss it back and ask, What do your observations of your own mentality in this regard tell you? How does it seem? -- let's start there.

As for the bacterium, yes, it has no self-con, no self-awareness. Once again, this raises the question of how "experiencing experience" may relate to human con. I'm not saying we're the only animals who can do this -- I'm sure many others are to some degree self-aware -- but it characterizes so much of our sense of what it means to be conscious, of "what it's like to be a human." And let's not forget that the practice of meditation can show us the opposite: what it means to experience non-experience, if I can put it that way. Or at least it may do; some doubt this.

*consciousness
Patterner November 15, 2025 at 14:55 #1025098
Quoting J
Can my experiencing of, say, warmth also itself be an object of experience?
I think so. A bacterium experiences warmth, and that's maybe all there is to say. I experience warmth. But I have mental abilities the bacterium does not, which I experience as self-awareness. So I'm aware that I'm experiencing warmth, unlike the bacterium.

Is this an infinite situation? I experience the knowledge that I'm experiencing warmth. And I experience the knowledge of the knowledge that I'm experiencing warmth. And…
J November 15, 2025 at 22:25 #1025164
Reply to Patterner Yes, I also think I can have a self-aware experience, without running into the "eye seeing itself" problem. What I experience, in such a case, is not a "pure" experience without an object, but rather "what it's like to experience X" (warmth, in this example). In my phenomenological world, there is a difference. Being warm is certainly an experience, but not the same one as "experiencing warmth."

The threat of the infinite regress is hollow, I'm pretty sure: How many iterations can a mind really retain?
Janus November 16, 2025 at 01:36 #1025194
Quoting Patterner
Is this an infinite situation? I experience the knowledge that I'm experiencing warmth. And I experience the knowledge of the knowledge that I'm experiencing warmth. And…


No, I'd say that's just empty playing with words.
Patterner November 16, 2025 at 05:33 #1025210
Reply to Janus Or that. :rofl:


Quoting J
Yes, I also think I can have a self-aware experience, without running into the "eye seeing itself" problem.
I really don't see that problem, either. We are made up of many information processing systems. Some are shared with many species, right down to single-celled bacteria and archaea. Even if our sensory input from light is much more precise and complex than theirs, they also subjectively experience it. (When it comes to light perception, plants leave us in the dust in some ways.) But we have mental abilities that nothing else has. Our self-awareness is our subjective experience of some of those abilities.

My thinking is that what we are conscious [I]of[/I] is not what consciousness [I]is[/I]. Consciousness is not self-awareness. So it's not a case of consciousness viewing itself. I think we have a subjective experience of the warmth. And we also have the subjective experience of feedback loops that are much more complex than those of single-celled (and many multi-celled) creatures.

Sorry for the disorganized structure of this post. I'm exhausted and this is the best I can get out right now before collapsing in bed.
J November 16, 2025 at 14:08 #1025255
Reply to Patterner Pretty good for a sleep-deprived philosopher! :smile:

Quoting Patterner
what we are conscious of is not what consciousness is.


This is unproblematic until we consider what I've been calling the experience of being conscious. I agree that con is not self-awareness, but when we are self-aware, we are having a conscious experience of . . . what, exactly? Is it multiplying terms too far to discriminate between "con" and "being conscious"? Sorry, I'm wide awake, and I already feel like I'm drifting into a Husserlian dreamworld, where terms and loops proliferate!

Maybe it helps to refer once again to meditative states, in which it's possible to experience a very simple, seemingly objectless state of awareness. Am I "viewing con itself" in such a state? What's especially interesting is that the literature of meditation claims that the ego, the (possible) source of conscious awareness, is largely absent in such states. Should we conclude that "I" am not doing anything at that moment, so the whole loop question can never get started?

Patterner November 16, 2025 at 16:59 #1025271
Quoting J
but when we are self-aware, we are having a conscious experience of . . . what, exactly?
Feedback loops in our brain. Mental feedback loops, as opposed to loops that are involved with, for example, homeostasis.

I always go back to Ogi Ogas and Sai Gaddam in [I]Journey of the Mind: How Thinking Emerged From Chaos[/I]:
Ogi Ogas and Sai Gaddam: A mind is a physical system that converts sensations into action. A mind takes in a set of inputs from its environment and transforms them into a set of environment-impacting outputs that, crucially, influence the welfare of its body. This process of changing inputs into outputs—of changing sensation into useful behavior—is thinking, the defining activity of a mind.

Accordingly, every mind requires a minimum of two thinking elements:
• A sensor that responds to its environment
• A doer that acts upon its environment
It is difficult to think of the simplest [I]molecule minds[/I] ("All the thinking elements in molecule minds consist of individually identifiable molecules."), such as those of archaea or bacteria, as minds that are thinking. But it must surely be the first step on the evolutionary road. Thinking and mental processes are physical events. We are conscious of - we subjectively experience - these events. These events are not consciousness. That's what I mean by "what we are conscious [I]of[/I] is not what consciousness [I]is[/I]."


Quoting J
Is it multiplying terms too far to discriminate between "con" and "being conscious"?
From my standpoint, that's like discriminating between "mass" and "being massive".



Quoting J
Maybe it helps to refer once again to meditative states, in which it's possible to experience a very simple, seemingly objectless state of awareness. Am I "viewing con itself" in such a state? What's especially interesting is that the literature of meditation claims that the ego, the (possible) source of conscious awareness, is largely absent in such states. Should we conclude that "I" am not doing anything at that moment, so the whole loop question can never get started?
I can't speak from experience. But everything I've read makes it sound to me as if the meditator is (how to say it?) not engaging in thinking/mental processes. Thinking might be an automatic response to sensory input and other thoughts. But if they are doing what is claimed, that automatic response can be prevented. Maybe "suppressed". Perhaps better to say "not engaged in", because that sounds more passive. In essence, as far as consciousness goes, the meditator subjectively experiences only the sensory input, much as other species that do not have our mental abilities do.
Wayfarer November 16, 2025 at 23:33 #1025329
Quoting Patterner
I always go back to Ogi Ogas and Sai Gaddam in Journey of the Mind: How Thinking Emerged From Chaos:
A mind is a physical system


That's an immediate red flag for me. According to their definition, a thermostat is a mind. From a review:

Despite the splashy blurb, Journey of the Mind is essentially a pancomputational, complexity-theoretic evolutionary narrative. It treats mind as something that emerges when matter organizes into systems capable of learning, in the broadest, most information-theoretic sense.

Think of it as:

Daniel Dennett + Integrated Information + Complexity Theory + Evolutionary Just-So Storytelling,
but written for a general audience with lots of nice pictures.

They argue that:

A mind is any system that takes in information, updates internal structure, and generates adaptive behaviour.

Minds scale: archaeal → amoebic → worm → reptile → bird → mammal → human → societal “supermind.”

Consciousness = a sufficiently complex, recursively self-modelling prediction engine.

It’s an attempt to unify everything from basic chemotaxis to human language under one computational umbrella.

Me, I don't think it qualifies as philosophy. It's pop science.
Patterner November 17, 2025 at 00:03 #1025334
Quoting Wayfarer
According to their definition, a thermostat is a mind.
In what way do a thermostat's outputs influence the welfare of its body?
Wayfarer November 17, 2025 at 00:09 #1025336
Reply to Patterner True - its function is not directed at itself but at a larger system. Nevertheless, it does meet their definition: it senses (temperature) and does (turns on or off). But it's allopoietic rather than autopoietic, in enactivist terms.

Quoting J
Maybe it helps to refer once again to meditative states, in which it's possible to experience a very simple, seemingly objectless state of awareness. Am I "viewing con itself" in such a state? What's especially interesting is that the literature of meditation claims that the ego, the (possible) source of conscious awareness, is largely absent in such states. Should we conclude that "I" am not doing anything at that moment, so the whole loop question can never get started?


Very insightful question! 'Non cogito ergo non sum'. I tried diligently to practice Buddhist meditation for years, but one of the early understandings I had was that, so long as you're aware of yourself meditating, you're obviously not in that kind of 'contentless consciousness' state, as you're still aware of 'I am doing this'. Getting to that kind of complete cessation of self-consciousness always eluded me. In yoga terminology, such states are called 'nirvikalpa', meaning 'nir' (no) 'vikalpa' (thought forms). But in my experience realising such states is very rare in practice. That's why genuine yogis and real spiritual adepts are often reclusive and keep away from society. It's the opposite of our over-stimulated high-tech culture.
Patterner November 17, 2025 at 00:17 #1025338
Quoting Wayfarer
True - its function is not directed at itself but at a larger system. Nevertheless, it does meet their definition: it senses (temperature) and does (turns on or off).
They say influencing the welfare of its body is crucial.
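
To make the disagreement concrete, here is a toy sketch (mine, not Ogas and Gaddam's; the function names are invented) of the bare two-element definition. A thermostat does instantiate a sensor and a doer, which is your point; what the sketch leaves out entirely is the further clause that the outputs must influence the welfare of the system's own body, which is mine.

[code]
# Hypothetical sketch of the bare sensor/doer definition (names are mine).
# A thermostat satisfies the two "thinking elements"; the "influences the
# welfare of its body" clause is simply absent from this model.

from typing import Callable

def make_minimal_mind(sensor: Callable[[], float],
                      doer: Callable[[float], None]) -> Callable[[], None]:
    """Couple a sensor to a doer: one sense-act cycle per call."""
    def cycle() -> None:
        doer(sensor())
    return cycle

room_temp = 18.0   # what the sensor reads
heater_on = False  # what the doer controls

def read_temperature() -> float:   # the sensor
    return room_temp

def switch_heater(sensed: float) -> None:   # the doer
    global heater_on
    heater_on = sensed < 20.0

thermostat = make_minimal_mind(read_temperature, switch_heater)
thermostat()
print(heater_on)  # True: the two-element definition is met, yet nothing
                  # here models the thermostat's own welfare.
[/code]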