Replacing matter as fundamental: does it change anything?
Lately, I have noticed a trend. There are some who admit the existence of "the hard problem of consciousness", but only when it comes to matter. That is, there is the following opinion. Yes, the properties of matter are not adequate to produce or explain subjective experience. However, if we replace matter with another fundamental substance, X, then the problem disappears because X can create consciousness. Personally, I don't understand exactly how it can work.
1. If there really is a problem of consciousness, is this a matter-specific problem?
2. If we replace matter with another fundamental substance (except consciousness itself) can something change?
Comments (89)
Information is not a substance (in the philosophical sense). The word has no meaning without specifying what information is being referred to.
Information requires both energy (communication) and matter (storage).
So information as a substance does, as you say, dissolve the matter-centric hard problem of consciousness: matter cannot be taken in isolation as the source of awareness (collections of information); it is only part of a larger dynamic that confers these abilities.
When you get down to 'fundamental constituents of existence', what are the choices? Any suggestions?
No, the hard problem exists if we start with something (anything) that isn't consciousness, and try to explain consciousness in terms of that. Depending on what we start with, the 'hard problem' might be more, or less, difficult.
Quoting Eugen
I haven't heard anything so far that changes the situation. You get things like 'panprotopsychism' but that's basically just importing consciousness into substance. There's @Apokrisis' 'pansemeiosis' which puts meaning as fundamental, or near-fundamental, and then, by stages, as complex systems evolve, they gain more of the constituents of consciousness (attention, predictive ability, some other stuff (can't remember)) until eventually we have a creature that can be said to be fully conscious. Personally I don't think that touches the hard problem, but it's an interesting approach nonetheless and may well be a good way to explain some mental functions if not consciousness.
I think without a clear, precise conception (or theory) of "consciousness", saying "isn't consciousness" doesn't actually say anything; ergo, at best, the so-called "hard problem" is underdetermined.
:up:
Yep. Our sensory apparatus wouldn't work.
Quoting Eugen
Yep. Every experience ever by anybody. Then our heads will surely explode.
Guy looks at a picture, sees what used to be a dump truck, back before matter was replaced.
But wait ... when matter is replaced, pictures and dump trucks disappear, along with eyes and humans and ...
"...To know what questions we may reasonably propose is in itself a strong evidence of sagacity and intelligence. For if a question be in itself absurd and unsusceptible of a rational answer, it is attended with the danger (not to mention the shame that falls upon the person who proposes it) of seducing the unguarded listener into making absurd answers, and we are presented with the ridiculous spectacle of one (as the ancients said) milking the he-goat, and the other holding a sieve..."
(CPR A58/B83)
Just sayin' ...
Information is brought about by "difference". And difference requires, at the very minimum, 2 separate or distinct things or states. There is no "difference" in a singular existent. A "singularity." It is 1. There is no 2nd entity to be different from 1 (oneness). Thus no information.
Now, in order to have 2 separate states and thus information, we require relativity. As to be "relative" to something means you are separate from it. You are something else. Distinct in quality/characterisation.
This is described by Einstein's equation E=mc². Where we see relativity in action: a departure of Matter from Energy. 2 separate states of being and thus information between them (relativity).
The information brought about by this separation of energy from matter is "space-time". As energy doesn't occupy space nor time, whilst matter occupies both space and time. Matter is at a different "rate" of existence (distance/time, or spacetime) or "speed" relative to energy.
Matter can thus come into a steady or consistent relationship with other matter (gravity) in a now freshly minted "4 dimensional" system. Energy, time and space having now become distinct, separate entities relative to it.
This is the process of emergence of new properties and phenomena as a direct result of the previous.
Where does consciousness come into play? Well, information is stored in a stable way (primitive memory) in matter, and the system naturally emerges or evolves into new phenomena and relationships. By natural selection the matter becomes more complex (thus the memories become more complex), and so the ability to perceive becomes more complex and agency emerges.
I don't believe there is a discrete line between living things and non living chemical changes. It is a spectrum of small changes, with awareness and perception becoming more and more sophisticated in the process.
I won't clutter up your thread with my somewhat idiosyncratic views beyond saying this - The idea of "fundamental substance" is a metaphysical one, not a scientific one. It's a way of thinking, not a matter of fact.
Nuff said.
This is the song I sing over and over. [laughably untrue statement]People love it when I do.[/laughably untrue statement] I learned it from R.G. Collingwood, who wrote "An Essay on Metaphysics." Metaphysical positions have no truth value. They are not true or false. Many people on the forum and elsewhere in philosophy don't agree.
Upshot - for me, the question of what, if anything, is fundamental is a matter of attitude, preference; not fact. As such, the question is not resolvable by logic - or science for that matter.
Let's leave it there and take it up in a different thread sometime.
:100:
As the OP makes clear, @Eugen is incorrigibly confused on this point.
The mystery here is not the basis for consciousness, it's the framing of consciousness itself. We want consciousness to be a thing so we can feel that I am something specific and unique, instead of just individual (my body, not your body). We are conscious if we are aware, or awake. We do not need the certainty of tying ourselves to something hard to differentiate ourselves from others or expectations. If we do not, we, in a sense, aren't an individual; I don't exist as me (apart from anyone else I follow or mimic or quote, etc.).
Quoting 180 Proof
Yes, we want a clear, precise concept of consciousness because it has been abstracted from its ordinary contexts in order to stand in the place of Descartes' "I" and the doubt of our existence. Undermining is halfway there; I'm asking we consider not only why we want a hard solution, but why we need to have a fixed "me" as it were, our consciousness. You have an experience, say, a majestic fleeting moment of a sunset, maybe even something you can't express in words; we want the picture of our entire human condition to be based on this occurrence (we always have our experience) so that we are, by nature, as a given, unique, and that that specialness dictates, for example: our meaning the things we say, our subjectivity, or being inscrutable to you, or that our expression is ensured, our actions always intended by us.
As I noted in my response to @Eugen, this is a matter of disagreement among philosophers and those of us who pretend. I think you and I are in the minority.
The issue is not that simple.
Here's what the Internet Encyclopedia of Philosophy says about THPC:
"The hard problem of consciousness is the problem of explaining why any physical state is conscious rather than nonconscious. It is the problem of explaining why there is something it is like for a subject in conscious experience, why conscious mental states light up and directly appear to the subject. The usual methods of science involve explanation of functional, dynamical, and structural properties - explanation of what a thing does, how it changes over time, and how it is put together. But even after we have explained the functional, dynamical, and structural properties of the conscious mind, we can still meaningfully ask the question, 'Why is it conscious?' This suggests that an explanation of consciousness will have to go beyond the usual methods of science. Consciousness therefore presents a hard problem for science, or perhaps it marks the limits of what science can explain."
"The hard problem was so-named by David Chalmers in 1995. The problem is a major focus of research in contemporary philosophy of mind, and there is a considerable body of empirical research in psychology, neuroscience, and even quantum physics."
(See more at https://iep.utm.edu/hard-problem-of-conciousness/ )
In a past comment of mine regarding the subject, I mentioned that this problem actually should belong to Science and not to Philosophy, in which the methods of exploring and studying life differ radically. This does not happen in Eastern philosophy, which has kept its wisdom from its very long past. But in the West, Science has penetrated so deeply and influenced so widely our world, that our Philosophy tends to become one with Science, as in antiquity! (Re: science Greek philosophers).
Quantum mechanics, for instance, has penetrated the philosophical minds of a lot of scientist-philosophers --i.e. who have PhDs in both fields-- of our time. Yet, QM is full of uncertainties. And trying to apply it to philosophical matters like life, mind and consciousness is, IMO, walking on thin ice.
But even so, it would be fine to do that, only there should be a special branch of Philosophy for it. (I don't mean Philosophy of Science, but rather something like "Scientific Philosophy".)
Quoting Eugen
I am saying there is not a problem of consciousness in coming at the other end, which is to say we create the fantasy of the subjective experience; that consciousness is a construct to gain theoretical certainty. We make it an intellectual puzzle because we can't handle our actual human condition of separation from others (and ourselves).
Maybe? At least a lot of people seem to think so. Information theory is arguably the biggest paradigm shift in the sciences in centuries. Quantum mechanics and relativity rewrote how we think about the world, but for many fields they were largely irrelevant.
By contrast, information theory has had a huge impact on physics, biology, neuroscience, economics, etc. It's a paradigm that has allowed us to link together phenomena in the social sciences with phenomena in physics using not only a common formalism but a shared semantics (complexity studies does this too). And obviously the technology that theories of computation and information helped create have dramatically reshaped human society by giving us the internet, digital computers, etc.
My take is that it is too early to tell if "information" theories will end up radically transforming how we think of the natural world, or will simply fizzle out. Currently, it's widely accepted that definitions of information all have major problems, at least from a philosophical perspective, yet the formalisms have been amazingly useful.
There is a reason computational neuroscience is probably the biggest theory of consciousness right now, or why many of the more well-known physicists publishing popular science books today seem extremely excited about pancomputationalism and "it from bit" theories, even if they don't fully endorse them. That said, information is a notoriously vague concept. I feel like every paper on the philosophy of information starts by stating this fact, and so it's not always clear what this new vision actually is in a systemic sense.
Information theory ties into the hard problem by showing how signals in the environment, e.g. light waves bouncing off objects, can be picked up by the eyes and encoded in patterns of neuronal activation. It seems like a potential way across the objective/subjective gap, but such explanations are in no way close to being complete and rely on vaguely defined terms to do a lot of leg work.
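That "encoding" step can at least be quantified. Shannon's framework measures how much uncertainty about a stimulus an encoded response removes, while staying entirely silent about experience. A minimal sketch (the two-state "stimulus" is invented purely for illustration):

```python
import math

def entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical stimulus: 'light' vs 'dark', equally likely -- 1 bit of uncertainty.
p_stimulus = [0.5, 0.5]
h_before = entropy(p_stimulus)

# A noiseless encoding maps each stimulus to a distinct response, so
# observing the response removes all uncertainty about the stimulus.
h_after = 0.0
print(h_before - h_after)  # 1.0 bit transmitted
```

Real sensory channels are noisy, so the transmitted information falls below this ceiling; and none of this arithmetic touches why the encoding should be accompanied by experience, which is exactly the gap being pointed at here.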
Surprisingly, there hasn't been much philosophical work on "what is computation" (though Leibniz actually had some interesting, very ahead-of-their-time ideas of computation as logical entailment). Turing was thinking of human computers, people whose job was to run through computations, when he wrote his seminal paper on computation. He was thinking: "what are the minimal instructions and inputs a person needs to receive to perform a computation, and what are the minimal things they need to be able to do to carry it out?" This is strange when you think of it. Computational theory of mind is a theory that says consciousness is caused by, or reducible to, a formalism based on a conception of what conscious human beings do while performing mathematical calculations. It is, at least in its historical conceptualization, circular in this way.
I think digital physics, the idea that all reality can be reduced to 1s and 0s, that bits can be swapped in for fundamental particles in old corpuscular models, has been pretty well debunked. It's important not to conflate this with all information ontologies, something that seems to happen fairly often. Digital physics is sort of the strawman for attacking "it from bit," it seems.
IDK, I could write a lot about this but I figured many people might already be aware of these things or uninterested. If anyone wants some recs I have a sort of "information reading list," I've been collating. The Routledge Philosophy of Information Handbook is particularly good though for an in-depth conceptual look that also has specialized articles grounded in the philosophy of specific natural and social sciences.
You're right, I missed that. What I'm claiming is that, in response to #1, there is not a problem of consciousness at all, hard or otherwise. Science isn't missing something, though nor should it imagine it is solving what is a philosophically mis-conceptualized issue. Where philosophy used to need to catch up to the discoveries of science, now science needs to stop thinking in the terms of 16th century philosophy.
Quoting Alkis Piskas, quoting the interwebs
I would suggest that we have not examined how asking this question is meaningful. In what context? Why or when is there a further issue? Why do we need more?
I (and Wittgenstein) would claim that the formation of the picture of consciousness is manufactured to have something to solve in order to have certainty in ourselves and in relation to others. It is not physical things (sensations, feelings, etc.) that make up who we are; our having them is not special. You have a headache; hey, I do too. Yours is throbbing behind your left ear; wow, me too, that's crazy that we have the same headache. Our relation to others (identifying pain, having the same experience, etc.) is not based on our biology; it's a function of living with each other through the history of our human condition.
Quoting Count Timothy von Icarus
This gap is not the difference between individual experience and generalized certainty; I am separate from you. My knowledge of you has a limit (you may be faking, hiding, lying); there is a real truth to the fears of skepticism. So it's not knowledge we lack (from science or otherwise). I can't be sure (know!) that you are in pain, because the way it works is I react to your pain, I respond to or ignore it. Our feeling that we want something more is not a riddle; it comes from a need for control.
I am not sure that emergent properties are explained. Evolution seems to be an attempt to explain human biology, and biology per se, as a series of emergent events leading to ever more sophisticated mechanisms, and the idea seems to be that if you have millions of years of random events, something like the human brain will eventually emerge.
A bit like The Infinite monkey theorem https://en.wikipedia.org/wiki/Infinite_monkey_theorem
"The infinite monkey theorem states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will almost surely type any given text, such as the complete works of William Shakespeare"
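For scale: the theorem's "almost surely" hides astronomical waiting times. For a target word whose prefixes and suffixes don't overlap, a standard result from pattern-matching probability gives the expected number of uniformly random keystrokes before the target first appears as alphabet_size ** len(target). A sketch (the word "banana" is just an illustrative target):

```python
def expected_keystrokes(target: str, alphabet_size: int = 26) -> int:
    # Expected waiting time for a pattern with no self-overlap under
    # uniform random typing: alphabet_size ** len(target).
    return alphabet_size ** len(target)

print(expected_keystrokes("banana"))  # 26**6 = 308,915,776 keystrokes on average
```

Even a six-letter word takes hundreds of millions of keystrokes on average, and the expectation grows exponentially with length, which is why the theorem is a statement about infinity rather than a plausible mechanism on any physical timescale.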
The thing to remember is the hard problem is a problem about fundamental stuff. It is an argument about physical materials and their putative properties. Dualists are confounded by their inability to escape their mistake of thinking of consciousness as another kind of unformed ultimate simple.
I am instead a structuralist, and so subscribe to a completely different ontology. Neither matter nor mind could be an ultimate simple. All things are structures and so irreducibly complex.
The hard problem is simply not an issue on that score. Indeed, structuralism says our models of reality must be dualistic in the Aristotelean systems sense. Substantial being is irreducibly hylomorphic.
An apparent division of causality - such as between mind and matter - is the feature that the ontology predicts rather than the bug that bedevils the metaphysics. Nothing can exist except by being a system that marries Aristotle's four causes in bottom-up material construction and top-down immaterial constraint fashion.
Believing there is a hard problem is thus a symptom of being locked into a reductionist metaphysics. It is built into the worldview. The only escape is a radical shift in worldview.
I wonder if those could be conceived as analogous to the fundamental existence-enabling constraints identified in cosmology (e.g. Martin Rees' 'six numbers')?
I think that since intention is personal, the immaterial final cause acts in a bottom-up freedom fashion.
Hence you aren't a structuralist or systems thinker.
Quoting Wayfarer
The constants of nature are ratios or balances. So they are fundamental numbers that emerge from processes in opposition.
The fine-structure constant alpha, for example, is the effective balance of the electrostatic repulsion between two charged electrons and then the quantum vacuum contribution of all the virtual particles that the close proximity of two such classically-imagined particles creates.
So you have the two aspects of physical reality - the classical particle and the quantum vacuum descriptions - as the limit state constraint descriptions of the cosmological system. And then a local constant of nature - alpha, with its measured ratio of near enough 1/137 - popping out as the average of these two sources of action.
The constant is constant enough at low temperature or large scales. But making things very hot or very small will turn up the sizzle of quantum fluctuations, and so alpha shifts to something more like 1/128.
See https://www.forbes.com/sites/startswithabang/2019/05/25/ask-ethan-what-is-the-fine-structure-constant-and-why-does-it-matter/?sh=3f6f77145671
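The "ratio, not magic number" reading is visible in alpha's standard low-energy definition, which anyone can check from the CODATA constants; a quick sketch:

```python
import math

# CODATA 2018 values (SI units); e and c are exact by definition.
e = 1.602176634e-19      # elementary charge, C
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Fine-structure constant: a dimensionless ratio relating the
# electromagnetic coupling e**2 to the quantum (hbar) and
# relativistic (c) scales.
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~137.036
```

The point relevant to the thread sits in the formula itself: alpha is not a free-standing quantity but a relation among other scales, and at high energies its measured value runs toward roughly 1/128 as the quantum-vacuum contribution grows.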
So constants speak to the fact that the constraints of reality are emergent or effective balances that themselves can evolve.
Alpha tells us that science has to arrive at its fundamentals by framing its observations in terms of opposing limit state descriptions of the Cosmos.
We have two theories of nature - the classical and the quantum limits on this useful notion of being. We have formally reciprocal accounts of the top-down ultimate boundaries of nature. Everything is to be found somewhere between the dichotomy of absolute counterfactual definiteness and the absolute lack of counterfactual definiteness. :grin:
We can then get on with measuring where the balance point between the repulsion of two classically imagined electrons, and the matching attraction of a small and warm region of bubbling quantum charge, actually falls.
It turns out to be a sliding scale, depending on the larger thing of how small/hot or cold/large the Universe happens to be at that point in its developing history.
The take home is that physics sounds reductionist to most ears, but it is actually structuralist in its metaphysics.
Reality is neither fundamentally classical, nor even quantum. These are just the two matched limit state descriptions we need as our dichotomous metaphysical frame so as to actually be able to measure anything of any use, like the predicted charge between two particles at some size and temperature scale.
:fire: :100:
I do read Ethan Siegel's posts, although I notice that he's been dropped from Forbes. He's a great explicator.
From that article:
Sounds awfully like 'an idea' to me.
(Apparently, 1/137 turns up in a wide range of contexts.)
So your argument is an ad hom against a professional's exposition so as to create pro hom support for your own amateur opinion?
Hmm. :cheer:
Think of a mathematical constant like pi, e or phi. Are they values or are they ratios?
Do they represent some magic quantity - some measured amount of substantial being, that thus raises all sorts of counterfactual questions as why the Universe chose that particular number and not any other that would seem, prima facie, just as good? Or do they instead represent just the relation of two ideal limits that simply have to have the number that they do?
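As an analogy for "a relation that simply has to have the number it does", consider the golden ratio phi: nothing "chooses" 1.618...; the value is forced by the relation between successive terms of any Fibonacci-like growth. A sketch (purely illustrative, not a claim about physical constants):

```python
def fib_ratio(n: int) -> float:
    """Ratio of successive Fibonacci numbers after n steps."""
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return b / a

phi = (1 + 5 ** 0.5) / 2  # closed form of the golden ratio
print(abs(fib_ratio(30) - phi) < 1e-9)  # the relation fixes the value
```

Asking "why did the Universe pick 1.618...?" would be confused here: the number is not selected from a menu of alternatives but falls out of the structure of the recurrence, which is the sense of "ratio" being argued for above.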
That is the philosophical point here.
A lot of people make a big deal about the physical constants for that reason. It seems the Universe could pick any value. But if you understand the deeper structure - the metaphysical dichotomies that represent the relations which can divide unformed potential so as to give it actual and real dimensionality - then the constants cease to be surprising. Your attention can turn instead to focusing on the possibilities of the relations which are Platonically fundamental, and so must characterise any actualised existence.
So what physicists call constants are not fundamental values but emergent ratios. And being ratios, they speak to the deeper dichotomies which are the structure - the Platonic-strength necessities or constraints - that are what force an organised and logical Cosmos into concrete being.
Things exist because there is nothing to prevent the anythingness of unformed potential interacting enough with itself to become restricted to its most basic dialectic possibilities.
Anaximander, Aristotle and Peirce laid all this out. This is the metaphysics worth knowing. Not all the tired old hard problem crap and other reductionist tropes.
They're ratios. And what I said was responding to
Quoting apokrisis
So, my reasoning went, if matter (matter~energy) represents the 'hyle' of hylomorphic dualism, what represents the 'morphe'? You said it yourself - immaterial constraints. And the point about the 'fine structure constants' is not that they're 'spooky' but that they're irreducible - no reason can be given for why they are just as they are (which is precisely what is meant by 'the naturalness problem'.) They are, as it were, the terminus of explanation. And furthermore, they are not in themselves physical - I can't ask you to show me one of them, as the demonstration would consist solely of mathematical arguments and proofs (which I'm the first to acknowledge I wouldn't understand). They are, in that sense, perceptible only to an appropriately-trained intellect; they are, in classical terms, intelligible objects.
I agree that C S Peirce may well have laid it out. And Peirce, as you well know, held to a form of scholastic realism.
[quote=C S Peirce, Reasoning and the Logic of Things, HUP 1992]The only end of science, as such, is to learn the lesson that the universe has to teach it. In Induction it simply surrenders itself to the force of facts. But it finds . . . that this is not enough. It is driven in desperation to call upon its inward sympathy with nature, its instinct for aid, just as we find Galileo at the dawn of modern science making his appeal to il lume naturale. . . . The value of Facts to it, lies only in this, that they belong to Nature; and nature is something great, and beautiful, and sacred, and eternal, and real - the object of its worship and its aspiration.
The soul's deeper parts can only be reached through its surface. In this way the eternal forms, that mathematics and philosophy and the other sciences make us acquainted with will by slow percolation gradually reach the very core of one's being, and will come to influence our lives; and this they will do, not because they involve truths of merely vital importance, but because they [are] ideal and eternal verities. [/quote]
All I'm adding is that perhaps, even if only by analogy, such irreducible constraints may answer to that description.
Rhetorically speaking, as a simple matter of interest, Peirce may have laid it out ...
... The only end of science, as such, is to learn the lesson that the universe has to teach it ...
... or, he may have merely polished someone else's coin:
... Reason must approach nature with the view, indeed, of receiving information from it, not, however, in the character of a pupil, who listens to all that his master chooses to tell him, but in that of a judge, who compels the witnesses to reply to those questions which he himself thinks fit to propose. To this single idea must the revolution be ascribed, by which, after groping in the dark for so many centuries, natural science was at length conducted into the path of certain progress.
But my structuralist or systems metaphysics is saying that they are irreducibly complex. Thus not reducible to monistic simples. However, they are capable of being reduced or explained as an inevitable relation, such as is represented by a ratio.
So you are thinking monistically. And reading my replies in that light. I am instead saying that things like a constant are the product of triadic complexity. They are that type of dichotomous relation where there is a separation into two that then produces the third thing of their mixing - their arrival at a self-stabilising balance.
This is Siegel's neat point about alpha. It speaks to the fact that the Cosmos evolves into a dichotomous story of atoms in a void. At one extreme, you wind up with real electrons that have a located charge and thus a spatiotemporal repulsion. But balanced against that are all the quantum fluctuations of the gap that is defined by having two exactly located classical particles. The sum of these virtual contributions then amounts to a small countercharge, a positive attraction.
So that is a good example of how - when you lift the veil - the material universe exists because there are processes in opposition that can arrive at a balance that is distinctive. The charge is distributed in asymmetric fashion so its value - its ratio - sits at some definite point between the contrasting extremes of a classical atom and a quantum void. It becomes something measurably inbetween and produces a world in which charge plays an interesting emergent role. The electrostatic force can be a thing.
Imagine if the charge of the quantum void was the same as the charge of the located particle. You would have no electromagnetism to speak of as it would all cancel out. And indeed, that is what happens down at the Planck scale before the relevant symmetries are broken. No charge or particles to speak of.
So when anything exists, it is already complex in this triadic systems sense. Monism is too simple a metaphysics to account for an interesting universe of any kind. That is why - per the OP - it is the structure of relations that is fundamental.
Quoting Wayfarer
How are these remarks about epistemology relevant to this discussion of ontology?
Yes, human discovery takes an abductive leap of imagination. We can see the outcomes and guess at the complex triadic relation that was their probable cause.
And yes, by analogy, we can say the Cosmos bootstrapped itself into existence as some kind of abductive leap - a retrospective justification for why its evolution could only pan out a certain way. In the face of radical quantum instability, only our observed Universe had the right balances of its component processes to become the definite something we inhabit.
But I don't believe this is the Peircean argument you had in mind.
If light waves are information and patterns of neuronal activation are information, and we can describe both using the same information theoretic framework, it becomes easier to see how an event in the environment is tied to specific events in the brain.
There is a causal chain to follow. We can also see how the brain is subjecting information coming in from the sensory systems to computation. Most incoming sensory data is quickly scanned for change or relevance, then dropped. Many of the more interesting experiments on how human sensory systems work hinge on how sight is "constructed," in the brain, rather than essentially being a video feed from the eyes. The idea is that, if computational models can explain the "why" of profound aspects of first person experience, it may also be able to explain the why of experience existing itself.
This has been a useful model for understanding why sensory experience is the way it is and why we have persistent illusions that experimentation shows to be false.
That said, I actually don't think it tells us anything about "where does first person experience come from." What you get is a lot of good work on how what the brain does can be seen as computation, how agents can be modeled computationally, and then an unsupported move to "and so a complex enough informational process that feeds into a global workspace creates first person perspective." That is, all the complexity kind of masks that the Hard Problem part is only vaguely addressed.
However, this is because we're still asking information-based explanations to turn back to the old physicalist framework and explain consciousness in those terms. If you had a different ontology, one based on information, then maybe it gets easier? That's pure supposition though.
TBH, I think computational theory of mind is either a blind alley or requires a different model of the rest of nature to work.
That's right. I see significant flaws in systems theory. The "system", when used as a theoretical tool, is an artificial structure, a human construction which is produced in an effort to model an aspect of reality. The theory utilizes a boundary to separate the internal, as property of the system (part of the system), from the external environment, as not a part of the system. No system can have a closed boundary in an absolute sense, as experimentation seems to demonstrate, and as the second law of thermodynamics and the concept of entropy stipulate.
The problem with systems theory is that it does not provide a second boundary to distinguish between what is not a part of the system by being on the other side of the boundary to the outside (external environment), from what is not a part of the system by being on the other side of the boundary to the inside of the system (what is inherent to the theory, stipulated as not part of the system). By assuming only one boundary which separates "being part of the system" from "being not part of the system", anything which changes its status must cross that one boundary. But this renders certain aspects of reality as unintelligible, such as the entropy demanded by the second law. This concept dictates that there is something which is lost from the system, i.e. no longer a part of the system though it was a part of the system at an earlier time, yet it is not apprehended as moving through the boundary such that it can be detected as being on the outside of the system. So entropy refers to something which changes its status, but not by crossing the one boundary, rather through stipulation as inherent to the theory.
Quoting apokrisis
This is how the problem I've described above manifests in your metaphysics. The idea of something "irreducibly complex" is an admission of the unintelligibility of that feature. But you are making a false claim, a misrepresentation, to say that this irreducibly complex thing can be "represented by a ratio". To produce that ratio requires that we impose a separation, and this requires a reduction, an analysis. To say that something is "irreducibly complex" is to say that it cannot be represented by a ratio.
This end product, "that which is irreducibly complex", is what systems theory provides us with, due to the failure outlined above. When something, energy for example, is lost to a system, i.e. is no longer a part of that system, and it has not been observed to have crossed the boundary of the system, there is no way to know whether the energy has passed to the outside of the system in some undetected way, due to the limitations of observational capacities, or has been lost inside the system to what is called entropy. "Entropy" therefore is irreducibly complex, because no separation can be produced between the energy not accounted for because of a failure in observation (failure in practice), and the energy assumed by systems theory to be lost to entropy (failure of theory). Therefore the content of this irreducibly complex concept, "entropy", cannot be expressed as a ratio between those two aspects which actually make up what is commonly known as entropy.
This is entirely incorrect. Currently not understanding exactly how matter and energy interact to create a subjective experience does not negate the observed fact that matter and energy can interact to make a subjective experience.
:100:
No. It is to say that reduction is perfectly possible. Just not to the simplicity of a monism. Only as far as the complexity of a triadic or hierarchical relation.
This is why I would upgrade the Second Law, a reductionist story, with the holism of pansemiosis or dissipative structure theory.
The Universe is a good example. Does it actually increase its entropy if it is both cooling and expanding? Doesn't the loss of heat energy get made up for by the increase in gravitational potential?
The Universe can in fact only exist if it strikes this flat balance, where all its entropy, as local degrees of freedom (cooling particles), is matched by all its negentropy in terms of an ever-increasing gravity debt. It is because the two sides of the equation, the dichotomy of atom and void, are tied together in this yo-yo fashion that the Universe can "emerge from nothing".
So in the most general sense, the Universe is a dissipative structure. It exists by tumbling into its own heat sink. It is closed within its own boundaries by the trick of always cooling because it is expanding, and also always expanding because it is cooling.
It indeed has two boundaries: the limits of this cooling and the limits of this expanding. But it approaches them asymptotically in infinite time, never actually needing to cross them, and so it can exist for infinite time.
Quoting Metaphysician Undercover
As you are an Aristotelian, albeit of the scholastic stripe, it is surprising you don't immediately get all this.
Aristotle is the inspiration for the systems science movement. He analysed the irreducible complexity of nature in logical detail with his four causes, hylomorphic substance, hierarchy theory, etc.
His hylomorphism spells out the basic Peircean triad of potentiality/actuality/necessity: the dichotomy of pure material potential and pure formal necessity, which combine to create the third thing of actual or substantial material being. Prime matter plus Platonic constraints are the bottom-up and top-down that give you the hierarchy of manifest nature. A world of in-formed stuff.
The four causes expand this analysis to reveal the further dichotomies within the fundamental dichotomy.
The bottom-up constructive causes and top-down constraining causes are split by the dichotomy of the general and the particular.
You have material and efficient cause as the general and the particular. And you have formal and final cause as the particular and the general.
So Aristotle provided a rich analysis of how reality reduced to a system of relations rather than to some kind of monistic stuff. Reality is irreducibly triadic at base as it self-organises into concrete being via a self-contained causal logic.
The parts make the whole, and the whole makes the parts. This starts in the "less than nothing" that is Anaximander's apeiron, or Peirce's vagueness. Or what cosmology today likes to call a quantum potential.
I've seen your Aristotelian influence. You conflate formal cause with final cause. That's why you have no principles to separate the downward causation of formal cause from the upward causation of intention and the individual's free will, i.e. final cause.
Quoting apokrisis
The pure potential of matter cannot properly act as a cause, so you need to place intention, final cause, at the base of the "bottom-up constructive cause". But this is inconsistent with the common notion of "emergence", because it is teleological and emergence is not.
The issue is that the fine-structure constants are prior to anything evolving whatever. If they were different in some slight degree then there would be nothing to evolve.
I just did the exact opposite of distinguishing them as the general and the particular when it comes to the downwardly acting constraints of a system.
The desire is the generality as it only cares for the achievement of its end, and not the particularity of the form needed to achieve it.
Quoting Metaphysician Undercover
More muddled blathering.
Of course chance and spontaneity, as the character of pure material potential, must be entrained by top-down finality to produce an in-formed stable state of actualisation.
So "cause" is always too strong a word, with its monistic modern overtones, when Aristotle was breaking causality down into its four contrasting "becauses".
Quoting Metaphysician Undercover
Again you just waste my time by conflating monistic reductionism and triadic holism.
It is quite right that emergence as understood within the reductionist causal paradigm can't properly deal with teleology. It has to reduce global constraints in some fashion and so collapses into the familiar range of bad metaphysical choices, such as Cartesian dualism, epiphenomenalism, theism, panpsychicism, microcausal supervenience, and so on.
The failure of monistic reductionism produces a thriving marketplace of metaphysical blame-shifting. Trying to bandage the wound becomes its own considerable academic industry.
Instead of just insta-replying with babble, why not stop and think. Get to grips with the true Aristotle. :cool:
But the effective breaking of the electroweak symmetry was only going to produce some ratio, right? Given the input structures, the symmetries to be broken, some stabilising balance was going to emerge in post-hoc fashion.
So I don't see how it makes sense to claim the fine-structure constant was prior. It was already implicit as a thing in the fact there was a symmetry, and thus, in short order (the first billionth of a second of the Big Bang), in the breaking and rapid thermal stabilisation of that symmetry.
Again, like pi or other mathematical constants, the ratios as values are already implicit in the symmetry breaking. They only "pre-exist" in the sense the input geometry can arrive at no other self-stable and scalefree balance.
Pi is 3.14159265358979... in a dimensional world constrained to exact flatness. Positively or negatively curved spaces would have ratios of circumference to diameter ranging from the pi = 2 of a sphere to the pi = infinity of a hyperbolic plane.
So if a universe can only persist, hence exist, if it is flat, then that is what selects for a flatness as near 3.14159265358979... as can be managed.
Our universe would have quickly collapsed if it had started with a hyperspherical geometry and hence a pi less than that "magic ratio". And it would just as quickly have spread out to a contentless nothing if it had had a hyperbolic geometry.
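The "pi = 2 of a sphere" claim above can be checked numerically. Here is a minimal sketch (the function name and unit-sphere setup are mine, not from the thread): on a sphere of radius R, a geodesic circle of radius r has circumference 2*pi*R*sin(r/R), so the circumference-to-diameter ratio slides from the flat-space pi for tiny circles down to exactly 2 for the equatorial circle.

```python
import math

def circle_ratio_on_sphere(r, R=1.0):
    """Circumference-to-diameter ratio of a geodesic circle of
    radius r drawn on a sphere of radius R (illustrative sketch)."""
    circumference = 2 * math.pi * R * math.sin(r / R)
    diameter = 2 * r  # measured along the surface
    return circumference / diameter

# Tiny circles recover the flat-space value of pi...
print(circle_ratio_on_sphere(1e-6))          # ~3.14159...
# ...while the equatorial circle gives the "pi = 2" spherical limit.
print(circle_ratio_on_sphere(math.pi / 2))   # 2.0
```

The same formula run with an imaginary-radius (hyperbolic) substitution would grow without bound, matching the "pi = infinity" end of the range.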
Our actual universe turns out to be more geometrically complex than either of these two stories in that it may have had inflation to first stop it being too hyperspherical at the beginning, and then by about 10 billion years, it also had a tiny touch of hyperbolic tendency in the cosmological constant to do the opposite thing of ensuring it will keep on expanding to infinity forever.
So perfect pi is a flat balance ratio. But making real universes involves producing inevitable material clutter as further symmetry breakings with their own ratios become possible. Shit happens like the electroweak and electromagnetic symmetry breakings, disrupting the flat flow with their gravitating particles that screw with the local geometry of the universe.
The fine structure constant is an example of one of those extra ingredients that needed the "luck" of compensation constraints to keep the general evolution of spacetime at near enough a flat balance to go on "forever" in its familiar cooling~expanding way.
So the details of the cosmic metaphysics have a lot of explaining still to do. But you are very focused on issues which don't seem like the actual issues.
The fine structure constant might seem fine-tuned for a cosmos capable of life. A lot of popularisations like to stress that pseudo-theistic point. Yet it is also fine-tuned simply not to fuck up the Big Bang in general.
And gee, which was the major evolutionary bottleneck that a flatly thermalising universe had to survive, which is the special pleading on the part of us, its biofilm-sustained linguistic monkeys?
Talk about muddled blathering. Intention, will, is proper to the individual, the particular, while "form" as the formula is general.
Quoting apokrisis
But finality is known to be a bottom-up cause, as the will, the cause of motion of the individual. So this bottom-up cause, which is inherently free, as the free will, enabled in its freedom by the potential of matter, is constrained in its bottom-up causation by top-down formal constraints.
Quoting apokrisis
There is no such thing as "the true Aristotle". It's a matter of interpretation, as is the case with any good philosopher.
But you need to get a grip on the true reality. Final causation is very clearly bottom-up. It is basic and fundamental to every action of organic matter, as purpose driven activities. You know that. So why do you claim final causation to be top-down, when you know that the purposefulness of living activities stems from the very existential base of the material organism?
I would imagine an example of this would be something like language generation creating exponentially greater cultural learning which then favors a trajectory away from fixed innate instinctual mechanisms for purely learning mechanisms. In this way, the higher level language creation influences lower level instinctual mechanisms (in this case reducing its efficacy).
Sure. Will and purpose can arise in biosemiosis as the local particular in contradistinction to the global generality that is the Universe entrained to its "law" of thermodynamics.
So what is particular at the globally general level of the Cosmos, its will to entropify, becomes the context that makes sharp sense of its own "other": the possibility of tiny critters forming their own local wishes and ambitions within what remains still possible in a small, but personally valued, way.
We can't of course defy entropy. But we can apply ourselves to the task of accelerating it. We can buy local negentropic freedom of choice by burning stuff faster than the Cosmos has been able to consume it on its own.
Quoting Metaphysician Undercover
This is just your special pleading for a theistic metaphysics. You haven't dealt with my naturalistic argument.
If the wavelength of the electromagnetic spectrum is of the red frequency, and this hits rods and cones, and this goes down the optic nerve and the cortical layers, and the neural networks, and the peripheral environmental things of time and space, how does any of this account for the actual sensation of "red"? No matter how much computation you add to one side of the bifurcation, it doesn't cross that line to the other side. Other than already placing the consequent in the premise, I'm not sure how you can say that it can or does.
So orthogonality is the natural or dichotomy-based measure of a dimension. X is the other of y and y is the other of x, in mutually defining fashion.
Having broken the symmetry in this extremal fashion, taken its "thisness" to two limits so opposed they are no longer even in sight of each other, you then set up the third thing of all the angles of lines which express some cos~sine trigonometric ratio. You have the universe of lines that are some blend of x-ness and y-ness.
That a constant like phi might have a weirdly specific value, 1.61803..., may seem fundamentally inexplicable. But step back and realise it is just the number marking the point where a broken symmetry achieves its self-similar balance point, a unity under the constraint of growth operations, and this golden ratio is not mysterious at all.
It is the stable attractor that must emerge as the new feature of an orthogonal symmetry-breaking under the further constraint of its own self-compounding growth. It is a Platonic inevitability. But that is hard to see until you get used to how Platonism organises dynamical systems and not just a realm of static entities.
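The "stable attractor" claim can be illustrated numerically. A minimal sketch (the function name and starting value are my own choices): phi is the fixed point of the self-similar growth map x -> 1 + 1/x (i.e. phi = 1 + 1/phi), and iterating that map from almost any positive starting point converges to it.

```python
def golden_ratio_attractor(x0=1.0, steps=40):
    """Iterate the self-similar growth map x -> 1 + 1/x.
    The golden ratio phi is its stable fixed point: phi = 1 + 1/phi."""
    x = x0
    for _ in range(steps):
        x = 1 + 1 / x
    return x

print(golden_ratio_attractor())  # ~1.6180339887...
```

Starting from a different positive x0 lands on the same value, which is the sense in which phi is an attractor rather than an arbitrary "magic number".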
We've been through all this. You seem to have completely forgotten about opponent channel processing and how the brain sees "red" as also counterfactually not "green".
It is difficult to even begin to give you a neuroscience account that connects to a general biosemiotic or modelling relations account unless you can keep these kinds of beginner facts straight in your discussion.
A "red" cone cell responds to all the light. It switches off when it "sees" too much "green" light. It can switch on when it "sees" a general lack of "green" light. So right from the get-go, it is turning physics into information. It is reacting to electromagnetism with its own interest-driven logicism.
You have to account for this interaction in terms of biosemiotic mechanism: the very clever way that molecules can be messages.
Junk your boring old computationalist tropes. The way brains work is just fundamentally different. And you need to immerse yourself in that difference at the point where semiosis meets world - as in the actual biophysics of sensory receptors.
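The opponent-channel point above can be caricatured in a few lines. This is a toy sketch, not a model of real retinal circuitry (the function and values are illustrative): the red-green channel carries the difference between long- and medium-wavelength cone responses, so "red" is coded partly as the counterfactual absence of "green".

```python
def opponent_red_green(l_response, m_response):
    """Toy red-green opponent channel: the output is the difference
    between long-wavelength (L) and medium-wavelength (M) cone
    responses, so each percept implies the absence of its opposite."""
    signal = l_response - m_response
    if signal > 0:
        return "red"
    if signal < 0:
        return "green"
    return "neutral"

print(opponent_red_green(0.9, 0.3))  # "red": strong L, weaker M
print(opponent_red_green(0.2, 0.8))  # "green": the same channel, opposite sign
```

The design point is that a single signed channel encodes two mutually exclusive percepts, which is the "seeing red as counterfactually not green" idea in the post above.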
The key here to what I was saying, is to see language development as a freely willed activity of individuals, which is a bottom-up form of causation. We tend to think of language as a structure of rules which we must necessarily be obliged to follow, in order to be understood. But this is a false necessity. If it were true, it would render the creation of language, and its evolution, as something impossible. So we must consider the creative power of the individual, with free will, as the true essence of language, being the necessary condition which allows for the existence of language by causing the existence of language, in the sense of final cause.
There is what I would call a faulty interpretation of Wittgenstein's "Philosophical Investigations", which assumes a "private language argument", as demonstrating the impossibility of the individual's "private language" as having a relationship with language as a whole. This is analogous to the interaction problem of dualism, the private language is portrayed as incapable of interacting with the public language. But this is a misinterpretation because what Wittgenstein's so-called private language argument really demonstrates is how it is possible for the private aspect of language to incorporate itself into, and therefore become a feature of the more general public language, through this causal relation which Wittgenstein saw as necessary to the existence of language.
Quoting apokrisis
By no stretch of the imagination can "entropy" be conceived as a particular. This is the problem encountered when you incorrectly portray final cause as top-down causation. You have to assign purpose to the most general, the most global, and this is exactly opposite to what empirical observation shows us, that purpose is a feature of the most particular, the most local.
This can be clearly understood in the principles of holism. The part has purpose in relation to the whole. "Purpose" therefore, is a property of the part, not the whole. And if we were to attempt to assign purpose to the whole, we would have to relate that whole to something else, make it a part of a larger whole, to say that it has a function in that relation.
To see how "purpose" is causal, as a property of the part, in its relation to the whole, requires an understanding of final cause and its associated concept, free will. When the part acts purposefully toward being functional in the existence of the whole, the part does this freely, without causal coercion from the whole. Therefore the "principle" which the part adopts, and which gives it purpose, is derived from something other than the whole of which it is a part. This principle is fundamental to the part's existence as a part, and is causal (bottom-up) in the sense of final cause.
Quoting apokrisis
Your naturalist argument is flawed for the reason I explained. You wrongly portray final causation as top-down. This is because you incorrectly conflate final causation, which is bottom-up causation empowered by the freedom of choice, with the top-down constraints of formal cause, of which "entropy" is one. It is very clear, from all the empirical evidence that we have of the effects of final cause, that the purpose by which a thing acts, comes from within the agent itself, as a bottom-up cause, and it is by selecting this purpose that it may have a function in relation to a whole.
I totally buried the lead in my first attempt to answer you and muddled it all.
Summary: The big benefit of information-theoretic models of nature is that they can show how phenomena traditionally seen as "mentally constructed" can have an independent existence in nature, and how information about these entities can enter the human nervous system. Bridging the subjective/objective gap and finding a solution to highly counter-intuitive efforts at eliminativism helps to make physicalist theories of mind more plausible, even though it also changes those theories in some ways.
Second, information is necessarily relational. Information doesn't exist "of itself," but as a contrast between possible values for some variable. Such a framework denies the reality of any sort of "view from nowhere," or "view from anywhere," as contradicting our observations of how physics actually works. This helps us understand why we would experience things relationally, and debunks the idea that perspective (the relation of a system to an environment) is an arbitrary hallucination unique to consciousness. Information exchange between a rock and its environment follows the same sort of logic; the ability or inability to discern between different signals affects the behavior of enzymes as well as people, making elements of "perspective" less mysterious.
---
More detail, if you're interested: information theory allows us to explain how words on a piece of paper, signals in a cable that is part of the internet, DNA codons, the path a river cuts in rock, etc. can all be thought of as the same sort of thing. It connects different levels of emergence (this can also be done using Mandelbrot's concept of fractal recurrence, and the two concepts complement each other).
What this lets us do vis-à-vis the subjective/objective divide is identify entities, which we previously thought must exist only in the mind, out in the world. For example, Galileo thought color did not "really" exist; color was reducible to the motion of fundamental particles. This sort of reduction has been popular throughout history, but comes with significant conceptual problems, not the least of which being that it says that many objects of study are somehow unreal despite their explaining large-scale physical events. This is the viewpoint that something like "Japanese culture" is not real, that it is something we can eliminate and/or reduce to patterns of neuronal activation. The same is said to go for color, taste, economic recessions, prices, etc. They are "mental and/or social constructs" with a hazy ontological status.
Of course, the view that Japanese culture is reducible to diffuse patterns of synapse development, physical media, etc. is different from the eliminativist view that such things are somehow "unreal," but they often go together. Information-based conceptualizations of nature give us a way to locate incorporeal entities like recessions or cultures in the natural world. A key benefit of information is that it is substrate-independent, so we don't have a problem speaking of an entity that exists as a collection of neurons, printed symbols, vases, films, etc. Conceptually we can talk about morphisms within an entity that remain even if its physical components shift radically. E.g., if I wrote this post on paper with a pen, then typed it into the browser, then submitted it so that it now exists in a server and is reconstituted when accessed, we would be able to identify the signal throughout its shifts in physical media, including how the signal reaches human eyes and is then encoded in neuronal behavior.
This seems to at least partially dissolve the subjective/objective barrier, provided we already believe the body causes consciousness. It addresses the Hard Problem by filling in gaps in the physicalist view. If the body generates mind, then we can see how interactions in physical systems can bring information from a chair into the brain, thus creating a holistic model. But the "how is first-person experience generated" question does remain unanswered here. The most the concept can do there is explain how any system, conscious or not, will have a "perspective": different signals are relatively indiscernible depending on the receiver.
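The substrate-independence point can be made concrete with Shannon's measure. A minimal sketch (the function and the toy re-encoding are mine): a message's entropy depends only on the pattern of symbol frequencies, so re-encoding the same message into a completely different set of symbols, i.e. a different physical or notational substrate, leaves its information content unchanged.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy in bits per symbol, computed from the
    empirical frequencies of the symbols in seq."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

message = "abracadabra"
# Re-encode into a different "substrate": map each letter to an
# arbitrary (but one-to-one) numeric code.
recoded = [ord(ch) * 7 + 3 for ch in message]

# The frequency pattern, and hence the information content, is identical.
print(abs(shannon_entropy(message) - shannon_entropy(recoded)) < 1e-12)  # True
```

Any one-to-one relabeling (ink on paper, voltages in a cable, codons in DNA) gives the same number, which is the formal sense in which the post's "morphisms within an entity" survive shifts of physical medium.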
Great point. This seems to be key to popular computational frameworks for investigating AI (e.g. Kowalski's "Computational Logic and Human Thinking" or Levesque's "Thinking as Computation"). These embrace the idea of a private language, but because the language is itself a logical system it can be translated into a social language via computation.
This translation isn't always effective. Understanding communication requires that we understand that agents have goals, and that communication is a means of fulfilling these goals. If current public languages are insufficient for communicating something an agent wants to communicate, it can use other means to try to transmit the semantic content, e.g. drawing a diagram or inventing a new word. You see this with kids all the time. They want to convey something, but lack the relevant linguistic knowledge base, and so attempt to combine existing words into new ones.
Such combinations can enter the public language, but diffusion varies, e.g. in the US we say "sandbox" but it seems like in the UK "sandpit" is more popular. Once established, the phrases can be mapped to new semantic content, hence the sandbox/pit difference appears even when the term is referring to the more recent concept of a computer programming "sandbox."
Right, this is why, for the universe as a whole to have a "purpose," its relation to God, an agent who creates it, is often invoked. However, does this rule out theories of natural teleology to you?
These have a conception of teleology/final cause that isn't dependent on an agent, at least not in a straightforward way. Nagel's "Mind and Cosmos" proposes a sort of teleology of immanent principles underlying the universe that in turn result in its generation of agents. That is, the principles come first and in turn generate the agents that fulfill them. Aristotle's teleology is generally considered "natural teleology." Max Planck seems to have had ideas of this sort too, maybe Leibniz for another example. I'd add Hegel, but it's unclear if he fits the same sort of type, though his system is certainly interpreted that way fairly often.
I find these hard to conceptualize at times. The principles are what generate the agents who can recognize the principles and whose existence is part of the process of actualizing them. But then it seems like the agents are essential to defining the principles as teleological, even though the principles predate them, which, if not contradictory, is at least hard to explain in a straightforward fashion.
It can be framed in those terms but doesn't have to be. It's a problem for anyone who thinks that consciousness arrived late in the universe, however that is construed.
That's an assertion, not an argument.
Provide the evidence for a belief that consciousness had to arrive early. Provide a definition of consciousness that could even meet the counterfactuality criteria such that you could have evidence either way.
I can't answer your challenge to bert1 regarding the scientific theory of consciousness as a development that started without it and appeared after some time. Bateson approached that in a paradigmatic fashion where Chalmers is trying to rank different kinds of reduction.
On the other hand, the Aristotelian interpretation of structure you have presented does bring the problem of time front and center as a matter of principle.
Aristotle did not have a "hard problem" because he had an unmoved mover contemplating what it had set into motion.
Yep. Time is tricky. But at least modern physics agrees on some general things, such as the Universe embeds a cosmic temporal asymmetry. There is a global thermodynamic arrow pointing every event in the same general direction.
More than this, the means for communicating is often chosen on the grounds of simplicity. Communication in general is a tool formed for the purpose of facilitating action. So in many cases the public language is sufficient, but sort of like overkill, so the agent may create a very simple demonstration to take the place of a long explanation which might be required if conventional language was employed. This is sort of like the way we use acronyms and short forms. As we gain experience we find simpler ways to do (or say) the same thing.
Quoting Count Timothy von Icarus
I don't know how one would conceive of "natural teleology", so I cannot answer this.
Quoting Count Timothy von Icarus
From my perspective an "agent" is something active, and something active is required for causation. So we can't really remove the agent from final cause, but you might have something different in mind for "agent", which is an ambiguous term.
Quoting Count Timothy von Icarus
The problem I find with much of this type of metaphysical speculation is the difficulty in determining the active principle which is responsible for causation in a teleological explanation. So for example, you mention "immanent principles" which result in the "generation of agents". Well, a "principle" is fundamentally passive, and so we still need something active, to act as the actual cause of this generation. But this active thing, acting in a teleologically generative way, would really be an agent itself. So it doesn't make sense to say that this would result in the generation of agents, which from this precept must already exist. And if we remove the prerequisite prior agent, we just have a disguised form of emergence.
The need for the prior agent, the actuality which acts as cause, is explained by Aristotle's cosmological argument. If we remove all actuality, to start with a pure potential, like prime matter is supposed to be, then we supposedly have a time, at the beginning, with pure potential, and nothing actual. But any potential needs to be actualized by something actual, to become actual, so the pure potential could not actualize itself, and this would mean that there would always be pure potential, and never anything actual. This is the problem I find with Plotinus' One. It is supposed to be a pure potential which is the source of all things. But this idea falls to Aristotle's cosmological argument, so the Christian God is a pure actuality.
Quoting Count Timothy von Icarus
Right, it seems like you grasp the problem I described above quite well. Notice that the problem is really the result of a reversal of the actual-potential order expressed by the cosmological argument. The principles which are posited, in the idea you expressed, are supposed to be responsible for the actualization of the agents, but it's really the activity of the agents themselves which accounts for the actualization of the agents. The problem, obviously, is that the action of the agents is supposed to generate (cause) the existence of these agents. So we have a vicious temporal circle where the agents, through their actions, cause their own existence. The point of the cosmological argument is to show that an agent (in the general sense of something actual) must be prior to any actualization of potential. So the actualization cannot be the cause of the agent; it is necessarily caused by the agent. Keep in mind though that "agent" is used in the general sense, so God as an immaterial "agent" is somewhat different from a human being as an "agent", existing with a material body.
How is this series of responses not some sort of Cartesian theater fallacy? How is the "sensation of red", that experience I have, the same as "A "red" cone cell responds to all the light. It switches off when it "sees" too much "green" light. It can switch on when it "sees" a general lack of "green" light. So right from the get-go, it is turning physics into information. It is reacting to electromagnetism with its own interest-driven logicism."
Why does:
Red (the experience of)
=
"A "red" cone cell responds to all the light. It switches off when it "sees" too much "green" light. It can switch on when it "sees" a general lack of "green" light. So right from the get-go, it is turning physics into information. It is reacting to electromagnetism with its own interest-driven logicism."
And why is it not rather
"A "red" cone cell responds to all the light. It switches off when it "sees" too much "green" light. It can switch on when it "sees" a general lack of "green" light. So right from the get-go, it is turning physics into information. It is reacting to electromagnetism with its own interest-driven logicism."
=
"A "red" cone cell responds to all the light. It switches off when it "sees" too much "green" light. It can switch on when it "sees" a general lack of "green" light. So right from the get-go, it is turning physics into information. It is reacting to electromagnetism with its own interest-driven logicism."
Brains and nervous systems model the world; they don't display the world. Just start with that thought.
The fallacy is only being committed by those who believe in homuncular reifications like consciousness and experience.
The only way I can reconcile everyone's claims to be non-representational direct realists, is to interpret each and every person as referring to a different world.
I wasn't trying to make a substantial point, merely to rebut your mischaracterisation of the hard problem in too narrow terms.
You won't even support a definition of consciousness that could be counterfactually determined one way or the other. You don't even have the beginnings of a real argument.
Where's the problem with "always different and yet also usefully similar"?
The standard pragmatist answer has the benefit of explaining both the similarity and the differences, the differences being constrained to the degree that they are differences that don't make a difference, hence ensuring the useful degree of similarity observed.
OK, I'm not being clear. You said the hard problem was about fundamental stuffs. It isn't necessarily, Chalmers doesn't characterise it that way. It applies to any view that says consciousness 'arises' (pick verb of choice) from a physical system or entity or whatever. Acknowledging the existence of this challenge doesn't mean it isn't solved. Maybe your theory solves it. Maybe it's not a hard problem at all, only it seems hard to people, like me, stuck in outmoded habits of thought. It's just a name for an issue (possibly a pseudo-issue) that needs addressing. Acknowledging that there might be a burden to explain such an emergence does not commit you to thinking that the hard problem is unsolvable. Similarly, I'm a panpsychist, but I don't deny that there is a serious issue called the 'combination problem' that panpsychists have a burden to address. Maybe it's easy to address, maybe not.
Sure. And I always address it with the specific anti-reductionist stance that is enactivism, pragmatism, biosemiotics, Friston's Bayesian brain, Rosen's modelling relation, systems science, and so on.
I've addressed it plenty.
Quoting bert1
So address it. All I ever see is folk saying consciousness is a fundamental simple of the Cosmos, but somehow the complex functional neurology of creatures with evolved nervous systems is needed to get it to the point of being able to do stuff that gives evidence it exists.
It feels like the panpsychists just copy our homework. :wink:
How about: consciousness is a fundamental simple of experience? Despite the fact that I comprise billions of cellular operations, many existing on a sub- or unconscious level, the fact remains that I possess subjective unity of experience. I don't learn about a pain in my foot by being informed of it.
Why do we attribute agency to evolution, saying that evolution does things or creates things or produces outcomes, when the way natural selection acts is as a filter - it prevents things that are not adaptive from proliferating? Evolution pre-supposes living organisms which adapt and survive, but to say that evolution is the cause of the existence of organisms seems to put the cart before the horse. I think there is a tendency to attribute to evolution the agency that used to be assigned to God. It's kind of a remnant of theistic thinking.
As regards consciousness being the product of an evolved nervous system - what about the panpsychist (or maybe even pansemiotic) idea that consciousness is an elemental feature of the Cosmos, that exists in a latent state, and which then manifests itself through evolution. Not that consciousness should be reified as some existing force that can be identified as a separate factor or influence. The lecturer I had in Indian philosophy used to say, 'What is latent, becomes patent'. I'm pretty sure this is conformable with C S Peirce's metaphysics also.
But the idea that it is real as a latency in the cosmos, taking form as organic life, at least addresses:
Quoting apokrisis
And yet it is "fundamentally" dichotomised into attention and habit. I can drive a car in busy traffic on automatic pilot. Not to mention that I can go to sleep, get drunk, or feel time freeze in a bike crash, etc.
Neurology explains the vast variety of our mental states. It also explains the generality of "being conscious" in terms of being an organism in a pragmatic modelling relation with its world.
So calling consciousness fundamental is wrong from a naturalistic point of view. It ain't fundamental so far as our best models of natural causality.
And calling it a mereological simple is also wrong. What we lump under the singular title of "consciousness" could not be gunkier. Where do the neural and biological aspects of being an organism with mindful unity bottom out in some particular necessary parts exactly?
Quoting Wayfarer
Well yeah. That is what the holists in biology keep telling the reductionists. Evolution is fine and dandy, but don't forget the other dichotomous thing of development.
My departure point here is that view from within biology which says evolvability itself had to evolve. It was a pretty purposeful step in its own way.
So reductionism always leads to chicken-and-egg issues. Holism instead focuses on the dialectical logic of mutually dependent co-arising, or what Haken called synergetics.
Quoting Wayfarer
Or rather, German and Russian biologists tended to be pretty comfortable with holistic thinking and so were ready to read agency into organisms, thus never had to be too hardline in their rejection of theistic versions of agency.
Peirce likewise.
But the Anglo world did embrace hardline material reductionism and so had to police its language, rid itself of any hint that evolution was anything other than blind chance.
A lot of this is just where you were brought up. A cultural thang.
Quoting Wayfarer
Hand-waving. What is latent consciousness when it's at home? What kind of causal model lies behind this "manifesting"? How can it be both a general elemental feature, and yet not an active feature, except in the most exceptionally particular and materially atypical circumstances like life on Earth?
Quoting Wayfarer
Sure. And biosemiosis is a theory of exactly how that happens. It can specify the physical conditions where semiosis first becomes a possible thing.
But panpsychism is just hand-waving. There is no causal theory of how a potential got actualised. Unlike biosemiosis, it can't pinpoint a moment when the latency became present in the Cosmos due to the very particular circumstances of there being a watery planet circling the free energy source of a sun, and so it just handwavingly says "the latency was always present as a fundamental simple of material existence itself".
It's not a reification that I am sensing things :roll:. That there is this persistent "experiential quality" is what is in question. You can call it "illusion" but then that has to be accounted for.
Who is this "I" if not a reification? It is the socially constructed objectification of the quality of "you-ness" that arises as a necessity of semiosis.
So yes, we do feel like a self in its world, as that is the essence of the modelling relation which makes for a sentient organism. That is what the enactive view is about.
But the idea that this "I" is an inhabiting spirit, a soul, a fundamental simple, is just the dualistic claim that underwrites panpsychism.
Semiotics says it is just how modelling gets done. A sense of self emerges in opposition to a sense of world. Both the self and its world are the two halves of the one-ness that is the modelling relation.
No need to rewrite physics. You just need to look for the point at which a machinery of semiosis could begin to earn its entropic keep in organismic fashion.
Feeling the self as "other" to the world is how the organism functions. Feeling the self as "other" to society - the burden you always complain of - is just this same organismic organisation being lifted to another semiotic level.
Semiosis in terms of genes and neurons becomes colonised by the even more abstracted semiosis by words and numbers. We have the rise of society as a super-organism.
You, as a person in your world, now have to be able to talk about being a person within the "other" of the social collective.
So it is no surprise that propaganda about spirits and souls, or eventually the magical material property of "consciousness" and "the authentic self", etc, becomes such a big social deal.
Individuals must be taught to objectify their existence in this fashion to become the suitably constrained elements composing the next level of an entropy-driven modelling relation.
Why does "socially constructed" change the fact that there is a sensation, any more than the rods and cones do? Causation doesn't equal ontological identity. So it's the same Cartesian theater trick at a different level. How is this semiotics equivalent to an experience of sensation? Map and terrain; you know the argument. The terrain is matter, not experience, and the map is semiosis, but where's the experiential aspect? It doesn't add up even if you mention top-down causation of social construction. It's just more map, but also a bit of the conclusion smuggled into the premise, etc. The feedback mechanism of higher and lower skips the part to be described. Social construction needs minds that can already sense in the equation.
Also, information isn't necessarily experiential, just computational. You'd have to prove that this information can be identical to experience.
You get taught to see that a postbox is red rather than merely being able to see the postbox easily because your neurology is designed to dichotomise small hue differences into striking shape-revealing differences.
So the brain "looks through" the redness, as all it is is a way to really emphasise the fractional wavelength differences that can give away something ecologically important, like a ripe fruit among green foliage. This was the reason primates added a third cone to their vision: to create an exaggerated visual boundary between red and green so that hidden shapes would pop out.
But human society turns it around. It makes use of red as a hue that really forces itself on our attention for these 10 million year old reasons. It paints postboxes in the easiest to spot visual differences. And it creates a whole vocabulary of descriptions for red hues, making us even more conscious of how we might think and react to the "colour itself". Through culture, we learn to objectively notice what nature never designed us to do. That the world is full of colours as well as shapes.
Then along come philosophers pushing warmed-over theistic beliefs about eternal souls and heavenly rewards. They too all gravitate to talking about colours when they want to motivate arguments about Hard Problems and ineffable qualia.
Colour perception just seems so arbitrary once you pull the trick of completely ignoring the role it actually plays in the ecology of perception.
If you put blinkers on, the horse doesn't stray.
Quoting schopenhauer1
I'm bored with explaining the same thing again and again. The model is a model of the self in its world. It is an Umwelt.
You need to think more deeply about how the "map/territory" thing is just Cartesian representationalism all over again. The real thing and its mental image.
Semiosis stresses the three-way relation where the map is the sign that is pragmatically interpreted.
Does it get you where you want to go? Great. Were you moving in the real world or a Metaverse simulation? Well was there a difference that made a difference?
Haven't you come across the Noumenal Problem yet? :rofl:
You mean, manifested.
Let's not trivialise something this amazing - the discovery that a convergence zone of physical forces allowed semiosis to become a thing on a watery planet some 10 billion years into the Universe's existence.
Life and mind could only have existed because there was this remarkable intersection of lines.
So contrast this level of hard evidence for the metaphysical claims biosemiosis might make with the wishy-washy pretentiousness of panpsychism.
Something like this had to be the case to close the causal gap and conclusively put paid to the Hard Problem. The natural philosophy route really came up trumps.
As I explained
Actually, the uncertainty principle, to begin with, is an obvious demonstration of the reality of this need. Producing a metaphysics which incorporates the deficiencies of science, instead of recognizing them as deficiencies and seeking the way to resolution, is a meaningless exercise.
You keep touting "naturalism", as if this categorization was sufficient to justify your metaphysics. But naturalism just reifies mother nature, in a similar way to the way that theology reifies "God". The principal difference is that theology allows that the aspects of the universe which appear to us as unintelligible, actually are intelligible, but only appear not to be intelligible because the method we are applying toward attempting to understand them is inadequate. This method is the method of natural philosophy, the scientific method, which has its limitations. Naturalism, on the other hand, treats the unintelligibility of these aspects of the universe as inherent to the nature of the universe itself, rather than as the consequences of deficiencies of the mind and its method of understanding. The difference therefore, is that naturalism approaches something which appears as unintelligible as inherently unintelligible, where theology approaches it as inherently intelligible, but appearing as unintelligible due to a deficiency in the approach.
So naturalism and scientism wrap each other up in mutual support of denying the reality of the supernatural (that which could only be understood by a superior intelligence). But this mutual support is really nothing other than a vicious circle of unintelligibility. Use of the scientific method reaches its limits and finds anything beyond that to be unintelligible to this practice. The naturalist metaphysician models the aspects of the universe which are rendered by the scientific method as unintelligible (chance, randomness, etc.) as ontological (symmetry breaking, etc.). The proponents of scientism take these ontological models as "truth", and therefore proof that the scientific method is the only means to the goal of truth. So the scientific method continues to produce more support for naturalism by demonstrating that these aspects of the universe are inherently unintelligible, as naturalism continues to produce the ontology which represents them, in its support of scientism.
No it doesn't. Sentience makes for a sentient organism. Why can't you have a modelling relation without sentience?
Then explain it better. Why does a self have to be sentient?