Aristotelian logic: why do first principles not need to be proven?
I have recently read a paper about Aristotle's logic and first principles, or, as the paper names them, principia prima. If you are interested, you can read it here: The Arch of Aristotelian Logic
Our attempt to justify our beliefs logically by giving reasons results in the "regress of reasons." Since any reason can be further challenged, the regress of reasons threatens to be an infinite regress. However, since this is impossible, there must be reasons for which there do not need to be further reasons: reasons which do not need to be proven. By definition, these are "first principles" (ἀρχαί, principia prima) or "the first principles of demonstration" (principia prima demonstrationis).
The "Problem of First Principles" arises when we ask Why such reasons would not need to be proven. Aristotle's answer was that first principles do not need to be proven because they are self-evident, i.e. they are known to be true simply by understanding them.
Aristotle thinks that knowledge begins with experience. We get to first principles through induction.
OK. Exactly at this point, other philosophers proposed their own solutions to the problem of first principles, such as Kant's:
synthetic a priori propositions are first principles of demonstration but are not self-evident.
I start this OP because it makes me wonder about two questions:
A) Which are the first principles Aristotle is referring to?
B) If they do not need to be proven... are their premises universal affirmatives? (According to Aristotle's syllogisms.)
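(Just to make question B concrete, here is a rough schematic of what I mean by a universal affirmative in Aristotle's syllogistic; this is only an illustrative sketch in standard notation, not anything taken from the paper:)

\[ \text{AaB} \;=\; \text{"A belongs to all B"} \quad \text{(the universal affirmative, or A-form)} \]
\[ \textit{Barbara}: \quad \text{AaB},\; \text{BaC} \;\vdash\; \text{AaC} \]

So question B is really asking whether the unprovable starting premises of a demonstration would all have to take that A-form.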
I think you have put your finger on a sore-point of Philosophy & Science : the necessity to take some "facts" for granted without empirical proof. The only evidence to support such unproven premises (axioms) is logical consistency. But, even that assumption is based on the presumption that the human mind and the real world are inherently logical, hence share a firm foundation. I suppose Aristotle's Universal Principles are the metaphysical analog of physical atoms : not reducible to anything more fundamental. First Principles are simply labels for First Causes : the cornerstone of all practical knowledge. Example : the distinction between Substance (matter) and Essence (form ; qualities).
However, some fundamental premises are themselves subject to disproof, by stumbling across an exception to the rule. For example, physicists rejoiced when the long quest for the Democritean Atom seemed to be fulfilled in the 1800s, when Dalton and later Thomson inferred that they had found the smallest possible piece of matter. Yet no sooner had Thomson produced his plum-pudding model than it was replaced by Rutherford's nuclear model and Bohr's planetary model, introducing even smaller bits of stuff. Unfortunately, their dissecting & reductive methods soon hit a softer underlying layer of reality, which we now label, not as compact lumps of stuff, but as extended Fields of potential. Therefore, the 21st century foundation of the material world now seems to be somewhat fuzzy & mushy, acausal & non-classical. Yet the operations of those amorphous immaterial mathematical fields have proven to have a perverse holistic logic of their own, as shown by the real-world success of "weird" Quantum Theory*1.
Apparently, Aristotle's First Principles were presumed "self-evident", based on his self-confidence in his own reasoning ability. But quantum scientists are no longer so self-assured, regarding their ability to make sense of the evidence available for the fuzzy logic of the sub-atomic realm of reality. It even calls into question our long-held assumptions about the linear logic of the Universe. Maybe our time-honored First Principles should be considered as local rules-of-thumb for taking the measure of the immense universe. :smile:
*1. Famously, physicist Feynman advised his bewildered students to avoid the trap of trying to make philosophical sense of quantum non-mechanics. Instead, "just shut-up and calculate".
Quoting Gnomon
I understand it now! Both labels have always been a classical debate among all philosophical schools or doctrines.
I think it is important to bring in here some thoughts of John Locke, as an example, about "primary" and "secondary" qualities:
These I call original or primary Qualities of Body, which I think we may observe to produce simple Ideas in us, viz. Solidity, Extension, Figure, Motion, or Rest, and Number. Such Qualities, which in truth are nothing in the Objects themselves, but Powers to produce various Sensations in us by their primary Qualities, i.e. by the Bulk, Figure, Texture, and Motion of their insensible parts, as Colours, Sounds, Tasts, etc. These I call secondary Qualities. [An Essay Concerning Human Understanding, Book II, Chapter VIII]
Quoting Gnomon
That's true.
Nevertheless, I think Aristotle's principles of logic are still important in some ways. After thousands of years, the system of reasoning by syllogisms can still help us. I understand that it is a very basic pattern if we compare it with the complexity we currently live in. But the "essence" :grin: keeps flourishing!
The term translated as 'induction' is epagoge.
It is not something worked out by reason (dianoia) but something the intellect (nous) sees.
:up: I see and understand your point and argument. But I think I have made a mistake, because I didn't quote the whole phrase you were referring to. The quote ends in this way: (and I think it probably fits with your arguments and point of view)
Then, I think this is where (nous) appears. Probably we can know thanks to how the intellect sees.
link
[quote = "Aristotle, Metaphysics Book IV"]
Part 1
"THERE is a science which investigates being as being and the attributes
which belong to this in virtue of its own nature. Now this is not
the same as any of the so-called special sciences; for none of these
others treats universally of being as being. They cut off a part of
being and investigate the attribute of this part; this is what the
mathematical sciences for instance do. Now since we are seeking the
first principles and the highest causes, clearly there must be some
thing to which these belong in virtue of its own nature. If then those
who sought the elements of existing things were seeking these same
principles, it is necessary that the elements must be elements of
being not by accident but just because it is being. Therefore it is
of being as being that we also must grasp the first causes...."
In part 2:
"...And there are as many parts of philosophy as there are kinds of substance, so that there must necessarily be among them a first philosophy and one which follows this. For being falls immediately into genera; for which reason the sciences too will correspond to these genera. For the philosopher is like the mathematician, as that word is used; for mathematics also has parts, and there is a first and a second science and other successive ones within the sphere of mathematics."
"Now since it is the work of one science to investigate opposites, and plurality is opposed to unity-and it belongs to one science to investigate the negation and the privation because in both cases we are really investigating the one thing of which the negation or the privation is a negation or privation (for we either say simply that that thing is not present, or that it is not present in some particular class; in the latter case difference is present over and above what is implied in negation; for negation means just the absence of the thing in question, while in privation there is also employed an underlying nature of which the privation is asserted):-in view of all these facts, the contraries of the concepts we named above, the other and the dissimilar and the unequal, and everything else which is derived either from these or from plurality and unity, must fall within the province of the science above named."
[/quote]
I'm just pulling quotes from The Metaphysics which mention first principles and first philosophy, because that's what I thought was referred to by Aristotle as "the first principles"
I imagine the unmoved mover would probably count -- but notice in these examples (and in the text surrounding the quotations) what Aristotle does to refute previously proposed first principles, like atoms or water/earth/fire/air or the One or Contraries, to get a better notion of what he means by "first principles". They seem to be at the top of the species-genus chain, and somehow explain how everything is made of or comes from some primary thing, and if we take it in conjunction with the logic, then I think it'd be fair to say it would be a Subject, and not a Predicate.
Completely agree, and you are, of course, on the right path, because the paper I have read was referring to and quoting The Metaphysics. So, I appreciate all the big quotes you shared with us. The paper I used is not that complete or well drafted.
Quoting Moliere
:100: :up:
Quoting Moliere
This is why, I guess, we can treat it as universal affirmative premises inside Aristotle's syllogisms. Or as @Gnomon previously said: The only evidence to support such unproven premises (axioms) is logical consistency.
Heh, these are pretty hastily pulled, I'll admit -- so this is more at the idea-bouncing phase than carefully pulled quotes, just to give a little context. And the last time I read Aristotle in real depth was over 10 years ago. I did, however, check the Physics and the Prior Analytics for "first principles" as well, just out of curiosity, and didn't find as much that seemed to grab me as relevant.
No worries! It is a very open and beautiful debate, because "first principles" is a very general term and it leads us to wonder what it really means when we try to specify it. So, I even think this debate could take hours.
Aristotle was a clever man when he wrote about these fundamental principles, because after centuries we are still debating them.
Quoting Moliere
In my case, it was over 4 years ago, and it was the Nicomachean Ethics! It brings back good memories :100:
:fire:
The specific quote I was looking for! Fantastic. This explains everything. Aristotle brought a very important axiom to the development of logic.
I'd hesitate to put it in terms of "axioms", though. And even of logic, because this is dealing with questions of first philosophy rather than with reasoning about what statement necessarily follows from premises.
He's using logic here, of course, and perhaps the first principle would fit the form you're talking about -- a universal affirmation.
I'm not sure how I'd parse the god of the philosophers thinking the universe into the logical form, though -- and it wouldn't be a syllogism, I don't think either, because you'd actually construct syllogisms that terminate in the first principles, right?
But then with the examples that he's using, he just names things proposed as fundamental -- and as I understand the system, God thinking the universe and himself into existence is the unmoved mover, and would seem to count, right? But that's not exactly a universal affirmation, ala the logic.
It's a metaphysical proposition about the nature of reality and how everything relates back to something fundamental that predicates it all.
Quoting Moliere
Yes, I see your point. I agree.
Quoting Moliere
Exactly. This is what I was looking for. I mean, what should we consider as the "fundamental" thing which predicates it all?
Yes. If we couldn't agree on some universally applicable First Principles (starting point for reasoning), Philosophy & Science would be a political contest of whose personal opinions should rule. Most of Aristotle's principles have held-up to skeptical scrutiny over the years. But, logical reasoning from abstract principles can still be questionable. For example, even if you accept a particular axiom, as the first of a series of logical causes & effects, you could still go wrong.
That's because of the skeptical distinction between "relations of ideas" and "matters of fact". David Hume noted that there is no "logical necessity" between a cause and its effect. He said that our intuition of logical cause & effect is basically a "habit of thought". Hence, one experimental outcome doesn't prove anything. So, scientists can't accept a single result as typical, until it has been repeatedly replicated. Nevertheless, reasoning from First Principles is a stubborn, and useful, habit.
Kant said he was "awakened from his dogmatic slumber" by Hume's skepticism. So, he tried to find a way to justify our intuitive "habit of thought" by means other than endless inconclusive experiments & observations. Yet, he was forced to conclude that we can't know anything about the world with absolute certainty. We can only know our own minds. Even our sensory Perceptions are filtered through our metaphysical Conceptions. Hence, it is only "knowledge of causation itself that is a priori (i.e. knowable prior to experience)"*1. We seem to be born with a mental template of metaphysical Logic and physical Cause & Effect, which we refine over time by adding confirming experiences.
Yet we must always be on the lookout for the exception that proves the rule : miracles are rare & usually based on trust in someone else's experience. So, who do you trust : Aristotle or Augustine? :joke:
*1. Reference : Philosophy Now Magazine, June/July 2022
I think the question is a bit foolish and undecidable. There is no fundament or ultimate principle that all knowledge can be derived from. Knowledge is hard won, slow, painful, and limited. In order to be able to derive such a principle, and be certain that it is true, we would have to be able to check all knowledge -- be omniscient. Otherwise, you just fall into metaphor and traps of reason (as we blabber-apes tend to)
Quoting javi2541997
Undecidable (à la problem of the criterion). What matters is (Peirce, Wittgenstein et al might say) "Aristotle's first principles" work ... until they don't, just like other "first principles" in domains other than logic (vide S. Haack's foundherentism as critique and alternative to foundationalism of "first principles").
Quoting Gnomon
Aristotle! I trust whatever comes from logic and metaphysics, not from faith! But I respect every point of view and belief. Everyone is free to trust one more than the other!
I think there are no "foolish" questions when someone is asking with the aim of learning...
Thanks for sharing, 180! :up:
That's true; a scenario can arise where Aristotle's "first principles" don't work. In this context, the paper I read yesterday shows diverse solutions from different philosophers. For example, on the Rationalists, such as Descartes, Spinoza, and Leibniz: self-evidence breaks down as a solution to the Problem of First Principles because there is no way to resolve disputes about whether something is self-evident or not.
Hume sharpened the Problem of Induction by noting that no generalizations whatsoever are logically justified. The Empiricist tradition thus culminated in Skepticism, Hume's conclusion that knowledge in the traditional sense does not exist.
Finally, Karl Popper resolves the regress of reasons, at least for scientific method, by substituting falsification for verification.
True, you're right.
I should say it is foolish to believe you can have an answer to such a question.
The desire to know, and intellectual curiosity, are good things!
But it is possible for human beings to want to know something that they are unable to know.
I think that questions asking after ultimate foundations are like that.
Absolutely, you are right! :flower:
Quoting Moliere
This is one of the most humanistic acts or virtues we have inherited. Interesting, isn't it? The desire to search for complex answers that we are unable to know. This is why philosophy is based on tricky questions.
For example: why do the first principles not need to be proven? That is very complex in itself. So I guess this is the trick of the OP: it is not necessary to answer, but at the same time we want to know, because we Homo sapiens love to go further than basic explanations and thoughts! :eyes:
Most perceptive!
[quote=Master Yoda]The force is strong with this one.[/quote]
Allow me to assist you as best as I can.
Act I
(Your) Conclusion: There are statements that are true sans proof.
(Your) premises: ?
Act II
What are these statements?
Interesting trick indeed. So, according to your puzzle, if I am able to find out what the premises are, then I would be able to find out the meaning of principia prima.
The point here is not to use premises as a tool of logic but to try to understand them beforehand! :eyes:
Have you read Gödel's incompleteness theorems? It might come in handy. The Gödel sentence G is true but, here's where it gets interesting, unprovable. For the moment ignore the constraint in a/the given axiomatic system, it kinda kills the vibe if you catch my drift. Trust a genius to take the fun outta living!
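(Schematically, and only as a rough sketch: for a consistent formal system S strong enough for arithmetic, with provability predicate Prov_S, the diagonal construction yields a sentence G such that)

\[ S \vdash G \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G \urcorner) \]

If S is consistent, S does not prove G; and read at face value G says of itself that it is unprovable in S, which is why it comes out true but unprovable there.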
Quoting Agent Smith
:flower: Interesting. The paper I read yesterday quoted Kant. Specifically: synthetic a priori propositions are first principles of demonstration but are not self-evident.
I guess that with different terms or propositions they tend to end up on the same path.
Synthetic a priori?
Never had the time nor the brains to dig deeper into Kant's ideas.
Quoting Agent Smith
Agreed. It could take some years of our lives to do so! :grin:
What makes an observation true or false will be helpful!
I just read it and I find the following lines so interesting:
This quote from @Banno is very helpful for going further on this topic!
"Photosynthesis is what takes place in plants" is true only if photosynthesis is what takes place in plants.
[i]And generally, "P" (note the quote marks) will be true only if P. This is called a T-sentence. T-statements set out the general form of all true sentences. Although T-sentences appear uninformative, they make a few things clear. For example, for "P" to be true nothing further is needed than that P. Including being observed.
In logical form,
"P" is true IFF P
That is, "Photosynthesis is what takes place in plants" will be true regardless of whether or not it is observed to be true.[/i]
Nice summary/narration. On this topic, I like to think of a community trying to rationally settle what they ought to believe. The issue seems to be what they'll use for premises. I think they'll only let one another get away with choosing (relatively) uncontroversial statements. In my vision, their logic is not going to be as exact and reliable as a proof in mathematics, and they also don't have to take any particular relatively uncontroversial statements as definitely true. They'll just generally establish more complex and doubtful claims by working from those that are less so, without needing perfect certainty about any claim but certainty sufficient for practical purposes (from murder trials to bridge-building).
Well, yes, it is a good way to look at this topic. I think most philosophers since Aristotle's era have tried to debate or explain the big problem of logic, because we humans, as rational beings, tend to go further than simplistic emotions. But there can be a problem: the infinite doubt of our possibilities. This is why I personally think Aristotle was a very clever thinker, because he proposed that there are, at least, basic patterns that are true for basic rationality. I had tried (unsuccessfully) to search for what these principia prima are about, because I was so lost when I published the OP yesterday.
Nevertheless, the answers from the other members are pretty well drafted and they help me to get a clearer interpretation.
Quoting Pie
Interesting, because I have had the same thought too. But I think this issue is more related to the philosophy of language. The limits of understanding all the philosophical doctrines about logic depend on the art of language too. When I read Gödel or Kant I find myself in a very complex situation, because they express themselves in their works with very complex language.
True & False are opinions, not facts. They don't exist apart from human minds. That's why Kant labeled them "synthetic" (artificial instead of natural). But, all animals have an interest in determining which appearances are Real (true & natural) from which are Unreal (artifacts of mind).
For instance, the appearance of tall grass may, or may not, indicate edibles for ruminants. There could be a tiger lining-up its stripes with grassy shadows. But fawns don't need to know that "fact" from personal experience. So, most prey animals are jumpy, because they were programmed -- a priori by evolutionary education -- to err on the safe side, and be prepared to run, if the grass moves when the wind is not blowing.
Homo Sapiens have inherited that habit of synthesizing physical percepts into meta-physical concepts, to mentally compare true grass with fake grass (a thought experiment). But humans have expanded that analytical talent to include complex meta-physical concepts in their appearance-vs-actual scrutiny. But out there in harsh Reality there is only "is" or "ain't". True is only "true" in Ideality.
So, philosophers invented new words to differentiate non-physical noumena (ideas, beliefs, opinions) from physical phenomena (facts, percepts, sensations). Those abstract logical categories all distill-down to True vs False. But, it's seldom that black & white. Anyway, since noumena are not empirical (known by physical evidence) they exist only in the abstract realm of Logic & Reason. Which Kant assumed was inherent in the human mind, not learned from experience. Yet, "a priori" could be interpreted as "from creation" or "from evolution". So, which belief is true, and which false? :cool:
Kant and Evolution :
https://www.cambridge.org/core/books/abs/problem-of-animal-generation-in-early-modern-philosophy/kant-and-evolution/DF6CE471233694FEC1A8B45AABBA8EB9
No, I haven't read this book yet. Thanks for the recommendation! I am going to write it in my agenda of next books :up:
Completely agree! :up:
Quoting Sam26
It is interesting how your friend, Dr. Bitar, correlates inference and proof with the parasitic. I see his metaphor. Exactly so: inference, proof, knowledge, etc. extend themselves as parasites on what is known.
I guess we can see the parasitic example in a positive light! Far away from pandemics or illnesses!
:chin: So "human minds" are human minds-dependent "facts"?
Say what?
Alls I'm sayin is that our understanding of what's real or unreal, true or false is subjective phenomena, not objective noumena. That's why Kant concluded that we KANT know the ding an sich (true ultimate perfect reality, which I call "Ideality"). All we know is our own concepts about perceived reality. So our "facts" are "human-mind-dependent". G*D only knows what's what out there in the Real world.
Unfortunately, most of us assume that our mental models are perfect representations of Reality. Although empirical scientists do generalize, they are aware that their models are never perfect, and fall short of absolute Facts. Hence, the necessity for methodological skepticism.
That's also why Aristotle made a distinction between Universal Ideal Generic Forms (morph), and particular physical Instances (hyle) of those Ideal Abstractions. Science attempts to generalize universal Facts from a few instances. In practice though, our common language too often allows us to confuse physical real Instances (Things ; Facts) with metaphysical ideal Forms (Universals ; Truths) ; the Ding with the Ding An Sich. :worry:
Ding an sich :
noumenon, plural noumena, in the philosophy of Immanuel Kant, the thing-in-itself (das Ding an sich) as opposed to what Kant called the phenomenon, the thing as it appears to an observer. Though the noumenal holds the contents of the intelligible world, Kant claimed that man's speculative reason can only know phenomena and can never penetrate to the noumenon. Man, however, is not altogether excluded from the noumenal because practical reason, i.e., the capacity for acting as a moral agent, makes no sense unless a noumenal world is postulated in which freedom, God, and immortality abide.
https://www.britannica.com/topic/noumenon#ref182175
Universals :
The Problem of Universals asks three questions. Do universals exist? If they exist, where do they exist? Also, if they exist, how do we obtain knowledge of them? In Aristotle's view, universals are incorporeal and universal, but exist only where they are instantiated; they exist only in things.
https://en.wikipedia.org/wiki/Aristotle%27s_theory_of_universals
Methodological skepticism is distinguished from philosophical skepticism in that methodological skepticism is an approach that subjects all knowledge claims to scrutiny with the goal of sorting out true from false claims, whereas philosophical skepticism is an approach that questions the possibility of certain knowledge
https://en.wikipedia.org/wiki/Cartesian_doubt
Platonic Form :
The theory of Forms or theory of Ideas is a philosophical theory, concept, or world-view, attributed to Plato, that the physical world is not as real or true as timeless, absolute, unchangeable ideas.
___Wiki
Note -- Those perfect Ideals exist only in the mind of G*D (or Spinoza's Nature), not in the minds of mortal men. If there is no eternal state of Being, there is only imperfect ever-evolving reality -- no absolute Truth. G*D's ideals are the ultimate objectivity that fallible humans futilely strive for in Science & Philosophy. Hence, if there was no G*D, we would have to invent one to serve as the Ideal Objective Observer.
"Spinoza argues that there is only one substance, which is absolutely infinite, self-caused, and eternal. He calls this substance 'God', or 'Nature'.
https://en.wikipedia.org/wiki/Philosophy_of_Spinoza
MIND-DEPENDENT FACT
Exactly! :smirk:
(The implication of what you wrote, G-mon.)
Anyway.
Yeah, the map =/= territory but that is precisely why the map is useful as a map. The map (i.e. mind-variant "representation") is an aspect of the territory (i.e. mind-invariant "ding-in-such") used to track other aspects of the territory and, in this way, the "ding-an-such" is approximately (partially) though, yes, not completely known (pace Kant). The efficacy of map-making/using for relationing to the territory is factual and not merely "a matter of opinion" (i.e. mind-dependent). Enactivism, a subset of embodied embedded cognition (EEC) ever heard of it? :roll:
Clearly, Gnomon, you don't drink bleach no doubt because the "representation" of its toxicity corresponds sufficiently with the bleach's "ding-an-such" for you to heed the poison warning label. Anti-realism (i.e. immaterialism) is demonstrably bad for your health. :mask:
Oxymoron.
e.g. Asylums are filled with "Jesuses", "Napoleons" & "Klingons". :sweat:
I also don't imbibe 180 proof Materialism. It's bad for your mental health; even for those who don't believe in immaterial Minds. :joke:
Anti-Idealism :
[i]Type-A materialists hold that phenomenal facts (insofar as there are such facts) are necessitated a priori by physical facts. Such a materialist denies that physically identical zombie worlds or inverted-qualia worlds are coherently conceivable, denies that Mary (of the black-and-white room) gains any factual knowledge on seeing red for the first time, and typically embraces a functional (or eliminative) analysis of consciousness.
Type-B materialists accept that phenomenal facts are not necessitated a priori by physical facts, but hold that they are necessitated a posteriori by physical facts. Such a materialist accepts that zombie worlds or inverted-qualia worlds (often both) are coherently conceivable but denies that such worlds are metaphysically possible, holds that the factual knowledge that Mary gains is knowledge of an old fact in a new way, and typically embraces an a posteriori identification of consciousness with a physical or functional property.[/i] ___David Chalmers
http://consc.net/papers/modality.html
PS__Maybe "G-mon" is a type A Materialist. Phenomena is a function of Noumena. In that case, Phenomena are recognized as models of Noumena because the a priori template of Aristotelian Categories (Quanta/Qualia) fits the incoming information. Partial fit = questionable; No fit = false. :cool:
Assuming evolution is true,
1. Our senses would've evolved to get as close to the truth as possible, as any deviation from veracity would significantly lower one's odds of survival, sensu amplo, oui monsieur?
2. If one studies/examines life rationally, it sometimes feels pointless, vide the alleged Sisyphean nightmare scenario as depicted by Albert Camus. Ergo, evolution could've/should've developed systems that in a sense lessen the burden of existence, and one way of doing that is to create illusions that deceive us into thinking life is, to put it mildly, abso-fucking-lutely amazin' (maya).
If evolution is to succeed with humans, it has to balance reality with illusion, hit the sweet spot so to speak just so that we stay alive long enough to transfer our genes to the next generation. Wicked!
Yes. Evolution weeds out un-fitness, but useful (pragmatic) "illusions" (models of reality) are fit-enough to pass the survival test. Donald Hoffman doesn't deny that there is a real world out there. He just argues that our mental models of reality are based on limited information & experience. He uses the analogy of computer screen icons as abstract & simplified symbols of the underlying complexities hidden inside the processor.
Hence, he agrees with Kant, that we don't have direct knowledge of (real) things, just our indirect (ideal) mental representations of them. And he concludes that our imperfect replicas of reality are "good enough" to guide us through the exigencies of evolutionary extraction (culling of the herd). Good enough is near the balance point ("sweet spot") between too much and too little. Even if it doesn't hit a home-run every time at bat, it will be sufficient to result in a high batting average. :smile:
PS__Even as the technological extensions of our senses add more detail to our world model, we discover that, like fractals, the subtleties go on toward infinity.
The Case Against Reality :
[i]As we go about our daily lives, we tend to assume that our perceptions (sights, sounds, textures, tastes) are an accurate portrayal of the real world. Sure, when we stop and think about it, or when we find ourselves fooled by a perceptual illusion, we realize with a jolt that what we perceive is never the world directly, but rather our brain's best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it wasn't, wouldn't evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it's really like.
Not so, says Donald D. Hoffman, a professor of cognitive science at the University of California, Irvine. Hoffman has spent the past three decades studying perception, artificial intelligence, evolutionary game theory and the brain, and his conclusion is a dramatic one: The world presented to us by our perceptions is nothing like reality. What's more, he says, we have evolution itself to thank for this magnificent illusion, as it maximizes evolutionary fitness by driving truth to extinction.[/i]
https://www.theatlantic.com/science/archive/2016/04/the-illusion-of-reality/479559/
Note -- Hoffman insists that our survival was not due to a "true" picture of reality, but to a model (that I call "Ideality") that is true-enough for minimal fitness. For example, our ancestors survived for millennia without knowing much about Physics, or Quantum Physics, or the vastness of the universe. So, they "got by" with their superficial models of the entangled complexities of the underlying & overlying world that is hidden from our eyes -- but not from our sense-extending technology. :nerd:
Given the choice truth or survival, we've been programmed to opt for the latter. A delusion/illusion can make the difference between life and death and hence the abundance of cognitive biases which, though leads us away from the truth, keeps us safe and sound.
Such an interesting point of view, indeed. :100:
But I do not understand why truth and survival are connected, and why you think an illusion can make the difference between life and death.
Probably I am wrong, but I guess they both complement each other.
Syllogism: thanks to the act of surviving, I can find the truth. Thus, if I find out the truth, it means that I have survived.
Quoting Gnomon
@Agent Smith Ok, I just read it. Sorry for the comment, I understand it better now!
[quote=Bartricks]Now for the philosophical point (remembering that I don't care what Catholics think, I only care about what makes sense - which seems very different).[/quote]
Not "given." False dichotomy. 'Partial truths' have survival value; in fact, most of our "truths" are only partial / approximate, ergo fallibilism. Adaptive organisms are selected for traits which are effective enough (e.g. truthful enough) for finding food, mates and fending off predators long enough to reproduce profligately. The further removed from evolutionary pressures, the greater the opportunities to extend the scope of "truth"-making/telling beyond managing / satisfying the requirements of bare survival. Natural selection, Smith, generates only suboptimal solutions, and those which are effective enough tend to survive.
Fiction & so-called sublime doodads seem to take the edge off dukkha - by just the right amount and just long enough - to make us wanna procreate. Ah, but I repeat myself.
[quote=Ranjeet]A thousand apologies.[/quote]
Yes. Another example of "principle of sufficient reason" could be: "the whole is greater than the sum of its parts" by Aristotle.
When you connect objects in a certain way, a system will emerge. That system somehow holds new properties that do not exist in the objects that form the system.
Or Cogito Ergo Sum by Descartes. At least, we can consider it as a "principle of sufficient reason" of my awareness of existence. :chin:
Are you by any chance referring to holism and/or superorganisms? Of course you are; silly me!
I am referring to holism! The parts of a whole are in intimate interconnection, such that they cannot be understood without reference to the whole.
But I am thinking right now that this theory could be so generic...
Trying to study each part specifically is important too. I do not want to be attached to any theory nor sound radical about it.
Probably, essence and substance can be better understood if we study them individually.
This :point: The Unexplainable is right up your alley.
An interesting perspective! It reminded me that lower animals have no illusions. For example, an ant is not concerned with "Truth", and doesn't worry about "Death", but only with what works right here, right now. Homo sapiens is a different animal though. Our rational ability to project here & now into the near future, causes us to worry about things that are not things, and about events that may never happen. We sometimes treat those imaginary possible futures as-if they are the wolf at the door. That's the root of most anxiety disorders. But the stoics among us understand, that if an imaginary wolf is at the door, all we need to do is not open the door.
Discerning True-from-False is sometimes taken to extremes by philosophers. That's why we need to be reminded by thinkers like Kant and Hoffman, that we have no way of knowing Absolute Truth. So, we have to do the best we can with our little cache of personally proven facts, and whatever useful truths we can glean from the experiences of others. We weave all those particular truths together with links of Logic, to fill-in the gaps in our direct & indirect knowledge. The patchwork result is Pragmatic Wisdom, not seamless Divine Revelation.
The BothAnd Principle is based on viewing "True-False" as a continuum, not as absolute extreme positions with nothing in between. So, instead of going to one-end-or-the-other of those simple-minded oppositions, philosophers are advised to shoot for the "sweet spot" at the Golden Mean. Apparently, species that succeed at maintaining an "even keel" (stability ; consistency) survive long enough to reproduce, and to propagate their informed genes (molded by experience) into future generations. That's not out-dated Lamarckism, but merely the observed fact that genes are not merely inert carriers of information, but are modified by the experience of their host (neo-Lamarckism). Moreover, humans have invented an artificial form of embodied experience : writing & recording (techno-Lamarckism).
Life is not simply a stark choice between door A (true?) & door B (false?), but a more interesting game with multiple options, some more true than others. Perhaps, what 180 Proof labeled "partial truths". The first of all Principles in the game-of-Life is "choose life". However, Wisdom is the talent to know how to choose the least-bad option, from a spectrum ranging between Good & Evil. Evolution seems to reward such Pragmatic Truth, instead of the vain treasure-hunt for the Holy Grail of Absolute Truth. Nevertheless, idealistic humans tend to err on the side of Truer Truth (e.g. philosophy ; science ; technology), thus advancing cultural evolution from Cave Man to Rocket Man -- from bare survival to thrival. :smile:
A cognitive bias is a strong, preconceived notion of someone or something, based on information we have, perceive to have, or lack.
https://www.masterclass.com/articles/how-to-identify-cognitive-bias
Don't give up on us yet. I hope our good qualities are not being "discarded". But sometimes one talent comes to the forefront, and another recedes. For example, Darwinian Evolution emphasized the role of competition in the "struggle for survival" : mano a mano ; one-on-one. But other naturalists, such as E. O. Wilson, saw that cooperation within cohesive systems (Group Selection) was a major factor of evolution. The "honing" process works in more ways than one, to "maintain" a balanced system.
That maintenance includes Cultural selection & progression. In the 20th century, most Western societies rejected cooperative Socialism, and focused on competitive Capitalism as the main driver of social evolution (measured in terms of money). But, that "greed is good" policy resulted in some dire social consequences, as economic competition tended to let the cream rise to the top (the super-rich 1 or 2%), while the watery whey sank to the bottom (Zion in the Matrix). In reality though, what we now have is a hybrid (off-setting) system of Socialism & Capitalism.
However, Nature tends to automatically react to re-balance an out-of-whack system; sometimes via violent natural disasters. And Culture ("artificial world") also seems to offset extremes in order to harmonize the general human welfare. However, apologies to Marx, Social systems are unnatural, and seldom automatic. So the polarity may have to get very disproportionate before civil wars break-out. Hence the history of human culture seems to follow the up & down path made famous by Hegel. Yet, somehow the general trend seems to keep us, as a world-wide social system, on a fairly stable path. That may be because natural & cultural Evolution have an inherent stabilizing force to keep it on track. Being a pragmatic optimist, I call that implicit equilibrator "EnFormAction". :smile:
PS__One example of balancing Aristocracy (the few) and Proletariat (the many) is in the Parliamentary proportioning of Lords (few) and Commons (many). It acknowledges the social disparity, but tries to provide a political counterbalance. This is a cultural example of the natural balance between Predators (few) and Prey (many). It's an eccentric symmetry, and a dynamic balance, but it seems to work . . . . in the long run.
Social Re-Balance :
[i]Picture a country plagued with financial struggle, unaffordable food, looting and rioting due to heavy disdain for the current regime, the wealthy exempt from paying taxes, and an expanding urban poor.
Thinking of the United States during the good year of our Lord 2020?
Think again. We're talking late 18th century France.[/i]
https://www.polljuice.com/vive-la-revolution-comparing-u-s-inequality-with-1789-france/
THE EVOLUTIONARY DIALECTIC
PYRAMID PROPORTIONS
I see. So is that an absolute truth ? Or just a guess ? Just 'appearance' or 'phenomenon' ? Does the person in a private dream somehow figure it out ? And assume that everyone else must also be in a private dream ? But isn't this just more of that private dream ? Mere illusion ?
The trick is its vainglorious humility, its wilting arrogance.
Is that your humble way of implying that, contra Kant, you do have personal access to "absolute truth"? What "trick" are you referring to? Do you think that Empirical Science reveals "absolute truth" that is hidden from "arrogant" philosophers? :smile:
Kant vs Scientific Rationalism - Do we need the Ding an Sich? :
Science deals with what we can perceive (empiric knowledge = empiric truth), not with the Ding-an-Sich. We don't have access to it, and reaching it is not the goal of science, it is impossible.
https://philosophy.stackexchange.com/questions/84710/kant-vs-scientific-rationalism-do-we-need-the-ding-an-sich
What evolution selects for isn't gonna be something we can control if we don't do anything about our natural built-in reward-punishment system; as is obvious, that critical component of ours is not something we're in charge of. We follow our hearts and though our emotions tend to, by and large, make sense to our minds, sometimes we like what we aren't supposed to like and detest stuff we ain't supposed to detest. The conflict or disharmony between heart and mind (Xin), how well/badly these two work (together), will decide, in my humble opinion, humanity's fate!
:100: :up:
That's true.
I think your thoughts can be related to Taoism, Neo-Daoism, Yin and Yang, Confucianism, etc...
Verse 39.
[i]Being in harmony with the Tao way
The sky obtained clarity and the earth became stable.
In harmony things were gradually created.
Out of the Tao way the man is not in harmony with the sky
He is not stable on the earth.
Without this equilibrium, the man disappears.
The Wise Person sees everything in equilibrium,
He doesn't manifest his Ego, or intervene.
First he will monitor the Tao Way,
Uniting with the Tao Way, he is in equilibrium as well.[/i]
Which Tarot card features my portrait today ? Is it grizzly Scientism or spooky Mysticism ?
The gist of my plaint was what I saw as a knee-jerk Kantianism that should maybe doubt itself, taking itself as it does for the figment of a dream. That we should justify our beliefs, and hold them fallible, is almost to be taken for granted among philosophers. No need to dress it up with Kant/Hoffman, both indulgently ornate theorists, to make such a point. As I've been arguing in other threads, doubting the world while taking the self for granted, however traditional, is not so sensible. The concept loses its intelligibility just as 'its' body has its ears and eyes and nose dissolve into the theory's pixels, though it only made sense as the output of those worldly devices, battered by light and air, the real stuff.
The Hermit. This is the card which features your portrait today. Why? Because he carries his Lamp of Truth, used to guide the unknowing.
Nice pick, friend !
I love the image.
That's OK. No offense taken. I was just riffing on one implication of your post : that humanity might be devolving due to unfitness : not having the "right stuff" for survival. Au contraire, mon frère, the Enformationism worldview implies that humanity is now a major driver of evolution -- for better or for worse. Humans have added Cultural Selection to Nature's weeding-out mechanisms. And one aspect of Cultural Selection is the Moral Dimension. It's an unnatural (artificial) way of guiding the selfish masses toward the common good. Animals don't have a formal Moral Code, because they are driven mainly by emotional instinct, instead of rational planning.
Some cynical philosophers see only the sensationalized media view of humanity's immoralities. But, a few scientists have dug up evidence to tell a mundane story of man's humanity toward man & nature. Steven Pinker's The Better Angels of Our Nature ; Michael Shermer's The Moral Arc ; and Rutger Bregman's HumanKind, are just a few examples of a more hopeful outlook for the future history of humanity. There are plenty of negative "truths", if that's your thing. But I prefer to focus on the much more common positive "truths" that can be interpreted as upward moral evolution. Our technological progress is undeniable, but moral progress is not so obvious. That's why Steven Pinker wrote Enlightenment Now, to present the case for Reason, Science, Humanism, and Progress. These books give some "reasons" for "it being true". :grin:
Enlightenment Now, Again :
Pinker is optimistic about human flourishing, fostering, enhancing, and progressing, as we overcome inherent and environmental limitations with grit & reason.
http://bothandblog2.enformationism.info/page41.html
We must try, we absolutely must! That, in my humble opinion, is our ultimate purpose.
That pun KANT go unacknowledged.
Quite interesting and educative! Thanks for posting this topic.
Quoting javi2541997
I don't think that Aristotle referred to specific first principles. I think that his first principles apply to any subject: scientific, philosophical, religious ... pertaining to language, art, ... to everyday life ... anything. One starts by asking "What is that, the truth of which we know for certain and we don't have to prove?" It has millions of applications.
Now, something relevant and well known as a subject comes to my mind: the "First Cause". Only that there's a big difference between the two: "reason" refers to something intentional, whereas "cause" may refer to something random, accidental.
Anyway, all this needs to be analyzed ... I'll come back to it if I have some workable and useful ideas ...
Quoting javi2541997
Quoting javi2541997
I guess so. But the problem is, how many cases must be satisfied, i.e. how many cases must the principle be applied to, for it to be considered a "first principle"? Also, do we arrive at such a principle simply because we can't think of any other that precedes it?
Questions to explore and feed our minds with ...
'The theorem states that in any reasonable mathematical system there will always be true statements that cannot be proved.'
On the other hand, you are right in your argument about what we should consider as first principles. I remember that my main error was to think of "specific" truths, while those are accessory. Aristotle used such premises in different subjects to promote a basic notion of logic (I hope I am not mistaken and am remembering well).
Quoting Alkis Piskas
Aristotle's syllogisms and logic explore our minds and raise a lot of questions. Sadly, I am not capable of answering them, but there are members of this forum with a high level of mathematics and philosophy of science, and they offer a lot of answers.
What I remember about this thread is the fact that Aristotelian logic now seems so simple compared to modern logic problems...
I think that for Gödel, the matter of valid forms of demonstration was paramount. Aristotle certainly was concerned with the matter but also saw first principles as being a proper fit for what was to be inquired into. Some natural things had particular differences that required different primary points of departure. There were other qualities they all shared.
There is substantial debate among scholars of ancient philosophy regarding such a distinction in Aristotle's text. Melina G. Mouzala gives a nice summary of the issue.
My impression is that Aristotle was not trying to provide the last word on these matters.
Plainly - I'm able to read the encyclopedia description of Gödel's proof, but I'm not equipped to understand the math. It just occurred to me, however, that there is a kind of resemblance between the two principles.
No doubt. :up:
Well, I have read and know very little about logic literature-wise, Aristotelian or other.
I prefer using it! :grin:
F = There are some truths.
If F is false, then ~F ("there are no truths") is true. But ~F is itself put forward as a truth, so if ~F is true there is at least one truth, which makes ~F false.
So ~F is self-refuting, and F stands.
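In rough formal dress (only a sketch; T(p) for "p is true" is my own shorthand, not anyone's official notation):

\[ F := \exists p\, T(p) \qquad \neg F := \text{"there are no truths"} \]
\[ \text{If } \neg F \text{ holds and is itself true, then } T(\neg F), \text{ hence } \exists p\, T(p), \text{ i.e. } F \text{ -- contradiction.} \]

So asserting ~F already supplies a witness for F.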
@Wayfarer (self-referential paradoxes, re the Gödel sentence: this statement is unprovable).
Global skepticism too is said to be self-refuting and so is relativism according to quite a number of philosophers.
Well, you need to assume something, don't you?
Unless you want to assume the empty assumption of not assuming anything... :-)
In which case you cannot argue anything.
Some axioms are called self-evident by virtue of how certain they seem. Others however, use "self-evidence" in a more narrow sense, meaning "self-evidencing through self-referentiality". Axioms that are self-evident in this sense are an even better candidate for the claim of not needing propositional justification.
Take Descartes' Cogito, ergo sum. If you assume that thinking implies a thinker, and if you assume that thinkers exist, then you can prove your own existence through thinking. If your thought happens to be "I think, therefore I am", then the proposition is self-referentially proven (granted we take the aforementioned assumptions as givens), because the proposition is a thought, and thus acts as the evidence for the existence claim; but to do so, the evidence must be referred to (which happens via the "I think" part). This reference is then a self-reference, since the proposition is its own evidence.
This is far from the cleanest example of a self-referentially justifying proposition, and it is also not the most impressive, given that it requires external assumptions. However, it is probably the most famous example of this kind of self-evident proposition.
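(A very rough way to regiment that structure, with the assumptions made explicit; the predicate names are just illustrative labels, not anything from Descartes:)

\[ (1)\;\; \forall x\,(\mathrm{Thought}(x) \rightarrow \exists y\, \mathrm{Thinks}(y,x)) \quad \text{(thinking implies a thinker)} \]
\[ (2)\;\; \mathrm{Thought}(c), \text{ where } c \text{ is the very proposition "I think, therefore I am"} \]
\[ \therefore\;\; \exists y\, \mathrm{Thinks}(y,c) \quad \text{(a thinker, the "I", exists)} \]

The self-reference sits in (2): the proposition c serves as its own evidence.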
There is definitely a resemblance. Aristotle's rationale was that syllogistic reasoning is not self-supporting, and requires non-syllogistic first principles. Gödel was probably motivated, at least in part, by the recent attempts to develop exhaustive and definitive systems of formal reasoning, especially in the logical positivism movement. They may have both been responding to the same sort of error, but the error was almost certainly more common in Gödel's day.
Whereas Gödel was addressing a specific issue, Aristotle was treating the issue in the context of a larger whole. In order to understand how the intellect knows things, one needs to understand the difference between first principles and premises, demonstrations and arguments, etc. So for Aristotle this would be a small chapter in an introductory logic course.
(Is it okay to resurrect older threads? This thread is closely related to the current thread on BonJour's epistemology and 'intuition', so I thought it might be appropriate.)
Of course! Especially if you're going to agree with me ;-)
I don't see first principles as being capable of proof, or as being self-evident. I think they represent the presuppositions we must make in order to even begin thinking about anything. There is nothing to say those presuppositions cannot change over time; we find new ways of thinking based on new presuppositions, which may even contradict those held previously.
I understand that first principles cannot be proven, since they are accepted or rejected upon a basis of priority where one can go no further back from a particular starting place. Does putting forward that criterion not require some kind of self-evidence?
For example, when Aristotle establishes the principle of non-contradiction, is that not an appeal to a limit of experience? We could, theoretically, ignore the principle. Or say it is one theory amongst others. Those speculations do not capture the necessity Aristotle argued for its acceptance.
Non-contradiction is simply a necessary condition for coherent and consistent thought; we cannot be coherent and consistent if we contradict ourselves.
What is the difference between "conclusions are generally based on presuppositions" and the attempt to establish first principles in the fashion of Aristotle?
I agree with your judgement regarding non-contradiction. Should that sort of thing be counted as self-evident?
But the further corollary is that anyone who believes themselves to be coherent and consistent is presupposing the principle of non-contradiction. That is, they are presupposing that the principle of non-contradiction is true.
One can attempt to bracket the question of coherence and consistency, but when one is already writing arguments in a natural language on a philosophy forum the bracketing is merely academic. They have already accepted the onus of coherence and consistency.
---
- Very good. :razz:
I'm not sure there is a difference...do you think there is?
Is it self-evident that sensible discussion would be impossible if people routinely contradicted themselves? It seems obvious that would be the case, but I'm not sure if that is exactly the same thing as it being self-evident.
Quoting Leontiskos
As I said before, I don't think it is so much a matter of the principle of non-contradiction being true as it is a matter of it being necessary for sensible discussion to be achieved. And I would see it more as a recognition than a presupposition.
You seem to be saying that everyone who posts on a philosophy forum has accepted that their arguments must be coherent and consistent; maybe, but does it follow that everyone's arguments are coherent and consistent, or that, if they are not and this is pointed out to them, they will consequently modify their views?
Right, but for Aristotle the principle of non-contradiction is not something that you can take or leave. It's not as though you can say, "Ah, I feel like being coherent today, so I will don the garb of the principle of non-contradiction along with my other garments." The principle of non-contradiction is more than a linguistic tool or even meta-tool. It is an indispensable presupposition which is in play whether you recognize it or not.
Quoting Janus
@Paine was right to point to the principle of non-contradiction in response to this claim. Are you of the opinion that the principle of non-contradiction might change over time?
I am not sure either. Both Plato and Aristotle argued against the 'relativity' of Protagoras. From that point of view, the matter is something that needs to be hammered out rather than treated as an uncontestable condition.
But as an appeal to a condition, the argument is about evidence.
Quoting Leontiskos
As I said I see it not as being a presupposition, but as a recognition of something necessary to thought and discussion.
So, of course, it will not change over time unless people become content to babble at each other incoherently and self-contradictorily.
The kinds of presuppositions I had in mind that could change are things like the earth being flat and at the centre of the solar system, or that there must be a first cause or that there is a God who would not deceive us, or that universals must exist independently of us and so on.
Quoting Paine
How would you hammer it out, though, unless you were thinking coherently and consistently? I don't understand your last sentence; could you explain?
The Republic begins with Thrasymachus saying that justice is merely the order of those who presently have power. There is a lot of evidence to support this view. The argument against this is an appeal to see life in a different way.
So, what is that set of evidence against what it would bring into question?
Are you asking what arguments there could be for an ideal of justice that is not grounded on power?
If so, I would ask whether there is any rational argument to support the idea that some people should be privileged over others. I mean we already know that, in keeping with Thrasymachus' claim that justice is merely the order of those presently in power, some people are privileged over others, so Thrasymachus has it right perhaps that justice in its actuality does commonly serve power. The question is then whether this should even be counted as justice, if there is no rational justification for treating people differently before the law.
Quoting Janus
I'm not sure you are appreciating that the things that I am saying to you are responses to the things you have said. Hence, if you are right, and the principle of non-contradiction is "a recognition of something necessary to thought and discussion," then those who have not experienced the recognition are not making use of the principle of non-contradiction. Whereas, if I am right, they are presupposing it whether they have recognized it or not.
So are you of the belief that those who have not experienced the recognition are therefore not making use of the principle of non-contradiction?
(Again, the deeper problem as I see it is that you are underestimating the depth and importance of the principle of non-contradiction, as if it were a relatively superficial linguistic tool or else a device that is consciously deployed after recognition.)
No, I haven't said or suggested that. I said that discussions are usually coherent and consistent, just because if they were not, they would not be sensible discussions at all. So, people who are involved in discussions don't usually contradict themselves (because if they did, they would be presenting no clear position) or speak incoherently (because if they did, they would not be saying anything).
I haven't said or suggested that the LNC is a "relatively superficial linguistic tool" either; on the contrary it is the very basis of discursive or propositional thinking. How could you believe or propose anything if you contradicted yourself? If I said to you "It is raining at some specific location and it is not raining at that location", what would there be to respond to, what to say except "you are contradicting yourself"?
I claimed that the law of non-contradiction is a presupposition, and you have continually counter-claimed that it is a recognition, not a presupposition. So now we have this question before us:
I would say that the received and obvious view is: No, someone who has not had the recognition cannot still have X. You seem to be saying that the answer is 'Yes'. You seem to think that the law of non-contradiction is "a recognition of something necessary to thought and discussion," and that people who do not have this recognition are still in possession of the law of non-contradiction. Does this position seem as odd to you as it does to me?
Quoting Janus
Okay, that is good to hear.
Is it?
Consider that if I assert A, and you convince me of ~A, then when I join you in proclaiming ~A, am I contradicting myself?
No, of course not, you'll say. But suppose I say A at one time and ~A at another, without anyone having argued for ~A, then I'm contradicting myself? Apparently my thinking has changed, as apparently it had when you convinced me. Is that contradiction? Does being convinced magically absolve me of inconsistency?
How close together must my saying A and saying ~A be before it counts as a contradiction? How far apart must they be before you call it "changing my mind"?
Now consider the other claim made routinely around here: you say A, but A entails B and you don't want to say B so you ought to give up A. Chances are that I'll dispute the entailment or add in some condition that blocks it, or I'll say B is fine after all, or - or - or -. You try to hang a charge of being inconsistent on me and I weasel out of it somehow -- mustn't contradict myself! -- and this is what we want to hold up as the paradigm of rationality?
On the other hand it is a known fact that people do not fully appreciate the implications of their beliefs and that inconsistency lurks on the edges of everyone's thinking. Now and then it makes conversation frustrating but it doesn't seem to make it impossible.
Quoting Leontiskos
Quoting Janus
I'm not convinced civilization would collapse if people were inconsistent and contradicted themselves, because I think they are and they do, consistently.
But that also means I'm inclined to throw out this framing of people as consistent or inconsistent. I'm not sure you can pull off partitioning people that way. Your ultimate backstop is going to be a single compound statement of the form P & ~P, with the usual caveats. If people don't ever say things like that -- leaving aside, though I'm loath to, rhetorical usages -- that's interesting, but it's not the same as only ever asserting P and never ~P, and it's not the same as having a set of beliefs that supports only one of the two.
Count me as a skeptic that there is any such law.
Quoting Srap Tasmaner
Of course not, you would merely be changing your mind. To contradict oneself is to simultaneously claim two contradictory things. In other words if you contradict yourself in the sense I am addressing, then you would have no position to defend.
Quoting Srap Tasmaner
I agree and I haven't anywhere said anything about civilization collapsing: I was only addressing what is required in order to have a sensible discussion, I wasn't claiming that the world is replete with sensible discussions.
Whitman is a poet, not a rational arguer, and in any case would you say he does actually contradict himself there?
The same question persists even if we want to talk about the PNC as the thing recognized rather than a recognition:
(Note that you are the one who first implied that the PNC as a recognition involves exclusion, namely that because it is a recognition it is therefore not a presupposition.)
Do you often say two things simultaneously?
Quoting Janus
He's already gone, that's the point of the whole passage and why I posted it. Our mental lives are oriented toward the future. What does it matter if a moment ago I thought there's no way there's a tiger in those bushes? And so it goes, we continually leave thoughts behind, continually update our beliefs. Our beliefs one moment are never consistent with the last, by design and a good thing too, else how would we learn about the world.
Here's my favorite passage -- and for @Wayfarer the most beautiful description I know of the "subject of experience" -- and in this one there's a direct contradiction:
"Both in and out of the game and watching and wondering at it." I feel that every hour of every day. But it's a contradiction.
Forgot that last pair of lines, which are weirdly on point.
I think you are likely correct to see it as a matter of recognition. I was discussing my ideas on that with Srap here.
In case anyone's interested, in the name of philosophical accuracy, the law of non-contradiction states that A and ~A cannot both be at the same time and in the same respect. If both A and ~A are at different times or at the same time but in different respects, then the law of non-contradiction is not broken or violated.
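Rendered schematically, with t ranging over times and r over respects, this is nothing more than the wording above put into symbols:

    \forall t \,\forall r\; \neg\big( A(t,r) \wedge \neg A(t,r) \big)
    % read: at no time t and in no respect r do A and ~A both hold;
    % A at one time and ~A at another, or A and ~A in different respects, violate nothing.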
Whitman's contradictions do not (or at the very least cannot be proven to be of the type that would) violate the law of non-contradiction. Just as saying "Yes and no" (i.e., not yes) or "they're the same but different" (i.e., not the same) doesn't violate this law, since all such non-technical contradictions implicitly affirm either that A and ~A occur at different times or that A and ~A simultaneously occur in different respects.
Apropos, the law of non-contradiction as intended by Aristotle can well be interpreted as applying to everything, and not just thoughts and propositions and percepts: at the very least, all macroscopic objective objects abide by it. (And, if we didn't take this for granted, I imagine we'd be direly grateful for such a world, here including our own body parts.) On the other hand, if it weren't for this law, or universal principle, then there'd be no biggie to comprehending particle-wave duality in QM. But no one can intuit that X is both a particle and not a particle at the same time and in the same way. Hence the incomprehensibility of much of QM as it's currently interpreted.
Yes, yes, we all know you can make this sound more precise, but ceteris paribus conditions always grow toward infinity. How fully do you think you can specify "in the same respect"?
Quoting javra
I'm not going to wade into QM interpretation -- I wear water wings even in the shallow end of that pool -- but I think you needed something here besides "QM is incomprehensible" else you're undermining your own case.
And don't forget the other major paradigm shift in modern physics. You casually invoke simultaneity in your precise definition of the LNC. Feel on solid ground there? No qualms at all about specifying some universal time-stamp for phenomena? We just recently here on the forum had a discussion of an event that will appear to have occurred in one frame of reference but not in another, and there's a paradox if those frames of reference can communicate about it.
Let's put it this way: the law of non-contradiction appears to be a rule that would be suitable for an omniscient god. Down here in mortal land, we frequently have good reasons for both P and ~P. Some of this just goes away if instead of laying down rules for the universe to follow, we just note that all of our beliefs are held with some degree of confidence, so a belief that P with a confidence of 0.90 is the same as a belief that ~P with a confidence of 0.10. Every opinion we hold is a contradiction viewed this way, which is just to say that the contradiction framing is not particularly useful.
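On the confidence framing assumed here, the 0.90/0.10 point is just complementarity of probabilities (a sketch of the arithmetic being leaned on, not a claim about how anyone actually reasons):

    \Pr(\neg P) \;=\; 1 - \Pr(P), \qquad \text{so } \Pr(P) = 0.90 \text{ and } \Pr(\neg P) = 0.10 \text{ describe one and the same state of belief}
    % coherence asks only that the two confidences sum to 1,
    % not that either of them sit at 0 or 1.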
TMK, it's the way the LNC has always been worded and understood since the time of Aristotle.
Anyway - as an aside that I find interesting - wanted to point out that, as per Leibniz, the law of non-contradiction can be deemed entailed by the law of identity. As one example, one can word the law of identity this way:
At any given time t, A can only be equivalent to A, this in all conceivable ways. (otherwise, A would not be equivalent to A)
And then the LNC can be worded this way: at any given time t, A cannot be ~A in all conceivable ways. (which is the same as saying: A and ~A cannot both occur at the same time (i.e., simultaneously) and in exactly the same respect).
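Put side by side, a rough rendering of the two wordings above (not a formal derivation, just the claimed link read contrapositively):

    \forall t\; \big( A(t) \equiv A(t) \big)                     % law of identity, as worded above
    \forall t\; \neg\big( A(t) \wedge \neg A(t) \big)            % LNC, as worded above
    \big( A(t) \wedge \neg A(t) \big) \;\Rightarrow\; \neg\big( A(t) \equiv A(t) \big)   % a violation of the LNC would be a failure of identity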
Hence, if this holds, then to deem the law of non-contradiction inapplicable will be to deem the law of identity inapplicable; for, if the LNC is violated, then so too is the law of identity. ... Unless one engages in dialetheism.
BTW, a belief that A which is held with a probability of .90 is not contradicted by a belief that ~A held with a probability of .10. Each proposition entails the other, for they address the same thing. The LNC however does affirm that it is not possible to hold a belief that A with .90 probability while at the same time holding a belief that A with .10 probability.
It depends.
I thought you were going to finish that paragraph with A at 0.7 and ~A at 0.7, which should also be impossible but is known to happen, at least when considering the implications of people's beliefs. Polls routinely show slightly (and sometimes not so slightly) inconsistent opinions, and are notoriously dependent on how the questions are worded. How the questions are worded suggests a certain framework, calls up particular associations, all that extra-logical stuff. I think the approach you take suggests it would be possible to word questions "perfectly" to account for all of this and only get consistent results. I not only doubt any such thing is possible, I'm not sure it's coherent to claim that it is. There's just too much language getting in the way when you put things into words, so your first step will have to be to make the questions non-linguistic.
And then what is it the LNC actually applies to? Is it the non-verbal intellections of God?
Sure. It's called hypocrisy or doublethink. But no one actively holds two (or more) contradictory beliefs at the same instant. Instead, one flip-flops between them while upholding both as true.
As to doubting: One can choose to doubt anything, including what is is. But doubt, of itself, does not affirm, i.e. posit, anything.
Quoting Srap Tasmaner
While I don't share many another's phobias of the possibility of divinity, the basic answer is no more or no less than laws of nature, such as that of gravity. Which is to say, who the heck can conclusively answer this & by no means necessarily. It could be as much an uncreated "just is" aspect of reality as matter is to the materialist.
This looks like magical thinking about the LNC to me. Humans aren't binary logic machines.
Okay, but you can't possibly find that satisfactory. That is the weakest conceivable position it is possible to take and still call this 'philosophy'. I'll pass. -- But having passed, I have to wonder about a position that says "who the heck knows" and then makes a claim about the nature of reality. Doesn't inspire confidence. Are you even sure you know what you're claiming?
Quoting javra
So when it comes to reasoning, what is it we're upholding again? What's the model of rationality we should aspire to? Flip-flopping and hypocrisy are fine so long as you don't contradict yourself? We're supposed not to contradict ourselves because it's a bad thing to do. (In some circles, the principle of explosion will be darkly alluded to.) But your position is that we don't just because we can't, and we do the next best thing, which is advocating contradictory positions seconds apart. If we want to say that's not okay either, evidently the "law" of non-contradiction won't be any help, and we'll need a whole 'nother principle to rule that out.
We don't seem to share the same wants when it comes to philosophy. I'm interested in grounding my beliefs on what is. If I can't currently fully explain all that is, that's OK by me - so long as my beliefs regarding what is are sound. I dislike forsaking truths because they don't fit in with the explanatory model I so far have. What I'm claiming, in short, is that the LNC appears to be sound. The possible implications of this take a very distant second place for me.
Quoting Srap Tasmaner
This is entirely an issue of ethics (and value-theory): what ought we do. As I think you're by now very aware of, arguments are sometimes engaged in with the outlook of "winning at all costs" - such that snide remarks and innuendos intended to humiliate the "opponent" are given in arguments by those who uphold the just-mentioned ought. Whether this is rational or not fully depends on the goal one has in mind: e.g., to win and subjugate at all costs or, as corny as this might sound, to better discover truths and only then their likely relations. If one intends the former, then it's rational to belittle and dehumanize the other. If one intends the latter, then it is not. But, again, this is an issue of what one ought do and, hence, one of ethics.
p.s. the same then goes for whether contradicting ourselves in rational discourse is good or bad: it depends on one's overall goal in so engaging in discourse.
Indeed. I think reasoning serves a purpose.
Probably nothing more to be gained from further discussion, but it was fun. Appreciate you indulging my heterodoxy.
I should add: so do I (multiple possible purposes). But we will likely disagree on the details. It was good debating with you.
Obviously two things cannot be said strictly simultaneously. What I meant was that within the presentation of an argument self-contradiction would make it unclear what position was being asserted, or even mean that no position is being asserted.
So this kind of thing
Quoting Srap Tasmaner
I agree with, and it really has nothing to do with what I've been arguing. I would never deny that we can learn something new and/or change our minds.
I agree with what you say except for this
Quoting javra
To say that something could be simultaneously wave and particle does not constitute a logical contradiction as far as I can tell. We might think there is an incompatibility between the two states, but maybe our understanding or imagination is just not up to the task. If it is a fact that something can be both wave and particle, then it is a fact, pure and simple.
Quoting wonderer1
:cool:
This, I think, will depend on what significance one imports into the terms "particle" and "wave". If the LNC does hold, however, then one cannot have a photon be both a particle (A) and not a particle (~A) at the same time and in the same respect.
For example, it might be that the unobserved photon is neither spatially localized (particle) nor disperse fluctuations (wave) but something else that can account for both observations.
That said, as to our imagination likely not being up to par, as I tried to previously express, I agree.
I just don't see how you're going to cleanly partition what is and what isn't part of an argument.
Why am I even arguing about this?
I don't think the LNC is useful at all as a description of how people reason or how they argue. People are frequently inconsistent, and philosophers know that better than most, not least because they accuse each other of it all the time. I see no sign that communication requires the kind of perfect consistency suggested, and I suspect there's a terribly unrealistic model of language and communication at work there.
I doubt the LNC is even useful as an ideal to strive for. If our mental faculties are primarily geared toward making useful predictions, and those predictions are probabilistic, I don't see what the LNC even brings to the table. My beliefs are mixed, my expectations are mixed, the evidence I accumulate is mixed, and what's required of me is flexibility, continual updating and exploration. It's not a matter of adding or subtracting atomic beliefs from my store of truths; change is always cascading through the system of my beliefs, modifying the meaning even of beliefs I "retain".
I do think I get where you're coming from, as a reformed logic guy myself. I'm not really arguing to convince you, just giving you some idea why I don't find much of value or interest in the LNC.
That's probably true, but if inconsistencies in your position, which you were unaware of, are pointed out to you, would it not be intellectually dishonest to refuse to acknowledge that? And if your position is self-contradictory would that not amount to being no position at all?
BTW, I'm not advocating that people should take up a position; I actually prefer to avoid holding views about anything at all as much as possible.
Quoting Srap Tasmaner
I can relate to that, but what if you added "I don't doubt the LNC is useful as an ideal to strive for. I think our mental faculties are not primarily geared towards making useful predictions, and that those predictions are not probabilistic, so I can see the usefulness of the LNC"?
You would then be contradicting yourself, and in that case how would I know what you were arguing for?
I will just point out that a photon being a wave and a particle is not logically equivalent to a photon both being and not being a particle, because it being a wave does not logically rule out its also being a particle.
Well, that's the thing. It's really already in there, because we're just talking about a working hypothesis, just pragmatism. All bets come hedged.
Quoting Janus
I don't know what to say to that because I don't see how it's a useful question. It's fighting the last war.
Should I be afraid that I might sometimes sound like I have an opinion when, unbeknownst to me, I don't?
Should I worry that I might try to predict whether that rock will hit me but somehow fail to make any prediction at all because a contradiction snuck in somewhere?
Reasoning as we actually do it is a rough and ready business, constantly on the move. I can imagine arguing that contradictions get weeded out because they're inherently useless, being necessarily false, but I doubt even that's right. We often have good reason to believe both sides of a story, so we keep our options open, and for a while they live side by side. So what?
TMK, a particle is a localized thing with volume, density, and mass. Whereas a wave function is not. So a wave function is not a particle. And hence the term "wave-particle duality". Am I missing out on something?
To corroborate my current understanding:
Quoting https://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality
edit: I get that a photon is considered massless. But wave-particle duality applies to mass endowed particles just as well. It even applies to some small molecules.
Quoting Srap Tasmaner
But you are also saying that you cannot imagine arguing that contradictions get weeded out because they are inherently useful, being necessarily true, and that you doubt that even that's right. And you are also saying that we never have good reason to believe both sides of a story, so we don't keep our options open, and they never live side by side, right?
So what?
Quoting javra
No, a wave function is not a particle, but light can (supposedly) be both wave and particle, and if that is correct it cannot be a contradiction, because it is not a proposition but an actuality. In any case it is not analytically contradictory to say that light or an electron can manifest as both wave and particle. It would be a contradiction to say that a wave is a particle, but that is not what is being claimed AFAIK.
In short, I don't agree with Einstein's assessment because if it is true that light really is both a wave and a particle, then the difficulty is not that that is a contradiction, but that due to our lack of some relevant understanding it is merely the case that it might appear to be a contradiction.
If reality could be logically contradictory, then it would be so much the worse for logic and all of our purported knowledge.
Remember, these are models of the quantum realm, models that have a very high degree of predictive value, but models just the same. In Einstein's quote, he doesn't say that reality is contradictory but that we have contradictory pictures of reality. This makes a world of difference in what is affirmed by him.
Right, I get that, but I still don't see why something manifesting as both particle and wave is logically contradictory. The laws of nature changing every few seconds would not be logically contradictory. If we think that light manifesting as a wave is equivalent to its not possibly being at the same time a particle, then that would make it seem contradictory, but I don't see how it would logically follow that something manifesting as a wave entails that it cannot also be a particle. The thinking seems to be that it cannot be both, that it must be one or the other, but its being both is not logically contradictory or impossible. Is it physically impossible (which would be a different thing to its being logically impossible)? I don't see any reason to think that something could be logically possible and yet physically impossible. Could the obverse obtain?
In any case all of this is kind of a red herring given the subject of discussion was concerning self-contradictory argumentation.
We are, yes, absolutely. I'm just kind of curious to see how it goes.
Quoting Janus
Maybe. I think @Isaac would agree with that -- rules of the game we play here.
If there is such a convention, I could certainly choose to follow it, and that might be worthwhile, depending on what I get out of playing the game. That would leave a couple questions: (1) is it anything more than a convention -- a law of the universe, say? (2) if it is a convention, does it have a purpose and if so what?
(1) I'm just going to ignore, but (2) is exactly what I'm interested in.
You've suggested a couple times that if I contradict myself, you can't tell what I'm advocating. Let's say that's true. If I contradict myself, there's no clear response for you -- at least agreeing or disagreeing with me don't seem to be options, but you can still call me out for breaking the rules, and you can indicate you don't intend to break the rules yourself. So that's a cost you willingly incur, making the effort not to contradict yourself, and that should count for something, a bona fide of your intention to engage seriously. Someone who breaks the rules has refused to ante up, and is not taken seriously. Everyone agreeing to incur some cost, to put in a modicum of effort, builds trust. That's clear enough.
If there's a cost to not contradicting yourself, if it takes effort, then we must be sorely tempted to contradict ourselves, must be on the verge of doing so regularly, and that doesn't sound right. I don't expect people to hold consistent beliefs, but direct self-contradiction is still pretty rare -- it's like we don't have an introduction rule for 'P & ~P', just not the sort of sentence we generate except by accident. (If there are contradictions or inconsistencies, they're generally more subtle. I searched the site for accusations of self contradiction, and, as you can imagine, the accused party universally denies that they have done so, and then there's a back and forth about whether what they said really is a contradiction or not. It's never dead obvious like 'P & ~P'.)
I mean, maybe the cost story holds up even if the cost is minimal -- it's the thought that counts -- or maybe it works better as a package, agreeing to something nearly amounting to all of classical logic and some induction and some probability and on and on. Now we're talking quite a bit of effort.
But is there something else? Some reason for this rule in particular? Do I have a motivation to make sure you have clear options of agreeing or disagreeing with me? I might, if we're choosing sides. Might just be politics. Anything else? There is the standard analogy of assertion as a bet -- you look at the odds but then you have to actually pick what to bet on to stand a chance of winning anything. (Cover the board and you'll tend to break even.) Do I have a motivation to gamble in our discussion? Do I stand to win anything by picking one of the two sides I have evidence for? Maybe, if it makes your response more useful to me. If I have evidence for both sides of an issue, it might not even matter which side I pick, so long as I can elicit from you more support for one side or the other, by giving you the opportunity to argue against me, or add your reasons for agreeing.
So that's two arguments for a strategy of respecting the LNC: (1) especially when taken together with other conventions of discussion, it represents a cost incurred by participants, which builds trust; (2) it's an efficient strategy for eliciting responses useful for updating your own views. (The latter is the sort of thing apo mentions regularly, the need for crispness, all that.)
Good enough for now, I guess. I'm still mulling it over.
Partially, yes. I'd take issue with @Janus's use of "taken seriously", and "be of any use to anyone". Both of these responses are possible and regularly seen consequent to arguments which are self-contradictory. So the claim is just not true on its face. People do take contradictory arguments seriously and many find them useful - presumably. As you say...
Quoting Srap Tasmaner
... so we can safely assume that, in posting, they take this self-contradictory argument seriously (let's assume the interlocutor is right here), and we can (less safely perhaps) assume they find it useful.
An attempt not to self-contradict is part of the rules, and that, I think, is why we don't even have a grammar covering "P and ~P"; it fails off the bat. But beyond that, actual self-contradiction doesn't seem to be much of a problem because up until the point it's 'uncovered' things seemed to be going along perfectly well for the party holding that belief set.
Which raises the question of what the importance of the LNC actually is, apart from as a rule in a game.
If it's to give us better belief sets (where 'better' here could be any measure for now), then we're putting the cart before the horse in our argumentation methodology, we should be saying "look how successful my belief sets are - that proves they cannot be self-contradictory", forget logic - point and counter-point should be various successes and failures in our personal lives!
But we don't. We think it the other way round, we think that one ought hold a belief set which adheres to these argumentative rules regardless of whether it's useful or not. As if there were some nobility to doing so. Perhaps we'll be rewarded by God...?
But, as you said earlier (I've been reading along), the reality of our thought doesn't adhere to these argumentative rules anyway. We are only capable of thinking A then ~A, we are never capable of thinking A and ~A, but not because of logical contradiction - rather because of a physical limit on the construction of propositional thoughts. We can't think A and B either, only A then B.
So where does that leave argument? It can point to a contradiction which a) isn't ever really there in our thoughts, and b) is claiming a flaw which can more easily be demonstrated than argued anyway and if not demonstrable, doesn't seem to be a problem.
I would say that is true only when it is not realized that the arguments are contradictory, unless you can offer a counterexample.
If it is only true in cases where the contradictoriness of the argument is not recognized, then it has no bearing on what I've been arguing.
This is the main thing I'm trying to get past. I think there's a typical assumption that our beliefs have a clear logical structure and if an inconsistency has snuck in then your beliefs are in a sort of defective state, you'll make worse predictions, and you'll end up mistakenly drinking bleach. Or at any rate, false beliefs get weeded out through contact with the real world, leaving behind true ones you can safely make sound inferences from. That kind of model. Representational, computational, and rational.
Certainly some chunks of our beliefs look to us like they were stitched together with some care, and some don't, but I'm not convinced that whatever consistency, whatever structure there is is there by choice. Even before "AI" became something people said everyday, there was talk of evolutionary algorithms at places like Facebook and Google, so complicated that none of their engineers understand them. I assume something a lot like that is true of our beliefs. There's probably something identifiable as structure in there but it's nothing at all like the two column proofs you learned in school and it's inconceivably more complicated. That's my guess anyway. The occasional dumbed down summaries of what's going on in there are what we call reasons and arguments.
That still leaves room for an account of reason as a social practice rather than, I guess, a cognitive faculty.
Is this roughly where you are?
I think Kahneman's view is that we can learn how to intervene in our own thinking process, correct our misguided intuitions using logic and math, and over time thus improve our habits of thought. I'd like to believe that...
But it's also whacko. I'm surprised you're nonplussed, but cheers.
Yes. Roughly.
I tend to frame the effect of reason in terms of effects on our priors, so reasoning is still post hoc, but has an effect. Basically, if the process of reasoning (which is effectively predictive modeling of our own thinking process) flags up a part of the process that doesn't fit the narrative, it'll send suppressive constraints down to that part to filter out the 'crazy' answers that don't fit.
But all this is after the first crazy thought.
What I'm convinced doesn't happen (contrary to Kahneman, I think - long time since I've read him) is any cognitive hacking in real time. I can see how it might cash out like that on a human scale (one decision at a time), but at a deeper neurological scale, my commitments to an active inference model of cognition don't allow for such an intervention. We only get to improve for next time.
:up:
I'd likely have said "intuitions" rather than "priors" but there is a lot of overlap at the very least.
Quoting Isaac
I don't recall getting such an impression from Kahneman, but because Kahneman seems to have come to his conclusions from a more psychological than neuropsychological direction I wouldn't be too surprised if he made such a mistake.
In any case, I very much agree that shifting our fast thinking (or deep learning) generally takes a substantial amount of time. Though there can be sudden epiphanies, where a new paradigm 'snaps into focus', the subconscious development of the intuitions underlying the new paradigm may have been taking place over the course of many years.
Quoting wonderer1
It's probably me misremembering or misunderstanding, and I'll look again. Mercier & Sperber mention in the introduction to Enigma of Reason that their model is different from Kahneman's in not really having two different types of reasoning process.
I do remember feeling back when I was reading TFS (which, full disclosure, I didn't get all the way through) that the thrust of it was that we reason logically less than we think we do, but we can make an effort to notice when a bias has crept in and respond. (Remember the little self-help sections at the end of the chapters? "Gosh, maybe I'm letting system 1 get its way here, and I should slow down, have a system-2 look at this." To which my response was always that I already spend a hell of a lot of time in system 2, so, you know, "does not apply" boss.) If that's so, logic is still a system of rules for getting better -- meaning, more likely to be true -- answers and its status is still unexplained.
I'll just go look at the book, but another general impression I got from the book is that we rely on system 1 so long as it works well enough, but system 2 is there for when things go wrong, and the response to surprise is that the slow, careful process takes over, and it has different rules, actually looks at the evidence, makes properly logical inferences, and so on. Which, again, leaves what logic itself is and why it works unexplained.
But I'll go look.
Corrective rather than constructive, and the consistency being enforced is that of the narrative your current model is organized around, rather than "the way the world really is" or something.
Some of that seems almost obviously true, but here's what still bothers me: if logic is a system of constraints that enforce (or, as here, restore) consistency, even if that consistency is with something like a narrative arrived at by other means, that still leaves logic as a set of universal, minimal constraints that everybody ends up following. Our narratives may be handmade and idiosyncratic, but unless the consistency I enforce (with that narrative) is also handmade and idiosyncratic, logic is still universal.
We don't have to go straight there. One of the things @Joshs talks about is paradigm or culture as the constraints on what counts as evidence. You could see something like that operating at the layer we were describing here as the corrective constraints. The next level up from your narrative might be this cultural layer that enforces a specific sort of consistency that would be different in another culture or under another paradigm. That's plausible. And there could be any number of layers, a hierarchy of constraints, variously idiosyncratic or cultural or community-driven, or even species-specific. But it seems like that pattern points to a minimal set at the top that looks a lot like logic, which annoys me if there's no explanation for where that set of constraints came from.
If, on the other hand, the most general constraint level is constructed by successively generalizing from the lower layers, whatever they may be, then that sounds a bit like the story I was hoping to tell about logic emerging from our practices rather than pre-existing them. Once in place, of course they can cascade (selectively) back down through the hierarchy to constrain our belief formation and so on, so they play that normative role of something we strive to conform to, but we're striving to conform to rules we ourselves have made and can take a hand in remaking and revising. All that's needed is a mechanism for generalizing and some motivation to undertake such a project. (And I swear to god this sounds almost like the old empiricist theory of generalizing from experience.) It is still a little uncomfortable for us to be converging on very, very similar top-level constraints, but maybe it shouldn't be.
One thing I haven't paid much attention to yet is that logic, like language, needs to be usable while it's being built. You can generalize a new higher level constraint and begin cascading that back down as soon as you build it -- and handling some specific case immediately is probably why you've built it, though it might take like forever before you get around to enforcing that constraint everywhere -- it's more of an as-needed, just-in-time thing.
There's also some question about whether the constraints at any given level are consistent with each other. Could very well not be and that could go on until some major failure forces you to add a new level with a rule for sorting that out. And if it comes to that, this might really be a hierarchy only in the sense that it has a kind of directed graph structure where two nodes may not have a parent (only children) until there's a conflict and a parent node is created to settle that conflict.
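To make the directed-graph picture a bit more concrete, here is a toy Python sketch (purely illustrative; the names and example "rules" are made up, and nothing here is offered as a model of actual cognition): constraint nodes start out parentless, and a conflict between two of them is what prompts the creation of a parent node to settle it.

    from dataclasses import dataclass, field

    @dataclass
    class Constraint:
        """A node in the constraint graph; it may have no parent until a conflict arises."""
        name: str
        parents: list["Constraint"] = field(default_factory=list)

    def settle(a: Constraint, b: Constraint, rule: str) -> Constraint:
        """Create a higher-level node that both conflicting nodes now answer to."""
        parent = Constraint(rule)
        a.parents.append(parent)
        b.parents.append(parent)
        return parent

    # Two lower-level habits turn out to clash, so a more general rule is introduced:
    habit_1 = Constraint("prefer the simpler story")
    habit_2 = Constraint("prefer the story that fits the current narrative")
    meta = settle(habit_1, habit_2, "when these disagree, defer to new evidence")

The point of the sketch is only the shape: nothing forces consistency between the two habits until the conflict is actually noticed, and the more logic-like node exists only because it was needed.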
We're not a million miles away from Quine's web of beliefs, but he tended to talk in terms of a core area of the most abstract rules like logic, and a periphery that is the most exposed to experience. And he continually waffled on whether the rules of logic at the core were subject to revision.
Is this all just empty model spinning, or does it sound reasonable? @Janus? @wonderer1
For me at work, it is often a matter of a 'picture' rather than a narrative, and I am trying to bring my mental image of how an electronic gizmo works into better compliance with how things work in the world. If my mental image is out of compliance with the way things actually work in the world, the world may well inform me of this with flames, puffs of smoke, or minor explosions. (Although more typical is that the circuit just doesn't work as expected.)
That said, I agree with most of what you said, inasmuch as I am interpreting it correctly.
Yes, that's the idea. Our lower order modelling cortices spit out all sorts of junk all the time, white noise random synapse firing, and it's all filtered out by the same mechanism, which is a higher order model saying "that doesn't sound very likely - it's not what I'm expecting". So it's all corrective to expectations until something breaks, the noise overwhelms the suppressive feedback (literally overwhelms, as in more signals), or as wonderer put it...
Quoting wonderer1
...then the higher order model is expecting the noise (or what was noise) and starts suppressing anything which isn't it.
Quoting Srap Tasmaner
Hey! A lot of hard work went into that!
Quoting Srap Tasmaner
That's right. I've talked before about narratives being 'picked off the shelf' from the available narratives. Not that we can't make up our own, it's just easier to pick the available ones (less chance of surprise). Logic is just one such narrative model of how our various mid level cortices put data together and churn out belief states (tendencies to act as if). In fact I think Logic is even too broad to be a single narrative, it's more like a collection. I don't think we ever literally apply the rules, it's more a general feel for what might not work if we looked at it too hard.
That said, I do think there's scope for some hard-wired suppressive feedback models. There's evidence from infant studies of a few such mechanisms for basic physics, so I don't see any reason why there shouldn't be any for logic, but I expect they'd be limited, as the physics ones are, even though it would seem far more useful to have the full set hardwired in.
Quoting Srap Tasmaner
I think, if I'm honest, my gut feeling is that we've got the category 'logic' wrong. I get what you're pointing at here and I think it's right, we can generalise some of this from experience (the light is never both on and off, so data suggesting it is can be suppressed on the basis of empirical priors, not the law of non-contradiction). It's just that I think habits of thinking, ways we expect our systems to output results, are also cultural. I think the category 'logic' may be just too broad and in cognitive psychology terms isn't a 'natural kind' at all, but rather two (or three) completely separate processes, which involve both sensory data, and interoceptive modelling.
Okay, that's helpful. Toward the end I was starting to imagine an almost ad hoc building up toward the general, generalizing just as much as you need to resolve a conflict. But it bothered me that once again I was starting to treat inference rules as premises, habits as beliefs.
I like this less abstract approach of considering what sorts of cognitive departments an organism might develop and then looking at what those could conceivably do and what that would look like. My whole approach in the last post was way way too abstract.
Quoting wonderer1
I do think that's really important. (Sellars used to actually draw pictures in his typescripts and commented once that everybody uses images; it's just that he leaves them in. One of his two most famous papers has the word "image" in the title and the other has "myth".) These days I almost always approach probability problems by imagining a rectangle and then carving up the total space into areas. Numbers are decoration. (Bonus anecdote: Feynman describes an elaborate visualization technique he used to figure out whether a conjecture in mathematics was true or false, a game he used to play I think as a grad student talking to guys from the math department. If he got it wrong and they pointed out the condition he missed, he'd reply, "Oh, then it's trivial," which is incontestable when talking to mathematicians, kind of an "I win" card.)
Blah blah blah, I'm just so focused on linguistic and symbolic reasoning that it's hard to know what to do with visual reasoning, but if it's not obvious then I must be doing something wrong. This is probably me being too abstract again and it would be clearer if we considered how organisms like us rely on visual "input".
I don't think I posted this but I did a little introspective experiment last week where I looked at objects on the porch and out in the yard and imagined them moving. I developed some skill at that kind of visualization as a chess player, though I'm rusty now. The result was that I did not hallucinate the objects moving; there was no interruption of the visual stream, which still shows the lawnmower in the same place, but it "feels" like I'm seeing it move. It's like hypothetical movement does fire the extra "what this means" pathway but stays off the main "what I'm seeing" pathway, almost like the reverse of Capgras delusion. When I coached young players I used to tell them to imagine the pieces very heavy when they calculate so they could more easily remember which square a piece was on in their imagination. Curious.
A chapter into Mercier and Sperber and the model is pretty exciting.
Quoting Isaac
Quoting Isaac
This! I'm always forgetting how much of our mental processing is devoted to filtering. That's another point that makes my last post feel off.
The dialogue did not gloss over the central role of power. Whether the City is healthy or not as an individual soul is the ratio the gang of the powerful cannot answer for itself.
OK. Point taken. To then better address the issue you're pursuing:
While I stand by the belief that the LNC is sound, it of itself is in no way prescriptive. If indeed sound, it is strictly descriptive of what is. So I so far don't find that one can obtain an ought from the LNC.
That said, I'll present the outline of an argument for why self-contradicting arguments are bad. First, some simplistically expressed premises:
I get that these premises can be debated and that they might be too simplistic in present format, but tentatively granting them here all the same, the following then results. Truths will in such a world never contradict; this because the singular and universal actuality, or reality, which truths conform to is itself coherently structured, hence consistent, hence noncontradictory. By comparison, an untruth will always be that which does not conform to what is actual and, because of this, two or more disparate untruths will always contradict each other as well as contradicting that which is actual.
Here, an expressed contradiction in one's reasoning will signify either that all but one of the contradicting parts do not conform to what is actual or that all the contradictory parts do not so conform. In short, a contradiction will here always entail a lack of conformity with what is actual.
Conversely, an argument that is devoid of self-contradiction then gives no indication of being untrue.
Further granting that what is sought is conformity with what is actual (that we seek what is true), then self-contradictions shall in this case always be bad due to always entailing untruths.
That said, there are other goals that individuals can pursue, some of which will find untruths and the resulting contradictions quite useful so as to best fulfill said goals. As one example, we can tell untruths to a murderer so as to safeguard a loved one. As a more unpleasant example, we would not be able to understand the psychology of Orwell's 1984 (complete with the Ministry of Truth's dictums of "War is Peace", "Freedom is Slavery", and "Ignorance is Strength"), nor find the story-line believable, were untruths not beneficial in sustaining autocratic power within everyday life.
This is a rough outline of a general perspective I hold. In summation, contradictions always evidence untruths. But whether untruths are good or bad will be fully dependent on the ends which one seeks to fulfill. (That said, none of the contradictions here expressed which result from untruths will themselves be the logical contradiction which the LNC states cannot occur - in so far as hypocrisy and doublethink can occur despite the LNC nevertheless holding.)
Yes. It gets complicated when we get into redundancy and duplication, but I don't think either need disrupt this project right now, it's still reasonable to say that the brain is hierarchical and that those hierarchies are based on physiology primarily.
Quoting Srap Tasmaner
There was some work a little while back by Paul Allen at King's reviewing data on hallucination in schizophrenia where he pointed to the regular importance in studies of the inhibitory networks (and even the actual neurotransmitters used to carry out these inhibitory functions). Even though the causes of schizophrenia seem to remain multiple, there is a trend toward a combination of hyperactivity in sensory processing regions and a lack of inhibitory function in the anterior cingulate and subcortical regions. Basically, all the stuff telling you that the visual pathway you stimulated by imagining the moving lawnmower was you doing it, not the outside world.
Back to filtering again... removing from the working memory of scene creation the hypotheses about potential futures that are just noise, like all the things you could say that are just noise in the production of what you actually do say. We produce a lot of crap; it's a miracle of ruthless editing that anything sensible results at all.
So visualizing, imagining, hypothesizing, all that sort of thing, might be accomplished at least in part by inhibiting channels to an area involved in all sorts of practical issues (wiki says error detection, reward anticipation, decision making, on and on). That's extremely interesting.
Nifty segue!