Mathematical truth is not orderly but highly chaotic
Most mathematical truth is unprovable and therefore unpredictable, if only because most of it is ineffable ("inexpressible"). It is easy to get the wrong impression that mathematical truth is orderly.
https://en.wikipedia.org/wiki/Mathematical_beauty
Mathematical beauty is the aesthetic pleasure derived from the abstractness, purity, simplicity, depth or orderliness of mathematics.
This impression is based on our inability to "see" most of its chaotic truth because it is ineffable. It is simply invisible to us. Furthermore, we even fail to see its otherwise visible chaos, because our methods mostly fail to reach it.
In his paper, "True but unprovable", Noson Yanofsky focuses on explaining why most mathematical truth cannot be expressed in language:
http://www.sci.brooklyn.cuny.edu/~noson/True%20but%20Unprovable.pdf
There are more true but unprovable statements than we can possibly imagine.
We have come a long way since Gödel. A true but unprovable statement is not some strange, rare phenomenon. In fact, the opposite is correct. A fact that is true and provable is a rare phenomenon. The collection of mathematical facts is very large and what is expressible and true is a small part of it. Furthermore, what is provable is only a small part of those.
The world of mathematical truth does not look like most people believe it does. It is not orderly. It is fundamentally unpredictable. It is highly chaotic.
Comments (346)
Quoting Tarskian
I'm not sure what "true as ineffability" is supposed to mean here in the context of chaos and unpredictability. Could you say a little more about what makes an unprovable mathematical proposition true? I'm sure you wouldn't want to argue that the infinite task of ensconcing smaller axiomatic systems within more encompassing axiomatic systems involves a qualitative change of sense of meaning that prevents us from attributing all these systems to the same truth, and therefore one is not in fact dealing with an already defined infinity, but with a finite task whose sense is continually shifting. This would be Wittgenstein's view, which I agree with. But I am guessing you would argue alongside Gödel and Yanofsky that every iterative subsuming of axiomatic system within axiomatic system belongs to the same truth.
Consider:
A convoluted argument, perhaps, but it shows that one must do more than simply assert that natural languages are at most countably infinite. Yanofsky must argue his case. "...the collection of all properties that can be expressed or described by language is only countably infinite because there is only a countably infinite collection of expressions" begs the question. Indeed, the argument above shows it to be questionable.
There is something very odd about an argument, in a natural language, that claims to place limits on what can be expressed in natural languages.
The paper actually says:
So, it only insists that the sentence "PA cannot prove a contradiction" can be expressed in PA itself.
In the following paper, containing a version of the proof, the author expresses it by reifying the truth value for falsehood (⊥):
In another paper, with another version of the proof, the author insist that it is enough to express the unprovability of any arbitrary falsehood. No need to reify truth values:
So, in that case, let ConsP be the sentence ¬BewP(⌜1 = 2⌝).
But then again, it is also perfectly possible to express the notion of consistency in full -- straight from its definition -- that PA does not prove both A and ¬A for all sentences A of PA:
PA ⊢ ∀A ( ¬Bew(⌜A ∧ ¬A⌝) )
In all cases, regardless of how you express consistency of PA, the proof for the second incompleteness theorem always proceeds by considering the first incompleteness theorem:
PA ⊢ ∃A ( A ↔ ¬Bew(⌜A⌝) )
The above means: There exists a sentence A that is (true and not provable) or (false and provable).
Say that G is such a sentence:
PA ⊢ G ↔ ¬Bew(⌜G⌝)
If PA can prove its consistency, then it can obviously also prove that G is consistent:
PA ⊢ ¬Bew(⌜G ∧ ¬G⌝)
By using the Hilbert-Bernays rewrite rules -- with a few more steps -- we can then prove that this expression leads to the following contradiction about G:
PA ⊢ ¬(G ↔ ¬Bew(⌜G⌝))
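For completeness, here is one standard way those "few more steps" go, sketched with the usual Hilbert-Bernays-Löb derivability conditions (this is a reconstruction of the textbook route, not a full derivation):

PA ⊢ G ↔ ¬Bew(⌜G⌝) (the fixed point above)

PA ⊢ Cons(PA) → ¬Bew(⌜G⌝) (the first incompleteness theorem, formalized inside PA)

PA ⊢ Cons(PA) → G (combining the two lines)

So if PA proved Cons(PA), it would prove G, hence also Bew(⌜G⌝) by the first derivability condition, while the fixed point would at the same time give ¬Bew(⌜G⌝). That is the contradiction.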
There are many ways to formulate the consistency of PA, i.e. Cons(PA), but proving it will always lead to a contradiction. Therefore, Cons(PA) is unprovable. According to the first incompleteness theorem, the following sentence is true:
Cons(PA) → Incompl(PA)
So, PA is inconsistent or incomplete. However, we do not know if Cons(PA) is true. We can only come to that conclusion by proving it, but how are we supposed to do that? So, I disagree with the author when he writes:
This statement is only unprovable.
Of course, we can use Gentzen's equiconsistency proof with PRA but that does not prove PA's consistency. It just proves that it is equiconsistent with PRA (primitive recursive arithmetic). Who says that PRA is consistent? We don't know that. Other authors sometimes write that we can prove PA's consistency from within ZFC. Fine, but who says that ZFC is consistent?
Hence, we can only assume PA's consistency. We cannot just state Cons(PA) to be true. This cannot be done.
The fact that we can prove that it exists.
Let's start from Carnap's diagonal lemma. In the context of Peano arithmetic (PA), for each property φ(n) accepting one natural number n as input argument, there exists a true sentence S that does not have the property or a false sentence S that does have it:
PA ⊢ ∀φ ∃S ( S ↔ ¬φ(⌜S⌝) )
This is, in fact, the only hard part in Gödel's proof. The proof for the lemma is very short but it is widely considered to be incomprehensible:
https://proofwiki.org/wiki/Diagonal_Lemma
Say that Bew(⌜S⌝) is a property in PA that is true if it proves S and false when it doesn't. In that case, the lemma applies:
PA ⊢ ∃S ( S ↔ ¬Bew(⌜S⌝) )
There exists a true sentence that is not provable or a false sentence that is provable. Hence, PA is incomplete or inconsistent. Let's denote this sentence as G:
PA ⊢ G ↔ ¬Bew(⌜G⌝)
So, now we have a sentence that is (true and unprovable) or (false and provable). In fact, G is also a truly constructive witness for the theorem. But then again, we do not even need this particular sentence, because in the meantime we also have Goodstein's theorem, which is true but unprovable in PA:
https://en.wikipedia.org/wiki/Goodstein%27s_theorem
It is very hard to discover these kinds of true but unprovable sentences. But then again, we also know that they massively outnumber the true and provable sentences. True but unprovable is the rule, while true and provable is the exception. This is the paradoxical situation of the truth in PA. The truth in PA is highly chaotic, but it is very hard for us to see that.
You can enumerate every sentence in natural language in a list. Therefore, it maps one to one onto the natural numbers. Therefore, their set is countable.
For natural language to be uncountable, you must find a sentence that cannot be added to the list. To that effect, you would need some kind of second-order diagonal argument.
No, Gödel does not assume consistency. In Gödel's theorems, consistency is exactly the question. In mathematics we implicitly assume consistency. In metamathematics, we don't.
I didn't read the rest of this interesting thread yet so I'm just responding to the top post.
I believe Chaitin made a similar point. He has a proof of Gödel's incompleteness theorems from algorithmic complexity theory. I believe he says that mathematical truth is essentially random. Things are true just because they are, not because of any deeper reason.
This sounds related to what you're saying.
Quoting Banno
The set of finite-length strings over an at most countably infinite alphabet is countable. There are countably many strings of length 1, countably many of length 2, dot dot dot, therefore countably many finite strings.
If you allow infinite strings, of course, you can have uncountably many strings. That's the difference between positive integers, which have finitely many digits; and real numbers, which have infinitely many. That's why the positive integers are countable and the real numbers uncountable. It's the infinitely long strings that make the difference. But natural language doesn't allow infinitely long strings. Every word or sentence is finite, so there can only be countably many of them.
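To make the counting argument concrete, here is a small illustrative sketch (my own example, not from the post): it enumerates every finite string over a countably infinite alphabet a0, a1, a2, ... without skipping any, which is exactly what countability requires. The weight-by-weight ordering is just one convenient choice.

```python
from itertools import count, product

def finite_strings():
    """Enumerate all finite strings over the countable alphabet a0, a1, a2, ...
    Each string has a fixed 'weight' (its length plus the sum of its symbol
    indices), and for each weight there are only finitely many strings, so
    every string is reached after finitely many steps."""
    yield ()  # the empty string
    for weight in count(1):
        for length in range(1, weight + 1):
            budget = weight - length  # the symbol indices must sum to this
            for indices in product(range(budget + 1), repeat=length):
                if sum(indices) == budget:
                    yield tuple(f"a{i}" for i in indices)

# Usage: print the position of the first few strings in the enumeration.
for position, string in zip(range(10), finite_strings()):
    print(position, string)
```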
I didn't completely follow what you're doing, but in taking the powerset of a countably infinite set, you are creating an uncountable one. There aren't uncountably many words or phrases or strings possible in a natural language, if you agree that a natural language consists of a collection of finite-length strings made from an at most countably infinite alphabet. I think this might be a flaw in your argument, where you're introducing an uncountable set.
Yes, Yanofsky's paper also mentions Chaitin's work:
This means that most (but not all) mathematical truth is essentially random.
Yanofsky's paper mentions an even larger class of random mathematical truth: unprovable because ineffable ("inexpressible"). There is no way to prove truths that cannot even be expressed in language. Because in that case, how are you going to express the proof? That class of random truths is even larger than Chaitin's random truths.
But then again, there exists a small class of true and provable statements.
In fact, the nature of mathematical facts is quite similar to the nature of facts in the physical universe: mostly random, but with a relatively small class of facts that is still predictable. Unlike what most people believe, math is not more orderly than the physical universe itself.
Nevertheless, and to all practical purposes, mathematics enables a very wide range of successful predictions, doesn't it? The mathematical physics underlying the technology on which this conversation is being conducted provides a high degree of prediction and control, doesn't it? Otherwise, it wouldn't work.
Thanks, I'll check out that paper.
Quoting Tarskian
True yet inexpressible in language. Great concept.
This is a far cry from the point that math can be difficult to put into words. The proof is in the very fact that you're able to post online consistently for us to read your posts. That was all made possible through math.
There are two directions.
If it is provable, then it is always true (aka the soundness theorem). In this direction, everything is very orderly. That is the only direction that we really use. That is why it works so well.
If it is true, then it is almost surely not provable. In this direction, everything is very chaotic. We almost never use this direction. In fact, we cannot even see most of these random truths. So, why would we try to prove them?
It took Gödel all kinds of acrobatics in metamathematics to discover that these unpredictable truths even exist.
Before the publication of Gödel's paper in 1931, nobody even knew about these random truths. Most mathematicians were actually convinced that if it is true, then it is surely provable. Pretty much everybody on the planet was wrong about this before 1931. They were all deeply steeped in positivism. David Hilbert even asked for a formal proof of this glaring error. In fact, there are still a lot of people who believe this. Almost a century after its refutation, it is still a widespread misconception.
All of this is the result of using just one direction ("soundness"):
If it is provable, then it is always true.
That is the only direction that we use in engineering. We never use the other direction:
If it is true, then it is pretty much never provable. It is a rare exception, if it is.
In math, we mostly don't even see these unpredictable truths. How would we? In the physical universe, we can definitely see the unpredictable chaos, but we mostly ignore it. Mathematical truth is as chaotic as the truth in the physical universe. In my opinion, there is not much difference. We typically just don't want to know about it.
Perhaps some can see this as chaotic, but math itself is quite logical and hence quite orderly. Unprovability or uncomputability doesn't mean chaotic. Math is orderly; we just have limitations on what we can compute or prove.
Of course it matters just how we define chaos. If it's logical, it surely can also be mathematical.
The fact is that people have a difficult time grasping that mathematics can be uncomputable (and unprovable). Non-computable mathematics sounds like an oxymoron, right? Wrong: only part of mathematics is computable, or countable, or provable.
Let's take a simple example of just how easily we can get a true but not known mathematical entity. Assume a, b and c are distinct numbers that belong to the natural numbers.
Let's have the equation
a + b = c
if we know two of them, we know the third one. So if a is 2 and b is 3, then c has to be 5. The equation, which is a bijective function, is obvious and easy.
It isn't so obvious when we have an inequation, which isn't a bijection:
a + b < c
If a is 2 and b is 3, then c has to be something bigger than 5, and since c belongs to the natural numbers, it's then 6 or larger. And that's it! Even though c obviously is a natural number and has a precise point on the number line, not some range, we cannot prove c exactly. The only equation or bijection that we can do is that c=c (and c is 6 or a higher natural number).
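A throwaway sketch of the contrast (my own illustration, not ssu's): the equation determines c outright, while the inequality only narrows c down to an unbounded set of candidates.

```python
a, b = 2, 3

# The equation a + b = c pins c down to a single value.
c_from_equation = a + b
print(c_from_equation)  # 5

# The inequality a + b < c only constrains c: every natural number above 5 qualifies.
some_candidates = [c for c in range(1, 20) if a + b < c]
print(some_candidates)  # 6, 7, 8, ... and so on without end
```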
The problem arises because we just assume that everything in math has to be provable. And the real culprit here is that, since mathematics arose from the need to count, we have put counting/computing as the basis of all math. That is an error, because we have non-computable math, and hence, if we want mathematics to be consistent and logical, something's got to give.
I started using unpredictability as somewhat a synonym for unprovability because of how Stephen Hawking put it:
So, we are sitting on a system that is largely unpredictable because most of its truths are unprovable. A system that is largely unpredictable is deemed chaotic:
The only minor difference between the universe of arithmetical truth and a chaotic system is that there are no "initial conditions" that we can change in order to produce a completely different version of arithmetical truth.
Quoting ssu
Imagine that we somehow have the information that c=17. Without additional information, it is not possible to prove it. In that sense, c is true but not provable. It could even be impossible to prove. In the standard model of arithmetic, i.e. in the natural numbers, we can somehow see that c=17, but in various nonstandard models, we can see that c is not 17. In those circumstances, proof is not even possible. Only when c=17 in all models of arithmetic is a proof possible.
Quoting ssu
Yes, David Hilbert even wanted proof for that. In his view, every true statement must have a proof:
Hilbert believed it so strongly that he insisted that all his colleagues should work on proving the above. A lot of people still believe it. You can give them proof that it is absolutely impossible, but they simply don't care about that. They will just keep going as if nothing happened. You can't wake a person who is pretending to be asleep.
The basic problem is that people simply have these ideas about what mathematics should be like and don't notice that their own premises, which they hold as axioms (obviously! What else could they be?), aren't actually true. And when those "axioms" aren't true, we end up somewhere in a paradox.
The easiest misunderstanding to understand was the idea of all numbers being rational. Why? Because math had to be perfect! And then, when obviously there were irrational numbers, the story goes that the man who found irrational numbers, Hippasus, was ostracized, and when he drowned at sea, it was the "punishment of the Gods". So that at least shows how some Greeks thought about it. Yet since some irrational numbers were so useful, irrational numbers were accepted.
Then there's the mess that Russell found out and the collective panic attack that only subsided with ZF-logic simply banning the paradox. There's obviously still a lot of confusion. But we can look at this in a very positive light: there's a lot for us to discover still!
Yes, these people want a kind of certainty that simply does not exist ...
Quoting Tarskian
Gödel didn't make it easy. In my opinion Cantor's diagonalization is an easier model. Or basically just use negative self reference with avoiding a Cretan liar situation.
What I don't get is just how little interest the diagonalization (or negative self-reference) gets. Yet with using it Cantor showed that the reals cannot be put into a one-to-one correspondence with the natural numbers. And Turing used it in the Halting Problem and Gödel in the incompleteness Theorems. And here's the key: if we disregard this, we end up in a paradox.
For example, I can write (if I do write it correctly) the following self-referential statement, which is true:
"I can write anything what I write" meaning, that I have no limitations on what I write and what I write is then defined to be something that I wrote or, my writings.
Then let's turn into a negative self-reference, which is also true: "I cannot write anything what I don't write". OMG! Can I write anything? Obviously I can. Does this somehow limit what I can write? No, but it shows that obviously there also is something that I don't write, these writings exist.
Now here's the tricky part: If I make the false assumption that "I can write anything" means that there cannot be anything that I cannot write (if we skip the physical limitations and stick to the theoretical) what would that imply?,
I would have to write also what I don't write, which cannot be.
So in a way, negative self-reference is in my opinion a very essential building block for logic. And every time someone makes a universal statement that ought to apply to everything, watch out!
I think you're misusing the word there. If everything were chaotic, nothing would exist, and if everything were perfectly ordered, nothing would change. Existence requires both. Beyond that, I can't see the point, if there is one.
You seem to have missed the argument presented. It shows that such a list would have no fixed cardinality.
Ok.
Does this principle apply to this statement? :wink:
People seem to understand this about the truth in the physical universe. They tend to reject this about the truth in arithmetic. I wanted to point out that the situation is the same.
That may very well be in violation of Carnap's diagonal lemma:
"For each property of logic sentences, there exists a true sentence that does not have it, or a false sentence that does."
But then again, it still needs to be a property of logic sentences. For example, a property of natural numbers can apply to all natural numbers.
Let p be "I can write anything". Let q be "I know everything".
Consider the statement "If I can write anything then I know everything"
"If I can write anything then I know everything" seems reasonably true.
"If I can write anything then I don't know everything" seems reasonably false.
"If I cannot write anything then I don't know everything" seems reasonable true'.
However, as regards logic using the Truth Tables, "if I cannot write anything then I know everything" is true, regardless of whether it initially seems unreasonable.
In logic, negative expressions are as important as positive expressions, but can lead to strange places.
But a sentence is not the same as a string.
The interpretation of a sentence depends on the context/axioms. The same string in two axiomatic systems is two distinct sentences.
In regard to the paper you referenced in the OP, this can probably be fixed up to reference only the possible statements in a single axiomatic system. However, the assertion that natural languages are countably infinite no longer holds, given there are an uncountably infinite number of contexts for any given sentence.
Way out there take
I would go significantly further.
The interpretation of statements within an axiomatic system is determined by the context/axioms.
In order to consistently and correctly interpret a given statement within an axiomatic system it is necessary to have an accurate and complete statement of context.
That is, the axioms for an axiomatic system must, themselves, have a complete statement of context (axioms must have axioms to determine how those axioms should be interpreted).
This results in infinite regression.
Axiomatic mathematics/mathematicians do try to mitigate this problem by using previous axiomatic systems to specify axioms for subsequent axiomatic systems; but this only obfuscates the problem, it doesn't resolve the fundamentally unresolvable issue:
Without a full and complete initial specification of context, it is impossible to derive a full and complete specification of context.
Ah, it depends on how you're using the word 'truth'. If you mean absolute truth or "what is," yeah, it's hard to find those. If you're talking about propositional logic or terms in math, then true/false is fine. I just think you're being a bit dramatic. :)
Knowledge is a tool. Because it's not precise to the nanometer, does that mean a wrench is highly chaotic and unpredictable? Of course not. Our language, while imprecise at times, is useful for its imprecision for efficiency. Just as I wouldn't grab a wrench if I were studying the atomic level of the universe, one shouldn't use certain language and terms when dealing with the foundations of knowledge and mathematics.
The hyperbole just isn't true. It's like standing in a white room and noting, "Look how chaotic the colors are, flying every which way around this room! The chaos!" And of course there's someone looking at you from the outside wondering if they should pad the walls and give you a jacket to go with it.
The true nature of the universe of mathematical facts makes lots of people uncomfortable.
Imagine that we had a copy of the theory of everything.
It would allow us to mathematically prove things about the physical universe. It would be the best possible knowledge that we could have about the physical universe. We would finally have found the holy grail of science.
What would the impact be?
Well, instead of being able to predict just 0.1% of the facts in the physical universe, this would improve to something like 0.3%; and not much more.
Scientism is widespread as an ideology in the modern world. Any true understanding of the nature of mathematical truth deals a devastating blow to people who subscribe to it. This is exactly why I like this subject so much.
Quoting Philosophim
That is wishful thinking.
You may not want it to be true, but it is.
In 1931, Gödel's incompleteness theorems dealt a major blow to positivism and scientism, but it was just the beginning. It is only going to keep getting worse. As Yanofsky writes in his paper:
In my opinion, scientism needs to get attacked and destroyed because its narrative is not just arrogant but fundamentally evil. It is a dangerously false pagan belief that misleads its followers into accepting untested experimental vaccine shots from the lying and scamming representatives of the pharmaceutical mafia; and that is just one of the many examples of why all of this is not hyperbole.
It sounds as though you yourself hold some rather specific and rigid beliefs that likewise are not entirely objective in their genesis.
Well, yeah, I rigidly believe that we should not give powers to people that only Allah should have, and if Allah does not even exist, then so much the better.
Thanks for clarifying that.
If you refer to "an universal statement that ought to apply to everything", I would agree (assuming I understood your point).
Provability, if I have understood it correctly, means that the truth of a statement/conjecture can be derived from some axiomatic system or logical rules.
With diagonalization, we get only an indirect proof. Here the proof relies on a contradiction: if the statement/conjecture were false, then we would have a contradiction. Yet here we lose a lot compared to a direct proof, as there is no means to grasp similar information about the statement/conjecture as in a direct proof. That's why the "true, but unprovable" statements have been such a mystery: we want to have more information about them, as we would have with a direct proof. And of course, people haven't been interested in finding "true, but unprovable" statements. Hopefully that's changing now.
So one hypothesis would be this:
Is diagonalization a way to find mathematical statements that cannot be proven by a direct proof, but can only be shown to be true by reductio ad absurdum?
And the next, even more outrageous hypothesis:
Is this then also a limit on what we can compute and give a direct proof for?
Let's just think about what our current definition is of what is computable: the Church-Turing thesis. It states that what is computable is what a Turing Machine can compute. What the Turing Machine cannot compute is found exactly by using the diagonalization (or negative self-reference) that we are talking about in the first place.
But not only is this an informal definition, it is also only a thesis, meaning literally something that we want to prove. And here we find again an issue where we want mathematics to be something other than it is, if we want to make a direct proof about it, i.e. a theorem of the Church-Turing thesis. This isn't possible in my view, because we are talking about the limits of what is computable or directly provable and what is not.
So what are we missing here?
Basically, a proof that defines what is both computable and directly provable in mathematics and what isn't. Because this proof also states what isn't computable and not directly provable, to be consistent with itself this part cannot itself be directly provable, but can only have an indirect proof.
My five cents on the issue: The diagonalization itself here holds the key. It could solve a lot of the confusion that mathematics has now.
Love to hear your comment @Tarskian, and others too. And if I made a mistake somewhere, please tell; I'm not a mathematician/logician, so I won't mind an ad hominem attack on my credibility.
Your statement is a bit different from mine.
This invigorates a deep curiosity in me. Something that does not change does not exist? How so?
This was my first thought. Natural languages would seem to need to be computable, which would entail countably infinite.
Again, hyperbole. I can assure you if we were able to predict how everything in the universe worked, we would solve all of quantum mechanics for starters. That's pretty huge. We would also master quarks and gluons. That's not insignificant.
Quoting Tarskian
Right, people of all stripes can fall into the intellectual trap of "Nothing is true!" and think that gives them an insight that others don't see. After all, if nothing is true, no more thinking, right? Except it's really just an illusion of intelligence. Want to really impress? Try coming up with ways to make sense of the world despite the 'chaos'.
I say this not to insult, but to kick you in the pants a bit because I see too many people fall into this trap that stunts their further growth. No, nothing you have discovered here has shaken the foundations of math or science. Knowing some limitations in how it comes about or what it can do, does not invalidate what it can do and is useful for.
Quoting Tarskian
You can't prove arithmetic from arithmetic because we created it. The concept of "One" is from our ability to create discrete experiences in the world. For example, look at your keyboard. Now your keys. Now a portion of the key. Those are all your ability to create the concept of "one". "Two" is the concept of one and one grouped together. And thus the logic that continues from there is math. Again, just because math can't prove math doesn't mean that it's not a viable and useful tool that results in amazing leaps in technology and understanding of the universe.
Quoting Tarskian
That's new! Why is it arrogant and evil?
Quoting Tarskian
Well, let's say this is true. What method did you use to find out that it's true? Can you be confident that your own method is sound, or at least more sound than science?
The complete and perfect theory of everything cannot do that. It won't be able to predict everything. It would improve our ability to predict the physical universe from 0.1% to 0.3% of the true facts. So, it will possibly triple the predictive power of physics but not more than that.
We already have the theory of everything for the natural numbers, which is PA. It does not help us to predict the vast majority of mathematical truths. Most of the truth about the natural numbers is still unpredictable.
Quoting Philosophim
I did not discover anything. Gödel certainly did. Chaitin also did. Yanofsky moderately did. I just mentioned their work.
Quoting Philosophim
It is an opinion and not a theorem. There is nothing wrong with mathematics or with science. My problem is with positivism and scientism. I find these ideological beliefs to be very dangerous.
Right, and despite their work being concluded for quite some time now, people several times smarter than both you and I combined still hold math and science as tools of precision and meaningful discovery.
Quoting Tarskian
I find this point more interesting. Why?
If you feel threatened by its chaotic nature, it means that it disturbs your ideological beliefs. Someone who really uses them as tools of precision and meaningful discovery would never feel threatened by that.
Quoting Philosophim
It is probably best to use an example from the Soviet Union but in fact modern western society does exactly the same:
It is very convincing, because it sounds scientific, and because it insists that it is scientific, and especially because you will get burned at the Pfizer antivaxxer stake if you refuse to memorize this sacred fragment from the scripture of scientific truth for your scientific gender studies exam.
As you can see, everybody who craves credibility insists on sailing under the flag of scientism, redirecting the worship and adulation of the masses for the omnipotent powers of science to themselves and their narrative.
Noson Yanofsky's book on this subject sounds quite interesting, it's been on my reading list. Still, from what I understand of his thesis, I don't think he is trying to motivate any sort of thoroughgoing rejection of "science" as a tool for decision-making or developing knowledge
Now, this might be a bit of hyperbole, no? Didn't the NHS itself publish a study suggesting that vaccination might not be a net positive for young British males, even if it was still warranted due to its downstream effects on overall population health?
And no one got arrested for selling people all the horse dewormer they wanted to gobble down. I happened to catch it live when our former President proclaimed that he was "on the hydroxy right now" despite not even being sick, and he seems fairly likely to become POTUS again, rather than having been burnt alive.
Does one have non-ideological beliefs? Is that a thing now? You wear double-layer body armor in your words because you know the environment (truth) presents an imminent danger (to your baseless ideology). Ironic, no? Prove me wrong, mate.
Quoting Tarskian
I'm detecting a distinct political slant here. Is it Libertarianism? Trumpism? Anarchism? Would I be right to surmise that you are not a backer of climate change science?
I would agree with @Tarskian; especially a mix of both can be harmful, because one can become so dogmatic that one starts to think that a model or theory of reality is far more real than reality itself. And this dogmatism leads people to forget that scientific theories are only models of reality. You don't care how real life differs from the scientific model; the model itself is right.
Scientism itself can be viewed as a derogatory remark, but positivism itself isn't so bad, if you don't use it too much and are open to other thoughts. For example, let's think about Comte's law of three stages, where first people believe in myths and magic, then society transforms into a transitional metaphysical state, and then finally it becomes a positivist society based on scientific knowledge.
That is an interesting idea, but is it a law? Will this really inevitably happen because it's a law? It's a building block of positivism itself, sure, but is it a building block of reality?
The most powerful implication of chaos theory, and complex dynamical systems theory, is that phenomena that appeared within previous frameworks to be merely random are in fact intricately ordered. This is a deterministic order, but it can't be discovered by using a linear causal form of description. It is a concept of chaos as a special sort of order, not something in opposition to it, as the title of the OP seems to suggest. It is necessary to understand how recursivity and non-linearity function to produce complex global behavior that cannot be reduced to a linear determinism. I think the lesson here is that the most vital aspect of scientific understanding is not the search for certainty but patterned relationality. A much richer and more useful form of anticipatory predictiveness becomes available to us once we give up the goal of certainty. The universe isn't certain in a mathematical sense because it is constantly changing with respect to itself, but it is changing in ways that we can come to understand more and more powerfully.
I'm much less interested in how many decimal places one can add to a particular mathematical depiction of a scientific theory than I am in how that theory organizes the phenomena that it attempts to mathematize. The sacrifice of that precision for the sake of an alternate theory which organizes events in a more intricate way is well worth the loss of precision.
I'm not threatened, I'm just having a conversation with you. Here I try to elevate the discussion above emotion, politics, or bias. It's about trying to get to the root rationale of arguments and see if they hold up. Your answer is an emotional one, not a rational one. It can take some time to adapt coming from other forums, I get it. So let's think about it again. If these old discoveries really did shake the foundation, why do people smarter than us not seem bothered, and why do they still use them?
Quoting Tarskian
I think your problem isn't with science, but when people use the word 'science' to describe something that isn't actually scientific. Science is a very rigorous method of testing, and in essence tries to prove its conclusions wrong, not prove them right. The idea is to see if something can be disproven, and if it can't, then it must be something that works with what we know today.
Quoting Tarskian
It sounds like you have an issue with Covid vaccines and gender studies. Or more importantly, perhaps you have an issue with the way some people have reported on it? Many people have opinions on the science involving these two fields, but that doesn't mean it accurately reflects the science of those two fields.
We can test this by first starting with Covid vaccines. What part are you against specifically? I am moderately familiar with the scientific consensus on the Covid vaccines, and we can see if your issue is with the science itself, or people's opinions on the science itself.
Isn't your problem with dogmatism, or a misuse and/or misunderstanding of science/positivism, instead of with science/positivism itself?
Yes, absolutely! All the various philosophical schools of thought have each contributed in their own way. Even if I criticize reductionism and favour the idea of more-is-different, there's a place for reductionism. Yet positivists are the ones that can quite easily fall into that dogmatism.
Perhaps only scientism can be defined so negatively that it is basically something derogatory. (Note that Tarskian referred to scientism, not science.)
Sorry, I did not follow the intent of the rest of your post.
The question I replied to was from @Banno, who asked: "Why should we suppose that natural languages are only countably infinite?"
I gave a proof that the set of finite-length strings over a countably infinite alphabet is countably infinite.
That is, there are at most countably many finite-length expressions, strings, words, sentences, books, in any natural language.
I was confused by your quoted point here, since a sentence is a particular kind of well-formed finite-length string. So the sentences are a proper subset of the strings. If the strings are countable, so are the sentences.
Can you clarify your post? I may have missed something.
Quoting Treatid
I'm talking syntax, not semantics. There are only countably many finite-length strings.
I do see your point. Even if there are countably many strings, each string could be given a different interpretation, so that there could be lots more meanings.
That's a point about models, or interpretations, or semantics, and I'm not sure it's the appropriate context for the question. On the other hand you think it is, so at least let me try to respond in that context as well.
If there are at most countably many axiom systems or interpretations, then there are still only countably many sentences.
The only way you could make your idea work would be to have uncountably many interpretations. Do you have that many interpretations?
Also, I haven't heard "sentence" used in this way. I thought the idea about sentences was two expressions that say the same thing, for example in different languages.
I haven't heard a sentence as you are defining it, as a syntactic string plus an interpretation. Is this something people do?
In any event, as long as you only have countably many interpretations per string, you'll still only have countably many sentences in your definition, assuming I'm understanding your post.
Quoting Treatid
Ok I see you anticipated my point. Can you suggest a context in which I can conceptualize an uncountably infinite number of contexts for interpreting a language?
And again, I haven't seen "sentence" used this way. There seems to be a blurring of syntax and semantics.
Going back to your top post:
I read the paper. It's true, but there's less there than meets the eye.
They're only talking about undefinable sets. Sets of natural numbers that can't be characterized by a predicate. There are uncountably many sets, and only countably many predicates (a predicate being a finite string over an at most countable alphabet). So "almost all," all but countably many sets of natural numbers, can't be described.
So if [math]S[/math] is one such indescribable set (whose elements are essentially random: there's no way to describe them like "the prime numbers," or "the even numbers," or whatever), we have a bunch of true facts like [math]s \in S[/math] for each element [math]s[/math] that happens to be in [math]S[/math]. And so forth.
So there are uncountably many facts about the powerset of the natural numbers that can not be expressed, but that are clearly true.
This is perfectly correct as far as it goes. But how far does it go?
There is nothing particularly interesting about a random set. It has no characteristic property that lets us determine whether a particular number is or isn't in the set. We have to look and see if it's in there. There's no rhyme or reason to the members of a random set.
These are all mathematical truths, but they're not very interesting mathematical truths.
What mathematicians do is find the interesting mathematical truths. The ones that form an overarching structural narrative of math. Perhaps the author of the paper, or Chaitin, are saying that this narrative is an illusion; that the mathematical truths we discover are a tiny, almost irrelevant subset of all the mathematical truth that's out there.
I think the opposite view could be taken. That the work of mathematicians in developing interesting, axiomatic mathematical truth, has value. It's what humans bring to the table.
How about this metaphor. Mathematicians are sculptors. Out of the uncountably infinite and random universe of mathematical truth, mathematicians carve away the irrelevant and uninteresting truths, leaving only the beautiful sculpture that is modern mathematics, those truths that we can express and prove. They're special just for being that.
Agreed. Unpredictable truths in the physical universe are usually not particularly interesting either. The difference is that we can see them, or at least observe them. That is why we know that the physical universe is mostly unpredictable. Our own eyes tell us. In order to "see" a mathematical truth, however, we need some written predicate. Otherwise, such truth is invisible to us.
If we completely ignore the unpredictable truths in the physical universe, it also gives us the impression of being beautifully and even majestically orderly. In that perception of the physical universe, there is no chaos. In that case, the physical universe also looks like a beautiful sculpture.
Quoting fishfry
Yes, and that is absolutely not the problem.
The problem is that people such as David Hilbert are convinced that the beautiful sculpture is all there is. Hilbert insisted on the idea that his colleagues had to work overtime in order to give him proof of his false belief:
The vast majority of people still see mathematical truth like Hilbert did. They still see mathematical truth as a predictable and harmonious orchestra of violins.
Yes, you seem to know it perfectly fine. Most people, however, don't know it, simply because they don't want to know it.
They believe that one day we will discover the fundamental knowledge to see the entire physical universe also as a beautiful sculpture. We have already discovered the fundamental knowledge of arithmetic. Its axioms are known already and arithmetical truth is absolutely not a beautiful sculpture. Instead, it is uncountably infinite and random.
Sure. What this argument purports to show is that a natural language has no fixed cardinality. And this is what we might expect, if natural language includes the whole of mathematics and hence transfinite arithmetic.
But the point is that "...the collection of all properties that can be expressed or described by language is only countably infinite because there is only a countably infinite collection of expressions" appears misguided, and at the least needs a better argument.
Your posts sometimes take maths just a little further than it can defensibly go.
Quoting fishfry
Not I, but Langendoen and Postal. If you wish you can take up the argument, I'm not wed to it, I'll not defend it here. I've only cited it to show that the case is not so closed as might be supposed from the Yanofsky piece. Just by way of fairness, Pullum and Scholz argue against assuming that natural languages are even infinite.
Langendoen and Postal do not agree that "a natural language consists of a collection of finite-length strings".
Does mathematics also "consists of a collection of finite-length strings made from an at most countably infinite alphabet"?
Also, doesn't English (or any other natural language) encompass mathematics? It's not that clear how, and perhaps even that, maths is distinct from natural language.
All of which might show that the issues here are complex, requiring care and clarity. There's enough here for dozens of threads.
The lemma that the number of possible expressions in language is countably infinite is actually a core argument in Yanofsky's paper.
There is a simple proof for the lemma that language is countably infinite. Yanofsky's paper does not mention it, but the proof is trivial: every expression is a finite string over an at most countable alphabet, and the finite strings can be enumerated by length and then within each length, so they map one to one onto the natural numbers.
Langendoen and Postal argue in "The vastness of natural languages", 1984, that natural-language sentences can be infinitely long.
Yanofsky, on the other hand, assumes that language sentences, especially predicate formulas that describe natural-number subset properties in ZFC, are necessarily finite.
Even though infinitary logics allow for infinitely long predicate formulas, these cannot be represented in language, but only by their parse trees.
Hence, logic statements represented by language alone cannot be infinite in size. Therefore, the language of ZFC is still countably infinite.
Overcoming this constraint would require the use of meta-programs instead of predicates as set membership functions that have infinite while loops -- beyond primitive recursive arithmetic (PRA). These programs can then generate infinitely long predicates in the language of ZFC to describe Yanofsky's subsets. The use of such predicate-generating meta-programs instead of predicates as set membership functions is not supported in the language of ZFC.
Furthermore, this would still not help, because there are only countably infinite programs. There would still not be enough programs to describe all the uncountably infinite subsets of the natural numbers.
I'll check out those links. But if they deny natural languages are even infinite, then they surely aren't uncountable.
I do think natural language is infinite, in the sense that there are infinitely many legal sentences. The sun rises in the morning, I know the sun rises in the morning, I know that I know the sun rises in the morning, etc. There is a countable infinity of those.
Quoting Banno
I'm confused by that. If you allow infinite length strings then there are uncountably many of them, though most aren't grammatical. Are they making an argument about grammatical constraints?
Quoting Banno
Formally, yes. Every mathematical statement or proof has finite length. There are only countably many mathematical statements. That's the argument in the paper. There are uncountably many truths, but only countably many of them can be expressed.
Quoting Banno
As formal systems, probably not much difference. But natural language is much messier than math, I'm not even sure if a computer could determine whether a string is legal in natural language. Not once we include slang and the language of pop culture and the young.
Quoting Banno
Well I argued in my earlier post that there's less to that paper than meets the eye. The inexpressible truths in the paper are trivial and unimportant. I wonder if there are nontrivial truths that can't be expressed, and what that would even mean.
I wonder about that. Think of the slang the kids of every generation come up with. No algorithm could predict that. Humans not just using, but constantly recreating their own language. New words and ideas and phrases are constantly coming into existence, sometimes gaining traction in the general culture, sometimes fading away. I think that the way humans evolve their own languages in so many ways, is something that humans can do that perhaps algorithms can't. I'll just put that out there.
This reminds me a little of Chomsky's generative grammars. He revolutionized linguistics by saying there are structural elements common to all human languages. Not programs, per se, but structures. Just my impressions, that's all I know about it.
That should read "countably infinite." We can think of endless permutations of language, but we could also spend and infinite amount of time saying the names of the reals between any two natural numbers.
Nicely phrased. Our new chum is propounding much more than is supported by the maths. Here and elsewhere.
And we are faced again with the difference between what is said and what is shown.
So will we count the number of grammatical strings a natural language can produce, and count that as limiting what can be - what word will we choose - rendered? That seems somehow insufficient.
And here I might venture to use rendered as including both what can be said and what must instead be shown.
Somehow, despite consisting of a finite number of characters, both mathematics and English allow us to discuss transfinite issues. We understand more than is in the literal text; we understand from the ellipses that we are to carry on in the same way... And so on.
But further, we have a way of taking the rules and turning them on their heads, as Davidson shows in "A nice derangement of epitaphs". Much of the development of maths happens by doing just that, breaking the conventions.
Sometimes we follow the rules, sometimes we break them. No conclusion here, just a few notes.
* just for @ssu
Any proof will contain at most a finite number of characters. At least for us finite entities.
Quoting Banno
That's actually not Cantor's theorem (the power set of any set has a strictly greater cardinality than the set itself).
What Cantor shows is that there cannot be a bijection between the natural numbers and the reals by reductio ad absurdum. That's it. Notice that it's an indirect proof. And notice that already from this we have an open question, the Continuum Hypothesis.
Yet this doesn't stop Cantor from treating uncountable infinities as normal ones, and he continues adding things up in a cascading system of larger and larger infinities while trying to evade paradoxes. Many mathematicians even today are doubtful of this, even if they might not be mainstream.
(1) The theorem known as 'Cantor's theorem' has the key part ('P' for 'the power set of'):
For all x, there is no function from x onto Px.
Proof:
Let g be a function from x to Px.
Let D be {y | y e x & ~ y e g(y)}.
If D were in the range of g, say D = g(y) for some y in x, then y e D if and only if ~ y e D, which is impossible. So D is not in the range of g.
So g is not onto Px.
That's a direct proof. And it's constructive: Given any function g from x to Px, we construct a member of Px that is not in the range of g.
(2) Cantor's other famous proof in this regard ('w' for 'the set of natural numbers'):
There is no function from w onto the set of denumerable binary sequences
Proof:
Let g be a function from w to the set of denumerable binary sequences
Let d be the denumerable binary sequence such that:
for all n in w, d(n) = 0 if g(n)(n) = 1 and d(n) = 1 if g(n)(n) = 0.
Then d differs from g(n) at place n, for every n in w. So d is not in the range of g.
So g is not onto the set of denumerable binary sequences.
That's a direct proof. And it's constructive: Given any function g from w to the set of denumerable binary sequences, we construct a denumerable binary sequence that is not in the range of g.
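Not part of the post above, but a small sketch of the construction in (2), with sequences represented as Python functions from naturals to {0,1}; the sample listing g is an arbitrary choice for illustration.

```python
def diagonal(g):
    """Given a listing g, where g(n) is the n-th denumerable binary sequence
    and g(n)(k) is its k-th bit, return the sequence that flips the n-th bit
    of the n-th listed sequence. It differs from every g(n) at position n,
    so it is not in the range of g."""
    return lambda n: 1 - g(n)(n)

# Usage: a sample listing in which the n-th sequence is constantly n mod 2.
g = lambda n: (lambda k: n % 2)
d = diagonal(g)
print([d(n) for n in range(8)])       # 1, 0, 1, 0, ... differs from g(n) at n
print([g(3)(k) for k in range(8)])    # the 3rd listed sequence, for comparison
```

Note that, as the post says, the construction is direct: nothing is assumed about g listing everything.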
/
Cantor did propose answers to the paradoxes (though his answers are not in the axiomatic method) but I don't know that Cantor's showing that there are always sets of larger infinite size was meant to evade the paradoxes. Indeed, it is the fact that there are always sets of larger infinite size that allows a paradox in Cantorian set theory. Cantor's answer to that paradox is another matter.
Given a theory adequate for a certain amount of arithmetic, for example, PA, it's redundant to say that the theory proves all its theorems. But if the theory is formal and consistent, then there are truths of arithmetic that are not provable in the theory. This has nothing to do with who "created" the theory.
Given a particular countable language and meta-theory with a countable alphabet:
This is correct:
Given a countable set of symbols, there are exactly denumerably many finite sequences of symbols, thus exactly denumerably many sentences.
There are uncountably many subsets of the set of sentences. And any set of sentences can be a set of axioms. Therefore, there are uncountably many theories. But there are only countably many ways to state a theory, so there are theories that are not statable.
I'm pretty sure this is correct:
There are exactly denumerably many algorithms. And for every formal theory and set of axioms for that theory, there is an algorithm for whether a sentence is an axiom. So there are only countably many formal theories.
Given a language for a theory, trivially, there are uncountably many interpretations for the language, since any non-empty set can be the universe for an interpretation, and there are not just countably many sets. But there are only countably many ways to state an interpretation, so there are interpretations that are not statable.
Given any theory, there are uncountably many models of the theory, since there are uncountably many isomorphic models of the theory. But there are only countably many ways to state a model, so there are models that are not statable.
Not in mathematical logic.
A sentence is provable from a set of axioms and set of inference rules if and only if there is a proof sequence (or tree, tableaux, etc.) resulting with the statement.
A sentence is true in an interpretation if and only if the sentence evaluates as true, per the inductive clauses, in the interpretation.
And we have the soundness theorem:
If the axioms used in a proof are all true in a given interpretation, then the proven sentence is true in that interpretation.
So, proving a sentence is not in and of itself proving the truth of a sentence. Rather, we have that if all the axioms used are true in a given interpretation, then the proven sentence is true in that interpretation.
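A toy propositional illustration of that last distinction (my own example, not from the post): a sentence proved from axioms is guaranteed true only in the interpretations where those axioms are true.

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# Toy case: axioms {P, P -> Q}; the proven sentence is Q (by modus ponens).
for P, Q in product([False, True], repeat=2):
    axioms_true = P and implies(P, Q)
    print(f"P={P!s:5} Q={Q!s:5} axioms true: {axioms_true!s:5} proven sentence Q: {Q}")

# In every row where the axioms are true, Q is true as well (soundness);
# in the other rows Q may be false, so "provable" is not "true" simpliciter.
```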
Quoting ssu
No, diagonalization does not require indirect proof.
Proof of incompleteness is usually constructive. Given a system of a certain kind, we construct a sentence in the language for the system such that the sentence is true in the standard interpretation for the language (or, more informally, true of arithmetic) but not provable in the system.
Also, there are two kinds of proof by contradiction:
Assume P, derive a contradiction, infer not-P. That method is not generally controversial.
Assume not-P, derive a contradiction, infer P. That method is not accepted by intuitionists.
/
It seems people get false Internet memes in their head. I don't know where these memes originate, but they are ubiquitous and persistent in forums.
I think the theorem you have in mind is that there is no algorithm that decides whether a program and input halt. The proof uses diagonalization. But, again, the proof is constructive. Given an algorithm, we construct a program and input such that the algorithm does not decide whether the program halts with that input.
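To make that construction concrete, here is the familiar sketch in Python; `halts` is a hypothetical total decider passed in as a parameter (no such function exists; assuming one is exactly what the construction refutes).

```python
def contrary_factory(halts):
    """Given any claimed total decider halts(program, data) -> bool,
    build the program that the decider must get wrong."""
    def contrary(program):
        if halts(program, program):
            while True:       # the decider says "halts", so loop forever
                pass
        else:
            return "done"     # the decider says "loops", so halt at once
    return contrary

# Usage with a toy (necessarily wrong) decider that always answers True:
toy_decider = lambda program, data: True
c = contrary_factory(toy_decider)
# toy_decider claims that c run on itself halts, but by construction c(c)
# would loop forever, so toy_decider is wrong about c. The same trap catches
# every candidate decider, which is the point of the diagonal argument.
```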
The second incompleteness theorem is:
If S is a formal, consistent theory that is adequate for a certain amount of arithmetic, then S does not prove the consistency of S.
Godel doesn't give a full proof for the second incompleteness theorem in his famous paper. But the details are supplied in subsequent articles and textbooks by other authors.
Some of the elements of B are infinite. Those members of B that are infinite don't have a finite conjunction, but all natural language expressions are finite.
Do you mean the diagonalization lemma applied to the negation of the provability predicate?
The diagonalization lemma doesn't have an existential quantifier like that. The diagonal lemma is:
If F(x) is a formula, then there exists a sentence S such that ('#' for 'the numeral for the Godel number of'):
PA |- S <-> F(#(S))
Applied to the negation of the provability predicate ('P' for the provability predicate):
There exists a sentence G such that:
PA |- G <-> ~P(#(G))
So the existence statement is in the meta-theory, not in PA.
He proposed the project. But he insisted that all of them undertake it? Moreover, is there even one colleague to whom Hilbert insisted the colleague undertake it?
What untested vaccines? (Of course, they're untested for the people who are taking them in tests.)
Quoting TonesInDeepFreeze
As a non-mathematician/logician, I'm not familiar with the terminology. So is it sentence - proof sequence - axioms? I still assume there is a link between the sentence and the set of axioms.
Quoting TonesInDeepFreeze
Diagonalization itself of course doesn't require an indirect proof. What I meant is that it itself is an indirect proof: first it is assumed that all the reals, let's say in the range (0 to 1), can be listed, and from this list, through diagonalization, a real is made that cannot be on the list. Hence not all the reals can be listed, and hence there is no 1-to-1 correspondence with the natural numbers. Reductio ad absurdum.
Perhaps using the "negative self-reference" would be better if the reductio ad absurdum proof isn't exactly about changing something on a diagonal.
Quoting ssu
Quoting TonesInDeepFreeze
Exactly.
Quoting TonesInDeepFreeze
Yes. Obviously Turing constructed a quite important and remarkable proof for the uncomputability of the Entscheidungsproblem. But is that constructiveness a problem?
So, according to your remark, the diagonal lemma should be phrased as:
instead of :
This is an extremely subtle difference because A still needs to be phrased in the language of PA. Therefore, A must also be a sentence in PA.
Another problem is that PA is its own meta-theory in this case. That is the whole point of encoding ⌜A⌝ as a natural number. We thereby make use of the fact that PA is capable of self-reflection.
If PA is not its own meta-theory in this case, what theory is then the meta-theory?
These vaccines were obviously not tested for long-term consequences.
If it habitually takes 10 to 15 years to test a new drug for long-term consequences, then I do not want to use a drug that was tested for at most 6 months.
David Hilbert had a habit of drawing the attention of the entire mathematical world to his agenda. He also successfully did that with his 23 problems:
Hilbert's list of 23 problems actually still had merit. There was no hidden agenda in them. His program, however, was about proving a falsehood, such as "mathematics is complete" (i.e. there is a proof for every truth). Hilbert wanted other mathematicians to find proof for his misguided ideology.
I should add a bit more to this. Arithmetic is a tool we've created from logic. It is logic which proves arithmetic, not arithmetic itself.
A proof is a sequence of sentences such that every sentence is either an axiom or follows by a rule of inference from previous sentences in the sequence.
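That definition is mechanical enough to check by machine. A toy sketch, assuming a propositional system whose only rule is modus ponens and whose formulas are nested tuples like ('->', 'P', 'Q'); everything here is illustrative, not PA:

def is_proof(sequence, axioms):
    # Check: every line is an axiom or follows by modus ponens
    # from two earlier lines in the sequence.
    for k, line in enumerate(sequence):
        if line in axioms:
            continue
        earlier = sequence[:k]
        if any(('->', premise, line) in earlier for premise in earlier):
            continue
        return False
    return True

# Tiny usage example with made-up axioms:
axioms = {'P', ('->', 'P', 'Q')}
print(is_proof(['P', ('->', 'P', 'Q'), 'Q'], axioms))  # True
print(is_proof(['Q'], axioms))                         # False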
Quoting ssu
No, as I said, Cantor did not make that reductio assumption. Again:
Let g be an arbitrary list of denumerable binary sequences. (We do NOT need to ASSUME that this is a list of ALL the denumerable binary sequences). Then we show that g is not a list of all the denumerable binary sequences.
Quoting ssu
Church is the one who addressed the Entscheidungsproblem. Turing proved the unsolvability of the halting problem. My point was that Turing's proof is constructive.
As I understand, we agree. Godel gave an outline that leaves out needed details. But you said he was "slippery". I don't see what is slippery about it.
Arithmetic can be reduced entirely to logic. However, logic can also be entirely reduced to arithmetic. You only need to do it for one universal gate, NAND (or NOR), because all other logic can be implemented with it:
NAND(x,y) = 1 - (x*y)
Therefore, logic and arithmetic are perfectly bi-interpretable. If you can do the one, you can automatically also do the other.
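A small sketch of the "logic from arithmetic" direction claimed above, using only the arithmetic definition NAND(x,y) = 1 - x*y on {0,1}; this illustrates gate universality, not a derivation of PA's axioms from logic:

def NAND(x, y):
    return 1 - (x * y)

def NOT(x):     return NAND(x, x)
def AND(x, y):  return NOT(NAND(x, y))
def OR(x, y):   return NAND(NOT(x), NOT(y))
def XOR(x, y):
    a = NAND(x, y)
    return NAND(NAND(x, a), NAND(y, a))

# Quick truth-table check over {0, 1}:
for x in (0, 1):
    for y in (0, 1):
        print(x, y, '->', NOT(x), AND(x, y), OR(x, y), XOR(x, y))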
I would not write it that way. But, yes, my point is that the existential quantifier is in the meta-language and not in the scope of the turnstile.
Quoting Tarskian
What I wrote:
Quoting TonesInDeepFreeze
There is no mess up of what is in the meta-language and what is in the object language there. Look at virtually any article or textbook to see that they are, modulo stylistic and symbol choices, along those lines and not with the existential quantifier in the scope of the turnstile.
Quoting Tarskian
I don't know. It is a tricky point that I wish I understood better (especially since I suspect we would need to be careful about where the consistency assumption occurs). Since it is not needed for PA to be the meta-theory, personally, I think of some other meta-theory. In any case, in ordinary expositions, the existential quantifier is not written after the turnstile. Doing so would mix up the meta-language and the object language, and would indeed get us embroiled in keeping straight what are the theorems of PA as opposed to what are the theorems about PA. Even if PA could be its own metatheory (I don't know whether it can be while consistent), it is not good to write the meta-theorems such that they are both in the meta-theory and the object theory, lest the exposition become quite confusing. Moreover, when we move on to mention 'truth', the language for PA cannot be its own meta-language.
Quoting Tarskian
Whatever you choose that is adequate. Godel performed the proof in ordinary mathematics and reasoning in natural language. On the other hand, set theory works even though it is more than we need. And, though I don't know the details, PRA would serve as an admirably lean basis.
Correct, of course.
I don't know that it was a habit, but of course he made his agenda prominent. But that is a far cry from insisting that all his colleagues work on it.
Quoting Tarskian
He wanted to prove something. He did not claim that it could not turn out to be untrue. He expected that it would be true, but that's hardly even a foible. And "ideology" suggests connotations that I don't see as warranted.
What specific reduction do you have in mind?
Yes, I have noticed. But then again, I have never understood it like that. In fact, I have just always ignored it. I have always seen it as PA talking about itself.
Of course, some other theory could talk about PA, but the textbooks or papers do not seem to elaborate any examples. ZFC would not be a good candidate because it is too much like PA. ZF-inf is even bi-interpretable with PA. I have only ever run into one such example, i.e. Goodstein's theorem, where the theorem is in PA while the proof is in ZFC.
Quoting TonesInDeepFreeze
Yes, agreed. The truth of PA is in ZFC. But then again, where is the truth of ZFC? The only answer that I have found to this question is the following nebulous explanation on math exchange:
In my impression, model theory is only straightforward for the simple case of PA being interpreted by ZFC's truth. Everything else seems to be smoke and mirrors.
Lost me. There are uncountably many reals between any two natural numbers.
Not sure what's being argued here. The cardinality of natural language is not relevant to the thread, it's a side topic. It is a fact that there are only countably many finite-length strings over an at most countably infinite alphabet. I think that addresses the issue, but perhaps there's some disagreement.
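Since the countability point keeps coming up, here is a minimal sketch of an explicit enumeration (over a small finite alphabet for simplicity; for a countably infinite alphabet one would dovetail, with the same conclusion):

from itertools import count, product

def enumerate_strings(alphabet):
    # Yield every finite string over the alphabet, shortest first.
    yield ''
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield ''.join(chars)

gen = enumerate_strings('ab')
print([next(gen) for _ in range(10)])  # '', 'a', 'b', 'aa', 'ab', 'ba', 'bb', ...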
"How Computers Work: Arithmetic With Gates"
http://www.goodmath.org/blog/2022/12/26/how-computers-work-arithmetic-with-gates
sum = x XOR y
carry = x AND y
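A hedged expansion of those two lines into a full ripple-carry adder, to show whole-number addition being carried out purely with gates; this is my own illustration, not code from the linked article:

def XOR(x, y): return x ^ y
def AND(x, y): return x & y
def OR(x, y):  return x | y

def full_adder(x, y, carry_in):
    s = XOR(XOR(x, y), carry_in)
    carry_out = OR(AND(x, y), AND(carry_in, XOR(x, y)))
    return s, carry_out

def add(a_bits, b_bits):
    # Add two equal-length bit lists, least significant bit first.
    carry, result = 0, []
    for x, y in zip(a_bits, b_bits):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    return result + [carry]

# 3 (011) + 5 (101), least significant bit first:
print(add([1, 1, 0], [1, 0, 1]))  # [0, 0, 0, 1], i.e. 8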
Chumming the water, is our new chum. But actually I'd heard the same claim from Chaitin, and it wasn't till I read the paper referenced by the OP that I learned that this is a very trivial observation related to undefinable real numbers, and not a major insight. But perhaps Chaitin has taken it deeper.
Quoting Banno
Yes, a point made by the constructivists I believe. Math talks about the infinite but math itself consists of finite-length proofs.
Quoting Banno
Moby Dick didn't really kill Ahab, even though we may have enjoyed the story. Moby and Ahab never existed. We can always use language to describe impossible things. "Fly me to the moon and let me play among the stars." What kind of pedant would complain about the illogic?
Quoting Banno
Lost me here. Anyway I don't think I should get too involved in the question of natural language, even if I believe my countability argument applies. There are only countably many finite-length strings over a countable alphabet. I just don't see that there's any more to say; but again, I'd rather talk about Chaitin's idea than get tangled up in the complexities of natural language.
Quoting Banno
It's like a trip to the moon on gossamer wings. We have no trouble expressing the thought, even though it describes something that's not possible, even if we knew what gossamer wings are. Unless by gossamer wings we mean the hardware of the Apollo missions.
Symbolic language lets us express many fanciful ideas. "Alice laughed. 'There's no use trying,' she said. 'One can't believe impossible things.'
'I daresay you haven't had much practice,' said the Queen. 'When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast.'"
Of course Lewis Carroll was a logician as well as a writer of children's stories, and perhaps he was making this very same point: that with language, we CAN believe, or at least express, impossible things.
Quoting Banno
Yes. Math is all about believing impossible things! A point that I'm sure did not escape Carroll's notice.
Quoting Banno
Yes.
The ordinary way of writing it up is quite understandable, in the context of Godel's original paper and in the context of just about any article or book on it. Your way, though, requires me to regard PA as the meta-theory, which is not a required assumption for the proof.
Yes, PA can "talk about itself" in a certain sense (though I don't know that it can formulate all of the proof when it is a proof about PA). But we don't ordinarily stipulate that the proof about PA is being given in PA. In an ordinary exposition of the proof, it seems to be it would be, at best, an invitation to a lot of confusion to presuppose that the proof is being given in PA.
Quoting Tarskian
In, for example, ZFC + "there exists an inaccessible cardinal".
But, yes, of course, that is not an epistemological basis. But that issue is aside the point that one does not ordinarily stipulate that PA itself is the meta-theory.
Quoting Tarskian
I don't know your criteria for straightforwardness, but model theory is rigorously developed, though it does use infinitistic mathematics.
Quoting Tarskian
What is an example?
It works fine for me, as long as it talks about ZFC models of PA. Model theory also uses infinitistic mathematics in that context. I have no problem with that.
I only give up when it talks about the models of ZFC. At that point, it is no longer capable of cleanly separating provability and truth: "How can a model of ZFC be a set, if we want to use ZFC to study sets?"
Concerning "smoke and mirrors", well that is about models of ZFC, where the theory is essentially its own model.
From my quick perusal, there's a writeup there about carrying out arithmetic with logic gates.
But to say that arithmetic can be reduced to logic requires showing, for example, the derivation of the axioms of PA from only logical axioms, or even more basically to define the non-logical primitives of PA from logical primitives. And those ain't gonna happen.
You don't need the axioms of PA to carry out arithmetic. It will work perfectly fine without. You just won't have a theory about it.
Of course we don't need any axioms to do a whole bunch of arithmetic. But just doing a bunch of arithmetical computations is not, in the context of mathematical logic or philosophy of mathematics, what people ordinarily mean by reducing arithmetic to logic.
I considered that argument, that there might be uncountably many theories (interpretations) but I don't think it's correct. An interpretation must have finite length, yes? There are only countably many FINITE subsets of a countable set.
Quoting TonesInDeepFreeze
That's what I believe is true. Only countably many interpretations of each sentence. So the poster (sorry I didn't look back to find out who) who said natural language could be uncountable, is wrong. IMO anyway.
Quoting TonesInDeepFreeze
I'd have to give that more thought.
Quoting TonesInDeepFreeze
I don't know. Not knowledgeable about model theory.
No disagreement here, and I agree it's a side issue.
Practically speaking, the sum total of all natural language that has ever or will ever be spoken/thought/etc. in the visible universe is finite anyhow. There are in fact, far stricter physical limits on how many truths could ever actually be expressed, which seems more relevant to the OP. If you could make every proton in the visible universe represent an entire sentence that only gets you to 10^80 or so sentences, a very far cry from an infinite number of truths.
No, theories and interpretations are different things. But, yes, I did give reasoning by which there are uncountably many theories and uncountably many interpretations for even just one language.
Quoting fishfry
I didn't say that there are only countably many interpretations of a given sentence. Indeed, there are uncountably many interpretations, as I explained.
Quoting fishfry
There are only countably many expressions. But there are uncountably many interpretations even of just one sentence.
Set theory says what the requirement for being a model of set theory is. But set theory (if it is consistent) does not claim that there is a model that meets that requirement. And set theory proves that set theory is consistent if and only if set theory has a model. Also, set theory talks about inner models, which is a figure of speech for relativization, which also is not problematic.
ZFC is a set of sentences. Any consistent set of sentences has models, and the universe of a model is a non-empty set. There is no incapability of distinguishing the definitions of 'provable in a theory' from 'true in a model'.
Quoting Tarskian
That is nonsense. Theories are not models and models are not theories.
The math exchange answer says something quite confusing in that regard:
A model is not a metatheory, but "we need to work in" one "to study the model". So, as one of the options, we could "work in" ZFC itself to study its models.
I don't say that it is wrong. I just say that it is highly confusing.
I find that quote quite understandable, quite clear and not confusing.
In a meta-theory we define 'is a model' and we talk about models for languages for a theory, and we talk about models of theories.
Indeed, it looks like it doesn't matter that ZFC is its own model metatheory. For example, with ZFC being PA's model metatheory, the Löwenheim-Skolem theorem does not seem to be particularly PA-specific:
It does not seem to matter for which object theory the cardinals κ are being considered.
The following math exchange answer suggests that explicitly mentioning the model's metatheory is not even a requirement:
Well, in my example (which is common), I was referring to reals between 0 and 1, not ALL reals.
I think that we aren't understanding each other here:
If you say " Then we show that g is not a list of all the denumerable binary sequences." Isn't then that g is not in the list of all these sequences exactly constructed by diagonalization? And here the negative self-reference is that g is not on this list, the list of (in your version) all the denumerable binary sequences.
Quoting ssu
Quoting TonesInDeepFreeze
Not quite, even if it's great that someone remembers Church's role (although he is remembered whenever we refer to the Church-Turing thesis). Alan Turing's paper is called "ON COMPUTABLE NUMBERS, WITH AN APPLICATION TO THE ENTSCHEIDUNGSPROBLEM".
Where Turing states:
It's Turing's paper itself where we get "the halting problem" of a Turing Machine (in the paper referred to simply as a machine; the name "Turing Machine" was given by his teacher, Alonzo Church).
Translation does not mean you did not need to understand logic first to discover math. I don't mean formal annotated logic, I mean 'logical thinking'.
I also believe that a good measure of logical thinking is built into our biological firmware, but so is quite a bit of arithmetic:
I think that we were born with it and that we can do quite a bit of it out of the box.
Oh, no debate there. I'm just noting that to express the formal logic of arithmetic, you have to logically ascertain what '1' is as an abstract. What '+' means as an abstract. What '=' means as an abstract. You can't use math to prove math, because you must first invent the symbols and meanings that math is based off of.
This seems like a side track off of the larger conversation at this point however.
Math is confusing. It's far closer to philosophy than mathematicians and logicians want to admit.
For example, I've followed Chaitin's story and his efforts with the Omega number and AIT, on and off, for about 20 years now, and seen how he has even gotten ad hominem attacks (actually from my own university, from which I graduated). Only now does it seem that Gregory Chaitin is getting respect.
Quoting Lionino
I wouldn't go for ad hominems, but for me this thread is informative. So hopefully nobody is banned and the tempers don't rise too much.
I myself follow the rule that if two or more PF members say you are wrong (not just that they oppose your view in some way), with nobody agreeing with you, then you might really look sincerely at where the error lies.
It doesn't matter whether we're proving that there is no list of all the reals, or no list of all the reals between 0 and 1, or (as in Cantor's proof) no list of all the denumerable binary sequences.
The point is that we don't need to assume for a reductio (and Cantor did not do that).
If it's about reals between 0 and 1, then let g be any list of reals between 0 and 1 (we don't need to assume for a reductio that it is a list of all reals between 0 and 1). Then we construct a real between 0 and 1 that is not listed.
Quoting ssu
g is not in the list since g is not a real between 0 and 1. Rather, we construct a real between 0 and 1 that is not in the range of g.
Cantor did it for the denumerable binary sequences: Let g be a list of denumerable binary sequences. We construct a denumerable binary sequence that is not in the range of g.
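A small sketch of exactly that construction, with a list modelled as a function g(n) returning the n-th sequence (itself a function from naturals to {0,1}); note that no reductio assumption appears anywhere:

def antidiagonal(g):
    # Return the sequence d with d(n) = 1 - g(n)(n), so d differs from
    # the n-th listed sequence at position n, hence is not in the range of g.
    return lambda n: 1 - g(n)(n)

# Example list: g(n) is the sequence that is 1 at position n and 0 elsewhere.
g = lambda n: (lambda k: 1 if k == n else 0)
d = antidiagonal(g)
print([d(n) for n in range(8)])     # [0, 0, 0, 0, 0, 0, 0, 0]
print([g(3)(k) for k in range(8)])  # the 3rd listed sequence, for contrast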
/
Entscheidungsproblem. My mistake. I overlooked that Turing proved both the unsolvability of the halting problem and the unsolvability of the Entscheidungsproblem. Church also independently proved the unsolvability of the Entscheidungsproblem.
In this thread I got exposed to a part of model theory that I have always avoided (models of ZFC) because I have always found it highly confusing. (I had always restricted myself to dealing with models of PA.) After having a few exchanges on the subject, the subject has actually become slightly less confusing. That's actually some progress. I hope that I no longer confuse a model with its metatheory (that problem does not really have the opportunity to occur when dealing with models of PA). Maybe I will less actively avoid the subject in the future. I also think that mathematics has its deep philosophical aspects. As far as I am concerned, model theory has never been a goal in itself. I mostly get confronted with it when I read about something else which happens to be connected. I will probably still never read about model theory just for its own sake.
g is a list of denumerable binary sequences, and we construct a denumerable binary sequence not listed by g.
Or if reals are addressed:
g is a list of reals, and we construct a real not listed by g.
And you see now that a reductio argument is not needed; indeed Cantor did not use a reductio argument.
OK, so let me try to get your viewpoint here: having the list g and constructing the real that is not on the list isn't itself using reductio ad absurdum. Yes, this is obvious to me also.
However, I still insist that the proof that no 1-to-1 correspondence exists between the natural numbers and the reals is a proof by contradiction, where you use what was done above. So when I talked about diagonalization, being an amateur here, I also referred to its consequences and to this (the list, and the anti-diagonal construct which isn't on the list) being used as part of a wider proof. For you, diagonalization is just the part with the list g and the construction of the real that isn't in it. I can understand that totally.
Either this is the issue, or then I have to try to spend even more of your precious time.
If there is a bijection then there is a surjection
There is no surjection.
Therefore, there is no bijection.
No need for a reductio assumption.
Yes,
But if you start from the claim that there is no bijection, and then prove it by:
If there is a bijection then there is a surjection
There is no surjection.
Therefore, there is no bijection.
Isn't that a proof by contradiction? That was my point.
If that is considered a form of reductio ad absurdum, then every proof of a negation is proof by a form of reductio ad absurdum.
In a natural deduction system, the way to prove a negation ~P is to assume P, derive a contradiction, and infer ~P.
In an ordinary Hilbert system, the way to prove a negation ~P is to prove, for some Q, P -> Q and ~Q, and infer ~P.
Yes, those are like "cousins" of one another. And they can be derived from one another as derived rules in the systems.
But again, if using modus tollens is considered a form of reductio ad absurdum, then any proof of a negation is a form of reductio ad absurdum.
Note that both of those are intuitionistically valid. What are not intuitionistically valid are:
Assume ~P, derive a contradiction, and infer P.
~P -> Q and ~Q, and infer P.
/
Also there are different terminologies:
reductio ad absurdum
indirect proof
proof by contradiction
So we need to be clear whether the intuitionistically valid form or the intuitionistically invalid form or both are referenced.
/
You mentioned 'indirect proof' and you said:
Quoting ssu
My point was that we do not need to assume that all the reals are listed. "All the reals are listed" would be P in the remarks above.
Now you've switched to pointing out that modus tollens is used.
And this has nothing to do with anti-diagonalization.
So just to make things clear, I'll ask again:
Quoting ssu
Now, why I'm ranting so much about negative self-reference or diagonalization, which I acknowledge I haven't accurately defined, is that it crops up so easily in many important findings. Yet what is lacking is a general definition.
Here's a video explaining this perhaps better than me:
Could this be put in even simpler terms?
Here's the general theorem in the setting of category theory. It's called Lawvere's fixed point theorem. Not necessary to understand it, just handy to know that all these diagonal-type arguments have a common abstract form.
In mathematics, Lawvere's fixed-point theorem is an important result in category theory. It is a broad abstract generalization of many diagonal arguments in mathematics and logic, such as Cantor's diagonal argument, Russell's paradox, Gödel's first incompleteness theorem and Turing's solution to the Entscheidungsproblem.
I gather the video was about that, but the Wiki page is more to the point and takes far less time to not understand :-)
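For reference, the usual statement, paraphrased from memory, so treat the exact wording as an assumption rather than a quotation:

% In a cartesian closed category:
\text{if } \phi : A \to B^{A} \text{ is point-surjective, then every } f : B \to B \text{ has a fixed point } s \text{ with } f(s) = s.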
Quoting ssu
Not necessary to use reductio. Cantor's diagonal argument says that any list of reals is incomplete. We can prove it directly by showing that any list of reals (not an assumed complete list, just any arbitrary list) is necessarily missing the antidiagonal. Therefore there is no list of all the reals.
We're talking about different things. I'm talking about formal theories and interpretations of their languages as discussed in mathematical logic, and such that theories are not interpretations.
I don't want to watch a video right now.
Exactly.
I gave you a very detailed answer. I can't do better than what I already wrote. Or, if you like, let me know what you don't understand in my post.
I'm talking about interpretations for languages as discussed in mathematical logic.
There are uncountably many sets, so there are uncountably many universes for interpretations.
Or, another way: Consider just one uncountable universe. Let the language have at least one individual constant. Then there are uncountably many interpretations, as each one maps the constant to a different member of the universe.
I don't propound the notion that that approach could be adapted for natural languages too, but it doesn't seem unreasonable to me.
Seeing just that one phrase from the great song made my night. Such a soul satisfyingly beautiful song by a gigantically great composer.
enderton page ref please or st*u. second time i'm calling your bluff on references to your magic identity theory.
Stop agreeing with me, that's no fun!
(edit) So you see I do know some logic after all!
ok
Quoting TonesInDeepFreeze
You're alternately insulting and praising me. Make up your mind!
Did you mean for that to be in the 'Infinity' thread?
In that thread, you've now seen that I already had given you the Enderton pages yesterday and I gave them to you even though you had not asked for them. There's no bluff and never has been. I've been giving you post after post of correct corrections, information and explanations. It's not my fault that you regard that as inimical.
I know you're kidding. But underneath there lies an actual point for me, which is that I don't think you know how insulting you are in certain threads when you read (if it can be called 'reading') roughshod over my posts, receiving them merely as impressions as to what I've said, so that you so often end up completely confusing what I've said and then projecting your own confusions onto me.
But I do appreciate that you quoted Cole Porter's so charming and magical lyric. And there was another special musical moment for me today, so my evening was graced.
It seems that from you I get extremely good answers. Yes, Lawvere's fixed point theorem was exactly the kind of result that I was looking for. It's just typical that when the corollaries are discussed themselves, there is no mention of this. I'll then have to read what Lawvere has written about this.
And that "not necessary" is important for me. This is what @TonesInDeepFreeze was pointing out to me also. I'll correct my wording on this.
Thank you.
Quoting ssu
If you're interested in this stuff, do you know the nLab Cafe? It's a category theory wiki. Here's their page on the theorem
It's all very categorical. Like a new paradigm for thinking about math.
Quoting ssu
I'm not sure how the subject came up. It's interesting to know that all these diagonal type proofs can be abstracted to a common structure. They are all saying the same thing.
If I crossed any lines, I apologize. But I think you are equivocating on the word "insult." If I tell you, "Tones, you are a low down rotten varmint who cheats at cribbage!", that's an insult.
But if I don't happen to dwell on every word you write; and if I often find your expository prose convoluted and unclear, especially when you lay out long strings of symbols without any context; my eyes do glaze over, and I do skip things.
That is not an insult. It's just me being me, reacting to whatever you wrote that made my eyes glaze. The fault is all mine. But that's who I am and how I am. I am not insulting you.
Can you see the difference between:
(a) Me actively and directly insulting you; and
(b) Me just being my highly imperfect self, doing something that annoys you.
Surely you can see the difference.
Quoting TonesInDeepFreeze
Well that's good, so let's go with the grace.
From the OP at least I made the connection.
Quoting fishfry
That's what really intrigues me. Especially when you look at how famous and still puzzling these proofs are...or the paradoxes. Just look at what is given as corollaries to Lawvere's fixed point theorem:
Cantor's theorem
Cantor's diagonal argument
Diagonal lemma
Russell's paradox
Gödel's first incompleteness theorem
Tarski's undefinability theorem
Turing's proof
Löb's paradox
Roger's fixed-point theorem
Rice's theorem
Of course in mathematics a lot of theorems have corollaries, but I would just point out what these theorems are about: limitations in proving, limitations in computation, and a paradox that basically ruined naive set theory and spurred the creation of ZF. All coming from a rather simple thing.
Going back to the OP and the article given there, perhaps in the future it will be totally natural (or perhaps it already is) to start a foundations of mathematics or an introduction to mathematics course with the Venn diagram that Yanofsky has on page 4. Then give it 5 to 15 minutes of philosophical attention and then move to the obvious section of mathematics, the computable and provable part.
IMO those concepts are far too subtle to be introduced the first day of foundations class. Depending on the level of the class, I suppose. Let alone "Introduction to mathematics," which sounds like a class for liberal arts students to satisfy a science requirement without subjecting them to the traditional math or engineering curricula. Truth versus provability is not a suitable topic near the beginning of anyone's math journey. IMO of course.
Agreed.
What children would really benefit from, is someone to teach them hope, preferably of the most irrational kind, i.e. the stronger, the better.
The mathematics class is clearly not suitable for that, but the mathematics teacher could actually be. But then again, in that case, he is not teaching math but trying to keep the students teachable. That is another job altogether.
Adults cannot teach hope to the children anymore.
Even the children's own (usually hopelessly divorced) families are no longer able to do that. You cannot teach what you don't have. That is why the children grow up believing that there is no hope.
The culture most excelling at "scientifically" inspired hopelessness, is communist China, but the West is clearly not far behind.
Nowadays the young Chinese want to "tang ping" (Chinese: 躺平; lit. 'lying flat') and believe that you should "bai lan" (Chinese: 摆烂; pinyin: bǎi làn; lit. 'let it rot').
The Chinese youth also increasingly believe in the "10 no's" (or the "10 don'ts") and insist that they are "the last generation". That is obviously a completely true, self-fulfilling prophecy.
The Chinese communist party reacts by trying to censor and ban public expressions of nihilism or absurdism, even though these things are the natural end point of believing that only pure reason can be a legitimate source of meaning.
There is much more to the struggle with the absurd than just sleeve tattoos, piercings and blue hair. The people who are the most in need of hope, are the least likely to find any.
If someone else does not keep them teachable, then all teaching will be in vain. There no longer exists anybody who can do that.
Me too.
There's a lot in mathematics that is simply mentioned, perhaps a proof is given, and then the course moves forward. And yes, perhaps the better course would be "philosophy of mathematics" or "introduction to the philosophy of mathematics". So I think this forum is actually a perfect spot for a discussion about this.
Of course it would be a natural start when starting to talk about mathematics, just as when I was in the First Grade in Finland: the educational system then had this wonderful idea of starting first grade math with ...set theory and sets. OK, I then understood the pictures of sets, but imagine first graders trying to grasp injections, surjections and bijections as the first thing to learn about math. I remember showing my first math book to my grandfather, who was a math teacher, and his response was "Oh, that's way too hard for children like you." A few years later they dropped this courageous attempt to modernize math teaching for kids and went back to the "old school" way of starting with addition of small natural numbers, with perhaps some drawings and references to numbers being sets. (Yeah, simply learning by heart to add, subtract, multiply and divide with the natural numbers up to 10 is something that actually everybody needs to know.)
Quoting fishfry
It sure is interesting. And fitting to a forum like this. If you know good books that ponder the similarity or difference of the two, please tell.
Called the New Math in the USA. I can't even imagine this in grade one. I taught elements of it in college algebra courses in the 1970s - but not for long.
Quoting fishfry
Here is what ChatGpt has to say about mathematical truth:
It leaves out that for the most used overall system for mathematics, it is not the case that every truth is provable.
It leaves out that the concept of mathematical truth is actually not formulated in terms of proof. Rather, proof and truth are formulated separately, but then mathematics shows that, for first order logic: A statement is provable from a set of premises if and only if the truth of the premises entails the truth of the statement.
It leaves out that the greatest objectivity is in the fact that it is machine checkable whether, at least in principle, a given formal sequence that is purported to be a proof is actually a formal proof.
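In symbols, the equivalence mentioned in the second point (Godel's completeness theorem together with soundness, for first order logic):

\Gamma \vdash \varphi \iff \Gamma \vDash \varphi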
It is a tad simplistic. But it is as far as I went in that direction in my career; as for infinity, I never quite reached it for it lay beyond bounds. It's good you and fishfry are more up to date. Thanks for your service.
I agree that's suitable for this forum. Just not for "Intro to Math," which I interpreted as "Last math class the liberal arts majors will take," or something like the Discrete Math class they teach these days to math and computer science majors.
Quoting ssu
That sounds like the "New Math" they had when I was in school. I loved it but it was a failure in general.
I don't think they teach basic arithmetic anymore. It's a problem in fact.
Quoting ssu
There's always Gödel's Proof by Nagel and Newman. And Gödel, Escher, and Bach: An Eternal Golden Braid by Hofstadter. Actually I only leafed through it once but everyone raves about it. I'm not up on the literature of pop-mathematical logic. Or real mathematical logic, for that matter.
Et tu? ChatGPT doesn't know anything about mathematical philosophy. It just statistically autocompletes strings it's been fed.
There are many things they don't teach in school, judging from what my children have to study. Usually the worst thing is when the writers of school books are too "ambitious" and want to bring far more into the study than the necessities that ought to be understood.
Quoting fishfry
I looked at this. Too bad that William Lawvere passed away last year. Actually, there's a more understandable paper on this for those who aren't well informed about category theory. And it's a paper by the same author mentioned in the OP, Noson S. Yanofsky, from 2003, called A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points. Yanofsky has tried to make the paper as easy to read as possible and admits that, when abstaining from category theory, there might be something missing. However, it's a very interesting paper.
In it he makes very interesting remarks:
And I would really underline the last chapter above. The issue is about limitations, and if you end up in a paradox, you simply have had an inconsistent system to start with. Usually in the way that your premises, or the "axioms" you have held to be obviously true, aren't actually true, at least not in every case. Hence an outcome similar to Russell's paradox is simply a logical consequence of this. Also, understanding that these are limitations doesn't mean that the consistency of mathematics is brought into question. I think, on the contrary, you simply have to have these kinds of limitations for mathematics to be logical and consistent.
(If anybody is interested, there are some classes by Yanofsky on YouTube, for example Outer Limits of Reason. I haven't watched them yet, so I cannot rate them.)
Here is a quote from Reddit that brings some clarity to the subject of "truth" in mathematics these days:
I'm an antique. Truth for me is associated with proof.
proof implies truth, but truth does not imply proof.
Suppose we have a consistent set of axioms for mathematics (the set theory axioms will do nicely). Then if the axioms are true then all theorems derived from those axioms are true. But there are truths not derivable from the axioms.
In other words, whatever is provable is true. But it's not the case that whatever is true is provable.
People who believe that pure reason is the only source of meaning will never accept this, no matter how often you hammer it into their heads.
Even if we had the axioms of the physical universe, most of its facts would still be inexplicable. Stephen Hawking already pointed that out, but apparently nobody cared:
With the overwhelming majority of facts inaccessible even from the perfect axiomatic theory of the universe, it is clear that the "God of gaps" conjecture is simply nonsensical.
Hence, all of this is clearly very unpopular.
Quoting jgill
If you cannot accept the true nature of the truth, you may need its false nature for your worldview. I don't know how much you are invested in positivism, if at all. A positivist will never accept the truth about the truth.
'Not everything that counts can be counted, and not everything that can be counted counts'
Quoting Tarskian
Any examples of those people come to mind?
Positivists are like that:
.
In fact, it may actually be ok to reject metaphysics and/or theism but not for positivist reasons.
These people really exist. David Hilbert was one and he even wanted proof for positivism:
A lot of people simply ignore Godel's work and continue to behave as if positivism makes sense. I cannot readily pinpoint anybody in particular but I know that the false belief is widespread. The problem is certainly not imaginary.
Sure. I've always rejected positivism, although for different reasons. I see positivism as being a kind of undercurrent in modern thought. But I don't know if Hilbert fits the bill. Hilbert's work in mathematics and his foundational program, known as Hilbert's program, aimed to provide a solid foundation for all of mathematics by formalizing it and proving its consistency using finitary methods. This goal aligns more with a foundationalist approach than with positivism per se.
Positivism, particularly as developed by the Vienna Circle in the early 20th century, emphasizes empirical science and the idea that meaningful statements are either empirically verifiable or logically necessary.
Hilbert was more concerned with the internal consistency and formalization of mathematics rather than the empirical verification of mathematical statements. His program sought to ground mathematics on a set of axioms and prove its consistency through purely syntactic means, without reference to empirical content.
I did a unit on A J Ayer's Language, Truth and Logic, which is a canonical text of positivism, and found it immensely annoying. I was pleased to learn that it had become evident, not long after its publication, that Ayer's style of positivism was self-contradictory, because the kind of verificationism that he insisted on could neither be validated nor falsified by empirical methods. So it failed its own criteria! My tutor said it was like the mythical Uroboros, the snake that eats itself. 'The hardest part', he would say with a wink, 'is the last bite.'
But while there are some overlaps in the emphasis on formalism and logic, Hilbert's aims were distinct from the broader philosophical tenets of positivism.
So I agree with your rejection of positivism, but not for your reasons.
'Scientific method relies on the ability to capture the measurable attributes of objects, in such a way as to be able to make quantitative predictions about them. This has been characteristic of science since Galileo, who distinguished those characteristics of bodies that can be made subject to rigorous quantification. These are designated the 'primary attributes' of objects, and distinguished, by both Galileo and Locke, from their 'secondary attributes', which are held to be 'in the mind of the observer'. They are also, and not coincidentally, the attributes which are specifically amenable to the treatment of mathematical physics, which lies under so many of the spectacular successes of science since Galileo.
This was part of the essential discovery of the 'scientific revolution': that insofar as you can represent an object mathematically, that you can use mathematical logic to predict its behaviour. The greater the amenability of an object to mathematical description, the more accurate the prediction can be: hence the high estimation of physics as the paradigm of an 'exact science'.
Bertrand Russell said that 'physics is mathematical not because we know so much about the physical world, but because we know so little; it is only its mathematical properties that we can discover.' And within the domain of applied mathematics, the applicability of mathematical logic to all kinds of objects yields nearly all of the power of scientific method. But Russell makes a philosophically important point, that the power of mathematics in the physical world depends on a fundamental abstraction, a boiling down to its precisely-quantifiable attributes.
In other words, what can be expressed in quantitative terms can also be subordinated to mathematical analysis and, so, to logical prediction and control. It becomes computable, countable, and predictable by mathematical logic. That is of the essence of the so-called 'universal science' envisaged on the basis of Cartesian algebraic geometry.'
That is much nearer to what I think you have in your sights, rather than pure mathematics as such.
Empiricism (as embodied in the principle of testability) is just a temporary stopgap solution in science. What they really want is the complete axiomatized theory of the physical universe. So, what they really want is provability:
At this level, science and mathematics will be merged into one. They actually want to get rid of empiricism and testing and science as we know it today. However, in absence of the ToE, they simply cannot.
Hilbert was relentlessly preparing the ground for proving the completeness of the ToE, as soon as the ToE would finally be ready -- back then, "any time now".
In the positivist vision of the future, there would simply be no need for empirical testing of ToE-based mathematical statements about the physical universe.
Quite a few people still believe that this is attainable. If you tell them that it is not, they will just ignore it. In that sense, sending people to the moon with Apollo 11 was a fantastic gimmick. It fueled the masses with the hope that something like the ToE would arrive very soon now. Everybody would be able to go on holiday to the moon. Positivism is also an important political program aimed at boosting the credibility of the powers that be.
(1) The completeness theorem is: If a sentence is entailed by a set of premises then the sentence is provable from that set of sentences. Or, equivalently, if a set of sentences is consistent then it has a model.
But 'a theory is complete' means that every sentence in the language for the theory is either provable in the theory or its negation is provable in the theory.
Now, I'm not sure, but I doubt that (first order) group theory is complete.
What does "true for every group" mean? Sentences are true or false in models. So does"true for every group" mean "true in every model of first order group theory"?
If yes, then, yes every sentence that is true in every model of group theory is provable in group theory, as follows:
If a sentence S is not provable from a consistent set of axioms G, then G plus ~S is consistent, as follows: By the completeness theorem, it is not the case that every model of G is a model of S. So there is a model of G that is also a model of ~S. So G plus ~S is consistent. Now suppose a sentence S is true in every model of group theory. But suppose it is not provable in group theory. So ~S is consistent with the axioms of group theory. So, by the completeness theorem, the axioms of group theory plus ~S has a model. But since S is true in every model of group theory, ~S is false in every model of group theory, which contradicts that there is a model of the axioms of group theory plus ~S.
(2) Yes, if ZFC has a model M then there are other models of ZFC that are not isomorphic with M. And, yes, there are sentences independent from ZFC. But I don't know what exact claim is made with "The existence of various models of ZFC is analogous to the existence of different groups." There's nothing notable about the fact that there are different groups. Since we have Lowenheim-Skolem, it's not even notable that there are non-isomorphic models of group theory.
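A compressed, symbolic rendering of the argument in (1) above (my own summary, not a quotation):

\begin{align*}
&\text{Completeness: } G \vdash S \iff G \vDash S\\
&\text{Suppose } S \text{ is true in every model of the group axioms } G, \text{ but } G \nvdash S.\\
&\text{Then } G \nvDash S, \text{ so some model } M \text{ satisfies } G \cup \{\neg S\}.\\
&\text{That contradicts } S \text{ being true in every model of } G.
\end{align*}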
I haven't the foggiest what that is supposed to mean.
Run that by me again, please.
Most mathematical truth is not explicable, even though we have its theory.
Most physical truth isn't explicable either, even if we had its perfect theory, which we don't. The most perfect theory of the universe would only explain a very small fraction of its truth. Hence, if the goal is to explain all of the facts in the universe, it is pointless to look for the perfect theory of the universe, because the goal is unattainable. There simply is no instrument conceivable that could do it.
Quoting TonesInDeepFreeze
Not everything that matters is calculable.
What if the positivists are indeed partly right, but they won't get the answer they would want to hear? Hasn't this been obvious starting from Hilbert? He got answers, but not the ones he wanted to hear.
What if this merging of science and mathematics can happen, yet not in the way mathematicians or especially positivists want it to happen? What if a lot of science and even something as distant as the social sciences is indeed mathematical, but in the part of math that is not provable or computable?
Just make this thought experiment: what if an area of the study of reality is indeed mathematical, but firmly in the non-computable and non-provable, yet perhaps in the "true and expressible" (as Yanofsky put it in the text that you referred to in the OP)? How will this show itself?
In my view, one thing would be certain: those people studying that part of reality and its phenomena aren't computing data or making functions or other mathematical models of reality. They will just smile if you ask whether they could explain the phenomena they are investigating by forming a mathematical model of them.
I should have said that I don't know what his comment to me is supposed to mean in relation to anything I've written.
Quite a few straw people, I suspect.
:up:
If a positivist hears an answer that he does not like, he will typically ignore it and just carry on. Hilbert may grudgingly have accepted proof but not everybody is Hilbert.
Positivism and scientism are ideologies. It is not possible to prove them wrong. You cannot win a debate against a Marxist either.
It should be totally evident to everybody that when discussing the foundations of mathematics, philosophy is unavoidable. You simply cannot "just stick to the math" and not take a philosophical stance in my view.
Hence this thread is totally fitting for a philosophy forum.
I hear awful things about the teaching of math these days and teaching in general, but I have no personal experience. I did try to help a friend's 13 year old with her math homework once and couldn't make heads or tails of it.
Quoting ssu
Thanks so much for that reference and for Yanofsky's YouTube channel. RIP William Lawvere.
You are not as old as Godel's proof, which was published in 1931. Godel's results are therefore more antique than you. Perhaps you're a logicist at heart. They thought mathematical truth was derivable from logic.
https://en.wikipedia.org/wiki/Logicism
Not quite. The mathematicians I knew BITD had little to no interest in discussing the distinctions between provability and truth. We were mostly in classical (complex) analysis. Mostly we are gone now. A few of us remain.
That's as true today as it was back then, logic being a niche, ignored by most math departments. But in terms of antiquity, Godel's work precedes you.
Depending upon the quality of the university to some extent. With the exception of a 12 month post-graduate program I took at the U of Chicago for the USAF, my entire education was in large state universities (4).
I checked what Harvard has to offer, and they have two undergraduate courses in mathematical logic (and probably foundations), but at my last Alma Mater there is nothing of that kind offered at any level.
I don't think it's just quality. My grad school was high quality but no logic or foundations to speak of. The one set theorist when I was there didn't get tenure and left. I think logic is concentrated in a few places but not that widely. Seems that way anyway.
I just checked on this past week's papers in logic posted at ArXiv.org. Four are from American universities and 13 are from foreign countries. FWIW
When you say "mathematical truth" do you also refer to axioms? Me and another user had a disagreement about the definition of logicism as it seems hard to source no surprise. SEP presents both a "weak logicism" and "hard logicism":
The article The Three Crises in Mathematics: Logicism, Intuitionism and Formalism says:
Quoting https://www.jstor.org/stable/2689412?seq=1
Quoting Edgar E. Escultura
Love me some crazy folks.
Does that make Americans illogical? :-)
I am definitely not authoritative on that. I know that Russell wanted to develop math from logic, and Gödel busted Russell's dreams. Beyond that I am totally ignorant.
Quoting Lionino
I should read that. Will dispatch a clone.
It is the same article as the reading for my Metaphysics of Mathematics thread. Tones didn't love it.
So many articles, so little time.
I agree. "Truth" is negotiable it seems. The word should be avoided in mathematical discussions.
Tarski's Undefinability Theorem says (Wiki):
There is nothing wrong with referring to truth in mathematics. (1) The everyday sense of 'truth' doesn't hurt even in mathematics. When we assert 'P' we assert 'P is true' or 'it is the case that P'. (2) There is a mathematical definition of 'true in a model'.
Just to be clear: Tarski did not disallow the notion of 'truth', but rather he sharpened it to 'true in a model'. The undefinability theorem doesn't vitiate the notion of truth, especially as formalized as 'true in a model'; rather the undefinability theorem is just that in certain interpreted languages there is no definition of a truth predicate.
But of course you yourself know that's not true. I assume you think of your research as discovering truths about abstract mathematical structures that have some Platonic existence in the conceptual realm. You surely feel that the things you study are true. Do you not?
Quoting jgill
What made you quote that? Not sure of the relevance. It's another diagonal argument.
In any event, my sense is that most mathematicians are at heart Platonists. The things they study are real. The number 5 is prime, and there is no possible world in which it isn't. The number 5 is prime even when there are no intelligent minds in the universe to comprehend it. The fact that 5 is prime is True even before mathematics exists. There is indeed truth about the things mathematicians study.
Hasn't this been your experience?
Uh-oh, insert Biden joke here.
Of course there is nothing wrong with using the word "true" in math. But in the papers I have written (around thirty publications and over sixty more as recreation) I doubt that I ever used the word - but I could be wrong. On the other hand, "therefore" is ubiquitous.
Quoting fishfry
"True but verify" might be my motto. I suppose I would consider myself a Platonist were I to care, but this type of philosophical categorization - although relevant to this forum - matters very little to me.
Quoting fishfry
"concept of truth in first order arithmetic statements"
If there are any practicing or retired mathematicians reading these threads I wish you would speak up. I would ask my old colleagues what they think of these philosophical discussions, but they are pretty much all gone to greener pastures.
Well, no. The term "truth" should be used in a way that is compatible with its model-theoretical definition, which is in fact not particularly negotiable.
In model theory, truth is a correspondentist notion.
A fact is true because it is part of a particular collection of truths, i.e. a "model", an "interpretation" -- or, if the operations supported are irrelevant, a "universe".
If such "model" "interprets" a theory, then every statement that is provable from this theory will be true in the "model", i.e. soundness theorem:
soundness theorem: provable ==> true
So, the correspondentist mapping of truth occurs between theory and "model" (or "universe").
Concerning Tarski's undefinability, it doesn't say that truth does not exist. It just says that true(n) is not a legitimate predicate.
It says that for certain formal interpreted languages, there is no predicate in the language that defines the set of sentences true in the interpretation.
Quoting Tarskian
That's not the soundness theorem.
The soundness theorem stated in two equivalent ways:
If a set of sentences G proves a sentence S, then every model of G is a model of S.
If a set of sentences G proves a sentence S, then for all models M, if every member of G is true in M then S is true in M.
Quoting Tarskian
No, the mapping is from the symbols of the language:
each individual constant maps to a member of the universe
each n-place predicate symbol maps to an n-ary relation on the universe
each n-place operation symbol maps to an n-place function on the universe
And ""model" (or "universe")" is wrong since a model is not just a universe. Rather, for every model there is a universe for that model.
So, for example, "soundness means: provable => true" is just the gist of it. In fact, it seems to be ok to phrase it like that:
Quoting TonesInDeepFreeze
That is how it is technically achieved. I was trying to point out that it achieves the same goal as stated in the correspondence theory of truth:
Technically, model theory will map the symbols. However, the actual purpose of doing that is to achieve what is described in the correspondence theory of truth.
The following explanation about correspondence in model theory will probably be deemed impenetrable in a multidisciplinary context:
But then again, in my opinion, the correspondence theory of truth is perfectly fine to describe the gist of how model theory sees the relationship between theory and model.
Quoting TonesInDeepFreeze
Well, I glossed over that, without insisting too much. If you give the technical explanation of what exactly is missing, then pretty much nobody will keep reading in a multidisciplinary context.
Quoting TonesInDeepFreeze
I simplified the following:
To:
(in PA or similar)
Again, the complete statement above is probably too much in a multidisciplinary environment. It will be deemed impenetrable.
That seems okay as a broad synopsis.
Quoting Tarskian
Simplifications are okay if they don't mislead by omitting crucial conditions and distinctions.
There are surprising and unexpected connections between the foundational crisis in mathematics and fundamental metaphysics.
In principle, mathematics proper is about nothing at all:
If you dig into the foundational crisis of mathematics, however, it suddenly starts talking about deep metaphysical issues. The mathematical crisis arose out of profound paradoxes:
How can something that is essentially about nothing at all, suddenly make a U-turn, and give answers on the fundamental nature of everything?
The mathematical crisis turns out to have massive implications for the following issues in metaphysics:
- What is truth?
- What is the connection between a truth-bearer and a truthmaker?
- What is free will?
- Is the universe predetermined?
- Is the universe part of a larger multiverse?
- Is there a heaven and a hell?
The mathematical crisis even puts into question the most fundamental and seemingly unassailable laws of logic:
- What is actually "identity", since the law of identity does not always hold?
- Why is the law of the excluded middle (LEM) not always legitimate?
- Since identity and the LEM are in question, is even the law of noncontradiction actually circumstantial?
The mathematical crisis also shows that existing answers in metaphysics are largely unsatisfactory:
- It shines another light on Kant's Critique of Pure Reason. Kant has probably got it mostly wrong.
- It certainly proves the positivists wrong.
After more than a century, the implications of the mathematical crisis have not been digested in metaphysics. There is pretty much no awareness of its metaphysical impact. In my opinion, this is because people in both fields almost never talk to each other. One reason for this, is the fact that most publications on the mathematical crisis are written in a language impenetrable to outsiders.
That is one extreme view.
That is extreme formalism. It does not speak for all formalists.
It is actually an incredibly productive view. The more you insist that it is about nothing at all, the more it starts revealing secrets about everything. It is truly mind blowing.
You may hold that the view has merits. I'm only pointing out that formalism is not confined to that view.
The reason why moderate formalism has less merit than the most extreme take on the matter is actually to be expected. If you abstract away almost everything, then the very little that is still left will indeed apply to pretty much everything.
Yes, of course. Hardy famously said:
I would add to what Hardy said, that "useful" mathematics has absolutely zero metaphysical implications. That is why it is "intolerably dull".
Mathematics proper is indeed not necessarily about nothing. That is why it is so boring.
So why do you quote something that is seriously incorrect?
The quote is extreme.
I don't think, however, that it is incorrect.
If mathematics is "just string manipulation" then it is indeed "about nothing". As I have already acknowledged, other views on the matter are also viable.
Furthermore, besides formalism, there are several other competing ontologies for mathematics. They all turn out to be simultaneously correct as well. For example, Platonism is not wrong either. It is just another way of looking at things.
Similarly, concerning competing mathematical theories, PA and ZF-inf are two completely different ways of looking at things. "Everything is a natural number" versus "Everything is a set".
However, they turn out to be perfectly bi-interpretable.
You can express natural numbers as sets, and arithmetic on natural numbers as set operations, and then everything you say about natural numbers, you can effectively say about these sets. The reverse works fine as well.
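To make that translation concrete, here is a minimal sketch of one direction, using the standard von Neumann encoding (0 as the empty set, n+1 as n ∪ {n}); the function names are mine and purely illustrative:

[code]
# Von Neumann encoding: 0 is the empty set, n+1 is n ∪ {n}.
# Sets are modelled as Python frozensets so that sets can contain sets.

def as_set(n):
    """Encode the natural number n as a set."""
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])        # successor: n+1 = n ∪ {n}
    return s

def successor(s):
    """The set-theoretic successor operation."""
    return s | frozenset([s])

def as_number(s):
    """Decode: a von Neumann natural is exactly the set of all smaller ones."""
    return len(s)

def add(a, b):
    """Addition on the encoded numbers, done by iterating the successor."""
    result = a
    for _ in range(len(b)):           # apply the successor "b times"
        result = successor(result)
    return result

assert as_number(add(as_set(2), as_set(3))) == 5
[/code]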
Extreme formalism turns out to be a metaphysically useful view.
No. PA can be built from ZF but not the converse.
Quoting Tarskian
Impossible, those are two mutually exclusive views.
Not ZF but ZF-inf. It requires removing and denying the axiom of infinity.
Quoting Lionino
Of course, they are mutually exclusive. Still, they both provide a perfectly legitimate ontology for mathematics. Similarly, you can build a society on capitalism or on communism. They are both mutually exclusive.
That was not my point. Mathematicians don't use the word true in their formal work.
But when you make a discovery, don't you feel that you are discovering something that is true, or factual, about whatever it is you're studying? Surely you don't lean back and say, "That's a cool formal derivation that means nothing." On the contrary, I imagine that you say, "I learned something about nonabelian widgets" or whatever. Am I wrong? I would be surprised if I'm wrong.
Quoting jgill
In your work, do you think of yourself as discovering formal derivations? Or learning about nonabelian widgets?
Quoting jgill
Yes but it's so unlike you to have an interest in Tarski's undefinability theorem.
Quoting jgill
The "philosophical" discussions in this forum do not reflect the actual work of philosophically inclined mathematicians. For example the early category theorists like Mac Lane were very philosophically oriented. "He was vice president of the National Academy of Sciences and the American Philosophical Society, and president of the American Mathematical Society ..." Impressive resume.
https://en.wikipedia.org/wiki/Saunders_Mac_Lane
It's a bit like physicists. Most of them are of the "shut up and calculate" school, while others -- a minority -- are interested in what it all means, what it can tell us about the ultimate nature of reality. They call the latter foundations of physics. So math is to math foundations as physics is to physics foundations. Most don't care, some do.
Nor do I lean back and say, Wow, that's true! I simply don't use the words "true" or "truth" when doing math. I don't even think the words. But that's me, not other math people.
Quoting fishfry
I don't think of myself doing anything. I only do. Or did. I'm pretty old and not in such great shape to do much of anything.
Quoting fishfry
Doesn't surprise me. I am (was) a humble classical analysis drone, far from more modern and more abstract topics. Maybe young math profs these days use the word "truth" frequently.
(On the other hand I did point out what I considered the truth of a form of rock climbing many years ago by demonstrating and encouraging a more athletic, gymnastic perception of the sport. Even then I didn't use the word "truth".)
Isn't that just an example of Kant's dictum that concepts without percepts are empty?
When Kant writes about philosophy of the mind:
I generally refuse to engage, if only because philosophy of the mind is almost never falsifiable. That is why I ignore a good part of the text in the "Kritik der reinen Vernunft". If what Kant says is simply not actionable, I will just generously concede the point to him. What else can I do?
In this regard, about similar theories, Karl Popper writes in "Science as falsification":
My opinion about "Critique of Pure Reason" is pretty much the same as what Popper writes about Marx, Freud, and Adler. The number of falsifiable points in what Kant writes is very, very limited. Still, when Kant -- very rarely -- takes the risk of saying something that is actually falsifiable, it always turns out to be false.
So, what could falsify the thesis you're proposing in this thread? What could someone point to, to demonstrate that your contention 'Mathematical truth is chaotic' is false? Isn't Popper's point that metaphysical theses cannot be disconfirmed by empirical discoveries? What empirical discovery would disprove the thesis 'Mathematical truth is chaotic'?
If you demonstrate that Cantor's theorem is false. (the existence of countable and uncountable infinity)
If you demonstrate that Gödel's theorem is false. (incompleteness)
If you demonstrate that Tarski's theorem is false. (undefinability of the truth)
If you demonstrate that Turing's theorem is false. (halting problem)
If you demonstrate that Carnap's theorem is false. (diagonal lemma)
and so on.
These theorems are all interrelated. Demonstrate one flaw in one of their proofs. One is probably enough, because one flawed theorem will be enough grounds to demonstrate the falsity of all other ones.
In another paper, Yanofsky argues that it is Cantor's theorem that is at the core of it all:
https://arxiv.org/pdf/math/0305282
"You cannot create an onto mapping between a set and its power set."
Until I ran into Yanofsky's other paper, I used to think that Carnap's theorem was the real culprit:
"For any property of logic sentences, there always exists a true sentence that does not have it, or a false sentence that has it, or both."
My intuition says that Yanofsky is probably right, and that it is Cantor's theorem that is at the root of it all, but I am currently still struggling with the details of what he writes.
In the paper about the chaos in the truth about the natural numbers, Yanofsky argues that:
If you try to express all the truth about the natural numbers, you are effectively trying to create an onto mapping between the natural numbers and its power set, the real numbers, in violation of Cantor's theorem. That is why most of this truth is simply ineffable, and a fortiori, unprovable, and therefore, unpredictable.
Concerning Cantor, Gödel, and Turing, I have some kind of morbid fascination for what I consider to be a form of disaster tourism.
It is akin to a guided tour around Chernobyl reactor number four. It leads to the very fault lines in the tectonic plates in the foundation of things, which are indeed fiendishly ugly.
Sometimes, it is even difficult to believe that it is all true. It often gives the sensation that "it cannot be that bad?".
Sometimes, I don't really get it, or not immediately. At that point, I know that I am close to understanding something that is even worse than all the bad stuff that I have come across already.
I cannot stop because I like too much playing with metaphysical fire. If you have the sensation that you are about to discover the true secret name of Satan, would you stop or would you keep going?
That was a typing mistake obviously as the very post I quoted said "ZF-inf" instead of just "ZF". Regardless, ZF-inf and PA are not "two completely different ways". One is tied to the other.
Two things that are mutually exclusive cannot "turn out to be simultaneously correct as well". It is absurd. Besides, formalism is not an ontology of mathematics, it is an approach to foundations.
Apparently, other people also call formalism an ontology:
Platonism and intuitionism are in his opinion the other main ontologies:
I hardly understand anything in this thread, as my knowledge of mathematics is rudimentary. But my view of the metaphysics is much more benign, as I'm attracted to mathematical Platonism. My view is that numbers are real, but not physically existent. If you point to a number, '7', what you're indicating is a symbol, whereas the number itself is an intellectual act. And furthermore, it is an intellectual act which is the same for all who can count. It's a very simple point, but I think it has profound implications.
There was an article in Smithsonian Magazine called What is Math? which considered this question, with the Platonist view being represented by an emeritus professor, James Robert Brown. After reading that article, I bought his book, although I must confess most of it was also beyond me, but I'm intuitively convinced that the platonist view is correct, and that it's resisted because it challenges materialism and empiricism, as some of those quoted in the Smithsonian article attest.
I will take issue with this. I say that in this standard formulation, the phrase 'timeless entities existing objectively' is wrong, because it is a reification. To reify is to 'make into a thing'. Numbers don't exist as objects, except for in the metaphorical sense of 'objects of thought'. As soon as this 'ethereal realm of separately existing things' is posited, this reification occurs. To understand why, review Thinking Being: Introduction to Metaphysics in the Classical Tradition, Eric D Perl, Chapter Two, Plato, particularly S3, The Meaning of Separation, from which:
Much the same can be said of number, which is why they're not 'objective', but 'transjective' (a newly-coined word meaning 'Transcending the distinction between subjective and objective, or referring to a property not of the subject or the environment but a relatedness co-created between them'.)
I said the quote is incorrect. You agreed. So I asked why you posted it.
Quoting Tarskian
Now you've reversed yourself.
Quoting Tarskian
I don't offer this as a philosophy, not something I advocate others adopt, so not something I would need to defend, but my personal perspective is that different philosophies are not necessarily right or wrong but rather are framework options for organizing one's thoughts on subject matters.
Quoting Tarskian
ZF\I is not (ZF\I)+~I. The former is ZF without the axiom of infinity. The latter is ZF with the axiom of infinity replaced by the negation of the axiom of infinity.
It is (ZF\I)+~I that is bi-interpretable with PA.
[EDIT CORRECTION: I'm told that 'ZF-I' is a notation for '(ZF\I)+~I'.]
I'm with you. And I was a professor of mathematics. I am still puzzled over what precisely "true" means beyond verification by formal proof.
Quoting Wayfarer
I like your clarity.
By itself, this could be framed within a conceptualist framework, where mathematics is reduced to psychology not just platonism.
Quoting Wayfarer
The distinction between type and token.
Quoting Wayfarer
Is it though? You may say this because you are a platonist, so you believe there is some unambiguous universal accessible to all.
apropos of this general discussion, I've just downloaded a rather interesting textbook, What is Mathematics, Really? Reuben Hersh, 1999. Quite approachable.
Outstanding.
And actually we don't need the 'F'.
(Z\I)+~I is bi-interpretable with PA.
[EDIT CORRECTION: I think it is incorrect that (Z\I)+~I is bi-interpretable with PA. This is correct: If every set is finite, then the axiom schema of replacement obtains and (Z\I)+~I = (ZF\I)+~I. But I don't think that works; I was thinking that the negation of the axiom of infinity implies that every set is finite. But I think that itself requires the axiom schema of replacement.]
In that regard, Victoria Gitman writes the following alarming statement:
Even though the law of identity is certainly applicable in the standard model of the natural numbers, it may fall apart in nonstandard models of arithmetic.
So, ω+7 ≠ ω+7 may be true in a nonstandard context, with ω the infinite ordinal representing the order type of the standard natural numbers. If it is false in any other nonstandard context, then this statement is even true but unprovable. I am not sure if this can be the case.
Victoria Gitman points to the following publication for a more elaborate explanation on what's going on:
Unfortunately, the publication is not available online. It can be ordered in paper-based format for $180 from Oxford University Press:
https://global.oup.com/academic/product/the-structure-of-models-of-peano-arithmetic-9780198568278?cc=us&lang=en
So, we already had ineffable numbers. Now we also have indiscernible ones. What other monstrosities are they going to discover in the melted plutonium core of Chernobyl reactor number four?
I agree, also with Yanofsky.
Cantor's proof is the simplest form of diagonalization that has all the "problematic" consequences, once we start to look at infinite sets (with finite sets Cantor's theorem is quite trivial). As Yanofsky says:
And of course, with the proof of the theorem, using diagonalization, we showed that a surjection / onto mapping is not possible. This shows just how close making a bijection is to giving a proof. We understand that an infinite set is incommensurable with a finite set and that we cannot count finite numbers and get to infinity. However, this isn't the only thing we have problems with once we encounter the infinite.
After all, if a formal system can express Peano Arithmetic, then Gödel's second incompleteness theorem holds that the system cannot prove its own consistency.
Any observations on the arguments for or against mathematical platonism as outlined in this post?
I subscribe to the following take on Platonism:
In my opinion, you cannot actively do mathematics if you do not believe that its objects are real while you are doing it.
Godel also thought that talent for Platonism is a prerequisite for being successful at mathematics:
It is, however, mentally very easy to switch to formalism.
You can simply switch off the lights and declare that it is all just meaningless symbol manipulation and about nothing at all, which it actually is, if you take the time to think about it.
Except it doesn't allow for the unreasonable effectiveness of mathematics in the natural sciences.
As I said above, the reason most people won't defend platonism is that they don't understand or can't live with the metaphysical commitment it entails. Myself, I have no such difficulty.
The problem which I have encountered in this forum, is that there is an attempt by many, to represent numbers, and other mathematical objects like sets, as things which are subject to the law of identity. The law of identity states what it means to have an identity as a thing, and it is known to be applicable to material objects. By representing mathematical objects as subject to the law of identity, which applies to things, mathematical objects and material objects are implied to be of the same type, each having the identity of "a thing".
The result of this is that there are significant conceptual structures, set theory, and mathematical logic in general, which are based on the assumption that there is no difference between 'objects' of thought' and material objects. This leads to absurd ontologies like model-dependent realism.
It is my opinion that this conflating of the two is the reason why quantum observations are so difficult to understand, and quantum theory interpretations are many and varied. Within quantum theory there are no principles which would allow for a distinction between the material object and the 'object of thought' so that the two are combined in a confused model of wave/particle dualism.
Viewing mathematics as just string manipulation highlights a different aspect of the same thing. The same holds true for structuralism. You can see mathematics as mostly templates with template variables. There are circumstances in which an alternative ontological view is actually the most inspiring one.
Quoting Wayfarer
I intuitively believe that arithmetical truth and physical truth are structurally similar. This explains why it is unreasonably effective in a physical context. For exactly the same reason, it should also be unreasonably effective in a metaphysical context.
I fully endorse Pythagoras' view on the matter:
In modern lingo, arithmetical theory, i.e. the theory of the natural numbers (PA), and the unknown theory of the physical universe exhibit important model-theoretical similarities.
For example, the arithmetical universe is part of a multiverse. I am convinced that the physical universe is also part of a multiverse.
The metaphysics of the physical universe is in my opinion nothing else than its model theory.
Model theory pushes you into a very Platonic mode of looking at things. In my opinion, it is not even possible to understand model theory without Platonically interpreting what it says.
Desperately Seeking Mathematical Truth
If we don't differentiate between objects sensed and ideas grasped by the intellect, then there is nothing to prevent us from believing that the universe is composed of numbers. This is known as Pythagorean idealism, and often called Platonism. But Plato, along with Socrates, was very skeptical of this type of idealism, revealing its weaknesses. Aristotle, following Plato, is often claimed to have decisively refuted Pythagorean idealism. He developed the concept of matter as a principle of separation between human ideas and the independent universe.
I do not believe that the universe is composed of numbers.
What I believe is limited to the idea that the arithmetical multiverse is structurally similar to the physical multiverse.
For example, if there are five people in a group, this situation is structurally similar to a set with five numbers. It does not mean that a person would be a number.
You could conceivably make a digital simulation of the entire universe and run it on a computer. This simulation of the universe would consist of just numbers. What you would see on the screen will be an exact replica of what you would see in the physical world. It would still not mean that this collection of numbers would be the universe itself.
Pythagorean idealism is actually a widespread fallacy:
A map of the world can help us understand the world. The map will, however, never be the world itself.
Now, if it is about an abstract world, then the perfect map of such abstract world is indeed the abstract world itself. There is no difference between a perfect simulation of an abstract world and the abstract world itself.
That is why an abstraction cannot be truly unique. An abstraction can only be unique up to isomorphism.
Physical objects, on the other hand, can be truly unique in this physical universe (but almost never in the physical multiverse).
Not everything that Pythagoras said was necessarily correct. The same for Plato. The same for Aristotle. It is just that they have managed to also say things that are amazingly insightful.
It's structurally similar because what constitutes "a group" is artificial, just like what constitutes "a set" is artificial. So you are just comparing two human compositions, the conception of a group and the conception of a set.
Quoting Tarskian
If it's not the same as the universe, but a replica, then there is no limit to the difference which there may be between the two. I could show you a piece of paper and say that it's a replica of the universe. How would your proposed computer simulation provide a "better" replica of the universe? That's the thing about maps, they only show what the map maker decides ought to be shown.
Quoting Tarskian
Then there's something more to reality than maps and the world which is mapped. There must also be something which makes one map "better" than another. This cannot be shown by the map nor is it a part of the world which is mapped.
Quoting Tarskian
This makes no sense. What would make an abstract world the perfect abstract world? Do you see what I mean? If there is no difference between the perfect simulation and the abstract world which is simulated, then they are one and the same thing. So now we have an abstract world which you claim is "a perfect simulation". What makes it perfect? It's just an abstract world like any other.
This place is not real.
The notion of group may indeed be an abstraction, a way of perceiving things, but there are still five people, who are physically there.
Quoting Metaphysician Undercover
Fewer differences.
You could measure a random sample of these differences, add up their squares, take the root, and rank replicas according to their sampled "deviation" from the original. This is actually done very routinely. It is even standard practice.
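For what it's worth, a minimal sketch of that ranking procedure taken literally (square the sampled differences, add them up, take the root); the sample data is invented purely for illustration:

[code]
import math

def deviation(original, replica):
    """Square the sampled differences, add them up, take the root."""
    return math.sqrt(sum((o - r) ** 2 for o, r in zip(original, replica)))

# Hypothetical sampled measurements of the original and two replicas.
original  = [1.0, 2.0, 3.0, 4.0]
replica_a = [1.1, 1.9, 3.2, 3.8]
replica_b = [1.5, 2.6, 2.4, 4.9]

# Rank the replicas: a smaller deviation means a closer replica.
ranked = sorted([("A", deviation(original, replica_a)),
                 ("B", deviation(original, replica_b))],
                key=lambda pair: pair[1])
print(ranked)   # A ranks ahead of B
[/code]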
Quoting Metaphysician Undercover
A perfect map of an abstract world is the abstract world itself. Perfect means "isomorphic" in this case.
According to the structuralist ontology, an abstraction consists of only structure. An abstraction is structure, turtles all the way down.
Hence, an isomorphic mapping of a structure is equivalent to the structure itself:
Since two isomorphic abstractions have the same properties, they are essentially identical:
Two abstractions are not truly identical. They are identical up to isomorphism.
For example, the symbols "5" and "five" are identical up to simple translation (which is in this case an isomorphism). Two maps can also be isomorphic. In that case, they are "essentially" identical.
The number "5" is an abstraction and is therefore not truly unique. It has numerous isomorphisms, such as "2+3" or "10/2" that represent essentially the same abstraction.
Abstractions are never truly unique.
The existence of the multiverse is a Pythagorean belief.
It is not possible to prove that the physical universe is part of a multiverse, simply because it is not possible to prove anything at all about the physical universe.
Hence Wheeler's conjecture of the One-Electron Universe.
Did you think your work was "about" anything? Or pure symbol-pushing?
I'm pressing you on this point because I don't believe you did not believe in the things you were studying!
Quoting jgill
I'm sorry to hear that.
Quoting jgill
I believe you are making too much of what someone on the forum might have said about truth. You are the only professional mathematician on this site and you are more authoritative on what mathematicians do than anyone.
Quoting jgill
You know, I would think there is much truth to rock climbing. A famous theoretical physicist, Lisa Randall, famously had a rock climbing accident. She is a specialist in the most advanced theories of gravity. I always thought that was ironic, a world-class expert on gravitation being injured by that very force.
Gravity is true, wouldn't you say? You probably have a visceral sense of gravity, more so than most physicists.
I never spent any time thinking about what I was doing. I did it, and still do it because it is a fascinating realm of exploration. As was rock climbing when I was a lot younger. I never puzzled over the fundamental nature of mathematics. And I doubt my colleagues did either.
Quoting fishfry
No. Gravity simply is. Some aspects could be said to be true. Word babble IMO.
Interesting.
Quoting jgill
You don't believe in the word truth, or that anything in the world is true, even outside of math?
That was an interesting phone call:
It still generates a flurry of new articles in 2024!
But these articles don't particularly say anything new. They do not seem to make much progress in investigating the matter.
Why wouldn't a discussion of mathematics morph into a conversation about the US elections? In elections mathematics plays a pivotal part too: who gets the largest number of votes. And when you have these different kinds of electoral systems, then it can happen that the candidate that gets the most votes isn't actually elected. Yet elections are math, aren't they? :wink:
When you were never taught how to go around the territory you are exploring, you tend to wander outside of that territory as the walk goes on. Same thing. It doesn't help that one of the chatters here is using wiki links to completely make stuff up as he goes.
"What is math?" is a question situated in a larger metaphysical arena. If we say "it is just symbolic manipulation," we are then led to ask: "what are symbols? And: "why do we manipulate them?"
The "why" here leads right to physics, and the natural sciences more broadly, because a big part of the "why" seems to involve how our symbolic systems have an extremely useful correspondence to how the "physical world" is.
Explanations that just posit mathematics as "a social practice," "an activity," etc. are really non-explanations IMHO. No one denies that mathematics is a product of human culture engaged in by humans. But the question "why do we do this?" leads right to questions about "how the world is" which tend to include physics and metaphysics.
To the extent that we use mathematics to understand the world, our understanding of mathematics also seems to underpin our very notions of "what our lives and our world are."
My question would instead be: "why physics over metaphysics, semiotics, information theory, computer science, or biology?"
These all seem equally relevant. To my mind it has to do with a certain sort of view of naturalism and physics' role in the sciences, one that, if not "reductive," at least tends towards the ideas fostered by reductionism.
I have a friend who is a math PhD. I have never really had a chance to discuss this sort of thing in depth, but I have asked him before if he thought mathematics was something created or discovered. He said "created" but not with any great deal of confidence and waffled on that a bit.
Aren't these symbolic systems of mathematics extremely useful in the US elections too? Isn't counting the votes quite essential in free and fair elections?
Quoting Count Timothy von Icarus
And that's why reporters ask metaphysical questions of cosmologists or quantum physicists and not of philosophers, who actually could be far more knowledgeable about metaphysical questions.
Yes, I totally understand the arguments of mathematics being an essential tool for physics and physics is an inspiration to create new mathematics and this all leads to reductionism of physics and math.
However, why do we stop there? Or to put it in another way, why then the rejection of what is quite important to us, the society and the World humans have built for themselves and which is studied by the humanities/social sciences in academia?
Let's remember topic of the thread and the idea that there's non-computable mathematics: that many true mathematical statements aren't provable or computable. How do we get to those things that are not computable, not provable? As discussed here in the OP and then later in the discussion of Lawvere's theorem in Category theory, many of these theorems showing the limitations of mathematics have self-reference and diagonalization in their argument. Negative self-refence seems to be a limit for computation.
Now, just ask yourself: We base a lot of our actions on past history. And we also try to learn from our past mistakes, even as a collective, so that we don't make the same mistakes as in the past. Wouldn't that be perfectly modeled by negative self-reference? If so, then could you argue that historians don't explain history by computing functions because their field of study falls into non-computable mathematics? Without computability, the only thing that might be left is a narrative explanation of what happened.
And please understand, my argument is that indeed everything is mathematical, when we want to be logical.
Then the matter at issue is what constitutes a distinct individual, in order that we say that there is five of them. And this is a product of the way that we sense things. We sense things as having a separation from their environment, as distinct objects, particulars.
Quoting Tarskian
But the simulation is completely different. By the conditions of your example, it is digital, a numerical representation. How are numbers similar to the world which is represented? The number "2" is in no way similar to two separate objects.
Quoting Tarskian
This still does not make sense to me; it gives no real meaning to "perfect". You are saying that what was first described as two, the abstract world and its simulation, are really just one, because the simulation is "perfect". But then there really is no simulation, just the one "perfect" abstraction. So all you are saying is that to be an abstraction is to be perfect. So all abstractions are perfect, ideal, as being one and the same as themselves.
Quoting Tarskian
Now you're using "equivalent to the structure", and before you said the perfect map "is" the structure it maps. This is saying two different things. When we say it "is" that, we allow no difference, but to say it is "equivalent" allows for a world of difference. In my example above, "2" is completely different from the two things it represents, but it is equivalent.
Quoting Tarskian
You already said, "the perfect map of an abstract world is the abstract world itself". If it "is" the thing then it is truly identical. But now you take that back and claim they are not truly identical. If they are not truly identical then we need to account for the difference between them. You say they are "isomorphic" and that implies that they have the same form. So how could the abstraction and the model of the abstraction have the very same form, yet be different? A difference is always a difference of form. And since they are both abstractions there is no "matter" here to account for the proposed difference. Therefore we end up with contradiction. They are not truly identical so there must be a difference between them. The difference must be a difference of form. Therefore they cannot be isomorphic.
Quoting Tarskian
This is where the problem is: "essentially identical" is an oxymoron. "Identical" means the same, but you degrade "identical" to say "essentially identical", such that it can no longer mean "the same" any more, because "essentially identical" really means different. All you are really saying is that it is the same but different, which is contradictory.
Quoting Tarskian
I totally agree, but the problem comes when we try to say that an abstraction, which is never truly unique, has an identity, just like a thing which is unique does. That is the case when you say "A perfect map of an abstract world is the abstract world itself". You have given identity, uniqueness, to the abstract world, to allow that there is a "perfect" map of it. Only if the abstraction is truly unique could there be a perfect map of it. If it is not truly unique, as you admit here, then the map could equally be a map of a number of different abstractions. This would mean that it is ambiguous, and less than perfect, by that fact.
I'm guessing, typical. Philosophical speculations distract from True mathematics. :cool:
Quoting fishfry
True or False?: The Earth is a planet. Answer: True (by virtue of classification)
True or False?: The square of the hypotenuse in a right triangle equals the sum of the squares of the two sides. Answer: True (by virtue of proof)
True or False?: The Continuum Hypothesis is true. Answer: Well, let's see . . . .
Interesting to hear you arguing against the concept of truth. Well moral relativism is the ethos of the age, I suppose.
Ordinarily I would not give it much thought, but this thread seems to focus on math truth beyond virtue of proof. You seem to know what that is all about. Can you provide a very simple definition of this sort of truth in math? I suppose the definition of a triangle is truth without proof. Truth by definition. But what makes a string of symbols true? Model theory? I thought I understood a parallel idea when I quoted the group theory example from StackExchange, but I guess not. Are axioms true by virtue of definitions?
So maybe, in some sense, the demand that mathematics itself be explained is a bit like the child's question. Mathematics, after all, is the source of a considerable number of explanations, not something that itself needs explaining. I'm reminded of the concluding paragraph of Wigner's ode to mathematics:
[quote=The Unreasonable Effectiveness of Mathematics in the Natural Sciences, Eugene Wigner; https://math.dartmouth.edu/~matc/MathDrama/reading/Wigner.html] The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of learning.[/quote]
Noson Yanofsky's paper, "True but unprovable", is about arithmetical truth, also called "true arithmetic":
I have used the term "mathematical truth" instead of "arithmetical truth" because alternative foundational theories of mathematics such as set theory have large fragments that are bi-interpretable with arithmetic and therefore have the same properties.
Yanofsky points out that only a very small part of Th([math]\mathcal {N}[/math]), i.e. arithmetical truth, is provable. The remainder of Th([math]\mathcal {N}[/math]) is unpredictable and chaotic. Most of Th([math]\mathcal {N}[/math]) is even ineffable.
The fact that it has leveragability in the material world, means that there is something more to it than "it just is". It is useful.
Quoting Wayfarer
The explanation needs to take a different tack, one which addresses the usefulness which we observe. That's why Peirce was led into pragmaticism. Notice in my exchange with @Tarskian above, I was quickly led to ask what makes one theory "better" than another. Tarskian claimed the "perfect" model of an abstraction is one which is identical with the abstraction which it models. However, this is clearly incorrect if we consider what actually works in practice. In practice, what makes one specific model of an abstraction better than another is some principle of usefulness, and this is not at all a principle of similarity. That is reflected in the fact that the symbol often has no similarity to the thing symbolized ("2" in my example is not similar to the idea of two).
My initial interpretation of the term "better" was "more faithful", but indeed, this doesn't necessarily make an abstraction more useful. That does indeed depend on what you are going to use it for.
Wikipedia and article:
It looks like passing the buck to me. The word "true" in mathematics appears to be a kind of primitive when used outside of "true by virtue of proof". However, the statement of Goldbach's Conjecture from Wikipedia:
might very well be true in the common sense of the word, even if possibly unprovable. But one cannot actually assert it is true - only that it might be.
With a formal system that includes Peano Arithmetic, we already get Gödel's incompleteness results. Hence this was shown earlier than Yanofsky's paper. Yet do notice that Presburger Arithmetic is complete.
So what's the thing with multiplication?
Skolem Arithmetic only has multiplication (no addition) and is also complete. The problem occurs when you try to add both addition and multiplication.
That is a bit of a mystery. Any simplification to Robinson's arithmetic will make it complete: https://en.wikipedia.org/wiki/Robinson_arithmetic. It just turns out to be like that when you do it.
Indeed that's interesting. With Robinson arithmetic you rule out mathematical induction and its axiom schema. But you do have the successor function, addition and multiplication... and that seems to be all it takes for incompleteness.
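For reference, and with the caveat that presentations vary in detail, Robinson arithmetic Q as standardly given (for example on the Wikipedia page linked above) has just these seven axioms, writing s for the successor function:

s(x) ≠ 0
s(x) = s(y) → x = y
x ≠ 0 → ∃y (x = s(y))
x + 0 = x
x + s(y) = s(x + y)
x · 0 = 0
x · s(y) = (x · y) + x

There is no induction schema at all, yet the theory is incomplete, and so is every consistent, recursively axiomatized extension of it.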
On the other hand, for instance Presburger arithmetic is complete. But then:
If anyone has something more to say about this and why this is so, I'll definitely want to hear from you.
Ok I'll do my best.
When we manipulate symbols, we use syntax rules. There's no "meaning" associated with the symbols except the meaning in our minds. The symbol manipulations are entirely mechanical; they could be worked out by a computer program. In fact there's much contemporary research on computers doing proofs. Mathematicians are starting to use proof assistants and proof-formalizer software (https://en.wikipedia.org/wiki/Proof_assistant). It's a big field, going on ten or twenty years now.
But ok, there's the mechanical symbol manipulation. Syntax.
Now we want to talk about semantics, or meaning. So we cook up, if we can, a model or interpretation of the symbols. The variables range over such and so set. The operation symbols mean such and so. For example, we might have the formal axioms of integer arithmetic, say the ring axioms. And then there is a model, the set of integers.
Now -- and this is actually a very deep point, I do not pretend to begin to understand the nuances -- things are said to be either true or false in the model.
So truth and falsity, semantic concepts, are always relative to a particular model. The integers and the integers mod 5 both satisfy the same ring axioms, but 1 +1 + 1 + 1 + 1 = 0 is false in the integers; and true in the integers mod 5.
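A rough sketch of that same example in code, evaluating the one sentence 1 + 1 + 1 + 1 + 1 = 0 under the two interpretations just mentioned (the toy model classes are mine, not any standard library):

[code]
# Two interpretations of the same arithmetical vocabulary:
# the integers, and the integers mod 5.

class Integers:
    def add(self, a, b): return a + b
    def zero(self): return 0
    def one(self): return 1

class IntegersMod5:
    def add(self, a, b): return (a + b) % 5
    def zero(self): return 0
    def one(self): return 1

def sentence_holds(model):
    """Evaluate '1 + 1 + 1 + 1 + 1 = 0' in the given interpretation."""
    total = model.zero()
    for _ in range(5):
        total = model.add(total, model.one())
    return total == model.zero()

print(sentence_holds(Integers()))      # False: 5 != 0 in the integers
print(sentence_holds(IntegersMod5()))  # True: 5 is 0 mod 5
[/code]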
That's what we mean by truth. Mathematical truth is always:
Axioms plus an interpretation.
If I say, "Our planet has one moon," as a purely syntactic entity, it's a valid sentence. But it's neither true nor false.
If I interpret "Our planet" as Earth, it's true. If I interpret it as Jupiter, it's false.
So: given a collection of axioms and an interpretation of the axioms, which is (1) a domain over which the symbolic variables range, and (2) a mapping from the symbols to objects in the domain.
When you do that, then each valid sentence in the language of the axioms is then either true or false in the model. (It could be independent, too, but we're not concerned with that here).
Now the paper in the OP makes the point that there are more mathematical truths than there are symbol strings to express them. So most mathematical truths don't have proofs we can write down. In fact we can't even express most mathematical truth.
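One half of that counting point can be made concrete: every finite string over a fixed alphabet shows up in a single list if you enumerate them shortest-first, so the expressible statements and the write-downable proofs form a countable collection. A rough sketch, with an arbitrary toy alphabet:

[code]
from itertools import count, product

ALPHABET = "01+*=()x"   # any fixed finite alphabet will do

def all_strings():
    """Enumerate every finite string over ALPHABET, shortest first."""
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Every candidate expression eventually appears exactly once in this one
# list, so there are only countably many of them. By contrast, Cantor's
# diagonal argument (sketched earlier) shows that the subsets of the
# natural numbers cannot be listed this way.
gen = all_strings()
print([next(gen) for _ in range(10)])
[/code]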
This is already long enough so let me know if this is the answer you were looking for. In the end it's related to Tarski's truth thing and Godel's incompleteness theorem and Turing's Halting problem -- though I recently learned that in fact he did NOT talk about what we call the Halting problem in his famous 1936 paper. So all this stuff was "in the air" in the 1930s. A guy named Chaitin came along later and recast all of this in terms of information theory. How "incompressible" a string is as a measure of its randomness. Using those ideas he proved Godel's incompleteness theorems from a different point of view.
From that, you can show that mathematical truths are essentially random. True for no reason that we could ever write down. That's what all this is about.
But you must understand, none of this is of the least importance to the vast majority of working mathematicians. You're not missing anything.
Thank you. This is similar to the group theory example. It makes more sense now.
Quoting fishfry
My favorite game on the internet is guessing the number of page views per day for math and other topics. I guessed 126 here, whereas it is 111. Close, but no cigar.
Glad that helped.
Quoting jgill
That's interesting. Which page views? I think you've mentioned in the past that you look at papers written or something like that.
The law of identity, the indiscernibility of identicals, and the identity of indiscernibles are different. With a semantics for '=' such that '=' is interpreted as the identity relation on the universe, the first two hold.
'ω+7 = ω+7' is true in every model. For any term 'T', 'T = T' is true in every model.
No, a fragment of set theory with also the negation of the axiom of infinity is bi-interpretable with PA. I pointed that out previously.
Yes, there is a mathematical definition of 'true in a model'.
For perspective, keep in mind that Skolem arithmetic and Presburger arithmetic are not fully analogous, since Skolem arithmetic has more detailed axioms about its operation symbol.
For starters, the system represents all the primitive recursive functions, and is incomplete.
The truth of a sentence is per interpretation, not per axioms.
Some sentences are true in all models. Some sentences are true in no models. Some sentences are true in some models and not true in other models.
Axioms are sentences. Some axioms are true in all models (those are logical axioms). Some axioms are true in no models (those are logically false axioms, hence inconsistent, axioms). Some axioms are true in some models and not true in other models (those are typically mathematical axioms).
The key relationship between axioms and truth is: Every model in which the axioms are true is a model in which the theorems of the axioms are true. And every set of axioms induces the class of all and only those models in which the axioms are true.
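A hedged illustration of that relationship on one small structure (the integers mod 5 with addition): brute-force checks confirm that the group axioms are true in it, and then a theorem of those axioms, such as cancellation, is also true in it, exactly as the soundness theorem guarantees. The checks themselves are just toy code:

[code]
from itertools import product

# A small structure: the integers mod 5 under addition.
DOMAIN = range(5)
def op(a, b): return (a + b) % 5

# The group axioms hold in this structure (checked by brute force).
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(DOMAIN, repeat=3))
identity = all(op(0, a) == a and op(a, 0) == a for a in DOMAIN)   # 0 is the identity
inverses = all(any(op(a, b) == 0 for b in DOMAIN) for a in DOMAIN)
print(associative and identity and inverses)   # True: it is a model of the axioms

# Cancellation (a + c = b + c implies a = b) is a theorem of the group
# axioms, so soundness says it must hold in this model too; the check
# below merely confirms that.
cancellation = all(a == b
                   for a, b, c in product(DOMAIN, repeat=3)
                   if op(a, c) == op(b, c))
print(cancellation)   # True
[/code]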
Robinson arithmetic is incomplete.
We know it is so because having both addition and multiplication entails incompleteness, so, since Presburger arithmetic is complete, it can't define multiplication.
(1) I would avoid the word 'valid' there, since it could be misunderstood in the more ordinary sense of 'valid' meaning 'true in every model'. What you mean is 'well-formed'. But, by definition, every sentence is well-formed, so we only need to say 'sentence'.
(2) If by 'independent' you mean 'not determined to be true, and not determined to be false', then there are no such sentences. Per a given model, a sentence is either true in the model or false in the model, and not both.
That depends on what things are truths.
If a truth is a true sentence, then there are exactly as many truths as there are true sentences, which is to say there are denumerably many.
If a truth is a "state-of-affairs", such as taken to be a relation on the universe, then, for an infinite universe, there are more "truths" than sentences.
Where did Carnap write that?
And that was my basic question: why does having both addition and multiplication entail incompleteness?
How does it entail incompleteness?
Is it that with both addition and multiplication you can make a diagonalization or what is the reason?
See a proof of the Gödel-Rosser theorem.
Quoting ssu
Diagonalization is available in any case. But we need multiplication for Godel numbering. We also need exponentiation, but Godel proved that exponentiation is definable.
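A toy version of why multiplication (together with exponentiation, which, as noted, Gödel showed to be definable) enters the coding: the standard prime-power Gödel numbering turns a string of symbols into a single number, and decoding is factorization. The symbol codes below are made up for illustration:

[code]
# Toy Gödel numbering: the i-th symbol of a formula becomes the exponent
# of the i-th prime in a single product.

SYMBOLS = {"0": 1, "S": 2, "=": 3, "+": 4, "(": 5, ")": 6}

def primes():
    """Generate 2, 3, 5, 7, ... by trial division (toy version)."""
    found, candidate = [], 2
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def godel_number(formula):
    """Encode a string such as 'S0=S0' as a product of prime powers."""
    g, gen = 1, primes()
    for symbol in formula:
        g *= next(gen) ** SYMBOLS[symbol]
    return g

print(godel_number("S0=S0"))   # 2**2 * 3**1 * 5**3 * 7**2 * 11**1 = 808500
# Decoding reverses the factorization, which is why a theory has to be
# able to talk about multiplication in order to talk about its own syntax.
[/code]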
OK, I think you answered here my question.
One point though: Godel-numbering is in the meta-theory, but we want to know why we need multiplication in the object theory. But, if I'm not mistaken, we need that it is representable in the object theory; I'd have to study the proof again.
Daily pageview statistics on Wikipedia. And papers submitted to ArXiv.org
For example: True arithmetic (Talk)
And True arithmetic (pageviews)
Low priority in Mathematics in Wikipedia. About the same as my own low priority math article.
(The daily analysis can be misleading, however. The median is a better indicator of popularity. For example, I just checked my former sport, bouldering, and found a huge disparity with a daily average of 912, but a median of 351. It was running below 400 per day until one day only it shot up to nearly 12,000. I haven't a clue.)
Qualification:
Presburger arithmetic is usually stated with an axiom schema rather than a finite axiomatization. But it also can be finitely axiomatized.
On the other hand, the only axiomatization of Skolem arithmetic I can find is at Wikipedia. It seems to be a finite axiomatization (it doesn't have an induction schema), but I don't understand it because it includes exponentiation though exponentiation is not primitive. So I can't say whether that Wikipedia axiomatization makes sense.
My point is that if we compare a finite axiomatization of Presburger arithmetic with finite axiomatization of Skolem arithmetic, we may find that they are indeed "analogous" to some extent.
The diagonal lemma:
(By the way, there seems to be a mistake in the page: "formula" should be "sentence").
Equivalently replace F by ¬ F:
ψ ↔ ¬F(°#(ψ))
It is used in this negative variant in Gödel's proof and Tarski's proof.
That is how you get:
For any property of logic sentences, there always exists a true sentence that does not have it, or a false sentence that has it, or both
With "formula" or well-formed formula replaced by "predicate" or "property".
ψ ↔ ¬F(°#(ψ))
is not:
"For any property of logic sentences, there always exists a true sentence that does not have it, or a false sentence that has it, or both."
Counterexample: Let P be the property: P(S) if and only if S is equivalent with S.
[EDIT CORRECTION: I misread the quote. The quote was a disjunction not a conjunction. Mine is not a counterexample. No true sentence is not equivalent with itself, but every false sentence is equivalent with itself.]
I guess you meant to write:
[i]Let P be the property: P(S) if and only if S is equivalent with P(#S)[/I].
In that special case, P is actually Tarski's truth predicate, which is indeed not definable. The conclusion here is that truth is not a legitimate predicate.
No, I meant what I wrote, I showed you a property of sentences that every sentence has.
[EDIT CORRECTION: I misread the quote. The quote was a disjunction not a conjunction. Mine is not a counterexample. No true sentence is not equivalent with itself, but every false sentence is equivalent with itself.]
And what you wrote doesn't even make sense. #S is a number, not a sentence.
In arithmetic theory, the argument n in P(n) must be a natural number. You cannot apply the predicate to S. You can only apply it to its Godel number.
P(S) is simply not a predicate of PA.
The identity predicate in PA is:
P(n) := n = n
It cannot be implemented as:
P(S) := S <-> S
You are trying to do something that is not supported in PA.
You wrote:
"For any property of logic sentences, there always exists a true sentence that does not have it, or a false sentence that has it, or both."
That doesn't mention PA. Rather, it is a universal generalization over properties and sentences.
I wrote that about the diagonal lemma, i.e. Carnap's theorem. Of course, there are conditions for when it applies. The required context is PA or an equivalent.
Furthermore, the term P(S) is in and of itself ambiguous.
It is a predicate that seemingly applies to a truth value. At first glance, it always means P(true) or P(false). It's as if it were a predicate with a boolean argument.
The term P(S) only works if you do not distinguish between the source code of the sentence and its truth value. It requires judiciously swapping between both meanings and second-guessing what exactly S means: the source code of the expression or its truth value? Sometimes this and sometimes that.
It yields expressions that are in fact not computable. No compiler would ever be able to compile that kind of thing.
('r' for 'the numeral for' and '#' for 'the Godel number of')
Let C be this theorem:
For certain theories T, for every formula F(x) there is a sentence S such that T |- S <-> F(r(#S)).
Let K be:
"For any property of logic sentences, there always exists a true sentence that does not have it, or a false sentence that has it, or both."
C is not correctly rendered as K.
(1) K doesn't qualify as to certain kinds of theories.
(2) C generalizes over formulas, not over properties.
(3) C doesn't say anything about 'true'.
(4) C doesn't say that for every property of sentences there is a true sentence that does not have the property. C doesn't say that for every property of sentences that there is a false sentence that does have the property.
[EDIT CORRECTION: I misread the quote. The quote was a disjunction not a conjunction. Mine is not a counterexample. No true sentence is not equivalent with itself, but every false sentence is equivalent with itself.]
Moreover:
(5) I showed a counterexample to both prongs of K.
[EDIT CORRECTION: I misread the quote. The quote was a disjunction not a conjunction. Mine is not a counterexample. No true sentence is not equivalent with itself, but every false sentence is equivalent with itself.]
You said that my counterexample is not in PA. So what? It doesn't have to be in PA, it merely needs to be a counterexample to K. And, by the way, K is not in PA, especially since PA doesn't have a predicate 'true'. And C includes PA as one of the T's, but C itself is not in PA.
(6) And with the arithmetization of syntax, both 'is a sentence' and 'is equivalent with itself' are expressible in PA. But I didn't do that, because K doesn't specify any language or kinds of theories.
/
For certain theories T, for every formula F(x) there is a sentence S such that T |- S <-> F(r(#S)).
is not remotely anything like:
For any property of logic sentences, there always exists a true sentence that does not have it, or a false sentence that has it, or both.
First, we replace F by ¬F. If F is a property then its negation is also a property. So, the following is an equivalent statement:
Next, we replace S <-> ¬F(r(#S)) by the equivalent expression:
(S ∧ ¬F(r(#S))) ∨ (¬S ∧ F(r(#S)))
Meaning:
(S is true and F is false) or (S is false and F is true)
Since ∨ is an "inclusive or", we can add "or both":
(S is true and F is false) or (S is false and F is true) or both.
So, it means:
A true sentence that does not have the property, or a false sentence that has the property, or both.
That is the same.
A formula that takes a sentence as argument is a property of that sentence.
That is just how logic works. Asserting:
S ∧ L
means:
S is true and L is true.
P(S) := S <-> S
Is indeed impossible in PA. However, you can implement it as:
P(n) := n=n
The diagonal lemma is still perfectly satisfied for the identity property:
There exists a true sentence that does not have the identity property or a false sentence that does have the identity property, or both.
All false sentences have the identity property. Hence, you can always find one to satisfy the lemma. So, in what way is the diagonal lemma not satisfied?
(1) Your quoted characterization did not have the specifications you are giving now. Your quoted characterization was a broad generalization about properties and sentences.
(2) PA doesn't say 'true' and 'false'.
(3) Inclusive 'or' allows 'both' but not regarding 'true' and 'false'.
We may have:
P is true and Q is false
or
P is false and Q is true
But we cannot have:
P is true and Q is false
and
P is false and Q is true
(4) There are properties not expressed by formulas, so the generalization should be over formulas, not properties.
(5) I did not say: "P(S) := S <-> S." I said: "with the arithmetization of syntax, both 'is a sentence' and 'is equivalent with itself' are expressible in PA." What is not expressible in PA are 'true' and 'false'.
It has always been an explanation about the diagonal lemma:
S <-> ¬F(r(#S))
Meaning:
(S ∧ ¬F(r(#S))) ∨ (¬S ∧ F(r(#S)))
Meaning:
(S is true and F is false) or (S is false and F is true)
Meaning:
A true sentence that does not have the property, or a false sentence that has the property, or both.
It was a choice not to provide these details because this kind of explanations quickly become impenetrable in a multidisciplinary environment.
Quoting TonesInDeepFreeze
The meaning of the S above is "a true sentence". PA doesn't say it, but that is what it means, for reasons of first-order logic.
Quoting TonesInDeepFreeze
In that case, it is not a property in PA, because that would require a predicate in PA. In fact, Tarski's truth is also a property but not one in PA.
It is possible to precisely state all the conditions that apply, but in that case, the explanation becomes impenetrable. Nobody would be interested in a multidisciplinary forum. In order to keep it readable, there is no other alternative than to leave things out.
(1) You skipped that I pointed out that:
(S is true and F is false) and (S is false and F is true)
is never the case.
(2)
"S ? ¬F(r(#S)" is not the same as "S & ~F".
"¬S ? F(r(#S)" is not the same as "~S & F".
F
does not have a truth value. What has a truth value is
F(r(#S))
Saying "F is false" is nonsense.
(3) I agree with this:
C entails that there is a sentence S such that T proves:
(S & ~F(r(#S))) v (~S & F(r(#S))).
F expresses a property. F(r(#S)) is true if and only if S has the property expressed by F.
But:
[EDIT CORRECTION: I misread the quote. The quote was a disjunction not a conjunction. Mine is not a counterexample. No true sentence is not equivalent with itself, but every false sentence is equivalent with itself.]
First, I made a mistake as I misread your disjunction for conjunction. I made edit notes for that in my posts now.
You don't know what every person is interested in. As far as posts thus far, the only person to comment on your remark is me, and I am interested in seeing the subject properly represented. Better not to post terribly poor renderings of technical matters than to mangle the subject. The fact that this is a philosophy forum doesn't entail that it is good to oversimplify to the point of foggy vagueness and/or substantive misstatement. And I don't know why you would suppose that people would care about your synopsis of Carnap if they didn't also grasp the mathematical basis. You think many (if any) people are going to read your one liner about the theorem and grasp anything about it without a clearly stated mathematical basis? Moreover, your one-liner is incorrect.
Your synopsis is poor:
It does not make clear that it pertains specifically to the mathematical theorem.
It should not say 'logic sentences' in general, since the theorem pertains to sentences in certain languages for certain theories.
You should generalize over formulas in those languages and not over properties (since there are properties not expressed by formulas).
Disjunction is inclusive, but that does not entail that it is ever the case that "P is true and Q is false" and "P is false and Q is true".
You conflate a predicate with a sentence.
I left out that detail because it is obvious. So, with the details:
It is more accurate but also much more impenetrable than:
The resulting syntactic noise detracts from understanding what exactly it is about. It muddies the explanation.
Quoting TonesInDeepFreeze
I simplified F(r(#S)) to just F, because I thought that it was obvious what it was about.
Quoting TonesInDeepFreeze
If that is truly the case, then the subject may not be suitable for a philosophy forum. I had hoped that it was, but you may be right.
The metaphysical implications do seem out of reach of philosophical investigation. Apparently, they have been for almost a century.
You should not say 'logic sentences' in general, since the theorem pertains to sentences in certain languages for certain theories.
You should generalize over formulas in those languages and not over properties (since there are properties not expressed by formulas).
Disjunction is inclusive, but it is never the case that both of these are true: "P is true and Q is false" and "P is false and Q is true".
In general, a disjunction 'phi or psi' might not allow 'phi and psi', depending on the content in phi and the content in psi.
/
People can decide for themselves what is too technical or not. Symbolic logic, some mathematical logic and set theory are included in some philosophy department programs. Symbolic logic, even in community colleges. And philosophy of mathematics is based on understanding the mathematics being philosophized about.
And my point was not that people wouldn't be interested or that there are not enough technically minded readers (we don't know who is reading now or who might read in the future), but rather that the subject deserves to not be mangled by oversimplification.
In context of mathematical logic, I would take a truth to be a certain kind of sentence relative to a given model. So, for a countable language, there are only denumerably many truths (i.e. true sentences).
One might also say that truths are states-of-affairs, such as a certain tuple being in a certain relation is a truth, even though no sentence asserts that fact. In that sense, yes, for an infinite universe, there are uncountably many truths.
Hmm. You could fix the interpretation and change the axioms to show that truth depends on the axioms plus the interpretation. That's not the usual way of thinking about it but I believe it could be shown.
Quoting TonesInDeepFreeze
Ok.
Quoting TonesInDeepFreeze
Yes you're right.
Quoting TonesInDeepFreeze
Yes ok.
Quoting TonesInDeepFreeze
Yes that's a bit of murkiness in the paper the OP linked. But Chaitin and the paper author seem to take truth in the latter sense.
I don't know what that means.
The definition is:
sentence S is true in model M if and only if [fill in the definiens here]
and that definiens doesn't mention 'axiom'.
There exist sentences that are true or there exist sentences that are false, or both.
"Or both" means: Potentially, there exist as well true as false sentences.
"Or both" is not about an individual sentence. It is about the fact that both existence clauses could be true, i.e. there are true sentences but also false sentences that satisfy the lemma.
Quoting TonesInDeepFreeze
I meant to say:
The term "or both" emphasizes that the "or" is not exclusive. The default interpretation in natural language for "or" is actually exclusive.
Quoting TonesInDeepFreeze
In my experience, it usually is too technical. The consequence is that nobody reads what I just wrote. I might as well not write it at all ...
If your statement S is an axiom, you will understand my point.
You wrote:
"(S is true and F is false) or (S is false and F is true) or both."
Which is:
(S is true and F is false) or (S is false and F is true) or ((S is true and F is false) and (S is false and F is true))
That is false.
[EDIT CORRECTION: It is not false. I should have said the third disjunct is false, thus otiose as added to the two other disjuncts.]
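For what it's worth, here is a quick mechanical check (a small Python sketch of my own, nothing from the paper) that the third disjunct is contradictory, so appending it to the first two disjuncts changes nothing:
[code]
from itertools import product

# Enumerate every assignment of truth values to the sentences S and F.
for s_true, f_true in product([True, False], repeat=2):
    d1 = s_true and not f_true        # (S is true and F is false)
    d2 = (not s_true) and f_true      # (S is false and F is true)
    d3 = d1 and d2                    # the third disjunct: both at once
    assert not d3                     # never true, hence otiose
    assert (d1 or d2 or d3) == (d1 or d2)

print("The third disjunct is always false; the disjunction reduces to the first two.")
[/code]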
If you meant something different, involving 'there exists', then you need to write it.
And just to be clear: The theorem is of the form: For all formulas P(x), there exists a sentence S, such that ....
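For readers following along, the theorem in question appears to be the diagonal (fixed-point) lemma. A standard statement, in my own rough notation rather than a quote from the paper, is:
[code]
\text{For every formula } F(x) \text{ with exactly one free variable in the language of PA,} \\
\text{there is a sentence } S \text{ such that } \mathrm{PA} \vdash S \leftrightarrow F(\ulcorner S \urcorner), \\
\text{where } \ulcorner S \urcorner \text{ is the numeral for the G\"odel number of } S.
[/code]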
Quoting Tarskian
I'm not nobody. You have a problem with quantifiers.
So you didn't write what you meant regarding S and F.
And still no recognition of these:
You should not say 'logic sentences' in general, since the theorem pertains to sentences in certain languages for certain theories.
You should generalize over formulas in those languages and not over properties (since there are properties not expressed by formulas).
It doesn't matter whether S is an axiom or not. The definition doesn't mention 'axiom'.
By the way, every sentence is an axiom of uncountably many axiomatizations.
You should go back to what I originally said that you objected to, and you will see that I am right.
It is not the case that 'mathematical truth' means ''axioms plus an interpretation'.
The definition is:
sentence S is true in model M if and only if [fill in the definiens here]
and 'axiom' is not mentioned.
/
'axiom' is a syntactic notion, not semantical.
A system T consists of
a language L
a set G of sentences in the language L (the axioms)
a set of inference rules
That induces a set T of theorems.
That is all syntactical.
'true' is a semantical notion.
a sentence S is true in a model M if and only if [fill in definiens here]
If you fix an interpretation and change the axioms, you'll get different truths. This is obvious. Not worth arguing about.
That is a deep misunderstanding.
An interpretation for a language determines the truth or falsehood of each sentence in the language.
Different axiom sets induce different theorems, hence different theories, but for a given interpretation, what axioms are in a given axiomatization has no bearing on that interpretation and no bearing on which sentences are true in that interpretation.
Again:
'sentence S is true in a model' is semantical.
'sentence S is an axiom for a system' is syntactical.
I suspect that what you have in mind may be put this way:
Given a set of axioms, every model in which the axioms are true is a model in which the theorems from the axioms are true.*
That is the case. But it is not a definition. It is a theorem (the soundness theorem). The definition of 'S is true in model M' does not mention axioms. Again: Given an interpretation (a model) M, the truth or falsehood of every sentence in the language is determined irrespective of what axiom sets there are for different systems.
* We also keep in mind that there are axiom sets such that there are sentences such that neither the sentence nor its negation is a theorem from the axioms, so, if the axioms are consistent, then there are models for the axioms in which a given independent sentence S is true and there are models for the axioms in which S is false.
For example, most trivially:
Let 'P' and 'Q' be sentence letters. Let the only axiom be 'P'. There are two models in which 'P' is true:
P is true, Q is true
P is true, Q is false
So the axiom P does not determine the truth or falsehood of Q.
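To make the point mechanical, here is a minimal sketch (my own, purely illustrative) that enumerates the interpretations of the sentence letters 'P' and 'Q' and picks out the models of the axiom set {P}; the truth value of 'Q' varies across those models, so it is fixed by the interpretation alone, not by the axiom:
[code]
from itertools import product

letters = ['P', 'Q']
axioms = {'P'}  # the only axiom is the sentence letter 'P'

# An interpretation assigns a truth value to each sentence letter.
for values in product([True, False], repeat=len(letters)):
    interpretation = dict(zip(letters, values))
    # A model of the axioms is an interpretation that makes every axiom true.
    if all(interpretation[a] for a in axioms):
        print(interpretation)

# Output: {'P': True, 'Q': True} and {'P': True, 'Q': False}.
# Two models of the axiom set {P}: the axiom does not settle Q;
# each interpretation on its own settles the truth value of every sentence letter.
[/code]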
Also, look at it this way:
Given a set of axioms G and a different set of axioms H, it may be the case that the class of models for G (thus for all the theorems from G) is different from the class of models for H (thus for all theorems of H). So let's say S is a member of G or a theorem from G, and S is inconsistent with H. Then, yes, of course, S is true in every model of G and S is false in every model of H.
But, given an arbitrary model M, whether S is true in M is determined only by M.
I did. I wrote:
"There are sentences that are like this. There are sentences that are like that. Both could exist."
There's a lot of syntactic noise associated with specifying F.
Quoting TonesInDeepFreeze
It's related to PA or similar. That is always implied.
Quoting TonesInDeepFreeze
Well, it is about properties that have formulas in PA. That was also implied.
Mentioning all of that, including the above, will make the entire explanation impenetrable.
I just refer to a link that contains all these details but I can pretty much guarantee that few people will ever read it.
My rendition is not suitable for a mathematical forum, but I had hoped that it would be for a philosophical one.
You said something similar to that. But later you said something very different. It's not the reader's job to suppose you don't mean what you write. Moreover, even if one did figure out that you meant something different from what you wrote, then it is still appropriate to point out that what you wrote is incorrect as it stands no matter that in your mind you meant something different.
Quoting Tarskian
No, a person doesn't know that you're not making the generalization that you stated. Instead of saying "logic statements" (and what is a "logic statement"?) it would have been correct to say "sentences in the languages of said theories" or something like that. Or say, "For theories of a certain kind". People are not supposed to guess that you don't mean what you say.
Quoting Tarskian
Then you need to say that. People are not required to not take you to mean what you say when you say "all properties".
Actually, all you needed to say is, "all predicates in the language" rather than "all properties" which is not only mathematically wrong but philosophically wrong.
Quoting Tarskian
Actually, it's impenetrable when you don't specify what you mean but instead put the burden on the reader to suss out what you might mean.
Quoting Tarskian
Another way of putting it on the reader to then wade through an article to figure out what you mean in a post, when you don't at least say what particular passages in the article you have in mind, or at least say that something like "my thesis depends on the bulk of this article that needs to be understood first". And linking to Wikipedia about mathematics is rank. Wikipedia articles about mathematics are too often incorrect, inaccurate, poorly organized or poorly edited. It's often to the subject matter what fast food is to nutrition.
Quoting Tarskian
Your writing about mathematics is so often incorrect and ill-formed. That it is in a philosophy forum and not a mathematics forum doesn't alter that it is so often incorrect and ill-formed.
You leave out crucial points because you think they are too technical. But then people who don't know that there are such crucially needed points are liable to be misled by your bad oversimplifications.
What you call "noise" is actually needed clarity of the signal. What you think is your signal is the sound of a blown woofer.
Back to this matter:
Whether there are uncountably many truths or whether there are unexpressed truths depends on what is meant by 'a truth'.
In context of mathematical logic, I would take a truth to be a certain kind of sentence relative to a given model. So, for a countable language, there are only denumerably many truths (i.e. true sentences).
Or one might also say that truths are states-of-affairs, such as a certain tuple being in a certain relation is a truth, even if there is no sentence that states that fact about that certain tuple and certain relation. In that sense, yes, for an infinite universe, there are uncountably many truths.
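The counting behind both senses can be made explicit (a standard cardinality observation, stated in my own words):
[code]
\text{Sentences: a countable alphabet yields at most } \aleph_0 \text{ finite strings,} \\
\text{hence at most } \aleph_0 \text{ true sentences per model.} \\[4pt]
\text{States-of-affairs: an infinite universe } U \text{ carries } 2^{|U|} > \aleph_0 \text{ unary relations alone,} \\
\text{so there are uncountably many facts of the form ``the tuple } t \text{ is in the relation } R\text{''.}
[/code]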
And your claim about ZF\I is incorrect. ZF\I is not bi-interpretable with PA. Rather, it is (ZF\I)+~I that is bi-interpretable with PA. (Actually, we can simplify to (Z\I)+~I, which is bi-interpretable with (ZF\I)+~I and bi-interpretable with PA.)
[EDIT CORRECTION: I think it is incorrect that (Z\I)+~I is bi-interpretable with PA. This is correct: If every set is finite, then the axiom schema of replacement obtains and (Z\I)+~I = (ZF\I)+~I. But I don't think that works; I was thinking that the negation of the axiom of infinity implies that every set is finite. But I think that itself requires the axiom schema of replacement.]
There are over 20,000 articles about math on Wikipedia. My own experience has been that accuracy improves with advanced topics, and I have found that as an introduction to a topic Wikipedia is very good. But I know little of foundations.
I'll fix that:
as an introduction to a topic Wikipedia is [s]very good[/s] lousy.
Two opposing opinions. Here is a discussion on Quora.
Even worse than Wikipedia, which much too often is, at best, slop. Quora is close to the absolute lowest grade of discussion. It is a gutter of misinformation, disinformation, confusion and ignorance. Quora is just disgusting.
(1) The layout of the threads is quite illogical and very impractical. The illogical organization style of answers and comments makes discussions incoherent. Like the rest of the site, the design is not to facilitate reading but to add to click counts and ad views. A site is not to be faulted for having ads, but the entire design of Quora is egregiously manipulative. (2) Posters are allowed to delete their posts, which is okay, but deletion of one's posts includes deletion of replies to the deleted posts. Thereby, a poster can wipe out all your replies. (3) Mathematics and logic discussions at Quora are inundated with prolific, persistent, chronic, serial cranks who slather the threads with misinformation, disinformation, confusion and ignorance. It's a disgusting cesspool. It's the dark web of discussion.
The only thing worse is "AI", which can always be relied upon for absurd misinformation and computer generated lies, all under the imprimatur of computing.
Better than StackExchange nowadays, and it is not as if Quora has gotten better over the years.
The original article that establishes and proves the bi-interpretability:
I have already linked to this original publication in a previous comment. The abstract says the following:
The official name for the set theory it is about is ZF-inf.
As I have already written in a previous comment, it is about ZF with the axiom of infinity removed and denied.
I don't know why you believe that the term ZF-inf would be wrong, because that is exactly the term that the original authors, Richard Kaye and Tin Lok Wong, use.
As far as I know, Wikipedia does not mention this publication anywhere in connection with bi-interpretability. There are a few placeholder pages on the subject but they look very much like a draft at this point.
I agree with the following comment:
I disagree with the following comment:
That is only going to make the problem worse.
In my opinion, it is preferable to mention the scholarly publications in the footnotes. That will allow anybody who is interested in the exact technical details to investigate them there.
Better deep than shallow.
I discussed this with myself and determined I'm right.
I prefer not to argue the point, if you'll forgive me.
Quoting TonesInDeepFreeze
I'll read and consider this when I get a chance. Thanks for posting it.
Another discussion: Are mathematical articles on wikipedia reliable?
StackExchange also has a bad discussion design, and often some confused discussions. But at least as far as math and logic go, I have found it to be far better than Quora, which is the pits.
Ah, that is not a notation I would have thought means "the axiom of infinity negated". I would have thought it means "ZF without the axiom of infinity". The notation with which I am familiar indicates (1) the axiom of infinity is dropped and (2) the negation of the axiom of infinity is adopted. But since ZF-Inf is also used for that, of course, with that use, ZF-Inf is bi-interpretable with PA. [s]And as I said, so is Z-Inf.[/s] [EDIT: cross out previous sentence.]
Better deep in knowledge and shallow in misunderstanding. Better deep in love and shallow in hate.
Quoting fishfry
Then you're discussing with the wrong person.
Quoting fishfry
You may prefer whatever you want; there's no need for forgiveness for preferring whatever you like; meanwhile, I prefer to show how you are wrong, since 'true' is defined entirely in terms of interpretations and not in terms of axioms.
There still is, but it is dead.
[EDIT CORRECTION: I think it is incorrect that (Z\I)+~I is bi-interpretable with PA. If every set is finite, then the axiom schema of replacement obtains, so (Z\I)+~I = (ZF\I)+~I. I was thinking that the negation of the axiom of infinity implies that every set is finite. But I think that itself requires the axiom schema of replacement.]
Yes, I understand that it's a part you need in Gödel-numbering, to make the number that holds the logical sentence. Once you have both addition and multiplication, you can do what Gödel did. With Presburger Arithmetic, completeness is lost if you also take multiplication into account:
(see On the Decidability of Presburger Arithmetic Expanded with Powers)
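To illustrate the Gödel-numbering step, here is a minimal sketch (my own toy coding, not Gödel's exact apparatus) that packs a sequence of symbol codes into one number as a product of prime powers; decoding and reasoning about such codes inside arithmetic is where both addition and multiplication get used:
[code]
def nth_prime(n):
    """Return the nth prime (1-indexed), by trial division -- fine for a toy example."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def goedel_number(symbol_codes):
    """Encode a finite sequence of positive symbol codes as 2^c1 * 3^c2 * 5^c3 * ..."""
    n = 1
    for i, code in enumerate(symbol_codes, start=1):
        n *= nth_prime(i) ** code
    return n

# Example: the code sequence (1, 3, 2) encodes as 2^1 * 3^3 * 5^2 = 1350.
print(goedel_number([1, 3, 2]))  # 1350
[/code]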
But then I also found an interesting answer on StackExchange to the question "Why does multiplication lead to incompleteness where addition does not?":
(See here)
I find this interesting. By "real recursion" I assume he means recursion over real numbers. Well, if you have real numbers, then you're hopelessly mired in the notion of infinity and infinite sequences and so on. With mathematical induction, we get to questions about infinity.
Don't understand that quote. But comments that might be on target:
(1) "given complete induction. Unfortunately Peano's axiom of induction is not fully reducible to a collection of first-order statements."
I guess by 'complete induction' he means induction over all properties (i.e. a second order theory). And, yes, the PA induction schema is over only formulas. But the induction schema does define a set of first order sentences.
But I guess 'Peano's axiom of induction' refers to a second order axiom, not the first order PA axiom schema.
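For reference, the contrast I take him to be drawing is roughly this (standard formulations, my wording):
[code]
\text{First-order PA: an induction axiom for each formula } \varphi(x) \text{ of the language:} \\
\big(\varphi(0) \land \forall x\,(\varphi(x) \rightarrow \varphi(x+1))\big) \rightarrow \forall x\,\varphi(x) \\[4pt]
\text{Second-order Peano axiom: one axiom quantifying over all properties/sets } X: \\
\forall X\,\Big(\big(X(0) \land \forall x\,(X(x) \rightarrow X(x+1))\big) \rightarrow \forall x\,X(x)\Big)
[/code]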
(2) The theory of real closed fields doesn't define the predicate 'is a natural number' (I hope I've stated that correctly).
(3) What do you mean by 'recursion of the reals'? Recursion requires well ordering.
From the quote, the only difference between recursion and real recursion that I can think of is using recursion for reals, in other words recursive real numbers: "A recursive real number may be described intuitively as one for which we can effectively generate as long a decimal expansion as we wish, or equivalently, to which we can effectively find as close a rational approximation as we wish."
Because I don't think there's real recursion and "phony" unreal recursion.
Thanks for the comments!
He might have meant something parallel to the distinction between first order induction and second order induction that he seemed to be mentioning, so that 'real induction' is second order and so too for 'real recursion'. But that would only be a guess.
Quoting ssu
That is not recursion over the reals. A recursive real r is such that there is a recursive function f on the naturals such that for each n, f(n) is the nth digit in the decimal expansion of r. That's still recursion over the naturals. I highly doubt that 'recursive real' is what he meant in this context. I think you're hearing hoofbeats in wild horse country and thinking zebras rather than horses.
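To illustrate that sense, here is a minimal sketch (mine) of a recursive real: a computable function on the naturals returning the nth decimal digit of sqrt(2), so all the computation runs over the naturals even though the object described is a real:
[code]
from math import isqrt

def sqrt2_digit(n):
    """Return the nth decimal digit of sqrt(2) (n = 0 gives the integer part),
    using exact integer arithmetic only -- computation over the naturals."""
    # isqrt(2 * 10**(2*n)) is the integer part of sqrt(2) * 10**n.
    return isqrt(2 * 10 ** (2 * n)) % 10

print([sqrt2_digit(n) for n in range(8)])  # [1, 4, 1, 4, 2, 1, 3, 5]
[/code]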
Choke me in the shallow water before I get too deep. -- Edie Brickell.
Quoting TonesInDeepFreeze
I have some darned fine conversations with myself.
Quoting TonesInDeepFreeze
You are free to do so, of course.
So did Bill Evans.
Help me out. The jazz player?
The albums 'Conversations With Myself' and 'Further Conversations With Myself'.
It's only my guess as to what he might mean. I've never heard of second order recursion or what it might be, though it seems like something that might exist.
Thank you. "Today I learned."
Anytime you want jazz album recommendations, just ring.
Jazz is one thing I know a lot about, unlike logic.
Ok!
Quoting TonesInDeepFreeze
Haha very humble of you.
Confession is the road to redemption. Nice start!
Half is humble, since my knowledge of modern logic is not extensive relative to people who study it a lot more intensely (though vastly greater than cranks and jokers - such as in this forum - who don't know jack about it). And I've forgotten a lot of what I knew and am rusty on many details and more advanced topics. Also, in the last couple of weeks, very atypically, I made not just one or two reasoning errors but a series of them, though I exercised intellectual honesty to correct them. The other half is not humble, since I do have a well developed perspective on jazz - technically, historically, discographically - and a well developed taste in it and an intense emotional and spiritual connection with it, though there are people who know a tremendous amount more than me.