Even programs have free will
Imagine that you install an app on your phone that can tell you, minute by minute, what you will be doing at any point in the future, along with all possible details.
The existence of this app would prove that you are just an automaton, i.e. a robot. In that case, it would be ridiculous to claim that you have free will. Conversely, you can prove the existence of free will by proving that it is impossible to construct such an app.
In fact, no app can even tell, minute by minute, what any other app will be doing.
Say that you try to construct such an oracle app. It inspects the source code of any other app, looks at what inputs that app will be getting from its environment, and then predicts what that app will be doing.
Now we construct a pathological app, the thwarter.
The thwarter first asks the oracle to predict what the thwarter will do. The oracle looks at the source code of the thwarter and at the inputs it will be getting from the environment, and then predicts what the thwarter will be doing. Upon receiving the answer from the oracle, the thwarter does something else instead, because that is exactly how it was programmed.
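In code, the construction might look something like this (a minimal Python sketch; `naive_oracle` and the action strings are made-up stand-ins, since a real oracle is exactly what cannot exist):

```python
def thwarter(predict):
    # Ask the oracle for its prediction about this very program...
    prediction = predict(thwarter)
    # ...then do the opposite, exactly as programmed.
    return "loop" if prediction == "halt" else "halt"

def naive_oracle(program):
    # A made-up oracle that always predicts halting.
    return "halt"

print(thwarter(naive_oracle))  # prints "loop": the prediction is refuted
```

Whatever prediction function you plug in, the thwarter's actual behaviour differs from the predicted one.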
The narrative above is pretty much the gist of Alan Turing's proof for the halting problem.
The environment of the oracle and the thwarter is perfectly deterministic. There is nothing random going on. Still, the oracle cannot ever predict correctly what is going to happen next. The oracle is therefore forced to conclude that the thwarter has free will.
So by your argument, you've used Turing's argument to prove free will. Somehow that doesn't follow from the impossibility of such an app since the app is impossible even in a pure deterministic universe.
The natural numbers are also a pure deterministic universe. Most of its truth, however, cannot be predicted by arithmetic theory. A pure deterministic universe can still have free will as long as its theory is incomplete.
The real requirement here, is incompleteness of the theory.
Tarskian, You may be interested in a recent paper by Joel David Hamkins. Turing never proved the impossibility of the Halting problem! He actually proved something stronger than the Halting problem; and something else equivalent to it. But he never actually gave this commonly known proof that everyone thinks he did. Terrific, readable paper. Hamkins rocks.
https://arxiv.org/pdf/2407.00680
Quoting Tarskian
That's too strong a statement. If an app is halted, I can write a program that, given any time t, says, "The program is halted at time t."
Likewise if I'm dead, a program can exactly predict what I'm doing. But of course in that case I wouldn't have much in the way of free will.
Quoting Tarskian
Penrose thinks free will might be a quantum effect in the microtubules of the brain.
https://philosophy.stackexchange.com/questions/3322/the-emperors-new-mind-and-free-will
By the way, humans may or may not have free will.
Programs, by their very nature, do not have free will.
Time is of the essence.
The Thwarter app is not aware (figuratively speaking) of the existence of the Oracle app. All the Thwarter app is aware of is input.
Therefore, we only need to consider the Thwarter app.
Feedback occurs when the output of the Thwarter app then becomes new input. This is a temporal process, in that its output happens at a later time than its input.
The source code of the Thwarter app determines the output from the input using a function F, where output = F(input).
For example, if the input is a set of numbers, such as 3, 5 and 7, the output could be the addition of this set of numbers, such as 15.
At time 0, let there be an input I(1). This input cannot include any subsequent output, as all output happens at a later time.
At time 1, the output O(1) can be predicted by applying the function F to input I(1).
At time 1, the new input I(2) includes output O(1).
At time 2, the new output O(2) can be predicted by applying the function F to input I(2).
At time 2, the new input I(3) includes output O(2).
Etc.
At each subsequent time, the output can be predicted from the input. The output is pre-determined by the input.
At any later time, the output has been pre-determined by the situation at time 0.
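The feedback loop just described can be sketched in a few lines of Python (with F chosen as summation, as in the example above; the variable names are mine):

```python
def F(inputs):
    # The example choice of F: add up the input numbers.
    return sum(inputs)

state = [3, 5, 7]   # I(1) at time 0
history = []
for step in range(3):
    output = F(state)          # O(n) = F(I(n))
    history.append(output)
    state = state + [output]   # I(n+1) includes O(n)

print(history)  # prints: [15, 30, 60]
```

Every entry in `history` is fully determined by the input at time 0, which is the point of the argument.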
Just finished reading it. It is very informative. I must say, though, that it is heavily vested in logic connected to the arithmetical hierarchy. It is still doable but admittedly an obstacle of sorts if you do not use that framework particularly often.
Hamkins acknowledges that the contemporary version of the proof is arguably preferable to Turing's original "detour":
I have tried to turn Hamkins' phrasing of the standard contemporary proof into a narrative:
For the original circle-free problem, the proof is actually trivially easy.
Say that we call programs with infinite output "infinitist" ("circle-free") and programs with finite output "finitist" ("circular"). Can we list all possible infinitist programs? No, because if we list their infinite output in a table, then we can create a brand new infinite output by flipping the bit on the diagonal, i.e. by diagonalization.
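The diagonal flip can be shown on a finite truncation of such a table (the bit entries below are arbitrary, purely for illustration):

```python
# Each row is (a truncation of) one listed infinite output.
table = [
    [0, 1, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 0],
]

# Flip the bit on the diagonal to build a new output.
diagonal_flip = [1 - table[i][i] for i in range(len(table))]

# The new output differs from row i at position i, so it is in no row.
for i, row in enumerate(table):
    assert diagonal_flip[i] != row[i]
print(diagonal_flip)  # prints: [1, 0, 1, 1]
```

No matter how the table is filled in, the flipped diagonal cannot equal any row, so no listing can be complete.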
The real difficulty is related to how Turing uses the circle-free problem to prove that the symbol-printing problem is undecidable:
I agree with Hamkins' take on the matter. I also find the contemporary standard version of the proof much simpler than Turing's original approach. Turing's "unusual kind of reduction" feels like an exercise in painful shoehorning.
Quoting fishfry
I think that humans have a soul while programs do not. However, since programs also make choices, they can, just like humans, appear to be "free" in making them or not. That is why I think that it is perfectly possible to analyze free will as a computability problem.
Yes, the oracle may perfectly well know that the thwarter will do the opposite of what it predicts, but it has already committed to its prediction. By then it is too late.
"The following sentence is true" (the oracle predicting the impending outcome). "The previous sentence is false" (the thwarter doing the opposite of what was predicted, to thwart the oracle's veracity/render it false).
It is a dichotomy/pair of mutually cancelling phenomena. The result: lack of utility of either. They're entangled and self-defeating.
The Thwarter app has a source code which specifies how the Thwarter app performs a calculation when given input information.
The Thwarter app is given an input and performs a calculation to arrive at an answer.
It may be that the Oracle app knows that the answer is contained within the input information.
However, the Thwarter app would only know that the answer was contained in the input information after it had completed its calculation, and then it would be too late to change what type of calculation it had used.
I.e., the calculation that the Thwarter app uses cannot be determined by an answer that is only known to the Thwarter app after it has completed its calculation.
I enjoyed it.
Quoting Tarskian
You have made an impressively detailed reading of the article, way more than I did.
Quoting Tarskian
I am in complete agreement. But just try to explain that to the simulation theorists, the mind-uploading freaks, the singularitarians, the AGI proponents, etc. They have the mindshare these days.
Quoting Tarskian
Hmmm. Let me mull that over. I don't agree. Computability, by its nature, is deterministic. Whatever free will is, it is not computable.
Computability may be deterministic but is fundamentally still unpredictable too. It is generally not possible to predict what a program will be doing at runtime:
A deterministic system is unpredictable when its theory is incomplete. There is no need for randomness for a system to be unpredictable. Free will is essentially the same as unpredictability.
I am not sure if Rice's theorem means what you say it does.
If you give me a program, say its listing printed out on paper; and you give me its inputs; and you give me a lot of pencils, paper, and time; I can deterministically and with no ambiguity determine exactly what it's going to do. I can not imagine this being false, and therefore Rice must be full of beans! :-)
Quoting Tarskian
Not how I understand this. A chaotic system is deterministic yet unpredictable. Nothing to do with incompleteness. There's no free will, none whatsoever, in a chaotic system.
You will never predict correctly what thwarter is going to do.
Quoting fishfry
When you put the thwarter in that chaotic system, you suddenly have something freely making decisions while you cannot possibly predict what decisions it will make.
Free will is a property of a process making choices. If it is impossible to predict what choices this process will make, then it has free will.
What makes you convinced thwarter is a genuinely possible program? Has anyone programmed one?
If a chaotic system can be deterministic but unpredictable, then you should be able to imagine software that is chaotic, and thus deterministic and unpredictable, no?
I think there's a subtly shifting meaning for the word "unpredictable" that's at play there.
Thwarter is trivially easy to implement. On input of string "halt" it prints "loop forever" and on input "loop forever" it prints "halt".
So, if the prediction (which is the input string) is that Thwarter will print "halt" or "loop forever", it won't.
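The two lines of behaviour just described, as runnable Python (a sketch; the strings are the ones in the example):

```python
def thwarter(prediction):
    # On the prediction "halt", answer "loop forever"; otherwise answer "halt".
    return "loop forever" if prediction == "halt" else "halt"

print(thwarter("halt"))          # prints: loop forever
print(thwarter("loop forever"))  # prints: halt
```

Either way, the action never matches the prediction that was fed in.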
The problem is rather to implement oracle. Example:
https://github.com/Solidsoft-Reply/Halting-Problem
I really don't see that as free will in any meaningful sense.
It is a contorted example.
It is accepted as proof, however, that no oracle can exist that can predict what choices programs will make.
Even in a perfectly deterministic environment, free will can still exist, as long as its theory is incomplete.
Therefore, we don't even need the physical universe to be nondeterministic for free will to be possible. It just needs to be incomplete.
So one can imagine a world where determinism is true, this oracle is impossible, and free will doesn't exist because determinism is true, regardless of this oracle.
incompatibilism implies that if predeterminism is true, free will doesn't exist
There is a massive difference between predeterminism and deterministic systems. If a deterministic system is incomplete, its future is not predetermined.
Quoting flannel jesus
It is exactly in a predetermined world, that the oracle can make flawless predictions.
The universe consisting of just the oracle app and the thwarter app, however, is not predetermined because of its incompleteness. The construction theory of this world is capable of arithmetic. That is enough to make it incomplete and therefore not possibly predetermined.
It is perfectly possible to deterministically build machines that are not predetermined.
Predeterminism implies that the system's theory is complete. In that case, every true fact about the system can be derived from its theory. If this is not possible, then the system's theory is incomplete.
For example, the arithmetic theory about the natural numbers is incomplete.
The arithmetic of the natural numbers is obviously a deterministic system. There is nothing random about it. Still, its truth is mostly unpredictable.
I don't think anything about the oracle or the thwarter says anything interesting about free will at all, personally, i think it's a red herring.
I use the term "predeterminism" instead of "determinism" because of the possible confusion with the term "deterministic system".
A deterministic system that is incomplete is not predetermined. Further confusion is also caused by tying the term "determinism" to "causality":
Causality is not a usable notion in mathematics. It is replaced by "provable from its theory". We don't need to know what the individual causes are for a particular fact. We don't care about that. We just need to know that the system can correctly predict the fact. Hence, the idea that all facts are "causally inevitable" translates into all facts being "provable from theory". So, the term "determinism" in mathematical terms means "completeness". It does not mean "deterministic system".
There are two fields involved here: metaphysics and mathematics. The vocabulary is not completely aligned.
The existence of a functioning oracle is equivalent to determinism (with the notion of determinism equivalent to the notion of completeness). The oracle fails. It doesn't function. Therefore, there is no determinism.
Asserting incompatibilism, as a notion in metaphysics, translates into proving the impossibility of constructing an oracle, as a notion in computer science. It is effectively equivalent. The difficulty here is that we are mapping concepts from one field to another.
So then when you were talking about incomplete determinism, you were... what? What is that? An oxymoron? Nonsense? A contradiction? What is that?
incomplete deterministic system.
This is just factually untrue. You've got chaos theory which makes future-predicting oracles impossible, to start with.
You got caught up in the vocabulary misalignment. The phrase "incomplete deterministic system" is perfectly fine in computer science or mathematics. It means that there is nothing random in the system ("deterministic system"). However, most facts can not be predicted from its theory either ("incomplete"). This is the essence of Gödel's incompleteness theorem.
I am trying to point out the metaphysical implications of the foundational crisis in mathematics. That is necessarily multidisciplinary, meaning that you end up with two vocabularies that are not necessarily compatible.
Gödel proves the lack of determinism (as in metaphysics) in particular deterministic systems (as in mathematics).
This sounds confusing.
This misalignment in vocabulary is, however, inevitable because people from either field rarely talk to each other or read each other's publications.
When one definition of determinism is equivalent to "completeness", but then another definition allows you to say "incomplete determinism", and you put pretty close to 0 effort into explaining how that's supposed to make sense, I can't imagine I'm alone in just thinking it's all nonsense from that point on.
The misalignment in vocabulary is something akin to a landmine. You become aware of the problem only after the facts. But then again, I don't think that "determinism" is a much used term in mathematics. You will mostly find the term "deterministic system".
If you Google for "mathematics determinism", the first search result is "deterministic system":
https://www.google.com/search?q=mathematics+determinism
So, even Google is confused here, because "determinism" does not mean "deterministic system" in mathematics. It means "completeness".
So, if even Google puts "pretty close to zero effort" into getting the facts straight, then it means that their 182,000 members of staff are possibly just spouting nonsense instead of properly maintaining their search engine.
Well, the real conclusion is that playing the blame game is pointless. Looking for whom to blame is unproductive. Furthermore, it never fixes the real underlying problem. Two different backgrounds means two different vocabularies. Sometimes it still works flawlessly. Sometimes, it doesn't.
Maybe there's not, maybe you can't clarify your ideas.
It's recursive in a way that means the oracle can't even begin.
It's like me telling you, Tarski, I have a math problem for you: your job is to give me a number that's 2 more than the answer to this math problem.
Does that even make sense as a task?
There's no problem with this oracle being impossible in the first place, because of course it's impossible, the task itself is inherently recursively impossible.
There are landmines in the combined vocabulary of metaphysics and mathematics. It is overly optimistic to believe that you can always detect them beforehand. The sentence "Gödel proves the lack of determinism of deterministic systems" even sounds contradictory. If you are lucky, you become aware of the problem after the fact. I can certainly imagine situations in which you actually don't.
It is actually a description of the standard contemporary proof for Alan Turing's halting problem. The oracle must predict if thwarter will print a zero or not.
Hamkins describes it as follows:
https://arxiv.org/pdf/2407.00680
Just for the hell of it, I rewrote Hamkins' wording in terms of the oracle and the thwarter:
And who came up with that sentence? Typed that into google, no hits. Is that one of yours?
The real question is, who confused the vocabulary? Well, the pretty much complete absence of communication between both fields.
It's kind of hilarious, it seems like you're using this as an example of some unavoidable language landmine just about anybody could walk into, but... it's not, it's just another landmine YOU personally chose to walk into.
Like, we're in a sitcom and you see a landmine on the ground and you just actively, knowingly step right on it, and your leg blows off a hundred yards away, and you look right in the camera and the Curb Your Enthusiasm music plays and you say "Damn, these landmines are so hard to avoid."
They... aren't that hard to avoid. You're literally not trying.
The problem is that this is not the only problem. It is just one of the problems. The language in which the foundational crisis of mathematics is worded is usually "impenetrable". So, I first need to translate it into a narrative with an oracle and a thwarter, because otherwise it is absolutely not suitable for interdisciplinary use.
For example, Hamkins paper:
Indeed, it is actually surprisingly readable for a paper on this subject. The following paragraph, however, is unsuitable for interdisciplinary use:
If I cannot find an alternative way of phrasing this, it will be pointless to use this particular argument. Fortunately, I don't need this argument for anything.
The effects of diagonalization are important and should be discussed here in PF. It's great that this pops up in several threads and people obviously are understanding it!
Basically the oracle is similar to Laplace's demon, which we have talked about, for example here (real-world example) in the "The Argument There Is Determinism And Free Will" thread. One simply cannot say what one doesn't say or predict what one doesn't predict. Yet on some occasions this obviously can be the correct prediction. In your example, you make the diagonalization with the "Thwarter app".
It should be noticed that this doesn't refute determinism; it is just that any program, or any predictor himself or herself, is part of the universe, and once there's interaction with the reality to be predicted, situations will arise where it cannot predict the future. The pathological "Thwarter app" is similar to what is described in Turing's paper about the Entscheidungsproblem. But notice that even without this app, problems will arise. (Btw, have you read Yanofsky's A Universal Approach to Self-Referential Paradoxes, Incompleteness and Fixed Points that we discussed in another thread? It should be important to this too.)
Yet what should be noticed is that this is a limitation that we have or any machine has in the ability to forecast everything. There's much that indeed can be accurately predicted.
And free will?
Well, this doesn't refute determinism, it's only a limitation of basically our computational abilities and logic. So the philosophical question of free will won't go anywhere.
And does the Thwarter app have free will?
Well, the thwarter app does exactly what the original app doesn't do. Is that free will? The thwarter app can still be a program (Turing Machine) that itself cannot do anything other than what is written in its own program.
Thanks! Again a fine article, @fishfry, that I have to read. I've been listening to Youtube lectures that Joel David Hamkins gives. They are informative and understandable.
O: You will produce the number 2.
T: [produces number 7]
Which is exactly what oracle had predicted and [whispered] to the experimenters.
It seems like the scenario conflates a specific chain of events with an inability to accurately predict the future. Yes, that app, if it is forced to state its conclusion to the thwarter, might not be able to predict that one part of the future. But that doesn't mean an app couldn't predict the future - though I think there are computing-power issues in making such a deity-level app.
A deity level app given a self-undermining task has a problem.
I can't see where one can conclude there is free will from the odd restrictions and fantasies in this scenario.
One, if an app such as the oracle were to exist, it would only show possibilities in which the thwarter cannot thwart the future, so it may actually work in that case. Obviously, this is impossible if the thwarter is effective, so the thwarter would just not do anything, because the oracle could never show a possibility.
While both working is a paradox and a result of impossible apps, I would still ask, how does the oracle conclude the thwarter has free will? There is a distinction between sentient individuals and programmed applications, even if both are able to respond and make decisions. The thwarter does not have free will because its choices are limited. It can only choose to respond in a set of ways, there are some things it cannot choose to do, unlike a sentient entity, which can essentially choose to do anything.
If I predict you will go to the store and you do, that would not be sufficient for me to say you didn't have free will when you went to the store.
At what point do you declare my predictive powers eliminate your free will? How many trials must there be and would a single variance re-establish my free will?
If I accurately predict the outcome of 50 coin tosses, does that necessarily make the coin-toss outcomes not random?
This is a good point. An infinite amount would not limit free will. Free will is only limited if the person does not have complete control over the choices they can make.
Quoting Hanover
And to this point, they would still be random. What does finding out the outcome earlier have to do with the randomness of the trial?
I'll concede you the Halting problem, but certainly not that programs have free will, if that was the claim.
Quoting Tarskian
Nothing is "freely making decisions." That's a complete misunderstanding of what programs are. I know you know that, so you must be using free will in a different sense than I understand.
Quoting Tarskian
Oh for gosh sake. That's not true. A coin doesn't have free will when you flip it. And if you say that deep down coin flips are deterministic, so are programs.
I believe I'm losing this point. I do know about chaos.
Quoting flannel jesus
Yes, I think I have lost this debate to @Tarskian. Except that he thinks programs have "free will," and of course they don't.
Quoting flannel jesus
Agreed. But also, chaotic deterministic unpredictability is not the same as Halting problem deterministic unpredictability, and Tarskian is trying to make some kind of connection.
But I concede the point that programs are inherently unpredictable in the sense of Turing. Not in the sense of chaos, and they certainly don't have free will, except for alternative definitions of the phrase.
You haven't lost any debate, you just made a post with some mistakes. You seem ready to acknowledge them, which is winning in my book.
It's relatively short. You can skip most of the technical bits. I did.
Quoting ssu
He's awesome.
I make many misteaks :-)
Although an epistemic limitation falls short of a metaphysical proof, I am sympathetic to the idea of free will, because in my opinion the conceptual distinction between free will and determinism rests upon a belief in absolute infinity, which I reject.
In my view, to say that "A => B is necessarily true" in the sense of material causation is to say that there exists a Z such that "A => Z is necessarily true" and "Z => B is necessarily true". If we reject the idea that this definition can appeal to actually infinite recursion, then the use-meaning of "A => B is necessarily true" in any given context must eventually bottom out in a finite chain of implicative reasoning, in which the meaning of "necessarily true" is left undefined.
A simpler way of putting it is to say that we make up the meaning of "A => B is necessarily true" as we go along. This proposition doesn't have a precise a priori meaning, and so isn't contradicted by a future discovery that A => B fails to hold; rather, the proposition meant by the sentence "A => B is necessarily true" changes on discovery that A => B fails to hold.
Deep down humans could also be deterministic. As long as the theory of humans is incomplete, humans would still have free will.
Or, at any point, oracle might say, "I'll (app equivalent of) write it down, and, after you act, you can read it. And you'll see I predicted accurately."
Thwarter needs a prediction as input. Otherwise it does not run.
Yes, of course, Oracle can perfectly know what is truly going to happen. However, his knowledge of the truth is not actionable. What else is he going to do with it?
Yes. But notice that the Oracle staying silent can also be viewed as an input. So when the Oracle is silent and doesn't make a prediction, the Thwarter can do something (perhaps mock the Oracle's limited ability to make predictions), which should be easily predictable.
Quoting Tarskian
Oracle can know perfectly what is going to happen if your Thwarter app is a Turing Machine that runs on a program that tells exactly how Thwarter will act on the Oracle's prediction.
And this is why you have to go a step further than just declaring that the Thwarter has free will. After all, what's the "free will" in the following?
Oracle predicts A -> Thwarter does B
Oracle predicts B -> Thwarter does A
Oracle predicts something else or is silent -> Thwarter does B
Notice the simple diagonalization. Now, here really both the Oracle and the Thwarter can be basically Turing Machines. Turing Machines don't have free will.
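The three-line table above, written as a program (a sketch; "A" and "B" are placeholders for whatever actions the apps can take):

```python
def thwart(prediction):
    # Oracle predicts A -> do B; predicts B -> do A; anything else -> do B.
    return {"A": "B", "B": "A"}.get(prediction, "B")

# Whatever the Oracle outputs, the Thwarter's action differs from it:
for prediction in ["A", "B", "silence"]:
    assert thwart(prediction) != prediction
```

Nothing here is anything but a lookup table, which is the point: the diagonalization works even when both parties are trivially mechanical.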
However, you do get to the really interesting point of free will from this (which is basically a result of the Church-Turing thesis) when you ask the following question: if the Oracle knows its limitations in predicting the Thwarter, but can write the Thwarter's actions down on paper, when does the Oracle have problems even with writing the actions of the Thwarter on paper?
The Thwarter cannot be a simple predictable program that simply reacts to the Oracle's prediction. The Oracle can easily write this down, as it knows the Thwarter's program.
The Thwarter app basically has to be an Oracle itself, with an ability that no Turing Machine has: it has to understand the program it itself is running on and then change its behaviour/action in a way that it never has before.
How does the Oracle now write down what is going to happen, as in this case there is no historical example of what the Thwarter will do? Well, it cannot use past information and extrapolate from it.
It should be understood here that computers cannot follow an order to "do something else". They can follow it only if their program contains instructions for what to do when asked to "do something else". And what the "Thwarter app" has to do now is even more: something doing the above amounts to a "double diagonalization", if one can coin a new term.
But of course it should be evident that nobody here will crack the philosophical question of free will, because the counterargument to this is that even we cannot know our own "metaprogram". Well, I would argue that, as we can understand our behaviour at least partly and can learn from the past, this "double diagonalization" is at least partly something that we can do. Yet this deep philosophical question of free will won't go away.
In my view, this is an extremely important discussion, because it shows what a profound philosophical impact the findings of Turing and the Church-Turing thesis have. Just what lies beyond computability is a very important question. It's not just a limitation in mathematics for computability; it's also a deep philosophical limitation.
Comments?
If a program knows a list of things it can do [ A1, A2, A3, ..., An], and it receives the instruction "do something else but not Ak", then it can randomly pick any action from [A1, A2, ..., A(k-1),A(k+1) .... An] as long as it is not Ak.
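A sketch of such a program (the names are mine; `random.choice` supplies the arbitrary pick):

```python
import random

def do_something_else(actions, forbidden):
    # Pick any known action except the forbidden one.
    choices = [a for a in actions if a != forbidden]
    return random.choice(choices)

actions = ["A1", "A2", "A3", "A4"]
print(do_something_else(actions, "A2"))  # any of A1, A3, A4
```

The result is guaranteed to differ from the forbidden action, though which alternative gets picked varies run to run.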
Chaos theory has already been brought up twice, which he ignored, like he does every time his incorrigible nonsense is challenged. Prediction of choices has nothing to do with free will, and this is nonsensical woo disguised in logical language. If you know your friend likes cake over pie, it is possible to predict he will choose cake; it doesn't mean he has no free will.
Randomly picking some action from [A1, A2, ..., A(k-1), A(k+1), ..., An] as long as it is not Ak is surely not "do something else". It is an exact order that is in the program, which the Oracle can surely know. Just like "if Ak, then take A(k+1)". A computer or Turing Machine cannot do something not described in its program.
It's not systems that are "incomplete" (the idea makes no sense at all), but our understanding of systems.
Understanding of a system amounts to having perfect knowledge of its construction logic, i.e. its theory.
For example, the axioms of arithmetic theory are perfectly well known. Every claim that we can prove from it, is also true in the universe of the natural numbers. However, most of the truth about the natural numbers is still unpredictable.
So, it is not because you build a system -- and therefore know how you have built it -- that you will be able to predict its entire truth. Only some of its truth will be predictable.
The idea is that every physical system has a sound theory, albeit possibly unknown. Every collection of truth has a sound theory.
Every claim that necessarily follows from this sound theory will be true about the physical system.
This is even true about the entire physical universe. It is not because we do not know this theory that it does not exist.
Quoting Janus
The universe is not a theory. It is a collection of truth, i.e. a "model" or "interpretation" of its unknown theory.
If its unknown theory is complete, it can predict its entire history, akin to Laplace's demon. No free will could possibly exist in it.
If we knew its incomplete theory, we would still not be able to predict most of its truth or future. We know the theory of the natural numbers. However, because it is incomplete, we cannot explain most of its truth.
Quoting Janus
If some of its truth is unpredictable, its theory must be incomplete.
The alternative would violate Gödel's completeness theorem. If a theory is complete, every fact in its universe is provable and therefore predictable.
Without unpredictability, free will is not possible. Therefore, incompleteness is a firm requirement for free will.
Free will necessarily implies incompleteness, according to the impossibilist assessment.
You only need to discover one true sentence that is not provable from the system's theory to conclude that most of the system's truth isn't predictable.
Every collection of truth has a sound theory. However, only some part of its truth may follow from it.
Say that a collection of truth has 5 sentences: A, B, C, D, E. From its incomplete theory only B and E necessarily follow. Therefore, A, C, and D are its unpredictable truths.
One major problem in trying to discover this system's theory is that some of its truth must be ignored. You cannot possibly discover its theory if you take A, C, and D into account. You must ignore them.
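The toy count above can be sketched directly (purely illustrative names):

```python
# Toy model of the five-sentence example: the model's full truth
# versus what its incomplete theory can derive.
truths = {"A", "B", "C", "D", "E"}   # everything true in the model
provable = {"B", "E"}                # what the incomplete theory proves

unpredictable = truths - provable    # the truths the theory cannot reach
print(sorted(unpredictable))         # ['A', 'C', 'D']
```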
The general idea in physics is that we cannot discover a theory because we can see too little. According to mathematics, it is actually the other way around: we cannot discover a theory because we can see too much.
One reason why mathematics works, is because we cannot easily see its unpredictable truth. It takes a series of rather difficult hacks to even detect that it is there.
Yanofsky seems to say that all the paradoxes listed in his paper are somehow consequences of Cantor's theorem. Even though I understand Cantor's theorem as described on Wikipedia:
I cannot fully grasp why it is supposedly the same as how Yanofsky phrases it:
In my impression, f(x,y) is Cantor's table while g(r) is the value in the diagonal that is not in the table, or something like that. Concerning Y, a derangement α (a permutation without fixed points) must exist. I can't connect it, though. He does not mention Cantor's power set. His wording for the theorem seems to condense Cantor's diagonalization proof right into the statement of the theorem itself. My intuition says that Yanofsky's version is undoubtedly correct, but I don't fully master its construction.
While Cantor says something simple, i.e. any onto mapping of a set onto its power set will fail, Yanofsky says something much more general that I do not fully grasp.
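My tentative reading of the generalized statement can at least be checked mechanically on a finite grid. In the sketch below (my own construction, with the names alpha, f, g taken from Yanofsky's wording), the claim is: for any f : Y x Y -> T and any alpha : T -> T without a fixed point, the diagonal g(y) = alpha(f(y, y)) differs from every "row" f(-, y0), because at the entry y = y0 they cannot agree.

```python
import itertools

# Finite check of the generalized diagonal argument.
Y = [0, 1, 2]
T = [0, 1]
alpha = lambda t: 1 - t          # negation on T: it has no fixed point

def escapes_all_rows(f):
    """True iff g(y) = alpha(f(y, y)) equals no row f(-, y0)."""
    g = lambda y: alpha(f(y, y))
    return all(any(g(y) != f(y, y0) for y in Y) for y0 in Y)

# Exhaustively try every possible f on this 3x3 grid (2^9 = 512 tables):
for table in itertools.product(T, repeat=len(Y) * len(Y)):
    f = lambda x, y, t=table: t[x * len(Y) + y]
    assert escapes_all_rows(f)   # no f can contain its own twisted diagonal
print("no counterexample among all 512 tables")
```

Cantor's theorem then looks like the special case where T = {0, 1}, the rows are characteristic functions of subsets, and alpha is negation.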
Ok, this is very important and seemingly easy, but a really difficult issue altogether. So I'll give my 5 cents, but if anyone finds a mistake, please correct me.
Let's first think about how truly important making a bijection is in mathematics; a bijection is both an injection and a surjection. We can call it a 1-to-1 correspondence or a 1-to-1 mapping. And basically bijections are equations like y=f(x) or 1+1=2. And of course Cantor found a way to measure infinite sets by making bijections between them; for example, there is a bijection between the natural numbers N and the rational numbers Q.
With the diagonal argument, or diagonalization, we show by negative self-reference that a bijection is impossible to make, because the relation is not surjective. This is the proof of Cantor's theorem. Yet this is also the general issue that Yanofsky is talking about, as it is found in all of these theorems.
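The specific Cantor case can be shown concretely. In this sketch (my own illustration), any attempted listing of infinite 0/1 sequences misses the flipped diagonal, so the listing is never surjective:

```python
# An attempted enumeration of 0/1 sequences: sequence n at position k.
def listing(n):
    return lambda k: (n >> k) & 1      # an arbitrary concrete choice

# Flip the k-th digit of the k-th sequence: negative self-reference.
def diagonal(k):
    return 1 - listing(k)(k)

# The diagonal sequence differs from sequence n at position n, for all n,
# so it appears nowhere in the listing.
assert all(diagonal(n) != listing(n)(n) for n in range(1000))
print("the diagonal escapes every listed sequence")
```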
Even in the case of your example in the OP (if I have understood correctly, that is): first it is assumed that the Oracle can make a bijection from the past to the future and hence can make correct predictions about everything. Then, because of the negative self-reference introduced by the Thwarter app, the Oracle can no longer make a bijection, as the new situation with the Thwarter app is not surjective anymore.
And as @noAxioms immediately pointed out, you are basically using Turing's proof in your model. Which itself uses also diagonalization.
Hopefully this was useful for you.
Just a nitpick. Not every f(x) function is bijective. I don't think there is a general form of a bijective function. 1+1=2 is not bijective either because it is not a function.
Not too nitpicky, I think it's an important distinction to make. If you don't make this distinction, then... there's no point to the word "bijection", as "function" already exists. This distinction is what makes bijection meaningful over just "function".
I stipulate that:
1. This is a very hip and TED-talky idea going around; and
2. I personally disagree strenuously; but I concede that I can't prove it.
But given that, my original point stands. That programs can't have free will. And I hope you agree that humans being deterministic would not contradict that point.
Quoting Tarskian
We all have moral choice.
I try to keep an open mind and take the good with the bad of all the, let's say, slightly eccentric posters. I hope that is not too uncharitable to @Tarskian. Am I being fair?
Quoting fishfry
Let's not be so open-minded our brains fall out.
Yanofsky phrases a generalized Cantor theorem in terms of the sets Y, T and the functions α(x), f(x,y), g(x). I still do not fully grasp the connection between the symbols that he uses. I suspect that it is indeed equivalent to Cantor's theorem, but I don't see how exactly.
I think that having "free will" versus having a "soul" are not the same thing.
As I see it, the soul is an object in religion while free will is an object in mathematics.
I see free will and incompleteness as equivalent. I don't see why they wouldn't be.
I guess so.
As you have probably noticed, @Lionino does not talk about metaphysics or about mathematics but about me. That is apparently his obsession. He incessantly talks about me, very much like I incessantly talk about Gödel. I don't know if I should feel flattered.
But then again, the metaphysical implications of the foundational crisis in mathematics, are truly fascinating.
Mathematics proper has exactly zero metaphysical implications:
How can something that "isn't about anything at all" suddenly become about the fundamental nature of everything?
Thanks to both of you. And no, it isn't nitpicking. Of course we can talk about surjective or injective functions. What is very irritating for me is that there aren't these general definitions. As a layman I would think that an equation, a mathematical statement that shows two or more amounts are equal, would also be (or could be modeled as) a bijection. But, uh, apparently not. :(
And we haven't even discussed isomorphisms and their relation to bijections. Perhaps it's better simply to talk about bijections, injections and surjections. At least that ought to be simple, I hope. Far easier than talking about Turing machines or (yikes) Gödel numbers!
And if that was the only thing needing correction, then I'm not totally wrong in the discussion. :)
That sounds rather the opposite of free will.
But again, as I mentioned in my previous post, the oracle could give it a false input. It says you will produce two. The thwarter thwarts and says five, which is what the oracle knew and whispered to the judges.
IOW you have conflated extreme restrictions on the oracle's options -- it must be honest with the thwarter and undermine its own predictions -- with an inability to predict the future. Ironically, this seems to show that we have free will by radically restricting the free will of this tool (the oracle) and its tool-using owners. IOW the owners of the oracle could just tell it to lie to the thwarter.
I've noticed that some posters have personal obsessions with others. For me, when I find it unpleasant to interact with someone, I just don't interact. Don't disagree with them, don't bait them, don't troll them, don't interact with them directly or indirectly.
Quoting Tarskian
Well, you are saying that historically contingent opinions about math have some bearing on the ultimate nature of math. But I imagine that if there is such a thing as an ultimate nature of math, it incorporates and transcends all such opinions. Math is more than the sum of all philosophies about it.
Quoting Tarskian
Well now, that is a great question. Wigner asked about the Unreasonable Effectiveness of Mathematics in the Physical Sciences. How can math be so fictional, so idealized, so much about nothing at all, and yet so relevant and useful? I think one answer is that math is useful to humans in the same way that echoes are useful to bats. Our brains are wired to make sense of the world through math. Our approach isn't better or worse than any other creature's. We flatter ourselves to imagine that the world is "like" math, when in fact we're just wired that way.
There is an infinite number of ways to write the same thwarter program. In order to know that a given piece of source code does indeed represent a thwarter, the oracle needs to be able to prove that this alternative is equivalent to the thwarter.
That is the same problem as proving that two lambda expressions are equivalent.
Hence, the oracle won't know that it is looking at the source code of an alternative version of the thwarter.
Therefore, the only solution would be for the oracle to lie all the time. Consequently, the oracle won't be able to correctly predict the output of a program that does the opposite of the thwarter and that just prints the oracle's prediction as output.
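The equivalence problem can be made concrete with a toy sketch (illustrative code, not from the thread): two textually different programs with identical behaviour. Recognizing "a thwarter" from source text alone requires deciding such behavioural equivalence in general, and Rice's theorem rules that out for any non-trivial behavioural property.

```python
# Two syntactically different programs, identical behaviour.
def thwarter_v1(prediction):
    return prediction + 1

def thwarter_v2(prediction):
    # the same function, written differently
    return sum([prediction, 2]) - 1

# Spot-checking agreement is easy; proving it for arbitrary pairs of
# programs is undecidable, which is exactly the oracle's problem here.
assert all(thwarter_v1(p) == thwarter_v2(p) for p in range(100))
print("behaviourally identical, textually different")
```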
They're closely related. A self-awareness and the ability to have preferences and desires, and to be able to act to bring them about.
Quoting Tarskian
I'm using soul in a secular sense. And free will does not appear in any math text that I've ever seen. Free will is not an object of study of math at all.
Quoting Tarskian
I believe Penrose makes that argument.
https://en.wikipedia.org/wiki/The_Emperor%27s_New_Mind
"Penrose argues that human consciousness is non-algorithmic, and thus is not capable of being modeled by a conventional Turing machine, which includes a digital computer. Penrose hypothesizes that quantum mechanics plays an essential role in the understanding of human consciousness. The collapse of the quantum wavefunction is seen as playing an important role in brain function."
I believe that the soul is non-algorithmic.
Concerning "human consciousness", I don't know how much of it is just mechanical. The term is too vague for that purpose. A good part of the brain can only be deemed a machine, i.e. a biotechnological device, albeit a complex one whose technology we do not understand, if only because we did not design it ourselves.
But then again, even if the brain were entirely mechanical, its theory is undoubtedly incomplete, which ensures that most of its truth is unpredictable.
Even things without a soul can have an incomplete theory and therefore be fundamentally unpredictable.
Ok! You're an anti-computationalist like me. I don't believe we're ever going to "upload our minds," I don't think we live in a computer simulation, I don't think our minds or our universe are Turing machines.
Quoting Tarskian
What we know of the human brain suggests it does not work like a digital computer. Some people say that neural nets work because they mimic the neural structure of the brain. I don't believe that, but I have to admit that some of their recent achievements are impressive. Who knows.
Quoting Tarskian
Something deterministic can be unpredictable, so that doesn't solve the problem.
Quoting Tarskian
You're confusing determinism with predictability, but I thought we'd already covered this.
According to the page on the subject, determinism and predeterminism are "closely related":
If you believe that everything has a reason, it does not mean that you also know that reason. Predictability indeed requires both.
After reading all that I was a little unclear on pre- versus regular old determinism. The text passages are the kind of philosophical writing that always makes my eyes glaze over.
Quoting Tarskian
Hmmm, determinism doesn't mean that everything has a "reason." If you have some Rube Goldberg machine and you start it, it's perfectly deterministic. But it doesn't have any "reason." It's just one thing causing the next thing.
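The distinction can be made concrete with a sketch (my own illustration): the doubling map x -> 2x mod 1 is a perfectly deterministic rule, computed below with exact rational arithmetic so no rounding occurs, yet a one-part-in-a-billion error in the initial state doubles every step and soon dominates the result.

```python
from fractions import Fraction

def doubling(x, steps):
    """Iterate the deterministic rule x -> 2x mod 1, exactly."""
    for _ in range(steps):
        x = (2 * x) % 1
    return x

a = doubling(Fraction(1, 3), 30)                       # exactly 1/3 again
b = doubling(Fraction(1, 3) + Fraction(1, 10**9), 30)  # tiny initial error

print(float(abs(a - b)))   # about 0.0737: the 1e-9 error has exploded
```

Determinism guarantees that each state follows from the last; predictability additionally demands impossibly precise knowledge of the initial state, which is why the two come apart.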