What is computation? Does computation = causation?
What is computation?
This is a surprisingly hard question to get a straight answer on. The Stanford Encyclopedia of Philosophy article on physical computation offers nine competing theories on how computation is instantiated in the world.
Things do not get better if you want to look at computation abstractly, from the perspective of pure mathematics. SEP has no article on computation sans physical instantiation. Nor is it easy to find articles on "what computation is" in any sort of philosophical sense. This excellent free textbook titled "Mathematics and Computation" is no exception. The introduction briefly addresses the issue, and then moves right along to Turing Machines, computational complexity, definability, incomputability, etc.
I'm wondering if anyone knows any good resources on this topic?
I'm going to propose a few radical positions. I will back them up later in this thread, but this post would be too long if I went into all of them.
- Computation is what defines mathematical/abstract objects rather than it being some activity that you do with them. In some ways, this is not far off the position of formalism in mathematics ("a number is what it does"). However, a major implication of this position that has generally been missed is that you cannot ignore the stepwise nature of computation. Equivalent functions are not equivalent until computation makes them so. Or, for a less strong view, "computation is as essential as abstract objects and its stepwise nature is essential even when considering equivalences in abstraction."
- In many respects, it is impossible to distinguish communication from computation in contemporary theories. I think they are different and that this shows a weakness in the theories.
- For all intents and purposes, "computation" in physical systems is identical with what we generally mean by "causation." Replacing "conserved quantities" with "conserved information" in causal theories has been successful, but they would be more intuitive if you took this extra step.
- Conservation of information was just sloppily imported from other conservation laws in physics. This is a larger problem with digital ontology in general, which attempts to just replace "fundamental particles" or fields with bits. This doesn't work, since information, a measure of contextual difference, doesn't work with reductionism. If information is fundamental, reduction doesn't work in many respects. (I should note that some measures of information DO appear to be conserved in some interpretations of QM)
Computation is what defines mathematical/abstract objects
If we throw a Hamiltonian Path problem with a sufficient number of nodes at a supercomputer we will be waiting until the heat death of the universe for it to compute our answer. However, it is possible to write an algorithm that specifies the answer we wish to calculate using a tiny fraction of the resources it takes to calculate the answer.
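To make the asymmetry concrete, here is a minimal brute-force sketch (my illustration, not anything from the thread): the function below *specifies* the answer in a few lines, yet evaluating it examines n! candidate orderings, which is why large instances would outlive the universe.

```python
from itertools import permutations

def has_hamiltonian_path(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    nodes = list(adj)
    for order in permutations(nodes):          # n! candidate orderings
        if all(b in adj[a] for a, b in zip(order, order[1:])):
            return True
    return False

# A 4-node path graph 0-1-2-3: a Hamiltonian path clearly exists.
path_graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(has_hamiltonian_path(path_graph))   # True
```

At four nodes this is instant; at sixty it is already beyond any supercomputer, even though the *description* of the answer never grows.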
The key point is that in the real world it takes time and energy to transform one representation of an abstract object (e.g., a number) into another. If we agree that numbers and other abstract objects only exist inasmuch as they are instantiated in the universe (be that in the external world or in our minds) then it seems like we should take computation as essential to these objects' nature.
You cannot feed 10 + 10 into a computer and get it to return 20 without having it perform computations. Presumably, the same is true for our minds. Take P1 - "The total spent on popcorn and vitamins in the United States from June 1989 to October 1990." P1 defines a real number, but it is in important respects not equivalent to that number. The key difference, aside from the fact that capturing data on all those sales would require a lot of energy expenditure and information storage, is that P1 cannot be used in many computations. For example, P1 > P2, where P2 is some other equally frivolous description of a real number, is not computable.
However, for the most part, it seems that the ghosts of Plato and Pythagoras still have a lot of influence on how we conceptualize abstract objects. Abstract objects seem to exist in some eternal realm, at least as far as computation is concerned. If two functions are equal to the same number then they are "describing the same thing", they are just "different names." That is, 8 + 8 and 10 + 6 share an identity. This is seemingly unproblematic with simple algorithms, but is a serious issue when the resources of the visible universe wouldn't be enough to turn X into Y.
Even if relationships between objects or transformations of them must occur stepwise, both conceptually in our minds and observably in nature, it is assumed that this "step-wise-nature" is an artifact of our limitations, that these objects' relations are in fact eternal and direct. This is one place where it seems even very committed nominalists appear to let the eternal slip into their metaphysics.
Existence proofs are a challenge to this view, but I don't think they are a big one. Sure, there are ways to show that an object exists without computing it in total, but proofs themselves are computation, logical operations. They can be seen as simply ambiguous descriptors, the same as P1 up above or "the first number that violates the Goldbach Conjecture," if such a number exists.
I find this view promising because it resolves the scandal of deduction/paradox of analysis (the problem that logical truths and computation should give us no new information). It also squares with P ≠ NP. It would also get past some of the main barriers facing "physics as computation" models. I like those models because there, computation becomes pretty much synonymous with causation, and the former is understood much better than the latter.
There is obviously more to be said here. I still haven't answered "what is computation?" I think a metaphysics where computation emerges from fundamental ontic difference and logic could ground the system, but obviously huge holes will remain because there are huge holes in mathematical foundations in general.
Comments (73)
If I have perfect information about a billiard table and can predict an upcoming shot, it does not follow that state S1 before the shot and state S2 after the shot are the same thing or indistinguishable for me. If earlier states of the table, or the universe, are such that perfect information about S1 would allow you to perfectly predict S2, it still seems that recording all the information in the universe at one instant (S1) is not the same thing as recording all the states of the universe (S1 to S max).
A lot of arguments against the passage of time and existence of change rely on/are motivated by the unwillingness to see computation as anything but something we experience due to being limited beings.
I have a thought experiment that makes this clearer I will try to dig up.
Additionally, even if you don't buy that argument, while the universe is in a low entropy state, it seems like it should certainly have a lower Kolmogorov Complexity because, given fewer possible microstates, the description does not need to be as long to describe which microstate the universe is actually in.
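A rough way to see the Kolmogorov point empirically (my sketch; compressed size under zlib is only a crude, computable stand-in for true Kolmogorov complexity): a "low-entropy" string of one repeated symbol needs a far shorter description than a pseudo-random one of the same length.

```python
import random
import zlib

low_entropy = b"0" * 10_000                 # few microstates: short description suffices
random.seed(0)
high_entropy = bytes(random.randrange(256) for _ in range(10_000))

print(len(zlib.compress(low_entropy)))      # tens of bytes
print(len(zlib.compress(high_entropy)))     # close to the full 10,000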
Here's the first in a series of lectures by one of the founders of 'quantum computational theory' David Deutsch which explains in summary the fundamental nature of computation as a quantum process underlying all classical processes like e.g. the 'Universal Turing Machine'.
But perhaps it is the other possibilities that matter in the same way that possible states drive thermodynamics. I have to think about that more.
I believe I have heard Deutsch use the common explanation of particles "storing information." I know I have heard this from Vlatko Vedral, Max Tegmark, Ben Schumacher and Paul Davies. This appears to be somewhat mainstream and I think it fundamentally misunderstands the logic and mathematics of information.
A particle can only carry information inasmuch as it varies from other particles and measurements of the "void." If this was not the case, they wouldn't hold information, e.g., an electron can't store/instantiate information in universe where every measurement shows charge identical to an electron, at least not in terms of EM charge.
You can transfer information via the quantum afterglow of photons without transferring energy. The void appears to be seething with observables. The general push in ontic quantum computation models unfortunately seems to have fallen back into problems with prior models by just replacing the old fundamentals with "information."
The much less common assertion that virtual particles and QCD condensates don't have information is even more obviously off. If they didn't produce observable differences, information, then how could we know about them and how could their existence spawn books and papers on them. I only see this position in older papers though.
Something like "Computation is to information as causation is to matter" seems more accurate, but even then I am not sure.
Quoting Count Timothy von Icarus
Communication would seem to require encoding, transmission, and decoding. A causal process sandwiched between two computational ones?
Quoting Count Timothy von Icarus
You might enjoy ‘Consciousness and the Computational Mind’ by Ray Jackendoff, a critique of computational approaches in psychology. Other critiques of computationalism in cognitive science can be found in the work of Francisco Varela, Evan Thompson and Shaun Gallagher.
True, but is there such a thing as computation that goes wrong in the abstract sense? Can the square root of 75 ever not be 8.66 plus a string of trailing decimals? The very fact that we can tell definitively when computation has gone wrong is telling us something. If we think causation follows a certain logic, e.g., "causes precede their effects," we are putting logic posterior to cause. But just because we can have flawed reasoning and be fooled by invalid arguments, it does not follow that logical entailment "can go wrong."
When computation goes wrong in the "real world," it's generally the case that we want a physical system to act in a certain way such that it computes X but we have actually not set it up such that the system actually does this.
I was coming from the surprisingly mainstream understanding in physics that all physical systems compute. It is actually incredibly difficult to define "computer" in such a way that just our digital and mechanical computers, or things like brains, are computers, but the Earth's atmosphere or a quasar is not, without appealing to subjective semantic meaning or arbitrary criteria not grounded in the physics of those systems. The same problem shows up even more clearly with "information." Example: a dry riverbed is the soil encoding information about the passage of water in the past.
The SEP article referenced in the OP covers this problem well; to date no definition of computation based in physics has successfully avoided the possibility of pancomputationalism. After all, pipes filled with steam and precise pressure valves can be set up to do anything a microprocessor can. There are innumerable ways to instantiate our digital computers; some are just not efficient.
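The steam-pipes point is really the universality of a single gate: any substrate that realizes one NAND, however inefficiently, realizes all of Boolean logic. A sketch of the standard textbook construction (my addition, not from the thread):

```python
def nand(a, b):
    """The one gate the substrate must supply (valves, transistors, ...)."""
    return 1 - (a & b)

# Everything else is just NANDs wired together:
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

print([xor_(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 0]
```

Nothing in the construction cares whether `nand` is implemented in silicon, steam pressure, or anything else, which is exactly what makes a physics-based definition of "computer" so slippery.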
In this sense, all computation does require energy. Energy is still being exchanged between the balls on a billiard table just like a mechanical computer will keep having its gears turn and produce an output, even without more energy entering the system.
I do have a thought experiment that I think helps explain why digital computers or brains seem so different from say, rocks, but I will put that in another thread because that conversation is more: "what is special about the things we naively want to call computers."
---
As to the other points, look at it this way. If you accept that there are laws of physics that all physical entities obey, without variance, then any given set of physical interactions is entailed by those laws. That is, if I give you a set of initial conditions, you can evolve the system forward with perfect accuracy because later states of the system are entailed by previous ones.
All a digital computer does is follow these physical laws. We set it up in such a way that given X inputs it produces Y outputs. Hardware failure isn't a problem for this view, in that if hardware fails, that was entailed by prior physical states of the system.
If the state of a computer C2 follows from a prior state C1, what do we call the process by which C1 becomes C2? Computation. Abstractly, this is also what we call the process of turning something like 10 ÷ 2 into 5.
What do we call the phenomena where by a physical system in state S1 becomes S2 due to physical interactions defined by the laws of physics and their entailments? Causation.
The mistake I mean to point out is that we generally take 10 ÷ 2 to be the same thing as 5. Even adamant mathematical Platonists seem to be nominalists about computation. An algorithm that specifies a given object, say a number, "is just a name for that number." My point is that this obviously is not the case in reality. Put very simply, dividing a pile of rocks into two piles of five requires something. To be sure, our grouping of physical items into discrete systems is arbitrary, but this doesn't change the fact that even if you reduce computation down to its barest bones, pure binary, 1 or 0, i.e., the minimal discernible difference, even simple arithmetic MUST proceed stepwise.
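For illustration (my sketch, not the poster's), division by repeated subtraction makes the stepwise nature explicit: 10 ÷ 2 passes through a chain of distinct intermediate states before 5 appears, and there is no way to skip the chain.

```python
def divide(dividend, divisor):
    """Division by repeated subtraction, recording every intermediate state."""
    quotient, states = 0, [dividend]
    while dividend >= divisor:
        dividend -= divisor          # one discrete transformation...
        quotient += 1
        states.append(dividend)      # ...producing a distinct new state
    return quotient, states

q, states = divide(10, 2)
print(q, states)   # 5 [10, 8, 6, 4, 2, 0]
```

Cleverer algorithms shorten the chain, but none collapses it to zero steps, which is the point at issue.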
Sure, but doesn't computation require all of that? Computer memory is just a way for a past state of a system to communicate with a future state. When you use a pencil for arithmetic, you are writing down transformations to refer to later.
We might be the recipient of a message transmitted onto a screen, but in an important sense our eyes send signals, communications, to the visual cortex for computational processing. A group of neurons firing in a given pattern can act as part of a signal/message to another part of the brain, but also be involved in computation themselves.
This is what I call the semiotic circle problem. In describing something as simple as seeing a short text message, it seems like components of a system, in this case a human brain, must act as interpretant, sign, and object to other components, depending on what level of analysis one is using. What's more, obviously at the level of a handful of neurons, the ability to trace the message breaks down, as no one neuron or logic gate contains a full description of any element of a message.
Even in systems modeled as Markov chains, prior system states can be seen as sending messages to future ones. The two concepts are discernible, but often not very. I will look for the paper I saw that spells this out formally.
As a non-mathematician, I am curious about the following:
Question one: if I put one pebble on a table and alongside it put another pebble, has a computation been carried out? Because whatever has happened has proceeded in a series of steps; within the system there has been a change in information; something has caused the pebbles to move; time and energy have been needed; two has been instantiated in the physical world as two pebbles, but two also exists as the abstract object two; and two pebbles existing as a single whole is different from two pebbles existing as two separate parts.
Question two: if, in the absence of any observer, a pebble moves alongside another pebble under natural forces, has a computation happened?
A method, or procedure, M, for achieving some desired result is called ‘effective’ (or ‘systematic’ or ‘mechanical’) just in case:
1) M is set out in terms of a finite number of exact instructions (each instruction being expressed by means of a finite number of symbols);
2) M will, if carried out without error, produce the desired result in a finite number of steps;
3) M can (in practice or in principle) be carried out by a human being unaided by any machinery except paper and pencil;
4) M demands no insight, intuition, or ingenuity, on the part of the human being carrying out the method.
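Euclid's greatest-common-divisor procedure is a stock example (my addition, not the quoted author's) that meets all four conditions: finitely many exact instructions, guaranteed termination, executable by hand with pencil and paper, and no insight required at any step.

```python
def gcd(a, b):
    """Euclid's algorithm: one mechanical rule, applied until b is exhausted."""
    while b != 0:
        a, b = b, a % b   # exact, finite, insight-free instruction
    return a

print(gcd(1071, 462))   # 21
```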
This original conception of computation in terms of a mechanical method is therefore strongly, if not completely, normative: it is relative to a perspective, and anti-real in being defined entirely in relation to human purposes and human psychology, whilst forbidding any empirical contribution from mother nature herself to the computational process. Or as Wittgenstein summed it up: "Turing Machines are what humans do". Such single-player games are incompatible with a realist's conception of causation as a zero-player game that is fully determined by the initial state of the game without any subsequent interventions by man, nature or god.
To bring causation and computation into line requires their definitions to be weakened and generalised so as to refer to strategies of two-player games involving interaction and dialogue between man and nature. Computer science and mathematics can then be understood as attempting to answer questions of the form "If nature were to act in such-and-such a fashion to my actions, then what are my available winning strategies in relation to my goal?". While physics and its concept of causality could be understood as asking the complementary dual question "If one were to act in such-and-such a fashion, then how is nature expected to respond?"
I would say in the other way: if you think that computation and causation are equivalent, then you think that mathematics and physics are equivalent. Not just that physics is accurately modeled using mathematics.
First of all, there do exist mathematical objects that are true, but not computable.
The easiest way to conceptualize how rocks act as computers is to think of them modeling something simple, like a single logic gate.
In terms of grouping rocks together, it's probably easier to conceptualize how the cognition of "there are two rocks over there," and "there are 12 rocks over there," requires some sort of computational process to produce the thought "there are 14 rocks in total."
Wouldn't physics generally be answering the question of "if nature acts in such-and-such a fashion how will nature respond?" In general, scientific models are supposed to be about "the way the world is," not games. I don't think such interpretations were ever particularly popular with practicing scientists, hence why the Copenhagen interpretation of QM, which is very close to logical positivism, had to be enforced from above by strict censorship and pressure campaigns.
I wouldn't agree that mathematics necessarily has anything to do with goals.
Lambda calculus doesn't come with the thought-experiment baggage of Turing Machines but is able to do all the same things vis-à-vis computation. I think it would be a mistake to confuse the framing Turing gives to the machine with something essential to it. In any case, classical computing wouldn't be equivalent to causation in the physical world. Something like ZX calculus would be the model.
Certainly that's a hypothesis that's been raised from a number of angles (Tegmark, Wheeler, etc.). I don't think that's a necessary implication though. Not all forms of mathematics appear to be instantiated in the physical world. Mathematics is the study of relationships. The physical world observably instantiates some such relationships.
Indeed, most forms of the hypothesis that physics is somehow equivalent with mathematics are explicitly finitist. Infinites and infinitesimals are said not to exist, but clearly they are part of mathematics, so the two aren't fully equivalent.
Saying computation is causation is simply saying that one thing entailing another in the physical world follows the same logic as computation in mathematics. One doesn't reduce to the other, they are just different ways of looking at the same thing, i.e., necessary stepwise relationship where states proceed from one another in an ordered fashion.
In an algorithm you have initial conditions, your inputs. The algorithm then progresses in a logically prescribed manner through various states until it reaches the output of the process. In physical systems, you have initial conditions which progress in a logically prescribed manner until the process ends.
Of course, "systems" and "processes" are arbitrarily defined in physics. Any one process can be an input for another process, one system is merely a part of another system, etc. However, this mirrors mathematics, where inputs are also arbitrarily selected.
These brackets might be artificial, but my argument is that the stepwise progression of computation is not. Equivalences of two different functions are not a shared identity. Rather, through a process one can [I]become[/I] identical to the other. Such becoming, the continual passing away of one state into another, is the hallmark of our world, and I think it's been a serious mistake to dismiss it as illusory; this mistake owes to a seriously calcified view of mathematical objects tracing back to Pythagoras.
I don't know if this is simply a lack of knowledge about the way the world works, or a more fundamental problem where an observer within a system cannot clearly delineate its levels of abstraction on principle. I will have to think about that one.
If I see two rocks on the left, I know that two objects has the name "two".
If I see twelve rocks on the right, I know that twelve objects has the name "twelve".
If I see fourteen rocks in total, I know that fourteen objects has the name "fourteen".
IE, I know there are fourteen rocks in total not from any computational process but from how objects are named.
This.
There certainly are many scientists who offhandedly assume in an old-fashioned way that causality must be an "objective" notion. But as Bertrand Russell pointed out, the notion of causality is objectively redundant. E.g., what does the notion of causality add to a description of the Earth orbiting the Sun? The notion of causality adds nothing of descriptive value to any proposition that states an actual state of affairs, while the employed purpose of causality is to model possible outcomes in relation to possible actions. Do you really wish to promote the possibilities that exist in relation to a model to the status of objective reality, given the fact that possibilities aren't scientifically testable or observable?
Quoting Count Timothy von Icarus
The Copenhagen interpretation itself isn't generally regarded as constituting a game-semantic interpretation of QM, but it should be noted that the linear logic behind the ZX calculus has very strong game semantics (e.g. see Blass and Abramsky's work on game semantics and linear logic). The conceptual connection between logic and games goes all the way back to Aristotle. And of course, logic is used both to state the causal assumptions of a model and to define computation. So there are good reasons for interpreting both causation and computation at least semi-normatively in terms of game semantics, an analysis which, if correct, precludes both from constituting or describing observer-independent properties of the universe.
I am now seeing that was not a good example. The quantities you perceive are irrelevant. I referenced cognition because the most popular models of how the brain works are computational. I only meant to point out that in this view, seeing [I]anything[/I] is the result of computation. The computational component of seeing things in the world is most easily traced back to the system that generates the observers' perspective being computational.
Obviously not everyone thinks computational neuroscience is a good way to model the brain, let alone consciousness, but I figured its well known enough to be a good example.
If you want to think of rocks computing, you have to think more abstractly. Computers are such that a given state C1 is going to produce an output C2. Rocks change states all the time, for instance, they get hotter and colder throughout the day. You can take the changing states of the rock to be functioning like logic gates.
In theory, you could compute anything a digital computer can by setting up enough rocks in relation to one another such that heat transfer between them will change their states in such a way that they mimic the behavior of logic gates in microprocessors vis-à-vis their state changes. Rather than electrical current, you'd be using heat. Of course, to make this system compute what you want it to compute, you'll have to be selective in the composition of your rocks as well.
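A toy numerical version of the heat-transfer idea (my assumption about how such a rock-gate might be modeled, not the poster's design): read a rock's state as 1 when its temperature crosses a threshold, and the choice of threshold turns two heat inputs into an AND gate.

```python
HEAT_PER_INPUT = 10.0   # degrees a 'hot' (logical 1) neighbour contributes
THRESHOLD = 15.0        # crossed only when both inputs are hot

def thermal_and(a, b):
    """Two rocks dump heat into a third; its final state implements AND."""
    temperature = (a + b) * HEAT_PER_INPUT
    return 1 if temperature > THRESHOLD else 0

print([thermal_and(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 0, 0, 1]
```

Lowering the threshold below 10 would give OR instead; the physics is the same, only the reading convention changes.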
It's probably easier to think of how you can spell out any phrase with small rocks. Just line them up in the shape of the letters. Your rocks are now storing information.
But of course, they were already storing information. The locations of the rocks when you found them tells you something about prior events. For another example, foot prints store information about the path you took to get to these rocks.
Information is isomorphic. You could spell out a message with the rocks, then take a Polaroid of said message. Then you could scan the Polaroid and send it to a friend as an email. Your message, which is represented by some of the information that defined each system, remains in each transition. Information is substrate independent. Computation, the manipulation of information, is the same way.
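The rocks-to-Polaroid-to-email chain can be mimicked in miniature (a hedged sketch; the encodings stand in for the substrates): the same message passes through three representations that look nothing alike, and survives each transition.

```python
import base64

message = "HELLO"

as_bytes = message.encode("utf-8")       # rocks arranged on the ground
as_photo = base64.b64encode(as_bytes)    # the Polaroid
as_email = as_bytes.hex()                # the scanned, emailed copy

print(as_bytes, as_photo, as_email)      # three very different-looking forms

# The message is recoverable from every substrate:
assert base64.b64decode(as_photo).decode("utf-8") == message
assert bytes.fromhex(as_email).decode("utf-8") == message
```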
This brings up the question of why computers and brains seem so different in their ability to compute so many different things so readily in comparison to rocks or systems of pipes with pressure valves. I would like to bracket that conversation though.
I will start a new thread on that because I think discernablity between different inputs in the key concept there, but it isn't relevant to "what computation is."
If pancomputationalism seems nonsensical, the best way to see where the idea is coming from is to try to define what a computer is in physical terms and how it differs from other systems.
I'm not sure what this is supposed to mean; possibilities already seem fundamental to understanding physics. Possibilities are essential to understanding entropy, the heat-carrying capacities of metals, etc. The number of potential states a system can be in given certain macro constraints is at the core of thermodynamics and statistical mechanics. Quantitative theories of information, on which a large part of our understanding of genetics rests, are also based on possibilities.
For any one specific message the distribution of signals one receives is always just the very signals that one actually did receive. Every observation for every variable occurs with probability = 1. However, a message can only be informative in how it differs from the possibilities of what [I]could have been[/I] received.
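Shannon's surprisal makes this quantitative; a minimal sketch (my example, not from the thread): the information in an outcome is log2(1/p), so an outcome that occurs with probability 1 carries exactly zero bits.

```python
import math

def surprisal_bits(p):
    """Information carried by an outcome of probability p, in bits."""
    return math.log2(1 / p)

print(surprisal_bits(1.0))      # 0.0 -- a certain signal tells you nothing
print(surprisal_bits(0.5))      # 1.0 -- a fair coin flip: one bit
print(surprisal_bits(1 / 256))  # 8.0 -- one byte's worth of surprise
```

This is the formal version of the point above: what you actually received is informative only against the background of what [I]could have been[/I] received.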
It's "objectively redundant," because he is begging the question, assuming what he sets out to prove in his premise. He assumes a full description of a system doesn't involve explaining causation. The fact that "if you've said everything that can be said in terms of describing a system from time T1 to time T2, you've said everything there is to say," is trivial. The argument against cause here comes fully from the implicit premise that cause is properly absent from a description of the physical world.
Certainly an explanation of "why does the Earth rotate around the Sun," adds something here, no? Russell denied the existence of time's passage and in some more flippant remarks on Zeno's arrow, appears to deny that change and motion exist. I don't want to get into unpacking the bad assumptions that get him there, but obviously in such a view cause can't amount to much because what is cause without change?
I don't find it to be an attractive position though.
Suppose we have a document 150 pages long. Each page contains either just blank spaces or the same symbol repeated over and over. We have pages for every letter of the alphabet, uppercase and lowercase, plus punctuation marks and mathematical equations.
We also have an algorithm that shuffles these symbols together, working through all possible combinations of the pages. Given 2,000 characters per page drawn from an alphabet of, say, 100 symbols, and no limits on our algorithm's output, this will produce 100^2,000 distinct pages. Each of the pages is then assembled into all possible 150-page books (simply because books are easier to visualize) made by this process.
This output will include the pages of every novel ever written by a human being, plus many yet to be written. Aside from that, it will produce many near exact replicas of existing works, for example, War and Peace with an alternate ending where Napoleon wins the war. It will include papers that would revolutionize the sciences, a paper explaining a cure for most cancers, correct predictions for the next 5 US Presidential races, etc. The books will also contain an accurate prediction of your future somewhere in their contents. George R. R. Martin's The Winds of Winter will even be somewhere in there (provided it is ever finished).
It will also produce a ton of nonsense. The number of 150 page books produced will outnumber estimates of particles in the visible universe by many orders of magnitude.
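For concreteness, the sizes involved are easy to check. A sketch, where the 100-symbol alphabet, 2,000-characters-per-page, and 150-page figures are assumed parameters of the thought experiment rather than anything canonical:

```python
import math

# Assumed parameters of the symbol-shuffler thought experiment.
ALPHABET = 100          # distinct symbols (letters, punctuation, math)
CHARS_PER_PAGE = 2_000
PAGES_PER_BOOK = 150

distinct_pages = ALPHABET ** CHARS_PER_PAGE                     # 100^2,000
distinct_books = ALPHABET ** (CHARS_PER_PAGE * PAGES_PER_BOOK)  # 100^300,000

# Express the counts as powers of ten; the integers are exact but far
# too large to print digit by digit.
print(round(math.log10(distinct_pages)))   # ~10^4,000 distinct pages
print(round(math.log10(distinct_books)))   # ~10^600,000 distinct books
# For comparison: particles in the visible universe, roughly 10^80.
```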
If algorithms are just names for specifying abstract objects, then you can create all this with basic programming skills on a desktop computer. The algorithm would just be a highly compressed version of all the items listed above.
But since the output includes mathematical notation, the output also includes all sorts of algorithms. This would include algorithms and proofs specifying every abstract object ever defined by man, plus myriad others. It would also include an algorithm for an even larger random symbol shuffling algorithm, which in turn, if computed, would produce an even larger symbol shuffling algorithm, and so on, like reverse Russian nesting dolls.
If algorithms are just names, a relatively bare-bones symbol shuffling algorithm is almost godlike in its ability to name almost everything.
Two points this brings out to me:
1. Negativity is very important in information. We don't just care about what something is; we care about what it is not. The Kolmogorov complexity of an object, i.e., the length of the shortest string that can encode said object, is crucially "the shortest string that can define an object and just that object." Otherwise, a random bit generator would be the shortest description of all classically encodable objects.
2. Second, we have to recall that information, and thus computation, is necessarily relational. A paper that tells you how to cure cancer, generated by a random symbol shuffler, is useless. It would indeed be remarkable to find a coherent page from such a process, because there are many more ways to generate incoherent pages than coherent ones (maybe; more on that later). But likewise, there are many more ways to write about incorrect ways to cure cancer than there are actually effective methods, and so such a page is less likely to be useful than one published by a renowned quack.
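Point 1 can be made concrete with a toy sketch (not a formal treatment of Kolmogorov complexity): a few lines of code enumerate every binary string, which is precisely why they define none in particular.

```python
from itertools import count, product

def all_binary_strings():
    """Enumerate every finite binary string, shortest first."""
    for n in count(1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

# This tiny program "names" every binary string ever, so it singles
# out none of them. A program that outputs one specific incompressible
# string, and only that string, must in general be roughly as long as
# the string itself.
gen = all_binary_strings()
first_ten = [next(gen) for _ in range(10)]
print(first_ten)  # ['0', '1', '00', '01', '10', '11', '000', '001', '010', '011']
```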
Leaving aside the physical components of the hypothetical computer and output system here, all the outputs of such an algorithm can tell you is "what is the randomization process being used to mix the symbols?" A great example of substrate independence. This is why I think information has to be defined in terms of underlying probabilities.
The information content of the output can't be measured based on the "meanings" of the symbols. To see why, consider that in this seemingly infinite library would be books explaining step by step ways to decode seemingly random strings of text and symbols (the majority of the output) into coherent messages. Following these methods, incoherent pages might become coherent, while coherent ones become nonsense. Exact replicas of messages on some other page might be decoded from a different page. A string might have very many coherent ways it can be decoded. The only way to make sense of this is through the underlying probability distribution.
---
Another thing I always think about when I ponder this example is: "how many characters would need to be on each page in such an algorithm before every discernible human thought has been encoded in the output?" Obviously human language can be recursive, which allows for a larger number of discernible messages, but at a certain point levels of recursion would become indiscernible.
Obviously it's not a very small number, but I'd imagine it's also a far cry from 2,000.
Numbers are computed in language
If asked the question "what is one plus one", as the answer is not contained in either number, I need to carry out a computation in my mind.
If I put one pebble on a table, and then put another pebble next to it, I can see two pebbles.
I don't need any mental computation to know that I see two pebbles, in the same way that I don't need to compute that I see the colour green. Seeing the colour green is the direct effect of the cause of a wavelength of 550nm entering the eye.
Regarding causation, if Bertrand Russell was correct that the notion of causality is objectively redundant, there would be no work for the National Transportation Safety Board which investigates every civil aviation accident in the United States, for example.
Therefore, I only need to carry out a computation if presented with a problem expressed in language, ie, in the computation of numbers where language and naming cannot be ignored. In language, one object is named "one". When another object is added, the set of objects is named "two". When another object is added, the set of objects is named "three", etc. Therefore, when I see one pebble on the table, I can say "I see one pebble". When I see another pebble added I can say "I see two pebbles". I can then answer the question "what is one plus one" as "two".
Therefore, the computation of numbers within the mind can only occur within language.
I think the miscommunication here is that you are thinking of conscious computation, thinking about adding figures together.
I was referring to how neurons carry out computations by sending electrical and chemical signals that result in state changes.
Seeing green for example, doesn't occur just because a light wave hits the eye. People with damage to the occipital lobe often lose the ability to experience vision, even if their eyes are completely fine. They neither see nor dream/visualize. Most of the information received at the eye is discarded early in processing, and processing is what creates the world of vision that we experience.
In some sense, they do still see, via the phenomena of blind sight, but they have no conscious experience of color.
The questions of "what is computation" and "what is a computer" are different. The latter seems straightforward: a computer is a Turing machine, or something that can emulate one. What is wrong with that?
What distinguishes a computer from other physical systems is not that they have states that evolve, but that they can be set up to compute anything computable. You won't find this in any physical systems other than brains and computers.
Quoting Count Timothy von Icarus
If not a name, 10/2 is certainly another form of 5. And transforming numbers from one form to another, like the transformation of all information, requires work. This work of transforming information from one form to another is called "computation". Does that sound reasonable?
Quoting Count Timothy von Icarus
This doesn't seem quite right. In the ordinary sense of the word, a broken computer doesn't "compute" anything. And yet it has C2s that follow from C1s. What is special about computers is not that their states evolve, but that they can be set up to implement ad hoc rules that proceed completely independently of their underlying physical implementation.
This is seen already with assembly language. It doesn't matter how an assembly language is implemented, only that it is implemented faithfully to its specification. A steam computer and a silicon computer that implement the same assembly language would work the same. And on top of these abstract rules, more rules can be implemented that don't resemble even the assembly language. This tower of increasing abstraction can be incredibly tall, and culminates in distributed systems like the web and cryptocurrencies.
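The assembly-language point can be sketched with a toy stack machine (the instruction set here is invented for illustration): any substrate that implements these rules faithfully, whether steam, silicon, or pen and paper, must produce the same answers.

```python
def run(program):
    """Execute a tiny stack-machine program and return the top of the
    stack. The rules are abstract; nothing about them fixes the hardware."""
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown op: {op}")
    return stack[-1]

# (2 + 3) * 4: the answer is fixed by the rules, not the substrate.
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(program))  # 20
```

Two implementations of `run` on wildly different hardware agree on every program, which is the sense in which the tower of abstraction floats free of its physical base.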
What makes computers special is that they are not bound by physical, causal reality. It is as if, in them, the informational component of reality broke free of the physical component. Brains are especially impressive, in that they are not just computers, but computers which managed to create computers.
It does. And this is the main problem I have with current abstract conceptions of computation: this work is largely ignored. To be sure, it shows up in the classification of computational complexity and in formalism to some degree, but these are the exceptions.
I'm not sure about this. In theory, a computer can compute anything a Turing Machine can; in actuality, it needs its inputs in a very precise format.
Both digital computers and brains only function in this dynamic fashion within a very narrow band of environmental settings. The brain is particularly fragile.
A human mathematician will not be able to compute algorithms thrown her way if we do something like project the inputs onto a screen with an orange background, using a font in a shade of orange that, for her, is indistinguishable from the background. All the information is there, but not the computation. The same is true for infrared light, audio signals outside the range of the human ear, etc.
Likewise, a digital computer needs its information to come in through an even narrower band of acceptable signals. Algorithms must be properly coded for the software in use, signals must come in through a very specific physical channel, etc. A digital computer takes in very little information from the environment without specialized attachments: cameras, microphones, etc. An unplugged digital computer acts not unlike a rock.
So I think the unique thing about either is that, given they exist in the narrow band of environments where they will function properly, and given information reaches them in formats they can use effectively, they can do all these wonderful things. How is this? My guess is that it comes down to the ability to discern between small differences. This is also what instruments do for humans and computers: allow for greater discernibility.
With a rock, the way the system responds to most inputs is largely identical. Information exists relationally, defined by the amount of difference one system can discern about another. Complexity and computational dynamism seem tied to how well a system can discern differences in some other system. Zap most physical objects with the signals coming out of an Ethernet cable and the result will be almost identical regardless of what information was coming out of the cable. Not so for our computer. Give humans a bunch of CDs with different information encoded on them and they will be unable to distinguish any difference unless they use specialized instruments.
The key, or at least part of it, is being able to undergo different state changes based on a much wider array of discernibility for at least some subset of the possible media used as inputs. A rock can undergo tons of state changes (just heat it up enough), but it can't respond differently to most inputs.
A computer is something that computes. My point has been that there are no computers in a mindless universe.
Is there anything in a mindless universe? Or anything we can say about one? By definition, no one will ever observe such a thing.
Given a mindless universe, could universals/abstract objects exist? I would tend to think not, but that's pretty far afield.
But you're not saying only minded things compute, right?
Do they ever exist? Certainly not in the sense that gas clouds and galaxies exist. But wherever sentient beings evolve they will be able to discern them. So they're real as intelligibles, not as phenomena per se.
Quoting Count Timothy von Icarus
Are you sure about that? I recall reading Simon Conway Morris about the mathematics of the 'protein hyperspace', the number of possible combinations of molecules that could form proteins - and that if these combinations were made by a purely random process, then it would take far longer than the age of the known universe to hit upon the specific combinations that actually comprise working proteins (see his book Life's Solution for details).
Likewise with your imaginary symbol-generation algorithm: whilst one can imagine the possibility of such a computation, it might require vast amounts of time to output all of the actual books, alongside the enormously greater number of 150-page collections of meaningless symbols. It may well produce more 150-page collections than there are particles in the universe. It strikes me as simply a more abstract version of the 'million monkeys' thought experiment.
Conway Morris' view is that in evolutionary time-scales, some forms are much more likely to emerge than others, because they solve problems (hence, the book's title). Wings and eyes and photosynthesis have evolved numerous times along completely different pathways to solve the same kinds of problems often by drawing on completely different elements and components.
No, I'm saying that minds are a necessary condition for computation. IOW, some mind has to observe the computational process in order for computation to occur. Without a mind giving meaning to it all, it's just changes in physical states.
I think most computation is unobserved though. Is it enough to see the final output of a computational process?
Suppose I run a nightly data job for a dashboard report. It's automated, so on any given night no one observes the job occurring, since it happens on a server in some regional data center through a virtual machine.
Are these just physical changes until someone checks the report in the morning, and then they become computation? Do the physical changes retroactively become computation? Or are they computation because I observed setting them up, or maybe because the aggregate CPU usage for the data center was observed by an employee during the night shift, and my job was a small component of this?
I'm looking at it more this way: without an observer, how is there anything other than a change in physical states? I don't think you can add on "computation" to the physical state changes without there being observation. Certainly, there needs to be an observer to attach meaning to the outcome of computation, whenever it occurs. What ontological status does a simulation have when no one's observing it? Is it even a simulation?
Right. A bundle of sticks that looks like this: VIII with no one to observe it is a bundle of sticks. It can't ever be more than that without some mind observing it and attaching additional signifiers. However, when the bundle of sticks is observed by someone who knows Roman Numerals, it's a bundle of sticks AND it picks up a new attribute courtesy of the mind observing it: it's a bundle of sticks and the roman numeral for 8.
Know exactly what you mean. I had a marathon thread here in the past about just this kind of thing. The broader situation is, modernity divides the Universe into subjective and objective. Then it says that the objective domain is entirely devoid of meaning, because meaning resides in the subject. Then it asks, why is it meaningless?
Another fun version I have thought about before:
A very simple algorithm outputs every possible combination of RGB values in a 1024x1024 pixel image. The program, which can be written in an afternoon by a competent programmer, produces (256^3)^(1024^2) images. Pretty pedestrian, as far as big numbers go. But this program's output will include:
...And this only scratches the surface of the surface of the surface of all the discernible images.
...And yet the vast, utterly overwhelming majority look like colored dots.
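For what it's worth, the size of that image space can be estimated in a couple of lines; only the 1024x1024 resolution and 24-bit RGB color are taken from the example above:

```python
import math

COLORS_PER_PIXEL = 256 ** 3      # 2^24 possible RGB values per pixel
PIXELS = 1024 * 1024             # pixels in a 1024x1024 image

# Total distinct images: (256^3)^(1024^2) = 2^(24 * 1,048,576).
# The count itself is astronomically large, so work with its log.
log10_images = PIXELS * math.log10(COLORS_PER_PIXEL)
print(f"roughly 10^{log10_images:,.0f} distinct images")
# Particles in the visible universe: roughly 10^80. Not remotely close.
```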
Quoting Count Timothy von Icarus
It just "names" (or, as we prefer, is another form of) every possible book, which is quite a different thing from any particular book. Selecting a desired book out of this heap is another computational problem, which the algorithms definitely do not solve.
So something changes in the computer when it is observed or is computation just in the mind of the observer? If the latter, why is it not the same for all universals, e.g., six rocks are not "six" until observed, a triangle isn't a triangle until it is observed, etc? I.e., nominalism.
I don't think I agree. It seems difficult to have information be mind-independent but not computation. I won't comment on the status of such things in theoretical "mindless universes," but in the real universe meaning, at least at the level of reference to something external to the system, absolutely seems to exist sans observers. E.g., ribosomes are presumably not conscious but can read code that refers to something other than itself, and they in turn follow the algorithm laid out in the code to manufacture a protein.
DNA computers organized to produce solutions to Hamiltonian path problems don't behave physically different from DNA in cells at a basic level, so it's hard to see what the difference would be.
No, looking at a computer doesn't cause any changes to the computer. However, observing the output of the computational process and attaching significance to it allows us to attach the word "computation" to the whole process. I don't think observation is sufficient to establish computation. I don't think a baby watching a computer counts. It has to be something that understands the output of the computation, something that can attach meaning to the output.
And is this the case for all universals? I can't say I find that to be an attractive position.
For one, look at the Chinese Room thought experiment. There it certainly seems like one can have computation without understanding. Or, in the example of the China Brain, you would have conscious entities carrying out computations they weren't aware of and couldn't ascribe meaning to.
In any event, I don't think this solves the pancomputationalism problem even if we accept your premise. The pancomputationalism problem/hypothesis arose because conscious philosophers and physicists observed computation everywhere. So the problem is still finding a non-arbitrary definition of computation, even if we bracket computation to just observed phenomena.
Isn't this guilty of the same division?
Humans are part of nature. Human minds presumably have natural causes and thoughts/subjective meaning are part of this natural world. The stick representing the number seven is a fact of nature, something empirically observable and testable.
That the sticks don't signal "7" to someone ignorant of Roman numerals or a dog is simply due to the relational nature of information. Having mind "create" new attributes seems to me to be falling into the same sort of (artificial) dualism.
You have the same problem within subjective experience when someone mistakes a fire alarm for a carbon monoxide detector or a burglar alarm. The information is substrate independent, but not arbitrary. If I mistake my fire alarm for a burglar alarm that does not turn the one into the other, regardless of the meaning I take from the signal. I can discern this if I trace the origins of the source.
Imagine an oscilloscope attached to an ethernet cable. Properly tuned, the image it displays will be sensitive to the electrical activity of the cable. Now record the oscilloscope with a video camera, and you have a system which is sensitive to minute changes of the cable over time.
But, recording and displaying this video is all it does. As sensitive as it is, the behavior of the system is still causally driven by the physical activity of the cable. You can understand the visual behavior in physical terms, which is why the oscilloscope is useful.
Contrast that with what happens when you plug the cable into the computer. The signal might be interpreted as an image, or a sound. Or logical instructions which when executed implement a set of abstract rules, such as how to play chess.
This is what I mean when I say that computers are not bound by causal reality. Unlike the oscilloscope, there are no laws of physics that correspond with the rules that it implements. There is no physical system that maps to the rules of chess, it is an abstraction realized by the computer. This I think is the key point that distinguishes computation from causality.
Interesting that the only place outside human activities and animal communications that something like transmission of information occurs is in living organisms and DNA, isn't it?
Quoting Count Timothy von Icarus
The mind is not something observable in nature. We can observe that other creatures are conscious and presume that they too have minds, but the mind is never a direct object of perception.
As for the interpretation of numbers and so on, humans inhabit a 'meaning world'. It doesn't comprise only objects, but also consists of a continuous process of interpretation, whereby we assign meaning to everything we encounter. Within that matrix, what is objective and what is subjective arise together - we don't see the world as if from no viewpoint, although we think it's easy to do so. But even the imagined panorama of an empty universe is organised around a point of view, without which there would be neither scale nor perspective.
First, isn't it the case that digital computers obey all the physical laws we know about? Hence why knowledge of physical laws has been essential to creating them in the first place.
The oscilloscope doesn't discern the same differences. For example, it is going to present instructions like "open this program and interpret the following in terms of this software" as simply a line, and the instructions as variations in that line. It is unable to discern how prior signals can change the interpretation of later signals, or how later signals can modify the context of earlier ones. Essentially, it lacks an ability to discern differences in signals over time, because it lacks memory.
Obviously part of what makes computers so useful is their ability to take new information/instructions and combine them with information already encoded in the computer itself. So, discernibility of inputs isn't the only important thing; there is also the ability to take on more discernibly different states that, importantly, will correspond to changes in their macrostate.
Computers are very low entropy, which means that, in terms of a Boltzmann distribution and how their micro constituents are organized, they can take on far fewer distinct microstates that align with their current macrostate than, say, a rock or a volume of gas. But what is important is that these state differences are such that the systems they interact with can discern between the state changes, and in turn that these state changes can be transformed into discernible macrostate changes.
Letters or video appearing on a monitor, a human doing a dance or picking up a guitar: these are all discernible macro changes resulting from micro changes. Throw either of these amazing systems into a magical blender that mixes up the constituent molecules and you're extremely unlikely to get any microstates that produce macro changes by chance.
Of course, if you smack a computer monitor and a rock together, the result is identical regardless of what the monitor was displaying. But for us, the output on the screen matters quite a bit. I think this is what you and Wayfarer are getting at. The problem there is that there are plenty of other differences in the physical world we can think of that are completely indiscernible for us, but which radically alter how some other, presumably non-conscious system responds to them. This is just the relational nature of information.
Is this the case? Doesn't water eroding topsoil generate information about its passage in the form of riverbeds? This seems to be why we can comment on the age of the Grand Canyon and the history of its formation, precisely because water encodes information on sandstone. Likewise, we can discover things about the atmospheres of distant planets because changes in light from far off stars due to the interactions between the light and the planets' atmospheres encodes information about these planets.
The Vortex optic/fire control system for the US Army's new 6.8mm rifle can instantly zero itself onto any target. It does this by symbolically representing inputs from a built-in range finder and atmospheric sensor, which are then analyzed by the ballistics computer. SHARP, a full fire control solution, goes a step further by recognizing targets for the user. It is able to process symbols that represent something else (the target) well enough that a user only has to hold down the trigger continuously to place accurate fire. The weapon will only discharge on a calculated hit (reducing the recoil and volume-of-fire concerns that come with using a full power cartridge).
It will accomplish this symbolic representation even if mounted to a drone, with no immediate conscious observer.
Aren't our own minds the objects of direct perception? Arguably this is the only thing we observe directly, depending on how you define direct. Light, apples, cars: these are all filtered through the mind, Kant's old transcendental and all.
I'm curious on this line of inquiry though, do you think artificial intelligence could generate such meaning? Do dogs experience it?
It seems to me that this risks conflating the concept of information, which seems to be widely applicable to the natural world, with the presence of first person experience, which is on the one hand everywhere (all objects are subsumed in it) but also generally presumed to only be connected to a small fraction of all the external objects in intersubjective reality. I don't think the former is necessitated by the immediate presence of the latter, although perhaps the existence of information does require the potential of experience.
I say this because I think it's likely the case that light carried information about far-off planets to the Earth even before the Earth had life on it. If it didn't, I don't know why we shouldn't just take the extra step of saying the Earth didn't properly exist until life did.
So, I would agree that there is an important sense in which differences/meanings that are only discernible for human beings do not exist when no human being observes them. That is, their existence or non-existence is identical for describing reality for some given period P. But, in an important way, the information must exist during P, in that its potential is always there.
But that's informative to us. The difference with the information encoded in DNA is that it is morphogenetic, i.e., it causes things to happen; it transmits and stores information. That is why some (although not all) biologists recognise an ontological distinction between life and non-life: living things are different in kind, not just in degree, from the elements of the periodic table.
Quoting Count Timothy von Icarus
Very tricky distinctions, but I say that it's not. The mind is primarily the subject of experience, that which objects are perceptible to. We can't stand outside of the mind and make it an object in the same sense we can objectify perceptibles. We can obviously talk about our state of mind and mental events, but the question of what the mind is, that has these experiences, is a deep one. There's a theme in current phenomenology about this idea, along the lines of 'the mind knows but is not known', as it is always the subject or recipient, never amongst the objects of perception. It's a question with an ancient heritage, and of course, you're right in mentioning Kant.
I tend to question what could be meant with that. We say a reflection on thought is one of our selves but do not overcome the distinction of the observing and the observed. The "observed" thought - the words in "mind" - is "there", in "space". Absolute Idealism hinted that this perception is already adequate. There is no arbitrary determination of things done "by the mind". The things themselves have already imposed their negativity, ie their restrictions on what they can be, on the subject which is a part of totality:
An attempt at reduction leads to the thought of two "me"s: it is the pure observance of "I am"-me that has the quality of "I think"-me. We tend to think of the subject as the active, which it is (in common sense) only as long as it misunderstands itself as its object. A camera being moved has the impression of moving; the pure observance of "a tripod wanting to walk" bears the impression of a want to walk. How fitting!
It sounds like you guys are conflating information and interpretation. If these were the same information could not be interpreted in multiple ways. Only interpretation cannot occur without an observer, and this can include machines as well as minds.
Computers certainly operate on information.
Does a library at night have any information? Do all the books have information, or only the ones currently being read?
I don't know how you define information. If it is state, there is certainly state without interpretation.
Computers and libraries are human inventions. Whatever order they have originated from that.
Interpretation doesn't need a conscious observer though. Plenty of industrial systems are set up in such a way that the same signal is meant to represent different things in different contexts. This is true in software too.
Exactly. Books in a library have information because of their states. And notably, interpretation is also a question of states. If I tell one person "if you see me raise my hand it means go start the car," and another person "if you see me raise my hand I want you to grab my bag," my act has two different meanings because it is being computed in the context of previously exchanged information that resulted in state changes in my interlocutors.
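That hand-raising case can be sketched as state-dependent interpretation (the class and phrasing are purely illustrative): the same physical signal is mapped to different meanings by each receiver's previously stored state.

```python
class Interlocutor:
    """Interpretation as computation over stored state: the same
    signal means different things given different prior messages."""
    def __init__(self):
        self.on_hand_raise = None

    def receive_instruction(self, meaning):
        # A prior message changes the receiver's internal state.
        self.on_hand_raise = meaning

    def interpret(self, signal):
        if signal == "raised hand" and self.on_hand_raise is not None:
            return self.on_hand_raise
        return "uninterpreted signal"

driver, porter = Interlocutor(), Interlocutor()
driver.receive_instruction("go start the car")
porter.receive_instruction("grab the bag")

# One physical act, two meanings, each fixed by the receiver's state.
print(driver.interpret("raised hand"))  # go start the car
print(porter.interpret("raised hand"))  # grab the bag
```

A receiver with no stored instruction yields no interpretation at all, which is the sense in which the meaning lives in the exchanged-state history rather than in the signal alone.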
I would argue that information exists "in the wild," as discernible differences. If information only existed when observed we would have to posit that observation somehow changes state differences into information, a change in the object, or that all information only exists in the minds of observers. So a riverbed wouldn't store information of the passage of water, but then its physical state, which seems identical to the total information that can be taken from it, is somehow different?
I think a lot of confusion comes from "one thing having different meanings." As in the example of the text shuffler above, such meanings do not come from the information source. Rather, they would come from the observers' knowledge of the meanings of certain arrangements of symbols, which has arisen through history and presumably been taught to them. This is the interaction of information in the signal and previously received, stored information in the individual, which requires computation. So, the Roman numerals VII doesn't store information about the number 7 explicitly itself, but rather it does so in the context of that relationship already having been transmitted to the observer and stored internally.
From the perspective of quantitative theories of information, which are used to define computation in physical systems, the text shuffler can only tell us information about the nature of the algorithm and how it shuffles the text, leaving aside information about the physical aspects of the computer. It doesn't have information on how to cure cancer or alternate endings to War and Peace, although it can produce text that can be interpreted in that way. This does set up a potential Gettier problem in information theory, though, although I haven't seen anyone write about it.
Having information rest solely in the minds of observers seems at risk of becoming subjective idealism. The information has to correspond to and emerge from external state differences or else how can we discuss incorrect interpretations of any signal?
That's why I suggested "two player" game semantics. The semantics of interaction isn't accommodated by the traditional conceptions of either computation or causality, both of which define life to be a one-player game but disagree as to who the solitary player is.
Doesn't that imply that a discerner is a necessary condition for "discernible differences"? Or do you mean there are differences that are, potentially, discernible?
I do have a theory of how "computation is instantiated in the world". But first, I must take issue with "computation" as a Definition rather than an Action*1. If you can accept -- as a philosophical postulation -- the notion that Evolution is a process of Computation (a la Tegmark), then my own unorthodox thesis might make sense.
It begins from the assumption that everything in this world is a form of Generic Information (Energy + Logic). The mathematical Logic of Nature gives direction to the propulsion of Energy. If so, then we can use a neologism to label that creative Enforming process : EnFormAction*2. I won't try to explain that novel concept further, unless you think that it could be a viable answer to your topical question : natural computation is instantiated via En-Form-Action -- the act of evolutionary computation of novel forms of being from previous entities. :smile:
*1. Computation : the action of mathematical calculation.
___Oxford
Note -- calculation adds or multiplies two or more values in order to derive a third value. Metaphorically, that's also what Evolution does, as it creates novel forms of being.
*2. EnFormAction :
That neologism is an analysis and re-synthesis of the common word for the latent power of mental contents : “Information”. “En” stands for energy, the physical power to cause change; “Form” refers to Platonic Ideals that become real; “Action” is the meta-physical power of transformation, as exemplified in the amazing metamorphoses of physics, whereby one kind of thing becomes a new kind of thing, with novel properties. In the Enformationism worldview, EnFormAction is eternal creative potential in action : it's how creation-via-evolution works.
https://bothandblog3.enformationism.info/page23.html
Yes, but not a conscious observer. For example, an indivisible "particle" alone in its own universe would transmit no information, and since it has no proper parts, no information transfer occurs within it. It cannot be interacted with. Can such a thing be said to exist? It would have no existence outside of some bare haecceity proposed as unobservable brute fact.
Scott Mueller's Asymmetry: The Foundation of Information has some good examples of relative indiscernibility in physical systems. We can usefully distinguish between "all possible discernible differences" and "all possible discernible differences vis-à-vis one system's interactions with its environment."
For the purposes of modeling physical systems, you can ignore "possibilities" that aren't relevant, but philosophically they seem relevant. Most people would like to avoid saying that differences go in and out of existence depending on what the system is interacting with. I think the idea that information only exists in the context of conscious observers is just a more specialized version of that unappealing view.
I will have to give this one some more thought.
Tree rings contain evidence of forest fires, and ice-cores of atmospheric records. I'm not disputing that. But I'm saying that the mere existence of those data doesn't constitute information about anything until they're interpreted. The contrast to living organisms is that in them, information is dynamically interpreted by cellular processes moment by moment; it's intrinsic to any organic process.
Quoting Count Timothy von Icarus
I think you're referring to a rather simplistic conception of idealism, of the variety that Samuel Johnson attempted to refute by kicking the stone. I favour a form of objective idealism. It's not that 'the world exists in my mind', but that what we understand as reality entails an ineliminable subjective aspect, without which nothing would make any sense. And we supply that. The mind is continually interpreting and integrating information about the world so as to make it intelligible - and not only intelligible, but navigable - for us. That order is at once 'the order of perceptions' and 'the order of the world' - in very much a Kantian sense.
All due respect, I think the error you're making is that of metaphysical naturalism - the assumption that the world would exist, just as it seems to now, were no humans present within it. But even that apparently empty world is still organised around an implicit perspective. Take that away and you can't imagine anything whatever.
Wouldn't the interpretation have to be done by something with a mind?
I don't make that mistake though. Without life, there is no color, no texture as such, perhaps no space-time as we understand it.
My objection is to the idea that fundamental differences in external objects somehow do not exist or change within the object when conscious observation occurs. I think the mechanisms which allow topsoil to record the passage of water, or passing light to record the existence of far off exoplanets, are the same mechanisms that allow eyes or cameras to record light, and that the same mechanisms that make rocks vibrate due to pressure waves are involved in hearing, etc.
I don't want to get into the hard problem of consciousness; my point is simply that the means by which sensory organs record incoming data, and by which neurons subject that data to computation, aren't qualitatively different from other natural phenomena.
So, my objection is to differences, of which information is composed, not existing simpliciter in external states. If mind is required to create them, then how do minds come to agree so much on that information? Why posit external objects at all if the fundamental source of all knowledge of them is only created by conscious observation?
Our differences might be on definition. I see information as arising from fundamental ontological difference. Although I don't much like Floridi's overall theory of information, I find his arguments against popular conceptions of digital ontology in physics quite compelling (Chapter 14 of The Philosophy of Information). Quite simply, a universe without difference is impossible. Even a universe consisting of a two-dimensional plane must have points whose coordinates differ from one another.
An ontology of fundamental difference is maximally "portable," in that it can fit with many other ontologies, be it flavors of idealism, dualism, or physicalism.
I would, however, agree with the physicists who push digital ontology on the idea that information is ontologically more basic than physical structures. These fundamental differences are a necessary condition for physical structures to arise. And in any event, the "physical structures" we understand we only know as abstractions of mind, so in both an ontological and an epistemological sense, information is prior to physical state differences, not something that emerges from an interaction of mind and physical systems.
In my computation = cause thesis, which I am not very committed to, elementary elements of physics would be akin to numbers in formalist interpretations of Peano Arithmetic, while the more essential logic and relations are informational in nature. The axioms define the numbers, just as, in a universe with different constants, an electron would not be an electron and would behave differently. If I wanted to be even more speculative, I would say these "axioms" in physics are unlikely to be arbitrary brute facts existing as seemingly eternal laws, but rather the result of dialectical processes through which contradictions are resolved, and that this might explain the presence of mind in a teleological sense (sort of what Nagel has in mind for a project in his Mind and Cosmos). This is very speculative though, something like Basarab Nicolescu's book on Jacob Boehme and modern physics.
Perhaps our disagreement is on definition though. Semantic information or "meaning" appears to require mind, and oftentimes this is taken as synonymous with information, while I prefer the bare mathematical definition.
Quoting Count Timothy von Icarus
You’re familiar with books such as ‘Just Six Numbers’ by Martin Rees? (Achingly dull read, I found.) It's about the fundamental physical constraints which must exist at a foundational level if the universe is even going to form matter. So I don't know if it's feasible that there could be an alternative; there's something about necessity woven into the fabric of the cosmos, it seems to me. These ratios and values have to be a certain way, otherwise stars would not form.
As for information - I think the difference we have is roughly like the difference between pan- and biosemiosis. Pansemiosis proposes that all things, living and non-living, possess a form of semiotic or sign-making capacity, that everything in the universe, including animals, plants, rocks, and even inanimate objects, can be interpreted in terms of signs. Biosemiosis limits the scope of semiotics to living processes. It's an area of disagreement, but the latter seems more feasible to me.
Why do you assume reality is such that there exist external objects? I get why, I guess, but I think that assumption has to be argued for.
How would computation work in an idealistic reality? Would that solve some of the confusion here?
https://www.worldscientific.com/doi/10.1142/9789814295482_0004
I did not realize that Collier was the advisor of Scott Mueller. I thought his dissertation "Asymmetry: The Foundation of Information," was excellent.
Unfortunately, I feel like this is an article where the formalism hinders the argument more than it helps it. Formalization is great when it can demonstrate logical connections that are hard to follow otherwise, and even more so when it allows for practical calculations (Kolmogorov Complexity is a great example), but sometimes it can make it harder to grasp the core issue.
Philosophy of information, being at times considered a sub-branch of philosophy of mathematics, does seem quite big on formalism. This isn't necessarily a good thing, because people can agree on equations, or understand how to use them in some cases, while disagreeing on the conceptual underpinnings, or having contradictory understandings of the formalism.
Very briefly, the agreement of people and instruments on key facts about the seemingly external world suggests to me that such a world does exist. I know there are maneuvers around this, but I am not a fan of Berkeley's "God does it all," explanation. It seems to me that subjective idealism requires a level of skepticism that should also put the existence of other minds and the findings of empiricism in doubt, in which case it becomes only arbitrarily distinct from solipsism.
It would depend on the system. In Kastrup's system, external objects are indeed external to us; they are just composed of mental substance. I don't think anything changes here. As individuals, we are dissociated parts of a mental whole, and the differences that give rise to information, and thus computation, DO exist externally.
I think it still works the same way in something like Hegel's system, which in some respects foreshadows information theory. From Pinkard's "Hegel's Naturalism:"
In my own musings on the development of Information Theory, I take seriously the conclusion of quantum theorists that abstract analog Information is equivalent to Energy. If so, there can be both Potential Information (DNA) and Actual Information (protein). Any "uninterpreted information" would be like the Energy stored in Momentum or Position : it can be actualized in a "collision" that transforms Momentum into Action. That dynamic relationship works for both organic and non-organic aspects of Nature. Potential Energy (ability to do work) is the not-yet-activated Power of Position (relationship), as illustrated by gravity's changing force relative to a gravitational body.
A recent development in Physics is the notion that Information is a basic property of the Universe. Ironically, the philosophical implication of that idea is that the fundamental element of the world is something similar to an information-processing Mind. Tegmark has proposed that our Reality is an ongoing computation by that hypothetical (mysterious) mind. Unfortunately, his mathematical theory is idealistic and unverifiable by empirical means. So, it remains a philosophical conjecture, reasoning from abstractions like logical/mathematical structure (ratios). You can take it or leave it, as seems reasonable to you. But you can't prove or disprove it. Perhaps treat it as a 21st century myth. But more romantic minds might prefer to imagine the Cosmic Mind as dreaming the apparent world, instead of mechanical Nature computing the physical world. :smile:
Information as a basic property of the universe :
https://pubmed.ncbi.nlm.nih.gov/8734520/
A Universe Built of Information :
https://link.springer.com/chapter/10.1007/978-3-030-03633-1_13
Physics Is Pointing Inexorably to Mind :
https://blogs.scientificamerican.com/observations/physics-is-pointing-inexorably-to-mind/
Thank you for sharing. I believe that my own understanding of computation has been significantly improved.
Quoting Bernardo Kastrup
Quoting Gnomon
I notice that the Information as a basic property of the Universe abstract says that 'Pure energy can perform no 'useful' (entropy reducing) work without a concomitant input of information' - but what is the source of that information? (I've found a brief profile of Tom Stonier here - quite an interesting fellow, but I am dubious that what he's saying really can be reduced to physics. There are any number of ID theorists who would exploit Stonier's observation by saying "well, you know who the source of that "information" must be" - not that I would endorse them. See The Argument from Biological Information.)
As a passage from the Kastrup OP you link to says:
And it is what Kastrup disputes as 'hand-waving word games'.
I didn't mean to imply that Kastrup's ontology is at risk for solipsism, just that it is completely compatible with computation as causation, even if he doesn't think so.
I think there is a hard and soft statement of this compatibility. The hard statement would be that information is the primordial, ontologically basic component of Kastrup's mental substance. The soft view would be that information is merely an epistemologically useful model of the basic elements of said substance, and computation is simply observably identical to causation, even if we think there is some sort of bare substratum of being that exists beneath that level of analysis.
I did not find Kastrup's dismissal of information-based ontologies particularly strong. What he seems to be arguing against are the models that have come out of physics where fundamental particles are simply replaced by qubits. These are the same sort that Floridi defeats with far more detail in his book. However, something like Floridi's maximally portable ontology, in which "information" is well defined (even if it isn't in the rest of the book) is compatible with what Kastrup is proposing. It's a logical truth that any toy universe needs to have differences to be coherent. You can't even have a 2D plane if none of the points on said plane differ from each other in any respect.
Let's ignore quantum difficulties for now.
Suppose we have 7 people in a room. We have cut them off from the rest of reality using a magical forcefield, thus they exist in a closed system. They are playing poker. We want to bet on the hands, or maybe even the conversation they make, who goes to the bathroom when, etc.
Well, with our Laplace's Demon, we can cheat, right? Just fire it up and have it evolve the system forward. It will create a fully accurate projection of the future.
Thus, information is not created or destroyed, at least in one sense, in that a complete description of the state of the room at time T1 tells us exactly what it will look like at T2, T3, etc.
However, I don't think this is where we should end the analysis. In order to create these predictions or retrodictions, the demon must complete stepwise computations. That is, it needs to occupy a number of distinguishable states to produce its output. Perhaps this number of states is smaller than the number of distinguishable states the room passes through; it is possible the demon can take advantage of compression. Presumably, the Kolmogorov Complexity of a system can change over time. But this doesn't change the fact that the demon needs to move through these states to produce its output.
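The compression point can be made concrete with a rough, runnable proxy. Kolmogorov Complexity itself is uncomputable, but compressed length gives a crude upper bound on description length; the byte strings below are hypothetical stand-ins for "state descriptions," a sketch rather than a claim about any real physical system.

```python
import random
import zlib

# Kolmogorov Complexity is uncomputable, but compressed length is a crude
# upper-bound proxy for description length. These two byte strings are
# made-up stand-ins for "state descriptions" of a system.
random.seed(0)

regular = b"ab" * 5000                                            # highly patterned state
irregular = bytes(random.randrange(256) for _ in range(10_000))   # patternless state

len_regular = len(zlib.compress(regular))
len_irregular = len(zlib.compress(irregular))

# The patterned description compresses dramatically; the patternless one
# barely compresses at all. This is the sense in which a demon modelling
# a highly regular system could "take advantage of compression."
assert len_regular < len_irregular / 2
```

Nothing hangs on the particular strings; the point is only that two descriptions of equal raw size can have wildly different minimal description lengths.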
If it is storing outputs from its computations in memory, it is creating new information during this process. Even if we model the demon as a Markov chain, it is still passing through these many states. And here is the crux of my argument: a full description of each of the states the demon passes through to evolve the system from time T to time T' would require more information than is used to describe either T or T' alone. If you say, "not true, T3 tells you all about T4 and T5," my response would be, "if that is the case, show me T5 without passing through any more states." If T is truly equivalent to T', it shouldn't be discernible from it. If it is discernible, then difference exists (Leibniz's Law), and so too does new information.
That is, we cannot ignore the process of evolution, as is often done. Computation creates discernible differences across a time dimension, such that if we had a second Laplace's demon producing outputs about every state the first demon passes through, the output would be many times larger than the first's when it simply describes T' based on T.
Two points here:
1. I think this example explains why computation has to be thought of as existing abstractly outside of merely describing equivalencies between inputs and outputs for a given algorithm. Perhaps the equals sign might be better thought of as a "transformation" sign in some respects.
2. If our demons perfectly describe all aspects of causation in our room, to the most fundamental observable level, and if this is accomplished via computation, then I don't see a huge leap in saying there is a sense in which the system also "computes itself," leaving aside arguments about intentionality.
Unless there are immaterial factors that go into decision making, like conscious states. I'm not convinced that if we evolve the system forward, we'll get the same result every time. You're assuming a form of strict materialism where knowledge of all the particles and forces = 100% knowledge of the people in the room.
Yes, that is a conceit of Laplace's thought experiment. I don't mean to assert that this is a realistic experiment (the magic force field and all). I don't think this is material to the point though. I merely wanted to show how computation is indiscernible from what is often meant by "causation" when considering the classical systems that we normally encounter. I don't think we need to make any claims about ontology here; we can just consider empirically observed facts about the external world (which could be a mental substrate).
If quantum mechanics truly is stochastic in nature, as it appears to be, then the Demon can't produce just one output for T' given T. It will need to produce many, many outputs and assign probabilities to each.
If mind is non-physical, then presumably the demon can pinpoint the interaction between non-physical mind and the physical system. Maybe not though, perhaps Von Neumann's "Consciousness Causes Collapse," is the case. If that is so, I am not sure the Demon can do its job, I would need to think more about that.
It would seem though that consciousness cannot cause arbitrary changes in systems, since the results of collapse can be predicted probabilistically. This being the case, the effects on the physical world would still be computable. We would just need to bracket "causation = computation" to the "physical" world we can observe intersubjectively.
In a fundamentally stochastic system, computation still seems to be mirroring causation, just as a quantum computer rather than a classical one. Note though that this change kills the usefulness of our Demon. Even if our Demon can predict all possible outcomes and assign probabilities to them, the number of nearly equally likely states would multiply so rapidly that it would soon become a useless source of prediction.
Mathematician here. I think you're getting into trouble (in an interesting way). If the model is a discrete-time Markov chain determined by a matrix P of transition probabilities, with states v0, v1, ... at times T0, T1, ..., then you can calculate v1, v2, ..., vn step by step, using v1 = P v0, v2 = P v1, etc. But you can also square P repeatedly to get a high power of P, and go straight from v0 to vn. There is a lot of pre-computation, but once it's done you can fast-forward to states far in the future.
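In code, the two routes look like this. The two-state chain and its transition probabilities are made up purely for illustration; a pure-Python sketch keeps it self-contained, though any linear-algebra library would do the same job.

```python
def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(p, v):
    """Apply transition matrix p to a column state vector v: v' = P v."""
    n = len(p)
    return [sum(p[i][k] * v[k] for k in range(n)) for i in range(n)]

def mat_pow(p, n):
    """Compute p**n by repeated squaring (O(log n) matrix multiplications)."""
    dim = len(p)
    result = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    while n > 0:
        if n % 2 == 1:
            result = mat_mul(result, p)
        p = mat_mul(p, p)
        n //= 2
    return result

# A toy two-state chain (hypothetical numbers; columns sum to 1).
P = [[0.9, 0.5],
     [0.1, 0.5]]
v0 = [1.0, 0.0]

# Route 1 -- step by step: v1 = P v0, v2 = P v1, ..., ten times.
v = v0
for _ in range(10):
    v = mat_vec(P, v)

# Route 2 -- fast-forward: v10 = P^10 v0, after pre-computing P^10.
v_fast = mat_vec(mat_pow(P, 10), v0)

assert all(abs(a - b) < 1e-9 for a, b in zip(v, v_fast))
```

The squaring route reaches T10 in four matrix multiplications of pre-computation plus one application, rather than ten applications, and the gap widens exponentially for more distant times.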
Quoting Count Timothy von Icarus
Well, you can't ignore the process of evolution completely, but you can skip large chunks of time. Not sure where this leaves your point 2.
(Some time ago I was thinking about Tononi's integrated information theory, and wondering if fast-forwarding would destroy consciousness. I don't want to get into the hard problem here.)
Thanks. Perhaps I'm not fully understanding your point, but does this actually reduce the number of computations required or just the length of the algorithm needed to describe the transition from T1 to Tn?
7^4 is simpler than writing 7 × 7 × 7 × 7, which is simpler than 7 + 7 + 7.... 343 times, but computing this in binary by flipping bits is going to require the same minimal number of steps. Certainly, some arithmetic seems cognitively automatic, but most arithmetic of any significance requires us to grab a pen and start breaking out the notation into manageable chunks, making the overall amount of computation required (at least somewhat) invariant to how the procedure is formalized.
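One way to pressure-test that invariance claim is to actually count operations under two procedures for the same power. What counts as a "primitive step" is of course a modelling choice; this sketch just counts multiplications, and the exponent 16 is picked to make the gap visible.

```python
def pow_naive(base, exp):
    """Repeated multiplication: always exp - 1 multiplications."""
    value, mults = base, 0
    for _ in range(exp - 1):
        value *= base
        mults += 1
    return value, mults

def pow_square(base, exp):
    """Exponentiation by squaring: O(log exp) multiplications."""
    value, mults = 1, 0
    while exp > 0:
        if exp % 2 == 1:
            value *= base
            mults += 1
        base *= base
        exp //= 2
        if exp > 0:          # only count a squaring we will actually use
            mults += 1
    return value, mults

# Same answer, different step counts at this granularity.
v1, m1 = pow_naive(7, 16)    # 15 multiplications
v2, m2 = pow_square(7, 16)   # 5 multiplications
assert v1 == v2 == 7 ** 16
assert (m1, m2) == (15, 5)
```

So at least at the granularity of multiplications, the count is not invariant to the procedure; whether the counts converge again at the level of bit flips is exactly the open question here.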
Information is substrate independent, so a process occurring faster in a model formed of a different substrate is also to be expected. I also think it is quite possible that computation which models parts of our world can be compressed, which would allow for a potential "fast forwarding". Indeed, I think the belief that one can make accurate predictions about the future from a model sort of presupposes this fact.
Just some other thoughts:
----
If anything, recognizing 7^2 is 49, 8^2 is 64, etc. as names/identities, i.e. making that fact part of long-term memory, probably requires MORE computation/energy than doing difficult mental arithmetic. Otherwise, it seems we should have evolved to store all answers to problems we have worked out in long-term memory, rather than relying on working memory, but maybe not; that's a complex issue. The idea that 7 squared is just a name for 49 might then be a bit of a cognitive illusion, long-term storage being resource intensive in the big picture, but retrieval of facts from it being cheap.
If we suppose that entities in the world have all the properties we can ever observe them to have at all times, even when those properties are immaterial to their current interactions (and thus unobserved and unobservable without changing the context), then it is understandable that a computation that accurately represents their evolution can be reduced in complexity.
However, I can also imagine a position that says that properties only exist contextually. This view runs into problems if fast forwarding is possible, but I think you might be able to resolve these by looking at the different relationships that exist between you, the observer, the system you want to model, and your model. That is, different relationships MUST exist if you can tell your model/demon apart from your system in the first place, so this doesn't actually hurt the relational/contextual view, i.e., "a physical system is what it does."
It might reduce or increase the number of computations required - that would depend on many details. Perhaps it doesn't matter to you that the computation doesn't go through time in small steps.
One other thought: you might find the idea of functional information interesting. E.g. https://www.nature.com/articles/423689a. Perhaps it is possible to come up with a notion of 'functional information processing' which would distinguish between arbitrary information processing (which you might call causation) and 'meaningful' information processing (which you might call computation).