What is the Simulation Hypothesis, and How Likely is it?

noAxioms March 16, 2024 at 21:39 11600 views 93 comments
This is in reaction to several posts early in the 'Can computers think' topic, but it is fairly off-topic there, so I'm spinning it off into a new thread.

Quoting Ludwig V
Take simulations of people. It is possible to make a figure that is so like a person that people think it is a person - until they talk to it.

The simulation hypothesis has nothing to do with an imitation of a person, which would be an android or some other 'fake' human. So when somebody suggests 'if all is a simulation', this is not what is being referenced. For example:
Quoting RogueAI
What if this is all a simulation and everyone you think is conscious are really NPC's?
RogueAI is probably not suggesting an imitation person here.


Wiki has an article that contradicts itself at times. It says in short:
"The simulation hypothesis proposes that what humans experience as the world is actually a simulated reality, such as a computer simulation in which humans themselves are constructs."

The bolded part is true in general, but humans are not always themselves constructs. This breaks the hypothesis into two major categories: a pure simulation as defined above, and a virtual reality (VR) in which real experiencers (minds) are fed an artificially generated sensory input stream (e.g. the Matrix), which contradicts the bolded definition above.

Actual Simulation

The actual simulation hypothesis somewhat corresponds to the mathematical universe hypothesis, that the universe evolves by purely mathematical natural law, and that the simulation is simply an explicit execution of an approximation of those laws, on a closed or open system. A simulated human would not have free will as it is often defined.
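To make "an explicit execution of an approximation of those laws" concrete, here is a toy sketch (illustrative only; the bodies, numbers and units are arbitrary inventions of mine) of a closed two-body system stepped forward under Newtonian gravity with a crude Euler integrator:

[code]
# Toy sketch: explicitly executing an approximation of natural law
# (Newtonian gravity, crude Euler steps) on a closed two-body system.
# Values and units are arbitrary; this only shows the shape of
# "compute the next state purely from the prior state".

G = 1.0      # gravitational constant, arbitrary units
dt = 0.001   # time step; the approximation error lives here

bodies = [
    {"m": 1.0,   "x": [0.0, 0.0], "v": [0.0, 0.0]},
    {"m": 0.001, "x": [1.0, 0.0], "v": [0.0, 1.0]},
]

def step(bodies, dt):
    """Advance the whole closed system one tick, using only its prior state."""
    forces = []
    for i, a in enumerate(bodies):
        fx = fy = 0.0
        for j, b in enumerate(bodies):
            if i == j:
                continue
            dx = b["x"][0] - a["x"][0]
            dy = b["x"][1] - a["x"][1]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            f = G * a["m"] * b["m"] / r2
            fx += f * dx / r
            fy += f * dy / r
        forces.append((fx, fy))
    for (fx, fy), a in zip(forces, bodies):
        a["v"][0] += fx / a["m"] * dt
        a["v"][1] += fy / a["m"] * dt
        a["x"][0] += a["v"][0] * dt
        a["x"][1] += a["v"][1] * dt

for _ in range(10000):
    step(bodies, dt)
[/code]

Nothing in that loop 'knows' anything; each new state is computed purely from the prior one, and that brute stepping is all that a Sim in this sense amounts to.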

It presumes that human consciousness is a purely physical process (physicalism), and thus a sufficiently detailed simulation of that physics would produce humans that are conscious. Technically speaking, there need not be humans at all, or consciousness. They perform, for instance, simulations of car crashes at the design phase, the results of which eventually generate a safer design. Such simulations likely have people in them, but only at a physiological level that can assess physical damage and injury/fatality rates. The experience of these occupants is not simulated since there's no need for it.

There is no technology constraint on any pure simulation, so anything that can be done by computer can be done (far slower) by paper and pencil. That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.
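To see why the substrate doesn't matter, here is a minimal Turing-machine interpreter (a toy sketch of my own, not anything from Bostrom); every step is one table lookup, one symbol write and one head move, exactly the kind of operation a person with pencil, paper and absurd patience could carry out by hand:

[code]
# Minimal Turing machine interpreter. Each step is a single table lookup
# plus a write and a head move -- nothing a person with pencil and paper
# could not do by hand, just far more slowly.

def run(table, tape, state="start", head=0, halt="halt"):
    tape = dict(enumerate(tape))   # sparse tape; missing cells read as blank "_"
    while state != halt:
        symbol = tape.get(head, "_")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return tape

# Example table: flip every 1 to 0 and vice versa, halting at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(flipper, "10110"))   # {0: '0', 1: '1', 2: '0', 3: '0', 4: '1', 5: '_'}
[/code]

The table for anything interesting would be astronomically larger, but the individual operations never get any harder; speed is the only casualty.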

If I am part of the system being simulated, I have little if any empirical access to the universe upon which this universe supervenes. There is no reason to suggest that the simulation is being run by humans, especially since, in principle, our physics cannot be implemented on a computer confined by the laws of this universe. So the simulation is likely being run in a universe with different laws that enable more powerful computations. Bostrom sees this and tries to get around this obstacle, but must turn a blind eye to a lot of problems in order to do so.


Virtual Reality

The other simulation hypothesis is Virtual Reality (VR), which is a form of dualism, and also corresponds significantly with BiV (Brain in Vat) philosophical discussions. The idea is that minds are real and exist in the same reality that is running the VR, providing artificial empirical input to the experiencer(s). Elon Musk seems to be a fan of this, but justifies it by referencing Bostrom's hypothesis, which shows that Musk doesn't know the difference. The idea here is that the mind has the free will to control a simulated avatar body, and experiences its sensory input. Whether or not some or all of the other people are similarly dualistic (avatars) or native (NPC, or P-zombie) is one avenue of investigation.

A VR must run in real time, or else the experiencer would notice the lag. There are several empirical methods to detect problems in this area, especially if the VR is not solipsistic. These make certain presumptions about the nature of the mind receiving the input, and since this is technically unknown, falsification by these methods is not sound. Too much to get into in just the OP here.

The best way to test this is to trace back the causal chain for decisions and see if the cause comes from actual physical process or if it comes from outside. The test must be done on yourself since anyone else could be an NPC.

Quoting Patterner
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere.

In the VR scenario, the mind would be hooked to the simulated empirical stream, but it would not be itself an AI. In fact, simulation (at least of a closed system) needs only brute calculation with no intelligence at all. It's just executing physical interactions, tedious work not in need of intelligence.

I bring up BiV due to the similar issues and falsifications. A brain in a vat need not be a brain at all, but some sort of mind black-box. Introspection is the only evidence. A non-human mind in a vat being fed false information that it is a human living on Earth has no clue that it isn't a pink squishy thing doing the experiencing, or exerting the will.

A big question of the VR hypothesis is where the minds come from. Not sure if the question can be asked since it probes the nature of the higher reality running the VR, and we have no way to investigate that.



A common issue: any simulation must either be closed or must deal with interactions from outside the simulated system. Where to draw the line between 'the system' and 'the rest' brings many of the issues of Sim and VR together. If one simulates Earth, including the consciousness of all its inhabitants, then the moon is outside the system, and only the experience of it sitting up there is fed into the simulated Earth. That is very similar to the fake feed the VR gives to the mind, which is the closed system of a VR setup.

Bostrom goes to some lengths to attempt to define a complicated line dividing the system (which seems to be just humans, but a lot if not all of them) from everything else. He doesn't justify why anybody would want to do that, even given sufficient computing power to do it.

Comments (93)

Ludwig V March 16, 2024 at 22:59 #888554
Quoting noAxioms
The simulation hypothesis has nothing to do with an imitation of a person, which would be an android or some other 'fake' human.

The "simulation hypothesis" is indeed quite different from the hypothesis that there are imitations of people around. I'm not quite sure that it has "nothing to do" with fake people.

Quoting noAxioms
What if this is all a simulation and everyone you think is conscious are really NPC's?
— RogueAI
RogueAI is probably not suggesting an imitation person here.

Quoting noAxioms
The simulation hypothesis proposes that what humans experience as the world is actually a simulated reality, such as a computer simulation in which humans themselves are constructs."

On the face of it, this looks like a generalization from "there are some fake, imitation, simulated people around" to "everything is a simulation".
One complication is that we have a forest of similar concepts that work in the same space. Teasing out the differences between an imitation, a fake, a forgery, a pretence, a simulation, etc. would be very complicated. But I think that some general remarks can be made.

It is undoubtedly true that any money in your pocket could be forged. But it does not follow that all money everywhere at all times might be forged. On the contrary, a forgery can only be a forgery if there is such a thing as the real thing.

In all of these cases, there is always a question what is being imitated or forged or whatever. We should never use these words on their own. We should always specify what a simulation or imitation is a simulation of..., which means specifying what a real example is of the thing you are simulating.

Simulating or imitating a reality is simulating everything. So what is it a simulation of? To put it another way, what is the reality that is being simulated? Reality is a totalizing concept and so not an object like a picture or a tree or a planet. "Simulate" does not apply here.

Quoting noAxioms
mathematical universe hypothesis,

What empirical evidence could possibly confirm or refute this? I don't see this as a hypothesis at all, but as a methodological decision. In the 17th century, physicists decided to eject anything that seemed incapable of mathematical treatment, so colours and sounds were banished to the mind, placed beyond the scope of science. Science did not need those hypotheses.

Quoting noAxioms
simulation is simply an explicit execution of an approximation of those laws, on a closed or open system.

So how does a simulation differ from reality?
Quoting noAxioms
They perform, for instance, simulations of car crashes at the design phase, the results of which eventually generate a safer design.

Fair enough. But in those cases, it is clear what the simulation is a simulation of. We know what the real thing is. As you say, this has nothing to do with a simulation of everything.

I'm afraid I don't have the time to respond in detail to what you say about actual simulation and virtual reality. Perhaps later. I'll just say that, so far as I can see, the BIV hypothesis either presupposes the existence of normal reality or describes all of us right now. (The skull is a vat.)
wonderer1 March 16, 2024 at 23:51 #888566
Reply to noAxioms

Bostrom's speculation has always smelled grossly unparsimonious, to me.
noAxioms March 17, 2024 at 01:20 #888585
Quoting Ludwig V
The "simulation hypothesis" is indeed quite different from the hypothesis that there are imitations of people around.

The bit about imitation people (human-made constructs) is very relevant to the 'thinking computer' topic, and relevant only if not all people/creatures are conscious in the same way (a process running the same physics). The idea is preposterous at our current level of technology, so any imitation people would probably be of alien origin, something that cannot be ruled out. They'd not necessarily qualify as what we term a 'computer'.

Quoting Ludwig V
On the face of it, this looks like a generalization from "there are some fake, imitation, simulated people around" to "everything is a simulation".

OK, if not all the people are simulated the same, then the ones that are not (the NPC's) would be fake, not conscious, but controlled directly by some AI and not the brute implementation of physics that is the simulation itself. There has to be a line drawn somewhere between the simulated system and what's not the system. If it is a closed system, there need be no such line. A car crash simulation is essentially closed, but certain car parts are still simulated with greater detail than others.

Quoting Ludwig V
On the contrary, a forgery can only be a forgery if there is such a thing as the real thing.
Under the simulation hypothesis (both Sim and VR), the forgeries are any external input to a non-closed system. Bostrom posits a lot of them.

Quoting Ludwig V
In all of these cases, there is always a question what is being imitated or forged or whatever.
Disagree. The car thing was my example: a simulation of a vehicle that has never existed. Our world could in theory be a simulation of a human world made up by something completely non-human, perhaps not even in a universe with, say, 3 spatial dimensions, or space at all for that matter. There need be no real thing. I personally run trivial simulations all the time of things that have no real counterpart. Any simple 1D or 2D cellular automaton qualifies.
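To illustrate, here is about the simplest such thing, a one-dimensional cellular automaton (Wolfram's rule 30, picked arbitrarily); it simulates nothing that exists outside the rule itself:

[code]
# A 1D cellular automaton (rule 30). A simulation of nothing real:
# just a rule applied, tick after tick, to an abstract row of cells.

RULE = 30
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[WIDTH // 2] = 1   # a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
[/code]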

Quoting Ludwig V
What empirical evidence could possibly confirm or refute this?
I hope to explore that question in this topic. For one, our physics has been proven non-classical, and thus cannot be simulated accurately by any classical von Neumann computer, no matter how speedy or memory-laden. But that restriction doesn't necessarily apply to the unknown realm that is posited to be running said simulation. But it's good evidence that it isn't humans simulating themselves.

Quoting Ludwig V
Fair enough. But in those [car crash] cases, it is clear what the simulation is a simulation of.
Sort of. Yes, they have a model. No, it isn't a model of something that exists. There isn't a 'real thing' to it.

Quoting Ludwig V
I'm afraid I don't have the time to respond in detail to what you say about actual simulation and virtual reality. Perhaps later. I'll just say that, so far as I can see, the BIV hypothesis either presupposes the existence of normal reality or describes all of us right now. (The skull is a vat.)
The skull-vat view does not feed the mind a set of artificially generated lies. VR does.

The difference between Sim and VR is where the mind is, part of the simulation in Sim, and outside the universe in VR. Same difference as between physicalism and dualism. Same test as you would use to falsify dualism.


Quoting wonderer1
Bostrom's speculation has always smelled grossly unparsimonious, to me.
He does seem to throw the resources around, yes. A lot of it presumes that Moore's law continues unabated for arbitrarily long, which is preposterous. We're already up against quantum resolution, and chip fabs already require nearly the maximum practical resources.

We might be able to simulate a single human in a tight environment (a prison) for a short time. The human would need pre-packaged memories, and thus would not acquire them the normal way, by living a life, unless you have a lot of resources to simulate the growth of a baby to an adult, all within its tight prison cell (our closed system). The person growing up that way would be pretty messed up.

Wayfarer March 17, 2024 at 02:13 #888602
Quoting noAxioms
It presumes that human consciousness is a purely physical process (physicalism), and thus a sufficiently detailed simulation of that physics would produce humans that are conscious


Which aspects of physical processes correspond with subjectivity?
Ludwig V March 17, 2024 at 02:32 #888604
Quoting noAxioms
if not all people/creatures are conscious in the same way (a process running the same physics).

I'm not sure about whether or in what way the actual physics of the person/computer are relevant. Clearly, we know that human beings are persons without knowing (in any detail) about their internal physics. On the other hand, the commentary on the current AIs seems unanimous in thinking that the details of the software are relevant.

Quoting noAxioms
OK, if not all the people are simulated the same, then the ones that are not (the NPC's) would be fake, not conscious,

One needs to specify what "the same" means here. Otherwise, any difference between people (such as brain weight or skin colour) could lead to classifying them as not conscious, not people. I'm sorry, what are NPCs?

Quoting noAxioms
Sort of. Yes, they have a model. No, it isn't a model of something that exists. There isn't a 'real thing' to it.

Yes, there is an issue here. We can, of course, construct imaginary worlds, and most of the time we don't bother to point out that they are always derived from the world we live in. As here, we know about real cars that really crash and what happens afterwards (roughly). That's the basis that enables us to construct and recognize simulations of them. "Star Trek" and "Star Wars" are extensions of that ability.

Quoting noAxioms
The skull-vat view does not feed the mind a set of artificially generated lies. VR does.

That's a bit unfair, isn't it? We know quite well what is VR and what is not, so it is clearly distinguishable from reality. Nobody pretends otherwise. Of course, we can frighten ourselves with the idea that a VR (In some unimaginably advanced form) could be used to deceive people; "Matrix" is one version of this. But, unless we are straightforward positivists or followers of George Berkeley, the difference between VR and reality is perfectly clear, and the problem is no different from the problem of how we tell dreams from reality.
noAxioms March 17, 2024 at 04:18 #888622
Quoting Wayfarer
Which aspects of physical processes correspond with subjectivity?
Not sure what is being asked. I mean, what aspects of physical processes would, if absent, not in some way degrade the subjective experience?

I think the question unfair. You're definitely of the dualism camp, to the point where you are not open to the idea that a very good simulation of all physical processes of a system containing a human would be sufficient for subjectivity of the human. So VR is your only option if you thus constrain yourself. A human is hooked to a false sensory stream, which in turn is uplinked to the mind attached to the human. Either that or the simulation somehow connects with a mind exactly in the same way physical bodies have.


Keep in mind that I am not supporting the simulation hypothesis in any form. I'm looking for likely ways to debunk it, but in the end, there can be no proof.


Quoting Ludwig V
Clearly, we know that human beings are persons without knowing (in any detail) about their internal physics.
The idealists for one would disagree with this. Idealism tends to lead to solipsism, where only you are real and all the other humans are just your internal representations (ideals) of them. You've no hard evidence that they're as real as yourself. Of course, modern video games are terrible at displaying other people, and you can tell at once that they're fake. But we're assuming far better technology here where it takes more work to pick out the fakes.


Quoting Ludwig V
One needs to specify what "the same" means here.
'The same' means, in a Sim, that both you and the other thing (a frog say) are fully simulated at the same level, perhaps at the biochemical level. You and the frog both make your own decisions, not some AI trying to fool the subject by making a frog shape behave like a frog.
Under VR, 'the same' means that the other thing is also externally controlled, so perhaps a real frog hooked up similarly to the VR set, fooled into thinking its experience is native. The fake things in VR are not externally controlled, but are rather governed by either physics or a resident AI that controls how the system interacts with things not part of the system. So for non-virtual things, 'the same' would mean either both self-controlled, or both AI controlled, so there are 3 different kinds of things: virtual control, physical control, and faked by AI. A Sim has just the latter two.

I'm sorry, what are NPCs?
Google it. It's the standard video game term for Non-Player Character. It typically refers to a person/creature in a game that isn't played by any actual player. They tend to be bad guys that you kill, or race against, or whatever. In the Sim scenario, it would be a person not actually conscious, but whose actions are controlled by an AI that makes it act realistically. In VR, NPC refers to any person not under virtual control, whether self or AI controlled.

The 'computers thinking' topic references NPC in several places.


Quoting Ludwig V
We can, of course, construct imaginary worlds, and most of the time we don't bother to point out that they are always derived from the world we live in.
Conway's Game-of-Life (GoL) is not in any way derived from the world in which we live, so there's a counterexample to that assertion.

As here, we know about real cars that really crash and what happens afterwards (roughly). That's the basis that enables us to construct and recognize simulations of them.
Well yes, since there'd not be much point in simulating a car that crashes under different physics. The intent in that example is to find an optimal design based on the simulation results. Not so under GoL.

"Star Trek" and "Star Wars" are extensions of that ability.
Those are not simulations. Heck, the physics of those worlds are both quite different than our own. The Hollywood guys are hardly paid to be realistic about such things.

Quoting Ludwig V
We know quite well what is VR and what is not, so it is clearly distinguishable from reality.
If it's good enough, then no, it would not be easily distinguished from a more real reality, especially since the lies are fed to you for all time. Unlike with a video game, you have no memory of entering the VR. Of course, all our crude VR does is feed fake vision and sound effects to you. Not the rest. You can feel the headset you're wearing. But even then, sometimes you forget.... It's pretty creepy in some of the scary games.

Of course, we can frighten ourselves with the idea that a VR (In some unimaginably advanced form) could be used to deceive people;
Yes, that's the idea (one of them) under consideration here. How do you know it's false? Just asserting it false is beyond weak.

"Matrix" is one version of this.
Implausible too, but that's entertainment for you.
But a good VR is far better than any dream. With a dream, I cannot glean new information, such as by reading a sign when I don't already know what it says. That's a huge clue that dreams are unreal. I frequently run into that in my dreams, but I'm also too stupid in my dreams to draw the obvious conclusion. Rational thought is far more in the background while dreaming.


Wayfarer March 17, 2024 at 04:27 #888624
Quoting noAxioms
Not sure what is being asked. I mean, what aspects of physical processes would, if absent, not in some way degrade the subjective experience?


When you say:

Quoting noAxioms
It presumes that human consciousness is a purely physical process (physicalism), and thus a sufficiently detailed simulation of that physics would produce humans that are conscious


This runs smack into the 'hard problem of consciousness', which is that no description of physical processes provides an account of the first-person nature of consciousness. Put another way, there are no subjects of experience modelled in physics or physical descriptions; physics is wholly concerned with objects.

//another way of putting it is, if it's a simulation, then who is subject to the illusion? A simulation is not what it appears to be; it is comparable to an illusion in that respect. But illusions and simulations only affect a consciousness that mistakes them for being real.//
RogueAI March 17, 2024 at 04:58 #888628
Reply to noAxioms If you're open to the possibility that consciousness could emerge from a computer simulation, are you also open to the idea that consciousness is already emerging in the simulations we're currently running? IOW, if simulation theory is possible, is my Baldur's Gate party maybe conscious?
noAxioms March 17, 2024 at 05:26 #888632
Quoting RogueAI
If you're open to the possibility that consciousness could emerge from a computer simulation, are you also open to the idea that consciousness is already emerging in the simulations we're currently running?

Last I checked (which has been a while), they can simulate bugs, and even that is probably not a simulation of the whole bug, let alone an environment for it.

As for Baldur's Gate, that (like any current game) doesn't simulate any mental processes, and even if it did, the simulated character would be conscious, but the game is no more conscious than is the universe. It merely contains conscious entities. A computer simulating a bat would not know what it is like to be a bat, but the simulated bat would.


Quoting Wayfarer
This runs smack into the 'hard problem of consciousness', which is that no description of physical processes provides an account of the first-person nature of consciousness.
Pretty much, yea. All the same arguments (pro and con) apply.

RogueAI March 17, 2024 at 06:11 #888635
Quoting noAxioms
As for Baldur's Gate, that (like any current game) doesn't simulate any mental processes, and even if it did, the simulated character would be conscious, but the game is no more conscious than is the universe. It merely contains conscious entities. A computer simulating a bat would not know what it is like to be a bat, but the simulated bat would.


You're right about Baldur's Gate, but ChatGPT certainly simulates mental processes (or seems to. More about that in a second). You can have a full on conversation with it. Do you think it might be conscious?

Now, when you drill down on "simulate mental processes", what does that ultimately mean? Computers are essentially collections of electronic switches, so simulating mental processes just means that electric switches XYZ... are turning off and on in order ABC...so if you get a lot of switches (or not so many switches but a whole lot of time) and flip them on and off in a certain order, voila! You get consciousness. I think that sounds like magic, but everyone else is taking it seriously, so you also have to take seriously the idea that it might not take a whole lot of switching operations to generate consciousness. Why should it? So it seems that if we're going to take simulation theory seriously, we should be equally open to the idea that some of the simulations we're running now are conscious. Maybe some of the "creatures" in Conway's Game of Life are conscious. Why not?
bongo fury March 17, 2024 at 10:01 #888660
Quoting noAxioms
... I am not supporting the simulation hypothesis in any form. I'm looking for likely ways to debunk it, ...


Surely the problem is the one frequently pointed out, with the word "simulate" being ambiguous between "describe or theoretically model" and "physically replicate or approximate".

So the question occurs, are you holding this

Quoting noAxioms
That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.


up for ridicule, or serious consideration?
Ludwig V March 17, 2024 at 11:37 #888675
Quoting noAxioms
Keep in mind that I am not supporting the simulation hypothesis in any form. I'm looking for likely ways to debunk it, but in the end, there can be no proof.

Thank you for telling me that. It helps a lot.
Quoting RogueAI
I think that sounds like magic, but everyone else is taking it seriously,

I agree with you, though I would describe it as hand-waving. I agree also that sometimes it is best to roll with the punch if someone takes an idea seriously and I don't. I've done it myself. It may not result in them changing their mind, but it does allow some exploration and clarification.

Quoting noAxioms
You and the frog both make your own decisions, not some AI trying to fool the subject by making a frog shape behave like a frog.

So if I miniaturized the AI hardware and grafted it into the frog, it becomes a simulation instead of a VR?

Quoting noAxioms
Conway's Game-of-Life (GoL) is not in any way derived from the world in which we live, so there's a counterexample to that assertion.

What made the game? Though I grant you, it is quite different from the kinds of simulation we have been talking about, and far from a VR. But it is an abstraction from the world in which Conway - and you and I - live.
There's an ambiguity here. There's a sense of "world" in which it comprises everything that exists. There are other similar words that aim to capture the same or similar ideas - "universe", "cosmos" and in philosophy "Reality", "Existence". There is another sense in which we speak of "my world" and "your world" and "the lived world" or "the world of physics" or "the world of politics". I thought we were using "world" in the first sense.

Quoting noAxioms
The intent in that example (sc. the simulation of a car crash) is to find an optimal design based on the simulation results. Not so under GoL.

I agree. I can't answer for Conway's intent, but it looks to me as if the intent is to explore and play with the possibilities of a particular kind of system. In which it has definitely succeeded, in most interesting ways.

Quoting noAxioms
Those (sc. Star Trek and Star Wars) are not simulations. Heck, the physics of those worlds are both quite different than our own

Well, I would say that those films are simulations of a fantasy scenario/world. But I'm not fussed about the vocabulary here. I am fussed about the idea that they have no connection with the actual world. That is simply false. For a start, there are human beings in it, not to mention space ships, planets and suns. As to the physics being different, that doesn't seem to bother people like Hume ("the sun might not rise tomorrow morning") or Putnam ("Twin Earth"). We can, after all, imagine that physics is different from our current one, and, believe it or not, there have been people who did not believe in our physics, but something quite different. Perhaps there still are.

Quoting noAxioms
Yes, that's the idea (one of them) (sc. the idea that VR might become good enough to deceive people) under consideration here. How do you know it's false? Just asserting it false is beyond weak.

Yes, there may be a need to say more. But the idea that VR might be used to deceive people itself presupposes that what is presented by the VR is not real. What might be more troublesome is a VR that re-presented the actual world around the wearer. Pointless, though there might well be a use for it in some medical situations. On the other hand, it couldn't work unless it was possible for the wearer to actually (really) act.

Quoting noAxioms
Clearly, we know that human beings are persons without knowing (in any detail) about their internal physics. - Ludwig V
The idealists for one would disagree with this.

I have the impression that idealists do not think that human beings have any internal physics. (Do they even think there is any such thing as physics?) I was not taking that issue into account, but was assuming a shared background assumption that we could call common sense. Are you an idealist?
noAxioms March 17, 2024 at 13:22 #888685
Quoting RogueAI
ChatGPT certainly simulates mental processes (or seems to. More about that in a second).
It simulates no mental processes at all. It answers on its own, not by simulating something that it is not. It is an imitation, not a simulation of anything.

Do you think it might be conscious?
That of course depends on your definition of 'conscious'. Most of the opponents of machine consciousness simply refuse to use the word to describe a machine doing the same thing a human is doing.

Dictionaries define it as 'aware' of and responding to its surroundings, so in a crude way, a thermostat is more conscious than chatGPT. A chat bot has no sensory feed except a network connection from which queries appear, and might possibly not be aware at all of where it actually is, not having any of the external senses that humans do. So by that definition, it isn't very conscious, and it probably isn't one thing, but rather a multitude of processes that run independently on many different servers.

A true machine intelligence would likely qualify as being conscious (except by those that refuse to apply the word), but it would be a very different kind, since humans cannot spawn off independent processes, and cannot distribute their thinking to multiple sites far enough apart that quick communication isn't practical. Biological consciousness is thus far always confined to one 'device' that is forever stuck within one head (sort of). Bees exhibit a more distributed collective hive consciousness. An octopus is quite intelligent but has its consciousness spread all out, most of it being in its arms. Machine intelligence would be a little closer to octopuses, but even an octopus cannot temporarily detach an arm and have it act independently until reattached.

On topic: No machine is going to get smarter than us by doing a simulation. Those are by nature incredibly inefficient.

Now, when you drill down on "simulate mental processes", what does that ultimately mean?
It probably means creating a map of the brain's neurons and synaptic organization and running that in a dynamic simulation that not only follows neural activity (and input), but also simulates changes to the map itself: the creation and deletion of neural connections.
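As a cartoon of that idea (a toy sketch of my own, nowhere near an actual brain emulation): each unit accumulates input, fires past a threshold, and the connection weights themselves change as the thing runs.

[code]
import random

# Cartoon of "run the map and let the map change": a toy network of
# threshold units whose connection weights strengthen when both ends fire.
# (Creation/deletion of connections is omitted for brevity.)

N = 100
THRESHOLD, DECAY, LEARN = 1.0, 0.9, 0.01

weights = {(i, j): random.uniform(0.0, 0.2)
           for i in range(N) for j in range(N)
           if i != j and random.random() < 0.05}    # sparse random wiring
potential = [0.0] * N

def tick(external_input):
    """One step: follow the activity, add sensory input, update the map."""
    global potential
    fired = [p >= THRESHOLD for p in potential]
    new_potential = [DECAY * p for p in potential]
    for (i, j), w in weights.items():
        if fired[i]:
            new_potential[j] += w          # activity propagates along the map
    for i, x in enumerate(external_input):
        new_potential[i] += x              # sensory input
    for (i, j) in weights:
        if fired[i] and fired[j]:
            weights[(i, j)] += LEARN       # the map itself changes
    potential = [0.0 if f else p for f, p in zip(fired, new_potential)]
    return fired

for _ in range(1000):
    tick([random.uniform(0.0, 0.3) for _ in range(N)])
[/code]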


RogueAI: I think that sounds like magic, but everyone else is taking it seriously, so you also have to take seriously the idea that it might not take a whole lot of switching operations to generate consciousness.

I don't think it takes very many, but to me, consciousness is a gradient, so the question is not if you're conscious, but how conscious. It is more of an on/off thing with a definition like Wayfarer uses, of having first person subjectivity or not. I don't really understand that since I don't see how a device with local sensory input doesn't have first person subjectivity.
Does my finger have subjectivity? It has first person sensory input, but all it does is send the measurement it takes up a wire to be dealt with elsewhere. Ditto for the thermostat. It doesn't react to the sensory input other than to convey a signal. So maybe my boiler is crudely conscious because it processes the input of its senses.

Again, all this is pretty off topic. My boiler doesn't work by simulating a biological nerve system. I don't have the budget to have one that does it in such an expensive, inefficient, and unreliable way.

So it seems that if we're going to take simulation theory seriously, we should be equally open to the idea that some of the simulations we're running now are conscious.
... that the thing simulated is conscious. The simulation itself is no more conscious than is real physics. As I said just above, a sufficiently good simulation of a bat would not know what it is like to be a bat, but the simulated bat would.

Maybe some of the "creatures" in Conway's Game of Life are conscious. Why not?
I suppose it would require one to identify a construct as a creature. One can, I think, implement a Turing machine in GoL, so once you have that, there's little it cannot do.
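For reference, the entire 'physics' of GoL is the update rule below (a standard statement of Conway's rules, sketched from memory); gliders, guns and even a Turing machine are just patterns that this one rule happens to propagate:

[code]
# Conway's Game of Life: the whole "physics" is this one rule, applied to
# every cell at every tick. Live cells are stored as a set of (x, y) pairs.

def neighbours(cell):
    x, y = cell
    return {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)}

def step(alive):
    counts = {}
    for cell in alive:
        for n in neighbours(cell):
            counts[n] = counts.get(n, 0) + 1
    # birth on exactly 3 neighbours; survival on 2 or 3
    return {cell for cell, c in counts.items()
            if c == 3 or (c == 2 and cell in alive)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):   # after 4 ticks the glider has moved one cell diagonally
    glider = step(glider)
[/code]

The rule knows nothing about gliders or Turing machines; whether any pattern in there counts as a 'creature' is entirely our reading of it, which is rather the point.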


Quoting bongo fury
Surely the problem is the one frequently pointed out, with the word "simulate" being ambiguous between "describe or theoretically model" and "physically replicate or approximate".
The simulation hypothesis does not suggest that any physical planet (Earth) was created as an approximation of some design/model/real-planet. It is nothing but a hypothesis of something akin to software being run that computes subsequent states from prior states. A VR is a little simpler and more complicated than that because the subsequent states are computed not only from prior states, but also from external input. Sim is deterministic. VR is not.

So the question occurs, are you holding this

That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.
— noAxioms

up for ridicule, or serious consideration?
That was very serious. Sim is simply a computation, and any computation that can be done by computer can also be done by pencil and paper, albeit a lot slower and a lot more wasteful of resources. But time is simply not an object. One might consume 50 sheets of paper and one pencil a day, and the only reason it wouldn't work is because Earth would die before you got very far in a simulation of something as complicated as a person in a room.

A VR cannot be done this way.


Quoting RogueAI
I think [physical processes producing consciousness] sounds like magic, but everyone else is taking it seriously

For the last 5 centuries or so, science has operated under methodological naturalism, which presumes exactly this, that everything has natural (physical) causes. Before that, it operated under methodological supernaturalism, where the supernatural (magic) was the cause of anything inexplicable, such as consciousness, the motion of the planets, etc. Presuming magic for the gaps contributed to keeping humanity in the dark ages. The other big cause was general illiteracy, but that continued until far more recently.
My point is, be careful what you label as the magic in that debate. The Sim hypothesis presumes naturalism, and if you don't at least understand that view, then you're not in a position to critique the SH.

Quoting Ludwig V
So if I miniaturized the AI hardware and grafted it into the frog, it becomes a simulation instead of a VR?
No. If you miniaturize the VR set (the device that feeds fake sensory input to you, and conveys your responses to the VR) to fit a frog, then a frog can enter the VR just like the human does.
A simulated frog is just that. There's no real frog running it. It runs on its own. An imitation frog is even worse, and only appears to be a frog to something looking at it, but its actions are faked since it is outside the system being simulated.

But it is an abstraction from the world in which Conway - and you and I - live.
From this world yes, but it isn't a simulation of this world.

I thought we were using "world" in the first sense.
I'm using 'world' in many ways. There's the world that we experience. If it's a simulation/VR, then there is another world running that simulation, upon which this world supervenes. Maybe that world also supervenes on an ever deeper world, and (as Bostrom hints), it is turtles all the way down.

Well, I would say that those films are simulations of a fantasy scenario/world.
I would not say that. They are not 'simulations' as the word is being used in this topic. Those films (any film) are mere depictions of those fantasy worlds, not simulations of them.


But the idea that VR might be used to deceive people itself presupposes that what is presented by the VR is not real. What might be more troublesome is a VR that re-presented the actual world around the wearer. Pointless...
Good point, that VR need not involve deceit. One can use a VR setup to say control an avatar in some hostile environment. The military uses this quite a bit, but those are not simulations. Not all VR is a simulation, but this topic is only to discuss the ones that are. I cannot think of a VR into a simulated world that doesn't involve the deceit of making that simulated world appear real to the subject. It actually being real or not depends on your definition of 'real'.

Are you an idealist?
No, but their reasoning made a nice counterexample to your assertion that other people are necessarily as real as yourself. In a VR, and even in a Sim, this isn't necessarily true. I enumerated three different kinds of people, each of which operates differently. I suppose I should give them names for easy reference.
Patterner March 17, 2024 at 13:46 #888687
Quoting noAxioms
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere.
— Patterner
In the VR scenario, the mind would be hooked to the simulated empirical stream, but it would not be itself an AI.
Maybe not in the VR scenario. Still, maybe it's the truth of our existence.
NotAristotle March 17, 2024 at 13:51 #888688
I think I have heard it said that if a future people decided to make a simulation, they would make A LOT of such simulations. And these simulations would be nested -- simulations within simulations. If there are a huge number of simulations within simulations, that means only a small number of these simulations will be simulations that do not have a simulation that they are themselves running. But if we are living in a simulation, we must be living in one of the simulations that is not itself running a simulation. In that case, the odds that we are living in a simulation would be astronomically small.

On the other hand, I do not think we would be conscious if we were "in" what you are calling an actual simulation. But we are conscious. Therefore, we must not live in a simulation.

In any case, I know I am not living in a simulation.
Patterner March 17, 2024 at 13:53 #888689
Quoting noAxioms
This runs smack into the 'hard problem of consciousness', which is that no description of physical processes provides an account of the first-person nature of consciousness.
— Wayfarer
Pretty much, yea. All the same arguments (pro and con) apply.
I am not familiar with any arguments for how physical processes provide an account of the first-person nature of consciousness. It seems the answer from anyone who takes that stance boils down to: "Since we can't find anything other than physical processes using the methods of physical processes, there must not be anything other than physical processes. Therefore, the question of how physical processes provide an account of the first-person nature of consciousness is, they just do."
Patterner March 17, 2024 at 14:14 #888692
Quoting NotAristotle
In any case, I know I am not living in a simulation.
Agreed. With no reason to suspect things are not as they seem, I won't seriously consider the possibility that I'm living in a simulation, or a simulation myself, or a Boltzmann brain, or whatever else. But I don't see reason to consider one type of simulation scenario any more ... "realistic" than any other.
RogueAI March 17, 2024 at 15:02 #888706
Quoting noAxioms
.. that the thing simulated is conscious.


Which is to say that a collection of electronic switches is conscious when there's a sufficient number of them and they're being turned on and off in a certain order.

I know I sound redundant about that, but doesn't that sound pretty fantastical? That you could wire up a bunch of switches and get the subjective experience of eating a bag of potato chips to emerge from them?
RogueAI March 17, 2024 at 15:13 #888709
Quoting Ludwig V
I think that sounds like magic, but everyone else is taking it seriously,
— RogueAI
I agree with you, though I would describe it as hand-waving. I agree also that sometimes it is best to roll with the punch if someone takes an idea seriously and I don't. I've done it myself. It may not result in them changing their mind, but it does allow some exploration and clarification.


Sure. Simulation Theory is fascinating. I don't reject it right off the bat like "you're a p-zombie and don't know it". But I do think the central premise is, as you said, pretty hand-wavy.
bongo fury March 17, 2024 at 17:58 #888728
Quoting noAxioms
The simulation hypothesis does not suggest that any physical planet (Earth) was created as an approximation of some design/model/real-planet.


Oh good.

Quoting noAxioms
It is nothing but a hypothesis of something akin to software being run that computes subsequent states from prior states.


So, a simulation as a description or theoretical model, distinct from any real or imaginary structure satisfying the description. A map, distinct from its territory, real or imagined. Good.

Quoting noAxioms
That was very serious.


Gosh. This?

Quoting noAxioms
That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.


I have to say this appears to confuse the two senses of "simulate". Otherwise why the fascination with some amazing level of detail? This is generally a sign that the hypothesiser has allowed themselves to confuse map with territory.

A novel or a computer game can perfectly well describe or depict a conscious human that doesn't know he is being imagined, and it can equally well describe or depict a conscious being that does know. Detail is neither here nor there.
RogueAI March 17, 2024 at 18:41 #888730
Quoting noAxioms
That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.


I missed this somehow. This is absurd. You're not going to be able to simulate a conscious person/generate consciousness from paper and pencil. This is getting into Bernardo Kastrup territory: is my house's sanitation system conscious?
https://www.bernardokastrup.com/2023/01/ai-wont-be-conscious-and-here-is-why.html
noAxioms March 17, 2024 at 18:49 #888731
Quoting NotAristotle
I think I have heard it said that if a future people decided to make a simulation, they would make A LOT of such simulations. And these simulations would be nested -- simulations within simulations.
That's pretty much Bostrom's argument, a sort of anthropically reasoned hypothesis that demonstrates a complete ignorance of how simulations work.

If there are a huge number of simulations within simulations, that means only a small number of these simulations will be simulations that do not have a simulation that they are themselves running. But if we are living in a simulation, we must be living in one of the simulations that is not itself running a simulation. In that case, the odds that we are living in a simulation would be astronomically small.
That was one of the counterarguments that I think itself fails to hold much water. If each simulation runs several internal simulations, the leaf ones (us) would be exponentially greater in number than the base levels. Of course, this exponential tower of simulations that are themselves simulating other machines running simulations is a big part of the reason the premises fall apart.

On the other hand, I do not think we would be conscious if we were "in" what you are calling an actual simulation.
Why not? I mean, if you deny that consciousness emerges from physical process, then the hypothesis fails right out of the gate, but presuming physicalism, the simulated person wouldn't act correctly if the simulation got the physics wrong.

For that matter, even under dualism, what prevents the simulated person from gaining access to this supernatural woo like the physical human does?

In any case, I know I am not living in a simulation.
How? Incredulity? I'm trying to gather actual evidence for both sides. Lots of people 'know' things for sure, and lots of what people 'know' contradicts what other people 'know'. Humans are quite good at being certain about things for which there is no hard evidence.
I mean, I don't buy the hypothesis either, but to declare to 'know' such a fact without any logical/empirical backing reduces it to a mere belief, rationalized, but not rational.

Quoting Patterner
I won't seriously consider the possibility that I'm living in a simulation, or a simulation myself, or a Boltzmann brain, or whatever else.

This is better worded. It's an extraordinary claim and it requires extraordinary evidence to be taken seriously. The various proponents seem to use very fallacious arguments in an attempt to demonstrate that evidence.

Being a Boltzmann Brain isn't a hypothesis proposed by anybody. It's simply an obstacle in the way of the validity of various proposed theories explaining how physics actually works. Very few people understand the significance of a BB. Sean Carroll sums it up:

[quote=SCarroll]A theory in which most observers are of the Boltzmann Brain type is ... unacceptable: ...
The issue is not that the existence of such observers is ruled out by data, but that the theories that predict them are cognitively unstable: they cannot simultaneously be true and justifiably believed.[/quote]
That means that no observer can have knowledge of the workings of such a universe.


Quoting Patterner
I am not familiar with any arguments for how physical processes provide an account of the first-person nature of consciousness.
I didn't say it did, any more than does the alternative view. The topic surely is discussed in more relevant topics on this forum or on SEP pages. It is a digression here. The Sim hypothesis presumes, as does the last 5 centuries of science, a form of physical monism. There's no hard problem to be solved. There's nothing 'experiencing' you first person.


Quoting RogueAI
.. that the thing simulated is conscious.
— noAxioms

Which is to say that a collection of electronic switches is conscious.

Well, if you're simulating a collection of electronic switches (which a human is, in addition to a lot of other supporting hardware), and you consider that such a collection (the human) is conscious, then yes, the simulated thing will be conscious.



Quoting bongo fury
So, a simulation as a description or theoretical model, distinct from any real or imaginary structure satisfying the description.
The model is perhaps a design of a simulation. The simulation itself is the execution of it, the running of code on a computer for instance being one way to implement it, but paper and pencil also suffices. A simulation is a running process, not just a map.

Quoting bongo fury
Gosh. This?
Quoting RogueAI
This is absurd. You're not going to be able to simulate a conscious person with paper and pencil.

You both seem to balk at the paper/pencil thing, but what can a computer do that the pencil cannot? If you cannot answer that, then how is your denial of it justified?

bongo fury: A novel or a computer game can perfectly well describe or depict a conscious human that doesn't know he is being imagined, and it can equally well describe or depict a conscious being that does know. Detail is neither here nor there.
The NPC in the computer game would need that amazing level of detail to actually believe stuff (like the fact that he's not being simulated), and not just appear (to an actual player) to believe stuff.

NotAristotle March 17, 2024 at 19:09 #888736
Reply to noAxioms It is unclear to me why there would be more leaf worlds, could you spell that out for me?
bongo fury March 17, 2024 at 20:16 #888743
Quoting noAxioms
A simulation is a running process, not just a map.


A running process isn't just a succession of maps? Does magic happen?
RogueAI March 17, 2024 at 21:44 #888761
Quoting noAxioms
You both seem to balk at the paper/pencil thing, but what can a computer do that the pencil cannot? If you cannot answer that, then how is your denial of it justified?


Have you ever seen this?

https://xkcd.com/505/
bongo fury March 17, 2024 at 22:08 #888777
Quoting noAxioms
The NPC in the computer game would need that amazing level of detail to actually believe stuff (like the fact that he's not being simulated), and not just appear (to an actual player) to believe stuff.


Do you mean that some part of the computer running the game would need the detail? Then you're talking about an AI, a simulation in the unproblematic sense of a working model: a physical replication or approximation. You might consider subjecting it to an elaborate deception, of course, but then you would be in what you have rightly demarcated as a different set of problems: the VR ones.

Or do you mean that a fictional character described and depicted in the game would need the detail? To actually believe stuff, like the fact that he's not fictional?
noAxioms March 18, 2024 at 00:14 #888800
Quoting NotAristotle
It is unclear to me why there would be more leaf worlds, could you spell that out for me?

Picture 'reality' R0 as the trunk of a tree. It has 9 boughs (S1-S9) coming out of it, the simulations being run on R0. Each of those has 10 branches, labeled S10-S99. Those each in turn have 10 sticks (the next-level simulations, S100-S999), then the twigs (S1000-S9999) and the leaves (S10000-S99999). Every one of those simulations has, say, 10 billion people in it, so a given person is likely to be simulated (all except the ones in R0), and most of those (90%) find themselves in the leaves, the non-posthuman state as defined by Bostrom. So finding yourself in a state where such simulations are not possible is most likely. And this is presuming only about 10 simulations per world, whereas Bostrom posits far more, so the numbers get even more silly.
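A few lines of arithmetic make that lopsidedness explicit (toy numbers only: rounding the 9 boughs up to 10, so 10 simulations per world, five levels deep; since every world has the same population, fractions of worlds are fractions of people):

[code]
# Toy count for the tree described above: one real world (R0) at the top,
# roughly 10 simulations per world, five levels of simulation below it.

branching, depth = 10, 5

worlds_per_level = [branching ** level for level in range(depth + 1)]  # [1, 10, 100, ...]
total_sims = sum(worlds_per_level[1:])   # every world except R0 is simulated
leaves = worlds_per_level[-1]            # the bottom level runs no sims of its own

print(f"simulated worlds: {total_sims}, of which leaves: {leaves}")
print(f"fraction of simulated observers in leaf worlds: {leaves / total_sims:.1%}")
print(f"fraction of all observers who are simulated: {total_sims / (total_sims + 1):.4%}")
[/code]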

This argument is a gross simplification, and presumes some outrageous things, for instance, that one world can be simulating trillions of consciousnesses at once, and that there are motivations for anybody to want to run such a thing at all. It also presumes fate: that all initial random states with people in them will almost certainly eventually progress to this posthuman state. It also preposterously presumes the continuation of Moore's law for millennia, and posits no end to non-renewable resources. Those are many of the reasons Bostrom's proposal falls flat.
How about simulating a quantum universe with a classical machine? That's been proven impossible. I notice Bostrom suggests shortcuts, where the brute simulation needs to know what its particles are doing and notice when intent emerges from the atoms, so that it can actually change physics when one looks closely enough at something. The comedy never ends with that proposal.

Empirical methods of falsification are also interesting to explore.

Quoting bongo fury
A running process isn't just a succession of maps?

A description of a running process is a map. The process itself is not.

Quoting bongo fury
Do you mean that some part of the computer running the game would need the detail?
It would need to simulate the NPC down to the biochemical level. The NPC would need to be conscious to believe anything, and not just appear to believe stuff. Heck, Elon Musk 'appears' to believe he's in a VR (as a player presumably, not an NPC), but it is questionable if he actually holds this belief. Ditto for a few other notable celebrities that make heavy claims but seem to have ulterior motives.

Then you're talking about an AI
An AI is needed to make a convincing NPC that doesn't do its own thinking. It is far more efficient for the actions to come from an AI than it is to actually simulate the character's thoughts and other processes. A pure closed simulation (Sim or VR) needs no AI at all, just brute capacity. No current game has any character do its own thinking, and the NPCs are really obviously NPCs since barely any processing power is budgeted to doing the AI better. It's getting better, but has a long way to go before the line between players and NPCs begins to fade.

Or do you mean that a fictional character described and depicted in the game would need the detail?
Heck no. A game need only simulate my sensory stream, nothing else. There's no reason to make the characters appear to ponder about what their nature is.

Quoting RogueAI
Have you ever seen this?
I've seen the xkcd thing, yes. I'm not the first to see it. There's lots of references to 1D and 2D simulations in that, but how else are you going to depict it in a comic?

L'éléphant March 18, 2024 at 01:16 #888811
Quoting noAxioms
A brain in a vat need not be a brain at all, but some sort of mind black-box. Introspection is the only evidence. A non-human mind in a vat being fed false information that it is a human living on Earth has no clue that it isn't a pink squishy thing doing the experiencing, or exerting the will.

I disagree with this. In the BIV, the brain is a given. That is, a human brain. Because the point of the theory is skepticism, not that we are indeed brains in a vat. If I could experience the real world, then be hooked up to a machine that simulates the same thing I have experienced, seamlessly, such that I would not be able to tell the difference, then the theory has made its point.
RogueAI March 18, 2024 at 01:32 #888816
Quoting noAxioms
You both seem to balk at the paper/pencil thing, but what can a computer do that the pencil cannot? If you cannot answer that, then how is your denial of it justified?


I remember raging arguments at the International Skeptics Society years ago about whether enough monks writing down 1's and 0's could simulate consciousness, like the guy in the comic I posted moving rocks around and simulating this universe.
wonderer1 March 18, 2024 at 06:21 #888872
Quoting noAxioms
You both seem to balk at the paper/pencil thing, but what can a computer do that the pencil cannot? If you cannot answer that, then how is your denial of it justified?


A computer can process information in ways that a pencil cannot. Why think consciousness can exist without the occurrence of information processing?
noAxioms March 18, 2024 at 12:15 #888906
Quoting L'éléphant
I disagree with this. In the BIV, the brain is a given. That is, a human brain.
Well, to quote the BiV IEP page, very close to the top:
iep BiV:[i]Or, to put it in terms of knowledge claims, we can construct the following skeptical argument. Let “P” stand for any belief or claim about the external world, say, that snow is white.

[1] If I know that P, then I know that I am not a brain in a vat
[2] I do not know that I am not a brain in a vat
[3] Thus, I do not know that P.[/i]

https://iep.utm.edu/brain-in-a-vat-argument/#:~:text=The%20Brain%20in%20a%20Vat%20thought%2Dexperiment%20is%20most%20commonly,experiences%20of%20the%20outside%20world.

So if P happens to be "the nature of that doing my experiencing is a human brain", then that cannot be known. Sure, the BiV scenario does originally posit a human brain in a vat, but for purposes of its relevance to the VR scenario, anything in a VR cannot know the true nature of its own mind, especially if it can prove that the physics the VR is conveying cannot be the cause of its actions.

Because the point of the theory is skepticism
Yes, that's exactly the point, and yet most VR discussions (say, the scenario Musk suggests is almost certainly true) fail to be skeptical about his own true nature, something for which he has pretty much zero empirical evidence if the skeptical scenario holds.

If I could experience the real world, then be hooked up to a machine that seamlessly simulates the same thing I have experienced, so that I would not be able to tell the difference, then the theory has made its point.

Why wouldn't you then remember being hooked up to the machine? You only have memories of a world where such a machine is not possible (yet), so an actual transition from reality to VR is not plausible.


Quoting RogueAI
I remember raging arguments at the International Skeptics Society years ago about whether enough monks writing down 1's and 0's could simulate consciousness, like the guy in the comic I posted moving rocks around and simulating this universe.

And did the nay-sayers actually come up with a reason why it could not? The only reason I can think of is that of dualism: total denial that consciousness can be a physical process at all. It needs magic to fill what are seen as gaps, and a simulation (whether computer or paper) is for some reason denied access to that same magic.

Both you and B-F have yet to justify why a sim on paper is fundamentally different from the exact same computation done by transistors. But it seems a third person is joining the ranks:

Quoting wonderer1
A computer can process information in ways that a pencil cannot. Why think consciousness can exist without the occurrence of information processing?

Same question then: What information can a computer possibly process that a pencil cannot? Time of computation seems to be the only difference, and time of computation is not a factor at all with the Sim hypothesis, even if it is absolutely critical to the VR hypothesis.
I do very much agree that a VR cannot be done by paper & pencil, but I never suggested otherwise.
flannel jesus March 18, 2024 at 12:36 #888914
Quoting noAxioms
Why wouldn't you then remember being hooked up to the machine? You only have memories of a world where such a machine is not possible (yet), so an actual transition from reality to VR is not plausible.


I don't think so. If someone made such a machine, that someone could know enough about a brain to manipulate memories too. They can manipulate your entire experience of your world, why not your memory?
wonderer1 March 18, 2024 at 13:30 #888930
Quoting noAxioms
A computer can process information in ways that a pencil cannot. Why think consciousness can exist without the occurrence of information processing?
— wonderer1
Same question then: What information can a computer possibly process that a pencil cannot?


A pencil is not an information processing system. A pencil may be part of an information processing system which includes a person and a pencil and piece of paper, but the brain of the person is playing the key role in whatever information processing occurs.

To answer your question, a pencil can't process the video file found here.
noAxioms March 18, 2024 at 16:47 #888977
Quoting flannel jesus
I don't think so. If someone made such a machine, that someone could know enough about a brain to manipulate memories too. They can manipulate your entire experience of your world, why not your memory?

This would be a violation of the premise, that only the inputs and outputs are artificial, and the experiencing entity itself is left to itself. If you posit that even your memories are open to direct manipulation at any time, then you end up in the Boltzmann Brain scenario, where such a hypothesis, as Carroll put it, "cannot simultaneously be true and justifiably believed".


This does bring up a very relevant point though. Let's dumb down Bostrom's scenario (a Sim this time, not a VR). Instead of simulating a planet, we do just one person, Bob, born in say 1870. How does one go about setting the initial state of such a simulation? The machine only knows how to do physics at some specific level of detail. It knows what we do: how cells grow, get nutrients, and split; neuron and axon interactions and network changes. But it doesn't know how consciousness emerges from that. It doesn't know what it's like to be the person. It cannot set a state if it doesn't know how the memory works, and what memories to give our subject. The only way to plausibly start such a simulation is from a zygote. From there it evolves as an open system, with all Bob's inputs and outputs faked by plausible but not fully simulated surroundings. His mother for instance would be an imitation, one good enough to fool our subject.
This is what I mean by two kinds of people in a Sim (and three in a VR). Mom is an imitation, outside the open system. Bob is simulated, being the open system.

The computing capability to do this is possibly something achievable in the foreseeable future, but running a simulation of a life all the way from a zygote is a long time to wait, and it takes an incredible amount of AI to give Bob a realistic and believable environment in which to be raised.


Quoting wonderer1
A pencil is not an information processing system. A pencil may be part of an information processing system which includes a person and a pencil and piece of paper, but the brain of the person is playing the key role in whatever information processing occurs.

The one (at a time) person operating the pencil and paper was implied. Also implied, though not explicitly stated, is a society to breed, train, feed, and otherwise support the efforts of the series of people doing the primary task. A big part of that support is replacement of the paper/parchment as it decays into unreadability before it is actually needed as input for a subsequent step. But the computer also needs to do this, and a lot more frequently than every few centuries or so: computer memory rots and needs to be refreshed many times per second.
Point is, all these implicit additions not being explicitly stated doesn't make the statement false.

To answer your question, a pencil can't process the video file found here.

That's right. As you point out it would need a person operating the pencil, which, based on your protest above, is something you feel needs to be explicitly specified.
The video is digital so not even an A->D conversion is needed to get to the part where the video can be digitally processed. I do admit that a pencil is a poor tool for analog signal processing.

flannel jesus March 18, 2024 at 17:16 #888984
Quoting noAxioms
This would be a violation of the premise, that only the inputs and outputs are artificial, and the experiencing entity itself is left to itself. If you posit that even your memories are open to direct manipulation at any time


It doesn't have to be "at any time", it can just be at the start. And presumably a baby could be hooked up to the machine anyway, without any concern for their memories, no?
noAxioms March 18, 2024 at 19:52 #889019
Quoting flannel jesus
It doesn't have to be "at any time", it can just be at the start. And presumably a baby could be hooked up to the machine anyway, without any concern for their memories, no?

Well, the Sim hypothesis (all versions) asks how we might know whether we are or are not in a sim or VR. You're speaking of a VR in this case. Your memories define who you are, and if those are totally wiped, it's somebody else in the VR, not the person who entered it.

This VR is portraying a world of 2024 to me, a world in which the technology for such a setup isn't going to exist for at least a century. So if I've been put into it some time in say 2200, then all my memories have been wiped, and they're just running somebody else on what's left of my hardware, rewriting it into a new person that thinks it is 2024. Who would volunteer for that?

Sure, it could be done to a baby who doesn't question the change in environment, but why would anyone take a baby and subject it to indefinite VR? How does anybody in a VR not just atrophy away from disuse of all limbs? People permanently paralyzed have pretty short life expectancies, regardless of how much fun their brain might be having.
NotAristotle March 18, 2024 at 20:09 #889027
Quoting noAxioms
Picture 'reality' R0 as the trunk of a tree. It has 9 boughs (S1-S9) coming out of it, the simulations being run on R0. Each of those has 10 branches, labeled S10-S99. Those each in turn have 10 sticks (next-level simulations, S100-S999), then the twigs (S1000-S9999) and the leaves (S10000-S99999). Every one of those simulations has say 10 billion people in it, so a given person is likely to be simulated (all except the ones in R0), and most of those (90%) find themselves in the leaves
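
For clarity, the arithmetic in that quoted tree can be tallied in a few lines. This is only an illustrative Python sketch of the quoted numbers (the tier counts 9, 90, 900, 9000, 90000 and the "say 10 billion" per-world population are taken straight from the quote; the variable names are just for this example):

[code]
# Tally of the quoted tree: reality R0 plus five tiers of simulations.
# Labels S1-S9, S10-S99, ... correspond to 9, 90, 900, 9000, 90000 sims.
tiers = [9, 90, 900, 9_000, 90_000]        # boughs, branches, sticks, twigs, leaves
people_per_world = 10_000_000_000          # the quote's "say 10 billion people"

simulated = sum(tiers) * people_per_world
total = simulated + people_per_world       # add the people in R0 itself
print(simulated / total)                   # ~0.99999: nearly everyone is simulated
print(tiers[-1] / sum(tiers))              # ~0.90: about 90% of the sims (and of
                                           # simulated people) are in the leaves
[/code]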


Ah, I see, thanks for explaining!

Regarding your objection re: physicalism. The problem with conscious people within/part of a simulation has to do, in my opinion, with the historical necessities of consciousness. That is to say a simulated person does not have the requisite history to be conscious. We need not invoke anything supernatural in this description of consciousness; we can keep everything purely physical. All we're saying is that someone who is conscious must be alive, and someone alive must come from someone else who is alive, that is, from the womb.
Ludwig V March 18, 2024 at 23:09 #889101
Quoting noAxioms
Most of the opponents of machine consciousness simply refuse to use the word to describe a machine doing the same thing a human is doing.

I don't think this is Lewis Carroll's tortoise arguing with Achilles. Understanding this is the heart of the problem. We need to be much more careful about what "doing" means in the context of planets and the weather and in the context of people. People and inanimate objects are not in the same category, which means that understanding planets or the weather and understanding people involve different language-games. Machines have a foot in both camps. The answers are not obvious.

Quoting noAxioms
Ditto for the thermostat. It doesn't react any more to the sensory input other than to convey a signal. So maybe my boiler is crudely conscious because it processes the input of its senses.

My boiler, on its own, is clearly not conscious, even if it contains a thermostat to switch it off when the water is sufficiently hot. Neither is the thermostat that switches it on. Neither keeps the house warm. What keeps the house warm, (not too hot and not too cold) is the entire system including the water, the pump and the radiators, with its feedback loops and not any one component. You can call the system "crudely conscious" if you like, but I think few people will follow you. But you are right that it is in some ways like a conscious being.
A computer is arguably more like a conscious being, though it is probably too rational to count as one. AI is more like one still. There's no simple, once-and-for-all distinction.
One reason why it is so hard is that it is not just a question of fact about the machine (the putative person) but also of how we treat it. So there's a circularity in the debate.

Quoting L'éléphant
If I could experience the real world, then be hooked up to a machine that seamlessly simulates the same thing I have experienced, so that I would not be able to tell the difference, then the theory has made its point.

If that's the point, we don't need the theory. We all experience dreams from time to time. And we know how to tell the difference. But we can't tell the difference while we are dreaming. What's so exciting about the theory?
bongo fury March 18, 2024 at 23:28 #889110
Quoting noAxioms
Do you mean that some part of the computer running the game would need the detail?
— bongo fury
It would need to simulate the NPC down to the biochemical level. The NPC would need to be conscious to believe anything, and not just appear to believe stuff.


How isn't this as confused as saying "the computer would need to simulate the weather event down to the level of water droplets. The weather event would need to be wet and windy, and not just appear to be wet and windy."
Wayfarer March 18, 2024 at 23:33 #889111
Bernardo Kastrup says you can get a computer to run an exquisitely-detailed simulation of kidney function, but you wouldn't expect it to urinate.
RogueAI March 18, 2024 at 23:46 #889113
Reply to Wayfarer He uses that one a lot.
Wayfarer March 18, 2024 at 23:54 #889116
Reply to RogueAI It's a graphic way of making a sound point.
RogueAI March 18, 2024 at 23:55 #889118
Reply to Wayfarer It's a good one. I hope one of these days Sam Harris has him on his show.
noAxioms March 19, 2024 at 01:07 #889134
Quoting NotAristotle
Regarding your objection re: physicalism. The problem with conscious people within/part of a simulation has to do, in my opinion, with the historical necessities of consciousness. That is to say a simulated person does not have the requisite history to be conscious.
The simulation needs to provide an initial state that encodes that history. History is, after all, just state. Hence my suggestion of starting the sim of a human as a zygote, since there is no need to provide it with prior experience. You then have to simulate years of experience to give it that history, but at least you don't need to presume what the mature brain state might be.

and someone alive must come from someone else who is alive
It has to start somewhere, so the womb would be outside the system, an imitation womb, empirically (to the child) indistinguishable from a real mother, in every way. I suppose the placenta would be included in the system since it is, after all, the child and not the mother, but when it is severed, the sim needs to remember which half to keep as part of the system.

Quoting Ludwig V
People and inanimate objects are not in the same category
To a simulation of low-level physics, they are pretty much in the exact same category, and both pose the same problem of needing to exert some kind of effort to keep track of what is the system and what isn't, a problem that real physics doesn't have since it operates on a closed system.

What keeps the house warm, (not too hot and not too cold) is the entire system including the water, the pump and the radiators, with its feedback loops and not any one component.
Similarly, a person (and not a brain) is what is conscious. Not even that, because an environment is also needed.

A computer is arguably more like a conscious being, though it is probably too rational to count as one.
Irrationality is required for consciousness? A computer is rational? I question both. Being deterministic is not the same as being rational. I do agree that irrationality is a trait of any living creature, and a necessary one.


If that's the point, we don't need the theory. We all experience dreams from time to time. And we know how to tell the difference.
Any sim would be distinguishable from a dream state.
But we can't tell the difference while we are dreaming.
Sometimes. One is often reft of rational thought while dreaming, but not always. I can tell sometimes, and react to knowing so.



Quoting bongo fury
The weather event would need to be wet and windy, and not just appear to be wet and windy."
Yes, Wayfarer just below quotes Kastrup suggesting exactly that.

Quoting Wayfarer
Bernardo Kastrup says you can get a computer to run an exquisitely-detailed simulation of kidney function, but you wouldn't expect it to urinate.
It would be a piss-poor kidney simulation (pun very intended) if it didn't.

NotAristotle March 19, 2024 at 01:20 #889139
Reply to noAxioms What is the difference between the simulation and reality if you are constructing "simulated people" based on the same historical states that result in non-simulated people? If the physicalness of both systems is identical in all respects, what is the difference?
AmadeusD March 19, 2024 at 01:47 #889144
Quoting wonderer1
Bostrom's speculation has always smelled grossly unparsimonious, to me.


I agree, generally. The paper, on its face, is fairly convincing, but it requires such a ridiculous set of premises (similar to the Fermi Paradox) that it doesn't seem all that apt to the Universe we actually inhabit.
Wayfarer March 19, 2024 at 01:47 #889145
Quoting noAxioms
It would be a piss-poor kidney simulation (pun very intended) if it didn't.


I’m sure simulations of kidney functions, like other organic functions, may be extremely useful for medical research and pharmacology, without literally producing urine. I’m sure you could model the effects of cardiac arrest without actually having a heart attack. They don’t need to do that to be effective as simulations. That’s the point - simulations may be useful and accurate, but they’re still simulations, not real things.

Reply to RogueAI Kastrup has nothing good to say about Harris on his blog.
AmadeusD March 19, 2024 at 01:57 #889147
Reply to Wayfarer I've been meaning to find somewhere to mention it - that five-hour Kastrup thing you laid out for me months ago was great. I've done more reading, and while I think Kastrup is on to something, I am slowly getting the message; another philosopher I speak with regularly noted "Kastrup is a cult leader", hehe. He seems very unopen to theories that aren't his own.
Wayfarer March 19, 2024 at 02:16 #889149
Reply to AmadeusD I've read quite a bit of Kastrup. I definitely don't think he's any kind of cult figure, that is just ad hominem, but you expect that kind of hostility because he questions the mainstream consensus. Overall I think he's an effective and articulate advocate for idealism.
AmadeusD March 19, 2024 at 02:29 #889150
Quoting Wayfarer
Overall I think he's an effective and articulate advocate for idealism.


Agree with this - potentially the only one currently.
noAxioms March 19, 2024 at 04:21 #889154
Quoting NotAristotle
What is the difference between the simulation and reality if you are constructing "simulated people" based on the same historical states that result in non-simulated people? If the physicalness of both systems is identical in all respects, what is the difference?

Unclear on the question. The difference between reality (which doesn't supervene on something higher) and the sim (which does) is just that. Reality is supposedly a closed system, and the simulation (either kind) is not, and that is one of the places to look for empirical differences between the two.
As for the 'historical states', I need clarification. I propose a 'system' that is smaller, with say one or a few people who are actually simulated, while the rest are outside the system: not simulated, but rather imitations presented (as sensory input to the simulated ones) as other people. AI controls these sensory inputs, and if it is good enough, nobody can tell the difference.
Bostrom gets into this, except all people are in the system, so there are no imitation people, but most other things are imitation. A wall is not particularly simulated, but it still needs to show wear after time. Paint needs to peel. Dead things need to rot, or at least need to appear to. Physics of simple things is often simple, but changes upon close inspection. That's really hard to do in a simulation, but Bostrom is apparently not a software person and has many naive ideas about it.



Quoting AmadeusD
I agree, generally. The paper, on its face, is fairly convincing, but it requires such a ridiculous set of premises (similar to the Fermi Paradox) that it doesn't seem all that apt to the Universe we actually inhabit.
Bostrom assumes otherwise, but whatever realm is running his simulation doesn't need to be a universe like our own.
As to the Fermi thing, I have opinions, but they're only opinions.


Quoting Wayfarer
I’m sure simulations of kidney functions, like other organic functions, may be extremely useful for medical research and pharmacology, without literally producing urine.
If you or Kastrup expect a kidney in one universe to produce urine in another, then you don't really know what a simulation does.

That’s the point - simulations may be useful and accurate, but they’re still simulations, not real things.
But the question asked is how we might know (and not just suspect) that we are not the product of a simulation. A detailed simulation of you would likely deny its own unreality (as you use the word here), and would also deny that its consciousness is the product of its underlying physics. If it did this, it would be wrong about both. I'm not sure what you'd expect that simulation to yield.

Wayfarer March 19, 2024 at 04:36 #889155
Quoting noAxioms
But the question asked is how we might know (and not just suspect) that we are not the product of a simulation.


So, you don't think there's any criterion by which we can discern the difference between simulation and reality. You admit the possibility that you're not actually a real being. Is that what you're saying?

Quoting noAxioms
you don't really know what a simulation does.


I think it's pretty clear. This is the definition:

Simulation: imitation of a situation or process.
"simulation of blood flowing through arteries and veins"
the action of pretending; deception.
"clever simulation that's good enough to trick you"
the production of a computer model of something, especially for the purpose of study.
"the method was tested by computer simulation"
noAxioms March 19, 2024 at 05:15 #889159
Quoting Wayfarer
So, you don't think there's any criterion by which we can discern the difference.
I do think there are ways, but most of the posters are using fallacious methods to justify their assertions.
I can think of ways, albeit technologically unrealistic, to falsify a VR with multiple people (non-solipsism) in the VR. If there's just one, other methods need to be used.
For instance, put me under anesthesia. To me, I appear to awaken after only a little time has passed. The only way a VR could do that is to put the real person similarly to sleep, and not just pipe in the sensation of awakening after a short time. That fake 'moving the clocks forward' trick only works under solipsism.

Quoting Wayfarer
You admit the possibility that you're not actually a real being.

The possibility that I am a real being is already contingent on the definition of 'real', and since I am not a realist, perhaps my not believing that has nothing to do with any suspicion of being the product of a simulation.

Bottom line still is, per my chosen handle: Don't hold any beliefs that are beyond questioning. The worst things to accept unquestioned are the intuitive ones.
Wayfarer March 19, 2024 at 05:38 #889161
Quoting noAxioms
Don't hold any beliefs that are beyond questioning


Per Descartes, I hold that the fact of one's own existence, that one is a subject of experience, is apodictic; it cannot plausibly be denied. That is not a belief.

Would a simulation of agonising pain be actually painful? If it was, it can't really be a simulation, but as the primary attribute of pain is the feeling of pain, there's nothing else to simulate.
AmadeusD March 19, 2024 at 05:43 #889162
Quoting noAxioms
Bostrom assumes otherwise, but whatever realm is running his simulation doesn't need to be a universe like our own.


That it is another universe is one of the ridiculous premises required for its probability argument to be effective. This is what I'm getting at - on its face, it's mathematically almost certain we are in a simulation set up by future generations. But the invocations required to actually, practically, in real life take that seriously are unnerving to say the least, and perhaps a sign that one is not being honest with oneself... if the theory convinces one.
L'éléphant March 19, 2024 at 06:12 #889170


Quoting noAxioms
Well, to quote the BiV IEP page, very close to the top:

Or, to put it in terms of knowledge claims, we can construct the following skeptical argument. Let “P” stand for any belief or claim about the external world, say, that snow is white.

[1] If I know that P, then I know that I am not a brain in a vat
[2] I do not know that I am not a brain in a vat
[3] Thus, I do not know that P.


But you did not go further into the argument. That is the opening argument for the BIV. But Putnam continues on to counter-argue that premises or claims above are necessarily false. If you're a BIV then to say "I am a brain in a vat" is false because you wouldn't be referring to a brain and to a vat. There's no reference at all! There is no causal link to make the argument sound.

So going back to what you said in your previous post that ...

Quoting noAxioms
A brain in a vat need not be a brain at all, but some sort of mind black-box. Introspection is the only evidence. A non-human mind in a vat being fed false information that it is a human living on Earth has no clue that it isn't a pink squishy thing doing the experiencing, or exerting the will.

If it is indeed just a black-box or non-human mind being fed false information, anything that comes out of its mouth referring to anything about the physical world is false.
Because to refer to a tree, snow, or brain is to go outside of the BIV world, yet isn't it true that we just made the argument that we are just a BIV? So, are you or are you not a BIV? You can't be both.

The simulation hypothesis is a pitfall -- it looks attractive because it allows us to make arguments like "how do you prove we're not in a doll house?" but we fail to recognize the contradiction of the utterance.

Quoting Ludwig V
If I could experience the real world, then be hooked up to a machine that seamlessly simulates the same thing I have experienced, so that I would not be able to tell the difference, then the theory has made its point. — L'éléphant

If that's the point, we don't need the theory. We all experience dreams from time to time. And we know how to tell the difference. But we can't tell the difference while we are dreaming. What's so exciting about the theory?

Actually, I take back what I said in what you quoted from my previous post. Let's start again.

The theory posits that there are a scientist outside the BIV and a BIV. If I am a BIV, I cannot make claims like "I am a brain in a vat" because I am making no reference to the "brain" and "vat". So, if I say that sentence, it is false.
Wayfarer March 19, 2024 at 06:35 #889171
Reply to noAxioms Actually, you're right, I have no interest in pursuing the argument further. However if it helps, there's an encyclopedia entry on the 'brain in a vat' thought experiment here.
NotAristotle March 19, 2024 at 11:50 #889202
Reply to noAxioms You said you would start the sim as a zygote. I am asking: what is the difference between this zygote and a zygote in reality? Or is the zygote you are postulating a mere simulation of a zygote? If so, that seems problematic.
noAxioms March 19, 2024 at 16:40 #889261
Quoting AmadeusD
That it is another universe is one of the ridiculous premises required for its probability argument to be effective. This is what I'm getting at - on its face, it's mathematically almost certain we are in a simulation set up by future generations.
I agree that the logic presented is completely valid, but the premises are outrageous, and the conclusion is only as sound as those premises.

But the invocations required to actually, practically, in real life take that seriously are unnerving to say the least, and perhaps a sign that one is not being honest with oneself... if the theory convinces one.
I don't take the argument seriously due to the faulty premises. I see no reason to actually suspect that I am a product of simulation, but I also don't rule it out, nor would I personally find it unnerving to actually find evidence that such is the case.
I do my best to be open minded to any possibility, or at least possibilities where knowledge can be had. So if I'm for instance in a VR being fed fiction, then I have no choice but to make sense of the fiction being fed to me, and to not worry about the inaccessible nature of whatever feeds it to me.


Quoting L'éléphant
But you did not go further into the argument. That is the opening argument for the BIV. But Putnam continues on to counter-argue that premises or claims above are necessarily false. If you're a BIV then to say "I am a brain in a vat" is false because you wouldn't be referring to a brain and to a vat. There's no reference at all! There is no causal link to make the argument sound.
OK. I admit to not reading the whole thing because I was only trying to point out similarities in the issues of BiV and VR, which are often aligned.

If either has memory of being put in the vat, then the arguments become more sound. Any video game is like that. You have memory of starting the game, and have evidence that you've not spent your life there (although close with some of my kids).

Quoting L'éléphant
If it is indeed just a black-box or non-human mind being fed false information, anything that comes out of its mouth referring to anything about the physical world is false.
I don't follow that. If it says (without evidence) that it is a BiV, then the utterance is true if that is indeed the fact. It's just not something justifiable, at least not if the lies being fed to it are quality lies. So it isn't knowledge, but not all utterances are necessarily false. What about 2+2=4? Is that also one of the lies?

The simulation hypothesis is a pitfall -- it looks attractive because it allows us to make arguments like "how do you prove we're not in a doll house?" but we fail to recognize the contradiction of the utterance.
OK, I haven't brought this up, but if it is a true sim (not a VR), the sim is computing the values of a mathematical structure (this universe), which is sort of presuming something like Tegmark's MUH.

If I am a part of a mathematical structure, somebody computing that structure doesn't enact the creation of that structure, but rather just works out details of that structure that already is, sort of like pi is (supposedly) a constant that is not just a property of this universe. It can be known in any universe independently, and the ratio of the circumference of a circle to its diameter is pi even if nothing knows that. Computing it is like the simulation: it doesn't create pi, it just makes the approximate value of it known to whatever is running the computation. The sim may work similarly, making this universe known to the runner of the simulation, but it doesn't constitute an act of creation of the universe, which doesn't need to be simulated in order for parts of it (us) to be what we are, which is conscious of the parts of the universe to which we relate.
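
As a concrete illustration of that point, here is a minimal Python sketch (illustrative only, not anything proposed in the thread) that approximates pi with the well-known Leibniz series; running it obviously doesn't create pi, it just makes successively better approximations of an already-fixed value available to whoever runs the computation:

[code]
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# The computation reveals an approximation of pi; it does not create pi.
def approx_pi(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 1_000, 100_000):
    print(n, approx_pi(n))   # converges (slowly) toward 3.14159...
[/code]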



Quoting NotAristotle
You said you would start the sim as a zygote. I am asking: what is the difference between this zygote and a zygote in reality?

Several differences. The sim is run at some finite level of detail. Does it have mitochondria? Depends on the level of detail, if it matters to the entity running the sim. The sim probably cannot run at the quantum level, and the real zygote does, and even deeper if there is a deeper.

The sim zygote is an open system, and the real zygote is part of a closed system. That is a second major difference. Something has to imitate the interactions with the parts external to the system, and that requires making up fiction now and then, and one can attempt to catch contradictions in that fiction. Of course it helps a lot to know where the system boundary is.

Or is the zygote you are postulating a mere simulation of a zygote?
Yes, that. You don't need to pre-load the simulated thing with memory of a past consistent with the fake initial state of the simulation. That's the problem we're trying to get around. Don't know why you find this problematic. The system simulated then grows up into a conscious human with real memories of its upbringing, not fake memories planted by an initial state that probably doesn't know how memories are stored. The whole point of the sim after all is to learn these things.



Quoting Wayfarer
Per Descartes, I hold that the fact of one's own existence, that one is a subject of experience, is apodictic; it cannot plausibly be denied.

And here I go doing exactly that, not denying it, but having doubts about it to the point of abandoning the realism it fails to explicitly posit.

Funny that this comes right after I go on about humans not being rational, but being very good at rationalizing. Conclusion first, then an argument that leads to it. Descartes starts with all this skepticism, and builds up from this simple state that, lacking any knowledge of modern physics, leaves him with something he decides can be known with certainty. I'm fine with that, and I'm admittedly not very familiar with his work, but he goes from there to conclude, surprise, surprise, the exact mythological teachings of his own culture and not any of the other thousand choices of other cultures. That's a great example of rationalization. It was his target all along. A more rational progression from those beginnings leads to idealism/solipsism.

AmadeusD March 19, 2024 at 19:23 #889282
Quoting noAxioms
I also don't rule it out, nor would I personally find it unnerving to actually find evidence that such is the case.


Nice. Similarly, myself.
Ludwig V March 19, 2024 at 19:43 #889287
Quoting noAxioms
Similarly, a person (and not a brain) is what is conscious. Not even that, because an environment is also needed.

Yes, that's right. I agree also that persons, as we understand them, can only exist in an environment. Whether one includes that environment as part of the person or not is a tricky question and I don't know the answer. In our paradigm case (the only one that we actually know), a person is a human being, i.e. an animal. An animal is a physical body. (I'm setting aside the dualistic possibility of persons existing without a body.) Some physical structures are machines, and hence not animals, but I don't see why such structures cannot possibly constitute people.
But if they are to constitute people, they would indeed need at least to behave as people spontaneously and not because they are following a detailed set of instructions about what to do and when. They need to learn to do that for themselves. So a machine that was designed and built to behave as a person could not be anything except a sim.

Quoting noAxioms
It has to start somewhere, so the womb would be outside the system, an imitation womb, empirically (to the child) indistinguishable from a real mother, in every way. I suppose the placenta would be included in the system since it is, after all, the child and not the mother, but when it is severed, the sim needs to remember which half to keep as part of the system.

So I think you are right to argue that some such process as this would be necessary to create a machine person. The catch is that I'm not at all sure that this would be a sim, rather than a real person - especially as the process of its creation would be very close to the process of creating human beings. I think this is the same point as here:-
Quoting NotAristotle
You said you would start the sim as a zygote. I am asking: what is the difference between this zygote and a zygote in reality? Or is the zygote you are postulating a mere simulation of a zygote? If so, that seems problematic.


Quoting noAxioms
Irrationality is required for consciousness? A computer is rational? I question both. Being deterministic is not the same as being rational. I do agree that irrationality is a trait of any living creature, and a necessary one.

Well, perhaps I'm being provoking. My point is that when people act, they do so on the basis of values that they hold, that is, their emotions and desires. It may be a distortion to call them irrational, but standard ideas of logic and reason are well recognized (since Aristotle) to be incapable of generating actions on their own.
Calculating is widely recognized as a rational activity. To me, it makes no sense to deny that computers can calculate. The catch is that such rational activities are not sufficient to be recognized as a person. Ever since the Romantic protest against the Enlightenment, emotion and desire have been regarded as essential elements of being a human person.

Quoting noAxioms
Sometimes. One is often reft of rational thought while dreaming, but not always. I can tell sometimes, and react to knowing so.

This may be a side-issue. I know that there is an issue about lucid dreaming. But I doubt whether the unsupported memory of a dreamer is sufficient to establish the phenomenon, except that I accept that the reports exist and I don't believe they are lies. But the possibility that the dreamer is dreaming the phenomenon cannot, it seems to me, be excluded.

Quoting noAxioms
To a simulation of low-level physics, they are pretty much in the exact same category,

I don't know what you mean by "a simulation of low level physics", but you clearly have a different concept of categories from mine.

Quoting noAxioms
That's (sc. Descartes' argument) a great example of rationalization. It was his target all along.

A side-issue. If you call it a rationalization, you have already decided the argument is invalid or unsound. But knowing that someone had in mind a specific conclusion before formulating the argument does not, of itself, show that their argument is invalid or unsound.

Quoting Wayfarer
Would a simulation of agonising pain be actually painful? If it was, it can't really be a simulation, but as the primary attribute of pain is the feeling of pain, there's nothing else to simulate.

Another side-issue, but you are presupposing a dualistic concept of pain. On that concept, you are right. But whatever exactly may be the relevant conception of pain, I think your point survives, in the sense that whatever caused the pain would have to cause real pain and not zombie pain, just as the anger would have to be real anger, etc.

Quoting L'éléphant
If I am a BIV, I cannot make claims like "I am a brain in a vat" because I am making no reference to the "brain" and "vat". So, if I say that sentence, it is false.

If I am a brain in a vat, my claim is true, even if I can't refer to brain and vat, so long as "brain" and "vat" refer to the appropriate objects in that context. Perhaps I cannot know that my claim is true, but that's different. Actually, I don't really see why a brain in a vat cannot refer to itself as a brain in a vat.
Wayfarer March 19, 2024 at 20:56 #889306
Quoting noAxioms
Descartes starts with all this skepticism, and builds up from this simple state that, lacking any knowledge of modern physics, leaves him with something he decides can be known with certainty. I'm fine with that, and I'm admittedly not very familiar with his work, but he goes from there to conclude, surprise, surprise, the exact mythological teachings of his own culture and not any of the other thousand choices of other cultures. That's a great example of rationalization.


The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact that, in order to be subject to an illusion, there must be a subject. And this whole line of argument was anticipated by Augustine centuries prior:

"But who will doubt that he lives, remembers, understands, wills, thinks, knows, and judges? For even if he doubts, he lives. If he doubts where his doubts come from, he remembers. If he doubts, he understands that he doubts. If he doubts, he wants to be certain. If he doubts, he thinks. If he doubts, he knows that he does not know. If he doubts, he judges that he ought not rashly to give assent. So whoever acquires a doubt from any source ought not to doubt any of these things whose non-existence would mean that he could not entertain doubt about anything." (Augustine, On the Trinity 10.10.14, quoted in Richard Sorabji, Self, 2006, p. 219).


I have my doubts about Descartes, in that I believe his dualistic separation of the physical and mental as separate substances is profoundly problematical and has had hugely deleterious consequences for Western culture, but as for the essential veracity of his ‘cogito’ argument, I have no doubts.

Quoting Ludwig V
real pain and not zombie pain


I had the idea that zombies don’t feel pain, at least they never do in zombie flicks. You have to literally dismember or disintegrate them to overcome them, merely inflicting blows or wounds does nothing.
J March 19, 2024 at 21:33 #889315
If you want to read a first-rate philosopher discuss all these issues, try Reality+, David Chalmers' new book. It sheds light on a lot of what's being debated here.
noAxioms March 19, 2024 at 21:36 #889316
Quoting Ludwig V
a person is a human being, i.e. an animal. ... Some physical structures are machines, and hence not animals, but I don't see why such structures cannot possibly constitute people.
There's a contradiction here. A person is an animal. A machine is not an animal. But a machine can be a person? That means a machine is both animal and not animal.

But if they are to constitute people
I think you are again envisioning imitation people, like Replicants. That's a very different thing than the simulation hypothesis which does not involve machines pretending to be people.
If you're going for an empirical test, it doesn't work. If a convincing replicant is possible in a sim but not in reality, the runners of the sim can see that and know that their simulation isn't very accurate, and the people in the sim don't know that replicants should be different, so they have no test.

Secondly, where do you get this assertion that machines must lack spontaneity? I mean, deep down, you're a machine as well running under the same physics. I think you're confusing determinism with predictability.

So I think you are right to argue that some such process as this would be necessary to create a machine person.
No. The simulation is creating a biological person, not a machine person. Try to get that. Replicants are not grown from a zygote. A replicant can be trivially tested by an x-ray or just by sawing it in half, or so I suggest. Apparently in Blade Runner it was very hard to tell the difference, but that's also a fiction.

Calculating is widely recognized as a rational activity.
That's right. Physics doesn't do spontaneous things (quantum mechanics excepted, which is a big problem if you want to simulate that). But classical physics isn't spontaneous, and yet spontaneity emerges from it, or at least the appearance of it. Anything in the simulation would have to behave just like that.

To me, it makes no sense to deny that computers can calculate. The catch is that such rational activities are not sufficient to be recognized as a person.
Yet again, no computer is pretending to be a person, so it isn't a problem.

If you call it a rationalization, you have already decided the argument is invalid or unsound.
Probably invalid in this case, and yes, I've decided that, but on weak grounds since I have never followed the argument from beginning to a preselected improbable conclusion.

Would a simulation of agonising pain be actually painful?
If the simulation is any good at all, and presuming monism, then yes, it would be painful to the subject in question. No, the computer running the sim would not feel pain, nor would the people responsible for the creation of the simulation, despite suggestions from Kastrup that they apparently should.



Quoting Wayfarer
The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact
I didn't say that was the rationalization. I even accepted it since it was a reasonable statement in the absence of modern physics. It is him building on that foundation to his later conclusions that is the rationalization, which I clearly spelled out in my post.
As for it being indubitable, well, I dubit it, as I do everything *. The Latin phrase translates roughly to "there is thinking, therefore a thinker", which suggests a process, a state that evolves over time, but presumes (without doubt) that all said states are states of the same thing. That is, for instance, in contradiction with quantum interpretations like MWI, which you probably deny because it is fairly incompatible with the dualistic view of persisting identity. That denial is fine since nobody can force your opinion, but absent a falsification of the interpretation, the assertion is hardly indubitable.
And no, I don't accept MWI either, but I don't claim it has been falsified.


* why isn't 'dubit' a word? It ought to be.
Wayfarer March 19, 2024 at 22:13 #889320
Quoting noAxioms
I even accepted it since it was a reasonable statement in the absence of modern physics.


At risk of opening a can of worms, how does 'modern physics' come into it?

Quoting noAxioms
As for it being indubitable, well, I dubit it, as I do everything


If you dubit it, you must exist, in order to dubit it. If you don't exist, then your opponent has no argument to defend.

Persistence of self-identity over time is not discussed in Descartes, but I don't believe it has much bearing on the argument. Again, any statement along the lines of 'I (the speaker) do not exist' is self-contradicting.

Quoting noAxioms
all said states are states of the same thing


Beings are not objects or things (except for from the perspective of other beings - I see you as 'an object', in a way, although to treat you as an object would be, at the very least, discourteous). The nature of the identity of a being is quite a different matter to the nature of the identity of a thing.

In fact, this is where I criticize Descartes - he designates the subject as 'res cogitans', which is translated as 'thinking thing'. And I think there's a deep, implicit contradiction in that designation, as it obfuscates a real distinction between 'things' (as objects) and 'beings' (as subjects of experience.)

(In Crisis of the European Sciences, Husserl concurs that describing the subject (res cogitans) as a "thing" does not do justice to the nature of the subject of experience. His phenomenological method emphasizes the intentionality of consciousness—consciousness is always consciousness of something—and the embodied and situated character of human existence. This perspective seeks to bridge the gap between the subject as a mere "thing" and the subject as an experiencing, intentional "being." Descartes' formulation overlooks the role of consciousness and the subjective, experiential dimension of being in constituting the world of objects (and hence reality) as it is experienced by living beings. Descartes, in removing that situated and intentional nature of the subject, and seeking certainty in mathematical abstractions, in fact gave rise to the worldview which makes the 'brain-in-a-vat' scenario conceivable in the first place - as the IEP article indicates.)
L'éléphant March 20, 2024 at 03:00 #889344
As this thread is not about BIV in particular, but simulation, I will respond to the below briefly:

Quoting Ludwig V
If I am a brain in a vat, my claim is true, even if I can't refer to brain and vat, so long as "brain" and "vat" refer to the appropriate objects in that context. Perhaps I cannot know that my claim is true, but that's different. Actually, I don't really see why a brain in a vat cannot refer to itself as a brain in a vat.

You do not understand what "refer" means, in other words.

Quoting noAxioms
I don't follow that. If it says (without evidence) that it is a BiV, then the utterance is true if that is indeed the fact.

Then you misunderstand what "true" means in statements.




Ludwig V March 20, 2024 at 06:58 #889378
Quoting Wayfarer
The logic of cogito ergo sum is neither rationalisation nor myth, it is the indubitable fact that, in order to be subject to an illusion, there must be a subject.

The analysis of Descartes' argument is a bit off-topic here, so I'll resist commenting.
Quoting Wayfarer
I have my doubts about Descartes, in that I believe his dualistic separation of the physical and mental as separate substances is profoundly problematical and has had hugely deleterious consequences for Western culture, but as for the essential veracity of his ‘cogito’ argument, I have no doubts.

But I can't resist saying that I agree with you.

Quoting Wayfarer
I had the idea that zombies don’t feel pain, at least they never do in zombie flicks. You have to literally dismember or disintegrate them to overcome them, merely inflicting blows or wounds does nothing.

Yes. I did not put my point well. I was thinking of philosophical zombies, which would (if I've understood the idea correctly) not behave like zombies in the flicks.

Quoting noAxioms
There's a contradiction here. A person is an animal. A machine is not an animal. But a machine can be a person? That means a machine is both animal and not animal.

Quoting noAxioms
I mean, deep down, you're a machine as well running under the same physics. I think you're confusing determinism with predictability.

Are these two remarks compatible? My point is that there is no easy and clear way to state what the Turing hypothesis is trying to articulate.
Quoting noAxioms
I think you are again envisioning imitation people, like Replicants. That's a very different thing than the simulation hypothesis which does not involve machines pretending to be people.

Thank you for the clarification. I misunderstood what the thread was about. My apologies. It is clear now that I haven't understood what the simulation hypothesis is. However, when I checked the Wikipedia - Simulation hypothesis, I found:-
Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).

For me, a conscious being is a person and a simulated person is not a person, so this confuses me. Can you perhaps clarify?

Quoting noAxioms
why isn't 'dubit' a word? It ought to be.

Well, since you have now used it, and I understand it (roughly, I think), it is a word now. Who knows, it may catch on and then you'll be awarded a place in the dictionaries of the future!

Reply to L'éléphant
I agree that BiV is a different kettle of fish and I don't particularly want to pursue it, but I can't resist one reply, because your remark was so incomprehensible to me. I don't expect to resolve our differences, just to clarify them a bit.

Quoting L'éléphant
You do not understand what "refer" means, in other words.

You seem to think I cannot refer to anything that I have not experienced. But the reference of a word is established in the language in general, not by what I may or may not have experienced. So I can refer to the President of the United States even if I don't know that Joe Biden is the President.
Quoting L'éléphant
Then you misunderstand what "true" means in statements.

I agree with @noAxioms, except that I would add that it's not something it can justify on the basis of its subjective experience.
Patterner March 20, 2024 at 12:48 #889443
Quoting noAxioms
There is no technology constraint on any pure simulation, so anything that can be done by computer can be done (far slower) by paper and pencil. That means that yes, even the paper and pencil method, done to sufficient detail, would simulate a conscious human who would not obviously know he is being simulated.
It seems to me you cannot simulate with paper and pencil, because it is not an active medium. You can write about the game of basketball in all conceivable detail. You can write down every rule, and describe as many scenarios as you like, explaining how each rule applies at each moment. You can describe every required object, as well as the physical, mental, and emotional characteristics of every possible player. You can write all this down in every conceivable detail, but it would never be a basketball game.

You can describe a game that actually took place, or a fictitious one, in every conceivable detail. Exact speed and spin of the ball at every moment. Exact angle it took every time it hit the floor or backboard. Exactly how it lost its spherical shape with each impact. Heck, even how much sweat came out of each of every player's pores.

In neither scenario is there an actual basketball game. Not even a simulated one. Because you need action for a simulation. There are just squiggles on paper that, when someone who knows what those squiggles represent interprets them, describe events and possible events, and allow the reader to imagine any events that you have not described (assuming you have not described every possible event). But the events are not taking place. Not even as a simulation. There is no action.

Even an actual gathering of all the people and objects required for a basketball game is not a basketball game if all the players do not act in accordance with the rules.

If you program everything necessary to simulate consciousness into a computer**, but never hit Run, you will not have a simulated consciousness. If it is running, and you hit Stop, or cut the power, you no longer have a simulation.


**You would have actual consciousness; there is no such thing as simulated consciousness.
noAxioms March 20, 2024 at 13:07 #889450
Quoting Wayfarer
At risk of opening a can of worms, how does 'modern physics' come into it?
I joined this and other forums to find out how the prominent philosophers (the ones you learn of in class) dealt with modern physics (narrowing the search to recent ones of course) and found that for the most part, they either didn't know their physics, or didn't care about it.
So I learned physics, or at least the parts of it relevant to the subjects I cared about.

Relativity threw significant doubt on Newtonian absolutism, in which there was one preferred frame, time was posited to be something that flows or progresses, there was a preferred moment in time, and the universe was static, either of infinite age or somehow set in motion from some initial state at some point. Much of religious myth (especially the creation parts) requires the universe to be contained by time instead of the other way around, and this did not become apparent until about 110 years ago. The idea that the universe has a finite age is about a century old, and some religious teachings did at least bend with that one and put the creation event there.

Quantum mechanics really threw a spanner into the gears with suggestions that ontology might work backwards (that existence depends on interaction with future things), that identity of anything (electrons, rocks, people) is not at all persistent and thus I am not the same I as a second ago.

One can of course pick an interpretation consistent with your preferences and avoid the implications of the ones you don't like, but if doubt is to be eradicated, all the alternative interpretations contradicting the thing of which you are certain must be falsified.


And who knows what else might get discovered. Nobody saw QM coming, so all these people who held certain beliefs with certainty found themselves to be wrong or at least potentially wrong. So a declaration of 100% certainty is irrational. I mean, my certainty rests on the sum of two numbers (a pair of arbitrary real numbers say) being exactly one other real number, always and anywhere. I don't significantly doubt that, but I still question it. What if it's only a property of this universe that such a sum comes to that one solution and not a different one elsewhere?

Persistence of self-identity over time is not discussed in Descartes
Indeed it isn't, but the assumption is implicit. It's too obvious to bother calling out explicitly, or at least it was obvious until ~50 years ago.

Beings are not objects or things
Your opinion. The opinion of others may vary.


Quoting Ludwig V
I was thinking of philosophical zombies
I knew what you meant, even if Wayfarer chose to reply to what you said instead of what you meant.

My point is that there is no easy and clear way to state what the Turing hypothesis is trying to articulate.
The Turing test (the closest a 'Turing hypothesis' gets is the Church-Turing thesis, concerning what is computable, which is oddly relevant below) is an intelligence test for when a machine's written behavior is indistinguishable from that of a human. The large language models are getting close, and the easy way to tell the difference is to ask them questions without factual answers. They also are not designed to pass the Turing test, so all one has to do is ask one what it is.

Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).
For me, a conscious being is a person and a simulated person is not a person, so this confuses me. Can you perhaps clarify?
A simulated person would be a person, just in a different universe (the simulated one). It's likely quite a small universe. You seem to define 'person' as a human in this universe, and no, the simulated person would not be that.

why isn't 'dubit' a word? It ought to be.
— noAxioms
Well, since you have now used it, and I understand it (roughly, I think), it is a word now.
And it was already used in somebody else's reply.


Quoting Patterner
It seems to me you cannot simulate with paper and pencil, because it is not an active medium.
Not sure what the term 'active medium' means. Googling it didn't help. I can implement a Turing machine armed with nothing but paper and pencil. Per the Church-Turing Thesis mentioned by mistake above, that means I can do anything that is computable, including the running of the simulation.

The papers hold neither a description of how the simulation works nor a novel about the lives of the simulated characters; they are used as memory in the execution of the algorithm, which is doing exactly what the high-powered computer does. Sure, some of the paper needs to hold the algorithm itself, just as computer memory is divided into code space and data space.
The pencil exists to write new memory contents, to change what a paper says to something else, exactly as a computer rewrites a memory location.
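For concreteness, here is a minimal sketch (Python, purely illustrative) of a pencil-and-paper Turing machine: the tape dictionary plays the role of the stack of papers, the rule table is the part of the paper holding the algorithm, and every write to the tape is one pencil stroke. The toy three-state rule set is invented for the example.

[code]
# Minimal Turing machine sketch: 'tape' is the stack of papers, each write
# to it is a pencil stroke, and 'rules' is the paper holding the algorithm.
def run_turing_machine(rules, tape, state="A", head=0, max_steps=100):
    """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = tape.get(head, 0)                    # read the current cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write                            # rewrite the cell
        head += 1 if move == "R" else -1              # move along the paper
    return tape

# Toy rule table: write three 1s to a blank tape, then halt.
rules = {
    ("A", 0): (1, "R", "B"),
    ("B", 0): (1, "R", "C"),
    ("C", 0): (1, "R", "HALT"),
}
print(run_turing_machine(rules, tape={}))             # {0: 1, 1: 1, 2: 1}
[/code]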

If you program everything necessary to simulate consciousness into a computer**, but never hit Run
But I am hitting 'run'. I wouldn't need the pencil if I didn't 'run' it.
Ludwig V March 20, 2024 at 15:01 #889473
Quoting noAxioms
A simulated person would be a person, just in a different universe (the simulated one). It's likely quite a small universe. You seem to define 'person' as a human in this universe, and no, the simulated person would not be that.

I describe human beings, in contexts like this, as our paradigm of a person. That's not exactly a definition - I'm not aware of any definition that is adequate. A paradigm, for me, is an example or sample that one uses in an ostensive definition. However, I think that looking for definitions is inadequate on its own, because the important feature of people is the way we interact with them, which is different from the way we interact with objects.

I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.

My question now is: why not just talk about people living in a different universe? (I'm not going to get picky about the point that the sims you are describing are clearly in the same universe as we are. I would prefer to describe their situation as being in a different lived world from us. Though even that is not quite right.)

Talking of sims, do you regard chess or (American) football as a simulation of war? That is what they say of both (only they don't use the word "simulation".)
wonderer1 March 20, 2024 at 15:34 #889483
Quoting Ludwig V
I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.


:up:
wonderer1 March 20, 2024 at 15:53 #889493
Has anyone here read Stanislaw Lem's The Cyberiad?

Much earlier than Bostrom, and if not the best, at least the funniest thinking on such topics.
Patterner March 20, 2024 at 16:20 #889497
Quoting noAxioms
Not sure what the term 'active medium' means. Googling it didn't help.
That's because I just made it up. Sorry. I'm not well read in almost anything that's ever discussed here. There are many topics in which I'm not at all read. I know what I want to say, but often don't know what words are normally used. I had hoped I had explained it well enough to make what I'm thinking clear.


Quoting noAxioms
But I am hitting 'run'. I wouldn't need the pencil if I didn't 'run' it.


If you say the simulation is not found only in the paper and the squiggles on it, but also in the pencil, the hand holding the pencil, and the mind directing the pencil, you still cannot simulate human consciousness this way. I know human consciousness is a fairly hotly contested issue. But does anyone disagree that it involves multiple processes taking place simultaneously? If we agreed that a process can take place in the scenario you're describing, you cannot write multiple things simultaneously. You can't write two, much less the presumably huge number that are required for human consciousness.

You can write, "The following list of 200 processes occur simultaneously." But writing that doesn't make it happen. That can't happen with things written on paper. It can't happen if you write the words of one process on top of the words of another process. It can't happen if you have different processes on different sheets of paper and stack them on top of one another.

It can't even happen in the mind that is writing these things down. Nobody can hold that many things, much less that many processes, in their mind at the same time. (If someone could, would they need to bother with the paper and pencil?)

At no time, in no sense, is everything needed for human consciousness happening at the same time in the paper and pencil scenario.

If a computer can simulate human consciousness, it would have to be because it can run the same number of processes at once that our brains can run.
Patterner March 20, 2024 at 17:47 #889503
I just corrected the last sentence of my previous post.
noAxioms March 20, 2024 at 21:21 #889538
Quoting Ludwig V
I describe human beings, in contexts like this, as our paradigm of a person.
Remember, we're not worrying about what those running the simulation are calling the simulated things. We're supposing that we are the subjects here, the ones being simulated, and we (and only we) call ourselves human beings or people. That's the only definition that matters.
It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant.

I have to say, if these beings are to be conscious, I wish you luck in getting your project through your research ethics committee.
That's kind of like suggesting that God is unethical to have created a universe that has beings that feel bad, and yes, there are those that suggest exactly that.

My question now, is why not just talk about people living in a different universe?
I wanted a universe that is simulated, instead of being instantiated in some other way. I do suppose that the simulated universe is a part of the container universe, but it's still a separate universe. That's questionable if it's an open simulation, but not all of them are. Much depends on the goal of running the simulation. Bostrom actually posits what that purpose would be, even if it is a totally naive one.

the sims you are describing are clearly in the same universe as we are.
It is the same universe as we are, because I posit that we are the simulated ones. How would we tell if that were true? The topic isn't about how to run a sim. The topic is about what it's like to be one.

Talking of sims, do you regard chess or (American) football as a simulation of war?
There are definitely war elements in both, but that makes it more an analogy than a simulation. They do run simulations of war all the time, pretty much continuously. Yay cold war. Those simulations don't simulate the consciousness of anybody, and I don't think they even have people beyond statistical counts.



Quoting Patterner
I know human consciousness is a fairly hotly contested issue. But does anyone disagree that it involves multiple processes taking place simultaneously?
It is a parallel process, yes. Per relativity, 'simultaneous' is an ambiguous term for events, and no, nothing in any physical system requires spatially separated components of a process to be simultaneous in any frame. Per the principle of locality, one cannot depend on the other (they are outside each other's causal light cone), and thus the interactions can be simulated in any order, serially.
The computer would likely do the same thing, though a truly serial process would be much like a Turing machine, an incredibly inefficient design; but performance was never the point.

If we agreed that a process can take place in the scenario you're describing, you cannot write multiple things simultaneously.
Granted, but there's no need to, per the above comment. Any such transactions can be computed in any order without altering the outcome. Per the principle of locality, no spatially extended process can have a requirement of simultaneous operation.

A regular computer would do it that way as well, but the big weather simulation machines are often very parallel, operating on large vector quantities rather than single numbers (technically referred to as SIMD (single instruction, multiple data) machines). The Cray supercomputers worked that way, though I'm not sure how much modern high-end machines use SIMD architectures. The point is, doing it serially is just slower, but it doesn't produce a different outcome.
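A toy illustration of that claim (Python, purely illustrative, with NumPy's vectorized arithmetic standing in for a SIMD unit): the vectorized update and the element-by-element serial loop land on exactly the same state.

[code]
# Illustrative only: a SIMD-style (vectorized) update and a plain serial loop
# apply the same arithmetic and end in the same state; parallelism only buys speed.
import numpy as np

state = np.arange(8, dtype=float)           # stand-in for one slice of simulation state

vectorized = state * 0.5 + 1.0              # the whole vector updated "at once"

serial = state.copy()
for i in range(len(serial)):                # same arithmetic, one element at a time
    serial[i] = serial[i] * 0.5 + 1.0

print(np.array_equal(vectorized, serial))   # True: the outcome does not depend on order
[/code]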

At no time, in no sense, is everything needed for human consciousness happening at the same time in the paper and pencil scenario.
On the contrary, time in the simulation has nothing to do with time for the guy with the pencil. Our pencil guy can set everything aside for a year and get back to it later. The simulated guy will not notice. No doubt each transaction will have a location/timestamp, and there's nothing preventing multiple transactions (all the transactions in a single iteration of the data) from having the same recorded timestamp. That is pretty much how simulations are done: here is the state at time X, and that state is used to compute the next state at X+Δ, where the increment Δ might be a microsecond or something. It might take a minute for a machine to simulate all the transactions to generate the next state. It might take the pencil guy several lifetimes to do the same thing, so we're going to need that society to train his replacements each time he retires.
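A minimal sketch of that state-to-next-state stepping (Python, purely illustrative; the toy "average your neighbours" rule is invented for the example): every cell of the next state is computed from the current state only, so the order in which the pencil (or the CPU) visits the cells cannot change the result, and all the writes in one pass share the same simulated timestamp.

[code]
# Illustrative fixed-timestep loop: the next state is computed from the current
# state, so the (shuffled) serial visiting order cannot affect the outcome.
import random

def step(state):
    """One simulated tick: each cell becomes the average of itself and its neighbours."""
    n = len(state)
    nxt = [0.0] * n
    order = list(range(n))
    random.shuffle(order)                          # visit cells in an arbitrary serial order
    for i in order:
        left, right = state[(i - 1) % n], state[(i + 1) % n]
        nxt[i] = (left + state[i] + right) / 3.0   # reads the old state only
    return nxt

state = [0.0, 0.0, 9.0, 0.0, 0.0]
for tick in range(3):                              # three ticks of simulated time
    state = step(state)
print(state)                                       # identical no matter how the shuffle fell
[/code]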
Ludwig V March 21, 2024 at 06:36 #889635
Quoting noAxioms
Remember, we're not worrying about what those running the simulation are calling the simulated things. We're supposing that we are the subjects here, the ones being simulated, and we (and only we) call ourselves human beings or people. That's the only definition that matters.
It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant.

So how does this question differ from the brain in a vat, from Descartes' demon or from the supposed possibility that we are all dreaming?
Quoting noAxioms
The topic isn't about how to run a sim. The topic is about what it's like to be one.

So how does this topic differ from the question what it's like to be a bat?

I'm afraid I didn't realize what the philosophical background is (essentially, Bostrom). I don't find the question interesting, because if we posit that there is no way of telling, then there is no way of telling. Similarly, if there's no way to be a bat without becoming a bat, we can't know what it's like to be a bat.
The interesting question is under what circumstances we would accept that something we designed and built is a conscious being, i.e. a (non-human) person.

Quoting noAxioms
That's kind of like suggesting that God is unethical to have created a universe that has beings that feel bad, and yes, there are those that suggest exactly that.

This is the traditional problem of evil. I am one of those who think the problem has no solution and that therefore no such God exists. Of course, that doesn't prove that there are not other gods around, or that it is only the Christian conception of God that is wrong.

Quoting noAxioms
There are definitely war elements in both, but that makes it more an analogy than a simulation. They do run simulations of war all the time, pretty much continuously.

I wish I knew what the difference is between a simulation and an imitation, a simulation and a mimicry, a simulation and an analogy, and a simulation and a model.
noAxioms March 21, 2024 at 15:12 #889688
Quoting Ludwig V
So how does this question differ from the brain in a vat, from Descartes' demon or from the supposed possibility that we are all dreaming?
Nothing like dreaming.
VR has many of the same issues as the first two. The actual simulation hypothesis does not suggest an artificial sensory stream, except necessarily at system boundaries.

So how does this topic differ from the question what it's like to be a bat?
We are not bats. It's not about what it's like to be something we're not. We know what it is like to be a human. The question is, how might we (being the subject of simulation) detect that fact?

I'm afraid I didn't realize what the philosophical background is (essentially, Bostrom).
Bostrom is half the story. Most popular fictions depict VR, not a sim. Matrix is a good example of a VR, however implausible.

I don't find the question interesting, because if we posit that there is no way of telling, then there is no way of telling.
I didn't posit that there are no ways of testing. But depending on the quality of the simulation, it might get difficult. The best test is probably to recognize that there must be limits, and to test those limits.

The interesting question is under what circumstances we would accept that something we designed and built is a conscious being, i.e. a (non-human) person.
The 'can a computer think' topic was sort of about that. I suppose we could copy our own design and build an actual biological human, but by some means other than the normal way. Anything else is going to be trivially detectable. Not sure how that 'built' person would get loaded with experience. It's not like you can just upload software to a human. Doesn't work that way.

From that topic:
Quoting Relativist
The Turing Test is too weak, because it can be passed with a simulation. Simulating intelligent behavior is not actually behaving intelligently.

There is mention of the Turing test in earlier posts here. Passing it with a simulation is doing it the hard way. We're getting close to something that can pass the test now, but nowhere close to actually simulating the way a human does it. Perhaps you, like Ludwig here, mean 'imitation', which anything that passes the Turing test is doing by definition.

And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak. What they have arguably already does this, since a machine can exhibit intelligent behavior (even more so than us) long before it can successfully imitate something that it isn't. I mean, I cannot convince a squirrel that I am one of them, but that doesn't mean they're more intelligent than me. I've done it to birds, speaking their language well enough for them to treat me like one of their own kind. It's not hard with some birds.


Quoting Ludwig V
This is the traditional problem of evil.
Pain is not evil. I'd never want to change myself to be immune from pain. It serves an important purpose, and not an evil one.
The problem of evil argument against God only has teeth if you posit a God that has and follows the same moral values as we envision, such as it being an act of evil to create something humans deem evil.

I wish I knew what the difference is between a simulation and an imitation, a simulation and a mimicry, a simulation and an analogy, and a simulation and a model.

A statue, puppet, or a speaker blaring bird-of-prey noises to scare away geese, or a wooden duck lure, are all imitations/mimicry.

A video game is a VR, which, by definition, feeds artificial sensory input to the real player.

Conway's Game of Life (the description of it) is a dynamic model. The execution of the rules of that model (on a computer, paper, pebbles on a go board, whatever) is a simulation.
They make computer models of cars. The model is a description of the physical car: what parts are where, what they're made of, and how they're connected. The simulation of that model might throw it into a solid wall, or into another car at some high speed, to learn how the initial state in the model deforms under the stresses of that collision. Simulations typically serve some purpose for whoever runs them.
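To make the model/simulation distinction concrete, here is a minimal sketch (Python, purely illustrative): the step() function is the Life model written down; actually calling it on a starting grid is the simulation.

[code]
# The step() function is the written-down model (the rules of Life);
# repeatedly applying it to a grid is the simulation.
from collections import Counter

def step(live_cells):
    """live_cells is a set of (x, y) coordinates; returns the next generation."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):                      # four ticks: the glider shifts one cell diagonally
    glider = step(glider)
print(sorted(glider))
[/code]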

Relativist March 21, 2024 at 16:21 #889695
Quoting noAxioms
And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak.

The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation. This is feasible today at least within a limited range of conversation topics. What more are you looking for? A wider range of topics? Regardless, human responses are the product of thought processes (including feelings, reactions, influenced by motivations that could change during the course of the conversation). Example: a human can express true empathy; a computer can produce words that sound like it's expressing empathy - but it actually is not. The human may change her behavior (responding differently) based on this; will the computer?






Patterner March 21, 2024 at 22:04 #889781
Quoting wonderer1
Has anyone here read Stanislaw Lem's The Cyberiad?

Much earlier than Bostrom, and if not the best, at least the funniest thinking on such topics.
Never heard of it. But the first few paragraphs are already a riot!

Ludwig V March 22, 2024 at 08:16 #889871
Quoting noAxioms
It is the people in the simulation that are tasked with finding evidence that they are the subject of a simulation. What we're called by the occupants of the reality running the simulation is irrelevant.

I think that you are not talking about the same question as Relativist. (See below.) You are positing that it is people who are "in" the sim - i.e. (I assume) being fed the data.
Plus, if I've understood you, you are positing that the subjects cannot communicate with whatever is running the sim - they merely seem to themselves to communicate.

Quoting noAxioms
And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak.

Here, you are positing that you are starting with a machine. In that case, the question is whether the behaviour is really intelligent or merely seems to be intelligent. But if it's a machine, we already know that it is not intelligent. Actually, I don't think that is right, but even if the response was intelligent, it does not follow that the machine is conscious or sentient.

Quoting Relativist
The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation.

I think that you are not talking about the same test as noAxioms. (See above). Plus you are positing that it is a machine that is responding, so you are begging the question. (As Turing also does in his formulation of the test.)

The fundamental point is whether we can even formulate the question without begging it. We have to identify the subject of the Turing test as a machine or a person. Whichever we say, we will interpret the responses in different ways. Whatever the machine responds, we will interpret the response as that of a machine - and that will be true. Whatever the person responds, we will interpret the response as that of a person - and that will be true. There is no magic empirical bullet of evidence that will settle the issue.
noAxioms March 22, 2024 at 14:26 #889970
On the Turing test discussion:

Quoting Ludwig V
I think that you are not talking about the same question as Relativist. (See below).
Indeed. I dragged in Relativist since the topic of the Turing test came up, and he suggests that the test is insufficient to determine intelligence.
The Turing test has nothing to do with a simulated reality, but rather with a device that imitates a human's text responses, as a test of intelligence.

Quoting Ludwig V
And if a machine passes the test (it's a text test, so there's no robot body that also has to be convincing), then it exhibits intelligent behavior. The test is not too weak.
— noAxioms
Here, you are positing that you are starting with a machine. In that case, the question is whether the behaviour is really intelligent or merely seems to be intelligent.
Here again, the quoted comment concerns the Turing test, not the simulation hypothesis.

Quoting Ludwig V
even if the response was intelligent, it does not follow that the machine is conscious or sentient.
The Turing test is not a test for either of those. There's not even a test that can tell if your neighbor is conscious/sentient. If there was, much of the p-zombie argument would be immediately settled by some empirical test. The whole point of the term 'conscious' is that it is always defined in such a way that is immune from empirical evidence.

The fundamental point is whether we can even formulate the question without begging it.
The question is simple. I am communicating with some unknown entity via text messages, much like we do on this forum. The question is, is that with which I am communicating a human or not?

I don't see begging in that wording. I am a moderator on a different forum, and one job is to spot new members that are not human. They're not allowed. I've spotted several, but it's getting harder.
I've even been charged human health insurance rates for a diagnosis provided by a machine, and I protested it at the time. They provided no service at all to me, but they charged me anyway.


Quoting Relativist
The Turing Test is passed by fooling people into believing there's a human giving responses in a conversation.
In a text conversation, yes. That's pretty hard to do, and we're not there yet.

This is feasible today at least within a limited range of conversation topics.
Well, one of the ideas is to go outside those topics. I mean, none of the chat bots have long-term memory, so one of their traits is that they don't ask any questions of their own, since they cannot learn. I suppose clarification requests of questions posed to them might count as asking something.

If the entity were to pass the test, then nothing is off limits. Be insulting to it, or annoying. It should react accordingly. If it doesn't, it's not passing the test. If it does, it is probably already considerably more intelligent than humans, since it requires far more smarts to imitate something you are not than it does to just be yourself. The entity is not human, and to imitate human responses, especially those involving human emotions, would require superior ability. It doesn't require the entity to actually have human emotions. It is not a test of 'is it human?', but rather 'is it intelligent?'.

What more are you looking for?
You claimed the test is too weak. I claim otherwise. If it passes, it has long since surpassed us in intelligence. As a test of human-level intelligence, it is more than enough.

a computer can produce words that sound like it's expressing empathy - but it actually is not.
It's not empathy, but it very much is expressing empathy. People are also quite capable of expressing empathy where there is no actual empathy, such as the politicians that send their 'thoughts and prayers' to mass-shooting families, but do nothing about the problem.


On the Simulation discussion:

Quoting Ludwig V
You are positing that it is people who are "in" the sim - i.e. (I assume) being fed the data.
In a VR, yes, exactly that. People are real, and are fed experience of a simulated reality. Every video RPG does this.
In the simulation case, there is no experiencer in the world running the sim. There are only fully simulated people inside 'the system', and if that system is not closed, the system needs to be fed artificially generated causes from outside. So for instance, if you look up, you see imitation stars, not fully simulated stars.

This is one of the reasons Tomb Raider is less abusive of the processing power of your gaming machine than something like Minecraft. The former is set in a tomb, a confined, limited region in need of simulation. Minecraft, on the other hand, is outdoors, and my son needs to limit his render distance, else the computer can't generate the background as fast as it needs to. So distant things suddenly appear when you get close enough to them, very unlike reality, where there is unlimited sight distance. This is only a problem for a VR, where speed of computation matters.

Plus, if I've understood you, you are positing that the subjects cannot communicate with whatever is running the sim
No. If you can do that, you very much are aware of the creator/creation status. It would be like talking to a god. In a VR, you can talk to the other players, and you can talk to the NPCs if the NPCs have enough intelligence to talk, but you can't talk to anybody outside the simulated universe.

Relativist March 22, 2024 at 15:11 #889980
Reply to noAxioms Reply to Ludwig V
Thanks for clarifying the question- sorry I had missed it.

Regarding the question "are we in a simulation?" I interpret this as similar to "is solipsism true?" It's impossible to prove one way or another, but nevertheless - it's rational to believe we are not.

Regarding the Turing test: it has been passed - to a degree. See: https://www.reading.ac.uk/news-archive/press-releases/pr583836.html

Conversely, humans have "failed" the Turing test (https://www.nbcnews.com/news/amp/ncna163206) -- observers inferred that a human's responses were not human.

Regarding "true" AI: IMO, it would entail a machine engaging in thoughts, learning as we do, processing information as we do, and producing novel "ideas" as we do. Artificial Neural Networks (ANNs) seem the most promising way forward on this front. Progress would not be measured by fooling people, but by showing there are processes that work like our brains do. Benefits include confirming our theories about some of the ways our brains work. The long game: success makes the "simulation hypothesis" that much more incredible, but never impossible.
Ludwig V March 22, 2024 at 17:40 #890037
Quoting noAxioms
Here again, the quoted comment concerns the Turing test, not the simulation hypothesis.

Quite so. But I notice that you don't disagree with what I say. My argument is that if one starts the Turing test by specifying that the subject is a machine, the test cannot provide evidence to the contrary, and this is the version that I have most commonly seen. But if one did start by specifying that it is a person, one would not get any evidence to the contrary either. (If the responses from the machine seem to be intelligent or sentient or whatever, we have to decide whether the responses really are intelligent or sentient or whatever.) Knowing what the subject of the test is governs one's interpretation of the replies, which consequently can't provide evidence either way. That applies also to your version, in which one doesn't know whether the subject is machine or person (and to a version I've seen that provides two subjects, one machine and one human).
The point is that it is not a question of evidence for or against without a context that guides interpretation of the evidence.

Quoting noAxioms
If there was, much of the p-zombie argument would be immediately settled by some empirical test.

Quite so, and the set-up specifies that there can be no empirical evidence. But then, the argument is devised as a thought-experiment with the aim of persuading us to accept that there are qualia, or some such nonsense.

Quoting noAxioms
The whole point of the term 'conscious' is that it is always defined in such a way that is immune from empirical evidence.

Quite so. That's why the attempt to distinguish between the two on the basis of empirical evidence (Turing test) is hopeless.

Quoting noAxioms
I've even been charged human health insurance rates for a diagnosis provided by a machine, and I protested it at the time.

That's capitalism for you. But it might turn out that the machine is more successful than human beings at that specific task.

Quoting noAxioms
If it does, it is probably already considerably more intelligent than humans, since it requires far more smarts to imitate something you are not than it does to just be yourself.

I think that a machine can diagnose some medical conditions. Whether it can imitate diagnosing any medical conditions is not at all clear to me.

Quoting noAxioms
I am a moderator on a different forum, and one job is to spot new members that are not human.

I frequent another forum which developed criteria for sniffing out AI. However, I may be wrong, but I don't think there is any follow-up on whether people's judgements are correct or not. Do you get confirmation about whether your "spots" are correct or not?

Quoting noAxioms
The entity is not human, and to imitate human responses, especially those involving human emotions, would require superior ability.

Parrots imitate talking. Are they smarter than human beings?

Quoting noAxioms
There are only fully simulated people inside 'the system',

I thought you said that there were people inside the system. Now I'm really confused.

Quoting Relativist
Progress would not be measured by fooling people, but by showing there are processes that work like our brains do.

Yes, the appeal to how things work inside is a popular refuge in these uncertain times. But we don't (can't) rely on our limited understanding of how we work to establish what is the same and what is different. Even if we could, I would not be persuaded to rule out the possibility of personhood simply on the grounds of different internal physical structures. The output is what counts most.
noAxioms March 22, 2024 at 21:20 #890067
Quoting Relativist
Regarding the question "are we in a simulation?" I interpret this as similar to "is solipsism true?" It's impossible to prove one way or another, but nevertheless - it's rational to believe we are not.
In that sense, the two are similar. Also, quite often, in both VR and a true sim, solipsism is true, but you know it because there are clues. We here are envisioning a scenario where the simulated reality is good enough that those clues get harder and harder to find.

Regarding the Turing test: it has been passed - to a degree.
Cool. I wasn't aware. Nice controlled test, and kind of pre-chat-bot, which is maybe a good thing. I wonder how trained the judges were; where was the focus of their questioning? To pass today with tools like ChatGPT around, you'd have to dumb down the machine answers, since it 'knows' more than any human, even if the majority of what it knows is wrong.

Conversely, humans have "failed" the Turing test (https://www.nbcnews.com/news/amp/ncna163206) -- observers inferred that a human's responses were not humans.
It would seem fairly easy to pretend to be an unintelligent machine, but I presume these people were not attempting to appear nonhuman.
I administer a small Turing test all the time for unsolicited callers on the phone. Most phone bots record, but don't parse, any of your responses, so usually one small question is enough. That will change soon.
The voice-response ones (with limited options to traverse a menu) comprehend profanity, the use of which is often the fastest way to get a human online.

Regarding "true" AI: IMO, it would entail a machine engaging in thoughts, learning as we do, processing information as we do, and producing novel "ideas" as we do.
Agree. The game playing AI does all that, even if it is confined to game playing. Early chess or go playing machines were like self-driving cars, programmed by the experts, using the best known strategies. Then they came up with a general AI (like AlphaZero) that wasn't programmed at all to play anything specific. There was only a way to convey the rules of the game to it, and it would learn on its own from there. After a few days of practice, it could beat anybody and any of the specifically programmed machines. That definitely meets all your criteria.
It doesn't pass the Turing test, but given enough practice, something like it might. Still, you can't gain a human experience through introspection, nor via training data from the web. It would have to actually 'live' a life of sorts, and questions to test it should focus on life experiences and not educational facts.


Progress would not be measured by fooling people, but by showing there are processes that work like our brains do.
Totally agree. Progress by imitation has its limits, but since a computer is not a human, to pass a Turing test it will always have to pretend to be something it isn't, which is hard to do even well after it has surpassed us in intelligence.

Benefits include confirming our theories about some of the ways our brains work.
That is more relevant to this topic. To demonstrate how our brains work, you (probably) have to simulate it. To simulate it, you need to give it state and an environment (all this was brought up in prior posts). The state in particular is not exactly something you can make up. It needs to have grown that way through experience, which means a quick sim won't do. You have to start it from well before birth and run this really complicated simulation through at least years of life, providing it with a convincing environment all the while. Tall order. It would presumably take centuries for a single test to run, during which the hardware on which it is running will be obsoleted multiple times.

Thanks for joining the topic.


Quoting Ludwig V
My argument is that if one starts the Turing test by specifying that the subject is a machine
Then the test is invalid, I agree. If you click the link about the test being passed, the judges did not know which conversations were machines and which were people. They did know that there were five of each. Everybody (judges, machines, human subjects) knew it was a test.

That's why the attempt to distinguish between the two on the basis of empirical evidence (Turing test) is hopeless.
The Turing test was never intended as a test of consciousness.

But it might turn out that the machine is more successful than human beings at [medical diagnosis]
True. Machines can detect skin cancer better than any human, and that's worth paying for (but there's probably a free app). In my case, the non-doctor tech that saw me googled my symptoms and read back to me verbatim the same information Google gave me at home, but leaving off the part where it said "see your doctor if you have these symptoms". Obviously no actual doctor was consulted.

I think that a machine can diagnose some medical conditions. Whether it can imitate diagnosing any medical conditions is not at all clear to me.
A 3-year-old can imitate giving a diagnosis. It's how daddy gets covered by 20 bandaids. And if a machine can give a diagnosis (they can), then why would it have to imitate an ability it actually has?

Do you get confirmation about whether your "spots" are correct or not?
A few are false positives, which are often confirmed by a simple PM to them. The bots don't hold conversations, but rather give single replies to a question, and no more. Short, and often correct but obvious and not particularly helpful. If you reply to a bot-post, the bot will probably not notice it.

Some are really easy, and can be spotted before they ever submit a single post. Russia was very big on bots that created sometimes hundreds of sleeper accounts that never posted anything. I banned many of them en masse. Those have dried up, since I think Russia closed the internet connection to the world so the public cannot see how the rest of the world views their war.

Parrots imitate talking. Are they smarter than human beings?
No more than is a tape recorder. Parrots don't pass a Turing test.

I thought you said that there were people inside the system. Now I'm really confused.
In the Simulation Hypothesis, we are the simulated people, the ones inside the system. Do not confuse this with the VR hypothesis where the people are real and only their experience is artificial. Read the OP if you don't get this distinction.





Ludwig V March 22, 2024 at 22:52 #890084
Quoting noAxioms
we are the simulated people

So I have to imagine myself as being a sim - and hence not a person - and not knowing it?

Quoting noAxioms
The Turing test was never intended as a test of consciousness.

So what was it intended to be a test for? (I assume you mean "intended by Turing"?)
noAxioms March 23, 2024 at 05:33 #890134
So I wanted to address the Simulation Hypothesis from Bostrom directly.
I quote only the abstract and a few parts of the intro.

[quote=BostromSimHypothesis]This paper argues that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.[/quote]
Posthuman is defined here:
[quote=BostromSimHypothesis]The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.[/quote]
The trichotomy is reasonable, but worded in a misleading way. Point 1 makes it sound like this preposterous posthuman state is somehow inevitable if the human race doesn't meet an untimely demise along the way. This is nonsense, since the posthuman state described is totally unreasonable, and human technology seems heavily dependent on the non-renewable resources upon which this gilded age is built.
The computer envisioned is a molecular machine that isn't electronic but works with levers and gears and such, very small. But it needs a huge structure to supply energy and dissipate heat. The latter isn't the problem, but a mechanical computer made of individually placed atoms would be phenomenally unreliable, and would be very size-constrained. How does one fetch data from distant locations using levers and shafts and such? The data set required by the description would require far more molecules than the described device would have.

The third point seems to suggest that all this fictional processing power would be regularly pressed into service doing what he calls 'evolutionary history', a simulation of our ancestors. This is not just unlikely, but actually impossible.
Say the people from 100 centuries in the future want to simulate the world of today. To do that, they'd need to know the exact state of the world today, down to almost the molecular level, and I know for a fact that nobody has taken such a scan. Furthermore, any simulation of that state would track what actually happened for a few minutes or hours at best and then diverge significantly from it. So a simulation of one's own history cannot be done. At best, to simulate 'evolutionary history', one might set the world to the state of 20 million years ago, with many of the species known to exist at that time, and see what evolves. It won't be themselves, but if those running the simulation are not human, then we're the unexpected thing that evolves instead of them. That's plausible, but it isn't a simulation of their own history.

More problems arise with the claim that the simulation will itself contain, later in its simulated history, the high-performance machines that run such simulations. He is, after all, claiming that there are simulations being run by the simulations. He seems to have no idea how inefficient this would be: it takes millions of instructions to simulate a single interaction, coupled with all the side effects. I've written code to simulate the running of code (for profiling purposes). It didn't simulate transistors or anything; it just needed to assume that the processor works correctly and simulate at the instruction level. It still took thousands of instructions to simulate one instruction.
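As a purely illustrative sketch of what instruction-level simulation looks like (this is not the profiler mentioned above; the toy instruction set is invented for the example), note how each simulated instruction costs a whole round of host-level fetch, decode, dispatch, and bookkeeping:

[code]
# Toy instruction-level simulator: every simulated instruction costs many
# host instructions for fetch, decode, dispatch, and bookkeeping.
def simulate(program, max_steps=10_000):
    regs = {"r0": 0, "r1": 0, "pc": 0}
    executed = 0                                   # profiling: count simulated instructions
    while regs["pc"] < len(program) and executed < max_steps:
        op, *args = program[regs["pc"]]            # fetch + decode
        if op == "set":                            # set rX to an immediate value
            regs[args[0]] = args[1]
        elif op == "addi":                         # add an immediate to rX
            regs[args[0]] += args[1]
        elif op == "jnz" and regs[args[0]] != 0:   # jump if rX is non-zero
            regs["pc"] = args[1]
            executed += 1
            continue
        regs["pc"] += 1
        executed += 1
    return regs, executed

# Add 2 three times via a loop: the answer is 6, at a cost of 11 simulated instructions.
program = [
    ("set", "r0", 3),      # loop counter
    ("set", "r1", 0),      # accumulator
    ("addi", "r1", 2),     # r1 += 2
    ("addi", "r0", -1),    # r0 -= 1
    ("jnz", "r0", 2),      # loop back while r0 != 0
]
print(simulate(program))
[/code]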

That's just me tearing apart the abstract. The article goes on to suggest impossible future computer speeds, and tasks that are more than even that fictional processor could handle. There's a section specifically about substrate independence, with which I agree. It essentially says that doing it with paper and pencil, electronics, mechanical parts, etc. all works the same. The outcome of the simulation in no way depends on what substrate is used.


He estimates that 10[sup]33[/sup]-10[sup]36[/sup] instructions are needed to do one of his simulations of human history. Apparently only the people are simulated, and the rest (animals, plants, geology, and much worse, all the computers) are only imitated, not simulated. He justifies this small number with 100 billion humans, 50 years per human, 30M seconds per year, and 10[sup]14[/sup]-10[sup]17[/sup] brain ops per second, which comes to 15 times the figure stated above.
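For reference, multiplying out the figures quoted above (nothing here beyond those numbers):

[code]
# Quick check of the figures as quoted above:
# 10^11 humans x 50 years x 3x10^7 seconds/year x 10^14..10^17 brain ops/second.
per_human_seconds = 50 * 3e7
low = 1e11 * per_human_seconds * 1e14
high = 1e11 * per_human_seconds * 1e17
print(f"{low:.1e} to {high:.1e}")   # 1.5e+34 to 1.5e+37, i.e. 15x the quoted 10^33-10^36 range
[/code]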

OK, it takes a lot of instructions to simulate all that goes on during a single brain op, and all that goes on between them. To simulate world history, it seems far more than just brains need to be simulated. At 100 billion people, only about a century or so of history can be simulated, nowhere near enough to get to the point of them running such simulations of their own.
Why 50 years? Is life expectancy expected to go down? What's the point of simulating minds at all when imitating people (as is done for everything else) is so much more efficient? The only reason is that Bostrom's idea doesn't hold water if you don't presume this needless complication.

Given future technology, simulation of a small closed system (maybe a person, or an island village) can be done. Actual world history? No actual history of any person, let alone all of them, can be done. Why does Bostrom choose to ignore this?


Quoting Ludwig V
So I have to imagine myself as being a sim and not knowing it?

Yes. That's Bostrom's whole point. He says we're probably all simulated, but it's based on the anthropic reasoning above, which makes many, many unreasonable assumptions.
Ludwig V March 24, 2024 at 07:36 #890346
Reply to noAxioms
Philosophical discussions often start so many hares that I find myself trying to juggle several different lines of thought at the same time. Back to the matter in hand is a very good idea. I had looked up the original idea, but had little idea about how to tackle it. This was very helpful. Thank you.

I take your point about the limitations of what we could ever do. So, this being philosophy, I try to take the argument a little further.
Sticking to the question of what is practical, for the moment, couldn't one adopt the kind of approach that the weather forecasters (and, I believe, physicists trying to work out fluid dynamics, which is probably the same problem) have adopted? It seems to work, within its limits. Of course, it doesn't raise the scary possibilities about our individual lives that we have been discussing, but it could provide evidence for or against Bostrom's hypotheses.
Comment - this possibility highlights for me a question about Bostrom's first two hypotheses. They seem to me to be empirical. But I don't see how one could ever demonstrate that they are true or even plausible without some sort of evidence. Without that, one could never demonstrate any consequence of them as sound, as opposed to valid. En masse simulations could provide such evidence.

That would require us to define what is meant by "post-human" and "extinction". Then we would have to deal with the difference between two different possibilities. We may go extinct and be replaced (or ousted) by some other form of life or we may evolve into something else (and replace or oust our evolutionary predecessors).
Problem - Given that inheritance is not an exact copy and the feed-back loop of survival to reproduction works on us just as surely as on everything else, can we exactly define the difference between these two possibilities? They say that birds evolved from dinosaurs, and that mammals took over as dominant species from dinosaurs. Which possibility was realized for dinosaurs? Both, it seems.
Another problem. Given that a feed-back loop is at work on these phenomena, can prediction ever be reliable? (This is the same problem as economics faces, of course).

The third hypothesis suffers, for me, from the fact that it is very hard to see exactly how to draw the distinction between living in a sim and living as we do. (I mean the proposition that we are already brains in a vat.) One difference is that we seem able to distinguish between reality as it is and reality as it seems to be - and it is our experience that enables us to do so. (That means recognizing that our experience is not a complete and consistent whole, but presents itself as inconsistent and incomplete.) The brain-in-a-vat scenario not only assumes that our experience is a complete and consistent whole, but imagines a different and wildly implausible actual reality - though not one that is in principle undiscoverable - without a shred of evidence.
wonderer1 March 24, 2024 at 08:43 #890350
Quoting Ludwig V
Comment - this possibility highlights for me a question about Bostrom's first two hypotheses. They seem to me to be empirical. But I don't see how one could ever demonstrate that they are true or even plausible without some sort of evidence. Without that, one could never demonstrate any consequence of them as sound, as opposed to valid. En masse simulations could provide such evidence.


The second premise - any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof) - seems obviously true to me.

To be clear, I am looking at the issue in terms of something like modelling at least a significant subsection of the world (say a solar system) in terms of subatomic particles, while needing to make use of subatomic particles in creating the simulation.

The simulator would need to consist of more particles than the system which is being simulated. That's a rather fundamental problem. In practice, only things that are simpler than the simulator (or things treated simplistically) can be simulated.

It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in, or that it will be a routine matter for a post-human civilization to take all of the matter in a big solar system and use it to model a smaller solar system.
noAxioms March 24, 2024 at 15:06 #890416
Quoting Ludwig V
couldn't one adopt the kind of approach that the weather forecasters (and, I believe, physicists trying to work out fluid dynamics, which is probably the same problem) have adopted?
The weather is closer. Fluid dynamics of a system in a stable state (say, water moving through a pipe or over a dam spillway) needs a description of that state, a calculus task. If it is dynamic (a simulation of water waves), then it's more complicated, closer to the weather.

No simulation of the weather will produce an accurate forecast a month away, no matter how accurately the initial state is measured. Trends can be predicted, but getting the actual weather right at location X at time T is not going to happen. Similarly, no simulation of people is going to predict them doing what history says actually happened, no matter how accurate the initial state.
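A purely illustrative sketch of why (using the logistic map, a standard toy for chaotic systems, not a weather model): two initial states that differ by one part in a trillion end up disagreeing completely within a few dozen steps.

[code]
# Sensitive dependence on initial conditions: two nearly identical starting
# states of a chaotic map diverge completely after a modest number of steps.
x, y = 0.4, 0.4 + 1e-12
for step in range(100):
    x, y = 3.9 * x * (1 - x), 3.9 * y * (1 - y)
    if abs(x - y) > 0.1:
        print(f"trajectories disagree badly by step {step}")
        break
[/code]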

One does not improve weather forecasting by simulating the formation of individual raindrops while simulating nothing else at that level of detail, yet Bostrom is suggesting that such an inefficient choice would be made on a regular basis, for seemingly no purpose except that his argument depends on it. He clearly isn't a programmer.

Comment - this possibility high-lights for me a question about Bostrom's first two hypotheses.
The entire paper is one hypothesis. There are not more that I am aware of.
Your description that follows doesn't help me figure out what you're counting as 'the first two' of presumably more than two hypotheses.

That would require us to define what is meant by "post-human" and "extinction".
I posted his definition of 'posthuman', which is, in short, a level of technology capable of running the numbers he underestimates, and far worse, capable of simulating a posthuman set of machines doing similar simulations.
As for 'extinct', there would only be two possible definitions: 1) No being in the universe is biologically descended from what is the human species today. This of course is totally undefined, since if we're simulated, the actual humans of 2024 may not appear human at all to us. Much depends on what era the simulation uses for its initial state.
2) The other definition is that no entity in the universe has the human race of today as a significant part of the causal history of its existence. In short, if there are human-created machines that have replaced us, then humans are technically still not extinct. This is very consistent with his choice of the term 'posthuman'. One can imagine the machine race actually getting curious about their origins, and, knowing about humans and presumably having some DNA still around, they might run simulations in an attempt to see how machines might emerge from that. Of course, the simulations would produce a different outcome every time, sometimes with humans going extinct quickly, or losing all technology and reverting essentially to a smart animal, much like how things were before people started digging metals out of the ground.

Then we would have to deal with the difference between two different possibilities. We may go extinct and be replaced (or ousted) by some other form of life or we may evolve into something else (and replace or oust our evolutionary predecessors).
There you go. You seem to see both routes. The third path is extinction, or simple permanent loss of technology.

Given that inheritance is not an exact copy and the feed-back loop of survival to reproduction works on us just as surely as on everything else, can we exactly define the difference between these two possibilities?
What two possibilities? Humans evolving into something we'd not consider human by today's standard? Many species do that all the time. The other possibility is being 'ousted', as you put it: our biological line is severed, as happens to nearly all biological lines given time.

They say that birds evolved from dinosaurs, and that mammals took over as dominant species from dinosaurs.
Good example. There are no dinosaurs (which, unlike humans, is a collection of species). The vast majority of those species were simply ousted. They have no descendants. But some do, and the alligators and birds are their descendants. They are not dinosaurs because none of them is sexually compatible with any species that was around when the asteroid hit. They are postdinosaur.

Which possibility was realized for dinosaurs?

It depends on the species, or the individual. Mom has 2 kids. One of those has children of his own, and the other is ousted, a terminal point in the family tree.

Another problem. Given that a feed-back loop is at work on these phenomena, can prediction ever be reliable?
Prediction of what? A simulation of history makes no predictions. A simulation of the future is needed for that, hence the weather predictors.
To guess at the question, no simulation of any past Earth state will produce 'what actually happens', especially if that simulation is of evolutionary history. There is for instance no way to predict what children anybody will have, or when, so none of the famous people we know will appear in any simulation. Again, Bostrom seems entirely ignorant of such things, and of chaos theory in general.

The third hypothesis suffers, for me
You really need to tell me what these hypotheses are, because I know of only the one. Two if you count the VR suggestion, but that doesn't come from Bostrom. I know of several who support a VR view, but none who has attempted a formal hypothesis around it.

Anyway, Bostrom posits nothing that is equivalent to a brain in a vat. That is more appropriate to a VR discussion.


Quoting wonderer1
The second premise - any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof) - seems obviously true to me.
It's the second possibility. He says one of the three must be true. It's not a list of three premises.
I agree that, granted this super-improbable posthuman state, indeed nobody is going to run a simulation of the history that actually took place. It just cannot be done, even with the impossible technology required.

The simulator would need to consist of more particles than the system which is being simulated.
If it is simulating at the particle level, yes. I can run an easy simulation of the planetary motions without simulating each bit. Each planet/moon/asteroid can effectively be treated as a point object, at least until they collide.
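A minimal sketch of that point-object approach (Python, toy units and values, not real ephemerides): a few lines of semi-implicit Euler integration keep a 'planet' in orbit around a 'sun' without modelling any internal structure at all.

[code]
# Point-mass orbit sketch: the planet is a single point, not a pile of particles.
import math

G, M = 1.0, 1.0                      # toy gravitational constant and central mass
x, y = 1.0, 0.0                      # planet starts at distance 1
vx, vy = 0.0, 1.0                    # roughly circular orbital speed for G*M = 1
dt = 0.001

for _ in range(10_000):              # advance the state in small time steps
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r**3, -G * M * y / r**3   # point-mass gravity
    vx, vy = vx + ax * dt, vy + ay * dt             # update velocity first...
    x, y = x + vx * dt, y + vy * dt                 # ...then position (semi-implicit Euler)

print(round(math.hypot(x, y), 2))    # still 1.0 to two decimals: the point planet stays in orbit
[/code]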

That's a rather fundamental problem. In practice, only things that are simpler than the simulator (or things treated simplistically) can be simulated.
Yes, and Bostrom claims several levels of depth, meaning the simulation is simulating the machines doing simulations.

It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in,
Yes. If the goal was to simulate consciousness, they'd probably do one person, or a small isolated community (a closed system). And it wouldn't be a simulation of anybody real, but rather just a learning tool to show that a simulated person behaves like we do. If it worked, it would be a big blow to the dualists, but I'm sure they'd find a way to explain the results away.
The dualists can similarly deal a pretty fatal blow to the physicalists, but they choose not to pursue such avenues of investigation, which to me sounds like they don't buy their own schtick.


Ludwig V March 24, 2024 at 18:43 #890462
Quoting wonderer1
It seems to me that the person who would seek to disprove the second premise would need to prove that consciousness can arise in a simulation of something much more simplistic than the world we find ourselves in, or that it will be a routine matter for a post-human civilization to take all of the matter in a big solar system and use it to model a smaller solar system.

I didn't pay enough attention to "extremely unlikely" in this hypothesis/axiom/premiss. That can't be verified or falsified in any of the usual ways. Your arguments are suggestive in support of it. But I can't see them as conclusive.
I agree also that a claim that consciousness can arise in certain circumstances is probably unfalsifiable. But it can be verified, if we find a case where consciousness does arise in those circumstances.
The contradictory of this proposition is "any posthuman civilization is certain to run a significant number of simulations of their evolutionary history (or variations thereof)", which is not meaningless or self-contradictory, so "any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof)" cannot be a priori.
So I classify the proposition under discussion as empirical.
Ludwig V March 24, 2024 at 19:37 #890475
Quoting noAxioms
The entire paper is one hypothesis. There are not more that I am aware of.

I wasn't sure whether to call the three numbered propositions you quoted hypotheses or axioms or what. I see that you call them possibilities, which is fine by me.

Quoting noAxioms
I posted his definition of 'posthuman', which is, in short, a level of technology capable of running the numbers he underestimates, and far worse, capable of simulating a posthuman set of machines doing similar simulations.

You did indeed. I didn't pay enough attention. Sorry. On the other hand, I'm not sure that it really matters very much whether we classify a civilisation with that technology as post-human or not.

Quoting noAxioms
There are no dinosaurs (which, unlike humans, is a collection of species). The vast majority of those species were simply ousted. They have no descendants. But some do, and the alligators and birds are their descendants. They are not dinosaurs because none of them is sexually compatible with any species that was around when the asteroid hit. They are post-dinosaur.

You are right. I think that's a better articulation than mine. I reckoned that picking a specific species would have the same problem as dinosaurs, since there can be many sub-species of a given species, not to mention varieties of species and sub-species.

Quoting noAxioms
Prediction of what? A simulation of history makes no predictions. A simulation of the future is needed for that, hence the weather predictors.

Yes. This is muddled. Sorry. I was thinking of Bostrom's predictions.

Quoting noAxioms
To guess at the question, no simulation of any past Earth state will produce 'what actually happens', especially if that simulation is of evolutionary history. There is for instance no way to predict what children anybody will have, or when, so none of the famous people we know will appear in any simulation. Again, Bostrom seems entirely ignorant of such things, and of chaos theory in general.

Yes, that is what I was after. Thanks.
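(To make the chaos point concrete, a minimal sketch using the standard logistic map rather than any real physics: two runs that start all but identically are unrecognisably different within a few dozen steps, which is the sense in which a re-run of history would not contain the same people.)

```python
# Sensitive dependence on initial conditions, logistic map with r = 4.0
# (a standard chaotic parameter). Purely illustrative, not a model of history.
# Two trajectories starting 1e-12 apart become uncorrelated within ~40 steps.

r = 4.0
a, b = 0.3, 0.3 + 1e-12
for n in range(60):
    a = r * a * (1.0 - a)
    b = r * b * (1.0 - b)
    if n % 10 == 9:
        print(f"step {n+1:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a-b):.2e}")
```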

Quoting noAxioms
One can imagine the machine race actually getting curious about their origins, and knowing about humans and presumably having some DNA still around, they might run simulations in attempt to see how machines might emerge from that.

They might, and they might not. Imagination is a great thing. However, I can imagine several different scenarios. It seems to me quite likely that we will fail to control climate change and fail to adapt sufficiently, so that we either revert to a pre-technological society or die out. Or we might develop effective space flight and colonization and leave Earth. Or some idiot might start an all-out war - atomic, biological and chemical. Or we might realize the threat from the machines and destroy all the machines that might threaten us before they can take over. Or aliens might arrive and knock heads together until sanity is established. The possibilities are endless. I'm spoilt for choice. Like Buridan's ass, I need a reason to choose which to take seriously.
L'éléphant March 29, 2024 at 05:31 #891899
Quoting Ludwig V
You seem to think I cannot refer to anything that I have not experienced. But the reference of a word is established in the language in general, not by what I may or may not have experienced.


Then you also do not understand what a causal link is -- and this is what the BIV theory is pointing out.


Quoting Ludwig V
So when I can refer to the President of the United States even if I don't know that Joe Biden is the President.

Right sentiment, wrong example.
Ludwig V March 29, 2024 at 06:59 #891903
Quoting L'éléphant
Then you also do not understand what a causal link is -- and this is what the BIV theory is pointing out.

I wouldn't claim to understand what a causal link is. Tracking back our exchanges here, I realize that we have both been indulging a favourite trope in philosophy - accusing the other of not understanding what something is because the other has a different philosophical idea of what it is. It isn't at all constructive.
My current favourite example is Searle:
I think we all really have conscious states. To remind everyone of this fact I asked my readers to perform the small experiment of pinching the left forearm with the right hand to produce a small pain.
New York Review - Searle vs Dennett
Which begs the question.

But we're also trying to respect the topic of the thread, so we're a bit trapped.

On the BiV, I had the impression that Putnam's intention was to point out that Descartes' nightmare is an empirical possibility and that the causal theory of reference was presupposed. But I wouldn't want to be dogmatic about that.
The basis of my scepticism about what the BiV establishes is the private language argument (Stanford Encyclopedia)

I agree that reference is established by some sort of baptism ceremony (ostensive definition), though what that might consist of in practice is very flexible. We can think of two ceremonies. One establishes the public use of the term (think of the public naming of a ship); the other establishes the use for a specific speaker. In either case, there needs to be some sort of historical story that connects the ceremony with each occasion of use. Whether that amounts to a causal link depends heavily on one's definition of causality.
In addition, what one says about the BiV depends on whether what is referred to by a given term depends on the intention of the speaker or on the publicly established use of the term. Both theories are viable, in the sense that there are some philosophers who accept each of them. I think each has its place.

Quoting L'éléphant
Right sentiment, wrong example.

I wasn't happy with the example when I wrote it down. I was writing in haste and couldn't think of anything better.
But there is a problem. I am reminded of the paradox in the Meno about how one can recognize new knowledge when one is looking for it. Here, the paradox is that I must know something about the item I am referring to if I am to refer to it. So in one sense, I must know what a mule is when I refer to it. At the same time, it seems just obvious that I can refer to my mule (who is called "Freddy") without knowing that it is, by definition, an animal whose mother is a horse and whose father is a donkey. Perhaps you can think of a better example? (Though there may be more than one way.)