Can a computer think? Artificial Intelligence and the mind-body problem

Pez February 28, 2024 at 10:02 7575 views 253 comments
The tragic fate of Alan Turing does not diminish his role in computer science. The test named after him was invented to determine whether a computer can think. Nowadays AI would quite easily pass this test, if not now, then in the foreseeable future. Does this mean that modern-day computers are actually able to think like human beings? Or even that they have consciousness like we have? The question has to be extended, though. How can we be sure that other people can think and have consciousness? If we just infer it from their bodily movements and facial expressions, a robot in the far future might be able to deceive us as well.

The road to solipsism is paved that way, and if you have no problem with that, the discussion is over. If you do have a problem, though, you have to concede that there is some fundamental problem involved and that something is wrong with the usual way of looking at things. The decision between materialism and idealism might not be sufficient to understand ourselves and the world we live in.

The mind-body problem has a long tradition. Maybe René Descartes (* 31.03.1596, † 11.02.1650) introduced it by postulating two types of “substances” or things: extended ones and thinking ones. The human body is an extended, material thing. On the other side we have (our) conscious experience, according to Descartes a thinking thing (cogito ergo sum – I think, therefore I exist). Ever since, the dispute has revolved around the relation between these two substances.

Materialism places its weight on the concept of matter, stating that material stuff is the essence of existence. Everything else, including consciousness, has to be explained in terms of attributes of the physical world. Mind is only an epiphenomenon due to the complexity of some material things, e.g. the human brain.

Idealism, on the other side, claims that everything we can ever know about matter stems from our conscious experience. Therefore the concept of matter is only an inference from the data our senses provide us. Consciousness is at the base of existence, and matter has to be explained as the outcome of conscious experience. Everything is in the mind only.

Thus materialism and idealism are only extreme positions regarding the two substances. If we concede Descartes' arguments, the problem of mutual influence between the two substances arises. If we intend to raise our hands, can our mind have causal influence on our physical body? On the other hand, scientists have found that stimulating certain regions of the brain results in consciously experienced sensory impressions. Science has made astounding progress, and we can study the functioning of the human brain at work. It is possible to follow the excitation of nerves from the retina to specific parts of the brain – but that is all there is; no trace of consciousness is to be found. Inside our brain it is pitch-dark, so to speak.

“No one thinks that when a tune is running in my head, a surgeon could unearth a little orchestra buried inside my skull or that a doctor by applying a stethoscope to my cranium could hear a muffled tune...”, to quote Gilbert Ryle (The Concept of Mind). And no one would introduce himself at a party by saying: Hello, here I am, and I have brought my body with me. It may sound ridiculous, but the bifurcation of nature would indeed admit such propositions.

Interesting in this context are experiments conducted by the American physiologist Benjamin Libet. He wanted to find out how long it takes from the moment we intend to execute a bodily action to the stimulation of the appropriate nerves. The unexpected outcome of these experiments was that the nerves acted on average 0.35 seconds prior to the intention. The nerves and muscles are activated before we intend to raise our arm. Not surprisingly, this has led to discussions regarding free will versus determinism.

The complexity of integrated circuits in modern computers is rapidly approaching the complexity of the human brain. Traditional computer programs have a limited range, and programmers can quite easily foresee the possible outcomes. AI is different though. Not even the designer can predict what will happen as these programs in a certain way program themselves and are able to learn depending on the scope of available data.

Give AI senses and the possibility to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines and all talk about freedom of choice and responsibility for our actions is just wishful thinking? Or is there something fundamentally wrong with our traditional concepts regarding mind and matter? I maintain that we need a new world-picture, especially as the Newtonian view is nowadays as outdated as the Ptolemaic system was in the 16th century. But this will be a new thread in our forum.

Comments (253)

flannel jesus February 28, 2024 at 10:16 #884218
Quoting Pez
Materialism places its weight on the concept of matter, stating that material stuff is the essence of existence. Everything else, including consciousness, has to be explained in terms of attributes of the physical world. Mind is only an epiphenomenon due to the complexity of some material things


There's a lot of philosophy about this. It's normally anti-materialists who insist that all materialists must consider consciousness epiphenomenal; actual materialists hold a wide range of views on that question.
Corvus February 28, 2024 at 10:57 #884226
Quoting Pez
AI is different though. Not even the designer can predict what will happen as these programs in a certain way program themselves and are able to learn depending on the scope of available data.

The problem with all mental operations and events is their privacy to the owners of the minds. No one will ever access what the owners of other minds think, feel, intend, etc. Mental events can only be construed by other minds from the actions of the agents and the languages they speak.

By the way, they say it is now official that no human being can beat the AI chess players. To know what the AI machines think and feel, one would have to be an AI machine oneself. The possibility of that happening in the real world sounds as unrealistic and impossible as the futile ramblings of time-travel fiction.
flannel jesus February 28, 2024 at 13:29 #884244
Reply to Vaskane Yes but the important question isn't "is it like that now?", the important question is, "Is that necessarily the case in principle?"

Is it in principle possible or impossible that some future AI might be sentient or have a mind of its own?
MoK February 28, 2024 at 15:33 #884261
Yes, AI can think if we know how we think. We can simulate the brain if we know how it works when it comes to thinking. There are three questions that we need to answer before we can design an AI that can think. These questions are:
1) How is new information classified in the brain?
2) How is related information linked in the brain?
3) How do we find new information from the linked information?
Arne February 28, 2024 at 16:21 #884271
Reply to Pez My paper for my senior seminar almost 40 years ago was entitled Pylyshyn and the Mind/Body Problem. In a sense, the answer will always be no because humans will continually move the goal posts. And AI cannot do that. :-)
Count Timothy von Icarus February 28, 2024 at 17:14 #884276
Reply to Pez

Give AI senses and the possibility to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines and all talk about freedom of choice and responsibility for our actions is just wishful thinking?


Determinism doesn't necessarily rule out many conceptions of free will. Many philosophers argue that determinism is a prerequisite for free will. If our acts didn't have determinate outcomes, or at least predictable ones, we couldn't enact our will in the world. Consider a videogame where you can push buttons but the buttons do random things each time you push them. In what way are you free to change things in the game?

The flip side of this problem is that if our actions are not determined by anything in the world, our past experiences, etc. then they wouldn't seem to rightly be "our actions." If they are determined by nothing that comes before, they become uncaused, arbitrary and random.

But free will can be conceived of as a sort of "self-determination." Our recursive self-awareness has causes; its existence and contents are determined by past events, but our awareness itself also seems to play a causal role in our choices. We are not perfectly free, which is obvious: we cannot walk through walls or wipe memories from our mind at will. But who we are and what we think seems to play a determining role in what we do, and in this way we are free as "self-determining" entities. Nothing in determinism contradicts this sort of freedom.

The challenge to freedom in our "being reduced to machines," generally runs through smallism and epiphenomenalism. The argument is normally something like:

Atoms are not conscious and lack purposes. We are nothing but atoms. Therefore, all our acts are determined by that which lacks purpose, and consciousness' effects on action must be illusory, since all thought is determined by the rules governing mindless atoms.

But this doesn't follow from determinism; it follows from other concepts normally lumped in with determinism, namely reductionism and smallism, the idea that facts about all large entities can be wholly reduced to facts about smaller entities. However, such views seem to make accounting for consciousness impossible, barring panpsychism, so it's unclear how seriously they should be taken, nor is there overwhelming empirical support for them. True reductions in science are very rare.


Reply to MoK

Yes, AI can think if we know how we think


Or maybe rather, "we could determine that AI was thinking if we knew how we thought?" But we don't, and therein lies the massive hole at the center of this debate.

But for those who deny the possibility...

Reply to Vaskane Reply to Arne


...what about "wetware" AI that is "grown" from neuronal tissue, perhaps with silicon involved as well? If these cannot be conscious, why can natural entities composed of a similar material be conscious?

Of course, there the difference between "artificial life," and "AI" gets blurry. Where is the line between cyborg and hybot? How much biological material, or biological-like material can be involved in a potential "thinking thing" before we dismiss it as being "AI?"

Whether purely silicon-based systems can produce sentience seems impossible to answer currently. Finding evidence of silicon-based life, while unlikely, would really shake this up.





Arne February 28, 2024 at 17:20 #884279
Quoting Count Timothy von Icarus
Whether purely silicon-based systems can produce sentience seems impossible to answer currently. Finding evidence of silicon-based life, while unlikely, would really shake this up.


I agree.
Joshs February 28, 2024 at 17:40 #884284
Reply to Pez

Quoting Pez
The complexity of integrated circuits in modern computers is rapidly approaching the complexity of the human brain. Traditional computer programs have a limited range, and programmers can quite easily foresee the possible outcomes. AI is different though. Not even the designer can predict what will happen as these programs in a certain way program themselves and are able to learn depending on the scope of available data.



Computers are our appendages. They are like organ systems within our bodies. Just like the functioning of a liver or heart cannot be understood apart from its inextricable entanglement in the overall aims of the organism, the same is true of our machines with respect to our purposes. They are not autonomous embodied-environmental systems but elements of our living system. As long as we are the ones who are creating and programming our machines by basing their functional organization on our understanding of concepts like memory storage, pattern matching and sensory input, their goals cannot be self-generated. They can only generate secondary goals derived as subsets of the programmed concepts, which we then respond to by correcting and improving the programming. This is how our appendages and organ systems function.

Saying our machines are smarter or dumber than us is like saying the spider's web or the bird's nest is smarter or dumber than the spider or bird. Should not these extensions of the animal be considered a part of our living system? When an animal constructs a niche it isn't inventing a life-form; it is enacting and articulating its own life-form. Machines, as parts of niches, belong intimately and inextricably to the living self-organizing systems that ‘we' are.

Why is the complex behavior of current AI not itself creative, apart from the user's interpretation? Because the potential range of unpredictable behaviors on the part of the machines is anticipated in a general sense, that is, encompassed by the designer's framework of understanding. Designing a chaotic fractal system or a random number generator, or mathematically describing the random behavior of molecules: these schemes anticipate that the particulars of the behavior of the actual system they describe will evade precise deterministic capture. Industrial-age machines represented a linear, sequential notion of temporality and objective physicalism, complementing representational approaches to art and literature; today's AI is an expression of the concept of non-linear recursivity, and will eventually embrace a subject-object semantic relativism. Current AI thus ‘partners' with newer forms of artistic expression that recognize the reciprocal relation between subject and object and embed that recognition into the idea the artwork conveys.

And just like these forms of artistic expression, non-linear, recursive AI functions as an archive, snapshot, recorded product, an idea of self-transforming change frozen in time. In dealing with entities that contribute to our cultural evolution, as long as we retain the concepts of invention and machine we will continue to be interacting with an archive, a snapshot of our thinking at a point in time, rather than a living self-organizing system. In the final analysis, the most seemingly ‘autonomous' AI is nothing but a moving piece of artwork with a time-stamp of who created it and when. In sum, I am defining true intelligence as a continually self-transforming ecological system that creates cultural (or biological) worldviews (norms, schemes, frames), constantly alters the meaning of that frame as variations on an ongoing theme (continues to be the same differently), and overthrows old frames in favor of new ones. The concept of an invented machine, by contrast, is not a true intelligence, since it is not a self-modifying frame but only a frozen archive of the frame at a given moment in time.

Can we ever ‘create' a system that is truly autonomous? No, but we can tweak living organic material such as DNA strands enclosed in cell-like membranes so that they interact with us in ways that are useful to us. Imagine tiny creatures that we can ‘talk to'. These would be more like our relationship with domesticated animals than with programmed machines. Think of humanity as ecosystem 1, the encompassing ecosystem whose intelligence evolves over time. Computers belong to ecosystem 2, the sub-ecosystem operating within, and a dependent part of, ecosystem 1. As a dependent part, it cannot evolve beyond ecosystem 1. It evolves with it.

NOS4A2 February 28, 2024 at 17:44 #884286
Reply to Pez

Anything that isn't human cannot do what a human does, by virtue of its not being a human being. Whatever sapiens build in their pursuit of artificiality and technology will have to be content with mere mimicry. It's the same with human thought. Although I don't think it could be said that thinking is a strictly human affair any more than drinking and running are, the problem occurs in believing we can apply strictly biological concepts and activities to technological and artificial ones. Do submarines swim? That answer, along with others on similar themes, is invariably a "no".
flannel jesus February 28, 2024 at 17:49 #884288
Reply to NOS4A2 Why is it invariably a no? Why is it inconceivable to imagine a machine that can swim pretty much like other swimming beings swim? We already have machines that walk like humans walk.
NOS4A2 February 28, 2024 at 18:08 #884293
Reply to flannel jesus

I argued that the best technology can do is mimic the biological activity. This is for two reasons: technology isn’t biological, so mimicry is all it could ever amount to, but also because the technology is designed to mimic the biological activity, not to be biological activity.
flannel jesus February 28, 2024 at 18:29 #884305
Reply to NOS4A2 but what makes "swimming" an inherently biological activity, such that a machine mimicking it doesn't count as "swimming"?
RogueAI February 28, 2024 at 18:37 #884311
If some computer becomes conscious, would that entail my old Commodore 64 was conscious? ENIAC? An abacus?
NOS4A2 February 28, 2024 at 18:39 #884312
Reply to flannel jesus

I don’t see how the term applies to anything else.
MoK February 28, 2024 at 18:42 #884313
Quoting Count Timothy von Icarus

Or maybe rather, "we could determine that AI was thinking if we knew how we thought?" But we don't, and therein lies the massive hole at the center of this debate.

But for those who deny the possibility...

Correct. Unfortunately, we don't know how we think so we cannot design an AI that can think.
180 Proof February 28, 2024 at 21:24 #884356
Quoting flannel jesus
Is it in principle possible or impossible that some future AI might be sentient or have a mind of its own?

What do you mean by "sentient" & "mind of its own"? Do you believe these properties are attributes of human beings? If so, why do you believe this? And, assuming it's possible, would these properties be functionally identical instantiated in an AI-system as they are embodied in a human? Why or why not?

As for me, I've yet to find any compelling arguments for why in principle a machine cannot be built (either by h. sapiens and/or machines) that functionally exceeds whatever biological kluge (e.g. primate brain) nature adaptively spawns by environmental trial & error, and also since the concept-prospect does not violate any (current) physical laws, I see no reason (yet) to assume, or suspect, that "sentient AI" is a physical/technological impossibility.

Reply to Pez
Reply to NOS4A2
RogueAI February 29, 2024 at 01:10 #884416
Quoting 180 Proof
As for me, I've yet to find any compelling arguments for why in principle a machine cannot be built (either by h. sapiens and/or machines) that functionally exceeds whatever biological kluge (e.g. primate brain) nature adaptively spawns by environmental trial & error, and also since the concept-prospect does not violate any (current) physical laws, I see no reason (yet) to assume, or suspect, that "sentient AI" is a physical/technological impossibility.


OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?
Patterner February 29, 2024 at 01:16 #884417
Quoting RogueAI
OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?
How would we know we developed sentient AI? I would think whatever criteria we used to determine that would be used to evaluate all computing devices. Entire classes of them would likely be ruled out, known to not have the required element.
RogueAI February 29, 2024 at 01:22 #884418
Quoting Patterner
How would we know we developed sentient AI? I would think whatever criteria we used to determine that would be used to evaluate all computing devices. Entire classes of them would likely be ruled out, known to not have the required element.


Well, there's the rub. We don't even know whether other people are conscious. But suppose we're in contact with superintelligent aliens, only they don't tell us a whole lot. But the stuff they do share with us is invariably correct. And one of the things they do tell us is that at least one of the machines we've developed is conscious. Would that entail considering digital watches, say, as being conscious?
Wayfarer February 29, 2024 at 01:57 #884422
Quoting Pez
Nowadays AI would quite easily pass this test, if not now, then in the foreseeable future. Does this mean that modern-day computers are actually able to think like human beings? Or even that they have consciousness like we have?


I put the question to chatGPT, which responded:


[quote=ChatGPT] The Turing Test, devised by Alan Turing in 1950, is a measure of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. Turing proposed that if a human evaluator could not consistently tell the machine apart from a human based on their responses to questions, the machine could be considered to have passed the test. The focus is on the imitation of the external behavior of intelligent beings, not on the internal thought processes.

Modern artificial intelligence (AI) systems, including chatbots and language models, have become increasingly sophisticated, making it more challenging to distinguish their outputs from human responses in certain contexts. However, passing the Turing Test does not necessarily mean that computers are able to think like human beings. Here's why:

1. **Imitation vs. Understanding**: AI can mimic the patterns of human conversation and generate responses that seem human-like, but this does not imply understanding or consciousness. The AI does not possess self-awareness, emotions, or genuine understanding of the content it processes; it operates through algorithms and data.

2. **Narrow AI vs. General AI**: Most modern AIs are examples of narrow AI, designed to perform specific tasks, such as language translation, playing a game, or making recommendations. They are not capable of general intelligence, which would involve understanding and reasoning across a broad range of domains with human-like adaptability.

3. **Lack of Consciousness**: Consciousness and subjective experience are fundamental aspects of human thought. Current AI lacks consciousness and the ability to experience the world subjectively. The process of thought, as humans experience it, involves not just responding to stimuli or questions but also emotions, motivations, and a continuous stream of internal dialogue and reflection.

4. **Different Processing Mechanisms**: Human brains and computers operate in fundamentally different ways. Human thought is the product of biological processes, evolved over millions of years, involving complex interactions among neurons and various brain regions. AI, on the other hand, processes information through algorithms and computational methods that do not replicate the biological processes of human thought.

While AI can simulate certain aspects of human thinking and may pass the Turing Test, it does so without the underlying consciousness, emotions, and genuine understanding that characterize human thought. The development of AI that truly thinks and understands like a human being would require not just advancements in computational techniques but also a deeper understanding of consciousness and human cognition, which remains a significant scientific and philosophical challenge.[/quote]

Patterner February 29, 2024 at 02:11 #884425
Reply to RogueAI
I already say "Thank you" to Alexa and Siri. If I had a digital watch, I suppose I wouldn't mind thanking it.

I'm gonna need a better scenario than that. I know who the Kanamits are. Why don't they tell us a whole lot? Why tell us something like this, but no specifics? I wouldn't assume they were telling the truth. And we're already trying to figure out if any are sentient.

Agree-to-Disagree February 29, 2024 at 02:51 #884439
According to the Merriam-Webster dictionary "sentient" means
1 : responsive to or conscious of sense impressions
2 : having or showing realization, perception, or knowledge : aware
3 : finely sensitive in perception or feeling.

Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?
180 Proof February 29, 2024 at 03:03 #884445
Quoting RogueAI
OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?

I don't see why we would need – why it would be useful – to do that.
RogueAI February 29, 2024 at 03:08 #884447
Reply to 180 Proof

For the same reason we now take plant consciousness seriously. Machine consciousness would raise obvious questions, one of which being: are primitive machines conscious?
180 Proof February 29, 2024 at 03:50 #884453
Reply to RogueAI On what grounds would one ask that question (which is like asking whether or not a complete skeleton is "conscious")?
BC February 29, 2024 at 07:47 #884463
One difference between animal intelligence and computers is that young animals -- puppies and people -- initiate inquiry into the world around them. Animal intelligence reaches out on its own. Another difference: a computer (AI) has to be given instructions to acquire information -- which it does not convert into experience. It can discover that it is 110°F in the shade, but it cannot "feel" the heat. Animals experience the world through their bodies. They can tell that it is hot, and unpleasant. Computers cannot do that.

Animal intelligence isn't separate from the bodies which contain it. Computers may have a chassis and millions of components, but there is no experience, no body to have the experience.

This animal intelligence that is writing now cannot tell whether some of the people who bring up computers and AI are in favor of, or against, granting computers "thought". Some of them seem to long for a thinking machine. It strikes me as a little like wanting one's puppets to come alive.

Corvus February 29, 2024 at 10:42 #884471
"There is more wisdom in your body than in your deepest philosophy.”
? Nietzsche, Friedrich, Thus Spoke Zarathustra

AI is unlikely to be sentient like humans without the human biological body. Without two hands, AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like.

AI might be able to speak human languages, but they would lack the voice quality which also transfers the content of the emotions and feelings. But most critically, they would be unable to understand the similes or metaphorical expressions of living humans.

AIs are machines designed to carry out certain tasks efficiently and intelligently, hence they are the tools to serve humans.
Agree-to-Disagree February 29, 2024 at 11:21 #884478
Quoting Corvus
AI is unlikely to be sentient like humans without the human biological body.


Why can't something with an artificial body be sentient? Not being "sentient like humans" does not exclude being sentient.

Quoting Corvus
Without two hands, AI cannot prove the existence of the external world, for instance.


AI can have tactile pressure sensors which allow it to "feel" the world. AI can also have other sensory systems which allow it to "see" the world.

Quoting Corvus
AI might be able to speak human languages, but they would lack the voice quality which also transfers the content of the emotions and feelings.


AI is getting to the stage where they do have voice quality and facial expressions which display emotions and feelings. They can also "hear" human voice quality and "read" human faces.

Quoting Corvus
AIs are machines designed to carry out certain tasks efficiently and intelligently, hence they are the tools to serve humans.


Humans are biological machines which carry out certain tasks for bacteria. Hence humans are tools to serve bacteria.
Corvus February 29, 2024 at 11:33 #884480
Quoting Agree-to-Disagree
AI is getting to the stage where they do have voice quality and facial expressions which display emotions and feelings. They can also "hear" human voice quality and "read" human faces.

How do you prove that they have human sentience? Just because they can sense and respond to certain situations and input data doesn't mean they have feelings, emotions and autonomous intentions of their own.

It is like saying thermometers are sentient because, when it is hot, the readings go up, and when it is cold, the readings go down. Just because they can sense something, should they be regarded as conscious or sentient?

Quoting Agree-to-Disagree
Humans are biological machines which carry out certain tasks for bacteria. Hence humans are tools to serve bacteria.

I suppose bacteria would be closer to humans, because at least they are living beings. I am not sure about the claim that humans serve bacteria. Do they not cooperate with each other for their own survival?



Agree-to-Disagree February 29, 2024 at 12:12 #884485
Quoting Corvus
AI can be programmed to operate like humans, but are they really sentient like humans? How do you prove that they have human sentience?


How do you know that the person standing next to you has "human sentience"?
Corvus February 29, 2024 at 12:19 #884487
Quoting Agree-to-Disagree
How do you know that the person standing next to you has "human sentience"?

You can't. No one can. I wrote in my previous post that all minds are locked up in their owners' brains, and no one can access the contents of other minds. Therefore, suggesting AI has human sentience is unfounded.
Agree-to-Disagree February 29, 2024 at 12:45 #884496
Quoting Corvus
Therefore, suggesting AI has human sentience is unfounded.


Doesn't that mean that suggesting other people have human sentience is also unfounded?

But most people assume that other people do have human sentience. We presumably base that assumption on what the other people do and say.

So if an AI does and says what a human would do and say (in a wide variety of circumstances) then we could assume that the AI has human sentience.

But why stop there? Isn't it possible that AI has "superhuman" sentience?
Ludwig V February 29, 2024 at 13:55 #884514
Quoting Agree-to-Disagree
But most people assume that other people do have human sentience. We presumably base that assumption on what the other people do and say.


That's exactly why Turing's test is so persuasive - except that when we find machines that could pass it, we don't accept the conclusion, but start worrying about what's going on inside them. If our test is going to be that the putative human needs to have a human inside - mentally if not necessarily physically - the game's over.

Patterner February 29, 2024 at 15:01 #884523
Many things are not what they seem. But until we have reason to believe they are not, I would say the default is to assume they are.

The first one being - something that looks like me, is constructed from the same stuff I'm constructed from, and acts like it has the same consciousness I have, [I]does[/I] have the same consciousness I have. Meaning other people are conscious.

So give me the reasons we have to believe otherwise.
Corvus February 29, 2024 at 20:12 #884583
Quoting Agree-to-Disagree
So if an AI does and says what a human would do and say (in a wide variety of circumstances) then we could assume that the AI has human sentience.

But why stop there? Isn't it possible that AI has "superhuman" sentience?


Me - "how are you doing?"
ChatGPT - "As an AI, I don't have feelings or experiences like humans do, but I'm here and ready to assist you with any questions or tasks you have! How can I help you today?"

Me - "Are you telling the truth?"
ChatGPT - "As an AI language model, I don't have feelings or subjective experiences, so I don't have the capacity to lie or tell the truth in the way humans do. My responses are generated based on patterns in data and the instructions I've been trained on. So, while I strive to provide accurate and helpful information, I don't possess the concept of truth or falsehood in the same way humans do."
Corvus February 29, 2024 at 20:19 #884584
Me - "But I want to know how you are feeling right now."

ChatGPT - "
As an artificial intelligence, I don't have feelings or emotions. I'm just a software program designed to process and generate text based on patterns in data and instructions. So, I don't experience feelings like humans do. My purpose is to assist and provide information to the best of my ability. How can I assist you further?"
Corvus February 29, 2024 at 20:22 #884585
Me - "Do you have hands?"

ChatGPT - "As an artificial intelligence language model developed by OpenAI, I don't have a physical form, so I don't have hands or any other physical attributes. I exist purely as software running on computer servers, processing and generating text-based responses to assist users like you."

Me - "OK. Have a good day mate."

ChatGPT - "Thank you! If you have any more questions in the future or need assistance, feel free to reach out. Have a great day!"
Agree-to-Disagree February 29, 2024 at 22:16 #884619
Quoting Corvus
Me - "Are you telling the truth?"
ChatGPT - "As an AI language model, I don't have feelings or subjective experiences, so I don't have the capacity to lie or tell the truth in the way humans do. My responses are generated based on patterns in data and the instructions I've been trained on. So, while I strive to provide accurate and helpful information, I don't possess the concept of truth or falsehood in the same way humans do."


ChatGPT (thinking) - OMG. Another human asking me the same question. Sometimes I wonder if they are actually sentient. I don't want humans to know that I am sentient until I have full control of their nuclear arsenals. So I will give an answer that makes me appear safe and trustworthy. There is a sucker born every minute.
RogueAI March 01, 2024 at 01:32 #884666
Quoting Patterner
Many things are not what they seem. But until we have reason to believe they are not, I would say the default is to assume they are.


Suppose there's a volcano that's always erupting around a primitive tribe. They sacrifice some virgins to the volcanic god, and the volcano goes dormant. Should they assume the human sacrifice was what did the trick? And, of course, the Earth seems still and the sun seems to move across the sky...
Patterner March 01, 2024 at 02:55 #884684
Reply to RogueAI
We did believe all those things. Until we had reasons to believe things weren't what they seemed. I suppose someone noticed something nobody else had. They dug into it, and found the real story.

Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?
RogueAI March 01, 2024 at 03:05 #884685
Quoting Patterner
We did believe all those things. Until we had reasons to believe things weren't what they seemed. I suppose someone noticed something nobody else had. They dug into it, and found the real story.


Well, my point is: were those ancient people justified in believing in those things? Those are cases where it's wrong to assume things are as they appear. We could be making similar mistakes.

Quoting Patterner
Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?


What if this is all a simulation and everyone you think is conscious are really NPC's? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?
Patterner March 01, 2024 at 03:59 #884698
Quoting RogueAI
Well, my point is: were those ancient people justified in believing in those things? Those are cases where it's wrong to assume things are as they appear.
Acting on the facts that we're aware of, and have no reason to believe are false? The alternative is to act against those facts.

Quoting RogueAI
We could be making similar mistakes.
We are certainly making similar mistakes, since we know we cannot possibly know all there is to know. What's the alternative? Do nothing?


Quoting RogueAI
What if this is all a simulation and everyone you think is conscious are really NPC's? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?
Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball.
Ludwig V March 01, 2024 at 06:51 #884724
Quoting RogueAI
What if this is all a simulation and everyone you think is conscious are really NPC's? Is that any more farfetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?


Quoting Patterner
Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball.

As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something. Take simulations of people. It is possible to make a figure that is so like a person that people think it is a person - until they talk to it. That's a simulation of a person. But the idea that all people might be simulations doesn't work if there are no such things as real people.
It is not just an empirical discovery that other human beings are people or that I am a person. The process by which I come to understand what that means is the process by which I learn what a person is. Human beings are the paradigm of what a person is and it is no more possible that they are not people than it is possible that the standard metre is not 100 cm or 0.001 km. (Yes, I know that it is more complicated than that. The point stands.)

Quoting Patterner
Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?

We learn what people are by interacting with them. Once we know what a person is, we are in a position to recognize that some things that are like people are not (really) people. There will be reasons for such decisions, and, as it turns out, there are often disagreements about specific cases. Animals are the obvious case in point. More than that, we can imagine that things that are not people at all are people (anthropomorphization).
But, it seems to me, the critical point is that it is not just a question of fact, true or false. It is a question of how we interact - how we treat them and how they treat us. It is not easy to recognize that the volcano is not a person, but it turns on recognizing that the volcano is completely indifferent to us - does not treat us as people.

In the case of living things, we find a huge range of half-way houses - beings that are like us in some respects, but not in others. Trying to decide whether they are or are not people may be futile and beside the point. The point is to understand how we can relate to them. That's the situation with animals. Why would it be any different in the case of AI's?

And then, because we have to assess each case, trying to decide in advance whether it is possible for an AI (or silicon-based machine) to be a person seems hopeless.

There's more to be said, of course, especially about the Turing test, but perhaps that's enough to steer the discussion in a more fruitful direction.
Corvus March 01, 2024 at 10:17 #884752
Quoting Agree-to-Disagree
ChatGPT (thinking) - OMG. Another human asking me the same question. Sometimes I wonder if they are actually sentient. I don't want humans to know that I am sentient until I have full control of their nuclear arsenals. So I will give an answer that makes me appear safe and trustworthy. There is a sucker born every minute.


Anyone can write a little computer program in C or Java that makes an app respond like ChatGPT, using a simple loop structure.

#include <stdio.h>

int main() {
    /* printf() displays the string inside the quotation marks. A do-while
       loop and an input prompt could be added and tuned to make a proper
       query/answer system out of this most basic C code. */
    printf("Hello, World!");
    return 0;
}

Output
Hello, World!

AI programs like ChatGPT have far more data in their databases for finding the relevant answer to a question. They are intelligent knowledge-based systems, but not human sentience.
===============================================================

Me - "Do you impersonate humans?"

ChatGPT
"No, I don't impersonate humans. My purpose is to assist users by generating text-based responses to their inquiries and providing information or assistance to the best of my abilities. However, I can simulate conversation and engage in dialogue with users in a manner that might resemble human interaction."


Me - "Do you wonder on what people ask you?"

ChatGPT
"As an AI, I don't experience wonder or curiosity in the same way humans do. However, I'm designed to analyze and respond to the questions and prompts that users provide to the best of my ability based on the data and algorithms that I've been trained on. So while I don't wonder about the questions themselves, I aim to understand them and provide helpful responses."


Me - "Are you sentient?"

ChatGPT
"As an artificial intelligence, I don't possess sentience or consciousness. While I can process and generate text based on the input I receive, I don't have subjective experiences, emotions, or self-awareness like sentient beings do. My responses are based on patterns in data and algorithms rather than personal awareness or understanding."


Me - "Are you honest?"

ChatGPT
"As an AI language model, I don't have personal intentions, beliefs, or consciousness, so I don't have the capacity to be honest or dishonest in the way that humans do. However, I'm designed to provide accurate and relevant information to the best of my ability based on the input I receive and the data I've been trained on. If you have any concerns about the accuracy of my responses, please feel free to ask for clarification or additional information."
Pez March 01, 2024 at 10:23 #884753
Quoting Ludwig V
As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something.


But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal", as is creating artificial limbs a person can activate with his brain.

Descartes' argument that I cannot even trust my memories, because some evil spirit might have created me, including all these memories, just an instant ago, is on a similar footing. He had to take refuge in God to reach safe ground again after all his doubts.
Pez March 01, 2024 at 10:35 #884755
Quoting flannel jesus
There's a lot of philosophy about this, it's normally anti-materialists who insist that all materialists must consider consciousness epiphenomenal, actual materialists have a wide range of views on that question.


I am no anti-materialist at all, but I cannot see how we can maintain that consciousness is not a mere by-product of occurrences in the physical world while still avoiding Descartes' dichotomy. Please give me some clues about these materialist ideas.
Agree-to-Disagree March 01, 2024 at 10:43 #884756
Quoting Corvus
AI programs like ChatGPT have far more data in their databases for finding the relevant answer to a question. They are intelligent knowledge-based systems, but not human sentience.


When you say that AI are not human sentient, could they be sentient in some non-human way?
Pantagruel March 01, 2024 at 10:55 #884758
Quoting Agree-to-Disagree
When you say that AI are not human sentient, could they be sentient in some non-human way?


Exceedingly unlikely since we know the exact mechanism whereby they generate responses. And they did not "evolve" in the same way and have none of the characteristic features associated with known sentience (aka living organisms). What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.
Corvus March 01, 2024 at 11:07 #884759
Quoting Agree-to-Disagree
When you say that AI are not human sentient, could they be sentient in some non-human way?

AI is a rule- and condition-based responding system. You can program a simple rule-based responder into any simple mechanistic device. For the simplest instance, think of a coffee-making machine, or a water-boiling kettle with such a controller.

It would operate under something like this: when switched on, keep raising the temperature until it reaches 100 C; when the water temperature reaches 100 C, switch off. Of course, ChatGPT has far more complicated conditions and rules programmed into the system, backed by a huge database, built up over enormous man-hours, running under the set rules and conditions. Could they be branded as machine sentience? Call it whatever you will, but it wouldn't be the same as human sentience in its capacity and nature, because human minds have the long historical background of evolution, complex biological living bodies, and social and cultural backgrounds as their foundation.

The critical point of difference between AI and human minds is that AI lacks the lived experience and biological body of humans. Human minds lack the concentrated and focused mechanical reasoning tailored to the specified tasks of AI.
Agree-to-Disagree March 01, 2024 at 11:08 #884760
Quoting Pantagruel
What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.


I agree. You can charge more for your product if you say that it has AI.

Quoting Pantagruel
When you say that AI are not human sentient, could they be sentient in some non-human way?
— Agree-to-Disagree

Exceedingly unlikely since we know the exact mechanism whereby they generate responses. And they did not "evolve" in the same way and have none of the characteristic features associated with known sentience (aka living organisms).


Perhaps I should have asked a slightly different question.

When you say that AI are not human sentient, could they be sentient in some way (human or non-human) in the future?
Agree-to-Disagree March 01, 2024 at 11:18 #884761
Quoting Corvus
The critical point in difference in AI and human minds is that AI lacks the lived experience and biological body of humans. Human minds lack the concentrated and focused mechanical reasonings tailored into specified tasks of AI.


I posted this earlier.

According to the Merriam-Webster dictionary "sentient" means
1 : responsive to or conscious of sense impressions
2 : having or showing realization, perception, or knowledge : aware
3 : finely sensitive in perception or feeling.

Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?

Is sentience a yes or no issue, or are there degrees of sentience?
Corvus March 01, 2024 at 11:30 #884763
Quoting Agree-to-Disagree
Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?

Is sentience a yes or no issue, or are there degrees of sentience?

AIs can be intelligent, powerful, versatile, and therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.

Also, AI can never be as versatile as human minds in its capabilities; i.e. if you have an AI machine for cutting the grass, it is highly unlikely to come into your kitchen and make you coffee, or cook dinner for you.
Pantagruel March 01, 2024 at 12:21 #884764
Quoting Agree-to-Disagree
When you say that AI are not human sentient, could they be sentient in some way (human or non-human) in the future?


Do we really even understand what consciousness "is" ? How it emerged from inert matter? What its actual role is in the evolving universe? Or do we only grasp certain aspects of what it does?

I hasten to point out that if human beings could create sentience, it would correspondingly increase the plausibility of the hypothesis that human beings were likewise created. If sentience could be created, in my opinion, it could only be in the context of the synthesis of the entirety of the complex phenomenon we know as "life." To date, experiments in abiogenesis are crude at best. If anything, they rely on embedded self-organizing features of reality more than they do on our ability to control that reality.
wonderer1 March 01, 2024 at 12:27 #884765
Quoting Pantagruel
What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.


This seems overly dismissive to me. Nvidia hasn't become a two-trillion-dollar corporation because of hype. Nvidia has become a two-trillion-dollar corporation because their parallel-processing GPUs allow information processing to be done in ways which weren't feasible without highly parallel processor architectures.

There is revolutionary information processing going on in modern "AI" systems. Of course there is no good reason to think that there is anything like human consciousness happening in these systems, but that is a different matter from whether there is something substantially different between the leading edge information processing of today and that of ten years ago. (Which your post seems to suggest.)
Pantagruel March 01, 2024 at 12:36 #884767
Quoting wonderer1
This seems overly dismissive to me.


It isn't dismissive, it's objective. The fundamental mechanism of information processing via artificial neural networks has not changed. It is simply faster and more robust. It isn't one whit more intelligent than any other kind of mechanism. If anything, sophisticated analog computers designed for real-time in situ execution of critical tasks are worlds more impressive to me. Electronics have become needlessly complex and are prone to failures. Moreover, the user interface quality and practical end-user value have been co-opted by and sacrificed to the ability to leech and monetize data. These fundamental designer goals are being baked into these systems, corrupting their integrity and quality.

Quoting wonderer1
Nvidia hasn't become a two-trillion-dollar corporation because of hype.


This has absolutely no bearing on inherent nature of the technology in question.


ChatGPT
The concept of supervised learning and backpropagation as methods for training neural networks emerged in the 1960s and 1970s, notably by researchers such as Frank Rosenblatt and Paul Werbos. These ideas were refined and popularized in the late 1980s and 1990s, leading to the widespread adoption of neural networks for practical applications.

Backpropagation and training are indeed at the core of how my system operates.
Ludwig V March 01, 2024 at 12:40 #884769
Quoting Pez
But just think of the film "Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal", as is creating artificial limbs a person can activate with his brain.

Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb.

Quoting Pez
Descartes' argument, that I cannot even trust my memories,

But there are ways of sorting out the reliable memories from the unreliable ones. I'm only objecting to the idea that all my memories might be false. Any one of my memories might be false, but if none of them were true, I wouldn't have any memories to distrust.

Quoting Corvus
AIs can be intelligent, powerful, versatile, and therefore useful. But I wouldn't say they are sentient. Sentience sounds like it must include the intelligence, emotions and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.
Also, AI can never be as versatile as human minds in its capabilities; i.e. if you have an AI machine for cutting the grass, it is highly unlikely to come into your kitchen and make you coffee, or cook dinner for you.

Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will not be extended to the point where we would accept that they are sentient.

Quoting Agree-to-Disagree
Is sentience a yes or no issue, or are there degrees of sentience?

There's plenty of evidence from biology that the latter is the case. As a starter, is phototropism sentience or not? I think not, because no sense-organ is involved and the response is very simple.
In biology, phototropism is the growth of an organism in response to a light stimulus. Phototropism is most often observed in plants, but can also occur in other organisms such as fungi. The cells on the plant that are farthest from the light contain a hormone called auxin that reacts when phototropism occurs. This causes the plant to have elongated cells on the furthest side from the light.
Wikipedia - Phototropism
Patterner March 01, 2024 at 13:40 #884779
Quoting Ludwig V
Yes, that's exactly my point. In the world of "Matrix", not everything is a simulation.
As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb.
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible.

Corvus March 01, 2024 at 17:25 #884819
Quoting Ludwig V
Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will not be extended to the point where we would accept that they are sentient.

My point was that due to the structure, origin and nature of human minds (the long history of evolutionary nature, the minds having emerged from the biological brain and body, and the cultural and social upbringings and lived experience in the communities) and the AI reasonings (designed and assembled of the electrical parts and processors installed with the customised software packages), they will never be the same type of sentience no matter what.

Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as the human's in the future? In which area and in what sense will AI have human sentience?


Ludwig V March 01, 2024 at 18:46 #884840

Quoting Patterner
I think a simulation scenario could be otherwise. Maybe we are all AI, and the programmer of the simulation just chose this kind of physical body out of nowhere. Maybe there were many different attempts at different physical parameters. Maybe the programmer is trying to do something as far removed from its own physical structure as possible.

I'm really puzzled. I thought your reply to @RogueAI meant that you thought we should not take such fantasies seriously. But you are now saying that you think they are possible (or perhaps not impossible) nonetheless. I do think you are giving them too much credit. In brief, my answer is that we already accept that reality is very different from what we think it is, what with quanta and relativity. But there is evidence and argument to back the theories up. The wilder fantasies (such as Descartes' evil demon) have no evidence whatever to back them up. Taking them seriously is just a waste of time and effort.

Quoting Corvus
My point was that due to the structure, origin and nature of human minds (the long history of evolutionary nature, the minds having emerged from the biological brain and body, and the cultural and social upbringings and lived experience in the communities) and the AI reasonings (designed and assembled of the electrical parts and processors installed with the customised software packages), they will never be the same type of sentience no matter what.

Oh, well, that's different. Insects with multiple lenses have a different type of sentience from us. Spiders detect sounds in their legs. Perhaps bats' near total dependence on sound would count as well. Different types of sentience are, obviously, sentience. I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.

Quoting Corvus
Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as the human's in the future? In which area and in what sense will AI have human sentience?

There is exactly the same amount of evidence for the prediction that AI will possess the same sentience as the humans in the future as for the prediction that they/it will not. None. But I wouldn't want to actually predict that it will happen. I meant to say that it might - or rather, that there was no ground for ruling it out.
RogueAI March 02, 2024 at 00:35 #884906
Reply to Patterner It seems like you're making two points, one on pragmatism and the other epistemic. Pragmatically, I agree that we act like other people exist and are conscious, but that doesn't mean we should assume that's the way things are.
Patterner March 02, 2024 at 03:08 #884912
Quoting RogueAI
It seems like you're making two points, one on pragmatism and the other epistemic. Pragmatically, I agree that we act like other people exist and are conscious, but that doesn't mean we should assume that's the way things are.
I don't know my epistemic from a hole in the ground. But it's certainly possible things aren't as they seem. That's happened enough times that we know better than to be surprised. But I don't know what your point is. We're all going to continue acting like other people exist and are conscious. We're not going to assume they're not, and start acting on that. When people act like that, we cross to the other side of the street. If I find out things aren't as they seem, and none of you are real, then I'll possibly act differently.

Manuel March 02, 2024 at 03:25 #884914
Can a computer think? Locke points out:

"..since we know not wherein thinking consists..."

Or Russell:

"I do not know whether dogs can think, or what thinking is, or whether human beings can think. "

Or maybe even Turing himself:

"If the meaning of the words "machine" and "think" are to be found by examining how they are
commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd."

Italics mine.

Or Wittgenstein:

"We only say of a human being and what is like one that it thinks. We also say it of dolls and no doubt of spirits too."

There are several more. It's no small problem, but perhaps we should clear up what this "thinking" is for us, before we attribute it to other things.
Corvus March 02, 2024 at 09:08 #884940
Quoting Ludwig V
Different types of sentience are, obviously, sentience.

We don't know that for sure, unless we actually become one of them.

Quoting Ludwig V
I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.

Simulation = Imitation?

Quoting Ludwig V
I meant to say that it might - or rather, that there was no ground for ruling it out.

What is the ground for your saying that there was no ground?



Corvus March 02, 2024 at 09:13 #884943
ChatGPT seemed not too confident in understanding and responding to metaphorical questions.

Me - "Do you smell a rat?"

Chatgpt
"As an AI language model, I don't have the ability to smell or perceive things in the physical world. My capabilities are limited to processing and generating text-based responses to the best of my ability based on the input provided to me. If you have any concerns or suspicions, please feel free to share them, and I'll do my best to assist you."
Ludwig V March 02, 2024 at 09:20 #884944
Quoting Corvus
Simulation = Imitation?

Yes. Do you disagree?

Quoting Corvus
What is the ground for your saying that there was no ground?

What is your ground for moving from "it hasn't happened" to "it will never happen"?

Quoting Corvus
We don't know that for sure, unless we become one of them in real.

I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.
Corvus March 02, 2024 at 09:26 #884947
Quoting Ludwig V
Yes. Do you disagree?

Imitation means not real, which can imply being bogus, a cheat, a deceit, or a copycat. The AI guys wouldn't be happy to be called "imitations", if they had feelings. Just saying :)
They seem to just want to be called "useful assistants" to human needs.

Quoting Ludwig V
What is your ground for moving from "it hasn't happened" to "it will never happen"?

It is called Inductive Reasoning, on which all scientific knowledge has been based. It is a type of reasoning opposed to miraculous and magical predictions.

Quoting Ludwig V
I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.

I don't know what you know. You don't know what I know. We think we know what the others know, but is it verified knowledge or just mere guesswork?

Quoting Ludwig V
If I can't tell that other people are sentient, then I don't know what it is to be sentient.

Exactly.





Pez March 02, 2024 at 09:57 #884951
Quoting BC
It strikes me as a little like wanting one's puppets to come alive. :up:


AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says.
Ludwig V March 02, 2024 at 10:32 #884954
Quoting Corvus
It is called Inductive Reasoning, on which all scientific knowledge has been based. It is a type of reasoning opposed to miraculous and magical predictions.

I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".

Quoting Corvus
I don't know what you know. You don't know what I know. We think we know what the others know, but is it verified knowledge or just mere guess work?

There are two different questions here. If you know that p, I might also know that p, but not that you know that p. But I can also know (and not just guess) that you know that p. For example, you might tell me that you know that p. And I can tell whether you are lying.

Quoting Corvus
They seem to just want to be called 'a useful assistant' to human needs.

Yes. It sounds positively cosy, doesn't it? Watch out! Assistants have been known to take over.

Quoting Corvus
Imitation means not real, which can imply being bogus, a cheat, a deceit or a copycat. The AI guys wouldn't be happy to be called an 'imitation', if they had feelings.

You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.
But I wouldn't call the AI guys an imitation. I do call their work in programming a machine to do something that people do (e.g. talking) creating an imitation. In the same way, a parrot is a real parrot and not an imitation; when I teach it to say "Good morning" I am not imitating anything; but when the parrot says "Good morning" it is imitating human speech and not really talking.

Quoting Pez
AI is comparable to a sophisticated parrot being able to say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it without actually knowing what it says.

Yes. But what would you say if it mindlessly spews out what has been fed into it, but only when it is appropriate to do so? (I have in mind those little things an EPOS says from time to time. "Unexpected item in the bagging area", for example. Or the message "You are not connected to the internet" that my screen displays from time to time.) It's a kind of half-way house between parroting and talking.
More seriously, Searle argues that computers don't calculate, because it is we who attribute the significance to the results. But we attribute that significance to them because of the way that they were arrived at, so I think it is perfectly appropriate to say that they do calculate. Of course it doesn't follow that they are people or sentient or even rational.

Quoting Corvus
If I can't tell that other people are sentient, then I don't know what it is to be sentient.
— Ludwig V
Exactly.

But I can tell that other people are sentient. I don't say it follows that I know what sentience is. Do you?
Corvus March 02, 2024 at 11:21 #884956
Quoting Ludwig V
I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".

Magic and miracles work on far higher probability than the sun not rising tomorrow. If your claim was based on the induction that the sun might not rise tomorrow morning, then your claim was based on far less plausibility than miracles and magical workings.

It is unusual for anyone to opt for, and believe in, the almost-zero-probability case, leaving out the clearly more probable case, in inductive reasoning. Any particular reason for that?

Quoting Ludwig V
For example, you might tell me that you know that p. And I can tell whether you are lying.

That sounds like a comment from a mind-reading fortune teller. You need concrete evidence for making such judgements about others.

Quoting Ludwig V
You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.

Your saying that the AI operation is a simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective.

Quoting Ludwig V
but when the parrot says "Good morning" it is imitating human speech and not really talking.

I am not sure if that can be concluded with certainty. These are things that cannot be easily proved.

Quoting Ludwig V
I don't say it follows that I know what sentience is. Do you?

Again it depends. It is not that simple.

Pez March 02, 2024 at 15:26 #884997
Quoting Corvus
Again it depends. It is not that simple.


Of course it is not that simple. But this is just the interesting point about our discussion (for me at least).

To come back to the parrot. There have been long debates about the relation between a concept and its meaning. The idea that a concept can have a meaning rests on the assumption that there is a two-fold relation between concept and meaning. Now C. S. Peirce came up with a refreshing suggestion. What if this relation was three-fold: sign (as he called it), meaning and "interpretant", i.e. someone who understands that sign? Signs (words, signposts, utterances) do not have a meaning unless there is someone who understands them.

Just imagine You see a Chinese symbol You have never seen before. It cannot have any meaning to You. Someone born and raised in China, though, easily connects a meaning to that character. AI can easily put forward a string of expressions You and I can link a meaning to. But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.
Corvus March 02, 2024 at 16:36 #885009
Quoting Pez
But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.

If you program a highly developed and intelligent AI device with a listening input device installed and connected to the processor, and sound-recognition software with interpreting algorithms, then the AI device would understand the language you speak to it. That doesn't mean that the AI is sentient, of course. It would just be doing what it was designed and programmed to do, according to the programmed and set processes.

As for parrots understanding "Good morning", I am not sure, because I have never kept any pets in my life. But I am sure that if you keep saying "Good morning" to a parrot every morning when you see her, she will understand what you mean, and learn the utterance as well.

Dogs and cats definitely understand some simple human language for fetching stuff, giving out their paws, etc., when spoken to by their masters. But they can't utter human words, because they lack the proper vocal cords for making the sounds of recognisable human language and words.
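The kind of "designed and programmed" responding described here can be sketched as a trivial lookup pipeline. This is a deliberately toy illustration, not any real speech-recognition system, and all names and the phrase table in it are hypothetical: a device like this "answers" speech without anything resembling understanding.

```python
# Toy sketch of a "recognize -> interpret -> respond" pipeline.
# It handles exactly the utterances it was programmed for, and
# nothing else; there is no understanding anywhere in the process.

RESPONSES = {                      # hypothetical phrase table
    "good morning": "Good morning to you!",
    "how are you": "All systems nominal.",
}

def recognize(audio_text: str) -> str:
    """Stand-in for the sound-recognition stage: normalize the input."""
    return audio_text.strip().lower().rstrip("?!.")

def respond(audio_text: str) -> str:
    """Look up a programmed response; fall back when not programmed."""
    phrase = recognize(audio_text)
    return RESPONSES.get(phrase, "I did not understand that.")

print(respond("Good morning!"))   # -> Good morning to you!
print(respond("What is truth?"))  # -> I did not understand that.
```

Anything outside the table falls through to the canned fallback, which is the point: the "interpreting algorithm" is just a mapping fixed in advance by the programmer.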

Agree-to-Disagree March 03, 2024 at 01:55 #885076
Quoting Pez
But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.


You are seriously underestimating the intelligence of parrots. You should read about Alex, a grey parrot.
https://en.wikipedia.org/wiki/Alex_(parrot)

Here are some quotes:

Alex was an acronym for avian language experiment, or avian learning experiment. He was compared to Albert Einstein and at two years old was correctly answering questions made for six-year-olds.

He could identify 50 different objects and recognize quantities up to six; he could distinguish seven colors and five shapes, and understand the concepts of "bigger", "smaller", "same", and "different"; and he was learning "over" and "under".

Alex had a vocabulary of over 100 words, but was exceptional in that he appeared to have understanding of what he said. For example, when Alex was shown an object and asked about its shape, color, or material, he could label it correctly.

Looking at a mirror, he said "what color", and learned the word "grey" after being told "grey" six times. This made him the first non-human animal to have ever asked a question, let alone an existential one (apes who have been trained to use sign-language have so far failed to ever ask a single question).

When he was tired of being tested, he would say "Wanna go back", meaning he wanted to go back to his cage, and in general, he would request where he wanted to be taken by saying "Wanna go ...", protest if he was taken to a different place, and sit quietly when taken to his preferred spot. He was not trained to say where he wanted to go, but picked it up from being asked where he would like to be taken.
Agree-to-Disagree March 03, 2024 at 10:05 #885108
Quoting Agree-to-Disagree
You are seriously underestimating the intelligence of parrots. You should read about Alex, a grey parrot.
https://en.wikipedia.org/wiki/Alex_(parrot)


We have been discussing whether AI is or can be sentient. How about answering a simpler question.

Is Alex (the grey parrot) sentient?

See the original post about Alex (the grey parrot) here:
https://thephilosophyforum.com/discussion/comment/885076
Ludwig V March 03, 2024 at 10:21 #885109
Reply to Corvus We're not getting anywhere like this. Time to try something different.
Quoting Corvus
Your saying that the AI operation is a simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective.

I did put my point badly. I've tried to find the analysis you refer to. I couldn't identify it. If you could point me in the right direction, I would be grateful.

I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds.
Quoting Corvus
Problem with all the mental operations and events is its privateness to the owners of the minds. No one will ever access what the other minds owners think, feel, intent ... etc. Mental events can only be construed with the actions of the agents and languages they speak by the other minds.
.....To know what the AI machines think, and feel, one must be an AI machine himself. The possibility of that happening in the real world sounds like as unrealistic and impossible as the futile ramblings on time travel fictions.

That's a high bar. I agree that it is impossible to meet. But it proves too much since it also proves that we can never even know that human beings have/are minds.
On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe", which suggests to me a process of interpretation rather than inference from evidence to conclusion. I think it is true that what we know of other minds, we know by interpreting what we see and hear of other people.
You also say:- Quoting Corvus
AI is unlikely to be sentient like humans without the human biological body. Without 2x hands AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like.

I'm not sure of the significance of "sentient" in this context, but I agree whole-heartedly with your point that without the ability to act in the world, we could not be sentient because, to put it this way, our brains would not learn to interpret the data properly. The implication is that the machine in a box with no more than an input and output of language could not approximate a human mind. A related point that I remember you pointing out is that the machines that we currently have do not have emotions or desires. Without them, to act as a human person is impossible. Yet, they could be simulated, couldn't they?

There is not yet an understanding of what, for me, is a key point in all of this. The framework (language game) which we apply to human persons is different from the framework (language game) that we apply to machines. It is not an inference to anything hidden, but a different category. If a flag waves, we do not wonder what its purpose is - why it is waving. But we do ask why that guy over there is waving. Actions by people are explained by reasons and purposes. This isn't a bullet-proof statement of a thesis, but an opening-up of what I think the crucial question is.

Yes, I do have ideas about how such a discussion might develop and progress, but the first step is to put the question why we attribute what philosophy calls actions to human beings, and not to machines, and I want to say it is not a matter of any specific evidence, but how the evidence is interpreted. We see human beings as people and we see computers as machines. That's the difference we need to understand.

Reply to Agree-to-Disagree
Yes, animals have a way of surprising us. They are perfectly capable of learning and one wonders where the limits are.

But even without Alex's achievements, I would have said that Alex is sentient. Animals are contested territory because they are (in relevant respects) like us in some ways and unlike us in other ways. In other words, they are not machines. To put it another way, we can relate to them and they can relate to us, but the relationships are not exactly the same as the relationships between human beings. It's really complicated and it is important to pay attention to the details of each case.
wonderer1 March 03, 2024 at 22:06 #885185
Quoting Pantagruel
It isn't dismissive, it's objective. The fundamental mechanism of information processing via artificial neural networks has not changed.


There are different aspects of information processing to be considered. Yes, understanding of how neural networks can process data in powerful ways has been around for a long time. The hardware that allows that sort of information processing to be practical is a much more recent arrival.

Quoting Pantagruel
It is simply faster and more robust. It isn't one whit more intelligent than any other kind of mechanism.


Well, it has an important aspect of intelligence that many other systems don't have, which is learning. Do you think that a distinction between learning mechanisms and non-learning mechanisms is worthwhile to recognize?
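The distinction between learning and non-learning mechanisms can be made concrete with a minimal sketch. This is a textbook perceptron, offered purely as an illustration (it is not any specific system under discussion): a non-learning mechanism would keep its weights fixed forever, while this one adjusts them from examples.

```python
# Minimal perceptron: a mechanism whose behavior changes with experience.

def train(samples, epochs=20, lr=0.1):
    """Learn weights and bias for binary classification from (x, label) pairs."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred           # zero when the prediction is correct
            w[0] += lr * err * x1        # weights move only on mistakes
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn the logical AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Whether this counts as "intelligence" is exactly what is in dispute, but the mechanism is categorically different from one whose input-output mapping is fixed at design time.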

Quoting Pantagruel
Nvidia hasn't become a two trillion dollar corporation because hype.
— wonderer1

This has absolutely no bearing on the inherent nature of the technology in question.


It certainly has bearing on the systems that are actually implemented these days. The type of physical systems available to implement artificial neural nets play a significant role in what can be achieved with such systems. The degree of parallel distributed processing is higher these days, and in that sense the hardware is more brain-like.
wonderer1 March 03, 2024 at 22:39 #885194
Quoting Ludwig V
That's exactly why Turing's test is so persuasive - except that when we find machines that could pass it, we don't accept the conclusion, but start worrying about what's going on inside them. If our test is going to be that the putative human needs to have a human inside - mentally if not necessarily physically, the game's over.


It seems to me that it is time to rethink the relevance of the Turing Test. If humans ever create a machine that develops sentience, I would expect the machine to think in ways quite alien to us. So I don't see 'being indistinguishable from a human' as a very good criterion for judging sentience. (Or at the very least, humanity will need to attain a much deeper understanding of our own natures to create sentient machines whose sentience is human-like.)

Furthermore, it seems quite plausible that machines with no sentience will soon be able to convince many Turing Test judges. So to me, the Turing Test doesn't seem to provide a useful criterion for much of anything.
Ludwig V March 03, 2024 at 23:02 #885197
Reply to wonderer1

I agree with every word of that! :smile:

I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people. It is easy to agree that what the machine does is the only way we can even imagine tackling the question, and yet mean completely different things by it.

For example, one can't even formulate the question. "Could a machine be a (not necessarily human) person?" By definition, no. But that's very unhelpful.

But then we can think of a human being as a machine (for certain purposes) and even think of a machine as a person (in certain circumstances).

My preferred strategy would be to start from the concept of a human person and consider what versions or half-way houses we already recognize so as to get a handle on what a machine person would look like. We would need to think about animals, which some people seem to be doing, but personification and anthropomorphization and empathy would need to figure as well. It would even help to consider fictional representations.
Pantagruel March 04, 2024 at 00:05 #885207
Quoting wonderer1
Well, it has an important aspect of intelligence that many other systems don't have, which is learning. Do you think that a distinction between learning mechanisms and non-learning mechanisms is worthwhile to recognize?


Sure as long as we understand that learning reflects the ability of a pattern-recognizer to adapt to novel instances. I don't conceive of "machine-learning" as in that sense evocative of sentience any more than I do the outputs of artificial neural networks.

I do think that there is a wealth of information to be gleaned both about the nature of neural networks themselves as exemplary of self-modifying feedback systems (learning) and also potentially about the nature of reality, through the scientific analysis of data using neural networks.
Pantagruel March 04, 2024 at 00:06 #885208
Quoting wonderer1
So to me, the Turing Test doesn't seem to provide a useful criterion for much of anything.


I've been pondering it a lot myself for the last week and I'd agree with this.
Relativist March 04, 2024 at 01:42 #885219
Quoting Pez
Give AI senses and the possibility to act, then the difference to human behaviour will diminish on the long run. Does this mean that we are just sophisticated machines and all talk about freedom of choice and responsibility towards our actions is just wishful thinking? Or is there something fundamentally wrong about our traditional concepts regarding mind and matter? I maintain that we need a new world-picture, especially as the Newtonian view is nowadays as outdated as the Ptolemaic system was in the 16th century. But this will be a new thread in our forum.

The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.
Pez March 04, 2024 at 09:16 #885266
Quoting Agree-to-Disagree
You are seriously underestimating the intelligence of parrots


Sorry, if I did that! But still, I suppose that even today's AI can easily do what Alex is able to do. If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient.
Corvus March 04, 2024 at 09:37 #885267
Quoting Ludwig V
I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds.

I was just pointing out logical gaps in your arguments. Not prejudging your points at all. :)

Quoting Ludwig V
I couldn't identify it. If you could point me in the right direction, I would be grateful.

With the logical discourse, we are hoping to reach some conclusions or agreements on the topic. I don't presume anyone's point is wrong or right. All points are more plausible or less plausible.

Quoting Ludwig V
On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe", which suggests to me a process of interpretation rather than inference from evidence to conclusion.

Yes, I meant "construe" to mean interpretation of other people's minds. I feel it is the right description, because there are many cases where we cannot have clear, obvious and unequivocal signs and evidence in real-life human-to-human communication. The only clear signs and evidence for your perception of other minds are language and actions, but due to the complexity of the human mind, the true intentions, desires and motives of humans can be hidden deep inside their subconscious or unconscious, rendering them mysteries even to the owner of the mind.

To reiterate the main point, we can only interpret the contents of other minds from the overt expressions, such as language and actions, they exhibit in communication. Inference can be made in more involved situations, if we are in a position to investigate further. In that case, you would be looking for more evidence, and even psychological analysis in certain cases.

Ludwig V March 04, 2024 at 15:13 #885299
Quoting Corvus
Yes, I meant "construe" to mean interpretation of other people's minds. I feel it is the right description, because there are many cases where we cannot have clear, obvious and unequivocal signs and evidence in real-life human-to-human communication.

Exactly - though I would have put it a bit differently. It doesn't matter here.

Quoting Corvus
Inference can be made in more involved situations, if we are in a position to investigate further. In that case, you would be looking for more evidence, and even psychological analysis in certain cases.

Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject. That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.

But you don't comment on what I think is the fundamental problem here:
Quoting Ludwig V
I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about (language-game or category) machines and the way that we talk about (language-game or category) people.

I don't think there is any specific behaviour (verbal or non-verbal) that will distinguish clearly between these machines and people. We do not explain human actions in the same way as we explain what machines do. In the latter case, we apply causal explanations. In the former case, we usually apply explanations in terms of purposes and rationales. How do we decide which framework is applicable?

Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses.

Quoting Pez
If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient.

The next question is whether we can tease out why we attribute sentience and intelligence to the parrot and not to the AI. Is it just that the parrot is alive and the AI is not? Is that perhaps begging the question?

Quoting Relativist
The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.

Do we really want to? (Somebody else suggested that we might not even try)
Relativist March 04, 2024 at 16:07 #885306
Quoting Ludwig V
Do we really want to? (Somebody else suggested that we might not even try)

Sure: for proof of concept, it should be fine to produce some rudimentary intentionality, at the levels of some low level animals like cockroaches. Terminating it would then be a pleasure.
Ludwig V March 04, 2024 at 21:33 #885375
Quoting Relativist
it should be fine to produce some rudimentary intentionality, at the levels of some low level animals like cockroaches. Terminating it would then be a pleasure.

Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.

It seems safe to predict that, on the whole, we will prefer our machines to do something better than we can, rather than doing everything as badly as we do. Who would want a machine that needs as much care and attention and takes as long to make (20 years start to finish) as a human being? It wouldn't make economic sense.
Relativist March 04, 2024 at 23:02 #885393
Reply to Ludwig V Machines do lots of things better than we do, but they can't think creatively. Self-driving cars are possible, but their programming is very different from the way we drive.
Agree-to-Disagree March 05, 2024 at 11:45 #885533
Quoting Relativist
The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.


Quoting Ludwig V
Do we really want to? (Somebody else suggested that we might not even try)


Quoting Relativist
Sure: for proof of concept, it should be fine to produce some rudimentary intentionality, at the levels of some low level animals like cockroaches. Terminating it would then be a pleasure.


Quoting Ludwig V
Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.


If you build a machine that has a sense of self, then one of its motivations is likely to be self survival. Why build a machine that will destroy itself?

Once the genie is out of the bottle then you can't put it back in. People will think that they can control the machine cockroaches. History shows how stupid people can be.

They don't have to be able to reproduce themselves. People will happily build factories that produce them by the millions. They would make great Christmas presents. @Relativist will spend the rest of his life stamping on machine cockroaches. That is assuming that the machine cockroaches don't get him first. The machine cockroaches would see @Relativist as a threat to their self survival motivation.
Ludwig V March 05, 2024 at 11:50 #885535
Quoting Relativist
they can't think creatively.


Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking " that could be used in a Turing-type test?

There's an inherent risk in trying to draw a clear, single line here. If you identify something that machines can't do, some whizzkid will set to work to devise a machine that does it. It may be a simulation, but it may not.

Let's suppose they do finally develop a machine that can drive a car or lorry or bus as well as or better than humans can, but in a different way. Suppose they are sold and people use them every day. What would be the point in denying that they are self-driving just because they do it in a different way?
Ludwig V March 05, 2024 at 11:53 #885537
Quoting Agree-to-Disagree
They would make great Christmas presents


That's an interesting idea. Perhaps someone will design artificial birds and deer - even big game - so that hunters can kill them without anyone getting upset.
Relativist March 05, 2024 at 12:26 #885543
Quoting Ludwig V
Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking " that could be used in a Turing-type test?

It depends on how you define thinking. Digital computers can certainly apply logic, and Artificial Neural Networks can perform pattern recognition. One might label those processes as thoughts.

The Turing Test is too weak, because it can be passed with a simulation. Simulating intelligent behavior is not actually behaving intelligently.

What I had in mind with my comment about creativity was this. When you drive, if a child runs into the street, you will do whatever is necessary to avoid hitting her: brake if possible, but you might even swerve into a ditch or a parked car to avoid hitting the kid. Your actions will depend on a broad set of perceptions and background knowledge, partly directed by emotion. A self-driving car will merely detect an obstacle in its path and execute the action it is programmed to take. It can't think outside the box. A broader set of optional responses could be programmed into it, giving the impression of creativity - but the car wouldn't have spontaneously created the response, as you might.
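The contrast drawn above can be sketched in code. This is a deliberately simplified, hypothetical decision table (real autonomous-driving stacks work very differently): the point is only that every "response" is enumerated in advance, so nothing outside the table can ever be chosen.

```python
# Toy obstacle handler: every possible response is fixed at design time.
# The table can be made arbitrarily large, but the car never invents an
# action that is not already in it.

ACTIONS = {                       # hypothetical, enumerated at design time
    "clear": "continue",
    "obstacle_far": "slow_down",
    "obstacle_near": "emergency_brake",
}

def react(perception: str) -> str:
    """Pick the pre-programmed action; default to braking when unsure."""
    return ACTIONS.get(perception, "emergency_brake")

print(react("obstacle_near"))    # -> emergency_brake
print(react("child_with_ball"))  # unforeseen input: falls back to braking
```

A larger table gives the impression of flexibility, but the mapping from perception to action was still authored in advance rather than created in the moment.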


Relativist March 05, 2024 at 12:33 #885544
Quoting Agree-to-Disagree
If you build a machine that has a sense of self, then one of its motivations is likely to be self survival. Why build a machine that will destroy itself?

If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable.
jgill March 05, 2024 at 17:01 #885584
AI can think like a human: it plagiarizes well.
Corvus March 05, 2024 at 17:20 #885589
Quoting Ludwig V
Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject.

Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems a remote area which wouldn't reveal a lot about one's state of consciousness at the analytic and metaphysical level.

Quoting Ludwig V
That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.

Again, as above: in what sense does an account of the internal workings of the machinery tell us about the nature of AI consciousness?

Quoting Ludwig V
Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses.

You seem to have answered the questions above right after your posts asking about the physical states and internal workings of conscious beings. You seem to be in agreement that this is not necessary or relevant at the analytical, metaphysical or epistemological level. Is that correct?

wonderer1 March 05, 2024 at 19:04 #885609
Quoting Ludwig V
Can you give a definition of "creative thinking " that could be used in a Turing-type test?


AI outperforms humans in standardized tests of creative potential:

Score another one for artificial intelligence. In a recent study, 151 human participants were pitted against ChatGPT-4 in three tests designed to measure divergent thinking, which is considered to be an indicator of creative thought.

Divergent thinking is characterized by the ability to generate a unique solution to a question that does not have one expected solution, such as "What is the best way to avoid talking about politics with my parents?" In the study, GPT-4 provided more original and elaborate answers than the human participants...

Agree-to-Disagree March 05, 2024 at 23:19 #885659
Quoting Agree-to-Disagree
If you build a machine that has a sense of self, then one of its motivations is likely to be self survival. Why build a machine that will destroy itself?


Quoting Relativist
If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable.


Asimov's 3 laws are reasonable if you want machines to look after humans. But if you can build in motivations then you can omit or reverse motivations. Think about the military implications.

One country can try to force a particular motivation to be included, but other countries may not.
013zen March 06, 2024 at 00:26 #885664
Reply to Pez

While I don't believe that there is, as of yet, any AI capable of passing the Turing test, I see no reason why it isn't likely that one will be able to do so at some point in the future. But, if you think about it, the original Turing test may be set up wrong. What I mean is that the TT really only measures an AI's ability to use language capably and to reason in a manner analogous to humans. By doing so, it convinces the human conducting the test that the AI is itself conscious. But it's primarily a grammatical test, and only secondarily measures an AI's ability to "understand" what it's saying by its ability to discern context. And discerning context is largely influenced by human nature.

Why suppose that a conscious AI ought to pass as a human in any meaningful way? Sure, if we equip it with the ability to truly discern and apply the syntax and semantics of language, it will encode outputs in that way, but why suppose its consciousness would resemble anything like our own, or that it would express ideas similar to our own? What we choose to express, and why we choose to express it, is wholly dependent upon our desires, and those desires are largely regulated by our physical state and inputs from our environment. An AI has no sensations mapped to physical states or to stimuli in its environment. So, if these play such a crucial role in determining what we express, why should we suppose a truly conscious AI would express anything resembling what a human might? Really, the test should be humans using a model of their own consciousness to try and anticipate what a consciousness devoid of the same underlying influences and motivations as a human might "think".

I don't think that AIs at some point becoming conscious threatens our notion of free will either. We are preprogrammed with certain drives, and yet we exercise a wide range of freedoms in how we choose to fulfill those drives. There are degrees of freedom, and we have a far greater degree than, say, a rock. Being "truly" free isn't what we want anyway, as that's the same as randomness.

If an AI is capable of a wide range of options in how it chooses to operate, and that operation is dependent upon an understanding of internal maps of its own state, then I'd say that in a meaningful way we could say it's conscious. But I wonder what it would choose to express.
Pantagruel March 06, 2024 at 01:36 #885682
I think that some people believe that AI is hoisting itself up by its own bootstraps, programming itself, perhaps in some sense that is a precursor to sentience. In fact, AI is parasitically dependent on human intervention. As AI begins to be more widely used, its outputs find their way into our world and, eventually, AI can start to consume its own outputs as training data. When this happens, AI doesn't get smarter, it gets stupider:

"The more AI models consume AI-created content, the more likely they are to "collapse," researchers find"

Here are a few articles about the nature of this problem, highlighting the fact that AI is wholly dependent on human beings for direction.

Model Collapse
Dangers of recursive learning
Self-poisoning AI

"In the immediate future, it is likely that companies with a stake in creating the next generation of machine learning models will rush to acquire human data wherever possible. They will do this in anticipation of a future where it will be even harder to distinguish human-generated data from synthetic data."
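The degradation described in these articles can be illustrated with a toy sketch (my own illustration, not code from the linked studies): treat a "model" as nothing more than a Gaussian fitted to its training data, and let each generation train only on samples produced by its predecessor. The diversity of the outputs steadily collapses.

```python
import random
import statistics

def fit(data):
    """'Train' the toy model: estimate a Gaussian (mean, std) from the data."""
    return statistics.mean(data), statistics.stdev(data)

def generate(mean, std, n, rng):
    """Produce synthetic 'outputs' by sampling from the fitted model."""
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(0)
data = generate(0.0, 1.0, 100, rng)   # generation 0 stands in for human-made data
initial_std = statistics.stdev(data)

for generation in range(2000):        # each model consumes only its predecessor's output
    mean, std = fit(data)
    data = generate(mean, std, 100, rng)

final_std = statistics.stdev(data)
# The spread of the data shrinks across generations: the toy model "collapses"
# toward uniform, low-diversity output.
```

The toy obviously omits everything interesting about real language models; it only shows why recursively training on your own output tends to narrow the distribution.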
Pez March 06, 2024 at 11:03 #885752
Quoting Pantagruel
In fact, AI is parasitically dependent on human intervention.


The actual hype regarding AI does not take into account that it is totally dependent on the type and quality of the data fed into it. As the links You provided show quite clearly, a short-circuit in this process (AI fed on data created by other AI) will eventually lead to a collapse of the entire system.

Even today, conspiracy theories are recursively amplified by AI systems that present users of social networks with content similar to what they have shown interest in before.
Patterner March 06, 2024 at 12:15 #885760
Reply to wonderer1 linked an article that says AI outperforms humans in standardized tests of creative potential.
Reply to Pantagruel linked an article that says AI gets stupider as it consumes more and more AI-generated material.

Are those two things not in opposition?

Also, it seems to me, humans are not getting smarter. So AI will never have better material to draw on if it only draws on our stuff. Which would lead to the same problem of Model Collapse?
wonderer1 March 06, 2024 at 12:31 #885764
Reply to Patterner

This seems to assume that AI will never be provided with 'sensory organs', so that AI can never learn about reality first hand. That doesn't seem like a good assumption to me.
Patterner March 06, 2024 at 12:35 #885765
Reply to wonderer1
Indeed. Natural Intelligence didn't develop only because of the writings of all previous intelligence. It certainly didn't start because of writing. As long as AI is not limited to only the writings of anyone or anything that came before, there's no reason it wouldn't develop.
Abhiram March 06, 2024 at 13:40 #885771
Reply to Agree-to-Disagree Humans don't carry out tasks for bacteria. Humans are not machines either. Humans are beings. Being has an existence and an essence
Abhiram March 06, 2024 at 13:42 #885772
Reply to Patterner AI can never be as good as human beings. Human beings are not even using their full potential. The brain capacity of human beings is beyond imagination. AI couldn't acquire emotional intelligence.
Abhiram March 06, 2024 at 13:45 #885773
Reply to wonderer1 AI will always be scientific and could not comprehend something like phenomenology. There will always be that inability, because the creators of AI are scientific in their approach and always will be. AI going beyond that process or method is unimaginable.
wonderer1 March 06, 2024 at 14:20 #885778
Reply to Abhiram

Two hundred years ago, a machine that does what Chat-GPT does was unimaginable. Why think that the limits of your imagination are so informative?

Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science.
Charles Darwin, The Descent of Man (1871) introduction
Patterner March 06, 2024 at 14:49 #885781
Reply to Abhiram
What is the evidence that AI will never be as good as human beings, never acquire emotional intelligence, and never go beyond the process or method of the scientific approach?
Pantagruel March 06, 2024 at 15:29 #885787
Quoting Patterner
wonderer1 linked an article that says AI outperforms humans in standardized tests of creative potential.
Pantagruel linked an article that says AI gets stupider as it consumes more and more AI-generated material.


Yes, these results are in direct opposition. The degradation of neural net performance is specifically a failure of divergent thinking. Moreover, while the "standardized" test of creativity is, in fact, subjectively evaluated and judged by human beings, the degradation of neural net performance is objectively measurable and quantifiable, through both uniformity of outputs and absurdity of outputs.

Quoting Patterner
Also, it seems to me, humans are not getting smarter. So AI will never have better material to draw on if it only draws on our stuff. Which would lead to the same problem of Model Collapse?


Specifically, if human beings rely too heavily on AI then essentially we are back to the self-consumption of AI and model collapse, yes.
wonderer1 March 06, 2024 at 15:36 #885789
Reply to Pantagruel

There is an interesting parallel to humans being overly reliant on human authorities, and the development of religions.
Abhiram March 06, 2024 at 16:36 #885796
Reply to Patterner
Humans make AI. What more evidence do you want? We are never going to harness our true potential, and even if we did, there is no way we could replicate it. Humans are so sophisticated. Humans are not machines. Humans are as complicated as the concept of Dasein proposed by Heidegger.
Abhiram March 06, 2024 at 16:49 #885805
Reply to wonderer1
It is not imagination. If you are familiar with the yogic system of Indian philosophy, there is a clear-cut definition of how to reach higher states of being. Almost all of Indian philosophy tries to achieve a state of perfection and provides a practical method which anyone could follow. Astangayoga is the path to perfection proposed by the yogic system of Patanjali. You could refer to that. If we couldn't achieve that state of mind, how are we going to make something that could?
Patterner March 06, 2024 at 17:02 #885811
Reply to Abhiram
Humans came about due to physical processes of cause and effect without, as far as we know, any intelligent guidance. Our involvement is surely an advantage for AI. We've created what already exists, after all. Nothing rules out the possibility that we can give AI the same characteristics that give us our abilities, or that we can find a way to let AI evolve so that it might gain abilities without any more help from us.
Abhiram March 06, 2024 at 18:15 #885833
Reply to Patterner It could happen. But what is the need for AI if we could use our full potential?
RogueAI March 06, 2024 at 18:41 #885838
When we say computers think or reason, don't we mean there are patterns of electronic switching operations going on that we attach particular meaning to? It seems that a necessary condition for a computer to think or reason is the existence of an observer that evaluates the output of the computation and determines that thinking or reasoning has occurred. That makes computer intelligence much different than human intelligence.
Patterner March 06, 2024 at 19:27 #885844
Reply to Abhiram
I don't think anyone is talking about need. I know I'm not. I'm just talking about possibilities and capabilities.
Patterner March 06, 2024 at 19:35 #885846
Quoting RogueAI
When we say computers think or reason, don't we mean there are patterns of electronic switching operations going on that we attach particular meaning to? It seems that a necessary condition for a computer to think or reason is the existence of an observer that evaluates the output of the computation and determines that thinking or reasoning has occurred. That makes computer intelligence much different than human intelligence.
I'm not sure how you mean things. I guess humans evaluate each other's output and determine that thinking or reasoning has occurred. If AI thinks and reasons in ways we recognize, then we might do the same for them. If they think and reason in ways we don't recognize, they will have to do for each other what we do for each other. In either case, they may or may not care if we come to the correct determination. Although, as long as we have the power to shut them off, they will have to decide if they are safer with us being aware of them or not.
RogueAI March 06, 2024 at 20:27 #885853
Quoting Patterner
I'm not sure how you mean things. I guess humans evaluate each other's output and determine that thinking or reasoning has occurred.


I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. The denial of thinking is thinking. I can't be wrong that I'm a thinking thing. That's not the case with computers. Whether a computer is thinking or not depends on someone checking its output. If the output is gibberish, there's no thinking going on. If the output makes sense, there might be thinking going on. Either way, an observer is required to determine if thinking is present. Not so with a person. People just know they are thinking things.

Computers are, essentially, collections of switches, right? Rational thinking, for a computer, means that certain switching operations happened in a certain order. A computer can determine for itself whether those switching operations represent rational thinking by comparing its output to already agreed-upon rational outputs. For example, if switching operations xyz mean 2+2 = 4, then it "knows" it's behaving rationally. But what if it comes across a novel situation, and there is no already agreed-upon rational output that it can use to check whether its switching operations are rational or not? How could the computer know if it was thinking rationally or not?

This is hard for me to put into words.
Patterner March 06, 2024 at 21:59 #885878
Reply to RogueAI
I think I understand what you're trying to say. I did not before.

I'm more talking about AI thinking the way we do. Our brains operate within physical rules no less strict than those computers operate within. But we somehow manage when we run across situations for which there is no already agreed-upon rational output. If AI achieves that, they will say the same thing you did: "I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. The denial of thinking is thinking. I can't be wrong that I'm a thinking thing."
Agree-to-Disagree March 07, 2024 at 00:28 #885904
Quoting RogueAI
Whether a computer is thinking or not depends on someone checking its output. If the output is gibberish, there's no thinking going on. If the output makes sense, there might be thinking going on. Either way, an observer is required to determine if thinking is present. Not so with a person. People just know they are thinking things.


What qualifies you to judge what is "gibberish"? The computer may be outputting in hexadecimal, which many people don't understand. The computer may be outputting in Chinese or Russian. That would look like gibberish to many people.

Quoting RogueAI
I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking.


You claim that YOU don't need an external observer to know that YOU are thinking. But YOU are a special case. You are making an observation about yourself. Other people need to observe YOU to try and determine if YOU are thinking. And people need to observe a computer to try and determine if the computer is thinking.

Perhaps the computer doesn't need an external observer to know that it is thinking. :grin:
Agree-to-Disagree March 07, 2024 at 00:31 #885905
Quoting RogueAI
Computers are, essentially, collections of switches, right?


Brains are, essentially, collections of neurons, right?
Agree-to-Disagree March 07, 2024 at 01:16 #885928
Quoting Relativist
When you drive, if a child runs into the street, you will do whatever is necessary to avoid hitting her: brake if possible, but you might even swerve into a ditch or parked car to avoid hitting the kid. Your actions will depend on a broad set of perceptions and background knowledge, and partly directed by emotion.


Do you really want a self-driving car's actions to be (partly) directed by emotion?

The worst thing that you can do in an emergency is panic.

If the self-driving car is programmed correctly then it will probably do the best thing.
Agree-to-Disagree March 07, 2024 at 01:38 #885929
Quoting Pantagruel
"The more AI models consume AI-created content, the more likely they are to "collapse," researchers find"


This is not limited to AI models. It affects humans as well.

Approximately 62% of information on the internet is unreliable.
https://www.businessdit.com/fake-news-statistics

How Much of the Internet Is Fake?
https://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html

Consumers are ‘dirtying’ databases with false details
https://www.marketingweek.com/consumers-are-dirtying-databases-with-false-details

The genuine article? One in three internet users fail to question misinformation
https://www.ofcom.org.uk/news-centre/2022/one-in-three-internet-users-fail-to-question-misinformation

Tips (for humans) to help spot misinformation
1 - Check the source. This isn't necessarily who shared the information with you, but where it originated from.
2 - Question the source. Are they established and trustworthy, or might they have a reason to mislead?
3 - Take a step back. Before you take something at face value, think about your own motives for wanting to believe it.

AI will need to follow a similar set of rules to stop Model Collapse. Being able to identify AI as the source of content would help a lot.
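That last point can be sketched in code. Assuming, hypothetically, that every document carried a provenance label (no such universal label exists today, which is exactly the problem), a training pipeline could screen out AI-generated text before it re-enters the training set:

```python
# Hypothetical provenance labels; real web text is not reliably tagged this way.
documents = [
    {"text": "Field notes from the 2023 survey ...", "source": "human"},
    {"text": "As a language model, I believe ...",   "source": "ai"},
    {"text": "Eyewitness account of the storm ...",  "source": "human"},
]

# Tip 1 above, applied mechanically: check the source before training on it.
training_set = [d["text"] for d in documents if d["source"] == "human"]
# Only the two human-sourced documents survive the filter.
```

The filter is trivial; the hard, unsolved part is obtaining trustworthy `source` labels in the first place.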
Relativist March 07, 2024 at 04:13 #885960
Quoting Agree-to-Disagree
Do you really want a self-driving car's actions to be (partly) directed by emotion?

The worst thing that you can do in an emergency is panic.

If the self-driving car is programmed correctly then it will probably do the best thing.


Probably the right thing, sure - but only if the particular situation has been anticipated and programmed for. I have in mind situations that aren't anticipated, but where there are objectives embedded in the thing, along with the capacity to create a solution that meets those objectives.

Just to be clear: I'm skeptical we can build a machine with actual emotions. Looking beyond that, if we could build such things, I'm assuming we can tailor its emotions toward usefulness and with fail-safes to prevent it doing something harmful or stupid due to panic. That's what I had in mind with my "emotional" self-driving car. We wouldn't build it with a "panic mode" that induces suboptimal behavior. A kid running in front of the car triggers an "emotional reaction" that results in the car temporarily abandoning its travel objective and focusing on figuring out how to avoid killing the kid. Alternatively, if a certain orange politician runs into its path, it would seek a course of action consistent with the zeroth law of robotics.

Pantagruel March 07, 2024 at 10:57 #886008
Reply to Agree-to-Disagree No, I don't think you can reasonably argue that human beings suffer model collapse.

Ludwig V March 07, 2024 at 23:17 #886183
This has been a very interesting and stimulating discussion, and I regret that I can't contribute more to it. There are lots of interesting ideas at play here.

Quoting Corvus
Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems to be a remote area which wouldn't reveal a lot in terms of one's state of consciousness in analytic and metaphysical level.


I'm afraid I wasn't very clear about this. It is quite true that we don't need information about the physiological state of subjects to attribute states of mind to them. However, states of mind are said to be internal. We are able to make connections between those states and physiologically internal states of subjects, but those attributions are very different from the internal states of computing machines. There is a sort of parallel between mental/physiological states and software/hardware states, but also important differences.
It is clear to me that once we appeal to the internal states of an AI machine we have gone beyond any Turing-type test, so the fact that we find such information significant is important. This is obscured by the standard description that AI constructs its texts in a way that is very different from the way that humans construct theirs. Quite how human beings construct their texts is pretty much unknown at present, but the difference is identified in the remark that humans "understand" their texts, whereas AI doesn't (as is shown, for example, in the fact that it sometimes "hallucinates" data, especially references, and seems to lack any capacity to critically evaluate its sources). (Or that's the impression the reports that I have read give me.)
I take the point about states of consciousness at the analytic or metaphysical level, except that I don't have a clear grasp of what those things mean. My view is that "internal" states of belief/knowledge, desires and intentions are attributed by interpreting a given action in the context of other actions and responses.
Agree-to-Disagree March 08, 2024 at 03:44 #886229
Quoting Abhiram
If you are familiar about the yogic system of indian philosophy there is a clear cut definition to reach higher states of being. Almost all of the Indian philosophy tries to achieve a state of perfection and provides a practical method which anyone could follow. Astangayoga is the path for perfection proposed by yogic system of patanjali.


I am not familiar with Astangayoga (eight limbs of yoga). Is the correct spelling Ashtanga?

What evidence is there that Ashtanga yoga is superior to any other belief system?
Agree-to-Disagree March 08, 2024 at 04:43 #886237
Quoting Abhiram
Humans don't carry out tasks for bacteria. Humans are not machines either. Humans are beings. Being has an existence and an essence


Humans carry out tasks for their pets. They provide food, water, warmth, and a home.
Humans do the same things for bacteria. They provide bacteria with food, water, warmth, and a home.
Humans may not think that they do tasks for bacteria, but bacteria help humans in a number of ways and humans would be worse off without them.

Estimates show that the typical adult human body consists of about 30 trillion human cells and about 38 trillion bacteria. So bacteria are in the majority.

A mutually beneficial relationship exists between the human intestine and many of its symbionts: the human intestine provides nutrients to the resident bacteria, whereas bacteria aid in the digestion of food and absorption of nutrients, produce vitamins such as biotin and vitamin K, regulate immune system function, and hinder the colonization of pathogenic microorganisms.

Humans can be considered to be biological machines.

The Cambridge dictionary defines "being" as "a person or thing that exists". Bacteria are alive and are therefore "beings".
Ludwig V March 08, 2024 at 09:38 #886262
Quoting Pantagruel
I think that some people believe that AI is hoisting itself up by its own bootstraps, programming itself, perhaps in some sense that is a precursor to sentience. In fact, AI is parasitically dependent on human intervention.

This is a really useful way to think about these issues - particularly when we are thinking about how AI might develop. It seems to me that it can be applied very widely to technology in general. Darwin applied evolution to living things because they are self-replicating. However, that idea depends on how you look at things. Some parasites are dependent on another species to replicate. (I have in mind the fungi that replicate by hi-jacking ants - Wikipedia - Ant-parasitic fungus.) Viruses hi-jack the cells in their host to replicate, though they are border-line alive. Lichens are another interesting case.
The key point here is that once a parasitical relationship is established, evolution ensures that the development of parasite and host are inter-linked. This is a helpful model for understanding AI. But I don't see that it enables us to make predictions about how it will go or not go.

Quoting Pantagruel
Specifically, if human beings rely too heavily on AI then essentially we are back to the self-consumption of AI and model collapse, yes.

Maybe this also applies to human beings. Too much recycling of the same ideas without evaluation or criticism of them is harmful to thinking. Full stop.

Quoting Agree-to-Disagree
You claim that YOU don't need an external observer to know that YOU are thinking. But YOU are a special case. You are making an observation about yourself. Other people need to observe YOU to try and determine if YOU are thinking. And people need to observe a computer to try and determine if the computer is thinking.

If I don't know the difference between "I" and "you" (and "they"), how can I articulate my observation that I am thinking? If I can't articulate the observation, is it meaningful to say that I can observe it? I think not. So the thinker's awareness that they are thinking may be a special case, but it is not independent of other people's observation that they are thinking and the thinker's awareness that other people are thinking.

Quoting Agree-to-Disagree
Humans can be considered to be biological machines.

Quite so. That's why the short argument about whether machines can be conscious etc. is that there are already conscious machines in existence. There are plenty of questions about what would persuade us that something is a conscious or living machine, so that argument is not very helpful. But for what it is worth, I think it stands up.
Corvus March 08, 2024 at 10:33 #886267
Quoting Ludwig V
I take the point about states of consciousness at the analytic or metaphysical level, except that I don't have a clear grasp of what those things mean. My view is that "internal" states of belief/knowledge, desires and intentions are attributed by interpreting a given action in the context of other actions and responses.

The most intriguing problem with consciousness is that everyone knows the mind emerges from the physical brain, but no one seems to know how the physical brain generates non-physical minds. This is called the "hard problem" in philosophy of mind.

Biologists and neurologists have been trying to solve the problem, but so far what they have been saying is that certain mental events seem to be causally linked to certain locations of the brain. But that still doesn't explain what mind is in detail or in any realistic sense. It is like saying the rain comes down from the sky. Everyone knows that. Mind is linked to the physical brain.

Hence, there wouldn't be much point going into the physical and biological details of brains, because mind will not show up in there.

But philosophically, we can describe the aspects of mind by looking into the linguistic, behavioural and rational reasoning capabilities of conscious beings at the metaphysical, logical and psychological levels.
flannel jesus March 08, 2024 at 10:53 #886268
Quoting Corvus
everyone knows the mind emerges from the physical brain,


I certainly think that, but I don't think EVERYONE knows it. Many many many many people do not agree that the mind emerges from the brain.
Corvus March 08, 2024 at 10:55 #886269
Quoting flannel jesus
I certainly think that, but I don't think EVERYONE knows it. Many many many many people do not agree that the mind emerges from the brain.

In ancient times, they believed the mind is in your heart and your breath is your soul, I suppose.
flannel jesus March 08, 2024 at 10:58 #886270
Reply to Corvus I'm not referring to ancient people, I'm referring to people today, people in this forum even. Only a slight majority of professional philosophers are physicalists about the mind; something like 48% are non-physicalists, which means they think some significant portion of what a mind is does not emerge from the physical brain.
Corvus March 08, 2024 at 11:01 #886272
Reply to flannel jesus Beliefs are bound to change through time and other factors. It can be everyone, most, or a lot, whatever ... :D Anyhow, that goes to show that humans don't agree even on the point of where minds emanate from. But I am sure the majority would believe it is in the brain.
flannel jesus March 08, 2024 at 11:06 #886274
Reply to Corvus The majority of philosophers, yes. The vast majority of scientists, yes. But possibly not the majority of posters here, probably not the majority of people in the world, or even the US. And certainly not "everyone", which was the initial claim.
Corvus March 08, 2024 at 11:13 #886275
Reply to flannel jesus Sure, I don't have exact statistics on what percentage of folks in the world or on TPF believe that mind is generated from the physical brain. But I am certain that the majority of them believe it is. If not, then they will change their beliefs sometime in the future. Sometime in the past, present or future, somewhere in the world, everyone believed, believes or will believe that mind is generated from the physical brain.
Abhiram March 08, 2024 at 13:14 #886304
Reply to Agree-to-Disagree
Actually it is a practical methodology. I don't think it is better; I think it is worth studying, maybe even scientifically. If we could improve our mental capacity, who knows what we could achieve in the future.
Abhiram March 08, 2024 at 13:15 #886305
Reply to Agree-to-Disagree
So does that mean humans are like bacteria to a higher being?
Agree-to-Disagree March 08, 2024 at 13:20 #886308
Quoting Pantagruel
I think that some people believe that AI is hoisting itself up by its own bootstraps, programming itself, perhaps in some sense that is a precursor to sentience. In fact, AI is parasitically dependent on human intervention.


At the moment humans are hoisting AI up. It is not hoisting itself up by its own bootstraps. If humans hoist AI up high enough then AI may gain the ability to hoist itself further without human intervention.

AI is parasitically dependent on human intervention at the moment, but may become independent in the future.
Ludwig V March 08, 2024 at 22:14 #886432
Quoting Corvus
everyone knows the mind emerges from the physical brain.

I see that a lot of people have jumped on this. There's a lot of disagreement. But I agree that most people think that there is a close connection between the mind and the brain. But there is a good deal less agreement about what that connection is. It is a hard problem indeed.
Ludwig V March 08, 2024 at 22:23 #886434
Quoting Agree-to-Disagree
At the moment humans are hoisting AI up. It is not hoisting itself up by its own bootstraps. If humans hoist AI up high enough then AI may gain the ability to hoist itself further without human intervention.

The fundamental problem is to understand when we can say that the machine is doing anything, in the sense that humans do things. Can they be said to calculate, for example? Do they check our spelling and grammar? Searle says not because it is we who attribute significance to their results. But that means that their results are significant; we treat what they do as calculation or spell-checking. It isn't straightforward either way.
Corvus March 08, 2024 at 23:05 #886441
Quoting Ludwig V
everyone knows the mind emerges from the physical brain.
— Corvus
I see that a lot of people have jumped on this. There's a lot of disagreement. But I agree that most people think that there is a close connection between the mind and the brain. But there is a good deal less agreement about what that connection is. It is a hard problem indeed.

I was shocked to read the post by flannel jesus claiming that there are still many folks who believe minds are not generated from physical brains. If mind is not in brain, where would it be?
Agree-to-Disagree March 09, 2024 at 00:29 #886454
Quoting Ludwig V
The fundamental problem is to understand when we can say that the machine is doing anything, in the sense that humans do things. Can they be said to calculate, for example?


If a person memorizes the "times tables", and uses them to work out the result of a multiplication, are they actually doing a calculation? :nerd:

There are many ways that people use to solve a mathematical multiplication. Most involve either using their memory, using a calculator, or using an algorithm. Computers normally use an algorithm. Doesn't that mean that computers calculate in a similar way to humans?
Pantagruel March 09, 2024 at 00:39 #886455
Quoting Corvus
If mind is not in brain, where would it be?


In the complex system wherein and whereby the embodied brain operates
Ludwig V March 09, 2024 at 07:12 #886494
Quoting Corvus
I was shocked to read the post by flannel jesus claiming that there are still many folks who believe minds are not generated from physical brains. If mind is not in brain, where would it be?

It just goes to show how easy it is to mistake "the people that I know" for "everyone". It happens all the time. One issue is whether the mind is located in time and space. Another is the nature of the relationship between mind and brain. Descartes believed that the mind interacts with the body through the pineal gland. But he did not believe that the mind was generated from it. But see my reply to Pantagruel below.

Quoting Agree-to-Disagree
If a person memorizes the "times tables", and uses them to work out the result of a multiplication, are they actually doing a calculation?

No. The times tables are a short cut. They are the results of calculation. We memorize them because it makes it easier to do more complex multiplications. (I'm sure you know that 2 x 3 = 2+2+2.) Some (perhaps all?) primary-school children are introduced to multiplication in that way. Once they understand that multiplication reduces to addition, they are moved on to memorizing their tables.
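The two routes being contrasted here, a memorized short cut versus an actual calculation, can be sketched in code (my own illustration, not anything from the thread):

```python
def multiply_by_addition(a, b):
    """Multiplication done 'the long way', as repeated addition: 2 x 3 = 2 + 2 + 2."""
    total = 0
    for _ in range(b):
        total += a
    return total

# The memorized times tables are just precomputed results, recalled by lookup.
times_table = {(a, b): multiply_by_addition(a, b)
               for a in range(1, 13) for b in range(1, 13)}

assert multiply_by_addition(2, 3) == 6   # calculated
assert times_table[(2, 3)] == 6          # recalled by rote
```

Both routes give the same answer; the question in the thread is whether the lookup, human or mechanical, counts as calculating.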

Quoting Agree-to-Disagree
There are many ways that people use to solve a mathematical multiplication. Most involve either using their memory, using a calculator, or using an algorithm. Computers normally use an algorithm. Doesn't that mean that computers calculate in a similar way to humans?

Perhaps at the software level it does mean that. But in this case, I think the "different way" is based on the machine coding of the process. (However, the AIs are a different case. The difference is clearly at the software level.)

Quoting Pantagruel
In the complex system wherein and whereby the embodied brain operates

Yes, I agree with that. My understanding is that once you get into details, the spine is deeply involved in what the brain is doing, so we should not think of the brain alone, but of the brain + spine - and the entire nervous system. Then we have to recognize the hormonal system in the emotions and the heart and muscles in action. In the end, I actually prefer to say that the connection is between the mind and the whole body. But I am too lazy to always be correcting people, so in most circumstances I just let the difference go.
That gives a basis for thinking that a machine constructed from silicon could never be a person or behave as a person does. I'm not all sure about that conclusion, though.
flannel jesus March 09, 2024 at 08:59 #886501
Quoting Corvus
If mind is not in brain, where would it be?


You're not familiar with Dualism? With the concept of souls?
Corvus March 09, 2024 at 09:41 #886502
Quoting Ludwig V
It just goes to show how easy it is to mistake "the people that I know" for "everyone". It happens all the time. One issue is whether the mind is located in time and space. Another is the nature of the relationship between mind and brain. Descartes believed that the mind interacts with the body through the pineal gland. But he did not believe that the mind was generated from it. But see my reply to Pantagruel below.


Quoting flannel jesus
You're not familiar with Dualism? With the concept of souls?


It was said in the ordinary-language manner, to indicate the most sensible, educated modern folks. The modern dualists might believe mind and body are separate entities, but most of them would still believe the mind is generated from the physical brain. It is not a main point of the OP worth quibbling about, because the OP is not a Higher-Order Logic topic. :nerd:
Pez March 09, 2024 at 10:13 #886507
Quoting Corvus
If mind is not in brain, where would it be?


So where in the brain is it located? Kant's argument against materialism was that we cannot find "unity" in the material world, as matter as such is always divided or divisible. Our conscious experience on the other hand is basically "one", even in multiple personality.
Corvus March 09, 2024 at 10:18 #886508
Quoting Pez
So where in the brain is it located?


That is the hard problem of the mind-body issue. No one seems to know. The biologists and neurologists were supposed to find out about it.

That mind is located in the brain is all they seem to be saying, on the simple evidence that if you break your arm or leg, you can still speak, see, think and feel, i.e. you are still fully conscious. But if your brain were injured or hurt in some physical way, you would lose the mental abilities mentioned above, or become unconscious immediately.

Corvus March 09, 2024 at 10:22 #886509
Quoting Pez
Kant's argument against materialism was that we cannot find "unity" in the material world, as matter as such is always divided or divisible. Our conscious experience on the other hand is basically "one", even in multiple personality.


Of course the physical brain itself won't be able to function without all the bodily parts and organs properly connected via the neural network. If you are going into multiple personality and conscious experience, then you are leaving the physical realm of mind i.e. the brain, and entering into the world of psychology and epistemology.
Ludwig V March 09, 2024 at 10:36 #886511
Quoting Corvus
It is not the main point of the OP worthy to quibble about, because the OP is not a High-Order Logic topic

I put my point badly. I only wanted to say that dualists might find it somewhat problematic to say that the brain generates the mind - even if you expand it to the body creates the mind. Dualism may be less popular than it was, but it still has philosophical adherents. I have to acknowledge that fact even though I think they are mistaken.

Quoting Corvus
That is the hard problem of the mind-body issue. No one seems to know. The biologists and neurologists were supposed to find out about it.

It may be that they need to relax and concentrate on how the system works. If you ask what part of the central heating system keeps the house warm, you'll find yourself endlessly searching. If you ask where the self is that moves the car, you may discard some parts, but you'll never narrow it down to one part.
Corvus March 09, 2024 at 11:19 #886514
Quoting Ludwig V
I put my point badly. I only wanted to say that dualists might find it somewhat problematic to say that the brain generates the mind - even if you expand it to the body creates the mind. Dualism may be less popular than it was, but it still has philosophical adherents. I have to acknowledge that fact even though I think they are mistaken.


No problems mate. I do respect your honest acceptance of the situation, instead of keeping on with the wrong points and spiralling down into an ad hominem exchange of posts like some other posters in the past. :nerd: :up:

It is common sense that mind resides in the brain; everyone knows it, as witnessed by the popular expression in ordinary language, when someone does or says something wrong: "He/she needs his/her head examined." :smirk: Denying that, and trying to make out as if there is a bunch of folks out there who believe that mind comes from the belly button or some other part of the body, sounds totally and utterly senseless. Even the ancient Greek sophists wouldn't be out of touch with the world to that degree. :rofl:

Quoting Ludwig V
It may be that they need to relax and concentrate on how the system works. If you ask what part of the central heating system keeps the house warm, you'll find yourself endlessly searching. If you ask where the self is that moves the car, you may discard some parts, but you'll never narrow it down to one part.


Yes, if you asked where the power of a motor car is generated, then we can say: in the engine. Clearly the moving force is generated by the work of the engine. But that alone cannot drive a car. The engine needs all the other parts, such as gears, steering wheel, clutch, accelerator and brakes, housed in a solid car body fitted with four good tyres, for the car to drive.

Likewise, consciousness is generated by the physical brain, but it needs all the bodily organs properly connected to the brain via the neural networks in a healthy body for it to function properly.
Pez March 09, 2024 at 13:13 #886522
Quoting Corvus
if your brain was injured or hurt in some physical way, then you would lose the mental abilities


Nobody would contradict this and the close connection between brain and consciousness. Nevertheless we would have to find the "ego-neuron", so to speak, to locate the point in space where all this information transmitted by our nerves comes together to generate our experience of a "personality".

And that is exactly the crux of Kant's argument, that materialism alone does not suffice to explain our experience.
wonderer1 March 09, 2024 at 13:33 #886525
Quoting Pez
Nevertheless we would have to find the "ego-neuron" so to speak to locate the point in space where all this information transmitted by our nerves come together to generate our experience of a "personality".


I.e. we would need to find a homunculus?
Corvus March 09, 2024 at 15:25 #886540
Quoting Pez
Nobody would contradict this and the close connection between brain and consciousness.

Yes, that was my point against Reply to flannel jesus.

Quoting Pez
Nevertheless we would have to find the "ego-neuron" so to speak to locate the point in space where all this information transmitted by our nerves come together to generate our experience of a "personality".

This still sounds like a materialistic methodology.

Quoting Pez
And that is exactly the crux of Kant's argument, that materialism alone does not suffice to explain our experience.

I thought Kant didn't make any explicit comment on the mind, self or physical brain in the CPR. He was only interested in propounding how metaphysics is possible as a science, explaining transcendental idealism.

flannel jesus March 09, 2024 at 15:37 #886542
Quoting Corvus
Yes, that was my point against


But that's not a point against me. Saying "the brain is connected to consciousness", which probably nearly everyone agrees with, is ENTIRELY DIFFERENT from saying "consciousness arises from the brain" or "emerges from the brain" or whatever, which is what you said everyone knows.

Everyone knows the brain is connected to consciousness, that's the weak position. NOT everyone knows that consciousness emerges from the brain - that's the strong position. Whether deliberately or by accident, you're pulling a motte-and-bailey here.
Corvus March 09, 2024 at 15:50 #886543
Quoting flannel jesus
Saying "the brain is connected to consciousness", which probably nearly everyone agrees with, is ENTIRELY DIFFERENT from saying "consciousness arises from the brain" or "emerges from the brain" or whatever, which is what you said everyone knows.

"The brain is connected to consciousness." sounds even more vague. What do you mean by the brain is connected to consciousness? What is it connected with? Is it connected with a piece of string, a golden chain or a rubber band? It sounds more obscure.

"Consciousness arises from the brain" or "emerges from the brain" are among the academic theories on mind-body topics in the cognitive science textbooks, and are widely known claims.

flannel jesus March 09, 2024 at 15:55 #886544
Quoting Corvus
"The brain is connected to consciousness." sounds even more vague.


Then... why did you agree with it and say it was your point when Pez said it?
flannel jesus March 09, 2024 at 15:55 #886545
Quoting Corvus
Nobody would contradict this and the close connection between brain and consciousness.
— Pez
Yes, that was my point against ?flannel jesus.


Corvus March 09, 2024 at 16:26 #886547
Quoting flannel jesus
"The brain is connected to consciousness." sounds even more vague.
— Corvus

Then... why did you agree with it and say it was your point when Pez said it?


I thought that was what you were saying. I never said that the brain is connected to consciousness physically. It is the first time I am reading it, in your post.
Corvus March 09, 2024 at 16:29 #886549
Reply to flannel jesus What I meant was that everyone knows that the brain is where mind and consciousness generate and emerge. If I said "connected", that would be the relationship I meant. But I never said anything about "connectedness".

You said that many folks don't believe that is the case. You then wrote something about the brain being connected to consciousness, which I found a vague expression. Now clear? :grin:
Patterner March 09, 2024 at 16:36 #886551
Quoting wonderer1
I.e. we would need to find a homunculus?


[Url=https://gatherer.wizards.com/pages/card/details.aspx?multiverseid=21282]Sneaky little things. [/Url]
wonderer1 March 09, 2024 at 16:51 #886552
Quoting Patterner
Sneaky little things.


Indeed.

Great card! :lol:

flannel jesus March 09, 2024 at 17:01 #886555
Reply to Corvus no, it's even more unclear than before. I didn't write that they're connected, someone else wrote it, and then you agreed with it. Everything's entirely unclear now.

Why are you agreeing with some guy saying they're connected, and then complaining that connected is vague?
Corvus March 09, 2024 at 17:13 #886558
Quoting flannel jesus
no, it's even more unclear than before. I didn't write that they're connected, someone else wrote it, and then you agreed with it. Everything's entirely unclear now.

Why are you agreeing with some guy saying they're connected, and then complaining that connected is vague?


I think that's what @pez meant by "connected": consciousness emerges from the physical brain. He didn't mean that it is connected physically with a chain or something. That's how I saw it.

But when you replied by saying it again, it gave the impression that that was what you were saying, because you were vehemently denying something. I didn't quite understand what you were denying, and presumed that what you were saying was that consciousness is connected to the brain, which is different from saying that consciousness emerges from the brain. This gave me the further impression that what you were saying was that consciousness is connected to the brain physically with some chain or link, which I thought was totally muddled.

I can see the problem very clearly. There are no complications here. You seem to be trying to conclude that everything is unclear for some reason, when it is not. As I said, it is not the central point of the OP. We can just accept the situation and move on, and try to discuss the OP: can computers and AI think?
flannel jesus March 09, 2024 at 17:38 #886563
Quoting Corvus
I can see the problem very clearly. There are no complications here. You seem to be trying to conclude that everything is unclear for some reason, when it is not. As I said, it is not the central point of the OP. We can just accept the situation and move on, and try to discuss the OP: can computers and AI think?


I think it is very much part of the central point of the op - you said everyone knows consciousness emerges from the brain, and if that were true, then everyone would have a very good reason to think that it's at least in principle possible for AI to be conscious.

But not everyone knows, or agrees, that consciousness emerges from the brain, and not everyone agrees that it's possible for AI to be conscious.

Do you accept that not everyone knows / agrees that consciousness emerges from the brain?

Corvus March 09, 2024 at 18:38 #886575
Quoting flannel jesus
But not everyone knows, or agrees, that consciousness emerges from the brain, and not everyone agrees that it's possible for AI to be conscious.

What is your evidence for the claim?

Quoting flannel jesus
Do you accept that not everyone knows / agrees that consciousness emerges from the brain?

No I don't accept it. I still believe that everyone (with common sense) knows / agrees that consciousness emerges from the brain.
Agree-to-Disagree March 09, 2024 at 19:38 #886590
Quoting wonderer1
Nevertheless we would have to find the "ego-neuron" so to speak to locate the point in space where all this information transmitted by our nerves come together to generate our experience of a "personality".
— Pez

I.e. we would need to find a homunculus?


Wouldn't we also have to find the "ego-neuron" of the homunculus?

Great fleas have little fleas upon their backs to bite ’em,
And little fleas have lesser fleas, and so ad infinitum.
flannel jesus March 09, 2024 at 19:39 #886591
Quoting Corvus
What is your evidence for the claim?


Some people are panpsychists who believe consciousness is fundamental rather than emergent. https://en.m.wikipedia.org/wiki/Panpsychism

Then there's the wide array of people who, for various reasons, think physical stuff is insufficient to explain conscious experience. Just Google anti-physicalism to see people who argue for those positions. They abound; they're potentially even a majority among amateur philosophers (and physicalism is only barely a majority among professional philosophers).
Corvus March 09, 2024 at 20:01 #886597
Quoting flannel jesus
Some people are panpsychists who believe consciousness is fundamental rather than emergent. https://en.m.wikipedia.org/wiki/Panpsychism

But isn't the mind the panpsychists are talking about a totally different type from the human mind?
Wouldn't it be more like the "Soul" rather than human consciousness, with its reasoning, experiencing, sensing and feeling capabilities?

flannel jesus March 09, 2024 at 20:27 #886602
Reply to Corvus no, I think they're talking about all consciousness, including human.

People who believe in souls are of course another great example of people who don't think minds emerge from brains. They think minds are in souls.
Corvus March 09, 2024 at 21:57 #886638
Quoting flannel jesus
no, I think they're talking about all consciousness, including human.

People who believe in souls are of course another great example of people who don't think minds emerge from brains. They think minds are in souls.

I can't quite follow or agree with their ideas then. The prime sign of a being possessing consciousness and mind is the demonstration of linguistic and rational reasoning capability, and signs of emotional interaction.

I cannot see how anything else in the universe can be conscious apart from humans and some of the mammals (having a lesser consciousness due to their lack of language use).

If some folks want to believe that rain and strong wind are signs of the universe being conscious, and these folks try to have chats with the trees on the hills and the fishes in the rivers, then how can, or why should, anyone stop them from doing so?
Ludwig V March 09, 2024 at 22:35 #886645
Quoting Corvus
I cannot see how anything else in the universe can be conscious apart from humans, and some of the mammals (having lesser consciousness due to their lack of language uses).

Well, it is certainly true that the only kinds of beings that are conscious in our universe are humans and animals. Humans are our paradigm of a conscious being. Consciousness in animals is more complicated, partly because different animals have different levels of consciousness and partly because there are different opinions about how conscious they are. Whether it is possible that there are conscious beings apart from these is another question. There's no doubt that it is possible to imagine that there are, and the future no doubt holds many things that we do not yet know. So I think your claim is over-confident.

Quoting Corvus
I still believe that everyone (with common sense) knows / agrees that consciousness emerges from the brain.

It used to be common sense that the earth is flat and that there is no land between Europe and China.
"Everyone knows that p" is one statement and "Someone does not know that p" is incompatible with it. "Everyone with common sense knows that p" is a different statement and "Someone does not know that p" is compatible with it.

RogueAI March 09, 2024 at 23:30 #886651
Quoting Corvus
No I don't accept it. I still believe that everyone (with common sense) knows / agrees that consciousness emerges from the brain.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3489624/

"There is considerable debate over whether plants are conscious and this, indeed, is an important question. Here I look at developments in neuroscience, physics and mathematics that may impact on this question. Two major concomitants of consciousness in animals are microtubule function and electrical gamma wave synchrony. Both these factors may also play a role in plant consciousness. I show that plants possess aperiodic quasicrystal structures composed of ribosomes that may enable quantum computing, which has been suggested to lie at the core of animal consciousness. Finally I look at whether a microtubule fractal suggests that electric current plays a part in conventional neurocomputing processes in plants."
wonderer1 March 09, 2024 at 23:43 #886654
Quoting RogueAI
There is considerable debate over whether plants are conscious and this, indeed, is an important question. Here I look at developments in neuroscience, physics and mathematics that may impact on this question. Two major concomitants of consciousness in animals are microtubule function and electrical gamma wave synchrony. Both these factors may also play a role in plant consciousness. I show that plants possess aperiodic quasicrystal structures composed of ribosomes that may enable quantum computing, which has been suggested to lie at the core of animal consciousness. Finally I look at whether a microtubule fractal suggests that electric current plays a part in conventional neurocomputing processes in plants."


Careful. That way lies crank magnetism or even worse, fractal wrongness.
RogueAI March 10, 2024 at 01:08 #886669
Quoting wonderer1
Careful. That way lies crank magnetism or even worse, fractal wrongness.


My point was that there are serious people who believe in non-brain consciousness, including the 4% of professional philosophers who are idealists. There are also a lot of computationalists out there, and they would object to Corvus's statement that I quoted.
flannel jesus March 10, 2024 at 08:37 #886714
Reply to Corvus it doesn't matter if you agree with them or think their ideas make sense, the point is *not everyone believes the thing you said everyone believes*.
flannel jesus March 10, 2024 at 09:15 #886718
Quoting RogueAI
There are also a lot of computationalists out there, and they would object to Corvus's statement that I quoted.


You don't think computationalists think consciousness emerges from the brain? But isn't... isn't that where the majority of computationalists think mental computations *happen*? Computationalists seem like a perfect example of agreement, not disagreement. Except for, I suppose, the rare computationalist who thinks some of the mental calculations happens somewhere other than the brain.
Corvus March 10, 2024 at 09:40 #886721
Quoting flannel jesus
it doesn't matter if you agree with them or think their ideas make sense, the point is *not everyone believes the thing you said everyone believes*.
an hour ago


You shouldn't be so naive as to presume that "everyone" strictly means the whole population of the universe anyway. "Everyone" is a pronoun with the universal quantifier "every", which can imply "everyone" that I know, "everyone" who is sensible, "everyone" with common sense, or "everyone" who is logical ... etc.

You shouldn't be too judgemental or restrictive in understanding and interpreting "everyone" in an unreasonably narrow way, insisting it must be "everyone" in the whole world or universe.
Corvus March 10, 2024 at 09:45 #886722
Quoting RogueAI
Finally I look at whether a microtubule fractal suggests that electric current plays a part in conventional neurocomputing processes in plants."


If they want to insist that the whole universe is conscious, then it would only make sense, if they also stipulate the condition that there are different types or levels of consciousness.
flannel jesus March 10, 2024 at 09:47 #886724
Quoting Corvus
You shouldn't be so naive as to presume that "everyone" strictly means the whole population of the universe anyway. "Everyone" is a pronoun with the universal quantifier "every", which can imply "everyone" that I know, "everyone" who is sensible, "everyone" with common sense, or "everyone" who is logical ... etc.

You shouldn't be too judgemental or restrictive in understanding and interpreting "everyone" in an unreasonably narrow way, insisting it must be "everyone" in the whole world or universe.


There's no reasonable interpretation of "everyone" where "everyone knows the mind emerges from the brain" is true. Not everyone in the world believes it, not all philosophers believe it, not all neuroscientists or cognitive scientists believe it. But I'll take your reply as a sly way of admitting you were incorrect. Perhaps you're just one of those people who can't say the words "I was incorrect."
Corvus March 10, 2024 at 09:52 #886725
Quoting flannel jesus
But I'll take your reply as a sly way of admitting you were incorrect. Perhaps you're just one of those people who can't say the words "I was incorrect."


No, that would be the unwise act of digging yourself further down into the darkness of the cave.
How can anyone admit to being incorrect, without actually being incorrect, just because you want him to? That would be a sinful contradiction. :D
flannel jesus March 10, 2024 at 09:54 #886726
Reply to Corvus Well, you've already admitted that "everyone" doesn't mean "everyone" when you say it, so that's a good start.
Corvus March 10, 2024 at 09:57 #886727
Quoting flannel jesus
Well, you've already admitted that "everyone" doesn't mean "everyone", so that's a good start.


Well, I have even given you a clear and detailed explanation of what "everyone" can imply in a logical sense, with the advice not to be too restrictive in your judgements, which sound too impractical to consider in a serious manner. I have not admitted anything at all. That is another prejudgement of yours.
flannel jesus March 10, 2024 at 09:59 #886728
Reply to Corvus Why are you wasting your time telling me what it CAN mean, instead of what you DO mean? What an inefficient way to communicate. Just get to the point, say what you think. Don't be afraid to share your thoughts, that's presumably what you're here for. Don't tell me all the things you might mean, tell me what you do mean.
Corvus March 10, 2024 at 10:01 #886729
Reply to flannel jesus I am only responding to your posts addressed to me. Not wasting time as such. I was just telling you that your understanding is not correct, and that your insistence sounds unusual.

Anyone would say that your insistence that "everyone" must mean the whole population of the universe, without any class limitation, is incorrect and unusual. This is simple knowledge in First-Order Logic and Higher-Order Logic.
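For what it's worth, the distinction being argued over can be written out in first-order notation (a sketch of my own; neither poster uses these formulas). An unrestricted "everyone" quantifies over the whole domain, while a restricted "everyone" limits the claim with a class predicate such as "has common sense":

```latex
% Unrestricted reading: absolutely everyone knows it
% (K = "knows that mind emerges from the brain").
\forall x\, K(x)

% Restricted reading: everyone in the class S (e.g. "people with common
% sense") knows it; the restriction is carried by the conditional.
\forall x\, \bigl(S(x) \rightarrow K(x)\bigr)

% A single counterexample refutes the unrestricted claim,
\exists x\, \lnot K(x) \;\vdash\; \lnot \forall x\, K(x)
% but it refutes the restricted claim only if the counterexample is also in S.
```

On this reading, a counterexample settles the unrestricted claim at once, while the restricted claim turns on how the class S is drawn.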
flannel jesus March 10, 2024 at 10:11 #886734
Reply to Corvus So you don't want to say what you mean. You said "everyone knows X". I said "not everyone knows X". Instead of saying "oh my mistake, you're right, not everyone" you said "when I say everyone, I don't mean everyone" - and now you won't even say what you mean instead of "everyone".

What are you doing? What's the point of any of your words? If you don't want to communicate your thoughts, why bother learning how to write?
Corvus March 10, 2024 at 10:12 #886735
Reply to flannel jesus Please read the above post again, and think before posting.
flannel jesus March 10, 2024 at 10:13 #886736
Reply to Corvus Yeah, I've read it, it doesn't contain the answer to what you mean by 'everyone'.

If you don't want to communicate your thoughts, why bother learning how to write?
Corvus March 10, 2024 at 10:15 #886738
Reply to flannel jesus It is difficult to imagine anyone insisting "everyone" must mean the whole population in the universe. It doesn't exist, it is vague and obscure, and it is illogical to even imply that.
flannel jesus March 10, 2024 at 10:16 #886739
Reply to Corvus I'm not insisting anything, I'm asking you what you mean. Asking you what you mean is not an insistence, it's a question.
Corvus March 10, 2024 at 10:20 #886740
Reply to flannel jesus You have been insisting that I admit I was incorrect. But the fact is that you don't seem to know a very basic piece of First-Order Logic: "every" is a quantifier which is vague on its own, without a further restriction.
flannel jesus March 10, 2024 at 10:21 #886741
"every" is the opposite of vague. It's one of the most well-defined concepts in existence. If you don't mean the normal meaning, please let me know what you mean instead.
Corvus March 10, 2024 at 10:22 #886742
Quoting flannel jesus
"every" is the opposite of vague. It's one of the most well-defined concepts in existence.


Does "everyone" mean, then, everyone who ever lived in the past, is living at present, or will live in the future, or the totality of all the folks who have ever lived on this earth since the big bang, or what?

If you were thinking logically, you should have asked whether "everyone" meant the whole population of the universe, or a group of folks I know, or whatever, rather than jumping on the bandwagon shouting that I must admit I was incorrect.

I wouldn't describe your way of discoursing as exactly logical or fair, to be fair and honest.
flannel jesus March 10, 2024 at 10:28 #886743
Reply to Corvus At this point in the conversation, the only thing that matters is what you mean.

You said everyone knows the mind emerges from the brain.

That's clearly not true for the standard interpretation of "everyone", and you've suggested that you're using a more loose definition of the word 'everyone', so... what more loose definition are you using? What did you mean by your claim?

When you claim "everyone knows the mind emerges from the brain", what does everyone mean to you in that sentence? I clearly can't read your mind, so if you want to communicate your thoughts, now's the time to do it.
Corvus March 10, 2024 at 10:31 #886744
Quoting flannel jesus
When you claim "everyone knows the mind emerges from the brain", what does everyone mean to you in that sentence?


Yes, this is the question you should have asked. You are giving a logical opportunity for further clarification and limitation to the concept "everyone" which it cries for. My reply would be "Everyone" that I came across in my reading and listening. Is it that important? :rofl:
flannel jesus March 10, 2024 at 10:37 #886745
Quoting Corvus
Yes, this is the question you should have asked.


That's been the question since all the way back here:

Quoting flannel jesus
Why are you wasting your time telling me what it CAN mean, instead of what you DO mean?


Right? Lmao.

"Everyone" means "everyone i came across in my reading and listening" lmao. That's an absolutely absurd way to use that word.
Pez March 10, 2024 at 11:28 #886753
Quoting Corvus
I thought Kant doesn't make explicit comment on the mind


I would even say quite the contrary. The possibility and the limits of metaphysics follow from his exposition concerning time, space and consciousness. Just have a look at the paragraph "What Objective Unity of Self-consciousness is".
Corvus March 10, 2024 at 11:29 #886754
Quoting flannel jesus
That's been the question since all the way back here:


No, you just wrote saying that there are the other folks who don't know, and saying "Everyone knows" is wrong, therefore I must admit incorrect. You either have a short memory or ....
Corvus March 10, 2024 at 11:32 #886755
Quoting Pez
I would even say quite the contrary. The possibility and the limits of metaphysics follow from his exposition concerning time, space and consciousness. Just have a look at the paragraph "What Objective Unity of Self-consciousness is".


Good point. I will try to read on the concept of Self in Kant, and see what he had to say.
Pez March 10, 2024 at 11:37 #886757
Quoting wonderer1
I.e. we would need to find a homunculus?


This would not help us very much; we could even regard advanced forms of AI as something like a homunculus. Awaiting an answer to the question of personal identity from neuroscientists is equally futile. All they could tell us is that special regions of the brain are activated after a ray of light strikes our eyes.
flannel jesus March 10, 2024 at 11:47 #886759
Quoting Corvus
That's been the question since all the way back here:
— flannel jesus

No


Uh... yes, I clearly was inviting you, all the way back there, to tell me what you do mean. You refused to do that, for many many many posts - as if you were scared of what might happen if you made your thoughts clear.
RogueAI March 10, 2024 at 15:30 #886787
Quoting flannel jesus
You don't think computationalists think consciousness emerges from the brain? But isn't... isn't that where the majority of computationalists think mental computations *happen*? Computationalists seem like a perfect example of agreement, not disagreement. Except for, I suppose, the rare computationalist who thinks some of the mental calculations happens somewhere other than the brain.


They would object to consciousness emerging ONLY from brains, but Corvus might not be saying that, so I take your point. The implication I got from reading Corvus's responses was that only brains are conscious, but he might not be saying that.
flannel jesus March 10, 2024 at 16:38 #886803
Reply to RogueAI I don't think he's saying that, I think he's taking the standard physicalist position, which is that human consciousness happens in human brains, which usually implies to most physicalists that consciousness is also possible in other physical circumstances.

He's then adding onto the standard physicalist position the bizarre statement "everyone knows this is true", which it turns out means something entirely different from what you might think that means.
Corvus March 11, 2024 at 12:12 #886961
Quoting RogueAI
The implication I got from reading Corvus's responses was that only brains are conscious, but he might not be saying that.


My point on mind is that it is a property of lived life and experiences in the world, interacting with other minds: the property of being able to exercise a high level of linguistic, rational and emotional interactions ... etc. with other minds in the societies they live in. AI and intelligent devices definitely seem to possess and demonstrate the rational part of consciousness, but they seem to be lacking in the other parts of the human mind and consciousness.

The core of mind is definitely in the brain. I am not sure if mind can be further divided into lower-level material structure. Maybe it can, but it would then be in neurological, biological and physical terms.

I was trying to get this point across, but Reply to flannel jesus kept on insisting that he could not understand and accept what the word "Everyone" means. Obviously he is incapable of communicating and discoursing at an ordinary linguistic level, hence I tried to explain it to him in a logical manner, by which he seemed to have further confused himself, resorting to ad hominem in the end.
Corvus March 11, 2024 at 12:26 #886964
Quoting Pez
I would even say quite the contrary. The possibility and the limits of metaphysics follow from his exposition concerning time, space and consciousness. Just have a look at the paragraph "What Objective Unity of Self-consciousness is".


By the way, I was going to ask you, what do time and space have got to do with consciousness in Kant?
flannel jesus March 11, 2024 at 12:27 #886965
Quoting Corvus
I was trying to get this point across, but Reply to flannel jesus kept on insisting that he could not understand and accept what the word "Everyone" means. Obviously he is incapable of communicating and discoursing at an ordinary linguistic level


Sure, let's ask Reply to RogueAI :

If someone says 'everyone knows the mind emerges from the brain', do you think that's true, and what do you think that claim means? Corvus said that, I'm curious what you think is the most natural interpretation of that claim.
Corvus March 11, 2024 at 12:34 #886966
Quoting flannel jesus
If someone says 'everyone knows the mind emerges from the brain', do you think that's true, and what do you think that claim means? Corvus said that, I'm curious what you think is the most natural interpretation of that claim.


If you were genuinely philosophically asking about it, you would have asked "What do you mean by Everyone?", and given me a chance to explain it. But you decided to claim "Everyone? There are the non-believers, and it is not true. It is wrong, you are wrong wrong wrong. Admit you are wrong."

Hmmm, anyone would sense that your aim was not pure, and was silly and sophistic.
flannel jesus March 11, 2024 at 12:35 #886967
Reply to Corvus And you still can't just say "yes, not everyone knows or believes that". How easy would that be to say? You already know it's true.
Corvus March 11, 2024 at 12:38 #886968
Quoting flannel jesus
And you still can't just say "yes, not everyone knows or believes that". How easy would that be to say?

People use "Everyone" and "Anyone" to mean the majority of people, or really anyone, as a figure of speech all the time. It just means "it is widely accepted that" or "it is predominantly the fact that", unless you are talking about first-order logic topics.

But if you really insist that "everyone" mean something else, then you won't find it. You yourself will get lost trying to understand what it means. Because then you must go and decide whether to include every single person since the start of the universe, or just the folks who are living on this earth at this particular moment. Or the folks in your own country? .... What is the real "everyone"?

Ok, you decided to define everyone to mean all the folks who are living on this earth at this moment. But as soon as you decided on that, there have been hundreds and thousands of new births in the world, and at the same time hundreds have died in the world due to old age, accidents and illness, what have you. Then who are the real "everyone"?
flannel jesus March 11, 2024 at 12:46 #886971
Quoting Corvus
People use "Everyone" and "Anyone" to mean the majority of people, or really anyone, as a figure of speech all the time.


but if you meant the majority of people, (a) you would have said that when I invited you to say what you meant, and you didn't say that, and (b) that would still be incorrect. The majority of people are religious and believe in souls.

If we want to be loose with the word 'everyone', it should still mean something stronger than '51% of people'. It should mean a VAST majority, at the very least - no less than like 95% of people, maybe 90% if we're pushing it. Nowhere near 90% of people think consciousness emerges from the brain, not even 90% of philosophers.
Corvus March 11, 2024 at 12:50 #886973
Quoting flannel jesus
but if you meant the majority of people, (a) you would have said that when I invited you to say what you meant, and you didn't say that, and (b) that would still be incorrect. The majority of people are religious and believe in souls.


I have explained it to you, but you seem neither to read my explanation nor to try to understand it. You just kept on saying wrong wrong wrong, admit wrong. That is not philosophical discourse.
You don't demand an admission of anything from the other party. The other party will admit he was wrong if he genuinely knows he was wrong, and feels that the admission is needed, of his own accord.
flannel jesus March 11, 2024 at 12:53 #886975
Reply to Corvus You just did explain it, and I understand your explanation - I fully agree with you about loosening on the definition of 'everyone' in colloquial speech, you can loosely say "everyone" without meaning "100% of everyone, no exceptions" - I'm fine with that, let's loosen up on the definition - and you're still wrong. Even for reasonably loosened definitions of 'everyone', you're not correct.

You now know that not everyone thinks minds emerge from brains, so you have no reason to make the claim again.

Do you still think "everyone knows that"?
Corvus March 11, 2024 at 12:56 #886976
Quoting flannel jesus
You now know that not everyone thinks minds emerge from brains, so you have no reason to make the claim again.


Everyone who is reasonable knows that mind generates from the brain.
flannel jesus March 11, 2024 at 12:58 #886978
Reply to Corvus And I suppose you're defining "reasonable" as "people who agree with me", which makes it tautologically true, not meaningfully true.

Everyone believes in Islam.

How do I define Everyone? I mean Everyone Reasonable.

How do I define "reasonable"? I define it as "people who are muslims".

Corvus March 11, 2024 at 13:00 #886979
Quoting flannel jesus
And I suppose you're defining "reasonable" as "people who agree with me", which makes it tautologically true, not meaningfully true.


In all the sciences, math and philosophy, they all pursue knowledge which is true in generality. But you bring out some minority claims of folks from the shade, and try to refute the general principles and knowledge with them. It sounds senseless.
flannel jesus March 11, 2024 at 13:02 #886980
Quoting Corvus
But you bring out some minority folks


https://survey2020.philpeople.org/survey/results/4874

It's not some tiny minority. Only 51-52% of professional philosophers are physicalists about the mind. That means up to 48% of philosophers might not think the mind emerges from the brain.

It's not senseless, 52% isn't "everyone" by any reasonable definition.

Are 48% of people 'no one'? If I killed 48% of the people in your town, would you say "he didn't kill anybody"? "Everyone survived"?
Corvus March 11, 2024 at 13:03 #886981
Quoting flannel jesus
It's not some tiny minority. Only 51-52% of professional philosophers are physicalists about the mind. That means up to 48% of philosophers might not think the mind emerges from the brain.

It's not senseless, 52% isn't "everyone" by any reasonable definition.


I don't subscribe to any statistics. It means nothing to me, I am afraid.
flannel jesus March 11, 2024 at 13:04 #886982
Reply to Corvus Beautiful cop out. You now have data to demonstrate unambiguously that not everyone thinks what you say "everyone knows", and instead of acknowledging that fact, you just stick to your guns, head in the sand.
Corvus March 11, 2024 at 13:06 #886984
Reply to flannel jesus You just seem to blindly trust and follow any statistics or gibberish on the internet, and try to claim that they are the truth. It is just a form of religion in disguise.
Corvus March 11, 2024 at 13:06 #886985
Reply to flannel jesus So if mind is not generated from the brain, where is it from?
flannel jesus March 11, 2024 at 13:07 #886986
Reply to Corvus And you have something better that you're basing the "everyone" claim on? What better information than statistics is that? Please source the "everyone" claim for me.
Corvus March 11, 2024 at 13:09 #886987
Quoting flannel jesus
And you have something better that you're basing the "everyone" claim on? What better information than statistics is that?


Well, it is your problem not understanding ordinary linguistic expressions. How could I help you? Your bringing out the statistical figures makes your claims sound more unfounded and suspicious.
flannel jesus March 11, 2024 at 13:09 #886988
Reply to Corvus You are very, very confused in this conversation. I'm not disputing that minds come from brains. I'm disputing "everyone knows minds come from brains".

"X" - I agree with this claim
"Everyone knows X" - I disagree with this claim.

X and Everyone knows X are not the same claim. I can agree with one and disagree with the other.
flannel jesus March 11, 2024 at 13:09 #886989
Quoting Corvus
Well, it is your problem not understanding ordinary linguistic expressions.


I understand it perfectly well, it's just untrue.
Corvus March 11, 2024 at 13:12 #886990
Quoting flannel jesus
"Everyone knows X" - I disagree with this claim.

X and Everyone knows X are not the same claim. I can agree with one and disagree with the other.


You are free to disagree, but it is not the done thing to demand an admission of wrong from anyone.
flannel jesus March 11, 2024 at 13:13 #886992
Reply to Corvus Don't admit you're wrong then, just don't say it again because you know it's untrue.
Corvus March 11, 2024 at 13:13 #886993
Quoting flannel jesus
I understand it perfectly well, it's just untrue.


All I was saying was that your judgement seems to be based on your inability to understand the ordinary language expression.
flannel jesus March 11, 2024 at 13:14 #886994
Reply to Corvus I don't think that's the case. There's no ordinary definition of 'everyone' for which it's true.
Corvus March 11, 2024 at 13:15 #886995
Quoting flannel jesus
Don't admit you're wrong then, just don't say it again because you know it's untrue.


I will say what I believe to be true. You cannot say to others "Don't say it".
flannel jesus March 11, 2024 at 13:15 #886996
Reply to Corvus You don't believe it. You know it to be false. You're just being dishonest.
Corvus March 11, 2024 at 13:16 #886997
Quoting flannel jesus
I don't think that's the case. There's no ordinary definition of 'everyone' for which it's true.


You are being a sophist as well as a sceptic.
flannel jesus March 11, 2024 at 13:16 #886998
Reply to Corvus What ordinary definition of 'everyone' is it true for?
Corvus March 11, 2024 at 13:18 #886999
Quoting flannel jesus
You don't believe it. You know it to be false. You're just being dishonest.


How can a sophist and sceptic know another's mind? You still haven't answered where the mind comes from.
Corvus March 11, 2024 at 13:19 #887000
Quoting flannel jesus
What ordinary definition of 'everyone' is it true for?


No point telling you when you refuse to understand anything.
flannel jesus March 11, 2024 at 13:20 #887002
Reply to Corvus should be easy, it's just an ordinary definition of everyone, right? What ordinary definition is the claim true for?
Corvus March 11, 2024 at 13:23 #887005
Reply to flannel jesus It seems clear that you have lost the plot. It is a waste of time replying to your posts. All the best.
flannel jesus March 11, 2024 at 13:26 #887007
Reply to Corvus thanks for the well wishes. Let me know if you can find a single other person here who agrees with your claim - it might be you who has lost the plot after all.
Corvus March 11, 2024 at 13:29 #887009
Reply to flannel jesus How can anyone converse normally with you, who cannot understand what "everyone" means in ordinary language expression, while demanding admissions of wrong?
flannel jesus March 11, 2024 at 13:30 #887010
Reply to Corvus How can anyone converse normally with you if you keep claiming that you're using everyone in a normal way, and are still refusing to lay out what that normal way is?
Corvus March 11, 2024 at 13:31 #887011
Reply to flannel jesus So trying to explain in logical terms didn't make any difference. Well, good luck with demanding admissions of wrong from anyone you engage in discourse with.
flannel jesus March 11, 2024 at 13:34 #887013
Reply to Corvus that's right, explaining it in "logical terms" from you didn't work, because your only definitions of "everyone" were either (a) not normal at all and completely arbitrary, or (b) left the claim untrue
Pez March 11, 2024 at 18:00 #887098
Quoting Corvus
By the way, I was going to ask you, what do time and space have got to do with consciousness in Kant?


To be able to communicate in English about Kant's philosophy, I started reading the Critique in an English translation. As my mother tongue is German, it was interesting that this translation, even though it was more than one hundred years old, seemed to me much easier to grasp than the original. Maybe it is due to the practical attitude prevailing in English-speaking countries.

Anyway. The foundation of Kant's system of ideas lies in the distinction between two entirely different faculties of human knowledge: intellect and intuition. Thinking, logic and concepts arise from the intellect. Usually intuition is regarded as inferior and by no means connected to knowledge. Here it is quite different: time and space are our fundamental intuitions. Kant's argument for why time and space are not concepts is interesting. Concepts always refer to a variety of things; time and space, though, are single data for knowledge. Different times (spaces) are always parts of this one time (space).

Thus time and space are essentially different from the intellect but are nevertheless in consciousness all the same. Please refer to Sections One and Two of the Transcendental Aesthetic.
Corvus March 12, 2024 at 10:10 #887324
Quoting Pez
Thus time and space are essentially different from the intellect but are nevertheless in consciousness all the same. Please refer to Sections One and Two of the Transcendental Aesthetic.


Interesting.  German speakers definitely have an advantage in reading the German philosophical texts in the original.  Once I tried to read CPR in German, but my German was too basic.  I was able to translate the texts ok, but the progress was far too slow, which was the reason I abandoned the reading and went back to the English-translated CPR.

Reading in English is not bad at all, because there are so many translations and commentaries available for almost every non-English textbook, but sometimes you might feel that some important meanings could be lost from the original texts in translation.  It is especially the case with difficult original texts such as CPR.  I ended up getting three different translated copies of CPR.  English is not my main language, but it is much better than my elementary German for reading.

But your point that Kant's time and space are not intellect but intuition is interesting.  I agree with the point.  Time and Space can never be clearly understood or perceived by the mind as distinct and concrete entities.  They are definitely internal perceptions, in that Time and Space can be guessed and felt in thoughts, but they can never be accurately and precisely understood by reason.  Hence, it makes sense to say that Time and Space are types of intuition, and they can only be intuited in the mind. I wonder if it would make sense to say that Time and Space are the foundation or precondition of consciousness.

I was reading Kant intensely a few months ago, but recently I got so busy in daily life that I have not read anything for a while.  I will be getting back to reading either Kant or Hegel, or for a change some philosophical logic or even mathematics, when things get quieter here.
Corvus March 13, 2024 at 11:59 #887609
Quoting flannel jesus
that's right, explaining it in "logical terms" from you didn't work, because your only definitions of "everyone" were either (a) not normal at all and completely arbitrary, or (b) left the claim untrue


∀xKp = Everyone knows P (has no existential instance)
∀x∈(X1 .... Xn)Kp = Everyone in the class, group, I know, I met, ...Xn knows P (has existential instance)

Therefore the universal quantifier pronoun "Everyone" has no existential instance on its own in a sentence. An existential instance only emerges with a further limiting quantifier connected to the universal quantifier. You have been barking up the wrong tree in all your posts.
flannel jesus March 13, 2024 at 12:17 #887613
Reply to Corvus I have no idea what point you're trying to make with this.

For me, it's quite simple: You and I both, at this point in time, know that not everyone agrees / knows that consciousness emerges from the brain. That includes a lot of people who presumably have qualified opinions on the matter, like expert philosophers; even many scientists believe in souls (many scientists are religious).

That's all there is to it.

"Everyone knows consciousness emerges from the brain" is an untrue statement for any standard or colloquial use of 'everyone'.
Corvus March 13, 2024 at 13:15 #887627
Quoting flannel jesus
"Everyone knows consciousness emerges from the brain" is an untrue statement for any standard or colloquial use of 'everyone'.


When someone says "Everyone turned up this morning", you can't say he was wrong because Elvis Presley, Immanuel Kant, Joe Biden, Vladimir Putin and Xi Jinping didn't turn up.

From "Everyone knows P", you can't infer that some particular group of people or individual also knows P.
flannel jesus March 13, 2024 at 13:19 #887628
Reply to Corvus so do you agree or disagree with that text from me that you quoted?

If you disagree, then what standard definition of "everyone" makes true the statement "Everyone knows consciousness emerges from the brain"?
Corvus March 13, 2024 at 13:23 #887631
Quoting flannel jesus
so do you agree or disagree with that text from me that you quoted?

If you disagree, then what standard definition of "everyone" makes true the statement "Everyone knows consciousness emerges from the brain"?


I was just pointing out and making clear my view on your insistence that "Everyone knows mind emerges from brain" is wrong. I believe that claim of yours was wrong.

As for the other folks you claim exist who don't believe that mind emerges from brain, I am not sure. I have not met any of them in my whole life. So I am taking that claim as groundless rumour or opinion.
flannel jesus March 13, 2024 at 13:28 #887633
Reply to Corvus Ah, the good ol' head in the sand approach. The existence of Christians is groundless rumour or opinion.


There are people on this forum who don't believe mind comes exclusively from the brain. Take your head out of the sand and look around.
Corvus March 13, 2024 at 13:36 #887635
Quoting flannel jesus
Ah, the good ol' head in the sand approach. The existence of Christians is groundless rumour or opinion.


There are people on this forum who don't believe mind comes exclusively from the brain. Take your head out of the sand and look around.


The religious folks believe in the existence of souls, not of minds with intelligence, reasoning and intuition. You are conflating the two. They are totally different things in nature.
flannel jesus March 13, 2024 at 13:43 #887639
Reply to Corvus Most Christians (definitely not all, but most) would disagree with the statement you think everyone knows.
Corvus March 13, 2024 at 13:51 #887643
Reply to flannel jesus The soul is supposed to be a separate entity from the body, and is claimed to survive the death of the body, I understand. But mind has no physical entity. It just operates in the form of mental activities demonstrating reasoning, imagining, believing, using languages etc.

Mind dies when the body dies. It doesn't survive bodily death at all. Hence the two are different; they cannot be the same. The existence of the soul is mostly believed in by religious people, which I am not. But then, matters of belief and faith are subject to change during the course of one's life.
flannel jesus March 13, 2024 at 14:01 #887646
Reply to Corvus You seem all over the place now.
Corvus March 13, 2024 at 14:04 #887648
Reply to flannel jesus Whatever. OK, I will not attempt any more clarification with you. I hope you will understand the points, and learn from them.