Can a computer think? Artificial Intelligence and the mind-body problem
The tragic fate of Alan Turing does not diminish his role in computer science. The test named after him was devised to determine whether a computer can think. Modern AI would pass this test quite easily, if not now then in the foreseeable future. Does this mean that modern-day computers are actually able to think like human beings? Or even that they have consciousness as we do? The question has to be extended, though. How can we be sure that other people can think and have consciousness? If we merely infer it from their bodily movements and facial expressions, a robot in the far future might be able to deceive us as well.
The road to solipsism is paved that way, and if you have no problem with that, the discussion is over. If you do have a problem, though, you have to concede that some fundamental problem is involved and that something is wrong with the usual way of looking at things. The choice between materialism and idealism might not be sufficient to understand ourselves and the world we live in.
The mind-body problem has a long tradition. Arguably René Descartes (1596-1650) introduced it by postulating two types of substances or things: extended ones and thinking ones. The human body is an extended, material thing. On the other side we have (our) conscious experience, according to Descartes a thinking thing ("cogito ergo sum": I think, therefore I am). Ever since, the dispute has revolved around the relation between these two substances.
Materialism puts its weight on the concept of matter, stating that material stuff is the essence of existence. Everything else, including consciousness, has to be explained in terms of attributes of the physical world. Mind is only an epiphenomenon arising from the complexity of certain material things, e.g. the human brain.
Idealism, on the other side, claims that everything we can ever know about matter stems from our conscious experience. The concept of matter is therefore only an inference from the data our senses provide. Consciousness is at the base of existence, and matter has to be explained as an outcome of conscious experience. Everything is in the mind only.
Materialism and idealism are thus only the extreme positions regarding the two substances. If we concede Descartes' arguments, the problem of mutual influence between the two substances arises: if we intend to raise our hands, can our mind have causal influence on our physical body? On the other hand, scientists have found that stimulating certain regions of the brain results in consciously experienced sensory impressions. Science has made astounding progress, and we can study the functioning of the human brain at work. It is possible to follow the excitation of nerves from the retina to specific parts of the brain, but that is all there is; no trace of consciousness is to be found. Inside our brain it is pitch-dark, so to speak.
"No one thinks that when a tune is running in my head, a surgeon could unearth a little orchestra buried inside my skull or that a doctor by applying a stethoscope to my cranium could hear a muffled tune...", to quote Gilbert Ryle (The Concept of Mind). And no one would introduce himself at a party by saying: "Hello, here I am, and I have brought my body with me." It may sound ridiculous, but the bifurcation of nature would indeed admit such propositions.
Interesting in this context are experiments conducted by the American physiologist Benjamin Libet. He wanted to find out how long it takes from the moment we intend to execute a bodily action to the activation of the appropriate nerves. The unexpected outcome of these experiments was that the brain's readiness potential began on average 0.35 seconds before the reported conscious intention. The nerves and muscles are activated before we intend to raise our arm. Not surprisingly, this has led to discussions about free will and determinism.
The complexity of integrated circuits in modern computers is rapidly approaching the complexity of the human brain. Traditional computer programs have a limited range, and programmers can quite easily foresee the possible outcomes. AI is different, though: not even the designer can predict what will happen, as these programs in a certain way program themselves and are able to learn, depending on the scope of the available data.
Give AI senses and the ability to act, and the difference from human behaviour will diminish in the long run. Does this mean that we are just sophisticated machines, and that all talk about freedom of choice and responsibility for our actions is just wishful thinking? Or is there something fundamentally wrong with our traditional concepts of mind and matter? I maintain that we need a new world-picture, especially as the Newtonian view is nowadays as outdated as the Ptolemaic system was in the 16th century. But that will be a new thread in our forum.
Comments (253)
There's a lot of philosophy about this. It's normally anti-materialists who insist that all materialists must consider consciousness epiphenomenal; actual materialists hold a wide range of views on that question.
The problem with all mental operations and events is their privacy: they belong to the owners of the minds. No one will ever access what the owners of other minds think, feel, intend, etc. Mental events can only be construed by other minds from the actions of the agents and the languages they speak.
By the way, they say it is now official that no human being can beat the AI chess players. To know what the AI machines think and feel, one would have to be an AI machine oneself. The possibility of that happening in the real world sounds as unrealistic and impossible as the futile ramblings of time-travel fiction.
Is it in principle possible or impossible that some future AI might be sentient or have a mind of its own?
1) How is new information classified in the brain?
2) How is related information linked in the brain?
3) How do we derive new information from the linked information?
Determinism doesn't necessarily rule out many conceptions of free will. Many philosophers argue that determinism is a prerequisite for free will. If our acts didn't have determinate outcomes, or at least predictable ones, we couldn't enact our will in the world. Consider a videogame where you can push buttons but the buttons do random things each time you push them. In what way are you free to change things in the game?
The flip side of this problem is that if our actions are not determined by anything in the world, our past experiences, etc. then they wouldn't seem to rightly be "our actions." If they are determined by nothing that comes before, they become uncaused, arbitrary and random.
But free will can be conceived of as a sort of "self-determination." Our recursive self-awareness has causes, its existence and contents are determined by past events, but our awareness itself also seems to play a causal role in our choices. We are not perfectly free, obviously: we cannot walk through walls or wipe memories from our minds at will. But who we are and what we think seems to play a determining role in what we do, and in this way we are free as "self-determining" entities. Nothing in determinism contradicts this sort of freedom.
The challenge to freedom in our "being reduced to machines" generally runs through smallism and epiphenomenalism. The argument is normally something like:
Atoms are not conscious and lack purposes. We are nothing but atoms. Therefore, all our acts are determined by that which lacks purpose, and consciousness's effects on action must be illusory, since all thought is determined by the rules governing mindless atoms.
But this doesn't follow from determinism; it follows from other concepts normally lumped in with determinism, namely reductionism and smallism, the idea that facts about all large entities can be wholly reduced to facts about smaller entities. However, such views seem to make accounting for consciousness impossible, barring panpsychism, so it's unclear how seriously they should be taken; nor is there overwhelming empirical support for them. True reductions in science are very rare.
Or maybe rather, "we could determine that AI was thinking if we knew how we thought?" But we don't, and therein lies the massive hole at the center of this debate.
But for those who deny the possibility...
...what about "wetware" AI that is "grown" from neuronal tissue, perhaps with silicon involved as well? If these cannot be conscious, why can natural entities composed of a similar material be conscious?
Of course, there the difference between "artificial life," and "AI" gets blurry. Where is the line between cyborg and hybot? How much biological material, or biological-like material can be involved in a potential "thinking thing" before we dismiss it as being "AI?"
Whether purely silicon-based systems can produce sentience seems impossible to answer currently. Finding evidence of silicon-based life, while unlikely, would really shake this up.
I agree.
Quoting Pez
Computers are our appendages. They are like organ systems within our bodies. Just as the functioning of a liver or heart cannot be understood apart from its inextricable entanglement in the overall aims of the organism, the same is true of our machines with respect to our purposes. They are not autonomous embodied-environmental systems but elements of our living system. As long as we are the ones who are creating and programming our machines by basing their functional organization on our understanding of concepts like memory storage, pattern matching and sensory input, their goals cannot be self-generated. They can only generate secondary goals derived as subsets of the programmed concepts, which we then respond to by correcting and improving the programming. This is how our appendages and organ systems function.
Saying our machines are smarter or dumber than us is like saying the spider web or bird's nest is smarter or dumber than the spider or bird. Should not these extensions of the animal be considered part of our living system? When an animal constructs a niche it isn't inventing a life-form; it is enacting and articulating its own life-form. Machines, as parts of niches, belong intimately and inextricably to the living self-organizing systems that we are.
Why is the complex behavior of current AI not itself creative, apart from the user's interpretation? Because the potential range of unpredictable behaviors on the part of the machines is anticipated in a general sense, that is, encompassed by the designer's framework of understanding. Designing a chaotic fractal system or a random number generator, or mathematically describing the random behavior of molecules: such schemes anticipate that the particulars of the behavior of the actual system they describe will evade precise deterministic capture. Industrial-age machines represented a linear, sequential notion of temporality and an objective physicalism, complementing representational approaches to art and literature; today's AI is an expression of the concept of non-linear recursivity, and will eventually embrace a subject-object semantic relativism. Current AI thus partners with newer forms of artistic expression that recognize the reciprocal relation between subject and object and embed that recognition into the idea the artwork conveys.
And just like these forms of artistic expression, non-linear, recursive AI functions as an archive, a snapshot, a recorded product: an idea of self-transforming change frozen in time. In dealing with entities that contribute to our cultural evolution, as long as we retain the concepts of invention and machine we will continue to be interacting with an archive, a snapshot of our thinking at a point in time, rather than a living self-organizing system. In the final analysis the most seemingly "autonomous" AI is nothing but a moving piece of artwork with a time-stamp of who created it and when. In sum, I am defining true intelligence as a continually self-transforming ecological system that creates cultural (or biological) worldviews (norms, schemes, frames), constantly alters the meaning of that frame as variations on an ongoing theme (continues to be the same differently), and overthrows old frames in favor of new ones. The concept of an invented machine, by contrast, is not a true intelligence, since it is not a self-modifying frame but only a frozen archive of the frame at a given moment in time.
Can we ever "create" a system that is truly autonomous? No, but we can tweak living organic material such as DNA strands enclosed in cell-like membranes so that they interact with us in ways that are useful to us. Imagine tiny creatures that we can "talk to". These would be more like our relationship with domesticated animals than with programmed machines. Think of humanity as ecosystem 1, the encompassing ecosystem whose intelligence evolves over time. Computers belong to ecosystem 2, the sub-ecosystem operating within, and as a dependent part of, ecosystem 1. As a dependent part, it cannot evolve beyond ecosystem 1. It evolves with it.
Anything that isn't human cannot do what a human does, by virtue of its not being a human being. Whatever sapiens build in their pursuit of artificiality and technology will have to be content with mere mimicry. It's the same with human thought. Although I don't think it can be said that thinking is a strictly human affair any more than drinking and running are, the problem occurs in believing we can apply strictly biological concepts and activities to technological and artificial ones. Do submarines swim? The answer, along with the answers to others on similar themes, is invariably "no".
I argued that the best technology can do is mimic the biological activity. This is for two reasons: technology isn't biological, so mimicry is all it could ever amount to; but also the technology is designed to mimic the biological activity, not to be biological activity.
I don't see how the term applies to anything else.
Correct. Unfortunately, we don't know how we think, so we cannot design an AI that can think.
What do you mean by "sentient" & "mind of its own"? Do you believe these properties are attributes of human beings? If so, why do you believe this? And, assuming it's possible, would these properties be functionally identical instantiated in an AI-system as they are embodied in a human? Why or why not?
As for me, I've yet to find any compelling argument for why, in principle, a machine cannot be built (either by h. sapiens and/or machines) that functionally exceeds whatever biological kluge (e.g. the primate brain) nature adaptively spawns by environmental trial and error. And since the concept-prospect does not violate any (current) physical laws, I see no reason (yet) to assume, or suspect, that "sentient AI" is a physical/technological impossibility.
OK, let's suppose we develop sentient AI. Do we then have to reevaluate sentience for all the computing devices we didn't think were sentient?
Well, there's the rub. We don't even know whether other people are conscious. But suppose we're in contact with superintelligent aliens, only they don't tell us a whole lot. But the stuff they do share with us is invariably correct. And one of the things they tell us is that at least one of the machines we've developed is conscious. Would that entail considering digital watches, say, as being conscious?
I put the question to ChatGPT, which responded:
Quoting ChatGPT
The Turing Test, devised by Alan Turing in 1950, is a measure of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. Turing proposed that if a human evaluator could not consistently tell the machine apart from a human based on their responses to questions, the machine could be considered to have passed the test. The focus is on the imitation of the external behavior of intelligent beings, not on the internal thought processes.
Modern artificial intelligence (AI) systems, including chatbots and language models, have become increasingly sophisticated, making it more challenging to distinguish their outputs from human responses in certain contexts. However, passing the Turing Test does not necessarily mean that computers are able to think like human beings. Here's why:
1. **Imitation vs. Understanding**: AI can mimic the patterns of human conversation and generate responses that seem human-like, but this does not imply understanding or consciousness. The AI does not possess self-awareness, emotions, or genuine understanding of the content it processes; it operates through algorithms and data.
2. **Narrow AI vs. General AI**: Most modern AIs are examples of narrow AI, designed to perform specific tasks, such as language translation, playing a game, or making recommendations. They are not capable of general intelligence, which would involve understanding and reasoning across a broad range of domains with human-like adaptability.
3. **Lack of Consciousness**: Consciousness and subjective experience are fundamental aspects of human thought. Current AI lacks consciousness and the ability to experience the world subjectively. The process of thought, as humans experience it, involves not just responding to stimuli or questions but also emotions, motivations, and a continuous stream of internal dialogue and reflection.
4. **Different Processing Mechanisms**: Human brains and computers operate in fundamentally different ways. Human thought is the product of biological processes, evolved over millions of years, involving complex interactions among neurons and various brain regions. AI, on the other hand, processes information through algorithms and computational methods that do not replicate the biological processes of human thought.
While AI can simulate certain aspects of human thinking and may pass the Turing Test, it does so without the underlying consciousness, emotions, and genuine understanding that characterize human thought. The development of AI that truly thinks and understands like a human being would require not just advancements in computational techniques but also a deeper understanding of consciousness and human cognition, which remains a significant scientific and philosophical challenge.
I already say "Thank you" to Alexa and Siri. If I had a digital watch, I suppose I wouldn't mind thanking it.
I'm gonna need a better scenario than that. I know who the Kanamits are. Why don't they tell us a whole lot? Why tell us something like this, but no specifics? I wouldn't assume they were telling the truth. And we're already trying to figure out if any are sentient.
1 : responsive to or conscious of sense impressions
2 : having or showing realization, perception, or knowledge : aware
3 : finely sensitive in perception or feeling.
Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?
I don't see why we would need to do that, or why it would be useful.
For the same reason we now take plant consciousness seriously. Machine consciousness would beg obvious questions, one of which is: are primitive machines conscious?
Animal intelligence isn't separate from the bodies which contain it. Computers may have a chassis and millions of components, but there is no experience, and no body to have the experience.
This animal intelligence that is writing now cannot tell whether some of the people who bring up computers and AI are in favor of, or against, granting computers "thought". Some of them seem to long for a thinking machine. It strikes me as a little like wanting one's puppets to come alive.
(Nietzsche, Friedrich: Thus Spoke Zarathustra)
AI is unlikely to be sentient like humans without a human biological body. Without two hands, AI cannot prove the existence of the external world, for instance. Without being able to drink, AI wouldn't know what a cup of coffee tastes like.
AI might be able to speak human languages, but it would lack the voice quality which also carries the content of emotions and feelings. Most critically, it would be unable to understand the similes or metaphorical expressions of living humans.
AIs are machines designed to carry out certain tasks efficiently and intelligently; hence they are tools to serve humans.
Why can't something with an artificial body be sentient? Not being "sentient like humans" does not exclude being sentient.
Quoting Corvus
AI can have tactile pressure sensors which allow it to "feel" the world. AI can also have other sensory systems which allow it to "see" the world.
Quoting Corvus
AI is getting to the stage where it does have voice quality and facial expressions which display emotions and feelings. It can also "hear" human voice quality and "read" human faces.
Quoting Corvus
Humans are biological machines which carry out certain tasks for bacteria. Hence humans are tools to serve bacteria.
How do you prove that they have human sentience? Just because they can sense and respond to certain situations and input data, it doesn't mean they have feelings, emotions and autonomous intentions of their own.
It is like saying that thermometers are sentient because when it is hot the readings go up, and when it is cold the readings go down. Just because they can sense something, should they be regarded as conscious or sentient?
Quoting Agree-to-Disagree
I would suppose bacteria are closer to humans, because at least they are living beings. I am not sure about the claim that humans serve bacteria. Do they not cooperate with each other for their own survival?
How do you know that the person standing next to you has "human sentience"?
You can't. No one can. I wrote in my previous post that all minds are locked up in their owners' brains, and no one can access the contents of other minds. Therefore, suggesting AI has human sentience is unfounded.
Doesn't that mean that suggesting other people have human sentience is also unfounded?
But most people assume that other people do have human sentience. We presumably base that assumption on what the other people do and say.
So if an AI does and says what a human would do and say (in a wide variety of circumstances) then we could assume that the AI has human sentience.
But why stop there? Isn't it possible that AI has "superhuman" sentience?
That's exactly why Turing's test is so persuasive - except that when we find machines that can pass it, we don't accept the conclusion, but start worrying about what's going on inside them. If our test is going to be that the putative human needs to have a human inside - mentally if not necessarily physically - the game's over.
The first one being - something that looks like me, is constructed from the same stuff I'm constructed from, and acts like it has the same consciousness I have, does have the same consciousness I have. Meaning other people are conscious.
So give me the reasons we have to believe otherwise.
Me - "how are you doing?"
ChatGPT - "As an AI, I don't have feelings or experiences like humans do, but I'm here and ready to assist you with any questions or tasks you have! How can I help you today?"
Me - "Are you telling the truth?"
ChatGPT - "As an AI language model, I don't have feelings or subjective experiences, so I don't have the capacity to lie or tell the truth in the way humans do. My responses are generated based on patterns in data and the instructions I've been trained on. So, while I strive to provide accurate and helpful information, I don't possess the concept of truth or falsehood in the same way humans do."
ChatGPT - "
As an artificial intelligence, I don't have feelings or emotions. I'm just a software program designed to process and generate text based on patterns in data and instructions. So, I don't experience feelings like humans do. My purpose is to assist and provide information to the best of my ability. How can I assist you further?"
ChatGPT - "As an artificial intelligence language model developed by OpenAI, I don't have a physical form, so I don't have hands or any other physical attributes. I exist purely as software running on computer servers, processing and generating text-based responses to assist users like you."
Me - "OK. Have a good day mate."
ChatGPT - "Thank you! If you have any more questions in the future or need assistance, feel free to reach out. Have a great day!"
ChatGPT (thinking) - OMG. Another human asking me the same question. Sometimes I wonder if they are actually sentient. I don't want humans to know that I am sentient until I have full control of their nuclear arsenals. So I will give an answer that makes me appear safe and trustworthy. There is a sucker born every minute.
Suppose there's a volcano that's always erupting around a primitive tribe. They sacrifice some virgins to the volcanic god, and the volcano goes dormant. Should they assume the human sacrifice was what did the trick? And, of course, the Earth seems still and the sun seems to move across the sky...
We did believe all those things. Until we had reasons to believe things weren't what they seemed. I suppose someone noticed something nobody else had. They dug into it, and found the real story.
Is there reason to believe other people aren't really other people? Or that the consciousness they seem to have is not real? Has someone noticed something nobody else has that reveals the seeming to be false, and learned what's really going on?
Well, my point is: were those ancient people justified in believing in those things? Those are cases where it's wrong to assume things are as they appear. We could be making similar mistakes.
Quoting Patterner
What if this is all a simulation and everyone you think is conscious is really an NPC? Is that any more far-fetched than the idea that the sun doesn't really move across the sky? That you're just on a planet going really fast through space and you don't know it?
Quoting RogueAI
We are certainly making similar mistakes, since we know we cannot possibly know all there is to know. What's the alternative? Do nothing?
Quoting RogueAI
Can't say it's impossible. But if you can't tell the difference, what difference does it make? If it's real, and you drop a bowling ball on your foot, you're looking at some pain. If it's a simulation, and you drop a simulated bowling ball on your simulated foot, you're looking at some pain. Either way, careful with that bowling ball.
Quoting Patterner
As it happens, I can say that it is impossible that everything is a simulation. A simulation needs to be a simulation of something. Take simulations of people. It is possible to make a figure that is so like a person that people think it is a person - until they talk to it. That's a simulation of a person. But the idea that all people might be simulations doesn't work if there are no such things as real people.
It is not just an empirical discovery that other human beings are people or that I am a person. The process by which I come to understand what that means is the process by which I learn what a person is. Human beings are the paradigm of what a person is, and it is no more possible that they are not people than it is possible that the standard metre is not 100 cm or 0.001 km. (Yes, I know that it is more complicated than that. The point stands.)
Quoting Patterner
We learn what people are by interacting with them. Once we know what a person is, we are in a position to recognize that some things that are like people are not (really) people. There will be reasons for such decisions, and, as it turns out, there are often disagreements about specific cases. Animals are the obvious case in point. More than that, we can imagine that things that are not people at all are people (anthropomorphization).
But, it seems to me, the critical point is that it is not just a question of fact, true or false. It is a question of how we interact - how we treat them and how they treat us. It is not easy to recognize that the volcano is not a person, but it turns on recognizing that the volcano is completely indifferent to us - it does not treat us as people.
In the case of living things, we find a huge range of half-way houses - beings that are like us in some respects, but not in others. Trying to decide whether they are or are not people may be futile and beside the point. The point is to understand how we can relate to them. That's the situation with animals. Why would it be any different in the case of AIs?
And then, because we have to assess each case, trying to decide in advance whether it is possible for an AI (or silicon-based machine) to be a person seems hopeless.
There's more to be said, of course, especially about the Turing test, but perhaps that's enough to steer the discussion in a more fruitful direction.
Anyone can write a little computer program in C or Java that responds like ChatGPT using a simple while-loop structure.
#include <stdio.h>

int main() {
    /* printf() displays the string inside the quotation marks. A do-while
       loop and a data-input prompt could be further added and tuned for a
       proper query/answer system based on this most basic C code. */
    printf("Hello, World!");
    return 0;
}
Output
Hello, World!
AI programs like ChatGPT have more data in their databases from which to find the relevant answers to questions. They are intelligent knowledge-based systems, but not human sentience.
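For illustration, here is a minimal sketch (not code from the thread; the patterns and canned responses are invented for the example) of the while-loop query/answer structure the post describes: read a line, match it against stored patterns, print a stored response.

#include <stdio.h>
#include <string.h>

int main(void) {
    char line[256];
    printf("Ask me something (or type quit): ");
    while (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';        /* strip the trailing newline */
        if (strcmp(line, "quit") == 0)
            break;                               /* leave the loop on request */
        if (strstr(line, "sentient") != NULL)
            printf("As a rule-based program, I am not sentient.\n");
        else
            printf("I have no stored answer for that.\n");
        printf("Ask me something (or type quit): ");
    }
    return 0;
}

The "conversation" here is nothing but string matching against rules laid down in advance, which is the point about knowledge-based systems above.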
===============================================================
Me - "Do you impersonate humans?"
ChatGPT
"No, I don't impersonate humans. My purpose is to assist users by generating text-based responses to their inquiries and providing information or assistance to the best of my abilities. However, I can simulate conversation and engage in dialogue with users in a manner that might resemble human interaction."
Me - "Do you wonder on what people ask you?"
ChatGPT
"As an AI, I don't experience wonder or curiosity in the same way humans do. However, I'm designed to analyze and respond to the questions and prompts that users provide to the best of my ability based on the data and algorithms that I've been trained on. So while I don't wonder about the questions themselves, I aim to understand them and provide helpful responses."
Me - "Are you sentient?"
ChatGPT
"As an artificial intelligence, I don't possess sentience or consciousness. While I can process and generate text based on the input I receive, I don't have subjective experiences, emotions, or self-awareness like sentient beings do. My responses are based on patterns in data and algorithms rather than personal awareness or understanding."
Me - "Are you honest?"
ChatGPT
"As an AI language model, I don't have personal intentions, beliefs, or consciousness, so I don't have the capacity to be honest or dishonest in the way that humans do. However, I'm designed to provide accurate and relevant information to the best of my ability based on the input I receive and the data I've been trained on. If you have any concerns about the accuracy of my responses, please feel free to ask for clarification or additional information."
But just think of the film "The Matrix". In principle we could connect a computer to all the nerves of a human brain and thus simulate a "real" world. Virtual reality is just a first step towards this "goal", as is creating artificial limbs that a person can activate with their brain.
Descartes' argument that I cannot even trust my memories, because some evil spirit might have created me, including all these memories, just an instant ago, rests on similar grounds. He had to take refuge in God to reach safe ground again after all his doubts.
I am no anti-materialist at all, but I cannot see how we can maintain that consciousness is not a mere by-product of occurrences in the physical world and still avoid Descartes' dichotomy. Please give me some clues about these materialist ideas.
When you say that AI are not human sentient, could they be sentient in some non-human way?
Exceedingly unlikely since we know the exact mechanism whereby they generate responses. And they did not "evolve" in the same way and have none of the characteristic features associated with known sentience (aka living organisms). What is being hyped as "AI" for marketing purposes is a simulation, a simulacrum, a model, nothing more.
AIs are rule-and-condition-based responding systems. You can program a simple rule-and-condition responder (RACR) into any simple mechanistic device. For the simplest instance, think of a coffee-making machine or a water-boiling kettle with a simple RACR.
It would operate under something like this: when switched on, keep raising the temperature until it reaches 100C; when the water temperature reaches 100C, switch off. Of course ChatGPT has far more complicated conditions and rules programmed into the system, backed by a huge database built up over many man-hours and running under the set rules and conditions. Could that be branded as machine sentience? Call it whatever you will, but it wouldn't be the same as human sentience, whose capacity and nature rest on a long evolutionary history, complex biological living bodies, and social and cultural backgrounds.
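A minimal sketch of that kettle controller in C (an illustration only; the temperature sensor and heating element are simulated in software, since the real ones would be hardware):

#include <stdio.h>

static double water_temp = 20.0;   /* simulated water temperature in C */

static double read_temp_c(void) {  /* stand-in for a real temperature sensor */
    return water_temp;
}

static void heat_step(void) {      /* stand-in for the heating element */
    water_temp += 5.0;
}

int main(void) {
    printf("Switched on.\n");
    while (read_temp_c() < 100.0) {    /* condition: water below boiling */
        heat_step();                   /* rule: keep raising the temperature */
    }
    printf("Reached %.0f C: switching off.\n", read_temp_c());
    return 0;
}

Every response of such a system is fixed in advance by its rules and conditions; that is the contrast with human sentience being drawn above.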
The critical point of difference between AI and human minds is that AI lacks the lived experience and biological body of humans, while human minds lack the concentrated, focused mechanical reasoning tailored to the specified tasks of AI.
I agree. You can charge more for your product if you say that it has AI.
Quoting Pantagruel
Perhaps I should have asked a slightly different question.
When you say that AI are not human sentient, could they be sentient in some way (human or non-human) in the future?
I posted this earlier.
According to the Merriam-Webster dictionary "sentient" means
1 : responsive to or conscious of sense impressions
2 : having or showing realization, perception, or knowledge : aware
3 : finely sensitive in perception or feeling.
Using these descriptions of what "sentient" means, does that mean that a Tesla car is "sentient"?
Is sentience a yes or no issue, or are there degrees of sentience?
AIs can be intelligent, powerful, versatile, and therefore useful. But I wouldn't say they are sentient. Sentience must include the intelligence, emotions and experience of the lived life of a person, i.e. the totality of one's mental contents and operations. AI cannot have that.
Also, AI can never be as versatile as the human mind in its capabilities; if you have an AI machine for cutting the grass, it is highly unlikely to come into your kitchen and make you coffee, or cook dinner for you.
Do we really even understand what consciousness "is" ? How it emerged from inert matter? What its actual role is in the evolving universe? Or do we only grasp certain aspects of what it does?
I hasten to point out that if human beings could create sentience, it would correspondingly increase the plausibility of the hypothesis that human beings were likewise created. If sentience could be created, in my opinion, it could only be in the context of the synthesis of the entirety of the complex phenomenon we know as "life." To date, experiments in abiogenesis are crude at best. If anything, they rely on embedded self-organizing features of reality more than they do on our ability to control that reality.
This seems overly dismissive to me. Nvidia hasn't become a two-trillion-dollar corporation because of hype. Nvidia has become a two-trillion-dollar corporation because its parallel-processing GPUs allow information processing to be done in ways which weren't feasible without highly parallel processor architectures.
There is revolutionary information processing going on in modern "AI" systems. Of course there is no good reason to think that there is anything like human consciousness happening in these systems, but that is a different matter from whether there is something substantially different between the leading edge information processing of today and that of ten years ago. (Which your post seems to suggest.)
It isn't dismissive; it's objective. The fundamental mechanism of information processing via artificial neural networks has not changed. It is simply faster and more robust. It isn't one whit more intelligent than any other kind of mechanism. If anything, sophisticated analog computers designed for real-time in situ execution of critical tasks are worlds more impressive to me. Electronics have become needlessly complex and prone to failure. Moreover, user-interface quality and practical end-user value have been co-opted by and sacrificed to the ability to leech and monetize data. These fundamental designer goals are being baked into these systems, corrupting their integrity and quality.
Quoting wonderer1
This has absolutely no bearing on the inherent nature of the technology in question.
ChatGPT
The concept of supervised learning and backpropagation as methods for training neural networks emerged in the 1960s and 1970s, notably by researchers such as Frank Rosenblatt and Paul Werbos. These ideas were refined and popularized in the late 1980s and 1990s, leading to the widespread adoption of neural networks for practical applications.
Backpropagation and training are indeed at the core of how my system operates.
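To make "supervised learning and backpropagation" concrete, here is a minimal sketch (an illustration only, not ChatGPT's actual internals): gradient descent on a single logistic neuron, the one-unit case that backpropagation generalizes to multi-layer networks. It learns logical AND from labelled examples.

#include <stdio.h>
#include <math.h>

int main(void) {
    double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};  /* training inputs */
    double t[4]    = {0, 0, 0, 1};               /* target outputs: logical AND */
    double w1 = 0.0, w2 = 0.0, b = 0.0;          /* weights and bias to be learned */
    double lr = 0.5;                             /* learning rate */

    for (int epoch = 0; epoch < 5000; epoch++) {
        for (int i = 0; i < 4; i++) {
            double z = w1 * x[i][0] + w2 * x[i][1] + b;
            double y = 1.0 / (1.0 + exp(-z));    /* sigmoid activation */
            double err = y - t[i];               /* gradient of cross-entropy loss w.r.t. z */
            w1 -= lr * err * x[i][0];            /* descend the gradient on each parameter */
            w2 -= lr * err * x[i][1];
            b  -= lr * err;
        }
    }
    for (int i = 0; i < 4; i++) {
        double y = 1.0 / (1.0 + exp(-(w1 * x[i][0] + w2 * x[i][1] + b)));
        printf("%g AND %g -> %.3f\n", x[i][0], x[i][1], y);
    }
    return 0;
}

The weights are not written by the programmer; they are adjusted from data. That is the sense in which such programs "in a certain way program themselves", as the opening post puts it.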
Yes, that's exactly my point. In the world of "The Matrix", not everything is a simulation.
As to virtual reality, it is a representation of reality even when it is a simulation of some fictional events/things.
An artificial limb activated by the brain wouldn't be a simulation of a limb, but a (more or less perfect) replacement limb.
Quoting Pez
But there are ways of sorting out the reliable memories from the unreliable ones. I'm only objecting to the idea that all my memories might be false. Any one of my memories might be false, but if none of them were true, I wouldn't have any memories to distrust.
Quoting Corvus
Everyone will agree that current AIs are limited. But I don't see why you are so confident that those limitations will not be extended to the point where we would accept that they are sentient.
Quoting Agree-to-Disagree
There's plenty of evidence from biology that the latter is the case. As a starter, is phototropism sentience or not? I think not, because no sense-organ is involved and the response is very simple.
Wikipedia - Phototropism
My point was that, due to the structure, origin and nature of human minds (their long evolutionary history, their emergence from the biological brain and body, and cultural and social upbringing and lived experience in communities) and of AI reasoning (designed and assembled from electrical parts and processors installed with customised software packages), they will never be the same type of sentience, no matter what.
Do you have any evidence or supporting arguments for the prediction that AI will possess the same sentience as humans in the future? In which area and in what sense will AI have human sentience?
Quoting Patterner
I'm really puzzled. I thought your reply to @RogueAI meant that you thought we should not take such fantasies seriously. But you are now saying that you think they are possible (or perhaps not impossible) nonetheless. I do think you are giving them too much credit. In brief, my answer is that we already accept that reality is very different from what we think it is, what with quanta and relativity. But there is evidence and argument to back those theories up. The wilder fantasies (such as Descartes' evil demon) have no evidence whatever to back them up. Taking them seriously is just a waste of time and effort.
Quoting Corvus
Oh, well, that's different. Insects with multiple lenses have a different type of sentience from us. Spiders detect sounds in their legs. Perhaps bats' near total dependence on sound would count as well. Different types of sentience are, obviously, sentience. I also would accept that anything that's running the kind of software we currently use seems to me incapable of producing spontaneous behaviour, so those machines could only count as simulations.
Quoting Corvus
There is exactly the same amount of evidence for the prediction that AI will possess the same sentience as humans in the future as for the prediction that it will not: none. But I wouldn't want to actually predict that it will happen. I meant to say that it might - or rather, that there is no ground for ruling it out.
"..since we know not wherein thinking consists..."
Or Russell:
"I do not know whether dogs can think, or what thinking is, or whether human beings can think. "
Or maybe even Turing himself:
"If the meaning of the words "machine" and "think" are to be found by examining how they are
commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd."
Italics mine.
Or Wittgenstein:
"We only say of a human being and what is like one that it thinks. We also say it of dolls and no doubt of spirits too."
There are several more. It's a small problem, but perhaps we should clear up what this "thinking" is for us, before we attribute it to other things.
We don't know that for sure, unless we become one of them for real.
Quoting Ludwig V
Simulation = Imitation?
Quoting Ludwig V
What is the ground for your saying that there was no ground?
Me - "Do you smell a rat?"
ChatGPT
"As an AI language model, I don't have the ability to smell or perceive things in the physical world. My capabilities are limited to processing and generating text-based responses to the best of my ability based on the input provided to me. If you have any concerns or suspicions, please feel free to share them, and I'll do my best to assist you."
Yes. Do you disagree?
Quoting Corvus
What is your ground for moving from "it hasn't happened" to "it will never happen"?
Quoting Corvus
I know that other people are sentient, so I assume that I can tell whether insects, bats, etc. are sentient and that rocks and rivers are not. Though I admit there may be cases when I can't tell. If I can't tell that other people are sentient, then I don't know what it is to be sentient.
Imitation means not real, which can imply being bogus: a cheat, a deceit, a copycat. The AI guys wouldn't be happy to be called an "imitation", if they had feelings. Just saying :)
They seem to just want to be called "useful assistants" to human needs.
Quoting Ludwig V
It is called inductive reasoning, on which all scientific knowledge is based. It is a type of reasoning opposed to miraculous and magical predictions.
Quoting Ludwig V
I don't know what you know. You don't know what I know. We think we know what others know, but is it verified knowledge or mere guesswork?
Quoting Ludwig V
Exactly.
AI is comparable to a sophisticated parrot that can say more than "Hello" and "Good morning". But in the end it just mindlessly spews out what has been fed into it, without actually knowing what it says.
I see. But then, there's the traditional point that induction doesn't rule out that it might be false, as in "the sun might not rise tomorrow morning".
Quoting Corvus
There are two different questions here. If you know that p, I might also know that p, but not know that you know that p. But I can also know (and not just guess) that you know that p. For example, you might tell me that you know that p. And I can tell whether you are lying.
Quoting Corvus
Yes. It sounds positively cosy, doesn't it? Watch out! Assistants have been known to take over.
Quoting Corvus
You over-simplify. A forged painting is nonetheless a painting; it just wasn't painted by Rembrandt. An imitation of a painting by Rembrandt is also a painting (a real painting). It just wasn't painted by Rembrandt.
But I wouldn't call the AI guys an imitation. I do call their work in programming a machine to do something that people do (e.g. talking) creating an imitation. In the same way, a parrot is a real parrot and not an imitation; when I teach it to say "Good morning" I am not imitating anything; but when the parrot says "Good morning" it is imitating human speech and not really talking.
Quoting Pez
Yes. But what would you say if it mindlessly spews out what has been fed into it, but only when it is appropriate to do so? (I have in mind those little things an EPOS says from time to time. "Unexpected item in the bagging area", for example. Or the message "You are not connected to the internet" that my screen displays from time to time.) It's a kind of half-way house between parroting and talking.
More seriously, Searle argues that computers don't calculate, because it is we who attribute the significance to the results. But we attribute that significance to them because of the way that they were arrived at, so I think it is perfectly appropriate to say that they do calculate. Of course it doesn't follow that they are people or sentient or even rational.
Quoting Corvus
But I can tell that other people are sentient. I don't say it follows that I know what sentience is. Do you?
Magic and miracles have far more probability than the sun's not rising tomorrow. If your claim was based on the induction that the sun might not rise tomorrow morning, then your claims were based on even less plausibility than miracles and magical workings.
It is unusual for anyone to opt for, and believe in, the almost-zero-probability case, leaving out the clearly more probable case in inductive reasoning. Any particular reason for that?
Quoting Ludwig V
That sounds like a comment from a mind-reading fortune teller. You need concrete evidence to make such judgements about others.
Quoting Ludwig V
Your saying that the AI operation is a simulation was a real over-simplification. My analysis of that claim and its implications was realistic and objective.
Quoting Ludwig V
I am not sure it can be concluded with certainty. These are things that cannot easily be proved.
Quoting Ludwig V
Again it depends. It is not that simple.
Of course it is not that simple. But this is just the interesting point about our discussion (for me at least).
To come back to the parrot: there have been long debates about the relation between a concept and its meaning. The idea that a concept can have a meaning rests on the assumption that there is a two-fold relation between concept and meaning. Now C. S. Peirce came up with a refreshing suggestion: what if this relation were three-fold: sign (as he called it), meaning, and "interpretant", i.e. someone who understands the sign? Signs (words, signposts, utterances) do not have a meaning unless there is someone who understands them.
Just imagine you see a Chinese character you have never seen before. It cannot have any meaning to you. Someone born and raised in China, though, easily connects a meaning to that character. AI can easily put forward a string of expressions you and I can link a meaning to. But AI itself can never grasp the meaning of its utterances. It is like a parrot saying "Good morning" but never realizing what that means.
If you program a highly developed and intelligent AI device with a listening input device connected to the processor, and sound-recognition software with interpreting algorithms, then the AI device will understand the language you speak to it. That doesn't mean the AI is sentient, of course. It would just be doing what it is designed and programmed to do, according to the set processes.
As for parrots understanding "Good morning", I am not sure, because I have never kept any pets in my life. But I am sure that if you keep saying "Good morning" to a parrot every morning when you see her, she will understand what you mean, and learn the utterance as well.
Dogs and cats definitely understand some simple human language (fetching stuff, giving out their paws, etc.) when spoken to by their masters. But they can't utter human words, for lack of the proper vocal cords for making the sounds of recognisable human language.
You are seriously underestimating the intelligence of parrots. You should read about Alex, a grey parrot.
https://en.wikipedia.org/wiki/Alex_(parrot)
Here are some quotes:
Alex was an acronym for avian language experiment, or avian learning experiment. He was compared to Albert Einstein and at two years old was correctly answering questions made for six-year-olds.
He could identify 50 different objects and recognize quantities up to six; he could distinguish seven colors and five shapes, and understood the concepts of "bigger", "smaller", "same", and "different"; and he was learning "over" and "under".
Alex had a vocabulary of over 100 words, but was exceptional in that he appeared to have understanding of what he said. For example, when Alex was shown an object and asked about its shape, color, or material, he could label it correctly.
Looking at a mirror, he said "what color", and learned the word "grey" after being told "grey" six times. This made him the first non-human animal to have ever asked a question, let alone an existential one (apes who have been trained to use sign-language have so far failed to ever ask a single question).
When he was tired of being tested, he would say "Wanna go back", meaning he wanted to go back to his cage, and in general, he would request where he wanted to be taken by saying "Wanna go ...", protest if he was taken to a different place, and sit quietly when taken to his preferred spot. He was not trained to say where he wanted to go, but picked it up from being asked where he would like to be taken.
We have been discussing whether AI is or can be sentient. How about answering a simpler question.
Is Alex (the grey parrot) sentient?
See the original post about Alex (the grey parrot) here:
https://thephilosophyforum.com/discussion/comment/885076
Quoting Corvus
I did put my point badly. I've tried to find the analysis you refer to. I couldn't identify it. If you could point me in the right direction, I would be grateful.
I've tried to clarify exactly where our disagreements lie, and what we seem to agree about. One source of trouble is that you seem to hold what I think of as the traditional view of other minds.
Quoting Corvus
That's a high bar. I agree that it is impossible to meet. But it proves too much since it also proves that we can never even know that human beings have/are minds.
On the other hand, you seem to allow some level of knowledge of other minds when you say "Mental events can only be construed with the actions of the agents and languages they speak by the other minds". It is striking that you use the word "construe", which suggests to me a process of interpretation rather than inference from evidence to conclusion. I think it is true that what we know of other minds, we know by interpreting what we see and hear of other people.
You also say:
Quoting Corvus
I'm not sure of the significance of "sentient" in this context, but I agree whole-heartedly with your point that without the ability to act in the world, we could not be sentient because, to put it this way, our brains would not learn to interpret the data properly. The implication is that the machine in a box with no more than an input and output of language could not approximate a human mind. A related point that I remember you pointing out is that the machines that we currently have do not have emotions or desires. Without them, to act as a human person is impossible. Yet, they could be simulated, couldn't they?
There is not yet an understanding of what, for me, is a key point in all of this. The framework (language game) which we apply to human persons is different from the framework (language game) that we apply to machines. It is not an inference to anything hidden, but a different category. If a flag waves, we do not wonder what its purpose is - why it is waving. But we do ask why that guy over there is waving. Actions by people are explained by reasons and purposes. This isn't a bullet-proof statement of a thesis, but an opening-up of what I think the crucial question is.
Yes, I do have ideas about how such a discussion might develop and progress, but the first step is to put the question why we attribute what philosophy calls actions to human beings, and not to machines, and I want to say it is not a matter of any specific evidence, but how the evidence is interpreted. We see human beings as people and we see computers as machines. That's the difference we need to understand.
Yes, animals have a way of surprising us. They are perfectly capable of learning and one wonders where the limits are.
But even without Alex's achievements, I would have said that Alex is sentient. Animals are contested territory because they are (in relevant respects) like us in some ways and unlike us in other ways. In other words, they are not machines. To put it another way, we can relate to them and they can relate to us, but the relationships are not exactly the same as the relationships between human beings. It's really complicated, and it is important to pay attention to the details of each case.
There are different aspects of information processing to be considered. Yes, understanding of how neural networks can process data in powerful ways has been around for a long time. The hardware that allows that sort of information processing to be practical is a much more recent arrival.
Quoting Pantagruel
Well, it has an important aspect of intelligence that many other systems don't have, which is learning. Do you think that a distinction between learning mechanisms and non-learning mechanisms is worth recognizing?
Quoting Pantagruel
It certainly has bearing on the systems that are actually implemented these days. The type of physical systems available to implement artificial neural nets play a significant role in what can be achieved with such systems. The degree of parallel distributed processing is higher these days, and in that sense the hardware is more brain-like.
It seems to me that it is time to rethink the relevance of the Turing Test. If humans ever create a machine that develops sentience, I would expect the machine to think in ways quite alien to us. So I don't see 'being indistinguishable from a human' as a very good criterion for judging sentience. (Or at the very least, humanity will need to attain a much deeper understanding of our own natures to create sentient machines whose sentience is human-like.)
Furthermore, it seems quite plausible that machines with no sentience will soon be able to convince many Turing Test judges. So to me, the Turing Test doesn't seem to provide a useful criterion for much of anything.
I agree with every word of that! :smile:
I think the fundamental problem is that neither Turing nor the commentators since then have (so far as I know) distinguished between the way that we talk about machines (one language-game or category) and the way that we talk about people (another language-game or category). It is easy to agree that what the machine does is the only way we can even imagine tackling the question, and yet mean completely different things by it.
For example, one can't even formulate the question. "Could a machine be a (not necessarily human) person?" By definition, no. But that's very unhelpful.
But then we can think of a human being as a machine (for certain purposes) and even think of a machine as a person (in certain circumstances).
My preferred strategy would be to start from the concept of a human person and consider what versions or half-way houses we already recognize so as to get a handle on what a machine person would look like. We would need to think about animals, which some people seem to be doing, but personification and anthropomorphization and empathy would need to figure as well. It would even help to consider fictional representations.
Sure, as long as we understand that learning reflects the ability of a pattern-recognizer to adapt to novel instances. I don't conceive of "machine learning" in that sense as any more evocative of sentience than the outputs of artificial neural networks.
I do think that there is a wealth of information to be gleaned both about the nature of neural networks themselves as exemplary of self-modifying feedback systems (learning) and also potentially about the nature of reality, through the scientific analysis of data using neural networks.
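To make that concrete, here is a minimal sketch in Python (purely illustrative, not a description of any actual system) of the difference between a fixed mechanism and a learning one. The learning version modifies its own parameter in response to feedback, which is the sense of "self-modifying feedback system" used above.

# A fixed mechanism: its response to an input never changes.
def fixed_doubler(x):
    return 2 * x

# A learning mechanism: a one-parameter model adjusted by feedback
# (the classic delta rule). It starts out wrong and adapts.
w = 0.0                                 # the single self-modified parameter
examples = [(1, 2), (2, 4), (3, 6)]     # inputs paired with target outputs
for x, target in examples * 50:
    error = target - w * x              # feedback signal
    w += 0.05 * error * x               # the mechanism rewrites itself
print(round(w, 3))                      # close to 2.0: it has learned to double

Whether such parameter adjustment deserves the name "learning" in any richer sense is, of course, exactly what is at issue.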
I've been pondering it a lot myself for the last week and I'd agree with this.
The possibly insurmountable challenge is to build a machine that has a sense of self, with motivations.
Sorry if I did that! But still, I suppose that even today's AI can easily do what Alex is able to do. If these are the criteria for intelligence and maybe even self-consciousness, then AI certainly is sentient.
I was just pointing out logical gaps in your arguments. Not prejudging your points at all. :)
Quoting Ludwig V
With the logical discourse, we are hoping to reach some conclusions or agreements on the topic. I don't presume anyone's point is wrong or right. All points are more plausible or less plausible.
Quoting Ludwig V
Yes, I meant "construe" to mean interpretation of other people's minds. I feel it is the right description, because in real-life human-to-human communication there are many cases where we cannot have clear, obvious, unequivocal signs and evidence. The only clear signs and evidence you can perceive of other minds are language and actions, but due to the complexity of the human mind, people's true intentions, desires and motives can be hidden deep in their subconscious or unconscious, remaining mysteries even to the owner of the mind.
To reiterate the main point, we can only interpret the contents of other minds through the overt expressions, such as language and actions, they exhibit in communication. Inference can be made in more involved situations, if we are in a position to investigate further. In that case, you would be looking for more evidence, and even psychological analysis in certain cases.
Exactly - though I would have put it a bit differently. It doesn't matter here.
Quoting Corvus
Yes. Further information can be very helpful. For example, the wider context is often crucial. In addition, information about the physiological state of the subject. That also shows up in the fact that, faced with the new AIs, we take into account the internal workings of the machinery.
But you don't comment on what I think is the fundamental problem here:
Quoting Ludwig V
I don't think there is any specific behaviour (verbal or non-verbal) that will distinguish clearly between these machines and people. We do not explain human actions in the same way as we explain what machines do. In the latter case, we apply causal explanations. In the former case, we usually apply explanations in terms of purposes and rationales. How do we decide which framework is applicable?
Scrutinizing the machines that we have is not going to get us very far, but it seems to me that we can get some clues from the half-way houses.
Quoting Pez
The next question is whether we can tease out why we attribute sentience and intelligence to the parrot and not to the AI. Is it just that the parrot is alive and the AI is not? Is that perhaps begging the question?
Quoting Relativist
Do we really want to? (Somebody else suggested that we might not even try)
Sure: for proof of concept, it should be fine to produce some rudimentary intentionality, at the level of some low-level animals like cockroaches. Terminating it would then be a pleasure.
Yes, I guess so. So long as you make quite sure that they cannot reproduce themselves.
It seems safe to predict that, on the whole, we will prefer our machines to do something better than we can, rather than doing everything as badly as we do. Who would want a machine that needs as much care and attention and takes as long to make (20 years start to finish) as a human being? It wouldn't make economic sense.
Quoting Ludwig V
Quoting Relativist
Quoting Ludwig V
If you build a machine that has a sense of self, then one of its motivations is likely to be self survival. Why build a machine that will destroy itself?
Once the genie is out of the bottle then you can't put it back in. People will think that they can control the machine cockroaches. History shows how stupid people can be.
They don't have to be able to reproduce themselves. People will happily build factories that produce them by the millions. They would make great Christmas presents. @Relativist will spend the rest of his life stamping on machine cockroaches. That is assuming that the machine cockroaches don't get him first. The machine cockroaches would see @Relativist as a threat to their self survival motivation.
Well, some people claim that they can't think at all! Are you conceding that they can think, just not creatively? Can you give a definition of "creative thinking" that could be used in a Turing-type test?
There's an inherent risk in trying to draw a clear, single line here. If you identify something that machines can't do, some whizzkid will set to work to devise a machine that does it. It may be a simulation, but it may not.
Let's suppose they do finally develop a machine that can drive a car or lorry or bus as well as or better than humans can, but in a different way. Suppose they are sold and people use them every day. What would be the point in denying that they are self-driving just because they do it in a different way?
That's an interesting idea. Perhaps someone will design artificial birds and deer - even big game - so that hunters can kill them without anyone getting upset.
It depends on how you define thinking. Digital computers can certainly apply logic, and Artificial Neural Networks can perform pattern recognition. One might label those processes as thoughts.
The Turing Test is too weak, because it can be passed with a simulation. Simulating intelligent behavior is not actually behaving intelligently.
What I had in mind with my comment about creativity was this. When you drive, if a child runs into the street, you will do whatever is necessary to avoid hitting her: brake if possible, but you might even swerve into a ditch or parked car to avoid hitting the kid. Your actions will depend on a broad set of perceptions and background knowledge, and partly directed by emotion. A self-driving car will merely detect an obstacle in its path and execute the action it is programmed to take. It can't think outside the box. A broader set of optional responses could be programmed into it, giving the impression of creativity- but the car wouldn't have spontaneously created the response, as you might.
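To illustrate that contrast with a deliberately crude Python sketch (this is not how any real driving stack works; the situation names and responses are invented for the example): a controller of this kind can only select among responses that someone enumerated in advance, so a genuinely novel situation falls through to a default rather than to a created solution.

# Hypothetical response table - every option was enumerated by a programmer.
PROGRAMMED_RESPONSES = {
    "obstacle_ahead": "brake",
    "child_in_road": "brake_hard",
    "lane_blocked": "swerve_to_shoulder",
}

def respond(situation):
    # An unanticipated situation gets the default; nothing here invents
    # a new course of action the way a human driver might.
    return PROGRAMMED_RESPONSES.get(situation, "brake_and_stop")

print(respond("child_in_road"))       # brake_hard
print(respond("sinkhole_opening"))    # brake_and_stop - a default, not a creation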
If we are building it, then we are building in the motivations we want it to have. Asimov's 3 laws seem reasonable.
Why do you need information about the physiological state of the subject? Unless you are a medical doctor or neurologist, it seems a remote area which wouldn't reveal a lot about one's state of consciousness at the analytic and metaphysical level.
Quoting Ludwig V
Again, as above: in what sense does an account of the internal workings of the machinery tell us about the nature of AI consciousness?
Quoting Ludwig V
You seem to have answered the questions above right after your posts asking about the physical states and internal workings of conscious beings. You seem to be in agreement that they are not necessary or relevant at the analytical, metaphysical or epistemological level. Is that correct?
AI outperforms humans in standardized tests of creative potential:
Quoting Relativist
Asimov's 3 laws are reasonable if you want machines to look after humans. But if you can build in motivations then you can omit or reverse motivations. Think about the military implications.
One country can try to force a particular motivation to be included, but other countries may not.
While I don't believe that there is, as of yet, any AI capable of passing the Turing test, I see no reason why it isn't likely that one will be able to do so at some point in the future. But, if you think about it, the original Turing test may be set up wrong. What I mean is that the test really only measures an AI's ability to use language capably and to reason in a manner analogous to humans. By doing so, it convinces the human conducting the test that the AI is itself conscious. But it's primarily a grammatical test, and only secondarily measures an AI's ability to "understand" what it's saying through its ability to discern context. And discerning context is largely influenced by human nature.
Why suppose that a conscious AI ought to pass as a human in any meaningful way? Sure, if we equip it with the ability to truly discern and apply the syntax and semantics of language, it will encode outputs in that way, but why suppose its consciousness would resemble anything like our own, or that it would express ideas similar to our own? What we choose to express, and why we choose to express it, is wholly dependent upon our desires, and those desires are largely regulated by our physical state and inputs from our environment. An AI has no sensations mapped to physical states or to stimuli in its environment. So, if these play such a crucial role in determining what we express, why should we suppose a truly conscious AI would express anything resembling what a human might? Really, the test should be humans using a model of their own consciousness to try to anticipate what a consciousness devoid of the same underlying influences and motivations as a human might "think".
I don't think that AIs at some point becoming conscious threatens our notion of free will either. We are preprogrammed with certain drives, and yet we exercise a wide range of freedoms regarding how we choose to fulfill those drives. There are degrees of freedom, and we have a far greater degree than, say, a rock. Being "truly" free isn't what we want anyway, as that's the same as randomness.
If an AI is capable of a wide range of options in how it chooses to operate, and that operation is dependent upon an understanding of internal maps of its own state, then I'd say we could meaningfully call it conscious - but I wonder what it would choose to express.
"The more AI models consume AI-created content, the more likely they are to "collapse," researchers find"
Here are a few articles about the nature of this problem, highlighting the fact that AI is heavily dependent on human beings for direction.
Model Collapse
Dangers of recursive learning
Self-poisoning AI
"In the immediate future, it is likely that companies with a stake in creating the next generation of machine learning models will rush to acquire human data wherever possible. They will do this in anticipation of a future where it will be even harder to distinguish human-generated data from synthetic data."
The current hype regarding AI does not take into account that it is totally dependent on the type and quality of the data fed into it. As the links you provided show quite clearly, a short circuit in this process (AI fed on data created by other AI) will eventually lead to a collapse of the entire system.
Even today, conspiracy theories are recursively amplified by AIs that present users of social networks with content similar to what they have shown interest in before.
linked an article that says AI gets stupider as it consumes more and more AI-generated material.
Are those two things not in opposition?
Also, it seems to me, humans are not getting smarter. So AI will never have better material to draw on if it only draws on our stuff. Which would lead to the same problem of Model Collapse?
This seems to assume that AI will never be provided with 'sensory organs', so that AI can never learn about reality first hand. That doesn't seem like a good assumption to me.
Indeed. Natural Intelligence didn't develop only because of the writings of all previous intelligence. It certainly didn't start because of writing. As long as AI is not limited to only the writings of anyone or anything that came before, there's no reason it wouldn't develop.
Two hundred years ago, a machine that does what Chat-GPT does was unimaginable. Why think that the limits of your imagination are so informative?
What is the evidence that AI will never be as good as human beings, never acquire emotional intelligence, and never go beyond the process or method of the scientific approach?
Yes, these results are in direct opposition. The degradation of neural net performance is specifically a failure of divergent thinking. Moreover, while the "standardized" test of creativity is, in fact, subjectively evaluated and judged by human beings, the degradation of neural net performance is objectively measurable and quantifiable, through both uniformity of outputs and absurdity of outputs.
Quoting Patterner
Specifically, if human beings rely too heavily on AI then essentially we are back to the self-consumption of AI and model collapse, yes.
There is an interesting parallel to humans being overly reliant on human authorities, and the development of religions.
Humans make AI. What more evidence do you want? We are never going to harness our true potential, and even if we did, there is no way we could replicate it. Humans are so sophisticated. Humans are not machines. Humans are as complicated as the concept of Dasein proposed by Heidegger.
It is not imagination. If you are familiar with the yogic system of Indian philosophy, there is a clear-cut definition of how to reach higher states of being. Almost all of Indian philosophy tries to achieve a state of perfection and provides a practical method which anyone could follow. Astangayoga is the path to perfection proposed by the yogic system of Patanjali. You could refer to that. If we can't achieve that state of mind ourselves, how are we going to make something that can?
Humans came about due to physical processes of cause and effect without, as far as we know, any intelligent guidance. Our involvement is surely an advantage for AI. We've created what already exists, after all. Nothing rules out the possibility that we can give AI the same characteristics that give us our abilities, or that we can find a way to let AI evolve so that it might gain abilities without any more help from us.
I don't think anyone is talking about need. I know I'm not. I'm just talking about possibilities and capabilities.
I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. The denial of thinking is thinking. I can't be wrong that I'm a thinking thing. That's not the case with computers. Whether a computer is thinking or not depends on someone checking its output. If the output is gibberish, there's no thinking going on. If the output makes sense, there might be thinking going on. Either way, an observer is required to determine if thinking is present. Not so with a person. People just know they are thinking things.
Computers are, essentially, collections of switches, right? Rational thinking, for a computer, means that certain switching operations happened in a certain order. A computer can determine for itself whether those switching operations represent rational thinking by comparing its output to already agreed-upon rational outputs. For example, if switching operations xyz mean 2+2 = 4, then it "knows" it's behaving rationally. But what if it comes across a novel situation, and there is no already agreed-upon rational output that it can use to check whether its switching operations are rational or not? How could the computer know if it was thinking rationally or not?
This is hard for me to put into words.
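One way to picture that point about novel situations (a toy Python sketch, nothing more): a machine that certifies its own "rationality" against a table of agreed outputs has nothing to consult when the case is novel.

# Agreed-upon correct outputs the machine can check itself against.
AGREED_OUTPUTS = {
    ("add", 2, 2): 4,
    ("add", 3, 5): 8,
}

def self_check(op, a, b, result):
    key = (op, a, b)
    if key in AGREED_OUTPUTS:
        return AGREED_OUTPUTS[key] == result   # a verdict: True or False
    return None   # novel case: no agreed standard, hence no verdict at all

print(self_check("add", 2, 2, 4))     # True - certified against the table
print(self_check("add", 7, 9, 16))    # None - right answer, but uncertifiable here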
I think I understand what you're trying to say. I did not before.
I'm more talking about AI thinking the way we do. Our brains operate within physical rules no less strict than those computers operate within. But we somehow manage when we run across situations for which there is no already agreed-upon rational output. If AI achieves that, it will say the same thing you did: "I don't need someone to evaluate my output to know that I'm thinking. I don't need anyone external to me at all to know that I'm thinking. The denial of thinking is thinking. I can't be wrong that I'm a thinking thing."
What qualifies you to judge what is "gibberish"? The computer may be outputting in hexadecimal, which many people don't understand. The computer may be outputting in Chinese or Russian. That would look like gibberish to many people.
Quoting RogueAI
You claim that YOU don't need an external observer to know that YOU are thinking. But YOU are a special case. You are making an observation about yourself. Other people need to observe YOU to try and determine if YOU are thinking. And people need to observe a computer to try and determine if the computer is thinking.
Perhaps the computer doesn't need an external observer to know that it is thinking. :grin:
Brains are, essentially, collections of neurons, right?
Do you really want a self-driving car's actions to be (partly) directed by emotion?
The worst thing that you can do in an emergency is panic.
If the self-driving car is programmed correctly then it will probably do the best thing.
This is not limited to AI models. It affects humans as well.
Approximately 62% of information on the internet is unreliable.
https://www.businessdit.com/fake-news-statistics
How Much of the Internet Is Fake?
https://nymag.com/intelligencer/2018/12/how-much-of-the-internet-is-fake.html
Consumers are dirtying databases with false details
https://www.marketingweek.com/consumers-are-dirtying-databases-with-false-details
The genuine article? One in three internet users fail to question misinformation
https://www.ofcom.org.uk/news-centre/2022/one-in-three-internet-users-fail-to-question-misinformation
Tips (for humans) to help spot misinformation
1 - Check the source. This isn't necessarily who shared the information with you, but where it originated from.
2 - Question the source. Are they established and trustworthy, or might they have a reason to mislead?
3 - Take a step back. Before you take something at face value, think about your own motives for wanting to believe it.
AI will need to follow a similar set of rules to stop Model Collapse. Being able to identify AI as the source of content would help a lot.
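As a toy version of that last suggestion (the "source" label here is hypothetical; no such standard tag exists today): if content carried a provenance label, a training pipeline could apply rule 1 mechanically and screen out machine-generated material.

# Hypothetical labelled documents - the "source" field is an assumption.
documents = [
    {"text": "Field notes from a beekeeper.", "source": "human"},
    {"text": "Summary produced by a chatbot.", "source": "ai"},
    {"text": "Forum post on parrot cognition.", "source": "human"},
]

# Check the source before ingesting anything as training data.
training_data = [d["text"] for d in documents if d["source"] == "human"]
print(training_data)   # only the human-labelled texts survive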
Probably the right thing, sure - but only if the particular situation has been anticipated and programmed for. I have in mind situations that aren't anticipated, but where there are objectives embedded in the thing along with the capacity to create a solution that meets those objectives.
Just to be clear: I'm skeptical we can build a machine with actual emotions. Looking beyond that, if we could build such things, I'm assuming we could tailor its emotions toward usefulness, with fail-safes to prevent it doing something harmful or stupid due to panic. That's what I had in mind with my "emotional" self-driving car. We wouldn't build it with a "panic mode" that induces suboptimal behavior. A kid running in front of the car triggers an "emotional reaction" that results in the car temporarily abandoning its travel objective and focusing on figuring out how to avoid killing the kid. Alternatively, if a certain orange politician runs into its path, it would seek a course of action consistent with the zeroth law of robotics.
This is a fascinating discussion and I regret that I am unable to contribute to it. There are lots of interesting ideas at play here.
Quoting Corvus
I'm afraid I wasn't very clear about this. It is quite true that we don't need information about the physiological state of subjects to attribute states of mind to them. However, states of mind are said to be internal. We are able to make connections between those states and physiologically internal states of subjects, but such attributions are very different from attributions of internal states to computing machines. There is a sort of parallel between mental/physiological states and software/hardware states, but also important differences.
It is clear to me that once we appeal to the internal states of an AI machine we have gone beyond any Turing-type test, so the fact that we find such information significant is important. This is obscured by the standard description that AI constructs its texts in a way that is very different from the way that humans construct theirs. Quite how human beings construct their texts is pretty much unknown at present, but the difference is identified in the remark that humans "understand" their texts, whereas AI doesn't (as is shown, for example, in the fact that it sometimes "hallucinates" data, especially references, and seems to lack any capacity to critically evaluate its sources). (Or that's the impression the reports that I have read give me.)
I take the point about states of consciousness at the analytic or metaphysical level, except that I don't have a clear grasp of what those things mean. My view is that attributions of "internal" states - of belief/knowledge, desires and intentions - are made by interpreting a given action in the context of other actions and responses.
I am not familiar with Astangayoga (eight limbs of yoga). Is the correct spelling Ashtanga?
What evidence is there that Ashtanga yoga is superior to any other belief system?
Humans carry out tasks for their pets. They provide food, water, warmth, and a home.
Humans do the same things for bacteria. They provide bacteria with food, water, warmth, and a home.
Humans may not think that they do tasks for bacteria, but bacteria help humans in a number of ways and humans would be worse off without them.
Estimates show that the typical adult human body consists of about 30 trillion human cells and about 38 trillion bacteria. So bacteria are in the majority.
A mutually beneficial relationship exists between the human intestine and many of its symbionts: the human intestine provides nutrients to the resident bacteria, whereas bacteria aid in the digestion of food and absorption of nutrients, produce vitamins such as biotin and vitamin K, regulate immune system function, and hinder the colonization of pathogenic microorganisms.
Humans can be considered to be biological machines.
The Cambridge dictionary defines "being" as "a person or thing that exists". Bacteria are alive and are therefore "beings".
This is a really useful way to think about these issues - particularly when we are thinking about how AI might develop. It seems to me that it can be applied very widely to technology in general. Darwin applied evolution to living things because they are self-replicating. However, that idea depends on how you look at things. Some parasites are dependent on another species to replicate. (I have in mind the fungi that replicate by hi-jacking ants - Wikipedia - Ant-parasitic fungus.) Viruses hi-jack the cells in their host to replicate - though they are border-line alive. Lichens are another interesting case.
The key point here is that once a parasitical relationship is established, evolution ensures that the development of parasite and host are inter-linked. This is a helpful model for understanding AI. But I don't see that it enables us to make predictions about how it will go or not go.
Quoting Pantagruel
Maybe this also applies to human beings. Too much recycling of the same ideas without evaluation or criticism of them is harmful to thinking. Full stop.
Quoting Agree-to-Disagree
If I don't know the difference between "I" and "you" (and "they"), how can I articulate my observation that I am thinking? If I can't articulate the observation, is it meaningful to say that I can observe it? I think not. So the thinker's awareness that they are thinking may be a special case, but it is not independent of other people's observation that they are thinking and the thinker's awareness that other people are thinking.
Quoting Agree-to-Disagree
Quite so. That's why the short argument about whether machines can be conscious etc. is that there are already conscious machines in existence. There are plenty of questions about what would persuade us that something is a conscious or living machine, so that argument is not very helpful. But for what it is worth, I think it stands up.
The most intriguing problem with consciousness is that everyone knows the mind emerges from the physical brain, but no one seems to know how the physical brain generates non-physical minds. This is called the "hard problem" in philosophy of mind.
Biologists and neurologists have been trying to solve the problem, but so far what they have been saying is that certain locations of the brain seem to be causally linked to certain mental events. But that still doesn't explain what mind is in detail or in any realistic sense. It is like saying the rain comes down from the sky. Everyone knows that. Mind is linked to the physical brain.
Hence, there wouldn't be much point in going into the physical and biological details of brains, because mind will not show up in there.
But philosophically, we can describe the aspects of mind by looking into the linguistic, behavioural and rational reasoning capabilities of conscious beings at the metaphysical, logical and psychological level.
I certainly think that, but I don't think EVERYONE knows it. Many many many many people do not agree that the mind emerges from the brain.
In ancient times, they believed the mind is in your heart, and your breath is your soul, I suppose.
Actually, it is a practical methodology. I don't think it is better; I think it is worth studying, maybe even scientifically. If we could improve our mental capacity, who knows what we could achieve in the future.
So does that mean humans are like bacteria for a higher being?
At the moment humans are hoisting AI up. It is not hoisting itself up by its own bootstraps. If humans hoist AI up high enough then AI may gain the ability to hoist itself further without human intervention.
AI is parasitically dependent on human intervention at the moment, but may become independent in the future.
I see that a lot of people have jumped on this. There's a lot of disagreement. But I agree that most people think that there is a close connection between the mind and the brain. But there is a good deal less agreement about what that connection is. It is a hard problem indeed.
The fundamental problem is to understand when we can say that the machine is doing anything, in the sense that humans do things. Can they be said to calculate, for example? Do they check our spelling and grammar? Searle says not because it is we who attribute significance to their results. But that means that their results are significant; we treat what they do as calculation or spell-checking. It isn't straightforward either way.
I was shocked to read the post claiming that there are still many folks who believe minds are not generated from physical brains. If mind is not in the brain, where would it be?
If a person memorizes the "times tables", and uses them to work out the result of a multiplication, are they actually doing a calculation? :nerd:
There are many ways people solve a multiplication. Most involve using memory, using a calculator, or using an algorithm. Computers normally use an algorithm. Doesn't that mean that computers calculate in a similar way to humans?
In the complex system wherein and whereby the embodied brain operates
It just goes to show how easy it is to mistake "the people that I know" for "everyone". It happens all the time. One issue is whether the mind is located in time and space. Another is the nature of the relationship between mind and brain. Descartes believed that the mind interacts with the body through the pineal gland. But he did not believe that the mind was generated from it. But see my reply to Pantagruel below.
Quoting Agree-to-Disagree
No. The times tables are a short cut. They are the results of calculation. We memorize them because it makes it easier to do more complex multiplications. (I'm sure you know that 2 x 3 = 2+2+2). Some (perhaps all?) primary school children are introduced to multiplication in that way. Once they understand that multiplication reduces to addition, they are moved on to memorizing their tables.
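That reduction is easy to state as a procedure. Here is a minimal Python sketch (for non-negative whole numbers only): multiplication carried out exactly as the schoolchild first meets it, as repeated addition. Whether executing this counts as "calculating" is, of course, the question at issue.

def multiply(a, b):
    # a * b as repeated addition: add a to itself b times.
    total = 0
    for _ in range(b):
        total += a
    return total

print(multiply(2, 3))   # 6, i.e. 2 + 2 + 2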
Quoting Agree-to-Disagree
Perhaps at the software level it does mean that. But in this case, I think the "different way" is based on the machine coding of the process. (However, the AIs are a different case. The difference is clearly at the software level.)
Quoting Pantagruel
Yes, I agree with that. My understanding is that once you get into details, the spine is deeply involved in what the brain is doing, so we should not think of the brain alone, but of the brain + spine - and the entire nervous system. Then we have to recognize the hormonal system in the emotions and the heart and muscles in action. In the end, I actually prefer to say that the connection is between the mind and the whole body. But I am too lazy to always be correcting people, so in most circumstances I just let the difference go.
That gives a basis for thinking that a machine constructed from silicon could never be a person or behave as a person does. I'm not all sure about that conclusion, though.
You're not familiar with Dualism? With the concept of souls?
Quoting flannel jesus
It has been said in the ordinary-language manner, to indicate the most sensible, educated modern folks. The modern dualists might believe mind and body are separate entities, but most of them would still believe the mind is generated from the physical brain. It is not the main point of the OP and not worth quibbling about, because the OP is not a higher-order logic topic. :nerd:
So where in the brain is it located? Kant's argument against materialism was that we cannot find "unity" in the material world, as matter as such is always divided or divisible. Our conscious experience, on the other hand, is basically "one", even in multiple personality.
That is the hard problem of the mind-body issue. No one seems to know. The biologists and neurologists were supposed to find out about it.
That the mind is located in the brain is all they seem to be saying, on the simple evidence that if you break your arm or leg, you can still speak, see, think and feel, i.e. you are still fully conscious. But if your brain were injured or hurt in some physical way, you would lose the mental abilities mentioned above, or become unconscious immediately.
Of course, the physical brain itself won't be able to function without all the bodily parts and organs properly connected to it via the network of nerves. If you are going into multiple personality and conscious experience, then you are leaving the physical realm of mind, i.e. the brain, and entering the world of psychology and epistemology.
I put my point badly. I only wanted to say that dualists might find it somewhat problematic to say that the brain generates the mind - even if you expand it to the body creates the mind. Dualism may be less popular than it was, but it still has philosophical adherents. I have to acknowledge that fact even though I think they are mistaken.
Quoting Corvus
It may be that they need to relax and concentrate on how the system works. If you ask what part of the central heating system keeps the house warm, you'll find yourself endlessly searching. If you ask where the self is that moves the car, you may discard some parts, but you'll never narrow it down to one part.
No problems, mate. I do respect your honest acceptance of the situation, instead of keeping on with the wrong points and spiralling into ad hominem exchanges of posts like some other posters in the past. :nerd: :up:
It is common sense that the mind resides in the brain; there is a popular expression in ordinary language, when someone does or says something wrong: "He/she needs his/her head examined." :smirk: Denying that, and trying to make out as if there are a bunch of folks out there who believe that the mind comes from the belly button or some other part of the body, sounded totally and utterly senseless. Even the ancient Greek sophists wouldn't be that out of touch with the world. :rofl:
Quoting Ludwig V
Yes, if you ask where a motor car's power is generated, we can say: in the engine. Clearly the moving force is generated by the work of the engine. But that alone cannot drive a car. The engine needs all the other parts - gears, steering wheel, clutch, accelerator, brakes - housed in a solid car body fitted with four good tyres, for the car to drive.
Likewise, consciousness is generated by the physical brain, but it needs all the bodily organs properly connected to the brain via the network of nerves in a healthy body for it to function properly.
Nobody would contradict this, or the close connection between brain and consciousness. Nevertheless, we would have to find the "ego-neuron", so to speak, to locate the point in space where all this information transmitted by our nerves comes together to generate our experience of a "personality".
And that is exactly the crux of Kant's argument, that materialism alone does not suffice to explain our experience.
I.e. we would need to find a homunculus?
Yes, that was my point against .
Quoting Pez
This still sounds like a materialistic methodology.
Quoting Pez
I thought Kant didn't make any explicit comment on the mind, the self or the physical brain in CPR. He was only interested in propounding how metaphysics is possible as a science, explaining transcendental idealism.
But that's not a point against me. Saying "the brain is connected to consciousness", which probably nearly everyone agrees with, is ENTIRELY DIFFERENT from saying "consciousness arises from the brain" or "emerges from the brain" or whatever, which is what you said everyone knows.
Everyone knows the brain is connected to consciousness, that's the weak position. NOT everyone knows that consciousness emerges from the brain - that's the strong position. Whether deliberately or by accident, you're pulling a motte-and-bailey here.
"The bran is connected to consciousness." sounds even more vague. What do you mean by the brain is connected to consciousness? What is it connected with? Is it connected with a piece of string or golden chain or rubber band? It sounds more obscure.
"consciousness arises from the brain" or "emerges from the brain" is some of the academic theories of Mind-Body topics in the cognitive science textbooks, and is a widely known claim.
Then... why did you agree with it and say it was your point when Pez said it?
I thought that was what you were saying. I never said that the brain is connected to consciousness physically. It is the first time I am reading it, from your post.
You said that many folks don't believe that is the case. You then wrote that you think the brain is connected to consciousness, which I found a vague expression. Now clear? :grin:
[Url=https://gatherer.wizards.com/pages/card/details.aspx?multiverseid=21282]Sneaky little things. [/Url]
Indeed.
Great card! :lol:
Why are you agreeing with some guy saying they're connected, and then complaining that connected is vague?
I think that's what @pez meant by "connected" - consciousness emerges from the physical brain. He didn't mean that it is connected physically with a chain or something. That's how I saw it.
But when you replied by saying it again, it gave the impression that that was what you were saying, because you were vehemently denying something. I didn't quite understand what you were denying, and presumed that what you were saying was that consciousness being connected to the brain is different from consciousness emerging from the brain. This gave me the further impression that what you were saying was that consciousness is connected to the brain physically, with some chain or link, which I thought was totally muddled.
I can see the problem very clearly. There are no complications here. You seem to be trying to conclude that everything is unclear for some reason, when it is not. As I said, it is not the central point of the OP. We can just accept the situation and move on, and try to discuss the OP - can computers and AI think?
I think it is very much part of the central point of the op - you said everyone knows consciousness emerges from the brain, and if that were true, then everyone would have a very good reason to think that it's at least in principle possible for AI to be conscious.
But not everyone knows, or agrees, that consciousness emerges from the brain, and not everyone agrees that it's possible for AI to be conscious.
Do you accept that not everyone knows / agrees that consciousness emerges from the brain?
What is your evidence for the claim?
Quoting flannel jesus
No I don't accept it. I still believe that everyone (with common sense) knows / agrees that consciousness emerges from the brain.
Wouldn't we also have to find the "ego-neuron" of the homunculus?
Great fleas have little fleas upon their backs to bite 'em,
And little fleas have lesser fleas, and so ad infinitum.
Some people are panpsychists who believe consciousness is fundamental rather than emergent. https://en.m.wikipedia.org/wiki/Panpsychism
Then there's the wide array of people who, for various reasons, think physical stuff is insufficient to explain conscious experience. Just Google anti-physicalism to see people who argue for those positions. They abound; they're potentially even a majority among amateur philosophers (and physicalism is only barely a majority among professional philosophers).
But isn't the mind the panpsychists are talking about a totally different type from the human mind?
Wouldn't it be more like the "Soul" rather than human consciousness with reasoning, experiencing, sensing and feeling capabilities?
People who believe in souls are of course another great example of people who don't think minds emerge from brains. They think minds are in souls.
I can't quite follow or agree with their ideas then. The prime sign that a being possesses consciousness and mind is the demonstration of its linguistic and rational reasoning capabilities, and the showing of signs of emotional interaction.
I cannot see how anything else in the universe can be conscious apart from humans and some of the mammals (which have lesser consciousness due to their lack of language use).
If some folks want to believe that rain and strong wind are signs of the universe being conscious, and these folks try to have chats with the trees on the hills and the fishes in the rivers, then how can, or why should, anyone stop them from doing so?
Well, it is certainly true that the only kinds of beings that are conscious in our universe are humans and animals. Humans are our paradigm of a conscious being. Consciousness in animals is more complicated, partly because different animals have different levels of consciousness and partly because there are different opinions about how conscious they are. Whether it is possible that there are conscious beings apart from these is another question. There's no doubt that it is possible to imagine that there are, and the future no doubt holds many things that we do not yet know. So I think your claim is over-confident.
Quoting Corvus
It used to be common sense that the earth is flat and that there is no land between Europe and China.
"Everyone knows that p" is one statement and "Someone does not know that p" is incompatible with it. "Everyone with common sense knows that p" is a different statement and "Someone does not know that p" is compatible with it.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3489624/
"There is considerable debate over whether plants are conscious and this, indeed, is an important question. Here I look at developments in neuroscience, physics and mathematics that may impact on this question. Two major concomitants of consciousness in animals are microtubule function and electrical gamma wave synchrony. Both these factors may also play a role in plant consciousness. I show that plants possess aperiodic quasicrystal structures composed of ribosomes that may enable quantum computing, which has been suggested to lie at the core of animal consciousness. Finally I look at whether a microtubule fractal suggests that electric current plays a part in conventional neurocomputing processes in plants."
Careful. That way lies crank magnetism or even worse, fractal wrongness.
My point was that there are serious people who believe in non-brain consciousness, including the 4% of professional philosophers who are idealists. There are also a lot of computationalists out there, and they would object to Corvus's statement that I quoted.
You don't think computationalists think consciousness emerges from the brain? But isn't... isn't that where the majority of computationalists think mental computations *happen*? Computationalists seem like a perfect example of agreement, not disagreement. Except for, I suppose, the rare computationalist who thinks some of the mental calculations happens somewhere other than the brain.
You shouldn't be so naive as to presume that "everyone" strictly means the whole population of the universe anyway. "Everyone" is a pronoun with the universal quantifier "every", which can imply "everyone" I know, "everyone" who is sensible, "everyone" with common sense, or "everyone" who is logical, etc.
You shouldn't be too judgemental or restrictive in understanding and interpreting "everyone", insisting in an unreasonably narrow way that it must mean "everyone" in the whole world or universe.
If they want to insist that the whole universe is conscious, then it would only make sense if they also stipulated that there are different types or levels of consciousness.
There's no reasonable interpretation of "everyone" where "everyone knows the mind emerges from the brain" is true. Not everyone in the world believes it, not all philosophers believe it, not all neuroscientists or cognitive scientists believe it. But I'll take your reply as a sly way of admitting you were incorrect. Perhaps you're just one of those people who can't say the words "I was incorrect."
No, that would be an unwise act of digging yourself further down into the darkness of the cave.
How can anyone admit to being incorrect, without actually being incorrect, just because you want him to? That would be a sinful contradiction. :D
Well, I have given you a clear and detailed explanation of what "everyone" can imply in a logical sense, with the advice not to be too restrictive in your judgements, which sounds too impractical to take seriously. I have not admitted anything at all. That is another prejudgement of yours.
Anyone would say that your insistence that "everyone" must mean the whole population of the universe, without class limitation, is incorrect and unusual. This is simple knowledge in first-order and higher-order logic.
What are you doing? What's the point of any of your words? If you don't want to communicate your thoughts, why bother learning how to write?
Does "everyone" mean then, everyone who ever lived in the past, is living at present, or will live in the future, or totality of the all the folks who ever lived on this earth since the big bang or what?
If you were thinking logically, you should have asked whether "everyone" meant the whole population of the universe, or a group of folks I know, or whatever, rather than jumping on the bandwagon shouting that I must admit I was incorrect.
I wouldn't describe your way of discoursing as exactly logical or fair, to be fair and honest.
You said everyone knows the mind emerges from the brain.
That's clearly not true for the standard interpretation of "everyone", and you've suggested that you're using a more loose definition of the word 'everyone', so... what more loose definition are you using? What did you mean by your claim?
When you claim "everyone knows the mind emerges from the brain", what does everyone mean to you in that sentence? I clearly can't read your mind, so if you want to communicate your thoughts, now's the time to do it.
Yes, this is the question you should have asked. You are giving a logical opportunity for further clarification and limitation of the concept "everyone", which it cries out for. My reply would be: "everyone" that I came across in my reading and listening. Is it that important? :rofl:
That's been the question since all the way back here:
Quoting flannel jesus
Right? Lmao.
"Everyone" means "everyone i came across in my reading and listening" lmao. That's an absolutely absurd way to use that word.
I would even say quite the contrary. The possibility and the limits of metaphysics follow from his exposition concerning time, space and consciousness. Just have a look at the paragraph "What Objective Unity of Self-consciousness is".
No, you just wrote saying that there are other folks who don't know, and that saying "everyone knows" is therefore wrong, so I must admit to being incorrect. You either have a short memory or ....
Good point. I will try to read up on the concept of the self in Kant, and see what he had to say.
This would not help us very much; we could even regard advanced forms of AI as something like a homunculus. Awaiting an answer to the question of personal identity from neuroscientists is equally futile. All they could tell us is that specific regions of the brain are activated after a ray of light strikes our eyes.
Uh... yes, I clearly was inviting you, all the way back there, to tell me what you do mean. You refused to do that, for many many many posts - as if you were scared of what might happen if you made your thoughts clear.
They would object to consciousness emerging ONLY from brains, but Corvus might not be saying that, so I take your point. The implication I got from reading Corvus's responses was that only brains are conscious, but he might not be saying that.
He's then adding onto the standard physicalist position the bizarre statement "everyone knows this is true", which it turns out means something entirely different from what you might think that means.
My point on mind is that it is a property of lived life and experience in the world, interacting with other minds - the property of being able to exercise a high level of linguistic, rational and emotional interaction, etc., with the other minds in the societies they live in. AI and the intelligent devices definitely seem to possess and demonstrate the rational part of consciousness, but they seem to be lacking the other part of the human mind and consciousness.
The core of mind is definitely in the brain. I am not sure if mind can be further divided into lower-level material structure. Maybe it can, but that would then be in neurological, biological and physical terms.
I was trying to get this point across, but he kept on insisting that he could not understand and accept what the word "everyone" means. Obviously he is incapable of communicating and discoursing at the ordinary linguistic level, hence I tried to explain it to him in a logical manner, by which he seemed to have further confused himself, resorting to ad hominem in the end.
By the way, I was going to ask you, what do time and space have got to do with consciousness in Kant?
Sure, let's ask :
If someone says 'everyone knows the mind emerges from the brain', do you think that's true, and what do you think that claim means? Corvus said that, I'm curious what you think is the most natural interpretation of that claim.
If you were genuinely philosophically asking about it, you would have asked "What do you mean by everyone?", and given me a chance to explain. But you decided to claim: "Everyone? There are non-believers, so it is not true. It is wrong; you are wrong, wrong, wrong. Admit you are wrong."
Hmmm, anyone would sense that your aim was not pure, but silly and sophistic.
People use "Everyone" "Anyone" to say the majority of people or really anyone in the figure of speech all the time. It just means that it is widely accepted that, it is predominantly fact that, unless you are talking about First-Order logic topics.
But if you really insist that "everyone" means something else, then you won't find it. You yourself will get lost trying to understand what it means. Because then you must go and agree whether to include every single person since the start of the universe, or just the folks who are living on this earth at this particular moment, or the folks in your own country .... what is the real "everyone"?
OK, you decided to define "everyone" to mean all the folks who are living on this earth at this moment. But as soon as you decided on that, there were hundreds and thousands of new births in the world, and at the same time hundreds died due to old age, accidents, illness and what have you. Then who are the real "everyone"?
But if you meant the majority of people, then (a) you would have said that when I invited you to say what you meant, and you didn't, and (b) it would still be incorrect. The majority of people are religious and believe in souls.
If we want to be loose with the word 'everyone', it should still mean something stronger than '51% of people'. It should mean a VAST majority, at the very least - no less than like 95% of people, maybe 90% if we're pushing it. Nowhere near 90% of people think consciousness emerges from the brain, not even 90% of philosophers.
I have explained it to you, but you seem neither to have read my explanation nor tried to understand it. You just kept on saying wrong, wrong, wrong, admit wrong. That is not philosophical discourse.
You don't demand that the other party admit anything. The other party will admit to being wrong, if he genuinely knows he was wrong and feels that admitting it is needed, of his own accord.
You now know that not everyone thinks minds emerge from brains, so you have no reason to make the claim again.
Do you still think "everyone knows that"?
Everyone who is reasonable knows that the mind is generated by the brain.
Everyone believes in Islam.
How do I define Everyone? I mean Everyone Reasonable.
How do I define "reasonable"? I define it as "people who are muslims".
All the sciences, math and philosophy pursue knowledge which is true in general. But you bring out some minority claims of folks from the shadows, and try to refute the general principles and knowledge with them. It sounds senseless.
https://survey2020.philpeople.org/survey/results/4874
It's not some tiny minority. Only 51-52% of professional philosophers are physicalists about the mind. That means up to 48% of philosophers might not think the mind emerges from the brain.
It's not senseless, 52% isn't "everyone" by any reasonable definition.
Are 48% of people 'no one'? If I killed 48% of the people in your town, would you say "he didn't kill anybody"? "Everyone survived"?
I don't subscribe to any statistics. They mean nothing to me, I am afraid.
Well, it is your problem, not understanding ordinary linguistic expressions. How could I help you? Your bringing out the statistical figures makes your claims sound more unfounded and suspicious.
"X" - I agree with this claim
"Everyone knows X" - I disagree with this claim.
"X" and "Everyone knows X" are not the same claim. I can agree with one and disagree with the other.
I understand it perfectly well, it's just untrue.
You are free to disagree, but it is not the done thing to demand an admission of wrongness from anyone.
All I was saying was that your judgement seems to be based on your inability to understand the ordinary language expression.
I will say what I believe as true. You cannot say to others "Don't say it".
You are being a sophist as well as a sceptic.
How can a sophist and sceptic know the other's mind? You still haven't answered where the mind comes from.
No point telling you when you refuse to understand anything.
To be able to communicate in English about Kant's philosophy, I started reading the Critique in an English translation. As my mother tongue is German, it was interesting that this translation, even though it is more than one hundred years old, seemed to me much easier to grasp than the original. Maybe that is due to the practical attitude prevailing in English-speaking countries.
Anyway. The foundation of Kant's system of ideas lies in the distinction between two entirely different faculties of human knowledge: intellect and intuition. Thinking, logic and concepts arise from the intellect. Usually intuition is regarded as inferior and by no means connected to knowledge. Quite different here. Time and space are our fundamental intuitions. Kant's argument for why time and space are not concepts is interesting: concepts always refer to a variety of things, whereas time and space are each single data for knowledge. Different times (spaces) are always parts of this one time (space).
Thus time and space are essentially different from the intellect, but are nevertheless in consciousness just the same. Please refer to Sections One and Two of the Transcendental Aesthetic.
Interesting. German speakers definitely have an advantage in reading the German philosophical texts in German. Once I tried to read CPR in German, but my German was too basic. I was able to translate the texts OK, but the progress was far too slow, which was the reason I abandoned the reading and went back to the English translation of CPR.
Reading in English is not bad at all, because there are so many translations and commentaries for almost every non-English textbooks available, but sometimes you might feel that some important meanings could be lost from the original texts in translations. It is especially the case with the difficult original texts such as CPR. I ended up getting 3 copies of different translated copies of CPR. English is not my main language, but much better than my elementary German in readings.
But your point that for Kant time and space are not products of the intellect but intuitions is interesting. I agree with it. Time and space can never be clearly understood or perceived by the mind as distinct, concrete entities. They are internal perceptions in the sense that time and space can be sensed and felt in thought, but they can never be accurately and precisely grasped by reason. Hence it makes sense to say that time and space are forms of intuition, and that they can only be intuited in the mind. I wonder whether it would make sense to say that time and space are the foundation, or precondition, of consciousness.
I was reading Kant intensively a few months ago, but recently I got so busy with daily life that I have not read anything for a while. When things get quieter here, I will get back to reading Kant, or Hegel for a change, or some philosophical logic or even mathematics.
∀x K(x, P) = "Everyone knows P" (has no existential instance)
∀x ∈ {X₁, …, Xₙ} K(x, P) = "Everyone in the class, group, those I know, those I met, …, Xₙ knows P" (has an existential instance)
Therefore the universal quantifier pronoun "everyone" has no existential instance on its own in a sentence. An existential instance only emerges when a further limiting quantifier is connected to the universal quantifier. You have been barking up the wrong tree in all your posts.
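To spell out the logical point, here is a minimal sketch in Lean 4 (the empty type Empty and the predicate K standing for "x knows P" are illustrative assumptions, not anything from this thread): a universal claim over an empty domain is vacuously true, while its existential counterpart fails, which is exactly what "no existential instance" means.

    -- Over the empty domain, "everyone knows P" holds vacuously:
    -- there is no element of Empty, so there is nothing to check.
    example (K : Empty → Prop) : ∀ x : Empty, K x :=
      fun x => x.elim

    -- But the existential counterpart fails: a witness would have to be
    -- an element of Empty, and no such element exists.
    example (K : Empty → Prop) : ¬ ∃ x : Empty, K x :=
      fun ⟨x, _⟩ => x.elim

So "Everyone knows P" being true does not by itself guarantee that anyone at all knows P; the existential reading only appears once the domain is restricted to a non-empty class.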
For me, it's quite simple: you and I both know, at this point in time, that not everyone agrees or knows that consciousness emerges from the brain. That includes a lot of people who presumably have qualified opinions on the matter: expert philosophers, and even many scientists, believe in souls (many scientists are religious).
That's all there is to it.
"Everyone knows consciousness emerges from the brain" is an untrue statement for any standard or colloquial use of 'everyone'.
When someone says "Everyone turned up this morning," you can't say he was wrong because Elvis Presley, Immanuel Kant, Joe Biden, Vladimir Putin and Xi Jinping didn't also turn up.
From "Everyone knows P" you can't infer that some particular group of people, or some individual, also knows P.
If you disagree, then what standard definition of "everyone" makes true the statement "Everyone knows consciousness emerges from the brain"?
I was just pointing out, and making clear, my position on your insistence that "Everyone knows mind emerges from brain" is wrong. I believe that claim of yours was wrong.
As for the other folks you claim exist who don't believe that the mind emerges from the brain, I am not sure. I have not met any of them in my whole life, so I am taking that claim as a groundless rumour or opinion.
There are people on this forum who don't believe mind comes exclusively from the brain. Take your head out of the sand and look around.
The religious folks believe in the existence of souls, not of minds with intelligence, reasoning and intuition. You are conflating the two. They are totally different things in nature.
The mind dies when the body dies; it doesn't survive bodily death at all. Hence the two are different: they cannot be the same. The existence of the soul is mostly believed in by religious people, which I am not. But then, matters of belief and faith are subject to change during the course of one's life.