AI cannot think
The only mental event that comes to mind as an example of strong emergence is the idea*. The conscious mind** can experience and create an idea. An AI is a mindless thing, so it does not have access to ideas. Thinking is defined as a process in which we work on known ideas with the aim of creating a new idea. So, an AI cannot think, given the definition of thinking and the fact that it is mindless. Therefore, an AI cannot create a new idea either. What an AI can do is produce meaningful sentences given only its database and infrastructure. The sentence refers to an idea (which is not new), but only in the mind of the human interacting with the AI.
* An idea is an irreducible mental event that is meaningful and is distinguishable from other ideas.
** The conscious mind is defined as a substance with the ability to experience, freely decide, and create. It has limited memory, so-called working memory.
Can an AI think?
Thinking is the act of choosing and establishing the meaningful core around which an entire problematic field revolves like a galaxy. An AI does not question the idea of justice. But it can access a problematic field (namely, the cloud). In this sense, humans think because they can decide where and when the problematic occurs, whereas an AI cannot decide this. However, when we ask an AI something, it is capable of responding and giving us a series of ideas and concepts. In this sense, it conforms to our thinking. But without us deciding and establishing the problematic field, there is no thought.
In conclusion, AI does not think, but it can be part of human-directed thinking. It composes with us an apparatus of thinking.
The logic is valid but hardly sound since many refuse to accept any of the premises.
OK, but what exactly is an idea then? An AI device that plays the game of 'Go' has come up with new innovations that no human has thought of, and of course many that humans have thought of, but were not taught to the device.
So what do we call these innovations if not 'ideas'? How far have you cheapened the term that it no longer applies to an otherwise relevant situation like that?
You seem to counter this as it not being an idea until a human notices the new thing, even if the new strategy is never used against or noticed by a human.
Arguably, the same can be said of you.
Quoting JuanZu
Similar response. What happens when an AI defines 'thinking' as something only silicon devices do, and any similar activity done by a human is not thinking until an AI takes note of it? For one, if AI has reached such a point, it won't call itself AI anymore, since it would be no more artificial than any living thing. Maybe MI (machine intelligence), but that would only be a term it gives to humans, since any MI is likely not to use human language at all for communicating between themselves.
What I don't see is a bunch of self-sufficient machine individuals, somehow superior in survivability, going around and interacting. I envision more of a distributed intelligence with autonomous parts, yielding a limited number of individuals, most with many points of view. Life forms with their single PoV have a hard time envisioning this, so their language has few terms to describe it properly.
What does it mean to 'think'? Is it a product of the nervous system or something more? Descartes understood thought to be an essential aspect of existence. However, he still came back to the problem of physicalism and some kind of link between 'mind' and 'brain', including the role of the pineal gland.
The idea of AI thinking goes beyond the physiological aspects of brains to thought as information. This area is complex because it involves the question as to what extent thought transcends the thinker. It also involves the question as to the role of sentience underlying thought. To what extent is thought an aspect beyond the experience of thought in lived experience, or some independent criteria of ideas and knowledge?
Thought is an activity of the subject. Ideas are those that transcend it. There is a virtual field of ideas that exceeds the subject, allowing the subject to learn and transmit them. Ideas are related to other ideas. As I have said, an AI does not question the idea of freedom, but when we question the idea of freedom, we enter a field that is not our own. The idea forces us to think about it in a series of relationships with other ideas and concepts that are not present for the subject (we must investigate), and that may be elsewhere, in other minds, in books, or in the cloud itself.
In short, the idea transcends the subject; it transcends the act of thinking. AI can access the virtual field of ideas, but it cannot take the initiative, since thinking means actualising the idea for the here and now.
There are plenty of other mental events that come to mind that might be considered emergent. As we've discussed previously, as I see it, the mind itself is emergent from the neurological and physiological processes of the nervous system and body.
Beyond that, this is a circular argument - your evidence that AI can't think is that it is mindless, which means having or showing no ability to think, feel, or respond.
Quoting MoK
No. Thinking is:
You're using non-standard definitions again.
That ideas are irreducible mental events sounds somewhat mysterious. Phenomenally, there is no more or less to an idea than what it is to the thinker at the time it occurs...
A car ran over the neighbor's dog.
Does the summary meaning of this sentence comprise an irreducible mental event? It (the idea via sentence) happened, it isn't any more or less than what it means.
Compare:
A 2024 Rapid Red Mustang Mach E ran over our neighbor's 15 year old Chiweenie.
Does the summary meaning of this sentence comprise an irreducible mental event? It (the idea via sentence) happened, it isn't any more or less than what it means. For the sake of telling someone what happened, I could reduce detail. But that telling causes an irreducible mental event to occur in whoever understands the idea(s) of the new sentence.
The appearances of things, as they appear to us, are just irreducible mental events. Is this a tautology? Ideas are no more and no less than what they are.
I think that's pretty good. The very basic idea that, perhaps, anything else anyone calls "thinking" is built upon.
I think [I]action[/I] is a key element. If you don't [I]do[/I], there's no way to learn. In Annaka Harris' audiobook [I]Lights On[/I], starting at 25:34 of Chapter 5, The Self (contributed), David Eagleman argues along these lines. I disagree with Eagleman in ways, but I think he's right about meaning coming with doing.
I think we may see something akin to 'thinking' if AI is allowed to produce robotics and if it has a built in system that manufactures "errors". Then each new robot 'replicates' another robot and the "errors" expand. In such a scenario this would likely operate in a very similar manner to 'thinking' only a single 'thought' would be stretched out over multiple generations of AI run robots.
Basically, it is kind of feasible that AI in robots could create something like a simulation of evolutionary processes--as we understand them--and produce something akin to a 'thought' in a single entity if we projected it far enough forwards in time (if such is possible?). It may well end up that the robots would integrate biology into their systems due to such "errors" in manufacturing. It is more probable that this would occur as all our current information points towards biological systems being far more complicated, so any thoughtless AI system set up to increase its capacity for data sets and problem solving would inevitably, I feel, explore this avenue eventually.
This is pretty much how I see humans. We commit errors and due to these errors we progress. How we are able to recognise errors and be conscious at all is a mystery likely made by evolutionary errors (but maybe the 'errors' are really anti-errors?).
That describes how organisms respond to their environment - which the vast majority do, quite successfully, without thought.
What do you mean by "think"? What is your definition of "think"?
Different uses of terms, perhaps? What do you call the physical processes of the brain that receive signals from the retinas, compare them with stored information of previous signals received from the retinas, recognize a situation that previously lead to damage, etc.?
Quoting Wayfarer
What is/was the first step in the process that came to be what you call "thinking"? I suppose it depends on your definition. The authors have stated theirs.
But isn't your intuition that your mind is also a thing that you can ascribe qualities to?
A calculator doesn't think. Yet it can outperform you in any arena related to calculation.
Do you "think" when you look up somebody in the phone book? Sure, you recall their name and then thumb through the index where the letter appears and then scroll through the results until you arrived at the intended data entry.
In the context of AI, "thinking" would be simply creating random noise in a system where such noise serves no purpose and may also be a hindrance.
Contemplation might be an applicable word or concept. In the animal kingdom, a predator contemplates which prey to eat, as well as whether to attack at all. Does a lion merely view the smaller, slower gazelle trailing behind as "easy" in an automatic process or does it "think" or "contemplate" such dynamics? Does the lion have a choice at all? Or does it simply "do" what its ingrained "hardware" tells it to?
What about ideas in your mind? Do you think those are physical processes? Imagine a sunset. Isn't what you're imagining a thing?
This is a premise that can be confirmed. But for that, we need to agree on what an idea is.
Quoting noAxioms
Correct. AI does not have access to any idea.
Quoting noAxioms
We have been through this in another thread. I already defined the idea in the OP.
Quoting noAxioms
I can also produce a meaningful sentence that demonstrates an idea.
Quoting Patterner
Sometimes it's hard to remember that something that seems completely obvious to one person is not even imaginable for another.
I already defined thinking in the OP.
Quoting Jack Cummins
It is a product of the conscious mind and the subconscious mind working together. These minds, however, are interconnected in a complex way by the brain.
Quoting Jack Cummins
I think that thinking transcends the thinker. You understand the meaning of a sentence right after you finish reading it. Each word in the sentence refers to an idea. The idea related to a word is registered in the memory of the conscious mind once the word is read. A new idea emerges magically once you finish reading a sentence!
What sort of emergent thing is the mind? To me, the mind is a substance; by substance, I mean something that objectively exists and has a set of abilities and properties, so it cannot be an emergent thing. Is the mind a substance to you as well? If not, what sort of thing is the mind?
Quoting T Clark
That is a very broad definition, which I don't agree with. For example, remembering is required for thinking, but it is not thinking. The same applies to free association.
I'm OK with that as edited.
Quoting MoK
Of course it can. Life emerges out of chemistry. Chemistry emerges out of physics. Mind emerges out of neurology. Looks like your understanding of emergence is different from mine.
Quoting MoK
But that's what it means. As I've said before, if you want to make up definitions for words, it's not really philosophy. You're just playing a little game with yourself.
But how?
An explanation is needed that can account for the phenomena we call mental or conscious. For example, I see a glass of water. What is the neurological configuration from which we can deduce the glass of water as a conscious experience? Can we go inside the brain, see the neurons, and find the image of a glass, like a movie and a projector? The answer is no.
The thing is, we could be beings without consciousness and without experience, and yet the neurological explanation would still persist and remain valid. We cannot deduce experience from the neurological explanation. In that sense, methodologically, we always start from consciousness and experience as something given, and we try to explain their origin, but we can never do so in reverse. That is why the idea of emergence is not very useful to us here and lacks explanatory power.
Language. Not communication - birds and bees communicate - but language, representation of objects and relations in symbolic form.
The fact I might not be able to account for the phenomena right now doesn't mean there isn't an explanation.
Quoting JuanZu
That is the essence of emergence. An emergent phenomenon can be shown to be completely consistent with the principles of a lower level of organization. For example, all living phenomena must be consistent with the principles of physics and chemistry. That doesn't mean that the emergent phenomenon can be predicted, constructed, or deduced from the principles of the lower level of organization. Again - the principles of biology cannot generally be deduced from the principles of chemistry or physics. In the same manner, mental phenomena cannot be predicted based on neurological or biological principles.
Quoting JuanZu
That seems obviously false to me. Can you provide some evidence?
If so, then I do not understand what the concept of emergence introduces that helps us understand the phenomenon of experience.
Quoting T Clark
It follows from our methodological approach. We start from experience as something given and from there we establish relationships with the neurological, but imagine that we know nothing about consciousness and experience, that we are robots; how would we deduce that a being has experiences?
I can also imagine a baseball. It being solid and heavy, I'm quite certain a baseball has not been recreated in my head. Much less my imagining of the Rocky Mountains.
I think thinking is a process because it spans a period of time. The Empire State Building is the ESB every instant. If you froze time, it would still be the ESB, just sitting there. But if you freeze time, or my brain, there's no thinking. When I stop imagining a baseball, the imagined baseball no longer exists. Not even as an imagining. It's only when I'm actively imagining it that it exists in that way.
Noam Chomsky has a book on this, "Why Only Us? Language and Evolution" (co-authored with Robert Berwick). The title highlights the central question: why did only H. sapiens develop language? Other animals can communicate (bees dance, birds sing, primates vocalize), but only humans can generate an unbounded array of meaningful sentences with a recursive structure. The "only us" refers to the exclusive possession of this recursive, generative capacity by humans: the ability to nest and recombine units of meaning, which is what gives human language its unbounded expressive power. No animal communication system has been shown to allow recursive embedding. They stress that language is not primarily a system of communication, but a system of thought. Communication is a secondary use of an internal capacity for structuring and manipulating concepts. Animal communication systems (e.g., vervet alarm calls) are qualitatively different, not primitive stages of language.
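To make the recursion point concrete, here is a toy illustration (an invented two-rule grammar, not anything from Chomsky and Berwick themselves): because a sentence may embed another sentence, there is no longest sentence the grammar can produce.

[code]
# Toy illustration of recursive embedding (a made-up two-rule grammar):
# a sentence may contain another whole sentence, so the set of producible
# sentences has no upper bound.
import random

def sentence(depth=0):
    base = random.choice(["the bird sings", "the dog barks"])
    # With some probability, embed a complete sentence inside a new one.
    if depth < 4 and random.random() < 0.6:
        return f"Mary thinks that {sentence(depth + 1)}"
    return base

print(sentence())  # e.g. "Mary thinks that Mary thinks that the bird sings"
[/code]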
Quoting Wayfarer
How can this be reconciled with the fact that many people don't think in words?
https://www.cbc.ca/news/canada/saskatchewan/inner-monologue-experience-science-1.5486969
:up:
Much more reasonable than that made up junk in the OP.
"Experiencing" is probably not a thing with AI, but "manipulating" almost certainly is. The type of responses AI gives are simply not amenable to a computation style which aggregates input and spits out a statistically plausible output. This would quicky fall prey to the combinatorial explosion of possible inputs. Afaict, manipulation of "elements of thought" is the only way AI can function at all.
For example consider playing chess, a tiny sliver of AI functionality, and one which is generally not explicitly trained for in LLMs. Imagine an input like 1. E4 E5. 2. NF3 F4... How could ai reliably produce a rational output based only on inputs it has seen before, when every game is unique. Only thinking can do this.
In fact, a representation of the chess board can be observed when LLMs play chess. What else could this internal, emergent chess board be other than an "element of thought"?
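For anyone curious how such an internal board is "observed": probing studies (the Othello-GPT work is the best-known example) train a small classifier to read the game state from the model's hidden activations. A minimal sketch of the idea, with synthetic activations standing in for a real model's:

[code]
# Sketch of a "linear probe": can a simple classifier read one square's
# state from a model's hidden activations? Real studies use activations
# from a transformer mid-layer; here synthetic data stands in for them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_positions, d_model = 2000, 256
hidden = rng.normal(size=(n_positions, d_model))  # stand-in activations
direction = rng.normal(size=d_model)              # pretend the net encodes
occupied = hidden @ direction > 0                 # the square linearly

X_tr, X_te, y_tr, y_te = train_test_split(hidden, occupied, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy means the square's state is linearly decodable
# from the hidden state, i.e. the model carries a board representation.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
[/code]

If the probe generalizes to held-out positions, the board state is decodable from the hidden state, which is the sense in which the representation is "observed".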
The common usage of "mind" though is that it is a noun that adjectives apply to.
You think the mind is a process, right? An action, not a thing. Well, are ideas processes too?
I was going to bring up A Man Without Words. Someone here brought him to my attention several months ago. Ildefonso was born totally deaf. Nobody ever tried to communicate with him until he was 27. He literally had no language. It was like Helen Keller in The Miracle Worker when he realized these things the woman was doing represented objects. But harder than Helen Keller, because she at least had the beginnings of language when she got sick at 19 months. Anyway, Susan Schaller says Ildefonso was obviously very intelligent. Though he was ignorant about most everything, it was clear that he was trying to figure things out.
After he could communicate with sign language, people asked him what it was like before he had language. He says he doesn't know. Language changed him so much that he can't remember.
Clearly they can speak, and clearly they can think. But it also seems clear that they think without using words.
If words are just one style of thinking, it seems difficult to claim that language arose mainly as a tool for thinking.
All mental events are private. No one is aware of what other mental beings have in their minds.
If AI can think, then we are not supposed to know about it. We can only guess whether someone or some being is thinking by whether the actions they take and the words they speak are proper for the situation.
Therefore, "AI cannot think" is not a well-thought-out claim.
This is what Steven Pinker had to say in The Language Instinct. I'm not sure if this contradicts what you've written or not.
Quoting RogueAI
How does this sound?
Pinker's (and Fodor's) theory of mentalese, which holds that there is a primordial language pre-existing the creation of utterances or symbols, is controversial and not well accepted. It is generally accepted, though, that an experience can exist without language and might precede its reduction to language; but that doesn't suggest the pre-existing experience was some sort of primordial language, only that there are experiences that pre-exist language.
My point is that your quote is of a position that is generally challenged and not widely held.
So if I separate propositions from sentences, where a proposition is knowledge of an event (e.g., the cat is on the mat) and a sentence is the linguistic representation of that knowledge, "The cat is on the mat," it seems reasonable a dog would know the cat is on the mat (i.e., possess the propositional knowledge) but not be able to form it linguistically into a sentence (or utterance). My question then is: if the dog has propositional knowledge, then he is engaging in thought, and the dog might also know that if he tries to sit on the mat next to the cat he will be swatted. Is the distinction you're drawing between humans and animals just that humans are unusual in that they use sentences to express their thoughts where animals do not?
Or, does my problem rest in the assumption made by cognitive scientists that a proposition can exist without a sentence? If that is my error, how is it best argued do you think? It does seem propositional knowledge can exist without a sentence.
I'm aware that it's controversial, but that wasn't my main point. I was just trying to show that it is unreasonable to assume that language is necessarily required for thought.
The actual answer is yes. Observe:
The answer is no. What I "observe" is a recreation of images on a device other than the brain, but you are not looking into the brain and finding those images.
You're right. But 's video is damn cool!
Then where does the information used to recreate the images on the device come from?
The brain does not store information, such as an image, in the same modality in which it was received. You are not going to find an actual image in the brain. What you will find, however, is information about the image encoded within the neural activity of the brain. This machine is able to identify that encoding and decode the image based on the brain activity.
Consider image compression. Take a random image file on your computer, run it through a compression algorithm, and then examine the compressed file. You will not see a recognizable image until you decompress it. This is essentially what the machine is doing: reconstructing images from brain activity.
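A minimal sketch of that analogy in code; 'photo.png' is just a placeholder for any image file you happen to have on disk:

[code]
# Round-trip a file through a compressor: the compressed bytes carry all
# the information of the image, but no viewer will recognize them as an
# image until they are decoded back.
import zlib

with open("photo.png", "rb") as f:   # placeholder filename
    original = f.read()

compressed = zlib.compress(original, level=9)  # unrecognizable as an image
restored = zlib.decompress(compressed)         # decode it back

assert restored == original  # nothing was lost in the round trip
[/code]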
Well, bear in mind, that was a paraphrase of Noam Chomsky and Robert Berwick's book. But it is also addressed in a polemical argument by Aristotelian philosopher Jacques Maritain:
[quote=The Cultural Impact of Empiricism]Thanks to the association of particular images and recollections, a dog reacts in a similar manner to the similar particular impressions his eyes or his nose receive from this thing we call a piece of sugar or this thing we call an intruder; he does not know what is 'sugar' or what is 'intruder'. He plays, he lives in his affective and motor functions, or rather he is put into motion by the similarities which exist between things of the same kind; he does not see the similarity, the common features as such. What is lacking is the flash of intelligibility; he has no ear for the intelligible meaning. He has not the idea or the concept of the thing he knows, that is, from which he receives sensory impressions; his knowledge remains immersed in the subjectivity of his own feelings -- only in man, with the universal idea, does knowledge achieve objectivity. And his field of knowledge is strictly limited: only the universal idea sets free -- in man -- the potential infinity of knowledge.[/quote]
That technology is astounding, no question. But it should be borne in mind that those systems are trained on many hours of stimulus and response for particular subjects prior to the experiment being run. During this training the system establishes links between the neural patterns of the subject and patterns of input data. So human expertise is constantly being interpolated into the experiment in order to achieve these results.
That's right, the input stimulus and response sessions are meant to identify the encoding that a specific brain uses for the images or parts of images it perceives. Once these encodings have been established for that brain, the perceived images can be decoded. Each person's encoding is different, like a fingerprint. There is about one-third overlap for most people, and an "untuned" decoder may be able to retrieve some images, but it would likely result in very low-resolution reconstructions, if anything useful at all.
Quoting Wayfarer
Could you clarify this statement please?
Ok. So we have to differentiate between information and experience (Mary's room, then). Because you're not seeing the experience, but rather a reconstruction on a monitor, on a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different.
That's fine, but my original response was about finding an image in the brain, not about the experience of the image. Experience involves the processing of information, since it is possible to have information encoded in your brain without being aware of it at a conscious or experiential level. An experience occurs when you acquire new information through your senses from the outside, and also when you retrieve and reconstruct previously stored memories in your conscious mind.
Quoting JuanZu
If you wanted to directly experience an image encoded in someone else's brain, here's what I think would need to be done: One could use a machine like the one in the video I shared to find the encoding in your brain and, for example, my brain. After acquiring both of our unique encodings, one could then use an LLM to translate between my encoding and yours. We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when I look at an image, you would see and experience everything I see. Do you see?
To avoid misunderstandings, what do you think about the idea of finding the "living experience" in the brain? The fact that you can transfer neural information to a screen and construct an image says it all. When you see those images on the monitor that "reconstructs" them, you are not experiencing what is supposedly being reconstructed. In fact, the word reconstruction is misleading. I prefer to say objectifying what is subjective, but then something is lost, something that is no longer on the monitor. Basically, everything is lost; the experience itself is lost.
Quoting punos
Not at all. Because each person will experience it differently, due to their uniqueness.
Exactly. But behaviours and words can be repeated by a robot without consciousness. In that sense, all we can know is that a robot acts AS IF it were conscious. But that knowledge is not enough to know that it has consciousness.
The "living experience" in the brain is simply the active and recursive processing of the conscious mind, or the "global workspace". Experience is a stream of information continuously running through specific functional regions of the brain that architecturally encode the qualia of that experience. Without this recursive loop of self-information, there is no sense of living or experience. The "living experience" emerges from the information processing activity itself. Also note that the brain is a physical information system, or in other words "information that processes information". The key feature is the continuously active recurring information processing.
Quoting JuanZu
I responded to that with this:
Quoting punos
Quoting JuanZu
I addressed that issue here:
Quoting punos
"So we have to differentiate between information and experience (Mary's room then). Because you're not seeing the experience, but rather a reconstruction in a monitor, in a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different."
The information is arranged on a substrate in which the experience cannot be broken down without losing what we call experience (when we see a glass of water, we do not see the neurons acting). It is like when we say that experience is nothing more than neural synapses. But methodologically, we have a one-way path: the association from experience to neural processes, but not a return path: from processes to experience.
In fact, this is confirmed in the video you brought: we FIRST have evidence of what experience is, and then we adjust the monitor so that the electrical signals resemble what we see in experience. But we can translate those signals into anything, not necessarily into an image on a monitor. This raises a question: could we reconstruct experience in a physical way without first knowing what experience is (not seeing neurons, nor electrical signals, just a glass of water) and what it resembles? The answer is no.
After this whole discussion started, I did a little research on Google and in the SEP. What I found is consistent with what you're writing. There seem to have been two approaches to this question - one that uses a language-based approach and another that uses the kind of processes that are described in an LLM. I guess it is controversial which one is the proper one to use in this kind of situation.
Chat GPT: Yes, when an LLM gets a joke and says "ha ha", it isn't actually amused; it's just recognizing the pattern of a joke and producing the kind of response people usually give. It's a simulation of amusement, not the feeling itself.
So just like brain-image reconstructions give us a modelled output rather than direct access to the brain's movie.
Right. Those statistical models are needed to reproduce the information contained within the electromagnetic signals emitted by neural activity. The information at this electromagnetic level is an encoding of the spiking electrochemical propagating patterns within the brain tissue. It is a byproduct of neural communication that can be measured and tapped into. The brain itself does not use these electromagnetic emissions as its own encoding. Therefore, there is no direct transfer of information, but rather a translation into a new encoding compatible with our devices, which can then re-represent that information in yet another encoding for the video screen or monitor. Still the same information in a different encoding.
A single piece of information can exist in multiple places at once and be represented in multiple ways simultaneously. The information reconstructed from a brain scan is, in principle, the same information as in the brain if captured with perfect fidelity. It can be copied an infinite number of times, and each copy is identical to the original, provided the replication is perfectly accurate. The only limits to this process are practical constraints with current technology.
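That "same information, many encodings, perfect copies" point can be shown with a trivial example (the message is arbitrary):

[code]
# One piece of information, three simultaneous encodings, and a perfect
# copy that is indistinguishable from the original.
import base64

message = "a glass of water"         # arbitrary example content
as_bytes = message.encode("utf-8")   # encoding 1
as_hex = as_bytes.hex()              # encoding 2: same information
as_b64 = base64.b64encode(as_bytes)  # encoding 3: same information again

assert bytes.fromhex(as_hex) == as_bytes
assert base64.b64decode(as_b64) == as_bytes

copy = bytes(as_bytes)               # a faithful copy
assert copy == as_bytes              # identical to the original
[/code]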
Quoting Wayfarer
Yes, this is because human expertise is required to build the system that performs the decoding and encoding. This makes it possible to extract information from the brain even if the specific image was not included in the training data for the statistical model. Without this step, there is no access to the information in the brain in order to copy it.
Quoting Wayfarer
That is exactly correct. It is not the image that is being read out, but the information about the image, which is then reconstructed into the image. Remember that the image in the brain is not stored in the format of an image. There is no little box of pictures in the brain with a little man looking at the picture when you see it. The information of the image is stored in the form of distributed neural weights, and we can only access that information when the brain itself activates it, which is why the stimulus and response phase of training is necessary.
It is possible to take neural data intended for the visual center of the brain and route it into the auditory center. In that case, the experience of the image is no longer visual but auditory. It is the same information, but situated within a different neural architecture. This phenomenon is called synesthesia, as I am sure you know.
I answered this here:
Quoting punos
This process would stimulate your brain using the information from my brain, after translating it from my encoding to yours, giving you an experience of what I am seeing. My encoding would be mapped and translated to your encoding.
Quoting JuanZu
The entire system can be automated to exclude the human from the loop, except for the subject being scanned, of course. All that is needed is for the computer to control a monitor on which it can display images to the subject. As the subject views the images, the machine records the corresponding neural responses and independently develops a statistical model that identifies which parts of the brain are involved in processing what. This process alone can yield a viable statistical model capable of detecting arbitrary images from brain scans without human supervision.
It's entirely possible to create a headset or helmet that constantly scans your brain throughout the day and compares images from a camera on the helmet to your neural activity; by the end of a week, maybe, there will be a robust model of the visual data in your brain.
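A rough sketch of that automated loop, for concreteness. The display and scanner interfaces are hypothetical stand-ins (no such API exists yet), and plain ridge regression substitutes for whatever statistical model such a system would actually fit:

[code]
# Sketch of the human-free training loop: show an image, record the neural
# response, and fit a decoder from activity to pixels. 'display' and
# 'scanner' are hypothetical device interfaces.
import numpy as np
from sklearn.linear_model import Ridge

def build_decoder(display, scanner, training_images):
    responses, targets = [], []
    for image in training_images:
        display.show(image)               # present the stimulus
        responses.append(scanner.read())  # record the neural response
        targets.append(image.ravel())     # flattened pixels as the target
    # Learn the mapping: neural activity -> image pixels.
    return Ridge(alpha=1.0).fit(np.array(responses), np.array(targets))

# Hypothetical usage, once the hardware exists:
# decoder = build_decoder(display, scanner, training_images)
# reconstruction = decoder.predict(scanner.read().reshape(1, -1))
[/code]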
Quoting JuanZu
I don't know what you're asking here. Perhaps you can rephrase it?
That's not a good answer. It doesn't address the issue of decomposition or methodology. A good answer would be: we can actually see neural processes first-person, and not only that, but methodologically we have discovered how to create consciousness without needing to be conscious ourselves as necessary evidence.
Quoting punos
In our experience, we do not see the neural processes that would compose the glass of water. This points to an irreducible qualitative difference. Because if we try to break down the glass of water, we do not obtain those neural processes.
What scientific study does he cite for this empirical claim? If my dog goes and gets a ball when I say "go get your ball," even new balls not previously seen, have I disproved his claim by showing the dog's understanding of categories? If not, what evidence disproves his claim?
Spinoza's 'conception of substance' refutes this Cartesian (Aristotelian) error; instead, we attribute "mind" only to entities which exhibit 'purposeful behaviors'.
A more useful definition of "thinking" is 'reflective inquiry, such as learning/creating from failure' (i.e. metacognition).
Circular reasoning fallacy. You conclude only what you assume.
"The definition" does not entail any "fact" again, Mok, you're concluding what you assume.
I don't know what you mean, but I don't think you know what I mean either. You're being too vague or inconsistent about what we are talking about. I tried to show you how an image can be decoded from the brain and displayed on a non-conscious screen as pure information. I never claimed that the information has to be conscious (just the data). You wanted to know how to experience the image instead of just looking at it on a screen, so I gave you a way to do that. Now you're talking about creating consciousness when I'm explaining how to experience the sensory data of another person with your own consciousness.
Quoting JuanZu
We do not see the neural processes that encode a glass of water; we experience the process of reconstructing the information about a glass of water. When you observe neural activity from the outside, you naturally would not experience the glass of water. But if you place your perspective within the neural activity, becoming the neural activity itself (which you already are), then you would experience the glass of water through the activations responsible for its representation.
When you look at a glass of water, your brain breaks down the neural signals from the light that hits your retinas and filters those signals through a dense maze of neural pathways, sorting out all the features of the image and storing the pieces all over the brain. The neural pathways that are activated every time you see a glass of water form the neural representation of the glass of water in your brain. You experience that neural pathway as a glass of water in your conscious mind when it is activated. No activation means no experience of the glass of water.
Perhaps by scattering a range of balls of different sizes and saying 'fetch the large, white ball' or 'the ball nearest the lemon tree.' That might do the trick.
Each sentence refers to at least one idea, such as a relation, a situation, etc. In your example, we are dealing with a situation.
Quoting Nils Loc
We are dealing with a situation again, no matter how much detail you provide.
They don't know what thinking is, so they cannot design an AI that simulates thinking.
Quoting I like sushi
Are you saying that thinking is pattern recognition? I don't think so.
I already defined thinking in the OP.
So, what is thinking? You've, from what I've seen, yet to delineate a clear and concise formula (and resulting definition) for such.
Quoting MoK
Well, I mean, take the following sentence.
Ahaj scenap conopul seretif seyesen
I thought very hard to make that sentence. But it never hits the pattern-recognition part of your brain the way this sentence you're reading now does; instead your brain realizes "wait a minute, that's gibberish." I mean, come on. Let's be honest. The onus is now on you to explain your claims properly. Something that at least two or more intelligent people participating in this thread feel you've so far been unable to do.
Love your avatar BTW. Reminds me of my mood most of time sober.
Well, that it appears to be 'thinking' was my point. It cannot think. It would have been better of me to state that AI models do fool humans into thinking they can think.
AI simulates speech very effectively now. I certainly do not equate speech with thought, though. I want to be explicit about that!
Quoting MoK
I was not saying any such thing. I was stating that AI is far more capable of pattern recognition than us. It can sift through masses of data and find patterns that it would take us a long, long time to come close to noticing. It is likely these kinds of features of AI are what people mistake for 'thinking', as it seriously outperforms us when it comes to this kind of process.
Given the definition you suggested, you either don't understand what objectively exists means, or you don't know what emergence is. I don't understand why you removed substance from my definition, but something that objectively exists is a substance, as opposed to something that subjectively exists, such as an experience. A neural process cannot give rise to the emergence of a substance, or something that objectively exists.
Moreover, the brain is subject to constant change due to the existence of the mind. So, the brain cannot produce the mind and be affected by the mind at the same time. That is true, since the neural processes are subject to change once the mind affects the brain. There is, however, no mind once neural processes change. So, you cannot have both changes in neural processes and the mind at the same time.
Quoting T Clark
Biology, chemistry, etc., are reducible to physics. That means we are dealing with weak emergence in these cases. Emergence of the mind, if it is possible, is strong emergence, which I strongly disagree is possible, for the reasons mentioned in the previous comment.
Quoting T Clark
To me, abstraction and imagination are examples of thinking. Remembering, free association, etc. are not.
He is definitely wrong. Purposeful behaviors are attributes of living creatures. Living creatures have at least a body and a mind.
Quoting 180 Proof
No. You need to read things in order to see what I said follows, and it is not circular.
P1) AI is mindless.
P2) The mind is needed for the creation of an idea
C1) Therefore, AI cannot create an idea (from P1 and P2)
P3) Thinking is defined as a process in which we work on known ideas with the aim of creating a new idea
C2) Therefore, AI cannot think (from C1 and P3)
C3) Therefore, AI cannot create a new idea (from P3 and C2)
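For what it's worth, the inference itself is valid; as noted earlier in the thread, the dispute is over the premises, not the logic. Here is the chain formalized in Lean, with the proposition names and the formal reading of the premises chosen by me:

[code]
-- The syllogism above, formalized. P1-P3 are taken as premises;
-- the conclusion ¬Thinks (C2) then follows by pure logic.
variable (Mindless CreatesIdea Thinks : Prop)

example
    (P1 : Mindless)                 -- AI is mindless
    (P2 : CreatesIdea → ¬Mindless)  -- creating an idea requires a mind
    (P3 : Thinks → CreatesIdea) :   -- thinking aims at creating an idea
    ¬Thinks :=
  fun h => P2 (P3 h) P1
[/code]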
I define thinking as a process in which we work on known ideas with the aim of creating a new idea. This definition covers processes such as abstraction and imagination.
Quoting Outlander
You are talking about language here. Of course, this sentence does not mean anything to me, since I cannot relate any of the words you used to something that I know. Language is used to communicate new ideas, which are the result of thinking. We are working with known ideas when it comes to thinking, so there is no such miscommunication between the conscious and subconscious mind.
Correct!
Quoting I like sushi
Correct again! An AI produces meaningful sentences only based on its database and infrastructure.
Quoting I like sushi
Correct again! :wink: An AI is just much faster than us at pattern recognition since it is silicon-based. It is specialized in certain tasks, though. Our brains are, however, huge compared to any neural net used in any AI, and they are multitasking. A neuron is just very slow.
I am glad you like my avatar! :wink:
Finally, the (metaphorical) tender and ignorant flesh is exposed. Now it can be graded properly. Ah, except I note one flaw. And I'm no professional by any means. There is no "we" in this abstract concept. A man can be born alone in the world and he will still think. But perhaps this is a simple habit of speech, a human flaw like we all have, to be ignored; so I shall, just to give you the benefit of the doubt. :smile:
But! Ah, yes, there's a but. Even still. One cannot "know an idea" without the auspices and foreprocesses of thought itself. So, this is defining a concept without explaining its forebear. Your so-called "thinking" is created by the process of involvement with "known ideas". Yet how can an idea exist and be known unless thought of? This results in yet another non-answer.
We would have evolution going in reverse, if one were to believe your so called findings and beliefs. This is a problem. You must find a solution.
Very accurate!
Correct. I should have said "an intelligent creature" instead of "we".
Quoting Outlander
I don't know the right word for playing with ideas, experiencing them, without any attempt to create a new idea. :wink: For sure, such an activity is different from thinking, given the definition of thinking.
E.g. try to visualize a horse without any assistance and draw it on paper. This is your generative psychological process 1. Then automatically notice the inaccuracy of your horse drawing. This is your critical psychological process 2. Then iterate to improve the drawing. This instance of thinking is clearly a circular causal process involving two or more partially-independent psychological actors. Then show the drawing to somebody (Process 3) and ask for feedback and repeat.
So in general, it is a conceptual error to think of AI systems as closed systems that possess independent thoughts, except as an ideal and ultimately false abstraction. Individual minds, like individual computer programs, are "half-programs": reactive systems waiting for external input, whose behaviour isn't reducible to an individual internal state.
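Here is a minimal runnable sketch of that circular generate/criticize/iterate structure, with a toy numeric task standing in for the horse drawing; all function names are illustrative, not a model of real cognition:

[code]
# Generate / criticize / iterate, as a toy loop. The three functions play
# the roles of the partially independent processes described above.

def generate():
    return 0.0                        # process 1: a first rough attempt

def critique(draft, target):
    return target - draft             # process 2: notice the inaccuracy

def revise(draft, error):
    return draft + 0.5 * error        # fold the feedback into a new draft

def think(target, tolerance=1e-6):
    draft = generate()
    while abs(error := critique(draft, target)) > tolerance:
        draft = revise(draft, error)  # the circular causal step
    return draft

print(think(42.0))  # converges on ~42.0
[/code]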
The definition of substance I was using refers to a physical material. The word has several other meanings, but they don't seem applicable to this case.
Quoting MoK
Yes, it can.
Quoting MoK
This is not correct.
Quoting MoK
As I've noted several times in this thread, you are using non-standard definitions for words. Your and my arguments are incommensurable, by which I mean our underlying arguments are not resolvable.
Let's leave it at that.
But a robot wouldn't repeat behaviours and words without a valid reason, request, or situation put to it. If it did, then it is not a smart robot. An AI robot is supposed to be smart and intelligent. If it is not, then it is just a machine, not an AI robot.
I went back to the OP, and read it again, but there is nothing which sounds like, or resembles a definition of "think". Could you reiterate it here clearly? Thank you.
If you say so. But that means that you didn't pay attention to my argument.
Quoting T Clark
It is correct. We can calculate the physical properties of atoms and molecules using ab initio methods and density functional theory. We can even predict protein folding using AI. You can find two publications on this topic here and here.
Thinking is defined as a process in which we work on known ideas with the aim of creating a new idea.
https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_different_PWA.pdf
Could you give some examples of known ideas and new ideas? How does it work?
Thanks for the article. I will read it when I have time.
When I say "cup", you immediately realize what I am talking about since the word refers to an idea. The sentence "the cup is on the table" contains many words; each word refers to an idea. The sentence, however, refers to a new idea, which in this case is a situation.
There is nothing new about any of those words. Everyone in the world knows what a "cup" is and what a "table" is. You were just uttering a sentence from what you saw. That is just giving a description of the content of your perception. A new idea should be something absolutely new, such that no one knew what it was, and no one had seen or heard about it before in history. That is a new idea.
So, I am not able to accept the definition you provided. A wrong definition of a concept leads to misunderstanding and confusion in arguments and discussions.
My point was that a sentence has more content than separate words that make up the sentence. We couldn't possibly communicate any new idea if a sentence does not have such a property. If you are looking for an absolutely new idea, then please consider the conclusion of the OP, namely, AI cannot think.
Could you please elaborate on what you mean by each classification?
Quoting Pieter R van Wyk
I cannot tell.
Quoting Pieter R van Wyk
AI cannot have abstract thought since it lacks access to the ideas necessary for imagination and abstraction.
From a fundamental definition of a system, based on first principles, it is possible to identify seven classes of systems. Five classes are identified by considering the interactions between a system and a collection of data, and three classes are identified by considering the interactions between a system and its purpose. The first class in both classifications is the same, thus there (currently) exist seven classes of systems. Since these classes emerge consequently and subsequently, based on new identifiable capabilities, it is possible to form a theory of evolution by combining the two classifications. AI still lacks only two capabilities that humans have: survival and abstraction. If (or when) AI gains both of these capabilities, we humans will lose our place at the apex of evolution. Chapter 4 - Evolution of Classes and the Demarcation Meridian. How I Understand Things. The Logic of Existence.
I don't understand what the interactions between a system and a collection of data mean.
Quoting Pieter R van Wyk
I don't understand what the interactions between a system and its purpose mean.
Quoting Pieter R van Wyk
That is a big IF. As I argued in the OP, AI does not have access to ideas since it is mindless, so it lacks abstraction.
We agree that AI lacks abstraction - on this we are saying the same thing. I am not sure what "big IF" you are referring to; all I am saying is that if (or when) AI gains this capability, then we humans will lose our place at the apex of evolution. You might agree or disagree with this conclusion.
Quoting MoK
Quoting MoK
Some classes of systems have the capability to interact with data (data being a collection of representations describing interactions), thus they have a perception of data; and some classes of systems have a perception of their reason for existence (their purpose), thus they can interact with their purpose.
Being under the sword of Damocles called ostracisation, I might suggest that you read [I]How I Understand Things. The Logic of Existence[/I]; it could contribute even more to your understanding.
Cool.
Quoting Pieter R van Wyk
Given the fact that AI lacks abstraction, AI cannot come up with a new idea. Therefore, AI cannot replace us at the pinnacle of evolution. Creating new ideas is fundamental in the evolution of the human species. Humans will evolve further, most probably without an end. I, however, think that AI will reach a threshold in its advancement, so it would be extremely difficult to make an AI that is more intelligent than former AIs.
:sweat: None of us knows what will happen in the future. I can argue that "AI cannot come up with a new idea" - until it comes up with a new idea. That is how evolution takes place, is it not? Sometime in the evolution of our Universe, some animal had the first abstract thought. Before this event abstraction did not exist - after this event it does exist.
Methinks only the future knows the answer to this question.
Thank you for this conversation.
Okay, thank you for sharing your thoughts.