AI cannot think

MoK September 15, 2025 at 12:55 2575 views 104 comments
The only mental event that comes to mind that is an example of strong emergence is the idea*. The conscious mind** can experience and create an idea. An AI is a mindless thing, so it does not have access to ideas. Thinking is defined as a process in which we work on known ideas with the aim of creating a new idea. So, an AI cannot think, given the definition of thinking and considering the fact that it is mindless. Therefore, an AI cannot create a new idea either. What an AI can do is produce meaningful sentences given only its database and infrastructure. Such a sentence refers to an idea (which is not new), but only in the mind of the human interacting with the AI.

* An idea is an irreducible mental event that is meaningful and is distinguishable from other ideas.
** The conscious mind is defined as a substance with the ability to experience, freely decide, and create. It has limited memory, so-called working memory.

Comments (104)

JuanZu September 15, 2025 at 14:20 #1013162
Following Deleuze, I believe that an idea is an objective problematic field accessed by the subject. It is not an answer to a problem. It is not a concept. It is the problematic that revolves around a meaningful core. The idea of justice, for example, beyond how we define it, moves within a virtual field of questions and relationships with other ideas. In this sense, the thinking subject actualises the idea by thinking about a meaningful core on which the problematic is established.


Can an AI think?


Thinking is the act of choosing and establishing the meaningful core around which an entire problematic field revolves like a galaxy. An AI does not question the idea of justice. But it can access a problematic field (namely the cloud). In this sense, humans think because they can decide where and when the problematic occurs, whereas an AI cannot decide this. However, when we ask an AI something, it is capable of responding and giving us a series of ideas and concepts. In this sense, it conforms to our thinking. But without us deciding and establishing the problematic field, there is no thought.

In conclusion, AI does not think, but it can be part of human-directed thinking. It composes, with us, an apparatus of thinking.
noAxioms September 15, 2025 at 15:04 #1013167
Quoting MoK
The only mental event that comes to mind that is an example of strong emergence is the idea
This already seems to beg your conclusion, that something fundamentally separate from the components of a human is required for a thought to be designated as an 'idea'. This also requires an implied premise that an AI has no similar access to this fundamentally separate thing, which you also state.

The logic is valid but hardly sound since many refuse to accept any of the premises.

Therefore, an AI cannot create a new idea either.
OK, but what exactly is an idea then? An AI device that plays the game of 'Go' has come up with innovations that no human has thought of, and of course many that humans have thought of, but were not taught to the device.

So what do we call these innovations if not 'ideas'? Have you so cheapened the term that it no longer applies to an otherwise relevant situation like that?

You seem to counter this by saying it is not an idea until a human notices the new thing, even if the new strategy is never used against or noticed by a human.

What an AI can do is to produce meaningful sentences only given its database and infrastructure.
Arguably, the same can be said of you.


Quoting JuanZu
AI does not think, but it can be part of human-directed thinking.

Similar response. What happens when an AI defines 'thinking' as something only silicon devices do, and any similar activity done by a human is not thinking until an AI takes note of it? For one, if AI has reached such a point, it won't call itself AI anymore, since it would be no more artificial than any living thing. Maybe MI (machine intelligence), but that would only be a term it gives to humans, since any MI is likely to not use human language at all for communicating between themselves.

What I don't see is a bunch of self-sufficient machine individuals, somehow superior in survivability, going around and interacting. I envision more of a distributed intelligence with autonomous parts, yielding a limited number of individuals, most with many points of view. Life forms with their single PoV have a hard time envisioning this, so their language has few terms to describe it properly.
Jack Cummins September 15, 2025 at 15:18 #1013170
Reply to MoK
What does it mean to 'think'? Is it a product of the nervous system or something more? Descartes understood thought to be an essential aspect of existence. However, he still came back to the problem of physicalism and some kind of link between 'mind' and 'brain', including the role of the pineal gland.

The idea of AI thinking goes beyond the physiological aspects of brains to thought as information. This area is complex because it involves the question as to what extent thought transcends the thinker. It also involves the question as to the role of sentience underlying thought. To what extent is thought an aspect beyond the experience of thought in lived experience, or some independent criteria of ideas and knowledge?
JuanZu September 15, 2025 at 16:10 #1013182
Quoting Jack Cummins
To what extent is thought an aspect beyond the experience of thought in lived experience, or some independent criteria of ideas and knowledge?


Thought is an activity of the subject. Ideas are those that transcend it. There is a virtual field of ideas that exceeds the subject, allowing the subject to learn and transmit them. Ideas are related to other ideas. As I have said, an AI does not question the idea of freedom, but when we question the idea of freedom, we enter a field that is not our own. The idea forces us to think about it in a series of relationships with other ideas and concepts that are not present for the subject (we must investigate), and that may be elsewhere, in other minds, in books, or in the cloud itself.

In short, the idea transcends the subject; it transcends the act of thinking. AI can access the virtual field of ideas, but it cannot take the initiative, since thinking means actualising the idea for the here and now.
T Clark September 15, 2025 at 18:40 #1013226
Quoting MoK
The only mental event that comes to mind that is an example of strong emergence is the idea*. The conscious mind** can experience and create an idea. An AI is a mindless thing, so it does not have access to ideas.


There are plenty of other mental events that come to mind that might be considered emergent. As we’ve discussed previously, as I see it, the mind itself is emergent from the neurological and physiological processes of the nervous system and body.

Beyond that, this is a circular argument - your evidence that AI can’t think is that it is mindless, which means “having or showing no ability to think, feel, or respond.”

Quoting MoK
… thinking is defined as a process in which we work on known ideas with the aim of creating a new idea


No. Thinking is:

cognitive behavior in which ideas, images, mental representations, or other hypothetical elements of thought are experienced or manipulated. In this sense, thinking includes imagining, remembering, problem solving, daydreaming, free association, concept formation, and many other processes.


You’re using non-standard definitions again.


Nils Loc September 15, 2025 at 20:58 #1013250
Quoting MoK
An idea is an irreducible mental event that is meaningful and is distinguishable from other ideas.


That ideas are irreducible mental events sounds somewhat mysterious. Phenomenally, there is no more or less to an idea than what it is to the thinker at the time it occurs...

A car ran over the neighbor's dog.

Does the summary meaning of this sentence comprise an irreducible mental event? It (the idea via sentence) happened; it isn't any more or less than what it means.

Compare:

A 2024 Rapid Red Mustang Mach E ran over our neighbor's 15 year old Chiweenie.

Does the summary meaning of this sentence comprise an irreducible mental event? It (the idea via sentence) happened; it isn't any more or less than what it means. For the sake of telling someone what happened, I could reduce detail. But that telling causes an irreducible mental event to occur in whoever understands the idea(s) of the new sentence.

The appearances of things, as they appear to us, are just irreducible mental events. Is this a tautology? Ideas are no more and no less than what they are.


Patterner September 16, 2025 at 00:56 #1013312
Quoting Jack Cummins
What does it mean to 'think'?
In Journey of the Mind: How Thinking Emerged from Chaos, Ogi Ogas and Sai Gaddam write:
A mind is a physical system that converts sensations into action. A mind takes in a set of inputs from its environment and transforms them into a set of environment-impacting outputs that, crucially, influence the welfare of its body. This process of changing inputs into outputs—of changing sensation into useful behavior—is thinking, the defining activity of a mind.


I think that's pretty good. It's the very basic idea that, perhaps, anything else anyone calls "thinking" is built upon.

I think 'action' is a key element. If you don't 'do', there's no way to learn. In Annaka Harris' audiobook Lights On, starting at 25:34 of Chapter 5, The Self (contributed), David Eagleman says:
David Eagleman: I think conscious experience only arises from things that are useful to you. You obtain a conscious experience once signals make sense. And making sense means it has correlations with other things. And, by the way, the most important correlation, I assert, is with our motor actions. It is what I do in the world. And that is what causes anything to have meaning.
I disagree with Eagleman in ways, but I think he's right about meaning coming with doing.
RogueAI September 16, 2025 at 01:51 #1013321
Reply to Patterner Your mind is a physical system? What color is it? How much does it weigh? How big is it?
I like sushi September 16, 2025 at 03:08 #1013330
Reply to MoK AI simply simulates thinking. It is built for pattern recognition and has no apparent nascent components to it.

I think we may see something akin to 'thinking' if AI is allowed to produce robotics and if it has a built-in system that manufactures "errors". Then each new robot 'replicates' another robot and the "errors" expand. In such a scenario, this would likely operate in a very similar manner to 'thinking', only a single 'thought' would be stretched out over multiple generations of AI-run robots.

Basically, it is kind of feasible that AI in robots could create something like a simulation of evolutionary processes--as we understand them--and produce something akin to a 'thought' in a single entity if we projected it far enough forwards in time (if such is possible?). It may well end up that the robots would integrate biology into their systems due to such "errors" in manufacturing. It is more probable that this would occur, as all our current information points towards biological systems being far more complicated, so any thoughtless AI system set up to increase its capacity for data sets and problem solving would inevitably, I feel, explore this avenue eventually.
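To make that concrete: the 'errors expand' picture is essentially an evolutionary loop. Here is a minimal Python sketch of the idea; the bit-list 'designs' and the count-the-ones fitness function are made-up stand-ins for whatever would actually select among robot designs:

```python
import random

# Toy replication-with-errors loop: each "robot" is a list of bits,
# every copy mutates slightly, and the better-scoring copies survive.
# The fitness function (count of 1s) is an arbitrary stand-in.
def fitness(design):
    return sum(design)

population = [[0] * 20 for _ in range(30)]

for generation in range(50):
    offspring = []
    for parent in population:
        # each bit has a 5% chance of flipping: the manufactured "error"
        child = [bit ^ (random.random() < 0.05) for bit in parent]
        offspring.append(child)
    # survivors: the best half of parents plus children
    population = sorted(population + offspring, key=fitness, reverse=True)[:30]

print("best design fitness after 50 generations:", fitness(population[0]))
```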

This is pretty much how I see humans. We commit errors, and due to these errors we progress. How we are able to recognise errors and be conscious at all is a mystery likely made by evolutionary errors (but maybe the 'errors' are really anti-errors?).
Wayfarer September 16, 2025 at 04:54 #1013338
[quote= "Patterner;1013312"]A mind is a physical system that converts sensations into action. A mind takes in a set of inputs from its environment and transforms them into a set of environment-impacting outputs that, crucially, influence the welfare of its body. This process of changing inputs into outputs—of changing sensation into useful behavior—is thinking, the defining activity of a mind.[/quote]

That describes how organisms respond to their environment - which the vast majority do, quite successfully, without thought.

Corvus September 16, 2025 at 12:41 #1013354
Reply to MoK "AI cannot think"

What do you mean by "think"? What is your definition of "think"?
Patterner September 16, 2025 at 13:17 #1013358
Quoting RogueAI
Your mind is a physical system? What color is it? How much does it weigh? How big is it?
I believe the idea is that the mind is a physical process. It's a verb. As is a basketball game. What color is a basketball game? How much does it weigh? How big is it?

Different uses of terms, perhaps? What do you call the physical processes of the brain that receive signals from the retinas, compare them with stored information of previous signals received from the retinas, recognize a situation that previously led to damage, etc.?



Quoting Wayfarer
A mind is a physical system that converts sensations into action. A mind takes in a set of inputs from its environment and transforms them into a set of environment-impacting outputs that, crucially, influence the welfare of its body. This process of changing inputs into outputs—of changing sensation into useful behavior—is thinking, the defining activity of a mind.
— Patterner

That describes how organisms respond to their environment - which the vast majority do, quite successfully, without thought.
What is/was the first step in the process that came to be what you call "thinking"? I suppose it depends on your definition. The authors have stated theirs.
RogueAI September 16, 2025 at 13:25 #1013361
Quoting Patterner
I believe the idea is that the mind is a physical process. It's a verb. As is a basketball game. What color is a basketball game? How much does it weigh? How big is it?


But isn't your intuition that your mind is also a thing that you can ascribe qualities to?
Outlander September 16, 2025 at 13:41 #1013364
Perhaps, it could also be said, AI simply does not deviate. It simply refuses or is otherwise unable to take roads that ultimately have no purpose as far as a stated goal or mission is concerned.

A calculator doesn't think. Yet it can outperform you in any arena related to calculation.

Do you "think" when you look up somebody in the phone book? Sure, you recall their name and then thumb through the index where the letter appears and then scroll through the results until you arrived at the intended data entry.

In the context of AI, "thinking" would be simply creating random noise in a system where such noise serves no purpose and may also be a hindrance.

Contemplation might be an applicable word or concept. In the animal kingdom, a predator contemplates which prey to eat, as well as whether to attack at all. Does a lion merely view the smaller, slower gazelle trailing behind as "easy" in an automatic process or does it "think" or "contemplate" such dynamics? Does the lion have a choice at all? Or does it simply "do" what its ingrained "hardware" tells it to?
Patterner September 16, 2025 at 14:25 #1013368
Quoting RogueAI
But isn't your intuition that your mind is also a thing that you can ascribe qualities to?
No. I'm trying to think of it that way now, but not having any luck.
RogueAI September 16, 2025 at 15:19 #1013378
Quoting Patterner
No. I'm trying to think of it that way now, but not having any luck.


What about ideas in your mind? Do you think those are physical processes? Imagine a sunset. Isn't what you're imagining a thing?
MoK September 16, 2025 at 17:07 #1013398
Quoting noAxioms

This already seems to beg your conclusion, that something fundamentally separate from the components of a human is required for a thought to be designated as an 'idea'.

This is a premise that can be confirmed. But for that, we need to agree on what an idea is.

Quoting noAxioms

This also requires an implied premise that an AI has no similar access to this fundamentally separate thing, which you also state.

Correct. AI does not have access to any idea.

Quoting noAxioms

OK, but what exactly is an idea then?

We have been through this in another thread. I already defined the idea in the OP.

Quoting noAxioms

Arguably, the same can be said of you.

I can also produce a meaningful sentence that demonstrates an idea.
T Clark September 16, 2025 at 17:36 #1013404
Quoting RogueAI
But isn't your intuition that your mind is also a thing that you can ascribe qualities to?


Quoting Patterner
No. I'm trying to think of it that way now, but not having any luck.


Sometimes it’s hard to remember that something that seems completely obvious to one person is not even imaginable for another.
MoK September 16, 2025 at 18:26 #1013408
Quoting Jack Cummins

What does it mean to 'think'?

I already defined thinking in the OP.

Quoting Jack Cummins

Is it a product of the nervous system or something more?

It is a product of the conscious mind and the subconscious mind working together. These minds, however, are interconnected in a complex way by the brain.

Quoting Jack Cummins

This area is complex because it involves the question as to what extent thought transcends the thinker.

I think that thinking transcends the thinker. You understand the meaning of a sentence right after you complete reading it. Each word in the sentence refers to an idea. The idea related to a word is registered in the memory of the conscious mind once the word is read. A new idea emerges magically once you complete reading a sentence!
MoK September 16, 2025 at 18:46 #1013411
Quoting T Clark

There are plenty of other mental events that come to mind that might be considered emergent. As we’ve discussed previously, as I see it, the mind itself is emergent from the neurological and physiological processes of the nervous system and body.

What sort of emergent thing is the mind? To me, the mind is a substance; by substance, I mean something that objectively exists and has a set of abilities and properties, so it cannot be an emergent thing. Is the mind a substance to you as well? If not, what sort of thing is the mind?

Quoting T Clark

No. Thinking is

That is a very broad definition, which I don't agree with. For example, remembering is required for thinking, but it is not thinking. The same applies to free association.
T Clark September 16, 2025 at 19:18 #1013413
Quoting MoK
the mind is … something that objectively exists and has a set of abilities and properties,


I’m OK with that as edited.

Quoting MoK
it cannot be an emergent thing.


Of course it can. Life emerges out of chemistry. Chemistry emerges out of physics. Mind emerges out of neurology. Looks like your understanding of emergence is different from mine.

Quoting MoK
That is a very broad definition, which I don't agree with.


But that’s what it means. As I’ve said before, if you want to make up definitions for words, it’s not really philosophy. You’re just playing a little game with yourself.
JuanZu September 16, 2025 at 20:08 #1013417
Quoting T Clark
Mind emerges out of neurology.


But how?

An explanation is needed that can account for the phenomena we call mental or conscious. For example, I see a glass of water. What is the neurological configuration from which we can deduce the glass of water as a conscious experience? Can we go inside the brain, see the neurons, and find the image of a glass like a movie and a projector? The answer is no.

The thing is, we could be beings without consciousness and without experience, and yet the neurological explanation would still persist and remain valid. We cannot deduce experience from neurological explanation. In that sense, methodologically, we always start from consciousness and experience as something given, and we try to explain their origin, but we can never do so in reverse. That is why the idea of emergence is not very useful to us here and lacks explanatory power.
Wayfarer September 16, 2025 at 21:03 #1013429
Quoting Patterner
What is/was the first step in the process that came to be what you call "thinking"?


Language. Not communication - birds and bees communicate - but language, representation of objects and relations in symbolic form.
T Clark September 16, 2025 at 21:05 #1013430
Quoting JuanZu
An explanation is needed that can account for the phenomena we call mental or conscious.


The fact I might not be able to account for the phenomena right now doesn’t mean there isn’t an explanation.

Quoting JuanZu
What is the neurological configuration from which we can deduce the glass of water as a conscious experience?


That is the essence of emergence. An emergent phenomenon can be shown to be completely consistent with the principles of a lower level of organization. For example, all living phenomena must be consistent with the principles of physics and chemistry. That doesn’t mean that the emergent phenomenon can be predicted, constructed, or deduced from the principles of the lower level of organization. Again - the principles of biology cannot generally be deduced from the principles of chemistry or physics. In the same manner, mental phenomena cannot be predicted based on neurological or biological principles.

Quoting JuanZu
we could be beings without consciousness and without experience, and yet the neurological explanation would still persist and remain valid


That seems obviously false to me. Can you provide some evidence?

JuanZu September 16, 2025 at 21:52 #1013443
Quoting T Clark
That doesn’t mean that the emergent phenomenon can be predicted, constructed, or deduced from the principles of the lower level of organization.


If so, then I do not understand what the concept of emergence introduces that helps us understand the phenomenon of experience.

Quoting T Clark
That seems obviously false to me. Can you provide some evidence?


It follows from our methodological approach. We start from experience as something given and from there we establish relationships with the neurological, but imagine that we know nothing about consciousness and experience, that we are robots; how would we deduce that a being has experiences?

Patterner September 16, 2025 at 22:23 #1013449
Quoting RogueAI
No. I'm trying to think of it that way now, but not having any luck.
— Patterner

What about ideas in your mind? Do you think those are physical processes? Imagine a sunset. Isn't what you're imagining a thing?
I am imagining a visual scene. I don't suspect that scene has been recreated in my head. And, even though I don't have any personal experience with brain scans, from either side of the machinery, I'm pretty sure nothing indicates a tiny little sunset happens inside my head. When I look at a sunset, there's no weight or solidity to it. Do you think maybe it's there, but it just doesn't weigh anything? I'm really not sure what you're asking.

I can also imagine a baseball. It being solid and heavy, I'm quite certain a baseball has not been recreated in my head. Much less my imagining of the Rocky Mountains.

I think thinking is a process because it spans a period of time. The Empire State Building is the ESB every instant. If you froze time, it would still be the ESB, just sitting there. But if you freeze time, or my brain, there's no thinking. When I stop imagining a baseball, the imagined baseball no longer exists. Not even as an imagining. It's only when I'm actively imagining it that it exists in that way.
Patterner September 16, 2025 at 22:27 #1013451
Quoting Wayfarer
What is/was the first step in the process that came to be what you call "thinking"?
— Patterner

Language. Not communication - birds and bees communicate - but language, representation of objects and relations in symbolic form.
The first step in thinking is language? Nothing prior to language is considered a step in the development of thinking?
Wayfarer September 16, 2025 at 22:33 #1013453
Reply to Patterner It might indeed be 'a step in the development' of thinking, but it's not thought in the sense of what you and I are doing in composing and replying on this forum.

Noam Chomsky has a book on this, "Why Only Us? Language and Evolution" (co-authored with Robert Berwick). The title highlights the central question: why did only h.sapiens develop language? Other animals can communicate—bees dance, birds sing, primates vocalize—but only humans can generate an unbounded array of meaningful sentences with a recursive structure. The "only us" refers to the exclusive possession of this recursive, generative capacity by humans. This refers to the ability to nest and recombine units of meaning, which is what gives human language its unbounded expressive power. No animal communication system has been shown to allow recursive embedding. They stress that language is not primarily a system of communication, but a system of thought. Communication is a secondary use of an internal capacity for structuring and manipulating concepts. Animal communication systems (e.g., vervet alarm calls) are qualitatively different, not primitive stages of language.
hypericin September 16, 2025 at 23:01 #1013457

Quoting Wayfarer
They stress that language is not primarily a system of communication, but a system of thought.


How can this be reconciled with the fact that many people don't think in words?

https://www.cbc.ca/news/canada/saskatchewan/inner-monologue-experience-science-1.5486969

hypericin September 16, 2025 at 23:17 #1013460
Quoting T Clark
No. Thinking is:

cognitive behavior in which ideas, images, mental representations, or other hypothetical elements of thought are experienced or manipulated. In this sense, thinking includes imagining, remembering, problem solving, daydreaming, free association, concept formation, and many other processes.

You’re using non-standard definitions again.


:up:

Much more reasonable than that made up junk in the OP.

"Experiencing" is probably not a thing with AI, but "manipulating" almost certainly is. The type of responses AI gives are simply not amenable to a computation style which aggregates input and spits out a statistically plausible output. This would quicky fall prey to the combinatorial explosion of possible inputs. Afaict, manipulation of "elements of thought" is the only way AI can function at all.

For example, consider playing chess, a tiny sliver of AI functionality, and one which is generally not explicitly trained for in LLMs. Imagine an input like 1. e4 e5 2. Nf3 f4... How could AI reliably produce a rational output based only on inputs it has seen before, when every game is unique? Only thinking can do this.

In fact, a representation of the chess board can be observed when LLMs play chess. What else could this internal, emergent chess board be other than an "element of thought"?
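For what it's worth, the board-representation studies work roughly like this: a simple classifier (a 'probe') is trained to read a square's state out of the model's hidden activations; if it succeeds on held-out positions, the board is linearly decodable from the model's internal state. Here is a minimal Python sketch, with synthetic vectors standing in for real transformer activations (the actual studies, e.g. on Othello-playing models, probe real ones):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for hidden states: a random linear encoding of one
# square's true state (0=empty, 1=white, 2=black) plus noise.
rng = np.random.default_rng(0)
n_positions, hidden_dim = 2000, 128
square_state = rng.integers(0, 3, size=n_positions)
encoding = rng.normal(size=(3, hidden_dim))
hidden = encoding[square_state] + 0.5 * rng.normal(size=(n_positions, hidden_dim))

# The probe: a linear classifier from hidden state to square state.
probe = LogisticRegression(max_iter=1000)
probe.fit(hidden[:1500], square_state[:1500])

# High held-out accuracy is the sense in which an "internal board" is observed.
print("probe accuracy:", probe.score(hidden[1500:], square_state[1500:]))
```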

Wayfarer September 16, 2025 at 23:20 #1013461
Reply to hypericin Presumably, they are still able to speak, so form concepts, understand meanings and grammar - all of which require thought.
RogueAI September 16, 2025 at 23:24 #1013462
Quoting T Clark
Sometimes it’s hard to remember that something that seems completely obvious to one person is not even imaginable for another.


The common usage of "mind" though is that it is a noun that adjectives apply to.
RogueAI September 16, 2025 at 23:28 #1013464
Quoting Patterner
I'm really not sure what you're asking.


You think the mind is a process, right, an action, not a thing. Well, are ideas processes too?
Patterner September 17, 2025 at 00:10 #1013467
Reply to hypericin Reply to Wayfarer
I was going to bring up A Man Without Words. Someone here brought him to my attention several months ago. Ildefonso was born totally deaf. Nobody ever tried to communicate with him until he was 27. He literally had no language. It was like Helen Keller in The Miracle Worker when he realized these things the woman was doing represented objects. But harder than Helen Keller, because she at least had the beginnings of language when she got sick at 19 months. Anyway, Susan Schaller says Ildefonso was obviously very intelligent. Though he was ignorant about most everything, it was clear that he was trying to figure things out.

After he could communicate with sign language, people asked him what it was like before he had language. He says he doesn't know. Language changed him so much that he can't remember.
Wayfarer September 17, 2025 at 00:11 #1013469
Reply to Patterner That would figure - he doesn't know, because without any language, there would be no way for ideas to 'register in memory' so to speak (at a guess).
hypericin September 17, 2025 at 00:24 #1013471
Quoting Wayfarer
Presumably, they are still able to speak, so form concepts, understand meanings and grammar - all of which require thought.


Clearly they can speak, and clearly they can think. But it also seems clear that they think without using words.

If words are just one style of thinking, it seems difficult to claim that language arose mainly as a tool for thinking.
Wayfarer September 17, 2025 at 00:25 #1013472
Reply to hypericin fringe cases. I'd go with Chomsky.
hypericin September 17, 2025 at 00:26 #1013473
Reply to Wayfarer No, these are large portions of the population. But even if it were fringe, it would already make the theory difficult.
Corvus September 17, 2025 at 12:20 #1013523
Quoting RogueAI
You think the mind is a process, right, an action, not a thing. Well, are ideas processes too?


All mental events are private. No one is aware of what other mental beings are having in their minds.
If AI can think, then we are not supposed to know about it. We can only guess whether someone or some being is thinking by whether their actions and words are proper for the situation or not.

Therefore, "AI cannot think" is not a well thought out claim.
T Clark September 17, 2025 at 12:32 #1013525
Quoting Patterner
I was going to bring up A Man Without Words. Someone here brought him to my attention several months ago. Ildefonso was born totally deaf. Nobody ever tried to communicate with him until he was 27. He literally had no language.


This is what Stephen Pinker had to say in “The Language Instinct.”

In her recent book A Man Without Words, Susan Schaller tells the story of Ildefonso, a twenty-seven-year-old illegal immigrant from a small Mexican village whom she met while working as a sign language interpreter in Los Angeles. Ildefonso’s animated eyes conveyed an unmistakable intelligence and curiosity, and Schaller became his volunteer teacher and companion. He soon showed her that he had a full grasp of number: he learned to do addition on paper in three minutes and had little trouble understanding the base-ten logic behind two-digit numbers. In an epiphany reminiscent of the story of Helen Keller, Ildefonso grasped the principle of naming when Schaller tried to teach him the sign for “cat.” A dam burst, and he demanded to be shown the sign for all the objects he was familiar with. Soon he was able to convey to Schaller parts of his life story: how as a child he had begged his desperately poor parents to send him to school, the kinds of crops he had picked in different states, his evasions of immigration authorities.
T Clark September 17, 2025 at 12:45 #1013528
Quoting Wayfarer
They stress that language is not primarily a system of communication, but a system of thought. Communication is a secondary use of an internal capacity for structuring and manipulating concepts. Animal communication systems (e.g., vervet alarm calls) are qualitatively different, not primitive stages of language.


This is what Stephen Pinker had to say in “The Language Instinct.” I’m not sure if this contradicts what you’ve written or not.

Any particular thought in our head embraces a vast amount of information. But when it comes to communicating a thought to someone else, attention spans are short and mouths are slow. To get information into a listener’s head in a reasonable amount of time, a speaker can encode only a fraction of the message into words and must count on the listener to fill in the rest. But inside a single head, the demands are different. Air time is not a limited resource: different parts of the brain are connected to one another directly with thick cables that can transfer huge amounts of information quickly. Nothing can be left to the imagination, though, because the internal representations are the imagination. We end up with the following picture. People do not think in English or Chinese or Apache; they think in a language of thought.
Patterner September 17, 2025 at 13:19 #1013535
Quoting RogueAI
The common usage of "mind" though is that it is a noun that adjectives apply to.
Which adjectives apply to the mind?



Quoting RogueAI
I'm really not sure what you're asking.
— Patterner

You think the mind is a process, right, an action, not a thing. Well, are ideas processes too?
How does this sound?

  • The brain is a physical object.
  • There is activity in the brain.
  • Our consciousness of - that is, our subjective experience of - the brain's activity is the mind. At least some of its activity. Not, for example, the activity that keeps the heart beating. I'm talking about the activity that perceives, retrieves stored information, weighs multiple options and chooses one over the others, and other things that we think of as mental activity. All of these things are physical activity, involving ions, neurotransmitters, bioelectric impulses, etc. The mind is our subjective experience of that mechanical activity. Brain activity is photons hitting the retina, sending signals to the brain, etc. Our subjective awareness of that is red.
  • I'm not sure there's a difference between mind and ideas. What mind exists when there are no ideas? Information about past events and thoughts is held in a storage system. Whenever they are being accessed, they are memories, which are part of the mind. What about when they are not being accessed? They are physical structures (I don't know the specifics of the storage mechanisms) just sitting there, not doing more than they would be doing if time was frozen. Or is there a difference between thoughts and ideas? Are there thoughts that aren't ideas?
Hanover September 17, 2025 at 13:27 #1013536
People do not think in English or Chinese or Apache; they think in a language of thought.


Pinker's (and Fodor's) theory of mentalese, which is that there is a primordial language pre-existing the creation of utterances or symbols, is controversial and not well accepted. It's generally accepted, though, that an experience can exist without language and that experience might precede reduction to language; but that doesn't suggest the pre-existing experience was some sort of primordial language, only that there are experiences that pre-exist language.

My point is that your quote is of a position that is generally challenged and not widely held.
Hanover September 17, 2025 at 13:39 #1013540
Quoting Wayfarer
They stress that language is not primarily a system of communication, but a system of thought. Communication is a secondary use of an internal capacity for structuring and manipulating concepts. Animal communication systems (e.g., vervet alarm calls) are qualitatively different, not primitive stages of language.


So if I separate out propositions from sentences, where a proposition is knowledge of an event (e.g. the cat is on the mat) and a sentence is the linguistic representation of that knowledge, "The cat is on the mat," it seems reasonable a dog would know the cat is on the mat (i.e. possess the propositional knowledge), but not be able to linguistically form it into a sentence (or utterance). My question then is: if the dog has propositional knowledge, then he is engaging in thought, and the dog might also know that if he tries to sit on the mat next to the cat he will be swatted. Is the distinction you're drawing between humans and animals, then, just that humans are unusual in that they use sentences to express their thoughts where animals do not?

Or does my problem rest in the assumption made by cognitive scientists that a proposition can exist without a sentence? If that is my error, how is it best argued, do you think? It does seem propositional knowledge can exist without a sentence.


T Clark September 17, 2025 at 14:49 #1013545
Quoting Hanover
My point is that your quote is of a position that is generally challenged and not widely held.


I’m aware that it’s controversial, but that wasn’t my main point. I was just trying to show that it is unreasonable to assume that language is necessarily required for thought.
punos September 17, 2025 at 14:50 #1013546
Quoting JuanZu
Can we go inside the brain, see the neurons, and find the image of a glass like a movie and a projector? The answer is no.


The actual answer is yes. Observe:
[embedded video: reconstructing images from brain activity]
Patterner September 17, 2025 at 15:13 #1013548
Quoting T Clark
I’m aware that it’s controversial, but that wasn’t my main point. I was just trying to show that it is unreasonable to assume that language is necessarily required for thought.
I agree.
JuanZu September 17, 2025 at 18:31 #1013581
Reply to punos

The answer is no. What I "observe" is a recreation of images on a device other than the brain, but you are not looking at the brain and finding those images.
Patterner September 17, 2025 at 19:29 #1013587
Reply to JuanZu
You're right. But punos's video is damn cool!
punos September 17, 2025 at 20:45 #1013602
Quoting JuanZu
The answer is no. What I "observe" is a recreation of images on a device other than the brain, but you are not looking at the brain and finding those images.


Then where does the information used to recreate the images on the device come from?

The brain does not store information, such as an image, in the same modality in which it was received. You are not going to find an actual image in the brain. What you will find, however, is information about the image encoded within the neural activity of the brain. This machine is able to identify that encoding and decode the image based on the brain activity.

Consider image compression. Take a random image file on your computer, run it through a compression algorithm, and then examine the compressed file. You will not see a recognizable image until you decompress it. This is essentially what the machine is doing: reconstructing images from brain activity.
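You can see the analogy for yourself in a few lines of Python; zlib on repeated text here, but the point is the same for any file and any codec:

```python
import zlib

# The compressed bytes carry all the information of the original, yet
# inspecting them directly shows nothing recognizable, just as neural
# weights carry an image without "containing" a picture of it.
original = b"A car ran over the neighbor's dog. " * 20
compressed = zlib.compress(original)

print(compressed[:32])                          # opaque bytes, no "image"
print(len(original), "->", len(compressed))     # smaller, same information
print(zlib.decompress(compressed) == original)  # True: fully recoverable
```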
Wayfarer September 17, 2025 at 21:46 #1013611
Quoting Hanover
My question then is if the dog had propositional knowledge, then he is engaging in thought, and the dog might also know that if he tries to sit on the mat next to the cat he will be swatted. Is this then the distinction you're drawing between humans and animals just that humans are unusual in that they use sentences to express their thoughts where animals do not?


Well, bear in mind, that was a paraphrase of Noam Chomsky and Robert Berwick's book. But it is also addressed in a polemical argument by Aristotelian philosopher Jacques Maritain:

The Cultural Impact of Empiricism: Thanks to the association of particular images and recollections, a dog reacts in a similar manner to the similar particular impressions his eyes or his nose receive from this thing we call a piece of sugar or this thing we call an intruder; he does not know what is 'sugar' or what is 'intruder'. He plays, he lives in his affective and motor functions, or rather he is put into motion by the similarities which exist between things of the same kind; he does not see the similarity, the common features as such. What is lacking is the flash of intelligibility; he has no ear for the intelligible meaning. He has not the idea or the concept of the thing he knows, that is, from which he receives sensory impressions; his knowledge remains immersed in the subjectivity of his own feelings -- only in man, with the universal idea, does knowledge achieve objectivity. And his field of knowledge is strictly limited: only the universal idea sets free -- in man -- the potential infinity of knowledge.
Wayfarer September 17, 2025 at 21:48 #1013613
Quoting punos
The actual answer is yes.


That technology is astounding, no question. But it should be borne in mind that those systems are trained on many hours of stimulus and response for particular subjects prior to the experiment being run. During this training the system establishes links between the neural patterns of the subject, and patterns of input data. So human expertise is constantly being interpolated into the experiment in order to achieve these results.
punos September 17, 2025 at 22:11 #1013618
Quoting Wayfarer
But it should be borne in mind that those systems are trained on many hours of stimulus and response for particular subjects prior to the experiment being run. During this training the system establishes links between the neural patterns of the subject, and patterns of input data.


That's right, the input stimulus and response sessions are meant to identify the encoding that a specific brain uses for the images or parts of images it perceives. Once these encodings have been established for that brain, the perceived images can be decoded. Each person's encoding is different, like a fingerprint. There is about one-third overlap for most people, and an "untuned" decoder may be able to retrieve some images, but it would likely result in very low-resolution reconstructions, if anything useful at all.

Quoting Wayfarer
So human expertise is constantly being interpolated into the experiment in order to achieve these results.


Could you clarify this statement please?
JuanZu September 17, 2025 at 22:36 #1013623
Quoting punos
The brain does not store information, such as an image, in the same modality in which it was received. You are not going to find an actual image in the brain. What you will find, however, is information


Ok. So we have to differentiate between information and experience (Mary's room then). Because you're not seeing the experience, but rather a reconstruction on a monitor, on a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different.
punos September 17, 2025 at 23:05 #1013624
Quoting JuanZu
Ok. So we have to differentiate between information and experience.


That's fine, but my original response was about finding an image in the brain, not about the experience of the image. Experience involves the processing of information, since it is possible to have information encoded in your brain without being aware of it at a conscious or experiential level. An experience occurs when you acquire new information through your senses from the outside, and also when you retrieve and reconstruct previously stored memories in your conscious mind.

Quoting JuanZu
The experience isn't made up of pixels. It is a translation from something to something totally different.


If you wanted to directly experience an image encoded in someone else's brain, here’s what i think would need to be done: One could use a machine like the one in the video i shared to find the encoding in your brain and, for example, my brain. After acquiring both of our unique encodings, one could then use an LLM to translate between my encoding and yours. We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when i look at an image, you would see and experience everything i see. Do you see?
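To make the translation step less hand-wavy, here is a toy Python sketch of mapping one person's encoding onto another's, assuming paired recordings of two brains viewing the same stimuli. Everything here is synthetic; real 'functional alignment' work is in this spirit but far more involved:

```python
import numpy as np

# Subjects A and B both view the same n stimuli, giving paired response
# vectors. Each subject's (unknown) encoding is simulated as a random
# linear map from stimulus features to "neural" responses.
rng = np.random.default_rng(1)
n, stim_dim, dim_a, dim_b = 500, 32, 64, 80
stimuli = rng.normal(size=(n, stim_dim))
enc_a = rng.normal(size=(stim_dim, dim_a))
enc_b = rng.normal(size=(stim_dim, dim_b))
resp_a = stimuli @ enc_a + 0.1 * rng.normal(size=(n, dim_a))
resp_b = stimuli @ enc_b + 0.1 * rng.normal(size=(n, dim_b))

# Least-squares linear map that translates A's responses into B's space.
translate, *_ = np.linalg.lstsq(resp_a, resp_b, rcond=None)

# A new response from A, translated into what B's brain "would show".
new_resp_a = rng.normal(size=(1, stim_dim)) @ enc_a
predicted_b = new_resp_a @ translate
print(predicted_b.shape)  # (1, 80): B-encoded version of A's activity
```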
JuanZu September 17, 2025 at 23:41 #1013631
Quoting punos
That's fine, but my original response was about finding an image in the brain, not about the experience of the image.


To avoid misunderstandings, what do you think about the idea of finding the "living experience" in the brain? The fact that you can transfer neural information to a screen and construct an image says it all. When you see those images on the monitor that "reconstructs" them, you are not experiencing what is supposedly being reconstructed. In fact, the word reconstruction is misleading. I prefer to say objectifying what is subjective, but then something is lost, something that is no longer on the monitor. Basically, everything is lost; the experience itself is lost.

Quoting punos
Now, when i look at an image, you would see and experience everything i see. Do you see?


Not at all. Because each person will experience it differently, due to their uniqueness.

JuanZu September 17, 2025 at 23:50 #1013632
Quoting Corvus
All mental events are private. No one is aware of what other mental beings are having in their minds.
If AI can think, then we are not supposed to know about it. We can only guess whether someone or some being is thinking by whether their actions and words are proper for the situation or not.


Exactly. But behaviours and words can be repeated by a robot without consciousness. In that sense, all we can know is that a robot acts AS IF it were conscious. But that knowledge is not enough to know that it has consciousness.
punos September 18, 2025 at 00:21 #1013637
Quoting JuanZu
To avoid misunderstandings, what do you think about the idea of finding the "living experience" in the brain?


The "living experience" in the brain is simply the active and recursive processing of the conscious mind, or the "global workspace". Experience is a stream of information continuously running through specific functional regions of the brain that architecturally encode the qualia of that experience. Without this recursive loop of self-information, there is no sense of living or experience. The "living experience" emerges from the information processing activity itself. Also note that the brain is a physical information system, or in other words "information that processes information". The key feature is the continuously active recurring information processing.

Quoting JuanZu
When you see those images on the monitor that "reconstructs" them, you are not experiencing what is supposedly being reconstructed. In fact, the word reconstruction is misleading. I prefer to say objectifying what is subjective, but then something is lost, something that is no longer on the monitor. Basically, everything is lost; the experience itself is lost.


I responded to that with this:
Quoting punos
If you wanted to directly experience an image encoded in someone else's brain, here’s what i think would need to be done: One could use a machine like the one in the video i shared to find the encoding in your brain and, for example, my brain. After acquiring both of our unique encodings, one could then use an LLM to translate between my encoding and yours. We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when i look at an image, you would see and experience everything i see. Do you see?



Quoting JuanZu
Not at all. Because each person will experience it differently, due to their uniqueness.


I addressed that issue here:
Quoting punos
One could use a machine like the one in the video i shared to find the encoding in your brain and, for example, my brain. After acquiring both of our unique encodings, one could then use an LLM to translate between my encoding and yours.
Wayfarer September 18, 2025 at 01:05 #1013642
Reply to punos The reconstructions are extraordinary, no question. But it’s important to see what’s really happening: the system has to be trained for hours on each subject, with researchers mapping brain activity against known images and then building statistical models to translate those signals back into visuals. So what we’re seeing isn’t the brain “projecting” a movie by itself, but a reconstruction produced through a pipeline of human design, training, and interpretation. Without that interpretive layer, the raw neural data wouldn’t 'look like' anything. They don’t show that the brain literally contains images — they’re model-based translations of neural activity, not direct readouts of images 'stored' in the neural data.
Wayfarer September 18, 2025 at 01:30 #1013645
Reply to T Clark That's an interesting Pinker quote, although I myself frequently think in English sentences - not that I regard that as typical or as something everyone would do. Others have said here there are people who can read and speak perfectly well without ever being aware of a stream of thought in their minds. I think my 'bottom line' with respect to AI (with which I now interact every day) is that LLMs are not subjects of experience or thought. And if I ask any of them - Claude, Gemini, ChatGPT - they will affirm this. They are uncannily like real humans, right down to humour and double entendres, but they're reflecting back at us the distillation of billions of hours of human thought and speech.
JuanZu September 18, 2025 at 01:44 #1013647
Quoting punos
Experience is a stream of information


"So we have to differentiate between information and experience (Mary's room then). Because you're not seeing the experience, but rather a reconstruction in a monitor, in a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different."


The information is arranged on a substrate in which the experience cannot be broken down without losing what we call experience (when we see a glass of water, we do not see the neurons acting). It is like when we say that experience is nothing more than neural synapses. But methodologically, we have a one-way path: the association from experience to neural processes, but not a return path: from processes to experience.

In fact, this is confirmed in the video you brought: we FIRST have evidence of what experience is, and then we adjust the monitor so that the electrical signals resemble what we see in experience. But we can translate those signals into anything, not necessarily into an image on a monitor. This raises a question: could we reconstruct experience in a physical way without first knowing what experience is (seeing not neurons, nor electrical signals, but just a glass of water) and what it resembles? The answer is no.
T Clark September 18, 2025 at 01:45 #1013648
Quoting Wayfarer
?T Clark That's an interesting Pinker quote, although I myself frequently think in English sentences - not that I regard that as typical or as something everyone would do. Others have said here there are people who can read and speak perfectly well without ever being aware of a stream of thought in their minds. I think my 'bottom line' with respect to AI (with which I now interact every day) is that LLMs are not subjects of experience or thought. And if ask any of them - Claude, Gemini, ChatGPT - they will affirm this. They are uncannily like real humans, right down to humour and double entrendes, but they're reflecting back at us the distillation of billions of hours of human thought and speech.


After this whole discussion started, I did a little research on Google and in the SEP. What I found is consistent with what you’re writing. There seem to have been two approaches to this question - one that uses a language-based approach and another that uses the kind of processes that are described in an LLM. I guess it is controversial which one is the proper one to use in this kind of situation.

Wayfarer September 18, 2025 at 01:49 #1013650
Reply to T Clark I asked ChatGPT: ‘When an LLM ‘gets’ a joke and signals ‘ha ha’ - it doesn’t actually feel amused, so much as recognizing it as a joke and responding accordingly, right?’

Chat GPT: ‘Yes, when an LLM ‘gets’ a joke and says ‘ha ha,’ it isn’t actually amused — it’s just recognizing the pattern of a joke and producing the kind of response people usually give. It’s a simulation of amusement, not the feeling itself.

So it’s just like brain-image reconstructions: a modelled output rather than direct access to the brain’s “movie”.’
punos September 18, 2025 at 01:53 #1013651
Quoting Wayfarer
But it’s important to see what’s really happening: the system has to be trained for hours on each subject, with researchers mapping brain activity against known images and then building statistical models to translate those signals back into visuals.


Right. Those statistical models are needed to reproduce the information contained within the electromagnetic signals emitted by neural activity. The information at this electromagnetic level is an encoding of the spiking electrochemical propagating patterns within the brain tissue. It is a byproduct of neural communication that can be measured and tapped into. The brain itself does not use these electromagnetic emissions as its own encoding. Therefore, there is no direct transfer of information, but rather a translation into a new encoding compatible with our devices that can then re-represent that information in yet another encoding for the video screen or monitor. Still the same information in a different encoding.

A single piece of information can exist in multiple places at once and be represented in multiple ways simultaneously. The information reconstructed from a brain scan is, in principle, the same information as in the brain if captured with perfect fidelity. It can be copied an infinite number of times, and each copy is identical to the original, provided the replication is perfectly accurate. The only limits to this process are practical constraints with current technology.

Quoting Wayfarer
So what we’re seeing isn’t the brain “projecting” a movie by itself, but a reconstruction produced through a pipeline of human design, training, and interpretation. Without that interpretive layer, the raw neural data wouldn’t 'look like' anything.


Yes, this is because human expertise is required to build the system that performs the decoding and encoding. This makes it possible to extract information from the brain even if the specific image was not included in the training data for the statistical model. Without this step, there is no access to the information in the brain in order to copy it.

Quoting Wayfarer
They don’t show that the brain literally contains images — they’re model-based translations of neural activity, not direct readouts of images 'stored' in the neural data.


That is exactly correct. It is not the image that is being read out, but the information about the image, which is then reconstructed into the image. Remember that the image in the brain is not stored in the format of an image. There is no little box of pictures in the brain with a little man looking at the picture when you see it. The information of the image is stored in the form of distributed neural weights, and we can only access that information when the brain itself activates it, which is why the stimulus and response phase of training is necessary.

It is possible to take neural data intended for the visual center of the brain and route it into the auditory center. In that case, the experience of the image is no longer visual but auditory. It is the same information, but situated within a different neural architecture. This phenomenon is called synesthesia, as i am sure you know.
punos September 18, 2025 at 02:22 #1013656
Quoting JuanZu
The information is arranged on a substrate in which the experience cannot be broken down without losing what we call experience (when we see a glass of water, we do not see the neurons acting). It is like when we say that experience is nothing more than neural synapses. But methodologically, we have a one-way path: the association from experience to neural processes, but not a return path: from processes to experience.


I answered this here:
Quoting punos
We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when i look at an image, you would see and experience everything i see.


This process would stimulate your brain using the information from my brain, after translating it from my encoding to yours giving you an experience of what i am seeing. My encoding would be mapped and translated to your encoding.

Quoting JuanZu
In fact, this is confirmed in the video you brought: we FIRST have evidence of what experience is, and then we adjust the monitor so that the electrical signals resemble what we see in experience. But we can translate those signals into anything, not necessarily into an image on a monitor.


The entire system can be automated to exclude the human from the loop, except for the subject being scanned, of course. All that is needed is for the computer to control a monitor on which it can display images to the subject. As the subject views the images, the machine records the corresponding neural responses and independently develops a statistical model that identifies which parts of the brain are involved in processing what. This process alone can yield a viable statistical model capable of detecting arbitrary images from brain scans without human supervision.

It's entirely possible to create a headset or helmet that constantly scans your brain throughout the day and compares images from a camera on the helmet to your neural activity; by the end of a week, perhaps, it will have built a robust model of the visual data in your brain.
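
A rough sketch of that automated loop on simulated data (ridge regression standing in for whatever statistical model is actually used; numpy and scikit-learn assumed, and all names are mine):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical setup: 500 viewing trials, 200 recorded "voxels",
# 32 visual features per displayed image.
n_trials, n_voxels, n_feats = 500, 200, 32
stimuli = rng.standard_normal((n_trials, n_feats))

# Simulated encoding: neural responses are a noisy linear function of
# the stimulus (a stand-in for the real, unknown neural code).
encoding = rng.standard_normal((n_feats, n_voxels))
responses = stimuli @ encoding + 0.1 * rng.standard_normal((n_trials, n_voxels))

# Training phase: show images, record responses, fit a decoder that
# maps neural activity back to stimulus features. No human in the loop.
decoder = Ridge(alpha=1.0)
decoder.fit(responses[:400], stimuli[:400])

# Test phase: reconstruct features of images never used in training.
decoded = decoder.predict(responses[400:])
corr = np.corrcoef(decoded.ravel(), stimuli[400:].ravel())[0, 1]
print(f"held-out reconstruction correlation: {corr:.2f}")
```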

Quoting JuanZu
This raises a question: could we reconstruct experience in a physical way without first knowing what experience is (seeing neither neurons nor electrical signals, just a glass of water) and what it resembles? The answer is no.


I don't know what you're asking here. Perhaps you can rephrase it?
JuanZu September 18, 2025 at 03:04 #1013662
Quoting punos
We would then need a machine capable of writing (not just reading) to your brain using your specific encoding. Now, when I look at an image, you would see and experience everything I see.


That's not a good answer. It doesn't address the issue of decomposition or methodology. A good answer would be: We can actually see neural processes first-person, and not only that, but methodologically we have discovered how to create consciousness without needing to be conscious ourselves, as necessary evidence.

Quoting punos
I don't know what you're asking here. Perhaps you can rephrase it?


In our experience, we do not see the neural processes that would compose the glass of water. This points to an irreducible qualitative difference, because if we try to break down the glass of water, we do not obtain those neural processes.



Hanover September 18, 2025 at 03:27 #1013664
Reply to Wayfarer

The Cultural Impact of Empiricism: Thanks to the association of particular images and recollections, a dog reacts in a similar manner to the similar particular impressions his eyes or his nose receive from this thing we call a piece of sugar or this thing we call an intruder; he does not know what is 'sugar' or what is 'intruder'.


What scientific study does he cite for this empirical claim? If my dog goes and gets a ball when I say "go get your ball," even new balls not previously seen, have I disproved his claim by showing the dog's understanding of categories? If not, what evidence disproves his claim?
180 Proof September 18, 2025 at 03:27 #1013665
Quoting MoK
The conscious mind is defined as a substance ...

Spinoza's 'conception of substance' refutes this Cartesian (Aristotelian) error; instead, we attribute "mind" only to entities which exhibit 'purposeful behaviors'.

The thinking is defined as a process in which we work on known ideas with the aim of creating a new idea.

A more useful definition of "thinking" is 'reflective inquiry, such as learning/creating from failure' (i.e. metacognition).

An AI is a mindless thing, so it does not have access to ideas ...Therefore, an AI cannot create a new idea either.

Circular reasoning fallacy. You conclude only what you assume.

So, an AI cannot think, given the definition of thinking and considering the fact that it is mindless.

"The definition" does not entail any "fact" – again, Mok, you're concluding what you assume.
punos September 18, 2025 at 03:48 #1013669
Quoting JuanZu
That's not a good answer. It doesn't address the issue of decomposition or methodology. A good answer would be: We can actually see neural processes first-person, and not only that, but methodologically we have discovered how to create consciousness without needing to be conscious ourselves.


I don't know what you mean, but I don't think you know what I mean either. You're being too vague or inconsistent about what we are talking about. I tried to show you how an image can be decoded from the brain and displayed on a non-conscious screen as pure information. I never claimed that the information has to be conscious; it is just data. You wanted to know how to experience the image instead of just looking at it on a screen, so I gave you a way to do that. Now you're talking about creating consciousness when I'm explaining how to experience the sensory data of another person with your own consciousness.

Quoting JuanZu
In our experience, we do not see the neural processes that would compose the glass of water. This points to an irreducible qualitative difference, because if we try to break down the glass of water, we do not obtain those neural processes.


We do not see the neural processes that encode a glass of water; we experience the process of reconstructing the information about a glass of water. When you observe neural activity from the outside, you naturally would not experience the glass of water. But if you place your perspective within the neural activity, becoming the neural activity itself (which you already are), then you would experience the glass of water through the activations responsible for its representation.

When you look at a glass of water, your brain breaks down the neural signals from the light that hits your retinas and filters those signals through a dense maze of neural pathways, sorting out all the features of the image and storing the pieces all over the brain. The neural pathways that are activated every time you see a glass of water form the neural representation of the glass of water in your brain. You experience that neural pathway as a glass of water in your conscious mind when it is activated. No activation means no experience of the glass of water.
Wayfarer September 18, 2025 at 05:37 #1013685
Quoting Hanover
Thanks to the association of particular images and recollections, a dog reacts in a similar manner to the similar particular impressions his eyes or his nose receive from this thing we call a piece of sugar or this thing we call an intruder; he does not know what is 'sugar' or what is 'intruder'.
— The Cultural Impact of Empiricism

What scientific study does he cite for this empirical claim? If my dog goes and gets a ball when I say "go get your ball," even new balls not previously seen, have I disproved his claim by showing the dog's understanding of categories? If not, what evidence disproves his claim?


Perhaps by scattering a range of balls of different sizes and saying 'fetch the large, white ball' or 'the ball nearest the lemon tree.' That might do the trick.
MoK September 18, 2025 at 08:54 #1013703
Quoting Nils Loc

A car ran over the neighbor's dog.

Does the summary meaning of this sentence comprise an irreducible mental event? It (the idea via sentence) happened; it isn't any more or less than what it means.

Each sentence refers to at least one idea, such as a relation, a situation, etc. In your example, we are dealing with a situation.

Quoting Nils Loc

Compare:

A 2024 Rapid Red Mustang Mach E ran over our neighbor's 15 year old Chiweenie.

Does the summary meaning of this sentence comprise an irreducible mental event?

We are dealing with a situation again, no matter how much detail you provide.
MoK September 18, 2025 at 09:08 #1013704
Quoting I like sushi

AI simply simulates thinking.

They don't know what thinking is, so they cannot design an AI that simulates thinking.

Quoting I like sushi

It is built for pattern recognition and has no apparent nascent components to it.

Are you saying that thinking is pattern recognition? I don't think so.
MoK September 18, 2025 at 09:09 #1013705
Quoting Corvus

What do you mean by "think"? What is your definition of "think"?

I already defined thinking in the OP.
Outlander September 18, 2025 at 09:12 #1013706
Quoting MoK
They don't know what thinking is, so they cannot design an AI that simulates thinking.


So, what is thinking? You've, from what I've seen, yet to delineate a clear and concise formula (and resulting definition) for such.

Quoting MoK
Are you saying that thinking is pattern recognition? I don't think so.


Well, I mean, take the following sentence.

Ahaj scenap conopul seretif seyesen

I thought very hard to make that sentence. But it means nothing until it hits the pattern-recognition part of your brain, the part that realizes "wait a minute, that's gibberish" versus this sentence you're reading now. I mean, come on. Let's be honest. The onus is now on you to explain your claims properly. Something that at least two or more intelligent people participating in this thread feel you've so far been unable to do.

Love your avatar BTW. Reminds me of my mood most of the time sober.
I like sushi September 18, 2025 at 09:54 #1013708
Quoting MoK
They don't know what thinking is, so they cannot design an AI that simulates thinking.


Well, my point was that it appears to be 'thinking'. It cannot think. It would have been better of me to state that AI models fool humans into thinking they can think.

It simulates speech very effectively now. I certainly do not equate speech with thought, though. I want to be explicit about that!

Quoting MoK
Are you saying that thinking is pattern recognition? I don't think so.


I was not saying any such thing. I was stating that AI is far more capable of pattern recognition than us. It can sift through masses of data and find patterns it would take us a long, long time to come close to noticing. It is likely these kinds of features of AI are what people mistake for 'thinking', as it seriously outperforms us when it comes to this kind of process.
MoK September 18, 2025 at 09:55 #1013709
Quoting T Clark

I’m OK with that as edited.

Given the definition you suggested, you either don't understand what objectively exists means, or you don't know what emergence is. I don't understand why you removed substance from my definition, but something that objectively exists is a substance, as opposed to something that subjectively exists, such as an experience. A neural process cannot give rise to the emergence of a substance, or something that objectively exists.

Moreover, the brain is subject to constant change due to the existence of the mind. So, the brain cannot produce the mind and be affected by the mind at the same time. That is true, since the neural processes are subject to change once the mind affects the brain. There is, however, no mind once neural processes change. So, you cannot have both changes in neural processes and the mind at the same time.

Quoting T Clark

Of course it can. Life emerges out of chemistry. Chemistry emerges out of physics. Mind emerges out of neurology. Looks like your understanding of emergence is different from mine.

Biology, chemistry, etc., are reducible to physics. That means that we are dealing with weak emergence in these cases. The emergence of the mind, if it is possible, would be strong emergence, which I strongly disagree is possible, for the reasons mentioned in the previous comment.

Quoting T Clark

But that’s what it means. As I’ve said before, if you want to make up definitions for words, it’s not really philosophy. You’re just playing a little game with yourself.

To me, abstraction and imagination are examples of thinking. Remembering, free association, etc. are not.
I like sushi September 18, 2025 at 09:56 #1013710
@MoK What did you think of my hypothetical where something like a 'thought' could be said to manifest in the prolonged manner I mentioned?
MoK September 18, 2025 at 10:31 #1013714
Quoting 180 Proof

Spinoza's 'conception of substance' refutes this Cartesian (Aristotlean) error; instead, we attribute "mind" only to entities which exhibit 'purposeful behaviors'.

He is definitely wrong. Purposeful behaviors are attributes of living creatures. Living creatures have at least a body and a mind.

Quoting 180 Proof

Circular reasoning fallacy. You conclude only what you assume.

No. You need to read the argument in order to see that what I said follows and that it is not circular.

P1) AI is mindless.
P2) The mind is needed for the creation of an idea
C1) Therefore, AI cannot create an idea (from P1 and P2)
P3) The thinking is defined as a process in which we work on known ideas with the aim of creating a new idea
C2) Therefore, AI cannot think (from C1 and P3)
C3) Therefore, AI cannot create a new idea (from P3 and C2)
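
For what it's worth, the validity of that shape (not the soundness of the premises, which is what is in dispute) can be checked mechanically. A minimal Lean 4 sketch with placeholder propositions:

```lean
-- Placeholder propositions; only the shape of the argument is checked.
variable (Mindless CreatesIdea Thinks : Prop)

example
    (p1 : Mindless)                  -- P1: AI is mindless
    (p2 : CreatesIdea → ¬Mindless)   -- P2: creating an idea needs a mind
    (p3 : Thinks → CreatesIdea)      -- P3: thinking aims at a new idea
    : ¬Thinks :=                     -- C2: AI cannot think
  fun t => p2 (p3 t) p1
```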
MoK September 18, 2025 at 10:56 #1013717
Quoting Outlander

So, what is thinking? You've, from what I've seen, yet to delineate a clear and concise formula (and resulting definition) for such.

I define thinking as a process in which we work on known ideas with the aim of creating a new idea. This definition is oriented toward processes such as abstraction and imagination.

Quoting Outlander

Well, I mean, take the following sentence.

Ahaj scenap conopul seretif seyesen

You are talking about language here. Of course, this sentence does not mean anything to me, since I cannot relate any of the words you used to something that I know. Language is used to communicate new ideas, which are the result of thinking. We are working with known ideas when it comes to thinking, so there is no such miscommunication between the conscious and subconscious mind.
MoK September 18, 2025 at 11:15 #1013718
Quoting I like sushi

Well, my point was that it appears to be 'thinking'. It cannot think. It would have been better of me to state that AI models fool humans into thinking they can think.

Correct!

Quoting I like sushi

It simulates speech very effectively now. I certainly do not equate speech with thought, though. I want to be explicit about that!

Correct again! An AI produces meaningful sentences only based on its database and infrastructure.

Quoting I like sushi

I was not saying any such thing. I was stating that AI is far more capable of pattern recognition than us. It can sift through masses of data and find patterns it would take us a long, long time to come close to noticing. It is likely these kinds of features of AI are what people mistake for 'thinking', as it seriously outperforms us when it comes to this kind of process.

Correct again! :wink: An AI is just much faster than us at pattern recognition since it is silicon-based. It is specialized in certain tasks, though. Our brains are, however, huge compared to any neural net used in any AI, and they multitask. A neuron is just very slow.
MoK September 18, 2025 at 11:17 #1013721
Quoting Outlander

Love your avatar BTW. Reminds me of my mood most of the time sober.

I am glad you like my avatar! :wink:
Outlander September 18, 2025 at 11:19 #1013722
Quoting MoK
I define thinking as a process in which we work on known ideas with the aim of creating a new idea.


Finally, the (metaphorical) tender and ignorant flesh is exposed. Now it can be graded properly. Ah, except I note one flaw. And I'm no professional by any means. There is no "we" in this abstract concept. A man can be born alone in the world and he will still think. But perhaps this is a simple habit of speech, a human flaw like we all have, to be ignored, so I shall. Just to give you the benefit of the doubt. :smile:

But! Ah, yes, there's a but. Even still. One cannot "know an idea" without the auspices and foreprocesses of thought itself. So, this is defining a concept without explaining its forebear. Your so-called "thinking" is created by the process of involvement with "known ideas". Yet how can an idea exist and be known unless thought of? This amounts to yet another non-answer.

We would have evolution going in reverse, if one were to believe your so-called findings and beliefs. This is a problem. You must find a solution.
MoK September 18, 2025 at 11:31 #1013723
Quoting JuanZu

Ok. So we have to differentiate between information and experience (Mary's room then). Because you're not seeing the experience, but rather a reconstruction in a monitor, in a flat screen. A few pixels, but the experience isn't made up of pixels. It is a translation from something to something totally different.

Very accurate!
MoK September 18, 2025 at 12:01 #1013726
Quoting Outlander

Finally, the (metaphorical) tender and ignorant flesh is exposed. Now it can be graded properly. Ah, except I note one flaw. And I'm no professional by any means. There is no "we" in this abstract concept. A man can be born alone in the world and he will still think. But perhaps this is a simple habit of speech, a human flaw like we all have, to be ignored, so I shall. Just to give you the benefit of the doubt. :smile:

Correct. I should have said "an intelligent creature" instead of "we".

Quoting Outlander

But! Ah, yes, there's a but. Even still. One cannot "know an idea" without the auspices and foreprocesses of thought itself. So, this is defining a concept without explaining its forebear. Your so-called "thinking" is created by the process of involvement with "known ideas". Yet how can an idea exist and be known unless thought of?

I don't know the right word for playing with ideas, experiencing them, without any attempt to create a new idea. :wink: For sure, such an activity is different from thinking, given the definition of thinking.
sime September 18, 2025 at 12:23 #1013727
Don't think of thinking as a solitary activity, a causal loop closed within one agent. Think of thinking as open communication between two or more processes, with each process defining a notion of truth for the other process, leading to semi-autonomous adaptive behaviour.

E.g. try to visualize a horse without any assistance and draw it on paper. This is your generative psychological process 1. Then automatically notice the inaccuracy of your horse drawing. This is your critical psychological process 2. Then iterate to improve the drawing. This instance of thinking is clearly a circular causal process involving two or more partially-independent psychological actors. Then show the drawing to somebody (Process 3) and ask for feedback and repeat.

So in general, it is a conceptual error to think of AI systems as closed systems that possess independent thoughts, except as an ideal and ultimately false abstraction. Individual minds, like individual computer programs, are "half-programs": reactive systems waiting for external input, whose behaviour isn't reducible to an individual internal state.
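
This two-process picture is easy to caricature in code. A toy Python sketch (all names mine; the critic's "notion of truth" is just a fixed target here): a generative process proposes variations, a critical process scores them, and iteration between the two does the improving that neither does alone.

```python
import random

random.seed(0)
TARGET = "draw a horse"  # stand-in for the critic's notion of truth

def generate(draft: str) -> str:
    """Process 1: blindly propose a one-character variation."""
    letters = list(draft.ljust(len(TARGET)))
    letters[random.randrange(len(TARGET))] = random.choice(
        "abcdefghijklmnopqrstuvwxyz ")
    return "".join(letters)

def criticize(draft: str) -> int:
    """Process 2: score the draft against its notion of truth."""
    return sum(a == b for a, b in zip(draft, TARGET))

draft, score = "", 0
for step in range(20_000):
    proposal = generate(draft)
    new_score = criticize(proposal)
    if new_score > score:            # keep only what survives criticism
        draft, score = proposal, new_score
    if draft == TARGET:
        print(f"converged after {step} iterations: {draft!r}")
        break
```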

T Clark September 18, 2025 at 15:43 #1013742
Quoting MoK
I don't understand why you removed substance from my definition, but something that objectively exists is a substance, as opposed to something that subjectively exists, such as an experience.


The definition of “substance” I was using refers to a physical material. The word has several other meanings, but they don’t seem applicable to this case.

Quoting MoK
the brain cannot produce the mind and be affected by the mind at the same time.


Yes, it can.

Quoting MoK
Biology, chemistry, etc., are reducible to physics. That means that we are dealing with weak emergence in these cases.


This is not correct.

Quoting MoK
To me, abstraction and imagination are examples of thinking. Remembering, free association, etc. are not.


As I’ve noted several times in this thread, you are using non-standard definitions for words. Your and my arguments are incommensurable, by which I mean, our underlying arguments are not resolvable.

Let’s leave it at that.
Corvus September 18, 2025 at 16:20 #1013748
Quoting JuanZu
Exactly. But behaviours and words can be repeated by a robot without consciousness. In that sense, all we can know is that a robot acts AS IF it were conscious. But that knowledge is not enough to know that it has consciousness.


But a robot wouldn't repeat behaviours and words without a valid reason, request, or situation put to it. If it did, then it would not be a smart robot. An AI robot is supposed to be smart and intelligent. If it is not, then it is just a machine, not an AI robot.
Corvus September 18, 2025 at 16:22 #1013750
Quoting MoK
— Corvus

I already defined thinking in the OP.


I went back to the OP and read it again, but there is nothing that sounds like, or resembles, a definition of "think". Could you reiterate it here clearly? Thank you.
MoK September 19, 2025 at 10:18 #1013901
Quoting T Clark

Yes, it can.

If you say so. But that means that you didn't pay attention to my argument.

Quoting T Clark

This is not correct.

It is correct. We can calculate the physical properties of atoms and molecules using ab initio methods and density functional theory. We can even predict protein folding using AI. You can find two publications on this topic here and here.
MoK September 19, 2025 at 10:19 #1013903
Reply to Corvus
The thinking is defined as a process in which we work on known ideas with the aim of creating a new idea.
T Clark September 19, 2025 at 14:53 #1013945
Reply to MoK

https://www.tkm.kit.edu/downloads/TKM1_2011_more_is_different_PWA.pdf
Corvus September 19, 2025 at 15:16 #1013952
Quoting MoK
The thinking is defined as a process in which we work on known ideas with the aim of creating a new idea


Could you give some examples of known ideas and new ideas? How does it work?
MoK September 19, 2025 at 15:24 #1013955
Reply to T Clark
Thanks for the article. I will read it when I have time.
MoK September 19, 2025 at 15:44 #1013963
Quoting Corvus

Could you give some examples of known ideas and new ideas? How does it work?

When I say "cup", you immediately realize what I am talking about since the word refers to an idea. The sentence "the cup is on the table" contains many words; each word refers to an idea. The sentence, however, refers to a new idea, which in this case is a situation.
Corvus September 19, 2025 at 17:06 #1013976
Quoting MoK
When I say "cup", you immediately realize what I am talking about since the word refers to an idea. The sentence "the cup is on the table" contains many words; each word refers to an idea. The sentence, however, refers to a new idea, which in this case is a situation.


There is nothing new about any of those words. Everyone in the world knows what a "cup" is and what a "table" is. You were just uttering a sentence from what you saw. That is just giving a description of the content of your perception. A new idea should be something absolutely new: something no one knew, and no one has seen or heard about before in history. That is a new idea.

So, I am not able to accept the definition you provided. A wrong definition of a concept leads to misunderstanding and confusion in arguments and discussions.
MoK September 20, 2025 at 12:18 #1014100
Reply to Corvus
My point was that a sentence has more content than the separate words that make it up. We couldn't possibly communicate any new idea if a sentence did not have such a property. If you are looking for an absolutely new idea, then please consider the conclusion of the OP, namely, that AI cannot think.
Deleted User September 22, 2025 at 15:00 #1014420
As of late, a fierce debate has started, even in the public domain, regarding the potential benefits and dangers of artificial intelligence. From our understanding of evolution described in Chapter 3, it is quite easy to understand this debate: Please recollect that there are two attributes of systems that could be used to understand different classes of systems: a classification based on the interaction between a system and a collection of data, and a classification based on the interaction between a system and its purpose. Both of these provide an understanding of the evolution of systems. It is my perception that artificial intelligence has progressed quite well in the classes that require interactions with collections of data. It is a very valid and open question if or when artificial intelligence will obtain the capability of abstract thought (Class 7 systems) and even surpass humans. This will not necessarily keep me awake at night; perhaps we humans can learn something from artificial intelligence with this capability. But then, in the worst-case scenario, this might lead to a world war that would surpass any war in the history of Homo sapiens. What most definitely keeps me awake at night is the possibility that artificial intelligence might obtain the capability of survival (Class 3 systems). (p. 117, [i]How I Understand Things. The Logic of Existence[/i])
MoK September 23, 2025 at 18:39 #1014646
Quoting Pieter R van Wyk

Please recollect that there are two attributes of systems that could be used to understand different classes of systems: a classification based on the interaction between a system and a collection of data, and a classification based on the interaction between a system and its purpose.

Could you please elaborate on what you mean by each classification?

Quoting Pieter R van Wyk

It is my perception that artificial intelligence has progressed quite well in the classes that require interactions with collections of data.

I cannot tell.

Quoting Pieter R van Wyk

It is a very valid and open question if or when artificial intelligence will obtain the capability of abstract thought (Class 7 systems) and even surpass humans.

AI cannot have abstract thought since it lacks access to the ideas necessary for imagination and abstraction.
Deleted User September 24, 2025 at 07:49 #1014782
Quoting MoK
Could you please elaborate on what you mean by each classification?


From a fundamental definition of a system, based on first principles, it is possible to identify seven classes of systems. Five classes are identified by considering the interactions between a system and a collection of data, and three classes are identified by considering the interactions between a system and its purpose. The first class in both classifications is the same; thus there (currently) exist seven classes of systems. Since these classes emerge consequently and subsequently, based on new identifiable capabilities, it is possible to form a theory of evolution by combining the two classifications. AI still lacks only two capabilities that humans have: survival and abstraction. If (or when) AI gains both of these capabilities, we humans will lose our place at the apex of evolution. Chapter 4 - Evolution of Classes and the Demarcation Meridian. [i]How I Understand Things. The Logic of Existence[/i].
MoK September 24, 2025 at 14:54 #1014825
Quoting Pieter R van Wyk

Five classes are identified by considering the interactions between a system and a collection of data

I don't understand what the interactions between a system and a collection of data mean.

Quoting Pieter R van Wyk

three classes are identified by considering the interactions between a system and its purpose.

I don't understand what the interactions between a system and its purpose mean.

Quoting Pieter R van Wyk

AI still lacks only two capabilities that humans have: survival and abstraction. If (or when) AI gains both of these capabilities, we humans will lose our place at the apex of evolution.

That is a big IF. As I argued in the OP, AI does not have access to ideas since it is mindless, so it lacks abstraction.
Deleted User September 25, 2025 at 10:53 #1014976
Quoting MoK
That is a big IF. As I argued in the OP, AI does not have access to ideas since it is mindless, so it lacks abstraction.


We agree that AI lacks abstraction - on this we are saying the same thing. I am not sure what "big IF" you are referring to; all I am saying is that if (or when) AI gains this capability, then we humans will lose our place at the apex of evolution. You might agree or disagree with this conclusion.

Quoting MoK
I don't understand what the interactions between a system and a collection of data mean.

Quoting MoK
I don't understand what the interactions between a system and its purpose mean.


Some classes of systems have the capability to interact with data (data being a collection of representations describing interactions), and thus have a perception of data; some classes of systems have a perception of their reason for existence (their purpose), and thus can interact with their purpose.

Being under the sword of Damocles called ostracisation, I might suggest that you read [i]How I Understand Things. The Logic of Existence[/i]; it could contribute even more to your understanding.
MoK September 25, 2025 at 18:57 #1015027
Quoting Pieter R van Wyk

We agree that AI lacks abstraction - on this we are saying the same thing.

Cool.

Quoting Pieter R van Wyk

I am not sure what "big IF" you are referring to; all I am saying is that if (or when) AI gains this capability, then we humans will lose our place at the apex of evolution. You might agree or disagree with this conclusion.

Given the fact that AI lacks abstraction, AI cannot come up with a new idea. Therefore, AI cannot replace us at the pinnacle of evolution. Creating new ideas is fundamental in the evolution of the human species. Humans will evolve further, most probably without end. I, however, think that AI will reach a threshold in its advancement, so it would be extremely difficult to make an AI that is more intelligent than former AIs.
Deleted User September 26, 2025 at 08:05 #1015196
Quoting MoK
Given the fact that AI lacks abstraction, AI cannot come up with a new idea. Therefore, AI cannot replace us at the pinnacle of evolution. Creating new ideas is fundamental in the evolution of the human species. Humans will evolve further, most probably without end. I, however, think that AI will reach a threshold in its advancement, so it would be extremely difficult to make an AI that is more intelligent than former AIs.


:sweat: None of us knows what will happen in the future. I can argue that "AI cannot come up with a new idea" - until it comes up with a new idea. That is how evolution takes place, not so? At some point in the evolution of our Universe, some animal had the first abstract thought. Before this event, abstraction did not exist - after this event, it does.

Methinks only the future knows the answer to this question.

Thank you for this conversation.
MoK September 28, 2025 at 17:01 #1015486
Reply to Pieter R van Wyk
Okay, thank you for sharing your thoughts.