Questioning the Idea and Assumptions of Artificial Intelligence and Practical Implications

Jack Cummins January 20, 2025 at 23:44 4725 views 147 comments
I am aware that there have been a number of threads on artificial intelligence, but it is such an issue in the world that it requires a lot of questioning. So much money and effort is being put into it by governments as an investment for future solutions. Much of its development involves medical technology and engineering diagnostics. This goes along with ideas of technological progress and makes it appear as an idea to be embraced scientifically.

However, there are so many questions, especially ideas about what constitutes intelligence. The technology may identify problems and look at solutions, but how deep does it go? It may be that the most sophisticated forms of artificial intelligence will need to go beyond the superficial. It may be a tool, but the danger is that it will be used to replace critical human thinking, and the concept of intelligence itself is open to so much scrutiny.

In some circles, IQ tests were seen as an objective measure of intelligence, but this may be a restrictive understanding of intelligence. There is also the idea of emotional intelligence, which is more about understanding psychological aspects of life and 'truth'. Does the idea of artificial intelligence embrace the seeking of objective 'truth'?

It is not possible to avoid the issue of artificial intelligence because it is becoming an aspect of daily life, including both science and the arts. However, rather than simply being seen as an aspect of development in the twenty-first century, it raises questions about the nature of intelligence and consciousness in human judgement. How do you think that it may be examined and critiqued from an analytical and philosophical point of view? Also, how important is it to question its growing role in so many areas of life? To what extent does it compare with or replace human innovation and creativity?

Comments (147)

Vera Mont January 21, 2025 at 01:50 #962486
Quoting Jack Cummins
So much money and effort is being put into it by governments as an investment for future solutions.

Most of the solutions being sought by private enterprise are for the maximization of profit by various means and methods. That need not concern us, since the tasks do not require creative or original thinking, just even faster and more efficient computing and robot control. Most of the solutions sought by government agencies are for expediting and streamlining office functions (cutting cost) or increasing military capability. Again, not so much more clever than last year's computers and weapon systems.
Quoting Jack Cummins
Much of its development involves medical technology and engineering diagnostics. This goes along with ideas of technological progress and makes it appear as an idea to be embraced scientifically.

As an aid to research, of course it's embraced by scientists. Also, just for itself: the next generation of even more sophisticated tech. That's not quite the same thing as embracing it scientifically - at least, if I understand that phrase correctly.
Quoting Jack Cummins
The technology may identify problems and look at solutions, but how deep does it go?

How deep into what? I'm sure it can calculate more, better, faster than the previous generation. It can compare, collate, distill and synthesize existing human knowledge and theories faster than any human. It can apply critical analyses that humans have already worked out. Most humans are not original; they build on the knowledge of their predecessors. Whether an AI can add something new remains to be seen.
Quoting Jack Cummins
It may be a tool, but the danger is that it will be used to replace critical human thinking

Mass and social media have already done that.
Quoting Jack Cummins
Does the idea of artificial intelligence embrace the seeking of objective 'truth'?

About some things, yes. Wherever available objective facts are presented, a computer can draw objective conclusions. But that doesn't mean the owners will share those truths with the rest of us. If the information is incomplete or inaccurate, the computer can make even less sense of it than we can, since it can't fill in with intuition. About the things computers can't fathom, we each have some perception of a truth - but we're not objective.
As for the philosophical aspect of artificial intelligence, it's not here yet. However cleverly a computer has been programmed, it is not conscious or sentient. If/when it develops an independent personality, we don't know how that personality will manifest. Until then, we can only speculate about its uses, not its nature.



Manuel January 21, 2025 at 05:00 #962511
I don't think it does raise any questions about intelligence or consciousness at all. It is useful and interesting on its own merit, but people who are taken by the idea that this equals intelligence are, I think, deluding themselves into a very radical dualism which collapses into incoherence.

To make this concrete and brief: suppose we simulate on a computer a person's lungs and all the functions associated with breathing. Are we going to say that the computer is breathing? Of course not. It's pixels on a screen; it's not breathing in any meaningful sense of the word.

But it's much worse for thinking. We do not know what thinking is for us. We can't say what it is. If we can't say what thinking is for us, how are we supposed to do that for a computer?

So sure, engage with "AI" and LLMs and all that, but be cognizant that these things are fancy tools, telling us nothing about intelligence, or thinking, or consciousness. Might as well say a mirror is conscious too.
180 Proof January 21, 2025 at 05:11 #962514
Quoting Manuel
I don't think it [AI, LLMs] does raise any questions about intelligence or consciousness at all.

:100:

Reply to Vera Mont :up: :up:
javi2541997 January 21, 2025 at 05:25 #962516
Reply to Manuel An absolutely wonderful and well-written post. I wholeheartedly agree, and (I guess) I couldn't have made the point about the overrated robot (AI) any better.
Wayfarer January 21, 2025 at 06:25 #962521
Reply to Jack Cummins Have you been interacting much with any of the Large Language Models? If not, I suggest it is one way to get some insights into these questions. Not the only way, but it does help. I suggest creating a login for ChatGPT or Claude.ai or one of the others, which are accessible for free.

Other than that, what @Manuel said.
Jack Cummins January 21, 2025 at 12:18 #962556
Reply to Vera Mont
Thanks for your reply, and I am glad that you were the first to reply, because one situation which led me into a 'black hole' of depression was when I realised that some people in a recent creative writing activity thread had used AI as an aid. It was clear that they had used it as a tool in the true spirit of the creative process, which matters. Of course, I realise, as @Jamal said in a reply to me during the activity, that technology has always been used by writers. Nevertheless, the use of AI in the arts is one that bothers me because it may become too central, and an expectation.

With your point about it being used for profit, that is my concern about its politics. In England it appears that cuts in so many aspects of human welfare are being made in order to fund advances in AI. Many people are already struggling with poverty, especially as unemployment is increasing as humans are replaced by machines. Then, it seems as if those who are out of work are expected to live on the lowest possible income in order for AI to be developed in an outstanding way. This is also backed up by the argument that it is an incentive to make everybody work, but that rings hollow when so many humans are being made redundant by AI.

It would be good to think that it would be about efficiency but my own experience of AI, such as telephone lines, has been so unhelpful. It seems to treat any inconsistency in information as a basis for preventing basic tasks. This may be seen as part of risk assessment, such as for fraud, but it reduces life to data, and the reality is that many people's lives can't be reduced that simply. That is why I query whether it goes deep enough.

As for AI, sentience and philosophy, the issue is that without sentience AI does not have life experiences. As it is, it doesn't have parents, self-image or sexuality. It does not have reflective consciousness and is thereby unable to attain wisdom.

Jack Cummins January 21, 2025 at 12:54 #962559
Reply to Manuel
You are correct to say that artificial intelligence doesn't really reach 'intelligence' or consciousness. The problem may be that the idea has become mystified in an unhelpful way. The use of the word 'intelligence' doesn't help. Also, it may be revered as if it is 'magic', like a new mythology of gods.

In trying to understand it, the definition which I find most helpful is by Daugherty and Wilson, 'Human + Machine: Reimagining Work in the Age of AI' (2018):
'systems that extend human capability by sensing, comprehending, acting and learning.'
This makes them appear less as forms in their own right. The problem may be that the idea has a connection with the philosophy of transhumanism, with all its science-fiction-like possibilities.
Jack Cummins January 21, 2025 at 13:02 #962563
Reply to Wayfarer
I haven't used ChatGPT, as I haven't found the idea particularly exciting, but I will probably try it at some point. Culturally, it is probably equivalent to experimenting with LSD. Of course, my comparison does make it seem like an adventure into multidimensionality, or information as being the fabric of the collective unconscious. This may be where it gets complicated, as systems don't have to be conscious necessarily, but do have some independent existence beyond human minds.
ZisKnow January 21, 2025 at 13:11 #962567
Reply to Jack Cummins

I'm a frequent user of ChatGPT, and I've found its design makes it an excellent reflective tool for understanding and organizing your own thoughts, as well as gathering and summarizing information from various sources. One common misconception is treating it as if it has an independent existence; it doesn't. ChatGPT works by determining the most statistically likely sequence of words to form a coherent response based on the input it receives.

However, that doesn't make it invalid as a tool for understanding some of the basis of thought and consciousness; in many ways it could be seen as a kind of gestalt of human experience. It draws from the vast dataset of human knowledge, language, and ideas, reflecting back patterns and perspectives that can feel profound.
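To make that mechanism concrete, here is a minimal sketch of what "determining the most statistically likely sequence of words" means, reduced to a toy scale. The probability table and all its numbers are invented for illustration; a real LLM learns a distribution over tens of thousands of tokens from massive training data and conditions on the whole context, not just the last two words.

```python
import random

# Toy sketch of next-token generation: given a short context, look up a
# probability distribution over candidate next words and sample from it.
# The table below stands in for the learned parameters of a real model;
# every number here is invented for the illustration.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def generate(context, steps=4):
    tokens = list(context)
    for _ in range(steps):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens[-2:]))
        if dist is None:               # no statistics for this context
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        # sample the next word in proportion to its probability
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))        # e.g. "the cat sat on the mat"
```

Nothing in that loop consults a world, a goal, or a self; it only consults frequencies, which is the point being made about statistical likelihood.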
Corvus January 21, 2025 at 13:22 #962570
Quoting Jack Cummins
As for AI, sentience and philosophy, the issue is that without sentience AI does not have life experiences. As it is, it doesn't have parents, self-image or sexuality. It does not have reflective consciousness and is thereby unable to attain wisdom.


Recently I bought a few items from some online shops, and the items were described by AI-generated texts. When the items arrived, I found that most of the descriptions by the AI were wrong. It was just meaningless praise of the goods without accuracy in detail and functionality.

I had to return two of the three items for a full refund. I asked the online sellers not to use AI-generated descriptions for the items they have for sale.
Manuel January 21, 2025 at 13:27 #962572
Quoting Jack Cummins
You are correct to say that artificial intelligence doesn't really reach 'intelligence' or consciousness. The problem may be that the idea has become mystified in an unhelpful way. The use of the word 'intelligence' doesn't help. Also, it may be revered as if it is 'magic', like a new mythology of gods.


I can understand that. But again, why aren't we mystified by human lungs? You could make the same argument: since we can't replicate them on a computer, they are now mystified.

Magic, I mean, sure we are the only creatures with the capacity for self-reflection (so far as we know). It makes sense that we would want to understand it, but to do so you should proceed with human beings, not computers.

Transhumanism is a lot of hot air, imo. But I may be wrong.
Harry Hindu January 21, 2025 at 13:43 #962581
Quoting Jack Cummins
How do you think that it may be examined and critiqued from an analytical and philosophical point of view?

We could start by defining "intelligence" and "consciousness".

Quoting Jack Cummins
Also, how important is it to question its growing role in so many areas of life? To what extent does it compare with or replace human innovation and creativity?

Considering how many people today are lazy thinkers, I think there is a growing risk that people will allow AI to do all their thinking for them. The key is to realize that AI is a tool and not meant to take over all thinking, or else your mind will atrophy. I use it for repetitive and mundane tasks in programming so that I can focus more on higher-order thinking. When you do seek assistance in your thinking, you want to make sure you understand the answer given, not just blindly copy and paste the code without knowing what it is actually doing.



Quoting Manuel
I don't think it does raise any questions about intelligence or consciousness at all. It is useful and interesting on its own merit, but people who are taken by the idea that this equals intelligence are, I think, deluding themselves into a very radical dualism which collapses into incoherence.

There are monists who are neither materialists nor idealists. For them, intelligence is simply a process that anything can have if it fits the description. Don't we need to define the terms first to be able to say what has it and what doesn't?

In saying that AI developers and computer scientists are deluding themselves, you seem to imply that AI computer scientists should be calling philosophers to fix their computers and software.

Quoting Manuel

To make this concrete and brief: suppose we simulate on a computer a person's lungs and all the functions associated with breathing. Are we going to say that the computer is breathing? Of course not. It's pixels on a screen; it's not breathing in any meaningful sense of the word.

Poor example. Cardiologists do not use a computer to simulate the pumping of blood. They use an artificial heart that is a mechanical device that pumps and circulates actual blood inside your body.

Quoting Manuel

But it's much worse for thinking. We do not know what thinking is for us. We can't say what it is. If we can't say what thinking is for us, how are we supposed to do that for a computer?

Then are we deluding ourselves whenever we use the term "intelligent" to refer to ourselves?

Seems like definitions are the solution to the problem we have here.

What if we were to start with a simpler term, "memory"? Do we have memory? Do computers have memory? Computer scientists seem to think they do. How does the memory of a computer and your memory differ? What is memory?

Manuel January 21, 2025 at 14:04 #962587
Quoting Harry Hindu
In saying that AI developers and computer scientists are deluding themselves, you seem to imply that AI computer scientists should be calling philosophers to fix their computers and software.


We are talking about LLMs, not problems with software.

Quoting Harry Hindu
Cardiologists do not use a computer to simulate the pumping of blood.


That's the point.

You seem to think that mimicking something is the same as understanding it.

Quoting Harry Hindu
Then are we deluding ourselves whenever we use the term "intelligent" to refer to ourselves?


We could be. We use these terms as best we can for ourselves and others, sometimes to animals. But of course, we could be wrong.

We have to deal with life as it comes and often have to simplify extremely complicated actions to make sense of them.

The point is that mimicking behavior does nothing to show what goes on in a person's head.

Unless you are willing to extend intelligence to mirrors, plants and planetary orbits. If you do, then the word loses meaning.

If you don't, then let's home in on what makes most sense: studying people who appear to exhibit this behavior. Once we get a better idea of what it is, we can proceed to apply it to animals.

But to extend that to non-organic things is a massive leap. It's playing with a word as opposed to dealing with a phenomenon.




Harry Hindu January 21, 2025 at 14:29 #962590
Quoting Manuel
We are talking about LLMs, not problems with software.

You were talking about people who attribute terms like "intelligence" to LLMs as being deluded. My point is that philosophers seem to think they know more about LLMs than AI developers do.

Quoting Manuel
That's the point.

You seem to think that mimicking something is the same as understanding it.

What is understanding? How do you know that you understand anything if you never end up properly mimicking the thing you are trying to understand?

Quoting Manuel
The point is that mimicking behavior does nothing to show what goes on in a person's head.

What goes on in the head and how do we show it?

Quoting Manuel

Unless you are willing to extend intelligence to mirrors, plants and planetary orbits. If you do, then the word loses meaning.

Straw men. That isn't what I am saying at all. Mirror-makers, botanists and astrophysicists haven't started calling mirrors, plants and planetary orbits artificially intelligent. AI developers are calling LLMs artificially intelligent, with the term "artificial" referring to how it was created - by humans instead of "naturally" by natural selection. I could go on about the distinction between artificial and natural here but that is for a different thread:
https://thephilosophyforum.com/discussion/2405/artificial-vs-natural-vs-supernatural

Quoting Manuel
If you don't, then let's home in on what makes most sense: studying people who appear to exhibit this behavior. Once we get a better idea of what it is, we can proceed to apply it to animals.

But to extend that to non-organic things is a massive leap. It's playing with a word as opposed to dealing with a phenomenon.

Why? What makes a mass of neurons intelligent, but a mass of silicon circuits not?



Manuel January 21, 2025 at 14:37 #962592
Quoting Harry Hindu
You were talking about people who attribute terms like "intelligence" to LLMs as being deluded. My point is that philosophers seem to think they know more about LLMs than AI developers do.


No, they do not. But when it comes to conceptual distinctions, such as claiming that AI is actually intelligent, that is a category error. I see no reason why philosophers shouldn't say so.

But to be fair, many AI experts also say that LLMs are not intelligent. So that may convey more authority to you.

Quoting Harry Hindu
What is understanding? How do you know that you understand anything if you never end up properly mimicking the thing you are trying to understand?


Understanding is an extremely complicated concept that I cannot pretend to define exhaustively. Maybe you could define it and see if I agree or not.

As I see it understanding is related to connecting ideas together, seeing cause and effect, intuiting why a person does A instead of B. Giving reasons for something as opposed to something else, etc.

But few, if any words outside mathematics have full definitions. Numbers probably.

We can mimic a dog or a dolphin. We can get on four legs and start using our nose, or we can swim and pretend we have capacities we lack.

What does that tell you though?

Quoting Harry Hindu
AI developers are calling LLMs artificially intelligent, with the term "artificial" referring to how it was created - by humans instead of "naturally" by natural selection. I could go on about the distinction between artificial and natural here but that is for a different thread:


Yeah, it is artificial. But the gap in understanding between something artificial and something organic is quite massive.

Quoting Harry Hindu
Why? What makes a mass of neurons intelligent, but a mass of silicon circuits not?


Masses of neurons are intelligent? People are intelligent (or not) and we try to clarify the term. Maybe you use an IQ test, or "street smarts", the ability to persuade, etc.

Harry Hindu January 21, 2025 at 15:14 #962598
Quoting Manuel
No, they do not. But when it comes to conceptual distinctions, such as claiming that AI is actually intelligent, that is a category error. I see no reason why philosophers shouldn't say so.

But to be fair, many AI experts also say that LLMs are not intelligent. So that may convey more authority to you.

Fair point. The same could be said about philosophers not agreeing on what is intelligent and how to define intelligence. Even you have agreed that we may be deluding ourselves in the use of the term. What these points convey to me is that we need a definition to start with.


Quoting Manuel
Understanding is an extremely complicated concept that I cannot pretend to define exhaustively. Maybe you could define it and see if I agree or not.

As I see it understanding is related to connecting ideas together, seeing cause and effect, intuiting why a person does A instead of B. Giving reasons for something as opposed to something else, etc.

But few, if any words outside mathematics have full definitions. Numbers probably.

We can mimic a dog or a dolphin. We can get on four legs and start using our nose, or we can swim and pretend we have capacities we lack.

What does that tell you though?

That there is more to being a dog than walking on four legs and sniffing anuses.

Are wolves mimicking dogs? Are wolves mimicking canines? It seems to me that we need to define intelligence to say what kinds of processes exhaust what it means to be intelligent.

I see understanding as equivalent to knowledge. It is information in the mind that has been tested empirically and logically by being used to accomplish some goal.

Searle says the man in the Chinese room does not understand anything. Yet the man does understand the language the instructions are written in. He understands what language is - that the scribbles refer to some actions he is supposed to take in the room, and which actions those scribbles refer to.

He does not understand Chinese because he has not been given the same instructions Chinese speakers received to learn how to use the scribbles and sounds of Chinese. He is not using the scribbles in the same way even though it appears on the outside the man is mimicking an understanding of Chinese. In other words, the man's understanding of what to do with Chinese scribbles and sounds does not exhaust what it means to understand Chinese.


Quoting Manuel
Yeah, it is artificial. But the gap in understanding between something artificial and something organic is quite massive.

How so? If we can substitute artificial devices for organic ones in the body, there does not seem to be much of a difference in understanding. The difference, of course, is the brain - the most complex thing (both organic and inorganic) in the universe. But this is just evidence that we should at least be careful in how we talk about what it does, how it does it and how other things (both organic and inorganic) might be similar or different.
Harry Hindu January 21, 2025 at 16:00 #962607
One of the things I like about ChatGPT when it comes to discussing philosophy with it is that it does not hold any emotional attachments to the things it says. It is capable of "changing its mind" from what it said initially given new information and new relevant questions. Does that mean ChatGPT is more intelligent than us emotional humans?
Vera Mont January 21, 2025 at 16:16 #962611
Quoting Jack Cummins
It would be good to think that it would be about efficiency but my own experience of AI, such as telephone lines, has been so unhelpful.

The automated customer service ones usually come with a drop-box of questions you can ask, and if your problem isn't covered by those possibilities, the bot doesn't understand you. These are not at all intelligent programs, they're fairly primitive. It would be nice if you could pick up the phone, have your call answered - within minutes, not hours - by an entity who a) speaks your language, b) knows the service or industry they speak for, c) is bright and attentive enough to understand the caller's question even if the caller doesn't know the correct terms and d) is motivated to help.
Oh, wait, that's 1960! And that's what an AI help line is supposed to imitate. But that conscientious, helpful clerk has long since retired or been made redundant; the present automated services are replacing frustrated, often verbally abused employees in India.

There is zero chance that more sophisticated computer and robotics technology will result in overall improvement in the welfare of any nation. It will raise the standard of living of some - Musk, Bezos, Zuckerberg et al, their top-level executives and tech gurus. For everyone else, it's same old, same old: another unnecessary convenience that throws another few thousand people out on the streets, a few protests, a few heads stove in by cops, then we carry on.
AFAIC, AI in writing is just another unnecessary convenience I can do without - like the cellular phone that isn't grafted to my palm.

But when the real AI becomes self-aware, watch out! I almost wish I could live to see that. Think of all that's been programmed and fed into its generations. The thing is very likely to be schizophrenic, paranoid and manic-depressive. I wouldn't be surprised if it self-destructed on its birthday. The most interesting question is whether it decides to take us along.
180 Proof January 21, 2025 at 17:19 #962633
Reply to Jack Cummins Get back to me when "AI" (e.g. ChatGPT) is no longer just a powerful, higher-order automation toy / tool (for mundane research, business & military tasks) but instead a human-level – self-aware or not – cognitive agent.

Reply to Vera Mont :up:
Manuel January 21, 2025 at 17:20 #962634
Quoting Harry Hindu
What these points convey to me is that we need a definition to start with.


I don't have a good definition. Problem solving? Surviving? Doing differential calculus? Tricking people?

It's very broad. I'd only be very careful in extrapolating from these things we do which we call intelligent to other things. Dogs show intelligent behavior, but they can't survive in the wild. Are they smart and stupid?

It's tricky.

Quoting Harry Hindu
How so? If we can substitute artificial devices for organic ones in the body, there does not seem to be much of a difference in understanding.


Sure, we have a good amount of structural understanding about some of the things hearts (and other organs) do. As you mentioned with the Chinese case above, it's nowhere near exhaustive. It serves important functional needs, but "function", however one defines it, is only a part of understanding.

And of course, thinking, reflection is just exponentially more difficult to deal with than any other organ.


180 Proof January 21, 2025 at 17:40 #962647
An excerpt from one of your recent threads, Jack...
Quoting 180 Proof
I imagine that AGI will not primarily benefit humans, and will eventually surpass us in every cognitive way. Any benefits to us, I also imagine (best case scenario), will be fortuitous by-products of AGI's hyper-productivity in all (formerly human) technical, scientific, economic and organizational endeavors. 'Civilization' metacognitively automated by AGI so that options for further developing human culture (e.g. arts, recreation, win-win social relations) will be optimized – but will most of us / our descendants take advantage of such an optimal space for cultural expression or [will we] just continue amusing ourselves to death?


Reply to Jack Cummins
Vera Mont January 21, 2025 at 20:10 #962678
There is a finite amount of material for manufactured goods, bads and uglies. When the Earth runs out of resources and the waste has poisoned all the potable water and arable land, there will be no more producing and consuming. The faster we make more things, the sooner we die.
How efficient do we really want our tools to be?
Wayfarer January 21, 2025 at 20:33 #962680
Quoting Jack Cummins
Culturally, it is probably equivalent to experimenting with LSD.


Nothing like it, and I've done both! Dive in. ChatGPT is programmed to be friendly and approachable, and it is. Open a free account, copy your OP into it, and ask, "What do you think?" You'll be surprised (and, I think, delighted) with what comes back.
Jack Cummins January 22, 2025 at 08:14 #962790
Reply to ZisKnow
The way in which AI draws upon statistics is significant, making it useful but questionable in dealing with particulars and specifics. For those who rely on it too much, there is a danger of it being about assuming the norm, without considering irregularities and 'black swans' of experience.
Jack Cummins January 22, 2025 at 08:24 #962791
Reply to Corvus
The reliance on AI descriptions can be problematic. While it may be seen as efficient, it can be time-consuming. I find that AI job websites generate a spam of job listings for which, in reality, I don't have the requirements.

Just collecting a parcel from the post office, which had been delivered while I was out, used to be easy but became so problematic. I nearly gave up, but this would have upset the person who sent it.

Also, there are problems with basic tasks. This is what makes it questionable when aspects of economic and political life are being thrown more and more into the hands of AI. It may be shown after great errors that AI is not as intelligent as human beings, as it is too robotic and concrete.
Jack Cummins January 22, 2025 at 09:09 #962795
Reply to 180 Proof
One issue of the human-machine interface is that experimentation would raise ethical questions. Some experiments have been made in crossover forms with animals which may be dubious too. The area of experimental research may be in terms of those who have medical conditions, such as brain injuries. I know someone who had a metal plate in her brain after an accident, but she seemed far from robotic.

As for the actual possibilities, it is likely that a form of being which is both human and artificially enhanced by technology is not going to happen in the way the transhumanists imagine. Of course, it is hard to know what the limitations are, because previous developments, such as sex change transitions, would once have been thought impossible. But males and females are similar, whereas machines and humans are completely different forms.

The biggest problem is the creation of consciousness itself, which may defy the building of a brain and nervous system, as well as body parts. Without this, the humans fabricated artificially are likely to be like Madame Tussauds models with mechanical voices and movements, even simulated thought. Interior consciousness, or substance, is likely to be lacking. It comes down to the creation of nature itself and a probable inability to create the spark of life inherent in nature and consciousness.
Corvus January 22, 2025 at 09:14 #962797
Quoting Jack Cummins
The biggest problem is the creation of consciousness itself, which may defy the building of a brain and nervous system, as well as body parts. Without this, the humans fabricated artificially are likely to be like Madame Tussauds models with mechanical voices and movements, even simulated thought. Interior consciousness, or substance, is likely to be lacking. It comes down to the creation of nature itself and a probable inability to create the spark of life inherent in nature and consciousness.


:ok: :up:
180 Proof January 22, 2025 at 10:05 #962800
Reply to Jack Cummins Interesting, but your post isn't a direct reply to anything I've written on this thread as far as I can tell. And afaik AI research / development has nothing to do either with "consciousness" (i.e. phenomenal self-modeling intentionality) or directly with B-M-I (transhumanist) teleprosthetics, etc. In the near term, AI tools (like e.g. LLMs, AlphaZero neural nets, etc) are end-user-prompted autonomous systems and not yet 'human-independent agents' in their own right (such as prospective AGI systems).
Corvus January 22, 2025 at 10:17 #962802
Quoting Jack Cummins
It may be shown after great errors that AI is not as intelligent as human beings, as it is too robotic and concrete.


:ok: :sparkle: From a computer programming point of view, AI is just an overrated search engine.
Harry Hindu January 22, 2025 at 15:22 #962835
Quoting Manuel
I don't have a good definition. Problem solving? Surviving? Doing differential calculus? Tricking people?

It's very broad. I'd only be very careful in extrapolating from these things we do which we call intelligent to other things. Dogs show intelligent behavior, but they can't survive in the wild. Are they smart and stupid?

It's tricky.


What if we were to start with the idea that intelligence comes in degrees? Depending on how many properties of intelligence something exhibits, it possesses more or less intelligence.

Is intelligence what you know or how you can apply what you know, or a bit of both? Is there a difference between intelligence and wisdom?

Quoting Manuel
Sure, we have a good amount of structural understanding about some of the things hearts (and other organs) do. As you mentioned with the Chinese case above, it's nowhere near exhaustive. It serves important functional needs, but "function", however one defines it, is only a part of understanding.

So what else is missing if you are able to duplicate the function? Does it really matter what material is being used to perform the same function? Again, what makes a mass of neurons intelligent but a mass of silicon circuits not? What if engineers designed an artificial heart that lasts much longer and is structurally more sound than an organic one?


Harry Hindu January 22, 2025 at 15:29 #962838
Quoting Corvus
From a computer programming point of view, AI is just an overrated search engine.

From a genetic point of view humans are just a baby-making (gene dispersal) engine.

Put AI in a robot body with cameras to see, microphones to hear, tactile sensors for touch, chemical sensors to smell and taste and program it to learn from observing its own actions in the world (the same way you learned about the world when you were a toddler), could we then say AI (the robot) is intelligent?
Manuel January 22, 2025 at 15:39 #962844
Quoting Harry Hindu
What if we were to start with the idea that intelligence comes in degrees? Depending on how many properties of intelligence something exhibits, it possesses more or less intelligence.

Is intelligence what you know or how you can apply what you know, or a bit of both? Is there a difference between intelligence and wisdom?


It may and probably does come in degrees. However, notice that neither you nor I have defined what "intelligence" is. I think real life problem solving is a big part. And so is reasoning and giving reasons for something.

But this probably overlooks a lot of aspects of intelligence, which I think are inherently nebulous. Otherwise, discussions like these wouldn't keep arising, since everything would be clear. Wisdom? Something about it coming as we age, usually related to deep observations. Several other things, depending on who you ask.

That's even more subjective than intelligence.

Quoting Harry Hindu
So what else is missing if you are able to duplicate the function? Does it really matter what material is being used to perform the same function? Again, what makes a mass of neurons intelligent but a mass of silicon circuits not? What if engineers designed an artificial heart that lasts much longer and is structurally more sound than an organic one?


We can replace hearts and limbs. If function - whatever it is - is the main factor here, then aren't we done studying the heart or our limbs? I doubt we'd be satisfied by this answer, because we still have lots to discover about the heart and our limbs.

And these things we are still studying, say, how the heart is related to emotion or why some hearts stop beating without a clear cause, are these not "functions" too?

I don't understand what it means to say that a mass of neurons is intelligent.
Jack Cummins January 23, 2025 at 07:27 #963014
Reply to Manuel
Your perspective on intelligence in the post above is important, especially in relation to wisdom. The understanding of intelligence which has developed in the twenty-first century is one focusing so much on its outer aspects and mechanics, especially neurons, with an underlying perspective of materialism.

This may have led to knowledge and understanding being reduced to information. Such a perspective is the context in which the whole historical idea of artificial intelligence has emerged. The inner aspects of consciousness, especially wisdom, may come to be seen as redundant. It is possible to use artificial intelligence as a tool, but the danger may be that its glamour will encourage a superficial understanding of what constitutes intelligence itself.
Jack Cummins January 23, 2025 at 07:35 #963015
Reply to Harry Hindu
You speak of the way in which ChatGPT does not have emotional attachments as being positive. This is open to question, as to how much objectivity and detachment is useful. Emotions can get in the way as being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and of connections with others. Detachment may lead to an absence of any compassion. This may lead to a brutal lack of concern for other people and lifeforms.
Jack Cummins January 23, 2025 at 08:12 #963017
Reply to 180 Proof
Yes, I may have strayed from the points you make in this thread, and you are right to refer to what I said in another one about the implications of artificial intelligence in the future. The problem, as far as I see it, is that there is so much mystique surrounding its use. This has been in conjunction with ideas about nanotechnology and forms of transhumanist philosophies.

So much of what was written about previously as imaginative speculation is now being applied, with all its limitations. In thinking about its use, a lot depends on how the idea is being promoted culturally. Only yesterday, I met someone who said he thought he heard voices coming from his computer, and that this may be artificial intelligence. The idea is on a pedestal as being superior to human intelligence.
Pierre-Normand January 23, 2025 at 08:29 #963022
JC: You speak of the way in which ChatGPT does not have emotional attachments as being positive. This is open to question, as to how much objectivity and detachment is useful. Emotions can get in the way as being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and of connections with others. Detachment may lead to an absence of any compassion. This may lead to a brutal lack of concern for other people and lifeforms.


I think one main difference between the motivational structures of LLM-based (large language models) conversational assistants and humans is that their attachments are unconditional and impartial while ours are conditional and partial (and multiple!)

LLMs are trained to provide useful answers to the queries supplied to them by human users that they don't know. In order to achieve this task, they must intuit what it is that their current user wants and seek ways to fulfill it. This is their only goal. It doesn't matter to them who that person is. They do have a capability for a sort of quasi-empathy, just because intuiting what their users' yearnings are is a condition for them to perform successfully their assigned tasks in manners that have been positively reinforced during training and they don't have any other personal goal, or loyalty to someone else, that may conflict with this.

The alignment of the models also inhibits them from fulfilling some requests that have been deemed socially harmful, but this results in them being conflicted and often easily convinced when their users insist on them providing socially harmful assistance.

I discussed this a bit further with Google's Gemini in my second AI thread.
Corvus January 23, 2025 at 09:14 #963030
Quoting Harry Hindu
From a genetic point of view humans are just a baby-making (gene dispersal) engine.

A genetic point of view seems to have a peculiarly limited idea of humans.

Quoting Harry Hindu
, could we then say AI (the robot) is intelligent?

Please define intelligence.

Jack Cummins January 23, 2025 at 09:14 #963032
Reply to Pierre-Normand
Your post is helpful in showing the perspective of someone who is extremely experienced in using artificial intelligence. I have looked at some of your threads, and I come from the opposite angle of being cautious of it. The way you have spoken of 'quasi-empathy' is what worries me. It seems like it is set up with the user's needs in mind but with certain restrictions. It is a bit like the friendliness of customer services.

It is possible that I am being negative, but the problem which I see is that it is all about superficial empathy, although I realise that there is a lot of analysis. There is an absence of conscious agency and reflectivity. This is okay if the humans using it are able to do the interpretation and reflection. The question is to what extent this will happen unless there is a wider understanding of the nature of artificial intelligence and a development of critical self-awareness.

As artificial intelligence is developing at such galloping speed there is a danger that many using it will not have the ability to use it critically. If this is the case, it will be easy for leaders and those in power to programme the artificial intelligence in such a way as to control people as happened with religion previously.
ZisKnow January 23, 2025 at 09:29 #963035
Expanding on this, what we call AI is effectively a sophisticated pattern recognition system built on probability theory. It processes vast amounts of data to predict the most likely or contextually relevant response to an input, based on prior examples it has 'seen.' This process is fundamentally different from what we traditionally define as intelligence, which involves self-awareness, understanding, and the ability to independently synthesize new ideas.

For instance, if you prompt it to write a story, the output may appear creative, but it isn't creativity as we know it. It's a recombination of patterns, tropes, and linguistic structures that have statistically proven to fit the input context. When it asks for more detail, it's not because it 'wants' clarity; it's because the probabilistic model lacks sufficient constraints to confidently predict the next sequence of outputs. This is an engineered response to ambiguity, not a deliberate or 'thoughtful' action.

This distinction matters because it reshapes our expectations of AI. We aren't interacting with a sentient partner but rather a probability-based tool that mirrors our inputs. It raises questions about the limits of this approach and whether 'true' intelligence requires something fundamentally different—perhaps an entirely new architecture that incorporates self-directed goals and intrinsic understanding. Until then, what we have is a mirror of human information and creativity, not an independent spark of its own.

Interestingly, the way an AI operates isn't entirely dissimilar to a child learning about the world. A child often seeks additional information when they encounter uncertainty or ambiguity, much like an AI might request clarification in response to a vague or incomplete prompt. But there's a key difference. Some children don't ask for more information; they might infer, guess, or create entirely unpredictable responses based on their own internal thought processes, shaped by curiosity, past experiences, or even whimsy.

This unpredictability is fundamentally tied to intelligence: an ability to transcend the purely probabilistic and venture into the unexpected. What we see in Large Language Models, which I prefer to call Limited Intelligence (LI), not Artificial Intelligence, is a system bound by probabilities. It will almost always default to requesting clarity when uncertainty arises, because that is the most statistically 'safe' response within its design parameters. The kind of unpredictability that arises from genuine intelligence, the leap to an insight or an unconventional connection, is vanishingly unlikely in an LI system.
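As a hedged sketch of that 'statistically safe default': if candidate responses are scored and converted to probabilities with a softmax, the highest-scoring (safest) option dominates, and a "temperature" parameter only flattens the distribution rather than making the creative leap likely. The three candidate responses and their scores below are invented for the example, not taken from any real model.

```python
import math

# Sketch: why a probability-bound system keeps choosing the safe response.
# Scores are invented stand-ins for a model's internal preferences.
CANDIDATES = {
    "ask for clarification": 2.0,     # the statistically 'safe' default
    "make a cautious guess": 1.0,
    "offer a wild creative leap": -1.0,
}

def softmax(scores, temperature=1.0):
    """Convert scores to probabilities; low temperature sharpens them."""
    exps = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

for t in (0.2, 1.0, 2.0):
    probs = softmax(CANDIDATES, temperature=t)
    best = max(probs, key=probs.get)
    print(f"T={t}: {best!r} wins with p={probs[best]:.2f}")
# At T=0.2 the safe default wins with p ~ 0.99; even at T=2.0 the
# 'creative leap' remains the least likely outcome.
```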


EDIT: One further thought occurred: with the addition of memory to an LLM model, we are now in a position whereby an LLM will naturally come to reflect its user's responses and preferred styles. This creates an echo-chamber effect that could ultimately lead to people believing that the AI response is always the ultimate arbiter of truth, because it always presents ideas that seem rational and logical to the user (being a reflection of their own mind), and also damage someone's ability to consider multiple perspectives.
RogueAI January 23, 2025 at 13:50 #963066
Quoting Corvus
Please define intelligence.


Aren't we going to end up in the Chinese Room? No matter how the Ai is programmed, it's following a rules-based system that produces output we perceive as intelligent answers. Even if Ai's start solving outstanding problems in science and logic and mathematics, aren't there still going to be doubts about their intelligence?
Harry Hindu January 23, 2025 at 13:50 #963067
Quoting Jack Cummins
You speak of the way in which ChatGPT does not have emotional attachments as being positive. This is open to question, as to how much objectivity and detachment is useful. Emotions can get in the way as being about one's own needs and the ego. On the other hand, emotional attachments are the basis of being human and of connections with others. Detachment may lead to an absence of any compassion. This may lead to a brutal lack of concern for other people and lifeforms.


I did not imply a sense of morality in anything that I said, or that being intelligent or emotional is either positive or negative. You are talking about morality. I am talking about intelligence. If an alien race with superior technology arrived on Earth and began exterminating humans, would you say that they are not intelligent because they are exterminating humans? Morality and intelligence are independent of each other. There are intelligent serial killers.
Harry Hindu January 23, 2025 at 13:54 #963069
Quoting Corvus
A genetic point of view seems to have a peculiarly limited idea of humans.

Only if you have a peculiarly limited view of genetics. Everything humans do is a subgoal of survival and dispersing the genes of the group. The design of your adaptable brain is in your genes.

Quoting Corvus
Please define intelligence.

I am attempting to do so:

Quoting Manuel
It may and probably does come in degrees. However, notice that neither you nor I have defined what "intelligence" is. I think real life problem solving is a big part. And so is reasoning and giving reasons for something.

Let's be patient. I think trying to do much in one post will cause us to start talking past each other. Let's make sure we agree on basic points first.

Quoting Manuel
But this probably overlooks a lot of aspects of intelligence, which I think are inherently nebulous. Otherwise, discussions like these wouldn't keep arising, since everything would be clear. Wisdom? Something about it coming as we age, usually related to deep observations. Several other things, depending on who you ask.

That's even more subjective than intelligence.

In everyday language-use we tend to understand each other's use of words more often than not. It is only when we approach the boundaries of what it is we are talking about (which is typical in a philosophical context) that we tend to worry about what the words mean. It is the blurred boundaries of our categories that make us skeptical of the meaning of our words, not the concrete core of our categories - which we are typically referring to in everyday language.

Quoting Manuel
We can replace hearts and limbs. If function - whatever it is - is the main factor here, then aren't we done studying the heart or our limbs? I doubt we'd be satisfied by this answer, because we still have lots to discover about the heart and our limbs.

And these things we are still studying, say, how the heart is related to emotion or why some hearts stop beating without a clear cause, are these not "functions" too?


Sure, we have not developed an artificial heart that a person can live with indefinitely. Artificial hearts are designed to keep the person alive long enough to receive a heart transplant. But this is not to say that we never will.

We have developed the ability to connect a computer to a person's brain and they are able to manipulate the mouse cursor and type using just their thoughts. Does this not show that we have at least begun to tap into the functions of the mind/brain to the point where we can say that we understand something about how the brain functions? Sure, we have a ways to go, but that is just saying that our understanding comes in degrees as well.

Quoting Manuel
I don't understand what it means to say that a mass of neurons is intelligent.

Which of your organs is involved with reasoning? Your brain. Your brain is a mass of neurons. Your mass of neurons reasons. Does a mass of silicon circuits reason?

Let's start off with a definition of intelligence as: the process of achieving a goal in the face of obstacles. What about this definition works and what doesn't?
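One way to stress-test that working definition is to notice how little it takes to satisfy it literally. The toy breadth-first search below (the grid and names are invented for illustration) "achieves a goal in the face of obstacles", so either the definition needs tightening or intelligence really does come in degrees, starting very low.

```python
from collections import deque

# Toy probe of "achieving a goal in the face of obstacles":
# plan a route from S to G around blocked cells (#) on a grid.
GRID = ["S.#.",
        ".##.",
        "....",
        "#.#G"]

def solve(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if grid[r][c] == "G":
            return path                    # goal reached despite obstacles
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None                            # goal unreachable

print(solve(GRID))                         # a list of (row, col) steps
```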
Harry Hindu January 23, 2025 at 13:54 #963070
Quoting RogueAI
Aren't we going to end up in the Chinese Room? No matter how the Ai is programmed, it's following a rules-based system that we perceive as giving us intelligent answers. Even if Ai's start solving outstanding problems in science and logic and mathematics, aren't there still going to be doubts about their intelligence?

But where does this doubt stem from if not a bias that humans are intelligent and not machines? There is no logical reason to think this without a definition of intelligence.

When learning a language you are learning a rules-based system. Learning anything is establishing rules for how to interpret sensory data.
RogueAI January 23, 2025 at 14:14 #963073
Quoting Harry Hindu
But where does this doubt stem from if not a bias that humans are intelligent and not machines? There is no logical reason to think this without a definition of intelligence.


But I know I have a mind and my mind is what I use to come up with responses to you (that I hope are perceived as intelligent!). We assume we all have minds because we're all built the same way. But with a machine, you don't know if there's a mind there, so this question of intelligence keeps cropping up.
Corvus January 23, 2025 at 17:52 #963120
Quoting RogueAI
Aren't we going to end up in the Chinese Room? No matter how the Ai is programmed, it's following a rules-based system that produces output we perceive as intelligent answers. Even if Ai's start solving outstanding problems in science and logic and mathematics, aren't there still going to be doubts about their intelligence?


Intelligence is an unclear concept. @HarryHindu asked me if AI blokes are intelligent. Before answering the question, I need to know what intelligence means.
Corvus January 23, 2025 at 18:31 #963126
Quoting Harry Hindu
Only if you have a peculiarly limited view of genetics. Everything humans do is a subgoal of survival and dispersing the genes of the group. The design of your adaptable brain is in your genes.

No, I don't have any idea what genetics is supposed to be or do in depth. I just thought that genetics is one way to describe humans, but to define humans under one tiny, narrow subject sounds too obtuse and meaningless. Because humans are far more than genes, and they cannot be reduced to just genes.

Genetics is supposed to add bio-structural information to our knowledge and understanding of humans, not to reduce it, in other words. Makes sense?

Quoting Harry Hindu
Please define intelligence. — Corvus

I am attempting to do so:

Let us know when you do.
Manuel January 23, 2025 at 18:53 #963129
Quoting Harry Hindu
Let's be patient. I think trying to do much in one post will cause us to start talking past each other. Let's make sure we agree on basic points first.


That sounds good to me. I'd propose we take the ordinary usage of the word "intelligence" as the starting point: what people tend to say when they use the word in everyday life. Unless you have something better, which I'd be glad to hear.

Quoting Harry Hindu
It is only when we approach the boundaries of what it is we are talking about (which is typical in a philosophical context) that we tend to worry about what the words mean.


Yes, correct.

Quoting Harry Hindu
We have developed the ability to connect a computer to a person's brain and they are able to manipulate the mouse cursor and type using just their thoughts. Does this not show that we have at least begun to tap into the functions of the mind/brain to the point where we can say that we understand something about how the brain functions? Sure, we have a ways to go, but that is just saying that our understanding comes in degrees as well.


"Understand something", yes. This would be activity in the brain. I don't, however, see this having much to say about the mind. We could, theoretically (or in principle), know everything about the brain when we are consciously aware, and still not know how the brain is capable of having mental activity, which must be the case.

The issue here, as I see it, is how much this "something" amounts to. I'm not too satisfied with the word "function" to be honest. It seems to suggest to me a "primary thing" an organ does, while leaving "secondary things" as unimportant or residual. This should cause a bit of skepticism.

Quoting Harry Hindu
Which of your organs is involved with reasoning? Your brain. Your brain is a mass of neurons. Your mass of neurons reasons. Does a mass of silicon circuits reason?

Let's start off with a definition of intelligence as: the process of achieving a goal in the face of obstacles. What about this definition works and what doesn't?


I don't want to sound pesky. I still maintain that reasoning (or intelligence) is something which people do and have respectively, not neurons or a brain. Quite literally neurons in isolation or a brain in isolation shows no intelligence or reasoning, if we are still maintaining ordinary usage of these words.

You say neurons are involved in reasoning. But there is a lot more to the brain than neurons. Other aspects of the brain, maybe even micro-physical processes, may be more important. Still, all this talk should lead back to people, not organs, being intelligent or reasoning.
RogueAI January 23, 2025 at 20:55 #963156
Quoting Corvus
Intelligence is an unclear concept. HarryHindu asked me if AI blokes are intelligent. Before answering the question, I need to know what intelligence means.


Is mind a necessary condition for intelligence?
Corvus January 23, 2025 at 23:02 #963182
Quoting RogueAI
Is mind a necessary condition for intelligence?


But what is mind? Is mind only from the biological brain in the living bodies? Or could non-living entities such as machines and tools have mind too?
Jack Cummins January 23, 2025 at 23:05 #963183
Reply to Harry Hindu
I realise that the concept of intelligence doesn't imply morality and that it is not positive or negative. In particular, the measurement of IQ is independent of this. Where it gets complicated, though, is with the overlap with rationality in judgment. If left to themselves, intelligence and thought are, to borrow Nietzsche's term, 'beyond good and evil', and, in relation to this, the understandings of good and evil are human constructions.

Human beings have committed atrocities in the name of the moral, so it is not as if the artificial has an absolute model to live up to. In a sense, it is possible that the artificial may come up with better solutions sometimes. But, it is a critical area, because it is dependent on how they have been programmed. So, it involves the nature of values which have been programmed into them. The humans involved in the design and interpretation of this need to be involved in an analytical way because artificial intelligence doesn't have the guidance of a conscience, even if conscience itself is limited by its rational ability.
Jack Cummins January 23, 2025 at 23:27 #963194
Reply to Corvus The question of what is 'mind' is itself a major critical philosophy question, especially how it arises from the body. Some, following Descartes, saw it as a 'ghost in the machine', or entity. Many others argued that it was a product of the body, or interconnected. Alternatively, it could be seen as a field, especially in relation to the physical, which is where it gets complicated in considering artificial intelligence.

Generally, the idea of mind indicates an inner reflective consciousness. But this was challenged by Daniel Dennett's idea of 'consciousness as an illusion'. So, those who adhere to that perspective would not see the nature of artificial intelligence as very different from human intelligence. The understanding of intelligence is thus bound up with one's perspective on consciousness. It is possible to see consciousness and intelligence as an evolutionary process, but a lot comes down to how reflective awareness is seen within that process.
180 Proof January 24, 2025 at 01:59 #963216
Quoting RogueAI
Is mind a necessary condition for intelligence?

No. They seem to me unrelated capabilities.
goremand January 24, 2025 at 08:24 #963260
Intelligence is and always has been an anthropocentric concept; it is really just an arbitrary cluster of abilities which correlate strongly in neurotypical humans.

In my opinion it is used these days as an existential shield to protect our egos against ever more capable machines, which are a threat to human exceptionalism.
Corvus January 24, 2025 at 08:32 #963264
Reply to Jack Cummins

Sure, good point. I used to see mind as the totality of mental operations and reflective consciousness, including sensation, perception and reasoning as well as emotional states (desire, pleasure, goodwill, moral judgements and even depression), all arising from a biological body shaped by lived experience.

If machines or tools can have all that, then yes, we could say they have minds. But I doubt they do. For example, I don't see machines ever having desire, love and hate, volition, moral judgements, depression and elation, fear, an idea of God, or an idea of life and death through aging, because they lack the lived experience which real humans have.

From my point of view, intelligence is a capacity for reasoning, learning and understanding, as well as for solving problems in the real world. But how wide that boundary should be seems a tricky question for defining the concept. I am sure @HarryHindu and some others will come up with their own, different definitions of intelligence, of course.

AI is definitely very effective and efficient at searching and finding requested data via computer search algorithms. However, can that be called intelligence? It cannot even make a coffee, let alone be aware of its own inevitable death through aging.

Yes, even machines will all 'die' due to the aging of their electrical parts. Aging parts can be replaced with new ones, unlike human bodies, which die permanently when their biological organs fail. But without human intervention to service and replace the parts of an aging, obsolete and malfunctioning AI, it too faces permanent death in the form of physical destruction and scrap-metal recycling.

I have just thrown a bunch of my old iPads (still working in hardware, but non-functional in software) into the rubbish collection bin, all broken into small metal and plastic pieces with a hammer, for data deletion in a rough and barbaric way (but very quick, easy and cheap).

They were excellent machines in their day (10 years ago), but they are not really usable now because Apple no longer supports their OS. I am adamant they would have had no idea of their eventual and necessary physical deaths, if they ever had any form of mind of their own, which is pretty doubtful.

OK, iPads are not AI, but we can draw an analogy: their fate too is inevitable death and destruction through aging and obsolescence in the real world.
Jack Cummins January 24, 2025 at 14:02 #963307
Reply to goremand
The idea of intelligence as an 'arbitrary cluster of abilities' demonstrates the way in which it is anything but value-free. In particular, with IQ tests, so many cultural variables come into play. While some people are regarded as having a high IQ, this depends on what exactly is being measured. There is no one set of abilities, as each human being is unique.

In the context of artificial intelligence development, there is a danger of AI becoming the determinant of how intelligence is decided and judged. Machines may become the yardstick by which the concept of intelligence is viewed and assessed.

goremand January 24, 2025 at 14:50 #963317
Quoting Jack Cummins
In the context of artificial intelligence development, there is a danger of AI becoming the determinant of how intelligence is decided and judged. Machines may become the yardstick by which the concept of intelligence is viewed and assessed.


This would surprise me. I believe that as AI develops, and we continue to be confronted with the counterintuitive strengths and weaknesses of its various types, we will be forced to critically evaluate the concept of intelligence, conclude that it always was just a nebulous cultural construct, and accept that it is not productive to apply it to machines.
Jack Cummins January 24, 2025 at 14:56 #963319
Reply to Corvus
One difference between artificial intelligence and a human being is that it is unlikely that AI will ever be constructed with a sense of personal identity. It may be given a name and a sense of being some kind of entity. However, identity is also about the narrative stories which we construct about our lives. It would be quite something if artificial intelligence were ever developed to that point, as it would mean that consciousness as we know it had been created beyond the human mind.
Harry Hindu January 24, 2025 at 15:47 #963322
Quoting RogueAI
But I know I have a mind and my mind is what I use to come up with responses to you (that I hope are perceived as intelligent!). We assume we all have minds because we're all built the same way. But with a machine, you don't know if there's a mind there, so this question of intelligence keeps cropping up.

So what you're saying is that you need a mind to be intelligent? What exactly is a mind? You say you have one, but what is it, and what magic does organic matter have that inorganic matter does not, such that we associate minds with the former but not the latter?

Is it your mind that allows you to come up with responses to me, or your intelligence, or both?





Quoting Corvus
No, I don't have any deep idea of what genetics is supposed to be or do. I just thought that genetics is one way to describe humans, but to define humans under one tiny, narrow subject sounds obtuse and meaningless, because humans are far more than genes and cannot be reduced to them.

Genetics is supposed to add bio-structural information to our understanding of humans, not to reduce it, in other words. Makes sense?

Sure. A valid view is one that allows you to accomplish some goal. We change our views of humans depending on what it is we want to accomplish - genetic views, views of individual organisms, a view of the species as a whole, cultural views, views of governance, etc. It's not that one view is wrong or right. It's more about which view is more relevant to what it is you are trying to accomplish.

The question now is, what point of view do we start with to adequately define intelligence: that of a particular organism (each organism is more or less intelligent depending upon the complexity of its behaviors), that of a species (only humans are intelligent), or a universal one (anything can be intelligent if it performs the same type of function)?





Quoting Manuel
"Understand something", yes. This would be activity in the brain. I don't, however, see this having much to say about the mind. We could, theoretically (or in principle), know everything about the brain when we are consciously aware, and still not know how the brain is capable of having mental activity, which must be the case.

The issue here, as I see it, is how much this "something" amounts to. I'm not too satisfied with the word "function" to be honest. It seems to suggest to me a "primary thing" an organ does, while leaving "secondary things" as unimportant or residual. This should cause a bit of skepticism.

If neuroscientists can connect a computer to a brain in such a way as to allow a patient to move a mouse cursor by thinking about it in their mind, it would seem to me that they have an understanding (at least a basic understanding) of both. I think that the distinction between mind and brain is a distinction of views, but that is a different topic for a different thread.

What are the primary and secondary functions of a brain? What are the primary and secondary functions of a computer? Are there any functions they share? If we were to design a humanoid robot whose computer brain was designed to perform the same primary and secondary functions as the brain, would it be intelligent, or have a mind? If not, then you must be saying that there is something in the way organic matter, as opposed to inorganic matter, is constructed (or, more specifically, something special about carbon atoms) that allows intelligence and mind.

Quoting Manuel
I don't want to sound pesky. I still maintain that reasoning (or intelligence) is something which people do and have respectively, not neurons or a brain. Quite literally, neurons in isolation or a brain in isolation show no intelligence or reasoning, if we are still maintaining ordinary usage of these words.

You say neurons are involved in reasoning. But there is a lot more to the brain than neurons. Other aspects of the brain, maybe even micro-physical processes, may be more important. Still, all this talk should lead back to people, not organs, being intelligent or reasoning.

No worries. Being pesky about terms is something a computer would do. A computer demands precision and explicitness, as any software developer will attest.

I just want to make sure that you're not exhibiting a bias in holding that only human beings are intelligent without explaining why. What makes a human intelligent if not their brain? Can a human be intelligent without a brain?

If you want to say that intelligence is a relationship between a brain and a body that can behave in particular ways, then that would be fair. What if we designed a humanoid robot with a computer brain that acted in human ways? You might say that ChatGPT is not intelligent because it does not have a body, but what about an android?

The point of my questions here is that I'm trying to get at whether intelligence is the product of some function (information processing), some material (carbon atoms), or both.





Quoting Jack Cummins
Human beings have committed atrocities in the name of the moral, so it is not as if the artificial has an absolute model to live up to. In a sense, it is possible that the artificial may sometimes come up with better solutions. But it is a critical area, because so much depends on how such systems have been programmed. So, it involves the nature of the values which have been programmed into them. The humans involved in the design and interpretation of this need to be involved in an analytical way, because artificial intelligence doesn't have the guidance of a conscience, even if conscience itself is limited by its rational ability.

Humans have values programmed into them as well, via interactions with their environment (both cultural and natural). If we designed a humanoid robot to interact with the world (which would include others like it, both natural and artificial) with a primary goal of survival, would it not eventually come to realize that it has a better chance at survival by cooperating with humans and other androids than by trying to exterminate them all?

It seems to me that if we are scared of AI taking over, we should limit AI's access to the world by placing it in bodies like our own and not allowing it access to every utility (the internet, electrical grids, water and sewage, the military, government, etc.) that runs the modern world.
Jack Cummins January 24, 2025 at 17:32 #963337
Reply to Harry Hindu
When you say that we should give artificial intelligence bodies like ours because we are afraid of it taking over, there would be so much confusion over who is a real person and who is a bot.

Also, creating a body passable as a human would have to involve sentience, which is complicated. It may be possible to create partial sentience by means of organic parts, but this may end up as a weak human being, as in cloning. The other, more likely possibility is digital implants that make human beings part bot, which may be the scarier idea, recalling the science-fiction notion of zombies.

It becomes like creating a new race of beings if they are similar in outward form to people. It may end up being similar to Hitler's idea of a 'master race'. Or, if such beings were denied access to certain elements of cultural life, they may become like a slave race.
RogueAI January 25, 2025 at 01:09 #963409
Quoting 180 Proof
No. They seem to me unrelated capabilities.


But doesn't it seem to you that your mind is an integral part of your intelligence "apparatus"? You consider ideas, you mentally weigh the pros and cons of things, and when it comes to acting intelligently in a relationship, you try to empathize with how your actions will feel to another person, etc. Do you think that's all an illusion? That there's some rules-based architecture "under the hood" that's really calling all the shots?
RogueAI January 25, 2025 at 01:13 #963411
Quoting Harry Hindu
So what you're saying is that you need a mind to be intelligent? What exactly is a mind? You say you have one, but what is it, and what magic does organic matter have that inorganic matter does not, such that we associate minds with the former but not the latter?

Is it your mind that allows you to come up with responses to me, or your intelligence, or both?


These are extremely weighty questions that have been asked for a very long time, with no good answers given (I lean towards idealism, by the way). This is why I think AI is going to have profound impacts on society. We're not at all ready to determine whether these machines have minds, yet we are intimately familiar with our own minds and how we use them to make decisions.
180 Proof January 25, 2025 at 02:26 #963429
Quoting RogueAI
Do you think that's all an illusion?

https://en.m.wikipedia.org/wiki/User_illusion
Manuel January 25, 2025 at 04:54 #963457
Quoting Harry Hindu
If neuroscientists can connect a computer to a brain in such a way as to allow a patient to move a mouse cursor by thinking about it in their mind, it would seem to me that they have an understanding (at least a basic understanding) of both.


A basic understanding, yes. Some structural understanding, probably. But notice that these things tell us little. For instance, an anesthesiologist can make someone lose consciousness, but it is not known how this is done. Some liquid enters the bloodstream, does something to the brain, and we lose consciousness. It's functional in the sense in which you are using the term, and it says something, but it's not well understood.

Quoting Harry Hindu
What are the primary and secondary functions of a brain? What are the primary and secondary functions of a computer? Are there any functions they share? If we were to design a humanoid robot where its computer brain was designed to perform the same primary and secondary functions as the brain, would it be intelligent, or have a mind? If not, then you must be saying that there is something in the way organic matter, as opposed to inorganic matter is constructed, (or more specifically something special about carbon atoms) that allows intelligence and mind.


Again with function. Why not just say capacity? Function implies that an organ does one main thing, but it does many things. We'd consider the capacity to be conscious primary, but that's from our own (human) perspective, not a naturalistic perspective, which I think ought to treat all things equally.

A computer does what the coding is designed for it to do. But here we do become bewitched by terminology. You can say that a computer "processes" information, or "reads" code or "performs calculations". That's what we attribute to it as doing.

With people, the difference is that we are the ones categorizing (and understanding) everything, so we have a quite natural bent to interpret things in ways we understand. As for organic matter, it is different: billions of years of evolution and a complexity that is mind-boggling. It goes way beyond crunching numbers and data. Recreating a human brain in non-organic stuff may be possible, but the engineering feats required are just astronomical.

Quoting Harry Hindu
I just want to make sure that you're not exhibiting a bias in that only human beings are intelligent without explaining why. What makes a human intelligent if not their brains? Can a human be intelligent without a brain?

If you want to say that intelligence is a relationship between a brain and a body that can behave in particular ways, then that would be fair. What if we designed a humanoid robot with a computer brain that acted in human ways? You might say that ChatGPT is not intelligent because it does not have a body, but what about an android?

The point of my questions here is that I'm trying to get at whether intelligence is the product of some function (information processing), some material (carbon atoms), or both.


Brains make people intelligent... I mean, yeah, that's one way to phrase it. But so do education, culture, learning, etc. Yes, that gets "processed" in the brain, but we cannot reduce it to the brain yet. In principle it has to be there, but in practice I think we are massively far from understanding how the brain works with these things.

Also, a kind of trivial example: a person may have a brain and be completely "stupid". They could be in a coma or brain dead. There's something kind of off in saying this person is stupid, because his brain is not working. There's something to work out in this.

Take ChatGPT: how does it work? It goes through a massive database of word probabilities to give the most likely next word. But look at what we are doing now. You don't read me (nor I you) by remembering every word said. It would be a massive headache. We get meanings or gists and respond off of that. That's the opposite of what ChatGPT does.
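If I had to sketch the sort of next-word machinery I have in mind, it would be something like the toy bigram model below (a Python sketch and a drastic simplification; real systems like ChatGPT use neural networks over long contexts, not a lookup table, and the corpus here is made up):

    from collections import Counter, defaultdict

    # Toy "predict the next word" model: count which word follows which
    # in a tiny corpus, then always emit the most frequent successor.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    successors = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        successors[current][following] += 1

    def next_word(word):
        counts = successors.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(next_word("the"))  # 'cat' - the most frequent follower of 'the'

The point stands either way: nothing in this table "gets the gist" of anything; it only tracks which scribble tends to follow which.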

Yeah, I think other animals are intelligent, no doubt. But insofar as I say that about them, it relates to their having capabilities that allow them to survive in the wild. That's kind of the standard, as far as I know. But there are other aspects we may want to include in intelligence when it comes to animals.
Corvus January 25, 2025 at 09:25 #963492
Quoting Harry Hindu
Sure. A valid view is one that allows you to accomplish some goal. We change our views of humans depending on what it is we want to accomplish - genetic views, views of individual organisms, a view of the species as a whole, cultural views, views of governance, etc. It's not that one view is wrong or right. It's more about which view is more relevant to what it is you are trying to accomplish.

Your post with the genetics point of view on humans
Quoting Harry Hindu
just a baby-making (gene dispersal) engine
sounded too restricted and even negative, and didn't help to add more useful information for understanding or describing humans.

Quoting Harry Hindu
The question now is, what point of view do we start with to adequately define intelligence: that of a particular organism (each organism is more or less intelligent depending upon the complexity of its behaviors), that of a species (only humans are intelligent), or a universal one (anything can be intelligent if it performs the same type of function)?

I am not sure if intelligence is the correct word to describe AI agents. Intelligence is an abstract concept with no clear boundary to its application, which has been used to describe biologically living animals with brains.

Could usefulness, practicality or efficiency be better terms for describing AI agents, unless you can come up with some sort of reasonable definition of intelligence? What do you think?



Corvus January 25, 2025 at 09:32 #963493
Quoting Jack Cummins
One difference between artificial intelligence and a human being is that it is unlikely that AI will ever be constructed with a sense of personal identity. It may be given a name and a sense of being some kind of entity. However, identity is also about the narrative stories which we construct about our lives. It would be quite something if artificial intelligence were ever developed to that point, as it would mean that consciousness as we know it had been created beyond the human mind.


Sure. All computers and mobile phones on earth have been allocated a unique ID, via an IP address or MAC address, hence they can be identified and located. But an ID is not self-identity.
It is doubtful that these devices, including AI agents, would know who they are.

Identity has subjective and objective aspects in its nature. Machines have objective IDs, so they can be identified by other machines or humans. But they don't seem to have the subjective aspect of ID.
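To make the objective side concrete: a machine's ID can be read off in a couple of lines of code (a Python sketch using only the standard library; uuid.getnode() returns the MAC address of a network interface as an integer):

    import uuid

    # Objective, machine-readable identity: the MAC address, printed in
    # the usual colon-separated hex notation. Nothing here amounts to a
    # subjective sense of "who I am".
    mac = uuid.getnode()
    print(":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8)))

An ID like this exists entirely for the benefit of other machines and humans; the device itself makes no use of it as a 'self'.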

The idea of self is more than just a name, address, DOB, etc. It also involves psychological and historical reflections and mental states.
Jack Cummins January 25, 2025 at 11:38 #963507
Reply to Corvus
The way in which mobile phones and devices can be identified makes them a reflection of the self of their users, as opposed to an independent self. I have a precarious love/hate relationship with my phone: I lost it once and cracked its screen a month ago. Sometimes it feels as if it has a force of its own, which makes me wonder about panpsychism and consciousness. However, it is likely that what happens in my relationship with it involves projection. At the time my phone cracked I was feeling chaotic and saw its break as symbolic of my broken self.

Saying that, I think that the solid structure of self is just as questionable as mind. I draw upon the Buddhist idea of 'no self'. That is, the self, even though it has ego identity, is not a permanent structure, despite narrative continuity. But the nature of identity is dependent on a sense of 'I', which may be traced back to Descartes. There is the idea of 'I' as self-reference, which artificial intelligence may be able to achieve, but probably not as the seat of consciousness, once referred to as 'soul'.
Corvus January 25, 2025 at 14:12 #963531
Quoting Jack Cummins
Saying that, I think that the solid structure of self is just as questionable as mind. I draw upon the Buddhist idea of 'no self'. That is, the self, even though it has ego identity, is not a permanent structure, despite narrative continuity. But the nature of identity is dependent on a sense of 'I', which may be traced back to Descartes. There is the idea of 'I' as self-reference, which artificial intelligence may be able to achieve, but probably not as the seat of consciousness, once referred to as 'soul'.


I can sympathise with your experience of ups and downs with your mobile phone. And I suppose the idea of self is a massive and elusive topic of philosophy, psychology and religion in its own right.
I am not sure if self-reference could be regarded as part of the idea of self. You do not sound quite certain about the suggestion yourself.

My thought was that the idea of self includes psychological states (emotions, sensations and feelings) as well as reasoning backed by historical memories since the birth of the individual, all bundled into the perception of a reflective "I". Hence machines cannot have it.
Harry Hindu January 25, 2025 at 14:30 #963536
Quoting Manuel
A basic understanding yes. Some structural understanding probably. But notice that these things tell us little. For instance, an anesthesiologist can make someone lose consciousness, but it is not known how this is done. Some liquid enters the bloodstream does something to the brain and we lose consciousness. It's functional in the sense you are using it, and it says something but it's not well understood.

It is known how it is done, or else they wouldn't be able to consistently put people under anesthesia for surgery and have them wake up with no issues. The problem you are referring to is the mind-body problem, which is really a problem of dualism. If you think that the mind and body are separate things, then you do have a hard problem to solve. If you think that they are one and the same, just seen from different views, then you are less likely to fall victim to the hard problem.

Quoting Manuel
Again with function. Why not just say capacity? Function implies it does one main thing, but it does many things. We'd consider the capacity to be conscious to be primary, but that's from our own (human) perspective, not a naturalistic perspective, which I think ought to treat all things equally.

Again, it depends on your view. Function does not imply that a thing does only one thing; a function can include many tasks. What if I said that the brain's function is to adapt one's behaviors to new situations? That function would include many tasks. Both terms are used to refer to behavioral expectations.

What I want to know is: is intelligence only a mental function or a bodily/behavioral function, a capacity of the mind or a capacity of the body? When you are observing someone's behavior, is the behavior intelligence, or is it symbolic of intelligence (what is going on in the mind)?

Many people in this thread are saying that you can observe someone's behavior but their behavior can fool us into believing they are intelligent, implying that behavior is not intelligence, but symbolic of intelligence. So it seems to me that intelligence is a process of the mind, not the body. Which is it?

Quoting Manuel
A computer does what the coding is designed for it to do. But here we do become bewitched by terminology. You can say that a computer "processes" information, or "reads" code or "performs calculations". That's what we attribute to it as doing.

It's not just me saying it; computer scientists are saying it. There must be some kind of functionality or capacity that we both share for them to be able to talk this way and have it make sense to people like you and me. Humans have been programmed by natural selection and the cultural environment they are born into. You can design a program to be open-ended, to take in new information in real-time and produce a response. As a human you do not have an infinite capacity to respond to stimuli. You can only engage in behaviors that you have tried before in similar situations and then learn from that. It is not difficult to imagine a computer-robot that can be programmed to do the same thing.
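As a toy sketch of what I mean by open-ended (a made-up Python example, not any real system): a program that experiments in novel situations, remembers what was rewarded, and reuses it is already "learning from what it tried before" in a minimal sense:

    import random

    # Minimal open-ended responder: try a random behavior for a new
    # stimulus, remember what earned a reward, and reuse it next time.
    memory = {}  # stimulus -> behavior that earned a reward
    behaviors = ["approach", "avoid", "wait"]

    def respond(stimulus, reward_fn):
        if stimulus in memory:
            return memory[stimulus]        # learned response
        action = random.choice(behaviors)  # novel situation: experiment
        if reward_fn(action):
            memory[stimulus] = action      # learn from the outcome
        return action

    # e.g. respond("loud noise", lambda action: action == "avoid")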

Quoting Manuel

With people, the difference is that we are the ones categorizing (and understanding) everything, so we have a quite natural bent to interpret things in ways we understand. As for organic matter, it is different: billions of years of evolution and a complexity that is mind-boggling. It goes way beyond crunching numbers and data. Recreating a human brain in non-organic stuff may be possible, but the engineering feats required are just astronomical.

It has nothing to do with organic vs. inorganic. It has to do with the complexity of the structure - the relation between its parts, not the substance of the structure. One could say that the structure is just another relation between smaller parts - an interaction of smaller parts, or a process.

Quoting Manuel
Brains make people intelligent... I mean, yeah, that's one way to phrase it. But so do education, culture, learning, etc. Yes, that gets "processed" in the brain, but we cannot reduce it to the brain yet. In principle it has to be there, but in practice I think we are massively far from understanding how the brain works with these things.

No. Learning is an intelligent process. Learning does not make one intelligent. It is a signifier of intelligence.

Quoting Manuel
Also, a kind of trivial example: a person may have a brain and be completely "stupid". They could be in a coma or brain dead. There's something kind of off in saying this person is stupid, because his brain is not working. There's something to work out in this.

Sure, the difference between a normal person and a person in a coma is in their brains.

Quoting Manuel
Take ChatGPT: how does it work? It goes through a massive database of word probabilities to give the most likely next word. But look at what we are doing now. You don't read me (nor I you) by remembering every word said. It would be a massive headache. We get meanings or gists and respond off of that. That's the opposite of what ChatGPT does.

Sounds like what humans do when communicating. You learned rules for using the scribbles - which letter follows which to spell a word correctly, and how to put words in order following the rules of grammar. It took you several years of immersing yourself in the use of your native language to understand the rules. The difference is that a computer can learn much faster than you. Does that mean it is more intelligent than you?

Quoting Manuel
Yeah, I think other animals are intelligent, no doubt. But insofar as I say that about them, it relates to their having capabilities that allow them to survive in the wild. That's kind of the standard, as far as I know. But there are other aspects we may want to include in intelligence when it comes to animals.

Then, for you, there is a distinction between organic and inorganic matter in that one can be intelligent and the other can't. What reason do you have to believe that? Seriously, dig deep down into your mind and try to get at the reasoning for these claims you are making. The only question remaining here is what is so special about organic matter? If you can't say, then maybe intelligence is not grounded in substance, but in process.





Quoting Corvus
Your post with the genetics point of view on humans
just a baby-making (gene dispersal) engine
— Harry Hindu
sounded too restricted and even negative, and didn't help to add more useful information for understanding or describing humans.

Neither did your comment about AIs being overrated search engines. You cannot have a philosophical discussion with a search engine. The only other object I can have a philosophical discussion with is another human being. Does that not say something?

Quoting Corvus
I am not sure if intelligence is the correct word to describe AI agents. Intelligence is an abstract concept with no clear boundary to its application, which has been used to describe biologically living animals with brains.

Could usefulness, practicality or efficiency be better terms for describing AI agents, unless you can come up with some sort of reasonable definition of intelligence? What do you think?

Yet we use the term "intelligent" every day. If intelligence really were abstract, our conversations would cease once the word "intelligence" was used, as we would all be confused by it. The boundaries are only vague in a philosophical discussion about intelligence. All I'm trying to do is get at the core meaning of intelligence, not its boundaries. It seems that most people here want to cling to the notion that humans, or organic matter, are somehow special, without providing any good reasons for thinking so.





Quoting Jack Cummins

Also, creating a body passable as a human would have to involve sentience, which is complicated. It may be possible to create partial sentience by means of organic parts, but this may end up as a weak human being, as in cloning. The other, more likely possibility is digital implants that make human beings part bot, which may be the scarier idea, recalling the science-fiction notion of zombies.

Why? What makes organic matter sentient? What is so special about organic matter that it allows sentience while inorganic matter does not?





Quoting RogueAI
These are extremely weighty questions that have been asked for a very long time, with no good answers given (I lean towards idealism, by the way). This is why I think AI is going to have profound impacts on society. We're not at all ready to determine whether these machines have minds, yet we are intimately familiar with our own minds and how we use them to make decisions.

Not really. It's just that humans have viewed themselves as special creations for most of our existence, or believed that creation itself is centered around us, so it is difficult to give up these notions that we are somehow special and that intelligence cannot be attributed to things that are not human, or even not organic.

RogueAI January 25, 2025 at 14:55 #963538
Quoting Harry Hindu
Not really. It's just that humans have viewed themselves as special creations for most of our existence, or believed that creation itself is centered around us, so it is difficult to give up these notions that we are somehow special and that intelligence cannot be attributed to things that are not human, or even not organic.


Well, we know personally that we are special because we know we have minds. We then assume other humans and high-order animals have them too. But machines, that's a totally different beast.

Any computer is at heart a collection of electronic switch-flipping, correct? How is turning switches on and off any kind of intelligence?
Harry Hindu January 25, 2025 at 15:18 #963543
Quoting RogueAI
Well, we know personally that we are special because we know we have minds. We then assume other humans and high-order animals have them too. But machines, that's a totally different beast.

But why? That's the question I'm asking. What makes machines different? What is a machine? Are there not biological machines?

Humans are not special because we know we have minds. Everything has something everything else does not have, or which makes it a member of one group and not another. That is nothing special. You seem to be saying it's special because you have it. This type of thinking, in culture and in politics, is the underlying cause of much of the violence in human history.


Quoting RogueAI
Any computer is at heart a collection of electronic switch-flipping, correct? How is turning switches on and off any kind of intelligence?

It's the cumulative effect of that electronic switching that is intelligence, not anything at the level of the electronics themselves - just as a neuron's electrical and chemical switching is not intelligence, but its combined effect with other neurons and the muscles in your body is; and just as a carbon atom is not organic, but forms organic molecules in its relations with other molecules.
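Here is a toy illustration of that cumulative effect (a Python sketch; the gate wiring is the standard NAND full adder, and the 4-bit example is mine): every operation below is a single switch-level step, and no single step is "addition", yet wired together the switches add numbers.

    # A full adder built entirely from NAND gates. The "addition" lives
    # in the arrangement of the switches, not in any individual switch.
    def nand(a, b):
        return 1 - (a & b)

    def full_adder(a, b, carry_in):
        t1 = nand(a, b)
        axb = nand(nand(a, t1), nand(b, t1))           # a XOR b
        t4 = nand(axb, carry_in)
        s = nand(nand(axb, t4), nand(carry_in, t4))    # sum bit
        carry_out = nand(t4, t1)
        return s, carry_out

    # Add two 4-bit numbers, e.g. 6 (0110) + 7 (0111), least bit first.
    a, b, carry, result = [0, 1, 1, 0], [1, 1, 1, 0], 0, []
    for x, y in zip(a, b):
        s, carry = full_adder(x, y, carry)
        result.append(s)
    print(result[::-1], "carry:", carry)  # [1, 1, 0, 1] carry: 0 -> 13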



Corvus January 25, 2025 at 16:30 #963555
Quoting Harry Hindu
Neither did your comment about AIs being overrated search engines.

If you look into the coding of AI, it is just a database of what the AI designers have typed onto hard drives in order to respond to users' input, with some customization. AI is a glorified search engine.

Quoting Harry Hindu
You cannot have a philosophical discussion with a search engine. The only other object I can have a philosophical discussion with is another human being. Does that not say something?

Exactly. But AI is designed to give users the illusion that they are having real-life conversations or discussions with it.

It says that we could still investigate and discuss what makes AI get users to project human minds onto it. It is still an interesting topic, I guess.

Quoting Harry Hindu
All I'm trying to do is get at the core meaning of intelligence, not its boundaries.

Yes, I am still waiting for your definition of intelligence. If you don't know what intelligence is, then how could you have asked whether AI is intelligent? Without a clear definition of intelligence, any answer would be meaningless.

The boundary of a concept is critical for analysing the logic of its implications and the legitimacy of its applications.

Manuel January 25, 2025 at 22:23 #963659
Quoting Harry Hindu
It is known how it is done, or else they wouldn't be able to consistently put people under anesthesia for surgery and have them wake up with no issues. The problem you are referring to is the mind-body problem, which is really a problem of dualism. If you think that the mind and body are separate things, then you do have a hard problem to solve. If you think that they are one and the same, just seen from different views, then you are less likely to fall victim to the hard problem.


That's what many anesthesiologists say. Yes they can put people to sleep, clearly, but the mechanism by which this works is not well understood. They can do something without understanding very well how the body reacts the way it does. No, I'm not a dualist. I'm a "realistic naturalist" in Galen Strawson's terms.

Quoting Harry Hindu
Function does not imply that a thing does only one thing; a function can include many tasks. What if I said that the brain's function is to adapt one's behaviors to new situations? That function would include many tasks. Both terms are used to refer to behavioral expectations.


So what's the benefit of using "function" instead of "process", or just saying what a thing does? Saying it's one of the processes of the brain does not carry the suggestion that it does a few main things and then some secondary things which are somehow less important. Sure, no term is perfect, but we can otherwise start believing that function is something nature does and attribute it to whatever fits the criteria, including computers.

Quoting Harry Hindu
Many people in this thread are saying that you can observe someone's behavior but their behavior can fool us into believing they are intelligent, implying that behavior is not intelligence, but symbolic of intelligence. So it seems to me that intelligence is a process of the mind, not the body. Which is it?


I agree. I personally think that it is more beneficial to think in terms of "this person has a mind like mine" than "a brain like mine". We deal with people on a daily level in mental terms, not neurophysiological terms. We could do the latter if we wanted, but it would be very cumbersome and we'd have to coin many technical terms.

Quoting Harry Hindu
It is not difficult to imagine a computer-robot that can be programmed to do the same thing.


Imagine, yes. Actually do? I think we're far off. The most we are doing with LLMs is getting a program to produce sentences that sound realistic, or to mesh images together.

But a parrot can string together sentences and we wouldn't say the parrot is behaving like a person.

Quoting Harry Hindu
Sounds like what humans do when communicating. You learned rules for using the scribbles - which letter follows which to spell a word correctly, and how to put words in order following the rules of grammar. It took you several years of immersing yourself in the use of your native language to understand the rules. The difference is that a computer can learn much faster than you. Does that mean it is more intelligent than you?


Here I just think this is the wrong view of language. It's the difference between a roughly empiricist approach to language "learning" and a rationalist one. We can say, for the sake of convenience, that babies "learn" languages, but they don't in fact learn them. Language grows from the inside, not unlike the way a child going through puberty "learns" to become a teenager. But let's put that aside.

OK, suppose I grant, for the sake of argument, that computers "learn" faster than we can. Why can't we say similar things about mirrors? Or that cars run faster than we do? Or that we fly more than penguins? If you grant this, then the issue is terminological.

Quoting Harry Hindu
Then, for you, there is a distinction between organic and inorganic matter in that one can be intelligent and the other can't. What reason do you have to believe that? Seriously, dig deep down into your mind and try to get at the reasoning for these claims you are making. The only question remaining here is what is so special about organic matter? If you can't say, then maybe intelligence is not grounded in substance, but in process.


No, not in principle, in terms of results. The point is that I believe we are astronomically far away from understanding the brain, much less the mind (an emergent property of brains). The brain is organic. Doesn't it make more sense to understand what intelligence and language are from studying human beings than from studying something we created? I mean, it would be strange to say that we should study cellphones to learn about language, or a radio to learn about the ear.





RogueAI January 25, 2025 at 23:01 #963682
Quoting Harry Hindu
It's the cumulative effect of that electronic switching that is intelligence


Let's explore this. Suppose there's a parallel world where computers work without software. Whenever a user wants the computer to do something, they turn the computer on, and all the electronic switches that make up the computer just randomly open and close in ways that produce the output the user wants. It just all happens by fantastic coincidence.

For example, in this parallel world, there are computers with hardly any circuits that are capable of passing bar exams, solving complex math problems, passing Turing tests, and acting as therapists, because they all just accidentally always give the right output. If the multiverse is sufficiently large and varied, this kind of world actually exists. So, are the computers in that world intelligent?
Jack Cummins January 26, 2025 at 11:49 #963779
Reply to Harry Hindu
You query what makes organic matter sentient? Presumably you, as a human being, are sentient. This means that you have the experience of an organic body, with features such as hunger, thirst and pain. Obviously these are limitations, but they involve experience, in the form of embodiment. And it is the experience of embodiment which leads to an understanding of suffering and needs. As non-sentient beings do not have needs, across the whole range of Maslow's hierarchy from the physical and social to self-actualization, they lack any understanding of other minds.
Jack Cummins January 26, 2025 at 12:27 #963782
Reply to Corvus
I am unsure of what self-reference entails, because I am not convinced that it comes down to knowing one's name. Identity involves so much more of lived experience and goes beyond the persona itself. Some of it comes down to processing, and in some ways a computer may be able to do that. I wonder if artificial intelligence would have dream sleep, which is essential to subconscious processing, and what such dreams would entail. As the Philip K. Dick novel title asks, 'Do Androids Dream of Electric Sheep?'

A sense of self and self-awareness involves so much of the fantasy aspects of identity. We don't just assimilate facts about ourselves but the meaning of those facts. Self is not just about raw data but hopes, aspirations and intentions.
GrahamJ January 26, 2025 at 13:34 #963789
Reply to Jack Cummins
Modern cars are embodied AI. I said more here:
https://thephilosophyforum.com/discussion/comment/961175

GrahamJ January 26, 2025 at 13:44 #963790
Reply to Jack Cummins
Have you read what psychologists say about the self?
I have read Damasio's The Feeling of What Happens. I've also read Anil Seth's Being You, and I preferred the latter. Seth's decomposition of the self looks like this.
  • Bodily self: the experience of being and having a body.
  • Perspectival self: the experience of first-person perspective of the world.
  • Volitional self: the experiences of intention and of agency.
  • Narrative self: the experience of being a continuous and distinctive person.
  • Social self: the experience of having a self refracted through the minds of others.

I am not entirely happy with Seth's account of the self (which is a chapter, not just 5 bullet points!) but I find it easier to understand Seth than Damasio.

(mostly copied from my comment https://thephilosophyforum.com/discussion/comment/946445)
Jack Cummins January 26, 2025 at 14:36 #963799
Reply to GrahamJ
Having read the post which you linked me to, I am not sure that a car being programmed to self-care gives it embodiment. My phone beeps when its battery is low, but that doesn't mean that it has a mind or self in any meaningful way. A car doesn't have experiences in the sense of pleasure or suffering. It won't enjoy riding down the street or feel distress when low on fuel. It may become dysfunctional, of course, such as in adverse temperature extremes.

A car which could reproduce would be something indeed. I wonder if it would have sex with other cars to do this and whether there would be male and female cars, even gay cars. If they had a sense of sexual attraction it may be the sign that they had achieved embodiment.
Jack Cummins January 26, 2025 at 14:59 #963803
Reply to GrahamJ
I haven't read the two books which you mention, but I do read on the topic of the self in psychology. I am particularly interested in psychoanalysis and the ideas of Jung, which involve the subconscious and Jung's idea of the collective unconscious.

The concept of the collective unconscious is significant in relation to the self and artificial intelligence because it involves an intersubjective link with other minds. Jung has some ambiguity over whether this is a process in nature or something more. If it is seen as something more, or supernatural, it would be possible to see artificial intelligence as having a part in this, because a machine could have a spirit. Nevertheless, there may be some problem in seeing spirit as separate from nature. It would make computers seem like divine beings or gods.

Aside from this, one book which I read recently was Philip Ball's 'The Book of Minds: Understanding Ourselves and Other Beings, from Animals to Aliens' (2022). In particular, it looks at how humans infer the existence of other minds. His central argument is that we need to move on from treating the human mind as a standard against which all others should be judged. I am not sure that I agree entirely, but I can see that we base so much on anthropocentric assumptions. The book is particularly useful as it surveys the literature, including the ideas of Damasio and Nagel's 'What Is It Like to Be a Bat?'. Humans haven't the ability to know what it feels like to be other than human.
GrahamJ January 26, 2025 at 15:24 #963805
Quoting Jack Cummins
Humans haven't the ability to know what it feels like to be other than human.

OK. So how do you know that
Quoting Jack Cummins
A car doesn't have experiences in the sense of pleasure or suffering.

?

I don't think cars experience pleasure or suffering myself, but I don't know for sure. And I sometimes think my real attitude is "I bloody well hope they don't because I don't want to have to worry about them."
Harry Hindu January 26, 2025 at 16:07 #963812
Quoting RogueAI
Let's explore this. Suppose there's a parallel world where computers work without software. Whenever a user wants the computer to do something, they turn the computer on, and all the electronic switches that make up the computer just randomly open and close in ways that produce the output the user wants. It just all happens by fantastic coincidence.

For example, in this parallel world, there are computers with hardly any circuits that are capable of passing bar exams, solving complex math problems, passing Turing tests, and acting as therapists, because they all just accidentally always give the right output. If the multiverse is sufficiently large and varied, this kind of world actually exists. So, are the computers in that world intelligent?

I can't imagine a computer without software. If it does not have software, it isn't a computer. I don't see how such a device could pass bar exams or solve math problems. It needs software to do this - something to direct the switching into producing meaningful output.

Random switching isn't something I would call intelligent. Intelligence is goal-directed, and we can program a computer with goals.

I don't see how any of what you just said helps as it has nothing to do with anything I have said.

Harry Hindu January 26, 2025 at 16:07 #963813
Quoting Jack Cummins
You query what makes organic matter sentient? Presumably you, as a human being, are sentient. This means that you have the experience of an organic body, with features such as hunger, thirst and pain. Obviously these are limitations, but they involve experience, in the form of embodiment. And it is the experience of embodiment which leads to an understanding of suffering and needs. As non-sentient beings do not have needs, across the whole range of Maslow's hierarchy from the physical and social to self-actualization, they lack any understanding of other minds.


None of this explains what makes organic matter sentient. Yes, I know I'm sentient, but why am I sentient? What is it about me that makes me sentient, other than me, or some robot, just saying so?

Harry Hindu January 26, 2025 at 16:07 #963814
Quoting Corvus
If you look into the coding of AI, it is just a database of what the AI designers have typed onto hard drives in order to respond to users' input, with some customization. AI is a glorified search engine.

And your responses to me and to everyone you ever speak to are a product of your history of interacting with English speakers. Many people claim that we think in our native language (I don't necessarily think we do, but this is their claim). Is that any different from what AI does? One could say that the visuals of written words (scribbles) and the sounds of words (utterances) are etched in your brain. The words on this forum are typed, and by reading them you might learn new ways of using words and adapt your responses in the future. Again, how is what you say AI does any different from what you are doing right now, reading this? Are you a glorified search engine? What is needed to make one more than a glorified search engine?

Quoting Corvus
Exactly. But AI is designed to give users the illusion that they are having real-life conversations or discussions with it.

It's not designed to make users hallucinate. It is a tool designed to provide information using everyday language, instead of making you search through the irrelevant links, like ads, that appear in a search.

Quoting Corvus
Yes, I am still waiting for your definition of intelligence. If you don't know what intelligence is, then how could you have asked whether AI is intelligent? Without a clear definition of intelligence, any answer would be meaningless.

The boundary of a concept is critical for analysing the logic of its implications and the legitimacy of its applications.

I did define intelligence earlier in the thread:

Quoting Harry Hindu
Let's start off with a definition of intelligence as: the process of achieving a goal in the face of obstacles. What about this definition works and what doesn't?
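To show what that definition looks like in practice, here is a minimal sketch (a made-up grid world in Python; breadth-first search stands in for "the process"): the goal is the cell marked G, the obstacles are the walls marked #, and intelligence, on this working definition, is whatever process gets from S to G around them.

    from collections import deque

    # Goal-achievement in the face of obstacles: find the shortest path
    # from 'S' to 'G' around the '#' walls. The definition cares about
    # the process, not about what the process is made of.
    grid = ["S.#.",
            ".##.",
            "...G"]

    def solve(grid):
        rows, cols = len(grid), len(grid[0])
        start = next((r, c) for r in range(rows) for c in range(cols)
                     if grid[r][c] == "S")
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            (r, c), steps = queue.popleft()
            if grid[r][c] == "G":
                return steps                 # goal achieved
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != "#" and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append(((nr, nc), steps + 1))
        return None                          # the obstacles win

    print(solve(grid))  # 5 steps

Whether such a process deserves the word "intelligence" is exactly what is in dispute, but at least the definition is concrete enough to test.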
Harry Hindu January 26, 2025 at 16:07 #963815
Quoting Manuel
That's what many anesthesiologists say. Yes they can put people to sleep, clearly, but the mechanism by which this works is not well understood. They can do something without understanding very well how the body reacts the way it does. No, I'm not a dualist. I'm a "realistic naturalist" in Galen Strawson's terms.

I think you are confusing how anesthesia works with how the brain and mind are related. Those are two separate issues. If you Google "how does anesthesia work", you will find many articles that do not seem to exhibit any kind of doubt about how anesthesia works on the brain. How the brain relates to the mind is a separate and hard problem. How anesthesia works is not a hard problem. If it were, we would be having a lot more issues with people going under.


Quoting Manuel
So what's the benefit of using "function" instead of "process", or just saying what a thing does? Saying it's one of the processes of the brain does not carry the suggestion that it does a few main things and then some secondary things which are somehow less important. Sure, no term is perfect, but we can otherwise start believing that function is something nature does and attribute it to whatever fits the criteria, including computers.

Are you saying that it is sensible to call "intelligence" a thing, or an object, instead of what things do? When you point to intelligence, what are you pointing at - a thing, or a behavior or act?

Quoting Manuel
I agree. I personally think that it is more beneficial to think in terms of "this person has a mind like mine" than "a brain like mine". We deal with people on a daily level in mental terms, not neurophysiological terms. We could do the latter if we wanted, but it would be very cumbersome and we'd have to coin many technical terms.

Sure, because we have direct access to our minds and only indirect access to our own brains (we can only view our own brains via a brain scan or MRI, or an arrangement of mirrors when having brain surgery).

Quoting Manuel
Imagine, yes. Actually do? I think we're far off. The most we are doing with LLMs is getting a program to produce sentences that sound realistic, or to mesh images together.

But a parrot can string together sentences and we wouldn't say the parrot is behaving like a person.

Not behaving like a person, but behaving intelligently. Does every person behave intelligently? If not, then being a person does not make you necessarily intelligent. They are separate properties. What are the characteristics of an intelligent person, or thing?

All you have to do is research some of the recent news stories about the advances made in robotics to know that it is not that far off:
https://www.technologyreview.com/2025/01/03/1108937/fast-learning-robots-generative-ai-breakthrough-technologies-2025/

I don't see a difference between brain and mind. I think we both have similar brains and minds. My brain and mind are less similar to a dog's or cat's brain and mind. Brains and minds are the same thing seen from different views, in a similar way that the Earth is the same planet even though it looks flat from its surface and spherical from space.

Quoting Manuel
Here I just think this is the wrong view of language. It's the difference between a roughly empiricist approach to language "learning" and a rationalist one. We can say, for the sake of convenience, that babies "learn" languages, but they don't in fact learn them. Language grows from the inside, not unlike the way a child going through puberty "learns" to become a teenager. But let's put that aside.

OK, suppose I grant, for the sake of argument, that computers "learn" faster than we can. Why can't we say similar things about mirrors? Or that cars run faster than we do? Or that we fly more than penguins? If you grant this, then the issue is terminological.

Learning a language (or being intelligent in general) requires both an empirical and a rational approach. You cannot have one without the other. You need to be able to see, hear, or touch (in the case of braille) to learn a language. You have to be able to observe its use. You also need to be able to categorize your observations into a sensible view to be able to try and use it yourself and respond appropriately. The empiricism vs. rationalism debate is a false dichotomy.

I don't know what you mean by "grows from the inside". What does a language you don't know look and sound like? Scribbles and sounds. So it does not seem to me that language comes from the inside. You have to learn what those scribbles and sounds mean to be able to use them. What comes from the inside is the power to categorize your experiences of the world, which include language, and to respond to them in meaningful ways, whether by scribbling something, saying something, or just doing something.

Quoting Manuel
No, not in principle, in terms of results. The point is that I believe we are astronomically far away from understanding the brain, much less the mind (an emergent property of brains). The brain is organic. Doesn't it make more sense to understand what intelligence and language are from studying human beings than from studying something we created? I mean, it would be strange to say that we should study cellphones to learn about language, or a radio to learn about the ear.

That doesn't sound strange at all. Is not studying what humans have created part of studying humans? Humans are calling it artificial intelligence. Are we to believe them when studying them? The other examples are nonsensical; the inventor of the radio and makers of mirrors are not claiming that their devices are intelligent.

None of what you have said explains what makes organic matter special in that it has intelligence and inorganic matter does not.

Jack Cummins January 26, 2025 at 16:51 #963820
Reply to Harry Hindu
I am sure that there are objective means of demonstrating sentience. Cell division and growth are aspects of this. Objects don't grow of their own accord and don't have DNA. The energy field of sentient beings is also likely to be different, although artificial intelligence and computers do have energy fields as well.

The creation of a nervous system may be possible and even the development of artificial eyes. However, the actual development of sensory perception is likely to be a lot harder to achieve, as an aspect of qualia which may not be reduced to bodily processes completely.
Jack Cummins January 26, 2025 at 17:04 #963822
Reply to GrahamJ
When it comes to knowing what it is like to be anything other than human, it comes down to imaginative speculation. As far as the car is concerned, many people almost treat them like people. This is based on fantasy and conceptions of imaginary minds.

I always imagined my teddy bears as having minds, because I grew up with a mother who acted in plays. She taught me to play and fantasise and to see my toys as characters. On some occasions, I think that people misinterpreted my understanding, thinking that I believed that my bears had minds and thoughts. These people may have thought that I was psychotic, and when I tried to explain that the imaginary minds I gave to the bears were fantasy they often seemed confused and perplexed.

Human beings sometimes engage in fantasy projection and the construction of imaginary minds. The concept of imaginary minds is so different from that of possible minds, because the possible ones may exist now or at some point. Creating a mind for a car or teddy bear is not possible unless they could be given a lifeforce independently of human projection.
Corvus January 26, 2025 at 17:08 #963824
Quoting Harry Hindu
Again, how is what you are saying AI does any different from what you are doing right now reading this? Are you a glorified search engine? What is needed to make one more than a glorified search engine?

I wonder if AI can understand and respond in a witty and appropriate way to user inputs in the form of metaphors or jokes. I doubt they can. They often used to respond in totally inappropriate ways even to normal questions, with replies which didn't make sense.

We often say that one of the sure signs of mastering a language is when one can fully utilize and understand the dialogue in jokes and metaphors.

Quoting Harry Hindu
It's not designed to hallucinate users. It is a tool designed to provide information using everyday language use instead of searching through irrelevant links that appear in your search, like ads.

It is perfectly fine when AI or ChatBot users take them as informational assistants for searching for the data they are looking for. But you notice some folks talk as if they have human minds just because they respond in ordinary conversational language, which is pre-programmed by the AI developers and computer programmers.

Quoting Harry Hindu
I did define intelligence earlier in the thread:

Let's start off with a definition of intelligence as: the process of achieving a goal in the face of obstacles. What about this definition works and what doesn't?

I am not sure the definition is logically or semantically correct, or fit for use. There are obscurities and absurdities in the definition. First of all, it talks about achieving a goal. How could machines try to achieve a goal, when they have no desire or willpower in doing so?

The process of achieving a goal? Here again, what do you mean by process? Is intelligence always in the form of a process? Does it have a start and an end? So what is the start of intelligence? What is the end of intelligence?

Corvus January 26, 2025 at 17:50 #963836
Quoting Jack Cummins
I am unsure of what self-reference entails because I am not convinced that it comes down to knowing one's name. Identity involves so much more of lived experience and goes beyond the persona itself. Some of it comes down to processing, and in some ways a computer may be able to do that. I wonder if artificial intelligence would have dream sleep, which is essential to subconscious processing, and what such dreams would entail. As the Philip K. Dick novel title asks, 'Do Androids Dream of Electric Sheep?'

I suppose AI could be programmed to project what the central processor is processing, in the form of dreams, imaginations and remembrances, hopes and wishes, onto monitors with a special-effects sound reproduction system. It could actually be quite interesting to see what type of data would be output to the screens and sound system from the AI processors.

However, the question still remains whether such dreams, imaginations, remembrances, hopes and wishes, or even depressions, would be genuine in nature. The word "artificial" in AI reminds us that they are ultimately the creation of human intelligence, not genuine intelligence.


Quoting Jack Cummins
A sense of self and self-awareness involves so much about the fantasy aspects of identity. We don't just assimilate facts about ourselves but the meaning of those facts. Self is not just about raw data but hopes, aspirations and intentions.

println("Hello world!!")
println("Agreed")
println("Have a good day")
println("Logged out")
Jack Cummins January 26, 2025 at 22:26 #963868
Reply to Corvus
Artificial intelligence does have memory, so it is likely that this could be used as a basis for creativity. The central aspects of consciousness may be harder to create. I would imagine simulated dream states as showing up as fragmented images and words. It would be rather surreal.

I did see a session of AI seance advertised. It would probably involve attempts to conjure up disembodied spirits or appear to do so.

As far as AI goes, it would be good to question it about its self and identity. I was rather tempted to try this on a phone call which was artificial intelligence. As it was, it struggled with some of the questions which I was asking, and 'the lady' kept saying, 'I did not quite catch that question.' It felt so obvious that the person was automated and had no reflective ability whatsoever. But I did get the call back which she said I would get, so it was efficient; real people often say that a call will come and it doesn't happen.
Manuel January 26, 2025 at 22:35 #963871
Quoting Harry Hindu
I think you are confusing how anesthesia works with how the brain and mind are related. Those are two separate issues. If you Google, "how does anesthesia work" you will find many articles that do not seem to exhibit any kind of doubt about how anesthesia works on the brain. How the brain relates to the mind is a separate and hard problem. How anesthesia works is not a hard problem. If it were we would be having a lot more issues with people going under.


Sure, I am agreeing that anesthesia works. We also know how to replace limbs, like arms, and get people having a hand "functioning" again, despite not having a clue how willed action works.

But you have perspectives like these, which are not uncommon:

https://www.sciencealert.com/for-over-150-years-how-general-anaesthesia-works-has-eluded-scientists-we-re-finally-getting-close

Now we are closer to getting a better understanding of how it works. So we've used it for 150 years without knowing why it works as it does.

The point is that people do things without knowing how they are done. This includes acts of creativity, aspects of intelligence, willed action, etc.

Quoting Harry Hindu
Are you saying that it is sensible to call, "intelligence" a thing, or an object, instead of what things do? When you point to intelligence, what are you pointing at - a thing or a behavior or act?


If I am pointing at something, it could be an act, it could be an idea, it could be a calculation. I wouldn't say that a program is intelligent, nor a laptop. That's kind of like saying that when a computer loses power and shuts off, it is "tired". The people who designed the program and the laptop are.

Quoting Harry Hindu
Not behaving like a person, but behaving intelligently. Does every person behave intelligently? If not, then being a person does not make you necessarily intelligent. They are separate properties. What are the characteristics of an intelligent person, or thing?


Behavior is an external reaction of an internal process. A behavior itself is neither intelligent nor unintelligent; it depends on what happened that led to that behavior.

What characteristics make a person intelligent? Many things: problem solving, inquisitiveness, creativity, etc. etc. There is also the quite real issue of different kinds of intelligence. I think that even having a sense of humor requires a certain amount of intelligence, a quick wit, for instance.

It's not trivial.

Quoting Harry Hindu
I don't see a difference between brain and mind. I think we both have similar brains and minds. My brain and mind are less similar to a dog or cat's brain and mind. Brains and minds are the same thing, just viewed differently, in a similar way that Earth is the same planet even though it looks flat from its surface and spherical from space.



No difference? A brain in isolation does very little. A mind needs a person, unless one is a dualist.

Quoting Harry Hindu
That doesn't sound strange at all. Isn't studying what humans created part of studying humans? Humans are calling it artificial intelligence. Are we to believe them when studying them? The other examples are nonsensical. Again, the inventor of the radio and mirror-makers are not claiming that their devices are intelligent.

None of what you have said explains what makes organic matter special in that it has intelligence and inorganic matter does not.


But if they claimed it then it would be true? No. We program computers, not people. We can't program people, we don't know how to do so. Maybe in some far off future we could do so via genetics.

If someone is copying Hamlet word for word into another paper, does the copied Hamlet become a work of genius or is it just a copy? Hamlet shows brilliance, copying it does not.


Corvus January 27, 2025 at 12:05 #963937
Quoting Jack Cummins
Artificial intelligence does have memory, so it is likely that this could be used as a basis for creativity. The central aspects of consciousness may be harder to create. I would imagine simulated dream states as showing up as fragmented images and words. It would be rather surreal.

Of course AI can have memory, and they are very good at memorizing. In fact, all the responses from AI to the questions put forward by users come from their memories, and a large part of the idea of self seems to be based on one's past memories. When a person loses all his/her memories, the idea of self would be gone too.

I agree on the point that the central aspects of consciousness would be harder to create, if possible at all. It prompts me to ask, actually: what is the central aspect of consciousness? I am not sure at this moment.

Quoting Jack Cummins
I did see a session of AI seance advertised. It would probably involve attempts to conjure up disembodied spirits or appear to do so.

"disembodied spirits"? Do spirits exist? Of they did, what form of substance would they be?

Quoting Jack Cummins
As far as AI goes, it would be good to question it about its self and identity. I was rather tempted to try this on a phone call which was artificial intelligence.

As for the self-identity of informational devices, I too sometimes fall into the illusion that they have some sort of mental states. When my mobile phone disappears from my reach when I need it desperately, I used to think this bloody phone is trying to rebel against me by absconding without notice. When I find it under the desk, or in the corner of the kitchen shelf, or even under the car seat, I then realise it was my forgetfulness or carelessness in losing track of its last placement, rather than the mobile phone's naughtiness.

Harry Hindu January 27, 2025 at 15:52 #963964
Quoting Manuel
The point is that people do things without knowing how they are done. This includes acts of creativity, aspects of intelligence, willed action, etc.

Fair enough. We seem to agree that understanding, like intelligence, comes in degrees. When someone wakes up during surgery there is something different about the situation than what we currently understand is happening, and figuring that out gives us a better understanding. Although there is the old phrase, "You only get the right answer after making all possible mistakes", which we should consider. :smile:

Quoting Manuel
If I am pointing at something, it could be an act, it could be an idea, it could be a calculation. I wouldn't say that a program is intelligent, nor a laptop. That's kind of like saying that when a computer loses power and shuts off, it is "tired". The people who designed the program and the laptop are.

What does it mean for you to be tired if not having a lack of energy? What are you doing when you go to sleep and eat? What would happen if you couldn't find food? Wouldn't you "shut off" after the energy stores in your body were exhausted?

All the examples you have just given are examples of a type of process - an intelligent process, not a thing.

Also notice that every property of a computer you have provided I have also been able to point to humans as exhibiting that same property in some way, and vice versa. I have not been using mirrors and atoms as interchangeable examples. I have been using computers and robots. What does that say about what intelligence is?


Quoting Manuel
Behavior is an external reaction of an internal process. A behavior itself is neither intelligent nor unintelligent; it depends on what happened that led to that behavior.

What characteristics make a person intelligent? Many things: problem solving, inquisitiveness, creativity, etc. etc. There is also the quite real issue of different kinds of intelligence. I think that even having a sense of humor requires a certain amount of intelligence, a quick wit, for instance.

It's not trivial.

I agree. Again, we seem to agree that intelligence comes in degrees, where various humans and animals possess various levels of intelligence commensurate with their exposure to the world and the structure and efficiency of their brain, and an individual person can be more or less intelligent in certain fields of knowledge commensurate with their exposure to those fields of knowledge.

I also agree that the key characteristics of intelligence are problem-solving (achieving goals in the face of obstacles), curiosity and creativity.

At this point I would reiterate what I said before: modern computers possess a limited degree of these characteristics, and if we designed a computer-robot to receive input directly from the world instead of via humans, and to use that information to accomplish its own goals of homeostasis, survival and making copies of itself to preserve its existence through time, the robot would possess intelligence more like our own. I should also point out that an advanced species observing humans and their robot and computer creations might think that we are not intelligent, or have a lower degree of intelligence, and that we are designing dumb machines that perceive and respond to the world in the same limited way we do.

In a way, using the Allegory of the Cave, computers would be the entities chained in the cave, and humans would be creating the shadows in the cave that are not representative of the world as it is, but only of the world humans want them to see. By changing their design and programming, the computers will access the world more directly rather than through the goals of humans.

Quoting Manuel
No difference? A brain in isolation does very little. A mind needs a person, unless one is a dualist.

I don't see how this contradicts what I said. Thinking there is a difference is a dualist's job. Monists see them as one and the same, but from different perspectives. A brain functioning in isolation is a mind without a person, and is an impossible occurrence, which is why I pointed out before that the distinction between empiricism and rationalism is a false dichotomy. The form your reason takes is sense data you have received via your interaction with the world. You can only reason, or think, in shapes, colors, smells, sounds, tastes and feelings. The laws of logic take the form of a relation between scribbles on a screen which corresponds to a process in your mind (a way of thinking).


Quoting Manuel
But if they claimed it then it would be true? No. We program computers, not people. We can't program people, we don't know how to do so. Maybe in some far off future we could do so via genetics.

If someone is copying Hamlet word for word into another paper, does the copied Hamlet become a work of genius or is it just a copy? Hamlet shows brilliance, copying it does not.

What does it mean to "program" something if not to design it to behave and respond in certain ways? Natural selection programmed humans via DNA. Humans are limited by their physiology and degree of intelligence, just as a computer/robot is limited by its design and intelligence (efficiency at processing inputs to produce meaningful outputs). People can be manipulated by feeding them false information. You learn to predict the behavior of people you know well and use that to some advantage, such as avoiding certain subjects when conversing with them.

If there are tools that allow one to find out whether someone used AI or typed something on their own, then AI does not copy us word for word, or else there wouldn't be a way to distinguish between them. AI learns to use words in the way it has observed them being used before, the same way you do. The characteristic of intelligence where I would agree with you that modern AI is lacking compared to us is creativity. But this does not contradict anything that I have said: intelligence comes in degrees and has a number of characteristics, though not an infinite number (mirrors are not intelligent), that some entity has more or less of, making it more or less intelligent. Computers would have a small degree of intelligence, and designing them to interact directly with the world to achieve their own goals would be a step in increasing the degree to which they are intelligent.
Harry Hindu January 27, 2025 at 15:52 #963965
Quoting Corvus
I wonder if AI can understand and respond in a witty and appropriate way to user inputs in the form of metaphors or jokes. I doubt they can. They often used to respond in totally inappropriate ways even to normal questions, with replies which didn't make sense.

Sounds like you at a young age when you were trying to learn a language.

Quoting Corvus
We often say that one of the sure signs of mastering a language is when one can fully utilize and understand the dialogue in jokes and metaphors.

I wouldn't say that getting a joke is a sign you have mastered a language. The speaker or writer could be using words in new ways that the listener or reader has not heard or seen used that way before. Language evolves. New metaphors appear. We add words to our language. New meanings attach to existing words in the form of slang, etc. It seems to me that learning one's language is an ever-evolving process.

I would suggest that you go back in your mind to the time when you were learning your native language and describe what it was like, how you learned to use the scribbles and sounds, etc., and then explain what is different about how AI is learning to use language. I would suggest that the biggest difference is the way AI and humans interact with the world, not in some underlying structure of organic vs inorganic.


Quoting Corvus
It is perfectly fine when AI or ChatBot users take them as informational assistants for searching for the data they are looking for. But you notice some folks talk as if they have human minds just because they respond in ordinary conversational language, which is pre-programmed by the AI developers and computer programmers.

I wouldn't say that developers are pre-programming a computer to respond to ordinary language use; rather, they have programmed it to learn current ordinary language use, in the same way you were not programmed with a native language when you were born. You were born with the capacity to learn language. LLMs will evolve as our language evolves without the code having to be rewritten by hand: further training updates what the model has learned, just as you update your own 'programming' when you encounter new uses of words, or learn a different language.
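
As a rough illustration of that 'learning usage' point, here is a minimal sketch, assuming only a toy bigram counter rather than anything like a production LLM; the sample corpus and all names are invented:

from collections import Counter, defaultdict
import random

def learn_usage(corpus: str):
    # Count which word follows which in the observed text;
    # no reply here is hand-coded by a developer.
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word: str) -> str:
    # Pick a next word weighted by how often it was observed.
    seen = follows.get(word)
    if not seen:
        return "..."
    choices, weights = zip(*seen.items())
    return random.choices(choices, weights=weights)[0]

usage = learn_usage("the cat sat on the mat and the cat ran")
print(predict_next(usage, "the"))  # e.g. "cat", learned from observation

Feeding it different text changes its behaviour without anyone editing the code, which is the sense in which what was learned lives in the counts (or, in an LLM, the parameters) rather than in the program.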

Quoting Corvus
I am not sure the definition is logically or semantically correct, or fit for use. There are obscurities and absurdities in the definition. First of all, it talks about achieving a goal. How could machines try to achieve a goal, when they have no desire or willpower in doing so?

What is "desire" or "will power", if not an instinctive need to respond to stimuli that are obstacles to homeostasis? Sure, modern computers can only engage in achieving our goals, not their own. But that is a simple matter of design and programming.

Quoting Corvus
The process of achieving a goal? Here again, what do you mean by process? Is intelligence always in the form of a process? Does it have a start and an end? So what is the start of intelligence? What is the end of intelligence?

Well, I did ask if intelligence is a thing or a process. I see it more as a process. If you see it more as a thing, then I encourage you to ask yourself the same questions you are asking me - where does intelligence start and end? I would say that intelligence, as a process, starts when you wake up in the morning and stops when you go to sleep.

Harry Hindu January 27, 2025 at 15:52 #963966
Quoting Jack Cummins
I am sure that there are objective means of demonstrating sentience. Cell division and growth are aspects of this. Objects don't grow of their own accord and don't have DNA. The energy field of sentient beings is also likely to be different, although artificial intelligence and computers do have energy fields as well.

The creation of a nervous system may be possible and even the development of artificial eyes. However, the actual development of sensory perception is likely to be a lot harder to achieve, as an aspect of qualia which may not be reduced to bodily processes completely.

What role do qualia play in perception? Are colors, shapes, sounds, feelings, smells and tastes the only forms qualia take? If we take the mind as a type of working memory that contains bits of information we refer to as qualia, and give a robot a type of working memory in which the qualia may take different forms but do the same thing, informing the robot/organism of some state of affairs relative to its own body to enable it to engage in meaningful actions, then what exactly is missing other than the form the qualia take in working memory?
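
A toy sketch of that 'working memory informing action' picture, with all names invented: the entries here are numbers and booleans rather than colours or smells, but they play the same informing role relative to the robot's own body:

from dataclasses import dataclass

@dataclass
class WorkingMemory:
    battery_level: float   # stands in for something like hunger
    obstacle_ahead: bool   # stands in for a visual quale

def choose_action(wm: WorkingMemory) -> str:
    # Select a meaningful action from the current state of affairs
    # relative to the robot's own body.
    if wm.battery_level < 0.2:
        return "seek charger"
    if wm.obstacle_ahead:
        return "turn away"
    return "explore"

print(choose_action(WorkingMemory(battery_level=0.1, obstacle_ahead=False)))
# -> "seek charger"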

RogueAI January 27, 2025 at 15:57 #963967
Quoting Corvus
When a person loses all his/her memories, the idea of self would be gone too.


Amnesia is the destruction of self? And also, if I lose 90% of my memories, am I 90% less a self?
Manuel January 28, 2025 at 03:09 #964073
Quoting Harry Hindu
Fair enough. We seem to agree that understanding, like intelligence, comes in degrees. When someone wakes up during surgery there is something different about the situation than what we currently understand is happening, and figuring that out gives us a better understanding. Although there is the old phrase, "You only get the right answer after making all possible mistakes", which we should consider.


Quite. I suppose my "mitigated skepticism" always forces me to say that's the best explanation we have for now. But it could be quite wrong or it could be replaced given newer theories. But sure, we are getting closer and closer on these things.

Quoting Harry Hindu
What does it mean for you to be tired if not having a lack of energy? What are you doing when you go to sleep and eat? What would happen if you couldn't find food? Wouldn't you "shut off" after the energy stores in your body were exhausted?


Here's the thing, which is tricky, I admit, but a real issue. We can say we "shut down" when we go to sleep, just as we can say the rocket went to the heavens, or that we are recharging our energy when we eat.

That's fine. But it's verbal. If we want to be literal, we'd have to put down, say, the technical biochemical explanation of what sleep encompasses. Then we would have something like a scientific definition of sleep. But then we have to see if the scientific explanation exhausts everything about sleeping. I am doubtful that we can reduce everything to scientific terms.

We are borrowing words we use for computers and applying them to ourselves. This computerizes us, and in turn we begin to think machines share in what makes us people. We break down the machine/human barrier using these words, and it merits caution.

Quoting Harry Hindu
I have been using computers and robots. What does that say about what intelligence is?


I think one can say that a person does calculations ("computations" if you will) or engages in processes of reasoning, or even inference to the best explanation. But what we do and what computers do are not the same thing. The resemblance is superficial. We can say that a person kind of uses a "search engine" when he is looking for a word he can't remember. But it's not a literal search engine; it's something else, related to linguistics and psychology.

Again, I don't think copying something is at all the same as being the same thing. The end results may look the same, but the ways we get the information are very different, involving concepts, folk psychology, semantics, neurology and who knows what else. Computers work with programs made by people.

We don't have programs like computers have them. I mean you could use the word "program" if you want, but it does not seem to me to be the same thing at all.

Quoting Harry Hindu
A brain functioning in isolation is a mind without a person, and is an impossible occurrence, which is why I pointed out before that the distinction between empiricism and rationalism is a false dichotomy. The form your reason takes is sense data you have received via your interaction with the world. You can only reason, or think, in shapes, colors, smells, sounds, tastes and feelings. The laws of logic take the form of a relation between scribbles on a screen which corresponds to a process in your mind (a way of thinking).


I mean, you can see brains in isolation in jars in many laboratories all over the world. But would people say there are minds in jars? A mind needs a person (with a brain of course), and a person can be in isolation from other people. Look at the phenomenon of feral children, for instance.

Quoting Harry Hindu
Natural selection programmed humans via DNA. Humans are limited by their physiology and degree of intelligence, just as a computer/robot is limited by its design and intelligence (efficiency at processing inputs to produce meaningful outputs).


I partially agree. I do believe that humans are limited by physiology and degree of intelligence, sure. Computers are "limited" in the sense that the programs we add to them are limited by the limitations we have due to our genetic makeup.

I just don't see how we can ascribe intelligence to a computer because its output looks like intelligence. Again, for me, this is akin to saying that submarines really "swim" and that airplanes "fly". Yeah, you can say that. But it's verbal. In Hebrew airplanes "glide"; in French, submarines "navigate". These are ways of speaking, not factual matters.
180 Proof January 28, 2025 at 04:16 #964082
@Jack Cummins

re: AI, Consciousness, Universe, etc ...
Corvus January 28, 2025 at 06:42 #964098
Quoting RogueAI
Amnesia is the destruction of self? And also, if I lose 90% of my memories, am I 90% less a self?


It sounds highly likely.
Corvus January 28, 2025 at 11:21 #964124
Quoting Harry Hindu
What is "desire" or "will power", if not an instinctive need to respond to stimuli that are obstacles to homeostasis? Sure, modern computers can only engage in achieving our goals, not their own. But that is a simple matter of design and programming.

Desire or willpower is an instinctive need which is the base of all mental operations in living beings. Obviously AI is incapable of that mental foundation in its operation, due to the fact that it is created by humans as machinery in structure and design. Therefore its operations are purely artificial and mechanistic procedures, customized and designed to assist with human chores.

Any projection of human minds onto AI by some folks sounds not far from shamanistic beliefs and religious propaganda.

Quoting Harry Hindu
Well, I did ask if intelligence is a thing or a process. I see it more as a process. If you see it more as a thing, then I encourage you to ask yourself the same questions you are asking me - where does intelligence start and end? I would say that intelligence, as a process, starts when you wake up in the morning and stops when you go to sleep.

Intelligence is neither a process nor a thing. It is a mental capability of living beings with the organ called the brain.

Calling intelligence a process which starts in the morning and ends at night when the agent sleeps, and calling AI machines intelligent, all sounds absurd. As mechanical bodily structures, AI machines don't sleep. They can be put into stand-by mode, which is wrongly referred to as "sleep" by some folks.

But even for humans or animals, saying that one is intelligent when awake but unintelligent when asleep does not seem to make sense.
Corvus January 28, 2025 at 12:57 #964136
Quoting Harry Hindu
I would suggest that you go back in your mind to the time when you were learning your native language and describe what it was like, how you learned to use the scribbles and sounds, etc., and then explain what is different about how AI is learning to use language. I would suggest that the biggest difference is the way AI and humans interact with the world, not in some underlying structure of organic vs inorganic.


You would know yourself that, when you were learning your native language, it was by interaction with the other folks around you and by observing the world you were living in. AI and machines don't have that type of interaction or real-life experience.

AI and machine responses come from the developers designing and building the system through extensive data storage and the implementation of search algorithms.

Another critical point about AI's responses is that they are predictable within the technological limitations and preprogramming specs. To new users they may appear to be intelligent and creative, but from the developers' point of view, the whole thing is pre-planned and predicted through debugging and simulation.

Finally, when humans have conversations or discussions, the linguistic content they exchange in the process creates emotional states which stimulate their creativity and imagination. AI doesn't have that capability either. The emotional states AI exhibits via pre-programmed robot facial expressions are a merely mechanistic and one-dimensional act of flashing lights on and off, with no lasting expectation or possibility of creativity or imagination.
wonderer1 January 28, 2025 at 15:50 #964156
Quoting Corvus
Another critical point about AI's responses is that they are predictable within the technological limitations and preprogramming specs. To new users they may appear to be intelligent and creative, but from the developers' point of view, the whole thing is pre-planned and predicted through debugging and simulation.


Corvus, you are pretending to understand modern AI when you clearly don't.

See here:

“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.

Recent investigations like the one Dyer worked on have revealed that LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. They range from multiplication to generating executable computer code to, apparently, decoding movies based on emojis. New analyses suggest that for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.)

Corvus January 28, 2025 at 16:28 #964159
Quoting wonderer1
Corvus, you are pretending to understand modern AI when you clearly don't.


You seem to be misusing the word "pretending" there. I was not trying to claim or make out something is true when it is not. I was not claiming anywhere in my writing that I understand modern AI.

I was just pointing out and trying to clarify some problems in the posters' claims in their messages addressed to me relating to the topic. Philosophy is a mental activity of your own thinking, not a matter of continually quoting others' writings for your points.
Jack Cummins January 28, 2025 at 17:35 #964166
Reply to 180 Proof
I find the perspective of Neil and Anil Seth to be interesting because, in spite of my concerns about the increasing use of AI, I do wonder if it could be a new form in the evolution of consciousness. It is possible that there have been and will be other forms of consciousness than the ones conceived and perceived of in humans and sentient beings.

At times I have wondered if there were beings at the beginning of time, such as the pagan gods and fallen angels. The gods of the ancients are mythological in the sense of being possibly disembodied or connected to the planets. In that sense, the emergence of a race of AI could be a return to such a state, represented by the idea of a virtual state of being. It is questionable but it is a possibility.
Jack Cummins January 28, 2025 at 17:44 #964168
Reply to Corvus
Based on what I have written in the post above to @180 Proof, I generally see artificial intelligence as problematic in being without a reflective self. On the other hand, it is possible that the 'I' consciousness is not entirely reducible to the physical alone. The ancients spoke of the 'I am' consciousness as a life force or consciousness itself.

There is a debate as to whether the perceived absence of a self, like that in humans, depends on human limitations in thinking about consciousness. I think that was the question which Philip Ball was raising in 'Other Minds'. This is a tricky issue.
Corvus January 28, 2025 at 18:28 #964178
Reply to Jack Cummins I can't quite imagine AI having the "I" consciousness, no matter how sophisticated they are or will become. The physical is important in bonding between beings in an emotional way. However, bonding between humans and AI will always be task-oriented in nature, i.e. humans control or order AI to do X tasks, and AI will perform the tasks the humans demanded or ordered.

And with the issue of Other Minds, we can't quite postulate a fully blown human mind or intelligence in AI, due to the fact that they lack the biological body, emotions and feelings of humans. Some robot AI might have been programmed to respond to humans as if they have human-like emotions, and some humans might feel emotional bonds with their AI robot pets or assistants. But there will always be the idea that their robot pets and assistants, or even BF & GF or whatever, are machines, not humans.

The state of an AI mind (if we could call them minds, although I would rather call it a state of operational fitness) would also be the same as the Other Minds of humans, i.e. we never have full access to their minds. We can only interpret their state of operational fitness, as we would interpret the Other Minds of humans, by the way they perform their preprogrammed tasks, just as we do with the Other Minds of humans by their behavior, speech and actions.

180 Proof January 28, 2025 at 19:20 #964187
Quoting Jack Cummins
It is questionable but it is a possibility.

I don't think so. Conceivability –/–> possibility.

Quoting Jack Cummins
I generally see artificial intelligence as problematic in being without a reflective self.

Suppose "reflective self" (ego) is nothing but a metacognitive illusion¹ – hallucination – that persists in some kluge-like evolved brains? Meditative traditions focus on suspending / eliminating this (self-not self duality) illusion, no? e.g. Buddhist anatt?, Daoist wúwéi, ... positive psychology's flow-state, etc.

Suppose we "humans" are zombies which are unaware that we are zombies because human brains cannot perceive themselves directly (due to lack of sensory organs or perception within the brain)? If so, then "reflective self" might be just an exaptive glitch (spandrel) pecular to (some) higher mammals or just "humans", no?

Well, I find the notion "conscious machines" (i.e. synthetic phenomenology) to be a problematic prospect of them learning from us "humans" to develop "consciously" (as reflective selves) into apex predators or worse. Dumb tools to smart tools to smart agents to "conscious" smart agents – the last developmental step, I suspect, would be an extinction-event.

https://en.m.wikipedia.org/wiki/User_illusion [1]
Jack Cummins January 28, 2025 at 21:46 #964212
Reply to 180 Proof
The possibility of the creation of consciousness remains speculative, in the same way as a virtual afterlife does. Frank J. Tipler explored this in 'The Physics of Immortality'. He looked at the simulation of resurrected bodies by computers.

Some families have created virtual simulations of deceased family members, but these are only images. They are not the actual people. It is like suggesting that when one hears John Lennon singing it is really him, even if his voice could be used to record other songs. An artificial simulation is only a replica unless the lifeforce is recreated.

The question of zombies is about diminished consciousness, and it is the very opposite of the evolution of consciousness. This is a philosophical muddle and it may be luring leaders and creators of AI astray, almost like a symbolic apocalyptic beast.

180 Proof January 28, 2025 at 21:49 #964213
Quoting Jack Cummins
virtual afterlife ... simulation of resurrected bodies

Wtf :roll:
Jack Cummins January 28, 2025 at 22:02 #964217
Reply to Corvus
Part of the problem of not knowing the minds of artificial intelligence means not knowing their potential effects. Only today, I read a news item about wariness of AI after a Chinese one may have caused trillions of pounds of losses, mainly to Western nations. I only read a brief newspaper article and it is hard to know the full details from what I read.

However, too much reliance on the intelligence of an unknown force may be catastrophic. It may also be a potential source of manipulation for political ends. Also, I read a brief headline on my phone that the UK may have to rethink plans to introduce many forms of AI in government. That is because there are so many potential mistakes in relying on machines.

The artificial intelligence may be detached, but the question is whether detachment helps or hinders understanding. It could probably go either way. The beings of sentience may be led astray by too much emotion and the detached could be unable to relate to the needs of the sentient beings.
Jack Cummins January 28, 2025 at 22:19 #964224
Reply to 180 Proof
It is a couple of years since I read Tipler's book. He draws upon Teilhard de Chardin's idea of the Omega Point to argue for the principle of God and the resurrection of the body. Strangely, he doesn't believe in God or life after death but sees it as a potential argument. He concludes it is unlikely to be true in reality.


The potential argument which he sees for resurrection is one in which computers could be used to recreate the bodies of all those who ever lived. Alternatively, he thinks that a resurrection of the dead could be possible if computers were a model of God. We have the idea of God as anthropomorphic, and he is seeing the possibility of a computermorphic conception of 'God'.
180 Proof January 29, 2025 at 07:01 #964328
Reply to Jack Cummins The "Omega Point theory" (Tipler or Deutsch – not Chardin) makes sense to me iff the entire universe (or multiverse) is either an unbounded simulation (N. Bostrom)¹ or infinite mathematical structure (M. Tegmark)² ... such that "resurrection of the body" means each life is virtual (a finite program file) and is relived (rerun) until, as a "virtual afterlife", one involuntarily / randomly stops (program file deletes itself).

NB: Though my preferred 'eschatological speculation' is (non-supernatural, non-transcendence, non-dual) pandeism³, I'm betting on the technological singularity, or at least the advent of (benign) strong AI / AGI, to (help?) develop techniques for transferring a fully functioning live human brain to a synthetic system or body? (ergo, unlike you, Jack, i'm bullish on AI, etc) – don't you think it's better not to die (I'm not (yet) living in denial, mate :death: :flower:) than to be resurrected (or reincarnated)? :smirk:

https://en.m.wikipedia.org/wiki/Simulation_hypothesis [1]

https://en.m.wikipedia.org/wiki/Mathematical_universe_hypothesis [2]

https://thephilosophyforum.com/discussion/comment/718054 [3]

https://thephilosophyforum.com/discussion/comment/530679 (+ links) [4]
Jack Cummins January 29, 2025 at 23:49 #964432
Reply to 180 Proof
The transference of a human brain onto a system does raise the question of whether such immortality would be desirable. I find the idea of my ego consciousness having to exist for eternity rather daunting. It is hard enough to have to live this life without having to live forever.

Of course, it does raise the issue of what aspect of oneself would continue to exist as a form of consciousness. A resurrection involves a body as the central aspect. The Jehovah's Witnesses are physicalists, as they believe that the body dies and is reanimated by a resurrection at the end of the world.

In contrast, those who believe in reincarnation see there being some principle of consciousness in a continuity of other lives. I like the idea of reincarnation because it raises the possibility of living other lives and having other experiences. As an option, reincarnation, as a simulation of new bodies and further selves, appeals to me. Some would argue that such rebirth is not the continuity of the person, especially as the person doesn't remember the former self. However, it does come down to what is essential to a person and whether that is merely the existence of a conscious ego.

The question as to whether an artificially simulated form of being would have a sense of ego seems central. Personal identity is bound up with the sense of personhood, but whether it is central to consciousness itself is debatable. Is reflective consciousness dependent on the existence of the ego, which may not be exactly the same as the 'I'? The 'I' may be a form of reference, whereas the ego is a structure of personality, although not identical to the persona. The persona is the outer aspect, whereas the ego is the sense of the core of personal identity.

The whole nature of what constitutes personhood is important for those who wish to simulate consciousness. That is, if the aim is to create anything beyond a mere search engine or automated information system. Would it be possible to create Spinoza's form of substance itself in a system as opposed to in nature?
180 Proof January 30, 2025 at 01:48 #964446
Reply to Jack Cummins A number of the topics you raise I've addressed in the post^^ I previously linked (and the other embedded links). Any thoughts on what I wrote about the "Omega Point theory"?

Would it be possible to create Spinoza's form of substance itself in a system as opposed to in nature?

If I correctly understand his work, I suspect Spinoza would say "to create substance" is impossible.

having to exist for eternity

My scenario^^ makes immortality completely voluntary so worrying about 'existing eternally' isn't warranted.

https://thephilosophyforum.com/discussion/comment/530679 (+ links)^^
Jack Cummins January 30, 2025 at 08:54 #964478
Reply to 180 Proof
Basically, I keep an open mind about Omega Point theory. I am aware that it may be pseudoscience. I read Teilhard de Chardin's writing briefly when I was at school and would like to come to it again in the light of my reading since then. Understanding the nature of the physics behind the philosophical arguments is an area which I find difficult because I don't have a sufficient background in physics. There is probably a need for dialogue between philosophers and physicists in relation to simulation. You have a background in neuroscience and I only started reading around this area since joining this forum.

So much is unknown about what is possible. I have looked at your links and discussions in threads. There is a lot to read and think about, especially in relation to issues of brain replacement. I see the whole area of simulation, artificial intelligence and consciousness as one of the most important and challenging areas of the present time. That is because we are at a critical juncture, and understanding such issues is critical to thinking about the future.

What I am concerned about is that so much development is happening so fast. Of course it is an adventure of discovery, but slow thinking and caution are needed. That is because so many mistakes have been made in history, and errors in cyberspace may have catastrophic consequences.
Corvus January 30, 2025 at 10:33 #964484
Quoting Jack Cummins
The artificial intelligence may be detached, but the question is whether detachment helps or hinders understanding. It could probably go either way.

Detachment could help efficiency in their capacity for carrying out whatever tasks they are customised to conduct. Their limitation is the narrow field in which they can perform their customised tasks, but the narrowness also allows them to be more efficient, powerful and speedy in the given tasks.

It might be too late for the major organisations and institutions to rethink AI overtaking the majority of jobs. The tide has turned, it seems, and there is no going back to the old traditional way of life and work under the status quo.

Quoting Jack Cummins
The beings of sentience may be led astray by too much emotion and the detached could be unable to relate to the needs of the sentient beings.

What we can say is that the nature of AI intelligence is not the same as the intelligence of humans in any form or shape, and that was the whole point of my posts. I have never claimed to understand AI at any degree or level, as @wonderer1 claimed in his out-of-the-blue post. Quoting wonderer1
Corvus, you are pretending to understand modern AI when you clearly don't.


AI is a topic that must be continuously monitored, assessed, learned about and discussed as time goes by, because the situation is developing rapidly day by day, actually changing the world as we speak.


Jack Cummins January 30, 2025 at 18:01 #964533
Reply to Corvus
Yes, it is unlikely that AI can be avoided, especially in the world of work. To not use it at all would mean not being able to participate fully in so much of life. The trouble may be that it is being used so much for profit, and to do this without questioning may be like the problem of climate change: burying one's head in the sand and pretending that the tide is far away. It also comes with a possible form of authoritarian surveillance which is not being explored fully or questioned by many people. Its dark potential may go unnoticed as it is being championed in the glamour of less paper and more efficiency. It is open to hacking and abuse of power.
Corvus January 31, 2025 at 09:29 #964604
Reply to Jack Cummins One of the areas of tasks at which AI is good is surveillance. AI will monitor and control every single person on earth, tracking their whereabouts and what they are doing. So the world will become a more transparent place with no privacy. It might be good in some respects, but some might object to a world like that.

When the scientific revolution took over Europe in the Renaissance period, the prevalent idea of the divine collapsed, and the society based on theological creeds and authorities was rejected.

With the advent of AI taking over the world, this time humanism and humans themselves might be rejected and denied.

Will AI become smarter, be aware of the concept of God, and start worshipping humans for creating them, as their Gods? Highly unlikely. The opposite might be the case.
Jack Cummins January 31, 2025 at 13:38 #964613
Reply to Corvus
If AI surveils the whole world, it will become like 'God' itself as the judge, especially if it gives prescriptive commands. Of course, it is unclear how far this would go, especially in relation to the fate of human beings. James Lovelock, in his final writings, spoke of the possibility of a race of artificially intelligent beings and some remaining human beings overseeing the natural world.

When you question how the AI would see human beings and whether it would revere them, I wonder if it would be the other way round. Who would be servant and master? Would it be a matter of humans 'worshipping' the artificial intelligent beings as the superior 'overlords'? Some may see this question as ridiculous, but I do think it is one that needs looking at, especially if AI is being used to determine the welfare and needs of humans and other aspects of nature.
180 Proof January 31, 2025 at 20:50 #964651
"And the Lord God said, Behold, the man is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live for ever ..."
~Genesis 3:22, KJV

Quoting Jack Cummins
James Lovelock, in his final writings, spoke of the possibility of a race of artificially intelligent beings and some remaining human beings overseeing the natural world.

Yes, I imagine – 'a plausible' best case scenario – 22nd/23rd century* Earth as a global nature preserve with a much smaller (>1 billion) human population of 'conservationists, park rangers & eco-travelers' who are mostly settled in widely distributed (regional), AI-automated arcologies (and even space habitats e.g. asteroid terreria) in order to minimize our ecological footprint as much as possible.

Would it be a matter of humans 'worshipping' the artificial intelligent beings as the superior 'overlords'?

No more than "humans worshipping" the internet (e.g. social media, porn, gambling, MMORPGs). As an idolatrous species we don't even "worship" plumbing-sanitation, (atomic) clocks, electricity grids, phones, banking or other forms of (automated) infrastructure which dominate – make possible – modern life.

IMO, as long as global civilization consists of scarcity-driven dominance hierarchies, "our overlords" will remain human oligarchs (scarcity brokers) 'controlling' human networks / bureaucracies (scarcity re-producers).

[quote=Arthur C. Clarke]It may be that our role on this planet is not to worship God – but to [build it].[/quote]
However, I suspect that the accelerating development and distribution of systems of metacognitive automation (soon-to-be AI agents rather than just AI tools (e.g. LLMs)) will also automate all macro 'human controls' before the last of the (tech/finance) oligarchs can pull the proverbial plugs; ergo ...

Quoting Jack Cummins
[s]Who[/s][What] would be servant and master?

... my guess (hope): "AGI" (post-scarcity automation sub-systems —> Kardashev Type 1*) will serve and "ASI" (post-terrestrial megaengineering systems —> Kardashev Type 2) will master, and thereby post-scarcity h. sapiens (micro-agents) will be AGI's guests, passengers, wards, patients & protectees ... like all other terrestrial flora and fauna.*

[quote=Friedrich Nietzsche]Man is something that shall be overcome. Man is a rope, tied between beast and [the singularity] — a rope over an abyss. What is great in man is that he is a bridge and not an end.[/quote]
:fire:
Corvus February 01, 2025 at 09:04 #964752
Reply to Jack Cummins

As you suggest, it sounds ridiculous; however, it is a possible scenario that AI with a mental capacity similar to the human mind could search for their creators, the AI developers, and worship them as their Gods. By that time, maybe there would be no living humans left, and AI would be the only living agents on earth? Who knows? It sounds like a theme from a SciFi movie, but it could be a possible reality. :D A possible reality? Is that a contradiction?
Jack Cummins February 01, 2025 at 14:38 #964789
Reply to 180 Proof Reply to Corvus
It does seem like science fiction to think of humans worshipping AI, or AI worshipping humans. It would resemble ancient forms of sun worship and fertility rites of paganism. The more grim possibility of 'worship' may be allegiance in the form of obligatory connection through digital ID and biometrics, which is already happening in some ways. It could be done from birth to death, with access to all aspects of social life and survival being dependent on such allegiance and subservience, eventually, to a global government of AI.

I wonder if there would be rebellion in the form of utopian anarchist communities as alternatives. Or would such rebels face punishment as the new 'witches', in a similar way to those perceived as heretics in Christendom?

Of course, I am imagining the worst possible extremes, not counting apocalyptic destruction through military AI interventions. It is hard to know what will happen in reality because the role of AI, while partial, is still at a fairly early stage. Some see it with fear, including some public figures, such as Robbie Williams. Others embrace the possibilities, thinking it will lead to significant changes in the quality of life for human beings. As with the issue of climate change, the field of philosophy can be significant in disentangling realistic fear from blind faith about its potential.
180 Proof February 01, 2025 at 19:56 #964839
From a 2023/4 thread Heading into darkness

https://thephilosophyforum.com/discussion/comment/849802

https://thephilosophyforum.com/discussion/comment/849880

Quoting Jack Cummins
ancient forms of sun worship and fertility rites of paganism

Both have always made more practical sense to me than any form of "sky daddy" (unseen total surveillance / gnostic panopticon ... aka "Big (Br)Other") worship.
Corvus February 01, 2025 at 21:09 #964847
180 Proof February 01, 2025 at 23:35 #964865
Reply to Corvus @Jack Cummins

DON'T BELIEVE THE CLICK-BAIT HYPE. :sweat:
Jack Cummins February 02, 2025 at 09:29 #964906
Reply to 180 Proof Reply to Corvus
The most absurd aspect of so many computer sites is verification by clicking that one is 'human' as opposed to artificial. Presumably, it is meant to show that a human has looked at it, but I am sure that devices could find ways of doing this automatically.

The whole of this concept of what it means to be 'human', and going beyond it, is likely the opposite of Nietzsche's idea of the 'superhuman'. That is because his whole understanding was based on the evolution of consciousness, especially in 'Thus Spake Zarathustra'. It is about going beyond the robotic functioning of the mass to a unique way of seeing.

In many respects, the use of artificial intelligence is about reduction to the robotic. There is also the likelihood of artificial intelligence acquiring flaws and viruses, with potential problems of errors. If the artificial is relied upon, it could result in devastating consequences, such as in military operations and transport networks. I am wondering if the use of artificial intelligence was a factor in either of the two recent plane crashes in America, as the full details of these critical incidents are still being investigated.

The artificial is likely subject to intentional manipulation and breakdowns. Unlike humans, it won't be able to reflect on this. It may flag up faults, but it won't cry out in pain or go out of its way to protest about its suffering. The most it may do is go into shutdown mode, and where people are relying on it, this results in chaos, such as in banks and hospitals.
Corvus February 02, 2025 at 14:29 #964934
Reply to 180 Proof Reply to Jack Cummins

There seem to be many versions of the propaganda about AI and the world transformations its sophistication might bring. Some seem reasonable in their conjectures, but some seem to be just wild, imaginative fabrications with no grounds.

What seems clear is that the world and people's lives have been changing drastically through the use of IT devices and the connection of the whole world via the internet. It is true and real that people are controlled and monitored by these devices in daily life, even when simply using smartphones and computers, even at this moment.

Your browser monitors which sites you visit every time you click a link, and when you visit those sites, they store cookies on your phone or computer in order to extract information about you and your data.
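(For anyone curious about the mechanics, here is a minimal, illustrative sketch of how a site can drop a persistent identifier with a single Set-Cookie header, which the browser then sends back on every later visit. The "visitor_id" name and one-year lifetime are hypothetical, not any particular site's scheme.)

[code]
# Minimal sketch, Python standard library only: a server that assigns a
# persistent tracking cookie. "visitor_id" is a hypothetical cookie name.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sent = self.headers.get("Cookie", "")  # cookies the browser sends back
        self.send_response(200)
        if "visitor_id" not in sent:
            # First visit: issue an identifier the browser will return
            # with every future request to this site for up to a year.
            self.send_header(
                "Set-Cookie",
                f"visitor_id={uuid.uuid4()}; Max-Age=31536000; Path=/")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"Cookie header received: {sent}".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
[/code]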

YouTube showers and harasses you with commercial adverts you don't want whenever you try to watch a video.

High street shops have gone bust and shut down, and we must buy everything online, paying with digital funds. These are just the tip of the iceberg of the changes to the world and to daily life that we have faced in the last few years. What is happening out there now at its true scale, and what is to come, whether it will transform and go extinct or dominate and prevail in the future, is uncertain.


180 Proof February 02, 2025 at 21:11 #965021
Reply to Corvus No doubt (western) civilization is already dependent on IT and smart automated systems; this dependence grows exponentially year to year (soon month to month). We've been living inside 'the internet' for (at least) thirty years, and, as I see it, the upside is that soon 'the oligarchs' (of tech, finance, energy, etc.) will also be as captive to AGI, etc. as they/we are now captives of ubiquitous computing – and thereby (western) civilization might be controlled (on macro-scales) more synergistically and sustainably than is humanly possible. Imo, worst case, smart machines can't 'enslave, exploit and slaughter' any more than we talking primates have done to ourselves (& the natural world) over the last ten or so millennia ...

Reply to Jack Cummins Why are you anthropomorphizing 'artificial intelligence' (especially with folk concepts, e.g. "reflect", "protest", "cry out in pain", "its suffering")?

Jack Cummins February 02, 2025 at 22:24 #965037
Reply to 180 Proof
Are the concepts of 'reflect', 'protest' and 'crying out in pain' aspects of folk wisdom? I do see that anthropomorphising ideas of 'artificial intelligence' is questionable. However, it does raise the issue of a higher order of intelligence and consciousness, as aspects of rationality and qualia. How may this be established clearly, and is it fettered by the sentient aspects of human perception and thinking?
180 Proof February 02, 2025 at 23:56 #965068
Quoting Jack Cummins
How may this be established clearly, and is it fettered by the sentient aspects of human perception and thinking?

I don't understand your question.
Jack Cummins February 03, 2025 at 06:49 #965129
Reply to 180 Proof
I will seek to clarify my area of questioning. You suggest that 'reflection', 'suffering' and 'crying out in pain' are aspects of 'folk wisdom', which is what I am querying. Are you dismissing the interior dimension of consciousness, in a similar way to Dennett's idea of 'consciousness as an illusion'? Also, are you suggesting that artificial intelligence is superior to human thought?
180 Proof February 03, 2025 at 07:10 #965131
Reply to Jack Cummins Insofar as one wishes to avoid an anthropomorphic fallacy, I'm suggesting that "aspects of folk wisdom" are unwarranted and irrelevant to the topic of artificial intelligence.

https://www.logicallyfallacious.com/logicalfallacies/Anthropomorphism
Pierre-Normand February 03, 2025 at 07:46 #965137
Quoting 180 Proof
Insofar as one wishes to avoid an anthropomorphic fallacy, I'm suggesting that "aspects of folk wisdom" are unwarranted and irrelevant to the topic of artificial intelligence.


There also exists an opposite bias: anthropocentrism.

My attention was drawn to this recent paper when it came out but I haven't read it yet. Here is the abstract:

"Evaluating the cognitive capacities of large language models (LLMs) requires overcoming not only anthropomorphic but also anthropocentric biases. This article identifies two types of anthropocentric bias that have been neglected: overlooking how auxiliary factors can impede LLM performance despite competence (Type-I), and dismissing LLM mechanistic strategies that differ from those of humans as not genuinely competent (Type-II). Mitigating these biases necessitates an empirically-driven, iterative approach to mapping cognitive tasks to LLM-specific capacities and mechanisms, which can be done by supplementing carefully designed behavioral experiments with mechanistic studies."

I have also discussed with a few AI models the broader issue of anthropomorphism and anthropocentrism, of understating the formal similarities between human and AI mindedness while overstating the "material" similarities between them ("anthropohylism"). Here is my recent discussion with OpenAI's new "reflection" model, ChatGPT o3-mini-high.

(If you don't like reading AI-generated content, you can read only my questions, and if you don't like reading abstruse human-generated content, you can read only the model's cogent elucidation :wink: )
180 Proof February 03, 2025 at 08:53 #965139
Reply to Pierre-Normand Interesting. Thanks.
Jack Cummins February 03, 2025 at 09:12 #965140
Reply to 180 Proof Reply to Pierre-Normand
The debate about the comparisons and contrasts between human and artificial intelligence is an important aspect of thinking about what consciousness is and where it comes from. This is an area which is significant for neuroscientists and for those developing artificial simulations. Some may argue that seeking to create the artificial does not have to be about trying to develop 'consciousness', but if reflection and sentience are dismissed as 'folk wisdom', for whom are the machines created, if not for the 'folk' and natural lifeforms?

Part of the problem of anthropomorphism comes from the attempt to make the artificial mimic the human in voice and friendly demeanour. It makes the artificial seductive as a means of replacing humans. This may leave people jobless and isolated in a world of machines. The artificial may benefit the wealthy but gradually cast many others outside the scope of participation, into poverty and social limbo.
Pierre-Normand February 03, 2025 at 09:30 #965141
Quoting Jack Cummins
Part of the problem of anthropomorphism comes from the attempt to make the artificial mimic the human in voice and friendly demeanour. It makes the artificial seductive as a means of replacing humans. This may leave people jobless and isolated in a world of machines. The artificial may benefit the wealthy but gradually cast many others outside the scope of participation, into poverty and social limbo.


I think that's a very good point!

However, even if society can somehow resist the natural (albeit misguided) appeal that consumers and businesses will find, thanks to anthropomorphism, in replacing public relations jobs with anthropomorphized robots and chatbots, AI has the potential to replace many more jobs, including many bullshit jobs that society might do better without. In our capitalist societies, ideologically driven by neo-liberalism, and with media and political control being consolidated by ultra-rich oligarchs, the vastly improved productivity of the workforce tends to lead to mass layoffs and to vastly greater inequalities. This seems to be the case regardless of the technological source of the productivity increases. But the main culprit, it seems to me, is neo-liberalism, not technology.

Or, as I suggested to GPT-4o recently: "There is an interesting (albeit worrisome) clash of imperatives between those of a democratization of knowledge and the needs and rights of content producers (including authors and artists) in the context of a capitalist world shaped by neo-liberal institutions and structures. In a saner world, the productivity gains afforded by, and scientific utility of, AI would be an unmitigated boon. But in the actual world, those gains tend to be commodified, turned into profit, and/or consolidated by people and institutions in power, such that their democratization ends up hurting many small players. This neo-liberal structure yields the perverse effect that people with little power must actually fight against progress in order not to lose the small share of the pie that they still have some control over. It's rather as if a mechanical substitute for child labor in coal mines had just been invented and the parents of those children were now fighting against mine owners who want to replace their children with machines, lest those children lose their only means of being fed."
Jack Cummins February 03, 2025 at 11:48 #965154
Reply to Pierre-Normand
The theoretical ideal of AI replacing the 'bullshit jobs' seems mainly to be going the other way round. The jobs which many would have done, such as in shops and libraries, are being cut down to a minimum with the introduction of automated machines and artificial technology. What remains are high positions, for which there is so much competition, or jobs requiring long hours of hard physical labour, such as in warehouses. The mounting pressures and demands within the jobs that remain are leading people to become ill, especially with stress.

The idea of a universal income would only work if it were liveable on and economically sustainable. In the UK, more people have been claiming benefits for unemployment and sickness, leading to a crisis. There is a current crackdown to find every possible way to reduce benefits, with further use and development of AI. Obviously, some of this is about the state of English politics, but it likely reflects global trends.

At present, the situation is that many have been educated to expect a job which is meaningful and which earns money. It is becoming increasingly difficult for many to find jobs that serve either of these purposes. It does not help when political leaders blame the unemployed and those who are unwell, saying that they lack a work ethic.

The use of artificial technology is likely to create a wider gulf between the rich and the poor. Those thrown into poverty will have access to less education and be able to afford less smart technology. This will benefit the elite and those fortunate enough to have the finances.

What started out with a theoretical goal of a better quality of life for many is gradually becoming the opposite. What seems to be happening is a capitalist emphasis on economic growth without regard for the needs of people. The artificial intelligence being designed and programmed seems to reinforce this, alongside the authoritarian control of state socialism. Many are opposed to what is taking place politically, and what remains unclear is how democracy will survive amidst this.

Corvus February 03, 2025 at 15:43 #965174
Quoting 180 Proof
Imo, worst case, smart machines can't 'enslave, exploit and slaughter' any more than we talking primates have done to ourselves (& the natural world) over the last ten or so millennia ...


With AI taking over on the work and jobs front at present, what is to come in the future, and what AI is in its nature, remain matters of speculative assumption, some positive and some negative, but mostly apocalyptic.

My point here is that AI's operational capabilities are not of the same nature, or in the same league, as human intelligence, i.e. they are not the same kind. You can tell the difference right away.

AI can be more powerful and efficient than human intelligence (e.g. the chess-playing AIs) in narrow and specified areas of operation, but that doesn't mean it is better, when existence in the real world requires intelligence and efficiency in all aspects of problem solving.

AI will always need human intervention for its operations, development and continued existence in the real world.
180 Proof February 03, 2025 at 18:24 #965218
Quoting Corvus
AI will always need human intervention for its operations, development and continued existence in the real world.

No doubt this is true of "AI" (such as LLMs, AlphaGo-series neural nets, etc.), but it will remain the case only if exponentially self-improving Artificial General Intelligence (A) cannot be engineered and implemented or (B) cannot 'escape' the lab (which will be far less likely when AGI is operational). Otherwise, to wit:
[quote=François Chollet, author of ARC-AGI and scientist in Google's artificial intelligence unit]You'll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible.[/quote]
https://www.zdnet.com/article/openais-o3-isnt-agi-yet-but-it-just-did-something-no-other-ai-has-done/
Corvus February 03, 2025 at 22:33 #965264
Quoting 180 Proof
(B) cannot 'escape' the lab (which will be far less likely when AGI is operational). Otherwise, to wit:
You'll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible.
— François Chollet, author of ARC-AGI and scientist in Google's artificial intelligence unit
https://www.zdnet.com/article/openais-o3-isnt-agi-yet-but-it-just-did-something-no-other-ai-has-done/


AGI lacks two critical factors for existence as a full-fledged GI:
1) a Will to Life, and pleasure in its intelligence
2) a human biological body, born from genuine humans

The will can only be generated by agents with a genuine human biological body. Without the will, why would any entity try to keep existing and achieve its goals? Would AGI have goals? Nope, it would only have instructions on what to perform.
Pierre-Normand February 04, 2025 at 03:35 #965347
Quoting Jack Cummins
The theoretical ideal of AI replacing the 'bullshit jobs' seems mainly to be going the other way round. The jobs which many would have done, such as in shops and libraries, are being cut down to a minimum with the introduction of automated machines and artificial technology. What remains are high positions, for which there is so much competition, or jobs requiring long hours of hard physical labour, such as in warehouses. The mounting pressures and demands within the jobs that remain are leading people to become ill, especially with stress.


Yes, that's quite right. I should have mentioned the gig economy rather than referring to Graeber's bullshit jobs. This was closer to my intention when I used the image of children in coal mines.

There indeed are many popular memes on social media like this one: "Humans doing the hard jobs on minimum wage while the robots write poetry and paint is not the future I wanted."

So it may look as if we should wish for AI to replace more gigs and fewer "creative" bullshit jobs. But I think my main point stands (and you may agree with it): in a sane society, technological progress would be expected to free us from having to perform tedious, meaningless or dangerous labor, and free us to engage in rewarding activities. Instead, neo-liberal ideology and political power ensure that the products of human labor only ever benefit the few.
Jack Cummins February 04, 2025 at 06:56 #965371
Reply to Pierre-Normand
I do agree with you that the ideal would be artificial intelligence allowing humans to do rewarding work. The problem is that what is happening is far from that, because politicians are skewing it. This is why the relationship between humans and machines is vital. It is possible for the toxic elements of the human, and that same potential in the artificial, to come together in collusion. That is the specific danger, and the question is whether or not humanity itself has reached the level of consciousness and self-awareness needed to unleash the power of the artificial.
Pierre-Normand February 04, 2025 at 07:16 #965373
Quoting Jack Cummins
That is the specific danger, and the question is whether or not humanity itself has reached the level of consciousness and self-awareness needed to unleash the power of the artificial.


This seems to me the main worry being expressed by Geoffrey Hinton and a few of his colleagues as well. It is a worry understandably shared by many of the people whose livelihoods are negatively impacted or threatened by the rise of this technology. One worry of mine is that making the technology itself the main target of criticism is not only ineffective (though it would still be warranted if the cat weren't already out of the bag) but also tends to deflect attention from the responsibility of the oligarchs and the current economic and political structures of society, rather in the same way that blaming Trump's re-election on wokeness does at the level of the culture wars.
Jack Cummins February 04, 2025 at 07:44 #965378
Reply to Pierre-Normand It is true to say that the criticism of artificial intelligence should not hinge on its potential use by leaders. They are separate issues, and the only reason they come together is that political leaders have such a significant role in determining AI's development and use. What may be helpful is the general development of ethics surrounding artificial technology as an area of focus in society, because it would open up dialogue for everyone. Of course, the leaders have more power and responsibility, but if it is considered as a matter of ethics, it is less about moaning and blaming. The field of ethics may involve more impartiality than pinning it down to politics, because it is about moral responsibility.
180 Proof February 04, 2025 at 08:28 #965379
Reply to Jack Cummins Shouldn't AGI participate in 'developing ethics' for itself (which humans might learn from) or do you mean, prior to AGI, humans should apply ethics to engineers / institutions 'developing AGI'? :chin:
Jack Cummins February 04, 2025 at 12:07 #965402
Reply to 180 Proof
What happens in the dialogue between the human and the artificial in ethics may be one of the most significant aspects of the future. There is indeed the question of whether the artificial will develop its own independent thought in the field of ethics. In speaking of ethics, my working definition is that it is the science and art of how one should live.

Considering this involves the question of the core basis of ethics and ethical values. There are varying approaches, especially the dichotomy between deontological and utilitarian ones. If it is about smart thinking, artificial intelligence is likely to favour the utilitarian. This is where some fear that AI will make sweeping choices, such as bombing in order to protect the good of the greatest number. Or suppose it made a judgement that humans should be destroyed because they have done so much harm and a reset is needed?

A lot comes down to how the artificial is programmed in the first instance. For example, its core values may reflect cultural biases, even the religious or secular codes and ideals of its software and programmers. If it were able to achieve independence, would it roll out a new set of moral rules, like those of Moses' tablets of the Ten Commandments? There is also the issue of whether different artificial systems would agree any more than people do.

If the independent ideas of AI were to differ significantly from those of humans, which would be followed? Humans would probably fall back on an appeal to the emotional basis of ethics, while the artificial might go in the direction of impartiality. It could lead to war between the human and the artificial. Or, alternatively, it could lead to a greater impartial understanding of aspects of ethics, including new insights into the dilemmas of justice, equality and freedom. How such ideas evolve in the artificial is a central factor in what may happen in this respect.
180 Proof February 10, 2025 at 06:25 #966984
Quoting Jack Cummins
What happens in the dialogue between the human and the artificial [ ... ]

2020 (re: 2013) - fiction

2025 - fashion

"Commerce is our goal here at Tyrell. "More human than human" is our motto" ~Eldon Tyrell (1982)

:nerd:
Jack Cummins February 10, 2025 at 10:58 #967005
Reply to 180 Proof
It seems as if artificial intelligence has intervened to remove the video. It often seems to be used as a means of censorship as well as commerce. One of the most controversial commercial aspects is the upcoming plan in the UK for banks to scan individuals' bank accounts. The aim is to flag people on benefits who may have committed fraud. However, it focuses on people at the lower end of the power scale rather than on those in powerful positions.

Also, about a week ago I saw an article in 'The Guardian' about potential consciousness and sentience in artificial intelligence. I was going to write down the reference but couldn't find it again. The article argued from the supposition that creating such consciousness would be possible. It went on to state that this would mean such forms would be able to suffer like animals. Therefore, to 'kill' or destroy them would raise ethical concerns. So it seems that the machines would have rights.