Artificial intelligence

Gregory July 09, 2022 at 00:51 10750 views 65 comments
AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like we do. Artificial intelligence does, or may someday, have the reasoning we have, but does this mean it is conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there.

Comments (65)

Jackson July 09, 2022 at 02:46 #716922
Quoting Gregory
I mean, we can imagine consciousness without reason, so why not reasoning without consciousness?


That is what I think. Thinking can take place without consciousness, which is why I think wondering about sentient AI is beside the point.
Agent Smith July 09, 2022 at 03:14 #716928
Consciousness sans reason: Mirrors!

Reason sans consciousness: Computers!

We have to, sensu lato, put computers before mirrors and let the magic happen!
180 Proof July 09, 2022 at 03:24 #716930
Quoting Gregory
I mean, we can imagine consciousness without reason, so why not reasoning without consciousness?

What do you mean by "consciousness" in this query?
Gregory July 09, 2022 at 03:26 #716931
Reply to 180 Proof

Feeling of mind
180 Proof July 09, 2022 at 03:29 #716932
Quoting Gregory
Feeling of mind

What do you mean by "mind"?
Agent Smith July 09, 2022 at 03:43 #716936
Quoting 180 Proof
What do you mean by "consciousness" in this query?


Quoting 180 Proof
What do you mean by "mind"?


Going Socrates on (poor) Gregory. Looks as though Gregory's casting a wide net - he's not trying to catch a particular kinda fish, any fish'll do!
Gregory July 09, 2022 at 04:02 #716939
Reply to 180 Proof

Mind is awareness, which is a feeling. How can we know if AI has it, since its "biology" is so different from ours? People say they might demonstrate reason in AI, but I'm wondering if this includes awareness.
180 Proof July 09, 2022 at 05:51 #716966
Quoting Gregory
Mind is awareness which is a feeling.

But you stated that "consciousness" is "feeling of mind" "which is a feeling." This circularity makes no sense and renders the OP gibberish. Unless, of course, I'm missing something ... :chin:
praxis July 09, 2022 at 14:36 #717040
“I believe consciousness is simply what it feels like to have a neocortex.”

– Jeff Hawkins
praxis July 09, 2022 at 14:40 #717041
Quoting Gregory
I mean, we can imagine consciousness without reason, so why not reasoning without consciousness?


Don’t we already have this with computers? The device that I’m using to send this message can easily beat me at chess, for instance, and it’s not conscious.
punos July 09, 2022 at 23:31 #717139
Quoting Gregory
Animals have consciousness but not reasoning like we do.


I think animals have consciousness along with the ability to reason; the difference between humans and other intelligent forms has to do with varying degrees of complexification. Our consciousness is the medium of our awareness and informs our reason (man or animal, or even plant). The more complex a consciousness is, the more scope it has for consideration; the less complex, the less it is able to consider complex variables. The evolution of the nervous system reached its maximum complexity on this planet with the advent of modern humans. This evolutionary process is still going on, and the torch of complexity is at this moment beginning to pass to AI. A fully integrated AI planetary network functioning as one consciousness seems to be the evolutionary trajectory that we are on.
Josh Alfred July 10, 2022 at 11:50 #717295
Most of the time I do not know whether I am thinking reasonably or not; that is, I am not aware of my reasonings while they occur. In the midst of that kind of thinking I still retain consciousness. Likewise, artificial intelligence will be conscious whether or not it is reasoning out its syntactical (thought-out) responses.

Here's one of my blogs that you can interact with and hopefully enjoy: https://taoofthepsyche.blogspot.com/2018/07/the-robot-life-unfinished-treatise.html
Babbeus August 21, 2022 at 11:13 #731493
Reply to Gregory

Intelligence is just taking information as input and returning some output that is "useful" for some "task". A computer engine designed to play chess is a form of intelligence. The "usefulness" and the "task" are not rigorous concepts, and an AI could actually be stupid or pointless. You could think of any system that gives some "response" to any "input" as having the same nature and essence as an AI.
Pantagruel August 21, 2022 at 12:13 #731501
Quoting Gregory
AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like we do. Artificial intelligence does, or may someday, have the reasoning we have, but does this mean it is conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there.


Right now, so-called AI can perform specific tasks based on extensive programming. At the height of their complexity, these tasks can be generalized into what may be called "abilities": carrying on a conversation, for example. So the question is: if we think of AI as being conscious, is this a specific ability which we confer on it? That only begs the question of what consciousness is. In that case, if we think of AI as attaining consciousness, it must be in the context of us conferring more and more task-specific capabilities such that, in a cumulative fashion, new generalized abilities emerge, at the apex of which emerges consciousness, the ultimate general ability. And if it is an emergent property, then we would no more have created that consciousness than we created the matter out of which the computer was formed.

As to reason without consciousness: in the abilities-centric characterization just offered, I think reason and consciousness must be synonymous. Viz., a computer that displays the general ability of "carrying on a conversation" (in the context of the Turing test, say) is not really reasoning, just executing a whole lot of algorithms very quickly. You could not call that reasoning unless it were at the same time conscious.
Seeker August 21, 2022 at 16:23 #731532
Quoting Gregory
AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like we do. Artificial intelligence does, or may someday, have the reasoning we have, but does this mean it is conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there.


For now an AI is still nothing more than a database full of predefined (answer) sentences. The answer to present from such a database (AI) is selected by the occurrence of matching (key)words in whatever question is asked. The 'smart' (reasoning) behind any of it is still (totally) dependent on the skill level and the creativity of the (human) programmer(s).
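That keyword-matching picture can be sketched in a few lines of Python. This is a hypothetical toy for illustration only (the keywords and answers are made up), not how any real modern system works:

```python
# Toy "AI" of the kind described above: a table of canned answers,
# selected by whichever stored keyword appears in the question.
# All keywords and answers here are invented for illustration.
CANNED_ANSWERS = {
    "chess": "Knight to f3 looks promising.",
    "weather": "It will probably rain tomorrow.",
    "hello": "Hello! How can I help?",
}
FALLBACK = "I don't understand the question."

def reply(question: str) -> str:
    words = question.lower().split()
    for keyword, answer in CANNED_ANSWERS.items():
        if keyword in words:            # first matching keyword wins
            return answer
    return FALLBACK                     # no keyword matched

print(reply("hello there"))             # matched on "hello"
print(reply("explain quantum gravity")) # nothing matched
```

All of the apparent "smarts" live in the table the programmer wrote, which is exactly the point.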
Gregory August 26, 2022 at 01:20 #733145
Lots of good posts! Reason, for me, is the ability to grasp an idea with the necessary means of consciousness. We strive for the idea among a complex of thoughts and feel we have the truth when our minds rest. If AI can do complex thoughts, it's possible it does so by another means than by way of consciousness.

apokrisis August 26, 2022 at 02:13 #733148
Quoting praxis
“I believe consciousness is simply what it feels like to have a neocortex.”


Neuroreductionism.

The better answer is that consciousness is simply what it is like to be a self living in its world.

So the neocortex, and the rest of the brain, are all a necessary part of the hardware equation. But being "a mind" is how the neocortex, rest of the brain, and even the entire body - and with humans, the whole damn sociocultural edifice – get to pay for their biological existence.

Consciousness is the modelling relation an organism has with its environment. An engorged neocortex is what you can afford if it adds that much of a benefit in terms of a nutrition and survival dividend.

Brains burn through energy like working muscle, even when idling. So this is something we have to consider when it comes to AI. An artificial mind would also be one that is paying close attention to its own organismic existence. It would have to be smart in the sense of earning its entropic keep.

Of course, here in the real world, humans build machines to amplify their own power to exist. They are an investment meant to serve our entropic existence. We want AI in the form of extensions to our reach, not as some rival class of organisms, living in the same world, equipped with the minds - or modelling relation - which might allow them that level of mental independence.

If we build actual AI, then we are just proving ourselves stupid.

Quoting Gregory
Animals have consciousness but not reasoning like we do.


Animals have reason. They have genetic and neural level models of the world they live in that work because they are "reasonable" in the pragmatic sense.

So what humans have got is the extra semiotic modelling capacity that comes with having developed speech and maths - codes based on words and numbers, layered on top of the codes based on genes and neurons.

Words allow humans to organise in a properly organismic fashion - as one shared mind - at the scale of the social organism.

Then maths/logic became the even more abstracted and universalised symbol system that led to a civilised and technological version of this social order - one that amplified its entropic reach through machinery like steam engines and Turing computation.

So "consciousness" is an unhelpful term here. It presumes that the mind is some kind of special Cartesian substance which has properties like "an introspective glow of awareness".

Neuroscientists avoid using it. Computer scientists are not so bashful, but even they started to limit themselves to artificial "intelligence" once they were asked to put up or shut up.

Neuroscience has now got quite used to understanding consciousness and reasoning in terms of embodied semiosis – the enactive turn and Bayesian brain. So it ain't about having a neocortex. It is about there being some level of reality modelling that an organism can pragmatically afford.

Humans stumbled into language and technology - fire, spears, shelters, baskets - as a new sociocultural way of life. They could then afford a much bigger brain because this new level of semiosis filled their bellies with a much more calorie dense diet.

Quoting Gregory
Reason, for me, is the ability to grasp an idea with the necessary means of consciousness. ... If AI can do complex thoughts, it's possible it does so by another means than by way of consciousness.


You are describing how one level of semiosis gets stacked on another.

So the brain does the neurosemiosis. It gives you an animal level of intelligence, insight, habit learning, recognition memory, etc.

Then language and logic are further levels of world modelling where we humans learn to stand outside our animal or biological level of ideation to now take an objective – or rather, social and technical - view of the deal.

We learn the habit of thinking about what we are doing first from the point of view of a society, which looks at our rather animistic desires and reactions and passes some kind of more rational collective judgement.

And then we up it even more by living in a society that has learnt to stand back even from the embodied social point of view to consider the problems of existence from the point of view of a world ruled by the abstractions of numbers and logic. We become part of a civilisation that wants society to run itself in a technocratic and enlightened fashion.

Again, where does AI fit into this natural arc of mental development? In what way does it pave the path to some even higher level of semiotic intelligence?

Even for a computer scientist, this is the kind of question that needs to be answered.

IBM might self-advertise by cranking out gadgets that can win at chess, or even Go and bridge. But chucking lumps of circuitry – even biologically-inspired circuitry like neural nets – at the public is a big fake.

Replicating what brains do is just rehashing neurosemiosis. Where is AI's sociosemiosis, or technosemiosis? What social world would make sense of these neural machines?

Anyone can talk about making conscious machines as some kind of sci-fi engineering project. But actual AI ain't even a thing until we see the social engineering - the blueprint of the world in which this hardware even makes sense, pragmatically speaking.
sime August 26, 2022 at 07:48 #733213
In practice, "artificial intelligence" is merely state-of-the-art software engineering in service of human beings, done in accordance with the ideals of human rationality; it is the design and implementation of systems whose validation criteria are socially determined in accordance with cultural requirements, e.g. a recommender system must suggest a 'good' movie, a chatbot must argue 'persuasively', a chess engine must respond with a 'brilliant' move, a Mars rover must avoid 'dying'...

These sorts of applications aren't differences in 'kind' from early programming applications; they only differ in terms of their degree of environmental feedback and their corresponding hardware requirements. In both cases, software is invented to satisfy human needs and often to reinforce human prejudices.

As for general intelligence, no such thing can exist in either man or machine; to pass a 'general' Turing Test is to pass a highly specialised "human traits" examination that comes at the cost of being unable to perform any single task efficiently, whilst also ruling out the ability to execute other potentially useful behaviours that humans don't recognise as rational. (Also, no two humans have the same concept of rationality, because they live non-identical lives.)

The concept of "consciousness" cannot be divorced from the concept of rationality, because empathy is invoked when judging the rationality of another agent's actions. We put ourselves in the agent's shoes, then fool ourselves into thinking that we were experiencing their consciousness rather than ours.
Alkis Piskas August 26, 2022 at 15:24 #733291
Quoting Gregory
what the most common-sense way of seeing these machines is.

What machines?
I believe that you should explore and understand well what "Artificial Intelligence" is before launching a discussion on this subject. But of course, it's too late for that. Nevertheless, it's still a good idea to do so even now.
SpaceDweller August 26, 2022 at 19:57 #733410
Quoting Gregory
Artificial intelligence does, or may someday, have the reasoning we have, but does this mean it is conscious?

Conscious, I think, means self-aware, and if so, machines will never be self-aware like us.
Machines may have reasoning far better and faster than us, e.g. a chess engine, but not self-awareness.
Josh Alfred September 04, 2022 at 13:09 #735806
The mirror test has been used as a marker of self-awareness in human infants and in other species. When a machine can recognize itself in a mirror, it will have self-awareness.

Consciousness is stimulation of the senses. Just sensory input. Some of our machines have such, as in the case of visual recognition software.

Sentience is a little more complicated than that. Self-awareness, intelligence, consciousness, and other factors are included.

Contextual awareness, such that a thing (including the self) exists within some kind of phenomenal boundaries is possible in machines too.

Deductive reasoning has rules. If it has rules it can be simulated/programmed into machines.
Alkis Piskas September 04, 2022 at 17:06 #735879
Quoting Josh Alfred
When a machine can recognize itself in a mirror it will have self-awareness.

There are a lot of devices that can recognize all sorts of things. They are programmed to do that. So, if you program a device to recognize itself in a mirror, and then say (issue a sound) "Here I am!", it could do all that. But a machine could never do that by itself, i.e. without having been programmed and instructed appropriately. Machines do not and can never have awareness.

Awareness is a characteristic of life. Humans, animals and plants have awareness, each of a different kind. But self-awareness is an ability and characteristic that only humans have.
Alkis Piskas September 04, 2022 at 17:31 #735885
Quoting SpaceDweller
Conscious I think means self-awareness, and if so machines will never be self-aware like us.

I agree.

Quoting SpaceDweller
Machines may have reasoning far better and faster than us.

I don't agree. :smile:
Reasoning involves thinking, and machines do not think. Machines execute instructions. Sometimes, in sophisticated programs and advanced AI cases, it might seem that machines think, but behind this apparent thinking lies programming, i.e. instructions. Machines can even be programmed to create programs themselves, but this is still based on human programming.
Thinking, and with it reasoning, is an ability possessed exclusively by humans.

On the other hand, machines can surpass us, and in fact to a huge degree, in the fields of calculation, memory capacity and retrieval and timing.

Agent Smith September 04, 2022 at 17:46 #735887
What we need, as far as (artificial) intelligence is concerned, is what is described as the technological singularity - an exponential growth of intelligence - each subsequent intelligence greater than the one preceding it by a factor that depends on what's possible given physical/chemical/biological constraints. This kind of growth actually occurs at a small scale with individuals - a person is today more intelligent than she was yesterday (books & experience as invaluable teachers).
Agent Smith September 06, 2022 at 08:51 #736564
Vishvakarma/Tvashtar (The Architect/Lord of the machines). :pray:

Om Vishwakarmane Namah!

:snicker:
Seeker September 06, 2022 at 10:36 #736587
Quoting Agent Smith
What we need, as far as (artificial) intelligence is concerned, is what is described as the technological singularity - an exponential growth of intelligence - each subsequent intelligence greater than the one preceding it by a factor that depends on what's possible given physical/chemical/biological constraints. This kind of growth actually occurs at a small scale with individuals - a person is today more intelligent than she was yesterday (books & experience as invaluable teachers).


Isn't that implying the expectation (or need) for AI(s) to be exactly like us in order to satisfy the criteria for intelligence? If so, wouldn't that also introduce the need for emotional awareness, enabling regulation of the outcome in any given cycle of growth?
Agent Smith September 06, 2022 at 10:49 #736591
Reply to Seeker There's a lot we have to work on!
Seeker September 07, 2022 at 16:42 #737001
Well, here's something to chew on for now.

[quote=Free Documentary]Mankind has always looked for ways to reduce manual labor and repetitive tasks. To that end, and in the absence of technology, civilization exploited various methods, often by taking advantage of their fellow humans. Robots, as a potential solution, have long fascinated mankind, capturing our imagination for centuries. Even in Greek mythology, the god Hephaestus had « mechanical » servants. But not until recently, has artificial intelligence finally progressed to a level that will become more and more life-changing for the future of humanity.[/quote]

https://www.youtube.com/watch?v=mh45OBLeCu8
Corvus September 20, 2023 at 14:38 #838888
Reply to Gregory In the case of AI machines, wouldn't the state of being "powered ON" count as their being conscious, in human terms?
My PC has ON, OFF and also SLEEP options. It only works when it is "ON".
Raul October 23, 2023 at 20:42 #847900
You need to understand what consciousness is and what it is not. I'm not going to explain it here; read it from the experts.
What I can tell you regarding your question is that consciousness is not an ON-OFF thing. There are grades of consciousness as well as states of consciousness.
Can an AI be conscious? Yes, of course. An AI can have a very low or high grade of consciousness depending on the amount of integrated information and its modularity (Tononi's IIT).
But that consciousness is far from ours as humans, because the AI is conscious of "its world": if we talk about an AI in a Google car, that AI is conscious within its "traffic world", not beyond that.
That said, keep in mind that consciousness is not the same as self-consciousness, and take into account as well that feelings and emotions are further components to consider.

Net,
An AI will one day be conscious and self-conscious in the sense we understand human consciousness, but it will have to:
1 - be embodied
2 - be more than one AI, interacting with similar AIs in order to develop a social self. We humans could be similar and can interact, but it will need similar "replicants" to fully empathize :-)
3 - be directed by the "survival" drive
4 - have an architecture that generates a rich gradient of feelings and emotions. Never like ours, which require "flesh", but synthetic ones linked to its source of energy and to protecting its body: feeling temperature, pressure, etc., similar to what we do...
Corvus October 24, 2023 at 09:39 #848019
Reply to Raul

I have defined what consciousness is in living existence in the other thread.

Of course it would be naive to think that a computer/machine conscious state is simply an ON-OFF state. It was just a quick answer to the OP's question, with no elaboration whatsoever, because it is common knowledge that it involves more than that.

As you may know, when a computer is switched on, it doesn't just power on: it goes through various steps, from the boot program stored in the ROM BIOS that wakes the machine, to loading the operating system from the hard disk, to filling RAM with application processes and making the whole machine ready to stand by for input from all the available peripherals, such as keyboard, mouse, touch-screen monitors, network adaptors, sound chips, microphones, cameras, etc.

Again, the detailed and more precise process can be found in expert explanations, but this is just the simplest summary of how a machine's POWER ON event works in general. Depending on what type of computer, machine or AI device it is, the BOOT UP steps might differ, but in general they will be very similar.

No matter how clever a computer or AI device is, it will not be able to operate without an electric power source, i.e. AC or DC electricity, whether from rechargeable or onboard sources such as batteries, or from solar energy. Therefore it is likely that all AI devices and computers will be totally reliant on humans to supply the electric energy and to power them ON and OFF as required.
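The two points above can be caricatured in a few lines. This is only an illustrative toy (the stage names are a rough sketch, not an accurate BIOS/OS sequence): the machine's entire "waking up" is a fixed pipeline, and every stage is conditional on external power.

```python
# Caricature of the power-on process described above: a fixed,
# deterministic pipeline of stages, all conditional on electricity.
# Stage names are illustrative, not a real firmware/OS sequence.
BOOT_STAGES = [
    "ROM BIOS self-test",
    "load operating system from disk",
    "fill RAM with system processes",
    "initialise peripherals",
    "stand by for input",
]

def power_on(power_supplied: bool) -> list:
    if not power_supplied:       # no electricity, no activity at all
        return []
    return list(BOOT_STAGES)     # otherwise: the same steps, every time

print(power_on(True))
print(power_on(False))
```

Nothing in the pipeline resembles awareness; remove the power and the whole thing is simply absent.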

I doubt any AI or computer device will be able to operate fully like real humans in a biological and mental way.  AI devices will be manufactured for certain types of tasks such as cleaning, sorting mail, cooking, manufacturing ...etc.

The consciousness of AI devices will not be shareable with other conscious beings, such as humans and other AI devices, due to the nature of consciousness itself.  The contents of an AI device's consciousness could be copied or cloned to other AI devices, but humans will still not be able to know what they would be like.

It is likely that AI devices will take over many jobs carried out by humans in the future, making much of the human workforce redundant and creating economic problems for them.  However, AI devices will never be able to operate like humans do in terms of biological, social and mental life.
Corvus October 24, 2023 at 11:01 #848030
Quoting Gregory
I mean, we can imagine consciousness without reason, so why not reasoning without consciousness?


Isn't being conscious or consciousness the pre-condition for all the other mental activities?
Gregory October 24, 2023 at 18:30 #848096
Reply to Corvus

I think that consciousness is awareness, and we don't know how far that reaches in nature. The ability to do calculations, which AI has, is divorced from the heart and from intuition. Not everything can be analyzed by science.
baker October 24, 2023 at 19:30 #848117
Quoting Corvus
However, AI devices will never be able to operate like humans do in terms of biological, social and mental life.


But humans are very much on the way to operating like AI devices.
Corvus October 24, 2023 at 22:35 #848155
Reply to baker
Houston, we have a problem. A.I. has been catching up with us. They are going to replace us soon. :(
Wayfarer October 25, 2023 at 05:56 #848213
Reply to Corvus Well, I put the question to ChatGPT4. Here are the responses:

User: A hypothetical question for you: Can AI ever become sentient?

ChatGPT: The question of whether AI can become sentient is a topic of ongoing debate among scientists, philosophers, and experts in the field of artificial intelligence. Sentience refers to the capacity to have subjective experiences, consciousness, and self-awareness. As of my last knowledge update in September 2021, there is no scientific consensus on whether AI can achieve sentience.

Here are some key points to consider:

Definition of Sentience: Sentience is not well-defined, and there are different philosophical and scientific theories about what it means to be sentient. Some argue that it is a property of biological organisms, while others believe it could theoretically be achieved through advanced computational processes.

Current AI: Current AI systems, including the most advanced neural networks, are not sentient. They are specialized tools that can perform tasks based on patterns and data they have been trained on but do not possess consciousness or subjective experiences.

Debate: The debate about AI sentience often intersects with discussions about consciousness and the nature of the mind. Many experts believe that sentience and consciousness are not solely dependent on computational processes and that there may be fundamental aspects of consciousness that cannot be replicated by machines.

Ethical Considerations: Even without achieving sentience, AI technologies raise ethical questions about their impact on society, privacy, and decision-making. Discussions about AI ethics often revolve around the potential risks and benefits of increasingly sophisticated AI systems.

Future Possibilities: It is possible that future advancements in AI, particularly in areas like neuromorphic computing and brain-computer interfaces, could bring us closer to creating AI systems that exhibit properties resembling sentience. However, whether this would truly be sentience or merely sophisticated simulation is a matter of philosophical debate.

In summary, whether AI can ever become sentient remains a complex and open question. It is a topic that continues to be explored by researchers, and the answer may depend on advances in AI technology, a deeper understanding of consciousness, and the development of new theoretical frameworks. As of my last update, there is no definitive evidence or consensus on the possibility of achieving AI sentience.

User: According to the Merriam-Webster dictionary, 'the meaning of SENTIENT is capable of sensing or feeling : conscious of or responsive to the sensations of seeing, hearing, feeling, tasting, or smelling.' That would seem to challenge sentient AI, wouldn't it?

ChatGPT: You are correct in pointing out the definition of "sentient" from the Merriam-Webster dictionary, which emphasizes the capacity to sense or feel and be conscious of sensations like seeing, hearing, feeling, tasting, or smelling. This definition aligns with the commonly understood meaning of sentience, which implies subjective experiences and consciousness.

Given this definition, the challenge of creating sentient AI is indeed significant. Current AI systems, including advanced machine learning models like GPT-3 (on which I'm based), do not possess the capability to truly sense or feel in the way that living beings do. They process data and make decisions based on patterns and information in their training data, but they do not have subjective experiences, consciousness, or sensory perception.

While AI can simulate certain aspects of human-like responses and understanding through pattern recognition, it does not do so with genuine sentience. Achieving true sentience in AI would require a fundamental shift in our understanding of both AI and consciousness, and it remains a topic of speculation and debate in the fields of AI and philosophy.

It's essential to differentiate between AI's ability to mimic or simulate human-like responses and actual sentience or consciousness. While AI can be designed to perform tasks that appear to involve perception and understanding, it lacks the intrinsic awareness and subjective experiences associated with true sentience.


You might also want to peruse this online article by Bernardo Kastrup on his negative response to this question.

I think it's the absence of subjectivity that's the killer blow to the idea of sentient AI: a computer system is not a being.
Corvus October 25, 2023 at 09:05 #848232
Reply to Wayfarer I have been pessimistic about the humanisation of AI, but having said that, I was impressed by the progress of the technology in 2023 after watching some videos on AI. It will be interesting to see how far it advances as time progresses.

I recall some folks talking about transferring a human consciousness into an AI, so that the AI would operate mentally as the transferred person. I thought it was a sci-fi hypothesis, but then many sci-fi hypotheses have come true in the real world. It seems likely that a lot more surprises will happen in the AI industry in future.
Wayfarer October 25, 2023 at 09:14 #848234
Reply to Corvus I encourage people to use it. I signed up for ChatGPT the day it came out, used it every day since. It’s quite amazingly useful. I say the ‘A’ in AI stands for ‘augmented’ rather than ‘artificial’.
Corvus October 25, 2023 at 09:17 #848236
Reply to Wayfarer :ok: :up:
baker October 25, 2023 at 09:17 #848237
Quoting Wayfarer
I signed up for ChatGPT the day it came out, used it every day since.

Where on earth do you find the time for it??
Wayfarer October 25, 2023 at 09:25 #848238
Reply to baker What makes you think I’m busy?
Corvus October 25, 2023 at 10:18 #848243
Reply to Wayfarer If we get the AIs working for us cleaning, washing, writing, cooking, gardening ... etc etc, we will have plenty of free time for sure. :D
Wayfarer October 25, 2023 at 10:26 #848245
Reply to Corvus The only actual smart devices I use are to control lights and heating. All the cooking, cleaning and gardening is mine alone.
sime October 25, 2023 at 10:52 #848248
The concept called "Intelligence" - whether natural or otherwise - is largely an anthropocentric misconception of Cybernetics, born out of Cartesian folk psychology. I think the word "Intelligence" should be eliminated from the sciences.
mcdoodle October 25, 2023 at 21:03 #848370
Quoting Wayfarer
I think it's the absence of subjectivity that's the killer blow to the idea of sentient AI: a computer system is not a being.


I've been trying to write a little fiction, which just won't come out right at the moment, about whether some humans' belief in the subjectivity of a computing machine might persuade a lot of other humans that there is sentient AI. 'Definitions' can slip away from rationality in the hands of human sociality.

This seems to me already lurking in the popular speculation about future AI as a sort of supermensch. I mean, if enough Americans can believe that the best person in their midst to run the place is Donald Trump, then widespread beliefs can be ill-founded yet become entrenched.

One mistake that I discuss with Google Bard (my version of your chats to gpt) is that large language systems have been 'trained' to use the first person singular. Bard is sympathetic to my case, while still calling itself 'I' of course. In our fictions - think of all the animal stories we love - the use of 'I' seems to me to involve an ascribed sentience.

It concerns me too that OpenAI (a fine misnomer in view of their secrecy about their 'training material') is built on the shoulders of millions of human giants, whose contribution is neither acknowledged nor financially rewarded. (Some authors and artists are of course trying to sue for royalties but it seems doubtful they've got the clout against techie behemoths). Every reply one receives from so-called AI is constructed from a model built on ordinary human interactions; each 'reply' is not an expression of 'intelligence', but a refined forecast of what happens next, as programmed. The machines are stochastic parrots, as the Google women's paper of three years ago put it. But subsequent human discourse about them, and our subsequent discourse 'with' them, including our acceptance of their self-identification using the word 'I', as well as our public discourse about their potential sentience, may well be leading us up the garden path.
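To make the "refined forecast of what happens next" point concrete, here is a toy sketch of my own (nothing like the scale or sophistication of the real models, but the same basic idea): a bigram model whose every "reply" is simply the word that most often followed in its training text.

```python
from collections import Counter, defaultdict

# Toy "stochastic parrot": a bigram model whose every "reply" is just a
# forecast of the next word, from counted frequencies in its training text.
corpus = "i think therefore i am . i think so too .".split()

# Count which word follows which (the whole "model" is this table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("i"))  # 'think' (seen twice, vs 'am' once)
```

The model has no notion of what "i" means; it only knows what tended to come next. Scaled up by many orders of magnitude, that is still the mechanism behind each 'reply'.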
Wayfarer October 25, 2023 at 21:23 #848376
Reply to mcdoodle Agree that it's very easy to fall into believing you're interacting with a human agent. I find in my interactions with ChatGPT, it is adept at responding like a human coach, with encouragements, apologies where necessary ('I apologise for the confusion' when an error is pointed out), and so on.

Last year there was a thread on the well-known case of Blake Lemoine, the Google engineer who believed the system he was working on had attained sentience, and was eventually let go by Google over the case. He was utterly convinced, but I think he was a little unbalanced, let's say. But I can see how easy it would be to believe it. Just after the Big Release of ChatGPT a NY Times reporter got into a really weird interaction with it, with it trying to convince him that it loved him and that he should leave his wife.

I've never had any strange experiences with it. I too have a (long-stalled) fictional work. ChatGPT is helpful there too, in fact I'm going to pivot back to it in November and try and finally finish a draft. It's been helpful there - for instance, one of the plot points is set in a conference in Frankfurt, and I asked for some world-building detail for Frankfurt. It's also a little like a writing coach. And I also bounce philosophical ideas off ChatGPT, it's helpful at making connections, suggestions, and corrections. ('Ah, that's an insightful observation!') Have a read of this interaction I had when researching platonic realism. It's coming up to a year since ChatGPT launched and it's become very much part of the landscape as far as I'm concerned.




baker October 26, 2023 at 08:51 #848452
Reply to Wayfarer I'm amazed by people who manage to get all their work done and still have time for things such as Chat GPT.
Wayfarer October 26, 2023 at 09:07 #848457
Reply to baker They apparently have enough time to make pointless comments on internet fora.
baker October 26, 2023 at 09:13 #848459
Reply to Wayfarer *sigh*
You accused me of trolling several times. I actually don't have much time at the computer, even less for the forum. Which is, apparently, not something you'd understand.
Your accusations made me feel like shit and made me doubt myself. And I've come to admire you and envy that you have so much time for all the topics I don't. On your throne, judging. Must be great.
Wayfarer October 26, 2023 at 09:30 #848465
Quoting baker
Your accusations made me feel like shit and made me doubt myself.


Then don't make silly comments! If you took the time to read what I wrote and at the linked dialogue with ChatGPT, I don't see why you would say it is a waste of time. AI is a new frontier in technology and it can be used for all kinds of things. Sure, not all of them will be good things, but I use it to harvest ideas, look up references, suggest recipes, provide feedback on fiction writing, and other things which I don't see as 'a waste of time'.

And I don't think I accused you of trolling. You responded to an OP I created on idealist philosophy with the accusation that I wanted to enjoy the fruits of Buddhism without paying any dues, or something along those lines. To which I said, 'don't be patronising'. I think you can be a very insightful and smart contributor, but I think sometimes you tend to shoot first and ask questions later, if you know what I mean.
Corvus October 26, 2023 at 13:00 #848496
Quoting Wayfarer
The only actual smart devices I use are to control lights and heating. All the cooking, cleaning and gardening is mine alone.


I suppose there are jobs that AI can never do, no matter how capable and intelligent it is. For instance, AI might be able to cook, but it certainly can't eat the meal for you. You must eat it yourself. :D
baker October 26, 2023 at 16:11 #848562
Quoting Wayfarer
If you took the time to read what I wrote and at the linked dialogue with ChatGPT, I don't see why you would say it is a waste of time.

That's in your mind. I never said it was a waste of time. I think it's a luxury very few can afford. And you're apparently among the lucky few. I've never even visited the ChatGPT website.

I did call something a waste of time today (if that is what caught your attention), but it wasn't about AI.

And I don't think I accused you of trolling.

Like here, and in private.

You responded to an OP I created on idealist philosophy with the accusation that I wanted to enjoy the fruits of Buddhism without paying any dues, or something along those lines.

Of course. I think you are approaching Buddhism from a safe distance. It's very common for Westerners to do so. This isn't a personal criticism against you, it's pertinent to religious epistemology.

I think you can be a very insightful and smart contributor but I think sometimes you tend to shoot first and ask questions later, if you know what I mean.

This is a philosophy forum. Sapere aude!


And back on topic!
baker October 26, 2023 at 16:19 #848565
Quoting Corvus
If we get the AIs working for us cleaning, washing, writing, cooking, gardening ... etc etc, we will have plenty of free time for sure.


To do what with all that free time? Play videogames? Watch films, preferably consisting of nothing but deep fakes?

It seems to be in the interest of the stakeholders in the AI business that people consume and dumb down. What is more, it seems to be somehow evolutionarily advantageous to be an avid consumer and to dumb down, as this is the way to adapt to modern society, and adaptation is necessary if one is to survive.
Corvus October 26, 2023 at 16:36 #848570
Quoting baker
To do what with all that free time? Play videogames? Watch films, preferrably consisting of nothing but deep fakes?


Some will get bored with the free time, but some will enjoy doing what they enjoy doing e.g. travelling, dating (online chatting with the robots?), reading, meditating, or thinking what to do with the free time :)

Quoting baker
It seems to be in the interest of the stakeholders in the AI business that people consume and dumb down. What is more, it seems to be somehow evolutionarily advantageous to be an avid consumer and to dumb down, as this is the way to adapt to modern society, and adaptation is necessary if one is to survive.


Yup, there will be some social problems stemming from the commercialism.

Alkis Piskas October 26, 2023 at 17:10 #848580
Reply to Gregory
I'm sorry to say this, but you don't seem to know anything about your subject.

What do you mean "AI are in the news again"? What does "are" stand for? And "again", as opposed to when?
And then, what are "these machines"? And what makes you think that they can "someday have the reasoning we have"?
And then, "does this mean they are conscious?" How can a machine be conscious, when consciousness is an attribute of only living things?

I believe that, as a minimum requirement, you should learn about your subject, at least look it up in a dictionary or encyclopedia, before launching a discussion about it. Otherwise, you lay yourself open to ridicule. (As it happens here. And again, I'm really sorry to have to say all that.)

Wayfarer October 26, 2023 at 20:54 #848669
Quoting baker
I've never even visited the ChatGPT website.


Perhaps you ought to. As for being ‘a luxury’, it’s actually available for free, although there are also subscription models.
wonderer1 October 26, 2023 at 21:23 #848681
In AI news...

But are machines capable of this type of thinking? In the late 1980s, Jerry Fodor and Zenon Pylyshyn, philosophers and cognitive scientists, posited that artificial neural networks -- the engines that drive artificial intelligence and machine learning -- are not capable of making these connections, known as "compositional generalizations." However, in the decades since, scientists have been developing ways to instill this capacity in neural networks and related technologies, but with mixed success, thereby keeping alive this decades-old debate.


"For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization," says Brenden Lake, an assistant professor in NYU's Center for Data Science and Department of Psychology and one of the authors of the paper. "We have shown, for the first time, that a generic neural network can mimic or exceed human systematic generalization in a head-to-head comparison."

ssu October 27, 2023 at 12:27 #848834
Quoting Wayfarer
I signed up for ChatGPT the day it came out, used it every day since. It’s quite amazingly useful. i say the ‘A’ in AI stands for ‘augmented’ rather than ‘artificial’.

I would dare to say that simply using Google or any search engine is a similar use of "augmented" intelligence.

I still wouldn't think either search engines or ChatGPT mean machines are taking us over.
Gregory October 27, 2023 at 22:18 #848951
Reply to Alkis Piskas

I didn't say I knew anything about the topic. It's interesting to see others' views on the subject while on the forum. Again, this was posted over a year ago.
Alkis Piskas October 28, 2023 at 05:46 #849028
Quoting Gregory
I didn't say i knew anything about the topic.

No, you didn't have to. It showed by itself. And yet, you have launched a discussion about it. So, my question was, how come you didn't at least try to learn something about it. That's all.
Gregory October 28, 2023 at 20:00 #849172
Reply to Alkis Piskas

My point was to learn from the discussion, and your aggression is unwanted. Why don't you provide your own philosophy of AI here, right now? Not factoids, but philosophical thought. I'll comment.
Alkis Piskas October 28, 2023 at 20:04 #849174
Reply to Gregory
You make no sense, man.
Wolfgang November 07, 2023 at 08:07 #851394
https://medium.com/neo-cybernetics/artificial-consciousness-f3118927248d
MoK January 30, 2024 at 18:58 #876543
Reply to Gregory
There is no need for consciousness when it comes to reasoning. We could simulate the brain if we knew how it works when it reasons. There are three questions that we need to answer before we can design an AI that can reason. These questions are:
1) How is new information classified in the brain?
2) How is related information linked in the brain?
3) How do we derive new information from the linked information?
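A toy semantic network can make these three questions concrete (a hypothetical sketch of mine, not a claim about how the brain actually does it): facts are classified by their relation type, linked through shared terms, and new facts are derived by chaining the links.

```python
# A toy semantic network for the three questions (a hypothetical sketch,
# not a brain model). Facts are (subject, relation, object) triples:
# the relation classifies the fact (question 1), shared terms link facts
# (question 2), and chaining links derives new facts (question 3).
facts = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
}

def derive(facts):
    """Question 3: infer new triples by chaining through shared nodes."""
    new = set()
    for a, r1, b in facts:
        for c, r2, d in facts:
            if b == c and r1 == "is_a":  # question 2: linked via shared node
                new.add((a, r2, d))      # inherit the linked fact
    return new - facts

print(sorted(derive(facts)))  # canary is_a animal, canary can fly
```

Nothing in this sketch is conscious, yet it produces facts it was never given, which is the sense in which reasoning might not require consciousness.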