Artificial intelligence
AI is in the news again, and it got me wondering what the most common-sense way of seeing these machines is. Animals have consciousness but not reasoning like ours. Artificial intelligence does, or may someday, have the reasoning we have, but does this mean it is conscious? I mean, we can imagine consciousness without reason, so why not reasoning without consciousness? I haven't seen this considered before, so I thought I'd throw it out there.
Comments (65)
That is what I think. Thinking can take place without consciousness. Which is why I think wondering about sentient AI is beside the point.
Reason sans consciousness: Computers!
We have to, sensu lato, put computers before mirrors and let the magic happen!
What do you mean by "consciousness" in this query?
Feeling of mind
What do you mean by "mind"?
Quoting 180 Proof
Going Socrates on (poor) Gregory. Looks as though Gregory's casting a wide net - he's not trying to catch a particular kinda fish, any fish'll do!
Mind is awareness, which is a feeling. How can we know if AI has it, since its "biology" is so different from ours? People say they might demonstrate reason in AI, but I'm wondering if this includes awareness.
But you stated that "consciousness" is "feeling of mind" "which is a feeling." This circularity makes no sense and renders the OP gibberish. Unless, of course, I'm missing something ... :chin:
? Jeff Hawkins
Don't we already have this with computers? The device that I'm using to send this message can easily beat me in chess, for instance, and it's not conscious.
I think animals have consciousness along with the ability to reason; the difference between humans and other intelligent forms has to do with varying degrees of complexification. Our consciousness is the medium of our awareness and informs our reason (man or animal, or plant even). The more complex a consciousness is, the more scope it has for consideration; the less complex, the less it's able to consider complex variables. The evolution of the nervous system reached its maximum complexity on this planet with the advent of modern humans. This evolutionary process is still going on, and the torch of complexity is at this moment beginning to pass to AI. A fully integrated AI planetary network functioning as one consciousness seems to be the evolutionary trajectory that we are on.
Here's one of my blogs that you can interact with and hopefully enjoy: https://taoofthepsyche.blogspot.com/2018/07/the-robot-life-unfinished-treatise.html
Intelligence is just taking information as input and returning some output that is "useful" for some "task". A computer engine designed to play chess is a form of intelligence. The "usefulness" and the "task" are not rigorous concepts, and an AI could actually be stupid or pointless. You could think that any system that gives some "response" to any "input" has the same nature and essence as an AI.
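To make that loose definition concrete, here is a minimal sketch in Python. The function and its "task" (sorting) are invented purely for illustration: under this definition, any input-to-output mapping that is "useful" for some task would count as a form of intelligence.

```python
# Under the loose definition above, any input-to-output mapping that is
# "useful" for some task counts as a form of intelligence.
def respond(inputs):
    """Return an output deemed 'useful' for one arbitrary task: sorting."""
    return sorted(inputs)

print(respond([3, 1, 2]))  # → [1, 2, 3]
```

Of course, nothing here reasons or is aware, which is exactly the point the definition leaves open.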
Right now so called AI can perform specific tasks based on extensive programming. At the height of its complexity, these tasks can be generalized to what may be called "abilities": carry on a conversation, for example. So the question is, if we think of AI as being conscious, is this a specific ability which we confer on it? That only begs the question of what consciousness is. In that case, if we think of AI as attaining consciousness, it must be in the context of us conferring more and more task specific capabilities such that, in a cumulative fashion, new generalized abilities emerge, at the apex of which emerges consciousness, the ultimate general ability. And if it is an emergent property then we would no more have created that consciousness than we created the matter out of which the computer was formed.
As to reason without consciousness: in the abilities-centric characterization just offered, I think reason and consciousness must be synonymous. Viz., a computer that displays the general ability of "carrying on a conversation" (in the context of the Turing test, say) is not really reasoning, just executing a whole lot of algorithms very quickly. You could not call that reasoning unless it were at the same time conscious.
For now an AI is still nothing more than a database full of predefined (answer) sentences. The answer to present from such a database (AI) is triggered by any occurrence of matching (key)words coming from any possible question(s). The 'smart' (reasoning) behind any of it is still (totally) dependent on the skill level and the creativity of the (human) programmer(s).
Neuroreductionism.
The better answer is that consciousness is simply what it is like to be a self living in its world.
So the neocortex, and the rest of the brain, are all a necessary part of the hardware equation. But being "a mind" is how the neocortex, rest of the brain, and even the entire body - and with humans, the whole damn sociocultural edifice get to pay for their biological existence.
Consciousness is the modelling relation an organism has with its environment. An engorged neocortex is what you can afford if it adds that much of a benefit in terms of a nutrition and survival dividend.
Brains burn through energy like working muscle, even when idling. So this is something we have to consider when it comes to AI. An artificial mind would also be one that is paying close attention to its own organismic existence. It would have to be smart in the sense of earning its entropic keep.
Of course, here in the real world, humans build machines to amplify their own power to exist. They are an investment meant to serve our entropic existence. We want AI in the form of extensions to our reach, not as some rival class of organisms, living in the same world, equipped with the minds - or modelling relation - which might allow them that level of mental independence.
If we build actual AI, then we are just proving ourselves stupid.
Quoting Gregory
Animals have reason. They have genetic and neural level models of the world they live in that work because they are "reasonable" in the pragmatic sense.
So what humans have got is the extra semiotic modelling capacity that comes with having developed speech and maths - codes based on words and numbers, layered on top of the codes based on genes and neurons.
Words allow humans to organise in a properly organismic fashion - as one shared mind - at the scale of the social organism.
Then maths/logic became the even more abstracted and universalised symbol system that led to a civilised and technological version of this social order - one that amplified its entropic reach through machinery like steam engines and Turing computation.
So "consciousness" is an unhelpful term here. It presumes that the mind is some kind of special Cartesian substance which has properties like "an introspective glow of awareness".
Neuroscientists avoid using it. Computer scientists are not so bashful, but even they started to limit themselves to artificial "intelligence" once they were asked to put up or shut up.
Neuroscience has now got quite used to understanding consciousness and reasoning in terms of embodied semiosis, the enactive turn, and the Bayesian brain. So it ain't about having a neocortex. It is about there being some level of reality modelling that an organism can pragmatically afford.
Humans stumbled into language and technology - fire, spears, shelters, baskets - as a new sociocultural way of life. They could then afford a much bigger brain because this new level of semiosis filled their bellies with a much more calorie dense diet.
Quoting Gregory
You are describing how one level of semiosis gets stacked on another.
So the brain does the neurosemiosis. It gives you an animal level of intelligence, insight, habit learning, recognition memory, etc.
Then language and logic are further levels of world modelling where we humans learn to stand outside our animal or biological level of ideation to take an objective - or rather, social and technical - view of the deal.
We learn the habit of thinking about what we are doing first from the point of view of a society, which looks at our rather animistic desires and reactions and passes some kind of more rational collective judgement.
And then we up it even more by living in a society that has learnt to stand back even from the embodied social point of view to consider the problems of existence from the point of view of a world ruled by the abstractions of numbers and logic. We become part of a civilisation that wants society to run itself in a technocratic and enlightened fashion.
Again, where does AI fit into this natural arc of mental development? In what way does it pave the path to some even higher level of semiotic intelligence?
Even for a computer scientist, this is the kind of question that needs to be answered.
IBM might self-advertise by cranking out gadgets that can win at chess, or even Go and bridge. But chucking lumps of circuitry - even biologically inspired circuitry like neural nets - at the public is a big fake.
Replicating what brains do is just rehashing neurosemiosis. Where is AI's sociosemiosis, or technosemiosis? What social world would make sense of these neural machines?
Anyone can talk about making conscious machines as some kind of sci-fi engineering project. But actual AI ain't even a thing until we see the social engineering - the blueprint of the world in which this hardware even makes sense, pragmatically speaking.
These sorts of applications aren't differences in 'kind' from early programming applications; they only differ in terms of their degree of environmental feedback and their corresponding hardware requirements. In both cases, software is invented to satisfy human needs and often to reinforce human prejudices.
As for general intelligence, no such thing can exist in either man or machine; to pass a 'general' Turing Test is to pass a highly specialised "human traits" examination that comes at the cost of being unable to perform any single task efficiently, whilst also ruling out the ability to execute other potentially useful behaviours that humans don't recognise as being rational. (Also, no two humans have the same concept of rationality, because they live non-identical lives.)
The concept of "consciousness" cannot be divorced from the concept of rationality, because empathy is invoked when judging the rationality of another agent's actions. We put ourselves in the agent's shoes, then fool ourselves into thinking that we were experiencing their consciousness rather than ours.
What machines?
I believe that you should explore and understand well what "Artificial Intelligence" is before launching a discussion on this subject. But of course, it's too late for that. Nevertheless, it's still a good idea to do that even now.
Conscious, I think, means self-aware, and if so, machines will never be self-aware like us.
Machines may have reasoning far better and faster than ours - a chess engine, for example - but not self-awareness.
Consciousness is stimulation of the senses. Just sensory input. Some of our machines have such, as in the case of visual-recognition software.
Sentience is a little more complicated than that. Self-awareness, intelligence, consciousness, and other factors are included.
Contextual awareness, such that a thing (including the self) exists within some kind of phenomenal boundaries is possible in machines too.
Deductive reasoning has rules. If it has rules it can be simulated/programmed into machines.
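Since deduction is rule-governed, a minimal rule engine is easy to sketch. Here is a toy forward-chaining example in Python; the facts and rules are invented for illustration, and real inference engines are far more elaborate:

```python
# A minimal sketch of rule-based deduction (forward chaining).
# Facts and rules here are invented for illustration.

facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),   # if human, then mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),   # if mortal, then will die
]

# Repeatedly apply every rule whose premises are satisfied,
# until no new conclusions appear.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
```

The machine mechanically derives "socrates_will_die" without anything we would call awareness, which is the poster's point: rule-following can be programmed whether or not consciousness is present.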
There are a lot of devices that can recognize all sorts of things. They are programmed to do that. So, if you program a device to recognize itself in a mirror and then say (issue a sound) "Here I am!", it could do all that. But a machine could never do that by itself, i.e. without having been programmed and instructed appropriately. Machines do not and can never have awareness.
Awareness is a characteristic of life. Humans, animals and plants have awareness, each of a different kind. But self-awareness is an ability and characteristic that only humans have.
I agree.
Quoting SpaceDweller
I don't agree. :smile:
Reasoning involves thinking, and machines do not think. Machines execute instructions. Sometimes, in sophisticated programs and advanced AI cases, it might seem that the machines think, but behind this apparent thinking lies programming, i.e. instructions. Machines can be even programmed to create programs themselves, but this is still based on human programming.
Thinking, and with it reasoning, is an ability possessed exclusively by humans.
On the other hand, machines can surpass us, and in fact to a huge degree, in the fields of calculation, memory capacity and retrieval and timing.
Om Vishwakarmane Namah!
:snicker:
Isn't that implying the expectation (or need) for AI(s) to be exactly like us in order to satisfy the criteria for intelligence? If so, wouldn't that also introduce the need for emotional awareness, enabling regulation of the outcome per any given cycle of growth?
[quote=Free Documentary]Mankind has always looked for ways to reduce manual labor and repetitive tasks. To that end, and in the absence of technology, civilization exploited various methods, often by taking advantage of their fellow humans. Robots, as a potential solution, have long fascinated mankind, capturing our imagination for centuries. Even in Greek mythology, the god Hephaestus had « mechanical » servants. But not until recently, has artificial intelligence finally progressed to a level that will become more and more life-changing for the future of humanity.[/quote]
https://www.youtube.com/watch?v=mh45OBLeCu8
My PC has ON OFF and also SLEEP options. It only works when it is "ON".
What I can tell you regarding your question is that consciousness is not an ON-OFF thing. There are grades of consciousness as well as states of consciousness.
Can an AI be conscious? Yes, of course. An AI can have a very low or high grade of consciousness depending on the amount of integrated information and its modularity (Tononi's IIT).
But that consciousness is far from ours as humans, because the AI is conscious of "its world". That is to say, if we talk about an AI in a Google car, that AI is conscious within its "traffic world", not beyond that.
That said, keep in mind that consciousness is not the same as self-consciousness, and that feelings and emotions are further components to take into account.
In short:
An AI will one day be conscious and self-conscious in the sense we understand human consciousness, but it will have to:
1 - be embodied
2 - be more than one AI - interact with similar AIs in order to develop a social self. We humans could be similar and can interact, but it will need similar "replicants" to fully empathize :-)
3 - be directed by the "survival" drive
4 - have an architecture that generates a rich gradient of feelings and emotions. Never like ours, which require "flesh", but synthetic ones linked to its source of energy and the protection of its body - feeling temperature, pressure, etc. - similar to what we do...
I have defined what consciousness is in living existence in the other thread.
Of course it would be naive to think that a computer / machine conscious state is just simply an ON-OFF state. It was just a quick answer to the OP's question with no elaboration whatsoever, because it is a common knowledge that it involves more than that.
As you may know, when a computer is switched on, it doesn't just power on; it goes through various steps: the boot program stored in the ROM BIOS wakes the machine, the Operating System is loaded from the hard disk, and the RAM is filled with application processes, making the whole machine ready to stand by for input from all the available peripherals such as keyboards, mice, touch-screen monitors, network adaptors, sound chips, microphones, cameras, etc.
Again, the detailed and more precise process can be found in experts' explanations, but this is just the simplest summary of how the machine's POWER ON event works in general. Depending on what type of computer, machine or AI device it is, the BOOT UP steps might differ, but in general they will be very similar.
No matter how clever a computer or AI device is, it will not be able to operate without an electric power source, i.e. AC or DC electricity, either from a rechargeable or direct on-board power source such as batteries, or from solar energy. Therefore it is likely all AI devices and computers will be totally reliant on humans to supply the electric energy and also to power them ON and OFF as required.
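The generic power-on sequence described above can be sketched very loosely in Python. The step names are illustrative only, not any real firmware's behaviour:

```python
# A loose sketch of the generic power-on sequence described above.
# Step names are illustrative, not any real BIOS or OS interface.

def power_on():
    steps = [
        "run ROM/BIOS boot program to wake the machine",
        "load the Operating System from disk into RAM",
        "start application processes",
        "poll peripherals (keyboard, mouse, network, camera, ...)",
    ]
    for step in steps:
        print("boot:", step)
    return "standing by for input"

state = power_on()
print(state)
```

The point survives the simplification: every step presupposes that power is already flowing, so the whole sequence depends on an external energy supply.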
I doubt any AI or computer device will be able to operate fully like real humans in a biological and mental way. AI devices will be manufactured for certain types of tasks such as cleaning, sorting mail, cooking, manufacturing, etc.
Consciousness of AI devices will not be able to be shared by other conscious beings such as humans and other AI devices due to the nature of consciousness itself. The contents of the AI devices consciousness could be copied or cloned to the other AI devices, but humans will still not be able to know what they would be like.
It is likely that AI devices will take over many jobs carried out by humans in the future making much of the human workforce redundant creating economic problems for them. However, AI devices will never be able to operate like humans do in terms of biological, social and mental life.
Isn't being conscious or consciousness the pre-condition for all the other mental activities?
I think that consciousness is awareness, and we don't know how far that reaches in nature. The ability to do calculations, which AI has, is divorced from the heart and intuition. Not everything can be analyzed by science.
But humans are very much on the way to operate like AI devices.
Can AI ever be sentient? https://www.bbc.com/reel/video/p0f73vlw/can-artificial-intelligence-ever-be-sentient-
AI Consciousness 2023 - 2030 Timeline of Sentient Machines
Seductive AI has no rules. It will change your life (or end it).
Houston, we have a problem. A.I. has been catching up with us. They are going to replace us soon. :(
You might also want to peruse this online article by Bernardo Kastrup on his negative response to this question.
I think it's the absence of subjectivity that's the killer blow to the idea of sentient AI: a computer system is not a being.
I recall some folks talking about transferring a human consciousness into AI, and the AI would operate as the transferred person mentally. I thought it was a Sci-Fi hypothesis, but then many Sci-Fi hypotheses came true in the real world. It seems likely that a lot more surprises will happen in the future AI industry.
Where on earth do you find the time for it??
I've been trying to write a little fiction, that just won't come out right at the moment, about whether some humans' belief in the subjectivity of a computing machine might persuade a lot of other humans there is sentient AI. 'Definitions' can slip away from rationality in the hands of human sociality.
This seems to me already lurking in the popular speculation about future AI as a sort of supermensch. I mean, if enough Americans can believe that the best person in their midst to run the place is Donald Trump, then widespread beliefs can be ill-founded yet become entrenched.
One mistake that I discuss with Google Bard (my version of your chats to gpt) is that large language systems have been 'trained' to use the first person singular. Bard is sympathetic to my case, while still calling itself 'I' of course. In our fictions - think of all the animal stories we love - the use of 'I' seems to me to involve an ascribed sentience.
It concerns me too that OpenAI (a fine misnomer in view of their secrecy about their 'training material') is built on the shoulders of millions of human giants, whose contribution is neither acknowledged nor financially rewarded. (Some authors and artists are of course trying to sue for royalties but it seems doubtful they've got the clout against techie behemoths). Every reply one receives from so-called AI is constructed from a model built on ordinary human interactions; each 'reply' is not an expression of 'intelligence', but a refined forecast of what happens next, as programmed. The machines are stochastic parrots, as the Google women's paper of three years ago put it. But subsequent human discourse about them, and our subsequent discourse 'with' them, including our acceptance of their self-identification using the word 'I', as well as our public discourse about their potential sentience, may well be leading us up the garden path.
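The "refined forecast of what happens next" can be illustrated with the simplest possible language model: a bigram counter that predicts the most frequent next word seen in training. The corpus here is invented, and real large language models are vastly more sophisticated, but the principle - output as statistical continuation rather than expression of intelligence - is the same:

```python
# A toy "stochastic parrot": a bigram model that forecasts the next word
# from frequency counts over its training text. The corpus is invented.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the continuation most frequently seen in training."""
    return counts[word].most_common(1)[0][0]

print(predict("the"))  # → "cat" (seen twice, vs "mat" and "fish" once each)
```

Nothing in this machinery refers to a self, yet dress its output in the first person singular and the ascription of sentience comes almost for free.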
Last year there was a thread on the well-known case of Blake Lemoine, the Google engineer who believed the system he was working on had attained sentience and was eventually let go by Google over the case. He was utterly convinced, but I think he was a little unbalanced, let's say. But I can see how easy it would be to believe it. Just after the big release of ChatGPT, a NY Times reporter got into a really weird interaction with it, with it trying to convince him that it loved him and that he should leave his wife.
I've never had any strange experiences with it. I too have a (long-stalled) fictional work. ChatGPT is helpful there too, in fact I'm going to pivot back to it in November and try and finally finish a draft. It's been helpful there - for instance, one of the plot points is set in a conference in Frankfurt, and I asked for some world-building detail for Frankfurt. It's also a little like a writing coach. And I also bounce philosophical ideas off ChatGPT, it's helpful at making connections, suggestions, and corrections. ('Ah, that's an insightful observation!') Have a read of this interaction I had when researching platonic realism. It's coming up to a year since ChatGPT launched and it's become very much part of the landscape as far as I'm concerned.
You accused me of trolling several times. I actually don't have much time at the computer, even less for the forum. Which is, apparently, not something you'd understand.
Your accusations made me feel like shit and made me doubt myself. And I've come to admire you and envy that you have so much time for all the topics I don't. On your throne, judging. Must be great.
Then don't make silly comments! If you took the time to read what I wrote and at the linked dialogue with ChatGPT, I don't see why you would say it is a waste of time. AI is a new frontier in technology and it can be used for all kinds of things. Sure, not all of them will be good things, but I use it to harvest ideas, look up references, suggest recipes, provide feedback on fiction writing, and other things which I don't see as 'a waste of time'.
And I don't think I accused you of trolling. You responded to an OP I created on idealist philosophy with the accusation that I wanted to enjoy the fruits of Buddhism without paying any dues, or something along those lines. To which I said, 'don't be patronising'. I think you can be a very insightful and smart contributor, but sometimes you tend to shoot first and ask questions later, if you know what I mean.
I suppose there are jobs that AI can never do no matter how capable and intelligent they are. For instance, suppose AI could cook, but they cannot eat the meals for you for sure. You must eat it yourself. :D
That's in your mind. I never said it was a waste of time. I think it's a luxury very few can afford. And you're apparently among the lucky few. I've never even visited the ChatGPT website.
I did call something a waste of time today (if that is what caught your attention), but it wasn't about AI.
Like here, and in private.
Of course. I think you are approaching Buddhism from a safe distance. It's very common for Westerners to do so. This isn't a personal criticism against you, it's pertinent to religious epistemology.
This is a philosophy forum. Sapere aude!
And back on topic!
To do what with all that free time? Play video games? Watch films, preferably consisting of nothing but deep fakes?
It seems to be in the interest of the stakeholders in the AI business that people consume and dumb down. What is more, it seems to be somehow evolutionarily advantageous to be an avid consumer and to dumb down, as this is the way to adapt to modern society, and adaptation is necessary if one is to survive.
Some will get bored with the free time, but some will enjoy doing what they enjoy doing e.g. travelling, dating (online chatting with the robots?), reading, meditating, or thinking what to do with the free time :)
Quoting baker
Yup, there will be some social problems stemming from the commercialism.
I'm sorry to say this, but you don't seem to know anything about your subject.
What do you mean, "AI are in the news again"? What does "are" stand for? And "again", as opposed to when?
And then, what are "these machines"? And what makes you think that they can "someday have the reasoning we have"?
And then, "does this mean they are conscious?" How can a machine be conscious, when consciousness is an attribute of only living things?
I believe that, as a minimum requirement, you should learn about your subject, at least look it up in a dictionary or encyclopedia, before launching a discussion about it. Otherwise, you lay yourself open to ridicule. (As happens here. And again, I'm really sorry to have to say all that.)
Perhaps you ought to. As for being a luxury, it's actually available for free, although there are also subscription models.
I would dare to say that simply using Google or any search engine is a similar use of "augmented" intelligence.
I still wouldn't think either search engines or ChatGPT mean machines are taking us over.
I didn't say I knew anything about the topic. It's interesting to see others' views on the subject while on the forum. Again, this was posted over a year ago.
No, you didn't have to. It showed by itself. And yet you launched a discussion about it. So my question was: how come you didn't at least try to learn something about it first? That's all.
My point was to learn from the discussion, and your aggression is unwanted. Why don't you provide your own philosophy on AI here, right now? Not factoids, but philosophical thought. I'll comment.
You make no sense, man.
There is no need for consciousness when it comes to reasoning. We could simulate the brain if we knew how it works when it comes to reasoning. There are three questions that we need to answer before we can design an AI that can reason:
1) How is new information classified in the brain?
2) How is related information linked in the brain?
3) How do we derive new information from the linked information?
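The three steps can be sketched as a toy associative store in Python. The classification scheme, the data, and all the function names are invented for illustration; no claim is made that the brain works this way, only that the classify/link/retrieve pattern is programmable:

```python
# A toy sketch of the three questions above: (1) classify new information,
# (2) link related items, (3) retrieve "new" information via the links.
# The data and the one-feature classification scheme are invented.
from collections import defaultdict

links = defaultdict(set)

def classify(item):
    # 1) crude classification by a single hand-picked feature
    return "animal" if item in {"dog", "cat", "sparrow"} else "object"

def store(item):
    # 2) link the item to everything sharing its class
    links[classify(item)].add(item)

def related(item):
    # 3) "new" information: neighbours reached by following shared links
    return links[classify(item)] - {item}

for thing in ["dog", "cat", "stone"]:
    store(thing)

print(sorted(related("dog")))  # → ['cat']
```

Whether answering the three questions at brain scale would yield reasoning without consciousness is, of course, exactly what the thread is debating.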