Consciousness Encapsulated

enqramot June 25, 2022 at 14:25 6800 views 32 comments
By this I mean: do you think it would be (strictly theoretically) possible to emulate, say, a person with AI (common Artificial Intelligence, naturally not possessing any sort of consciousness, but instruction-driven, as deterministic as they get), given unlimited resources like RAM, CPU speed, etc.? ASSUMPTIONS: the average person is not totally unpredictable, somehow follows certain patterns (you could argue they're governed by the subconscious), and is limited in terms of imagination, memory, intelligence, you name it. All this creates this person's general character, their personality – which could be described in general terms. Programming such a person into an AI system would mean a much more detailed description of that person, down to the most minute details of their behaviour in various scenarios.
What would be the stumbling block, do you think? How is a conscious mind essentially different to AI on a strictly operational level? How would you go about programming such a thing? What are conscious thoughts? Who creates them / how are they generated? Are they created inside the system or received from outside?
What would be the implications of two entities, a conscious one and an unconscious one, displaying the same sort of behaviour?
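To make the premise concrete: a minimal sketch of what an "instruction-driven, as deterministic as they get" emulation might look like, assuming a hypothetical scenario → behaviour table plus coarse character traits as a fallback. The Personality class, the trait numbers, and the scenario strings are all invented for illustration, not a real proposal.

```python
# Toy sketch of the premise: a deterministic, instruction-driven
# "person emulator". Everything here (class, traits, scenarios) is
# hypothetical, invented only to illustrate the idea.

from dataclasses import dataclass, field

@dataclass
class Personality:
    # General character, describable "in general terms".
    traits: dict = field(default_factory=lambda: {"patience": 0.7})
    # The "much more detailed description": scenario -> behaviour.
    behaviour_table: dict = field(default_factory=dict)

    def react(self, scenario: str) -> str:
        # Fully deterministic: the same scenario always yields the same behaviour.
        if scenario in self.behaviour_table:
            return self.behaviour_table[scenario]
        # Fall back on a general-character rule when no detail is recorded.
        return "smile politely" if self.traits["patience"] > 0.5 else "change the subject"

alice = Personality(behaviour_table={"greeted by a stranger": "nod and say hello"})
print(alice.react("greeted by a stranger"))   # nod and say hello
print(alice.react("asked about the weather")) # smile politely (fallback)
```

However detailed the table, everything here remains a lookup; whether that could ever amount to the emulated person is exactly the question.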

Comments (32)

Down The Rabbit Hole June 25, 2022 at 20:47 #712267
Reply to enqramot

I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too.
Gnomon June 26, 2022 at 00:00 #712336
Quoting enqramot
How is a conscious mind essentially different to AI on a strictly operational level? How would you go about programming such a thing?

Your question hinges on your philosophical or technical definition of "Consciousness". Literally, the "-ness" suffix implies that the reference is to a general State or felt Quality (of sentience), not to a specific Thing or definite Quanta (e.g. neurons). In Nature, animated behavior (e.g. seek food, or avoid being food) is presumed to be a sign of minimal sentience, and self-awareness.

AI programs today are able to crudely mimic sophisticated human behaviors, and the common expectation is that the animation & expressions of man-made robots will eventually be indistinguishable from their nature-made makers -- on an "operational level". When that happens, the issue of enslaving sentient (knowing & feeling) beings could require the emancipation of artificial creatures, since modern ethical philosophy has decided that, in a Utopia, all "persons" are morally equal -- on an essential level.

Defining a proper ethical hierarchy is not a new moral conundrum though. For thousands of years, military captives were defined as "slaves", due to their limited freedom in the dominant culture. Since many captives of the ruling power happened to have darker skin, that distinguishing mark came to be definitive. At the same time, females in a male-dominated society, due to their lack of military prowess, were defined as second-class citizens. At this point in time, the social status of AI is ambiguous; some people treat their "comfort robots" almost as-if they are "real" pets or persons. But, dystopian movies typically portray dispassionate artificial beings as the dominant life-form (?) on the planet.

But, how can we distinguish a "real" Person from a person-like Mechanism? That "essential" difference is what Chalmers labeled the "Hard Problem" : to explain "why and how we have qualia or phenomenal experiences". The essence-of-sentience is also what Nagel was groping for in his query "what does it feel like?". Between humans, we take homo sapiens feelings for granted, based on the assumption of similar genetic heritage, hence equivalent emotions. But the genesis of AI is a novel & unnatural lineage in evolution. So, although robots are technically the offspring of human minds, are they actually kin, or uncanny?

Knowing and Feeling are the operational functions of Consciousness. But Science doesn't do Essences. "If you can't measure it, it ain't real". Yet, a Cartesian solipsist could reply, "If I can't feel it, it ain't real". Therefore, I would answer the OP : that the essential difference between AI behavior and human Consciousness is the Qualia (the immeasurable feeling) of Knowing. Until Cyberneticists can reduce the Feeling-of-Knowing to a string of 1s & 0s, Consciousness will remain essential, yet ethereal. So, if a robot says it's conscious, we may just have to take its expression for evidence. :smile:


Google AI has come to life :
AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Google's AI is impressive, but it's not sentient. Here's why :
https://www.msnbc.com/opinion/msnbc-opinion/google-s-ai-impressive-it-s-not-sentient-here-s-n1296406
enqramot June 26, 2022 at 00:26 #712355
Quoting Down The Rabbit Hole
I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too.


I've heard this theory, but I must admit it doesn't really make any sense to me, tbh. I just can't see how increasing complexity can lead to anything other than just more complexity. Impossible to rule this out, of course, but suppose scientists achieve consciousness in this way: they'd still have no clue what consciousness is. To me it seems that the very idea of associating computers with consciousness is based solely on the apparent similarity of what computers do to human thinking, disregarding the fact that the general principle is completely different. It sounds like wishful thinking - hoping that increasing complexity will somehow result in consciousness as a byproduct. It's like barking up the wrong tree entirely. We know how computers work - there's nothing mystical about them - and they are not conscious.
Jackson June 26, 2022 at 00:30 #712357
Quoting enqramot
the very idea of associating computers with consciousness is based solely on the apparent similarity of what computers do to human thinking,


Computers don't need to be conscious. I don't see why people make a big deal out of consciousness.
enqramot June 26, 2022 at 00:35 #712359
Reply to Gnomon
Good and long reply, but it's already 0230 local time so I'll take a stance tomorrow.
Manuel June 26, 2022 at 00:40 #712360
Reply to enqramot

I think the problem here already lies in the premise, that consciousness is a kind of AI, or that it could somehow be transcribed into AI terms and relevant technology.

It seems to me that this is the immediate stumbling block. Brains are biological organs that have gone through billions of years of evolution. Machines are things people make, and brains are far more sophisticated than anything that could be built with any technology.

There's also the issue that there are no machines in nature. We could choose to think of nature as a machine, but that would be misleading if taken literally.

As for the general question, I think we don't know anywhere near enough about experience to even know where to begin on creating consciousness. And we likely never will, given our cognitive limitations as natural beings.
enqramot June 26, 2022 at 00:41 #712362
Quoting Jackson
Computers don't need to be conscious. I don't see why people make a big deal out of consciousness.


Absolutely. Computers don't do consciousness and that's their advantage. But the nature of consciousness has eluded science for such a long time that it's impossible not to see it as a huge challenge.
Jackson June 26, 2022 at 00:41 #712363
Quoting Manuel
we don't know anywhere near enough about experience to even know where to begin on creating consciousness


I don't think people working in AI are even concerned with consciousness.
Jackson June 26, 2022 at 00:43 #712364
Quoting enqramot
But the nature of consciousness has eluded science for such a long time that it's impossible not to see it as a huge challenge.


Much to debate here, and worthwhile. My short answer is that I think people make consciousness into a fetish. The question is about intelligence and processing information and making new things.
enqramot June 26, 2022 at 00:57 #712370
Quoting Jackson
Much to debate here, and worthwhile. My short answer is that I think people make consciousness into a fetish. The question is about intelligence and processing information and making new things.


Maybe consciousness isn't the right word, maybe sentience would be, but the fundamental difference between computers and me is that I am alive and can feel, whereas a computer is completely dead and about as conscious as a brick, no matter how much RAM it has, how fast its CPU is, or how complex the program it's currently running. I believe talking about consciousness in relation to computers can only be dictated by a marketing strategy. Selling a "sentient" computer will make you rich in no time.
Jackson June 26, 2022 at 00:58 #712371
Quoting enqramot
I believe talking about consciousness in relation to computers can only be dictated by a marketing strategy.


AI is here and getting more complex. Again, I don't see the importance of self awareness.
Manuel June 26, 2022 at 01:05 #712374
Reply to Jackson

That's probably true. The issue then becomes: to what extent is it intelligible to create a separation between consciousness and intelligence?

Because if these aren't thought about carefully, we might attribute a lot of intelligence to almost all living creatures. And that's tricky.
enqramot June 26, 2022 at 01:05 #712375
Quoting Jackson
AI is here and getting more complex. Again, I don't see the importance of self awareness.


I don't see either. Not in computers, that is. But the mechanism of it I find interesting in itself, although I believe science is no closer to solving it than it has ever been. Hard to even know where to start.
Jackson June 26, 2022 at 01:06 #712376
Quoting Manuel
The issue then becomes: to what extent is it intelligible to create a separation between consciousness and intelligence?


I separate them. Consciousness is a function of intelligence. But intelligence can function without consciousness.
Jackson June 26, 2022 at 01:08 #712377
Quoting enqramot
science is no closer to solving it than it has ever been


AI is an engineering problem. No need to have a theory of consciousness.
Manuel June 26, 2022 at 01:13 #712379
Reply to Jackson

Oh, that's interesting. It looks tricky to me, because: why use consciousness when intelligence can be way more efficient?

enqramot June 26, 2022 at 01:16 #712382
Quoting Jackson
AI is an engineering problem. No need to have a theory of consciousness.


Sure, if we're talking about AI. But these are completely separate topics, AI and consciousness, and I was only considering whether one can emulate the other.
Jackson June 26, 2022 at 01:17 #712383
Quoting Manuel
Oh, that's interesting. It looks tricky to me, because: why use consciousness when intelligence can be way more efficient?


That is my point, yes.
enqramot June 26, 2022 at 01:20 #712384
Quoting Manuel
I think the problem here already lies in the premise, that consciousness is a kind of AI.


I'm not making such a premise, merely considering to what extent behaviour of a conscious being can be described in terms of computer code.
Manuel June 26, 2022 at 01:27 #712388
Reply to Jackson

Ah, then it is a good point.

Reply to enqramot

Ah well, if it's behavior, then that's a different thing. But behavior can be very misleading, in terms of not confusing the data (which is what behavior is) with the theory (that which causes the behavior).
Jackson June 26, 2022 at 01:33 #712390
This is from Turing:
"The view that machines cannot give rise to surprises is due, I believe, to a fallacy; this is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it." https://www.csee.umbc.edu/courses/471/papers/turing.pdf

Here he is criticizing the idea that a machine cannot be creative.
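Turing's point is easy to reproduce: a fully deterministic rule can surprise its own author, because the consequences of a rule don't spring into the mind together with the rule. A minimal sketch (the logistic map, a textbook example of deterministic chaos - not from Turing's paper):

```python
# A one-line deterministic rule whose consequences do not "spring into
# the mind" with the rule itself: the logistic map x -> r*x*(1-x).
# Textbook example of deterministic chaos; not from Turing's paper.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x, y = 0.300000, 0.300001  # two almost-identical starting points
for _ in range(40):
    x, y = logistic(x), logistic(y)

# The same rule, applied to nearly the same input, has diverged completely:
print(f"x = {x:.6f}, y = {y:.6f}")
```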
Down The Rabbit Hole June 26, 2022 at 14:00 #712544
Reply to enqramot

Quoting Down The Rabbit Hole
I am inclined to think that consciousness is a natural result of complexity. If that's the case, an exact emulation may have to be conscious too.


Quoting enqramot
I've heard this theory, but I must admit it doesn't really make any sense to me, tbh. I just can't see how increasing complexity can lead to anything other than just more complexity.


It is hard to believe, but this theory must be judged in comparison to the other theories of consciousness.

What theories of consciousness are more plausible?
enqramot June 26, 2022 at 14:11 #712549

Quoting Down The Rabbit Hole
It is hard to believe, but this theory must be judged in comparison to the other theories of consciousness.

What theories of consciousness are more plausible?


I honestly don't believe there are any credible theories in existence explaining the phenomenon of consciousness. Science is completely in the dark in this area as far as I'm aware (correct me if I'm wrong).
Down The Rabbit Hole June 26, 2022 at 14:31 #712558
Reply to enqramot

Quoting enqramot
I honestly don't believe there are any credible theories in existence explaining the phenomenon of consciousness. Science is completely in the dark in this area as far as I'm aware (correct me if I'm wrong).


The trouble is, how do you prove the subject has experiences? I think it likely we will never be able to do a test to tell us what consciousness is.
enqramot June 26, 2022 at 14:51 #712566
Quoting Down The Rabbit Hole
The trouble is, how do you prove the subject has experiences? I think it likely we will never be able to do a test to tell us what consciousness is.


Exactly, I can't fathom how such a test could be possible.
enqramot June 26, 2022 at 17:17 #712656
Quoting Gnomon
Your question hinges on your philosophical or technical definition of "Consciousness". [...] So, if a robot says it's conscious, we may just have to take its expression for evidence. :smile:


I wanted to make a comment but realised that basically I agree with everything you said and have nothing meaningful to add at this time, so I'll leave it at that.
Gnomon June 26, 2022 at 17:25 #712657
Quoting enqramot
Maybe consciousness isn't the right word, maybe sentience would be,

Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. For example, the current National Geographic magazine has a cover article on the sense of touch. And it shows a mechanical hand with touch sensors on the fingertips. Without "sentience" (feedback) an animated robot would be helplessly clumsy. But "consciousness" literally means to "know with". Certainly a robot with touch sensors can interpret sensible feedback in order to guide its behavior. But is it aware of itself as the agent (actor) of sentient behavior?

Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation. All man-made machines have some minimal feedback to keep them on track. So, it's obvious that their functions are guided by operational feedback loops. And that is the basic definition of Cybernetics (self-controlled behavior). Which is why some AI researchers are satisfied with Operational Sentience, and don't concern themselves with Essential Consciousness. It's what Jackson calls an "engineering problem".
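That "operational feedback loop" is easy to make concrete. A minimal thermostat-style sketch (all numbers invented for illustration): it senses, compares against a goal, and corrects - operational sentience in the minimal cybernetic sense - yet nothing in it knows that it knows:

```python
# Minimal cybernetic feedback loop (a toy thermostat). It senses and
# corrects - "operational sentience" - but nothing here knows that it
# knows. All numbers are invented for illustration.

def step(temp: float, setpoint: float = 21.0,
         heat: float = 0.5, leak: float = 0.2) -> float:
    """One control cycle: heat if the sensed temperature is low,
    then lose some heat back to the environment."""
    if temp < setpoint:
        temp += heat    # actuator responds to the sensed error
    return temp - leak  # the environment pushes back

temp = 15.0
for _ in range(30):
    temp = step(temp)
print(f"settled near {temp:.1f} degrees")  # hovers around the setpoint
```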

But philosophers are not engineers. So, they are free to ask impractical questions that may never be answered empirically. When an octopus acts as-if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others. So, it's hard to "encapsulate" Consciousness into a simple matter of fact.

Ironically, the one asking such impractical rhetorical questions may be the most self-aware, and the most introspective & self-questioning. The behavior of Intelligent animals is typically pragmatic, and focused on short-term goals : food, sex, etc. They don't usually create art for art's sake. But, when they do, can we deny them some degree of self-consciousness? :smile:

[Image: elephant self-portrait]
enqramot June 26, 2022 at 18:34 #712677
Quoting Gnomon
Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that.


Let's stick to "consciousness" then :)

Quoting Gnomon
Therefore, the philosophical question here is "does a robot (AI) know that it knows"? Is it self-aware? To answer that question requires, not an Operational (scientific) definition, but an Essential (philosophical) explanation.


I'd rather philosophy steered clear of questions already settled. The operational principle of AI is already known and described in technical terms; there should be no need for an alternative explanation. We can, by the same token, ask: "Is a wall aware of its existence?" To which one would be tempted to respond: "What an absurd idea!"

Quoting Gnomon
When an octopus acts as-if it recognizes its image in a mirror, is that just an operational function of sentience, or an essential function of self-awareness? We could debate such rhetorical questions forever. So, I can only say that, like most philosophical enigmas, it's a matter of degree, rather than Yes or No. Some intelligences are more conscious than others.


Yes, various animals (including humans) seem to be conscious to varying degrees. For instance, I feel like a zombie when I haven't had enough sleep, and my level of consciousness seems lower than when I'm in top shape. Some people, otoh, seem to run on autopilot for most of their lives. It would be interesting to scan them with a "consciousness-O-meter", should such a thing exist. Anyway, thanks for your input.


Gnomon June 27, 2022 at 23:17 #713148
Quoting enqramot
I'd rather philosophy steered clear of questions already settled. The operational principle of AI is already known and described in technical terms; there should be no need for an alternative explanation.

Ha! Philosophy has no "settled questions", and philosophers are not content with mechanical "operational principles". So, the OP's goal of encapsulating Consciousness is still an open question.

Nevertheless, pragmatic scientists are still working on a Consciousness Meter to update the crude EEGs and somewhat more sophisticated MRIs. They are even using Artificial Intelligence to search for signs of Consciousness in Natural Intelligences that appear to be anaesthetized (unconscious). However, they are not looking for philosophical essences, but for operational signs & symptoms. So, even then, the OP on The Philosophy Forum will go unanswered. :smile:


Artificially intelligent consciousness meter :
https://www.monash.edu/data-futures-institute/study/phd-scholarship/artificially-intelligent-consciousness-meter

The hunt for hidden signs of consciousness in unreachable patients :
https://www.technologyreview.com/2021/08/25/1031776/the-hunt-for-hidden-signs-of-consciousness-in-unreachable-patients/
Agent Smith June 28, 2022 at 08:07 #713333
Quoting Gnomon
They are even using Artificial Intelligence to search for signs of Consciousness


Like how non-life (Mars rovers) looks for (signs of) life.

"Most interesting!" — Ms. Marple
enqramot June 28, 2022 at 12:04 #713375
Quoting Gnomon
philosophers are not content with mechanical "operational principles"


So, as I see it, philosophers take a "resolved question" and tackle it from a different angle, thus being complementary to science.

Quoting Gnomon
Philosophy has no "settled questions"


That would mean philosophy only takes on questions that it doesn't ever hope to resolve. Is that really true? As I read somewhere "Astronomy, physics, chemistry, biology, and psychology all began as branches of philosophy." Does it mean that after a question is deemed "resolved" or at least "resolvable" philosophy moves on to other subjects and no longer concerns itself with it?

Quoting Gnomon
Nevertheless, pragmatic scientists are still working on a Consciousness Meter to update the crude EEGs and somewhat more sophisticated MRIs. They are even using Artificial Intelligence to search for signs of Consciousness in Natural Intelligences that appear to be anaesthetized (unconscious).


I meant consciousness-O-meter as a joke, little knowing that they're already working on it, that's funny :)
Gnomon June 28, 2022 at 17:50 #713445
Quoting enqramot
Consciousness and Sentience are sometimes used interchangeably. But "sentience" literally refers to sensing the environment. And AI can already do that. — Gnomon
Let's stick to "consciousness" then :)

Yes. Some plants, such as the touch-me-not & the Venus flytrap, are "sentient" in a very limited sense. They sense and react to touch. But we don't usually think of them as Conscious. However, the typical scientific concept of Consciousness places it on a continuum from minimal sentience to Einsteinian intelligence. Nevertheless, some philosophers still imagine that Consciousness is something special & unique, like a Soul. So, the OP seems to be searching for a physical mechanism that produces Self-Awareness. Yet, it's the last step on the continuum from Sentience to Consciousness that has, so far, resisted encapsulation.

One reason for that road-block may be the Reductive methods of Science. Some scientists assume that Consciousness is a function of Complexity. But complexity without Entanglement is just complicated. For example, Neanderthal brains were significantly larger (more neurons) than those of Homo sapiens, yet their intelligence was only slightly higher than that of a chimpanzee. So, it seems to be the single-minded interrelations of intelligent brains that produce the "difference that makes a difference" (i.e. information) in intelligence.

A current theory to explain that difference points to Social Intelligence as the difference maker. Whereas Neanders tended to hunt in small family groups (wolf-packs), Homos began to work together in large tribes of loosely-related people (communities). The SI hypothesis says that, individually, Neanders were about as smart as Homos. But, by cooperating collectively, Homos were able to offload some of the cognitive load to others. And that externalization of thought (language), eventually evolved into Writing, for even wider sharing of thoughts. In computer jargon, the collective human mind is a parallel processor.
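Taking that "parallel processor" metaphor literally for a moment, a toy sketch (the task and all names are arbitrary, chosen only to illustrate the offloading idea): several workers each handle a share of one problem and pool their partial results, reaching the same answer a single worker would, sooner.

```python
# Toy illustration of the "collective mind as parallel processor"
# metaphor: workers each take a share of a problem and pool results.
# The task (summing squares) is arbitrary.

from concurrent.futures import ProcessPoolExecutor

def think_about(chunk: range) -> int:
    # One member's share of the cognitive load.
    return sum(n * n for n in chunk)

if __name__ == "__main__":
    # Split the problem four ways, like four tribe members on one task.
    chunks = [range(i, 1_000_000, 4) for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        total = sum(pool.map(think_about, chunks))
    print(total)  # same answer one mind alone would reach, just sooner
```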

Therefore, it's not just how many neurons a person has that determines intelligence, but the communal sharing of information with other brains, focused on the same task. Likewise, a more Holistic view of Consciousness might reveal that higher degrees of Sentience & Self-Awareness emerge from the evolution of collective Culture. Whereas Sentience is limited to the neurons of a single organism, sophisticated Consciousness (and Wisdom ; sapiens) may result from exporting & importing information between brains & minds via language*1. Sharing information via Culture is literally Con-scious-ness: "knowing together". :nerd:

PS__Sci-Fi likes to extend that symbiosis to include Mind-Reading. So, maybe human Consciousness is a form of "sym-mentosis". No magic required though. Just the ability to talk and read.

*1. For example, without Google & Wiki, my posts on this forum would read like Neander grunts.

Consciousness : from Latin conscius ‘knowing with others or in oneself’

The Social Intelligence hypothesis :
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2042522/