About algorithms and consciousness
Let's start simply, with an example from everyday life: many actions you perform automatically, without having to think about them. In essence, these are a kind of neural algorithm: working methods that follow a standard procedure toward a certain goal. Only when such a method fails, due to accidental circumstances, does our consciousness step in to correct our actions.
These algorithmic methods (which occur in abundance in our daily activities) make it attractive to simulate them using computers, with machines connected to them in the form of robots. Robots act deterministically, based on algorithmic procedures. Smart robots can even adjust and improve their programs based on the results they achieve (which can include failures). Are those smart robots also "aware"? No, because they act automatically, based on standard procedures (including procedures for making improvements). Only when these fail, and no standard procedures are available, must a consciously thinking person who can think "out of the box" be involved to find "non-standard" solutions.
Take "killer robots", for example. They have a precisely defined "target" and a map of the environment in which to find that target. That map is inevitably of limited detail and was made in the (recent) past. However, the current situation may be drastically different: there may, for example, be other people in the immediate vicinity (playing children, say). Making a decision about an attack can then be very tricky, and it is to be hoped that the killer robot transfers control to a watching, conscious human!
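The escalation logic described above, where a robot follows a standard procedure when one applies and hands control to a human when none does, can be sketched in a few lines. This is a minimal illustration with invented names and situations, not any real robot control system:

```python
# Minimal sketch of "algorithmic action with human fallback".
# All names and situations here are hypothetical illustrations.

def choose_action(situation, procedures):
    """Return the action of the first standard procedure that applies
    to the situation, or escalate to a human if none does."""
    for condition, action in procedures:
        if condition(situation):
            return action
    return "escalate_to_human"  # no standard procedure: conscious oversight needed

# Standard procedures as (applicability test, action) pairs.
procedures = [
    (lambda s: s["target_visible"] and not s["bystanders"], "engage"),
    (lambda s: not s["target_visible"], "continue_search"),
]

# A situation the procedures anticipated: no target in view.
print(choose_action({"target_visible": False, "bystanders": False}, procedures))
# A situation they did not: target visible, but children nearby.
print(choose_action({"target_visible": True, "bystanders": True}, procedures))
```

The point of the sketch is the last line of `choose_action`: everything covered by a procedure is handled "automatically", and only the uncovered case falls through to a human.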
Now compare this with a predatory insect like a dragonfly. This creature has a large collection of algorithms for hunting different insects. In addition, it can choose from a variety of tactics, depending in part on characteristics of the environment in which it hunts. Yet its nervous system is limited in size. The question now is: is it flying around like a robotic zombie, or is it conscious?
You can also ask the same question about a cat that sees a mouse. Does the kind of algorithm of a killer robot apply here? Of course, a cat's brain is infinitely more complicated than a dragonfly's. If the environment is uncomplicated and there is no competition from other cats, then it seems obvious that the cat's algorithm to catch the mouse will be activated and the cat will outwit that mouse "like a zombie". However, if the environment is more complicated and there are several dangers lurking, then a weighing of possibilities must take place. Remember that all those algorithmic options to choose from are electrochemical processes in the neuronal network, which give rise to emotions (driven by hormonal processes). In general, the algorithm with the strongest emotion will probably be chosen. The question now is whether this choice process is gone through consciously.
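The selection rule suggested here, where each candidate behaviour carries an emotional weight and the strongest wins, can be sketched very simply. The behaviours and weights below are invented for illustration:

```python
# Sketch of "the algorithm with the strongest emotion is chosen".
# The candidate behaviours and their emotional weights are invented examples.

def select_behaviour(options):
    """Pick the option whose emotional weight is highest."""
    return max(options, key=options.get)

options = {
    "pounce_on_mouse": 0.7,  # hunger / hunting drive
    "flee_from_dog": 0.9,    # fear
    "groom": 0.1,            # comfort
}

print(select_behaviour(options))  # fear dominates in this invented situation
```

Of course, the open question in the post is untouched by such a sketch: a one-line `max` clearly involves no conscious deliberation, which is exactly why the cat's version of this choice is interesting.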
The transition from unconscious, algorithmic action to conscious thinking seems to be vague, and might even be variable. After all, you can consciously and rationally retrace a standard working method that fails and see where the algorithmic method goes wrong.
Comments (45)
Even the definition of consciousness is vague, and many have different views on just what is conscious and what isn't. Consciousness seems to be something that increases gradually; there isn't one thing, one detail, that switches consciousness on or off like a switch.
Most people can understand what an algorithm is, but ask them to give examples of something non-algorithmic and they'll have trouble. They will have even more trouble if you ask what the importance of the non-algorithmic is.
Quoting Ypan1944
Does it?
A transitional unclear form may be "dreaming".
I think this is an oversimplification. Your "algorithmic" action we would probably call either instinct or habit, depending on whether it is inborn or learned. Those types of behaviors are often both. Example instinct - border collies have an instinct to herd. They'll herd children if you don't give them sheep. Example habit - it is common to drive our cars without paying conscious attention.
There is another level of action before what we normally call consciousness. People can act without reflection but with full intellectual and emotional involvement. I'm doing that right now. I don't think about what I am writing, it comes out from somewhere inside me. Writers sometimes say the words write themselves. That's certainly true of me. I don't know what I'm going to write till I can read it on the page. And then comes what is more commonly called consciousness - rational reflection, logic, reason. That's the kind of thinking I do when I go back and reread and edit what I've written.
It seems to me that consciousness, conceptually, is exactly something on/off. Something either has experiences or it doesn't, I don't see a middle ground. A middle ground just doesn't fit the concept.
That's more a matter of attention than consciousness. In performing routine actions you're deploying what adult learning models describe as 'unconscious competence' i.e. doing something that doesn't require conscious effort because you know it so well. But you're still conscious, you're just not paying particular attention to what you're doing until something non-routine happens.
Overall, I think the issue with your OP is that you're assuming that conscious actions can be reduced to or described in terms of algorithms. Is this a valid assumption? Algorithms are used in computer science for modelling all kinds of complex actions and situations, but does that mean that conscious activities actually are algorithms? A dissenting opinion is given by Roger Penrose in The Emperor's New Mind, who argues that human consciousness is a capability that goes beyond what can be captured by algorithms. He believes that human understanding and insight involve non-computable processes that are not reducible to simple algorithms and cannot be derived by mechanical procedures. He bases his argument in part on Gödel's incompleteness theorems, saying that these demonstrate that within any formal system of logic there will always be true statements that cannot be proven within the system itself, meaning that there are limits to what can be achieved through purely algorithmic or mechanical processes.
Of course, many people disagree with Penrose, but at least the debate shows that the question of the algorithmic nature of consciousness is, well, debatable.
That's something I am very interested in as well, but at present it is only something we can speculate about. You could look up "Integrated Information Theory" as one line of speculation that I think might be on the right track to some degree.
Off the top of my head, I'd suggest that in mammals it is a matter of the neocortex having the ability to monitor and to some degree control older brain regions. In the process of monitoring the goings on in older brain regions I think the neocortex is able to integrate the information extracted from older brain regions to construct a (somewhat real time) model of the world. The massively parallel information processing available in the neocortex, observing this model of the world, simply is what consciousness is.
In evolutionary terms, I think the evolution of the neocortex provided the ability to think outside the box that you mentioned, by enabling more neurologically advanced species some ability to imagine alternative ways of modelling the world. Eventually, the evolution of linguistic faculties in the neocortex enabled our ancestors to communicate about their mental modelling of the world. Thus The Philosophy Forum.
My understanding of Penrose's view is that he thinks some element of quantum computation is needed for consciousness, but that seems a different matter than consciousness being non-algorithmic. (Though I don't see "algorithmic" as particularly useful terminology in thinking about the information processing that occurs in our brains.) In any case I don't see any reason to think some form of quantum computation is needed to explain consciousness, and I think Penrose is generally seen as somewhat of a crackpot on this topic.
I think what's really interesting about this question is that there's no obvious empirical method to determine the answer, especially now that ChatGPT has blown the Turing Test out of the water. What that tells me is that the nature of consciousness is not necessarily determinable by empirical methods. Personally, I don't believe in sentient AI, but the fact that there's no easy way to prove the case says something, I think.
I've always viewed consciousness as the monitor that controls and regulates other functions. Under this definition, we already have primitive AI consciousnesses. Many animals have consciousness, and it's been observed that apparently plants do in some respects as well. Consciousness is really not all that rare or special among living creatures. What people are really asking is, "Are we as humans special relative to other beings? Am I something more than a combination of matter and energy? Will my consciousness end when I die?"
Remove questions like this and the whole silly debate dies.
Given that 'soul' is a translation from the Greek 'psyche', and that 'psyche' can also be translated as 'mind', do you think that people have minds?
So you're saying that if we take a collection of electronic switches and turn them on and off in some particular sequence, consciousness will emerge? That begs all sorts of interesting questions.
I'm not using the Greek definition of soul. We've also come a long way since we had Greek medicine and biology. Your mind is just a personal description of your consciousness. Or your mind can be a description other people give you that combines your personality and manner of thinking. All of which come from the physical interactions of your brain. Damage the brain, you damage the mind. Heal the brain, you heal the mind. Kill the brain, you end the mind. If you don't have any need or desire for a soul, it's a simple fact backed by science and reality.
The idea that consciousness is somehow beyond the matter and energy of the brain is a matter of faith. This doesn't require a religion. The point of faith is to believe something that is contrary to fact. It's why it's a pointless argument. If people could say, "The mind is not matter and energy, but it is this, and we can prove it," it would be different. It's also different if we speculate: "Wouldn't it be neat if there was something undiscovered that showed us consciousness wasn't simply formed from the interactions of the brain?"
But to say with any seriousness at all that consciousness simply does not come from the brain, or that this does not make sense given the facts we know today, is absurd. It defies decades of neuroscience, medicine, and psychotherapy. If a philosopher is not using these firm experiences of reality as a basis for their arguments, it is a sophomoric philosophy based on fantasy. Leibniz's monads were an interesting idea at one time, but are a hobbyist historical study today. Philosophy must evolve with the times, or it will be viewed as a strange place where people invent overly verbose vocabulary and ill-defined arguments to rationalize their personal desires.
See my next reply on where we can go instead, into more modern and exciting ideas of "the mind". Forgive me if I seem short. I had to deal with dinosaur professors of philosophy who thought studying clearly dead philosophies led to some valuable contribution to the world of thought. I vowed I would end that type of thinking wherever I go.
Yes, just like if we take a bunch of cells and have them constantly shift into different states they'll have consciousness as well. Your brain proves it quite easily. When matter and energy are organized in a particular way, they will exhibit a pattern we call consciousness. You are a living example of this. Your degree of consciousness is one of the most powerful of the living beings on this planet.
We see perceptibly less consciousness the more primitive the brain, from dogs to fish down to an ant. Once you remember you are an animal, matter and energy like everything else, you realize you're just an extra step of complexity and evolution. You are not a magical being outside of the laws of physics. You are a magical being within the laws of physics.
This does bring up actual viable questions to explore. At what state of matter can consciousness exist at its most basic level? Since we cannot experience the consciousness of another being, can we create a definition of consciousness that applies consistently across matter through observation of actions? Does consciousness need us to know the internal state, considering it's impossible for us to have that? These are interesting questions for philosophers to think on. Not whether consciousness comes from matter. Because it clearly does.
But you're assuming here that brains produce consciousness. I think the idea of machine consciousness should make us question the currently prevalent belief that brains cause consciousness. Believing that brains cause consciousness commits one to believing that, if you simply change the substrate to silicon, microchips can be conscious, which is to say that collections of electronic switches can be conscious.
I think this is magical thinking. I'll even go so far as to call it an absurdity. So, if brain consciousness commits one to a belief in machine consciousness, and machine consciousness is absurd, then by reductio ad absurdum we should reject the idea that consciousness comes from brains. Let me ask you: if you didn't know anything about brains, would you think that turning switches on and off in a certain way could lead to consciousness?
No, this is not an assumption. This is a fact. Prove to me that brains do not produce consciousness and we'll talk.
Quoting RogueAI
No, it is not a belief that brains cause consciousness. It is the fact that brains cause consciousness which leads us to consider that machines could have consciousness as well.
Quoting RogueAI
No. Because it is the knowledge that brains cause consciousness which lets us consider this idea.
I am on the side of decades of facts, neuroscience, and neuropharmacology. You have a lot to present if you're going to deny the fact that consciousness comes from the brain. Feel free to try, I will listen and evaluate all of your facts and arguments.
My argument is very simple: belief that brains cause consciousness leads to belief that machines can be conscious, and machine consciousness is an absurdity; therefore the belief that brains cause consciousness is wrong. What you think is neural causation is neural correlation. It's the old "correlation is not causation". Now, you can attack my argument by claiming either that belief in brain consciousness doesn't commit one to belief in machine consciousness, or that machine consciousness is not an absurdity. Which option do you like?
But clearly there ARE patterns of flowing objects that give rise to Qualia. If you don't think so, just poke around your brain with a screwdriver and see what happens to your conscious experience. The question is WHY and HOW these patterns of moving particles do such things. And how to describe the relation of conscious experience to the matter that is correlated with them.
That's not an argument, that's a string of statements without any connective logic and an unproven conclusion.
Let's work backwards.
1. Brain consciousness is an absurdity.
Why?
2. Brain consciousness leads to machine consciousness
No, brain consciousness leads us to realize that matter and energy, if organized correctly, can be conscious. This appears across living species with different types of brains. We realize that brains are clumps of neurons which have a system of communication, reaction, and planning. Therefore it seems possible that if we duplicate matter in such a way that it can communicate, react, and plan, it would be conscious.
3. Quoting RogueAI
No, we have ample evidence of causation. I'll start with a relatable example before getting deeper. Ever been drunk before? Been under anesthesia? We know that if we introduce these chemicals into the blood, they affect the brain. And when the brain is affected, your consciousness becomes inhibited or suppressed entirely. This is not happenstance correlation. This is repeatable, testable, and falsifiable causation, which has been upheld in both everyday life and science for decades. With modern-day neuroscience, we can actually get live scans of the brain to show the physical impacts and when consciousness is lost.
Address these points, and we'll have a discussion.
There was an Australian TV feature on AI recently, which featured a very articulate guy who had a humanoid-looking doll - actually a sex doll, but not the point, as he didn't want a sexual relationship - linked to an AI chatbot. Fascinating insight. I can imagine the day when I converse frequently with a chatbot about subjects that interest me, which in fact I already do - been using ChatGPT since the day it launched.
How do you define consciousness? Is a newborn infant conscious? Is a chimpanzee? A spider? An amoeba?
If you assume that it's exactly on/off, then what is the switch that has to be on?
But one aspect remains underexposed so far. A system like our brain has many components (neurons). A characteristic of a set of related components is that it exhibits emergent behavior, which is absent from any of the components. This is how algorithms are created: the whole has different properties from each of its parts.
If you consider a lot of related algorithms (biological or computerized), they can also show new emergent behavior, such as "being able to look back on one's own behaviour", "adjusting behaviour", "considering which behavior has the best chance of success": in short, "being aware".
As it grows in biological systems, consciousness will also grow, and may even become self-consciousness.
When will a growing child become conscious?
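The emergence point above can be made concrete with a classic toy example (a sketch of my own, not anything from the thread): a single threshold neuron cannot compute XOR, but a small network of three can, so the network has a capability that none of its components possesses.

```python
# A single threshold unit cannot compute XOR, but a network of three can:
# a capability of the whole that no individual component possesses.

def unit(inputs, weights, bias):
    """A single threshold neuron: fires (1) if the weighted sum exceeds the bias."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > bias else 0

def xor_net(x, y):
    h1 = unit((x, y), (1, 1), 0.5)       # fires on x OR y
    h2 = unit((x, y), (1, 1), 1.5)       # fires on x AND y
    return unit((h1, h2), (1, -1), 0.5)  # fires on OR but not AND: XOR

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor_net(x, y))
```

Each `unit` is a trivially simple component; the XOR behavior exists only at the level of the connected whole, which is the sense of "emergent" used in the comment.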
Me too. I started to anthropomorphize it pretty quickly.
Yes. Kastrup gives good arguments along these lines.
Answer my original reply and I'll address this question. I'm not interested in a one-sided discussion where you get to ignore my statements back to you.
Then so shall I. Lets have a better conversation another time.
The capacity to experience.
Yes
Yes
Yes
Yes
The existence/non-existence switch. Or the something/nothing switch. I'm a panpsychist.
I decided to take you up on that.
Quoting Philosophim
Implicit in what you said is an assumption that there exist physical objects like brains. Why should I agree with your materialist/physicalist assumption?
In neurobiology, this is just the transition from habitual to attentional level brain processing. The brain is set up to predict its world so well that everything that happens can be dealt with in a routine "fire and forget" fashion.
That pre-filtering of awareness is then how the surprising, the significant, the unrecognised, can get selected for the more intensive post-processing of attentional thought: the higher-level figuring out that takes about half a second and recruits working memory, the prefrontal cortex, and a general whole-brain "gestalt" fitting of the pieces of a puzzle into place.
So while it is fine to use computer jargon as a helpful metaphor, the brain is not actually algorithmic at any level. It is not a Turing Machine or a Finite State Automaton.
What the brain really does is forward model its world in the way now described as Bayesian Mechanics. If you want neurobiology's rigorous alternative to familiar Turing Machine computation, this is the "algorithm" that the brain expresses both at its "unconscious" habit level, and "conscious" attentional level....
https://royalsocietypublishing.org/doi/10.1098/rsfs.2022.0029#d1e5377
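The habitual/attentional split described in this comment can be caricatured as surprise-gated processing: routine inputs are absorbed by a fast automatic update, while surprising inputs trigger a costlier revision. This is a toy sketch only; the threshold and update rules are invented and stand in very loosely for the predictive-processing picture, not for any model in the linked paper:

```python
# Toy sketch of surprise-gated processing: routine inputs get a cheap
# habitual correction; surprising inputs trigger a costlier "attentional"
# revision. Threshold and update rules are invented for illustration.

SURPRISE_THRESHOLD = 2.0  # prediction errors beyond this count as "surprising"

def process(prediction, observation):
    """Return (updated belief, processing mode) for one observation."""
    error = abs(observation - prediction)
    if error < SURPRISE_THRESHOLD:
        # Habitual: small automatic correction, no attention recruited.
        return prediction + 0.1 * (observation - prediction), "habitual"
    # Attentional: wholesale revision of the estimate, standing in for the
    # slower, working-memory-involving processing described above.
    return observation, "attentional"

print(process(10.0, 10.5))  # expected input: handled habitually
print(process(10.0, 15.0))  # surprising input: recruits attention
```

The gate itself is what matters for the comment's point: most of the stream never reaches the expensive pathway, which is the "fire and forget" pre-filtering of awareness.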
RogueAI, I'll answer your questions if you're serious about replying to mine. First, you already agreed when we started discussing brains. Quoting Philosophim
You already agree there are neurons, and you claimed they correlated with mind, and didn't cause it. At this point retreating and saying, "Well maybe brains don't exist" is borderline trolling. I'm giving you the benefit of the doubt that you just made a mistake.
Also, please answer the rest of the points I made. It's going to take you more than a few sentences to reply adequately. Please take it seriously. If that is not what you are interested in, then again, there's no harm in bowing out of a conversation.
I'm an idealist. I've identified as such here for quite a while. I was meeting you halfway for the sake of argument earlier. Don't accuse me of trolling, please.
We're at first principles now. I want to know why, at the starting gate, I should adopt your materialistic view of reality because in actuality, I don't.
You aren't responding to my earlier points and now you want to change to a debate over materialism? I'm not playing this game. If you're not answering my points and are just asking more questions, then you're not discussing. The subject was the brain and consciousness. I've already put in effort to make some points and ask you to justify yourself. If you want to engage with me, first justify yourself. Explain to me why you don't believe brains are material reality instead of asking me. The onus is on you to respond and make an actual point before continuing on with your questioning. If you cannot do so, then let's end the conversation.
Then please make such an argument. Refer back to my original points to you where I formed a logical argument, then asked you to clarify and explain your own.
"No, brain consciousness leads us to realize that matter and energy if organized correctly can be conscious."
So if you have some matter and energy, and you organize them in the right way, you get consciousness (or the matter-energy system is conscious or becomes conscious).
A) how does that happen?
B) why does it happen with certain types of matter and energy and not others? A working brain is conscious, but if you put it in a blender, blend it, and then add some current to the mix, you won't have consciousness. What is it about working brains that makes them conscious? Why are only parts of the brain conscious? Why isn't my heart conscious?
C) Would something that is functionally identical to a working brain be conscious? Does substrate matter? Is there something unique about neurons that only a collection of them could be conscious? How would you test for consciousness in a machine or alien brain?
"This appears across living species with different types of brains."
Which brains are conscious? Are bees conscious? Ants? Toads? Approximately how many neurons are required before consciousness emerges? How can we test whether insects are conscious or not?