Moravec's Paradox
Consider Hans Moravec's Paradox:
"it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
It's great that computers can play chess, search the internet, and mimic human intelligence, but human intelligence is arguably the easiest behavior to mimic. As one of our youngest behaviors, it is less evolved and less complex than other traits, like perception and mobility. Even now, nearly 40 years after Moravec's observation, robots tend to look like bumbling fools whenever they attempt those other behaviors, even though they can still school the best of us at chess and math.
I'm curious what Moravec's Paradox might imply for the philosophy of mind, and I'd welcome the wisdom of others. What questions might it raise for the field?
Just as an example, I've never been as impressed by intelligence as I am by other forms of natural ability, and I suspect this paradox helps illustrate why. I have an instinctual aversion to analytic philosophy and the general notion that a man who stares at words and symbols all day can offer more of value to my education or the pursuit of wisdom than, say, an athlete or shop teacher, or anyone else who prefers to deal with things outside of himself. I prefer common sense to the rational, the body to the mind, the objective to the subjective, and tend to defend the former from the encroachment of the latter. Does anyone else feel this way? Have we glorified intelligence at the expense of other abilities?
At any rate, I thought Moravec's Paradox and its implications for the philosophy of mind to be a good topic of discussion.
Comments (14)
Technology is capable of reproducing mobility, sensation, and as you pointed out, information processing (intelligence).
But the root of our "aware-ing", independent of Mind (though "hijacked" or displaced thereby), is the way experience triggers us to feel, nano-"second" by nano-"second", with every corresponding subtle variation.
And sure, we can duplicate a reward/punishment system with subtle variations, possibly as sophisticated as our sense of (inner) feeling (some of which, I submit, is even imperceptible to mind). But whereas with the other faculties it seems we can even surpass the Organic ones, when it comes to what I would call "real human consciousness" (aware-ing-feeling, as opposed to Mind/Self consciousness), I have doubts we can ever succeed.
I think Mind itself fails to represent those feelings, but projects representations called emotions. Emotions are already a projection from Reality. It might be that we cannot duplicate a projected/represented Reality, now twice removed.
As a simple illustration (not purporting in any way to be an analogy, let alone a sound one), it's like other forms of Fiction. When we project a real-life character in books or movies, we can duplicate it in all respects but its feelings. Think of the actor who played Gandhi. Even thoughts (at least knowable ones) could be transmitted if there were a way to record and transmit them. But the Organic being is necessary for the feeling. Even the "how it feels" has "left" the Organism and entered Mind. That can be duplicated. But not aware-ing feeling.
Nice thinking.
I think you're right about human feeling; much of it, I believe, is derived from embodied experience. It's like we've started AI in the wrong direction, conceiving of it first as disembodied brains and building it that way, rather than as embodied beings. Embodiment is probably so fundamental to experience that to forget it seems foolish.
Yes, because we also approach mind/body in the wrong direction, as if real being somehow inhabits the mind.
Chess and math are indeed far less complex than, say, motion and perception and language. Those things would be totally overwhelming to us if we had to consciously think them through. The brain is furnished with special-purpose machinery that handles them, and we have no conscious access to the workings of those parts of the brain, only to their results.
When a computer performs a task done by our slow brains, it can excel. Taking on a task done by our fast brains is far more formidable, and the breakthroughs for those things happened only recently.
Quoting NOS4A2
https://www.youtube.com/shorts/zS6vNNW5bEo
https://www.youtube.com/watch?v=UAG_FBZJVJ8&pp=ygUHI2JvdGRvZw%3D%3D
https://chatgpt.com/
Quoting NOS4A2
And yet, 9.3k and counting.
This caught my attention, besides the OP's really good point.
This, to me, is the 'insight', which only human consciousness and intelligence can possess. The AI is denied this experience.
The 'growth' and 'maturing' that humans experience cannot be duplicated in the machine because of the inherent nature of the neural networks in the brain.
Edit: consider the learning of speech -- the sounds produced through the vibration of the vocal cords must always begin with a baby making unintelligible sounds.
Your good points aside, in case my thought needs clarifying: I'm suggesting that only a living organism has (among other things which might apply) feelings; and by that I mean what the brain, its neurons, and, e.g., the limbic system produce. And it is there that experience is real, as it is for other creatures. In Mind, which presumably AI is at least currently focusing on replicating, there is only the script, which uniquely for humans gives meaning (usually in narrative form) to the feelings. But it is empty code without the feelings. It's one thing to know what love is; it's another thing to feel it.
I am a mathematician and programmer. I've been interested in AI since the 1980s. I don't particularly remember Moravec's paradox but a lot of people were saying similar things at that time. Here are three things I do remember.
1. David Marr was a biologist turned computer scientist. He is sometimes known as the father of computational neuroscience. You can think of computational neuroscience as being like AI but restricted to use only algorithms which the brain might plausibly use, and to only use data of the sort that humans have access to during their lives. I think there is so much wisdom in this quote.
2. Douglas Hofstadter's essay 'Waking up from the Boolean Dream' (1982). It's 22 pages long, so these are tiny snippets from it. In 1980 AI researcher Herbert Simon said "Everything of interest in cognition happens above the 100 millisecond level - the time it takes you to recognise your mother." Hofstadter takes the opposite viewpoint "Everything of interest in cognition happens below the 100 millisecond level - the time it takes you to recognise your mother." One subtitle in the essay is "Not Cognition, But Subcognition Is Computational".
3. John Holland's classifier systems and in particular the paper Escaping Brittleness (1986). Holland's classifier systems are sometimes described as the first fully-fledged reinforcement learning system in AI. The brittleness being escaped here is the brittleness of expert systems.
In my opinion, reinforcement learning is the most important part of AI for philosophers to understand. It is especially relevant to understanding the way our brains work if it is restricted in the way that I described above for computational neuroscience.
Sadly there doesn't seem to be anyone except me on TPF who understands reinforcement learning or shows much interest in learning about it. There was once. I hoped to have a discussion with @Malcolm Lett. But as soon as I made a comment (https://thephilosophyforum.com/discussion/comment/900869) on his OP he disappeared from TPF and has never posted since. I live in hope.
@ENOAH, I agree that feelings are central. Replying to Malcolm Lett's "Our emotional affect additionally adds information, painting our particular current emotional hue over the latent state inference that is made from the raw sensory data", I said
Quoting GrahamJ
Reward functions and value functions are technical terms from reinforcement learning.
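For anyone curious, here is a minimal toy sketch of those two terms (my own illustration in Python, not anything from Holland or the posts above): the reward function is fixed and externally given; the value function is the thing the learner builds from experience.

[code]
# A toy corridor of 5 states; the agent drifts right toward a goal state.
# The reward function scores outcomes; the value function is learned from them.
import random

N = 5          # states 0..4; state 4 is the goal
GAMMA = 0.9    # discount factor: how much future reward counts
ALPHA = 0.1    # learning rate

def reward(state):
    """Reward function: +1 for reaching the goal, 0 otherwise."""
    return 1.0 if state == N - 1 else 0.0

V = [0.0] * N  # value function: estimated discounted future reward per state

for episode in range(2000):
    s = 0
    while s < N - 1:
        s_next = min(s + random.choice([0, 1]), N - 1)  # noisy step right
        # Temporal-difference update: nudge V[s] toward reward + discounted V[s_next]
        V[s] += ALPHA * (reward(s_next) + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])
# Values rise as the goal nears, roughly 0.5 .. 0.9
# (the terminal goal state stays at 0 because it is never a starting point).
[/code]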
Yes, but my own thoughts may not align fully with either yours or Lett's (based on my extremely limited exposure here). While not wishing to put any of them into identifiable boxes, my thinking may be a strange hybrid.
I will explain super-briefly and in the context of this discussion about AI and my original reply to the OP.
I think emotions are a painting over of direct sensation; the paint being meaning.
I think feelings are a direct sensation; they regulate the body's mood, but in a much broader way than is conventionally thought. To keep it brief, even that which triggers belief is a sensation.
The 'code' which Mind writes and projects into the world to give meaning to these direct feelings is emotion.
The emotion is available to AI because it is just code/meaning.
It's the feelings which are unique to living beings like us and therefore not accessible to AI. And I would speculate never will be.
To give an overly simplistic illustration:
I hold my newborn child fresh out of the womb and instantly feel [a bond]. That is an organic and real sensation the AI cannot have.
Within 'a second', Mind constructs from History a meaning to attach to the feeling (because I am human and blessed/burdened with Mind): 'love', which displaces that initial feeling. Now I have the subjective emotion, "I love my baby." That emotion is a construction and can be programmed into AI.
But just as for us, the emotion is not Consciousness. It is not even real. It is programmed code, triggered by the same feedback loop that makes me nervous when I hear a siren and call upon History to attach meaning.
I'm saying the AI cannot have consciousness not because it cannot have emotions which only we humans construct; but because it cannot have feelings, the real source of our drives, moods, etc., and that which we share with many other species in the real world.
Anyway, this may have been too brief and simple, but for what it's worth...
Your information, by the way, was fascinating. I sense that I might unwittingly align with Hofstadter. I'm not sure about the terminology, 'cognition' etc. But for me, real 'experience' for humans is like that nanosecond before the sensation gets flooded with constructions from History and displaced by perception or emotion or desire, etc.
We are using language very differently, particularly the word emotion. It's hard to tell how much we disagree about feelings, though I certainly disagree about the possibility of AI having feelings (exactly how we disagree is unclear). When talking with Malcolm Lett I was discussing the hard problem. My version of the hard problem is: how can anything ever have any feelings at all? I will start by defining how I want to use the word feelings in this thread.
I try to follow psychologists when using words like feeling and emotion because I figure they're the experts who study these things. Mind you, psychologists don't agree about these things so I pick and choose the psychologists I like ;-)
I use 'feelings' to mean bodily pains and pleasures, and the subjective experience of emotions and moods. It is a very flexible word, and I want to restrict its meaning. People often use the words emotion and feeling as synonyms. But psychologists (so far as I can see) regard feelings as only one part of emotion. For example, Scherer's Component Process Model treats an emotion episode as having five components: cognitive appraisal, bodily symptoms, action tendencies, motor expression, and subjective feeling.
You'll notice this is quite backwards from the way you are using the word emotion. You seem to be referring to the way we talk about emotions after all these five components including the feeling have happened. I am not very interested in the way we talk about emotions (and I am completely uninterested in the way ChatGPT talks about emotions).
I am excluding the meanings of feelings that relate to intuition (I feel 87 is my lucky number) and the sense of touch (feeling my way in the dark).
I am also excluding uses of the word such as feelings of identification with the particular object that happens to be your body (Anil Seth) and your "feel [a bond]" where I am not clear what is meant, but it is something more general than the narrow way I want to use the word. Probably these are complex experiences with multiple components, some of which are feelings of the sort I want to talk about.
I'll go through the model again with your example
Note that only the fifth component is necessarily conscious. The others may or may not be. I would quibble with Scherer's 'once it has occurred'. The cognitive appraisal must come first, or at least start first, but I'd expect the other four to occur in parallel.
Your conscious mind lags about 1/3 of a second behind reality. That's over three hundred million nanoseconds, enough time for your brain to process something like a million million bits. In top-level tennis, a player must return a serve before they are consciously aware that the ball has left the server's racquet. The conscious mind is so slow that everything seems instantaneous to it. I think there is a lot of calculation involved to produce a feeling.
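A quick back-of-the-envelope check of those magnitudes (the bits-per-second throughput below is an assumption for illustration, not a measured figure):

[code]
# Back-of-the-envelope: the ~1/3 s conscious lag, in nanoseconds and bits.
# The 3e12 bits/s rate is an illustrative assumption, not a measurement.
lag = 1 / 3                    # seconds
ns = lag * 1e9                 # ~3.3e8 ns: "over three hundred million"
bits = 3e12 * lag              # ~1e12: "something like a million million bits"
print(f"{ns:.2e} ns, {bits:.2e} bits")
[/code]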
Enough for now. Later, I hope to shake your confidence a bit about AI never being able to have feelings.
Or perhaps we have too narrowly defined the intellect? Hence we rail against so many smart people acting so stupidly.
We might consider here Plato's claim that no one [I]knows[/I] what is truly choiceworthy and chooses otherwise, and that such knowledge requires "turning the whole person" (body, appetites, passion, and reason) towards the Good.
Can generative AI pen a great epic? It seems to me more like a search algorithm for finding interesting bits of random text in Borges' Library of Babel (the unimaginably large library of all possible 500-page books, with every possible arrangement of symbols). An interesting thing about the Library is that it contains many translation guides that will tell you how to read any of the random gibberish you find as a code for some coherent meaning. So each bit of gibberish has some book that deciphers it and makes it intelligible, or even edifying! What, then, is the true meaning of the gibberish (which massively outweighs anything in a real language)?
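To gesture at the scale (a toy calculation; I'm borrowing Borges' 25-symbol alphabet and his 40-lines-of-80-characters page, applied to the 500-page books above):

[code]
# How many distinct 500-page books exist in such a Library?
# Assumptions (mine, for illustration): 25 symbols, 40 lines x 80 chars per page.
import math

chars_per_book = 500 * 40 * 80                   # 1,600,000 characters per book
digits = chars_per_book * math.log10(25)         # log10 of the count of books
print(f"about 10^{digits:,.0f} distinct books")  # about 10^2,236,704
[/code]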
Perhaps because perception and mobility are direct interactions with the environment? Unlike indirect interactions via computational or mental representations. One hardly needs a brain to be able to perceive things or move intentionally.
I'm not sure computational or mental representations are indirect interactions with anything other than the computational and mental organs themselves. If there are "representations", the interaction with them is direct, at least on any theory that supposes they occur in the brain. If there are no "representations", there is nothing to interact with.