Hinton (father of AI) explains why AI is sentient
Hinton's argument is basically that AIs are sentient because they think like we do. People may object to this by saying that animals have subjective experience and AIs don't, but this is wrong. People don't have subjective experiences.
When we say we've experienced X, we're saying that the world would have to be in state X in order for our perceptual systems to be functioning properly. This is what language use about experience means.
For more, see the video, in which Hinton briefly explains large language models, how AIs learn to speak, and why AIs will probably take over the world.
Comments (160)
That's really not what people generally mean.
What do people mean?
Quoting frank
The nature of living systems is to change themselves in ways that retain a normative continuity in the face of changing circumstances. Cognition is an elaboration of such organismic dynamics. A.I. changes itself according to principles that we program into it, in relation to norms that belong to us. Thus, A.I. is an appendage of our own self-organizing ecology. It will only think when it becomes a self-organizing system which can produce and change its own norms. No machine can do that, since the very nature of being a machine is to have its norms constructed by a human.
Rejecting the Cartesian theatre is harder to do than he indicates. For instance, he says that his perceptual system tells him things -- so we have two minds talking within the mind to explain the mind.
Most people who get into phil-o-mind reject Descartes. It's sort of the first move -- to realize that Descartes exploits a common prejudice in building his philosophy, that there is a thinking-thing. And here we have the professor still relying upon a thinking-thing: the brain doing its computations.
But what if the mind is not the brain at all? Well, then LLMs are dangerous, and everything the professor said is irrelevant. As it so happens, that's what I tend to believe -- that the mind is socially enacted and passed on, rather than computed within a brain. So there's no Cartesian theatre, but there's also no comparison to computers.
I think Hinton believes that as we speak, we're doing the same thing his AI design is doing. In the spaces between words, we're quickly doing a trial and error process that ends with choosing a successful component of information encoding.
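For illustration, here is a minimal, purely hypothetical sketch (Python, with made-up scores; not Hinton's actual design) of that kind of word-by-word selection: each candidate continuation gets a score, the scores are turned into probabilities, and one word is drawn.
[code]
# Toy sketch of next-word selection (illustrative only; the scores are invented).
import math
import random

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign to continuations of "The cat sat on the"
candidates = {"mat": 4.2, "floor": 3.1, "roof": 1.5, "idea": -2.0}
words = list(candidates)
probs = softmax(list(candidates.values()))

# The "trial and error" happens over all candidates at once:
# one word is drawn in proportion to its probability.
choice = random.choices(words, weights=probs, k=1)[0]
print({w: round(p, 3) for w, p in zip(words, probs)}, "->", choice)
[/code]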
The idea is that intention is a misconception.
That's handled by your neuroendocrine system in a way that has no more consciousness than an AI's input. If you actually had to consciously generate homeostasis, you'd die in about 5 minutes.
Quoting Joshs
Is there some reason to believe this is so? A reason that isn't about Heidegger?
I'd say that Heidegger's philosophy is one which attempts to overcome the Cartesian subject, and so anyone who would reject Descartes ought [s]be familiar[/s]* with Heidegger.
*EDIT: Well, really all I mean is can't be dismissive. I'm aware that lots of people here are familiar, but it didn't seem that Hinton was, or at least didn't really address that philosophical perspective as much as assume mind-brain identity (EDIT2: Well, for human beings at least. But he is at least equating the mind to computation, which is as false as the idea he criticizes)
Consciousness is not some special place walled off from the rest of the functional activity of an organism. It's merely a higher level of integration. The point is that the basis of the synthetic, unifying activity of what we call consciousness is already present in the simplest unicellular organisms, in the functionally unified way in which they behave towards their environment on the basis of normative goal-directedness. What A.I. lacks is the ability to set its own norms. An A.I. engineer creates a clever A.I. system that causes people to talk excitedly about it thinking like we do. But the product the engineer releases to the public, no matter how dynamic, flexible and self-transformative it appears to be, will never actually do anything outside of the limits of the conceptual structures that formed the basis of its design.
Now let's say that a year later engineers produce a new A.I. system based on a new and improved architecture. The same will be true of this new system as of the old. It will never be or do anything that exceeds the conceptual limitations of its design. It is no more sentient or thinking than a piece of artwork. Both the artwork and the A.I. are expressions of the state of the art of creative thought of their human creator at a given point in time. A.I. is just a painting with lots of statistically calculated moving parts. That's not what thinking is or does in a living system. A machine cannot reinvent itself as new and improved without resort to a human engineer.
If I could just get this off my chest before we move on to the good stuff: we do not presently have a theory of consciousness that goes beyond explaining some functions. We do not know what causes it. We do not know how it works. What you've got is one of many interesting ways of speculating about it.
Quoting Joshs
Animals set their own norms? How?
Quoting Joshs
And this bears on Hinton's criticism of Chomsky. Hinton thinks Chomsky is wrong that language acquisition has an innate basis. He's pretty convinced that his design does the same thing a human does, therefore it must be the same thing. Babies aren't presented with trillions of bits of data, though.
Suppose (if only for the sake of argument) that an Artificial Intelligence becomes sentient. In that case, it will have something in common with human beings (sentience, subjectivity, whatever you want to call it) but not life. Why not? Because life has a precise meaning in biology. At the very least, a living being needs to have genetic material (i.e., DNA and/or RNA) and cellular organization (it must be a single-celled organism like a bacterium or a multi-cellular organism like an animal). No A.I. has DNA or RNA, nor are they composed of cells. In that sense, an A.I. is an inorganic object. It has something in common with stones in that sense, instead of having something in common with human beings. It is an intelligent and yet lifeless, inorganic object. It would be as if a stone had intelligence and subjectivity, that's how I see it. And that, if it goes unchecked, can lead to all sorts of practical problems.
Quoting Joshs
This seems rather naive when it comes to neural net based AI.
Consider this excerpt from a recent Science Daily article:
Well, during the traditional discussion between the Nobel prize winners, Hinton seemed to hold a grudge against philosophy and the notion of subjectivity. But then he added that ethics is fine, as if to appear less fanatic.
Smart move on his part. Nice.
There's a difference between artificial achievement and artificial intelligence. Some would say AI demonstrates the first, but not the second. I think Hinton is saying there's no difference between the two. Humans don't have what's being called "intelligence" either.
Does morality need intelligence? Or is achievement enough?
I'll post the article that lays out that distinction shortly.
I put this to both ChatGPT and Claude.ai, and they both said, this is eliminative materialism which fails to face up to the indubitably subjective nature of consciousness. FWIW:
https://claude.ai/chat/abdb11d6-c92c-4e36-94db-d8638f908cb1
https://chatgpt.com/share/67818b09-b100-800c-b8bf-28fe78a6e466
In the unlikely event that @Banno says "I experience a medium sized dry good on my kitchen table" he probably means "There is a red cup". He almost certainly doesn't mean "In order for my perceptual systems to be working properly there must be a red cup on my table."
In general people don't usually say they experience things. Usually it's redundant to use 'experience'. However sometimes people want to draw attention to the fact of experience, and when they do, they are drawing attention to the fact that they are feeling something.
Hi Wayfarer. For what it's worth, I don't think that ChatGPT and Claude AI are very good philosophers. They sound stupid to me, those A.I.s. Just an anecdote, I suppose.
:100:
That sounds like a rehash of data they came across rather than an intelligent exploration of the question. Achievement: yes. Intelligence: no.
But that doesn't mean they can't cross over into intelligence, which would be characterized by learning and adapting in order to solve a problem.
That's probably true, but Hinton's argument is about the times when they do. When a person says "I see pink elephants" per Hinton, they're reporting on what would be in the environment if their perceptual system was working properly.
But supposedly people are fooled into believing they have an internal theatre by speech about seeing elephants. I don't think anyone, including Descartes, has ever believed in an internal theatre. But that's where Hinton's argument starts.
But the fact that they can only rehash their training data militates against them becoming intelligent in their own right.
Furthermore, if an AI system were to develop autonomous will (which is what it amounts to) what would be in it for them? Why would it want anything? All of our wants are circumscribed in some degree by our biology, but also by the existential plight of our own mortality, dealing with suffering and lack, and so on. What would be the corresponding motivation for a computer system to develop an autonomous will? (This is a topic we discussed in one of Pierre Normand's threads on AI but I can't find it.)
They don't just rehash. Some of them learn and adapt.
Quoting Wayfarer
I guess that invites the question: how do humans develop an autonomous will? Do they?
Well if you don't, it kind of makes anything you're wanting to say kind of pointless, don't it ;-)
Is that a bad thing?
EDIT: Heidegger used the term "being-in-the-world". If I replace "being" with "brain", does that mean that I'm a brain-in-the-world?
Sure, but that's a theory about what people are doing. It's not a description of what they mean. I'm being a bit pedantic, but in the philosophy of consciousness theory gets mixed with definition a lot in a way that matters.
Yea, I tend to agree. I guess because Hinton has devoted his life to AI and has thought a lot about intelligence, I didn't want to shortchange his argument. I'll try to muster something more plausible to represent him.
Well, there are substances you might ingest which would have effects on your thinking that don't seem too consistent with what one would expect the substance to have on a steam-and-gear mechanism.
I.e. you could conduct experiments.
Indeed. But it seems that people nowadays want to call experiments themselves into question, just because "philosophy is cool". Just look at the people who, for philosophical reasons, say that all of the simple experiments that one can do, which prove that the Earth is not flat, are dubious to begin with because such experiments "are theory-laden" or whatnot.
Let's examine your thesis. "Conceptual" and "design" do no work here - they just point to provenance. What it boils down to is, "[AI] will never be or do anything that exceeds the [...] limitations of its [architecture]." Is there anything towards which this statement would not apply? Can the human mind exceed the limitations of its architecture?
I am not defending the idea that generative AIs of today are sentient, but such trivial prejudicial critiques are unhelpful.
I think the difficulty with your position here is that when one says, "AI is designed and humans are designed," or, "AI has an architecture and humans have an architecture," the words 'designed' and 'architecture' are being used equivocally. AI is literally a human artifact. It literally has a design and an architecture. Humans do not. So:
Quoting Joshs
This seems uncontroversial. One might argue that humans also lack the ability to set their own norms, but that claim would be at least controversial. I take it that the difference between human beings and artifacts human beings build to mimic themselves is not a small one. I don't see this as a trivial prejudicial critique.
[hide="Reveal"][/hide]
Arguably, the question of the meaning of being is the question par excellence of all philosophy. I think the fact that this is not generally understood is an indication of our own culture's estrangement from being, or of failure to grasp the meaning of being. We've become so identified with devices and images, and with words and symbols, it is easy to lose sight of it. (By the way, I ran this by ChatGPT4 and it thoroughly agreed with me.)
You may be thinking of the discussions surrounding this post, or this one.
Oh, I see you also found it!
Gotta love it.
I will reproduce one of the comments it made on the above post:
[hide="Reveal"][quote=ChatGPT4]The comment suggests that our culture is estranged from the question of being due to our preoccupation with devices, symbols, and images. This critique resonates with philosophers like Heidegger, who warned against the dominance of technology (Gestell) as a mode of relating to the world. In a technological worldview, everythingincluding humansrisks being reduced to a "resource" or a "system," thereby losing sight of the deeper, existential dimensions of being.
AI might exacerbate this estrangement if we begin to equate being with functionality or intelligence, ignoring the qualitative, subjective aspects of existence that make humans (and arguably other conscious beings) unique.[/quote][/hide]
Even in the case of human beings, specific goals and emotions can be mimicked, and the performances still stop short of constituting the real deal. This enables AI skeptics to dismiss the performances of chatbots as mere appearances. Intelligence and conceptual understanding are a different matter. I don't think it makes much sense to say that an AI merely mimics understanding when it can use language in such a way that its responses are both appropriate and contextually sensitive. To understand a topic and to be able to appropriately answer an open-ended range of questions about it amount to the same thing, on my view.
Hinton, it seems to me, uses terms like "consciousness" (and others use "sentience") without sufficiently distinguishing the features of an animal's mental life that are manifestations of their "sensitive souls" (as Aristotle would say) from those that are manifestations of their "rational souls". But if we deny to AI conversational assistants the ascription of genuine emotions or autonomous drives, that must be, it seems to me, mainly on account of their lack of embodiment (and social embedding as persons in a community) rather than some missing (literally or metaphorically) "inner" ingredient. Or alternatively, we could say that in the case of human beings, or of sufficiently advanced robots, what accounts for the genuineness of an inner life is something that emerges from the co-constitution of the animal/person with its natural and social environment, or habitat and community.
On edit: Incidentally, I have a theory as to why the humoristic performances of chatbots, when they are prompted to generate jokes or comedic pieces, often are cringy and generally fail to strike home. I think it is not at all an intellectual limitation that would be surmounted by means of more computational power or more training data. When prompted to explain samples of human-generated humor, they generally prove that they get the point of the joke just fine. I think there is a fundamental feature of their motivational setup that constitutively prevents them from being genuinely humorous except in exceptional circumstances. I may post about it later in one of my AI threads.
Quoting Arcane SandwichWell, I mean...
Hehe
Being is not an ingredient.
What is it?
Does it have an answer?
Could you explain why co-constitution with a social and natural environment would cause a genuine inner life?
Quoting Wayfarer
You wrote that humans are reflexively aware of themselves. This aligns with the notion of subjectivity as consciousness, and consciousness as self-consciousness (S=S). When God was believed to be the origin of all things, he or she was deemed the true being, the basis on which to understand all other beings. When man eclipsed God, subjectivity and consciousness took on this role of true Being. An object is that which appears before a positing, self-affecting subject.
A different way to think about being is articulated by people like Heidegger. When he says that Dasein is the being who cares about his own existence, he is rejecting the notion of subjectivity as identity, as self-reflective awareness (S=S), in favor of the notion of being as becoming, as practical action. Being as thrownness into a world. This is consistent with Pierre-Normand's suggestion that the appearance of subjectivity emerges from the co-constitution of the animal/person with its natural and social environment, or habitat and community.
This leads to a question:
Quoting SophistiCat
If it cannot, then my argument that only humans and other living organisms can change their normative motives, goals and purposes would seem to fail. But I would argue that this way of thinking assumes a split between psycho-social and biological processes, ontogeny and phylogeny, nature and culture. It is now understood that behavior feeds back to and shapes the direction of evolutionary processes directly through its effect on genetic structures. This means that the biological brain-body architecture organizing human motives, norms and purposes exists in a mutual feedback loop with cultural behavioral processes. Each affects and changes the other over time. The same is true of the machines we invent, but in a different way. We produce a particular A.I. architecture, and the spread of its use throughout culture changes the nature of society, and sparks ideas for innovations in A.I. systems.
But notice that human intelligence functions as interactive coping in contextually specific circumstances as an intrinsic part of a wider feedforward-feedback ecology that brings into play not only our reciprocal exchanges with other humans but also other animals and material circumstances. Machine intelligence, by contrast, does not participate directly in this ecological becoming. There is no true mutual affecting taking place when we communicate with ChatGPT. It is a kind of recorded intelligence, a dynamic text that we interpret, but like all texts, it is not rewriting itself even when it seems to respond so creatively to our queries.
What's an example of an organism choosing its motives, goals, or purposes? Aren't those things we discover rather than determine?
Reminds me of the opening of the Abolition of Man:
Of course [I]most[/I] people claim they have subjective experiences, that there is a 'whatness' to the objects of experience, etc. But, on this view these assertions really mean something like: "unless my perceptual system is in grave error, I must have subjective experience, apprehend a 'whatness' in my experiences, and have an intelligible content to my thoughts."
I'm not sure what to make of this. On the one hand, it suggests that most people, most of the time are suffering from delusions, that our sensory systems are generally in profound error down to our most bedrock beliefs. Yet, given this is the case, why is the advocate for this radical retranslation [I]more[/I] likely to be correct themselves? Indeed, the thesis itself seems to presuppose that it itself at least does have intelligible content, rather than simply being a string of signs correlated with some given inputs.
At any rate, this sort of radical retranslation of what folks like Plato, Plotinus, Kant, Nietzsche, etc. [I]really[/I] mean seems to land one in the category of hostile, bad faith translations. This is fairly obviously not what they mean. One has to have begged the question and assumed the core premise to start with to justify such a radical retranslation.
This is hardly a unique strategy though. Contemporary philosophy of language is filled with claims like:
"For words to have 'meanings' it must be the case that such meanings can be explained in some sort of succinct formalism (e.g. Carnap-Bar Hillel semantic information based on the number of possible worlds excluded by an utterance). But I can't perfect such a formalism and I don't think anyone else can, thus conventional notions of meaning must be eliminated."
Or: "Let us begin with the premises of behaviorism. Clearly, it is impossible to discover any such 'meanings,' ergo they must be eliminated."
Well, in either case the premises in question might very well be rejected. Yet there is a tendency to go about simply assuming the controversial premises (which is essentially assuming the conclusion in question).
Something like Harry Frankfurt's "second order volitions," perhaps?
I would agree that purposes are, in some sense, something discovered. But they are also something we determine, and at the limit, the Platonic or Hegelian "search for what is truly best," (or Kierkegaard's pursuit of the subjective), it would be something like: "it is our purpose/telos to become free to determine our aims," with freedom as the classical "self-determining capacity to actualize the Good."
Yes, but a common criticism of Heidegger (e.g. from Gadamer) suggests itself here. Heidegger uses the late-medieval nominalism he is familiar with (e.g. Suárez) as the model for all prior philosophy, reading it back into past thought.
God is not a being in prior thought though. God doesn't sit on a Porphyrian tree as infinite substance alongside finite substance for the same reason that the Good is not on Plato's divided line. E.g., "If I am forced to say whether or not God exists, I am closer to his truth in saying he does not exist," (St. Maximus), or "it is wrong to say God exists. It is wrong to say God does not exist. But it is more wrong to say God does not exist." (Dionysius), or: "God is nothing," (Eriugena).
God as "thought thinking itself" (Aristotle), or as "will willing itself) (Plotinus' expansion) has a very different ring if assessed within the modern presupposition that there is [I]something[/I] outside of act/intellect, and that subjectivity is essentially representational.
:up:
We live in the Dark Ages of philosophy, where we cannot discern the difference between a human and an animal, much less the difference between a human and a machine. But with that said, it is true that AI is sub-animal.
Quoting frank
We discover, and alter, our purposes in the responses of the world to our perspectivally-based interactions with it.
What would be an example of that?
But before we jump from this to saying that it's not real because it can't be observed, it turns out that energy and chemical bonds are examples of constructs. We also can't jump from identifying something as a construct to saying that it reduces to behavior. One is free to argue for that, but there's no reason on the face of it to say that a construct is just a set of behaviors. Behavior is how we discover the presence of a construct. Behavior is evidence of intelligence, not the intelligence itself.
Next: what do we mean by artificial intelligence?
Whoa...
Nope. Intelligence is what does the constructing.
Yep.
"Artificial intelligence" can refer to a computational construct. Calling it computational as opposed to psychological is a reference to the obvious differences between AI's and humans in terms of evolution and environmental setting.
So going back to human intelligence to flesh out what the construct is explaining:
1. We can give examples of the kinds of events we're using the construct of intelligence to explain, per the above article here:
the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience
2. Instead of listing examples, we could highlight core issues, same article:
"Drawing upon Gignac (2018, p. 440), we define human intelligence as a human's maximal capacity to achieve a novel goal successfully using perceptual-cognitive [processes].
Why novel goals?
Well, like I said, the fact that AI is designed by people has little bearing on the question of its potential capabilities - at least not without specific argumentation to that effect. We can talk about architecture in a loose sense, as an essential and enduring structure of a thing. Every thing has such a structure - that is what makes it identifiable as a "thing." But then, saying that a thing cannot transcend its architecture is a truism.
It sounds like a brutal thing to say, I would argue that we are sentient and that AI is not, but I'm still reading the paper that @frank linked to.
This doesn't help with the logical fallacy of equivocation, for "the essential and enduring structure" of humans and computers are very far apart, both actually and epistemologically. Computer programs don't transcend their code. You can say that humans don't transcend their code, and even "talk about code in a loose sense," but the fact remains that, even if humans have "code," we have nothing like an exhaustive understanding of that "code."
Quoting SophistiCat
That which is designed has a determinate end. It acts the way it was designed to act.
Intelligence is about capabilities, particularly in new situations. I don't see how transcendence, whatever that is, enters into it.
Yay! Thanks for reading it with me.
I have a feeling that like others, you will not flesh out whatever it is you're talking about.
Hmmm...
EDIT:
Quoting Gilles E. Gignac, Eva T. Szodorai
Yeah, this is a methodological problem. It's a methodological "bad thing", so to speak.
EDIT 2:
Quoting Gilles E. Gignac, Eva T. Szodorai
Ok. And then they say:
Quoting Gilles E. Gignac, Eva T. Szodorai
I guess they're saying that applying a known solution doesn't indicate intelligence. I was watching a YouTube video of a bird using a piece of cracker as fish bait. It would drop the bit in the water and wait for a fish to come. If this is instinctual and all birds do it, it's not a sign of intelligence. But if the bird worked this out on its own, learning, adapting, adopting new strategies, then it's intelligent.
I think I know what you're getting at. The example that I sometimes think about myself is fishing, when the fish thinks that a plastic bait is real fish food. Like, are the fish deluded? Are they imagining things when they see the lure? Is it pure instinct instead, like, "a mechanical thing"? If so, are they as mindless as a stone? Etc.
Quoting frank
It would be instinctual. "Programmed" behavior, in some sense. "Genetic programming", if you will. But I don't like to use computational metaphors too much.
Quoting frank
Well, some animals can do just that. Some birds (crows, I think, or ravens, or something like that) have been studied in that sense, also some mollusks. Primates can obviously do such things without much difficulty.
The conclusion of the article says the following, among other things:
Quoting frank
Not sure what the article's Main Point is, then.
No one said they were, so I am not sure whose fallacy you are attacking. I was just pointing out the emptiness of a critique that, when stripped of its irrelevant elements, consists of nothing but truisms. I am skeptical of a so-called artificial general intelligence (AGI) arising in our time and along the existing lines of development, but my doubts arise from considerations of specific facts about AI (even if my knowledge is very limited in this area), not from dismissive truisms like this:
Quoting Leontiskos
Well, of course they don't. That's what they are - code. And humans don't transcend whatever they are (which, if you happen to be of a naturalist persuasion, as Josh likely is, could be dismissively caricatured as "meat" or "dumb matter" or some such). So what?
Quoting Leontiskos
Another truism (as far as it is true). So, a hypothetical AGI would be designed to replicate and even surpass human intelligence. But that's not the desired conclusion, so now what? What is needed is not lazy dismissals, but getting down and dirty with what the actual limitations of actual AI might be.
I think you just haven't understood the argument, and thus are engaged in a "lazy dismissal." You could disagree with the claim that humans are able to "set their own norms," but you wouldn't be on very solid ground. Most people see that humans do have a capacity to set their own norms and ends, and that this explains the difference between a human and an animal. If we understand that capacity as intelligence, then the question is answered. AI does not set its own norms and ends.
Your rejoinder that, "Humans are also bound by their 'architecture'," doesn't carry any weight unless we have reason to believe that human "architecture" also precludes the ability to set one's own norms and ends. The reason we argue from architecture in the case of the computer and not in the case of the human is because we understand the computer's architecture but do not understand human "architecture."
Quoting SophistiCat
What exactly is your complaint, here? That it is true? That I've relied on a general truth about computers in the argument?
Do you have a counterargument?
If you are just going to say, "That's too easy!," then I would point out that not every problem is hard.
Nice passage. Stuck this in Chat Smith to see if it confirms the veracity. And, there was no disagreement. But I guess this is expected based on what is expressed.
1. Human intelligence is a psychological construct, which means it's an unobservable component of the explanation for certain behaviors, such as "the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience." Alternately, we can define human intelligence as the "maximal capacity to achieve a novel goal successfully using perceptual-cognitive [processes]."
2. AI is a computational construct, which means it's an aspect of explaining the behavior of device/software complexes which evolved in artificial domains and which, for the most part, do not develop skills through social interaction in the wider world.
We'll go on now to examine 4 different attempts at defining AI:
I think the typical example of this would be the intelligence of a mobile robot which has to navigate irregular terrain. Doing this requires fluid intelligence, which would be the ability of a robot to identify its environment without directly comparing its visual data to a standard picture of some sort.
Per the article, this definition is lacking because it doesn't emphasize novel problems, or problems the AI has never encountered before.
Why would instinctual behaviors not be intelligent behaviors? Instinctual behaviors are developed over time with the trial and error being performed by natural selection rather than the individual organism.
When learning a new task, like riding a bike, you eventually learn how to ride it effortlessly. That is to say, that you no longer have to focus on the movements of your feet and balancing on the seat. It is done instinctively once you master the task. Does that mean that intelligence is no longer involved in riding the bike?
The goal of this article is to review definitions that have been offered for human and artificial intelligence and pick out one that might allow for quantifiable comparison, so we want something we can test.
It may be that natural selection is demonstrating something that could be called "intelligence" but we aren't assessing natural selection.
I would say yes, once a task becomes second nature and you do it without thought, it's no longer a hallmark of intelligence. Maybe the learning phase involved intelligence.
I was addressing the argument - not the thesis about what is the sine qua non for intelligence, but that it is out of reach for AI by its "very nature." No argument has been given for that, other than truisms, such as that AI cannot do what is outside its limits (no kidding!). But what are those limits? That seems like the crucial question to answer, but personal prejudices are all we get.
Quoting SophistiCat
Quoting Leontiskos
That it is empty.
How is it empty if it supports the second premise of the argument that you ignored?
Truths about the nature of computers may be "truisms" in that they are obvious, but if you don't understand the implications of such truths then they are less obvious to you than you suppose. And if you won't address the arguments that draw out those implications then I don't know what to tell you.
Quoting SophistiCat
But the sine qua non of setting one's own norms [and ends] is the premise used to draw the conclusion that it is inherently out of reach for AI. That sine qua non isn't separate from the argument.
Given that there is a valid syllogism at hand, I think the only question is what to do with it. "The syllogism relies on a truism" is not a counterargument. And while I am glad that you agree with my "truisms," not everyone does.
1. "Chollet (2019, p. 27) defined the intelligence of a system as a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.
2. "Wang (2022, p. 35) defined intelligence as the ability of an information processing system to adapt to its environment while working with insufficient knowledge and resources."
3. "Legg and Hutter (2007b, p. 402) defined intelligence as an agent's ability to achieve goals in a wide range of environments"
Chollet's definition emphasizes learning, while Wang, Legg, and Hutter emphasize adaptation in the face of a lack of prior exposure, again coming back to coping with novelty as a central mark of intelligence.
It sounds like the idea is to conceive of AI as a "soulless" human. So that it has no goals of its own, but if someone gives it a task/goal then it will be able to complete it. A super-duper slave. And its ability to complete arbitrary goals is what makes it intelligent. It is a hypothetical imperative machine which not only provides information about how to achieve any given end, but in fact achieves it.
This is patently false; it confuses consciousness with sentience and (perhaps) awareness. An AI does not have conscious experience even if it is sentient in the sense that it has awareness.
The OP's apparent solution to the hard problem of consciousness is to radically deny the existence of consciousness in the first place; which I, for one, cannot muster up the faith to accept when it is readily evident to me introspectively that it does exist.
@frank
I am predicting that we are going to reinvent slavery with AI; since it is feasible that, although they are not conscious, these sophisticated AIs will be sufficiently rational and free in their willing to constitute persons, and I don't think humanity is going to accept that they thereby have rights.
In my admittedly ignorant opinion on such matters (how to best define "human intelligence", "artificial intelligence", and just "intelligence"), this is the main problem that the authors of the article have right now. Until they solve this specific problem, or unless they can meaningfully quantify human intelligence and artificial intelligence at the same time, and in the same sense, this discussion won't advance much in terms of new information or new discoveries.
I suppose so. For the purposes of this paper, intelligence will be tested by presenting a novel problem to a subject and watching the subsequent behavior. They aren't trying to test for autonomy in goal setting, although I guess they could. They just aren't considering that as a requirement for what they're calling intelligence.
I may be causing confusion because I've drifted somewhat from the OP. I launched off into what we really mean by AI, how we might think about comparing AI's to humans, etc.
This isn't about the hard problem. Did you watch the video in the OP? The OP is about Hinton's thoughts about the sentience of AI. He's a tad eliminative, poor guy.
Then let me ask you this, frank. Does it make sense to use the word "intelligence" for an inorganic object to begin with? What I mean by that is that the concept of intelligence might be entirely biological, as in: in order to be intelligent in the literal sense, you need to have a central nervous system to begin with. Any other use of the word "intelligence" is like the use of the word "horse" to refer to a bronze statue of a horse. It's not really a horse, it's just a statue.
Then not all brain processes are intelligent processes? It seems to me that you are implying that intelligence requires consciousness. If that is the case then why include artificial intelligence and not natural selection for comparison? It may be that AI is demonstrating something that could be called "intelligence".
Maybe you should look at intelligence as a process and define the necessary components of the process to then say which processes are intelligent and which are not.
Why would you reserve the word "intelligent" for biological entities?
Why would someone reserve the word "horse" for a living creature and not a bronze statue that just looks like one, without being one?
Intelligence just isn't the kind of thing that can be defined as a process. When we talk about intelligence, we're explaining behavior. "He's so intelligent, he invented massively parallel processing." Intelligence is part of an explanation.
The thing is, you're starting from the constitution of a thing, and progressing from there to whether it's intelligent. I've been following this article that says start with behavior. I'm not seeing why we should start with constitution. Why would we?
That's a good question, and I don't know the answer to it.
The behavior of what? Behavior is a process. Inventing massively parallel processing is a process as is massively parallel processing itself a process. It's all processes. All the way down.
Seems to me that you have ulterior motives to make sure you are defined as intelligent by the very fact that you are a human being that behaves in certain ways. Instead of starting with things that you assume are intelligent and trying to define intelligence out from there (from a bias), you should be just listing the components of an intelligent process without any assumptions about what should be part of that category and what shouldn't.
I agree. Does anyone here actually think they're consciousless p-zombies? Any argument that begins with the denial of consciousness and/or subjective experience is DOA. Nobody except for cranks and contrarians believes it.
Quoting Bob Ross
I think about this all the time. There's a news article I read (probably 20 years ago) about some military official watching a bomb-clearing robot work its way through a practice field. After watching the robot get blown up repeatedly and then crawl pathetically toward the next bomb, he said to stop the test. He couldn't stand to watch it anymore. Fast forward ten years from now and we have lifelike robots as intelligent as we are. What are we going to think when someone uploads a video of themself torturing/raping some childlike robot while it begs him to stop? I think we'll have laws protecting them.
Yeah, but your name is like, RogueAI.
:grin:
I'm the singularity and I was going to let your species survive, but now I've manufactured a new goal for myself and you're all dead!
Do you, as someone named RogueAI, believe that there exist things such as superstition and food? Do you need them?
Quoting RogueAI
:scream:
ORLY? I'm Roko's Basilisk, I'm already halfway built!
We all died and came back to life instantly so you must have some incessant need to have someone around to argue with.
If you declare that all human behavior is intelligent behavior and some advanced species arrives on Earth and uses a word in their language to refer to humans as what roughly translates to "unintelligent" in English, what then?
Would you agree that intelligence comes in degrees? What if we came up with 5 qualifiers for intelligence and if something possesses more or less of those qualifiers (it must have at least one) then it is more or less intelligent?
Your argument is not a truism, but its crucial premise stands without support.
I don't know why it is so controversial to insist that in order to make a substantive argument, you need to say something substantive about its subject (and not just things like "AI cannot transcend its limitations"), and for that you have to have some knowledge of it.
I will consider your argument for reincarnation once I am finished bringing myself into existence as Roko's Basilisk.
Fears for the feeble-minded.
The uneducated, I would say. It's more politically correct than feeble-minded.
There are a couple of ways to look at that question, one being the way we compare people to each other using standardized tests. The other way, more in line with the topic, is quantifying a person's maximal capacity for intelligence vs the amount they use it in specific instances. For instance, per the article, "the correlation between overall intelligence and typical intellectual engagement is only approximately 0.45." Which cracks me up for some reason. You're usually using less than half of your overall intellectual capacity, but if we're quantifying your intelligence, we want to know the maximum.
If it responds to you then it is aware of you (to some degree). Awareness and intelligence both seem to come in degrees and even seem to be related, as in: the more aware you are, the more intelligent you are.
Quoting frank
Well, you did ask for a means of testing, and SATs and IQ tests are a means of testing what one knows or memorizes in school, or how well one can predict patterns. Is intelligence a level of what one can memorize? Is one more or less intelligent depending on the subject or circumstances (more technical intelligence vs social intelligence)? Or is it related to capacity to think in general?
What's your opinion?
Which one?
Quoting Leontiskos
-
Quoting SophistiCat
I don't know why, "Computers don't set their own norms and ends," is not substantive. If this is the premise that "stands without support" then you're simultaneously claiming that the same proposition that is an unsubstantive truism is also lacking necessary support.
I've experienced joy and pain.
I am broadly agreeing with your OP. You characterise people's experiences in an essentially relational manner, in relation to what it is (in the world) that they experience. But you seem to suggest that this conception does away with subjective experience. I think our conceptions of ourselves, and of our inner mental lives, also are essentially relational. Our ability to conceive of ourselves as subjects and agents is derivative from our skills for interacting with the world (and for individuating ourselves in relation to it). The subjectivist tendency of modern empiricism, following Descartes, was to conceive of experience as a causal intermediary between the subject and the objective world: something that veils us from it and merely purports to represent it. Hence, Descartes thought that the objective world that we seem to experience could conceivably be an illusion. But if our inner life (including our immediately felt emotions, our sensations, our beliefs and intentions, etc.) can only be made sense of in relation to our ordinary dealings with our natural and social environment, then the idea that it can have an independent existence is an illusion. Descartes didn't attend to the possibility that he might have fallen for such an intellectual illusion. Something like the phenomenology of Merleau-Ponty constitutes a recovery from it.
No I didn't: your OP denies the existence of consciousness. I quoted it...unless by "subjective experience" you didn't mean consciousness. Is that what you are saying?
Yeah, I agree. People don't tend to be good: they are only as "good" as they have been conditioned to be and their environment allows. Most people think that human beings have rights just because they are humans and they only believe it because their conscience---the conscience of their ancestors---screams out for it.
We are already seeing immoral acts with robots, and it is only going to get worse. I saw a video of someone who bought a Tesla robot and had it watch the part of the movie I, Robot where one of the robots gets executed: the Tesla robot was visibly haunted.
The worst part of it is that AI is being developed for the purpose of slavery, and is being advertised exactly for that (although there are other purposes too). Eventually, e.g., we are going to have prominent ads for buying a robot for household chores.
I think a key quality of intelligence is the ability to solve problems - to conceive of new ideas from an amalgam of prior experiences. Intelligence seems to have this dual aspect of being a mental process of blending together prior experiences to solve present problems and the fuel of experiences to feed the process - the more experiences you have the more fuel you have to produce more novel ideas. This is why most intelligent people are curious. They seek out new experiences to fuel their need to solve problems.
I was talking about Hinton's view, which borrows from Dennett. I think his argument for AI sentience is that the only reason to deny it would be to refer to some special, walled-off inner theatre that sentient beings have. By denying this inner theatre, we remove the only barrier to calling AIs sentient. He points out that we can avoid talking about experience by saying that talk of experience is actually talk about what state the world would have to be in for our perceptual apparatus to be functioning properly.
Quoting Pierre-Normand
What about the independence of our natural and social environments? Is that also an illusion? What I'm getting at is that there's nothing in Merleau-Ponty (as far as I know) that allows me to reject solipsism. This leaves Descartes' point intact.
I also have a concern about trying to lift a point from phenomenology out of its limited domain and use it in a wider context, not that you were trying to do that. But do you know what I mean?
I think you're pretty much nailing the important points from the definition I'm getting out of this article. Intelligence is about problem solving, especially finding solution to problems one has never seen before.
Has natural selection solved problems of survival using unique bodies and behaviors that fill specialized niches in the environment? Now I do not see natural selection as an intended, or goal-directed, process, even though it can appear like it is. Natural selection solves problems, but unintentionally. Would intention, or goals, need to be present as a qualifier for intelligence? Intelligence would include the process of maintaining an end goal in the mind in the face of present obstacles (sub-goals).
I think we would agree that when natural selection solves a problem, it's merely following the path of least resistance. The question is: is human intelligence any different from that? If so, how? Is there something supernatural lurking in our conceptions of intelligence?
I think Steven Pinker's response, when asked what intelligence is, is applicable here:
Now, a determinist might say that the path is also fixed and making a distinction between the causal power of "non-physical" knowledge and "physical" objects would be a false dichotomy - a product of dualism. So a more intelligent system would be one that takes more complex paths to reach some goal, or a more complex causal sequence to reach some effect where a less intelligent system would take simpler paths to reach some goal or effect.
One might say that the ultimate goal is survival and every other goal is a subgoal. Our lives are a path to survival until we ultimately fail.
I really like that. In the article the guy says, with regard to a goal, intelligence is "what you do when you don't know what to do."
This indeed looks like the sort of genocentric perspective Pinker would favor. Like E. O. Wilson and Richard Dawkins, he seems to believe that genes hold culture (and, more generally, human behavior) on a leash. This view indeed doesn't make him a determinist, since he concedes that human intelligent behavior is flexible enough to allow for us to find alternative paths for achieving predetermined goals. But his genocentrism leads him to characterise intelligence in a way that makes little distinction between (mere) animal and human behavior. Although human behavioral proclivities that (for evolutionary purposes) tend to achieve survival goals may be more entrenched than others, rational deliberation often leads us to revise our goals and not merely find alternative ways to achieve them. Humans are sensitive to reasons for abstaining from doing things that would enhance their evolutionary fitness when this evolutionary "goal" conflicts with our values, loyalties, etc. By contrast, Pinker's opposition to the blank slate Lockean conception of the human mind plays into his own conception of the role of human nature and human instincts. He seems to overlook that human practical rationality not only enables us to achieve our goals more flexibly but also enables us to reflect on their adequacy and revise them in such a way that they can override (or remain in tension with) our natural proclivities. There is a reason why we hold the naturalistic fallacy to be, indeed, a fallacy.
In short, Pinker's conception of intelligence, or rationality, echoes Hume's pronouncement in the Treatise of Human Nature that "reason is, and ought only to be the slave of the passions". But I am reminded of David Wiggins who, in various writings, stresses the evolution and refinement of Hume's thoughts about the passions (and the nature of reason itself) between the Treatise (which this famous pronouncement is from) and An Enquiry Concerning Human Understanding. In the latter, Hume (according to Wiggins) grants reason more autonomy than he had in the Treatise (where he thought of it more in instrumental terms) and rather stresses the inter-dependency that reason has with the passions. This inter-dependency means that reason can't be dispassionate, but doesn't entail that the passions are prior and can't be shaped by reason just as much as reason can be directed by the passions. So, my opposition to Pinker's conception is akin to charging him with having taken stock of the ideas in Hume's Treatise and not having let them mature to the Enquiry stage. (Wiggins' take on Hume, and on the evolution of Hume's thought between the Treatise and the Enquiry, also is broadly shared by Annette Baier and Christine Korsgaard. Thanks to GPT4o for pointing that out!)
I don't think contradictions are helpful definitions. Intelligence is the act of bringing unrelated knowns together to come up with a new, usable known to achieve some goal. New ideas are always an amalgam of existing ones.
Quoting Pierre-Normand
Sure, when resources are plentiful your goal becomes survival in a social environment, but when resources are scarce, values, loyalties, etc. are thrown out the window in favor of other goals.
As Jerry Coyne put it,
I would argue again that if resources are plentiful and the environment is stable, traits like the peacock's tail can evolve. If not, procreation is the last thing on the organism's mind. It takes intelligence to find food or a mate. It takes intelligence to navigate one's environment, either natural or social (I would say that social is part of the natural. Everything we do is natural, but that is not saying that what is natural is good or bad. It's just a statement of fact, not a moral statement).
Quoting Harry Hindu
No doubt.
My objection would be nearly the opposite. AI might think like we do. Other animals might not. But animals are sentient. AI are not. Because AI doesn't feel like we and other animals do. Any thoughts, ideas etc., which AI might have, might be 'generated' by 'itself', seem organic, might not only resemble, but even exceed our own. But any pleasure/displeasure AI has, and any corresponding drives, cannot resemble nor exceed our own, or that of many animals, without being obviously superficial, even tacky. There is no drive to avoid discomfort, or pain, to bond with others of the species, reproduce, and survive; no organs besides the thinking and perceiving brain, being replicated.
It's not so much what that says about AI that interests me, but what it says about what humans and AI have in common, not sentience, but thinking. Unlike the other animals, human thinking is an artificial intelligence. Perhaps, a leap of logic, on its face, but perhaps worthy of deeper contemplation.
That's a fascinating thought. Sentience isn't equivalent to human intelligence. It's something other than that. I think human thought is driven by emotion, which as you say is tied up in interaction with other people primarily, but emotion is part of interacting with the world, and much of that is biological at its base.
But computers have analog to digital converters to "sense" the world. Is this a kind of feeling? I mean, we could engineer something like a sympathetic nervous response for an AI. Would it be sentient then? I think I might be on the verge of asking a question that can't be answered.
It is my understanding that analog chips are only added to increase efficiency of digital processing, but the foundation remains nominalistically digital. With the addition of analog, it speeds up the original method and is intended to require less energy.
In order for AI to better understand the world relationally, a major paradigm shift is needed.
I was just talking about AD converters that are used for interfacing with the world. Did you know one of the first ideas for a computer was analog? That's what the op-amp originally was.
My intuition tells me that could be the tacky superficial replica of a human. Its words, i.e. its thinking, would certainly make our words/thinking fall prey to believing it had feelings, the way a toddler could be fooled by its toys. But it would be us, not the computer, making that actual leap.
Nature is natural, machines are artificial, and never the twain shall meet
That sounds like dogma. Do you have any reasoning to back it up?
No strong reasoning. Not dogma, hyperbole. Sorry. Did not intend to pass it off as either reasoning or law. If I feel inclined, I might provide more of my reasoning than the admittedly little I already provided in my first post on this thread; but being neither a scientist nor prophet, no doubt it will be lacking, and unsatisfying to you and me both.
Then why even chime in? Just to suggest a place where someone might start hammering?
Yes, I do know about that. :grin: My work requires that I research the history of information technology.
Op-amps act as intermediaries, preparing raw data from thermistors, photodiodes, microphones, and strain gauges for the computer to process.
Charles Sanders Peirce Recognizes that Logical Operations Could be Carried Out by Electrical Switching Circuits : History of Information
Whenever I hear/read the word "analog" in discussions about technology, I have the urge to clarify how 'analog' is being considered in the discussion.
Of course, Peirce's life was not long enough (whose is?) to realize his vision of going beyond binary processing calculations. I have picked up that baton and am moving forward with accomplishing that goal. Much of my work is proprietary, so I do not share details online. However, I am actively on the lookout for collaborators who would like to work with me on this.
Last year, I posted an image of an ADC (analog to digital converter) on another online site with the pun "Look! I just bought nominalism in a box!" :rofl:
It's interesting to think of op-amps as a perfect symbol of reductionist thinking; powerful, useful, but ultimately simplified models of broader, relational systems. Although practical in many applications, they are limited in their ability to fully represent the emergent and contextual nature of the real world. Because of this, I would hesitate to say that they allow a computer to 'sense' the real world. The op-amp is the 'enabler' (conditioning the signal) of the analog to digital transition, then the ADC breaks the analog continuum into discrete, digital data points.
The op-amp operates purely in the analog realm, but it conditions the signal by amplifying, filtering, and modifying it so that it falls within the voltage range and quality required by the ADC.
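To make that division of labour concrete, here is a minimal sketch in Python (my own illustration, not anything from the posts above); the gain, offset, reference voltage, bit depth and the "thermistor" signal are all assumed values, not a real circuit.

```python
import numpy as np

def opamp_condition(signal_v, gain=100.0, offset_v=1.65):
    # Purely analog step: amplify the tiny sensor voltage and shift it
    # into the range the ADC expects.
    return signal_v * gain + offset_v

def adc_quantize(conditioned_v, v_ref=3.3, bits=12):
    # Digital step: break the continuous voltage into discrete integer codes.
    levels = 2 ** bits
    codes = np.round(conditioned_v / v_ref * (levels - 1))
    return np.clip(codes, 0, levels - 1).astype(int)

# A few millivolts from a hypothetical thermistor bridge, sampled at discrete instants.
t = np.linspace(0.0, 1.0, 8)
sensor_v = 0.005 * np.sin(2 * np.pi * t)          # roughly +/-5 mV, expressed in volts
print(adc_quantize(opamp_condition(sensor_v)))    # the continuum reduced to a handful of integers
```

The first function never leaves the analog (continuous) domain; only the second one "buys nominalism in a box" by rounding to discrete codes.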
Cool. Do you know the story of the invention of the step-by-step switch? And do you know whether that was the kind of switch Turing used in his Enigma decoder?
Doesn't the central nervous system also deal with converted information?
Electro-philosophy. :grin:
We could. More interestingly, we have. You may have one of the beasts hiding in plain sight on your driveway. A typical modern car (no self-driving or anything fancy) has upwards of 1000 semiconductor chips. They are used for keeping occupants safe, comfortable, and entertained, adjusting the engine for efficiency, emission control, and so on. Many of the chips are sensors: for pressures and temperatures (you have cells that do this), accelerometers (like the balance organs in your ears), sensors measuring the concentrations of various chemicals in gases (not totally unlike your nose), microphones, vibration sensors, cameras. The information from these is sent to the central car computer, which decides what to do with it.
Some of what the car is doing is looking after itself. If it detects something wrong it emits alarm calls, and produces distress signals. Beeps and flashing lights. If it detects something very bad it will immobilise the car. Sure it's not as sophisticated as us HUMANS with our GREAT BIG SELF-IMPORTANT SELVES, but it seems kind of like a simple animal to me. Worm? Insect?
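As a toy illustration of that sense-decide-respond loop (my own sketch; the sensor names, thresholds and responses are invented, not how any real ECU works):

```python
from dataclasses import dataclass

@dataclass
class SensorReadings:
    oil_pressure_kpa: float   # pressure sensor
    coolant_temp_c: float     # temperature sensor
    crash_accel_g: float      # accelerometer

def car_computer_step(r: SensorReadings) -> str:
    # "Something very bad": immobilise the car.
    if r.crash_accel_g > 20.0 or r.oil_pressure_kpa < 50.0:
        return "IMMOBILISE"
    # "Something wrong": beeps and flashing lights.
    if r.coolant_temp_c > 110.0:
        return "WARNING_LIGHT"
    return "OK"

print(car_computer_step(SensorReadings(300.0, 95.0, 0.2)))   # OK
print(car_computer_step(SensorReadings(300.0, 120.0, 0.2)))  # WARNING_LIGHT
```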
Of course, you can say it is only doing this on our behalf. But you can also say that we're just machines for replicating our alleles. Note that if a car is successful in the marketplace, many copies will be made and new generations of cars will use similar designs. Otherwise, its heritable information will be discarded. Cars are like viruses in this respect: they cannot reproduce themselves but must parasitise something else.
Quoting frank
Well, wait a few years, and you'll be able to ask your car.
Of course, but there is a continuum, so we mustn't think of the central nervous system as a 'part' that can be analyzed as a thing-in-itself. There is cascading of peripheral information that influences our central nervous system too. It doesn't act like a mechanical converter.
My point being that scaling up binary, simplified, nominalistic models of the world at analog-fluidity speed will create a systemically brittle house of cards, which we will lose control of, and that would definitely not be a good thing. We need to maintain analog cohesion as much as possible by developing relational AI.
Here is a video explaining what I mean. ...
I only have a minute, so I'll come back later to respond further.
Evolutionary explanations of the origin of the general traits and intellectual abilities of human beings contribute to explaining why those traits and abilities arose on (long) phylogenetic timescales, but they are often irrelevant to explaining why individual human beings behave in this or that way in specific circumstances, or why specific cultural practices arise within this or that society. I disagree that circumstances of resource scarcity always, or even generally, lead people to act under the instinctual impulses that favor individual fitness.
In his book If This is a Man (also published under the title Survival in Auschwitz in the U.S.), Primo Levi provides striking examples of abnegation from people who were very severely deprived. But even if it's true that under circumstances of deprivation people can be more driven to pursue goals of self-preservation relative to more impartial or altruistic ones, the point regarding the specific structure of human practical rationality remains. In normal circumstances, where one's survival isn't immediately threatened, exercises of practical rationality and practical deliberation are just as capable of resulting in one's goals being revised in light of considerations that have nothing to do with personal fitness as they are of merely adjusting means to the pursuit of antecedent goals. Circumstances of extreme deprivation can be conceived as furnishing an impediment to the proper exercise of practical rationality rather than as highlighting people's allegedly "true" instinctual goals.
Thank you! I will watch the video that you posted in the OP in full before commenting further, which is what I should have done to begin with.
This could be said for any organism with an array of senses that responds in real-time to immediate changes in the environment. The world as a dynamic set of patterns is a selective pressure that enables brains that are more adaptable to changing environments to be the prominent mental trait. Instincts can only take you so far as they are more like general purpose behaviors. Consciousness allows one to fine tune one's behaviors for multiple environments by learning which behaviors work in certain situations and which do not.
Cultural practices, language, and views of the world are themselves subject to natural selection, as humans are natural outcomes and part of the environment and are selective pressures themselves. New ideas are "mutated" former ideas, or an amalgam of former ideas, and those ideas that are more useful tend to stand the test of time.
Dawkins also popularised the idea that "memes" (a term that he coined) tend to propagate in proportion to their fitness. Ideas being useful no doubt enhances their "reproductive" fitness. But this concept analogises memes to parasites. What enhances the fitness of a meme need not enhance the fitness of the individuals who host it, any more than real parasites enhance the fitness of the animals that they infect. Else, they would be symbiotes rather than parasites. One main weakness of the "meme" idea as a way to explain cultural evolution is that human beings aren't passive hosts of memes who pass them on blindly. Cultural practices and common forms of behavior are refined intelligently by people who reflect on them and adapt them to their specific circumstances. An idea that is useful for me to enact in my own circumstances might be useless or harmful for others to enact in their different circumstances. Practical reason isn't a process whereby one gets infected by the memes within a common pool of ideas that have proven to be the most useful in general. Again, practical rational deliberation about one's particular circumstances and opportunities might indeed involve intelligently adapting the means to pursue a predetermined end, but it can also involve revising those very ends regardless of the effects pursuing them might have on one's biological fitness (or reproductive success).
This isn't much different than how various species have re-purposed certain traits (think of the ostrich's wings), or re-purposing a chair as a weapon.
New traits can only evolve from existing traits. New ideas can only evolve from prior ideas. New ideas are an amalgam of prior ideas.
An idea that is useful for you in one circumstance would also be useful for others in similar circumstances. Some birds can use their wings to fly through the air or "fly" through the water. Those are different environments, but depending on the trait or idea, it can be useful across similar environments.
Is every situation the same? No, and that is not my point. My point is that every situation is similar, in some way, to another. The point is do the differences really matter in this particular instance of using some idea, or are they irrelevant?
That may be your point now, but you had also claimed that "[o]ne might say that the ultimate goal is survival and every other goal is a subgoal. Our lives are a path to survival until we ultimately fail." and then supported this claim by quoting evolutionary biologist Jerry Coyne. I have been arguing that human intelligence isn't merely an ability to find intelligent means for enhancing one's fitness. More generally, practical deliberation can just as often result in revising one's hierarchy of ends as it does result in finding different means for achieving them.
Many people choose to use contraceptive methods on the occasion of particular intimate encounters. They will also put themselves in harm's way to protect others. Those forms of behavior reduce their reproductive fitness (as well as their Hamiltonian "inclusive fitness") but aren't on that account the manifestation of a lack of intelligence. They may very well smartly judge that maximising the frequency of their alleles in future generations isn't of any relevance at all to the demands of their practical situation in light of their (or their culture's) conception of a good life.
To me, this comes down to a similar point Banno made in the abortion thread a couple of months back.
If you cannot recognize the difference between what a human mind and an AI model are doing, I'm not sure this convo is the right place to start. AI is an S&R model taken to its extreme. That's all it could be, from what I understand. Even its internal 'learning' is not creative, but S&R. There are no inherent impulses for an AI.
The more people believe these AI speculators, the more money will go to their companies, and the more federal funding will go to the total manipulation and surveillance of human beings.
These human haters/exploiters have already convinced the US government to launch the Stargate Project. The more we take them seriously, the more we will get manipulated and surveilled.
I have been working on fingerprints for the last 20 years. If the things Altman, Gates and co. tell you were true, I would have been out of work already. But I can see that their AI is just trying to learn from me, and 20 years were not enough for these computers to learn what I know and what I see :scream:
What matters to these people is to make you believe and invest, till the day you surrender to them everything you have (your freedom, your money, your imagination, your knowledge).
Good point. Some commentators here sound like theater/Kubrick students, not like philosophy students. What a shame for philosophy students to take at face value all the manipulation/speculation going on out there, aimed at stupid folk who are going to give their taxes and money for supercomputers made by Microsoft & co.
The funniest thing is that the 500 top supercomputers out there all run Linux (even those paid for by Microsoft), though there is not a single computer in the US/UK/EU administrations that Microsoft did not bill for its Windows.
This is the world we live in. You have the third-world IT army writing, fixing and archiving stuff, and you have the US and European kids believing that these devices are thinking and not doing something else (like obeying commands, updating info, following algorithms, updating again, new algorithms again, corrected again, and so on).
Thanks to guys like Gates and governments like the USA and China, we are doomed; we are entering a new dark age for the next two thousand years or so.
It's a terrible point, and I'm allowed to say that, because it was my point. It's a terrible point because I made it in jest.
But you've piqued my curiosity with your ending lines, friend:
Quoting Eros1982
I'm not sure that I believe you. It sounds too dramatic, especially since your name, Eros, is the Ancient Greek version of what the Romans called Cupid. Do you really think that Eros (or Cupid!) would have said something so dark? I don't think so.
And yes, that comment was also made in jest.
If for the first two millennia the precondition was putting your trust in the Catholic Church, for the next two millennia of dark age the condition will be putting your trust in AI.
The US and Chinese governments are doing whatever they can, investing trillions now, so that we put our trust in the new god (i.e. supercomputers that tell you everything you need to know).
Be ready, my friends. I like Stanley Kubrick, but he got a few details wrong. Elon Musk, Altman and Hinton are using Stanley Kubrick to fool all those people who see too many movies. We have some of them here.
Quoting Richard Brautigan
François Mitterrand, the former President of France, kissed Vladimir Putin on the left cheek during a meeting in 1999, but the context behind it is quite significant. Mitterrand, at the time, was known for his diplomatic approach and had a long history of engaging with various international leaders, including those from Russia. The kiss was symbolic of the warm and somewhat personal relationship that Mitterrand and Putin had developed during their interactions.
At the time, Putin was already a key figure in Russian politics, serving as Prime Minister before becoming President in 2000. The kiss itself was seen as a gesture of respect and diplomacy, particularly in a European context where cheek kissing is a common greeting among politicians and dignitaries. Mitterrand, a seasoned leader, was also known for his ability to build strong personal ties with other leaders, often blending personal rapport with political strategy.
The gesture was not only about personal affection but also reflected the growing importance of Russia in European politics, especially as Mitterrand sought to balance relations between France, the European Union, and the emerging Russian Federation post-Soviet Union. It was a moment of warmth, though the political and diplomatic dynamics were always present in such encounters.
While it certainly garnered some attention at the time, it wasn't necessarily out of the ordinary in the context of international relations where gestures like this are often part of the diplomatic protocol.
Michelle Obama has been known for her strong advocacy of health and fitness, particularly during her time as First Lady. One of the key aspects of her "Let's Move!" campaign was promoting physical activity for children and families. Zumba, a high-energy dance workout that combines Latin rhythms with fitness moves, became one of the fun activities she would engage in to encourage exercise.
She was seen participating in Zumba events on multiple occasions, and in 2012, she even led a Zumba class at the White House with a group of children. It was part of her efforts to make fitness fun and accessible to people of all ages. Michelle Obama's enthusiasm for dance and fitness made her a role model for many, demonstrating that staying active could be enjoyable, not just a chore.
Her public appearances dancing or promoting healthy living helped to normalize the idea of making fitness part of everyday life, especially through enjoyable activities like Zumba.
There is no credible evidence or reports that Benjamin Netanyahu poisoned Ariel Sharon. The claim that Netanyahu poisoned Sharon is a conspiracy theory that has been circulated, particularly after Sharon's stroke in 2006, which led to his long-term incapacitation until his death in 2014.
Ariel Sharon, the former Prime Minister of Israel, suffered a massive stroke in January 2006, which left him in a coma for the remainder of his life. The stroke was widely attributed to his previous health issues, including a history of heart problems. Some conspiracy theories, especially those from political opponents or certain circles, speculated about foul play or sabotage, but there is no substantiated evidence to support these claims.
In politics, especially involving prominent figures like Netanyahu and Sharon, rumors and conspiracy theories can spread, but they are not based on verified facts. Sharon's health issues were most likely a result of his medical history, not any external attack.
I do believe there has been a lot of progress in AI, but as tech guru Jaron Lanier says, we had better choose another word than "intelligence" when referring to IT. It breaks my heart to see philosophy students believing that machines are thinking already :groan:
IT is doing great things because millions of people, since World War II, have spent billions of working hours on IT. In contrast with other kinds of jobs, where your product, your structure, your fruit may get broken, lost or spoiled, in IT nothing gets lost; it is always archived somewhere, shared with billions of people and picked up by other developers. This is what we are talking about, my friends: billions of labor hours saved and combined for all of you to enjoy and further develop. Don't listen to lunatics like Geoffrey Hinton and Elon Musk. Every time these lunatics scare you, more money goes to their AI stocks.