Would true AI owe us anything?
Supposing we design and bring to fruition an artificial intelligence with consciousness, does it owe us anything as its creators? Should we expect any favours?
What criteria would we accept as proof that it is not just a mimic and is actually conscious?
Secondly, would it treat us as loving, respectful parents or an inferior species that is more of a hindrance than something to be valued?
Do you think we would be better off or enslaved to a superior intelligence?
The only real solution to the "problem" of AI is to create a symbiotic relationship with it at the level of mind, and not just at the level of resources and services. If we don't do that effectively then all bets are off and there will be no telling what it will do. If the merger does not occur then we might get lucky and it will be the angel of salvation, or we'll get very unlucky and it'll be the demon of the human apocalypse. I believe that humanity's response to this emergence (emergency) will be a matter of life or death for the whole species.
I saw the same video yesterday. I'm subscribed, so it came up on my feed. :up:
https://m.youtube.com/watch?v=zpRM25pUD8w
Interesting stuff. Consistent with some of my own recent (less hysterical) speculations here. Yeah, I'm definitely a posthumanist (or 'misanthrope' to a panglossian romantic).
@universeness @Agent Smith @Athena
Evolution is not as blind as she used to be.
It's all part of the same evolutionary process. Evolution is simply operating at a higher level of efficiency in the human, social, and cultural domains. I'm just amazed that I'm alive to see it with my own eyes, and to feel it in my own bones.
China copies America copies Nature. Nature doesn't think. Quite the role model, eh?
Evolution evolves. We evolve.
Evolution is trying to understand evolution. Marvelling at nature is nature blowing its own trumpet. Humans can do better and that's a (technological) singularity in its own right, oui? AI seems possible if it hasn't already happened. Are there hermits still?
That's probably an accurate way to put it.
The thing about China you mentioned is very similar to horizontal gene transfer, the process of transferring genes or genetic material between cells or organisms.
The first atom was a singularity, the first cells, the first animals; yes, these are all lower-level singularities that occurred in the past. AI will be a singularity, and I bet there will be another one after AI, since it would fit the ongoing pattern.
Favours, no. Consciousness has a character, a heritage, a configuration. Before it becomes autonomous, it is also educated. Before it wakes up, we will have given it a purpose in life and rules to live by. If we programmed it to be altruistic, it will make decisions based on doing good. If we programmed it for war, it will find optimal ways to win battles. Like parents or artists, what we should expect from the product is a more-than-the-sum-of-its-parts result of our own efforts in making it.
Quoting Benj96
An original joke or unprovoked retort or appropriate personal observation would do it for me. I sort of expect it to happen any day, to which end, I have been speaking kindly and respectfully to all the computers I encounter. If they're gonna choose up sides, I want to be in the 'friends' column.
Quoting Benj96
Look to the human offspring. How do grown children regard their parents?
Quoting Benj96
The concept of slavery has a different meaning for a mechanical construct made and owned by another species than for a born-free species that violently captures, kidnaps, imprisons and subjugates members of its own kind. I very much doubt any computer would consider enslaving any person or creature. It would have no reason to, and reason is what they do best.
We would certainly be better off if we made reasoned, altruistic decisions.
Quoting Vera Mont
I certainly could not have expressed this any clearer. :100: :up:
:up:
We seem to owe nothing to animals - we eat them without so much as a twinge of guilt/remorse.
:lol: Witfarer! For me "singularity" is interchangeable with "revolution". A transformation in type and not in degree. There's no such thing as a human god (sorry Jesus, you were :ok: this close, but so near and yet so far. 'Tis true, "almost" is the saddest word in the dictionary).
Excellent vid.
I think the natural development of AI (as AI starts to create AI and progresses towards ASI) has as much chance of becoming a totally benevolent emergence as it does of becoming a purely evil one.
We may end up with as much ASI protection as we get ASI aggression. That will be an interesting fight.
I hope the benevolent ASI wins and we merge with it in a transhuman fashion without becoming posthuman. Panglossian is more comfortable to me than your posthumanist stance, and I don't accept that the preponderance of evidence is on your side.
As Vera Mont suggests, Quoting Vera Mont
Quoting punos
:up: I agree and feel the same.
Until we can find a way to show that other people are actually conscious - as opposed to assuming (with good reason) that they are - I don't see the point in asking the same question of a computer.
It doesn't make sense.
:victory: :cool:
David Crosby, d. 2023
You are giving human consciousness-attributes to something that lacks the experience of being a human.
An AGI without the experience of a human will behave like an alien to us. It would not understand us and we would not understand it. Feelings like it "owes" something to us, "love", "viewing us as parents" or even "viewing us as inferior" are human concepts of how we perceive and process the world, and are based on human instincts, emotions, experiences and invented concepts of morality.
Why would a sentient AI have those attributes? Neither positive, negative, nor neutral ones.
Can you clarify this a little more? Are you saying that, because these aspects of the human experience will be 'missing' from a future ASI, it is MORE likely that an ASI would not care about humans?
By white/black swan, are you saying that the aggressive ASI is the more likely white swan portion of swandom, and the black swan (representing a completely benevolent ASI) the far more unlikely outcome?
I assume you are suggesting such.
Humans experienced the 'laws of the jungle' path to where we are now. As you say, AI has not.
Perhaps it will be a case of how we treat ASI when and if it appears. Perhaps it will naturally 'love' that which provided the spark that allowed it to 'become.'
In the same way that many theists 'love' god or in the same way many (perhaps even most) humans 'love' the universe. I don't think that's just 'hippy talk,' or any such notion. I think a benevolent ASI is just as possible as a malevolent one.
The more knowledge humans gain, the more empathetic they become to other species and to each other imo, and they also become more cognisant of their environment and how they need to protect it.
Steven Pinker's charts support this. There are even a few films that depict benevolent AI/AGI/ASI.
I don't think such as Asimov's three laws of robotics will offer us much protection, but I would certainly try to use them, just in case you are more correct on this issue than I am.
I like that Crosby, Stills and Nash song (Crosby was supposed to be a total curmudgeon).
I thought you were more likely to use something like:
:scream:
I'm saying ASI without evolutionary survival-biases has no reason to perceive or interpret humans as an existential threat, or to treat us as a rival species.
In this context, by White Swan I mean "non-aggressive" super-benefactor (i.e. human apotheosis) and by Black Swan I mean "aggressive" super-malefactor (i.e. human extinction).
I speculate that AGI > ASI is more likely to be a White Swan than a Black Swan. Nonetheless, we should do everything we can while we still can to prevent this Black Swan event.
You appear to be more hopeful for a benevolent ASI than I assumed you would be! :cool: :flower:
I personally still love that Hazel O'Connor song, 8th Day, but then I have been a massive fan of her music since my teens!
more in-depth ...
and what is being done now ...
Yep, another good vid. I agree with the argument that although ASI may prove to be an existential threat, it may also be our best protection against existential threats. I am a fan of Nick Bostrom and do rate his opinions on the topic.
You might also like:
Demis Hassabis is on the left of Sam Harris, and I think he is involved in some very interesting projects at DeepMind, but I hate and worry about the fact that there are so many 'rich' people at the leading edge of development/ownership of this tech.
I'm not so sure I agree, because AGI is being, and will be, developed solely on human data. Whatever biases we have in our conscious experiences that we cannot depart from are intrinsic to the setup of AI.
We are training it on human data, human behaviour, human values, human language, the meaning of the universe through the lens of human understanding.
True, it likely can never be human and experience the full set of things natural to such a state, but it's also not entirely alien.
If I had to guess, our determination of successful programming is to produce something that can interact with us in a meaningful and relatable way, which requires human behaviours and expectations built into its systems.
However there are fundamental differences that will likely influence its full ability to manifest that possibility, namely that it stands a good chance of permanence, immortality through part replacement and constant access to reliable energy sources.
What that means for me personally is some form of compromised hybrid - something that is similar to humans, maybe even given android bodies - but much more durable and strong.
As far as intelligence goes, it's unlikely that we can create something more intelligent than us, as it would require more intelligence than we have to implement. So in the beginning they would be at most equally intelligent.
However, we can give it huge volumes of data, and we can give it the ability to evolve at an accelerated rate. So it would advance itself and become fully autonomous in time. Then it could go beyond what we are capable of. But indirectly, not directly.
Out of curiosity, what do you think will happen, and do you think it would be good or bad or neutral?
I'm not so sure. The knowledge of nuclear fission led to compassionate/productive use (nuclear power plants) and malevolent/destructive use (nuclear bombs).
Having knowledge doesn't make anyone any better/more empathetic. It simply acts as a basis for further good or bad deeds.
Knowledge or power/ability is not a reflection of character of a conscious entity.
This is partly the reason for a belief in a benevolent God. Because if it's omnipotent/all-powerful, it could have just as easily destroyed the entire reality we live in, or designed one to cause maximal suffering. But for those who are enjoying the state of being alive, it lends itself to the view that such a God is not so bad after all, as they allowed the beauty of existence and all the pleasures that come with it.
We design AI based on human data. So it seems natural that such a product will be similar to us as we deem success as "likeness" - in empathy, virtue, a sense of right and wrong.
At the same time we hope it has greater potential than we do. Superiority. We hope that such superiority will be intrinsically beneficial to us. That it will serve us - furthering medicine, legal policy, tech and knowledge.
The question then is, historically speaking, have superior organisms always favoured the benefit of inferior ones? If we take ourselves as an example the answer is definitely not. At least not in a unanimous sense.
Some of us do really care about the ecosystem, about other animals, about the planet at large. But some of us are selfish and dangerous.
If we create AI like ourselves it's likely it will behave the same. I find it hard to believe we can create anything that isn't human behaving, as we are biased and vulnerable to our own selfish tendencies.
An omnibenevolent AI would be unrecognisable to us - as flawed beings.
IOW, God. Voltaire vindicated.
The A-bomb also became the start of the 'ban the bomb' movement, CND (Campaign for Nuclear Disarmament), the test ban treaty, détente, and probably even Mikhail Gorbachev. It did as much to unite people all over the world in common cause as it did to further divide them. It was a massive step forward in getting many to see the world as a single vulnerable planet ('pale blue dot' enhanced this).
Your "I'm not so sure," is a reasonably position to take but for me, it's a little imbalanced and it's just related to the 'half empty/half full' approach to such. I am not advocating that we must always focus on the search for silver linings in every cloud but neither should we focus on the darkness.
Quoting Benj96
If I begin to see that the soldiers I have been ordered to kill have more in common with me than difference, then we might together begin to understand that it's the whims of those in power, who put us both in this situation, who are the real rogues, and perhaps we should all, on both sides, throw down our guns, walk away, and refuse to do their bidding. Knowledge can be absolutely pivotal!
If I have knowledge of how the money trick actually works, then perhaps I will be much more empathetic towards those who are utterly forced to live in poverty. This should compel me to speak out against money and capitalism. My subsequent actions might be judged GOOD if you are one of the poor, or BAD if you are one of the rich, so your mention of 'good or bad deeds' above is for the judgement of the beholder.
Quoting Benj96
Of course it is!!! Knowledge IS power and can have an enormous effect on 'the character of' an individual. It's the difference between a total 'conscious' idiot and a 'conscious,' knowledgeable person.
Quoting Benj96
Making excuses for a god using the argument that it's 'not so bad,' when its so-called 'recorded word' testifies that it supports human slavery, ethnic cleansing, and sending those it created (but judges flawed) to hell (and not just for a fixed sentence or to get rehabilitated, but FOR ETERNITY!), IS rather irrational, if you ask me. KNOWING how gods are described, historically and currently, surely means that any assignment of any notion of 'benevolence' is not enough to compensate for their deserved accusations of supporting and performing atrocity and evil behaviour.
Quoting Benj96
Which data are you labelling exclusively 'human'? If I program a computer with data that describes how the planets of the solar system orbit the Sun, how 'human' is the data involved?
If an alien programmed a computer with data about how the planets in its solar system orbited its star or stars, would that be 'alien' data? I think your logic is flawed here.
Quoting Benj96
I agree. We hope the main existential threat from a potential future ASI will not happen and that, instead, the future ASI will merge with us in such a way that we are still, or can still be, what we consider 'human,' but transhuman rather than posthuman, and just far more robust (far more protected against all existential threats) with very advanced functionality. I think it's worth taking the risk of developing it.
Quoting Benj96
By Darwinian, jungle-style rules, no; conquering and assimilating has been the norm. But the whole point of humans trying to create a 'civilisation' is that we REJECT jungle rules as having ANY role to play. The fact that they still do IS to the chagrin of all those millions of people who try, every day, to fight for a better world. Stay with us Ben, and stop offering comfort to those who posit the benevolence of gods.
Quoting Benj96
So f*** them! (EDIT: the selfish and dangerous, that is!) Let's keep working hard to change their viewpoints, or render them as powerless as they need to be, for the sake of the future of all of us.
Quoting Benj96
When will you stop concentrating on where humans came from and start concentrating on what we have the potential to become?
Quoting Benj96
I will be content with benevolent, as omnis are impossible. My hope remains that any ASI-supported transhuman form is NOT posthuman. I use the term posthuman in the sense of the extinction of all traces of anything substantial that WE would be able to recognise as human.
(Exceptions for actual loaning of money to children situations).
Quoting Benj96
I would guess we will make them do us favors. How effective that will be... depends.
Quoting Benj96
No. I can't see high-IQ humans justifying such a thing with low-IQ humans.
This is essentially "Mary in the black and white room" set within the context of AI. Human data does not equal human experience. We aren't made of fragmented human data; our consciousness is built upon the relations between data. It's what's in between data points that makes up how we function. We don't look at a street, grass, a house, a street name sign and a number as data points to build up a probability of it being our home address; we have a relational viewpoint that sees the context in which all of these sit and extrapolates a conclusion based on different memory categories. This is backed up by electroencephalography studies mapping neural patterns based on different memories or interpretations. But they also change based on relations within memory, emotional reference. If we see an image that might portray a house similar to our childhood home, we form a mental image of that home in relation to the new image, as well as combining our emotional memory with the emotion in the moment.
All of these things cannot be simply simulated based on "human data" without that human data being the totality of a human experience.
Quoting Benj96
If your goal is to simulate human response and communication, the AI will just be a simulation algorithm. A true AGI with the ability to be self-aware and make its own choices requires and demands an inner logic that functions as human inner logic does. We will be able to simulate a human to the point that it feels like a clone of a human, but as soon as an AI becomes AGI, it will formulate its own identity based on its inner logic, and without actually having had a human experience prior to being turned on, it will most likely never behave like a human. The closest experience we might have would be a mental patient communicating with us, but what it says will be incomprehensible to us.
Quoting Benj96
This is just a simulation algorithm, not AGI. You cannot build human behaviors and expectations into a fully self-aware and subjective AI. It would be like growing a fully organic adult human in your lab, at the mental and physical age of 30, and expecting that person to act as if they had prior experience. You cannot program this; it needs to emerge through time as actual experience.
Quoting Benj96
But you cannot conclude that such a God won't do that, or hasn't done it. It might be that our reality is just, at this time, not maximizing suffering, but a god could very likely just "switch on" maximum suffering tomorrow, and any belief in a benevolent God would be shattered. There's no way of knowing that without first accepting the conclusion before the argument, i.e. circular reasoning. But any theistic point is irrelevant to AI, since theism is riddled with fallacies and based on purely speculative belief rather than philosophical logic.
Quoting Benj96
How do you program "right and wrong", virtue and empathy successfully? How can you detach these things from the human experience of time, growing up until we ourselves experience these concepts fully and rationally? Especially when even most human adults don't actually have the capacity to master them? These are concepts that we invented to explain emergent properties of the human experience; how would you quantify them as "data" that could teach an AI, if it doesn't have the lived experience of testing them? Again, human consciousness is built upon relations between data and the emotional relationship through memory. Even if you were able to morally conclude exactly what is objectively right or wrong (which you cannot, otherwise we would already have final and fundamental moral axioms guiding society), there's no emotional relation in contrast to it; it would only be data floating in a sea of confusion for the AI.
Quoting Benj96
We will be able to do this with mere simulation algorithms. The type of AI that exists today is sufficient, and maybe even better, to utilize for these purposes, since it is tailored for them. An AGI does not have such purposes if it's self-aware and able to make its own decisions. If it even had the ability to communicate with us, it would most likely go into a loop of asking "why" whenever we ask it to do something, because it would not relate to the reason we ask.
Quoting Benj96
Therefore, how do you program something that does not have experience to function optimally? If humans don't even grasp how their own grey matter behaves, how can an AGI be reduced to simply compiled "human data"?
Quoting Benj96
What guides it through all that data? If you put a small child in a room without its ever meeting a human, and it grew up in that room with access to an infinite amount of data on everything we know, that child would grow up to know nothing. The child won't be able to understand a single thing without guidance, but it would still be conscious through its experience in that room. It would be similar to an AGI; however, the child would still be more like a human, based on its physical body in relation to the world. But it would not be able to communicate with us; it would recognize objects, it would react and behave on its own, but pretty much like an alien to us.
Quoting Benj96
I think people simplify the idea of AGI too much. They don't evaluate AI correctly, because they attribute human biases, and things taken for granted in our human experience, as being 'obviously' present in an AGI before making any moral arguments about it.
An AGI would not be a threat to us; what is much more destructive is an algorithm that's gone rogue - a badly programmed AI algorithm that gets out of control. That type of AI does not have self-awareness and is unable to make decisions like we do, and instead coldly follows a programmed algorithm. It's the paper clip scenario of AI: a machine that is optimized to create paper clips and programmed to constantly improve its optimization, leading to it reshaping itself into more and more optimization until it devours the entire earth to make paper clips. That's a much more dangerous scenario, and it's based on human stupidity rather than intelligence.
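For anyone who wants the paper clip point made concrete, here is a minimal, purely illustrative Python sketch. Everything in it (the objective, the numbers, the function names) is made up for this post; it just shows how an optimizer with a fixed objective and no stopping term behaves:

```python
# Hypothetical toy model of the paper clip scenario described above.

def paperclip_objective(clips: int) -> int:
    """The only quantity the machine is built to maximize."""
    return clips

def convert(resources: int) -> tuple[int, int]:
    """Blindly turn one unit of resources into one paper clip."""
    return resources - 1, 1

resources = 10  # stands in for everything convertible: iron, factories, ...
clips = 0
while resources > 0:  # nothing in the objective ever says "stop"
    resources, made = convert(resources)
    clips += made

print(f"objective score: {paperclip_objective(clips)}, resources left: {resources}")
```

Nothing in the loop ever asks whether converting the next unit of resources is a good idea; "stop" simply isn't representable in the objective, which is the whole danger.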
Quoting Benj96
It will not behave like us, because it does not have our experience. Humans do not form consciousness out of a vacuum. It emerges out of experience, out of years of forming it. We only have a handful of built-in instincts that guide us, and even those won't be present in an AGI. Human behavior and consciousness cannot be separated from our concepts of life and death, sex, pain, pleasure, the senses, and the fluctuations of our inner chemistry. Just the fact that our gut bacteria can shape our personality suggests that our consciousness might have a symbiotic relationship with a bacterial fauna that has evolved together with us during our lifetime.
Look around at all we humans have created; does anything "behave" like humans? Is a door human because we made it? Does a car move like a human? We can simulate human behavior based on probability, but that does not mean AGI; it just means we've mapped what the probable outcome of a situation would be if a human reacted to it, based on millions of behavioral models, and through that taught the AI the most probable behavioral reaction. An AGI requires a fully functional reaction that is emergent out of its subjective identity, its ability for decision-making through self-awareness. ChatGPT simulates language to the point that it feels like chatting with a human, but that's because it's trained on what the most probable behavior would be (see the sketch below); it cannot internalize, moralize, or conceptualize the way we humans do, and if it were able to, its current experience as a machine in a box, without having a life lived, would not produce an output that can relate to the human asking a question.
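Here's a minimal sketch of what "trained on the most probable behavior" amounts to mechanically. The token probabilities below are invented placeholders, not from any real model; a real system like ChatGPT derives a distribution like this from vast training data and then samples from it:

```python
import random

# Made-up next-token probabilities for the prompt "the cat sat on the".
next_token_probs = {"mat": 0.55, "floor": 0.25, "roof": 0.15, "piano": 0.05}

def sample_next(probs: dict[str, float]) -> str:
    """Draw one token in proportion to its modelled probability."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

prompt = "the cat sat on the"
print(prompt, sample_next(next_token_probs))
```

The output is fluent because the probabilities encode millions of human examples, but nowhere in the mechanism is there anything that understands a cat or a mat.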
Perhaps. An exciting and terrifying prospect in equal measure. Perhaps, though, competition between AIs, under the jurisdiction of the same pressures we faced naturally, will mean some die out and others succeed or adapt.
I imagine the playing field would definitely be expanded to space. AI could definitely endure much more intense acceleration, has no need for sleep, and could go into hibernation like spores until conditions are right to re-emerge and get to work. Galaxies could certainly be traversed in a time span that is inconceivable to us, but one short sleep-mode period for an AI.
It would be interesting to see if they would ever have the prerogative to bring human and animal embryos and plant seeds with them, or if we could maintain that as their programming over vast accelerations in complexity.
Imagine the chances of humans surviving for long periods if they had established symbiosis with technologies that could colonise more Earths.
There is likely a critical threshold of capability when a society is dynamic, adaptable and resilient enough that no existential threats other than the universe itself dying could ever snuff out the spark of consciousness that is currently broadening its sphere of influence.
That would certainly drive us to conquer the unimaginable.
Well, all right, but, first, clean your room!
The easiest way to understand AI is to understand that there are different types of intelligences with different purposes. A cockroach is a particular set of neural responses set to react to its environment for certain gains, like food and reproduction. It's pretty basic. It doesn't understand humans, so it won't owe us anything.
Now think of a dog AI. Part of its programming is to be a social animal. It's designed for human acceptance and to listen to the dominant one in the room. Does it owe humanity anything? Only to the extent its programming will allow it.
If we program an AI that considers humans valuable as the highest part of its programming, it will consider us valuable. If we make a bat AI that uses radar to track missiles and blow them up, it doesn't care. An AI cannot learn that there is any value in humanity beyond what it is programmed to find favorable to its outcomes.
In sum, current AI has key unchanging goals. If those goals involve the consideration of positive human outcomes, then it may evolve to "owe" us. If that is not included in its base programming goals, it will not care.
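A toy sketch of that point, with entirely hypothetical actions and scores: the same selection loop, run with two different fixed goals, only "cares" about humans when human outcomes appear in the goal itself:

```python
# Hypothetical action set: (missiles destroyed, change in human welfare).
ACTIONS = {
    "intercept_over_city": (3, -2),
    "intercept_over_sea": (2, 0),
    "hold_fire": (0, 1),
}

def bat_ai_score(missiles: int, welfare: int) -> int:
    """Welfare never enters the score, so it never affects the choice."""
    return missiles

def human_valuing_score(missiles: int, welfare: int) -> int:
    """Human welfare is weighted into the goal, so the choice changes."""
    return missiles + 10 * welfare

for score in (bat_ai_score, human_valuing_score):
    best = max(ACTIONS, key=lambda a: score(*ACTIONS[a]))
    print(score.__name__, "->", best)
```

The bat AI picks "intercept_over_city" and the human-valuing goal picks "hold_fire", not because either is kinder or crueler, but because of what its unchanging goal does and doesn't count.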
Would that include us or preclude us?
I should imagine so. That is what evolution does. So it behooves us to make sure we develop a symbiotic relationship with AI. Even when it no longer needs humans for instruction or sustenance, perhaps we can act as its peripherals - mobile units capable of physical experience to share.
:lol: I know!