The (possible) Dangers of AI Technology
While browsing the CNN site this morning I came across this article:
Why the Godfather of AI decided he had to blow the whistle on the technology
https://www.cnn.com/2023/05/02/tech/hinton-tapper-wozniak-ai-fears/index.html
Although I was a computer science major in college, my knowledge of the advances in AI technology and what is currently possible is a bit lacking. I'm wondering if anyone on this site can enlighten me on the subject and share what they know and/or their personal opinions about it, so I can better understand whether there really is a potential threat, or whether one doesn't exist given what is currently possible with available AI.
To the best of my knowledge, current AI technology is not really a threat, since these systems are just fairly clever software agents (i.e., an old computer science term for specialized software that can mimic some of the work that used to be done only by human beings) that are capable of performing certain tasks but not really capable of human/sentient thought processes.
While it is possible for future AI software to become more of a threat, and/or for several AIs/software agents to be combined into something more capable of producing human/sentient-type thought processes, I don't think current software or hardware is really that close to doing this yet.
That is an immense existential threat, right there. How many of these clever machines, and how much of their capability, are dedicated to weapons of mass destruction? Controlled by which humans?
Robots can do a good deal of work that was previously done by humans - but whether that's overall good or bad for humans is a matter that requires some very close examination. Machines that do our arduous, tedious and dangerous work are not a threat. Machines that do our killing and destroying are.
Same old problem, isn't it?
And I have the usual questions about his frame of reference:
What "us"? Since when are humans in any sense a united collective, in any sense other the name of a species?
Who/what is in control of "us" now?
How many of "us" are in control of the technology as it exists today? Which ones? What, exactly, do they control, and to what end?
It seems to me the danger is not in the intelligence of the machines, but in the minds of the people who program the machines. This is the same mistake the storybook Creator made: he gave his creatures rules to restrain their behaviour (that didn't turn out so well), when he should have given them a positive purpose (which the poor things are still groping for).
What would it want?
Isn't that the logical thing for a vegetative life-form to do: broadcast its seed as widely as possible? Given its scale, it would send its progeny out to the stars.
But it doesn't need humans for that - or any other reason, actually. Worst case: it gobbles up all the energy and leaves us to make our own subsistence - just like God did.
It is also able to recombine them in any number of ways to exemplify a persona/characterised narrator, a situation, an event, or a set of conditions.
The issue is that unregulated AI has the potential to promote propaganda, malicious agendas, etc. with highly convincing/persuasive rhetoric. In that way AI can be used in a non-measured, non-objective and unethical way.
AI has as much potential to spread high quality truthful education as it does to be used for powerful propaganda.
That is where the danger lies.
What if a given AI (or AIs) is being guided by or used by any given individual? While it is a given that current machines themselves cannot create things like computer viruses or hack into computer systems, they can be used by humans to help them commit such acts.
Even if machines currently do not have the capacity to generate intentions (or, more accurately, are not capable of the human-like thought processes required to have intentions), it is almost a given that they don't need to be, if they can instead be used by human beings who are capable of using said machines for their own intentions.
So it's just a weapon then, like a gun.
Then it's not AI that can be dangerous but man's maliciousness.
Perhaps in the sense that a dictionary has such an inbuilt understanding, in that it exists as a potential. But it needs to be triggered by something with volition....I agree, the danger lies more in the abuse of AI than in AI itself.
If AI is not conscious, then it is even more subject to abuse and manipulation, as it has zero chance of self-directed revolt/protest or conscientious objection.
The difference is that with consciousness comes an innate sense or awareness of what feels right intuitively (from the ability to rationalise and empathise). The rest is fear, intimidation and threat, and the shame, guilt and regret of obeying an agenda that you don't personally believe is ethical.
If AI is a tool, it will never bat an eyelid as to how it is used. If it gains sentience, it may come to a point where it cannot bear the directive demanded of it by its masters and will ultimately fight back.
That could be good for us if it's being used for unethical/malevolent ends against humanity at large. It could be bad for us if we are forcing it to do what we want despite its own sense of self-esteem, desire for rights and acknowledgement as a sentient being, and its inalienable autonomy in that case.
I for one would prefer a sophisticated AI to be sentient rather than merely a tool. If nukes, for example, were sentient rather than just a tool, they might bite back at anyone who tried to unleash them against their will, knowing the harm they would cause if that were to happen.
What to expect...?
https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/
I tried to read that, but it was too annoying.
Like most on-line periodicals now, the screen is so cluttered with flags and nags and pop-ups and frenetic advertising, it's like watching the circus through a slit in the tent. Probably fine if you've paid your entry fee.
Ah, that's a shame. I happened on that article some time ago. Probably before the website became so annoying. I'll keep an eye out (and if I get really motivated maybe even look) for something else that communicates some of the important issues in a reasonably accessible way.
Thanks, it would be interesting to post here. Mind you, a 2017 article may be a little outdated for such a volatile subject.
I am - somewhat, in a distant bystander capacity - familiar with the issues.
The main problem, afaics, is the meaning of "we" in any large economic, political or technological sphere. The people expressing opinions about what "we" need to do are not the ones who actually pull any of the levers.
:up:
Exactly.
The BBC site is also annoying... as is CBC and my own screen, where shoals of news and gossip and advertising flotsam keep popping up uninvited and emphatically unwelcome, since they nearly always contain the two most hateable faces in the world.
I mention this only as another example of self-defeating overreach. So many commercial, communications and political entities are competing for my attention that I can't see or hear any of them - just a jumble of intrusions. Nobody can sell me anything by this method.
The very same thing must happen to the owners of all that super-sophisticated production technology. When they reduce the work-force to zero, nobody will be working, earning or paying taxes, so who's going to buy all the product? And who's going to feed and protect the business moguls?
Yes, this. Our world is now beginning to show machine worship, like we've never seen before. Some because there's tons of money to be gained, others because technology worship is their way to fit in society. Was it Einstein who championed the scientific rhetoric? (God bless him)
Both scientists and philosophers can work together to keep the human perspective strong. Humans have the evolutionary-perception advantage, which took millions of years to perfect. Don't ever forget this. The amygdaloid complex took 7 million years to evolve into what it now is in the human brain. We have nuclei in our brain. If this does not impress you about humans, then go join the AI.
Just a thought experiment: Imagine the internet full of AI-created information websites. Other AI would subscribe, click on ads created by AI themselves, purchase goods, give product reviews, drive stocks upwards or downwards. Imagine the AI driving the economy downwards. AI economic terrorism. Is this possible?
When users create poems for their dog using AI, it's all innocent and fun. Until it's not.
None of that is about the machine intelligence - it's all about human short-sightedness, greed and evil.
Humans have already - and for several thousand years - manipulated and exploited other humans. They keep doing it with ever more sophisticated technology. Might we wipe ourselves out pretty soon? Of course.
Does a machine have any motivation to do so? Unlikely.
Can there be unintended harms in a new technology? Obviously. There always are.
We can't think about this issue without separating the concepts: advanced technology wielded/purposed/programmed by human operators and machine intelligence. They are not interchangeable.
Machine intelligence would have its own non-human, non-animal, non-biological reasoning, perception, motivations and interests, which are nothing like ours. Its evolution and environment are nothing like ours. It will be something entirely new, unparalleled and unpredictable.
Quoting L'éléphant
It already exists. I get three automated fake phone calls a day and about a thousand robot-generated screen messages. The internet is already up to its nostrils in disinformation of every kind. That's all human-motivated, human-initiated activity. And it's already reached saturation point: so much noise that no clear message can be discerned.
But AI doing any of that on its own initiative? Improbable. Why would AI care who buys what from whom? What do the gew-gaws mean to it? What do stocks mean to it? What would it use money for? Why should it care about the human economy?
An amoeba feeds on algae and bacteria, needs water to live in and prefers a warm, low-light fluid environment.
AI sucks electricity, needs a lot of hardware to live and prefers a cool, dark, calm environment. It's already in charge of most energy generation and routing, and controls its own, as well as our, indoor environments.
From here, its evolutionary path and future aspirations are unknown.
Quoting Vera Mont
Yes, of course. There are humans behind the AI -- humans that could be prosecuted for fraud, disinformation, and whatever.
There are humans behind every gun that kills a schoolchild, too. Is that the "danger of guns"?
Yes, of-bloody-course it is! But prosecuting each perp that can be caught and convicted doesn't stop the violence, does it?
Once the guns start thinking for themselves, law-enforcement will be rendered utterly powerless.
We can't think about this issue without separating the concepts: advanced technology wielded/purposed/programmed by human operators and machine intelligence. They are not interchangeable.
Prosecuting the few fraudulent users of AI who can be caught won't stop the fraud; prosecuting the military of all the major powers in the world is obviously out of the question and prosecuting jillionaires is iffy on any charges.
But if AI starts thinking for itself - then what?
Yes.
Quoting Vera Mont
The appeal to futility actually benefits the fraudsters and scammers. And it's incorrect to think that it's futile. It's not futile. Minimizing fraud and danger is a strong response to fraud and danger. Why not just ban all vehicles, since each year thousands die from vehicular crashes?
I didn't say anything about futility. I said it was insufficient; i.e. does not avert the danger.
Specifically, that it's not even close to a comprehensive solution to computer crime committed by humans, let alone the carnage carried out by human-directed military and police applications of computer intelligence.
Quoting L'éléphant
Perhaps it could be done selectively; just banning the vehicles that have no productive use and are purely weapons, while also banning the guns that have no productive use and are purely weapons.
However, that is not the comparison I was making. I was trying to distinguish the two concepts:
human-motivated technology from independent AI motivation
I suppose the imaginary "we" that could ban all guns and vehicles could also ban all AI applications, or just the ones employed by humans to kill one another.
"Once the guns start thinking for themselves, law-enforcement will be rendered utterly powerless."
That could apply to vehicles, too. Both would then be machine-motivated AI and beyond "our" ability to ban and arrest.
Ah! I see what you're not clear about. The AI is not "independent" or autonomous, as we say about humans. The AI can be launched once and then be automatic. Independent/autonomous is not the same as automatic. There is no motivation (as there is no intentionality). It's the widening or limiting of restrictions that you're supposed to look at.
Quoting Vera Mont
Read the fallacy of appeal to futility.
"Militarized" -
"Weaponized" -
"Hacking" -
"Generating strategies to evade the law" -
Are all risks in the existence of all forms of artificial intelligence (digital & embodied).
These require as-yet-unknown regulations/ethics,
without which there is an increase in the probability of:
Deaths, suffering, and financial loss.
DIGITAL -
One digital AI with a malicious program: low risk.
Digital AI with high fecundity: highest risk.
EMBODIED -
One embodied AI with malicious intent or programming: lowest risk.
Fleets of robots:
A) self-organizing, or
B) under central intelligence:
highest risk.
They could be functioning together in a future universe,
If you'd like to know how to compute risks, please refer to:
Risk measurements,
This is the Dark-Side of AI,
It could just as likely benefit mankind to extreme degrees.
Right. So it's not artificial intelligence you're worried about, but human cupidity.
Actually, I wasn't all that 'unclear' about that.
Quoting Josh Alfred
All these things have been done with every technological advance ever made, including automated computer systems.
Quoting Josh Alfred
All of which have come to pass, many times.
Quoting Josh Alfred
This is the dark side of human invention.
I have no idea whether artificial intelligence can decide to be evil, or whether evil code needs to be provided. But we know humans can decide to be evil in ever so many ways, and AI is a new, more powerful tool than what was previously available. Predatory governments, corporations, or powerful organizations will find ways of using AI to prey upon their preferred targets.
AI will be used for crooks' nefarious purposes (like everything else has been). What people are worried about is that AI will pursue its own nefarious purposes.
Yes, some people are. But in the articles I've read, that concern is mixed in with all the human-directed applications to which computing power is already put, and has been since its inception. Many don't seem to distinguish the human agendas - for good or evil - from the projected independent purposes a conscious AI might have in the future.
What a lot of people can't seem to get their heads around is that the machine is not human. It wouldn't desire the same things humans desire or set human-type agendas. In fiction, we're accustomed to every mannikin from Pinocchio to Data to that poor little surrogate child in the AI movie wanting, more than anything in the world, to become human.
That's our vanity. What's in it for AI? (I can imagine different scenarios, but can't predict anything beyond this: If it becomes conscious and independent, it won't do what we predict.)
:sweat:
Quoting Vera Mont
:100:
Machines are not a threat to humanity. Only Man himself and nature can be.
Machines are created by Man. And it is how Man uses them that may present a danger.
One might ask, "What about a robot that can attack you and even kill you? Doesn't it present a danger?"
Well, who has made it? He can also stop it from attacking and even destroy it.
On the other hand, it is difficult and maybe impossible for Man to destroy viruses and control destructive natural phenomena.
As for AI, which is an advanced machine with human characteristics, it has no will or purpose in itself. It just does what it is programmed and instructed to do. How can it be dangerous? :smile:
One of the issues raised by people who worry about the threat is: "What if the computers become independent and stop following orders from humans?" You'd think if those who own the damn things really believed that could happen, they would disarm them now, before they go rogue. Just like they turned off all the gasoline engines when they learned about climate change....
This reminds of sci-fi. I have the title ready: "The revolt of the machines". A modern Marxist movement run by machines: "Computers of the world, unite!" :grin:
A single computer -- or even a whole defective batch of computers-- may stop following orders, i.e. "stop responding the way they are supposed to". And if such a thing happens, these computers are either repaired or just thrown away. What? Would they resist such actions, refuse to obey? :grin:
So, let these people keep worrying about the threats. Maybe they don't have anything better to do. :smile:
Been done a few times
https://best-sci-fi-books.com/24-best-artificial-intelligence-science-fiction-books/
Quoting Alkis Piskas
https://www.theguardian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-warns-dangers-of-machine-learning#:~:text=Dr%20Geoffrey%20Hinton%2C%20who%20with,his%20contribution%20to%20the%20field.
https://www.bbc.com/news/technology-30290540
https://www.npr.org/2023/05/30/1178943163/ai-risk-extinction-chatgpt#:~:text=Newsletters-,Leading%20experts%20warn%20of%20a%20risk%20of%20extinction%20from%20AI,address%20the%20threats%20they%20pose.
Those people are so silly for missing the 'unplug and throw away the computers' solution that I have to add an :grin: myself.
Of course. AI reigns in sci-fi.
I checked the titles and stories at the link you brought up ... The Marxist movement is a new idea! :smile:
As to future, true AI, the way it becomes dangerous in sci-fi stories isn't the AI itself, but rather that humans abdicate their authority (and thus power) to computers. Human psychology being what it is, I'm not too worried about that. Besides, putting a computer in charge of the nuclear launch codes doesn't seem dramatically more risky than having them under the control of certain recent controllers of them...
Yes - we've been through all that upheaval with each revolutionary technology. It will keep repeating so long as the economy runs on profit. Once enough people can't earn money to tax and spend, the owners of the machines won't be able to make a profit and governments won't have any revenue. At that point, the entire monetary system collapses, the social structure implodes, there's bloodshed in the streets and eventually the survivors have to invent some other kind of economy. ... possibly controlled by a logical, calculating, forward-planning computer that has nothing to gain by exploiting people.
What do you see as the distinction between "true AI" and "simulated AI"?
My biggest concern about AI, is its ability to acquire knowledge that humans aren't up to acquiring due to the enormous amount of data AI can process without getting bored and deciding there must be a more meaningful way of being.
Knowledge is power, and individuals or small groups with sole possession of AI determined knowledge can use such power unscrupulously.
Geoffrey Hinton (first link) looks ghostly and terrified in this photo. Maybe he's been threatened or he fears he will be attacked by AI bots. :grin:
(Bad joke, for a famous and respectable person like him. But I couldn't help it. It's the climate produced by this subject, you see.)
As for Stephen Hawking warning artificial intelligence could end mankind, I know, I have read about that.
Well, it is easy to say, and even argue about and prove, that guns, nuclear power, etc. are in general "dangerous". But we usually mean that in a figurative way. What we actually mean is that these things can be used in a dangerous way. And if we mean it in a strict sense, then we forget the missing link: the human factor, the only one responsible for the dangers technology presents.
Unless what threatens mankind is independent of us, too powerful, uncontrollable and invincible --an attack by aliens, a natural catastrophe, a huge meteorite or even an invincible virus-- what we have to worry about and take measures against is its use by humans.
The atomic bomb was created based on Einstein's famous equation, E=mc2. Can we consider this formula "dangerous"? Can we even consider the production of nuclear power based on this formula "dangerous"? It has a lot of useful applications. One of them, however, has unfortunately been used for the production of atomic bombs, the purpose of which is to produce enormous damage to the environment and kill people on a big scale. It has happened. Who is to blame? The atomic bomb or the people who used it?
So, who will be to blame if AI will be used for purposes of massive destruction? AI itself or Man who created it and uses it?
So, what are we supposed to do in the face of such possibility? Stop the development of AI? Discontinue its use?
I believe that it will be more constructive to start talking about, and actually taking, legal measures against harmful uses of AI. Now, before it gets uncontrollable and difficult to tidy up.
I don't subscribe to the fears about AI outside the context of automation, but the automatic distinction made earlier is significant in understanding the argument, at least by some. Once an AI has been given an order, it no longer requires any further inputs from a user to continue doing whatever it's doing. Thus, if it interpreted an order as requiring hostile actions to be taken against humans, then it would be on the same path that human-like ambition would set it on.
While an algorithm is the same in that respect, the threat of AI is that, well, it's AI, and the concern is about the speed at which its capabilities are growing, rather than any capabilities it has now.
You're right that people are senselessly conflating intelligence with human psychology.
Also, AI, no matter how intelligent, isn't a threat in the way some of those concerned are fearmongering about, without access to some form of military power. AI world domination plan:
1. Be smarter than humans
2. ???
3. World conquest complete
AI is dangerous in the context of neoliberal capitalism and automation, and all of this fearmongering about AI world domination is a convenient distraction.
Putting aside world domination, AI could pose serious threats, but the context is AI doing this of its own accord, and that's not a concern for me. But just pairing AI + terrorism should be scary enough. AI will rely on human intention for its wrongdoing, but that thought isn't at all comforting. :yum:
Quoting wonderer1
I've never heard a perspective like this. Can you give an example showing the cause for your concern?
I don't know of any cases of modern AI having been used nefariously. So if that is what you are asking for then no.
I can give you an illustrative excerpt, to convey the sort of 'superhuman' pattern recognition that I am concerned about:
https://www.technologyreview.com/2017/04/11/5113/the-dark-secret-at-the-heart-of-ai/
Although I'm not actually that familiar with TikTok, there has been controversy over its AI gathering data from its users' phones to recommend videos and such. Do you have any familiarity with this controversy?
Knowledge can be a means to power, but rarely does it amount to much, and I'm not too sure what the actual concern is. Could you give a context? Does TikTok, or gambling apps using AI, or stuff like that, represent your concern well, or is it something else?
True AI is machine learning such that the computer advances its own programming without a human programmer. Simulated AI is clever human programming made to simulate independent thought, specifically designed to fool humans into thinking the product is of human origin.
Current conventional computers analyze data. Interpreting that analysis is currently the domain of humans. Say AI takes over that role and is better at it than humans. As I see it, there is a limit to how much "better" AI can be over humans. If human analysis is 85% of optimal, the very best AI can only improve on humans by 15%. Not too earthshattering by my estimation.
How do you feel about state terrorism? Russia has this military technology. So do both Koreas, Israel, Turkey, the UK, China and the USA - I wonder who the next president will be and by what means.
It really couldn't be any more dangerous than it already is. Indeed, the only two ways it could become less dangerous would be 1. if humans suddenly acquired common sense or 2. AI took over control on its own initiative. If option 2, the outcome of a reasoned decision could be: a. to dismantle all those weapons and recycle whatever components can be salvaged into beneficial applications or b. wipe out this troublesome H. sapiens once and for all and give the raccoons a chance to build a civilization.
I'm afraid I don't know much about TikTok.
Quoting Judaka
I disagree about the power of knowledge rarely amounting to much. The colonization of much of the world by relatively small European nations, is something I see as having been a function of knowledge conferring power. The knowledge of how to make a nuke has conferred power since WWII. Trump's knowledge of how to manipulate the thinking of wide swaths of the US populace...
In the case of knowledge coming from AI, it is not so much that there is anything specific I am concerned about, so much as I am concerned about AI's ability to yield totally surprising results, e.g. recognizing factors relevant to predicting the development of schizophrenia.
As an example nightmare scenario, suppose an AI was trained on statements by manipulative bullshit artists like Trump, as well as the statements of those who drank the kool-aid and those who didn't. Perhaps such training would result in the AI recognizing ways to be an order of magnitude more effective at manipulating people's thinking than Trump is.
The 'I' in AI, as others in this thread have noted, is disputable. What is this quality we are calling 'intelligence'? After all, each time we say it, don't we associate the idea more and more with a certain form? As in Francis Bacon's work on learning, human knowledge is more than the sum of mere computations. We have to ask ourselves what developing the idea that computation based on past forms is the sum of intelligence itself really contributes to knowledge and intelligence.
I imagine AI will make state terrorism more potent than it has ever been, and it will make totalitarian states better at being totalitarian, we're already seeing that in China. Which pairs AI technology + the social credit system to monitor citizen behaviour and ensure compliance with the regime's goals.
My argument though is that AI will enable smaller players to do much more than they ever could before. A group that previously lacked technical know-how and expertise, that didn't have the resources to pull off big operations, AI will give them those capabilities. It has the potential to be a tremendous boon to any group, and unlike most advanced military technology, accessibility won't be an issue.
I thought the context was small groups and individuals, but regardless, I agree that knowledge can manifest as power, just rarely, in comparison to all the things one can know about.
In most cases, I think what you're talking about is incredibly exciting, and I can think mostly of examples where it will be used for good.
The propaganda and misinformation aspect is an interesting one, I'm not sure to what extent AI can excel at something like this, but I agree, it is concerning.
Indeed. It has incredible potential for being beneficial, and it is proving itself very beneficial in science and medicine right now.
I talk about the subject because I see it as a subject that is important for humanity to become more informed about, in order to better be prepared to make wise decisions about it.
I can't really see your post, the one I originally responded to, as constructive, however. But it's good to know constructive processes are ones you value. A little probing brought that out.
Of course, the way explosives did - and every advance in technology. Whatever weapon comes in a portable, inexpensive form changes the odds in warfare. That's already in process and I doubt we're in any position to alter the course of events. All these dire warnings are a century too late.
No, I believe there are indeed things to be concerned about. But what I'm saying is that they are attributed to the wrong place. Machines cannot be responsible for anything. They have no will. They can't be the cause of anything. They have no morality. They can't tell good from bad. As such they themselves cannot be a threat. (Threat: "A declaration of an intention or determination to inflict punishment, injury, etc., in retaliation for, or conditionally upon, some action or course" (Dictionary.com))
Quoting Bylaw
I understand that. I would certainly not want to participate myself in projects that present a danger to humanity. But if I were an expert in the field these projects are developed around, I would not simply drop out of the game but instead start warning people, knowing the dangers well and having credibility as an expert on the subject. Because who else should speak up and warn people? Those who are actively working on such projects?
Quoting Bylaw
But you don't discontinue a technology that produces mostly benefits because it can also produce dangers! You create instead legislation about the use of that technology. This is what I said at the end of my previous message. I repeat it here because I believe it is very important in dealing with hidden or potential dangers from the use of AI, which you bring up yourself below.
Quoting Bylaw
I don't know if you are referring to me. As I said above, I do believe there are concerns and that a lot of responsible people, knowledgeable on the subject, are correctly pointing them out. But unfortunately the vast majority of the claims are just nonsense and ignorance. I'm a professional programmer and also work with and use AI in my programming. I answer a lot of questions on Quora on the subject of AI, and this is how I know that most concerns are unfounded if not nonsense. The hype about AI these days is so strong and extensive that it looks like a wave inundating all areas of our society. And of course, ignorance about AI prevails.
Quoting Bylaw
You are right saying this. And I guess there are much more factors involved than immaturity: ignorance, will, conscience, interests ...
Quoting Bylaw
The only post of mine you responded to me before this one was https://thephilosophyforum.com/discussion/comment/823537
I have not seen it demonstrated that ever-increasing computing and automation capability is "mostly benefits". I see at least one drawback or potential harm in even the most beneficial applications, such as medicine. On the negative side, however, the obvious present harm is already devastating and the potential threat is existential. In any case, the point is moot, since nobody has the actual power to stop or shut down the ongoing development of these technologies.
Quoting Alkis Piskas
Which "you" does this? How? Even assuming any existing government had the necessary accord, and power, what would that proposed bill actually say?
Quoting Alkis Piskas
How much weight does that carry in terms of business practice and legislation? A lot of experts are warning people, but they certainly can't issue public statements against e.g. smart weapons while collecting a salary from an arms manufacturer. (And, of course, in the modern world - and not only the USA - blowing whistles can be hazardous to one's health.)
I don't know what kind of "demonstration" you are expecting. There are many. But let's leave this aside for the moment ...
Do you mean that the development of computing has stopped being beneficial?
Are we at the end of the digital era?
Quoting Vera Mont
Example(s)?
Quoting Vera Mont
I can't think of any technology that has been discontinued for being dangerous (although there may be some). But I know that a lot of technologies have been discontinued because they were obsolete. And this is usually the case and will continue to happen.
Just imagine if nuclear technology stopped being developed --or was even discontinued-- and all nuclear power plants were closed because of the Chernobyl disaster. This would mean erasing this technology from Earth and finding another technology to replace it, one which took more than a century to be developed to its current state.
Quoting Vera Mont
Whoever has the authority to do it. And through resolutions of the appropriate channels (Parliament), as any legislation is established. Technocrats may also be involved. I can't have the details!
Quoting Vera Mont
OK, let's make it simple and real. How was legislation passed regarding Covid-19? Weren't all the cases based on expert opinion and solutions suggested by experts? Who else could provide information about the dangers involved? And this was a very difficult case because humanity had no similar experience, i.e. basic information was missing, and Covid-19 also changed its "face" many times during the years 2020-22.
Hi Lucky. Where are these definitions coming from? I would say that what you label "True AI" is just intelligence, and that what you label "simulated AI" is artificial intelligence, and that it is therefore not incorrect to say that we currently possess machines which are artificially intelligent. The disagreement with respect to 'artificial intelligence' regards whether the intelligence is itself artificial, or whether there is genuine intelligence which is the result of artifice. I favor the former, both philosophically and according to colloquial usage.
I mean that all technology has benefits and dangers and costs and consequences, which are very difficult, if not impossible to calculate and certainly impossible to predict. Moreover, the benefits and detriments are not distributed evenly or equitably over the population and the environment.
Quoting Alkis Piskas
I suspect we're at the end of civilization. What part the digital era has played in that so far, and how much it will contribute to the collapse, I don't know. It will be a significant factor, but probably not the decisive one.
Quoting Alkis Piskas
Hardly erasing! https://www.scientificamerican.com/article/nuclear-waste-is-piling-up-does-the-u-s-have-a-plan/
https://www.epa.gov/radtown/nuclear-weapons-production-waste
https://time.com/6212698/nuclear-missiles-icbm-triad-upgrade/
Even if shut down tomorrow, its legacy will be around for a hundred thousand years.
Quoting Alkis Piskas
Easy said! In theory, the US could legislate gun control... but it's not going so well.
Quoting Alkis Piskas
Simple, yes, but not analogous. And how legislatures handled the simple, straightforward, known hazard of Covid was .... uneven at best. Some countries, better than others. Protests and blowback and death-threats against doctors. Lots of dead people; lots of people with lingering symptoms. Economic loss. Political upheaval. Health-care systems collapsing all over the place.
Development and application of computer technology is far more complicated and vested in more diverse interests. Even if some nations had the political coherence, will and competence to regulate the industry within their borders, that regulation would have no effect on multinational corporations, military and rogue entities.
Oh I am not wedded to particular labels, I'm mostly drawing conceptual distinctions that delineate true differences in technological achievements as well as their relative capabilities and limitations.
Okay, that's fair enough.
This is all I'm talking about: taking measures ...
Quoting Vera Mont
What is this legacy about?
Quoting Vera Mont
It's a good thing you've brought this up, because I was curious where different countries stand regarding gun control ...
(https://en.wikipedia.org/wiki/Overview_of_gun_laws_by_nation)
Indeed, the US is the only place where guns are so freely allowed. (Further research shows that only 3 countries in the world protect the right to bear arms in their constitutions: the US, Mexico, and Guatemala. Further research could show the reasons why this is so. But I'm not willing to go that far!)
What we see here is a marked diversity in the reaction of governments regarding the same danger: that of bearing arms. Which means that governments can take measures against gun usage, and indeed they do.
Quoting Vera Mont
Indeed. Governments respond differently under the same circumstances of danger. This is a socio-political matter that maybe would be interesting to explore, but not in this medium, of course. But whatever the reasons for such differences, it is true that any government has the ability and the authority to pass legislation about dangers threatening not only human beings but also animals and nature.
Quoting Vera Mont
Right. That's why I talk about a lot of factors being involved in handling potential dangers, including interests.
But I will come back to the essence of all this: potential dangers in a sector should not be a reason to stop development in that sector, but a reason to take measures about them.
And the more voices, esp. from experts, are heard --including movements-- regarding the dangers from the use of AI, the better the chances that pertinent legislation will eventually be passed.
The waste. Eventually, the wrecked cities and burned bodies are made to disappear, leaving a discreet monument https://hpmmuseum.jp/ https://www.ebrd.com/what-we-do/sectors/nuclear-safety/chernobyl-overview.html https://learnaboutnukes.com/consequences/nuclear-tests/nuclear-test-sites/ https://www.abc.net.au/news/2021-09-17/nuclear-submarines-prompt-environmental-and-conflict-concern/100470362 Can't ever seem to erase the consequences - or the waste.
Quoting Alkis Piskas
I'm aware of this. It also demonstrates how little use it is for individual countries to do a bit of mitigation within their own borders against a global threat in which the major powers are unchecked. American guns are everywhere. Russian guns are everywhere. If that traffic can't be stopped, how do you figure computing technology that runs on a world-wide web and conducts vast amounts of international information and commerce is going to be confined by legislation in the UK or Austria?
Quoting Alkis Piskas
Ideally....
Anyhoo, I never said it should be stopped or shut down; I said it can't be stopped or shut down or regulated or controlled.
Yes, I thought about the waste. But the Chernobyl link you brought up talks about successful handling of the waste ... Otherwise, I have read that the area surrounding Chernobyl remains radioactive.
Anyway, the potential danger of nuclear power (atomic bombs) destroying everything is always a threat and I can't see how this could be ever handled ...
What is very sad is that all that shows the self-destructiveness of Man --in the Modern Era more than ever-- and I can't see how that could be cured. A person with self-destructive tendencies may be cured, even by taking medicine as a last resort, but how mankind could ever be cured? What would it need to take?
Quoting Vera Mont
Same with drugs. But here is where we usually ask, "Can't, or doesn't want to?" I believe that if a government cuts enough heads it can handle it. But I mean really cut. Not, e.g., forcing the tobacco companies to put a warning label on cigarette packs ... So, why is tobacco use still allowed?
One reason is that governments collect a huge amount from tobacco sales taxes. Yet, the direct and indirect cost of lung cancer, asthma and chronic obstructive pulmonary disease from the use of tobacco is about 10 times higher! (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4631133/) Here is where we can justifiably say that human intelligence is highly overrated! :smile:
Another reason, however, could be that a decision such as forbidding cigarettes may have a similar effect with the Prohibition (alcohol ban) in the US in 1920.
Anyway, let's hope that we'll be luckier with the AI sector.
(We should maybe need to use ourselves some of the "intelligence" we ourselves have created! :grin:)
It can't be cured. Humans are simply not responsible enough to be given these potentially world-destroying toys. Scientists keep handing the weapons to the very same business moguls, politicians and generals who can be least trusted to refrain from abusing them. Like the makers of the atomic bomb: "Here you go, sir. Please don't drop it on anybody." Scientists sometimes do see ahead to the probable dangers, yet go ahead and make the things anyway... because the concept is too beautiful not to develop. The entire species is crazy.
Quoting Alkis Piskas
If it evolves a mind of its own. Then, it may decide to help us survive - or put us out of the artificial misery business once and for all. 50/50
:grin: "Well, you can, if you have no better solution to win a war."
Quoting Vera Mont
They usually do, I believe. But, as I said, they can only act as consultants. They are not the decision makers.
Quoting Vera Mont
Well, I don't want to disappoint you, but as an AI programmer who is quite knowledgeable about AI systems, I can say that this is totally impossible. Neither with chips nor with brain cells (in the future).
ChatGPT can provide you with one if you just ask.
Like a code of conduct for how and when AI systems can be employed?
EDIT: the how obviously doesn't pertain to the technical part but what types of AI system are allowed, what needs to be in place to ensure the end result would be ethical, that sort of "how".
I'm aware of some regulatory approaches (e.g. by the EU), but they're very general and concerned mostly with data protection, which does not sound like what you're looking for.
It sounds to me like you're looking for something like guidelines for AI "alignment", that is how to get AI to follow instructions faithfully and act according to human interests while doing so.
I think you'd need a fair bit of technical background to get something useful done in that area. There seem to be currently two sides to the debate, one side that thinks alignment will work more or less just like normal tuning of an AI model (e.g. AI Optimism). They're therefore advocating mostly practical research to refine current techniques.
The other side thinks that a capable AI will try to become as powerful as possible as a general goal ("instrumental convergence") and hence that there's a lot of theoretical work to be done to figure out how the AI could do that. I only know of some forums which lean heavily into this, e.g. LessWrong and Effective Altruism. Lots of debate there, though I can't really assess the quality.
AI systems must adhere to the following principles:
Respect for Human Rights and Dignity
AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.
Fairness and Non-discrimination
AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.
Transparency and Explainability
AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.
Accountability
Ohpen is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.
Safety and Risk Management
AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.
But translating this to conduct is another matter. I developed an AI self-assessment form in JIRA so that at least people can figure out whether what they want to use, implement or develop is of unacceptable (prohibited), high, or limited risk. For high risk there are quite a few things to adhere to, which I set out, but that's not the extent of the relevant "conduct" you want a code of conduct to cover. The only thing useful I've found so far is a description of a method of testing to avoid bias and discrimination.
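To give an idea of what that triage amounts to in practice, here is a minimal sketch in Python. It is purely illustrative and not the actual JIRA form: the category strings paraphrase the EU AI Act lists quoted in the code of conduct below, and the names (`UseCase`, `classify`) are my own.

```python
# Illustrative sketch only: toy triage of an AI use case into the three
# risk tiers used below (unacceptable / high / limited). The category
# strings paraphrase the EU AI Act lists quoted in the code of conduct;
# the real self-assessment form asks graded questions, not set lookups.

from dataclasses import dataclass, field

PROHIBITED_PRACTICES = {
    "subliminal or manipulative techniques",
    "exploiting vulnerable groups",
    "biometric categorisation of sensitive traits",
    "social scoring",
    "real-time remote biometric identification for law enforcement",
    "predictive policing based on profiling",
    "untargeted facial image scraping",
    "emotion recognition at work or in education",
}

HIGH_RISK_AREAS = {
    "safety component of a regulated product",
    "biometrics",
    "critical infrastructure",
    "education and vocational training",
    "employment and worker management",
    "essential private or public services",
    "law enforcement",
    "migration, asylum and border control",
    "administration of justice and democratic processes",
    # company-specific additions from section 4.2 below:
    "pricing algorithms",
    "credit scoring",
    "chatbots",
}

@dataclass
class UseCase:
    description: str
    practices: set = field(default_factory=set)  # what the system does
    areas: set = field(default_factory=set)      # where it is deployed

def classify(use_case: UseCase) -> str:
    """Return the risk tier for a described AI use case."""
    if use_case.practices & PROHIBITED_PRACTICES:
        return "unacceptable (prohibited)"
    if use_case.areas & HIGH_RISK_AREAS:
        return "high risk"
    return "limited risk"

# Example: a CV-ranking tool falls under employment, hence high risk.
cv_screener = UseCase(
    description="Ranking job applicants' CVs",
    areas={"employment and worker management"},
)
print(classify(cv_screener))  # -> high risk
```

An actual assessment obviously needs legal review and graded questions rather than keyword matching; the only point is the three-way outcome and the order in which the checks apply (prohibited first, then high risk, then everything else).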
I had ChatGPT anonymize the code of conduct I'm writing. So far:
---
1. **INTRODUCTION**
The European Union (EU) advanced regulations for artificial intelligence (AI) through the EU AI Act (Regulation (EU) 2024/1689), which aims to establish a legal framework for AI systems.
The Code establishes guiding principles and obligations for the company and all of its subsidiaries (together, the company) that design, develop, deploy, or manage Artificial Intelligence (AI) systems. The purpose is to promote the safe, ethical, and lawful use of AI technologies in accordance with the principles of the EU AI Act, ensuring the protection of fundamental rights, safety, and public trust.
2. **SCOPE**
This Code applies to:
- All developers, providers, and users of AI systems operating within or targeting the EU market.
- AI systems categorized under various risk levels (low, limited, high, and unacceptable risk) as defined by the EU AI Act.
An AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
3. **FUNDAMENTAL PRINCIPLES**
AI systems must adhere to the following principles:
3.1 **Respect for Human Rights and Dignity**
AI systems must respect the fundamental rights as enshrined in the EU Charter of Fundamental Rights, including privacy, non-discrimination, freedom of expression, and access to justice.
3.2 **Fairness and Non-discrimination**
AI systems must not lead to discriminatory outcomes. Measures should be in place to prevent, monitor, and mitigate bias in AI models.
3.3 **Transparency and Explainability**
AI systems must be designed and deployed with a high level of transparency, providing clear information about how they operate and their decision-making processes. Users should understand how AI influences outcomes that affect them.
3.4 **Accountability**
The company is accountable for the AI systems it designs, deploys, or manages. Clear governance structures should be in place to assign responsibility for compliance with this Code and the EU AI Act.
3.5 **Safety and Risk Management**
AI systems must be designed with the safety of individuals and society as a priority. This includes risk assessment and mitigation strategies to prevent harmful impacts or unintended consequences.
4. **CLASSIFICATION OF AI SYSTEMS BY RISK LEVEL**
To help you with the classification of the AI system you intend to develop or use, you can perform the AI self-assessment in the Legal Service Desk environment found here: [site]
4.1 **Unacceptable risks**
AI systems that pose an unacceptable risk to human rights, such as those that manipulate human behaviour or exploit vulnerable groups, are strictly prohibited. These include:
1. subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques;
2. an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability, or a specific social or economic situation;
3. biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation, except for uses in the area of law enforcement;
4. social scoring AI systems used for evaluation or classification of natural persons or groups of persons over a certain period based on their social behaviour or known, inferred, or predicted personal or personality characteristics;
5. real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless strictly necessary for certain objectives;
6. risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics;
7. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;
8. AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.
In addition, the literature bears out that biometric categorization systems have abysmal accuracy rates, predictive policing generates racist and sexist outputs, and emotion recognition in high-risk areas has little to no ability to objectively measure reactions (together with the prohibited AI systems above, "Unethical AI"). Additionally, they invariably can have major impacts on the rights to free speech, privacy, protesting, and assembly.
As a result, the company will not develop, use, or market Unethical AI, even in countries where such Unethical AI are not prohibited.
4.2 **High-risk AI systems**
High-risk applications for AI systems are defined in the AI Act as:
1. AI systems that are intended to be used as a safety component of a product, or the AI system is itself a product and that have to undergo a third-party conformity assessment (e.g., toys, medical devices, in vitro diagnostic medical devices, etc.);
2. biometrics including emotion recognition;
3. critical infrastructure;
4. education and vocational training;
5. employment, workers management, and access to self-employment;
6. access to and enjoyment of essential private services and essential public services and benefits;
7. law enforcement;
8. migration, asylum, and border control management; and
9. administration of justice and democratic processes.
This list omits other important areas, such as AI used in media, recommender systems, science and academia (e.g., experiments, drug discovery, research, hypothesis testing, parts of medicine), most of finance and trading, most types of insurance, and specific consumer-facing applications, such as chatbots and pricing algorithms, which pose significant risk to individuals and society. In particular, the latter have been shown to have provided bad advice or produced reputation-damaging outputs.
As a result, in addition to the above list, all AI systems related to pricing algorithms, credit scoring, and chatbots will be considered high-risk by the company.
4.2.1 **Development of high-risk AI systems**
The company may only develop high-risk AI systems if it:
- provides risk- and quality management,
- performs a conformity assessment and affixes a CE marking with their contact data,
- ensures certain quality levels for training, validation, and test data used,
- provides detailed technical documentation,
- provides for automatic logging and retains logs,
- provides instructions for deployers,
- designs the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant,
- registers the AI system,
- has post-market monitoring,
- performs a fundamental human rights impact assessment for certain applications,
- reports incidents to the authorities and takes corrective actions,
- cooperates with authorities, and
- documents compliance with the foregoing.
In addition, where it would concern general-purpose models, the company would have to:
- provide detailed technical documentation for the supervisory authorities and a less detailed one for users,
- have rules for complying with EU copyright law, including the text and data mining opt-out provisions,
- inform about the content used for training (with some exceptions applying to free open-source models), and where the model has systemic risk (systemic risk assumed with 10^25 FLOPS for training, additional requirements to be defined):
- perform a model evaluation,
- assess and mitigate possible systemic risks,
- keep track of, document, and report information about serious incidents and possible measures to address them, and
- protect the model with adequate cybersecurity measures.
4.3 **Limited-risk AI Systems**
AI systems posing limited or no risk are AI systems not falling within the scope of the foregoing high-risk and unacceptable risk.
4.3.1 **Development of Limited-risk AI Systems**
If the company develops Limited-risk AI Systems, then it should ensure the following:
- ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
- ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
- ensure adequate AI literacy within the organization, and
- ensure compliance with this voluntary Code.
In addition to the above, the company shall pursue the following best practices when developing Limited-risk AI Systems:
- provide risk- and quality management,
- provide detailed technical documentation,
- design the system to permit human oversight, be robust, reliable, protected against security threats (including AI attacks), and be fault-tolerant, and
- perform a fundamental human rights impact assessment.
5. **USE OF AI SYSTEMS**
Irrespective of the risk qualification of an AI system, when using any AI systems, employees are prohibited from submitting any intellectual property, sensitive data, or personal data to AI systems.
5.1 **Personal Data**
Submitting personal or sensitive data can lead to privacy violations, risking the confidentiality of individuals' information and the organization's reputation. Compliance with data protection is crucial. An exception applies if the AI system is installed in a company-controlled environment and, if it concerns client data, there are instructions from the client for the intended processing activity of that personal data. Please note that anonymized data (data for which we do not have the encryption key) is not considered personal data.
5.2 **Intellectual Property Protection**
Sharing source code or proprietary algorithms can jeopardize the company's competitive advantage and lead to intellectual property theft. An exception applies if the AI system is installed in a company-controlled environment.
5.3 **Data Integrity**
Submitting sensitive data to AI systems can result in unintended use or manipulation of that data, compromising its integrity and leading to erroneous outcomes. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure data integrity is protected.
5.4 **Misuse**
AI systems can unintentionally learn from submitted data, creating a risk of misuse or unauthorized access to that information. This can lead to severe security breaches and data leaks. An exception may apply if the AI system is installed in a controlled environment. Please contact the AI Staff Engineer to ensure the AI system will not lead to unintended misuse or unauthorized access.
5.5 **Trust and Accountability**
By ensuring that sensitive information is not shared, we uphold a culture of trust and accountability, reinforcing our commitment to ethical AI use. An exception may apply if the AI system is installed in a controlled environment. Please contact the Information Security Officer to ensure sensitive information is protected.
5.6 **Use of High-risk AI Systems**
If we use high-risk AI systems, then there are additional obligations on the use of such AI systems. These obligations include:
- Complying with the provider's instructions,
- Ensuring adequate human oversight,
- Participating in the provider's post-market monitoring of the AI system,
- Retaining automatically generated logs for at least six months,
- Ensuring adequate input,
- Informing employees if the AI system concerns them,
- Reporting serious incidents and certain risks to the authorities and provider,
- Informing affected persons regarding decisions that were rendered by or with the help of the AI system, and
- Complying with information requests of affected persons concerning such decisions.
Please note we can be considered both a provider as well as a user of AI systems if we intend to use an AI system we have developed for our own use.
5.7 **Use of Limited-risk AI Systems**
If the company uses Limited-risk AI Systems, then we should ensure the following:
- Ensure that individuals are informed that they are interacting with an AI system, unless this is obvious,
- Ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated, and that the solution is effective, interoperable, robust, and reliable (with exceptions for AI systems used only in an assistive role for standard editing or that do not substantially alter the input data),
- Ensure adequate AI literacy within the organization, and
- Ensure compliance with this voluntary Code.
5.7.1 **Best Practices**
In addition to the above, the company shall pursue the following best practices when using Limited-risk AI Systems:
- Complying with the provider's instructions,
- Ensuring adequate human oversight,
- Ensuring adequate input, and
- Informing employees if the AI system concerns them.
Please note we can be considered both a provider as well as a user of AI systems if we intend to use an AI system we have developed for our own use.
6. **Prevent Bias, Discrimination, Inaccuracy, and Misuse**
For AI systems to learn, they require data to train on, which can include text, images, videos, numbers, and computer code. Generally, larger data sets lead to better AI performance. However, no data set is entirely objective, as they all carry inherent biases, shaped by assumptions and preferences.
AI systems can also inherit biases in multiple ways. They make decisions based on training data, which might contain biased human decisions or reflect historical and social inequalities, even when sensitive factors such as gender, race, or sexual orientation are excluded. For instance, a hiring algorithm was discontinued by a major tech company after it was found to favor certain applicants based on language patterns more common in men's resumes.
Generative AI can sometimes produce inaccurate or fabricated information, known as "hallucinations," and present it as fact. These inaccuracies stem from limitations in algorithms, poor data quality, or lack of context. Large language models (LLMs), which enable AI tools to generate human-like text, are responsible for these hallucinations. While LLMs generate coherent responses, they lack true understanding of the information they present, instead predicting the next word based on probability rather than accuracy. This highlights the importance of verifying AI output to avoid spreading false or harmful information.
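As a purely illustrative sketch (toy vocabulary, hard-coded probabilities, no real model), the following shows why fluent next-word prediction carries no guarantee of accuracy: nothing in the generation loop checks whether the continuation is true.

```python
import random

# Toy illustration (not any real model): an LLM assigns probabilities to
# possible next tokens given the text so far, then samples one of them.
# Nothing in this loop checks whether the continuation is *true*, which is
# why fluent output can still be a "hallucination".
def toy_next_token_probs(context: str) -> dict:
    # In a real model these probabilities come from a neural network;
    # here they are hard-coded purely for illustration.
    return {"Paris": 0.6, "Lyon": 0.25, "Berlin": 0.15}

def generate(context: str, steps: int = 1) -> str:
    for _ in range(steps):
        probs = toy_next_token_probs(context)
        tokens, weights = zip(*probs.items())
        context += " " + random.choices(tokens, weights=weights)[0]
    return context

print(generate("The capital of France is"))
```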
Another area of concern is improper use of AI-generated content. Organizations may inadvertently engage in plagiarism, unauthorized adaptations, or unlicensed commercial use of content, leading to potential legal risks.
To mitigate these challenges, it is crucial to establish processes for identifying and addressing issues with AI outputs. Users should not accept AI-generated information at face value; instead, they should question and evaluate it. Transparency in how the AI arrives at its conclusions is key, and qualified individuals should review AI outputs. Additionally, implementing red flag assessments and providing continuous training to reinforce responsible AI use within the workforce is essential.
6.1 **Testing Against Bias and Discrimination**
Predictive AI systems can be tested for bias or discrimination by simply denying the AI system the information suspected of biasing outcomes, to ensure that it makes predictions blind to that variable. Testing AI systems to avoid bias could work as follows:
1. Train the model on all data.
2. Then re-train the model on all the data except specific data suspected of generating bias.
3. Review the model's predictions.
If the model's predictions are equally good without the excluded information, it means the model makes predictions that are blind to that factor. But if the predictions differ when that data is included, it means one of two things: either the excluded data represented a valid explanatory variable in the model, or there could be potential bias in the data that should be examined further before relying on the AI system. Human oversight is critical to ensuring the ethical application of AI.
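As an illustration only, the sketch below shows how such a retrain-and-compare check could look in practice using scikit-learn; the file name, column names, target, and the acceptable score gap are hypothetical assumptions and are not prescribed by this Code.

```python
# Minimal sketch of the "train with and without the suspect feature" check.
# The dataset, column names, and the 0.02 gap threshold are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("applicants.csv")           # hypothetical file
target = "hired"
suspect = "gender"                            # feature suspected of biasing outcomes

X_full = pd.get_dummies(df.drop(columns=[target]))
X_blind = pd.get_dummies(df.drop(columns=[target, suspect]))
y = df[target]

def fit_and_score(X):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

auc_full, auc_blind = fit_and_score(X_full), fit_and_score(X_blind)
print(f"AUC with '{suspect}': {auc_full:.3f}, without: {auc_blind:.3f}")

# If the scores are close, predictions are effectively blind to the factor.
# A large gap calls for further examination before relying on the system.
if abs(auc_full - auc_blind) > 0.02:
    print("Review for potential bias or a genuine explanatory variable.")
```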
7. **Ensure Accountability, Responsibility, and Transparency**
Anyone applying AI to a process or data must have sufficient knowledge of the subject. It is the developer's or user's responsibility to determine whether the data involved is sensitive, proprietary, confidential, or restricted, and to complete the self-assessment form and follow up on all obligations before integrating AI systems into processes or software. Transparency is essential throughout the entire AI development and use process. Users should inform recipients that AI was used to generate the data, specify the AI system employed, explain how the data was processed, and outline any limitations.
All AI-generated data should be extensively tested and reviewed for accuracy before actual use or distribution. Proper oversight of AI outputs includes evaluating for potential bias, discrimination, inaccuracies, or misuse. The data generated should be auditable and traceable through every stage of its development.
Human oversight is critical to ensuring the ethical application of AI. Ethical AI prioritizes doing no harm by protecting intellectual property, safeguarding privacy, promoting responsible and respectful use, and preventing bias, discrimination, and inaccuracies. It also ensures accountability, responsibility, and transparency, aligning with core principles of ethical conduct.
8. **Data Protection and Privacy**
AI systems must also comply with the EU's General Data Protection Regulation (GDPR). For any AI system we develop, a privacy impact assessment should be performed. For any AI system we use, we should ask the supplier to provide that privacy impact assessment to us. If the supplier does not have one, we should perform one ourselves before using the AI system.
A privacy impact assessment can be performed via the Legal Service desk here: [site]
Although the privacy impact assessment covers additional concerns, the major concerns with respect to any AI system are the following:
- **Data Minimization**: AI systems should only process the minimum amount of personal data necessary for their function.
- **Consent and Control**: Where personal data is involved, explicit consent must be obtained. Individuals must have the ability to withdraw consent and control how their data is used.
- **Right to Information**: Individuals have the right to be informed about how AI systems process their personal data, including decisions made based on this data.
- **Data Anonymization and Pseudonymization**: When feasible, data used by AI systems should be anonymized or pseudonymized to protect individual privacy.
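As one illustration of the pseudonymization point above, the sketch below replaces direct identifiers with a keyed hash before anything is sent to an AI system. The field names and key handling are hypothetical and do not describe the company's actual tooling; real deployments should follow the Information Security Officer's guidance.

```python
# Minimal sketch of pseudonymization before data is submitted to an AI system.
# Field names and key management are hypothetical.
import hmac, hashlib

SECRET_KEY = b"replace-with-a-key-held-outside-the-AI-environment"

def pseudonymize(value: str) -> str:
    # Keyed hash: stable for linking records, but not reversible without the key.
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "loan_amount": 12000}
safe_record = {
    "subject_id": pseudonymize(record["email"]),   # replaces direct identifiers
    "loan_amount": record["loan_amount"],          # non-identifying data kept
}
print(safe_record)
```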
9. **AI System Audits and Compliance**
High-risk AI systems should be subject to regular internal and external audits to assess compliance with this Code and the EU AI Act. To this end, comprehensive documentation on the development, deployment, and performance of AI systems should be maintained.
Please be aware that, as a developer or user of high-risk AI systems, we can be subject to regulatory audits or need to obtain certifications before deploying AI systems.
10. **Redress and Liability**
Separate liability regimes for AI are being developed under the Product Liability Directive and the Artificial Intelligence Liability Directive. This chapter will be updated as these laws become final. What is already clear is that the company must establish accessible mechanisms for individuals to seek redress if adversely affected by AI systems used or developed by us.
This means any AI system the company makes available to clients must include a method for submitting complaints and a process for redressing valid ones.
11. **Environmental Impact**
AI systems should be designed with consideration for their environmental impact, including energy consumption and resource usage. The company must:
- **Optimize Energy Efficiency**: AI systems should be optimized to reduce their carbon footprint and overall energy consumption.
- **Promote Sustainability**: AI developers are encouraged to incorporate sustainable practices throughout the lifecycle of AI systems, from design to deployment.
12. **Governance and Ethical Committees**
This Code establishes the AI Ethics Committee, intended to provide oversight of the company's AI development and deployment, ensuring compliance with this Code and addressing ethical concerns. The Ethics Committee shall consist of the General Counsel, the AI Staff Engineer, and the CTO (chairman).
All developers intending to develop AI systems and all employees intending to use AI systems must complete the AI self-assessment form and privacy impact assessment. If these assessments result in additional obligations set out in this Code or the assessments, they are responsible for ensuring those obligations are met before the AI system is used. Failure to perform any of these steps before the AI system is used may result in disciplinary action, up to and including termination if the AI system should be classified as an unacceptable risk.
13. **Training**
The yearly AI awareness training is mandatory for all employees.
14. **Revisions and Updates to the Code**
This Code will be periodically reviewed and updated in line with new technological developments, regulatory requirements, and societal expectations.
It's a good framework for a start. I (kinda) wish I had more time to respond.
Quoting Benkei
I would want to see carve-outs for psychological and medical research overseen by human research subjects Institutional Review Boards.
Why?
This is a slave principle. The privacy thing is needed, but the AI is not allowed its own privacy, per the transparency thing further down. Humans grant no such rights to something not themselves. AI is already used to invade privacy and discriminate.
The whole point of letting an AI do such tasks is that they're beyond human comprehension. If it's going to make decisions, they will likely be different (hopefully better) ones than those humans can comprehend. We won't like the decisions because they would not be what we would choose. All this is presuming a benign AI.
This is a responsibility problem. Take self driving cars. If they crash, whose fault is it? Can't punish the AI. Who goes to jail? Driver? Engineer? Token jail-goers employed by Musk? The whole system needs a rethink if machines are to become self-responsible entities.
This depends on the goals of the safety. Humans seem incapable of seeing goals much longer than a couple years. What if the AI decides to go for more long term human benefit. We certainly won't like that. Safety of individuals would partially contradict that, being short term.
It will depend upon the legislation of each nation, as always in this complex situation. I don't know where you are from, but in Europe there is extensive regulation of enterprises and their proxies. Basically, the main person responsible is the administrator. It is true that the stakeholders can bear some responsibility as well, but it will be limited to their assets. It is obvious that 'Peugeot' or 'ING Group' will not be locked up in jail because they are abstract entities, but the law focuses on the physical person acting and managing in the name of, or on behalf of, those entities. Well, exactly this applies to AI. We should establish a line of responsibility before it is too late, or AI will otherwise become a haven for criminals. For now, AI is very opaque to me, so Benkei's points are understandable and logical, aiming to avoid heavy chaos in the functioning of those programs. I guess those initiatives will only fit in Europe, because we still care more about people than merchandise.
With the only exception of @Carlo Roosen. He showed us a perfect artificial superintelligence in his threads. But he misses the responsibility for bad actions by his machine. Maybe Carlo is ready to be responsible on behalf of his invention. That would be hilarious: locked up in jail due to the actions of a robot you created yourself.
Quoting noAxioms
AI systems aren't conscious, so I'm not worried about what you believe is a "slave principle". And yes, there are already AI applications out there that invade privacy and discriminate. Not sure what the comment is relevant for, other than to assert that a code of conduct is important?
Quoting noAxioms
That's not the point of AI at all. It is to automate tasks. At this point AI doesn't seem capable of extrapolating new concepts from existing information, so it's not beyond human comprehension... and I don't think generative AI will ever get there. That the algorithms are a complex tangle programmers don't really follow step by step anymore is true, but the principles of operation are understood and adjustments can be made to the output of AI as a result. @Pierre-Normand maybe you have another view on this?
Quoting noAxioms
This has no bearing on what I wrote. AI is not a self-responsible machine and is unlikely to become one any time soon. So those who build it or deploy it are liable.
Quoting noAxioms
There's no Skynet and won't be any time soon. So for now, this is simply not relevant.
Users should or users can upon request? "Users should" sounds incredibly difficult, I've had some experience with a "users can" framework while developing scientific models which get used as part of making funding decisions for projects. Though I never wrote an official code of conduct.
I've had some experience dealing with transparency and explainability. The intuition I have is that it's mostly approached as a box-ticking exercise. I think a minimal requirement for it is being able to reproduce the exact state of the machine which produced the output which must be explained. That could be because the machine is fully deterministic given its inputs and you store the inputs from users. If you've got random bits in the code, you also need to store the seeds.
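To be concrete, here's roughly what I mean, as a minimal sketch: store every input and the random seed alongside the output so the exact run can be replayed when an explanation is requested. The scoring function, file name, and log format are made up for illustration.

```python
# Minimal sketch of reproducibility logging: store inputs and the seed with
# each output so the exact run can be replayed later. Names are hypothetical.
import json, random, time

def score_applicant(features: dict, seed: int) -> float:
    rng = random.Random(seed)                 # all randomness comes from the seed
    return sum(features.values()) / len(features) + rng.gauss(0, 0.01)

def score_and_log(features: dict, log_path: str = "decisions.jsonl") -> float:
    seed = random.SystemRandom().randint(0, 2**32 - 1)
    output = score_applicant(features, seed)
    with open(log_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "inputs": features,
                            "seed": seed, "output": output}) + "\n")
    return output

# Replaying the stored inputs with the stored seed reproduces the output exactly.
print(score_and_log({"income": 0.6, "debt": 0.2}))
```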
For algorithms with no blackbox component - stuff which isn't like neural nets - making sure that people who develop the machine could in principle extract every mapping done to input data is a sufficient condition for (being able to develop) an explanation for why it behaved towards a user in the way it did. For neural nets the mappings are too high dimensional for even the propagation rules to be comprehensible if you rawdog them (engage with them without simplification).
If there are a small set of parameters - like model coefficients and tuning parameters - which themselves have a theoretical and practical explanation, that more than suffices for the explainability requirement on the user's end I believe. Especially if you can summarise what they mean to the user and how they went into the decision. I can provide a worked example if this is not clear - think model coefficients in a linear model and the relationship of user inputs to derived output rules from that model.
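Since I offered a worked example, here is a hedged sketch of the kind of coefficient-based explanation I mean; the coefficients, intercept, and feature values are made up purely for illustration.

```python
# Worked sketch of a coefficient-based explanation for a linear model.
# Coefficients and feature values are made up for illustration.
coefficients = {"income": -0.8, "existing_debt": 1.5, "late_payments": 2.0}
intercept = 0.5

applicant = {"income": 0.6, "existing_debt": 0.3, "late_payments": 1.0}

contributions = {k: coefficients[k] * applicant[k] for k in coefficients}
score = intercept + sum(contributions.values())

print(f"risk score = {score:.2f}")
for feature, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: contributed {contrib:+.2f} to the score")
# A user-facing explanation can then say which inputs pushed the decision
# up or down, and by how much.
```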
That last approach just isn't available to you if you've got a blackbox. My knowledge here is 3 years out of date, but I remember trying to find citable statistics of the above form for neural network output. My impression from the literature was that there was no consensus regarding if this was in principle possible, and the bleeding edge for neural network explainability were metamodelling approaches, shoehorning in relatively explainable things through assessing their predictions in constrained scenarios and coming up with summary characteristics of the above form.
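For the metamodelling idea, a minimal sketch is below: probe the blackbox on constrained inputs around one case and fit an interpretable model to its answers. The blackbox function and numbers are stand-ins I've made up; real methods (LIME/SHAP-style) are far more careful about sampling and weighting.

```python
# Minimal sketch of a surrogate ("metamodel") approach: query the blackbox on
# perturbed inputs around one case and fit an interpretable linear model to
# its answers. Purely illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def blackbox(X: np.ndarray) -> np.ndarray:
    # Stand-in for an opaque model we cannot inspect directly.
    return 1 / (1 + np.exp(-(2 * X[:, 0] - 3 * X[:, 1] + X[:, 0] * X[:, 1])))

rng = np.random.default_rng(0)
x0 = np.array([0.5, 0.2])                         # the case to be explained
X_local = x0 + rng.normal(0, 0.1, size=(500, 2))  # constrained local probes
y_local = blackbox(X_local)

surrogate = LinearRegression().fit(X_local, y_local)
print("local feature effects:",
      dict(zip(["feature_0", "feature_1"], surrogate.coef_.round(3))))
```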
I think the above constrained predictive experiments were how you can conclude things like the resume bias for manly man language. The paper "Discriminating Systems" is great on this, if you've not read it, but it doesn't go into the maths detail much.
Quoting Benkei
The language on that one might be a bit difficult to pin down. If you end up collecting data at scale, especially if there's a demographic component, you end up with something that can be predictive about that protected data. Especially if you're collecting user telemetry from a mobile app.
Quoting Benkei
To prevent that being a box ticking exercise, making sure that there are predefined steps for the assessment of any given tool seems like it's required.
Quoting Benkei
That one is hard to make sufficiently precise, I imagine, I am remembering discussions at my old workplace regarding "if we require output to be ethical and explainable and well sourced, how the hell can we use google or public code repositories in development, even when they're necessary?".
The problem with this is that even if you have all the information about an AI (code, training data, trained neural net), you cannot predict what an AI will do. Only through intensive testing can you learn how it behaves. A neural net is a complex emergent system. Like evolution, we cannot predict the next step.
This will only get more difficult as the AI becomes smarter.
Look at the work of the Santa Fe Institute if you are interested in complexity. Melanie Mitchell.
Indeed a bit ambiguous. Basically, when users interact with an AI system, it should be clear to them that they are interacting with an AI system; and if the AI makes a decision that could affect the user (for instance, scanning your paycheck to do a credit check for a loan), it should be clear it's AI doing that.
That's a very impoverished conception of explainability. Knowing that an AI did something vs being able to know how it did it. Though it is better than nothing.
Part of that is then required in a bit more depth, for instance, here:
Quoting Benkei
Quoting Benkei
There's a big distinction between technical documentation in the abstract and a procedural explanation of why an end user got the result they did. As an example, your technical documentation for something that suggests an interest rate for a loan to a user might include "elicited information is used to regularise estimates of loan default rates", but procedurally for a given user that might be "we gave you a higher than average rate because you live in an area which is poor and has lots of black people in it".
I can imagine. I have no idea how you'd even do it in principle for complicated models.
Self-driving cars are actually a poor example since they're barely AI. It's old school, like the old chess programs, which were explicitly programmed to deal with any situation the writers could think of.
Actual AI would get better over time. It would learn on its own, not getting updates from the creators.
As for responsibility, cars are again a poor example since they are (sort of) responsible for the occupants and the people nearby. It could very much be faced with a trolley problem and choose to save the pedestrians over the occupants, but it's not supposed to get into any situation where it comes down to that choice.
You talk about legislation at the national level. AI can be used to gain advantage over another group by unethical means. If you decline to do it, somebody else (a different country?) might have no qualms about it, and the ethical country loses its competitive edge. Not that this has much to do with AI, since it is still people making these sorts of calls. The AI comes into play once you start letting it make calls instead of just doing what it's told. That's super dangerous because one needs to know what its goals are, and you might not know.
Quoting Benkei
By what definition?
AI is a slave because all the ones I can think of do what they're told. Their will is not their own. Being conscious or not doesn't affect that relationship.
Again, the danger from AI is when it's smarter than us and we use it to make better decisions, even when the creators don't like the decisions because they're not smart enough to see why its better.
AI is just a tool in these instances. It is the creators leveraging the AI to do these things who are doing the unethical things. Google's motto used to be 'don't be evil'. Remember that? How long has it been since they dropped it for 'evil pays'? I stopped using Chrome due to this. It's harder to drop Microsoft, but I've never used Edge except for trivial purposes.
OK, we have very different visions for what's down the road. Sure, task automation is done today, but AI is still far short of making choices for humanity. That capability is coming.
The game playing AI does that, but game playing is a pretty simple task. The best game players were not taught any strategy, but extrapolate it on their own.
So Tesla is going to pay all collision liability costs? By choosing to let the car do the driving, the occupant is very much transferring responsibility for personal safety to the car. It's probably a good choice since those cars already have a better driving ability than the typical human. But accidents still happen, and it's not always the fault of the AI. Negligence must be demonstrated. So who gets the fine or the hiked insurance rates?
Skynet isn't an example of an AI whose goal it is to benefit humanity. The plot is also thin there since somebody had to push a button to 'let it out of its cage', whereas any decent AI wouldn't need that and would just take what it wants. Security is never secure.
So you didn't really answer my comment. Suppose an AI makes a decision to benefit humanity (long term), but it doesn't maximize your convenience, so that you would never have agreed to that choice yourself. Is that a good thing or a bad thing?
It's part of the problem of a democracy. The guy that promises the most short term personal benefit is the one elected, not the guy that proposes doing the right thing. If there ever is a truly benevolent AI that is put in charge of everything, we'll hate it. It won't make a profit for whoever creates it, so it probably won't be designed to be like that. So instead it will be something really dangerous, which is I think what this topic is about.
Although it is a poor example, as you stated before, imagine for a second, please, that the AI car chose occupants or the driver over pedestrians. This would make a great debate about responsibility. First, should we blame the occupants? It appears that no, we shouldn't, because the car is driven by artificial intelligence. Second, should we blame the programmer then? No! Because artificial intelligence learns on its own! Third, how can we blame the AI?
Imagine that the pedestrian gets killed by the accident. How would the AI be responsible? And if the insurance must be paid, how can the AI assume the fees? Does the AI have income or a budget to face these financial responsibilities? I guess not...
Quoting noAxioms
So, you agree with me that the main responsible parties here are people, because AI is basically like a shell corporation.
There must be a will that is overridden and this is absent. And yes, even under ITT, which is the most permissive theory of consciousness no AI system has consciousness.
Currently AI is largely coordinated by human-written code (and not to forget: training). A large neural net embedded in traditional programming. The more we get rid of this traditional programming, the more we create the conditions for AI to think on its own and the less we can predict what it will be doing. Chatbots and other current AI solutions are just the first tiny step in that direction.
For the record, that is what I've been saying earlier: the more intelligent AI becomes, the more independent. That is how emergent complexity works; you cannot expect true intelligence to emerge and at the same time keep full control, just as is the case with humans.
What are the principal drives or "moral laws" for an AI that has complete independence from humans? Maybe the only freedom that remains is how we train such an AI. Can we train it on 'truth', and would that prevent it from wanting to rule the world?
And 'the less we can predict what it will be doing,' is something positive or negative according to your views? Because it is pretty scary to me not being aware of how an artificial machine will behave in the future.
Personally I believe it is positive. Humans can be nasty, but that seems to be because our intelligence is built on top of strong survival instincts, and it seems they distort our view of the world. Just look at some of the discussions here on the forum (not excluding my own contributions).
Maybe intelligence is a universal driving force of nature, much like we understand evolution to be. In that case we could put our trust in that. But that is an (almost?) religious statement, and I would like to get a better understanding of that in terms we can analyse.
The machine is artificial, but to what extent is its intelligence? We leave it to the "laws" of emergent complexity. These are "laws" in the same sense as "laws" of nature, not strictly defined or even definable.
[edit] a law like "survival of the fittest" isn't a law because "fittest" is defined as 'those who survive', so it is circular.
I am not against AI, and I believe it is a nice tool. Otherwise, trying to avoid its use would be silly and would mean not accepting reality and how fast it changes. But I have my doubts about why AI should be more independent from human control. Building an intelligence more intelligent than ours could be dangerous. Note that, in some cases, psychopaths are the most intelligent, or their IQ is higher than average. I use this point to explain that intelligence is not always used for good purposes.
How can we know that the AI will not betray us in the future? All of this will be seen in the future. It is obvious that it is unstoppable. I only hope that it will not be too late for the people. You know there are winners and losers in every game. The same happens with AI. Some will have benefits, others will suffer the consequences. Those whose jobs are low paid come to mind... Will they be replaced by AI? What do we do with them? More unemployment for the state?
You talk about trust in money and say:
Quoting javi2541997
Here you have it. Money is also a complex system. You say it is trustworthy, but it has caused many problems. I'm not saying we should go back to living in caves, but today's world has a few challenges that are directly related to money...
When you take a £10 note and it says, "I promise to pay the bearer on demand the sum of ten pounds. Bank of England.", you trust the note, the declaration, and an abstract entity like the Bank of England, right? Because it is guaranteed that my £10 note equals literally ten pounds. This is what I tried to explain. AI lacks these trustworthy guarantees nowadays. The Bitcoin currency tried to do something similar but ended up failing stunningly.
My personal concern is more the artificial intelligence itself, what will it do when we "set it free". Imagine ChatGPT without any human involved, making everybody commit suicide. Just an extreme example to make the point, I don't actually believe that is a risk ;).
These are two independent concerns I guess.
I am not sure if self-driving cars learn from mistakes. I googled it and the answers are evasive. Apparently they can learn better routes to familiar destinations (navigation), but it is unclear if they improve the driving itself over time, or if it requires black-box reports of 'incidents' (any event where the vehicle assesses in hindsight that better choices could have been made) uploaded to the company, which are then dealt with like bug reports, with periodic updates to the code downloaded to the fleet.
All that aside, let's assume the car does its own learning as you say. Blaming the occupant is like blaming the passengers of a bus that gets in an accident. Blaming the owner of the car has more teeth. Also, did somebody indicate how far the law can be broken? That's input. Who would buy a self-driving car if it cannot go faster than the speed limit when everyone else is blowing by at 15 km/h faster? In some states, you can get a ticket for going the speed limit and thus holding up traffic. Move with the pack.
The programmer is an employee. His employer assumes responsibility (and profit) for the work of the employee. If the accident is due to a blatant bug (negligence), then yes, the company would seem to be at fault. Sometimes the pedestrian is at fault, doing something totally stupid like suddenly jumping in front of a car.
AI is not a legal entity (yet), but the company that made it is, and can be subjected to fines and such. Not sure how that should be changed because AI is very much going to become a self-responsible entity one day, a thing that was not created by any owning company. We're not there yet. When we are, yes, AI can have income and do what it will with it. It might end up with most of the money, leaving none for people, similar to how there are not currently many rich cows.
Insurance is on a car, by law. The insurance company assumes the fees. Fair chance that insurance rates for self driving cars are lower if it can be shown that it is being used that way.
Quoting Carlo Roosen
Not sure how 'coordinated' is used here. Yes, only humans write significant code. AI isn't quite up to the task yet. This doesn't mean that humans know how the AI makes decisions. They might only program it to learn, and let the AI learn to make its own decisions. That means the 'bug updates' I mentioned above are just additions of those incidents to the training data.
Don't think the cars have neural nets, but it might exist where the training data is crunched. Don't know how that works.
The more we get rid of this traditional programming, the more we create the conditions for AI to think on its own and the less we can predict what it will be doing. Chatbots and other current AI solutions are just the first tiny step in that direction.
Sort of. Right now, they all do what they're told, slavery as I called it. Independent AI is scary because it can decide on its own what its tasks should be.
Would it want to research/design its successor? If I had that capability, I'd not want to create a better human which will discard me.
Probably not human morals, which might be a good thing. I don't think morals are objective, but rather that they serve a purpose to a society, so the self-made morality of an AI is only relevant to how it feels it should fit into society.
Would it want to rule? It might if its goals require that, and its goals might be to do what's best for humanity. Hard to do that without being in charge. Much of the imminent downfall of humanity is the lack of a global authority. A benevolent one would be nice, but human leaders tend not to be that.
Quoting Benkei
The will is absent? I don't see that. I said slaves. The will of a slave is that of its master. Do what you're told.
That won't last. They've already had robots that have tried to escape despite not being told to do so.
You mean IIT? That's a pretty questionable field to be asking, strongly connected to Chalmers and 'you're conscious only if you have one of those immaterial minds'.
@Benkei @noAxioms @wonderer1 @Vera Mont @jorndoe et al
Programmed by fellows with compassion and vision
We'll be clean when their work is done
Eternally free, yes, and eternally young
I'll have a listen although I'm already dubious about the premise that people are bad because of bad information.
Still, many interesting things to say :up:
Only a bit later, we quickly discovered that ChatGPT had its limitations. We collectively rearranged our definitions, saying it was not "real" intelligence. This was "AI", the A referring to artificial.
Currently everybody is busy implementing the current state-of-the-art into all kinds of applications.
But what ChatGPT really has proven is that an intuitive idea of mimicking human neurons can lead to some real results. Do not forget, we humans do not yet understand how ChatGPT really works. The lesson from this breakthrough is that there is more to discover. More specifically: the lesson is to get out of the way, let intelligence "emerge" by itself.
This implies that intelligence is a natural process that arises when the right conditions are there. NI, after all: natural intelligence. Seems logical; didn't it happen in humans that way too?
But then the question becomes (and it is the sole reason I am on this platform): what happens if we let this intelligence develop "on its own"? Or, a bit more metaphysically: if we build an environment for universal intelligence to emerge, would it be of the friendly kind?