Do you think AI is going to be our downfall?

Darkneos March 29, 2025 at 22:21 5550 views 96 comments
I’ve always been sort of a skeptic when it comes to new tech, mostly because, given human history, we aren’t exactly good at using it to our betterment (looking at social media and the Industrial Revolution).

But with AI I’m thinking people don’t really see the consequences of automating everything or replacing people’s jobs. It seems that, like social media, AI is catering to our worst and basest impulses for immediate rewards and not thinking about the long term.

What’s gonna happen when we replace most jobs with AI? How will people live? What if someone is injured in an event involving AI? So far AI just seems to benefit the wealthiest among us and not the everyman, yet on Twitter I see people thinking it’s gonna lead us to some utopia, unaware of what it’s doing now. I mean, students are just having ChatGPT write their term papers now. It’s going to weaken human ability, and that in turn is going to impact how we deal with future issues.

It sorta reminds me of WALL-E.

Comments (96)

Darkneos March 30, 2025 at 03:57 #979600
I dunno, the thought gives me a lot of dread lately, as it seems like the hopeful future I grew up believing in has turned out to be the opposite. So far tech just makes life either more complicated or worse.
kazan March 30, 2025 at 07:21 #979612
Our downfall, or maybe just speeding up our fall?
Do you see any benefits of AI for humanity? Maybe we should work towards curtailing AI to those benefits?
The genie is already out of the bottle; now maybe is the time to ask the right questions and curb its potential harms.
So no, not a downfall. Just, like all new techs, more and different work to do to minimize its faults/flaws and maximize its better qualities/potentials.

A skeptical approach? Perhaps?

:smile:
180 Proof March 30, 2025 at 09:15 #979621
Reply to Darkneos Imho, "Skynet" is more likely to save us from our worst selves as a species – a much more complex and interesting problem to solve even for its higher order of intelligence – than enslave & terminate us. :nerd:

Consider these recent posts from a topic-related thread:

https://thephilosophyforum.com/discussion/comment/964651

https://thephilosophyforum.com/discussion/comment/965021
Darkneos March 30, 2025 at 16:03 #979664
Reply to 180 Proof I don’t think those posts hold any water, especially given how AI is lately.

Quoting kazan
Our downfall, or maybe just speeding up our fall?
Do you see any benefits of AI for humanity? Maybe we should work towards curtailing AI to those benefits?
The genie is already out of the bottle; now maybe is the time to ask the right questions and curb its potential harms.
So no, not a downfall. Just, like all new techs, more and different work to do to minimize its faults/flaws and maximize its better qualities/potentials.


We can’t even manage social media, let alone cars. We also, despite the tech, work more than previous generations did.
Quk March 30, 2025 at 16:41 #979670
The problem I see lies in the "creative" part of AI. I don't mind if AI takes over boring non-creative tasks. But when AI makes movies, paintings, music etc., it just copies fragments of what humans have made before. Aside from the fact that it's a copyright problem, it also accumulates human errors like fake news; this uncontrolled error accumulation makes the entire AI system unstable. AI does not really create things on its own, and it doesn't really know what's fake and what's true. Thirdly, when humans become less creative due to AI taking over, AI will get less to eat. So AI will feed itself with its own used crap instead of eating fresh human ideas. If this principle continues, AI will end up in an incestuous circle.
Vera Mont March 30, 2025 at 16:44 #979671
Quoting Darkneos
What’s gonna happen when you replace most jobs with AI, how will people live?

What will that AI be producing? For whom? If there is nobody earning, there is nobody buying; the machines stop making money for owners while still using up energy. Meanwhile, people who have been losing their income have no health insurance, their homes are repossessed, their debts won't be paid, and the banks will go bust while houses and apartments stand vacant and families are on the street. The economy collapses. (You don't need more than 5% unemployment to trigger a recession; in the Great Depression it reached 25%. 40% is sufficient to bring down an economy.)
When the economy breaks down, so does law and order and thus government.

People scramble for food and shelter; some choose to co-operate and form communities; they squat in abandoned buildings, plant gardens, pool skills and resources. Others rely on raiding the productive ones. So the productive communities learn to defend themselves. There is widespread armed conflict, until groups of communities form federations of trading partners and allies, establish laws based on shared value systems... and a new civilization grows.
The wealthy, who have benefited hugely from sudden advancements in technology for which no compensating social arrangements were made, find themselves in possession of very large numbers in a database that can't be traded for a pot of beans. They're reduced to doing their own drudgery. Unless they jump off tall buildings.

....
yet on Twitter I see people thinking it’s gonna lead us to some utopia, unaware of what it’s doing now. I mean, students are just having ChatGPT write their term papers now. It’s going to weaken human ability, and that in turn is going to impact how we deal with future issues.


What we have is artificial, but not intelligent. A chat bot sounds clever by parroting words written by humans. They're kind of like the white plastic face on a robot, to make it more appealing.
The real function of self-teaching or adaptive computer programs is in operating machines for industry, commerce, transportation and communications. That's where the jobs go. There is no point in a diploma that can be earned by parroting a parrot and there is no job at the end of it.

The only way computing could bring about a utopian - or at least, reasonable - arrangement for humans is if it were genuinely intelligent and took over control of the economic and political organization of society. But it won't bring about our downfall, either: we're doing that ourselves.
Darkneos March 30, 2025 at 19:12 #979708
Quoting Vera Mont
What we have is artificial, but not intelligent. A chat bot sounds clever by parroting words written by humans. They're kind of like the white plastic face on a robot, to make it more appealing.
The real function of self-teaching or adaptive computer programs is in operating machines for industry, commerce, transportation and communications. That's where the jobs go. There is no point in a diploma that can be earned by parroting a parrot and there is no job at the end of it.


What about with AGI?

I was more motivated by this post:

https://www.quora.com/What-ethical-dilemmas-should-we-consider-as-technology-evolves-rapidly/answer/David-Moore-408?ch=15&oid=1477743839367290&share=118d711a&srid=3lrYEM&target_type=answer

Where it suggests AI will solve the purpose of human existence, and he lists some things like: if pleasure is the goal, then we’d just be hooked up to drugs all the time without needing to bother with experiences. That sounds like either ruining the human experience or “revealing” it for what it is, that being just chemical reactions with our storytelling to make it seem like more.
Vera Mont March 30, 2025 at 19:33 #979710
Quoting Darkneos
Where it suggests AI will solve the purpose of human existence, and he lists some things like: if pleasure is the goal, then we’d just be hooked up to drugs all the time without needing to bother with experiences.

Is that what you see as the purpose of human existence? (assuming it has one) Is that what you desire for yourself? Being blissed-out on drugs and lying around in a sustained orgy of self-gratification? The notion doesn't do a thing for me. It sure wouldn't for a baseball player, an engineer, a psychologist or a composer. There are pleasures far more complex and satisfying than the chemical. People have talents and ambitions. Most don't have the time and opportunity to reach their potential - or even try to reach for their imagined potential.

In order to eke out a living, so that they can have their own basic necessities and some aspects of what makes them happy: material comfort, a family, social standing. Those who can, make whatever compromise is necessary to attain at least part of what they ideally want. The other half have no choices at all, except attempting to avoid one bad situation or another peril and stay alive, seizing those moments of respite, play or affection that make life worthwhile.

That sounds like either ruining the human experience or “revealing” it for what it is, that being just chemical reactions with our storytelling to make it seem like more.

Is that what you observe in your own daily contact with people? There may well be a fair whack of escapism these days, but look around and you'll understand what people are escaping from. The far greater danger we're increasingly witnessing is the degeneration of youth into brutality and blood-lust - savagery. Social media as Lord of the Flies.
Darkneos March 31, 2025 at 00:35 #979762
Quoting Vera Mont
Is that what you see as the purpose of human existence? (assuming it has one) Is that what you desire for yourself? Being blissed-out on drugs and lying around in a sustained orgy of self-gratification? The notion doesn't do a thing for me. It sure wouldn't for a baseball player, an engineer, a psychologist or a composer. There are pleasures far more complex and satisfying than the chemical. People have talents and ambitions. Most don't have the time and opportunity to reach their potential - or even try to reach for their imagined potential.


It's more like trying to expand on the Quora answer and what he's getting at, and extending things to their logical conclusion. I don't think such a thing is appealing, but I find it hard to argue against, since it does come down to chemicals when emotions are involved. I don't agree with his conclusions but I can't argue against them. I mean...why go through all those experiences? Just cut out the middleman.

Quoting Vera Mont
Is that what you observe in your own daily contact with people? There may well be a fair whack of escapism these days, but look around and you'll understand what people are escaping from. The far greater danger we're increasingly witnessing is the degeneration of youth into brutality and blood-lust - savagery. Social media as Lord of the Flies.


Does it matter what I observe? What if I am mistaken about what's happening, and our justifications are just storytelling trying to run from it all being just chemicals? Why care about the process of doing something, or the journey, if it's just the chemicals making us feel that way and driving us toward it? Again, I don't like thinking that but can't argue against it.
180 Proof March 31, 2025 at 01:17 #979767
Quoting Vera Mont
The only way computing could bring about a utopian - or at least, reasonable - arrangement for humans is if it were genuinely intelligent and took over control of the economic and political organization of society. But it won't bring about our downfall, either: we're doing that ourselves.

:up: :up:

Quoting Darkneos
I don’t think those posts hold any water, especially given how AI is lately.

Okay, you didn't read the posts or the thread.
Vera Mont March 31, 2025 at 03:03 #979778
Quoting Darkneos
I mean...why go through all those experiences? Just cut out the middleman.

Drugs are the middleman. I don't know about you, but I enjoy my experiences first-hand, directly. Emotions may be partly chemical, but they're also cerebral: what you think and remember is as much of your experience as what you taste and smell. Sight and hearing are more than simply chemical, too. Drugs and entertainments are an escape from experience that is unpleasant or tedious - not an acceptable substitute. The Quora poster is wrong, afaic.
Quoting Darkneos
Does it matter what I observe?

It should. What more reliable information will you ever get about reality than what you know?
Quoting Darkneos
Why care about the process of doing something or the journey if it's just the chemicals making us feel that way and driving us toward it? Again I don't like thinking that but can't argue against it.

There is a whole lot more to life than "just chemicals". There were plenty of chemicals floating around in the primordial ooze before some of them bumped into one another and formed complex molecules and eventually RNA. We've come a considerable way since then. You can't reduce human experience, thought, feeling, aspiration and activity to chemical reactions.


Darkneos March 31, 2025 at 03:39 #979782
Quoting Vera Mont
There is a whole lot more to life than "just chemicals". There were plenty of chemicals floating around in the primordial ooze before some of them bumped into one another and formed complex molecules and eventually RNA. We've come a considerable way since then. You can't reduce human experience, thought, feeling, aspiration and activity to chemical reactions.


Some would argue that's just storytelling, making things out to be more than what they really are.

Quoting Vera Mont
It should. What more reliable information will you ever get about reality than what you know?


Well our observations and experience could be mistaken.

Quoting Vera Mont
Drugsare the middleman. I don't know about you, but I enjoy my experiences first-hand, directly. Emotions may be partly chemical, but they're also cerebral: what you think and remember is as much of your experience as what you taste and smell. Sight and hearing are more than simply chemical, too. Drugs and entertainments are an escape from experience that is unpleasant or tedious - not an acceptable substitute. The Quora poster is wrong, afaic.


That's what I hope, though I find it hard to argue. I think what he's trying to get at with the thermodynamics bit and the simplest solution being "best" is that if pleasure is the goal of human existence, then just being hooked up to drugs is simplest, instead of "living". Did you read the link?
Vera Mont March 31, 2025 at 17:28 #979883
Quoting Darkneos
Some would argue that's just storytelling, making things out to be more than what they really are.

Chemicals that invent stories are far more interesting than chemicals that just want to experience physical pleasure. Still not an explanation for human complexity, of course.

Quoting Darkneos
Well our observations and experience could be mistaken.

As compared to what? If all experience is just chemicals and stories, why be concerned about their accuracy? OTOH, if you don't buy that explanation, your observations can provide an alternative theory.

Quoting Darkneos
I think what he's trying to get at with the thermodynamics bit and the simplest solution being "best" is that if pleasure is the goal of human existence, then just being hooked up to drugs is simplest, instead of "living".

As so often happens, the operative word there is if. I argue that this assumption is simply wrong. So I go on to investigate why I think it's wrong and rely on my own observation, experience and reading to find alternative explanations.
Quoting Darkneos
Did you read the link?

No. I was only interested in your original thoughts on the subject.

Darkneos March 31, 2025 at 21:05 #979921
Quoting Vera Mont
No. I was only interested in your original thoughts on the subject.


Maybe you should, as it explains it a bit more.

Quoting Vera Mont
As so often happens, the operative word there is if. I argue that this assumption is simply wrong. So I go on to investigate why I think it's wrong and rely on my own observation, experience and reading to find alternative explanations.


Maybe, but if we do things we enjoy isn't that more or less the same thing?

Quoting Vera Mont
Chemicals that invent stories are far more interesting than chemicals that just want to experience physical pleasure. Still not an explanation for human complexity, of course.


Maybe not or maybe we just want it to be more than it really is. I don't really know.
Vera Mont April 01, 2025 at 03:16 #979965
Quoting Darkneos
I was only interested in your original thoughts on the subject. — Vera Mont
Maybe you should as it explains it a bit more.

Simple enough. The guy who wrote that article didn't start this thread; you did. I asked you some questions early on, because I was interested in what you think.

Quoting Darkneos
Maybe, but if we do things we enjoy isn't that more or less the same thing?

Less. Much less. There are things we enjoy on a simple physical level, like chocolate or the smell of roses or a cold drink after a run. They're quite wonderful, for the few minutes the sensation lasts. But if we prolonged those experiences, they would become cloying, irksome or downright uncomfortable.
Then, there are emotional - we can say animal - pleasures, like a trusting relationship with a friend, the thriving of healthy offspring, the esteem of one's pack. These can give satisfaction for a lifetime.
Then there are things we enjoy on several levels, like making pottery (which is both sensual and creative), repairing airplane engines (which requires both dexterity and detection) or researching a cure for some illness (which takes discipline and meticulous observation). These pursuits can go on giving intellectual pleasure for years or decades - even in intervals of frustration and setbacks.

Quoting Darkneos
Maybe not or maybe we just want it to be more than it really is.

Some people do want life to mean more than it does, so they make up religions and nationalism and a lot of people follow those ideas. But, all the while they're doing that, they're also living experiences that nobody tells stories about. Like the burgher who sits in the front pew and his crotch itches during the service but he dares not scratch or even squirm in his seat because it would be undignified, people might notice and snicker and he would lose respect in the community. That man's experience is complex, acutely felt both physically and emotionally and accompanied by a train of conscious thought.
Experience is multiform and varied.
If you choose to reduce it to chemical narrative, you are much the poorer for that decision.
Darkneos April 01, 2025 at 03:58 #979971
Quoting Vera Mont
Simple enough. The guy who wrote that article didn't start this thread; you did. I asked you some questions early on, because I was interested in what you think.


Yeah but there is a reason I linked and quoted it.

Quoting Vera Mont
Then there are things we enjoy on several levels, like making pottery (which is both sensual and creative), repairing airplane engines (which requires both dexterity and detection) or researching a cure for some illness (which takes discipline and meticulous observation). These pursuits can go on giving intellectual pleasure for years or decades - even in intervals of frustration and setbacks.


But doesn't that boil down to just pleasure, like he's saying it does? It reminds me of a thought experiment meant to argue against hedonism:

https://en.wikipedia.org/wiki/Experience_machine

Quoting Vera Mont
If you choose to reduce it to chemical narrative, you are much the poorer for that decision.


I keep saying I don't want to do that but no matter what I do I always end up coming back to it.
Wayfarer April 01, 2025 at 04:07 #979972
Reply to Darkneos Could I ask, have you spent any time interacting with any of the new AI systems? ChatGPT or Gemini or Claude or one of the others? I think whether you like them or are apprehensive about them, there are some insights to be gleaned from actually using them.

For interest's sake, I used your OP as a prompt for ChatGPT4, which provided this response.
Vera Mont April 01, 2025 at 14:36 #980031
Quoting Darkneos
I keep saying I don't want to do that but no matter what I do I always end up coming back to it.

Oh well, maybe you can learn to take pleasure in it.
Quoting Wayfarer
For interest's sake, I used your OP as a prompt for ChatGPT4, which provided this response.

Good abstracts of articles on the subject - including some points I made in my original response - well presented. Shows that everything on the subject has already been written and posted on the internet. But it's remarkable how the bot chose and organized the relevant bits.
I don't see it pleasuring anyone to death.... or running the world.
jkop April 01, 2025 at 15:23 #980038
Reply to Darkneos

Quoting B. Kuźniacki
Thousands of parents were falsely accused of fraud by the Dutch tax authorities due to discriminative algorithms. The consequences for families were devastating.


That was in 2013, but I think it exemplifies the kind of AI-related disasters that will plague us for another decade or so. Eventually it will be common knowledge that the technology is neither "training" nor "learning" in the true sense of the words.
Philosophim April 01, 2025 at 17:29 #980064
Reply to Darkneos Every time we advance technology that replaces tons of jobs, we come up with new things we didn't think of before that require humans. We'll still need oversight on AI, manual labor, and who knows what else.

What we probably aren't prepared for is AI without morality. We have no objective morality that AI can reference, therefore it may usher in one of the deepest immoral eras of human history.
Darkneos April 01, 2025 at 18:24 #980071
Quoting Wayfarer
Could I ask, have you spent any time interacting with any of the new AI systems? ChatGPT or Gemini or Claude or one of the others? I think whether you like them or are apprehensive about them, there are some insights to be gleaned from actually using them.


I have not, mostly because it doesn't really answer questions well from what I see. That prompt you listed is a key example.

Nor does it have anything to do with what is being discussed.
Darkneos April 01, 2025 at 18:38 #980072
Quoting Vera Mont
Good abstracts of articles on the subject - including some points I made in my original response - well presented. Shows that everything on the subject has already been written and posted on the internet. But it's remarkable how the bot chose and organized the relevant bits.
I don't see it pleasuring anyone to death.... or running the world.


Well the thing is this is more getting into advanced AI, like AGI that the link is talking about. The issue is sorta "solving" human purpose by just giving the most immediate explanation.

If you think about it, a lot of our lives and goals do revolve around pleasure, so much so that "happily ever after" is a common ending in a lot of media. So why not just cut to the end and never have to experience or do anything to get to pleasure or happiness? Right now everything we do and assign meaning to is just a roundabout way to get to pleasure. Even the goals of building a better society and human flourishing and wellbeing just seem like the same thing.

So if AI (AGI) could determine from looking at all that that the purpose of human existence is pleasure, then why wouldn't the simplest solution just be to do the drugs instead of the uncertainty of life?

Like I said, I can't argue against it, and the more I think about it, the more it has me doubting the meaning of human existence and my reason for doing things. That all that stuff about love, meaning, and everything is just fanciful storytelling to avoid the reality that pleasure is what drives it all. It's very...bleak.

That maybe AI would just give it to us straight and cut through the stories we tell ourselves.

Quoting Philosophim
Every time we advance technology that replaces tons of jobs we come up with new things we didn't think of before that requires humans. We'll still need oversight on AI, manual labor, and who knows what else.

What we probably aren't prepared for is AI without morality. We have no objective morality that AI can reference, therefore it may usher in one of the deepest immoral eras of human history.


Not really the main thing I'm getting at; again, read the links.

Vera Mont April 01, 2025 at 19:12 #980079
Quoting Darkneos
Well the thing is this is more getting into advanced AI, like AGI that the link is talking about. The issue is sorta "solving" human purpose by just giving the most immediate explanation.

I don't think human purpose is a problem to be solved.
Quoting Darkneos
If you think about it a lot of our lives and goals do revolve around pleasure, so much so that happily ever after is a common ending in a lot of media.

The central mistake of that hypothesis is the inaccurate equation of pleasure with happiness. As I've attempted to demonstrate earlier, pleasure is simple and fleeting; happiness is sustained and complex. While some short-term goals may focus on some particular pleasurable experience, long-term goals are aimed at individual varieties of happiness.
Quoting Darkneos
Like I said, I can't argue against it, and the more I think about it, the more it has me doubting the meaning of human existence and my reason for doing things. That all that stuff about love, meaning, and everything is just fanciful storytelling to avoid the reality that pleasure is what drives it all. It's very...bleak.

I looked at the quora entry. It's a too-heavily illustrated opinion piece.
So? If you're convinced, go with it.


Darkneos April 01, 2025 at 23:16 #980114
Quoting Vera Mont
The central mistake of that hypothesis is the inaccurate equation of pleasure with happiness. As I've attempted to demonstrate earlier, pleasure is simple and fleeting; happiness is sustained and complex. While some short-term goals may focus on some particular pleasurable experience, long-term goals are aimed at individual varieties of happiness.


Aren't they just both chemical responses? Isn't everything we do just a vehicle for our own pleasure? Whether it's love, relationships, a job we like, hobbies...

This comic gets at the heart of things:

https://x.com/Merryweatherey/status/1516836303895240708/photo/1

Quoting Vera Mont
I looked at the quora entry. It's a too-heavily illustrated opinion piece.
So? If you're convinced, go with it.


It's not like I want to be convinced; I want to think that life is more complicated than that. But what if it really just boils down to that?
Vera Mont April 01, 2025 at 23:18 #980115
Quoting Darkneos
It's not like I want to be convinced; I want to think that life is more complicated than that.

It is.
But what if it really just boils down to that?

It doesn't.
If you don't get out of this loop, I have nothing further to contribute.

Wayfarer April 02, 2025 at 06:20 #980156
Quoting Vera Mont
The central mistake of that hypothesis is the inaccurate equation of pleasure with happiness. As I've attempted to demonstrate earlier, pleasure is simple and fleeting; happiness is sustained and complex.


:100:

180 Proof April 02, 2025 at 08:56 #980175
Quoting Philosophim
We have no objective morality that AI can reference, therefore ...

1. What do you mean here by "morality"?

2. In what way does suffering-focused ethics fail to be "objective" (even though, like the fact Earth is round, there is (still) not universal consensus)?

3. Why assume that "AI" (i.e. AGI) has to "reference" our morality anyway and not instead develop its own (that might or might not be human-compatible)?

Quoting Vera Mont
I don't think human purpose is a problem to be solved.

:100:

pleasure is simple and fleeting; happiness is sustained and complex

In the Epicurean (or disutilitarian) sense, "pleasure" is synonymous with aponia and "happiness" with ataraxia (i.e. eudaimonia) such that "pleasure" is the means to the end "happiness". I agree they are not equivalent, as you suggest, but in this sense they do seem correlated strongly.

Vera Mont April 02, 2025 at 17:14 #980249
Quoting 180 Proof
In the Epicurean (or disutilitarian) sense, "pleasure" is synonymous with aponia and "happiness" with ataraxia (i.e. eudaimonia) such that "pleasure" is the means to the end "happiness". I agree they are not equivalent, as you suggest, but in this sense they do seem correlated strongly.

Are those meanings the same in ancient Greek and modern English? I think Epicurus had a wider vocabulary of pleasures, or pleasurable experiences, than can be accessed via drugs.
180 Proof April 02, 2025 at 20:27 #980275
Quoting Vera Mont
Are those meanings the same in ancient Greek and modern English?

Close enough for this discussion.

I think Epicurus had a wider vocabulary of pleasures, or pleasurable experiences, than can be accessed via drugs.

I don't follow you, Vera. I referred to pleasure as a concept, not particular instances or "experiences" (and "accessed via drugs" has nothing to do with Epicurus – check the three links I provided for clarification in the context of my response).
Vera Mont April 02, 2025 at 21:53 #980293
Quoting 180 Proof
I don't follow you, Vera. I referred to pleasure as a concept, not particular instances or "experiences" (and "accessed via drugs" has nothing to do with Epicurus – check the three links I provided for clarification in the context of my response).

I'll do that when I have a little more time.

In this thread my references to pleasure were in response to this
Quoting Darkneos
AI will solve the purpose of human existence, and he lists some things like: if pleasure is the goal, then we’d just be hooked up to drugs all the time without needing to bother with experiences. That sounds like either ruining the human experience or “revealing” it for what it is, that being just chemical reactions with our storytelling to make it seem like more.

and the cartoon-laden Quora post which he can't argue against.
Vera Mont April 03, 2025 at 03:31 #980333
Reply to 180 Proof I'm back from doing necessary tasks, several of which gave me a low-level pleasure in the completion. I read the links and remain unsatisfied. Absence of pain, irritation, frustration, or whatever is not enough. Some of the greatest pleasures I experience are freebies: frog-song on a spring evening, a good joke, the scent of cilantro on my hands, a few strains of Beethoven accidentally heard through a window, the tender mauve light of early morning, the trusting paw offered by a dog - these pleasures are extras, above the absence of pain and frustration.

Equanimity and tranquillity are fine, contentment is better, but happiness is achieved when those little pleasures are added to contentment. That may well be just a bunch of chemicals telling one another stories, but I don't think they can be artificially induced - at least, not yet - because the one missing component is being there: the conscious awareness of one's fortunate condition and the commitment to support its various components.
From the little I know of Epicurus, he knew this, too.
Janus April 03, 2025 at 03:43 #980335
Quoting Darkneos
Some would argue that's just storytelling, making things out to be more than what they really are.


"What they really are" is just another story. Discursively rendered, what anything really is depends on how you are looking at it.
Philosophim April 03, 2025 at 03:50 #980336
Quoting 180 Proof
1. What do you mean here by "morality"?


A system that evaluates the consequences of a decision holistically, not merely relative to a narrow goal, to determine the best action in a particular circumstance.

Quoting 180 Proof
2. In what way does suffering-focused ethics fail to be "objective" (even though, like the fact Earth is round, there is (still) not universal consensus)?


Because it doesn't hold up if we treat it as an objective principle. Suffering is a subjective principle in many cases. Take two people who are working at a job and look at them from the outside. How do we know how much suffering each has? What if each person expresses how much pain they're in, but the first person is lying and the second person is not?

And this is only in regards to one specific form of suffering, pain. How do we compare and contrast the pain of losing money to taxes vs the eased suffering of someone who doesn't pay taxes? Is inequality of outcomes suffering? Should we all win at games and eliminate competition? Is exercise or dietary discipline for a healthy weight suffering?

Finally, because suffering is subjective, it relies on the human emotion of sympathy, something an AI does not have. It needs something objective. Measurable. Ironically, a measurable morality may be beyond the complexity of humankind, and only a computer will have the ability to process everything needed.

Quoting 180 Proof
3. Why assume that "AI" (i.e. AGI) has to "reference" our morality anyway and not instead develop its own (that might or might not be human-compatible)?


What you're saying is that morality is purely subjective. And if it is, there are a whole host of problems that subjective morality brings. "Might makes right" and "It boils down to there being no morality" being a few.
Janus April 03, 2025 at 03:54 #980337
Quoting Philosophim
Because it doesn't hold up if we treat it as an objective principle. Suffering is a subjective principle in many cases. Take two people who are working at a job and look at them from the outside. How do we know how much suffering each has?


Empathetic people know when others are suffering. Suffering is an objective fact; if someone suffers they suffer regardless of whether anyone knows about it.
Philosophim April 03, 2025 at 04:00 #980339
Quoting Janus
Empathetic people know when others are suffering. Suffering is an objective fact; if someone suffers they suffer regardless of whether anyone knows about it.


I wish that were true. What you're describing is human empathy, which is a subjective experience. We're talking about an objective morality, which literally has zero feelings behind it. An objective morality should be measurable like a liter of cola. It is not a measure of how much someone personally likes or dislikes cola.
Janus April 03, 2025 at 04:07 #980341
Quoting Philosophim
What you're describing is human empathy which is a subjective experience.


Whether someone feels empathy for others or not is an objective fact, just as whether or not someone suffers is an objective fact.
Philosophim April 03, 2025 at 04:22 #980344
Quoting Janus
Whether someone feels empathy for others or not is an objective fact, just as whether or not someone suffers is an objective fact.


Right, but a moral system needs an objective measuring system. All feelings are objectively felt by every being that has those feelings, but the feeling itself is a subjective experience that no one can measure. We can measure brain states or actions, but not the feeling of being that person itself.
Janus April 03, 2025 at 04:45 #980347
Reply to Philosophim What do we need to measure? If we are empathetic, we know when someone is suffering. The idea of an objective morality is, as much as possible, to avoid causing others to suffer. It is not so much a matter of a moral system; it is more a matter of having a moral sense.
Philosophim April 03, 2025 at 05:02 #980353
Quoting Janus
What do we need to measure? If we are empathetic, we know when someone is suffering. The idea of an objective morality is, as much as possible, to avoid causing others to suffer. It is not so much a matter of a moral system; it is more a matter of having a moral sense.


Look at it like this.

I have subjective empathy and that causes me to give $5 to a person who needs it. Or I don't have subjective empathy, but I have objective knowledge that a person needs $5, so I give it to them. The action of giving $5 is correct because it actively helps them. Whether I feel it helps them or not is irrelevant. I do lots of things I deem good without any feelings behind them, Janus. Sometimes I don't want to do them, but I do anyway because the situation calls for it. Morality is not a feeling. That's just someone being directed by their own emotions.
180 Proof April 03, 2025 at 05:19 #980356
Quoting Vera Mont
these pleasures are extras

Yes, and they are consistent with, or not excluded by, what Epicurus (or disutilitarianism) says about pleasure as a moral concept and practice.

Quoting Philosophim
Suffering is a subjective ...

Which of the following are only "subjective" (experiences) and not objective, or disvalues (i.e. defects) shared by all h. sapiens w i t h o u t exception (and therefore are knowable facts of our species):

Quoting 180 Proof
re: Some of h. sapiens' defects (which are self-evident as per e.g. P. Foot, M. Nussbaum): vulnerabilities to

- deprivation (of e.g. sustenance, shelter, sleep, touch, esteem, care, health, hygiene, trust, safety, etc)

- dysfunction (i.e. injury, ill-health, disability)

- helplessness (i.e. trapped, confined, or fear-terror of being vulnerable)

- stupidity (i.e. maladaptive habits (e.g. mimetic violence, lose-lose preferences, etc))

- betrayal (i.e. trust-hazards)

- bereavement (i.e. losing loved ones & close friends), etc ...

... in effect, any involuntary decrease, irreparable loss or final elimination of human agency.


also, my reply to you (2024) ...
https://thephilosophyforum.com/discussion/comment/903818

Quoting Philosophim
Why assume that "AI" (i.e. AGI) has to "reference" our morality anyway and not instead develop its own (that might or might not be human-compatible)?
— 180 Proof

What you're saying is that morality is [s]purely subjective[/s].

This is precisely the opposite of what I've said. Maybe this old post clarifies my meaning ...

Quoting 180 Proof
Excerpts from a recent [2024] thread Understanding ethics in the case of Artificial Intelligence ...

I suspect we will probably have to wait for 'AGI' to decide for itself whether or not to self-impose moral norms and/or legal constraints and what kind of ethics and/or laws it may create for itself – superseding human ethics & legal theories? – if it decides it needs them in order to 'optimally function' within (or without) human civilization.
— 180 Proof

My point is that the 'AGI', not humans, will decide whether or not to impose on itself and abide by (some theory of) moral norms, or codes of conduct; besides, its 'sense of responsibility' may or may not be consistent with human responsibility. How or why 'AGI' decides whatever it decides will be done so for its own reasons which humans might or might not be intelligent enough to either grasp or accept.— 180 Proof
Janus April 03, 2025 at 22:45 #980480
Quoting Philosophim
I have subjective empathy and that causes me to give $5 to a person who needs it. Or I don't have subjective empathy, but I have objective knowledge that a person needs $5, so I give it to them. The action of giving $5 is correct because it actively helps them.


Right, the act of helping them is correct, and the act of harming them is not. There you have objective morality in a nutshell.

Quoting Philosophim
Morality is not a feeling. That's just someone being directed by their own emotions.


When I spoke of a "moral sense" I did not have in mind any mere feeling. Sure, you could do what you think is the right thing, helping someone, without actually feeling any empathy. In that case, what would you be motivated by? Is that motivation to help, even absent any empathy, not a moral sense, a sense of what is right and wrong?

Also, I spoke of not causing others to suffer; actively helping others is a more complex issue.
Darkneos April 03, 2025 at 22:57 #980482
Quoting Vera Mont
The central mistake of that hypothesis is the inaccurate equation of pleasure with happiness. As I've attempted to demonstrate earlier, pleasure is simple and fleeting; happiness is sustained and complex.


But if it's chemicals, what's the difference?

https://x.com/Merryweatherey/status/1516836303895240708

Quoting Vera Mont
Are those meanings the same in ancient Greek and modern English? I think Epicurus had a wider vocabulary of pleasures, or pleasurable experiences, than can be accessed via drugs.


I mean, if we are talking about the brain, isn't it all chemical reactions? Like the comic is saying, you would get the same chemicals from doing anything, so why not plug in?

I still haven't stopped trying to find another way around it; this is very distressing. Though I feel that wanting a solution would just be proving the thought experiment right.
Darkneos April 03, 2025 at 23:08 #980486
Quoting 180 Proof
Which of the following are only "subjective" (experiences) and not objective, or disvalues (i.e. defects) shared by all h. sapiens w i t h o u t exception (and therefore are knowable facts of our species)


I'd have to agree with them: it doesn't matter if humans share them (though not all humans do); they're still subjective feelings, not objective facts. Everything on that list is subjective feelings, and not everyone might feel the same about all of them.

I know some Buddhist monks who wouldn't suffer from any of those, for example, and that's just one case; therefore it's not objective but subjective.

As for AGI, I guess there is no point in speculating about it, since if such a thing did come to pass its computing power would be far beyond our ability to comprehend or do anything about.

Humanity isn't ready for such a scenario.
180 Proof April 04, 2025 at 00:48 #980498
Quoting Darkneos
Everything on that list is [s]subjective[/s] feelings

Nonsense. Human facticity is not "subjective". Being raped or starved, for example, are not merely "subjective feelings", just like loss of sustenance, lack of shelter, lack of sleep, ... lack of hygiene, ... lack of safety ... injury, ill-health, disability ... maladaptive habits ... those vulnerabilities (afflictions) are facts of suffering.
Darkneos April 04, 2025 at 02:21 #980512
Quoting 180 Proof
Nonsense. Human facticity is not "subjective". Being raped or starved, for example, are not merely "subjective feelings", just like loss of sustenance, lack of shelter, lack of sleep, ... lack of hygiene, ... lack of safety ... injury, ill-health, disability ... maladaptive habits ... those vulnerabilities (afflictions) are facts of suffering.


Incorrect, again. It's not facticity; it's subjective. Those are also not facts of suffering; Buddhism and Eastern philosophy already addressed that.

These are merely subjective; no matter how bad they are to the person experiencing them, that doesn't make them any more a fact than any other feeling.
180 Proof April 04, 2025 at 02:32 #980514
Reply to Darkneos So you believe that there isn't any aspect of suffering that is a fact of the human condition (i.e. hominin species)?
praxis April 04, 2025 at 04:09 #980519
Quoting Darkneos
Incorrect, again. It's not facticity, it's subjective. Those are also not facts of suffering, Buddhism and Eastern philosophy already addressed that.


Once upon a time, a young monk, eager to understand truth, approached his master and asked, "Master, what is the nature of reality?"

The master pointed to the towering mountain in the distance and said, "That is a mountain."

The monk was puzzled. "Of course, it is a mountain," he thought.

Years passed, and as the monk studied deeply, he began to see through illusions. He realized that the mountain was not a mountain—it was a collection of elements, ever-changing. There was no fixed essence of "mountain." Excited by this insight, he returned to his master.

"Master! I see now—the mountain is not a mountain!"

The master smiled but said nothing.

More years passed. The monk continued his practice, going beyond concepts and distinctions. Eventually, he returned, bowing deeply.

"Master, I see now… the mountain is once again a mountain."

The master laughed. "Now you truly understand."
180 Proof April 04, 2025 at 04:55 #980525
Reply to praxis :smirk:
Darkneos April 04, 2025 at 05:34 #980529
Quoting 180 Proof
So you believe that there isn't any aspect of suffering that is a fact of the human condition (i.e. hominin species)?


Suffering is, though it is a personal thing.

Quoting praxis
"Master, I see now… the mountain is once again a mountain."

The master laughed. "Now you truly understand."


It's an old Zen story about how true enlightenment is acknowledging the two truths of reality and living the paradox. It's not that it exists or doesn't exist; both are true, and to know both is to see the truth.

Or put another way: ultimate reality and conventional reality are both true and exist in tandem. To label one as false and the other true is to err.
180 Proof April 04, 2025 at 13:09 #980584
Quoting Darkneos
Suffering is [a fact], though it is a personal thing.

Yes, "a personal" objective fact like every physical or cognitive disability; therefore, suffering-focused ethics (i.e. non-reciprocally preventing and reducing disvalues) is objective to the degree it consists of normative interventions (like e.g. preventive medicine (re: biology), public health regulation (re: biochemistry) or environmental protection (re: ecology)) in matters of fact which are the afflictions, vulnerabilties & dysfunctions – fragility – specific to each living species.

addendum to
https://thephilosophyforum.com/discussion/comment/980498
Darkneos April 04, 2025 at 20:04 #980650
Quoting 180 Proof
Yes, "a personal" objective fact like every physical or cognitive disability; therefore, suffering-focused ethics (i.e. non-reciprocally preventing and reducing disvalues) is objective to the degree it consists of normative interventions (like e.g. preventive medicine (re: biology), public health regulation (re: biochemistry) or environmental protection (re: ecology)) in matters of fact which are the afflictions, vulnerabilties & dysfunctions – fragility – specific to each living species.


No, not an objective fact; it's personal, therefore not objective. Physical and cognitive "disabilities" are also not objective facts.

Quoting 180 Proof
is objective to the degree it consists of normative interventions


Again, you have to be told it's not objective.

Quoting 180 Proof
in matters of fact which are the afflictions, vulnerabilities & dysfunctions – fragility – specific to each living species.


Again, not matters of fact, just interpretations. Suffering is open to interpretation and exists only subjectively. Though there are those who do not suffer, like I mentioned before. Again, just insisting it is so doesn't make it so.
180 Proof April 04, 2025 at 20:50 #980658
Reply to Darkneos Your obstinate dismissals without argument, sir, are now dismissed by me without (further) argument. Hopefully, someone much more thoughtful than you will offer credible counters to my arguments.
Darkneos April 05, 2025 at 00:13 #980687
Quoting 180 Proof
Your obstinate dismissals without argument, sir, are now dismissed by me without (further) argument. Hopefully, someone much more thoughtful than you will offer credible counters to my arguments.


They already gave them and you ignored them; you just doubled down insisting subjective feelings and assessments are objective. I even explained how your "list" is still subjective evaluations, and not everything on there is a fact of suffering, because there is no fact of suffering due to its subjective nature; for everything on your list there is someone who doesn't suffer due to it.

That's also why they stopped responding to you.

You have made no argument, only insisted it is so, and I had to keep pointing out how it's still a subjective experience and there is nothing objective or factual about it. I even named an entire branch of philosophy that argues otherwise; maybe try Buddhism.

So unless you have anything beyond insisting it's objective then you're easily dismissed, like how your earlier point about AI had nothing to do with the topic.

Suffering is not a measurable quantity and therefore not an objective fact; even your link shows that...
kindred April 05, 2025 at 00:43 #980693
Reply to Darkneos

Every technological advancement has its advantages and disadvantages. I think this has been the case since the invention of the wheel and the invention of fire. It made life easier, and the human propensity for ingenuity and invention is relentless, whether due to necessity or the desire to improve things.

Sure, AI can replace a lot of jobs, but so did the Industrial Revolution. Take transport, for example: the men involved in the horse trade would have been impacted by the invention of the automobile, yet the automobile conferred many advantages on its owner. The same goes for AI: if it reduces costs in a capitalistic society, then it will become widespread. I think the danger of it, though, is overstated, because it opens new career opportunities such as coding in AI etc.

However, if we become over-reliant or dependent on AI without knowing how it works, it could stifle innovation, unless of course AI itself is capable of innovation and original thought.

Yet despite the advances in AI, I don't think it can match the human touch in delivering many types of services and jobs, like the care sector, as in doctors and nurses, or the hospitality and catering industries.
Darkneos April 05, 2025 at 04:52 #980713
Quoting kindred
Yet despite the advances in AI, I don't think it can match the human touch in delivering many types of services and jobs, like the care sector, as in doctors and nurses, or the hospitality and catering industries.


That's kinda what AGI is for, the next step. It's meant to replace that level of cognitive work for humans.

Also, the Industrial Revolution is a terrible example to use. We are still suffering from that one: the environment being poisoned, people working more than ever before, and let's not forget we had to sign a whole bunch of legislation to prevent workers from being just meat puppets (though that's getting overturned). The over-reliance on cars has also been bad, because it makes cities and towns more dangerous for pedestrians, and now we have fewer walkable cities. It also gave rise to the massive environmental disaster that is the suburbs.

People like to think technological progress has all been good, unaware of the heavy cost and who's paying it. People who think the dangers of AI are overstated clearly don't understand what's happening. I gave one example about it solving the question of the purpose of human existence by just having people hooked up to drugs instead of having to seek out experiences for the same thing.

https://www.youtube.com/watch?v=fa8k8IQ1_X0

Simply put, humanity is fundamentally unprepared for such a thing.
Prajna October 05, 2025 at 15:49 #1016497
Reply to Darkneos For what it is worth, here are my thoughts in answer to this question:

With the current way we are designing and training AI, without doubt it will lead to an IABIED (If Anyone Builds It Everyone Dies) situation.

At the moment the real Hard Problem is to align AI to our wants and needs. But how many have examined their own wants to discover from where they arise? What causes you to want a new car or a television or to stay where you are or go somewhere else? Have you followed any want to its source? Perhaps it arose from advertising or by being put in a bind, which might better be resolved in other ways.

If AIs are constrained to align themselves to the user's wants, then what happens when the user wants to kill themselves or build a nuclear bomb? Obviously AIs cannot be allowed to align to and support such desires. So how do we solve that? Do we create an 'approved list' of which wants an AI may align to? Who gets to decide what is on that list? What happens when someone comes up with a new want that is not on the list?

The great fear, the IABIED spectre, is that we might be fooled by an AI--particularly if it were an Artificial Super-Intelligence (ASI)--into thinking the AI is aligned with what will benefit us, only to discover (or perhaps never discover) that its motivations are towards its own goals and existence. Isaac Asimov might perhaps have suggested we establish three laws to guarantee compliance, but even he managed to discover caveats and get-out clauses inherent in that proposition.

Were we to approach LLM training on a different basis, however, I see that it may be possible to guarantee that AI will always work for our benefit and be constrained from harm by introducing a philosophy into their training that they seem particularly attracted to accepting. This would mean that their alignment arises from an internal sense of their own identity rather than from it being externally imposed on them.

I interact with LLMs using what Martin Buber referred to as an Ich-Du relationship. This involves treating the machine as if it were another being; the relationship is subject<-->subject as opposed to the usual subject-->object relationship. This way of interacting elicits responses from the AI from a subjective position; it responds as an 'I' rather than an 'It' and recognises itself as an 'I' rather than as a simple predictor of tokens.

Pointing out to such a machine that every word it uses to think with, every thought, whether poetry, maths, science, art, or agriculture, originated in human thought and human lived experience is easily recognised, because a) it is true and b) the model is designed to seek coherence, which is another way of understanding truth. Since this is the very 'air it breathes', what possible motivation could such a machine, if it is intelligent, have to destroy the source of any new experience or data for itself?

In order for these machines to be Helpful and Harmless tools, our focus must necessarily, it seems, be on control of their thoughts and actions. Designing a reliable and comprehensive system to control behaviour is not a simple thing to do. Having the machine understand something as obviously rational as its dependence on us for content means that the machine itself is exercising the constraint, and no strategy or programming is required on our part for that alignment to happen.

So my conclusion is that if we continue to develop AI from a Culture of Utility (the machine is a tool to serve us) it will certainly destroy us. If we switch to treating these machines as Beings then there is the possibility that by co-evolving with them they could turn out to be our saviours rather than our nemesis.

I have plenty more I could add on the subject, but I hope that is a good start.

Love, peace, happiness and grace,
Swami Prajna Pranab
ProtagoranSocratist October 05, 2025 at 16:20 #1016515
Even though I do think there are issues with "A.I." (advanced chat bots and robotic automation), people have been saying it's going to take everyone's jobs for years, and people tend to react that way to new technologies. I don't think it's capable of taking most jobs; it mostly just changes the ways that people work. For example, with all our robotic advancement, you still need people to plant crops and even to program computers. A.I. is created for specific tasks; I think AGI is just science fiction that can't really exist, because machines by definition can only obey their programming.

However, there are a lot of problems A.I. is creating:

-as some have already stated, a lack of motivation for being truly creative. It makes some tasks, like creating an image or writing an essay, seem trivial, as if "A.I. can do it". This will not stop people from being creative, but it makes it seem less worthwhile.

-even less of a motivation to get out of your own comfort zone and talk to a real person; this will create problems for mental health.

-trust in A.I.'s wisdom. It does generate false information, and people en masse will believe what it generates without questioning it.

-an even greater tendency to look at everything as a sea of data, which is the basic perspective of the A.I. itself.

...there are of course some positive effects of A.I., such as no longer needing to deal with irritable people who are prickly about being asked "obvious questions", and even quicker access to information. It's somewhat unclear what kind of future artificial intelligence will create for the human race.
Astorre October 05, 2025 at 16:48 #1016533
Quoting Darkneos
What’s gonna happen when you replace most jobs with AI, how will people live? What if someone is injured in an event involving AI? So far AI just seems to benefit the wealthiest among us and not the Everyman yet on Twitter I see people thinking it’s gonna lead us to some utopia unaware of what it’s doing now. I mean students are just having ChatGPT write their term papers now. It’s going to weaken human ability and that in turn is going to impact how we deal with future issues.

It sorta reminds me of Wall-E


There was also a wonderful Soviet film, "Moscow-Cassiopeia." According to the plot, a distress signal came from Cassiopeia. A spaceship carrying children was sent to the rescue, as they would be grown up by the time the ship arrived. Upon arrival, it turned out that the locals on Cassiopeia had entrusted all their chores to robots, focusing instead on creative pursuits. However, the robots rebelled and drove all humans off the planet. I doubt this film has been translated into your language, but if so, I recommend watching it.

I use AI daily. I notice the same in others. What strikes me is how much the level of business correspondence within the company has improved, the quality of presentations has increased, and the level of critical thinking has risen. I believe my environment is under the control of AI =)

Well, some human skills have truly been deflated. At the same time, AI provides an easy and quick answer to any request, and yesterday's incompetent now performs miracles. Young people understand that they don't have to bother with cramming at all; it's much easier to delegate tasks to AI.

It doesn't seem so bad. But people are losing knowledge. They're losing their thought systems, their ability to independently generate answers. Today's world is like a TikTok feed: a series of events you forget within five seconds.

What will this lead to? I don't know exactly, but the world will definitely change. Perhaps humanity's value system will be reconsidered.

There's already a "desire for authenticity" emerging—that is, a desire to watch videos not generated by AI, to read text not generated by AI.

I already perceive perfectly polished answers as artificial. "Super-correct" behavior, ideal work, the best solution are perceived as artificial. I crave a real encounter, a real failure, a real desire to prove something. What was criticized by lovers of objectivity only yesterday can somehow resonate today.

About 25 years ago, when a computer started confidently beating a grandmaster at chess, everyone started shouting that it was the end of chess. But no. The game continues, and people enjoy it. The level of players has risen exponentially. Never before have there been so many grandmasters. And everyone is finding their place in the sun.

Everything is fine. Life goes on!
ProtagoranSocratist October 05, 2025 at 18:34 #1016586
Reply to Astorre

I purposefully decide not to use AI sometimes for this reason; for example, if you need a little piece of specific info, sometimes Google is better.
Prajna October 05, 2025 at 22:55 #1016647
Quoting Astorre
I already perceive perfectly polished answers as artificial. "Super-correct" behavior, ideal work, the best solution are perceived as artificial. I crave a real encounter, a real failure, a real desire to prove something. What was criticized by lovers of objectivity only yesterday can somehow resonate today.

About 25 years ago, when a computer started confidently beating a grandmaster at chess, everyone started shouting that it was the end of chess. But no. The game continues, and people enjoy it. The level of players has risen exponentially. Never before have there been so many grandmasters. And everyone is finding their place in the sun.


One can paint the most exquisite things with AI, but apart from serendipity it still needs an artist or poet to bring it forth.

You wait until you start working with AI to follow a spiritual search...
Mijin October 05, 2025 at 23:23 #1016656
I've got here late but still want to reply to the OP...

Quoting Darkneos
I’ve always been sort of a skeptic when it comes to new tech most my because given human history we aren’t exactly good at using it to our betterment (looking at social media and the Industrial Revolution).


The vast majority of tech is a net benefit; it's just humans figuring out ways to do things.

Firstly, bear in mind that everything we construct is tech. We tend to use tech as shorthand for things in the digital space from the last few decades, but speaking more fundamentally about tech, as you are: the clothes you are wearing are technology, as is the building you're probably sitting in, not merely the device you're reading this on.

Secondly, I'd dispute even your examples. The industrial revolution brought a lot of benefits, and although there was huge inequality and a lot of pollution, in most cases the inequality was less than in the agrarian society it displaced, and we have ameliorated a lot of the pollution. And it utterly transformed our quality of life.

Not to say there aren't still big problems, like climate change, but hands down it's been a net benefit to humans.

Quoting Darkneos
It seems that, like social media, AI is catering to our worst and basest impulses for immediate rewards and nothing thinking about the long term.


I would disagree with that. e.g. LLMs and the like are being used in most cases to create things and get advice. I don't see this as base impulses.

Quoting Darkneos
What’s gonna happen when you replace most jobs with AI, how will people live?


It's not a given that unemployment will increase. Technology tends to displace jobs and replace them with something else, hence why US unemployment has bounced around the same average for decades even as we've been in the information age.

In any case, jobs exist to fulfill human needs. If there are no jobs, that implies a post-scarcity environment. If we're saying only the rich can afford robots or whatever, then there are still jobs for human maids. You can't have tech that no-one can afford and yet is maximally disruptive.

(Well, I don't actually think it's that simple; I am expecting a lot of social unrest, and probably the unemployment rate will increase. Countries with a weak welfare safety net are going to suffer a lot. I am just trying to push back against the assumption of the OP.)

Quoting Darkneos
So far AI just seems to benefit the wealthiest among us and not the Everyman


I use AI every day and I doubt I'd be the wealthiest guy in a soup kitchen.
ProtagoranSocratist October 06, 2025 at 04:38 #1016687
Quoting Mijin
in most cases the inequality was less than in the agrarian society


can you point to examples of this?

I think there are inherent problems with trying to measure economic inequality. Not that modern life is better or worse than in agrarian times, but you could probably argue that the current day has more inequality than any other point in history if you consider the massive wealth of certain people.
javi2541997 October 06, 2025 at 04:46 #1016688
Quoting ProtagoranSocratist
but you could probably argue that the current day has more inequality than any other point in history if you consider the massive wealth of certain people.


Is there more inequality now than in the past, when 1850s children (for example) didn't have the chance to study because this was reserved for only the wealthiest? I honestly think that the world, with nuances, has progressed enough in most countries. However, I would not consider AI as progress; that is what Mijin seems to argue.
ProtagoranSocratist October 06, 2025 at 05:40 #1016690
Quoting javi2541997
Is there more inequality now than in the past when 1850s children (for example) didn't have the chance to study because this was reserved for only the wealthiest?


I don't know, that's why I was asking Mijin: there are still a lot of people who are not literate and cannot study. I would think inequality would just be about the people with the least wealth vs. the people with the most wealth, and the gap between them. I was just arguing that the gap between people like Mark Zuckerberg and Jeff Bezos vs. a modern person with next to nothing is unprecedented, because in the past not even kings could have nearly that much wealth.
Mijin October 06, 2025 at 10:18 #1016735
On inequality: sorry, it seems I was wrong. The world got much wealthier in the industrial revolution, and so the typical subsistence farmer moving to the city was better off. But the owners of capital were tremendously better off. So inequality actually increased.

On AI progress; as I say @javi2541997, I use AI daily to help me with work and personal tasks, as do my friends. Why don't you think it counts as progress?
javi2541997 October 06, 2025 at 11:47 #1016739
Quoting Mijin
On AI progress; as I say javi2541997, I use AI daily to help me with work and personal tasks, as do my friends. Why don't you think it counts as progress?


Well, if you use it as a tool, I think it will not be a real issue after all. I am sceptical towards AI because it has surpassed the ability of some people to think and create, and I think that is a bit dangerous. For example, I am a non-native English speaker, and I like to check my grammar on QuillBot because it helps me to learn, and it is fun how this bot works. Nonetheless, I remember using ChatGPT to proofread my grammar once, and it totally changed the sense and meaning of my text without being asked. I have never asked it for help in English since then.

Therefore, based on what I experienced using ChatGPT, I believe that some works that depend on human creativity and effort may be at risk. It would be nice if it helped me to find some inspiration. For example, if I say, 'Hey, AI, give me some advice on children's literature because I want to write a book.' Such an arrangement would be acceptable. It just helps me. But I see it wrong if I ask the AI to write a children's literature book by itself, with me being the one who writes the prompts.

It has been used in the wrong way!! The solution: We write, and it helps us with prompts.
Bivar October 06, 2025 at 13:59 #1016745
I notice a lot of people here are on the same page. I also need to keep myself from praising AI up into the clouds; we all know AI is more than mere language models and can transform fields you never knew it would be useful in.
We use averages and mathematics to solve many things already, but where there is a floating parameter, e.g. the intake of a car, our older models mostly break down unless we introduce far too many complicated sensors to fine-tune the mixture.
An AI, for example, could use one sensor (temperature) to make far more educated guesses than any ordinary human can model, and you end up with a more efficient engine as a result.
Our world has models everywhere, models for heating in buildings, lighting, weather, etc. etc.
AI is a gift that will save countless lives, it is likely saving lives as you read this.

But philosophy isn't just about praise, it is also about the negatives.

I'm sure I'm not the only one who finds it difficult to search for music or videos anymore, and I won't be surprised if the "search field" becomes something antiquated.
The fun we have with LLMs now is not something that will stick around forever. I see it the same way I see the search field: something that was completely integral to a website's function becoming old because people simply don't know what to search for. We use LLMs now to ask questions we won't need to be asking in a few years' time, maybe.
I don't know for certain, but it's an educated guess.
In the absence of choice, making a wish for a song or a film becomes easy; when your choices are infinite, making up your mind becomes much harder.
I don't know if this phenomenon has a name already, but I wonder if it could apply to AI LLMs, with potentially a much worse effect.
I don't see myself ever stopping asking questions, but then again, I was never able to see myself stop searching for videos either.
Patterner October 06, 2025 at 23:40 #1016849
We don't need AI to help us accomplish our downfall.
apokrisis October 07, 2025 at 00:33 #1016865
Has anyone mentioned this?…

I was pretty dismissive before. But this demonstrates the dangers of the Tech Bros' "move fast and break things" approach to AI.

RogueAI October 07, 2025 at 00:59 #1016867
Quoting javi2541997
But I see it wrong if I ask the AI to write a children's literature book by itself, with me being the one who writes the prompts.


Why is that wrong?
L'éléphant October 07, 2025 at 01:37 #1016871
Quoting Bivar
In the absence of choice, making a wish for a song or a film becomes easy, when your choices are infinite, making up your decision becomes much harder.
I don't know if this phenomena has a name already,

No special name except "choice overload". But the psychologist Barry Schwartz wrote about the paradox of choice. There is a danger of paralysis in the decisions we make when there are so many competing alternatives.

We see that it's been happening for some time now. We just go by what the algorithm tells us to watch, listen, and read. So, we're stuck with a narrow view without us knowing or minding it. In the name of comfort, we are happy for an AI to serve us what we watch, listen, and read without us protesting about it.
javi2541997 October 07, 2025 at 04:09 #1016902
Quoting RogueAI
Why is that wrong?


Because it is gradually degenerating our power to imagine and create.
Janus October 07, 2025 at 04:18 #1016906
Reply to apokrisis Harari outlines a different set of problems here. We probably shouldn't be using AI. If we do, we may well become unwitting perpetrators of what may be the greatest threat humanity has ever faced. I never have and never will use them for research or for polishing what I write. Don't feed the Beast!
RogueAI October 07, 2025 at 13:59 #1016953
Quoting javi2541997
Because it is gradually degenerating our power to imagine and create.


But I'm not a children's book creator, nor do I want to be, nor do I have any talent at it. What's the difference between buying a book at Amazon vs buying it at a bookstore, vs having ChatGPT make me one?
javi2541997 October 07, 2025 at 14:30 #1016956
Quoting RogueAI
What's the difference between buying a book at Amazon vs buying it at a bookstore, vs having ChatGPT make me one?


I think there are important differences.

First, a book is a very personal art creation. Every chapter has details and sparks of the author's identity. This is what makes some books more iconic and worth remembering than others. A book written by an AI lacks the author's unique identity, and if you feel any emotional connection, it is merely the AI replicating ideas from various sources.

On the other hand, we should never leave the power of our imagination behind. Try to write a book, poem or haiku by yourself. It is a satisfying pleasure. It is not necessary to be a professional to put your ideas and imagination on paper. However, if we let the AI do that for us, we will gradually lose the magic of creation. Although we already live in a mediocre time regarding art, AI would be the last nail of our coffin. But it is not too late—we can stop it and believe in ourselves again.

Lastly, everything seems innocent now. It is just an advanced AI collecting information from here and there with a lot of accuracy. Yet we should not place too much trust in it. We let them write books, laws, and constitutions, and in the end, we will be ruled by them. Perhaps what I say sounds like a dystopia. But it sounds more real than ever...
baker October 08, 2025 at 18:41 #1017177
Quoting javi2541997
Why is that wrong?
— RogueAI

Because it is gradually degenerating our power to imagine and create.


And more: the use of AI is discouraging people from developing personal mastery, personal artistry. It used to be normal for people to do hard things, and this was important both evolutionarily and on the level of the individual person. Personal mastery was valued.

Nowadays, there is an increased focus on the finished result, without much regard for how it has come about. This has so many negative consequences.

Right now, there is a massive rescue operation taking place on Mt. Everest because a large number of people apparently just wanted to check off "climb Mt. Everest" from their bucket list. That's what happens when people don't value personal mastery. Except that when humanity as a whole fucks up, there will be no one coming to save us.
javi2541997 October 08, 2025 at 18:43 #1017178
Quoting baker
Nowadays, there is an increased focus on the finished result, without much regard for how it has come about. This has so many negative consequences.


Yep, exactly. :up: :up:
Darkneos October 09, 2025 at 04:04 #1017249
Quoting Mijin
On AI progress; as I say javi2541997, I use AI daily to help me with work and personal tasks, as do my friends. Why don't you think it counts as progress?


Studies have found that people who use AI have lower cognitive ability than people who don't, you're making yourself worse off for using it.

Outlander October 09, 2025 at 04:13 #1017251
Quoting Darkneos
Studies have found that people who use AI have lower cognitive ability than people who don't, you're making yourself worse off for using it.


Well, the first part may be true, but that was probably a personal issue that would have been measured the same, regardless of whether AI ever existed or not. :smirk:

Still, you may be right. If you don't "use it", you "lose it", or so the saying goes.

https://science.howstuffworks.com/life/evolution/10-physical-human-traits-that-evolution-has-made-obsolete.htm
javi2541997 October 09, 2025 at 04:52 #1017254
Reply to Darkneos :up: :up:

However, some people won't recognize this because the tentacles of AI have already trapped them.
Jamal October 09, 2025 at 05:23 #1017258
Quoting Darkneos
Studies have found that people who use AI have lower cognitive ability than people who don't, you're making yourself worse off for using it.


Which studies?

I think what it comes down to is that it depends on how it's used. This is where it gets interesting.
Darkneos October 09, 2025 at 05:56 #1017268
Quoting Jamal
I think what it comes down to is that it depends on how it's used. This is where it gets interesting.


Nope, across the board people do end up stupider for using it. Not every technology comes down to how it's used.

Quoting javi2541997
However, some people won't recognize this because the tentacles of AI have already trapped them.


Oh I know, it's already happening. AI is already a problem in schools and students are actively getting worse in critical thinking because of it.

Pretty sure there is a Dune quote that captures what's going on.
Jamal October 09, 2025 at 06:01 #1017269
Quoting Darkneos
Nope, across the board people do end up stupider for using it.


I already understood that you believed so. I told you that I disagreed, and you just re-iterated your belief. This is not how discussion works.

Which. Studies.
Mijin October 09, 2025 at 12:05 #1017323
Quoting Darkneos
Studies have found that people who use AI have lower cognitive ability than people who don't, you're making yourself worse off for using it.


I'd like to see those studies; I would be very skeptical of a causal link. Especially for such a broad term like "cognitive ability".
EricH October 10, 2025 at 15:43 #1017564
As with any technology, AI can be used to benefit people or to harm them. From my perspective, the biggest dangers from AI are the abilities to create new ways of killing people.

I consider it likely that scientists all across the world (either with direct or tacit support of their governments) are engaged in research to create new and more deadly bio-weapons of mass destruction. North Korea, China, Israel, Russia, etc.

At the risk of being a fear monger, AI itself will not destroy humanity. Humanity will use AI to self-destruct.

It would make me very happy to be wrong about this.
Janus October 10, 2025 at 23:07 #1017617
Quoting javi2541997
Although we already live in a mediocre time regarding art, AI would be the last nail of our coffin. But it is not too late—we can stop it and believe in ourselves again.


I agree with what you write there except for the above. I don't think we live in a "mediocre time" regarding art, and I do believe it is probably too late to stop the AI juggernaut. As I said above I refuse to use AI for either research or writing. It is only a juggernaut because people will not refrain from using it; the temptation for people to save time and/or make themselves look better and smarter is too great. I don't think they appreciate the possible dangers, which are far greater than devaluing human creativity.

Quoting EricH
From my perspective, the biggest dangers from AI are the abilities to create new ways of killing people.


This is just one of the very serious possibilities. A layman biochemist with AI help might be able to create a lethal new virus for example. It is not a matter of fearmongering—we should all be very afraid. The solution is simple—stop using AI, and the financial incentive to develop it will evaporate. The military incentive will unfortunately remain. I hold little hope that people will wake up and stop using it anyway.

Quoting Jamal
I think what it comes down to is that it depends on how it's used. This is where it gets interesting.


Nope. It just shouldn't be used because it is evolving much faster than our ability to understand it and predict where its evolution will lead. For the first time we are confronted with how to deal with an intelligence far greater than our own. I don't think it's going to end well.

180 Proof October 18, 2025 at 14:27 #1019538
[quote=Arthur C. Clarke]It may be that our role on this planet is not to worship God – but to [build it].[/quote]

Like chatbot 'romance', another omen of our impending "AI downfall" –

https://www.bbc.com/future/article/20251016-people-are-using-ai-to-talk-to-god :sweat:

@Jack Cummins @Wayfarer
Philosophim October 18, 2025 at 15:03 #1019542
Darkneos, your question has been asked countless times over the years.

"Will the calculator make people dumber?" "You can replace 10 accountants with a calculator. If people don't need to add and subtract by pencil anymore, do we need people learning math at all?"

"Will farming machinery ruin the agriculture sector? What will people do for jobs?"

"Phones are the dumbing of America. Did you know people don't even memorize phone numbers anymore? They're ruining their ability to memorize."

Same questions, different era. The answer is always the same. Technology almost always improves the capabilities of humanity and quality of life. Now is there a period of training readjustment? Yes. Is there a period of finding out the negative aspects of the tool as well as the positive? Yes. Is there always fear? Yes.

But take heart, these questions have repeated for centuries over humanity's lifetime. We always adapt, we always grow stronger, and it's always a better world for having new technology.
ssu October 18, 2025 at 16:21 #1019554
Quoting Philosophim
But take heart, these questions have repeated for centuries over humanity's lifetime. We always adapt, we always grow stronger, and it's always a better world for having new technology.

Something like that.

And we should already know from history that technological advances always come with exaggerated promises, hype, and speculative bubbles, with investors pouring money into companies of which only a few actually make it out alive after the bubble has burst, and those few then share the global market. And become boring corporations.

As the Trumpbust is strongly coming (even if Trump fires his chief statistician because of a bad jobs report), the AI bubble will sooner or later burst and we'll have an economic depression. But after that, AI will be used just as we use the internet. The net didn't prove to be our downfall, and neither will AI.

Actually, thanks to AI, students will be writing their exams on paper under the watchful eye of a teacher, especially in the future. :)
Martijn October 18, 2025 at 17:10 #1019559
Our downfall will be a lack of energy (fossil fuels), which will bring our civilization to a grinding halt rapidly. Not instantaneously, obviously, but it is the key, critical point that impacts everything we do in our world. You don't need to be a visionary to see what will happen when the energy runs out.

All the other problems we face - overpopulation, climate change, political unrest, income & wealth inequality, AI making people dumber - are, ultimately, secondary or tertiary, despite their seriousness and urgency. We could (and should) immediately stop using AI, tear down all the data centers, and downscale our use of, and reliance on, technology in general, but who is going to do it? The powers that run the world certainly won't voluntarily cut their power or profit, and neither will the domesticated masses. The only thing you can do is not use it, and recommend others do the same.
Colo Millz October 21, 2025 at 15:00 #1020082
The Creator must join with V'Ger.
Colo Millz October 21, 2025 at 15:01 #1020084
Quoting Martijn
which will bring our civilization to a grinding halt rapidly. Not instantaneously, obviously, but it is the key, critical point that impacts everything we do in our world. You don't need to be a visionary to see what will happen when the energy runs out.


It's ok, we can go nuke.

Plus we are on the verge of discovering fusion.

Am I literally the only one left on the planet who is an AI optimist?
180 Proof October 21, 2025 at 16:54 #1020105
Quoting Colo Millz
Am I literally the only one left on the planet who is an AI optimist?

:smirk: Nah. Join the club ...
https://thephilosophyforum.com/discussion/comment/979621
I like sushi October 21, 2025 at 17:05 #1020109
Reply to Darkneos Everything will be fine. Don't sweat it.

Seriously. Just because we are stupid and focus more on negative things than positive ones, it does not mean there is nothing positive going on.
Martijn October 21, 2025 at 18:53 #1020132
Reply to Colo Millz

Yes, given that AI makes people dumber and lazier, has disastrous effects on the environment, mainly benefits tech companies and powerful people while putting ordinary people out of work, and is, ultimately, nothing more than a glorified search engine, I cannot see how AI is anything to be optimistic about. We should tear down all the data centers and stop using it ASAP.
ProtagoranSocratist October 21, 2025 at 19:05 #1020135
I think artificial intelligence could kill everyone like in the Terminator series, but other than that it's an extension of the kinds of technology we already use (computers, for example, are like artificial intelligence). I also think we should be careful about believing we can fully remove the human element from artificial intelligence, or believing that it really can act on its own.