Using Artificial Intelligence to help do philosophy

Bret Bernhoft December 18, 2022 at 20:53 8950 views 40 comments
I recently had a conversation with ChatGPT about whether an AI system can philosophize on its own, or whether it is more appropriately perceived as an aid/tool for the human philosopher.

In short, ChatGPT agrees that AI systems cannot create unique philosophies, and that users of AIs should understand this, but also that artificial intelligences are extremely useful for running thought experiments and calculating large sums of data. What are your thoughts?

Comments (40)

alan1000 June 03, 2024 at 12:36 #908179
I would say that, at the current level of our technological development, ChatGPT has got it just about right... but that in ten or twenty years more, perhaps not.

You don't need AI to calculate large sums of data. We've been doing that since the Romans invented the abacus. (Little-known fact: expert abacus operators in Hong Kong could outperform the original IBM PC-XT!)

AI comes more to the foreground in running thought experiments, but at this stage it does not seem to outperform human effort. Thought experiments, by definition, involve conceptual scenarios which go beyond any prior experience, and AI is (for the moment) not very good at this, except in certain very specialised fields like antibiotic research, where its development has been very intensive. But, for the moment, cannot tell us anything useful about how dark matter prevents the collapse of stares near a black hole,

For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points.
alan1000 June 03, 2024 at 12:42 #908182
"But, for the moment, cannot tell us anything useful about how dark matter prevents the collapse of stares near a black hole,"

Apologies for the bad grammar. Must proofread more carefully.
Bret Bernhoft June 07, 2024 at 04:59 #909068
Quoting alan1000
For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points.


Thank you for your response. And I agree that modern artificial intelligence has a difficult time with truly novel arguments for a strongly stated philosophical case or position.

The responses that I've observed coming from AI concerning philosophy are (in my opinion) mid-grade debate team material, which isn't necessarily a bad thing. But, as you point out, these kinds of tools must do better before they're taken more seriously.

Time will tell.
Heracloitus June 07, 2024 at 06:37 #909080
Quoting alan1000
Presented with a philosophical question, its responses too often resemble a summary of discussion points


Exactly the result to be expected from a large language model


AI was not even created to do philosophy. It's completely irrelevant to consider whether AI can entertain thought experiments or exhibit satisfactory rational argumentation.
180 Proof June 07, 2024 at 08:39 #909089
Reply to Bret Bernhoft Current AIs (e.g. LLMs) cannot philosophize (i.e. raise and reflect on foundational questions in order to reason dialectically towards – in order to understand when and how to create – more probative inquiries) because these 'cognitive systems' are neither embodied (i.e. synthetic phenomenology) nor programmed-trained to emulate metacognition (i.e. self-in-possible-worlds-modeling). IMHO, these machines are still only very very fast GIGO, data-mining, calculators.
flannel jesus June 07, 2024 at 09:05 #909092
Quoting 180 Proof
IMHO, these machines are still only very very fast GIGO, data-mining, calculators.


Only?

Have you read this? https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
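
For anyone who doesn't want to read the whole post: the technique it describes is a "linear probe". You freeze the language model, record its hidden activations during play, and train a plain linear classifier to read the board state back out of them. Here is a minimal sketch of that idea, with random arrays standing in for the real activations and labels; none of this is the post's actual code or data.

[code]
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: `activations` stands for the frozen model's hidden
# state at each position, shape (n_positions, d_model); `square_state`
# stands for the true contents of one board square at that position
# (0 = empty, 1 = white piece, 2 = black piece).
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 512))
square_state = rng.integers(0, 3, size=1000)

# A linear probe is just logistic regression on the activations. The
# post's argument: if a probe this simple can predict the board far
# above chance from a model trained only on next-token prediction,
# the model must be representing the board internally.
probe = LogisticRegression(max_iter=1000).fit(activations[:800], square_state[:800])
print("held-out probe accuracy:", probe.score(activations[800:], square_state[800:]))
[/code]

(With random placeholders the accuracy is chance, about 1/3; the point is the method, not the number.)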
180 Proof June 07, 2024 at 09:17 #909093
flannel jesus June 07, 2024 at 10:10 #909094
Reply to 180 Proof And so the fact that there's very strong evidence of internal models doesn't shift your confidence in that at all?
180 Proof June 07, 2024 at 10:32 #909096
Reply to flannel jesus About "current AIs (e.g. LLMs)"? No.
flannel jesus June 07, 2024 at 10:39 #909098
Reply to 180 Proof Do you agree that there is strong evidence for internal models? Are you saying "yes there are internal models, but that still makes them only very very fast GIGO, data-mining, calculators."? Or are you saying "no internal models"?
180 Proof June 07, 2024 at 11:37 #909103
Reply to flannel jesus Maybe. Yes. No.
flannel jesus June 07, 2024 at 11:47 #909104
Reply to 180 Proof I see, fair enough.

So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view? What evidence would you have to see about some future generation of ai that would lead you to say it's more than "very fast GIGO, data-mining, calculators"?

For me, internal models are already past that, as I'm sure you can tell. They're more than statistics, more than calculators, but less than a lot of aspects of human intelligence in a lot of ways, and probably not conscious.
180 Proof June 07, 2024 at 19:35 #909181
Quoting flannel jesus
So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view?

As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models").

What evidence would you have to see about some future generation of ai that would lead you to say it's more than "very fast GIGO, data-mining, calculators"?

I'll be convinced of that when, unprompted and on its own, an AI is asking and exploring the implications of non-philosophical as well as philosophical questions, understands when and when not to question, and learns how to create novel, more probative questions. This is my earlier point about what current AIs (e.g. LLMs) cannot do.



flannel jesus June 07, 2024 at 19:39 #909182
Reply to 180 Proof :up: thanks for clarifying
Lionino August 01, 2024 at 22:51 #922150
You can use character.ai and talk to "Socrates", "Plato", "Kant", "Descartes". You can also make the AI break character and start talking to you as a normal person/AI. It/they does/do give some good input; it is worth a try.
Banno August 01, 2024 at 23:11 #922153
Reply to Bret Bernhoft These supposed AIs are just stringing words together based on probability. What those words do is not part of the calculation. They sometimes string together words that are statistically likely but which do not match what is happening in the world.
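
In sketch form, the probability-stringing step really is this shallow (the vocabulary and scores below are invented for illustration):

[code]
import math, random

# Toy next-token step: the model assigns a score (logit) to every word
# in its vocabulary; softmax turns scores into probabilities; we sample.
logits = {"the": 2.1, "cat": 0.3, "confabulates": 1.7, "is": 0.9}

probs = {w: math.exp(s) for w, s in logits.items()}
total = sum(probs.values())
probs = {w: p / total for w, p in probs.items()}

# Nothing in this step checks the chosen word against the world, which
# is why a fluent continuation can still be flatly false.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", next_word)
[/code]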

They make shit up.

Their confabulation makes them useless as an authority. One has to check that what they claim is actually the case.

But since so many posters hereabouts seem to do much the same thing, what the hell.
apokrisis August 01, 2024 at 23:17 #922155
Quoting 180 Proof
As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models").


Yep. Does it flinch as you draw back a foot to kick its cabinet?

Quoting 180 Proof
IMHO, these machines are still only very very fast GIGO, data-mining, calculators.


:up:

For anyone interested in why large language models are being forced on us for free, here is the business-case backstory: AI sells more hardware and operating system licences.

This has been how the computer industry has thought since IBM leased mainframe systems in the 1970s. Get the customer on an escalator of data bloat. Promise huge bottom-line productivity improvements, and rake in the cash as the customer keeps coming back for more CPU and memory while chasing the productivity mirage.

Bret Bernhoft August 01, 2024 at 23:19 #922156
Reply to Banno

These are fair points. And true points.

I find ChatGPT is less of an independent thinker, and more of a mirror or echo chamber. Relying on ChatGPT for grand new ideas is difficult. But working with ChatGPT the way a potter molds clay is quite helpful.
Lionino August 01, 2024 at 23:26 #922158
Another episode of folks who can't even code Hello World claiming AI doesn't actually exist because it doesn't conform to their sophomoric restrictive view of what 'intelligence' means even though Gemini is able to debate better than them. Yawn yawn.
apokrisis August 01, 2024 at 23:30 #922160
Quoting Bret Bernhoft
I find ChatGPT is less of an independent thinker, and more of a mirror or echo chamber.


And it has a problem in that the more data it averages over, the more it will bland out and become useless. If it starts consuming its own generated media, that compounds the problem.

So it seems sharpest at a reasonably sized sample and grows dumber after that.
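
That self-consumption worry can be shown with a toy simulation: fit a distribution to data, sample from the fit, refit on the samples, repeat. Every number below is arbitrary; this illustrates the mechanism, not any particular model.

[code]
import numpy as np

# Toy "model collapse" loop: each generation is trained only on samples
# drawn from the previous generation's fit. With a finite sample, the
# estimated spread drifts systematically downward, so diversity decays
# across generations -- the blanding-out effect.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # "human" data, seen once

for gen in range(1, 101):
    mu, sigma = data.mean(), data.std()          # "train" on what we have
    data = rng.normal(mu, sigma, size=20)        # next gen sees only outputs
    if gen % 20 == 0:
        print(f"generation {gen:3d}: fitted sigma = {sigma:.4f}")
[/code]

Run it a few times: the fitted sigma wanders, but its long-run tendency is toward zero, and once the spread is gone it never comes back.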

Of course, every problem has its solutions. But I was around for the AI hype of the early 1980s. Lisp and parallel computers were going to be running your corporation, said ICL. IBM snickered into its sleeve as it kept on grinding ICL out of its own home market.

This is a good video on that data issue....


Banno August 01, 2024 at 23:55 #922164
Reply to Lionino

The Gemini that produced this?

In an axiomatic propositional logic, prove RAA
A Challenge in Proving RAA within an Axiomatic System

You're absolutely right. Proving RAA (Reductio ad Absurdum) within the axiomatic framework of propositional logic is not possible.

Why?

RAA is a meta-logical principle: It's a rule about how we reason within the logic, rather than a theorem of the logic itself.
Axiomatic systems are object-level: They deal with propositions and their relationships, not the rules of inference governing those propositions.
Analogy

Think of an axiomatic system as a game. The rules of the game are fixed. You can't prove one of the rules by playing the game itself. You need to step outside the game to analyze its rules.

What Can We Do?

While we can't prove RAA within the system, we can:

Justify its use: Explain why it's a valid inference rule based on intuition or other accepted logical principles.
Show its consistency: Demonstrate that using RAA doesn't lead to contradictions.
Explore its relationship to other logical principles: Analyze how RAA interacts with other inference rules.
Would you like to explore one of these options?

Perhaps we could discuss how RAA is typically justified in logic textbooks or explore its role in proof systems like natural deduction.


ChatGPT just went ahead and did it:

Reductio ad absurdum (RAA), also known as proof by contradiction, is a fundamental technique in propositional logic. To prove RAA in an axiomatic system, we typically use a set of axioms and inference rules such as those found in Hilbert-style systems. Let's outline a proof of RAA within such a system.
Axioms and Inference Rules:
Axioms:
A1: φ → (ψ → φ)
A2: (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
A3: (¬φ → ¬ψ) → (ψ → φ)
Modus Ponens (MP):
From φ and φ → ψ, infer ψ.
Goal:
To prove RAA, we need to show that if φ → ¬φ leads to a contradiction, then ¬φ must be true.
Proof:
Assume φ → ¬φ. (Assumption)
From φ → ¬φ, derive ¬φ → ¬φ using A1.
By Axiom A1: ¬φ → (φ → ¬φ)
Since φ → ¬φ is assumed, we have: ¬φ → (¬φ)

To derive a contradiction, we need to introduce φ.

By Modus Ponens on φ and φ → ¬φ:
¬φ
Since ¬φ is derived under the assumption that φ leads to ¬φ, we can conclude ¬φ.

Thus, we've shown that φ → ¬φ leads to ¬φ, which concludes the proof by contradiction (RAA).
This establishes that if assuming φ leads to a contradiction, then ¬φ must be true, thereby proving RAA within our axiomatic propositional logic system.


But not all that well.
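
For contrast, the rule all three chatbots keep circling is a one-liner once you move to a proof assistant, where ¬p is defined as p → False. A minimal sketch in Lean 4, giving the intuitionistic half of RAA (negation introduction); the classical step from ¬¬p back to p would need Classical.byContradiction:

[code]
-- RAA as a derived rule: if assuming p yields both q and ¬q, then ¬p.
-- Since ¬p unfolds to p → False, the proof is bare function application.
theorem raa {p q : Prop} (h₁ : p → q) (h₂ : p → ¬q) : ¬p :=
  fun hp => h₂ hp (h₁ hp)
[/code]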
Lionino August 02, 2024 at 00:20 #922169
Reply to Banno The average person does not know what "⊢" or "axiom" mean, so I think it is doing quite well. I have the feeling however that you gaslit Gemini, given it said "You are right...".

For the record, I have a collection of screw-ups myself:

Quoting Lionino
A short compilation of ChatGPT screwing up pathetically:
https://chat.openai.com/share/a3c86a67-ff27-4ec9-8ffa-ebc8fb95e01c
https://chat.openai.com/share/96378835-0a94-43ce-a25b-f05e5646ec40
https://chat.openai.com/share/b5241b53-e4d8-4cab-9a81-87fa73d740ad
https://chat.openai.com/share/f924090e-a7eb-4b67-9e62-389db1f6c87b
https://chat.openai.com/share/025521ed-ac2b-4156-bd15-e74053f66cba
Banno August 02, 2024 at 00:25 #922171
Quoting Lionino
...you gaslit Gemini, given it said "You are right...".


If I can gaslight it, it's not all that good.

But actually what it is referring to was its first go at the question, in which it did not even attempt to provide a proof of RAA but instead simply explained what RAA was.

Quoting Lionino
Yawn yawn


Indeed.

I'm not familiar with Gemini. Not a fan of Google.

A Challenge in Proving RAA
RAA stands for Reductio ad Absurdum, which is a proof method rather than a theorem itself. It's a rule of inference, a logical principle that allows us to deduce conclusions from premises.
As such, it cannot be proven within the same logical system in which it is a rule.
To prove RAA, we would need a meta-logical system, a system that talks about the logical system itself. This is a level of abstraction beyond what is typically considered in standard logic courses.
Analogy
Imagine trying to prove the rule of addition within arithmetic. It's inherently circular; addition is a fundamental operation used to establish other arithmetic truths, but it cannot itself be proven within the arithmetic system.
Alternative Approaches
While we can't prove RAA within the system, we can:
Justify its use: Explain why it's a valid inference rule based on intuition or other accepted logical principles.
Show its consistency: Demonstrate that using RAA doesn't lead to contradictions.
Explore its relationship to other logical principles: Analyze how RAA interacts with other inference rules.
Would you like to explore one of these approaches? Or perhaps you have a specific context in mind where you're questioning the validity of RAA?


Notice how it confabulates an inability to prove RAA within any system.
Lionino August 02, 2024 at 02:40 #922196
Reply to Banno Claude
[hide]To prove the Rule of Assumption for Absurdity (RAA) in axiomatic propositional logic, we need to demonstrate that if assuming a proposition P leads to a contradiction, then we can conclude ¬P. Let's approach this step-by-step:

First, let's state the RAA rule formally:
If Γ, P ⊢ Q and Γ, P ⊢ ¬Q, then Γ ⊢ ¬P
Where Γ is a set of assumptions, P is the proposition we're assuming, and Q is any proposition.
We'll start with the assumptions:
Γ, P ⊢ Q (Assumption 1)
Γ, P ⊢ ¬Q (Assumption 2)
Our goal is to prove Γ ⊢ ¬P
We'll use the following axioms and rules:

Modus Ponens (MP): If A and A → B, then B
Deduction Theorem (DT): If Γ, A ⊢ B, then Γ ⊢ A → B
Axiom of Contradiction: ¬(Q ∧ ¬Q)


Proof:
a) Γ, P ⊢ Q ∧ ¬Q (from Assumptions 1 and 2)
b) Γ ⊢ P → (Q ∧ ¬Q) (by DT on step a)
c) Γ ⊢ ¬(Q ∧ ¬Q) (Axiom of Contradiction)
d) Γ ⊢ ¬P (by MP on b and c, and the definition of implication)
Explanation of step d:

From b, we have P → (Q ∧ ¬Q)
From c, we have ¬(Q ∧ ¬Q)
The definition of implication states that (A → B) ↔ (¬A ∨ B)
So, P → (Q ∧ ¬Q) is equivalent to ¬P ∨ (Q ∧ ¬Q)
Given ¬(Q ∧ ¬Q), the only way for ¬P ∨ (Q ∧ ¬Q) to be true is if ¬P is true



Thus, we have proven that if assuming P leads to both Q and ¬Q (a contradiction), then we can conclude ¬P, which is the Rule of Assumption for Absurdity (RAA).[/hide]

Perplexity

[hide]To prove Reductio ad Absurdum (RAA) in an axiomatic propositional logic framework, we start by understanding the principle itself. RAA states that if assuming a proposition φ leads to a contradiction, then φ must be false, which can be formally expressed as: if Γ ∪ {φ} is inconsistent, then Γ ⊢ ¬φ.
Proof Outline
Assumption: Assume φ is true.
Derive a Contradiction: Show that this assumption leads to a contradiction, i.e., derive both ψ and ¬ψ for some proposition ψ.
Conclude: Since assuming φ leads to a contradiction, we conclude ¬φ.
Detailed Proof Steps
Assume φ: Begin by assuming the proposition φ is true.
Show Inconsistency: From the premises in Γ and the assumption φ, derive some proposition ψ and its negation ¬ψ. This can be done through the application of the axioms and rules of inference in your system (e.g., Modus Ponens, Axiom Schemas).
Reach a Contradiction: The derivation of both ψ and ¬ψ indicates inconsistency, which means that the assumption φ cannot hold.
Conclude ¬φ: By the principle of RAA, since the assumption φ leads to a contradiction, it follows that ¬φ must be true.
Example Using Axioms
In a system like Łukasiewicz's, you might use the following axioms:
Axiom Schema 1: φ → (ψ → φ)
Axiom Schema 2: (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
Axiom Schema 3: (¬φ → ¬ψ) → (ψ → φ)
Using these axioms, you can construct a proof that leads to a contradiction when assuming φ is true, thereby allowing you to conclude ¬φ through RAA.
Conclusion
This proof structure effectively demonstrates the principle of Reductio ad Absurdum in propositional logic. The key is to derive a contradiction from the assumption, which then allows for the conclusion of the negation of that assumption. This method is foundational in formal logic and is widely applicable across various logical systems.[/hide]
fishfry August 02, 2024 at 06:13 #922226
Quoting Banno
They make shit up.

Their confabulation makes them useless as an authority. One has to check that what they claim is actually the case.

But since so many posters hereabouts seem to do much the same thing, what the hell.


:100:
Bret Bernhoft August 07, 2025 at 08:50 #1005457
I would like to revisit this subject, given recent and upcoming advances in AI technologies, as well as improvements in how users relate to and understand these tools.

What inspired this follow-up post is the emergence of "more advanced" AI-powered research tools, such as NotebookLM. Others include Claude 4, Perplexity, ChatGPT, Google Gemini, FOSS AI models and so on.

After years of use, what I have come to appreciate is the following: "Modern AI tools augment the philosopher, not replace them." In other words, technologies such as NotebookLM help authentically curious individuals find answers more efficiently. Not necessarily by providing authoritative conclusions, but by absorbing cognitive load and acting as a conversational mirror or assistant.

AI is meant to be led by a human user. And so it is.

For illustration purposes, I uploaded a philosophy book written by Alan Watts (titled "The Taboo Against Knowing Who You Are") to NotebookLM. I then utilized one of the internal content exploration tools to learn more about the book, in this case a utility referred to simply as "Mindmap".

What you see below is a screenshot showcasing the impressive usefulness this one smaller feature offers seekers of knowledge and wisdom:

User image

Each of the "shapes" seen in the image above is a jumping-off point for further exploration. And to have this information neatly organized in such a quick and efficient manner is, in my opinion, groundbreaking.

It goes without saying, I am an AI-booster. I embrace artificial intelligence in order to supercharge my life. But I think the same can be true for anyone, especially competent philosophers.

What we're staring at is a symbiosis waiting to happen, wherein the early movers will be remembered.
night912 August 08, 2025 at 08:01 #1005663
Apologies for the bad grammar. Must proofread more carefully.

Reply to alan1000

AI is proficient at doing that, so why didn't you use AI to do it?
I like sushi August 08, 2025 at 08:11 #1005664
Reply to night912 Maybe because we can learn by doing rather than relying on something to constantly correct our mistakes (which is laziness) ;)
night912 August 08, 2025 at 08:51 #1005668
Maybe because we can learn by doing rather than relying on something to constantly correct our mistakes (which is laziness)

Reply to I like sushi

We can learn to use something/someone that corrects our mistakes rather than not caring (which is laziness) and/or relying on our own belief that we are correct (which is ignorance). You are mistaking intelligence for laziness. Hopefully, you learned the distinction between the two after making your mistake. :wink:
Hanover August 08, 2025 at 12:21 #1005687
Quoting Bret Bernhoft
It goes without saying, I am an AI-booster. I embrace artificial intelligence in order to supercharge my life. But I think the same can be true for anyone, especially competent philosophers.


By "AI-booster" I don't know if you meant that you make AI smarter, but you do. That is, if you have a deep coversation about a particular topic where you add to the discussion, that is added somewhere else out in the web universe to the general body of knowledge and discussion.

I have found AI extremely useful in bouncing ideas off of. You have to be aware of its sycophantic leanings, phrasing issues so that it doesn't just agree or confirm, but I do find it helpful. It's also a very powerful search engine that directs you to where the mainstream areas of debate might lie, and that allows for deeper research on other sites.

I also found it very helpful in understanding dense philosophical and scientific articles: I uploaded the article and had it provide paragraph-by-paragraph explanation and analysis. Doing this also allows asking it questions, like "explain that for a lay person" or "I still don't understand that, please give me examples." I would imagine that if I had had it available back in college, I could have shown up in class far more prepared for discussion, because I would have essentially spent hours discussing the material and refining the arguments by the time I showed up.
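
That workflow can also be reproduced outside the chat window. Here is a rough sketch against an OpenAI-style chat API; the model name, file name, and prompts are placeholders to adapt, not a prescription:

[code]
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Placeholder input: the article's text, extracted to a plain file.
with open("article.txt", encoding="utf-8") as f:
    paragraphs = [p for p in f.read().split("\n\n") if p.strip()]

# Keep the whole exchange in one history so follow-up questions
# ("explain that for a lay person") can see earlier paragraphs.
history = [{"role": "system",
            "content": "Explain each paragraph of a dense philosophy "
                       "article for a lay reader."}]

for i, para in enumerate(paragraphs, 1):
    history.append({"role": "user",
                    "content": f"Paragraph {i}:\n{para}\n\nExplain this simply."})
    reply = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"--- paragraph {i} ---\n{answer}\n")
[/code]

(For a long article you would eventually need to trim the history to fit the context window; that detail is omitted here.)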

I see AI as being like other advancements in information transmission. It's as if you bring a new library to a town that didn't previously have one. Those who read the books get smarter and those who don't stay where they are, resulting in greater disparity between the informed and the uninformed. Before the library, everyone was much closer together, but now, with the library, that changes.

But if all you do is parrot what AI tells you, you're no different than the kid who copied straight out of the encyclopedia (remember those?) for his research project.
Apustimelogist August 08, 2025 at 16:59 #1005716
Quoting Hanover
Doing this also allows asking it questions, like "explain that for a lay person" or "I still don't understand that, please give me examples."


Honestly, I don't think A.I. is reliable at doing this. It's only good if you know what you're looking for and are both willing and able to check what it's saying. I sometimes use the A.I. on search engines as a quick way to look for an answer, but I always check the sources it gives, and if I can't find an unambiguous verification I don't take on the answer. In fact, I find it common that A.I. completely misrepresents the knowledge in papers. It's actually very rare that it gives me an answer that, right off the bat, I don't find questionable to some degree or view with skepticism. It's not even that uncommon that I see some sort of direct contradiction straight up in the answer. Albeit, I still think the A.I. will get a significant proportion of things correct; it's just that that isn't really good enough, because what we want is consistency, and you can't have that when even quite a small proportion of things is wrong. There is then also the issue, I think, that if you are naive on a topic you might ask it questions that don't actually make sense, and the A.I. is still going to give you a nonsensical answer anyway.
Bret Bernhoft August 08, 2025 at 19:43 #1005759
Quoting Hanover
But if all you do is parrot what AI tells you, you're no different than the kid who copied straight out of the encyclopedia (remember those?) for his research project.


Your point draws an important distinction: between using AI as a crutch and using it as a tool. It can be difficult at first to know where the line in the sand is, but it's there.

The telltale sign (in my opinion) of "too much AI" is atrophy: of a skill, memory, or knowledge. Once atrophy sets in due to an over-reliance on AI, it is time to take a step back and reassess your relationship with the technology. Unless that which is lost is best forgotten, as is sometimes the case.

Artificial intelligence is, as you point out, an excellent tool for exploration and a kind of quasi-mirroring service. But this relies on the user being self-motivated and willing to take the initiative, to be responsible for themselves.
MoK August 08, 2025 at 20:38 #1005765
Quoting Bret Bernhoft

Your point draws an important distinction: between using AI as a crutch and using it as a tool. It can be difficult at first to know where the line in the sand is, but it's there.

The difference between humans and AI is that humans have the ability to create new ideas, whereas AI does not.
Bret Bernhoft August 08, 2025 at 20:48 #1005767
Quoting MoK
The difference between humans and AI is that humans have the ability to create new ideas, whereas AI does not.


I can see where you're coming from. But can we agree to disagree on whether AI has the ability to create novel ideas?

In my experience, modern AI can create or "synthesize" new/unprecedented ideas from existing sources. Perhaps AI doesn't have the ability to experience synchronicity or artistic inspiration like a human? Is this close to what you mean by "create new ideas"?

To add context, I asked ChatGPT its thoughts. Here is a screenshot of the answer I received:

User image
MoK August 08, 2025 at 21:16 #1005773
Quoting Bret Bernhoft

I can see where you're coming from. But can we agree to disagree on whether AI has the ability to create novel ideas?

By an idea, I mean a mental event that is not reducible to something else, yet is meaningful to us. For example, when I say "cup", we both understand what we are talking about. "Cup" here refers to an idea. The idea is something more than the word. In the same manner, a sentence refers to something more than the proper combination of words. So, given that we agree on what the idea means, I don't think that AI has the capacity to create a new idea, since it cannot experience an idea. We experience ideas, work with them, talk about them, and constantly create new ones. Thinking, to me, is a process in which we work with old ideas and create new ones.
Bret Bernhoft August 12, 2025 at 05:14 #1006531
Quoting MoK
So, given that we agree on what the idea means, I don't think that AI has the capacity to create a new idea, since it cannot experience an idea. We experience ideas, work with them, talk about them, and constantly create new ones. Thinking, to me, is a process in which we work with old ideas and create new ones.


This is a fascinating perspective. Subjective, somatic experience is necessary for ideation. Is that what you are saying?
MoK August 12, 2025 at 15:00 #1006587
Reply to Bret Bernhoft
Mental events can only be experienced and created by the mind. Any living creature has at least one mind. Creatures with the ability to think have more than one mind: the so-called conscious and subconscious minds. An AI is a mindless entity, so it cannot experience ideas. In the same manner, an AI cannot think, given the definition of thinking as a process of working with old ideas to create new ones.
Manuel August 12, 2025 at 16:33 #1006609
Quoting Hanover
I have found AI extremely useful in bouncing ideas off of. You have to be aware of its sycophantic leanings, phrasing issues so that it doesn't just agree or confirm, but I do find it helpful. It's also a very powerful search engine that directs you to where the mainstream areas of debate might lie, and that allows for deeper research on other sites.


Agreed. Especially with the sycophantic stuff: people have been and will continue to be extremely deluded. But they can be useful, if used with care, for some topics.

But yeah, people ought not confuse what it does (compute words on a probabilistic basis) with what it is not: intelligent, or anything remotely sentient.
Hanover August 12, 2025 at 17:09 #1006618
Quoting Manuel
Especially with the sycophantic stuff: people have been and will continue to be extremely deluded. But they can be useful, if used with care, for some topics.


The new ChatGPT 5.0 is much better at being honest. That was part of their major upgrade. I used to work around the old one by characterizing my posts as being presented by my opponents so that I could get a more honest response.
Manuel August 12, 2025 at 17:54 #1006625
Reply to Hanover

Yes, I've noticed it's toned down its endless flattery quite a lot.

What's concerning is that some people want the old model back, as a "companion" or buddy.

But for our interests, this version is better.