Using Artificial Intelligence to help do philosophy
I recently had a conversation with ChatGPT about whether an AI system can philosophize on its own, or whether it is more appropriately seen as an aid or tool for the human philosopher.
In short, ChatGPT agrees that AI systems cannot create unique philosophies, and that users of AI should understand this, but that artificial intelligences are extremely useful for running thought experiments and calculating large sums of data. What are your thoughts?
You don't need AI to calculate large sums of data. We've been doing that since the Romans invented the abacus. (Little-known fact: expert abacus operators in Hong Kong could outperform the original IBM PC-XT!)
AI comes more to the foreground in running thought experiments, but at this stage it does not seem to outperform human effort. Thought experiments, by definition, involve conceptual scenarios which go beyond any prior experience, and AI is (for the moment) not very good at this, except in certain very specialised fields like antibiotic research, where its development has been very intensive. But, for the moment, it cannot tell us anything useful about how dark matter prevents the collapse of stars near a black hole.
For the moment, the main deficiency in AI (where philosophy is concerned) is its inability to formulate and argue a strong, original case. Presented with a philosophical question, its responses too often resemble a summary of discussion points.
Apologies for the bad grammar. Must proofread more carefully.
Thank you for your response. And I agree that modern artificial intelligence has a difficult time with truly novel arguments for a strongly stated philosophical case or position.
The responses that I've observed coming from AI concerning philosophy are (in my opinion) mid-grade debate-team material, which isn't necessarily a bad thing. But, as you point out, these kinds of tools must do better before they're taken more seriously.
Time will tell.
Exactly the result to be expected from a large language model
AI was not even created to do philosophy. It's completely irrelevant to consider whether AI can entertain thought experiments or exhibit satisfactory rational argumentation.
Only?
Have you read this? https://www.lesswrong.com/posts/yzGDwpRBx6TEcdeA5/a-chess-gpt-linear-emergent-world-representation
So if it's not internal models that make them more than "very fast GIGO, data-mining, calculators", then what would, in your view? What evidence would you need to see from some future generation of AI that would lead you to say it's more than "very fast GIGO, data-mining, calculators"?
For me, internal models already put them past that, as I'm sure you can tell. They're more than statistics, more than calculators, but less than human intelligence in a lot of ways, and probably not conscious.
As I've already said, I think AIs must also be embodied (i.e. have synthetic phenomenology that constitutes their "internal models").
I'll be convinced of that when, unprompted and on its own, an AI is asking and exploring the implications of non-philosophical as well as philosophical questions, understands when and when not to question, and learns how to create novel, more probative questions. This is my point about what current AIs (e.g. LLMs) cannot do.
They make shit up.
Their confabulation makes them useless as an authority. One has to check that what they claim is actually the case.
But since so many posters hereabouts seem to do much the same thing, what the hell.
Yep. Does it flinch as you take back a foot before kicking its cabinet?
Quoting 180 Proof
:up:
For anyone interested in why large language models are being forced on us for free, here is the business case back story. AI sells more hardware and operating system licences.
This has been how the computer industry has been thinking since IBM leased mainframe systems in the 1970s. Get the customer on an escalator of data bloat. Promise huge bottom-line productivity improvements and rake in the cash as the customer keeps coming back for more CPU and memory as they chase the productivity mirage.
These are fair points. And true points.
I find ChatGPT is less of an independent thinker and more of a mirror or echo chamber. Relying on ChatGPT for grand new ideas is difficult. But working with ChatGPT the way a potter molds clay is quite helpful.
And it has a problem in that the more data it averages over, the more it will bland out and become useless. If it starts consuming its own generated media, that compounds the problem.
So it seems sharpest at a reasonably sized sample and grows dumber after that.
Of course, every problem has its solutions. But I was around for the AI hype of the early 1980s. Lisp and parallel computers were going to be running your corporation, said ICL. IBM snickered into its sleeve as it kept on grinding ICL out of its own home market.
This is a good video on that data issue....
The Gemini that produced this?
In an axiomatic propositional logic, prove RAA
ChatGPT just went ahead and did it:
But not all that well.
For the record, I have a collection of screw-ups myself:
Quoting Lionino
If I can gaslight it, it's not all that good.
But actually what it is referring to is its first go at the question, in which it did not even attempt to provide a proof of RAA but instead simply explained what RAA was.
Quoting Lionino
Indeed.
I'm not familiar with Gemini. Not a fan of Google.
Notice how it confabulates an inability to prove RAA within any system.
[hide]To prove the Rule of Assumption for Absurdity (RAA) in axiomatic propositional logic, we need to demonstrate that if assuming a proposition P leads to a contradiction, then we can conclude ¬P. Let's approach this step-by-step:
First, let's state the RAA rule formally:
If Γ, P ⊢ Q and Γ, P ⊢ ¬Q, then Γ ⊢ ¬P
Where Γ is a set of assumptions, P is the proposition we're assuming, and Q is any proposition.
We'll start with the assumptions:
Γ, P ⊢ Q (Assumption 1)
Γ, P ⊢ ¬Q (Assumption 2)
Our goal is to prove Γ ⊢ ¬P
We'll use the following axioms and rules:
Modus Ponens (MP): If A and A → B, then B
Deduction Theorem (DT): If Γ, A ⊢ B, then Γ ⊢ A → B
Axiom of Contradiction: ¬(Q ∧ ¬Q)
Proof:
a) Γ, P ⊢ Q ∧ ¬Q (from Assumptions 1 and 2)
b) Γ ⊢ P → (Q ∧ ¬Q) (by DT on step a)
c) Γ ⊢ ¬(Q ∧ ¬Q) (Axiom of Contradiction)
d) Γ ⊢ ¬P (by MP on b and c, and the definition of implication)
Explanation of step d:
From b, we have P → (Q ∧ ¬Q)
From c, we have ¬(Q ∧ ¬Q)
The definition of implication states that (A → B) ≡ (¬A ∨ B)
So, P → (Q ∧ ¬Q) is equivalent to ¬P ∨ (Q ∧ ¬Q)
Given ¬(Q ∧ ¬Q), the only way for ¬P ∨ (Q ∧ ¬Q) to be true is if ¬P is true
Thus, we have proven that if assuming P leads to both Q and ¬Q (a contradiction), then we can conclude ¬P, which is the Rule of Assumption for Absurdity (RAA).[/hide]
Perplexity
[hide]To prove Reductio ad Absurdum (RAA) in an axiomatic propositional logic framework, we start by understanding the principle itself. RAA states that if assuming a proposition φ leads to a contradiction, then φ must be false, which can be formally expressed as: if Γ ∪ {φ} is inconsistent, then Γ ⊢ ¬φ.
Proof Outline
Assumption: Assume φ is true.
Derive a Contradiction: Show that this assumption leads to a contradiction, i.e., derive both ψ and ¬ψ for some proposition ψ.
Conclude: Since assuming φ leads to a contradiction, we conclude ¬φ.
Detailed Proof Steps
Assume φ: Begin by assuming the proposition φ is true.
Show Inconsistency: From the premises in Γ and the assumption φ, derive some proposition ψ and its negation ¬ψ. This can be done through the application of the axioms and rules of inference in your system (e.g., Modus Ponens, Axiom Schemas).
Reach a Contradiction: The derivation of both ψ and ¬ψ indicates inconsistency, which means that the assumption φ cannot hold.
Conclude ¬φ: By the principle of RAA, since the assumption φ leads to a contradiction, it follows that ¬φ must be true.
Example Using Axioms
In a system like Łukasiewicz's, you might use the following axioms:
Axiom Schema 1: φ → (ψ → φ)
Axiom Schema 2: (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ))
Axiom Schema 3: (¬φ → ¬ψ) → (ψ → φ)
Using these axioms, you can construct a proof that leads to a contradiction when assuming φ is true, thereby allowing you to conclude ¬φ through RAA.
Conclusion
This proof structure effectively demonstrates the principle of Reductio ad Absurdum in propositional logic. The key is to derive a contradiction from the assumption, which then allows for the conclusion of the negation of that assumption. This method is foundational in formal logic and is widely applicable across various logical systems.[/hide]
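For comparison, here is a minimal sketch in Lean 4 of the inference both of those answers are circling: if assuming P yields both Q and ¬Q, then ¬P follows. This is the natural-deduction form of the rule, not the Hilbert-style axiomatic derivation the prompt actually asked for, so take it as an illustration rather than an answer to the original challenge.

```lean
-- Minimal sketch of the RAA pattern (natural deduction, not a
-- Hilbert-style derivation): if P entails both Q and ¬Q, then ¬P.
theorem raa {P Q : Prop} (h1 : P → Q) (h2 : P → ¬Q) : ¬P :=
  fun hP => (h2 hP) (h1 hP)
```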
:100:
What inspired this follow-up post is the emergence of "more advanced" AI-powered research tools, such as NotebookLM. Others include Claude 4, Perplexity, ChatGPT, Google Gemini, FOSS AI models and so on.
After years of use, what I have come to appreciate is the following: "Modern AI tools augment the philosopher, not replace them." In other words, technologies such as NotebookLM help authentically curious individuals find answers more efficiently; not necessarily by providing authoritative conclusions, but by taking on cognitive load and acting as a conversational mirror or assistant.
AI is meant to be led by a human user. And so it is.
For illustration purposes, I uploaded a philosophy book written by Alan Watts (titled "The Taboo Against Knowing Who You Are") to NotebookLM. I then utilized one of the internal content exploration tools to learn more about the book, in this case a utility referred to simply as "Mindmap".
What you see below is a screenshot showcasing the impressive usefulness this one smaller feature offers seekers of knowledge and wisdom:
Each of the "shapes" seen in the image above is a jumping off point for further exploration. And to have this information neatly organized in such a quick and efficient manner is groundbreaking in my opinion.
It goes without saying, I am an AI-booster. I embrace artificial intelligence in order to supercharge my life. But I think the same can be true for anyone, especially competent philosophers.
What we're staring at is a symbiosis waiting to happen, wherein the early movers will be remembered.
AI is proficient at doing that, so why didn't you use AI to do it?
We can learn to use something/someone that corrects our mistakes rather than not caring (which is laziness) and/or relying on our own belief that we are correct (which is ignorance). You are mistaking intelligence for laziness. Hopefully, you learned the distinction between the two after making your mistake. :wink:
By "AI-booster" I don't know if you meant that you make AI smarter, but you do. That is, if you have a deep coversation about a particular topic where you add to the discussion, that is added somewhere else out in the web universe to the general body of knowledge and discussion.
I have found AI extremely useful for bouncing ideas off of. You have to be aware of its sycophantic leanings, phrasing issues so that it doesn't just agree or confirm, but I do find it helpful. It's also a very powerful search engine that directs you to where the mainstream areas of debate might lie, and that allows for deeper research on other sites.
I also found it very helpful in understanding dense philosophical and scientific articles, where I uploaded the article and had it provide paragraph-by-paragraph explanation and analysis. Doing this also allows asking it questions, like "explain that for a lay person" or "I still don't understand that, please give me examples." I would imagine if I had it available when I was back in college, I could have shown up in class far more prepared for discussion, because I would have essentially spent hours already discussing the material and refining the arguments by the time I showed up in class.
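For anyone who wants to reproduce that paragraph-by-paragraph workflow outside the chat interface, here is a minimal sketch using the OpenAI Python client. The model name and the helper function are illustrative assumptions of mine, not a description of how any particular product works.

```python
# A minimal sketch (assumptions: openai>=1.0 installed, OPENAI_API_KEY set,
# and "gpt-4o" as an illustrative model name) of asking for a
# paragraph-by-paragraph explanation of an article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_paragraphs(article_text: str, model: str = "gpt-4o") -> list[str]:
    """Ask the model to explain each paragraph of an article for a lay reader."""
    explanations = []
    for paragraph in article_text.split("\n\n"):
        if not paragraph.strip():
            continue  # skip blank gaps between paragraphs
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Explain the following paragraph of a philosophy "
                            "article for a lay reader, then note any assumptions it makes."},
                {"role": "user", "content": paragraph},
            ],
        )
        explanations.append(response.choices[0].message.content)
    return explanations
```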
I see AI as being like other advancements in information transmission. It's like bringing a new library to a town that didn't previously have one. Those who read the books get smarter and those who don't stay where they are, resulting in greater disparity between the informed and the uninformed. Before the library, everyone was much closer together, but now with the library, that changes.
But if all you do is parrot what AI tells you, you're no different than the kid who copied straight out of the encyclopedia (remember those?) for his research project.
Honestly, I don't think A.I. is reliable at doing this. It's only good if you know what you're looking for and are both willing and able to check what it's saying. I sometimes use the A.I. on search engines as a quick way to look for an answer, but I always check the sources it gives, and if I can't find an unambiguous verification I don't accept the answer. In fact, I find it common that A.I. completely misrepresents the knowledge in papers. It's actually very rare that it gives me an answer that, right off the bat, I don't find questionable to some degree or view with skepticism. It's not even that uncommon that I see some sort of direct contradiction straight up in the answer. Albeit, I still think the A.I. will get a significant proportion of things correct; it's just that that isn't really good enough, because what we want is consistency, and you can't have that if even quite a small proportion of things are wrong. There is then also the issue, I think, that if you are naive on a topic you might ask it questions that don't actually make sense, and the A.I. is still going to give you a nonsensical answer anyway.
Your point draws an important distinction: between using AI as a crutch and using it as a tool. It can be difficult at first to know where the line in the sand is, but it's there.
The telltale sign (in my opinion) of "too much AI" is atrophy of a skill, memory, or knowledge. Once atrophy sets in due to an over-reliance on AI, it is time to take a step back and reassess your relationship with the technology. Unless that which is lost is best forgotten, as is sometimes the case.
Artificial intelligence is, as you point out, an excellent tool for exploration and a kind of quasi-mirroring service. But this relies on the user being self-motivated and willing to take the initiative, to be responsible for themselves.
The difference between humans and AI is that humans have the ability to create new ideas, whereas AI does not.
I can see where you're coming from. But can we agree to disagree on whether AI has the ability to create novel ideas?
In my experience, modern AI can create or "synthesize" new/unprecedented ideas from existing sources. Perhaps AI doesn't have the ability to experience synchronicity or artistic inspiration like a human? Is this close to what you mean by "create new ideas"?
To add context, I asked ChatGPT its thoughts. Here is a screenshot of the answer I received:
By an idea, I mean a mental event that is not reducible to something else, yet it is meaningful to us. For example, when I say "cup", we both understand what we are talking about. "Cup" in here is referring to an idea. The idea is something more than the word. In the same manner, a sentence refers to something more than the proper combination of words. So, given that we agree with what the idea means, I don't think that AI has the capacity to create a new idea since it cannot experience an idea. We experience ideas, work with them, talk about them, and create new ideas always. Thinking to me is a process in which we work with old ideas and create new ones.
This is a fascinating perspective. Subjective, somatic experience is necessary for ideation. Is that what you are saying?
Mental events can only be experienced and created by the mind. Any living creature has at least one mind. Creatures with the ability to think have more than one mind, the so-called conscious and subconscious minds. An AI is a mindless entity, so it cannot experience ideas. In the same manner, an AI cannot think, given the definition of thinking, which is a process of working with old ideas to create new ones.
Agreed. Especially with the sycophantic stuff: people have been and will continue to be extremely deluded. But these tools can be useful for some topics if used with care.
But yeah, people ought not confuse what it does, which is compute words on a probabilistic basis, with what it is not, which is intelligent or anything remotely sentient.
The new ChatGPT 5.0 is much better at being honest. That was part of their major upgrade. I used to work around the old one by characterizing my posts as being presented by my opponents so that I could get a more honest response.
Yes, I've noticed it's toned down its endless flattery quite a lot.
What's concerning is that some people want the old model back, as a "companion" or buddy.
But for our interests, this version is better.